Conditional Probability: the likelihood that an event will occur given that another event has already occurred.
Contingency Table: the method of displaying a frequency distribution as a table with rows and columns to show how two variables may be dependent (contingent) upon each other; the table provides an easy way to calculate conditional probabilities.
Dependent Events: if two events are NOT independent, then we say that they are dependent.
Equally Likely: each outcome of an experiment has the same probability.
Event: a subset of the set of all outcomes of an experiment; the set of all outcomes of an experiment is called a sample space and is usually denoted by S. An event is an arbitrary subset of S. It can contain one outcome, two outcomes, no outcomes (the empty subset), the entire sample space, and the like. Standard notations for events are capital letters such as A, B, C, and so on.
Experiment: a planned activity carried out under controlled conditions.
Independent Events: the occurrence of one event has no effect on the probability of the occurrence of another event. Events A and B are independent if one of the following is true: 1. \(P(A|B) = P(A)\); 2. \(P(B|A) = P(B)\); 3. \(P(A \cap B) = P(A)P(B)\).
Mutually Exclusive: two events are mutually exclusive if the probability that they both happen at the same time is zero. If events A and B are mutually exclusive, then \(P(A \cap B) = 0\).
Outcome: a particular result of an experiment.
Probability: a number between zero and one, inclusive, that gives the likelihood that a specific event will occur. The foundation of statistics is given by the following three axioms (by A. N. Kolmogorov, 1930s). Let S denote the sample space and let A and B be two events in S. Then:
• \(0 \leq P(A) \leq 1\)
• If A and B are any two mutually exclusive events, then \(P(A \cup B) = P(A) + P(B)\).
• \(P(S) = 1\)
Sample Space: the set of all possible outcomes of an experiment.
Sampling with Replacement: if each member of a population is replaced after it is picked, then that member has the possibility of being chosen more than once.
Sampling without Replacement: when sampling is done without replacement, each member of a population may be chosen only once.
The Complement Event: the complement of event A consists of all outcomes that are NOT in A.
The Conditional Probability of \(A|B\): \(P(A|B)\) is the probability that event A will occur given that event B has already occurred.
The Intersection (the \(\cap\) Event): an outcome is in the event \(A \cap B\) if the outcome is in both A and B at the same time.
The Union (the \(\cup\) Event): an outcome is in the event \(A \cup B\) if the outcome is in A, or is in B, or is in both A and B.
Tree Diagram: a useful visual representation of a sample space and events in the form of a “tree” with branches marked by possible outcomes together with associated probabilities (frequencies, relative frequencies).
Venn Diagram: a visual representation of a sample space and events in the form of circles or ovals showing their intersections.
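The independence conditions listed above are easy to check numerically. The short Python sketch below uses a made-up 2 × 2 table of counts (the numbers are hypothetical, for illustration only) to compute a conditional probability and test the condition \(P(A \cap B) = P(A)P(B)\).

```python
# Hypothetical counts for two events A and B (illustration only).
a_and_b = 30       # A and B both occur
a_not_b = 20       # A occurs, B does not
b_not_a = 45       # B occurs, A does not
neither = 55       # neither occurs

total = a_and_b + a_not_b + b_not_a + neither

p_a = (a_and_b + a_not_b) / total      # P(A)
p_b = (a_and_b + b_not_a) / total      # P(B)
p_a_and_b = a_and_b / total            # P(A and B)

# Conditional probability: P(A|B) = P(A and B) / P(B)
p_a_given_b = p_a_and_b / p_b

# Independence check: P(A and B) = P(A) * P(B)  (equivalently, P(A|B) = P(A))
independent = abs(p_a_and_b - p_a * p_b) < 1e-9

print(f"P(A)={p_a:.3f}  P(B)={p_b:.3f}  P(A|B)={p_a_given_b:.3f}  independent={independent}")
```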
Use the following information to answer the next seven exercises. An article in the New England Journal of Medicine reported on a study of smokers in California and Hawaii. In one part of the report, the self-reported ethnicity and smoking levels per day were given. Of the people smoking at most ten cigarettes per day, there were 9,886 African Americans, 2,745 Native Hawaiians, 12,831 Latinos, 8,378 Japanese Americans, and 7,650 Whites. Of the people smoking 11 to 20 cigarettes per day, there were 6,514 African Americans, 3,062 Native Hawaiians, 4,932 Latinos, 10,680 Japanese Americans, and 9,877 Whites. Of the people smoking 21 to 30 cigarettes per day, there were 1,671 African Americans, 1,419 Native Hawaiians, 1,406 Latinos, 4,715 Japanese Americans, and 6,062 Whites. Of the people smoking at least 31 cigarettes per day, there were 759 African Americans, 788 Native Hawaiians, 800 Latinos, 2,305 Japanese Americans, and 3,970 Whites.
59. Complete the table using the data provided.

Table \(13\) Smoking Levels by Ethnicity
Smoking level | African American | Native Hawaiian | Latino | Japanese American | White | TOTALS
1–10 | | | | | |
11–20 | | | | | |
21–30 | | | | | |
31+ | | | | | |
TOTALS | | | | | |

60. Suppose that one person from the study is randomly selected. Find the probability that person smoked 11 to 20 cigarettes per day.
61. Find the probability that the person was Latino.
62. In words, explain what it means to pick one person from the study who is “Japanese American AND smokes 21 to 30 cigarettes per day.” Also, find the probability.
63. In words, explain what it means to pick one person from the study who is “Japanese American \(\cup\) smokes 21 to 30 cigarettes per day.” Also, find the probability.
64. In words, explain what it means to pick one person from the study who is “Japanese American \(|\) that person smokes 21 to 30 cigarettes per day.” Also, find the probability.
65. Prove that smoking level/day and ethnicity are dependent events.

Use the following information to answer the next two exercises. Suppose that you have eight cards. Five are green and three are yellow. The cards are well shuffled.
66. Suppose that you randomly draw two cards, one at a time, with replacement.
Let \(G_1\) = first card is green
Let \(G_2\) = second card is green
a. Draw a tree diagram of the situation.
b. Find \(P(G_1 \cap G_2)\).
c. Find \(P(\text{at least one green})\).
d. Find \(P(G_2 | G_1)\).
e. Are \(G_2\) and \(G_1\) independent events? Explain why or why not.

Use the following information to answer the next two exercises. The percent of licensed U.S. drivers (from a recent year) that are female is 48.60. Of the females, 5.03% are age 19 and under; 81.36% are age 20–64; 13.61% are age 65 or over. Of the licensed U.S. male drivers, 5.04% are age 19 and under; 81.43% are age 20–64; 13.53% are age 65 or over.
68. Complete the following.
a. Construct a table or a tree diagram of the situation.
b. Find P(driver is female).
c. Find P(driver is age 65 or over \(|\) driver is female).
d. Find P(driver is age 65 or over \(\cap\) female).
e. In words, explain the difference between the probabilities in part c and part d.
f. Find P(driver is age 65 or over).
g. Are being age 65 or over and being female mutually exclusive events? How do you know?
69. Suppose that 10,000 U.S. licensed drivers are randomly selected.
a. How many would you expect to be male?
b. Using the table or tree diagram, construct a contingency table of gender versus age group.
c. Using the contingency table, find the probability that out of the age 20–64 group, a randomly selected driver is female.
70.
Approximately 86.5% of Americans commute to work by car, truck, or van. Out of that group, 84.6% drive alone and 15.4% drive in a carpool. Approximately 3.9% walk to work and approximately 5.3% take public transportation.
a. Construct a table or a tree diagram of the situation. Include a branch for all other modes of transportation to work.
b. Assuming that the walkers walk alone, what percent of all commuters travel alone to work?
c. Suppose that 1,000 workers are randomly selected. How many would you expect to travel alone to work?
d. Suppose that 1,000 workers are randomly selected. How many would you expect to drive in a carpool?
71. When the Euro coin was introduced in 2002, two math professors had their statistics students test whether the Belgian one Euro coin was a fair coin. They spun the coin rather than tossing it and found that out of 250 spins, 140 showed a head (event H) while 110 showed a tail (event T). On that basis, they claimed that it is not a fair coin.
a. Based on the given data, find P(H) and P(T).
b. Use a tree to find the probabilities of each possible outcome for the experiment of tossing the coin twice.
c. Use the tree to find the probability of obtaining exactly one head in two tosses of the coin.
d. Use the tree to find the probability of obtaining at least one head.
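Exercises 59–65 above amount to totaling a contingency table and then dividing counts. Below is a short Python sketch (my own illustration, not part of the text) that tallies the counts given for those exercises; the totals it prints agree with the fractions quoted in the chapter solutions.

```python
# Counts from the smoking study (rows: cigarettes per day, columns: ethnic group).
counts = {
    "1-10":  {"African American": 9886, "Native Hawaiian": 2745, "Latino": 12831,
              "Japanese American": 8378, "White": 7650},
    "11-20": {"African American": 6514, "Native Hawaiian": 3062, "Latino": 4932,
              "Japanese American": 10680, "White": 9877},
    "21-30": {"African American": 1671, "Native Hawaiian": 1419, "Latino": 1406,
              "Japanese American": 4715, "White": 6062},
    "31+":   {"African American": 759, "Native Hawaiian": 788, "Latino": 800,
              "Japanese American": 2305, "White": 3970},
}

row_totals = {level: sum(groups.values()) for level, groups in counts.items()}
grand_total = sum(row_totals.values())

# Exercise 60: P(person smoked 11 to 20 cigarettes per day)
print(row_totals["11-20"], grand_total)              # 35065 100450
print(round(row_totals["11-20"] / grand_total, 4))   # about 0.3491

# Exercise 64: P(Japanese American | smokes 21 to 30 per day)
print(round(counts["21-30"]["Japanese American"] / row_totals["21-30"], 4))  # about 0.3087
```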
3.1 Terminology
1. In a particular college class, there are male and female students. Some students have long hair and some students have short hair. Write the symbols for the probabilities of the events for parts a through j. (Note that you cannot find numerical answers here. You were not given enough information to find any probability values yet; concentrate on understanding the symbols.) Let F = the event that a student is female, M = the event that a student is male, S = the event that a student has short hair, and L = the event that a student has long hair.
a. The probability that a student does not have long hair.
b. The probability that a student is male or has short hair.
c. The probability that a student is female and has long hair.
d. The probability that a student is male, given that the student has long hair.
e. The probability that a student has long hair, given that the student is male.
f. Of all the female students, the probability that a student has short hair.
g. Of all students with long hair, the probability that a student is female.
h. The probability that a student is female or has long hair.
i. The probability that a randomly selected student is a male student with short hair.
j. The probability that a student is female.

Use the following information to answer the next four exercises. A box is filled with several party favors. It contains 12 hats, 15 noisemakers, ten finger traps, and five bags of confetti. Let H = the event of getting a hat. Let N = the event of getting a noisemaker. Let F = the event of getting a finger trap. Let C = the event of getting a bag of confetti.
2. Find P(H).
3. Find P(N).
4. Find P(F).
5. Find P(C).
6. Find P(B).
7. Find P(G).
8. Find P(P).
9. Find P(R).
10. Find P(Y).
11. Find P(O).

Use the following information to answer the next six exercises. There are 23 countries in North America, 12 countries in South America, 47 countries in Europe, 44 countries in Asia, 54 countries in Africa, and 14 in Oceania (Pacific Ocean region). Let A = the event that a country is in Asia. Let E = the event that a country is in Europe. Let F = the event that a country is in Africa. Let N = the event that a country is in North America. Let O = the event that a country is in Oceania. Let S = the event that a country is in South America.
12. Find P(A).
13. Find P(E).
14. Find P(F).
15. Find P(N).
16. Find P(O).
17. Find P(S).
18. What is the probability of drawing a red card in a standard deck of 52 cards?
19. What is the probability of drawing a club in a standard deck of 52 cards?
20. What is the probability of rolling an even number of dots with a fair, six-sided die numbered one through six?
21. What is the probability of rolling a prime number of dots with a fair, six-sided die numbered one through six?

Use the following information to answer the next two exercises. You see a game at a local fair. You have to throw a dart at a color wheel. Each section on the color wheel is equal in area. Let B = the event of landing on blue. Let R = the event of landing on red. Let G = the event of landing on green. Let Y = the event of landing on yellow.
22. If you land on Y, you get the biggest prize. Find P(Y).
23. If you land on red, you don’t get a prize. What is P(R)?

Use the following information to answer the next ten exercises. On a baseball team, there are infielders and outfielders. Some players are great hitters, and some players are not great hitters. Let I = the event that a player is an infielder. Let O = the event that a player is an outfielder. Let H = the event that a player is a great hitter. Let N = the event that a player is not a great hitter.
24. Write the symbols for the probability that a player is not an outfielder.
25. Write the symbols for the probability that a player is an outfielder or is a great hitter.
26. Write the symbols for the probability that a player is an infielder and is not a great hitter.
27. Write the symbols for the probability that a player is a great hitter, given that the player is an infielder.
28. Write the symbols for the probability that a player is an infielder, given that the player is a great hitter.
29. Write the symbols for the probability that of all the outfielders, a player is not a great hitter.
30. Write the symbols for the probability that of all the great hitters, a player is an outfielder.
31.
Write the symbols for the probability that a player is an infielder or is not a great hitter.
32. Write the symbols for the probability that a player is an outfielder and is a great hitter.
33. Write the symbols for the probability that a player is an infielder.
34. What is the word for the set of all possible outcomes?
35. What is conditional probability?
36. A shelf holds 12 books. Eight are fiction and the rest are nonfiction. Each is a different book with a unique title. The fiction books are numbered one to eight. The nonfiction books are numbered one to four. Randomly select one book. Let F = the event that the book is fiction and N = the event that the book is nonfiction. What is the sample space?
37. What is the sum of the probabilities of an event and its complement?

Use the following information to answer the next two exercises. You are rolling a fair, six-sided number cube. Let E = the event that it lands on an even number. Let M = the event that it lands on a multiple of three.
38. What does $P(E|M)$ mean in words?
39. What does $P(E \cup M)$ mean in words?

3.2 Independent and Mutually Exclusive Events
40. $E$ and $F$ are mutually exclusive events. $P(E) = 0.4$; $P(F) = 0.5$. Find $P(E|F)$.
41. $J$ and $K$ are independent events. $P(J|K) = 0.3$. Find $P(J)$.
42. $U$ and $V$ are mutually exclusive events. $P(U) = 0.26$; $P(V) = 0.37$. Find:
43. $Q$ and $R$ are independent events. $P(Q) = 0.4$ and $P(Q \cap R) = 0.1$. Find $P(R)$.

Use the following information to answer the next ten exercises. Forty-eight percent of all California registered voters prefer life in prison without parole over the death penalty for a person convicted of first degree murder. Among Latino California registered voters, 55% prefer life in prison without parole over the death penalty for a person convicted of first degree murder. 37.6% of all Californians are Latino. In this problem, let:
C = the event that a randomly chosen California registered voter prefers life in prison without parole over the death penalty for a person convicted of first degree murder,
L = the event that a randomly chosen California registered voter is Latino.
Suppose that one Californian is randomly selected.
44. Find $P(C)$.
45. Find $P(L)$.
46. Find $P(C|L)$.
47. In words, what is $C|L$?
48. Find $P(L \cap C)$.
49. In words, what is $L \cap C$?
50. Are L and C independent events? Show why or why not.
51. Find $P(L \cup C)$.
52. In words, what is $L \cup C$?
53. Are L and C mutually exclusive events? Show why or why not.

3.5 Venn Diagrams
Use the following information to answer the next four exercises. The table below classifies a sample of 130 musicians by gender and by how they learned to play their instruments.

Table $12$
Gender | Self-taught | Studied in school | Private instruction | Total
Female | 12 | 38 | 22 | 72
Male | 19 | 24 | 15 | 58
Total | 31 | 62 | 37 | 130

54. Find P(musician is a female).
55. Find P(musician is a male $\cap$ had private instruction).
56. Find P(musician is a female $\cup$ is self taught).
57. Are the events “being a female musician” and “learning music in school” mutually exclusive events?
58. The probability that a man develops some form of cancer in his lifetime is 0.4567. The probability that a man has at least one false positive test result (meaning the test comes back for cancer when the man does not have it) is 0.51. Let: C = a man develops cancer in his lifetime; P = man has at least one false positive. Construct a tree diagram of the situation.
3.1 Terminology
“Countries List by Continent.” Worldatlas, 2013. Available online at http://www.worldatlas.com/cntycont.htm (accessed May 2, 2013).

3.2 Independent and Mutually Exclusive Events
Lopez, Shane, Preety Sidhu. “U.S. Teachers Love Their Lives, but Struggle in the Workplace.” Gallup Wellbeing, 2013. http://www.gallup.com/poll/161516/te...workplace.aspx (accessed May 2, 2013).
Data from Gallup. Available online at www.gallup.com/ (accessed May 2, 2013).

3.3 Two Basic Rules of Probability
DiCamillo, Mark, Mervin Field. “The Field Poll.” Field Research Corporation. Available online at www.field.com/fieldpollonline...rs/Rls2443.pdf (accessed May 2, 2013).
Rider, David. “Ford support plummeting, poll suggests.” The Star, September 14, 2011. Available online at http://www.thestar.com/news/gta/2011..._suggests.html (accessed May 2, 2013).
“Mayor’s Approval Down.” News Release by Forum Research Inc. Available online at www.forumresearch.com/forms/News Archives/News Releases/74209_TO_Issues_-_Mayoral_Approval_%28Forum_Research%29%2820130320%29.pdf (accessed May 2, 2013).
“Roulette.” Wikipedia. Available online at http://en.Wikipedia.org/wiki/Roulette (accessed May 2, 2013).
Shin, Hyon B., Robert A. Kominski. “Language Use in the United States: 2007.” United States Census Bureau. Available online at www.census.gov/hhes/socdemo/l...acs/ACS-12.pdf (accessed May 2, 2013).
Data from the Baseball-Almanac, 2013. Available online at www.baseball-almanac.com (accessed May 2, 2013).
Data from U.S. Census Bureau.
Data from the Wall Street Journal.
Data from The Roper Center: Public Opinion Archives at the University of Connecticut. Available online at www.ropercenter.uconn.edu/ (accessed May 2, 2013).
Data from Field Research Corporation. Available online at www.field.com/fieldpollonline (accessed May 2, 2013).

3.4 Contingency Tables and Probability Trees
“Blood Types.” American Red Cross, 2013. Available online at http://www.redcrossblood.org/learn-a...od/blood-types (accessed May 3, 2013).
Data from the National Center for Health Statistics, part of the United States Department of Health and Human Services.
Data from United States Senate. Available online at www.senate.gov (accessed May 2, 2013).
“Human Blood Types.” United Blood Services, 2011. Available online at https://www.vitalant.org/Donate/Blood-Donation/Donate-Blood-Overview.aspx (accessed May 2, 2013).
Haiman, Christopher A., Daniel O. Stram, Lynn R. Wilkens, Malcolm C. Pike, Laurence N. Kolonel, Brian E. Henderson, and Loïc Le Marchand. “Ethnic and Racial Differences in the Smoking-Related Risk of Lung Cancer.” The New England Journal of Medicine, 2013. Available online at http://www.nejm.org/doi/full/10.1056/NEJMoa033250 (accessed May 2, 2013).
Samuel, T. M. “Strange Facts about RH Negative Blood.” eHow Health, 2013. Available online at http://www.ehow.com/facts_5552003_st...ive-blood.html (accessed May 2, 2013).
“United States: Uniform Crime Report – State Statistics from 1960–2011.” The Disaster Center. Available online at http://www.disastercenter.com/crime/ (accessed May 2, 2013).
Data from the Santa Clara County Public Health Department.
Data from the American Cancer Society.
Data from The Data and Story Library, 1996. Available online at http://lib.stat.cmu.edu/DASL/ (accessed May 2, 2013).
Data from the Federal Highway Administration, part of the United States Department of Transportation.
Data from the United States Census Bureau, part of the United States Department of Commerce.
Data from USA Today.
“Environment.” The World Bank, 2013.
Available online at http://data.worldbank.org/topic/environment (accessed May 2, 2013).
“Search for Datasets.” Roper Center: Public Opinion Archives, University of Connecticut, 2013. Available online at https://ropercenter.cornell.edu/?s=S...h+for+Datasets (accessed February 6, 2019).

3.12: Chapter Review

3.1 Terminology
In this module we learned the basic terminology of probability. The set of all possible outcomes of an experiment is called the sample space. Events are subsets of the sample space, and they are assigned a probability that is a number between zero and one, inclusive.

3.2 Independent and Mutually Exclusive Events
Two events A and B are independent if the knowledge that one occurred does not affect the chance the other occurs. If two events are not independent, then we say that they are dependent. In sampling with replacement, each member of a population is replaced after it is picked, so that member has the possibility of being chosen more than once, and the events are considered to be independent. In sampling without replacement, each member of a population may be chosen only once, and the events are considered not to be independent. When events do not share outcomes, they are mutually exclusive of each other.

3.3 Two Basic Rules of Probability
The multiplication rule and the addition rule are used for computing the probability of A and B, as well as the probability of A or B, for two given events A and B defined on the sample space. In sampling with replacement each member of a population is replaced after it is picked, so that member has the possibility of being chosen more than once, and the events are considered to be independent. In sampling without replacement, each member of a population may be chosen only once, and the events are considered to be not independent. The events A and B are mutually exclusive events when they do not have any outcomes in common.

3.4 Contingency Tables and Probability Trees
There are several tools you can use to help organize and sort data when calculating probabilities. Contingency tables help display data and are particularly useful when calculating probabilities that have multiple dependent variables. A tree diagram uses branches to show the different outcomes of experiments and makes complex probability questions easy to visualize.

3.5 Venn Diagrams
A Venn diagram is a picture that represents the outcomes of an experiment. It generally consists of a box that represents the sample space S, or universe of the objects of interest, together with circles or ovals. The circles or ovals represent groups of events called sets. A Venn diagram is especially helpful for visualizing the \(\cup\) event (union), the \(\cap\) event (intersection), and the complement of an event, and a system of Venn diagrams can also help in understanding conditional probabilities. Venn diagrams connect the brain and eyes by matching the literal arithmetic to a picture. It is important to note that more than one Venn diagram is needed to solve the probability rule formulas introduced in Section 3.3.
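Stated symbolically, the two rules reviewed in 3.3 above are: the multiplication rule, \(P(A \cap B) = P(A|B)P(B)\), which reduces to \(P(A \cap B) = P(A)P(B)\) when A and B are independent, and the addition rule, \(P(A \cup B) = P(A) + P(B) - P(A \cap B)\), which reduces to \(P(A \cup B) = P(A) + P(B)\) when A and B are mutually exclusive.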
1.
a. $P(L′) = P(S)$
b. $P(M \cup S)$
c. $P(F \cap L)$
d. $P(M|L)$
e. $P(L|M)$
f. $P(S|F)$
g. $P(F|L)$
h. $P(F \cup L)$
i. $P(M \cap S)$
j. $P(F)$
3. $P(N)=\frac{15}{42}=\frac{5}{14}=0.36$
5. $P(C)=\frac{5}{42}=0.12$
7. $P(G)=\frac{20}{150}=\frac{2}{15}=0.13$
9. $P(R)=\frac{22}{150}=\frac{11}{75}=0.15$
11. $P(O)=\frac{150-22-38-20-28-26}{150}=\frac{16}{150}=\frac{8}{75}=0.11$
13. $P(E)=\frac{47}{194}=0.24$
15. $P(N)=\frac{23}{194}=0.12$
17. $P(S)=\frac{12}{194}=\frac{6}{97}=0.06$
19. $\frac{13}{52}=\frac{1}{4}=0.25$
21. $\frac{3}{6}=\frac{1}{2}=0.5$
23. $P(R)=\frac{4}{8}=0.5$
25. $P(O \cup H)$
27. $P(H|I)$
29. $P(N|O)$
31. $P(I \cup N)$
33. $P(I)$
35. The likelihood that an event will occur given that another event has already occurred.
37. 1
39. the probability of landing on an even number or a multiple of three
41. $P(J) = 0.3$
43. $P(Q \cap R)=P(Q)P(R)$; $0.1 = (0.4)P(R)$; $P(R) = 0.25$
45. 0.376
47. $C|L$ means, given the person chosen is a Latino Californian, the person is a registered voter who prefers life in prison without parole for a person convicted of first degree murder.
49. $L \cap C$ is the event that the person chosen is a Latino California registered voter who prefers life without parole over the death penalty for a person convicted of first degree murder.
51. 0.6492
53. No, because $P(L \cap C)$ does not equal 0.
55. $P(\text{musician is a male} \cap \text{had private instruction}) = \frac{15}{130}=\frac{3}{26}=0.12$
57. The events are not mutually exclusive. It is possible to be a female musician who learned music in school.
58. Figure $21$
60. $\frac{35,065}{100,450}$
62. To pick one person from the study who is Japanese American AND smokes 21 to 30 cigarettes per day means that the person has to meet both criteria: both Japanese American and smokes 21 to 30 cigarettes. The sample space should include everyone in the study. The probability is $\frac{4,715}{100,450}$.
64. To pick one person from the study who is Japanese American given that person smokes 21–30 cigarettes per day means that the person must fulfill both criteria and the sample space is reduced to those who smoke 21–30 cigarettes per day. The probability is $\frac{4,715}{15,273}$.
66.
a. Figure $22$
b. $P(GG)=\left(\frac{5}{8}\right)\left(\frac{5}{8}\right)=\frac{25}{64}$
c. $P(\text{at least one green})=P(GG)+P(GY)+P(YG)=\frac{25}{64}+\frac{15}{64}+\frac{15}{64}=\frac{55}{64}$
d. $P(G|G)=\frac{5}{8}$
e. Yes, they are independent because the first card is placed back in the bag before the second card is drawn; the composition of cards in the bag remains the same from draw one to draw two.
68.
a. Table 3.22

Gender | <20 | 20–64 | >64 | Totals
Female | 0.0244 | 0.3954 | 0.0661 | 0.486
Male | 0.0259 | 0.4186 | 0.0695 | 0.514
Totals | 0.0503 | 0.8140 | 0.1356 | 1

b. $P(F) = 0.486$
c. $P(>64 | F) = 0.1361$
d. $P(>64 \cap F) = P(F) P(>64|F) = (0.486)(0.1361) = 0.0661$
e. $P(>64 | F)$ is the percentage of female drivers who are 65 or older and $P(>64 \cap F)$ is the percentage of drivers who are female and 65 or older.
f. $P(>64) = P(>64 \cap F) + P(>64 \cap M) = 0.1356$
g. No, being female and 65 or older are not mutually exclusive because they can occur at the same time: $P(>64 \cap F) = 0.0661$.
70.
a. Table 3.23

Mode | Car, truck or van | Walk | Public transportation | Other | Totals
Alone | 0.7318 | | | |
Not alone | 0.1332 | | | |
Totals | 0.8650 | 0.0390 | 0.0530 | 0.0430 | 1
b. If we assume that all walkers are alone and that none from the other two groups travel alone (which is a big assumption) we have: $P(\text{Alone}) = 0.7318 + 0.0390 = 0.7708$.
c. Making the same assumptions as in (b), we have: $(0.7708)(1,000) = 771$
d. $(0.1332)(1,000) = 133$
73.
a. You can't calculate the joint probability knowing the probability of both events occurring, which is not in the information given; the probabilities should be multiplied, not added; and a probability is never greater than 100%.
b. A home run by definition is a successful hit, so he has to have at least as many successful hits as home runs.
75. 0
77. 0.3571
79. 0.2142
81. Physician (83.7)
83. $83.7 − 79.6 = 4.1$
85. $P(\text{Occupation} < 81.3) = 0.5$
87.
a. The Forum Research surveyed 1,046 Torontonians.
b. 58%
c. 42% of 1,046 = 439 (rounding to the nearest integer)
d. 0.57
e. 0.60
89.
a. $P(\text{Betting on two lines that touch each other on the table}) = \frac{6}{38}$
b. $P(\text{Betting on three numbers in a line}) = \frac{3}{38}$
c. $P(\text{Betting on one number}) = \frac{1}{38}$
d. $P(\text{Betting on four numbers that touch each other to form a square}) = \frac{4}{38}$
e. $P(\text{Betting on two numbers that touch each other on the table}) = \frac{2}{38}$
f. $P(\text{Betting on } 0-00-1-2-3) = \frac{5}{38}$
g. $P(\text{Betting on } 0-1-2; \text{ or } 0-00-2; \text{ or } 00-2-3) = \frac{3}{38}$
91.
a. $\{G1, G2, G3, G4, G5, Y1, Y2, Y3\}$
b. $\frac{5}{8}$
c. $\frac{2}{3}$
d. $\frac{2}{8}$
e. $\frac{6}{8}$
f. No, because $P(G \cap E)$ does not equal 0.
93. NOTE: The coin toss is independent of the card picked first.
a. $\{(G,H), (G,T), (B,H), (B,T), (R,H), (R,T)\}$
b. $P(A)=P(\text{blue})P(\text{head})=\left(\frac{3}{10}\right)\left(\frac{1}{2}\right)=\frac{3}{20}$
c. Yes, A and B are mutually exclusive because they cannot happen at the same time; you cannot pick a card that is both blue and also (red or green). $P(A \cap B) = 0$
d. No, A and C are not mutually exclusive because they can occur at the same time. In fact, C includes all of the outcomes of A; if the card chosen is blue it is also (red or blue). $P(A \cap C) = P(A) = \frac{3}{20}$
95.
a. $S = \{(HHH), (HHT), (HTH), (HTT), (THH), (THT), (TTH), (TTT)\}$
b. $\frac{4}{8}$
c. Yes, because if A has occurred, it is impossible to obtain two tails. In other words, $P(A \cap B) = 0$.
97.
a. If Y and Z are independent, then $P(Y \cap Z) = P(Y)P(Z)$, so $P(Y \cup Z) = P(Y) + P(Z) - P(Y)P(Z)$.
b. 0.5
99. iii; i; iv; ii
101.
a. $P(R) = 0.44$
b. $P(R|E) = 0.56$
c. $P(R|O) = 0.31$
d. No, whether the money is returned is not independent of which class the money was placed in. There are several ways to justify this mathematically, but one is that the money placed in economics classes is not returned at the same overall rate; $P(R|E) \neq P(R)$.
e. No, this study definitely does not support that notion; in fact, it suggests the opposite. The money placed in the economics classrooms was returned at a higher rate than the money placed in all classes collectively; $P(R|E) > P(R)$.
103.
a. $P(\text{type O} \cup \text{Rh-}) = P(\text{type O}) + P(\text{Rh-}) - P(\text{type O} \cap \text{Rh-})$; $0.52 = 0.43 + 0.15 - P(\text{type O} \cap \text{Rh-})$; solve to find $P(\text{type O} \cap \text{Rh-}) = 0.06$. Six percent of people have type O, Rh- blood.
b. $P(\text{NOT (type O} \cap \text{Rh-})) = 1 - P(\text{type O} \cap \text{Rh-}) = 1 - 0.06 = 0.94$. Ninety-four percent of people do not have type O, Rh- blood.
105.
a. Let C = the event that the cookie contains chocolate. Let N = the event that the cookie contains nuts.
b. $P(C \cup N) = P(C) + P(N) - P(C \cap N) = 0.36 + 0.12 - 0.08 = 0.40$
c. $P(\text{NEITHER chocolate NOR nuts}) = 1 - P(C \cup N) = 1 - 0.40 = 0.60$
107. 0
109. $\frac{10}{67}$
111. $\frac{10}{34}$
113. d
115.
a. Table 3.24

Race and sex | 1–14 | 15–24 | 25–64 | Over 64 | TOTALS
White, male | 210 | 3,360 | 13,610 | 4,870 | 22,050
White, female | 80 | 580 | 3,380 | 890 | 4,930
Black, male | 10 | 460 | 1,060 | 140 | 1,670
Black, female | 0 | 40 | 270 | 20 | 330
All others | | | | 100 |
TOTALS | 310 | 4,650 | 18,780 | 6,020 | 29,760

b. Table 3.25

Race and sex | 1–14 | 15–24 | 25–64 | Over 64 | TOTALS
White, male | 210 | 3,360 | 13,610 | 4,870 | 22,050
White, female | 80 | 580 | 3,380 | 890 | 4,930
Black, male | 10 | 460 | 1,060 | 140 | 1,670
Black, female | 0 | 40 | 270 | 20 | 330
All others | 10 | 210 | 460 | 100 | 780
TOTALS | 310 | 4,650 | 18,780 | 6,020 | 29,760

c. $\frac{22,050}{29,760}$
d. $\frac{330}{29,760}$
e. $\frac{2,000}{29,760}$
f. $\frac{23,720}{29,760}$
g. $\frac{5,010}{6,020}$
117. b
119.
a. $\frac{26}{106}$
b. $\frac{33}{106}$
c. $\frac{21}{106}$
d. $\left(\frac{26}{106}\right)+\left(\frac{33}{106}\right)-\left(\frac{21}{106}\right)=\left(\frac{38}{106}\right)$
e. $\frac{21}{33}$
121. a
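Several of the answers above are direct applications of the addition rule. As a quick check of the arithmetic in answers 103 and 105 (my own verification, using only the probabilities quoted in those answers), a few lines of Python suffice:

```python
# Answer 103: P(type O or Rh-) = P(type O) + P(Rh-) - P(type O and Rh-)
p_union, p_type_o, p_rh_neg = 0.52, 0.43, 0.15
p_both = p_type_o + p_rh_neg - p_union        # solve the addition rule for the intersection
print(round(p_both, 2), round(1 - p_both, 2))  # 0.06 and 0.94

# Answer 105: P(chocolate or nuts) and P(neither chocolate nor nuts)
p_choc, p_nuts, p_choc_and_nuts = 0.36, 0.12, 0.08
p_either = p_choc + p_nuts - p_choc_and_nuts
print(round(p_either, 2), round(1 - p_either, 2))  # 0.4 and 0.6
```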
A student takes a ten-question, true-false quiz. Because the student had such a busy schedule, he or she could not study and guesses randomly at each answer. What is the probability of the student passing the test with at least a 70%?

Small companies might be interested in the number of long-distance phone calls their employees make during the peak time of the day. Suppose the historical average is 20 calls. What is the probability that the employees make more than 20 long-distance phone calls during the peak time?

These two examples illustrate two different types of probability problems involving discrete random variables. Recall that discrete data are data that you can count, that is, the random variable can only take on whole number values. A random variable describes the outcomes of a statistical experiment in words. The values of a random variable can vary with each repetition of an experiment, often called a trial.

Random Variable Notation

The upper case letter X denotes a random variable. Lower case letters like x or y denote the value of a random variable. If X is a random variable, then X is written in words, and x is given as a number. For example, let X = the number of heads you get when you toss three fair coins. The sample space for the toss of three fair coins is TTT, THH, HTH, HHT, HTT, THT, TTH, HHH. Then, x = 0, 1, 2, 3. X is in words and x is a number. Notice that for this example, the x values are countable outcomes. Because you can count the possible values that X can take on as whole numbers, and the outcomes are random (the x values 0, 1, 2, 3), X is a discrete random variable.

Probability Density Functions (PDF) for a Random Variable

A probability density function or probability distribution function has two characteristics:
1. Each probability is between zero and one, inclusive.
2. The sum of the probabilities is one.

A probability density function is a mathematical formula that calculates probabilities for specific types of events, what we have been calling experiments. There is a sort of magic to a probability density function (Pdf), partially because the same formula often describes very different types of events. For example, the binomial Pdf will calculate probabilities for flipping coins, yes/no questions on an exam, opinions of voters in an up or down opinion poll, indeed any binary event. Other probability density functions will provide probabilities for the time until a part will fail, when a customer will arrive at the turnpike booth, the number of telephone calls arriving at a central switchboard, the growth rate of a bacterium, and on and on. There are whole families of probability density functions that are used in a wide variety of applications, including medicine, business and finance, physics and engineering, among others. For our needs here we will concentrate on only a few probability density functions as we develop the tools of inferential statistics.

Counting Formulas and the Combinatorial Formula

When the outcomes of an experiment are equally likely, the probability of an event A is simply the number of outcomes in A divided by the total number of possible outcomes. As an equation this is:

$P(A)=\frac{\text{number of ways to get A}}{\text{total number of possible outcomes}}$

When we looked at the sample space for flipping 3 coins we could easily write the full sample space and thus could easily count the number of events that met our desired result, e.g. $x = 1$, where X is the random variable defined as the number of heads. As we have larger numbers of items in the sample space, such as a full deck of 52 cards, the ability to write out the sample space becomes impossible.
We see that probabilities are nothing more than counting the events in each group we are interested in and dividing by the number of elements in the universe, or sample space. This is easy enough if we are counting sophomores in a Stat class, but in more complicated cases listing all the possible outcomes may take a lifetime. There are, for example, 36 possible outcomes from throwing just two six-sided dice where the random variable is the sum of the number of spots on the up-facing sides. If there were four dice then the total number of possible outcomes would become 1,296. There are more than 2.5 MILLION possible 5 card poker hands in a standard deck of 52 cards. Obviously keeping track of all these possibilities and counting them to get at a single probability would be tedious at best.

An alternative to listing the complete sample space and counting the number of elements we are interested in is to skip the step of listing the sample space and simply figure out the number of elements in it and do the appropriate division. If we are after a probability we really do not need to see each and every element in the sample space, we only need to know how many elements are there. Counting formulas were invented to do just this. They tell us the number of unordered subsets of a certain size that can be created from a set of unique elements. By unordered it is meant that, for example, when dealing cards, it does not matter if you got {ace, ace, ace, ace, king} or {king, ace, ace, ace, ace} or {ace, king, ace, ace, ace} and so on. Each of these subsets is the same because they each have 4 aces and one king.

Combinatorial Formula

$\left(\begin{array}{l}{n} \\ {x}\end{array}\right)={ }_{n} C_{x}=\frac{n !}{x !(n-x) !}\nonumber$

This is the formula that tells the number of unique unordered subsets of size x that can be created from n unique elements. The formula is read “n combinatorial x”. Sometimes it is read as “n choose x.” The exclamation point "!" is called a factorial and tells us to take all the numbers from 1 through the number before the ! and multiply them together; thus 4! is 1·2·3·4=24. By definition 0! = 1. The formula is called the Combinatorial Formula. It is also called the Binomial Coefficient, for reasons that will be clear shortly. While this mathematical concept was understood long before 1653, Blaise Pascal is given major credit for his proof that he published in that year. Further, he developed a generalized method of calculating the values for combinatorials known to us as the Pascal Triangle. Pascal was one of the geniuses of an era of extraordinary intellectual advancement which included the work of Galileo, René Descartes, Isaac Newton, William Shakespeare and the refinement of the scientific method, the very rationale for the topic of this text.

Let's find, the hard way, the total number of combinations of the four aces in a deck of cards if we were going to take them two at a time. The sample space would be:

S = {(Spade, Heart), (Spade, Diamond), (Spade, Club), (Diamond, Club), (Heart, Diamond), (Heart, Club)}

There are 6 combinations; formally, six unique unordered subsets of size 2 that can be created from 4 unique elements. To use the combinatorial formula we would solve the formula as follows:

$\left(\begin{array}{l}{4} \\ {2}\end{array}\right)=\frac{4 !}{(4-2) ! \, 2 !}=\frac{4 \cdot 3 \cdot 2 \cdot 1}{2 \cdot 1 \cdot 2 \cdot 1}=6\nonumber$

If we wanted to know the number of unique 5 card poker hands that could be created from a 52 card deck we simply compute:

$\left(\begin{array}{c}{52} \\ {5}\end{array}\right)\nonumber$

where 52 is the total number of unique elements from which we are drawing and 5 is the size group we are putting them into. With the combinatorial formula we can count the number of elements in a sample space without having to write each one of them down, truly a lifetime's work for just the number of 5 card hands from a deck of 52 cards. We can now apply this tool to a very important probability density function, the hypergeometric distribution.

Remember, a probability density function computes probabilities for us. We simply put the appropriate numbers in the formula and we get the probability of specific events. However, for these formulas to work they must be applied only to cases for which they were designed.
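As a quick check of the combinatorial formula, the two computations above can be reproduced in a few lines of Python. The explicit factorial version mirrors the formula as written; math.comb is the standard library's built-in "n choose x" function.

```python
from math import comb, factorial

# n combinatorial x, written exactly as in the formula above.
def n_choose_x(n: int, x: int) -> int:
    return factorial(n) // (factorial(x) * factorial(n - x))

print(n_choose_x(4, 2))   # 6 -- the six unordered pairs of ace suits listed above
print(comb(4, 2))         # 6 -- same result from the standard library
print(comb(52, 5))        # 2598960 -- the "more than 2.5 million" possible 5-card hands
```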
The simplest probability density function is the hypergeometric. This is the most basic one because it is created by combining our knowledge of probabilities from Venn diagrams, the addition and multiplication rules, and the combinatorial counting formula.

To find the number of ways to get 2 aces from the four in the deck we computed:

$\left(\begin{array}{l}{4} \\ {2}\end{array}\right)=\frac{4 !}{2 !(4-2) !}=6\nonumber$

And if we did not care what else we had in our hand for the other three cards we would compute:

$\left(\begin{array}{c}{48} \\ {3}\end{array}\right)=\frac{48 !}{3 ! \, 45 !}=17,296\nonumber$

Putting this together, we can compute the probability of getting exactly two aces in a 5 card poker hand as:

$\frac{\left(\begin{array}{l}{4} \\ {2}\end{array}\right)\left(\begin{array}{c}{48} \\ {3}\end{array}\right)}{\left(\begin{array}{c}{52} \\ {5}\end{array}\right)}=0.0399\nonumber$

This solution is really just the probability distribution known as the Hypergeometric. The generalized formula is:

$h(x)=\frac{\left(\begin{array}{l}{A} \\ {x}\end{array}\right)\left(\begin{array}{c}{N-A} \\ {n-x}\end{array}\right)}{\left(\begin{array}{l}{N} \\ {n}\end{array}\right)}\nonumber$

where $x$ = the number we are interested in coming from the group with A objects. $h(x)$ is the probability of $x$ successes, in $n$ attempts, when $A$ successes (aces in this case) are in a population that contains $N$ elements. The hypergeometric distribution is an example of a discrete probability distribution because there is no possibility of partial success, that is, there can be no poker hands with 2 1/2 aces. Said another way, a discrete random variable has to be a whole, or counting, number only.

This probability distribution works in cases where the probability of a success changes with each draw. Another way of saying this is that the events are NOT independent. In using a deck of cards, we are sampling WITHOUT replacement. If we put each card back after it was drawn then the hypergeometric distribution would be an inappropriate Pdf.

For the hypergeometric to work,
1. the population must be divisible into two and only two independent subsets (aces and non-aces in our example). The random variable $X$ = the number of items from the group of interest.
2. the experiment must have changing probabilities of success with each experiment (the fact that cards are not replaced after the draw in our example makes this true in this case). Another way to say this is that you sample without replacement and therefore each pick is not independent.
3. the random variable must be discrete, rather than continuous.

Example $1$

A candy dish contains 30 jelly beans and 20 gumdrops. Ten candies are picked at random. What is the probability that 5 of the 10 are gumdrops?

The two groups are jelly beans and gumdrops. Since the probability question asks for the probability of picking gumdrops, the group of interest (first group A in the formula) is gumdrops. The size of the group of interest (first group) is 20. The size of the second group is 30. The size of the sample is 10 (jelly beans or gumdrops). Let $X$ = the number of gumdrops in the sample of 10. $X$ takes on the values $x = 0, 1, 2, ..., 10$.

a. What is the probability statement written mathematically?
b. What is the hypergeometric probability density function written out to solve this problem?
c. What is the answer to the question "What is the probability of drawing 5 gumdrops in 10 picks from the dish?"

Answer

a. $P(x=5)$
b. $P(x=5)=\frac{\left(\begin{array}{c}{20} \\ {5}\end{array}\right)\left(\begin{array}{c}{30} \\ {5}\end{array}\right)}{\left(\begin{array}{c}{50} \\ {10}\end{array}\right)}$
c. $P(x=5)=0.215$

Exercise $1$

A bag contains letter tiles. Forty-four of the tiles are vowels, and 56 are consonants. Seven tiles are picked at random. You want to know the probability that four of the seven tiles are vowels. What is the group of interest, the size of the group of interest, and the size of the sample?
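The generalized formula above translates directly into code. Below is a minimal Python sketch of the hypergeometric Pdf; the two calls reproduce the two-aces probability and the answer to Example 1.

```python
from math import comb

def hypergeom_pmf(x: int, A: int, N: int, n: int) -> float:
    """h(x) = C(A, x) * C(N - A, n - x) / C(N, n): the probability of x items from the
    group of interest (size A) in a sample of n drawn without replacement from N items."""
    return comb(A, x) * comb(N - A, n - x) / comb(N, n)

# Exactly two aces in a 5-card poker hand: A = 4 aces, N = 52 cards, n = 5.
print(round(hypergeom_pmf(2, A=4, N=52, n=5), 4))     # 0.0399

# Example 1: five gumdrops in ten picks: A = 20 gumdrops, N = 50 candies, n = 10.
print(round(hypergeom_pmf(5, A=20, N=50, n=10), 3))   # 0.215
```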
A more valuable probability density function with many applications is the binomial distribution. This distribution will compute probabilities for any binomial process. A binomial process, often called a Bernoulli process after the first person to fully develop its properties, is any case where there are only two possible outcomes in any one trial, called successes and failures. It gets its name from the binary number system where all numbers are reduced to either 1's or 0's, which is the basis for computer technology and CD music recordings.

Binomial Formula

$b(x)=\left(\begin{array}{l}{n} \\ {x}\end{array}\right) p^{x} q^{n-x}\nonumber$

where $b(x)$ is the probability of $X$ successes in $n$ trials when the probability of a success in ANY ONE TRIAL is $p$. And of course $q=(1-p)$ and is the probability of a failure in any one trial.

We can see now why the combinatorial formula is also called the binomial coefficient, because it reappears here again in the binomial probability function. For the binomial formula to work, the probability of a success in any one trial must be the same from trial to trial, or in other words, the outcomes of each trial must be independent. Flipping a coin is a binomial process because the probability of getting a head in one flip does not depend upon what has happened in PREVIOUS flips. (At this time it should be noted that using $p$ for the parameter of the binomial distribution is a violation of the rule that population parameters are designated with Greek letters. In many textbooks $\theta$ (pronounced theta) is used instead of $p$, and this is how it should be.)

Just like a set of data, a probability density function has a mean and a standard deviation that describes the data set. For the binomial distribution these are given by the formulas:

$\mu=np\nonumber$

$\sigma=\sqrt{n p q}\nonumber$

Notice that $p$ is the only parameter in these equations. The binomial distribution is thus seen as coming from the one-parameter family of probability distributions. In short, we know all there is to know about the binomial once we know $p$, the probability of a success in any one trial.

In probability theory, under certain circumstances, one probability distribution can be used to approximate another. We say that one is the limiting distribution of the other. If a small number is to be drawn from a large population, even if there is no replacement, we can still use the binomial even though this is not a binomial process. If there is no replacement it violates the independence rule of the binomial. Nevertheless, we can use the binomial to approximate a probability that is really a hypergeometric distribution if we are drawing fewer than 10 percent of the population, i.e. $n$ is less than 10 percent of $N$ in the formula for the hypergeometric function. The rationale for this argument is that when drawing a small percentage of the population we do not alter the probability of a success from draw to draw in any meaningful way. Imagine drawing not from one deck of 52 cards but from 6 decks of cards. The probability of, say, drawing an ace does not change the conditional probability of what happens on a second draw in the same way it would if there were only 4 aces rather than the 24 aces now to draw from. This ability to use one probability distribution to estimate others will become very valuable to us later.

There are four characteristics of a binomial experiment.
1. There are a fixed number of trials. Think of trials as repetitions of an experiment.
The letter $n$ denotes the number of trials.
2. The random variable, $x$, the number of successes, is discrete.
3. There are only two possible outcomes, called "success" and "failure," for each trial. The letter $p$ denotes the probability of a success on any one trial, and $q$ denotes the probability of a failure on any one trial. $p + q = 1$.
4. The $n$ trials are independent and are repeated using identical conditions. Think of this as drawing WITH replacement. Because the $n$ trials are independent, the outcome of one trial does not help in predicting the outcome of another trial. Another way of saying this is that for each individual trial, the probability, $p$, of a success and probability, $q$, of a failure remain the same. For example, randomly guessing at a true-false statistics question has only two outcomes. If a success is guessing correctly, then a failure is guessing incorrectly. Suppose Joe always guesses correctly on any statistics true-false question with a probability $p = 0.6$. Then, $q = 0.4$. This means that for every true-false statistics question Joe answers, his probability of success ($p = 0.6$) and his probability of failure ($q = 0.4$) remain the same.

The outcomes of a binomial experiment fit a binomial probability distribution. The random variable $X$ = the number of successes obtained in the $n$ independent trials. The mean, $\mu$, and variance, $\sigma^2$, for the binomial probability distribution are $\mu = np$ and $\sigma^2 = npq$. The standard deviation is then $\sigma = \sqrt{n p q}$.

Any experiment that has characteristics three and four and where $n = 1$ is called a Bernoulli Trial (named after Jacob Bernoulli who, in the late 1600s, studied them extensively). A binomial experiment takes place when the number of successes is counted in one or more Bernoulli Trials.

Example $2$

Suppose you play a game that you can only either win or lose. The probability that you win any game is 55%, and the probability that you lose is 45%. Each game you play is independent. If you play the game 20 times, write the function that describes the probability that you win 15 of the 20 times. Here, if you define $X$ as the number of wins, then $X$ takes on the values 0, 1, 2, 3, ..., 20. The probability of a success is $p = 0.55$. The probability of a failure is $q = 0.45$. The number of trials is $n = 20$. The probability question can be stated mathematically as $P(x = 15)$.

Exercise $2$

A trainer is teaching a dolphin to do tricks. The probability that the dolphin successfully performs the trick is 35%, and the probability that the dolphin does not successfully perform the trick is 65%. Out of 20 attempts, you want to find the probability that the dolphin succeeds 12 times. Find $P(X=12)$ using the binomial Pdf.

Example $3$

A fair coin is flipped 15 times. Each flip is independent. What is the probability of getting more than ten heads? Let $X$ = the number of heads in 15 flips of the fair coin. $X$ takes on the values 0, 1, 2, 3, ..., 15. Since the coin is fair, $p = 0.5$ and $q = 0.5$. The number of trials is $n = 15$. State the probability question mathematically.

Answer

$P(x > 10)$

Example $4$

Approximately 70% of statistics students do their homework in time for it to be collected and graded. Each student does homework independently. In a statistics class of 50 students, what is the probability that at least 40 will do their homework on time? Students are selected randomly.

a.
This is a binomial problem because there is only a success or a __________, there are a fixed number of trials, and the probability of a success is 0.70 for each trial.

Answer

a. failure

b. If we are interested in the number of students who do their homework on time, then how do we define $X$?

Answer

b. $X$ = the number of statistics students who do their homework on time

c. What values does $x$ take on?

Answer

c. 0, 1, 2, …, 50

d. What is a "failure," in words?

Answer

d. Failure is defined as a student who does not complete his or her homework on time. The probability of a success is $p = 0.70$. The number of trials is $n = 50$.

e. If $p + q = 1$, then what is $q$?

Answer

e. $q = 0.30$

f. The words "at least" translate as what kind of inequality for the probability question $P(x$ ____ $40)$.

Answer

f. greater than or equal to ($\geq$). The probability question is $P(x \geq 40)$.

Exercise $4$

Sixty-five percent of people pass the state driver’s exam on the first try. A group of 50 individuals who have taken the driver’s exam is randomly selected. Give two reasons why this is a binomial problem.

Exercise $4$

During the 2013 regular NBA season, DeAndre Jordan of the Los Angeles Clippers had the highest field goal completion rate in the league. DeAndre scored with 61.3% of his shots. Suppose you choose a random sample of 80 shots made by DeAndre during the 2013 season. Let $X$ = the number of shots that scored points.
1. What is the probability distribution for $X$?
2. Using the formulas, calculate the (i) mean and (ii) standard deviation of $X$.
3. Find the probability that DeAndre scored with 60 of these shots.
4. Find the probability that DeAndre scored with more than 50 of these shots.
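The binomial formula, mean, and standard deviation from this section can be written as a short Python sketch. The two applications below use Example 2 and Example 3 above; the printed values are my own calculations from the formulas, rounded for display.

```python
from math import comb, sqrt

def binomial_pmf(x: int, n: int, p: float) -> float:
    """b(x) = C(n, x) * p**x * q**(n - x), with q = 1 - p."""
    q = 1 - p
    return comb(n, x) * p**x * q**(n - x)

# Example 2: probability of winning exactly 15 of 20 games when p = 0.55.
print(round(binomial_pmf(15, n=20, p=0.55), 4))                          # roughly 0.0365

# Example 3: probability of more than ten heads in 15 flips of a fair coin.
print(round(sum(binomial_pmf(x, 15, 0.5) for x in range(11, 16)), 4))    # roughly 0.0592

# Mean and standard deviation for n = 20, p = 0.55: mu = np, sigma = sqrt(npq).
n, p = 20, 0.55
print(round(n * p, 2), round(sqrt(n * p * (1 - p)), 2))                  # 11.0 and about 2.22
```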
The geometric probability density function builds upon what we have learned from the binomial distribution. In this case the experiment continues until either a success or a failure occurs rather than for a set number of trials. There are three main characteristics of a geometric experiment.
1. There are one or more Bernoulli trials with all failures except the last one, which is a success. In other words, you keep repeating what you are doing until the first success. Then you stop. For example, you throw a dart at a bullseye until you hit the bullseye. The first time you hit the bullseye is a "success" so you stop throwing the dart. It might take six tries until you hit the bullseye. You can think of the trials as failure, failure, failure, failure, failure, success, STOP.
2. In theory, the number of trials could go on forever.
3. The probability, $p$, of a success and the probability, $q$, of a failure are the same for each trial. $p + q = 1$ and $q = 1 − p$. For example, the probability of rolling a three when you throw one fair die is $\frac{1}{6}$. This is true no matter how many times you roll the die. Suppose you want to know the probability of getting the first three on the fifth roll. On rolls one through four, you do not get a face with a three. The probability for each of the rolls is $q = \frac{5}{6}$, the probability of a failure. The probability of getting a three on the fifth roll is $\left(\frac{5}{6}\right)\left(\frac{5}{6}\right)\left(\frac{5}{6}\right)\left(\frac{5}{6}\right)\left(\frac{1}{6}\right) = 0.0804$.

$X$ = the number of independent trials until the first success.

Example $5$

You play a game of chance that you can either win or lose (there are no other possibilities) until you lose. Your probability of losing is $p = 0.57$. What is the probability that it takes five games until you lose? Let $X$ = the number of games you play until you lose (includes the losing game). Then $X$ takes on the values 1, 2, 3, ... (could go on indefinitely). The probability question is $P(x = 5)$.

Exercise $5$

You throw darts at a board until you hit the center area. Your probability of hitting the center area is $p = 0.17$. You want to find the probability that it takes eight throws until you hit the center. What values does $X$ take on?

Example $6$

A safety engineer feels that 35% of all industrial accidents in her plant are caused by failure of employees to follow instructions. She decides to look at the accident reports (selected randomly and replaced in the pile after reading) until she finds one that shows an accident caused by failure of employees to follow instructions. On average, how many reports would the safety engineer expect to look at until she finds a report showing an accident caused by employee failure to follow instructions? What is the probability that the safety engineer will have to examine at least three reports until she finds a report showing an accident caused by employee failure to follow instructions?

Let $X$ = the number of accidents the safety engineer must examine until she finds a report showing an accident caused by employee failure to follow instructions. $X$ takes on the values 1, 2, 3, .... The first question asks you to find the expected value or the mean. The second question asks you to find $P(x \geq 3)$. ("At least" translates to a "greater than or equal to" symbol.)

Exercise $6$

An instructor feels that 15% of students get below a C on their final exam.
She decides to look at final exams (selected randomly and replaced in the pile after reading) until she finds one that shows a grade below a C. We want to know the probability that the instructor will have to examine at least ten exams until she finds one with a grade below a C. What is the probability question stated mathematically?

Example $7$

Suppose that you are looking for a student at your college who lives within five miles of you. You know that 55% of the 25,000 students do live within five miles of you. You randomly contact students from the college until one says he or she lives within five miles of you. What is the probability that you need to contact four people?

This is a geometric problem because you may have a number of failures before you have the one success you desire. Also, the probability of a success stays approximately the same each time you ask a student if he or she lives within five miles of you. There is no definite number of trials (number of times you ask a student).

a. Let $X$ = the number of ____________ you must ask ____________ one says yes.

Answer

a. Let $X$ = the number of students you must ask until one says yes.

b. What values does $X$ take on?

Answer

b. 1, 2, 3, …, (total number of students)

c. What are $p$ and $q$?

Answer

c. $p = 0.55$; $q = 0.45$

d. The probability question is $P$(_______).

Answer

d. $P(x = 4)$

Notation for the Geometric: G = Geometric Probability Distribution Function

$X \sim G(p)$

Read this as "$X$ is a random variable with a geometric distribution." The parameter is $p$; $p$ = the probability of a success for each trial.

The Geometric Pdf tells us the probability that the first occurrence of success requires $x$ number of independent trials, each with success probability $p$. If the probability of success on each trial is $p$, then the probability that the $x$th trial (out of $x$ trials) is the first success is:

$\mathrm{P}(X=x)=(1-p)^{x-1} p\nonumber$

for $x = 1, 2, 3$, .... The expected value of $X$, the mean of this distribution, is $1/p$. This tells us how many trials we have to expect until we get the first success, including in the count the trial that results in success.

The above form of the geometric distribution is used for modeling the number of trials until the first success. The number of trials includes the one that is a success: $x$ = all trials including the one that is a success. This can be seen in the form of the formula: if $X$ = the number of trials including the success, then the factor for a failure, $(1-p)$, appears $X-1$ times, once for each of the failures that come before the success.

By contrast, the following form of the geometric distribution is used for modeling the number of failures until the first success:

$\mathrm{P}(X=x)=(1-p)^{x} p\nonumber$

for $x = 0, 1, 2, 3$, .... In this case the trial that is a success is not counted as a trial in the formula: $x$ = number of failures. The expected value, mean, of this distribution is $\mu=\frac{(1-p)}{p}$. This tells us how many failures to expect before we have a success. In either case, the sequence of probabilities is a geometric sequence.

Example $8$

Assume that the probability of a defective computer component is 0.02. Components are randomly selected. Find the probability that the first defect is caused by the seventh component tested. How many components do you expect to test until one is found to be defective?

Let $X$ = the number of computer components tested until the first defect is found. $X$ takes on the values $1, 2, 3$, ... where $p = 0.02$.
X \sim G(0.02)$. Find $P(x = 7)$. Answer: $P(x = 7) = (1 - 0.02)^{7-1} \times 0.02 = 0.0177$. The probability that the seventh component is the first defect is 0.0177. The graph of $X \sim G(0.02)$ is: The $y$-axis contains the probability of $x$, where $X$ = the number of computer components tested. Notice that the probabilities decline by a common ratio from one value of $x$ to the next; a sequence with a constant ratio between successive terms is called a geometric progression, and thus the name for this probability density function. The number of components that you would expect to test until you find the first defective component is the mean, $\mu = 50$. The formula for the mean for the random variable defined as the number of trials until the first success is $\mu=\frac{1}{p}=\frac{1}{0.02}=50$ See Example $9$ below for an example where the geometric random variable is defined as the number of failures before the first success. The expected value for that form of the geometric is different from the one used here. The formula for the variance is $\sigma^2 =\left(\frac{1}{p}\right)\left(\frac{1}{p}-1\right)=\left(\frac{1}{0.02}\right)\left(\frac{1}{0.02}-1\right)= 2,450$ The standard deviation is $\sigma = \sqrt{\left(\frac{1}{p}\right)\left(\frac{1}{p}-1\right)}=\sqrt{\left(\frac{1}{0.02}\right)\left(\frac{1}{0.02}-1\right)} = 49.5$ Example $9$ The lifetime risk of developing pancreatic cancer is about one in 78 (1.28%). Let $X$ = the number of people you ask before one says he or she has pancreatic cancer. The random variable $X$ in this case includes only the number of trials that were failures and does not count the trial that was a success in finding a person who had the disease. The appropriate formula for this random variable is the second one presented above. Then $X$ is a discrete random variable with a geometric distribution: $X \sim G\left(\frac{1}{78}\right)$ or $X \sim G(0.0128)$. 1. What is the probability that you ask nine people before one says he or she has pancreatic cancer? This is asking, what is the probability that you ask nine people unsuccessfully and the tenth person is a success? 2. What is the probability that you must ask 20 people? 3. Find the (i) mean and (ii) standard deviation of $X$. Answer a. $P(x=9)=(1-0.0128)^{9} \cdot 0.0128=0.0114$ b. $P(x=20)=(1-0.0128)^{19} \cdot 0.0128=0.01$ c. (i) Mean = $\mu =\frac{(1-p)}{p}=\frac{(1-0.0128)}{0.0128}=77.12$ (ii) Standard Deviation = $\sigma =\sqrt{\frac{1-p}{p^{2}}}=\sqrt{\frac{1-0.0128}{0.0128^{2}}} \approx 77.62$ Exercise $9$ The literacy rate for a nation measures the proportion of people age 15 and over who can read and write. The literacy rate for women in The United Colonies of Independence is 12%. Let $X$ = the number of women you ask until one says that she is literate. 1. What is the probability distribution of $X$? 2. What is the probability that you ask five women before one says she is literate? 3. What is the probability that you must ask ten women? Example $10$ A baseball player has a batting average of 0.320. This is the general probability that he gets a hit each time he is at bat. What is the probability that he gets his first hit in the third trip to bat? Answer $P(x=3)=(1-0.32)^{3-1} \times 0.32=0.1480$ In this case the sequence is failure, failure, success. How many trips to bat do you expect the hitter to need before getting a hit? Answer $\mu=\frac{1}{p}=\frac{1}{0.320}=3.125 \approx 3$ This is simply the expected number of trials until the first success and therefore the mean of the distribution. Example $11$ There is an 80% chance that a Dalmatian dog has 13 black spots.
You go to a dog show and count the spots on Dalmatians. What is the probability that you will review the spots on 3 dogs before you find one that has 13 black spots? Answer $P(x=3)=(1-0.80)^{3} \times 0.80=0.0064$
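To make the two parameterizations of the geometric distribution concrete, here is a minimal Python sketch (standard library only; the function names are ours, not from any statistics package) that reproduces the figures quoted above: the 0.0177 probability that the seventh component tested is the first defect, and the 0.0114 probability of nine unsuccessful asks in the pancreatic cancer example.

```python
import math

def geom_pmf_trials(x, p):
    """P(X = x) when X counts trials up to and including the first success."""
    # x - 1 failures, each with probability 1 - p, followed by one success
    return (1 - p) ** (x - 1) * p

def geom_pmf_failures(x, p):
    """P(X = x) when X counts only the failures before the first success."""
    return (1 - p) ** x * p

# Example 8: first defective component on the seventh test, p = 0.02
print(round(geom_pmf_trials(7, 0.02), 4))        # 0.0177
print(1 / 0.02)                                  # mean: 50 components tested

# Example 9: pancreatic cancer, failures-only form, p = 0.0128
p = 0.0128
print(round(geom_pmf_failures(9, p), 4))         # 0.0114
print(round((1 - p) / p, 2))                     # mean: 77.12 failures
print(round(math.sqrt((1 - p) / p ** 2), 2))     # standard deviation: 77.62
```

The only difference between the two helper functions is whether the success itself is counted, which is exactly the distinction drawn between the two formulas above.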
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/04%3A_Discrete_Random_Variables/4.03%3A_Geometric_Distribution.txt
Another useful probability distribution is the Poisson distribution, or waiting time distribution. This distribution is used to determine how many checkout clerks are needed to keep the waiting time in line to specified levels, how many telephone lines are needed to keep the system from overloading, and many other practical applications. A modification of the Poisson, the Pascal, invented nearly four centuries ago, is used today by telecommunications companies worldwide for load factors, satellite hookup levels and Internet capacity problems. The distribution gets its name from Simeon Poisson who presented it in 1837 as an extension of the binomial distribution which we will see can be estimated with the Poisson. There are two main characteristics of a Poisson experiment. 1. The Poisson probability distribution gives the probability of a number of events occurring in a fixed interval of time or space if these events happen with a known average rate. 2. The events occur independently of the time since the last event. For example, a book editor might be interested in the number of words spelled incorrectly in a particular book. It might be that, on the average, there are five words spelled incorrectly in 100 pages. The interval is the 100 pages and it is assumed that there is no relationship between when misspellings occur. 3. The random variable $X$ = the number of occurrences in the interval of interest. Example $12$ A bank expects to receive six bad checks per day, on average. What is the probability of the bank getting fewer than five bad checks on any given day? Of interest is the number of checks the bank receives in one day, so the time interval of interest is one day. Let $X$ = the number of bad checks the bank receives in one day. If the bank expects to receive six bad checks per day then the average is six checks per day. Write a mathematical statement for the probability question. Answer $P (x < 5)$ Example $13$ You notice that a news reporter says "uh," on average, two times per broadcast. What is the probability that the news reporter says "uh" more than two times per broadcast? This is a Poisson problem because you are interested in knowing the number of times the news reporter says "uh" during a broadcast. a. What is the interval of interest? Answer a. one broadcast measured in minutes b. What is the average number of times the news reporter says "uh" during one broadcast? Answer b. 2 c. Let $X$ = ____________. What values does $X$ take on? Answer c. Let $X$ = the number of times the news reporter says "uh" during one broadcast. $x = 0, 1, 2, 3$, ... d. The probability question is $P$ (______). Answer d. $P (x > 2)$ Notation for the Poisson: P = Poisson Probability Distribution Function $X \sim P (\mu)$ Read this as "$X$ is a random variable with a Poisson distribution." The parameter is $\mu$ (or $\lambda$); $\mu$ (or $\lambda$) = the mean for the interval of interest. The mean is the number of occurrences that occur on average during the interval period. The formula for computing probabilities that are from a Poisson process is: $P(x)=\frac{\mu^{x} e^{-\mu}}{x !}\nonumber$ where $P(x)$ is the probability of $x$ successes, $\mu$ is the expected number of successes based upon historical data, $e$ is the base of the natural logarithm, approximately equal to 2.718, and $x$ is the number of successes per unit, usually per unit of time. In order to use the Poisson distribution, certain assumptions must hold.
These are: the probability of a success, $\mu$, is unchanged within the interval, there cannot be simultaneous successes within the interval, and finally, that the probability of a success among intervals is independent, the same assumption of the binomial distribution. In a way, the Poisson distribution can be thought of as a clever way to convert a continuous random variable, usually time, into a discrete random variable by breaking up time into discrete independent intervals. This way of thinking about the Poisson helps us understand why it can be used to estimate the probability for the discrete random variable from the binomial distribution. The Poisson is asking for the probability of a number of successes during a period of time while the binomial is asking for the probability of a certain number of successes for a given number of trials. Example $14$ Leah's answering machine receives about six telephone calls between 8 a.m. and 10 a.m. What is the probability that Leah receives more than one call in the next 15 minutes? Let $X$ = the number of calls Leah receives in 15 minutes. (The interval of interest is 15 minutes or $\frac{1}{4}$ hour.) $x = 0, 1, 2, 3$, ... If Leah receives, on the average, six telephone calls in two hours, and there are eight 15 minute intervals in two hours, then Leah receives $\left(\frac{1}{8}\right)(6) = 0.75$ calls in 15 minutes, on average. So, $\mu = 0.75$ for this problem. $X \sim P (0.75)$ Find $P(x > 1)$. $P(x > 1) = 0.1734$ The probability that Leah receives more than one telephone call in the next 15 minutes is about 0.1734. The graph of $X \sim P (0.75)$ is: The $y$-axis contains the probability of $x$ where $X$ = the number of calls in 15 minutes. Example $15$ According to a survey, a university professor gets, on average, 7 emails per day. Let $X$ = the number of emails a professor receives per day. The discrete random variable $X$ takes on the values $x = 0, 1, 2$ …. The random variable $X$ has a Poisson distribution: $X \sim P(7)$. The mean is 7 emails. 1. What is the probability that the professor receives exactly 2 emails per day? 2. What is the probability that the professor receives at most 2 emails per day? 3. What is the standard deviation? Answer a. $P(x=2)=\frac{\mu^{x} e^{-\mu}}{x !}=\frac{7^{2} e^{-7}}{2 !}=0.022$ b. $P(x \leq 2)=\frac{7^{0} e^{-7}}{0 !}+\frac{7^{1} e^{-7}}{1 !}+\frac{7^{2} e^{-7}}{2 !}=0.029$ c. Standard Deviation = $\sigma=\sqrt{\mu}=\sqrt{7} \approx 2.65$ Example $16$ Text message users receive or send an average of 41.5 text messages per day. 1. How many text messages does a text message user receive or send per hour? 2. What is the probability that a text message user receives or sends two messages per hour? 3. What is the probability that a text message user receives or sends more than two messages per hour? Answer a. Let $X$ = the number of texts that a user sends or receives in one hour. The average number of texts received per hour is $\frac{41.5}{24}$ ≈ 1.7292. b. $P(x=2)=\frac{\mu^{x} e^{-\mu}}{x !}=\frac{1.729^{2} e^{-1.729}}{2 !}=0.265$ c. $P(x>2)=1-P(x \leq 2)=1-\left[\frac{1.729^{0} e^{-1.729}}{0 !}+\frac{1.729^{1} e^{-1.729}}{1 !}+\frac{1.729^{2} e^{-1.729}}{2 !}\right]=0.250$ Example $17$ On May 13, 2013, starting at 4:30 PM, the probability of low seismic activity for the next 48 hours in Alaska was reported as about 1.02%. Use this information for the next 200 days to find the probability that there will be low seismic activity in ten of the next 200 days. Use both the binomial and Poisson distributions to calculate the probabilities.
Are they close? Answer Let $X$ = the number of days with low seismic activity. Using the binomial distribution: $P\left(x=10\right)=\frac{200 !}{10 !(200-10) !} \times 0.0102^{10} \times 0.9898^{190}=0.000039\nonumber$ Using the Poisson distribution: Calculate $\mu = np = 200(0.0102) \approx 2.04$ $P\left(x=10\right)=\frac{\mu^{x} e^{-\mu}}{x !}=\frac{2.04^{10} e^{-2.04}}{10 !}=0.000045\nonumber$ We expect the approximation to be good because $n$ is large (greater than 20) and $p$ is small (less than 0.05). The results are close—both probabilities reported are almost 0. Estimating the Binomial Distribution with the Poisson Distribution We found before that the binomial distribution provided an approximation for the hypergeometric distribution. Now we find that the Poisson distribution can provide an approximation for the binomial. We say that the binomial distribution approaches the Poisson. The binomial distribution approaches the Poisson distribution as $n$ gets larger and $p$ gets smaller in such a way that $np$ remains a constant value. There are several rules of thumb for when one can say they will use a Poisson to estimate a binomial. One suggests that $np$, the mean of the binomial, should be less than 25. Another author suggests that it should be less than 7. And another, noting that the mean and variance of the Poisson are both the same, suggests that $np$ and $npq$, the mean and variance of the binomial, should be greater than 5. There is no one broadly accepted rule of thumb for when one can use the Poisson to estimate the binomial. As we move through these probability distributions we are getting to more sophisticated distributions that, in a sense, contain the less sophisticated distributions within them. This proposition has been proven by mathematicians. This gets us to the highest level of sophistication in the next probability distribution which can be used as an approximation to all of those that we have discussed so far. This is the normal distribution. Example $18$ A survey of 500 seniors in the Price Business School yields the following information. 75% go straight to work after graduation. 15% go on to work on their MBA. 9% stay to get a minor in another program. 1% go on to get a Master's in Finance. What is the probability that more than 2 seniors go to graduate school for their Master's in finance? Answer This is clearly a binomial probability distribution problem. The choices are binary when we define the results as "Graduate School in Finance" versus "all other options." The random variable is discrete, and the events are, we could assume, independent. Solving as a binomial problem, we have: Binomial Solution $n\cdot p=500\cdot 0.01=5=\mu\nonumber$ $P(0)=\frac{500 !}{0 !(500-0) !} 0.01^{0}(1-0.01)^{500-0}=0.00657\nonumber$ $P(1)=\frac{500 !}{1 !(500-1) !} 0.01^{1}(1-0.01)^{500-1}=0.03318\nonumber$ $P(2)=\frac{500 !}{2 !(500-2) !} 0.01^{2}(1-0.01)^{500-2}=0.08363\nonumber$ Adding all three together gives 0.12339. $1-0.12339=0.87661\nonumber$ Poisson approximation $n\cdot p=500\cdot 0.01=5=\mu\nonumber$ $n \cdot p \cdot(1-p)=500 \cdot 0.01 \cdot(0.99) \approx 5=\sigma^{2}=\mu\nonumber$ $P(x)=\frac{e^{-n p}(n p)^{x}}{x !}: \quad P(0)=\frac{e^{-5} \cdot 5^{0}}{0 !}, \quad P(1)=\frac{e^{-5} \cdot 5^{1}}{1 !}, \quad P(2)=\frac{e^{-5} \cdot 5^{2}}{2 !}\nonumber$ $0.0067+0.0337+0.0842=0.1247\nonumber$ $1-0.1247=0.8753\nonumber$ An approximation that is off by about one one-thousandth is certainly an acceptable approximation.
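Since this section leans on comparing exact binomial probabilities with their Poisson approximations, a short Python sketch can make the comparison explicit. This is illustrative only (standard library, helper names ours); it reproduces the seismic-activity figures from Example 17 and the tail probability from Example 18.

```python
import math

def poisson_pmf(x, mu):
    """P(X = x) for a Poisson random variable with mean mu."""
    return mu ** x * math.exp(-mu) / math.factorial(x)

def binomial_pmf(x, n, p):
    """Exact binomial probability of x successes in n independent trials."""
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

# Example 17: n = 200 days, p = 0.0102, probability of exactly 10 low-activity days
print(f"{binomial_pmf(10, 200, 0.0102):.6f}")    # ~0.000039 (exact binomial)
print(f"{poisson_pmf(10, 200 * 0.0102):.6f}")    # ~0.000045 (Poisson, mu = 2.04)

# Example 18: P(more than 2 of 500 seniors) with p = 0.01
exact = 1 - sum(binomial_pmf(k, 500, 0.01) for k in range(3))
approx = 1 - sum(poisson_pmf(k, 500 * 0.01) for k in range(3))
print(round(exact, 4), round(approx, 4))         # ~0.8766 vs ~0.8753
```

As the text notes, the binomial and Poisson answers differ by roughly one one-thousandth here, which is typically acceptable when $n$ is large and $p$ is small.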
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/04%3A_Discrete_Random_Variables/4.04%3A_Poisson_Distribution.txt
Hypergeometric Distribution $h(x)=\frac{\left(\begin{array}{l}{A} \ {x}\end{array}\right)\left(\begin{array}{l}{N-A} \ {n-x}\end{array}\right)}{\left(\begin{array}{l}{N} \ {n}\end{array}\right)}$ Binomial Distribution $X \sim B(n, p)$ means that the discrete random variable $X$ has a binomial probability distribution with $n$ trials and probability of success $p$. $X =$ the number of successes in n independent trials $n =$ the number of independent trials $X$ takes on the values $x = 0, 1, 2, 3, ..., n$ $p =$ the probability of a success for any trial $q =$ the probability of a failure for any trial $p + q = 1$ $q = 1 – p$ The mean of $X$ is $\mu = np$. The standard deviation of $X$ is $\sigma=\sqrt{n p q}$. $P(x)=\frac{n !}{x !(n-x) !} \cdot p^{x} q^{(n-x)}\nonumber$ where $P(X)$ is the probability of $X$ successes in $n$ trials when the probability of a success in ANY ONE TRIAL is $p$. Geometric Distribution $P(X=x)=p(1-p)^{x-1}$ $X \sim G(p)$ means that the discrete random variable $X$ has a geometric probability distribution with probability of success in a single trial $p$. $X =$ the number of independent trials until the first success $X$ takes on the values $x = 1, 2, 3, ...$ $p =$ the probability of a success for any trial $q =$ the probability of a failure for any trial $p + q = 1$ $q = 1 – p$ The mean is $\mu = \frac{1}{p}$. The standard deviation is $\sigma=\sqrt{\frac{1-p}{p^{2}}}=\sqrt{\frac{1}{p}\left(\frac{1}{p}-1\right)}$. Poisson Distribution $X \sim P(\mu )$ means that $X$ has a Poisson probability distribution where $X =$ the number of occurrences in the interval of interest. $X$ takes on the values $x = 0, 1, 2, 3, ...$ The mean $\mu$ or $\lambda$ is typically given. The variance is $\sigma ^2 = \mu$, and the standard deviation is $\sigma=\sqrt{\mu}$. When $P(\mu)$ is used to approximate a binomial distribution, $\mu = np$ where n represents the number of independent trials and $p$ represents the probability of success in a single trial. $P(x)=\frac{\mu^{x} e^{-\mu}}{x !}\nonumber$ 4.06: Chapter Homework 4.1 Hypergeometric Distribution 47. A group of Martial Arts students is planning on participating in an upcoming demonstration. Six are students of Tae Kwon Do; seven are students of Shotokan Karate. Suppose that eight students are randomly picked to be in the first demonstration. We are interested in the number of Shotokan Karate students in that first demonstration. 1. Suppose that 1,000 babies from healthy baby nurseries were randomly surveyed. Find the probability that exactly two babies were born deaf. Use the following information to answer the next four exercises. Recently, a nurse commented that when a patient calls the medical advice line claiming to have the flu, the chance that he or she truly has the flu (and not just a nasty cold) is only about 4%. Of the next 25 patients calling in claiming to have the flu, we are interested in how many actually have the flu. 53. Define the random variable and list its possible values. 54. State the distribution of \(X\). 55. Find the probability that at least four of the 25 patients actually have the flu. 56. On average, for every 25 patients calling in, how many do you expect to have the flu? 57. People visiting video rental stores often rent more than one DVD at a time. The probability distribution for DVD rentals per customer at Video To Go is given Table \(5\). There is five-video limit per customer at this store, so nobody ever rents more than five DVDs. \(x\)\(P(x)\) 00.03 10.50 20.24 3 40.07 50.04 Table \(5\) 1. 
Use the following information to answer the next two exercises: The probability that the San Jose Sharks will win any given game is 0.3694 based on a 13-year win history of 382 wins out of 1,034 games played (as of a certain date). An upcoming monthly schedule contains 12 games.59. The expected number of wins for that upcoming month is: 1. Let X = the number of games won in that upcoming month.60. What is the probability that the San Jose Sharks win six games in that upcoming month? 1. Use the following information to answer the next two exercises: The average number of times per week that Mrs. Plum’s cats wake her up at night because they want to play is ten. We are interested in the number of times her cats wake her up each week.93. In words, the random variable \(X =\) _________________ 1. the number of times Mrs. Plum’s cats wake her up each week. 2. the number of times Mrs. Plum’s cats wake her up each hour. 3. the number of times Mrs. Plum’s cats wake her up each night. 4. the number of times Mrs. Plum’s cats wake her up. 94. Find the probability that her cats will wake her up no more than five times next week. 1. 0.5000 2. 0.9329 3. 0.0378 4. 0.0671
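For the last exercise above, the cumulative Poisson probability can be checked directly. A quick sketch in Python (standard library only; the helper name is ours) sums the Poisson probabilities for $x = 0$ through $5$ with a weekly mean of ten wake-ups:

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for a Poisson random variable with mean mu."""
    return sum(mu ** x * math.exp(-mu) / math.factorial(x) for x in range(k + 1))

# Mrs. Plum's cats: mean of 10 wake-ups per week, at most 5 next week
print(round(poisson_cdf(5, 10), 4))   # 0.0671, matching answer choice 4
```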
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/04%3A_Discrete_Random_Variables/4.05%3A_Chapter_Formula_Review.txt
Bernoulli Trials an experiment with the following characteristics: 1. There are only two possible outcomes called “success” and “failure” for each trial. 2. The probability $p$ of a success is the same for any trial (so the probability $q = 1 − p$ of a failure is the same for any trial). Binomial Experiment a statistical experiment that satisfies the following three conditions: 1. There are a fixed number of trials, $n$. 2. There are only two possible outcomes, called "success" and, "failure," for each trial. The letter $p$ denotes the probability of a success on one trial, and $q$ denotes the probability of a failure on one trial. 3. The $n$ trials are independent and are repeated using identical conditions. Binomial Probability Distribution a discrete random variable (RV) that arises from Bernoulli trials; there are a fixed number, $n$, of independent trials. “Independent” means that the result of any trial (for example, trial one) does not affect the results of the following trials, and all trials are conducted under the same conditions. Under these circumstances the binomial RV $X$ is defined as the number of successes in n trials. The mean is $\mu=n p$ and the standard deviation is $\sigma=\sqrt{n p q}$. The probability of exactly x successes in $n$ trials is $P(X=x)=\left(\begin{array}{l}{n} \ {x}\end{array}\right) p^{x} q^{n-x}$. Geometric Distribution a discrete random variable (RV) that arises from the Bernoulli trials; the trials are repeated until the first success. The geometric variable X is defined as the number of trials until the first success. The mean is $\mu=\frac{1}{p}$ and the standard deviation is $\sigma = \sqrt{\frac{1}{p}\left(\frac{1}{p}-1\right)}$. The probability of exactly x failures before the first success is given by the formula: $P(X=x)=p(1-p)^{x-1}$ where one wants to know probability for the number of trials until the first success: the $x$th trail is the first success. An alternative formulation of the geometric distribution asks the question: what is the probability of $x$ failures until the first success? In this formulation the trial that resulted in the first success is not counted. The formula for this presentation of the geometric is: $P(X=x)=p(1-p)^{x}$ The expected value in this form of the geometric distribution is $\mu=\frac{1-p}{p}$ The easiest way to keep these two forms of the geometric distribution straight is to remember that p is the probability of success and $(1−p)$ is the probability of failure. In the formula the exponents simply count the number of successes and number of failures of the desired outcome of the experiment. Of course the sum of these two numbers must add to the number of trials in the experiment. Geometric Experiment a statistical experiment with the following properties: 1. There are one or more Bernoulli trials with all failures except the last one, which is a success. 2. In theory, the number of trials could go on forever. There must be at least one trial. 3. The probability, $p$, of a success and the probability, $q$, of a failure do not change from trial to trial. Hypergeometric Experiment a statistical experiment with the following properties: 1. You take samples from two groups. 2. You are concerned with a group of interest, called the first group. 3. You sample without replacement from the combined groups. 4. Each pick is not independent, since sampling is without replacement. Hypergeometric Probability a discrete random variable (RV) that is characterized by: 1. A fixed number of trials. 2. 
The probability of success is not the same from trial to trial. We sample from two groups of items when we are interested in only one group. $X$ is defined as the number of successes out of the total number of items chosen. Poisson Probability Distribution a discrete random variable (RV) that counts the number of times a certain event will occur in a specific interval; characteristics of the variable: • The probability that the event occurs in a given interval is the same for all intervals. • The events occur with a known mean and independently of the time since the last event. The distribution is defined by the mean $\mu$ of the event in the interval; when the Poisson is used to approximate a binomial, $\mu = np$. The standard deviation is $\sigma=\sqrt{\mu}$. The probability of having exactly $x$ successes in the interval of interest is $P(x)=\frac{\mu^{x} e^{-\mu}}{x !}$. The Poisson distribution is often used to approximate the binomial distribution, when $n$ is “large” and $p$ is “small” (a general rule is that $n$ should be greater than or equal to 25 and $p$ should be less than or equal to 0.01). Probability Distribution Function (PDF) a mathematical description of a discrete random variable (RV), given either in the form of an equation (formula) or in the form of a table listing all the possible outcomes of an experiment and the probability associated with each outcome. Random Variable (RV) a characteristic of interest in a population being studied; common notation for variables is upper case Latin letters $X, Y, Z$,...; common notation for a specific value from the domain (set of all possible values of a variable) is lower case Latin letters $x, y$, and $z$. For example, if $X$ is the number of children in a family, then $x$ represents a specific integer 0, 1, 2, 3,.... Variables in statistics differ from variables in intermediate algebra in the following two ways. • The domain of the random variable (RV) is not necessarily a numerical set; the domain may be expressed in words; for example, if $X =$ hair color then the domain is {black, blond, gray, green, orange}. • We can tell what specific value $x$ the random variable $X$ takes only after performing the experiment.
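The mean and standard deviation formulas collected in these key terms are easy to sanity-check numerically. The sketch below (plain Python; the variable names are ours) plugs in the freshman-survey setting used in the practice problems, $n = 8$ trials with $p = 0.713$, and a Poisson mean of 7:

```python
import math

n, p = 8, 0.713
q = 1 - p

binomial_mean = n * p                      # mu = np
binomial_sd = math.sqrt(n * p * q)         # sigma = sqrt(npq)

geometric_mean_trials = 1 / p              # trials up to and including the success
geometric_mean_failures = (1 - p) / p      # failures before the first success

poisson_mu = 7
poisson_sd = math.sqrt(poisson_mu)         # sigma = sqrt(mu)

print(round(binomial_mean, 1), round(binomial_sd, 2))                  # 5.7 1.28
print(round(geometric_mean_trials, 1), round(geometric_mean_failures, 2))  # 1.4 0.4
print(round(poisson_sd, 2))                                            # 2.65
```

These values agree with the answers quoted in the chapter practice section (an expected 5.7 "yes" replies in eight picks, about 1.4 freshmen asked until the first "yes", and a Poisson standard deviation of about 2.65 when the mean is 7).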
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/04%3A_Discrete_Random_Variables/4.07%3A_Chapter_Key_Items.txt
Introduction Use the following information to answer the next five exercises: A company wants to evaluate its attrition rate, in other words, how long new hires stay with the company. Over the years, they have established the following probability distribution. Let $X =$ the number of years a new hire will stay with the company. Let $P(x) =$ the probability that a new hire will stay with the company x years. 1. Complete Table $1$ using the data provided. $x$$P(x)$ 00.12 10.18 20.30 30.15 4 50.10 60.05 Table $1$ 2. $P(x = 4) =$ _______ 3. $P(x ≥ 5) =$ _______ 4. On average, how long would you expect a new hire to stay with the company? 5. What does the column “$P(x)$” sum to? Use the following information to answer the next six exercises: A baker is deciding how many batches of muffins to make to sell in his bakery. He wants to make enough to sell every one and no fewer. Through observation, the baker has established a probability distribution. $x$$P(x)$ 10.15 20.35 30.40 40.10 Table $2$ 6. Define the random variable $X$. 7. What is the probability the baker will sell more than one batch? $P(x > 1) =$ _______ 8. What is the probability the baker will sell exactly one batch? $P(x = 1) =$ _______ 9. On average, how many batches should the baker make? Use the following information to answer the next four exercises: Ellen has music practice three days a week. She practices for all of the three days 85% of the time, two days 8% of the time, one day 4% of the time, and no days 3% of the time. One week is selected at random. 10. Define the random variable $X$. 11. Construct a probability distribution table for the data. 12. We know that for a probability distribution function to be discrete, it must have two characteristics. One is that the sum of the probabilities is one. What is the other characteristic? Use the following information to answer the next five exercises: Javier volunteers in community events each month. He does not do more than five events in a month. He attends exactly five events 35% of the time, four events 25% of the time, three events 20% of the time, two events 10% of the time, one event 5% of the time, and no events 5% of the time. 13. Define the random variable $X$. 14. What values does $x$ take on? 15. Construct a PDF table. 16. Find the probability that Javier volunteers for less than three events each month. $P(x < 3) =$ _______ 17. Find the probability that Javier volunteers for at least one event each month. $P(x > 0) =$ _______ 4.1 Hypergeometric Distribution Use the following information to answer the next five exercises: Suppose that a group of statistics students is divided into two groups: business majors and non-business majors. There are 16 business majors in the group and seven non-business majors in the group. A random sample of nine students is taken. We are interested in the number of business majors in the sample. 18. In words, define the random variable $X$. 19. What values does $X$ take on? 4.2 Binomial Distribution Use the following information to answer the next eight exercises: The Higher Education Research Institute at UCLA collected data from 203,967 incoming first-time, full-time freshmen from 270 four-year colleges and universities in the U.S. 71.3% of those students replied that, yes, they believe that same-sex couples should have the right to legal marital status. Suppose that you randomly pick eight first-time, full-time freshmen from the survey. 
You are interested in the number that believes that same sex-couples should have the right to legal marital status. 20. In words, define the random variable $X$. 21. $X \sim$_____(_____,_____) 22. What values does the random variable $X$ take on? 23. Construct the probability distribution function (PDF). $x$$P(x)$ Table $3$ 24. On average ($\mu$), how many would you expect to answer yes? 25. What is the standard deviation ($\sigma$)? 26. What is the probability that at most five of the freshmen reply “yes”? 27. What is the probability that at least two of the freshmen reply “yes”? 4.3 Geometric Distribution Use the following information to answer the next six exercises: The Higher Education Research Institute at UCLA collected data from 203,967 incoming first-time, full-time freshmen from 270 four-year colleges and universities in the U.S. 71.3% of those students replied that, yes, they believe that same-sex couples should have the right to legal marital status. Suppose that you randomly select freshman from the study until you find one who replies “yes.” You are interested in the number of freshmen you must ask. 28. In words, define the random variable $X$. 29. $X \sim$_____(_____,_____) 30. What values does the random variable $X$ take on? 31. Construct the probability distribution function (PDF). Stop at $x = 6$. $x$$P(x)$ 1 2 3 4 5 6 Table $4$ 32. On average ($\mu$), how many freshmen would you expect to have to ask until you found one who replies "yes?" 33. What is the probability that you will need to ask fewer than three freshmen? 4.4 Poisson Distribution Use the following information to answer the next six exercises: On average, a clothing store gets 120 customers per day. 34. Assume the event occurs independently in any given day. Define the random variable $X$. 35. What values does $X$ take on? 36. What is the probability of getting 150 customers in one day? 37. What is the probability of getting 35 customers in the first four hours? Assume the store is open 12 hours each day. 38. What is the probability that the store will have more than 12 customers in the first hour? 39. What is the probability that the store will have fewer than 12 customers in the first two hours? 40. Which type of distribution can the Poisson model be used to approximate? When would you do this? Use the following information to answer the next six exercises: On average, eight teens in the U.S. die from motor vehicle injuries per day. As a result, states across the country are debating raising the driving age. 41. Assume the event occurs independently in any given day. In words, define the random variable $X$. 42. $X \sim$_____(_____,_____) 43. What values does $X$ take on? 44. For the given values of the random variable $X$, fill in the corresponding probabilities. 45. Is it likely that there will be no teens killed from motor vehicle injuries on any given day in the U.S? Justify your answer numerically. 46. Is it likely that there will be more than 20 teens killed from motor vehicle injuries on any given day in the U.S.? Justify your answer numerically
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/04%3A_Discrete_Random_Variables/4.08%3A_Chapter_Practice.txt
Poisson Distribution • “ATL Fact Sheet,” Department of Aviation at the Hartsfield-Jackson Atlanta International Airport, 2013. Available online at www.atl.com/about-atl/atl-factsheet/ (accessed February 6, 2019). • Center for Disease Control and Prevention. “Teen Drivers: Fact Sheet,” Injury Prevention & Control: Motor Vehicle Safety, October 2, 2012. Available online at http://www.cdc.gov/Motorvehiclesafet...factsheet.html (accessed May 15, 2013). • “Children and Childrearing,” Ministry of Health, Labour, and Welfare. Available online at http://www.mhlw.go.jp/english/policy...ing/index.html (accessed May 15, 2013). • “Eating Disorder Statistics,” South Carolina Department of Mental Health, 2006. Available online at http://www.state.sc.us/dmh/anorexia/statistics.htm (accessed May 15, 2013). • “Giving Birth in Manila: The maternity ward at the Dr Jose Fabella Memorial Hospital in Manila, the busiest in the Philippines, where there is an average of 60 births a day,” theguardian, 2013. Available online at http://www.theguardian.com/world/gal...471900&index=2 (accessed May 15, 2013). • “How Americans Use Text Messaging,” Pew Internet, 2013. Available online at http://pewinternet.org/Reports/2011/...in-Report.aspx (accessed May 15, 2013). • Lenhart, Amanda. “Teens, Smartphones & Testing: Texting volum is up while the frequency of voice calling is down. About one in four teens say they own smartphones,” Pew Internet, 2012. Available online at www.pewinternet.org/~/media/F...nd_Texting.pdf (accessed May 15, 2013). • “One born every minute: the maternity unit where mothers are THREE to a bed,” MailOnline. Available online at http://www.dailymail.co.uk/news/arti...thers-bed.html (accessed May 15, 2013). • Vanderkam, Laura. “Stop Checking Your Email, Now.” CNNMoney, 2013. Available online at management.fortune.cnn.com/20...our-email-now/ (accessed May 15, 2013). • “World Earthquakes: Live Earthquake News and Highlights,” World Earthquakes, 2012. www.world-earthquakes.com/ind...thq_prediction (accessed May 15, 2013). 4.10: Chapter Review Introduction The characteristics of a probability distribution or density function (PDF) are as follows: 1. Each probability is between zero and one, inclusive (inclusive means to include zero and one). 2. The sum of the probabilities is one. 4.1 Hypergeometric Distribution The combinatorial formula can provide the number of unique subsets of size $x$ that can be created from $n$ unique objects to help us calculate probabilities. The combinatorial formula is $\left(\begin{array}{l}{n} \ {x}\end{array}\right)=_{n} C_{x}=\frac{n !}{x !(n-x) !}$ A hypergeometric experiment is a statistical experiment with the following properties: 1. You take samples from two groups. 2. You are concerned with a group of interest, called the first group. 3. You sample without replacement from the combined groups. 4. Each pick is not independent, since sampling is without replacement. The outcomes of a hypergeometric experiment fit a hypergeometric probability distribution. The random variable $X =$ the number of items from the group of interest. $h(x)=\frac{\left(\begin{array}{l}{A} \ {x}\end{array}\right)\left(\begin{array}{l}{N-A} \ {n-x}\end{array}\right)}{\left(\begin{array}{l}{N} \ {n}\end{array}\right)}$. Binomial Distribution A statistical experiment can be classified as a binomial experiment if the following conditions are met: 1. There are a fixed number of trials, $n$. 2. There are only two possible outcomes, called "success" and, "failure" for each trial. 
The letter $p$ denotes the probability of a success on one trial and $q$ denotes the probability of a failure on one trial. 3. The $n$ trials are independent and are repeated using identical conditions. The outcomes of a binomial experiment fit a binomial probability distribution. The random variable $X =$ the number of successes obtained in the $n$ independent trials. The mean of $X$ can be calculated using the formula $\mu = np$, and the standard deviation is given by the formula $\sigma=\sqrt{n p q}$. The formula for the Binomial probability density function is $P(x)=\frac{n !}{x !(n-x) !} \cdot p^{x} q^{(n-x)}\nonumber$ Geometric Distribution There are three characteristics of a geometric experiment: 1. There are one or more Bernoulli trials with all failures except the last one, which is a success. 2. In theory, the number of trials could go on forever. There must be at least one trial. 3. The probability, $p$, of a success and the probability, $q$, of a failure are the same for each trial. In a geometric experiment, define the discrete random variable $X$ as the number of independent trials until the first success. We say that $X$ has a geometric distribution and write $X \sim G(p)$ where $p$ is the probability of success in a single trial. The mean of the geometric distribution $X \sim G(p)$ is $\mu = 1/p$ where $x =$ number of trials until first success for the formula $P(X=x)=(1-p)^{x-1} p$ where the number of trials is up and including the first success. An alternative formulation of the geometric distribution asks the question: what is the probability of x failures until the first success? In this formulation the trial that resulted in the first success is not counted. The formula for this presentation of the geometric is: $P(X=x)=p(1-p)^{x}\nonumber$ The expected value in this form of the geometric distribution is $\mu=\frac{1-p}{p}\nonumber$ The easiest way to keep these two forms of the geometric distribution straight is to remember that $p$ is the probability of success and $(1−p)$ is the probability of failure. In the formula the exponents simply count the number of successes and number of failures of the desired outcome of the experiment. Of course the sum of these two numbers must add to the number of trials in the experiment. Poisson Distribution A Poisson probability distribution of a discrete random variable gives the probability of a number of events occurring in a fixed interval of time or space, if these events happen at a known average rate and independently of the time since the last event. The Poisson distribution may be used to approximate the binomial, if the probability of success is "small" (less than or equal to 0.01) and the number of trials is "large" (greater than or equal to 25). Other rules of thumb are also suggested by different authors, but all recognize that the Poisson distribution is the limiting distribution of the binomial as $n$ increases and $p$ approaches zero. The formula for computing probabilities that are from a Poisson process is: $P(x)=\frac{\mu^{x} e^{-\mu}}{x !}\nonumber$ where $P(X)$ is the probability of successes, $\mu$ (pronounced mu) is the expected number of successes, $e$ is the natural logarithm approximately equal to $2.718$, and $X$ is the number of successes per unit, usually per unit of time. 4.11: Chapter Solution (Practice Homework) 1. \(x\)\(P(x)\) 00.12 10.18 20.30 30.15 40.10 50.10 60.05 Table \(6\) 3. 0.10 + 0.05 = 0.15 5. 1 7. 0.35 + 0.40 + 0.10 = 0.85 9. 
1(0.15) + 2(0.35) + 3(0.40) + 4(0.10) = 0.15 + 0.70 + 1.20 + 0.40 = 2.45 11. \(x\)\(P(x)\) 00.03 10.04 20.08 30.85 Table \(7\) 13. Let \(X =\) the number of events Javier volunteers for each month. 15. \(x\)\(P(x)\) 00.05 10.05 20.10 30.20 40.25 50.35 Table \(8\) 17. 1 – 0.05 = 0.95 18. \(X =\) the number of business majors in the sample. 19. 2, 3, 4, 5, 6, 7, 8, 9 20. \(X =\) the number that reply “yes” 22. 0, 1, 2, 3, 4, 5, 6, 7, 8 24. 5.7 26. 0.4151 28. \(X =\) the number of freshmen selected from the study until one replied "yes" that same-sex couples should have the right to legal marital status. 30. 1,2,… 32. 1.4 35. 0, 1, 2, 3, 4, … 37. 0.0485 39. 0.0214 41. \(X =\) the number of U.S. teens who die from motor vehicle injuries per day. 43. 0, 1, 2, 3, 4, ... 45. No 48. 1. 50. 1. 53. \(X =\) the number of patients calling in claiming to have the flu, who actually have the flu. 55. 0.0165 57. 1. 59. 4. 4.43 4 63. • 65. 1. 67. 1. 69. 1. 71. 1. 73. 1. Figure \(4\) 2. 75. 1. 77. 1. 79. 0, 1, 2, and 3 1. 82. 1. 84. Let \(X =\) the number of defective bulbs in a string. • Using the binomial distribution: • The Poisson approximation is very good—the difference between the probabilities is only \(0.0026\). 86. 1. 88. 1. 90. 1. 92. 1. 94. 4
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/04%3A_Discrete_Random_Variables/4.09%3A_Chapter_References.txt
Continuous random variables have many applications. Baseball batting averages, IQ scores, the length of time a long distance telephone call lasts, the amount of money a person carries, the length of time a computer chip lasts, rates of return from an investment, and SAT scores are just a few. The field of reliability depends on a variety of continuous random variables, as do all areas of risk analysis. Note The values of discrete and continuous random variables can be ambiguous. For example, if \(X\) is equal to the number of miles (to the nearest mile) you drive to work, then \(X\) is a discrete random variable. You count the miles. If \(X\) is the distance you drive to work, then you measure values of \(X\) and \(X\) is a continuous random variable. For a second example, if \(X\) is equal to the number of books in a backpack, then \(X\) is a discrete random variable. If \(X\) is the weight of a book, then \(X\) is a continuous random variable because weights are measured. How the random variable is defined is very important. 5.01: Properties of Continuous Probability Density Functions The graph of a continuous probability distribution is a curve. Probability is represented by area under the curve. We have already met this concept when we developed relative frequencies with histograms in Chapter 2. The relative area for a range of values was the probability of drawing at random an observation in that group. Again with the Poisson distribution in Chapter 4, the graph in Example $14$ used boxes to represent the probability of specific values of the random variable. In this case, we were being a bit casual because the random variables of a Poisson distribution are discrete, whole numbers, and a box has width. Notice that the horizontal axis, the random variable $x$, purposefully did not mark the points along the axis. The probability of a specific value of a continuous random variable will be zero because the area under a point is zero. Probability is area. The curve is called the probability density function (abbreviated as pdf). We use the symbol $f(x))$ to represent the curve. $f(x))$ is the function that corresponds to the graph; we use the density function $f(x))$ to draw the graph of the probability distribution. Area under the curve is given by a different function called the cumulative distribution function (abbreviated as cdf). The cumulative distribution function is used to evaluate probability as area. Mathematically, the cumulative probability density function is the integral of the pdf, and the probability between two values of a continuous random variable will be the integral of the pdf between these two values: the area under the curve between these values. Remember that the area under the pdf for all possible values of the random variable is one, certainty. Probability thus can be seen as the relative percent of certainty between the two values of interest. • The outcomes are measured, not counted. • The entire area under the curve and above the x-axis is equal to one. • Probability is found for intervals of x values rather than for individual $x$ values. • $P(c < x < d)$ is the probability that the random variable X is in the interval between the values c and d. $P(c < x < d)$ is the area under the curve, above the x-axis, to the right of $c$ and the left of $d$. • $P(x = c) = 0$ The probability that $x$ takes on any single individual value is zero. 
The area below the curve, above the x-axis, and between $x = c$ and $x = c$ has no width, and therefore no area ($\text{area }= 0$). Since the probability is equal to the area, the probability is also zero. • $P(c < x < d)$ is the same as $P(c ≤ x ≤ d)$ because probability is equal to area. We will find the area that represents probability by using geometry, formulas, technology, or probability tables. In general, integral calculus is needed to find the area under the curve for many probability density functions. When we use formulas to find the area in this textbook, the formulas were found by using the techniques of integral calculus. There are many continuous probability distributions. When using a continuous probability distribution to model probability, the distribution used is selected to model and fit the particular situation in the best way. In this chapter and the next, we will study the uniform distribution, the exponential distribution, and the normal distribution. The following graphs illustrate these distributions. For continuous probability distributions, PROBABILITY = AREA. Example $1$ Consider the function $f(x) = \frac{1}{20}$ for $0 ≤ x ≤ 20. x =$ a real number. The graph of $f(x) = \frac{1}{20}$ is a horizontal line. However, since $0 ≤ x≤ 20, f(x)$ is restricted to the portion between $x = 0$ and $x = 20$, inclusive. $f(x) = \frac{1}{20}$ for $0 ≤ x ≤ 20$. The graph of $f(x) =\frac{1}{20}$ is a horizontal line segment when $0 ≤ x ≤ 20$. The area between $f(x) = \frac{1}{20}$ where $0 ≤ x ≤ 20$ and the x-axis is the area of a rectangle with base $= 20$ and height $= \frac{1}{20}$. $\operatorname{AREA}=20\left(\frac{1}{20}\right)=1\nonumber$ Suppose we want to find the area between $bf{f(x)) = \frac{1}{20}}$ and the x-axis where $\bf{0 < x < 2}$. $\operatorname{AREA}=(2-0)\left(\frac{1}{20}\right)=0.1\nonumber$ $(2-0)=2= \text{base of rectangle}\nonumber$ REMINDER area of a rectangle = (base)(height). The area corresponds to a probability. The probability that $x$ is between zero and two is $0.1$, which can be written mathematically as $P(0 < x < 2) = P(x < 2) = 0.1$. Suppose we want to find the area between $\bf{f(x) = \frac{1}{20}}$ and the x-axis where $\bf{ 4 < x < 15 }$. $\operatorname{AREA}=(15-4)\left(\frac{1}{20}\right)=0.55$ $(15 – 4) = 11 = \text{the base of a rectangle}$ The area corresponds to the probability $P (4 < x < 15) = 0.55$. Suppose we want to find $P(x = 15)$. On an x-y graph, $x = 15$ is a vertical line. A vertical line has no width (or zero width). Therefore, $P(x = 15) =$ (base)(height) $= (0)\left(\frac{1}{20}\right) = 0$ $P(X ≤ x)$, which can also be written as $P(X < x)$ for continuous distributions, is called the cumulative distribution function or CDF. Notice the "less than or equal to" symbol. We can also use the CDF to calculate $P (X > x)$. The CDF gives "area to the left" and $P(X > x)$ gives "area to the right." We calculate $P(X > x)$ for continuous distributions as follows: $P(X > x) = 1 – P (X < x)$. Label the graph with $f(x)$ and $x$. Scale the $x$ and $y$ axes with the maximum $x$ and $y$ values. $f(x) = \frac{1}{20} , 0 ≤ x ≤ 20$. To calculate the probability that $x$ is between two values, look at the following graph. Shade the region between $x = 2.3$ and $x = 12.7$. Then calculate the shaded area of a rectangle. $P(2.3<x<12.7)=(\text { base })(\text { height })=(12.7-2.3)\left(\frac{1}{20}\right)=0.52$ Exercise $1$ Consider the function $f(x) = \frac{1}{8}$ for $0 \leq x \leq 8$. 
Draw the graph of $f(x)$ and find $P(2.5 < x < 7.5)$. 5.02: The Uniform Distribution The uniform distribution is a continuous probability distribution and is concerned with events that are equally likely to occur. When working out problems that have a uniform distribution, be careful to note if the data is inclusive or exclusive of endpoints. The mathematical statement of the uniform distribution is $f(x) = \frac{1}{b-a}$ for $a \leq x \leq b$ where $a =$ the lowest value of $x$ and $b =$ the highest value of $x$. Formulas for the theoretical mean and standard deviation are $\mu=\frac{a+b}{2}$ and $\sigma=\sqrt{\frac{(b-a)^{2}}{12}}$ Exercise $1$ The data that follow are the number of passengers on 35 different charter fishing boats. The sample mean = 7.9 and the sample standard deviation = 4.33. The data follow a uniform distribution where all values between and including zero and 14 are equally likely. State the values of $a$ and $b$. Write the distribution in proper notation, and calculate the theoretical mean and standard deviation. 1 12 4 10 4 14 11 7 11 4 13 2 4 6 3 10 0 12 6 9 10 5 13 4 10 14 12 11 6 10 11 0 11 13 2 Table 5.1 Example $2$ The amount of time, in minutes, that a person must wait for a bus is uniformly distributed between zero and 15 minutes, inclusive. a. What is the probability that a person waits fewer than 12.5 minutes? Answer a. Let $X$ = the number of minutes a person must wait for a bus. $a = 0$ and $b = 15$. $X \sim U(0, 15)$. Write the probability density function. $f(x) = \frac{1}{15-0}=\frac{1}{15}$ for $0 \leq x \leq 15$. Find $P(x < 12.5)$. Draw a graph. $P(x<12.5)=\text { (base)(height) }=(12.5-0)\left(\frac{1}{15}\right)=0.8333\nonumber$ The probability a person waits less than 12.5 minutes is 0.8333. b. On the average, how long must a person wait? Find the mean, $\mu$, and the standard deviation, $\sigma$. Answer b. $\mu=\frac{a+b}{2}=\frac{15+0}{2}=7.5$. On the average, a person must wait 7.5 minutes. $\sigma=\sqrt{\frac{(b-a)^{2}}{12}}=\sqrt{\frac{(15-0)^{2}}{12}}=4.3$. The standard deviation is 4.3 minutes. c. Ninety percent of the time, the time a person must wait falls below what value? Note This asks for the 90th percentile. Answer c. Find the 90th percentile. Draw a graph. Let $k =$ the 90th percentile. Then $P(x<k)=0.90$, so $0.90=(k)\left(\frac{1}{15}\right)$ and $k=(0.90)(15)=13.5$ The 90th percentile is 13.5 minutes. Ninety percent of the time, a person must wait at most 13.5 minutes. Exercise $2$ The total duration of baseball games in the major league in the 2011 season is uniformly distributed between 447 hours and 521 hours inclusive. 1. Find $a$ and $b$ and describe what they represent. 2. Write the distribution. 3. Find the mean and the standard deviation. 4. What is the probability that the duration of games for a team for the 2011 season is between 480 and 500 hours?
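Because every uniform probability in this section is just the area of a rectangle, the calculations are easy to script. The following Python sketch (standard library only; the function names are ours) reproduces the bus-wait results from Example 2:

```python
import math

def uniform_prob(c, d, a, b):
    """P(c < X < d) for X ~ U(a, b): base times the constant height 1/(b - a)."""
    c, d = max(c, a), min(d, b)      # clip the interval to [a, b]
    return (d - c) * (1 / (b - a))

def uniform_mean(a, b):
    return (a + b) / 2

def uniform_sd(a, b):
    return math.sqrt((b - a) ** 2 / 12)

# Example 2: waiting time for a bus, X ~ U(0, 15)
print(round(uniform_prob(0, 12.5, 0, 15), 4))             # 0.8333
print(uniform_mean(0, 15), round(uniform_sd(0, 15), 1))   # 7.5 4.3
print(0.90 * 15)                                          # 13.5, the 90th percentile
```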
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/05%3A_Continuous_Random_Variables/5.00%3A_Prelude_to_Continuous_Random_Variables.txt
The exponential distribution is often concerned with the amount of time until some specific event occurs. For example, the amount of time (beginning now) until an earthquake occurs has an exponential distribution. Other examples include the length of time, in minutes, of long distance business telephone calls, and the amount of time, in months, a car battery lasts. It can be shown, too, that the value of the change that you have in your pocket or purse approximately follows an exponential distribution. Values for an exponential random variable occur in the following way. There are fewer large values and more small values. For example, marketing studies have shown that the amount of money customers spend in one trip to the supermarket follows an exponential distribution. There are more people who spend small amounts of money and fewer people who spend large amounts of money. Exponential distributions are commonly used in calculations of product reliability, or the length of time a product lasts. The random variable for the exponential distribution is continuous and often measures a passage of time, although it can be used in other applications. Typical questions may be, “what is the probability that some event will occur within the next $x$ hours or days, or what is the probability that some event will occur between $x_1$ hours and $x_2$ hours, or what is the probability that the event will take more than $x_1$ hours to perform?” In short, the random variable $X$ equals (a) the time between events or (b) the passage of time to complete an action, e.g. wait on a customer. The probability density function is given by: $f(x)=\frac{1}{\mu} e^{-\frac{1}{\mu} x}\nonumber$ where $\mu$ is the historical average waiting time; the distribution has mean and standard deviation both equal to $\mu$. An alternative form of the exponential distribution formula recognizes what is often called the decay factor. The decay factor simply measures how rapidly the probability of an event declines as the random variable $X$ increases. When the notation using the decay parameter m is used, the probability density function is presented as: $f(x)=m e^{-m x}\nonumber$ where $m=\frac{1}{\mu}$ In order to calculate probabilities for specific probability density functions, the cumulative density function is used. The cumulative density function (cdf) is simply the integral of the pdf and is: $F(x)=\int_{0}^{x} \frac{1}{\mu} e^{-\frac{t}{\mu}} \, dt=1-e^{-\frac{x}{\mu}}\nonumber$ Example $3$ Let $X$ = amount of time (in minutes) a postal clerk spends with a customer. The time is known from historical data to have an average amount of time equal to four minutes. It is given that $\mu = 4$ minutes, that is, the average time the clerk spends with a customer is 4 minutes. Remember that we are still doing probability and thus we have to be told the population parameters such as the mean. To do any calculations, we need to know the mean of the distribution: the historical time to provide a service, for example. Knowing the historical mean allows the calculation of the decay parameter, m. $m=\frac{1}{\mu}$. Therefore, $m=\frac{1}{4}=0.25$. When the notation uses the decay parameter, m, the probability density function is presented as $f(x)=m e^{-m x}$, which is simply the original formula with m substituted for $\frac{1}{\mu}$, or $f(x)=\frac{1}{\mu} e^{-\frac{1}{\mu} x}$. To calculate probabilities for an exponential probability density function, we need to use the cumulative density function.
As shown below, the curve for this probability density function is: $f(x) = 0.25e^{-0.25x}$ where $x$ is at least zero and $m = 0.25$. For example, $f(5) = 0.25e^{(-0.25)(5)} = 0.072$. In other words, the function has a value of 0.072 when $x = 5$. The graph is as follows: Notice the graph is a declining curve. When $x = 0$, $f(x) = 0.25e^{(-0.25)(0)} = (0.25)(1) = 0.25 = m$. The maximum value on the y-axis is always $m$, one divided by the mean. Exercise $3$ The amount of time spouses shop for anniversary cards can be modeled by an exponential distribution with the average amount of time equal to eight minutes. Write the distribution, state the probability density function, and graph the distribution. Example $4$ a. Using the information in Example $3$, find the probability that a clerk spends four to five minutes with a randomly selected customer. Answer a. Find $P (4 < x < 5)$. The cumulative distribution function (CDF) gives the area to the left. $P(X < x) = 1 - e^{-mx}$ $P(x < 5) = 1 - e^{(-0.25)(5)} = 0.7135$ and $P(x < 4) = 1 - e^{(-0.25)(4)} = 0.6321$ $P(4 < x < 5)= 0.7135 - 0.6321 = 0.0814$ Exercise $4$ The number of days ahead travelers purchase their airline tickets can be modeled by an exponential distribution with the average amount of time equal to 15 days. Find the probability that a traveler will purchase a ticket fewer than ten days in advance. How many days do half of all travelers wait? Example $5$ On the average, a certain computer part lasts ten years. The length of time the computer part lasts is exponentially distributed. a. What is the probability that a computer part lasts more than 7 years? Answer a. Let $x =$ the amount of time (in years) a computer part lasts. $\mu = 10$ so $m=\frac{1}{\mu}=\frac{1}{10}=0.1$ Find $P(x > 7)$. Draw the graph. $P(x > 7) = 1 - P(x < 7)$. Since $P(X < x) = 1 - e^{-mx}$ then $P(X > x) = 1 - (1 - e^{-mx}) = e^{-mx}$ $P(x > 7) = e^{(-0.1)(7)} = 0.4966$. The probability that a computer part lasts more than seven years is $0.4966$. b. On the average, how long would five computer parts last if they are used one after another? Answer b. On the average, one computer part lasts ten years. Therefore, five computer parts, if they are used one right after the other, would last, on the average, (5)(10) = 50 years. c. What is the probability that a computer part lasts between nine and 11 years? Answer c. Find $P (9 < x < 11)$. Draw the graph. $P(9 < x < 11) = P(x < 11) - P(x < 9) = (1 - e^{(-0.1)(11)}) - (1 - e^{(-0.1)(9)}) = 0.6671 - 0.5934 = 0.0737$. The probability that a computer part lasts between nine and 11 years is $0.0737$. Exercise $5$ On average, a pair of running shoes can last 18 months if used every day. The length of time running shoes last is exponentially distributed. What is the probability that a pair of running shoes last more than 15 months? On average, how long would six pairs of running shoes last if they are used one after the other? Eighty percent of running shoes last at most how long if used every day? Example $6$ Suppose that the length of a phone call, in minutes, is an exponential random variable with decay parameter $\frac{1}{12}$. The decay parameter is another way to view $1/\lambda$. If another person arrives at a public telephone just before you, find the probability that you will have to wait more than five minutes. Let $X$ = the length of a phone call, in minutes. What are $m$, $\mu$, and $\sigma$? The probability that you must wait more than five minutes is _______ .
Answer $m = \frac{1}{12}$ $\mu = 12$ $\sigma = 12$ $P(x > 5) = 0.6592$ Example $7$ The time spent waiting between events is often modeled using the exponential distribution. For example, suppose that an average of 30 customers per hour arrive at a store and the time between arrivals is exponentially distributed. 1. On average, how many minutes elapse between two successive arrivals? 2. When the store first opens, how long on average does it take for three customers to arrive? 3. After a customer arrives, find the probability that it takes less than one minute for the next customer to arrive. 4. After a customer arrives, find the probability that it takes more than five minutes for the next customer to arrive. 5. Is an exponential distribution reasonable for this situation? Answer a. Since we expect 30 customers to arrive per hour (60 minutes), we expect on average one customer to arrive every two minutes. b. Since one customer arrives every two minutes on average, it will take six minutes on average for three customers to arrive. c. Let $X =$ the time between arrivals, in minutes. By part a, $\mu = 2$, so $m = \frac{1}{2}= 0.5$. The cumulative distribution function is $P(X < x) = 1 – e^{(-0.5)(x)}$ Therefore $P(X < 1) = 1 – e^{(–0.5)(1)} = 0.3935$. d. $P(X > 5) = 1 – P(X < 5) = 1 – (1 – e^{(-0.5)(5)}) = e^{–2.5} \approx 0.0821$. e. This model assumes that a single customer arrives at a time, which may not be reasonable since people might shop in groups, leading to several customers arriving at the same time. It also assumes that the flow of customers does not change throughout the day, which is not valid if some times of the day are busier than others. Memorylessness of the Exponential Distribution Recall that the amount of time between customer arrivals at the store discussed in Example $7$ is exponentially distributed with a mean of two minutes. Suppose that five minutes have elapsed since the last customer arrived. Since an unusually long amount of time has now elapsed, it would seem to be more likely for a customer to arrive within the next minute. With the exponential distribution, this is not the case–the additional time spent waiting for the next customer does not depend on how much time has already elapsed since the last customer. This is referred to as the memoryless property. The exponential and geometric probability density functions are the only probability functions that have the memoryless property. Specifically, the memoryless property says that $P(X > r + t | X > r) = P (X > t)$ for all $r \geq 0$ and $t \geq 0$ For example, if five minutes have elapsed since the last customer arrived, then the probability that more than one minute will elapse before the next customer arrives is computed by using r = 5 and t = 1 in the foregoing equation. $P(X > 5 + 1 | X > 5) = P(X > 1) = e^{(-0.5)(1)} = 0.6065$. This is the same probability as that of waiting more than one minute for a customer to arrive after the previous arrival. The exponential distribution is often used to model the longevity of an electrical or mechanical device. In Example $5$, the lifetime of a certain computer part has the exponential distribution with a mean of ten years. The memoryless property says that knowledge of what has occurred in the past has no effect on future probabilities. In this case it means that an old part is not any more likely to break down at any particular time than a brand new part. In other words, the part stays as good as new until it suddenly breaks.
For example, if the part has already lasted ten years, then the probability that it lasts another seven years is $P(X > 17|X > 10) = P(X > 7) = 0.4966$, where the vertical line is read as "given". Example $8$ Refer back to the postal clerk again where the time a postal clerk spends with his or her customer has an exponential distribution with a mean of four minutes. Suppose a customer has spent four minutes with a postal clerk. What is the probability that he or she will spend at least an additional three minutes with the postal clerk? The decay parameter of $X$ is $m = \frac{1}{4} = 0.25$, so $X \sim Exp(0.25)$. The cumulative distribution function is $P(X < x) = 1 – e^{–0.25x}$. We want to find $P (X > 7|X > 4)$. The memoryless property says that $P (X > 7|X > 4) = P (X > 3)$, so we just need to find the probability that a customer spends more than three minutes with a postal clerk. This is $P(X > 3) = 1 – P(X < 3) = 1 – (1 – e^{–0.25⋅3}) = e^{–0.75} \approx 0.4724$. Relationship between the Poisson and the Exponential Distribution There is an interesting relationship between the exponential distribution and the Poisson distribution. Suppose that the time that elapses between two successive events follows the exponential distribution with a mean of $\mu$ units of time. Also assume that these times are independent, meaning that the time between events is not affected by the times between previous events. If these assumptions hold, then the number of events per unit time follows a Poisson distribution with mean $\frac{1}{\mu}$. Recall that if $X$ has the Poisson distribution with mean $\mu$, then $P(X=x)=\frac{\mu^{x} e^{-\mu}}{x !}$. The formula for the exponential distribution: $f(x)=m e^{-m x}=\frac{1}{\mu} e^{-\frac{1}{\mu} x}$ Where $m =$ the rate parameter, or $\mu =$ average time between occurrences. We see that the exponential is the cousin of the Poisson distribution and they are linked through this formula. There are important differences that make each distribution relevant for different types of probability problems. First, the Poisson has a discrete random variable, $x$, where time, a continuous variable, is artificially broken into discrete pieces. We saw that the number of occurrences of an event in a given time interval, $x$, follows the Poisson distribution. For example, the number of times the telephone rings per hour. By contrast, the time between occurrences follows the exponential distribution. For example: the telephone just rang; how long will it be until it rings again? We are measuring the length of time of the interval, a continuous random variable, exponential, not events during an interval, Poisson. The Exponential Distribution v. the Poisson Distribution A visual way to show both the similarities and differences between these two distributions is with a time line. The random variable for the Poisson distribution is discrete and thus counts events during a given time period, $t_1$ to $t_2$ on Figure $20$, and calculates the probability of that number occurring. The number of events, four in the graph, is measured in counting numbers; therefore, the random variable of the Poisson is a discrete random variable. The exponential probability distribution calculates probabilities of the passage of time, a continuous random variable. In Figure $20$ this is shown as the bracket from $t_1$ to the next occurrence of the event marked with a triangle. Classic Poisson distribution questions are "how many people will arrive at my checkout window in the next hour?".
Classic exponential distribution questions are "how long will it be until the next person arrives," or a variant, "how long will the person remain here once they have arrived?". Again, the formula for the exponential distribution is: $f(x)=m e^{-m x} \text { or } f(x)=\frac{1}{\mu} e^{-\frac{1}{\mu} x}\nonumber$ We see immediately the similarity between the exponential formula and the Poisson formula. $P(x)=\frac{\mu^{x} e^{-\mu}}{x !}\nonumber$ Both probability density functions are based upon the relationship between time and exponential growth or decay. The “e” in the formula is a constant with the approximate value of 2.71828 and is the base of the natural logarithm. When people say that something has grown exponentially this is what they are talking about. An example of the exponential and the Poisson will make clear the differences between the two. It will also show the interesting applications they have. Poisson Distribution Suppose that historically 10 customers arrive at the checkout lines each hour. Remember that this is still probability so we have to be told these historical values. We see this is a Poisson probability problem. We can put this information into the Poisson probability density function and get a general formula that will calculate the probability of any specific number of customers arriving in the next hour. The formula is for any value of the random variable we choose, and so the x is put into the formula. This is the formula: $f(x)=\frac{10^{x} e^{-10}}{x !}\nonumber$ As an example, the probability of 15 people arriving at the checkout counter in the next hour would be $P(x=15)=\frac{10^{15} e^{-10}}{15 !}=0.0347\nonumber$ Here we have inserted x = 15 and calculated that the probability that 15 people will arrive in the next hour is 0.0347. Exponential Distribution If we keep the same historical facts that 10 customers arrive each hour, but we now are interested in the service time a person spends at the counter, then we would use the exponential distribution. The exponential probability function for any value of x, the random variable, for this particular checkout counter historical data is: $f(x)=\frac{1}{.1} e^{-x / .1}=10 e^{-10 x}\nonumber$ To calculate $\mu$, the historical average service time, we simply divide the number of people that arrive per hour, 10, into the time period, one hour, and have $\mu = 0.1$. Historically, people spend 0.1 of an hour at the checkout counter, or 6 minutes. This explains the .1 in the formula. There is a natural confusion with $\mu$ in both the Poisson and exponential formulas. They have different meanings, although they have the same symbol. The mean of the exponential is one divided by the mean of the Poisson. If you are given the historical number of arrivals you have the mean of the Poisson. If you are given an historical length of time between events you have the mean of an exponential. Continuing with our example of the checkout clerk, if we wanted to know the probability that a person would spend 9 minutes or less checking out, then we use this formula. First, we convert to the same time units, which are parts of one hour. Nine minutes is 0.15 of one hour. Next we note that we are asking for a range of values. This is always the case for a continuous random variable. We write the probability question as: $P(x \leq 0.15)=1-e^{-10 x}\nonumber$ We can now put the numbers into the formula and we have our result.
$P(x \leq 0.15)=1-e^{-10(.15)}=0.7769\nonumber$ The probability that a customer will spend 9 minutes or less checking out is $0.7769$. We see that we have a high probability of getting out in less than nine minutes and a small probability of having 15 customers arriving in the next hour.
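Both of these results can be checked numerically. The short Python sketch below is an illustrative aid only, not part of the original text; it assumes the scipy library is available. It evaluates the exponential cumulative probability for a checkout of 0.15 hours (9 minutes) and the Poisson probability of exactly 15 arrivals in an hour.

```python
from scipy import stats

# Exponential service time: historical mean mu = 0.1 hour, so decay rate m = 1/mu = 10 per hour
mu = 0.1
p_nine_minutes = stats.expon(scale=mu).cdf(0.15)   # P(X <= 0.15) = 1 - e^(-10 * 0.15)
print(round(p_nine_minutes, 4))                    # 0.7769

# Poisson arrivals: an average of 10 customers per hour
p_fifteen_arrivals = stats.poisson(10).pmf(15)     # P(X = 15) = 10^15 * e^(-10) / 15!
print(round(p_fifteen_arrivals, 4))                # 0.0347
```

The same check could be done with any statistical calculator or spreadsheet; the point is simply that the exponential answer comes from the cdf of a continuous random variable while the Poisson answer comes from the pmf of a discrete one.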
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/05%3A_Continuous_Random_Variables/5.03%3A_The_Exponential_Distribution.txt
5.1 Properties of Continuous Probability Density Functions Probability density function (pdf) $f(x)$: • $f(x) \geq 0$ • The total area under the curve $f(x)$ is one. Cumulative distribution function (cdf): $P(X \leq x)$ 5.2 The Uniform Distribution $X \sim U (a, b)$ The mean is $\mu=\frac{a+b}{2}$ The standard deviation is $\sigma=\sqrt{\frac{(b-a)^{2}}{12}}$ Probability density function: $f(x)=\frac{1}{b-a} \text { for } a \leq X \leq b$ Area to the Left of $\bf{x}$: $P(X<x)=(x-a)\left(\frac{1}{b-a}\right)$ Area to the Right of $\bf{x}$: $P(X>x)=(b-x)\left(\frac{1}{b-a}\right)$ Area Between $\bf{c}$ and $\bf{d}$: $P(c<X<d)=(d-c)\left(\frac{1}{b-a}\right)$ 5.3 The Exponential Distribution • pdf: $f(x) = me^{(–mx)}$ where $x \geq 0$ and $m > 0$ • cdf: $P(X \leq x) = 1 – e^{(–mx)}$ • mean $\mu = \frac{1}{m}$ • standard deviation $\sigma = \mu$ • Additionally • $P(X > x) = e^{(–mx)}$ • $P(a < X < b) = e^{(–ma)} – e^{(–mb)}$ • Poisson probability: $P(X=x)=\frac{\mu^{x} e^{-\mu}}{x !}$ with mean and variance of $\mu$ 5.05: Chapter Homework 5.1 Properties of Continuous Probability Density Functions For each probability and percentile problem, draw the picture. 70. Consider the following experiment. You are one of 100 people enlisted to take part in a study to determine the percent of nurses in America with an R.N. (registered nurse) degree. You ask nurses if they have an R.N. degree. The nurses answer “yes” or “no.” You then calculate the percentage of nurses with an R.N. degree. You give that percentage to your supervisor. 1. What part of the experiment will yield discrete data? 2. What part of the experiment will yield continuous data? 71. When age is rounded to the nearest year, do the data stay continuous, or do they become discrete? Why? 5.2 The Uniform Distribution For each probability and percentile problem, draw the picture. 72. Births are approximately uniformly distributed between the 52 weeks of the year. They can be said to follow a uniform distribution from one to 53 (spread of 52 weeks). 1. Graph the probability distribution. 2. $f(x) =$ _________ 3. $\mu =$ _________ 4. $\sigma =$ _________ 5. Find the probability that a person is born at the exact moment week 19 starts. That is, find $P(x = 19) =$ _________ 6. $P(2 < x < 31) =$ _________ 7. Find the probability that a person is born after week 40. 8. $P(12 < x | x < 28) =$ _________ 73. A random number generator picks a number from one to nine in a uniform manner. 1. Graph the probability distribution. 2. $f(x) =$ _________ 3. $\mu =$ _________ 4. $\sigma =$ _________ 5. $P(3.5 < x < 7.25) =$ _________ 6. $P(x > 5.67)$ 7. $P(x > 5 | x > 3) =$ _________ 74. According to a study by Dr. John McDougall of his live-in weight loss program at St. Helena Hospital, the people who follow his program lose between six and 15 pounds a month until they approach trim body weight. Let’s suppose that the weight loss is uniformly distributed. We are interested in the weight loss of a randomly selected individual following the program for one month. 1. Define the random variable. $X =$ _________ 2. Graph the probability distribution. 3. $f(x) =$ _________ 4. $\mu =$ _________ 5. $\sigma =$ _________ 6. Find the probability that the individual lost more than ten pounds in a month. 7. Suppose it is known that the individual lost more than ten pounds in a month. Find the probability that he lost less than 12 pounds in the month. 8. $P(7 < x < 13 | x > 9) =$ __________. State this in a probability question, similarly to parts g and h, draw the picture, and find the probability. 75. A subway train on the Red Line arrives every eight minutes during rush hour.
We are interested in the length of time a commuter must wait for a train to arrive. The time follows a uniform distribution. 1. Define the random variable. $X =$ _______ 2. Graph the probability distribution. 3. $f(x) =$ _______ 4. $\mu =$ _______ 5. $\sigma =$ _______ 6. Find the probability that the commuter waits less than one minute. 7. Find the probability that the commuter waits between three and four minutes. 76. The age of a first grader on September 1 at Garden Elementary School is uniformly distributed from 5.8 to 6.8 years. We randomly select one first grader from the class. 1. Define the random variable. $X =$ _________ 2. Graph the probability distribution. 3. $f(x) =$ _________ 4. $\mu =$ _________ 5. $\sigma =$ _________ 6. Find the probability that she is over 6.5 years old. 7. Find the probability that she is between four and six years old. Use the following information to answer the next three exercises. The Sky Train from the terminal to the rental–car and long–term parking center is supposed to arrive every eight minutes. The waiting times for the train are known to follow a uniform distribution. 77. What is the average waiting time (in minutes)? 1. zero 2. two 3. three 4. four 78. The probability of waiting more than seven minutes given a person has waited more than four minutes is? 1. 0.125 2. 0.25 3. 0.5 4. 0.75 79. The time (in minutes) until the next bus departs a major bus depot follows a distribution with $f(x) = \frac{1}{20}$ where $x$ goes from 25 to 45 minutes. 1. Define the random variable. $X =$ ________ 2. Graph the probability distribution. 3. The distribution is ______________ (name of distribution). It is _____________ (discrete or continuous). 4. $\mu =$ ________ 5. $\sigma =$ ________ 6. Find the probability that the time is at most 30 minutes. Sketch and label a graph of the distribution. Shade the area of interest. Write the answer in a probability statement. 7. Find the probability that the time is between 30 and 40 minutes. Sketch and label a graph of the distribution. Shade the area of interest. Write the answer in a probability statement. 8. $P(25 < x < 55) =$ _________. State this in a probability statement, similarly to parts g and h, draw the picture, and find the probability. 80. Suppose that the value of a stock varies each day from $16 to $25 with a uniform distribution. 1. Find the probability that the value of the stock is more than $19. 2. Find the probability that the value of the stock is between $19 and $22. 3. Given that the stock is greater than $18, find the probability that the stock is more than $21. 81. A fireworks show is designed so that the time between fireworks is between one and five seconds, and follows a uniform distribution. 1. Find the average time between fireworks. 2. Find the probability that the time between fireworks is greater than four seconds. 82. The number of miles driven by a truck driver falls between 300 and 700, and follows a uniform distribution. 1. Find the probability that the truck driver goes more than 650 miles in a day. 2. Find the probability that the truck driver goes between 400 and 650 miles in a day. 5.3 The Exponential Distribution 83. Suppose that the length of long distance phone calls, measured in minutes, is known to have an exponential distribution with the average length of a call equal to eight minutes. 1. Define the random variable. $X =$ ________________. 2. Is $X$ continuous or discrete? 3. $\mu =$ ________ 4. $\sigma =$ ________ 5. Draw a graph of the probability distribution. Label the axes. 6.
Find the probability that a phone call lasts less than nine minutes. 7. Find the probability that a phone call lasts more than nine minutes. 8. Find the probability that a phone call lasts between seven and nine minutes. 9. If 25 phone calls are made one after another, on average, what would you expect the total to be? Why? 84. Suppose that the useful life of a particular car battery, measured in months, decays with parameter 0.025. We are interested in the life of the battery. 1. Define the random variable. $X =$ _________________________________. 2. Is $X$ continuous or discrete? 3. On average, how long would you expect one car battery to last? 4. On average, how long would you expect nine car batteries to last, if they are used one after another? 5. Find the probability that a car battery lasts more than 36 months. 6. Seventy percent of the batteries last at least how long? 85. The percent of persons (ages five and older) in each state who speak a language at home other than English is approximately exponentially distributed with a mean of 9.848. Suppose we randomly pick a state. 1. Define the random variable. $X =$ _________________________________. 2. Is $X$ continuous or discrete? 3. $\mu =$ ________ 4. $\sigma =$ ________ 5. Draw a graph of the probability distribution. Label the axes. 6. Find the probability that the percent is less than 12. 7. Find the probability that the percent is between eight and 14. 8. The percent of all individuals living in the United States who speak a language at home other than English is 13.8. • Why is this number different from 9.848%? • What would make this number higher than 9.848%? 86. The time (in years) after reaching age 60 that it takes an individual to retire is approximately exponentially distributed with a mean of about five years. Suppose we randomly pick one retired individual. We are interested in the time after age 60 to retirement. 1. Define the random variable. $X =$ _________________________________. 2. Is $X$ continuous or discrete? 3. $\mu =$ ________ 4. $\sigma =$ ________ 5. Draw a graph of the probability distribution. Label the axes. 6. Find the probability that the person retired after age 70. 7. Do more people retire before age 65 or after age 65? 8. In a room of 1,000 people over age 80, how many do you expect will NOT have retired yet? 87. The cost of all maintenance for a car during its first year is approximately exponentially distributed with a mean of $150. 1. Define the random variable. $X =$ _________________________________. 2. $\mu =$ ________ 3. $\sigma =$ ________ 4. Draw a graph of the probability distribution. Label the axes. 5. Find the probability that a car required over $300 for maintenance during its first year. Use the following information to answer the next three exercises. The average lifetime of a certain new cell phone is three years. The manufacturer will replace any cell phone failing within two years of the date of purchase. The lifetime of these cell phones is known to follow an exponential distribution. 88. The decay rate is: 1. 0.3333 2. 0.5000 3. 2 4. 3 89. What is the probability that a phone will fail within two years of the date of purchase? 1. 0.8647 2. 0.4866 3. 0.2212 4. 0.9997 90. What is the median lifetime of these phones (in years)? 1. 0.1941 2. 1.3863 3. 2.0794 4. 5.5452 91. At a 911 call center, calls come in at an average rate of one call every two minutes. Assume that the time that elapses from one call to the next has the exponential distribution. 1.
On average, how much time occurs between five consecutive calls? 2. Find the probability that after a call is received, it takes more than three minutes for the next call to occur. 3. Ninety percent of all calls occur within how many minutes of the previous call? 4. Suppose that two minutes have elapsed since the last call. Find the probability that the next call will occur within the next minute. 5. Find the probability that less than 20 calls occur within an hour. 92. In major league baseball, a no-hitter is a game in which a pitcher, or pitchers, doesn't give up any hits throughout the game. No-hitters occur at a rate of about three per season. Assume that the duration of time between no-hitters is exponential. 1. What is the probability that an entire season elapses with a single no-hitter? 2. If an entire season elapses without any no-hitters, what is the probability that there are no no-hitters in the following season? 3. What is the probability that there are more than 3 no-hitters in a single season? 93. During the years 1998–2012, a total of 29 earthquakes of magnitude greater than 6.5 have occurred in Papua New Guinea. Assume that the time spent waiting between earthquakes is exponential. 1. What is the probability that the next earthquake occurs within the next three months? 2. Given that six months has passed without an earthquake in Papua New Guinea, what is the probability that the next three months will be free of earthquakes? 3. What is the probability of zero earthquakes occurring in 2014? 4. What is the probability that at least two earthquakes will occur in 2014? 94. According to the American Red Cross, about one out of nine people in the U.S. have Type B blood. Suppose the blood types of people arriving at a blood drive are independent. In this case, the number of Type B blood types that arrive roughly follows the Poisson distribution. 1. If 100 people arrive, how many on average would be expected to have Type B blood? 2. What is the probability that over 10 people out of these 100 have Type B blood? 3. What is the probability that more than 20 people arrive before a person with Type B blood is found? 95. A web site experiences traffic during normal working hours at a rate of 12 visits per hour. Assume that the duration between visits has the exponential distribution. 1. Find the probability that the duration between two successive visits to the web site is more than ten minutes. 2. The top 25% of durations between visits are at least how long? 3. Suppose that 20 minutes have passed since the last visit to the web site. What is the probability that the next visit will occur within the next 5 minutes? 4. Find the probability that less than 7 visits occur within a one-hour period. 96. At an urgent care facility, patients arrive at an average rate of one patient every seven minutes. Assume that the duration between arrivals is exponentially distributed. 1. Find the probability that the time between two successive visits to the urgent care facility is less than 2 minutes. 2. Find the probability that the time between two successive visits to the urgent care facility is more than 15 minutes. 3. If 10 minutes have passed since the last arrival, what is the probability that the next person will arrive within the next five minutes? 4. Find the probability that more than eight patients arrive during a half-hour period.
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/05%3A_Continuous_Random_Variables/5.04%3A_Chapter_Formula_Review.txt
Conditional Probability the likelihood that an event will occur given that another event has already occurred. decay parameter The decay parameter describes the rate at which probabilities decay to zero for increasing values of $x$. It is the value $m$ in the probability density function $f(x)=m e^{(-m x)}$ of an exponential random variable. It is also equal to $m = \frac{1}{\mu}$, where $\mu$ is the mean of the random variable. Exponential Distribution a continuous random variable (RV) that appears when we are interested in the intervals of time between some random events, for example, the length of time between emergency arrivals at a hospital. The mean is $\mu = \frac{1}{m}$ and the standard deviation is $\sigma = \frac{1}{m}$. The probability density function is $f(x)=m e^{-m x} \text { or } f(x)=\frac{1}{\mu} e^{-\frac{1}{\mu} x}, x \geq 0$ and the cumulative distribution function is $P(X \leq x)=1-e^{-m x} \text { or } P(X \leq x)=1-e^{-\frac{1}{\mu} x}$. memoryless property For an exponential random variable $X$, the memoryless property is the statement that knowledge of what has occurred in the past has no effect on future probabilities. This means that the probability that $X$ exceeds $x + t$, given that it has exceeded $x$, is the same as the probability that $X$ would exceed $t$ if we had no knowledge about it. In symbols we say that $P(X > x + t|X > x) = P(X > t)$. Poisson distribution If there is a known average of $\mu$ events occurring per unit time, and these events are independent of each other, then the number of events $X$ occurring in one unit of time has the Poisson distribution. The probability of $x$ events occurring in one unit time is equal to $P(X=x)=\frac{\mu^{x} e^{-\mu}}{x !}$. Uniform Distribution a continuous random variable (RV) that has equally likely outcomes over the domain, $a < x < b$; it is often referred to as the rectangular distribution because the graph of the pdf has the form of a rectangle. The mean is $\mu=\frac{a+b}{2}$ and the standard deviation is $\sigma=\sqrt{\frac{(b-a)^{2}}{12}}$. The probability density function is $f(x)=\frac{1}{b-a}$ for $a \leq x \leq b$. 5.07: Chapter Practice 5.1 Properties of Continuous Probability Density Functions 1. Which type of distribution does the graph illustrate? 2. Which type of distribution does the graph illustrate? 3. Which type of distribution does the graph illustrate? 4. What does the shaded area represent? $P$(___$< x <$ ___) 5. What does the shaded area represent? $P$(___$< x <$ ___) 6. For a continuous probability distribution, $0 \leq x \leq 15$. What is $P(x > 15)$? 7. What is the area under $f(x)$ if the function is a continuous probability density function? 8. For a continuous probability distribution, $0 \leq x \leq 10$. What is $P(x = 7)$? 9. A continuous probability function is restricted to the portion between $x = 0$ and $7$. What is $P(x = 10)$? 10. $f(x)$ for a continuous probability function is $\frac{1}{5}$, and the function is restricted to $0 \leq x \leq 5$. What is $P(x < 0)$? 11. $f(x)$, a continuous probability function, is equal to $\frac{1}{12}$, and the function is restricted to $0 \leq x \leq 12$. What is $P(0 < x < 12)$? 12. Find the probability that $x$ falls in the shaded area. 13. Find the probability that $x$ falls in the shaded area. 14. Find the probability that $x$ falls in the shaded area. 15. $f(x)$, a continuous probability function, is equal to $\frac{1}{3}$ and the function is restricted to $1 \leq x \leq 4$. Describe $P(x>\frac{3}{2})$.
5.2 The Uniform Distribution Use the following information to answer the next ten questions. The data that follow are the square footage (in 1,000 feet squared) of 28 homes. 1.5 2.4 3.6 2.6 1.6 2.4 2.0 3.5 2.5 1.8 2.4 2.5 3.5 4.0 2.6 1.6 2.2 1.8 3.8 2.5 1.5 2.8 1.8 4.5 1.9 1.9 3.1 1.6 Table $2$ The sample mean = 2.50 and the sample standard deviation = 0.8302. The distribution can be written as $X \sim U(1.5, 4.5)$. 16. What type of distribution is this? 17. In this distribution, outcomes are equally likely. What does this mean? 18. What is the height of $f(x)$ for the continuous probability distribution? 19. What are the constraints for the values of $x$? 20. Graph $P(2 < x < 3)$. 21. What is $P(2 < x < 3)$? 22. What is $P(x < 3.5 | x < 4)$? 23. What is $P(x = 1.5)$? 24. Find the probability that a randomly selected home has more than 3,000 square feet given that you already know the house has more than 2,000 square feet. Use the following information to answer the next eight exercises. A distribution is given as $X \sim U(0, 12)$. 25. What is $a$? What does it represent? 26. What is $b$? What does it represent? 27. What is the probability density function? 28. What is the theoretical mean? 29. What is the theoretical standard deviation? 30. Draw the graph of the distribution for $P(x > 9)$. 31. Find $P(x > 9)$. Use the following information to answer the next eleven exercises. The age of cars in the staff parking lot of a suburban college is uniformly distributed from six months (0.5 years) to 9.5 years. 32. What is being measured here? 33. In words, define the random variable $X$. 34. Are the data discrete or continuous? 35. The interval of values for $x$ is ______. 36. The distribution for $X$ is ______. 37. Write the probability density function. 38. Graph the probability distribution. 1. Sketch the graph of the probability distribution. 2. Identify the following values: • Lowest value for $\overline{x}$: _______ • Highest value for $\overline{x}$: _______ • Height of the rectangle: _______ • Label for x-axis (words): _______ • Label for y-axis (words): _______ 39. Find the average age of the cars in the lot. 40. Find the probability that a randomly chosen car in the lot was less than four years old. 1. Sketch the graph, and shade the area of interest. 2. Find the probability. $P(x < 4)$ = _______ 41. Considering only the cars less than 7.5 years old, find the probability that a randomly chosen car in the lot was less than four years old. 1. Sketch the graph, shade the area of interest. 2. Find the probability. $P(x < 4 | x < 7.5) =$ _______ 42. What has changed in the previous two problems that made the solutions different? 43. Find the third quartile of ages of cars in the lot. This means you will have to find the value such that $\frac{3}{4}$, or 75%, of the cars are at most (less than or equal to) that age. 1. Sketch the graph, and shade the area of interest. 2. Find the value $k$ such that $P(x < k) = 0.75$. 3. The third quartile is _______ 5.3 The Exponential Distribution Use the following information to answer the next ten exercises. A customer service representative must spend different amounts of time with each customer to resolve various concerns. The amount of time spent with each customer can be modeled by the following distribution: $X \sim Exp(0.2)$ 44. What type of distribution is this? 45. Are outcomes equally likely in this distribution? Why or why not? 46. What is $m$? What does it represent? 47. What is the mean? 48. What is the standard deviation? 49. 
State the probability density function. 50. Graph the distribution. 51. Find $P(2 < x < 10)$. 52. Find $P(x > 6)$. 53. Find the 70th percentile. Use the following information to answer the next seven exercises. A distribution is given as $X \sim Exp(0.75)$. 54. What is m? 55. What is the probability density function? 56. What is the cumulative distribution function? 57. Draw the distribution. 58. Find $P(x < 4)$. 59. Find the 30th percentile. 60. Find the median. 61. Which is larger, the mean or the median? Use the following information to answer the next 16 exercises. Carbon-14 is a radioactive element with a half-life of about 5,730 years. Carbon-14 is said to decay exponentially. The decay rate is 0.000121. We start with one gram of carbon-14. We are interested in the time (years) it takes to decay carbon-14. 62. What is being measured here? 63. Are the data discrete or continuous? 64. In words, define the random variable $X$. 65. What is the decay rate ($m$)? 66. The distribution for $X$ is ______. 67. Find the amount (percent of one gram) of carbon-14 lasting less than 5,730 years. This means, find $P(x < 5,730)$. 1. Sketch the graph, and shade the area of interest. 2. Find the probability. $P(x < 5,730) =$ __________ 68. Find the percentage of carbon-14 lasting longer than 10,000 years. 1. Sketch the graph, and shade the area of interest. 2. Find the probability. $P(x > 10,000) =$ ________ 69. Thirty percent (30%) of carbon-14 will decay within how many years? 1. Sketch the graph, and shade the area of interest. Find the value $k$ such that $P(x < k) = 0.30$. 5.08: Chapter References 5.2 The Uniform Distribution McDougall, John A. The McDougall Program for Maximum Weight Loss. Plume, 1995. 5.3 The Exponential Distribution Data from the United States Census Bureau. Data from World Earthquakes, 2013. Available online at http://www.world-earthquakes.com/ (accessed June 11, 2013). “No-hitter.” Baseball-Reference.com, 2013. Available online at http://www.baseball-reference.com/bullpen/No-hitter (accessed June 11, 2013). Zhou, Rick. “Exponential Distribution lecture slides.” Available online at www.public.iastate.edu/~riczw/stat330s11/lecture/lec13.pdf‎ (accessed June 11, 2013). 5.09: Chapter Review 5.1 Properties of Continuous Probability Density Functions The probability density function (pdf) is used to describe probabilities for continuous random variables. The area under the density curve between two points corresponds to the probability that the variable falls between those two values. In other words, the area under the density curve between points a and b is equal to $P(a < x < b)$. The cumulative distribution function (cdf) gives the probability as an area. If $X$ is a continuous random variable, the probability density function (pdf), $f(x)$, is used to draw the graph of the probability distribution. The total area under the graph of $f(x)$ is one. The area under the graph of $f(x)$ and between values $a$ and $b$ gives the probability $P(a < x < b)$. The cumulative distribution function (cdf) of $X$ is defined by $P(X \leq x)$. It is a function of x that gives the probability that the random variable is less than or equal to x. 5.2 The Uniform Distribution If $X$ has a uniform distribution where $a < x < b$ or $a \leq x \leq b$, then $X$ takes on values between $a$ and $b$ (may include $a$ and $b$). All values $x$ are equally likely. We write $X \sim U(a, b)$. The mean of $X$ is $\mu=\frac{a+b}{2}$. The standard deviation of $X$ is $\sigma=\sqrt{\frac{(b-a)^{2}}{12}}$. 
The probability density function of $X$ is $f(x)=\frac{1}{b-a}$ for $a \leq x \leq b$. The cumulative distribution function of $X$ is $P(X \leq x)=\frac{x-a}{b-a}$. $X$ is continuous. The probability $P(c < X < d)$ may be found by computing the area under $f(x)$, between $c$ and $d$. Since the corresponding area is a rectangle, the area may be found simply by multiplying the width and the height. 5.3 The Exponential Distribution If $X$ has an exponential distribution with mean $\mu$, then the decay parameter is $m=\frac{1}{\mu}$. The probability density function of $X$ is $f(x) = me^{-mx}$ (or equivalently $f(x)=\frac{1}{\mu} e^{-x / \mu}$). The cumulative distribution function of $X$ is $P(X \leq x)=1-e^{-m x}$. 5.10: Chapter Solution (Practice Homework)
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/05%3A_Continuous_Random_Variables/5.06%3A_Chapter_Key_Terms.txt
The normal probability density function, a continuous distribution, is the most important of all the distributions. It is widely used and even more widely abused. Its graph is bell-shaped. You see the bell curve in almost all disciplines. Some of these include psychology, business, economics, the sciences, nursing, and, of course, mathematics. Some of your instructors may use the normal distribution to help determine your grade. Most IQ scores are normally distributed. Often real-estate prices fit a normal distribution. The normal distribution is extremely important, but it cannot be applied to everything in the real world. Remember here that we are still talking about the distribution of population data. This is a discussion of probability and thus it is the population data that may be normally distributed, and if it is, then this is how we can find probabilities of specific events just as we did for population data that may be binomially distributed or Poisson distributed. This caution is here because in the next chapter we will see that the normal distribution describes something very different from raw data and forms the foundation of inferential statistics. The normal distribution has two parameters (two numerical descriptive measures): the mean ($\mu$) and the standard deviation ($\sigma$). If $X$ is a quantity to be measured that has a normal distribution with mean ($\mu$) and standard deviation ($\sigma$), we designate this by writing the following formula of the normal probability density function: The probability density function is a rather complicated function. Do not memorize it. It is not necessary. $f(x)=\frac{1}{\sigma \cdot \sqrt{2 \cdot \pi}} \cdot \mathrm{e}^{-\frac{1}{2} \cdot\left(\frac{x-\mu}{\sigma}\right)^{2}}\nonumber$ The curve is symmetric about a vertical line drawn through the mean, $\mu$. The mean is the same as the median, which is the same as the mode, because the graph is symmetric about $\mu$. As the notation indicates, the normal distribution depends only on the mean and the standard deviation. Note that this is unlike several probability density functions we have already studied, such as the Poisson, where the mean is equal to $\mu$ and the standard deviation is simply the square root of the mean, or the binomial, where $p$ is used to determine both the mean and standard deviation. Since the area under the curve must equal one, a change in the standard deviation, $\sigma$, causes a change in the shape of the normal curve; the curve becomes fatter and wider or skinnier and taller depending on $\sigma$. A change in $\mu$ causes the graph to shift to the left or right. This means there are an infinite number of normal probability distributions. One of special interest is called the standard normal distribution. 6.01: The Standard Normal Distribution The standard normal distribution is a normal distribution of standardized values called z-scores. A z-score is measured in units of the standard deviation. The mean for the standard normal distribution is zero, and the standard deviation is one. What this does is dramatically simplify the mathematical calculation of probabilities. Take a moment and substitute zero and one in the appropriate places in the above formula and you can see that the equation collapses into one that can be much more easily solved using integral calculus. The transformation $z=\frac{x-\mu}{\sigma}$ produces the distribution $Z \sim N(0, 1)$.
The value $x$ in the given equation comes from a known normal distribution with known mean $\mu$ and known standard deviation $\sigma$. The z-score tells how many standard deviations a particular $x$ is away from the mean. Z-Scores If $X$ is a normally distributed random variable and $X \sim N(\mu, \sigma)$, then the z-score for a particular $x$ is: $z=\frac{x-\mu}{\sigma}\nonumber$ The z-score tells you how many standard deviations the value $\bf{x}$ is above (to the right of) or below (to the left of) the mean, $\bf{\mu}$.Values of $x$ that are larger than the mean have positive z-scores, and values of $x$ that are smaller than the mean have negative z-scores. If x equals the mean, then x has a z-score of zero. Example $1$ Suppose $X \sim N(5, 6)$. This says that $X$ is a normally distributed random variable with mean $\mu = 5$ and standard deviation $\sigma = 6$. Suppose $x = 17$. Then: $z=\frac{x-\mu}{\sigma}=\frac{17-5}{6}=2\nonumber$ This means that $x = 17$ is two standard deviations $(2\sigma)$ above or to the right of the mean $\mu = 5$. Now suppose $x = 1$. Then: $z=\frac{x-\mu}{\sigma}=\frac{1-5}{6}=-0.67$ (rounded to two decimal places) This means that $\bf{x = 1}$ is 0.67 standard deviations $\bf{(–0.67\sigma)}$ below or to the left of the mean $\bf{\mu = 5}$. The Empirical Rule If $X$ is a random variable and has a normal distribution with mean $\mu$ and standard deviation $\sigma$, then the Empirical Rule states the following: • About 68% of the $x$ values lie between $–1\sigma$ and $+1\sigma$ of the mean $\mu$ (within one standard deviation of the mean). • About 95% of the $x$ values lie between $–2\sigma$ and $+2\sigma$ of the mean $\mu$ (within two standard deviations of the mean). • About 99.7% of the $x$ values lie between $–3\sigma$ and $+3\sigma$ of the mean $\mu$ (within three standard deviations of the mean). Notice that almost all the x values lie within three standard deviations of the mean. • The z-scores for $+1\sigma$ and $–1\sigma$ are $+1$ and $–1$, respectively. • The z-scores for $+2\sigma$ and $–2\sigma$ are $+2$ and $–2$, respectively. • The z-scores for $+3\sigma$ and $–3\sigma$ are $+3$ and $–3$ respectively. Example $1$ Suppose $x$ has a normal distribution with mean 50 and standard deviation 6. • About 68% of the $x$ values lie within one standard deviation of the mean. Therefore, about 68% of the $x$ values lie between $–1\sigma = (–1)(6) = –6$ and $1\sigma = (1)(6) = 6$ of the mean 50. The values $50 – 6 = 44$ and $50 + 6 = 56$ are within one standard deviation from the mean 50. The z-scores are –1 and +1 for 44 and 56, respectively. • About 95% of the $x$ values lie within two standard deviations of the mean. Therefore, about 95% of the $x$ values lie between $–2\sigma = (–2)(6) = –12$ and $2\sigma = (2)(6) = 12$. The values $50 – 12 = 38$ and $50 + 12 = 62$ are within two standard deviations from the mean 50. The z-scores are –2 and +2 for 38 and 62, respectively. • About 99.7% of the $x$ values lie within three standard deviations of the mean. Therefore, about 99.7% of the $x$ values lie between $–3\sigma = (–3)(6) = –18$ and $3\sigma = (3)(6) = 18$ of the mean 50. The values $50 – 18 = 32$ and $50 + 18 = 68$ are within three standard deviations from the mean 50. The z-scores are –3 and +3 for 32 and 68, respectively.
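As a quick numerical illustration of the z-score formula and the Empirical Rule, the short Python sketch below may help; it is an illustrative aid rather than part of the original text, and it assumes the scipy library is available. It standardizes the values 44 and 56 from the example above and then checks how much probability a normal distribution with mean 50 and standard deviation 6 places within one, two, and three standard deviations of its mean.

```python
from scipy import stats

mu, sigma = 50, 6                     # parameters from the example above

# z-scores for 44 and 56, one standard deviation below and above the mean
for x in (44, 56):
    print(x, (x - mu) / sigma)        # -1.0 and +1.0

# probability within k standard deviations of the mean, for k = 1, 2, 3
dist = stats.norm(mu, sigma)
for k in (1, 2, 3):
    prob = dist.cdf(mu + k * sigma) - dist.cdf(mu - k * sigma)
    print(k, round(prob, 4))          # approximately 0.6827, 0.9545, 0.9973
```

The printed probabilities match the 68%, 95%, and 99.7% figures quoted in the Empirical Rule.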
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/06%3A_The_Normal_Distribution/6.00%3A_Introduction_to_Normal_Distribution.txt
The shaded area in the following graph indicates the area to the right of $x$. This area is represented by the probability $P(X > x)$. Normal tables provide the probability between the mean, zero for the standard normal distribution, and a specific value such as $x_1$. This is the unshaded part of the graph from the mean to $x_1$. Because the normal distribution is symmetrical, if $x_1$ were the same distance to the left of the mean, the area (probability) in the left tail would be the same as the shaded area in the right tail. Also, bear in mind that because of the symmetry of this distribution, one-half of the probability is to the right of the mean and one-half is to the left of the mean. Calculations of Probabilities To find the probability for probability density functions with a continuous random variable we need to calculate the area under the function across the values of $X$ we are interested in. For the normal distribution this seems a difficult task given the complexity of the formula. There is, however, a simple way to get what we want. Here again is the formula for the normal distribution: $f(x)=\frac{1}{\sigma \cdot \sqrt{2 \cdot \pi}} \cdot \mathrm{e}^{-\frac{1}{2} \cdot\left(\frac{x-\mu}{\sigma}\right)^{2}}\nonumber$ Looking at the formula for the normal distribution it is not clear just how we are going to solve for the probability doing it the same way we did it with the previous probability functions. There we put the data into the formula and did the math. To solve this puzzle we start knowing that the area under a probability density function is the probability. This shows that the area between $X_1$ and $X_2$ is the probability as stated in the formula: $P (X_1 \leq X \leq X_2)$ The mathematical tool needed to find the area under a curve is integral calculus. The integral of the normal probability density function between the two points $x_1$ and $x_2$ is the area under the curve between these two points and is the probability between these two points. Doing these integrals is no fun and can be very time consuming. But now, remembering that there are an infinite number of normal distributions out there, we can consider the one with a mean of zero and a standard deviation of 1. This particular normal distribution is given the name Standard Normal Distribution. Putting these values into the formula it reduces to a very simple equation. We can now quite easily calculate all probabilities for any value of $x$, for this particular normal distribution, that has a mean of zero and a standard deviation of 1. These probabilities have been calculated and are available here in the appendix to the text or everywhere on the web. They are presented in various ways. The table in this text is the most common presentation and is set up with probabilities for one-half the distribution beginning with zero, the mean, and moving outward. The shaded area in the graph at the top of the table in Statistical Tables represents the probability from zero to the specific $Z$ value noted on the horizontal axis, $Z$. The only problem is that even with this table, it would be a ridiculous coincidence that our data had a mean of zero and a standard deviation of one. The solution is to convert the distribution we have with its mean and standard deviation to this new Standard Normal Distribution. The Standard Normal has a random variable called $Z$.
Using the standard normal table, typically called the normal table, to find the probability of one standard deviation, go to the $Z$ column, reading down to 1.0 and then read at column 0. That number, $0.3413$, is the probability from zero to 1 standard deviation. At the top of the table is the shaded area in the distribution which is the probability for one standard deviation. The table has solved our integral calculus problem, but only if our data has a mean of zero and a standard deviation of 1. However, the essential point here is that the probability for one standard deviation on one normal distribution is the same on every normal distribution. If the population data set has a mean of 10 and a standard deviation of 5 then the probability from 10 to 15, one standard deviation, is the same as from zero to 1, one standard deviation on the standard normal distribution. To compute probabilities, areas, for any normal distribution, we need only to convert the particular normal distribution to the standard normal distribution and look up the answer in the tables. As review, here again is the standardizing formula: $Z=\frac{x-\mu}{\sigma}\nonumber$ where $Z$ is the value on the standard normal distribution, $X$ is the value from a normal distribution one wishes to convert to the standard normal, and $\mu$ and $\sigma$ are, respectively, the mean and standard deviation of that population. Note that the equation uses $\mu$ and $\sigma$ which denote population parameters. This is still dealing with probability so we always are dealing with the population, with known parameter values and a known distribution. It is also important to note that because the normal distribution is symmetrical it does not matter if the z-score is positive or negative when calculating a probability. One standard deviation to the left (negative Z-score) covers the same area as one standard deviation to the right (positive Z-score). This fact is why the Standard Normal tables do not provide areas for the left side of the distribution. Because of this symmetry, the Z-score formula is sometimes written as: $Z=\frac{|x-\mu|}{\sigma}\nonumber$ where the vertical lines in the equation mean the absolute value of the number. What the standardizing formula is really doing is computing the number of standard deviations $X$ is from the mean of its own distribution. The standardizing formula and the concept of counting standard deviations from the mean is the secret of all that we will do in this statistics class. The reason this is true is that all of statistics boils down to variation, and the counting of standard deviations is a measure of variation. This formula, in many disguises, will reappear over and over throughout this course. Example $1$ The final exam scores in a statistics class were normally distributed with a mean of 63 and a standard deviation of five. a. Find the probability that a randomly selected student scored more than 65 on the exam. b. Find the probability that a randomly selected student scored less than 85. Answer a Let $X$ = a score on the final exam. $X \sim N(63, 5)$, where $\mu = 63$ and $\sigma = 5$. Draw a graph. Then, find $P(x > 65)$. $P(x > 65) = 0.3446$ $Z_{1}=\frac{x_{1}-\mu}{\sigma}=\frac{65-63}{5}=0.4\nonumber$ $P\left(x \geq x_{1}\right)=P\left(Z \geq Z_{1}\right)=0.3446$ The probability that any student selected at random scores more than 65 is 0.3446. Here is how we found this answer. The normal table provides probabilities from zero to the value $Z_1$.
For this problem the question can be written as: $P(X \geq 65) = P(Z \geq Z_1)$, which is the area in the tail. To find this area the formula would be $0.5 – P(63 \leq X \leq 65)$. One half of the probability is above the mean value because this is a symmetrical distribution. The graph shows how to find the area in the tail by subtracting the portion from the mean, zero, to the $Z_1$ value. The final answer is: $P(X \geq 65) = P(Z \geq 0.4) = 0.3446$ $z=\frac{65-63}{5}=0.4$ The area from the mean of zero to $Z_1$ is $0.1554$ $P(x > 65) = P(z > 0.4) = 0.5 – 0.1554 = 0.3446$ Answer b $Z=\frac{x-\mu}{\sigma}=\frac{85-63}{5}=4.4$, which is larger than the maximum value on the Standard Normal Table. A score of 85 is 4.4 standard deviations from the mean of 63, which is beyond the range of the standard normal table. Therefore, the probability that one student scores less than 85 is approximately one (or 100%). Exercise $1$ The golf scores for a school team were normally distributed with a mean of 68 and a standard deviation of three. Find the probability that a randomly selected golfer scored less than 65. Example $\PageIndex{2A}$ A personal computer is used for office work at home, research, communication, personal finances, education, entertainment, social networking, and a myriad of other things. Suppose that the average number of hours a household personal computer is used for entertainment is two hours per day. Assume the times for entertainment are normally distributed and the standard deviation for the times is half an hour. a. Find the probability that a household personal computer is used for entertainment between 1.8 and 2.75 hours per day. Answer a. Let $X$ = the amount of time (in hours) a household personal computer is used for entertainment. $X \sim N(2, 0.5)$ where $\mu= 2$ and $\sigma = 0.5$. Find $P(1.8 < X < 2.75)$. The probability for which you are looking is the area between $X = 1.8$ and $X = 2.75$. $P(1.8 < X < 2.75) = 0.5886$ $P(1.8 \leq X \leq 2.75) = P(Z_1 \leq Z \leq Z_2)$ The probability that a household personal computer is used between 1.8 and 2.75 hours per day for entertainment is 0.5886. Example $\PageIndex{2B}$ b. Find the maximum number of hours per day that the bottom quartile of households uses a personal computer for entertainment. Answer b. To find the maximum number of hours per day that the bottom quartile of households uses a personal computer for entertainment, find the 25th percentile, $k$, where $P(x < k) = 0.25$. $f(Z)=0.5-0.25=0.25, \text { therefore } Z \approx -0.675 \text { (or } -0.67 \text { using the table)}$ $Z=\frac{x-\mu}{\sigma}=\frac{x-2}{0.5}=-0.675, \text { therefore } x=-0.675 \cdot 0.5+2=1.66$ The maximum number of hours per day that the bottom quartile of households uses a personal computer for entertainment is 1.66 hours. Exercise $2$ The golf scores for a school team were normally distributed with a mean of 68 and a standard deviation of three. Find the probability that a golfer scored between 66 and 70. Example $3$ In the United States the ages 13 to 55+ of smartphone users approximately follow a normal distribution with approximate mean and standard deviation of 36.9 years and 13.9 years, respectively. a. Determine the probability that a random smartphone user in the age range 13 to 55+ is between 23 and 64.7 years old. Answer a. 0.8186 b.
0.8413 Example $4$ A citrus farmer who grows mandarin oranges finds that the diameters of mandarin oranges harvested on his farm follow a normal distribution with a mean diameter of 5.85 cm and a standard deviation of 0.24 cm. a. Find the probability that a randomly selected mandarin orange from this farm has a diameter larger than 6.0 cm. Sketch the graph. Answer $Z_{1}=\frac{6-5.85}{.24}=.625\nonumber$ $P(x \geq 6) = P(z \geq 0.625) = 0.2670$ b. The middle 20% of mandarin oranges from this farm have diameters between ______ and ______. $f(Z)=\frac{0.20}{2}=0.10, \text { therefore } Z \approx \pm 0.25$ $Z=\frac{x-\mu}{\sigma}=\frac{x-5.85}{0.24}=\pm 0.25 \rightarrow \pm 0.25 \cdot 0.24+5.85=(5.79,5.91)$ 6.03: Estimating the Binomial with the Normal Distribution We found earlier that various probability density functions are the limiting distributions of others; thus, we can estimate one with another under certain circumstances. We will find here that the normal distribution can be used to estimate a binomial process. The Poisson was used to estimate the binomial previously, and the binomial was used to estimate the hypergeometric distribution. In the case of the relationship between the hypergeometric distribution and the binomial, we had to recognize that a binomial process assumes that the probability of a success remains constant from trial to trial: a head on the last flip cannot have an effect on the probability of a head on the next flip. In the hypergeometric distribution this is the essence of the question because the experiment assumes that any "draw" is without replacement. If one draws without replacement, then all subsequent "draws" are conditional probabilities. We found that if the hypergeometric experiment draws only a small percentage of the total objects, then we can ignore the impact on the probability from draw to draw. Imagine that there are 312 cards in a deck comprised of 6 normal decks. If the experiment called for drawing only 10 cards, less than 5% of the total, then we will accept the binomial estimate of the probability, even though this is actually a hypergeometric distribution because the cards are presumably drawn without replacement. The Poisson likewise was considered an appropriate estimate of the binomial under certain circumstances. Figure \(11\) shows a symmetrical normal distribution transposed on a graph of a binomial distribution where \(p = 0.2\) and \(n = 5\). The discrepancy between the estimated probability using a normal distribution and the probability of the original binomial distribution is apparent. The criterion for using a normal distribution to estimate a binomial thus addresses this problem by requiring that BOTH \(np\) AND \(n(1 − p)\) be greater than five. Again, this is a rule of thumb, but is effective and results in acceptable estimates of the binomial probability. \(1-[p(X=0)+p(X=1)+p(X=2)+\ldots+p(X=16)]=p(X>16)=p(Z>2)=0.0228\) 6.04: Chapter Formula Review Introduction $X \sim N(\mu, \sigma)$ $\mu =$ the mean; $\sigma =$ the standard deviation The Standard Normal Distribution $Z \sim N(0, 1)$ $z = a$ standardized value (z-score) mean = 0; standard deviation = 1 To find the $k^{\text{th}}$ percentile of $X$ when the z-score is known: $k = \mu + (z)\sigma$ z-score: $z=\frac{x-\mu}{\sigma}$ or $z=\frac{|x-\mu|}{\sigma}$ $Z =$ the random variable for z-scores $Z \sim N(0, 1)$ Estimating the Binomial with the Normal Distribution Normal Distribution: $X \sim N(\mu, \sigma)$ where $\mu$ is the mean and $\sigma$ is the standard deviation.
Standard Normal Distribution: $Z \sim N(0, 1)$.
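The calculations in Example 4 and the formulas in the review above can be checked numerically. The sketch below is not part of the original text; it assumes Python with scipy.stats is available (any statistical package or a z-table gives the same values) and uses the parameters from the mandarin orange example.

```python
from scipy.stats import norm

mu, sigma = 5.85, 0.24        # mean and standard deviation of the mandarin orange diameters (cm)

# Part a: P(X > 6.0) through the z-score z = (x - mu) / sigma
z = (6.0 - mu) / sigma        # 0.625
p_larger = 1 - norm.cdf(z)    # equivalently: norm.sf(6.0, loc=mu, scale=sigma)
print(f"z = {z:.3f}, P(X > 6.0) = {p_larger:.4f}")     # roughly 0.266

# Part b: the middle 20% of diameters leaves 40% below the lower cut and 60% below the upper cut
lower = norm.ppf(0.40, loc=mu, scale=sigma)   # k = mu + z * sigma with z = norm.ppf(0.40)
upper = norm.ppf(0.60, loc=mu, scale=sigma)
print(f"middle 20% of diameters: ({lower:.2f}, {upper:.2f})")   # roughly (5.79, 5.91)
```

The percentile call is just the formula $k = \mu + (z)\sigma$ evaluated for the z-score that leaves the stated area to its left.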
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/06%3A_The_Normal_Distribution/6.02%3A_Using_the_Normal_Distribution.txt
6.1 The Standard Normal Distribution Use the following information to answer the next two exercises: The patient recovery time from a particular surgical procedure is normally distributed with a mean of 5.3 days and a standard deviation of 2.1 days. 65. What is the median recovery time? 1. 2.7 2. 5.3 3. 7.4 4. 2.1 66. What is the z-score for a patient who takes ten days to recover? 1. 1.5 2. 0.2 3. 2.2 4. 7.3 67. The length of time to find a parking space at 9 A.M. follows a normal distribution with a mean of five minutes and a standard deviation of two minutes. If the mean is significantly greater than the standard deviation, which of the following statements is true? I. The data cannot follow the uniform distribution. II. The data cannot follow the exponential distribution. III. The data cannot follow the normal distribution. 1. I only 2. II only 3. III only 4. I, II, and III 68. The heights of the 430 National Basketball Association players were listed on team rosters at the start of the 2005–2006 season. The heights of basketball players have an approximate normal distribution with mean, $\mu = 79$ inches and a standard deviation, $\sigma = 3.89$ inches. For each of the following heights, calculate the z-score and interpret it using complete sentences. 1. 77 inches 2. 85 inches 3. If an NBA player reported his height had a z-score of 3.5, would you believe him? Explain your answer. 69. The systolic blood pressure (given in millimeters) of males has an approximately normal distribution with mean $\mu = 125$ and standard deviation $\sigma = 14$. Systolic blood pressure for males follows a normal distribution. 1. Calculate the z-scores for the male systolic blood pressures 100 and 150 millimeters. 2. If a male friend of yours said he thought his systolic blood pressure was 2.5 standard deviations below the mean, but that he believed his blood pressure was between 100 and 150 millimeters, what would you say to him? 70. Kyle’s doctor told him that the z-score for his systolic blood pressure is 1.75. Which of the following is the best interpretation of this standardized score? The systolic blood pressure (given in millimeters) of males has an approximately normal distribution with mean $\mu = 125$ and standard deviation $\sigma = 14$. If $X =$ a systolic blood pressure score then $X \sim N(125, 14)$. 1. Which answer(s) is/are correct? • Kyle’s systolic blood pressure is 175. • Kyle’s systolic blood pressure is 1.75 times the average blood pressure of men his age. • Kyle’s systolic blood pressure is 1.75 above the average systolic blood pressure of men his age. • Kyle’s systolic blood pressure is 1.75 standard deviations above the average systolic blood pressure for men. 2. Calculate Kyle’s blood pressure. 71. Height and weight are two measurements used to track a child’s development. The World Health Organization measures child development by comparing the weights of children who are the same height and the same gender. In 2009, weights for all 80 cm girls in the reference population had a mean $\mu = 10.2$ kg and standard deviation $\sigma = 0.8$ kg. Weights are normally distributed. $X \sim N(10.2, 0.8)$. Calculate the z-scores that correspond to the following weights and interpret them. 1. 11 kg 2. 7.9 kg 3. 12.2 kg 72. In 2005, 1,475,623 students heading to college took the SAT. The distribution of scores in the math section of the SAT follows a normal distribution with mean $\mu = 520$ and standard deviation $\sigma = 115$. 1. Calculate the z-score for an SAT score of 720. 
Interpret it using a complete sentence. 2. What math SAT score is 1.5 standard deviations above the mean? What can you say about this SAT score? 3. For 2012, the SAT math test had a mean of 514 and standard deviation 117. The ACT math test is an alternate to the SAT and is approximately normally distributed with mean 21 and standard deviation 5.3. If one person took the SAT math test and scored 700 and a second person took the ACT math test and scored 30, who did better with respect to the test they took? 6.3 Estimating the Binomial with the Normal Distribution Use the following information to answer the next two exercises: The patient recovery time from a particular surgical procedure is normally distributed with a mean of 5.3 days and a standard deviation of 2.1 days. 73. What is the probability of spending more than two days in recovery? 1. 0.0580 2. 0.8447 3. 0.0553 4. 0.9420 Use the following information to answer the next three exercises: The length of time it takes to find a parking space at 9 A.M. follows a normal distribution with a mean of five minutes and a standard deviation of two minutes. 74. Based upon the given information and numerically justified, would you be surprised if it took less than one minute to find a parking space? 1. Yes 2. No 3. Unable to determine 75. Find the probability that it takes at least eight minutes to find a parking space. 1. 0.0001 2. 0.9270 3. 0.1862 4. 0.0668 76. Seventy percent of the time, it takes more than how many minutes to find a parking space? 1. 1.24 2. 2.41 3. 3.95 4. 6.05 77. According to a study done by De Anza students, the height for Asian adult males is normally distributed with an average of 66 inches and a standard deviation of 2.5 inches. Suppose one Asian adult male is randomly chosen. Let $X =$ height of the individual. 1. $X \sim$ _____(_____,_____) 2. Find the probability that the person is between 65 and 69 inches. Include a sketch of the graph, and write a probability statement. 3. Would you expect to meet many Asian adult males over 72 inches? Explain why or why not, and justify your answer numerically. 4. The middle 40% of heights fall between what two values? Sketch the graph, and write the probability statement. 78. IQ is normally distributed with a mean of 100 and a standard deviation of 15. Suppose one individual is randomly chosen. Let X= IQ of an individual. 1. $X \sim$ _____(_____,_____) 2. Find the probability that the person has an IQ greater than 120. Include a sketch of the graph, and write a probability statement. 3. MENSA is an organization whose members have the top 2% of all IQs. Find the minimum IQ needed to qualify for the MENSA organization. Sketch the graph, and write the probability statement. 79. The percent of fat calories that a person in America consumes each day is normally distributed with a mean of about 36 and a standard deviation of 10. Suppose that one individual is randomly chosen. Let $X =$ percent of fat calories. 1. $X \sim$ _____(_____,_____) 2. Find the probability that the percent of fat calories a person consumes is more than 40. Graph the situation. Shade in the area to be determined. 3. Find the maximum number for the lower quarter of percent of fat calories. Sketch the graph and write the probability statement. 80. Suppose that the distance of fly balls hit to the outfield (in baseball) is normally distributed with a mean of 250 feet and a standard deviation of 50 feet. 1. If $X =$ distance in feet for a fly ball, then $X \sim$ _____(_____,_____) 2. 
If one fly ball is randomly chosen from this distribution, what is the probability that this ball traveled fewer than 220 feet? Sketch the graph. Scale the horizontal axis $X$. Shade the region corresponding to the probability. Find the probability. 81. In China, four-year-olds average three hours a day unsupervised. Most of the unsupervised children live in rural areas, considered safe. Suppose that the standard deviation is 1.5 hours and the amount of time spent alone is normally distributed. We randomly select one Chinese four-year-old living in a rural area. We are interested in the amount of time the child spends alone per day. 1. In words, define the random variable $X$. 2. $X \sim$ _____(_____,_____) 3. Find the probability that the child spends less than one hour per day unsupervised. Sketch the graph, and write the probability statement. 4. What percent of the children spend over ten hours per day unsupervised? 5. Seventy percent of the children spend at least how long per day unsupervised? 82. In the 1992 presidential election, Alaska’s 40 election districts averaged 1,956.8 votes per district for President Clinton. The standard deviation was 572.3. (There are only 40 election districts in Alaska.) The distribution of the votes per district for President Clinton was bell-shaped. Let $X =$ number of votes for President Clinton for an election district. 1. State the approximate distribution of $X$. 2. Is 1,956.8 a population mean or a sample mean? How do you know? 3. Find the probability that a randomly selected district had fewer than 1,600 votes for President Clinton. Sketch the graph and write the probability statement. 4. Find the probability that a randomly selected district had between 1,800 and 2,000 votes for President Clinton. 5. Find the third quartile for votes for President Clinton. 83. Suppose that the duration of a particular type of criminal trial is known to be normally distributed with a mean of 21 days and a standard deviation of seven days. 1. In words, define the random variable $X$. 2. $X \sim$ _____(_____,_____) 3. If one of the trials is randomly chosen, find the probability that it lasted at least 24 days. Sketch the graph and write the probability statement. 4. Sixty percent of all trials of this type are completed within how many days? 84. Terri Vogel, an amateur motorcycle racer, averages 129.71 seconds per 2.5 mile lap (in a seven-lap race) with a standard deviation of 2.28 seconds. The distribution of her race times is normally distributed. We are interested in one of her randomly selected laps. 1. In words, define the random variable $X.$ 2. $X \sim$ _____(_____,_____) 3. Find the percent of her laps that are completed in less than 130 seconds. 4. The fastest 3% of her laps are under _____. 5. The middle 80% of her laps are from _______ seconds to _______ seconds. 85. Thuy Dau, Ngoc Bui, Sam Su, and Lan Voung conducted a survey as to how long customers at Lucky claimed to wait in the checkout line until their turn. Let $X =$ time in line. Table $1$ displays the ordered real data (in minutes): 0.50 4.25 5 6 7.25 1.75 4.25 5.25 6 7.25 2 4.25 5.25 6.25 7.25 2.25 4.25 5.5 6.25 7.75 2.25 4.5 5.5 6.5 8 2.5 4.75 5.5 6.5 8.25 2.75 4.75 5.75 6.5 9.5 3.25 4.75 5.75 6.75 9.5 3.75 5 6 6.75 9.75 3.75 5 6 6.75 10.75 Table $1$ 1. Calculate the sample mean and the sample standard deviation. 2. Construct a histogram. 3. Draw a smooth curve through the midpoints of the tops of the bars. 4. In words, describe the shape of your histogram and smooth curve. 5. 
Let the sample mean approximate $\mu$ and the sample standard deviation approximate $\sigma$. The distribution of $X$ can then be approximated by $X \sim$ _____(_____,_____) 6. Use the distribution in part 5 to calculate the probability that a person will wait fewer than 6.1 minutes. 7. Determine the cumulative relative frequency for waiting less than 6.1 minutes. 8. Why aren’t the answers to part 6 and part 7 exactly the same? 9. Why are the answers to part 6 and part 7 as close as they are? 10. If only ten customers had been surveyed rather than 50, do you think the answers to part 6 and part 7 would have been closer together or farther apart? Explain your conclusion. 86. Suppose that Ricardo and Anita attend different colleges. Ricardo’s GPA is the same as the average GPA at his school. Anita’s GPA is 0.70 standard deviations above her school average. In complete sentences, explain why each of the following statements may be false. 1. Ricardo’s actual GPA is lower than Anita’s actual GPA. 2. Ricardo is not passing because his z-score is zero. 3. Anita is in the $70^{\text{th}}$ percentile of students at her college. 87. An expert witness for a paternity lawsuit testifies that the length of a pregnancy is normally distributed with a mean of 280 days and a standard deviation of 13 days. An alleged father was out of the country from 240 to 306 days before the birth of the child, so the pregnancy would have been less than 240 days or more than 306 days long if he was the father. The birth was uncomplicated, and the child needed no medical intervention. What is the probability that he was NOT the father? What is the probability that he could be the father? Calculate the z-scores first, and then use those to calculate the probability. 88. A NUMMI assembly line, which has been operating since 1984, has built an average of 6,000 cars and trucks a week. Generally, 10% of the cars were defective coming off the assembly line. Suppose we draw a random sample of $n = 100$ cars. Let $X$ represent the number of defective cars in the sample. What can we say about $X$ in regard to the 68-95-99.7 empirical rule (one standard deviation, two standard deviations and three standard deviations from the mean are being referred to)? Assume a normal distribution for the defective cars in the sample. 89. We flip a coin 100 times ($n = 100$) and note that it only comes up heads 20% ($p = 0.20$) of the time. The mean and standard deviation for the number of times the coin lands on heads is $\mu = 20$ and $\sigma = 4$ (verify the mean and standard deviation). Solve the following: 1. There is about a 68% chance that the number of heads will be somewhere between ___ and ___. 2. There is about a ____ chance that the number of heads will be somewhere between 12 and 28. 3. There is about a ____ chance that the number of heads will be somewhere between eight and 32. 90. A $1 scratch off lotto ticket will be a winner one out of five times. Out of a shipment of $n = 190$ lotto tickets, find the probability for the lotto tickets that there are 1. somewhere between 34 and 54 prizes. 2. somewhere between 54 and 64 prizes. 3. more than 64 prizes. 91. Facebook provides a variety of statistics on its Web site that detail the growth and popularity of the site. On average, 28 percent of 18 to 34 year olds check their Facebook profiles before getting out of bed in the morning. Suppose this percentage follows a normal distribution with a standard deviation of five percent. 92. A hospital has 49 births in a year. 
A birth is considered equally likely to be a boy as it is to be a girl. 1. What is the mean? 2. What is the standard deviation? 3. Can this binomial distribution be approximated with a normal distribution? 4. If so, use the normal distribution to find the probability that at least 23 of the 49 births were boys. 93. Historically, a final exam in a course is passed with a probability of 0.9. The exam is given to a group of 70 students. 1. What is the mean of the binomial distribution? 2. What is the standard deviation? 3. Can this binomial distribution be approximated with a normal distribution? 4. If so, use the normal distribution to find the probability that at least 60 of the students pass the exam. 94. A tree in an orchard has 200 oranges. Of the oranges, 40 are not ripe. Use the normal distribution to approximate the binomial distribution, and determine the probability a box containing 35 oranges has at most two oranges that are not ripe. 95. In a large city one in ten fire hydrants is in need of repair. If a crew examines 100 fire hydrants in a week, what is the probability they will find nine or fewer fire hydrants that need repair? Use the normal distribution to approximate the binomial distribution. 96. On an assembly line it is determined 85% of the assembled products have no defects. If one day 50 items are assembled, what is the probability at least 4 and no more than 8 are defective? Use the normal distribution to approximate the binomial distribution.
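Exercises 88 through 96 all ask for a normal approximation to a binomial count. The sketch below shows the general pattern of that calculation; it is not a solution to any specific exercise, the values n = 80 and p = 0.15 are hypothetical, and it assumes Python with scipy.stats is available.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical binomial experiment: n trials, success probability p (values are illustrative only)
n, p = 80, 0.15
mu = n * p                      # mean of the binomial count
sigma = sqrt(n * p * (1 - p))   # standard deviation of the binomial count

# Rule-of-thumb check before using the normal approximation
if n * p > 5 and n * (1 - p) > 5:
    # estimate P(X <= 9) with the approximating normal N(mu, sigma)
    prob = norm.cdf(9, loc=mu, scale=sigma)
    print(f"mu = {mu:.1f}, sigma = {sigma:.2f}, P(X <= 9) is approximately {prob:.4f}")
else:
    print("np or n(1 - p) is too small; the normal approximation is not recommended")
```

The same pattern handles "at least" and "between" questions by combining one or two calls to the normal cumulative distribution function.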
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/06%3A_The_Normal_Distribution/6.05%3A_Chapter_Homework.txt
Normal Distribution a continuous random variable $(RV)$ with pdf $f(x) =$ $\frac{1}{\sigma \sqrt{2 \pi}} \mathrm{e}^{\frac{-(x-\mu)^{2}}{2 \sigma^{2}}}\nonumber$ , where $\mu$ is the mean of the distribution and $\sigma$ is the standard deviation; notation: $X \sim N(\mu, \sigma)$. If $\mu = 0$ and $\sigma = 1$, the $RV$, $Z$, is called the standard normal distribution. Standard Normal Distribution a continuous random variable $(RV) X \sim N(0, 1)$; when $X$ follows the standard normal distribution, it is often noted as $Z \sim N(0, 1)$. z-score the linear transformation of the form $z=\frac{x-\mu}{\sigma}$ or written as $z=\frac{|x-\mu|}{\sigma}$; if this transformation is applied to any normal distribution $X \sim N(\mu, \sigma)$ the result is the standard normal distribution $Z \sim N(0,1)$. If this transformation is applied to any specific value $x$ of the $RV$ with mean $\mu$ and standard deviation $\sigma$, the result is called the z-score of $x$. The z-score allows us to compare data that are normally distributed but scaled differently. A z-score is the number of standard deviations a particular $x$ is away from its mean value. 6.07: Chapter Practice 6.1 The Standard Normal Distribution 1. A bottle of water contains 12.05 fluid ounces with a standard deviation of 0.01 ounces. Define the random variable $X$ in words. $X=$ ____________. 2. A normal distribution has a mean of 61 and a standard deviation of 15. What is the median? 3. $X \sim N(1, 2)$ $\sigma =$ _______ 4. A company manufactures rubber balls. The mean diameter of a ball is 12 cm with a standard deviation of 0.2 cm. Define the random variable $X$ in words. $X =$ ______________. 5. $X \sim N(–4, 1)$ What is the median? 6. $X \sim N(3, 5)$ $\sigma =$ _______ 7. $X \sim N(–2, 1)$ $\mu =$ _______ 8. What does a z-score measure? 9. What does standardizing a normal distribution do to the mean? 10. Is $X \sim N(0, 1)$ a standardized normal distribution? Why or why not? 11. What is the z-score of $x = 12$, if it is two standard deviations to the right of the mean? 12. What is the z-score of $x = 9$, if it is 1.5 standard deviations to the left of the mean? 13. What is the z-score of $x = –2$, if it is 2.78 standard deviations to the right of the mean? 14. What is the z-score of $x = 7$, if it is 0.133 standard deviations to the left of the mean? 15. Suppose $X \sim N(2, 6)$. What value of $x$ has a z-score of three? 16. Suppose $X \sim N(8, 1)$. What value of $x$ has a z-score of –2.25? 17. Suppose $X \sim N(9, 5)$. What value of $x$ has a z-score of –0.5? 18. Suppose $X \sim N(2, 3)$. What value of $x$ has a z-score of –0.67? 19. Suppose $X \sim N(4, 2)$. What value of $x$ is 1.5 standard deviations to the left of the mean? 20. Suppose $X \sim N(4, 2)$. What value of $x$ is two standard deviations to the right of the mean? 21. Suppose $X \sim N(8, 9)$. What value of $x$ is 0.67 standard deviations to the left of the mean? 22. Suppose $X \sim N(–1, 2)$. What is the z-score of $x = 2$? 23. Suppose $X \sim N(12, 6)$. What is the z-score of $x = 2$? 24. Suppose $X \sim N(9, 3)$. What is the z-score of $x = 9$? 25. Suppose a normal distribution has a mean of six and a standard deviation of 1.5. What is the z-score of $x = 5.5$? 26. In a normal distribution, $x = 5$ and $z = –1.25$. This tells you that $x = 5$ is ____ standard deviations to the ____ (right or left) of the mean. 27. In a normal distribution, $x = 3$ and $z = 0.67$. This tells you that $x = 3$ is ____ standard deviations to the ____ (right or left) of the mean. 28. 
In a normal distribution, $x = –2$ and $z = 6$. This tells you that $x = –2$ is ____ standard deviations to the ____ (right or left) of the mean. 29. In a normal distribution, $x = –5$ and $z = –3.14$. This tells you that $x = –5$ is ____ standard deviations to the ____ (right or left) of the mean. 30. In a normal distribution, $x = 6$ and $z = –1.7$. This tells you that $x = 6$ is ____ standard deviations to the ____ (right or left) of the mean. 31. About what percent of $x$ values from a normal distribution lie within one standard deviation (left and right) of the mean of that distribution? 32. About what percent of the $x$ values from a normal distribution lie within two standard deviations (left and right) of the mean of that distribution? 33. About what percent of $x$ values lie between the second and third standard deviations (both sides)? 34. Suppose $X \sim N(15, 3)$. Between what $x$ values does 68.27% of the data lie? The range of $x$ values is centered at the mean of the distribution (i.e., 15). 35. Suppose $X \sim N(–3, 1)$. Between what $x$ values does 95.45% of the data lie? The range of $x$ values is centered at the mean of the distribution (i.e., –3). 36. Suppose $X \sim N(–3, 1)$. Between what $x$ values does 34.14% of the data lie? 37. About what percent of $x$ values lie between the mean and three standard deviations? 38. About what percent of $x$ values lie between the mean and one standard deviation? 39. About what percent of $x$ values lie between the first and second standard deviations from the mean (both sides)? 40. About what percent of $x$ values lie between the first and third standard deviations (both sides)? Use the following information to answer the next two exercises: The life of Sunshine CD players is normally distributed with a mean of 4.1 years and a standard deviation of 1.3 years. A CD player is guaranteed for three years. We are interested in the length of time a CD player lasts. 41. Define the random variable $X$ in words. $X =$ _______________. 42. $X \sim$ _____(_____,_____) 6.3 Estimating the Binomial with the Normal Distribution 43. How would you represent the area to the left of one in a probability statement? 44. What is the area to the right of one? 45. Is $P(x < 1)$ equal to $P(x \leq 1)$? Why? 46. How would you represent the area to the left of three in a probability statement? 47. What is the area to the right of three? 48. If the area to the left of $x$ in a normal distribution is $0.123$, what is the area to the right of $x$? 49. If the area to the right of $x$ in a normal distribution is $0.543$, what is the area to the left of $x$? Use the following information to answer the next four exercises: $X \sim N(54, 8)$ 50. Find the probability that $x > 56$. 51. Find the probability that $x < 30$. 52. $X \sim N(6, 2)$ Find the probability that $x$ is between three and nine. 53. $X \sim N(–3, 4)$ Find the probability that $x$ is between one and four. 54. $X \sim N(4, 5)$ Find the maximum of $x$ in the bottom quartile. 55. Use the following information to answer the next three exercises: The life of Sunshine CD players is normally distributed with a mean of 4.1 years and a standard deviation of 1.3 years. A CD player is guaranteed for three years. We are interested in the length of time a CD player lasts. Find the probability that a CD player will break down during the guarantee period. 1. Sketch the situation. Label and scale the axes. Shade the region corresponding to the probability. 2. 
$P(0 < x <$ ____________) = ___________ (Use zero for the minimum value of $x$.) 56. Find the probability that a CD player will last between 2.8 and six years. 1. Sketch the situation. Label and scale the axes. Shade the region corresponding to the probability. 2. $P$(__________ $< x <$ __________) = __________ 57. An experiment with a probability of success given as 0.40 is repeated 100 times. Use the normal distribution to approximate the binomial distribution, and find the probability the experiment will have at least 45 successes. 58. An experiment with a probability of success given as 0.30 is repeated 90 times. Use the normal distribution to approximate the binomial distribution, and find the probability the experiment will have at least 22 successes. 59. An experiment with a probability of success given as 0.40 is repeated 100 times. Use the normal distribution to approximate the binomial distribution, and find the probability the experiment will have from 35 to 45 successes. 60. An experiment with a probability of success given as 0.30 is repeated 90 times. Use the normal distribution to approximate the binomial distribution, and find the probability the experiment will have from 26 to 30 successes. 61. An experiment with a probability of success given as 0.40 is repeated 100 times. Use the normal distribution to approximate the binomial distribution, and find the probability the experiment will have at most 34 successes. 62. An experiment with a probability of success given as 0.30 is repeated 90 times. Use the normal distribution to approximate the binomial distribution, and find the probability the experiment will have at most 34 successes. 63. A multiple choice test has a probability of 0.25 that any question will be guessed correctly. There are 100 questions, and a student guesses at all of them. Use the normal distribution to approximate the binomial distribution, and determine the probability at least 30, but no more than 32, questions will be guessed correctly. 64. A multiple choice test has a probability of 0.25 that any question will be guessed correctly. There are 100 questions, and a student guesses at all of them. Use the normal distribution to approximate the binomial distribution, and determine the probability at least 24, but no more than 28, questions will be guessed correctly. 6.08: Chapter References The Standard Normal Distribution • “Blood Pressure of Males and Females.” StatCrunch, 2013. Available online at http://www.statcrunch.com/5.0/viewre...reportid=11960 (accessed May 14, 2013). • “The Use of Epidemiological Tools in Conflict-affected populations: Open-access educational resources for policy-makers: Calculation of z-scores.” London School of Hygiene and Tropical Medicine, 2009. Available online at http://conflict.lshtm.ac.uk/page_125.htm (accessed May 14, 2013). • “2012 College-Bound Seniors Total Group Profile Report.” CollegeBoard, 2012. Available online at http://media.collegeboard.com/digita...Group-2012.pdf (accessed May 14, 2013). • “Digest of Education Statistics: ACT score average and standard deviations by sex and race/ethnicity and percentage of ACT test takers, by selected composite score ranges and planned fields of study: Selected years, 1995 through 2009.” National Center for Education Statistics. Available online at http://nces.ed.gov/programs/digest/d...s/dt09_147.asp (accessed May 14, 2013). • Data from the San Jose Mercury News. • Data from The World Almanac and Book of Facts. • “List of stadiums by capacity.” Wikipedia. 
Available online at https://en.Wikipedia.org/wiki/List_o...ms_by_capacity (accessed May 14, 2013). • Data from the National Basketball Association. Available online at www.nba.com (accessed May 14, 2013). 6.09: Chapter Review 6.1 The Standard Normal Distribution A z-score is a standardized value. Its distribution is the standard normal, $Z \sim N(0, 1)$. The mean of the z-scores is zero and the standard deviation is one. If $z$ is the z-score for a value $x$ from the normal distribution $N(\mu, \sigma)$ then $z$ tells you how many standard deviations $x$ is above (greater than) or below (less than) $\mu$. 6.3 Estimating the Binomial with the Normal Distribution The normal distribution, which is continuous, is the most important of all the probability distributions. Its graph is bell-shaped. This bell-shaped curve is used in almost all disciplines. Since it is a continuous distribution, the total area under the curve is one. The parameters of the normal are the mean $\mu$ and the standard deviation $\sigma$. A special normal distribution, called the standard normal distribution is the distribution of z-scores. Its mean is zero, and its standard deviation is one. 6.10: Chapter Solution (Practice Homework) 1. ounces of water in a bottle 3. 2 5. –4 7. –2 9. The mean becomes zero. 11. \(z = 2\) 13. \(z = 2.78\) 15. \(x = 20\) 17. \(x = 6.5\) 19. \(x = 1\) 21. \(x = 1.97\) 23. \(z = –1.67\) 25. \(z \approx –0.33\) 27. 0.67, right 29. 3.14, left 31. about 68% 33. about 4% 35. between –5 and –1 37. about 50% 39. about 27% 41. The lifetime of a Sunshine CD player measured in years. 43. \(P(x < 1)\) 45. Yes, because they are the same in a continuous distribution: \(P(x = 1) = 0\) 47. \(1 – P(x < 3)\) or \(P(x > 3)\) 49. \(1 – 0.543 = 0.457\) 51. 0.0013 53. 0.1186 55. 1. 57. 0.154 0.874 59. 0.693 60. 0.346 61. 0.110 62. 0.946 63. 0.071 64. 0.347 66. c 68. 1. 70. 1. 72. Let \(X =\) an SAT math score and \(Y =\) an ACT math score. 1. 75. d 1. 79. 1. 81. 1. 83. 1. 85. 1. 88.
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/06%3A_The_Normal_Distribution/6.06%3A_Chapter_Key_Items.txt
Why are we so concerned with means? Two reasons are: they give us a middle ground for comparison, and they are easy to calculate. In this chapter, you will study means and the Central Limit Theorem. The Central Limit Theorem is one of the most powerful and useful ideas in all of statistics. The Central Limit Theorem is a theorem, which means that it is NOT a theory or just somebody's idea of the way things work. As a theorem it ranks with the Pythagorean Theorem, or the theorem that tells us that the sum of the angles of a triangle must add to 180 degrees. These are facts of the ways of the world rigorously demonstrated with mathematical precision and logic. As we will see, this powerful theorem will determine just what we can, and cannot, say in inferential statistics. The Central Limit Theorem is concerned with drawing finite samples of size $n$ from a population with a known mean, $\mu$, and a known standard deviation, $\sigma$. The conclusion is that if we collect samples of size $n$ with a "large enough $n$," calculate each sample's mean, and create a histogram (distribution) of those means, then the resulting distribution will tend to have an approximate normal distribution. The astounding result is that it does not matter what the distribution of the original population is, or whether you even need to know it. The important fact is that the distribution of sample means tends to follow the normal distribution. The size of the sample, $n$, that is required in order to be "large enough" depends on the original population from which the samples are drawn (the sample size should be at least 30 or the data should come from a normal distribution). If the original population is far from normal, then more observations are needed for the distribution of the sample means to approach the normal distribution. Sampling is done randomly and with replacement in the theoretical model. 7.01: The Central Limit Theorem for Sample Means The sampling distribution is a theoretical distribution. It is created by taking many, many samples of size $n$ from a population. Each sample mean is then treated like a single observation of this new distribution, the sampling distribution. The genius of thinking this way is that it recognizes that when we sample we are creating an observation and that observation must come from some particular distribution. The Central Limit Theorem answers the question: from what distribution did a sample mean come? If this is discovered, then we can treat a sample mean just like any other observation and calculate probabilities about what values it might take on. We have effectively moved from the world of statistics where we know only what we have from the sample, to the world of probability where we know the distribution from which the sample mean came and the parameters of that distribution. The reasons that one samples a population are obvious. The time and expense of checking every invoice to determine its validity or every shipment to see if it contains all the items may well exceed the cost of errors in billing or shipping. For some products, sampling would require destroying them, called destructive sampling. One such example is measuring the ability of a metal to withstand saltwater corrosion for parts on ocean-going vessels. Sampling thus raises an important question: just which sample was drawn? Even if the sample were randomly drawn, there are theoretically an almost infinite number of samples. With just 100 items, there are more than 75 million unique samples of size five that can be drawn. 
If six are in the sample, the number of possible samples increases to just more than one billion. Of the 75 million possible samples, then, which one did you get? If there is variation in the items to be sampled, there will be variation in the samples. One could draw an "unlucky" sample and make very wrong conclusions concerning the population. This recognition that any sample we draw is really only one from a distribution of samples provides us with what is probably the single most important theorem in statistics: the Central Limit Theorem. Without the Central Limit Theorem it would be impossible to proceed to inferential statistics from simple probability theory. In its most basic form, the Central Limit Theorem states that regardless of the underlying probability density function of the population data, the theoretical distribution of the means of samples from the population will be normally distributed. In essence, this says that the mean of a sample should be treated like an observation drawn from a normal distribution. The Central Limit Theorem only holds if the sample size is "large enough" which has been shown to be only 30 observations or more. Figure 7.2 graphically displays this very important proposition. Notice that the horizontal axis in the top panel is labeled $X$. These are the individual observations of the population. This is the unknown distribution of the population values. The graph is purposefully drawn all squiggly to show that it does not matter just how oddball it really is. Remember, we will never know what this distribution looks like, or its mean or standard deviation for that matter. The horizontal axis in the bottom panel is labeled $\overline{X}$'s. This is the theoretical distribution called the sampling distribution of the means. Each observation on this distribution is a sample mean. All these sample means were calculated from individual samples with the same sample size. The theoretical sampling distribution contains all of the sample mean values from all the possible samples that could have been taken from the population. Of course, no one would ever actually take all of these samples, but if they did this is how they would look. And the Central Limit Theorem says that they will be normally distributed. The Central Limit Theorem goes even further and tells us the mean and standard deviation of this theoretical distribution. Table 7.1 Parameter Population distribution Sample Sampling distribution of $\overline{X}$'s Mean $\mu$ $\overline{X}$ $\mu_{\overline{x}} \text { and } \mathrm{E}\left(\mu_{\overline{x}}\right)=\mu$ Standard deviation $\sigma$ $s$ $\sigma_{\overline{x}}=\frac{\sigma}{\sqrt{n}}$ The practical significance of the Central Limit Theorem is that now we can compute probabilities for drawing a sample mean, $\overline{X}$, in just the same way as we did for drawing specific observations, $X$'s, when we knew the population mean and standard deviation and that the population data were normally distributed. The standardizing formula has to be amended to recognize that the mean and standard deviation of the sampling distribution (the standard deviation of the sampling distribution is sometimes called the standard error of the mean) are different from those of the population distribution, but otherwise nothing has changed. The new standardizing formula is $Z=\frac{\overline{X}-\mu_{\overline{X}}}{\sigma_{\overline{X}}}=\frac{\overline{X}-\mu}{\frac{\sigma}{\sqrt{n}}}\nonumber$ Notice that $\mu_{\overline{X}}$ in the first formula has been changed to simply $\mu$ in the second version. 
The reason is that mathematically it can be shown that the expected value of $\mu_{\overline{X}}$ is equal to $\mu$. This was stated in Table 7.1 above. Mathematically, the $E(x)$ symbol is read as the “expected value of $x$”. This formula will be used in the next unit to provide estimates of the unknown population parameter $\mu$.
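The standardizing formula above can be applied directly to compute a probability for a sample mean. The following is a minimal sketch with hypothetical numbers (a population mean of 75, a standard deviation of 10, and a sample of 36, none of which come from the text), assuming Python with scipy.stats is available.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical population parameters and sample size (for illustration only)
mu, sigma, n = 75.0, 10.0, 36
se = sigma / sqrt(n)        # standard deviation of the sampling distribution, sigma / sqrt(n)

# Probability that a sample mean exceeds 78, using Z = (xbar - mu) / (sigma / sqrt(n))
xbar = 78.0
z = (xbar - mu) / se        # 1.8
prob = 1 - norm.cdf(z)
print(f"standard error = {se:.3f}, z = {z:.2f}, P(X-bar > 78) = {prob:.4f}")   # about 0.036
```

The only change from the single-observation case is that the standard error $\sigma / \sqrt{n}$ replaces $\sigma$ in the denominator of the z-score.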
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/07%3A_The_Central_Limit_Theorem/7.00%3A_Introduction_to_the_Central_Limit_Theorem.txt
Examples of the Central Limit Theorem Law of Large Numbers The law of large numbers says that if you take samples of larger and larger size from any population, then the mean of the sampling distribution, $\mu_{\overline x}$, tends to get closer and closer to the true population mean, $\mu$. From the Central Limit Theorem, we know that as $n$ gets larger and larger, the sample means follow a normal distribution. The larger $n$ gets, the smaller the standard deviation of the sampling distribution gets. (Remember that the standard deviation for the sampling distribution of $\overline X$ is $\frac{\sigma}{\sqrt{n}}$.) This means that the sample mean $\overline x$ must be closer to the population mean $\mu$ as $n$ increases. We can say that $\mu$ is the value that the sample means approach as $n$ gets larger. The Central Limit Theorem illustrates the law of large numbers. This concept is so important and plays such a critical role in what follows it deserves to be developed further. Indeed, there are two critical issues that flow from the Central Limit Theorem and the application of the Law of Large Numbers to it. These are 1. The probability density function of the sampling distribution of means is normally distributed regardless of the underlying distribution of the population observations and 2. the standard deviation of the sampling distribution decreases as the size of the samples that were used to calculate the means for the sampling distribution increases. Taking these in order. It would seem counterintuitive that the population may have any distribution and the distribution of means coming from it would be normally distributed. With the use of computers, experiments can be simulated that show the process by which the sampling distribution changes as the sample size is increased. These simulations show visually the results of the mathematical proof of the Central Limit Theorem. Here are three examples of very different population distributions and the evolution of the sampling distribution to a normal distribution as the sample size increases. The top panel in these cases represents the histogram for the original data. The lower three panels show the histograms for 1,000 randomly drawn samples for different sample sizes: $n=10$, $n = 25$ and $n=50$. As the sample size increases, and the number of samples taken remains constant, the distribution of the 1,000 sample means becomes closer to the smooth line that represents the normal distribution. Figure $3$ is for a normal distribution of individual observations and we would expect the sampling distribution to converge on the normal quickly. The results show this and show that even at a very small sample size the distribution is close to the normal distribution. Figure $4$ is a uniform distribution which, a bit amazingly, quickly approached the normal distribution even with only a sample of 10. Figure $5$ is a skewed distribution. This last one could be an exponential, geometric, or binomial with a small probability of success creating the skew in the distribution. For skewed distributions our intuition would say that this will take larger sample sizes to move to a normal distribution and indeed that is what we observe from the simulation. Nevertheless, at a sample size of 50, not considered a very large sample, the distribution of sample means has very decidedly gained the shape of the normal distribution. The Central Limit Theorem provides more than the proof that the sampling distribution of means is normally distributed. 
It also provides us with the mean and standard deviation of this distribution. Further, as discussed above, the expected value of the mean, $\mu_{\overline{x}}$, is equal to the mean of the population of the original data which is what we are interested in estimating from the sample we took. We have already inserted this conclusion of the Central Limit Theorem into the formula we use for standardizing from the sampling distribution to the standard normal distribution. And finally, the Central Limit Theorem has also provided the standard deviation of the sampling distribution, $\sigma_{\overline{x}}=\frac{\sigma}{\sqrt{n}}$, and this is critical to have to calculate probabilities of values of the new random variable, $\overline x$. Figure $6$ shows a sampling distribution. The mean has been marked on the horizontal axis of the $\overline X$'s and the standard deviation has been written to the right above the distribution. Notice that the standard deviation of the sampling distribution is the original standard deviation of the population, divided by the square root of the sample size. We have already seen that as the sample size increases the sampling distribution becomes closer and closer to the normal distribution. As this happens, the standard deviation of the sampling distribution changes in another way; the standard deviation decreases as $n$ increases. At very, very large $n$, the standard deviation of the sampling distribution becomes very small and at infinity it collapses on top of the population mean. This is what it means that the expected value of $\mu_{\overline{x}}$ is the population mean, $\mu$. At non-extreme values of $n$, this relationship between the standard deviation of the sampling distribution and the sample size plays a very important part in our ability to estimate the parameters we are interested in. Figure $7$ shows three sampling distributions. The only change that was made is the sample size that was used to get the sample means for each distribution. As the sample size increases, with $n$ going from 10 to 30 to 50, the standard deviations of the respective sampling distributions decrease because the sample size is in the denominator of the standard deviations of the sampling distributions. The implications of this are very important. Figure $8$ shows the effect of the sample size on the confidence we will have in our estimates. These are two sampling distributions from the same population. One sampling distribution was created with samples of size 10 and the other with samples of size 50. All other things constant, the sampling distribution with sample size 50 has a smaller standard deviation, which causes the graph to be higher and narrower. The important effect of this is that for the same probability of one standard deviation from the mean, this distribution covers much less of a range of possible values than the other distribution. One standard deviation is marked on the $\overline X$ axis for each distribution. This is shown by the two arrows that are plus or minus one standard deviation for each distribution. For the same probability that the true mean lies within one standard deviation of the sample mean, the sampling distribution with the smaller sample size allows a much greater range of possible values. A simple question is, would you rather have a sample mean from the narrow, tight distribution, or the flat, wide distribution as the estimate of the population mean? Your answer tells us why people intuitively will always choose data from a large sample rather than a small sample. 
The sample mean they are getting is coming from a more compact distribution. This concept will be the foundation for what will be called level of confidence in the next unit.
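Simulations like the ones described in this section are easy to reproduce. The sketch below is not the code used to create the figures; it is an illustrative simulation, assuming Python with NumPy is available, that draws repeated samples from a strongly skewed (exponential) population and shows both results discussed above: the sample means cluster around the population mean, and their standard deviation shrinks roughly as $\sigma/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
population_sd = 1.0   # an exponential population with scale 1.0 has mean 1 and sd 1 (strongly skewed)
reps = 10_000         # number of samples drawn for each sample size

for n in (10, 30, 50):
    # each row is one sample of size n; the row means form the simulated sampling distribution
    samples = rng.exponential(scale=1.0, size=(reps, n))
    means = samples.mean(axis=1)
    print(f"n = {n:2d}: mean of sample means = {means.mean():.3f}, "
          f"sd of sample means = {means.std(ddof=1):.3f}, "
          f"sigma/sqrt(n) = {population_sd / np.sqrt(n):.3f}")
```

Plotting a histogram of the simulated means for each $n$ would reproduce the narrowing, increasingly bell-shaped panels described in the text.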
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/07%3A_The_Central_Limit_Theorem/7.02%3A_Using_the_Central_Limit_Theorem.txt
The Central Limit Theorem tells us that the point estimate for the sample mean, $\overline x$, comes from a normal distribution of $\overline x$'s. This theoretical distribution is called the sampling distribution of $\overline x$'s. We now investigate the sampling distribution for another important parameter we wish to estimate: $p$ from the binomial probability density function. If the random variable is discrete, such as for categorical data, then the parameter we wish to estimate is the population proportion. This is, of course, the probability of drawing a success in any one random draw. Unlike the case just discussed for a continuous random variable where we did not know the population distribution of $X$'s, here we actually know the underlying probability density function for these data; it is the binomial. The random variable is $X =$ the number of successes and the parameter we wish to know is $p$, the probability of drawing a success which is of course the proportion of successes in the population. The question at issue is: from what distribution was the sample proportion, $p^{\prime}=\frac{x}{n}$, drawn? The sample size is $n$ and $X$ is the number of successes found in that sample. This is a parallel question that was just answered by the Central Limit Theorem: from what distribution was the sample mean, $\overline x$, drawn? We saw that once we knew that the distribution was the Normal distribution then we were able to create confidence intervals for the population parameter, $\mu$. We will also use this same information to test hypotheses about the population mean later. We wish now to be able to develop confidence intervals for the population parameter "$p$" from the binomial probability density function. In order to find the distribution from which sample proportions come we need to develop the sampling distribution of sample proportions just as we did for sample means. So again imagine that we randomly sample, say, 50 people and ask them if they support the new school bond issue. From this we find a sample proportion, $p^{\prime}$, and graph it on the axis of $p$'s. We do this again and again until we have the theoretical distribution of $p$'s. Some sample proportions will show high favorability toward the bond issue and others will show low favorability because random sampling will reflect the variation of views within the population. What we have done can be seen in Figure $9$. The top panel is the population distribution of probabilities for each possible value of the random variable $X$. While we do not know what the specific distribution looks like because we do not know $p$, the population parameter, we do know that it must look something like this. In reality, we do not know either the mean or the standard deviation of this population distribution, the same difficulty we faced when analyzing the $X$'s previously. Figure $9$ places the mean on the distribution of population probabilities as $\mu=np$ but of course we do not actually know the population mean because we do not know the population probability of success, $p$. Below the distribution of the population values is the sampling distribution of $p$'s. Again the Central Limit Theorem tells us that this distribution is normally distributed just like the case of the sampling distribution for $\overline x$'s. This sampling distribution also has a mean, the mean of the $p$'s, and a standard deviation, $\sigma_{p^{\prime}}$. 
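The repeated-survey thought experiment just described can be simulated directly. The sketch below assumes Python with NumPy and a hypothetical true support level of p = 0.60 (the text does not specify one); the simulated mean and standard deviation of the $p^{\prime}$ values match the results derived in the next paragraphs.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
p, n, reps = 0.60, 50, 10_000   # hypothetical population proportion, sample size, number of surveys

# number of "yes" responses in each simulated survey, then the sample proportion p' = x / n
successes = rng.binomial(n, p, size=reps)
p_prime = successes / n

print(f"mean of the p' values: {p_prime.mean():.4f}  (compare with p = {p})")
print(f"sd of the p' values:   {p_prime.std(ddof=1):.4f}  "
      f"(compare with sqrt(p(1-p)/n) = {np.sqrt(p * (1 - p) / n):.4f})")
```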
Importantly, in the case of the analysis of the distribution of sample means, the Central Limit Theorem told us the expected value of the mean of the sample means in the sampling distribution, and the standard deviation of the sampling distribution. Again the Central Limit Theorem provides this information for the sampling distribution for proportions. The answers are: 1. The expected value of the mean of sampling distribution of sample proportions, $\mu_{p^{\prime}}$, is the population proportion, $p$. 2. The standard deviation of the sampling distribution of sample proportions, $\sigma_{p^{\prime}}$, is the population standard deviation divided by the square root of the sample size, $n$. Both these conclusions are the same as we found for the sampling distribution for sample means. However in this case, because the mean and standard deviation of the binomial distribution both rely upon $p$, the formula for the standard deviation of the sampling distribution requires algebraic manipulation to be useful. We will take that up in the next chapter. The proof of these important conclusions from the Central Limit Theorem is provided below. $E\left(p^{\prime}\right)=E\left(\frac{x}{n}\right)=\left(\frac{1}{n}\right) E(x)=\left(\frac{1}{n}\right) n p=p\nonumber$ (The expected value of $X$, $E(x)$, is simply the mean of the binomial distribution which we know to be $np$.) $\sigma_{p^{\prime}}^{2}=\operatorname{Var}\left(p^{\prime}\right)=\operatorname{Var}\left(\frac{x}{n}\right)=\frac{1}{n^{2}}(\operatorname{Var}(x))=\frac{1}{n^{2}}(n p(1-p))=\frac{p(1-p)}{n}\nonumber$ The standard deviation of the sampling distribution for proportions is thus: $\sigma_{p^{\prime}}=\sqrt{\frac{p(1-p)}{n}}\nonumber$ Parameter Population distribution Sample Sampling distribution of $p$'s Mean $\mu = np$ $p^{\prime}=\frac{x}{n}$ $p^{\prime} \text { and } E(p^{\prime})=p$ Standard Deviation $\sigma=\sqrt{n p q}$ $\sigma_{p^{\prime}}=\sqrt{\frac{p(1-p)}{n}}$ Table $2$ Table $2$ summarizes these results and shows the relationship between the population, sample and sampling distribution. Notice the parallel between this Table and Table $1$ for the case where the random variable is continuous and we were developing the sampling distribution for means. Reviewing the formula for the standard deviation of the sampling distribution for proportions we see that as $n$ increases the standard deviation decreases. This is the same observation we made for the standard deviation for the sampling distribution for means. Again, as the sample size increases, the point estimate for either $\mu$ or $p$ is found to come from a narrower and narrower distribution. We concluded that with a given level of probability, the range from which the point estimate comes is smaller as the sample size, $n$, increases. Figure $8$ shows this result for the case of sample means. Simply substitute $p^{\prime}$ for $\overline x$ and we can see the impact of the sample size on the estimate of the sample proportion. 7.04: Finite Population Correction Factor We saw that the sample size has an important effect on the variance and thus the standard deviation of the sampling distribution. Also of interest is the proportion of the total population that has been sampled. We have assumed that the population is extremely large and that we have sampled a small part of the population. As the population becomes smaller and we sample a larger number of observations, the sample observations are not independent of each other. 
To correct for the impact of this, the Finite Correction Factor can be used to adjust the variance of the sampling distribution. It is appropriate when more than 5% of the population is being sampled and the population size is known. There are cases when the population is known, and therefore the correction factor must be applied. The issue arises for both the sampling distribution of the means and the sampling distribution of proportions. The Finite Population Correction Factor for the variance of the means shown in the standardizing formula is: $Z=\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}} \cdot \sqrt{\frac{N-n}{N-1}}}\nonumber$ and for the variance of proportions is: $\sigma_{\mathrm{p}^{\prime}}=\sqrt{\frac{p(1-p)}{n}} \times \sqrt{\frac{N-n}{N-1}}\nonumber$ The following examples show how to apply the factor. Sampling variances get adjusted using the above formula. Example $1$ It is learned that the population of White German Shepherds in the USA is 4,000 dogs and the mean weight for German Shepherds is 75.45 pounds. It is also learned that the population standard deviation is 10.37 pounds. If the sample size is 100 dogs, then find the probability that a sample will have a mean that differs from the true population mean by less than 2 pounds. Answer Solution 7.1 $N=4000, \quad n=100, \quad \sigma=10.37, \quad \mu=75.45, \quad(\overline{x}-\mu)=\pm 2$ $Z=\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}} \cdot \sqrt{\frac{N-n}{N-1}}}=\frac{ \pm 2}{\frac{10.37}{\sqrt{100}} \cdot \sqrt{\frac{4000-100}{4000-1}}}=\pm 1.95\nonumber$ $f(Z)=0.4744 \cdot 2=0.9488\nonumber$ Note that "differs by less" references the area on both sides of the mean within 2 pounds right or left. Example $2$ When a customer places an order with Rudy's On-Line Office Supplies, a computerized accounting information system (AIS) automatically checks to see if the customer has exceeded his or her credit limit. Past records indicate that the probability of customers exceeding their credit limit is 0.06. Suppose that on a given day, 3,000 orders are placed in total. If we randomly select 360 orders, what is the probability that between 10 and 20 customers will exceed their credit limit? 
Answer Solution 7.2 $N=3000, \quad n=360, \quad p=0.06$ $\sigma_{\mathrm{p}^{\prime}}=\sqrt{\frac{p(1-p)}{n}} \times \sqrt{\frac{N-n}{N-1}}=\sqrt{\frac{0.06(1-0.06)}{360}} \times \sqrt{\frac{3000-360}{3000-1}}=0.0117\nonumber$ $p_{1}=\frac{10}{360}=0.0278, \quad p_{2}=\frac{20}{360}=0.0556\nonumber$ $Z=\frac{p^{\prime}-p}{\sqrt{\frac{p(1-p)}{n}} \cdot \sqrt{\frac{N-n}{N-1}}}=\frac{0.0278-0.06}{0.011744}=-2.74\nonumber$ $P\left(\frac{0.0278-0.06}{0.011744}<Z<\frac{0.0556-0.06}{0.011744}\right)=P(-2.74<Z<-0.37) \approx 0.35$ 7.05: Chapter Formula Review 7.1 The Central Limit Theorem for Sample Means The Central Limit Theorem for Sample Means: $\overline{X} \sim N\left(\mu_{\overline{x}}, \frac{\sigma}{\sqrt{n}}\right)$ $Z=\frac{\overline{X}-\mu_{\overline{X}}}{\sigma_{\overline{X}}}=\frac{\overline{X}-\mu}{\sigma / \sqrt{n}}$ The Mean $\overline{X} : \mu_{\overline x}$ Central Limit Theorem for Sample Means z-score $z=\frac{\overline{x}-\mu_{\overline{x}}}{\left(\frac{\sigma}{\sqrt{n}}\right)}$ Standard Error of the Mean (Standard Deviation of $\overline{X}$): $\frac{\sigma}{\sqrt{n}}$ Finite Population Correction Factor for the sampling distribution of means: $Z=\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}} \cdot \sqrt{\frac{N-n}{N-1}}}$ Finite Population Correction Factor for the sampling distribution of proportions: $\sigma_{\mathrm{p}^{\prime}}=\sqrt{\frac{p(1-p)}{n}} \times \sqrt{\frac{N-n}{N-1}}$
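The two worked examples above can be verified numerically. The sketch below is not part of the original text; it assumes Python with scipy.stats is available and simply re-evaluates the formulas of the chapter with the numbers given in Example 1 and Example 2.

```python
from math import sqrt
from scipy.stats import norm

# Example 1: sampling distribution of the mean with the finite population correction
N, n, sigma, margin = 4000, 100, 10.37, 2.0
se = (sigma / sqrt(n)) * sqrt((N - n) / (N - 1))   # corrected standard error of the mean
z = margin / se                                     # about 1.95
prob_means = norm.cdf(z) - norm.cdf(-z)             # about 0.949
print(f"Example 1: z = {z:.2f}, P(|xbar - mu| < 2) = {prob_means:.4f}")

# Example 2: sampling distribution of a proportion with the finite population correction
N, n, p = 3000, 360, 0.06
sd_p = sqrt(p * (1 - p) / n) * sqrt((N - n) / (N - 1))   # about 0.0117
z1 = (10 / n - p) / sd_p    # z for p' = 10/360 (about -2.74)
z2 = (20 / n - p) / sd_p    # z for p' = 20/360
prob_prop = norm.cdf(z2) - norm.cdf(z1)                   # about 0.35
print(f"Example 2: P({z1:.2f} < Z < {z2:.2f}) = {prob_prop:.4f}")
```

Small differences from the hand calculations above come only from rounding the intermediate z-scores.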
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/07%3A_The_Central_Limit_Theorem/7.03%3A_The_Central_Limit_Theorem_for_Proportions.txt
The Central Limit Theorem for Sample Means 49. Previously, De Anza statistics students estimated that the amount of change daytime statistics students carry is exponentially distributed with a mean of $0.88. Suppose that we randomly pick 25 daytime statistics students. 1. In words, $Χ$ = ____________ 2. $Χ \sim$ _____(_____,_____) 3. In words, $\overline X$ = ____________ 4. $\overline X \sim$ ______ (______, ______) 5. Find the probability that an individual had between $0.80 and $1.00. Graph the situation, and shade in the area to be determined. 6. Find the probability that the average of the 25 students was between $0.80 and $1.00. Graph the situation, and shade in the area to be determined. 7. Explain why there is a difference in part 5 and part 6. Answer 1. $Χ$ = amount of change students carry 2. $Χ \sim E(0.88, 0.88)$ 3. $\overline X$ = average amount of change carried by a sample of 25 students. 4. $\overline X \sim N(0.88, 0.176)$ 5. $0.0819$ 6. $0.1882$ 7. The distributions are different. Part 5 uses the exponential distribution and part 6 uses the normal distribution. 50. Suppose that the distance of fly balls hit to the outfield (in baseball) is normally distributed with a mean of 250 feet and a standard deviation of 50 feet. We randomly sample 49 fly balls. 1. If $\overline X$ = average distance in feet for 49 fly balls, then $\overline X \sim$ _______(_______,_______) 2. What is the probability that the 49 balls traveled an average of less than 240 feet? Sketch the graph. Scale the horizontal axis for $\overline X$. Shade the region corresponding to the probability. Find the probability. 3. Find the 80th percentile of the distribution of the average of 49 fly balls. 51. According to the Internal Revenue Service, the average length of time for an individual to complete (keep records for, learn, prepare, copy, assemble, and send) IRS Form 1040 is 10.53 hours (without any attached schedules). The distribution is unknown. Let us assume that the standard deviation is two hours. Suppose we randomly sample 36 taxpayers. 1. In words, $Χ =$ _____________ 2. In words, $\overline X$ = _____________ 3. $\overline X \sim$ _____(_____,_____) 4. Would you be surprised if the 36 taxpayers finished their Form 1040s in an average of more than 12 hours? Explain why or why not in complete sentences. 5. Would you be surprised if one taxpayer finished his or her Form 1040 in more than 12 hours? In a complete sentence, explain why. 52. Suppose that a category of world-class runners is known to run a marathon (26 miles) in an average of 145 minutes with a standard deviation of 14 minutes. Consider 49 of the races. Let $\overline X =$ the average of the 49 races. 1. $\overline X \sim$ _____(_____,_____) 2. Find the probability that the runner will average between 142 and 146 minutes in these 49 marathons. 3. Find the $80^{th}$ percentile for the average of these 49 marathons. 4. Find the median of the average running times. 53. The length of songs in a collector’s iTunes album collection is uniformly distributed from two to 3.5 minutes. Suppose we randomly pick five albums from the collection. There are a total of 43 songs on the five albums. 1. In words, $Χ$ = _________ 2. $Χ \sim$ _____________ 3. In words, $\overline X$ = _____________ 4. $\overline X \sim$ _____(_____,_____) 5. Find the first quartile for the average song length. 6. The $IQR$ (interquartile range) for the average song length is from _______–_______. 54. In 1940 the average size of a U.S. farm was 174 acres. Let’s say that the standard deviation was 55 acres. 
Suppose we randomly survey 38 farmers from 1940. 1. In words, $X$ = _____________ 2. In words, $\overline X$ = _____________ 3. $\overline X \sim$ _____(_____,_____) 4. The $IQR$ for $\overline X$ is from _______ acres to _______ acres. 55. Determine which of the following are true and which are false. Then, in complete sentences, justify your answers. 1. When the sample size is large, the mean of $\overline X$ is approximately equal to the mean of $X$. 2. When the sample size is large, $\overline X$ is approximately normally distributed. 3. When the sample size is large, the standard deviation of $\overline X$ is approximately the same as the standard deviation of $X$. 56. The percent of fat calories that a person in America consumes each day is normally distributed with a mean of about 36 and a standard deviation of about ten. Suppose that 16 individuals are randomly chosen. Let $\overline X$ = average percent of fat calories. 1. $\overline X \sim$ ______(______, ______) 2. For the group of 16, find the probability that the average percent of fat calories consumed is more than five. Graph the situation and shade in the area to be determined. 3. Find the first quartile for the average percent of fat calories. 57. The distribution of income in some Third World countries is considered wedge shaped (many very poor people, very few middle income people, and even fewer wealthy people). Suppose we pick a country with a wedge shaped distribution. Let the average salary be $2,000 per year with a standard deviation of $8,000. We randomly survey 1,000 residents of that country. 1. In words, $X$ = _____________ 2. In words, $\overline X$ = _____________ 3. $\overline X \sim$ _____(_____,_____) 4. How is it possible for the standard deviation to be greater than the average? 5. Why is it more likely that the average of the 1,000 residents will be from $2,000 to $2,100 than from $2,100 to $2,200? 58. Which of the following is NOT TRUE about the distribution for averages? 1. The mean, median, and mode are equal. 2. The area under the curve is one. 3. The curve never touches the x-axis. 4. The curve is skewed to the right. 59. The cost of unleaded gasoline in the Bay Area once followed an unknown distribution with a mean of $4.59 and a standard deviation of $0.10. Sixteen gas stations from the Bay Area are randomly chosen. We are interested in the average cost of gasoline for the 16 gas stations. The distribution to use for the average cost of gasoline for the 16 gas stations is: a. $\overline X \sim N(4.59, 0.10)$ b. $\overline X \sim N\left(4.59, \frac{0.10}{\sqrt{16}}\right)$ c. $\overline X \sim N\left(4.59, \frac{16}{0.10}\right)$ d. $\overline X \sim N\left(4.59, \frac{\sqrt{16}}{0.10}\right)$ Using the Central Limit Theorem 60. A large population of 5,000 students takes a practice test to prepare for a standardized test. The population mean is 140 questions correct, and the standard deviation is 80. What size samples should a researcher take to get a distribution of means of the samples with a standard deviation of 10? 61. A large population has skewed data with a mean of 70 and a standard deviation of 6. Samples of size 100 are taken, and the distribution of the means of these samples is analyzed. 1. Will the distribution of the means be closer to a normal distribution than the distribution of the population? 2. Will the mean of the means of the samples remain close to 70? 3. Will the distribution of the means have a smaller standard deviation? 4. What is that standard deviation? 62. 
A researcher is looking at data from a large population with a standard deviation that is much too large. In order to concentrate the information, the researcher decides to repeatedly sample the data and use the distribution of the means of the samples. The first effort used sample sizes of 100. But the standard deviation was about double the value the researcher wanted. What is the smallest size samples the researcher can use to remedy the problem? 63. A researcher looks at a large set of data, and concludes the population has a standard deviation of 40. Using sample sizes of 64, the researcher is able to focus the mean of the means of the sample to a narrower distribution where the standard deviation is 5. Then, the researcher realizes there was an error in the original calculations, and the initial standard deviation is really 20. Since the standard deviation of the means of the samples was obtained using the original standard deviation, this value is also impacted by the discovery of the error. What is the correct value of the standard deviation of the means of the samples? 64. A population has a standard deviation of 50. It is sampled with samples of size 100. What is the variance of the means of the samples? The Central Limit Theorem for Proportions 65. A farmer picks pumpkins from a large field. The farmer makes samples of 260 pumpkins and inspects them. If one in fifty pumpkins is not fit for market and will be saved for seeds, what is the standard deviation of the mean of the sampling distribution of sample proportions? 66. A store surveys customers to see if they are satisfied with the service they received. Samples of 25 surveys are taken. One in five people is unsatisfied. What is the variance of the mean of the sampling distribution of sample proportions for the number of unsatisfied customers? What is the variance for satisfied customers? 67. A company gives an anonymous survey to its employees to see what percent of its employees are happy. The company is too large to check each response, so samples of 50 are taken, and the tendency is that three-fourths of the employees are happy. For the mean of the sampling distribution of sample proportions, answer the following questions, if the sample size is doubled. 1. How does this affect the mean? 2. How does this affect the standard deviation? 3. How does this affect the variance? 68. A pollster asks a single question with only yes and no as answer possibilities. The poll is conducted nationwide, so samples of 100 responses are taken. There are four yes answers for each no answer overall. For the mean of the sampling distribution of sample proportions, find the following for yes answers. 1. The expected value. 2. The standard deviation. 3. The variance. 69. The mean of the sampling distribution of sample proportions has a value of $p$ of 0.3 and a sample size of 40. 1. Is there a difference in the expected value if $p$ and $q$ reverse roles? 2. Is there a difference in the calculation of the standard deviation with the same reversal? Finite Population Correction Factor 70. A company has 1,000 employees. The average number of workdays between absences for illness is 80 with a standard deviation of 11 days. Samples of 80 employees are examined. What is the probability that a sample has a mean number of workdays between absences for illness of at least 78 days and at most 84 days? 71. Trucks pass an automatic scale that monitors 2,000 trucks. This population of trucks has an average weight of 20 tons with a standard deviation of 2 tons. 
If a sample of 50 trucks is taken, what is the probability that the sample will have an average weight within one-half ton of the population mean? 72. A town keeps weather records. From these records it has been determined that it rains on an average of 12% of the days each year. If 30 days are selected at random from one year, what is the probability that at most 3 days had rain? 73. A maker of greeting cards has an ink problem that causes the ink to smear on 7% of the cards. The daily production run is 500 cards. What is the probability that if a sample of 35 cards is checked, there will be ink smeared on at most 5 cards? 74. A school has 500 students. Usually, there is an average of 20 students absent. If a sample of 30 students is taken on a certain day, what is the probability that at least 2 students in the sample will be absent?
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/07%3A_The_Central_Limit_Theorem/7.06%3A_Chapter_Homework.txt
Average a number that describes the central tendency of the data; there are a number of specialized averages, including the arithmetic mean, weighted mean, median, mode, and geometric mean. Central Limit Theorem Given a random variable with known mean $\mu$ and known standard deviation $\sigma$, we are sampling with size $n$, and we are interested in a new RV: the sample mean, $\overline X$. If the size ($n$) of the sample is sufficiently large, then $\overline{X} \sim N\left(\mu, \frac{\sigma}{\sqrt{n}}\right)$. If the size ($n$) of the sample is sufficiently large, then the distribution of the sample means will approximate a normal distribution regardless of the shape of the population. The mean of the sample means will equal the population mean. The standard deviation of the distribution of the sample means, $\frac{\sigma}{\sqrt{n}}$, is called the standard error of the mean. Finite Population Correction Factor adjusts the variance of the sampling distribution if the population is known and more than 5% of the population is being sampled. Mean a number that measures the central tendency; a common name for mean is "average." The term "mean" is a shortened form of "arithmetic mean." By definition, the mean for a sample (denoted by $\overline x$) is $\overline{x}=\frac{\text { Sum of all values in the sample }}{\text { Number of values in the sample }}$, and the mean for a population (denoted by $\mu$) is $\mu=\frac{\text { Sum of all values in the population }}{\text { Number of values in the population }}$. Normal Distribution a continuous random variable with pdf $f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{\frac{-(x-\mu)^{2}}{2 \sigma^{2}}}$, where $\mu$ is the mean of the distribution and $\sigma$ is the standard deviation; notation: $X \sim N(\mu, \sigma)$. If $\mu = 0$ and $\sigma = 1$, the random variable $Z$ is said to have the standard normal distribution. Sampling Distribution Given simple random samples of size $n$ from a given population with a measured characteristic such as mean, proportion, or standard deviation for each sample, the probability distribution of all the measured characteristics is called a sampling distribution. Standard Error of the Mean the standard deviation of the distribution of the sample means, or $\frac{\sigma}{\sqrt{n}}$. Standard Error of the Proportion the standard deviation of the sampling distribution of proportions 7.08: Chapter Practice Using the Central Limit Theorem Use the following information to answer the next ten exercises: A manufacturer produces 25-pound lifting weights. The lowest actual weight is 24 pounds, and the highest is 26 pounds. Each weight is equally likely, so the distribution of weights is uniform. A sample of 100 weights is taken. 1. 1. What is the distribution for the weights of one 25-pound lifting weight? What are the mean and standard deviation? 2. What is the distribution for the mean weight of 100 25-pound lifting weights? 3. Find the probability that the mean actual weight for the 100 weights is less than 24.9. 2. Draw the graph from Exercise \(1\) 3. Find the probability that the mean actual weight for the 100 weights is greater than 25.2. 4. Draw the graph from Exercise \(3\) 5. Find the 90th percentile for the mean weight for the 100 weights. 6. Draw the graph from Exercise \(5\) 7. 1. What is the distribution for the sum of the weights of 100 25-pound lifting weights? 2. Find \(P(\Sigma x<2,450)\). 8. Draw the graph from Exercise \(7\) 9. Find the 90th percentile for the total weight of the 100 weights. 10. 
Draw the graph from Exercise \(9\) Use the following information to answer the next eight exercises: The length of time a particular smartphone's battery lasts follows an exponential distribution with a mean of ten months. A sample of 64 of these smartphones is taken. 11. 1. What is the standard deviation? 2. What is the parameter \(m\)? 12. What is the distribution for the length of time one battery lasts? 13. What is the distribution for the mean length of time 64 batteries last? 14. What is the distribution for the total length of time 64 batteries last? 15. Find the probability that the sample mean is between seven and 11. 16. Find the 80th percentile for the total length of time 64 batteries last. 17. Find the \(IQR\) for the mean amount of time 64 batteries last. 18. Find the middle 80% for the total amount of time 64 batteries last. Use the following information to answer the next six exercises: A uniform distribution has a minimum of six and a maximum of ten. A sample of 50 is taken. 19. Find \(P(\Sigma x > 420)\). 20. Find the 90th percentile for the sums. 21. Find the 15th percentile for the sums. 22. Find the first quartile for the sums. 23. Find the third quartile for the sums. 24. Find the 80th percentile for the sums. 25. A population has a mean of 25 and a standard deviation of 2. If it is sampled repeatedly with samples of size 49, what is the mean and standard deviation of the sample means? 26. A population has a mean of 48 and a standard deviation of 5. If it is sampled repeatedly with samples of size 36, what is the mean and standard deviation of the sample means? 27. A population has a mean of 90 and a standard deviation of 6. If it is sampled repeatedly with samples of size 64, what is the mean and standard deviation of the sample means? 28. A population has a mean of 120 and a standard deviation of 2.4. If it is sampled repeatedly with samples of size 40, what is the mean and standard deviation of the sample means? 29. A population has a mean of 17 and a standard deviation of 1.2. If it is sampled repeatedly with samples of size 50, what is the mean and standard deviation of the sample means? 30. A population has a mean of 17 and a standard deviation of 0.2. If it is sampled repeatedly with samples of size 16, what is the expected value and standard deviation of the sample means? 31. A population has a mean of 38 and a standard deviation of 3. If it is sampled repeatedly with samples of size 48, what is the expected value and standard deviation of the sample means? 32. A population has a mean of 14 and a standard deviation of 5. If it is sampled repeatedly with samples of size 60, what is the expected value and standard deviation of the sample means? The Central Limit Theorem for Proportions 33. A question is asked of a class of 200 freshmen, and 23% of the students know the correct answer. If a sample of 50 students is taken repeatedly, what is the expected value of the mean of the sampling distribution of sample proportions? 34. A question is asked of a class of 200 freshmen, and 23% of the students know the correct answer. If a sample of 50 students is taken repeatedly, what is the standard deviation of the mean of the sampling distribution of sample proportions? 35. A game is played repeatedly. A player wins one-fifth of the time. If samples of 40 times the game is played are taken repeatedly, what is the expected value of the mean of the sampling distribution of sample proportions? 36. A game is played repeatedly. A player wins one-fifth of the time. 
If samples of 40 times the game is played are taken repeatedly, what is the standard deviation of the mean of the sampling distribution of sample proportions? 37. A virus attacks one in three of the people exposed to it. An entire large city is exposed. If samples of 70 people are taken, what is the expected value of the mean of the sampling distribution of sample proportions? 38. A virus attacks one in three of the people exposed to it. An entire large city is exposed. If samples of 70 people are taken, what is the standard deviation of the mean of the sampling distribution of sample proportions? 39. A company inspects products coming through its production process, and rejects detected products. One-tenth of the items are rejected. If samples of 50 items are taken, what is the expected value of the mean of the sampling distribution of sample proportions? 40. A company inspects products coming through its production process, and rejects detected products. One-tenth of the items are rejected. If samples of 50 items are taken, what is the standard deviation of the mean of the sampling distribution of sample proportions? Finite Population Correction Factor 41. A fishing boat has 1,000 fish on board, with an average weight of 120 pounds and a standard deviation of 6.0 pounds. If sample sizes of 50 fish are checked, what is the probability the fish in a sample will have mean weight within 2.8 pounds the true mean of the population? 42. An experimental garden has 500 sunflowers plants. The plants are being treated so they grow to unusual heights. The average height is 9.3 feet with a standard deviation of 0.5 foot. If sample sizes of 60 plants are taken, what is the probability the plants in a given sample will have an average height within 0.1 foot of the true mean of the population? 43. A company has 800 employees. The average number of workdays between absence for illness is 123 with a standard deviation of 14 days. Samples of 50 employees are examined. What is the probability a sample has a mean of workdays with no absence for illness of at least 124 days? 44. Cars pass an automatic speed check device that monitors 2,000 cars on a given day. This population of cars has an average speed of 67 miles per hour with a standard deviation of 2 miles per hour. If samples of 30 cars are taken, what is the probability a given sample will have an average speed within 0.50 mile per hour of the population mean? 45. A town keeps weather records. From these records it has been determined that it rains on an average of 37% of the days each year. If 30 days are selected at random from one year, what is the probability that at least 5 and at most 11 days had rain? 46. A maker of yardsticks has an ink problem that causes the markings to smear on 4% of the yardsticks. The daily production run is 2,000 yardsticks. What is the probability if a sample of 100 yardsticks is checked, there will be ink smeared on at most 4 yardsticks? 47. A school has 300 students. Usually, there are an average of 21 students who are absent. If a sample of 30 students is taken on a certain day, what is the probability that at most 2 students in the sample will be absent? 48. A college gives a placement test to 5,000 incoming students each year. On the average 1,213 place in one or more developmental courses. If a sample of 50 is taken from the 5,000, what is the probability at most 12 of those sampled will have to take at least one developmental course? 
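All of the finite-population exercises above follow the same recipe: shrink the standard error of the mean (or proportion) by the usual correction factor $\sqrt{(N-n)/(N-1)}$ and then use a normal probability. The following is a minimal Python sketch under that assumption, worked with the numbers from Exercise 41; the helper name is illustrative and not from any particular library.

```python
from math import sqrt
from scipy.stats import norm

def fpc_standard_error(sigma, n, N):
    """Standard error of the sample mean with the finite population correction.
    Illustrative helper: multiplies sigma/sqrt(n) by sqrt((N - n)/(N - 1)),
    the usual adjustment when more than 5% of the population is sampled."""
    return (sigma / sqrt(n)) * sqrt((N - n) / (N - 1))

# Exercise 41: N = 1,000 fish, sigma = 6.0 lb, n = 50;
# probability the sample mean falls within 2.8 lb of the population mean.
se = fpc_standard_error(sigma=6.0, n=50, N=1000)
prob = norm.cdf(2.8 / se) - norm.cdf(-2.8 / se)
print(round(se, 3), round(prob, 3))   # prob is essentially 1, matching the 0.999 in the answer key
```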
7.09: Chapter References 7.1 The Central Limit Theorem for Sample Means Baran, Daya. “20 Percent of Americans Have Never Used Email.” WebGuild, 2010. Available online at http://www.webguild.org/20080519/20-...ver-used-email (accessed May 17, 2013). Data from The Flurry Blog, 2013. Available online at blog.flurry.com (accessed May 17, 2013). Data from the United States Department of Agriculture. 7.10: Chapter Review 7.1 The Central Limit Theorem for Sample Means In a population whose distribution may be known or unknown, if the size ($n$) of samples is sufficiently large, the distribution of the sample means will be approximately normal. The mean of the sample means will equal the population mean. The standard deviation of the distribution of the sample means, called the standard error of the mean, is equal to the population standard deviation divided by the square root of the sample size ($n$). 7.2 Using the Central Limit Theorem The Central Limit Theorem can be used to illustrate the law of large numbers. The law of large numbers states that the larger the sample size you take from a population, the closer the sample mean $\overline x$ gets to $\mu$. 7.3 The Central Limit Theorem for Proportions The Central Limit Theorem can also be used to illustrate that the sampling distribution of sample proportions is normally distributed with the expected value of $p$ and a standard deviation of $\sigma_{\mathrm{p}^{\prime}}=\sqrt{\frac{p(1-p)}{n}}$. 7.11: Chapter Solution (Practice Homework) 1. 1. 3. 0.0003 5. 25.07 7. 1. 9. 2,507.40 11. 1. 13. $N(10, \frac{10}{8})$ 15. 0.7799 17. 1.69 19. 0.0072 21. 391.54 23. 405.51 25. Mean = 25, standard deviation = 2/7 26. Mean = 48, standard deviation = 5/6 27. Mean = 90, standard deviation = 3/4 28. Mean = 120, standard deviation = 0.38 29. Mean = 17, standard deviation = 0.17 30. Expected value = 17, standard deviation = 0.05 31. Expected value = 38, standard deviation = 0.43 32. Expected value = 14, standard deviation = 0.65 33. 0.23 34. 0.060 35. 1/5 36. 0.063 37. 1/3 38. 0.056 39. 1/10 40. 0.042 41. 0.999 42. 0.901 43. 0.301 44. 0.832 45. 0.483 46. 0.500 47. 0.502 48. 0.519 49. 1. 51. 1. 53. 1. 55. 1. 57. 1. 59. b 60. 64 61. 1. 62. 400 63. 2.5 64. 25 65. 0.0087 66. 0.0064, 0.0064 67. 1. 68. 1. 69. 1. 70. 0.955 71. 0.927 72. 0.648 73. 0.101 74. 0.273
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/07%3A_The_Central_Limit_Theorem/7.07%3A_Chapter_Key_Terms.txt
Suppose you were trying to determine the mean rent of a two-bedroom apartment in your town. You might look in the classified section of the newspaper, write down several rents listed, and average them together. You would have obtained a point estimate of the true mean. If you are trying to determine the percentage of times you make a basket when shooting a basketball, you might count the number of shots you make and divide that by the number of shots you attempted. In this case, you would have obtained a point estimate for the true proportion, the parameter $p$ in the binomial probability density function. We use sample data to make generalizations about an unknown population. This part of statistics is called inferential statistics. The sample data help us to make an estimate of a population parameter. We realize that the point estimate is most likely not the exact value of the population parameter, but close to it. After calculating point estimates, we construct interval estimates, called confidence intervals. What statistics provides us beyond a simple average, or point estimate, is an estimate to which we can attach a probability of accuracy, what we will call a confidence level. We make inferences with a known level of probability. In this chapter, you will learn to construct and interpret confidence intervals. You will also learn a new distribution, the Student's t, and how it is used with these intervals. Throughout the chapter, it is important to keep in mind that the confidence interval is a random variable. It is the population parameter that is fixed. If you worked in the marketing department of an entertainment company, you might be interested in the mean number of songs a consumer downloads a month from iTunes. If so, you could conduct a survey and calculate the sample mean, $\overline x$, and the sample standard deviation, $s$. You would use $\overline x$ to estimate the population mean and $s$ to estimate the population standard deviation. The sample mean, $\overline x$, is the point estimate for the population mean, $\mu$. The sample standard deviation, $s$, is the point estimate for the population standard deviation, $\sigma$. $\overline x$ and $s$ are each called a statistic. A confidence interval is another type of estimate but, instead of being just one number, it is an interval of numbers. The interval of numbers is a range of values calculated from a given set of sample data. The confidence interval is likely to include the unknown population parameter. Suppose, for the iTunes example, we do not know the population mean $\mu$, but we do know that the population standard deviation is $\sigma = 1$ and our sample size is 100. Then, by the central limit theorem, the standard deviation of the sampling distribution of the sample means is $\frac{\sigma}{\sqrt{n}}=\frac{1}{\sqrt{100}}=0.1.\nonumber$ The empirical rule, which applies to the normal distribution, says that in approximately 95% of the samples, the sample mean, $\overline x$, will be within two standard deviations of the population mean $\mu$. For our iTunes example, two standard deviations is $(2)(0.1) = 0.2$. The sample mean $\overline x$ is likely to be within 0.2 units of $\mu$. Because $\overline x$ is within 0.2 units of $\mu$, which is unknown, then $\mu$ is likely to be within 0.2 units of $\overline x$ with 95% probability.
The population mean $\mu$ is contained in an interval whose lower number is calculated by taking the sample mean and subtracting two standard deviations $(2)(0.1)$ and whose upper number is calculated by taking the sample mean and adding two standard deviations. In other words, $\mu$ is between $\overline{x}-0.2$ and $\overline{x}+0.2$ in 95% of all the samples. For the iTunes example, suppose that a sample produced a sample mean $\overline{x}=2$. Then with 95% probability the unknown population mean $\mu$ is between $\overline{x}-0.2=2-0.2=1.8 \text { and } \overline{x}+0.2=2+0.2=2.2 \nonumber$ We say that we are 95% confident that the unknown population mean number of songs downloaded from iTunes per month is between 1.8 and 2.2. The 95% confidence interval is (1.8, 2.2). Please note that we talked in terms of 95% confidence using the empirical rule. The empirical rule for two standard deviations covers only approximately 95% of the probability under the normal distribution. To be precise, two standard deviations under a normal distribution is actually 95.44% of the probability. To calculate the exact 95% confidence level we would use 1.96 standard deviations. The 95% confidence interval implies two possibilities. Either the interval (1.8, 2.2) contains the true mean $\mu$, or our sample produced an $\overline x$ that is not within 0.2 units of the true mean $\mu$. The second possibility happens for only 5% of all the samples (100% minus 95% = 5%). Remember that a confidence interval is created for an unknown population parameter like the population mean, $\mu$. For the confidence interval for a mean the formula would be: $\mu=\overline{X} \pm Z_{\alpha} \sigma / \sqrt{n}\nonumber$ Or written another way as: $\overline{X}-Z_{\alpha} \sigma / \sqrt{n} \leq \mu \leq \overline{X}+Z_{\alpha} \sigma / \sqrt{n}\nonumber$ Where $\overline x$ is the sample mean. $Z_{\alpha}$ is determined by the level of confidence desired by the analyst, and $\sigma / \sqrt{n}$ is the standard deviation of the sampling distribution for means given to us by the Central Limit Theorem.
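For readers who want to check the iTunes arithmetic with software, here is a minimal Python sketch. It assumes, as the text does, that $\sigma = 1$, $n = 100$, and the sample mean is $\overline{x} = 2$; scipy is used only for the inverse normal CDF.

```python
from math import sqrt
from scipy.stats import norm

sigma, n, x_bar = 1, 100, 2            # values assumed in the iTunes example
se = sigma / sqrt(n)                   # standard error from the Central Limit Theorem: 0.1

# Empirical-rule version: roughly 95% of sample means fall within 2 standard errors.
print(x_bar - 2 * se, x_bar + 2 * se)          # (1.8, 2.2)

# Exact 95% version uses 1.96 standard errors.
z = norm.ppf(0.975)                            # ≈ 1.96
print(x_bar - z * se, x_bar + z * se)          # ≈ (1.804, 2.196)
```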
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/08%3A_Confidence_Intervals/8.00%3A_Introduction_to_Confidence_Intervals.txt
A confidence interval for a population mean with a known population standard deviation is based on the conclusion of the Central Limit Theorem that the sampling distribution of the sample means follows an approximately normal distribution. Calculating the Confidence Interval Consider the standardizing formula for the sampling distribution developed in the discussion of the Central Limit Theorem: $Z_{1}=\frac{\overline{X}-\mu_{\overline{X}}}{\sigma_{\overline{X}}}=\frac{\overline{X}-\mu}{\sigma / \sqrt{n}}\nonumber$ Notice that $\mu$ is substituted for $\mu_{\overline{x}}$ because we know that the expected value of $\overline{X}$ is $\mu$ from the Central Limit theorem and $\sigma_{\overline{x}}$ is replaced with $\sigma / \sqrt{n}$, also from the Central Limit Theorem. In this formula we know $\overline X$, $\sigma_{\overline{x}}$ and $n$, the sample size. (In actuality we do not know the population standard deviation, but we do have a point estimate for it, $s$, from the sample we took. More on this later.) What we do not know is $\mu$ or $Z_1$. We can solve for either one of these in terms of the other. Solving for $\mu$ in terms of $Z_1$ gives: $\mu=\overline{X} \pm Z_{1} {\sigma} / \sqrt{n}\nonumber$ Remembering that the Central Limit Theorem tells us that the distribution of the $\overline X$'s, the sampling distribution for means, is normal, and that the normal distribution is symmetrical, we can rearrange terms thus: $\overline{X}-Z_{\alpha}(\sigma / \sqrt{n}) \leq \mu \leq \overline{X}+Z_{\alpha}(\sigma / \sqrt{n})\nonumber$ This is the formula for a confidence interval for the mean of a population. Notice that $Z_\alpha$ has been substituted for $Z_1$ in this equation. This is where a choice must be made by the statistician. The analyst must decide the level of confidence they wish to impose on the confidence interval. $\alpha$ is the probability that the interval will not contain the true population mean. The confidence level is defined as $(1-\alpha)$. $Z_\alpha$ is the number of standard deviations $\overline X$ lies from the mean with a certain probability. If we choose $Z_\alpha = 1.96$ we are asking for the 95% confidence interval because we are setting the probability that the true mean lies within the range at 0.95. If we set $Z_\alpha$ at 1.645 we are asking for the 90% confidence interval because we have set the probability at 0.90. These numbers can be verified by consulting the Standard Normal table. Divide either 0.95 or 0.90 in half and find that probability inside the body of the table. Then read on the top and left margins the number of standard deviations it takes to get this level of probability. In reality, we can set whatever level of confidence we desire simply by changing the $Z_\alpha$ value in the formula. It is the analyst's choice. Common convention in Economics and most social sciences sets confidence intervals at either 90, 95, or 99 percent levels. Levels less than 90% are considered of little value. The level of confidence of a particular interval estimate is denoted by $(1-\alpha)$. A good way to see the development of a confidence interval is to graphically depict the solution to a problem requesting a confidence interval. This is presented in Figure $2$ for the example in the introduction concerning the number of downloads from iTunes. That case was for a 95% confidence interval, but other levels of confidence could have just as easily been chosen depending on the need of the analyst.
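The table lookup just described can also be done with software. A short Python sketch (using scipy's inverse normal CDF) recovers the conventional $Z_{\alpha}$ values for the common confidence levels:

```python
from scipy.stats import norm

# Z_{alpha/2} puts (1 - CL)/2 probability in each tail of the standard normal.
for cl in (0.90, 0.95, 0.99):
    alpha = 1 - cl
    z = norm.ppf(1 - alpha / 2)
    print(f"{int(cl * 100)}% confidence: Z = {z:.3f}")
# 90%: 1.645, 95%: 1.960, 99%: 2.576
```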
However, the level of confidence MUST be pre-set and not subject to revision as a result of the calculations. For this example, let's say we know that the actual population mean number of iTunes downloads is 2.1. The true population mean falls within the range of the 95% confidence interval. There is absolutely nothing to guarantee that this will happen. Further, if the true mean falls outside of the interval we will never know it. We must always remember that we will never ever know the true mean. Statistics simply allows us, with a given level of probability (confidence), to say that the true mean is within the range calculated. This is what was called in the introduction the "level of ignorance admitted." Changing the Confidence Level or Sample Size Here again is the formula for a confidence interval for an unknown population mean assuming we know the population standard deviation: $\overline{X}-Z_{\alpha}(\sigma / \sqrt{n}) \leq \mu \leq \overline{X}+Z_{\alpha}(\sigma / \sqrt{n})\nonumber$ It is clear that the confidence interval is driven by two things, the chosen level of confidence, $Z_\alpha$, and the standard deviation of the sampling distribution. The standard deviation of the sampling distribution is further affected by two things, the standard deviation of the population and the sample size we chose for our data. Here we wish to examine the effects of each of the choices we have made on the calculated confidence interval: the confidence level and the sample size. For a moment we should ask just what we desire in a confidence interval. Our goal was to estimate the population mean from a sample. We have forsaken the hope that we will ever find the true population mean, and population standard deviation for that matter, for any case except where we have an extremely small population and the cost of gathering the data of interest is very small. In all other cases we must rely on samples. With the Central Limit Theorem we have the tools to provide a meaningful confidence interval with a given level of confidence, meaning a known probability of being wrong. By meaningful confidence interval we mean one that is useful. Imagine that you are asked for a confidence interval for the ages of your classmates. You have taken a sample and find a mean of 19.8 years. You wish to be very confident, so you report an interval between 9.8 years and 29.8 years. This interval would certainly contain the true population mean and have a very high confidence level. However, it hardly qualifies as meaningful. The very best confidence interval is narrow while having high confidence. There is a natural tension between these two goals. The higher the level of confidence, the wider the confidence interval, as in the case of the students' ages above. We can see this tension in the equation for the confidence interval. $\mu=\overline{x} \pm Z_{\alpha}\left(\frac{\sigma}{\sqrt{n}}\right)\nonumber$ The confidence interval will increase in width as $Z_\alpha$ increases, and $Z_\alpha$ increases as the level of confidence increases. There is a tradeoff between the level of confidence and the width of the interval. Now let's look at the formula again and we see that the sample size also plays an important role in the width of the confidence interval. The sample size, $n$, shows up in the denominator of the standard deviation of the sampling distribution.
As the sample size increases, the standard deviation of the sampling distribution decreases, and thus the width of the confidence interval decreases, while holding the level of confidence constant. This relationship was demonstrated in Figure $8$. Again we see the importance of having large samples for our analysis although we then face a second constraint, the cost of gathering data. Calculating the Confidence Interval: An Alternative Approach Another way to approach confidence intervals is through the use of something called the Error Bound. The Error Bound gets its name from the recognition that it provides the boundary of the interval derived from the standard error of the sampling distribution. In the equations above it is seen that the interval is simply the estimated mean, the sample mean, plus or minus something. That something is the Error Bound and is driven by the probability we desire to maintain in our estimate, $Z_\alpha$, times the standard deviation of the sampling distribution. The Error Bound for a mean is given the name, Error Bound Mean, or $EBM$. To construct a confidence interval for a single unknown population mean $\mu$, where the population standard deviation is known, we need $\overline x$ as an estimate for $\mu$ and we need the margin of error. Here, the margin of error is called the error bound for a population mean (abbreviated $EBM$). The sample mean $\overline x$ is the point estimate of the unknown population mean $\mu$. The confidence interval estimate will have the form: (point estimate - error bound, point estimate + error bound) or, in symbols, $(\overline{x}-E B M, \overline{x}+E B M)$ The mathematical formula for this confidence interval is: $\overline{X}-Z_{\alpha}(\sigma / \sqrt{n}) \leq \mu \leq \overline{X}+Z_{\alpha}(\sigma / \sqrt{n})$ The margin of error ($EBM$) depends on the confidence level (abbreviated $CL$). The confidence level is often considered the probability that the calculated confidence interval estimate will contain the true population parameter. However, it is more accurate to state that the confidence level is the percent of confidence intervals that contain the true population parameter when repeated samples are taken. Most often, it is the choice of the person constructing the confidence interval to choose a confidence level of 90% or higher because that person wants to be reasonably certain of his or her conclusions. There is another probability called alpha ($\alpha$). $\alpha$ is related to the confidence level, $CL$. $\alpha$ is the probability that the interval does not contain the unknown population parameter. Mathematically, $1 - \alpha = CL$. A confidence interval for a population mean with a known standard deviation is based on the fact that the sampling distribution of the sample means follows an approximately normal distribution. Suppose that our sample has a mean of $\overline x = 10$, and we have constructed the 90% confidence interval $(5, 15)$ where $EBM = 5$. To get a 90% confidence interval, we must include the central 90% of the probability of the normal distribution. If we include the central 90%, we leave out a total of $\alpha = 10\%$ in both tails, or 5% in each tail, of the normal distribution. To capture the central 90%, we must go out 1.645 standard deviations on either side of the calculated sample mean. The value 1.645 is the z-score from a standard normal probability distribution that puts an area of 0.90 in the center, an area of 0.05 in the far left tail, and an area of 0.05 in the far right tail.
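A minimal Python sketch of the $EBM$ arithmetic just described, showing how the interval widens as the confidence level rises. The helper name is illustrative, and the sample values ($\overline{x} = 68$, $\sigma = 3$, $n = 36$) anticipate the exam-score example worked below.

```python
from math import sqrt
from scipy.stats import norm

def ebm(confidence, sigma, n):
    """Error bound for a mean when the population standard deviation is known."""
    z = norm.ppf(1 - (1 - confidence) / 2)    # Z_{alpha/2}
    return z * sigma / sqrt(n)

x_bar = 68                                    # sample mean used in the exam-score example below
for cl in (0.90, 0.95, 0.99):
    e = ebm(cl, sigma=3, n=36)
    print(f"{int(cl * 100)}%: EBM = {e:.4f}, interval = ({x_bar - e:.2f}, {x_bar + e:.2f})")
# Higher confidence level -> larger EBM -> wider interval
```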
It is important that the standard deviation used be appropriate for the parameter we are estimating, so in this section we need to use the standard deviation that applies to the sampling distribution for means, which we studied with the Central Limit Theorem and is $\frac{\sigma}{\sqrt{n}}$. Calculating the Confidence Interval Using EBM To construct a confidence interval estimate for an unknown population mean, we need data from a random sample. The steps to construct and interpret the confidence interval are: • Calculate the sample mean $\overline x$ from the sample data. Remember, in this section we know the population standard deviation $\sigma$. • Find the z-score from the standard normal table that corresponds to the confidence level desired. • Calculate the error bound $EBM$. • Construct the confidence interval. • Write a sentence that interprets the estimate in the context of the situation in the problem. We will first examine each step in more detail, and then illustrate the process with some examples. Finding the z-score for the Stated Confidence Level When we know the population standard deviation $\sigma$, we use a standard normal distribution to calculate the error bound $EBM$ and construct the confidence interval. We need to find the value of $z$ that puts an area equal to the confidence level (in decimal form) in the middle of the standard normal distribution $Z \sim N(0, 1)$. The confidence level, $CL$, is the area in the middle of the standard normal distribution. $CL = 1 – \alpha$, so $\alpha$ is the area that is split equally between the two tails. Each of the tails contains an area equal to $\frac{\alpha}{2}$. The z-score that has an area to the right of $\frac{\alpha}{2}$ is denoted by $Z_{\frac{\alpha}{2}}$. For example, when $CL = 0.95$, $\alpha = 0.05$ and $\frac{\alpha}{2} = 0.025$; we write $Z_{\frac{\alpha}{2}} = Z_{0.025}$. The area to the right of $Z_{0.025}$ is 0.025 and the area to the left of $Z_{0.025}$ is $1 – 0.025 = 0.975$. $Z_{\frac{\alpha}{2}} = Z_{0.025} = 1.96$, using a standard normal probability table. We will see later that we can use a different probability table, the Student's t-distribution, for finding the number of standard deviations of commonly used levels of confidence. Calculating the Error Bound (EBM) The error bound formula for an unknown population mean $\mu$ when the population standard deviation $\sigma$ is known is • $E B M=\left(Z_{\frac{\alpha}{2}}\right)\left(\frac{\sigma}{\sqrt{n}}\right)$ Constructing the Confidence Interval • The confidence interval estimate has the format $(\overline{x}-E B M, \overline{x}+E B M)$ or the formula: $\overline{X}-Z_{\alpha}(\sigma / \sqrt{n}) \leq \mu \leq \overline{X}+Z_{\alpha}(\sigma / \sqrt{n})$ The graph gives a picture of the entire situation. $C L+\frac{\alpha}{2}+\frac{\alpha}{2}=C L+\alpha=1$. Example $1$ Suppose we are interested in the mean scores on an exam. A random sample of 36 scores is taken and gives a sample mean (sample mean score) of 68 ($\overline x = 68$). In this example we have the unusual knowledge that the population standard deviation is 3 points. Do not count on knowing the population parameters outside of textbook examples. Find a confidence interval estimate for the population mean exam score (the mean score on all exams). Find a 90% confidence interval for the true (population) mean of statistics exam scores. Answer Solution 8.1 • The solution is shown step-by-step. To find the confidence interval, you need the sample mean, $\overline x$, and the $EBM$. 
• $\overline x = 68$ • $EBM = \left(Z_{\frac{\alpha}{2}}\right)\left(\frac{\sigma}{\sqrt{n}}\right)$ • $\sigma = 3$; $n = 36$; The confidence level is 90% $(CL = 0.90)$ $CL = 0.90$ so $\alpha = 1 – CL = 1 – 0.90 = 0.10$ $\frac{\alpha}{2}=0.05, Z_{\frac{\alpha}{2}}=Z_{0.05}$ The area to the right of $Z_{0.05}$ is $0.05$ and the area to the left of $Z_{0.05}$ is $1 – 0.05 = 0.95$. $Z_{\frac{\alpha}{2}}=Z_{0.05}=1.645$ This can be found using a computer, or using a probability table for the standard normal distribution. Because the common levels of confidence in the social sciences are 90%, 95%, and 99%, it will not be long until you become familiar with the numbers 1.645, 1.96, and 2.576. $E B M=(1.645)\left(\frac{3}{\sqrt{36}}\right)=0.8225$ $\overline{x}-E B M=68-0.8225=67.1775$ $\overline{x}+E B M=68+0.8225=68.8225$ The 90% confidence interval is (67.1775, 68.8225). Interpretation We estimate with 90% confidence that the true population mean exam score for all statistics students is between 67.18 and 68.82. Example $2$ Suppose we change the original problem in Example $1$ by using a 95% confidence level. Find a 95% confidence interval for the true (population) mean statistics exam score. Answer Solution 8.2 $\mu=\overline{x} \pm Z_{\alpha}\left(\frac{\sigma}{\sqrt{n}}\right)\nonumber$ $\mu=68 \pm 1.96\left(\frac{3}{\sqrt{36}}\right)\nonumber$ $67.02 \leq \mu \leq 68.98\nonumber$ $\sigma = 3$; $n = 36$; The confidence level is 95% ($CL = 0.95$). $CL = 0.95$ so $\alpha = 1 – CL = 1 – 0.95 = 0.05$ $Z_{\frac{\alpha}{2}}=Z_{0.025}=1.96$ Notice that the $EBM$ is larger for a 95% confidence level in the original problem. Comparing the results The 90% confidence interval is (67.18, 68.82). The 95% confidence interval is (67.02, 68.98). The 95% confidence interval is wider. If you look at the graphs, because the area 0.95 is larger than the area 0.90, it makes sense that the 95% confidence interval is wider. To be more confident that the confidence interval actually does contain the true value of the population mean for all statistics exam scores, the confidence interval necessarily needs to be wider. This demonstrates a very important principle of confidence intervals. There is a trade-off between the level of confidence and the width of the interval. Our desire is to have a narrow confidence interval; very wide intervals provide little information that is useful. But we would also like to have a high level of confidence in our interval. This demonstrates that we cannot have both. Summary: Effect of Changing the Confidence Level • Increasing the confidence level makes the confidence interval wider. • Decreasing the confidence level makes the confidence interval narrower. And again here is the formula for a confidence interval for an unknown mean assuming we have the population standard deviation: $\overline{X}-Z_{\alpha}(\sigma / \sqrt{n}) \leq \mu \leq \overline{X}+Z_{\alpha}(\sigma / \sqrt{n})\nonumber$ The standard deviation of the sampling distribution was provided by the Central Limit Theorem as $\sigma / \sqrt{n}$. While we infrequently get to choose the sample size, it plays an important role in the confidence interval. Because the sample size is in the denominator of the equation, as $n$ increases it causes the standard deviation of the sampling distribution to decrease and thus the width of the confidence interval to decrease. We have met this before as we reviewed the effects of sample size on the Central Limit Theorem.
There we saw that as $n$ increases, the sampling distribution narrows until, in the limit, it collapses on the true population mean. Example $3$ Suppose we change the original problem in Example $1$ to see what happens to the confidence interval if the sample size is changed. Leave everything the same except the sample size. Use the original 90% confidence level. What happens to the confidence interval if we increase the sample size and use $n = 100$ instead of $n = 36$? What happens if we decrease the sample size to $n = 25$ instead of $n = 36$? Answer Solution 8.3 Solution A $\mu=\overline{x} \pm Z_{\alpha}\left(\frac{\sigma}{\sqrt{n}}\right)$ $\mu=68 \pm 1.645\left(\frac{3}{\sqrt{100}}\right)$ $67.5065 \leq \mu \leq 68.4935$ If we increase the sample size $n$ to 100, we decrease the width of the confidence interval relative to the original sample size of 36 observations. Answer Solution 8.3 Solution B $\mu=\overline{x} \pm Z_{\alpha}\left(\frac{\sigma}{\sqrt{n}}\right)$ $\mu=68 \pm 1.645\left(\frac{3}{\sqrt{25}}\right)$ $67.013 \leq \mu \leq 68.987$ If we decrease the sample size $n$ to 25, we increase the width of the confidence interval by comparison to the original sample size of 36 observations. Summary: Effect of Changing the Sample Size • Increasing the sample size makes the confidence interval narrower. • Decreasing the sample size makes the confidence interval wider. We have already seen this effect when we reviewed the effects of changing the size of the sample, n, on the Central Limit Theorem. See Figure $7$ to see this effect. Earlier we saw that as the sample size increases, the standard deviation of the sampling distribution decreases. This is why we prefer the sample mean from a large sample to one from a small sample, all other things held constant. Thus far we assumed that we knew the population standard deviation. This will virtually never be the case. We will have the sample standard deviation, s, however. This is a point estimate for the population standard deviation and can be substituted into the formula for confidence intervals for a mean under certain circumstances. We just saw the effect the sample size has on the width of the confidence interval and the impact on the sampling distribution for our discussion of the Central Limit Theorem. We can invoke this to substitute the point estimate for the standard deviation if the sample size is large "enough". Simulation studies indicate that 30 observations or more will be sufficient to eliminate any meaningful bias in the estimated confidence interval. Example $4$ Spring break can be a very expensive holiday. A sample of 80 students is surveyed, and the average amount spent by students on travel and beverages is $593.84. The sample standard deviation is approximately $369.34. Construct a 92% confidence interval for the population mean amount of money spent by spring breakers. Answer Solution 8.4 We begin with the confidence interval for a mean. We use the formula for a mean because the random variable is dollars spent and this is a continuous random variable. The point estimate for the population standard deviation, s, has been substituted for the true population standard deviation because with 80 observations there is no concern for bias in the estimate of the confidence interval.
$\mu=\overline{x} \pm\left[Z_{(\alpha / 2)} \frac{s}{\sqrt{n}}\right]\nonumber$ Substituting the values into the formula, we have: $\mu=593.84 \pm\left[1.75 \frac{369.34}{\sqrt{80}}\right]\nonumber$ $Z_{(\alpha / 2)}$ is found on the standard normal table by looking up 0.46 in the body of the table and finding the number of standard deviations on the side and top of the table; 1.75. The solution for the interval is thus: $\mu=593.84 \pm 72.2636=(521.57,666.10)\nonumber$ $\$521.57 \leq \mu \leq \$666.10\nonumber$ Formula Review The general form for a confidence interval for a single population mean, known standard deviation, normal distribution is given by $\overline{X}-Z_{\alpha}(\sigma / \sqrt{n}) \leq \mu \leq \overline{X}+Z_{\alpha}(\sigma / \sqrt{n})$ This formula is used when the population standard deviation is known. $CL$ = confidence level, or the proportion of confidence intervals created that are expected to contain the true population parameter $\alpha = 1 – CL$ = the proportion of confidence intervals that will not contain the population parameter $z_{\frac{\alpha}{2}}$ = the z-score with the property that the area to the right of the z-score is $\frac{\alpha}{2}$; this is the z-score used in the calculation of $EBM$, where $\alpha = 1 – CL$.
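A sketch that reproduces the spring-break calculation above, treating $s = 369.34$ as the point estimate for $\sigma$ exactly as the text does; the only difference is that software returns the unrounded z-value rather than the table's 1.75.

```python
from math import sqrt
from scipy.stats import norm

x_bar, s, n = 593.84, 369.34, 80
cl = 0.92
z = norm.ppf(1 - (1 - cl) / 2)       # ≈ 1.7507; the table lookup rounds this to 1.75
ebm = z * s / sqrt(n)                # ≈ 72.3
print(round(x_bar - ebm, 2), round(x_bar + ebm, 2))
# ≈ (521.55, 666.13); the text reports (521.57, 666.10) using z rounded to 1.75
```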
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/08%3A_Confidence_Intervals/8.01%3A_A_Confidence_Interval_for_a_Population_Standard_Deviation_Known_or_Large_Sample_Size.txt
In practice, we rarely know the population standard deviation. In the past, when the sample size was large, this did not present a problem to statisticians. They used the sample standard deviation $s$ as an estimate for $\sigma$ and proceeded as before to calculate a confidence interval with close enough results. This is what we did in Example $4$ above. The point estimate for the population standard deviation, $s$, was substituted for $\sigma$ in the formula for the confidence interval. In that case there were 80 observations, well above the suggested 30 observations needed to eliminate any bias from a small sample. However, statisticians ran into problems when the sample size was small. A small sample size caused inaccuracies in the confidence interval. William S. Gosset (1876–1937) of the Guinness brewery in Dublin, Ireland ran into this problem. His experiments with hops and barley produced very few samples. Just replacing $\sigma$ with $s$ did not produce accurate results when he tried to calculate a confidence interval. He realized that he could not use a normal distribution for the calculation; he found that the actual distribution depends on the sample size. This problem led him to "discover" what is called the Student's t-distribution. The name comes from the fact that Gosset wrote under the pen name "A Student." Up until the mid-1970s, some statisticians used the normal distribution approximation for large sample sizes and used the Student's t-distribution only for sample sizes of at most 30 observations. If you draw a simple random sample of size $n$ from a population with mean $\mu$ and unknown population standard deviation $\sigma$ and calculate the t-score $t=\frac{\overline{x}-\mu}{\left(\frac{s}{\sqrt{n}}\right)}$ then the t-scores follow a Student's t-distribution with $\bf{n – 1}$ degrees of freedom. The t-score has the same interpretation as the z-score. It measures how far in standard deviation units $\overline x$ is from its mean $\mu$. For each sample size $n$, there is a different Student's t-distribution. The degrees of freedom, $\bf{n – 1}$, come from the calculation of the sample standard deviation $\bf{s}$. Remember when we first calculated a sample standard deviation we divided the sum of the squared deviations by $n – 1$, but we used $n$ deviations ($x - \overline x$ values) to calculate $\bf{s}$. Because the sum of the deviations is zero, we can find the last deviation once we know the other $\bf{n – 1}$ deviations. The other $\bf{n – 1}$ deviations can change or vary freely. We call the number $\bf{n – 1}$ the degrees of freedom ($df$) in recognition that one is lost in the calculations. The effect of losing a degree of freedom is that the t-value increases and the confidence interval increases in width. Properties of the Student's t-Distribution • The graph for the Student's t-distribution is similar to the standard normal curve and at infinite degrees of freedom it is the normal distribution. You can confirm this by reading the bottom line at infinite degrees of freedom for a familiar level of confidence, e.g. at column 0.05, 95% level of confidence, we find the t-value of 1.96 at infinite degrees of freedom. • The mean for the Student's t-distribution is zero and the distribution is symmetric about zero, again like the standard normal distribution. • The Student's t-distribution has more probability in its tails than the standard normal distribution because the spread of the t-distribution is greater than the spread of the standard normal. 
So the graph of the Student's t-distribution will be thicker in the tails and shorter in the center than the graph of the standard normal distribution. • The exact shape of the Student's t-distribution depends on the degrees of freedom. As the degrees of freedom increase, the graph of the Student's t-distribution becomes more like the graph of the standard normal distribution. • The underlying population of individual observations is assumed to be normally distributed with unknown population mean $\mu$ and unknown population standard deviation $\sigma$. This assumption comes from the Central Limit theorem because the individual observations in this case are the $\overline x$s of the sampling distribution. The size of the underlying population is generally not relevant unless it is very small. If it is normal then the assumption is met and doesn't need discussion. A probability table for the Student's t-distribution is used to calculate t-values at various commonly-used levels of confidence. The table gives t-scores that correspond to the confidence level (column) and degrees of freedom (row). When using a t-table, note that some tables are formatted to show the confidence level in the column headings, while the column headings in some tables may show only the corresponding area in one or both tails. Notice that at the bottom the table will show the t-value for infinite degrees of freedom. Mathematically, as the degrees of freedom increase, the $t$ distribution approaches the standard normal distribution. You can find familiar Z-values by looking in the relevant alpha column and reading the value in the last row. A Student's t table (Table $6$) gives t-scores given the degrees of freedom and the right-tailed probability. The Student's t distribution has one of the most desirable properties of the normal: it is symmetrical. What the Student's t distribution does is spread out the horizontal axis so it takes a larger number of standard deviations to capture the same amount of probability. In reality there are an infinite number of Student's t distributions, one for each adjustment to the sample size. As the sample size increases, the Student's t distribution becomes more and more like the normal distribution. When the sample size reaches 30 the normal distribution is usually substituted for the Student's t because they are so much alike. This relationship between the Student's t distribution and the normal distribution is shown in Figure $8$. This is another example of one distribution limiting another one; in this case the normal distribution is the limiting distribution of the Student's t when the degrees of freedom in the Student's t approach infinity. This conclusion comes directly from the derivation of the Student's t distribution by Mr. Gosset. He recognized the problem as having few observations and no estimate of the population standard deviation. He was substituting the sample standard deviation and getting volatile results. He therefore created the Student's t distribution as a ratio of the normal distribution and the Chi squared distribution. The Chi squared distribution is itself a ratio of two variances, in this case the sample variance and the unknown population variance. The Student's t distribution thus is tied to the normal distribution, but has degrees of freedom that come from those of the Chi squared distribution. The algebraic solution demonstrates this result. Development of Student's t-distribution: 1. 
$t=\frac{z}{\sqrt{\frac{\chi^{2}}{v}}}$ Where $Z$ is the standard normal distribution and $\chi^2$ is the chi-squared distribution with $v$ degrees of freedom. 2. $t=\frac{\frac{(\overline x-\mu)}{\sigma}}{\sqrt{\frac{\frac{s^{2}}{(n-1)}}{\frac{\sigma^{2}}{(n-1)}}}}$ by substitution, and thus Student's t with $v = n − 1$ degrees of freedom is: 3. $t=\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}$ Restating the formula for a confidence interval for the mean for cases when the sample size is smaller than 30 and we do not know the population standard deviation, $\sigma$: $\overline{x}-t_{\nu, \alpha}\left(\frac{s}{\sqrt{n}}\right) \leq \mu \leq \overline{x}+t_{\nu, \alpha}\left(\frac{s}{\sqrt{n}}\right)\nonumber$ Here the point estimate of the population standard deviation, $s$, has been substituted for the population standard deviation, $\sigma$, and $t_{\nu, \alpha}$ has been substituted for $Z_{\alpha}$. The Greek letter $\nu$ (pronounced nu) is placed in the general formula in recognition that there are many Student $t_{\nu}$ distributions, one for each sample size. $\nu$ is the symbol for the degrees of freedom of the distribution and depends on the size of the sample. Often df is used to abbreviate degrees of freedom. For this type of problem, the degrees of freedom is $\nu = n-1$, where $n$ is the sample size. To look up a probability in the Student's t table we have to know the degrees of freedom in the problem. Example $1$ The average earnings per share (EPS) for 10 industrial stocks randomly selected from those listed on the Dow-Jones Industrial Average was found to be $\overline X = 1.85$ with a standard deviation of $s=0.395$. Calculate a 99% confidence interval for the average EPS of all the industrials listed on the $DJIA$. $\overline{x}-t_{v, \alpha}\left(\frac{s}{\sqrt{n}}\right) \leq \mu \leq \overline{x}+t_{\nu, \alpha}\left(\frac{s}{\sqrt{n}}\right)\nonumber$ Answer To help visualize the process of calculating a confidence interval we draw the appropriate distribution for the problem. In this case this is the Student’s t because we do not know the population standard deviation and the sample is small, less than 30. To find the appropriate t-value requires two pieces of information: the level of confidence desired and the degrees of freedom. The question asked for a 99% confidence level. On the graph this is shown where $(1-\alpha)$, the level of confidence, is in the unshaded area. The tails, thus, have .005 probability each, $\alpha/2$. The degrees of freedom for this type of problem is $n-1= 9$. From the Student’s t table, at the row marked 9 and column marked .005, is the number of standard deviations to capture 99% of the probability, 3.2498. These are then placed on the graph remembering that the Student’s $t$ is symmetrical and so the t-value is applied both plus and minus on each side of the mean. Inserting these values into the formula gives the result. These values can be placed on the graph to see the relationship between the distribution of the sample means, $\overline X$'s and the Student’s t distribution. $\mu=\overline{X} \pm t_{\alpha / 2, \mathrm{df}=n-1} \frac{s}{\sqrt{n}}=1.851 \pm 3.2498 \frac{0.395}{\sqrt{10}}=1.851 \pm 0.406\nonumber$ $1.445 \leq \mu \leq 2.257\nonumber$ We state the formal conclusion as: With a 99% confidence level, the average $EPS$ of all the industrials listed on the $DJIA$ is from $1.44 to $2.26. Exercise $2$ You do a study of hypnotherapy to determine how effective it is in increasing the number of hours of sleep subjects get each night. 
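A sketch of the EPS calculation above using scipy's Student's t quantile function (99% confidence, df = 9); the sample mean is taken as the stated $\overline X = 1.85$.

```python
from math import sqrt
from scipy.stats import t

x_bar, s, n = 1.85, 0.395, 10
cl = 0.99
t_crit = t.ppf(1 - (1 - cl) / 2, df=n - 1)     # ≈ 3.2498 for 9 degrees of freedom
ebm = t_crit * s / sqrt(n)                     # ≈ 0.406
print(round(x_bar - ebm, 3), round(x_bar + ebm, 3))
# ≈ (1.444, 2.256); the text carries x̄ = 1.851 through the formula and reports (1.445, 2.257)
```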
You measure hours of sleep for 12 subjects with the following results. Construct a 95% confidence interval for the mean number of hours slept for the population (assumed normal) from which you took the data. 8.2; 9.1; 7.7; 8.6; 6.9; 11.2; 10.1; 9.9; 8.9; 9.2; 7.5; 10.5
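The arithmetic in Example $1$ can be checked in a few lines of code. The sketch below is not part of the original text: it assumes SciPy is available, the variable names are illustrative only, and it simply re-does the Example $1$ calculation (the same pattern, with 11 degrees of freedom and the sleep data, applies to Exercise $2$).

```python
# A minimal sketch (not from the text) of the t-interval arithmetic in Example 1.
# Assumes SciPy is installed; x_bar, s, and n are the values stated in the example.
from math import sqrt
from scipy import stats

x_bar, s, n = 1.85, 0.395, 10                   # sample mean, sample std. deviation, sample size
alpha = 1 - 0.99                                # for a 99% confidence level

t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # t-value with alpha/2 in the upper tail, ~3.2498
margin = t_crit * s / sqrt(n)                   # error bound for the mean

print(round(x_bar - margin, 3), round(x_bar + margin, 3))   # approximately 1.444 and 2.256
```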
During an election year, we see articles in the newspaper that state confidence intervals in terms of proportions or percentages. For example, a poll for a particular candidate running for president might show that the candidate has 40% of the vote within three percentage points (if the sample is large enough). Often, election polls are calculated with 95% confidence, so, the pollsters would be 95% confident that the true proportion of voters who favored the candidate would be between 0.37 and 0.43. Investors in the stock market are interested in the true proportion of stocks that go up and down each week. Businesses that sell personal computers are interested in the proportion of households in the United States that own personal computers. Confidence intervals can be calculated for the true proportion of stocks that go up or down each week and for the true proportion of households in the United States that own personal computers. The procedure to find the confidence interval for a population proportion is similar to that for the population mean, but the formulas are a bit different although conceptually identical. While the formulas are different, they are based upon the same mathematical foundation given to us by the Central Limit Theorem. Because of this we will see the same basic format using the same three pieces of information: the sample value of the parameter in question, the standard deviation of the relevant sampling distribution, and the number of standard deviations we need to have the confidence in our estimate that we desire. How do you know you are dealing with a proportion problem? First, the underlying distribution has a binary random variable and therefore is a binomial distribution. (There is no mention of a mean or average.) If $X$ is a binomial random variable, then $X \sim B(n, p)$ where $n$ is the number of trials and $p$ is the probability of a success. To form a sample proportion, take $X$, the random variable for the number of successes and divide it by $n$, the number of trials (or the sample size). The random variable $P^{\prime}$ (read "P prime") is the sample proportion, $P^{\prime}=\frac{X}{n} \nonumber$ (Sometimes the random variable is denoted as $\hat{P}$, read "P hat".) • $P^{\prime}$ = the estimated proportion of successes or sample proportion of successes ($P^{\prime}$ is a point estimate for $p$, the true population proportion, and thus $q$ is the probability of a failure in any one trial.) • $x$ = the number of successes in the sample • $n$ = the size of the sample The formula for the confidence interval for a population proportion follows the same format as that for an estimate of a population mean. Remembering the sampling distribution for the proportion from Chapter 7, the standard deviation was found to be: $\sigma_{\mathrm{p}^{\prime}}=\sqrt{\frac{p(1-p)}{n}}\nonumber$ The confidence interval for a population proportion, therefore, becomes: $p=p^{\prime} \pm\left[Z_{\left(\frac{a}{2}\right)} \sqrt{\frac{p^{\prime}\left(1-p^{\prime}\right)}{n}}\right]\nonumber$ $Z_{\left(\frac{a}{2}\right)}$ is set according to our desired degree of confidence and $\sqrt{\frac{p^{\prime}\left(1-p^{\prime}\right)}{n}}$ is the standard deviation of the sampling distribution. The sample proportions $\bf{p^{\prime}}$ and $\bf{q^{\prime}}$ are estimates of the unknown population proportions $\bf{p}$ and $\bf{q}$. The estimated proportions $p^{\prime}$ and $q^{\prime}$ are used because $p$ and $q$ are not known. 
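The standard deviation of the sampling distribution quoted above, $\sigma_{p^{\prime}}=\sqrt{\frac{p(1-p)}{n}}$, can be checked by simulation. The short sketch below is not part of the original text; it assumes NumPy is available, and the population proportion $p = 0.30$ and sample size $n = 200$ are hypothetical values chosen only for illustration.

```python
# A quick sketch (not from the text) checking sigma_p' = sqrt(p(1-p)/n) by simulation.
# The values p = 0.30 and n = 200 are assumed for illustration; requires NumPy.
import numpy as np

rng = np.random.default_rng(seed=7)
p, n = 0.30, 200

p_primes = rng.binomial(n, p, size=50_000) / n     # many simulated sample proportions x/n
print(round(p_primes.std(), 4))                    # simulated spread of the sample proportions
print(round(np.sqrt(p * (1 - p) / n), 4))          # theoretical value, about 0.0324
```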
Remember that as $p$ moves further from 0.5 the binomial distribution becomes less symmetrical. Because we are estimating the binomial with the symmetrical normal distribution the further away from symmetrical the binomial becomes the less confidence we have in the estimate. This conclusion can be demonstrated through the following analysis. Proportions are based upon the binomial probability distribution. The possible outcomes are binary, either “success” or “failure”. This gives rise to a proportion, meaning the percentage of the outcomes that are “successes”. It was shown that the binomial distribution could be fully understood if we knew only the probability of a success in any one trial, called $p$. The mean and the standard deviation of the binomial were found to be: $\mu=\mathrm{np}\nonumber$ $\sigma=\sqrt{npq}\nonumber$ It was also shown that the binomial could be estimated by the normal distribution if BOTH $np$ AND $nq$ were greater than 5. From the discussion above, it was found that the standardizing formula for the binomial distribution is: $Z=\frac{\mathrm{p}^{\prime}-p}{\sqrt{\left(\frac{p q}{n}\right)}}\nonumber$ which is nothing more than a restatement of the general standardizing formula with appropriate substitutions for $\mu$ and $\sigma$ from the binomial. We can use the standard normal distribution, the reason $Z$ is in the equation, because the normal distribution is the limiting distribution of the binomial. This is another example of the Central Limit Theorem. We have already seen that the sampling distribution of means is normally distributed. Recall the extended discussion in Chapter 7 concerning the sampling distribution of proportions and the conclusions of the Central Limit Theorem. We can now manipulate this formula in just the same way we did for finding the confidence intervals for a mean, but to find the confidence interval for the binomial population parameter, $p$. $\mathrm{p}^{\prime}-Z_{\alpha} \sqrt{\frac{\mathrm{p}^{\prime} \mathrm{q}^{\prime}}{n}} \leq p \leq \mathrm{p}^{\prime}+Z_{\alpha} \sqrt{\frac{\mathrm{p}^{\prime} \mathrm{q}^{\prime}}{n}}\nonumber$ Where $p^{\prime} = x/n$, the point estimate of $p$ taken from the sample. Notice that $p^{\prime}$ has replaced $p$ in the formula. This is because we do not know $p$, indeed, this is just what we are trying to estimate. Unfortunately, there is no correction factor for cases where the sample size is small so $np^{\prime}$ and $nq^{\prime}$ must always be greater than 5 to develop an interval estimate for $p$. Example $1$ Suppose that a market research firm is hired to estimate the percent of adults living in a large city who have cell phones. Five hundred randomly selected adult residents in this city are surveyed to determine whether they have cell phones. Of the 500 people sampled, 421 responded yes - they own cell phones. Using a 95% confidence level, compute a confidence interval estimate for the true proportion of adult residents of this city who have cell phones. Answer The solution step-by-step. Let $X$ = the number of people in the sample who have cell phones. $X$ is binomial: the random variable is binary, people either have a cell phone or they do not. To calculate the confidence interval, we must find $p^{\prime}, q^{\prime}$. $n = 500$ $x=\text { the number of successes in the sample }=421$ $p^{\prime}=\frac{x}{n}=\frac{421}{500}=0.842$ $p^{\prime}=0.842$ is the sample proportion; this is the point estimate of the population proportion. 
$q^{\prime}=1-p^{\prime}=1-0.842=0.158$

Since the requested confidence level is $CL = 0.95$, then $\alpha=1-CL=1-0.95=0.05$ and $\frac{\alpha}{2}=0.025$.

Then $z_{\frac{\alpha}{2}}=z_{0.025}=1.96$. This can be found using the Standard Normal probability table in Table $6$. This can also be found in the Student's t table at the 0.025 column and infinite degrees of freedom because at infinite degrees of freedom the Student's t distribution becomes the standard normal distribution, $Z$.

The confidence interval for the true binomial population proportion is

$\mathrm{p}^{\prime}-Z_{\alpha} \sqrt{\frac{\mathrm{p}^{\prime} \mathrm{q}^{\prime}}{n}} \leq p \leq \mathrm{p}^{\prime}+Z_{\alpha} \sqrt{\frac{\mathrm{p}^{\prime} \mathrm{q}^{\prime}}{n}}\nonumber$

$\text{Substituting in the values from above we find the confidence interval is: } 0.810 \leq p \leq 0.874$

Interpretation

We estimate with 95% confidence that between 81% and 87.4% of all adult residents of this city have cell phones.

Explanation of 95% Confidence Level

Ninety-five percent of the confidence intervals constructed in this way would contain the true value for the population proportion of all adult residents of this city who have cell phones.

Exercise $1$

Suppose 250 randomly selected people are surveyed to determine if they own a tablet. Of the 250 surveyed, 98 reported owning a tablet. Using a 95% confidence level, compute a confidence interval estimate for the true proportion of people who own tablets.

Example $2$

The Dundee Dog Training School has a larger than average proportion of clients who compete in competitive professional events. A confidence interval for the population proportion of dogs that compete in professional events from 150 different training schools is constructed. The lower limit is determined to be 0.08 and the upper limit is determined to be 0.16. Determine the level of confidence used to construct the interval of the population proportion of dogs that compete in professional events.

Answer

We begin with the formula for a confidence interval for a proportion because the random variable is binary; either the client competes in professional competitive dog events or they don't.

$p=p^{\prime} \pm\left[Z_{\left(\frac{\alpha}{2}\right)} \sqrt{\frac{p^{\prime}\left(1-p^{\prime}\right)}{n}}\right]\nonumber$

Next we find the sample proportion:

$p^{\prime}=\frac{0.08+0.16}{2}=0.12\nonumber$

The margin of error (the $\pm$ term) that makes up the confidence interval is thus 0.04: $0.12 + 0.04 = 0.16$ and $0.12 - 0.04 = 0.08$, the boundaries of the confidence interval. Finally, we solve for $Z$.

$\left[Z \cdot \sqrt{\frac{0.12(1-0.12)}{150}}\right]=0.04, \textbf { therefore } \bf{z=1.51}$

From the standard normal table, the area between the mean and $z = 1.51$ is 0.4345. Doubling this area to account for both sides of the mean gives $0.4345 \cdot 2=0.8690$, or an 86.90% level of confidence.

Example $3$

A financial officer for a company wants to estimate the percent of accounts receivable that are more than 30 days overdue. He surveys 500 accounts and finds that 300 are more than 30 days overdue. Compute a 90% confidence interval for the true percent of accounts receivable that are more than 30 days overdue, and interpret the confidence interval.

Answer

The solution is step-by-step:

$x = 300$ and $n = 500$

$p^{\prime}=\frac{x}{n}=\frac{300}{500}=0.600$

$q^{\prime}=1-p^{\prime}=1-0.600=0.400$

Since the confidence level is $CL = 0.90$, then $\alpha=1-CL=1-0.90=0.10$ and $\frac{\alpha}{2}=0.05$.

$Z_{\frac{\alpha}{2}}=Z_{0.05}=1.645$

This Z-value can be found using a standard normal probability table. The Student's t table can also be used by entering the table at the 0.05 column and reading at the line for infinite degrees of freedom. The t-distribution is the normal distribution at infinite degrees of freedom. This is a handy trick to remember in finding Z-values for commonly used levels of confidence.

We use this formula for a confidence interval for a proportion:

$\mathrm{p}^{\prime}-Z_{\alpha} \sqrt{\frac{\mathrm{p}^{\prime} \mathrm{q}^{\prime}}{n}} \leq p \leq \mathrm{p}^{\prime}+Z_{\alpha} \sqrt{\frac{\mathrm{p}^{\prime} \mathrm{q}^{\prime}}{n}}\nonumber$

Substituting in the values from above we find the confidence interval for the true binomial population proportion is $0.564 \leq p \leq 0.636$

Interpretation

We estimate with 90% confidence that the true percent of all accounts receivable overdue 30 days is between 56.4% and 63.6%.

Alternate Wording: We estimate with 90% confidence that between 56.4% and 63.6% of ALL accounts are overdue 30 days.

Explanation of 90% Confidence Level

Ninety percent of all confidence intervals constructed in this way contain the true value for the population percent of accounts receivable that are overdue 30 days.

Exercise $2$

A student polls her school to see if students in the school district are for or against the new legislation regarding school uniforms. She surveys 600 students and finds that 480 are against the new legislation.

1. Compute a 90% confidence interval for the true percent of students who are against the new legislation, and interpret the confidence interval.
2. In a sample of 300 students, 68% said they own an iPod and a smart phone. Compute a 97% confidence interval for the true percent of students who own an iPod and a smartphone.
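The cell-phone interval in Example $1$ can be reproduced with a few lines of code. The sketch below is not part of the original text and assumes SciPy is available; it uses only the $x = 421$ successes out of $n = 500$ given in that example.

```python
# A short sketch (not from the text) reproducing the 95% interval from Example 1.
# Assumes SciPy is installed.
from math import sqrt
from scipy import stats

x, n = 421, 500
p_prime = x / n                        # sample proportion, 0.842
q_prime = 1 - p_prime
z = stats.norm.ppf(1 - 0.05 / 2)       # 1.96 for a 95% confidence level

margin = z * sqrt(p_prime * q_prime / n)
print(round(p_prime - margin, 3), round(p_prime + margin, 3))   # approximately 0.810 and 0.874
```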
Continuous Random Variables

Usually we have no control over the sample size of a data set. However, if we are able to set the sample size, as in cases where we are taking a survey, it is very helpful to know just how large it should be to provide the most information. Sampling can be very costly in both time and product. Simple telephone surveys will cost approximately \$30.00 each, for example, and some sampling requires the destruction of the product.

If we go back to our standardizing formula for the sampling distribution for means, we can see that it is possible to solve it for $n$. If we do this we have $(\overline{X}-\mu)$ in the denominator.

$n=\frac{Z_{\alpha}^{2} \sigma^{2}}{(\overline{X}-\mu)^{2}}=\frac{Z_{\alpha}^{2} \sigma^{2}}{e^{2}}\nonumber$

Because we have not taken a sample yet we do not know any of the variables in the formula except that we can set $Z_{\alpha}$ to the level of confidence we desire just as we did when determining confidence intervals. If we set a predetermined acceptable error, or tolerance, for the difference between $\overline{X}$ and $\mu$, called $e$ in the formula, we are much further along in solving for the sample size $n$. We still do not know the population standard deviation, $\sigma$. In practice, a pre-survey is usually done which allows for fine tuning the questionnaire and will give a sample standard deviation that can be used. In other cases, previous information from other surveys may be used for $\sigma$ in the formula. While crude, this method of determining the sample size may help in reducing cost significantly. It will be the actual data gathered that determines the inferences about the population, so caution in setting the sample size is appropriate, calling for high levels of confidence and small sampling errors.

Binary Random Variables

What was done in cases when looking for the mean of a distribution can also be done when sampling to determine the population parameter $p$ for proportions. Manipulation of the standardizing formula for proportions gives:

$n=\frac{Z_{\alpha}^{2} \mathrm{pq}}{e^{2}}\nonumber$

where $e=\left(p^{\prime}-p\right)$, and is the acceptable sampling error, or tolerance, for this application. This will be measured in percentage points.

In this case the very object of our search is in the formula, $p$, and of course $q$ because $q =1-p$. This result occurs because the binomial distribution is a one parameter distribution. If we know $p$ then we know the mean and the standard deviation. Therefore, $p$ shows up in the standard deviation of the sampling distribution which is where we got this formula. If, in an abundance of caution, we substitute 0.5 for $p$ we will draw the largest required sample size that will provide the level of confidence specified by $Z_{\alpha}$ and the tolerance we have selected. This is true because of all combinations of two fractions that add to one, the largest product is when each is 0.5. Without any other information concerning the population parameter $p$, this is the common practice. This may result in oversampling, but certainly not under sampling, thus, this is a cautious approach.

There is an interesting trade-off between the level of confidence and the sample size that shows up here when considering the cost of sampling. Table $1$ shows the appropriate sample size at different levels of confidence and different levels of the acceptable error, or tolerance.
Required sample size (90%)   Required sample size (95%)   Tolerance level
1691                         2401                         2%
752                          1067                         3%
271                          384                          5%
68                           96                           10%

Table $1$

This table is designed to show the maximum sample size required at different levels of confidence given an assumed $p= 0.5$ and $q=0.5$ as discussed above.

The acceptable error, called tolerance in the table, is measured in plus or minus values from the actual proportion. For example, an acceptable error of 5% means that if the sample proportion was found to be 26 percent, the conclusion would be that the actual population proportion is between 21 and 31 percent with a 90 percent level of confidence if a sample of 271 had been taken. Likewise, if the acceptable error was set at 2%, then the population proportion would be between 24 and 28 percent with a 90 percent level of confidence, but would require that the sample size be increased from 271 to 1,691. If we wished a higher level of confidence, we would require a larger sample size. Moving from a 90 percent level of confidence to a 95 percent level at a plus or minus 5% tolerance requires changing the sample size from 271 to 384. A very common sample size often seen reported in political surveys is 384. With the survey results it is frequently stated that the results are good to a plus or minus 5% level of “accuracy”.

Example $9$

Suppose a mobile phone company wants to determine the current percentage of customers aged 50+ who use text messaging on their cell phones. How many customers aged 50+ should the company survey in order to be 90% confident that the estimated (sample) proportion is within three percentage points of the true population proportion of customers aged 50+ who use text messaging on their cell phones?

Answer

From the problem, we know that the acceptable error, $e$, is 0.03 (3% = 0.03) and $z_{\frac{\alpha}{2}} = z_{0.05}=1.645$ because the confidence level is 90%. The acceptable error, $e$, is the difference between the actual population proportion $p$ and the sample proportion we expect to get from the sample.

However, in order to find $n$, we need to know the estimated (sample) proportion $p^{\prime}$. Remember that $q^{\prime} = 1 - p^{\prime}$. But, we do not know $p^{\prime}$ yet. Since we multiply $p^{\prime}$ and $q^{\prime}$ together, we make them both equal to 0.5 because $p^{\prime}q^{\prime} = (0.5)(0.5) = 0.25$ results in the largest possible product. (Try other products: $(0.6)(0.4) = 0.24; (0.3)(0.7) = 0.21; (0.2)(0.8) = 0.16$ and so on). The largest possible product gives us the largest $n$. This gives us a large enough sample so that we can be 90% confident that we are within three percentage points of the true population proportion. To calculate the sample size $n$, use the formula and make the substitutions.

$n=\frac{z^{2} p^{\prime} q^{\prime}}{e^{2}} \text { gives } n=\frac{1.645^{2}(0.5)(0.5)}{0.03^{2}}=751.7$

Round the answer to the next higher value. The sample size should be 752 cell phone customers aged 50+ in order to be 90% confident that the estimated (sample) proportion is within three percentage points of the true population proportion of all customers aged 50+ who use text messaging on their cell phones.

Exercise $9$

Suppose an internet marketing company wants to determine the current percentage of customers who click on ads on their smartphones.
How many customers should the company survey in order to be 90% confident that the estimated proportion is within five percentage points of the true population proportion of customers who click on ads on their smartphones?

8.05: Chapter Formula Review

A Confidence Interval for a Population Standard Deviation Unknown, Small Sample Case

$s$ = the standard deviation of sample values.

$t=\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}$ is the formula for the t-score which measures how far away a measure is from the population mean in the Student’s t-distribution

$df = n - 1$; the degrees of freedom for a Student’s t-distribution where $n$ represents the size of the sample

$T \sim t_{df}$ the random variable, $T$, has a Student’s t-distribution with $df$ degrees of freedom

The general form for a confidence interval for a single mean, population standard deviation unknown, and sample size less than 30, using the Student's t, is given by:

$\overline{x}-t_{\nu, \alpha}\left(\frac{s}{\sqrt{n}}\right) \leq \mu \leq \overline{x}+t_{\nu, \alpha}\left(\frac{s}{\sqrt{n}}\right)$

A Confidence Interval for A Population Proportion

$p^{\prime}=\frac{x}{n}$ where $x$ represents the number of successes in a sample and $n$ represents the sample size. The variable $p^{\prime}$ is the sample proportion and serves as the point estimate for the true population proportion.

$q^{\prime}=1-p^{\prime}$

The variable $p^{\prime}$ has a binomial distribution that can be approximated with the normal distribution shown here. The confidence interval for the true population proportion is given by the formula:

$\mathrm{p}^{\prime}-Z_{\alpha} \sqrt{\frac{\mathrm{p}^{\prime} \mathrm{q}^{\prime}}{n}} \leq p \leq \mathrm{p}^{\prime}+Z_{\alpha} \sqrt{\frac{\mathrm{p}^{\prime} \mathrm{q}^{\prime}}{n}}$

$n=\frac{Z_{\frac{\alpha}{2}}^{2} p^{\prime} q^{\prime}}{e^{2}}$ provides the number of observations needed to sample to estimate the population proportion, $p$, with confidence $1 - \alpha$ and margin of error $e$, where $e$ = the acceptable difference between the actual population proportion and the sample proportion.

Calculating the Sample Size n: Continuous and Binary Random Variables

$n=\frac{Z^{2} \sigma^{2}}{(\overline{x}-\mu)^{2}}$ = the formula used to determine the sample size ($n$) needed to achieve a desired margin of error at a given level of confidence for a continuous random variable

$n=\frac{Z_{\alpha}^{2} \mathrm{pq}}{e^{2}}$ = the formula used to determine the sample size if the random variable is binary
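Both sample-size formulas in this review are easy to script. The sketch below is not part of the original text and assumes SciPy is available; the binary case uses the numbers from Example 9, while the continuous-case inputs ($\sigma = 15$, $e = 2$, 95% confidence) are assumed values chosen only for illustration.

```python
# A small sketch (not from the text) of the two sample-size formulas above.
# Assumes SciPy; sigma = 15 and e = 2 in the continuous case are illustrative values.
from math import ceil
from scipy import stats

def n_for_proportion(confidence, p, e):
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return ceil(z**2 * p * (1 - p) / e**2)       # always round up to the next whole observation

def n_for_mean(confidence, sigma, e):
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return ceil(z**2 * sigma**2 / e**2)

print(n_for_proportion(0.90, 0.5, 0.03))         # 752, matching Example 9
print(n_for_mean(0.95, 15, 2))                   # 217 with the assumed sigma and tolerance
```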
8.2 A Confidence Interval for a Population Standard Deviation Unknown, Small Sample Case

102. In six packages of “The Flintstones® Real Fruit Snacks” there were five Bam-Bam snack pieces. The total number of snack pieces in the six bags was 68. We wish to calculate a 96% confidence interval for the population proportion of Bam-Bam snack pieces.

1. The FEC has reported financial information for 556 Leadership PACs that were operating during the 2011–2012 election cycle. The following table shows the total receipts during this cycle for a random selection of 30 Leadership PACs.

\$46,500.00   \$0   \$40,966.50   \$105,887.20   \$5,175.00
\$29,050.00   \$19,500.00   \$181,557.20   \$31,500.00   \$149,970.80
\$2,555,363.20   \$12,025.00   \$409,000.00   \$60,521.70   \$18,000.00
\$61,810.20   \$76,530.80   \$119,459.20   \$0   \$63,520.00
\$6,500.00   \$502,578.00   \$705,061.10   \$708,258.90   \$135,810.00
\$2,000.00   \$2,000.00   \$0   \$1,287,933.80   \$219,148.30

Table $3$

$s$ = \$521,130.41

Use this sample data to construct a 95% confidence interval for the mean amount of money raised by all Leadership PACs during the 2011–2012 election cycle. Use the Student's t-distribution.

108. Forbes magazine published data on the best small firms in 2012. These were firms that had been publicly traded for at least a year, had a stock price of at least \$5 per share, and had reported annual revenue between \$5 million and \$1 billion. Table $4$ shows the ages of the corporate CEOs for a random sample of these firms.

48   58   51   61   56
59   74   63   53   50
59   60   60   57   46
55   63   57   47   55
57   43   61   62   49
67   67   55   55   49

Table $4$

Use this sample data to construct a 90% confidence interval for the mean age of CEOs for these top small firms. Use the Student's t-distribution.

109. Unoccupied seats on flights cause airlines to lose revenue. Suppose a large airline wants to estimate its mean number of unoccupied seats per flight over the past year. To accomplish this, the records of 225 flights are randomly selected and the number of unoccupied seats is noted for each of the sampled flights. The sample mean is 11.6 seats and the sample standard deviation is 4.1 seats.

1. Use the following information to answer the next two exercises: A quality control specialist for a restaurant chain takes a random sample of size 12 to check the amount of soda served in the 16 oz. serving size. The sample mean is 13.30 with a sample standard deviation of 1.55. Assume the underlying population is normally distributed.

113. Find the 95% confidence interval for the true population mean for the amount of soda served.
1. (12.42, 14.18)
2. (12.32, 14.29)
3. (12.50, 14.10)
4. Impossible to determine

8.3 A Confidence Interval for A Population Proportion

114. Insurance companies are interested in knowing the population percent of drivers who always buckle up before riding in a car.
1. When designing a study to determine this population proportion, what is the minimum number you would need to survey to be 95% confident that the population proportion is estimated to within 0.03?
2. If it were later determined that it was important to be more than 95% confident and a new survey was commissioned, how would that affect the minimum number you would need to survey? Why?

115. Suppose that the insurance companies did do a survey. They randomly surveyed 400 drivers and found that 320 claimed they always buckle up. We are interested in the population proportion of drivers who claim they always buckle up.
• $x$ = __________
• $n$ = __________
• $p^{\prime}$ = __________
1. Define the random variables $X$ and $P^{\prime}$, in words.
2. Which distribution should you use for this problem?
Explain your choice.
3. Construct a 95% confidence interval for the population proportion who claim they always buckle up.
• State the confidence interval.
• Sketch the graph.
4. If this survey were done by telephone, list three difficulties the companies might have in obtaining random results.

116. According to a recent survey of 1,200 people, 61% feel that the president is doing an acceptable job. We are interested in the population proportion of people who feel the president is doing an acceptable job.
1. Define the random variables $X$ and $P^{\prime}$ in words.
2. Which distribution should you use for this problem? Explain your choice.
3. Construct a 90% confidence interval for the population proportion of people who feel the president is doing an acceptable job.
• State the confidence interval.
• Sketch the graph.

117. An article regarding interracial dating and marriage recently appeared in the Washington Post. Of the 1,709 randomly selected adults, 315 identified themselves as Latinos, 323 identified themselves as blacks, 254 identified themselves as Asians, and 779 identified themselves as whites. In this survey, 86% of blacks said that they would welcome a white person into their families. Among Asians, 77% would welcome a white person into their families, 71% would welcome a Latino, and 66% would welcome a black person.
1. We are interested in finding the 95% confidence interval for the percent of all black adults who would welcome a white person into their families. Define the random variables $X$ and $P^{\prime}$, in words.
2. Which distribution should you use for this problem? Explain your choice.
3. Construct a 95% confidence interval.
• State the confidence interval.
• Sketch the graph.

118. Refer to Table $5$, which shows the total receipts from individuals for a random selection of 40 House candidates rounded to the nearest \$100. The standard deviation for this data to the nearest hundred is $\sigma$ = \$909,200.

\$3,600   \$1,243,900   \$10,900   \$385,200   \$581,500
\$7,400   \$2,900   \$400   \$3,714,500   \$632,500
\$391,000   \$467,400   \$56,800   \$5,800   \$405,200
\$733,200   \$8,000   \$468,700   \$75,200   \$41,000
\$13,300   \$9,500   \$953,800   \$1,113,500   \$1,109,300
\$353,900   \$986,100   \$88,600   \$378,200   \$13,200
\$3,800   \$745,100   \$5,800   \$3,072,100   \$1,626,700
\$512,900   \$2,309,200   \$6,600   \$202,400   \$15,800

Table $5$

1. Find the point estimate for the population mean.
2. Using 95% confidence, calculate the error bound.
3. Create a 95% confidence interval for the mean total individual contributions.
4. Interpret the confidence interval in the context of the problem.

137. The American Community Survey (ACS), part of the United States Census Bureau, conducts a yearly census similar to the one taken every ten years, but with a smaller percentage of participants. The most recent survey estimates with 90% confidence that the mean household income in the U.S. falls between \$69,720 and \$69,922. Find the point estimate for mean U.S. household income and the error bound for mean U.S. household income.

138. The average height of young adult males has a normal distribution with a standard deviation of 2.5 inches. You want to estimate the mean height of students at your college or university to within one inch with 93% confidence. How many male students must you measure?

139. If the confidence interval is changed to a higher probability, would this cause a lower, or a higher, minimum sample size?

140. If the tolerance is reduced by half, how would this affect the minimum sample size?

141.
If the value of $p$ is reduced, would this necessarily reduce the sample size needed?

142. Is it acceptable to use a higher sample size than the one calculated by $\frac{z^{2} p q}{e^{2}}$?

143. A company has been running an assembly line with 97.42% of the products made being acceptable. Then, a critical piece broke down. After the repairs the decision was made to see if the number of defective products made was still close enough to the long-standing production quality. Samples of 500 pieces were selected at random, and the defective rate was found to be 0.025%.
1. Is this sample size adequate to claim the company is checking within the 90% confidence interval?
2. The 95% confidence interval?

8.07: Chapter Key Terms

Binomial Distribution a discrete random variable (RV) which arises from Bernoulli trials; there are a fixed number, $n$, of independent trials. “Independent” means that the result of any trial (for example, trial 1) does not affect the results of the following trials, and all trials are conducted under the same conditions. Under these circumstances the binomial $RV$ $X$ is defined as the number of successes in $n$ trials. The notation is: $X \sim B(\bf{n,p})$. The mean is $\mu = np$ and the standard deviation is $\sigma=\sqrt{n p q}$. The probability of exactly $x$ successes in $n$ trials is $P(X=x)=\binom{n}{x} p^{x} q^{n-x}$.

Confidence Interval (CI) an interval estimate for an unknown population parameter. This depends on:
• the desired confidence level,
• information that is known about the distribution (for example, known standard deviation),
• the sample and its size.

Confidence Level (CL) the percent expression for the probability that the confidence interval contains the true population parameter; for example, if the CL = 90%, then in 90 out of 100 samples the interval estimate will enclose the true population parameter.

Degrees of Freedom (df) the number of objects in a sample that are free to vary

Error Bound for a Population Mean (EBM) the margin of error; depends on the confidence level, sample size, and known or estimated population standard deviation.

Error Bound for a Population Proportion (EBP) the margin of error; depends on the confidence level, the sample size, and the estimated (from the sample) proportion of successes.

Inferential Statistics also called statistical inference or inductive statistics; this facet of statistics deals with estimating a population parameter based on a sample statistic. For example, if four out of the 100 calculators sampled are defective we might infer that four percent of the production is defective.

Normal Distribution a continuous random variable (RV) with pdf $f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-(x-\mu)^{2} / 2 \sigma^{2}}$, where $\mu$ is the mean of the distribution and $\sigma$ is the standard deviation, notation: $X \sim N(\mu,\sigma)$. If $\mu = 0$ and $\sigma = 1$, the RV is called the standard normal distribution.

Parameter a numerical characteristic of a population

Point Estimate a single number computed from a sample and used to estimate a population parameter

Standard Deviation a number that is equal to the square root of the variance and measures how far data values are from their mean; notation: $s$ for sample standard deviation and $\sigma$ for population standard deviation

Student's t-Distribution investigated and reported by William S.
Gosset in 1908 and published under the pseudonym Student; the major characteristics of this random variable ($RV$) are:
• It is continuous and can assume any real value.
• The pdf is symmetrical about its mean of zero.
• It approaches the standard normal distribution as $n$ gets larger.
• There is a "family" of t–distributions: each representative of the family is completely defined by the number of degrees of freedom, which depends upon the application for which the t is being used.
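The third bullet can be made concrete numerically. The sketch below is not part of the original text and assumes SciPy is available; it prints the two-tailed 95% critical value of the Student's t for increasing degrees of freedom so the approach to the standard normal value of 1.96 is visible.

```python
# A quick sketch (not from the text): t critical values approach the normal value
# as the degrees of freedom grow. Assumes SciPy is installed.
from scipy import stats

for df in (5, 15, 30, 120, 10_000):
    print(df, round(stats.t.ppf(0.975, df), 4))   # two-tailed 95% critical value

print("normal", round(stats.norm.ppf(0.975), 4))  # 1.96, the limiting value
```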
8.2 A Confidence Interval for a Population Standard Deviation Unknown, Small Sample Case

Use the following information to answer the next five exercises. A hospital is trying to cut down on emergency room wait times. It is interested in the amount of time patients must wait before being called back to be examined. An investigation committee randomly surveyed 70 patients. The sample mean was 1.5 hours with a sample standard deviation of 0.5 hours.

1. Identify the following:

1. Use the following information to answer the next six exercises: One hundred eight Americans were surveyed to determine the number of hours they spend watching television each month. It was revealed that they watched an average of 151 hours each month with a standard deviation of 32 hours. Assume that the underlying population distribution is normal.

6. Identify the following:

1. Use the following information to answer the next 13 exercises: The data in Table $2$ are the result of a random survey of 39 national flags (with replacement between picks) from various countries. We are interested in finding a confidence interval for the true mean number of colors on a national flag. Let $X$ = the number of colors on a national flag.

$X$   Freq.
1     1
2     7
3     18
4     7
5     6

12. Calculate the following:

1. Construct a 95% confidence interval for the true mean number of colors on national flags.

17. How much area is in both tails (combined)?

18. How much area is in each tail?

19. Calculate the following:

1. Use the following information to answer the next two exercises: Marketing companies are interested in knowing the population percent of women who make the majority of household purchasing decisions.

25. When designing a study to determine this population proportion, what is the minimum number you would need to survey to be 90% confident that the population proportion is estimated to within 0.05?

26. If it were later determined that it was important to be more than 90% confident and a new survey were commissioned, how would it affect the minimum number you need to survey? Why?

27. Identify the following:

1. Use the following information to answer the next five exercises: Of 1,050 randomly selected adults, 360 identified themselves as manual laborers, 280 identified themselves as non-manual wage earners, 250 identified themselves as mid-level managers, and 160 identified themselves as executives. In the survey, 82% of manual laborers preferred trucks, 62% of non-manual wage earners preferred trucks, 54% of mid-level managers preferred trucks, and 26% of executives preferred trucks.

32. We are interested in finding the 95% confidence interval for the percent of executives who prefer trucks. Define random variables $X$ and $p^{\prime}$ in words.

33. Which distribution should you use for this problem?

34. Construct a 95% confidence interval. State the confidence interval, sketch the graph, and calculate the error bound.

35. Suppose we want to lower the sampling error. What is one way to accomplish that?

36. The sampling error given in the survey is ±2%. Explain what the ±2% means.

37. Define the random variable $X$ in words.

38. Define the random variable $p^{\prime}$ in words.

39. Which distribution should you use for this problem?

40. Construct a 90% confidence interval, and state the confidence interval and the error bound.

41. What would happen to the confidence interval if the level of confidence were 95%?

Use the following information to answer the next 16 exercises: The Ice Chalet offers dozens of different beginning ice-skating classes.
All of the class names are put into a bucket. The 5 P.M., Monday night, ages 8 to 12, beginning ice-skating class was picked. In that class were 64 girls and 16 boys. Suppose that we are interested in the true proportion of girls, ages 8 to 12, in all beginning ice-skating classes at the Ice Chalet. Assume that the children in the selected class are a random sample of the population.

42. What is being counted?

43. In words, define the random variable $X$.

44. Calculate the following:

1. Use the following information to answer the next five exercises: The standard deviation of the weights of elephants is known to be approximately 15 pounds. We wish to construct a 95% confidence interval for the mean weight of newborn elephant calves. Fifty newborn elephants are weighed. The sample mean is 244 pounds. The sample standard deviation is 11 pounds.

58. Identify the following:

1. Use the following information to answer the next seven exercises: The U.S. Census Bureau conducts a study to determine the time needed to complete the short form. The Bureau surveys 200 people. The sample mean is 8.2 minutes. There is a known standard deviation of 2.2 minutes. The population distribution is assumed to be normal.

63. Identify the following:

1. Use the following information to answer the next ten exercises: A sample of 20 heads of lettuce was selected. Assume that the population distribution of head weight is normal. The weight of each head of lettuce was then recorded. The mean weight was 2.2 pounds with a standard deviation of 0.1 pounds. The population standard deviation is known to be 0.2 pounds.

70. Identify the following:

1. Use the following information to answer the next 14 exercises: The mean age for all Foothill College students for a recent Fall term was 33.2. The population standard deviation has been pretty consistent at 15. Suppose that twenty-five Winter students were randomly selected. The mean age for the sample was 30.4. We are interested in the true mean age for Winter Foothill College students. Let $X$ = the age of a Winter Foothill College student.

80. $\overline x$ = _____

81. $n$ = _____

82. ________ = 15

83. In words, define the random variable $\overline X$.

84. What is $\overline x$ estimating?

85. Is $\sigma_x$ known?

86. As a result of your answer to Exercise $83$, state the exact distribution to use when calculating the confidence interval.

87. How much area is in both tails (combined)? $\alpha$ =________

88. How much area is in each tail? $\frac{\alpha}{2}$ =________

89. Identify the following specifications:
1. lower limit
2. upper limit
3. error bound

90. The 95% confidence interval is:__________________.

91. Fill in the blanks on the graph with the areas, upper and lower limits of the confidence interval, and the sample mean.

92. In one complete sentence, explain what the interval means.

93. Using the same mean, standard deviation, and level of confidence, suppose that $n$ were 69 instead of 25. Would the error bound become larger or smaller? How do you know?

94. Using the same mean, standard deviation, and sample size, how would the error bound change if the confidence level were reduced to 90%? Why?

95. Find the value of the sample size needed to be 90% confident that the sample proportion and the population proportion are within 4% of each other. The sample proportion is 0.60. Note: Round all fractions up for $n$.

96.
Find the value of the sample size needed to be 95% confident that the sample proportion and the population proportion are within 2% of each other. The sample proportion is 0.650. Note: Round all fractions up for $n$.

97. Find the value of the sample size needed to be 96% confident that the sample proportion and the population proportion are within 5% of each other. The sample proportion is 0.70. Note: Round all fractions up for $n$.

98. Find the value of the sample size needed to be 90% confident that the sample proportion and the population proportion are within 1% of each other. The sample proportion is 0.50. Note: Round all fractions up for $n$.

99. Find the value of the sample size needed to be 94% confident that the sample proportion and the population proportion are within 2% of each other. The sample proportion is 0.65. Note: Round all fractions up for $n$.

100. Find the value of the sample size needed to be 95% confident that the sample proportion and the population proportion are within 4% of each other. The sample proportion is 0.45. Note: Round all fractions up for $n$.

101. Find the value of the sample size needed to be 90% confident that the sample proportion and the population proportion are within 2% of each other. The sample proportion is 0.3. Note: Round all fractions up for $n$.
A Confidence Interval for a Population Standard Deviation, Known or Large Sample Size • “American Fact Finder.” U.S. Census Bureau. Available online at http://factfinder2.census.gov/faces/...html?refresh=t (accessed July 2, 2013). • “Disclosure Data Catalog: Candidate Summary Report 2012.” U.S. Federal Election Commission. Available online at www.fec.gov/data/index.jsp (accessed July 2, 2013). • “Headcount Enrollment Trends by Student Demographics Ten-Year Fall Trends to Most Recently Completed Fall.” Foothill De Anza Community College District. Available online at research.fhda.edu/factbook/FH...phicTrends.htm (accessed September 30,2013). • Kuczmarski, Robert J., Cynthia L. Ogden, Shumei S. Guo, Laurence M. Grummer-Strawn, Katherine M. Flegal, Zuguo Mei, Rong Wei, Lester R. Curtin, Alex F. Roche, Clifford L. Johnson. “2000 CDC Growth Charts for the United States: Methods and Development.” Centers for Disease Control and Prevention. Available online at http://www.cdc.gov/growthcharts/2000...thchart-us.pdf (accessed July 2, 2013). • La, Lynn, Kent German. "Cell Phone Radiation Levels." c|net part of CBX Interactive Inc. Available online at http://reviews.cnet.com/cell-phone-radiation-levels/ (accessed July 2, 2013). • “Mean Income in the Past 12 Months (in 2011 Inflaction-Adjusted Dollars): 2011 American Community Survey 1-Year Estimates.” American Fact Finder, U.S. Census Bureau. Available online at http://factfinder2.census.gov/faces/...prodType=table (accessed July 2, 2013). • “Metadata Description of Candidate Summary File.” U.S. Federal Election Commission. Available online at www.fec.gov/finance/disclosur...esummary.shtml (accessed July 2, 2013). • “National Health and Nutrition Examination Survey.” Centers for Disease Control and Prevention. Available online at http://www.cdc.gov/nchs/nhanes.htm (accessed July 2, 2013). A Confidence Interval for a Population Standard Deviation Unknown, Small Sample Case • “America’s Best Small Companies.” Forbes, 2013. Available online at http://www.forbes.com/best-small-companies/list/ (accessed July 2, 2013). • Data from Microsoft Bookshelf. • Data from http://www.businessweek.com/. • Data from http://www.forbes.com/. • “Disclosure Data Catalog: Leadership PAC and Sponsors Report, 2012.” Federal Election Commission. Available online at www.fec.gov/data/index.jsp (accessed July 2,2013). • “Human Toxome Project: Mapping the Pollution in People.” Environmental Working Group. Available online at www.ewg.org/sites/humantoxome...tero%2Fnewborn (accessed July 2, 2013). • “Metadata Description of Leadership PAC List.” Federal Election Commission. Available online at www.fec.gov/finance/disclosur...pPacList.shtml (accessed July 2, 2013). A Confidence Interval for A Population Proportion • Jensen, Tom. “Democrats, Republicans Divided on Opinion of Music Icons.” Public Policy Polling. Available online at www.publicpolicypolling.com/Day2MusicPoll.pdf (accessed July 2, 2013). • Madden, Mary, Amanda Lenhart, Sandra Coresi, Urs Gasser, Maeve Duggan, Aaron Smith, and Meredith Beaton. “Teens, Social Media, and Privacy.” PewInternet, 2013. Available online at www.pewinternet.org/Reports/2...d-Privacy.aspx (accessed July 2, 2013). • Prince Survey Research Associates International. “2013 Teen and Privacy Management Survey.” Pew Research Center: Internet and American Life Project. Available online at www.pewinternet.org/~/media//...al%20Media.pdf (accessed July 2, 2013). • Saad, Lydia. “Three in Four U.S. 
Workers Plan to Work Past Retirement Age: Slightly more say they will do this by choice rather than necessity.” Gallup® Economy, 2013. Available online at http://www.gallup.com/poll/162758/th...ement-age.aspx (accessed July 2, 2013).
• The Field Poll. Available online at field.com/fieldpollonline/subscribers/ (accessed July 2, 2013).
• Zogby. “New SUNYIT/Zogby Analytics Poll: Few Americans Worry about Emergency Situations Occurring in Their Community; Only one in three have an Emergency Plan; 70% Support Infrastructure ‘Investment’ for National Security.” Zogby Analytics, 2013. Available online at http://www.zogbyanalytics.com/news/2...analytics-poll (accessed July 2, 2013).
• “52% Say Big-Time College Athletics Corrupt Education Process.” Rasmussen Reports, 2013. Available online at http://www.rasmussenreports.com/publ...cation_process (accessed July 2, 2013).

8.10: Chapter Review

8.2 A Confidence Interval for a Population Standard Deviation Unknown, Small Sample Case

In many cases, the researcher does not know the population standard deviation, $\sigma$, of the measure being studied. In these cases, it is common to use the sample standard deviation, $s$, as an estimate of $\sigma$. The normal distribution creates accurate confidence intervals when $\sigma$ is known, but it is not as accurate when $s$ is used as an estimate. In this case, the Student’s t-distribution is much better. Define a t-score using the following formula:

$t=\frac{\overline{x}-\mu}{s / \sqrt{n}}$

The t-score follows the Student’s t-distribution with $n - 1$ degrees of freedom. The confidence interval under this distribution is calculated with $\overline{x} \pm\left(t_{\frac{\alpha}{2}}\right) \frac{s}{\sqrt{n}}$ where $t_{\frac{\alpha}{2}}$ is the t-score with area to the right equal to $\frac{\alpha}{2}$, $s$ is the sample standard deviation, and $n$ is the sample size. Use a table, calculator, or computer to find $t_{\frac{\alpha}{2}}$ for a given $\alpha$.

8.3 A Confidence Interval for A Population Proportion

Some statistical measures, like many survey questions, measure qualitative rather than quantitative data. In this case, the population parameter being estimated is a proportion. It is possible to create a confidence interval for the true population proportion following procedures similar to those used in creating confidence intervals for population means. The formulas are slightly different, but they follow the same reasoning.

Let $p^{\prime}$ represent the sample proportion, $x/n$, where $x$ represents the number of successes and $n$ represents the sample size. Let $q^{\prime}=1-p^{\prime}$. Then the confidence interval for a population proportion is given by the following formula:

$\mathrm{p}^{\prime}-Z_{\alpha} \sqrt{\frac{\mathrm{p}^{\prime} \mathrm{q}^{\prime}}{n}} \leq p \leq \mathrm{p}^{\prime}+Z_{\alpha} \sqrt{\frac{\mathrm{p}^{\prime} \mathrm{q}^{\prime}}{n}}$

8.4 Calculating the Sample Size n: Continuous and Binary Random Variables

Sometimes researchers know in advance that they want to estimate a population mean within a specific margin of error for a given level of confidence. In that case, solve the relevant confidence interval formula for $n$ to discover the size of the sample that is needed to achieve this goal:

$n=\frac{Z_{\alpha}^{2} \sigma^{2}}{(\overline{x}-\mu)^{2}}$

If the random variable is binary then the formula for the appropriate sample size to maintain a particular level of confidence with a specific tolerance level is given by

$n=\frac{Z_{\alpha}^{2} \mathrm{pq}}{e^{2}}$
Now we are down to the bread and butter work of the statistician: developing and testing hypotheses. It is important to put this material in a broader context so that the method by which a hypothesis is formed is understood completely. Using textbook examples often clouds the real source of statistical hypotheses.

Statistical testing is part of a much larger process known as the scientific method. This method was developed more than two centuries ago as the accepted way that new knowledge could be created. Until then, and unfortunately even today among some, "knowledge" could be created simply by some authority saying something was so, ipse dixit. Superstition and conspiracy theories were (are?) accepted uncritically.

The scientific method, briefly, states that only by following a careful and specific process can some assertion be included in the accepted body of knowledge. This process begins with a set of assumptions upon which a theory, sometimes called a model, is built. This theory, if it has any validity, will lead to predictions; what we call hypotheses.

As an example, in Microeconomics the theory of consumer choice begins with certain assumptions concerning human behavior. From these assumptions a theory of how consumers make choices was developed using indifference curves and the budget line. This theory gave rise to a very important prediction, namely, that there was an inverse relationship between price and quantity demanded. This relationship was known as the demand curve. The negative slope of the demand curve is really just a prediction, or a hypothesis, that can be tested with statistical tools. If hundreds and hundreds of statistical tests of this hypothesis had not confirmed this relationship, the so-called Law of Demand would have been discarded years ago. This is the role of statistics, to test the hypotheses of various theories to determine if they should be admitted into the accepted body of knowledge; how we understand our world. Once admitted, however, they may be later discarded if new theories come along that make better predictions.

Not long ago two scientists claimed that they could get more energy out of a process than was put in. This caused a tremendous stir for obvious reasons. They were on the cover of Time and were offered extravagant sums to bring their research work to private industry and any number of universities. It was not long until their work was subjected to the rigorous tests of the scientific method and found to be a failure. No other lab could replicate their findings. Consequently they have sunk into obscurity and their theory has been discarded. It may surface again when someone can pass the tests of the hypotheses required by the scientific method, but until then it is just a curiosity. Many pure frauds have been attempted over time, but most have been found out by applying the process of the scientific method.

This discussion is meant to show just where in this process statistics falls. Statistics and statisticians are not necessarily in the business of developing theories, but in the business of testing others' theories. Hypotheses come from these theories based upon an explicit set of assumptions and sound logic. The hypothesis comes first, before any data are gathered. Data do not create hypotheses; they are used to test them. If we bear this in mind as we study this section, the process of forming and testing hypotheses will make more sense.
One job of a statistician is to make statistical inferences about populations based on samples taken from the population. Confidence intervals are one way to estimate a population parameter. Another way to make a statistical inference is to make a decision about the value of a specific parameter. For instance, a car dealer advertises that its new small truck gets 35 miles per gallon, on average. A tutoring service claims that its method of tutoring helps 90% of its students get an A or a B. A company says that women managers in their company earn an average of \$60,000 per year. A statistician will make a decision about these claims. This process is called "hypothesis testing." A hypothesis test involves collecting data from a sample and evaluating the data. Then, the statistician makes a decision as to whether or not there is sufficient evidence, based upon analyses of the data, to reject the null hypothesis.

In this chapter, you will conduct hypothesis tests on single means and single proportions. You will also learn about the errors associated with these tests.

9.01: Null and Alternative Hypotheses

The actual test begins by considering two hypotheses. They are called the null hypothesis and the alternative hypothesis. These hypotheses contain opposing viewpoints.

• $H_0$: The null hypothesis: It is a statement of no difference between a sample mean or proportion and a population mean or proportion. In other words, the difference equals 0. This can often be considered the status quo and, as a result, if you cannot accept the null it requires some action.
• $H_a$: The alternative hypothesis: It is a claim about the population that is contradictory to $H_0$ and what we conclude when we cannot accept $H_0$. The alternative hypothesis is the contender and must win with significant evidence to overthrow the status quo. This concept is sometimes referred to as the tyranny of the status quo because, as we will see later, to overthrow the null hypothesis usually takes 90 percent or greater confidence that this is the proper decision.

Since the null and alternative hypotheses are contradictory, you must examine evidence to decide if you have enough evidence to reject the null hypothesis or not. The evidence is in the form of sample data. After you have determined which hypothesis the sample supports, you make a decision. There are two options for a decision. They are "cannot accept $H_0$" if the sample information favors the alternative hypothesis or "do not reject $H_0$" or "decline to reject $H_0$" if the sample information is insufficient to reject the null hypothesis. These conclusions are all based upon a level of probability, a significance level, that is set by the analyst.

Table 9.1 presents the various hypotheses in the relevant pairs. For example, if the null hypothesis is equal to some value, the alternative has to be not equal to that value.

Table 9.1
$H_0$                                    $H_a$
equal (=)                                not equal ($\neq$)
greater than or equal to ($\geq$)        less than (<)
less than or equal to ($\leq$)           more than (>)

Note

As a mathematical convention $H_0$ always has a symbol with an equal in it. $H_a$ never has a symbol with an equal in it. The choice of symbol depends on the wording of the hypothesis test.

Example 9.1

$H_0$: No more than 30% of the registered voters in Santa Clara County voted in the primary election. $p \leq 0.30$

$H_a$: More than 30% of the registered voters in Santa Clara County voted in the primary election.
$p > 0.30$

Example 9.2

We want to test whether the mean GPA of students in American colleges is different from 2.0 (out of 4.0). The null and alternative hypotheses are:

$H_0: \mu = 2.0$

$H_a: \mu \neq 2.0$

Example 9.3

We want to test if college students take less than five years to graduate from college, on the average. The null and alternative hypotheses are:

$H_0: \mu \geq 5$

$H_a: \mu < 5$
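The hypotheses in Example 9.2 will eventually be evaluated with a test statistic developed later in this chapter. As a preview only, the sketch below is not part of the original text: it assumes SciPy is available and uses a small hypothetical GPA sample invented purely to show the mechanics of testing $H_0: \mu = 2.0$ against $H_a: \mu \neq 2.0$.

```python
# A preview sketch (not from the text) of how Example 9.2 would be tested later on.
# The GPA values below are hypothetical, invented only for illustration; assumes SciPy.
from scipy import stats

gpas = [2.3, 1.9, 2.7, 2.1, 1.8, 2.5, 2.2, 2.0, 2.6, 1.7]   # hypothetical sample
t_stat, p_value = stats.ttest_1samp(gpas, popmean=2.0)      # two-tailed test of mu = 2.0

print(round(t_stat, 3), round(p_value, 3))
# A small p-value would mean we cannot accept H0: mu = 2.0; a large one means the
# sample does not contradict the null hypothesis.
```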
When you perform a hypothesis test, there are four possible outcomes depending on the actual truth (or falseness) of the null hypothesis $H_0$ and the decision to reject or not. The outcomes are summarized in the following table:

Table 9.2
Statistical Decision        $H_0$ is actually true      $H_0$ is actually false
Cannot accept $H_0$         Type I error                Correct outcome
Cannot reject $H_0$         Correct outcome             Type II error

The four possible outcomes in the table are:

1. The decision is cannot reject $\bf{H_0}$ when $\bf{H_0}$ is true (correct decision).
2. The decision is cannot accept $\bf{H_0}$ when $\bf{H_0}$ is true (incorrect decision known as a Type I error). This case is described as "rejecting a good null". As we will see later, it is this type of error that we will guard against by setting the probability of making such an error. The goal is to NOT take an action that is an error.
3. The decision is cannot reject $\bf{H_0}$ when, in fact, $\bf{H_0}$ is false (incorrect decision known as a Type II error). This is called "accepting a false null". In this situation you have allowed the status quo to remain in force when it should be overturned. As we will see, the null hypothesis has the advantage in competition with the alternative.
4. The decision is cannot accept $\bf{H_0}$ when $\bf{H_0}$ is false (correct decision).

Each of the errors occurs with a particular probability. The Greek letters $\alpha$ and $\beta$ represent the probabilities.

• $\alpha$ = probability of a Type I error = $\bf{P}$(Type I error) = probability of rejecting the null hypothesis when the null hypothesis is true: rejecting a good null.
• $\beta$ = probability of a Type II error = $\bf{P}$(Type II error) = probability of not rejecting the null hypothesis when the null hypothesis is false. ($1 − \beta$) is called the Power of the Test.

$\alpha$ and $\beta$ should be as small as possible because they are probabilities of errors.

Statistics allows us to set the probability that we are making a Type I error. The probability of making a Type I error is $\alpha$. Recall that the confidence intervals in the last unit were set by choosing a value called $Z_{\alpha}$ (or $t_{\alpha}$) and the alpha value determined the confidence level of the estimate because it was the probability of the interval failing to capture the true mean (or proportion parameter $p$). This alpha and that one are the same.

The easiest way to see the relationship between the alpha error and the level of confidence is with the following figure. In the center of Figure 9.2 is a normally distributed sampling distribution marked $H_0$. This is a sampling distribution of $\overline X$ and by the Central Limit Theorem it is normally distributed. The distribution in the center is marked $H_0$ and represents the distribution for the null hypothesis $H_0$: $\mu = 100$. This is the value that is being tested. The formal statements of the null and alternative hypotheses are listed below the figure.

The distributions on either side of the $H_0$ distribution represent distributions that would be true if $H_0$ is false, under the alternative hypothesis listed as $H_a$. We do not know which is true, and will never know. There are, in fact, an infinite number of distributions from which the data could have been drawn if $H_a$ is true, but only two of them are on Figure 9.2 representing all of the others. To test a hypothesis we take a sample from the population and determine if it could have come from the hypothesized distribution with an acceptable level of significance.
This level of significance is the alpha error and is marked on Figure 9.2 as the shaded areas in each tail of the $H_0$ distribution. (Each area is actually $\alpha/2$ because the distribution is symmetrical and the alternative hypothesis allows for the possibility that the value is either greater than or less than the hypothesized value--called a two-tailed test). If the sample mean marked as $\overline{X}_{1}$ is in the tail of the distribution of $H_0$, we conclude that the probability that it could have come from the $H_0$ distribution is less than alpha. We consequently state, "the null hypothesis cannot be accepted with ($\alpha$) level of significance". The truth may be that this $\overline{X}_{1}$ did come from the $H_0$ distribution, but from out in the tail. If this is so then we have falsely rejected a true null hypothesis and have made a Type I error. What statistics has done is provide an estimate about what we know, and what we control, and that is the probability of us being wrong, $\alpha$.

We can also see in Figure 9.2 that the sample mean could really be from an Ha distribution, but within the boundary set by the alpha level. Such a case is marked as $\overline{X}_{2}$. There is a probability that $\overline{X}_{2}$ actually came from Ha but shows up in the range of $H_0$ between the two tails. This probability is the beta error, the probability of accepting a false null. Our problem is that we can only set the alpha error because there are an infinite number of alternative distributions from which the mean could have come that are not equal to $H_0$. As a result, the statistician places the burden of proof on the alternative hypothesis. That is, we will not reject a null hypothesis unless there is a greater than 90, or 95, or even 99 percent probability that the null is false: the burden of proof lies with the alternative hypothesis. This is why we called this the tyranny of the status quo earlier.

By way of example, the American judicial system begins with the concept that a defendant is "presumed innocent". This is the status quo and is the null hypothesis. The judge will tell the jury that they cannot find the defendant guilty unless the evidence indicates guilt beyond a "reasonable doubt" which is usually defined in criminal cases as 95% certainty of guilt. If the jury cannot accept the null, innocent, then action will be taken, jail time. The burden of proof always lies with the alternative hypothesis. (In civil cases, the jury needs only to be more than 50% certain of wrongdoing to find culpability, called "a preponderance of the evidence").

The example above was for a test of a mean, but the same logic applies to tests of hypotheses for all statistical parameters one may wish to test. The following are examples of Type I and Type II errors.

Example 9.4
Suppose the null hypothesis, $H_0$, is: Frank's rock climbing equipment is safe.
Type I error: Frank thinks that his rock climbing equipment may not be safe when, in fact, it really is safe.
Type II error: Frank thinks that his rock climbing equipment may be safe when, in fact, it is not safe.
$\bf{\alpha =}$ probability that Frank thinks his rock climbing equipment may not be safe when, in fact, it really is safe.
$\bf{\beta =}$ probability that Frank thinks his rock climbing equipment may be safe when, in fact, it is not safe.
Notice that, in this case, the error with the greater consequence is the Type II error. (If Frank thinks his rock climbing equipment is safe, he will go ahead and use it.)
This is a situation described as "accepting a false null".

Example 9.5
Suppose the null hypothesis, $H_0$, is: The victim of an automobile accident is alive when he arrives at the emergency room of a hospital. This is the status quo and requires no action if it is true. If the null hypothesis cannot be accepted then action is required and the hospital will begin appropriate procedures.
Type I error: The emergency crew thinks that the victim is dead when, in fact, the victim is alive.
Type II error: The emergency crew does not know if the victim is alive when, in fact, the victim is dead.
$\bf{\alpha =}$ probability that the emergency crew thinks the victim is dead when, in fact, he is really alive = P(Type I error).
$\bf{\beta =}$ probability that the emergency crew does not know if the victim is alive when, in fact, the victim is dead = P(Type II error).
The error with the greater consequence is the Type I error. (If the emergency crew thinks the victim is dead, they will not treat him.)

Exercise 9.5
Suppose the null hypothesis, $H_0$, is: a patient is not sick. Which type of error has the greater consequence, Type I or Type II?

Example 9.6
It’s a Boy Genetic Labs claims to be able to increase the likelihood that a pregnancy will result in a boy being born. Statisticians want to test the claim. Suppose that the null hypothesis, $H_0$, is: It’s a Boy Genetic Labs has no effect on gender outcome. The status quo is that the claim is false. The burden of proof always falls to the person making the claim, in this case the Genetics Lab.
Type I error: This results when a true null hypothesis is rejected. In the context of this scenario, we would state that we believe that It’s a Boy Genetic Labs influences the gender outcome, when in fact it has no effect. The probability of this error occurring is denoted by the Greek letter alpha, $\alpha$.
Type II error: This results when we fail to reject a false null hypothesis. In context, we would state that It’s a Boy Genetic Labs does not influence the gender outcome of a pregnancy when, in fact, it does. The probability of this error occurring is denoted by the Greek letter beta, $\beta$.
The error of greater consequence would be the Type I error since couples would use the It’s a Boy Genetic Labs product in hopes of increasing the chances of having a boy.

Exercise 9.6
“Red tide” is a bloom of poison-producing algae–a few different species of a class of plankton called dinoflagellates. When the weather and water conditions cause these blooms, shellfish such as clams living in the area develop dangerous levels of a paralysis-inducing toxin. In Massachusetts, the Division of Marine Fisheries (DMF) monitors levels of the toxin in shellfish by regular sampling of shellfish along the coastline. If the mean level of toxin in clams exceeds 800 μg (micrograms) of toxin per kg of clam meat in any area, clam harvesting is banned there until the bloom is over and levels of toxin in clams subside. Describe both a Type I and a Type II error in this context, and state which error has the greater consequence.

Example 9.7
A certain experimental drug claims a cure rate of at least 75% for males with prostate cancer. Describe both the Type I and Type II errors in context. Which error is the more serious?
Type I: A cancer patient believes the cure rate for the drug is less than 75% when it actually is at least 75%.
Type II: A cancer patient believes the experimental drug has at least a 75% cure rate when it has a cure rate that is less than 75%.
In this scenario, the Type II error contains the more severe consequence. If a patient believes the drug works at least 75% of the time, this most likely will influence the patient’s (and doctor’s) choice about whether to use the drug as a treatment option.
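Because $\alpha$ and $\beta$ are probabilities, they can be checked by simulation. The short sketch below is illustrative only and is not part of the original text; the population values, sample size, and significance level are assumed for the sake of the example. It repeatedly samples from a population where $H_0$ is true to show that the rejection rate is close to $\alpha$, and from a population where $H_0$ is false to estimate $\beta$ (and hence the power, $1-\beta$).

```python
# Illustrative simulation (assumed values): H0: mu = 100, two-tailed test, sigma known.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
mu_0, sigma, n, alpha = 100, 15, 25, 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)          # two-tailed critical value, about 1.96
reps = 10_000

def rejection_rate(true_mu):
    """Fraction of repeated samples whose test statistic falls in a rejection tail."""
    sample_means = rng.normal(true_mu, sigma, size=(reps, n)).mean(axis=1)
    z_c = (sample_means - mu_0) / (sigma / np.sqrt(n))
    return np.mean(np.abs(z_c) > z_crit)

type_i_rate = rejection_rate(true_mu=100)        # H0 is true: rate should be close to alpha
power = rejection_rate(true_mu=106)              # H0 is false: this estimates 1 - beta
print(f"estimated alpha = {type_i_rate:.3f}, estimated beta = {1 - power:.3f}")
```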
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/09%3A_Hypothesis_Testing_with_One_Sample/9.02%3A_Outcomes_and_the_Type_I_and_Type_II_Errors.txt
Earlier, we discussed sampling distributions. Particular distributions are associated with hypothesis testing. We will perform hypothesis tests of a population mean using a normal distribution or a Student's $t$-distribution. (Remember, use a Student's $t$-distribution when the population standard deviation is unknown and the sample size is small, where small is considered to be less than 30 observations.) We perform tests of a population proportion using a normal distribution when we can assume that the distribution is normally distributed. We consider this to be true if the sample proportion, $p^{\prime}$, times the sample size is greater than 5 and $1-p^{\prime}$ times the sample size is also greater than 5. This is the same rule of thumb we used when developing the formula for the confidence interval for a population proportion.

Hypothesis Test for the Mean
Going back to the standardizing formula we can derive the test statistic for testing hypotheses concerning means.
$Z_{c}=\frac{\overline{x}-\mu_{0}}{\sigma / \sqrt{n}}\nonumber$
The standardizing formula cannot be solved as it is because we do not have $\mu$, the population mean. However, if we substitute in the hypothesized value of the mean, $\mu_0$, in the formula as above, we can compute a $Z$ value. This is the test statistic for a test of hypothesis for a mean and is presented in Figure 9.3. We interpret this $Z$ value as the associated probability that a sample with a sample mean of $\overline X$ could have come from a distribution with the population mean hypothesized in $H_0$, and we call this $Z$ value $Z_c$ for “calculated”. Figure 9.3 and Figure 9.4 show this process.

In Figure 9.3 two of the three possible outcomes are presented. $\overline X_1$ and $\overline X_3$ are in the tails of the hypothesized distribution of $H_0$. Notice that the horizontal axis in the top panel is labeled $\overline X$'s. This is the same theoretical distribution of $\overline X$'s, the sampling distribution, that the Central Limit Theorem tells us is normally distributed. This is why we can draw it with this shape. The horizontal axis of the bottom panel is labeled $Z$ and is the standard normal distribution. $Z_{\frac{\alpha}{2}}$ and $-Z_{\frac{\alpha}{2}}$, called the critical values, are marked on the bottom panel as the $Z$ values associated with the probability the analyst has set as the level of significance in the test, ($\alpha$). The probabilities in the tails of both panels are, therefore, the same.

Notice that for each $\overline X$ there is an associated $Z_c$, called the calculated $Z$, that comes from solving the equation above. This calculated $Z$ is nothing more than the number of standard deviations that the hypothesized mean is from the sample mean. If the sample mean falls "too many" standard deviations from the hypothesized mean we conclude that the sample mean could not have come from the distribution with the hypothesized mean, given our pre-set required level of significance. It could have come from $H_0$, but it is deemed just too unlikely. In Figure 9.3 both $\overline X_1$ and $\overline X_3$ are in the tails of the distribution. They are deemed "too far" from the hypothesized value of the mean given the chosen level of alpha. If in fact this sample mean did come from $H_0$, but from out in the tail, we have made a Type I error: we have rejected a good null. Our only real comfort is that we know the probability of making such an error, $\alpha$, and we can control the size of $\alpha$.
Figure 9.4 shows the third possibility for the location of the sample mean, $\overline x$. Here the sample mean is within the two critical values. That is, within the probability of $(1-\alpha)$ and we cannot reject the null hypothesis. This gives us the decision rule for testing a hypothesis for a two-tailed test:

Decision rule: two-tail test
If $\left|\mathrm{Z}_{c}\right|<\mathrm{Z}_{\frac{\alpha}{2}}$ : then do not REJECT $H_0$
If $\left|\mathrm{Z}_{c}\right|>\mathrm{Z}_{\frac{\alpha}{2}}$ : then REJECT $H_0$
Table 9.3

This rule will always be the same no matter what hypothesis we are testing or what formulas we are using to make the test. The only change will be to change the $Z_c$ to the appropriate symbol for the test statistic for the parameter being tested. Stating the decision rule another way: if the sample mean is unlikely to have come from the distribution with the hypothesized mean we cannot accept the null hypothesis. Here we define "unlikely" as having a probability less than alpha of occurring.

P-value Approach
An alternative decision rule can be developed by calculating the probability that a sample mean could be found that would give a test statistic larger than the test statistic found from the current sample data assuming that the null hypothesis is true. Here the notion of "likely" and "unlikely" is defined by the probability of drawing a sample with a mean from a population with the hypothesized mean that is either larger or smaller than that found in the sample data. Simply stated, the $p$-value approach compares the desired significance level, $\alpha$, to the $p$-value which is the probability of drawing a sample mean further from the hypothesized value than the actual sample mean. A large $p$-value calculated from the data indicates that we should not reject the null hypothesis. The smaller the $p$-value, the more unlikely the outcome, and the stronger the evidence is against the null hypothesis. We would reject the null hypothesis if the evidence is strongly against it.

The relationship between the decision rule of comparing the calculated test statistic, $Z_c$, and the critical value, $Z_\alpha$, and using the $p$-value can be seen in Figure 9.5. The calculated value of the test statistic is $Z_c$ in this example and is marked on the bottom graph of the standard normal distribution because it is a $Z$ value. In this case the calculated value is in the tail and thus we cannot accept the null hypothesis; the associated $\overline X$ is just too unusually large to believe that it came from the distribution with a mean of $\mu_0$ with a significance level of $\alpha$.

If we use the $p$-value decision rule we need one more step. We need to find in the standard normal table the probability associated with the calculated test statistic, $Z_c$. We then compare that to the $\alpha$ associated with our selected level of confidence. In Figure 9.5 we see that the $p$-value is less than $\alpha$ and therefore we cannot accept the null. We know that the $p$-value is less than $\alpha$ because the area under the $p$-value is smaller than $\alpha/2$. It is important to note that two researchers drawing randomly from the same population may find two different $p$-values from their samples. This occurs because the $p$-value is calculated as the probability in the tail beyond the sample mean assuming that the null hypothesis is correct. Because the sample means will in all likelihood be different this will create two different $p$-values.
Nevertheless, the conclusions as to the null hypothesis should agree; the two analyses will differ only when their $p$-values happen to fall on opposite sides of the chosen level $\alpha$.

Here is a systematic way to make a decision of whether you cannot accept or cannot reject a null hypothesis if using the $\bf{p}$-value and a preset or preconceived $\bf{\alpha}$ (the "significance level"). A preset $\alpha$ is the probability of a Type I error (rejecting the null hypothesis when the null hypothesis is true). It may or may not be given to you at the beginning of the problem. In any case, the value of $\alpha$ is the decision of the analyst. When you make a decision to reject or not reject $H_0$, do as follows:
• If $\alpha > p$-value, cannot accept $H_0$. The results of the sample data are significant. There is sufficient evidence to conclude that $H_0$ is an incorrect belief and that the alternative hypothesis, Ha, may be correct.
• If $\alpha \leq p$-value, cannot reject $H_0$. The results of the sample data are not significant. There is not sufficient evidence to conclude that the alternative hypothesis, Ha, may be correct. In this case the status quo stands.
• When you "cannot reject $H_0$", it does not mean that you should believe that $H_0$ is true. It simply means that the sample data have failed to provide sufficient evidence to cast serious doubt about the truthfulness of $H_0$. Remember that the null is the status quo and it takes high probability to overthrow the status quo. This bias in favor of the null hypothesis is what gives rise to the statement "tyranny of the status quo" when discussing hypothesis testing and the scientific method.
Both decision rules will result in the same decision and it is a matter of preference which one is used.

One and Two-tailed Tests
The discussion of Figure 9.3-Figure 9.5 was based on the null and alternative hypothesis presented in Figure 9.3. This was called a two-tailed test because the alternative hypothesis allowed that the mean could have come from a population which was either larger or smaller than the hypothesized mean in the null hypothesis. This could be seen by the statement of the alternative hypothesis as $\mu \neq 100$, in this example. It may be that the analyst is concerned only with the value being "too" high, or only with it being "too" low, relative to the hypothesized value. If this is the case, it becomes a one-tailed test and all of the alpha probability is placed in just one tail and not split into $\alpha /2$ as in the above case of a two-tailed test. Any test of a claim will be a one-tailed test. For example, a car manufacturer claims that their Model 17B provides gas mileage of greater than 25 miles per gallon. The null and alternative hypothesis would be:
• $H_0: \mu \leq 25$
• $H_a: \mu > 25$
The claim would be in the alternative hypothesis. The burden of proof in hypothesis testing is carried in the alternative. This is because rejecting the null, the status quo, must be accomplished with 90 or 95 percent confidence that it cannot be maintained. Said another way, we want to have only a 5 or 10 percent probability of making a Type I error, rejecting a good null; overthrowing the status quo. Figure 9.6 shows the two possible cases and the form of the null and alternative hypothesis that give rise to them, where $\mu_0$ is the hypothesized value of the population mean.
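As a quick illustration of the two equivalent decision rules just described, the sketch below computes a calculated $Z_c$, the critical value, and the $p$-value for both a two-tailed and a one-tailed test. The sample figures ($\mu_0 = 100$, $\sigma = 12$, $n = 36$, $\overline{x} = 104.5$, $\alpha = 0.05$) are assumed for illustration and are not from the text.

```python
# Sketch of the critical-value rule and the p-value rule for a test of a mean, sigma known.
import math
from scipy import stats

mu_0, sigma, n = 100, 12, 36        # hypothesized mean, known sigma, sample size (assumed)
x_bar, alpha = 104.5, 0.05

z_c = (x_bar - mu_0) / (sigma / math.sqrt(n))            # calculated test statistic

# Two-tailed test (Ha: mu != 100): alpha is split between the two tails.
z_crit_two = stats.norm.ppf(1 - alpha / 2)
p_two = 2 * (1 - stats.norm.cdf(abs(z_c)))
print("two-tailed: reject by critical value?", abs(z_c) > z_crit_two,
      "| reject by p-value?", p_two < alpha)

# One-tailed test (Ha: mu > 100): all of alpha is placed in the upper tail.
z_crit_one = stats.norm.ppf(1 - alpha)
p_one = 1 - stats.norm.cdf(z_c)
print("one-tailed: reject by critical value?", z_c > z_crit_one,
      "| reject by p-value?", p_one < alpha)
```

Both rules always agree, as noted above; the $p$-value simply reports how far into the tail the test statistic fell.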
Sample size                Test statistic
< 30 ($\sigma$ unknown)    $t_{c}=\frac{\overline{X}-\mu_{0}}{s / \sqrt{n}}$
< 30 ($\sigma$ known)      $Z_{c}=\frac{\overline{X}-\mu_{0}}{\sigma / \sqrt{n}}$
> 30 ($\sigma$ unknown)    $Z_{c}=\frac{\overline{X}-\mu_{0}}{s / \sqrt{n}}$
> 30 ($\sigma$ known)      $Z_{c}=\frac{\overline{X}-\mu_{0}}{\sigma / \sqrt{n}}$
Table 9.4 Test Statistics for Test of Means, Varying Sample Size, Population Standard Deviation Known or Unknown

Effects of Sample Size on Test Statistic
In developing the confidence intervals for the mean from a sample, we found that most often we would not have the population standard deviation, $\sigma$. If the sample size were less than 30, we could simply substitute the point estimate for $\sigma$, the sample standard deviation, $s$, and use the student's $t$-distribution to correct for this lack of information. When testing hypotheses we are faced with this same problem and the solution is exactly the same. Namely: If the population standard deviation is unknown, and the sample size is less than 30, substitute $s$, the point estimate for the population standard deviation, $\sigma$, in the formula for the test statistic and use the student's $t$ distribution. All the formulas and figures above are unchanged except for this substitution and changing the $Z$ distribution to the student's $t$ distribution on the graph. Remember that the student's $t$ distribution can only be computed knowing the proper degrees of freedom for the problem. In this case, the degrees of freedom is computed as before with confidence intervals: $df = (n-1)$. The calculated $t$-value is compared to the $t$-value associated with the pre-set level of confidence required in the test, $t_{\alpha, df}$, found in the student's $t$ tables. If we do not know $\sigma$, but the sample size is 30 or more, we simply substitute $s$ for $\sigma$ and use the normal distribution. Table 9.4 summarizes these rules.

A Systematic Approach for Testing A Hypothesis
A systematic approach to hypothesis testing follows the following steps and in this order. This template will work for all hypotheses that you will ever test.
• Set up the null and alternative hypothesis. This is typically the hardest part of the process. Here the question being asked is reviewed. What parameter is being tested: a mean, a proportion, differences in means, etc.? Is this a one-tailed test or two-tailed test? Remember, if someone is making a claim it will always be a one-tailed test.
• Decide the level of significance required for this particular case and determine the critical value. These can be found in the appropriate statistical table. The levels of confidence typical for businesses are 80, 90, 95, 98, and 99 percent. However, the level of significance is a policy decision and should be based upon the risk of making a Type I error, rejecting a good null. Consider the consequences of making a Type I error. Next, on the basis of the hypotheses and sample size, select the appropriate test statistic and find the relevant critical value: $Z_\alpha$, $t_\alpha$, etc. Drawing the relevant probability distribution and marking the critical value is always a big help. Be sure to match the graph with the hypothesis, especially if it is a one-tailed test.
• Take a sample(s) and calculate the relevant parameters: sample mean, standard deviation, or proportion. Using the formula for the test statistic from above in step 2, now calculate the test statistic for this particular case using the parameters you have just calculated.
• Compare the calculated test statistic and the critical value. Marking these on the graph will give a good visual picture of the situation. There are now only two situations:
1. The test statistic is in the tail: Cannot Accept the null, the probability that this sample mean (proportion) came from the hypothesized distribution is too small to believe that it is the real home of these sample data.
2. The test statistic is not in the tail: Cannot Reject the null, the sample data are compatible with the hypothesized population parameter.
• Reach a conclusion. It is best to articulate the conclusion two different ways. First a formal statistical conclusion such as “With a 5% level of significance we cannot accept the null hypothesis that the population mean is equal to XX (units of measurement)”. The second statement of the conclusion is less formal and states the action, or lack of action, required. If the formal conclusion was the one above, then the informal one might be, “The machine is broken and we need to shut it down and call for repairs”.
All hypotheses tested will go through this same process. The only changes are the relevant formulas and those are determined by the hypothesis required to answer the original question. The short sketch following these steps illustrates the full process for a test of a single population mean.
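The following is a minimal sketch of the systematic approach applied to a test of a single mean. It is illustrative only: the helper function name is made up, and it simply follows Table 9.4's rule of using the Student's $t$ when $\sigma$ is unknown and $n < 30$, and the normal otherwise. The sample numbers shown match the goggles example worked in the next section, so the output can be checked against that solution.

```python
# Sketch of the systematic approach for a one-sample test of a mean (illustrative helper).
import math
from scipy import stats

def test_mean(x_bar, s_or_sigma, n, mu_0, alpha=0.05, tail="two", sigma_known=False):
    # Steps 2 and 3: choose the distribution (Table 9.4) and compute the test statistic.
    stat = (x_bar - mu_0) / (s_or_sigma / math.sqrt(n))
    use_t = (not sigma_known) and n < 30
    dist = stats.t(df=n - 1) if use_t else stats.norm()

    # Critical value: split alpha for a two-tailed test, all of it in one tail otherwise.
    crit = dist.ppf(1 - alpha / 2) if tail == "two" else dist.ppf(1 - alpha)

    # Step 4: compare the test statistic with the critical value.
    if tail == "two":
        in_tail = abs(stat) > crit
    elif tail == "right":
        in_tail = stat > crit
    else:                                   # left-tailed test
        in_tail = stat < -crit
    decision = "cannot accept H0" if in_tail else "cannot reject H0"
    return round(stat, 2), round(crit, 3), decision

# Left-tailed t-test with the swim-time figures (x_bar=16, s=0.8, n=15, mu_0=16.43):
print(test_mean(16, 0.8, 15, 16.43, tail="left"))   # roughly (-2.08, 1.761, 'cannot accept H0')
```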
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/09%3A_Hypothesis_Testing_with_One_Sample/9.03%3A_Distribution_Needed_for_Hypothesis_Testing.txt
Tests on Means

Example $8$
Jeffrey, as an eight-year old, established a mean time of 16.43 seconds for swimming the 25-yard freestyle, with a standard deviation of 0.8 seconds. His dad, Frank, thought that Jeffrey could swim the 25-yard freestyle faster using goggles. Frank bought Jeffrey a new pair of expensive goggles and timed Jeffrey for 15 25-yard freestyle swims. For the 15 swims, Jeffrey's mean time was 16 seconds. Frank thought that the goggles helped Jeffrey to swim faster than the 16.43 seconds. Conduct a hypothesis test using a preset $\alpha = 0.05$.

Answer
Set up the Hypothesis Test: Since the problem is about a mean, this is a test of a single population mean.
Set the null and alternative hypothesis: In this case there is an implied challenge or claim. This is that the goggles will reduce the swimming time. The effect of this is to set the hypothesis as a one-tailed test. The claim will always be in the alternative hypothesis because the burden of proof always lies with the alternative. Remember that the status quo must be defeated with a high degree of confidence, in this case 95% confidence. The null and alternative hypotheses are thus:
$H_0: \mu \geq 16.43$  $H_a: \mu < 16.43$
For Jeffrey to swim faster, his time will be less than 16.43 seconds. The "<" tells you this is left-tailed.
Determine the distribution needed: Random variable: $\overline X$ = the mean time to swim the 25-yard freestyle.
Distribution for the test statistic: The sample size is less than 30 and we do not know the population standard deviation, so this is a t-test and the proper formula is:
$t_{c}=\frac{\overline{X}-\mu_{0}}{s / \sqrt{n}}$
$\mu_0 = 16.43$ comes from $H_0$ and not the data. $\overline X = 16$. $s = 0.8$, and $n = 15$.
Our step 2, setting the level of significance, has already been determined by the problem, .05 for a 95% level of confidence. It is worth thinking about the meaning of this choice. The Type I error is to conclude that Jeffrey swims the 25-yard freestyle, on average, in less than 16.43 seconds when, in fact, he actually swims the 25-yard freestyle, on average, in 16.43 seconds. (Reject the null hypothesis when the null hypothesis is true.) For this case the only concern with a Type I error would seem to be that Jeffery’s dad may fail to bet on his son’s victory because he does not have appropriate confidence in the effect of the goggles.
To find the critical value we need to select the appropriate test statistic. We have concluded that this is a t-test on the basis of the sample size and that we are interested in a population mean. We can now draw the graph of the t-distribution and mark the critical value. For this problem the degrees of freedom are n-1, or 14. Looking up 14 degrees of freedom at the 0.05 column of the t-table we find 1.761; because this is a left-tailed test, the critical value lies at -1.761 in the lower tail. We can put this on our graph.
Step 3 is the calculation of the test statistic using the formula we have selected. We find that the calculated test statistic is -2.08, meaning that the sample mean is 2.08 standard deviations below the hypothesized mean of 16.43.
$t_{c}=\frac{\overline{x}-\mu_{0}}{s / \sqrt{n}}=\frac{16-16.43}{.8 / \sqrt{15}}=-2.08\nonumber$
Step 4 has us compare the test statistic and the critical value and mark these on the graph. We see that the test statistic is in the tail and thus we move to step 5 and reach a conclusion.
The probability that an average time of 16 seconds could come from a distribution with a population mean of 16.43 seconds is too unlikely for us to accept the null hypothesis. We cannot accept the null.
Step 5 has us state our conclusions first formally and then less formally. A formal conclusion would be stated as: “With a 95% level of significance we cannot accept the null hypothesis that the swimming time with goggles comes from a distribution with a population mean time of 16.43 seconds.” Less formally, “With 95% significance we believe that the goggles improve swimming speed.”
If we wished to use the $p$-value system of reaching a conclusion we would calculate the statistic and take the additional step to find the probability of being 2.08 standard deviations from the mean on a t-distribution. This value is approximately .028. Comparing this to the $\alpha$-level of .05 we see that we cannot accept the null. The $p$-value has been put on the graph as the shaded area beyond -2.08 and it shows that it is smaller than the hatched area which is the alpha level of 0.05. Both methods reach the same conclusion that we cannot accept the null hypothesis.

Exercise $8$
The mean throwing distance of a football for Marco, a high school freshman quarterback, is 40 yards, with a standard deviation of two yards. The team coach tells Marco to adjust his grip to get more distance. The coach records the distances for 20 throws. For the 20 throws, Marco’s mean distance was 45 yards. The coach thought the different grip helped Marco throw farther than 40 yards. Conduct a hypothesis test using a preset $\alpha = 0.05$. Assume the throw distances for footballs are normal. First, determine what type of test this is, set up the hypothesis test, find the $p$-value, sketch the graph, and state your conclusion.

Example $9$
Jane has just begun her new job on the sales force of a very competitive company. In a sample of 16 sales calls it was found that she closed the contract for an average value of 108 dollars with a standard deviation of 12 dollars. Test at the 5% level of significance whether the population mean is at most 100 dollars against the alternative that it is greater than 100 dollars. Company policy requires that new members of the sales force must exceed an average of $100 per contract during the trial employment period. Can we conclude that Jane has met this requirement with 95% confidence?

Answer
1. $H_0: \mu \leq 100$  $H_a: \mu > 100$
The null and alternative hypothesis are for the parameter $\mu$ because the number of dollars of the contracts is a continuous random variable. Also, this is a one-tailed test because the company is only concerned if the dollar value per contract is below a particular number, not if it is "too high" a number. This can be thought of as making a claim that the requirement is being met and thus the claim is in the alternative hypothesis.
2. Test statistic: $t_{c}=\frac{\overline{x}-\mu_{0}}{\frac{s}{\sqrt{n}}}=\frac{108-100}{\left(\frac{12}{\sqrt{16}}\right)}=2.67$
3. Critical value: $t_a=1.753$ with $n-1$ degrees of freedom = 15
The test statistic is a Student's t because the sample size is below 30; therefore, we cannot use the normal distribution. Comparing the calculated value of the test statistic and the critical value of $t$ ($t_a$) at a 5% significance level, we see that the calculated value is in the tail of the distribution. Thus, we conclude that 108 dollars per contract is significantly larger than the hypothesized value of 100 and thus we cannot accept the null hypothesis.
There is evidence to support the conclusion that Jane's performance meets company standards.

Exercise $9$
It is believed that a stock price for a particular company will grow at a rate of $5 per week with a standard deviation of $1. An investor believes the stock won’t grow as quickly. The changes in stock price are recorded for ten weeks and are as follows: $4, $3, $2, $3, $1, $7, $2, $1, $1, $2. Perform a hypothesis test using a 5% level of significance. State the null and alternative hypotheses, state your conclusion, and identify the Type I errors.

Example $10$
A manufacturer of salad dressings uses machines to dispense liquid ingredients into bottles that move along a filling line. The machine that dispenses salad dressings is working properly when 8 ounces are dispensed. Suppose that the average amount dispensed in a particular sample of 35 bottles is 7.91 ounces with a variance of 0.03 ounces squared, $s^2$. Is there evidence that the machine should be stopped and production wait for repairs? The lost production from a shutdown is potentially so great that management feels that the level of significance in the analysis should be 99%. Again we will follow the steps in our analysis of this problem.

Answer
STEP 1: Set the Null and Alternative Hypothesis. The random variable is the quantity of fluid placed in the bottles. This is a continuous random variable and the parameter we are interested in is the mean. Our hypothesis therefore is about the mean. In this case we are concerned that the machine is not filling properly. From what we are told it does not matter if the machine is over-filling or under-filling, both seem to be an equally bad error. This tells us that this is a two-tailed test: if the machine is malfunctioning it will be shut down regardless of whether it is over-filling or under-filling. The null and alternative hypotheses are thus:
$H_0:\mu=8\nonumber$
$H_a:\mu \neq 8\nonumber$
STEP 2: Decide the level of significance and draw the graph showing the critical value. This problem has already set the level of significance at 99%. The decision seems an appropriate one and shows the thought process when setting the significance level. Management wants to be very certain, as certain as probability will allow, that they are not shutting down a machine that is not in need of repair. To draw the distribution and the critical value, we need to know which distribution to use. Because this is a continuous random variable and we are interested in the mean, and the sample size is greater than 30, the appropriate distribution is the normal distribution and the relevant critical value is 2.575 from the normal table or the t-table at the 0.005 column and infinite degrees of freedom. We draw the graph and mark these points.
STEP 3: Calculate sample parameters and the test statistic. The sample parameters are provided: the sample mean is 7.91, the sample variance is .03, and the sample size is 35. We need to note that the sample variance was provided, not the sample standard deviation, which is what we need for the formula. Remembering that the standard deviation is simply the square root of the variance, we therefore know the sample standard deviation, s, is 0.173. With this information we calculate the test statistic as -3.07, and mark it on the graph.
$Z_{c}=\frac{\overline{x}-\mu_{0}}{s / \sqrt{n}}=\frac{7.91-8}{.173 / \sqrt{35}}=-3.07\nonumber$
STEP 4: Compare the test statistic and the critical values. Now we compare the test statistic and the critical value by placing the test statistic on the graph.
We see that the test statistic is in the tail, its absolute value decidedly greater than the critical value of 2.575. We note that even the very small difference between the hypothesized value and the sample value is still a large number of standard deviations. The sample mean is only 0.09 ounces different from the required level of 8 ounces, but it is 3 plus standard deviations away and thus we cannot accept the null hypothesis.
STEP 5: Reach a Conclusion
A test statistic three standard deviations from the hypothesized mean all but guarantees that the null hypothesis will be rejected. The probability that a value lies beyond three standard deviations is almost zero: it is 0.0026 on the normal distribution, which is certainly almost zero in a practical sense. Our formal conclusion would be “At a 99% level of significance we cannot accept the hypothesis that the sample mean came from a distribution with a mean of 8 ounces”. Or less formally, and getting to the point, “At a 99% level of significance we conclude that the machine is underfilling the bottles and is in need of repair”.

Hypothesis Test for Proportions
Just as there were confidence intervals for proportions, or more formally, the population parameter $p$ of the binomial distribution, there is the ability to test hypotheses concerning $p$. The population parameter for the binomial is $p$. The estimated value (point estimate) for $p$ is $p^{\prime}$ where $p^{\prime} = x/n$, $x$ is the number of successes in the sample and $n$ is the sample size.
When you perform a hypothesis test of a population proportion $p$, you take a simple random sample from the population. The conditions for a binomial distribution must be met, which are: there are a certain number $n$ of independent trials meaning random sampling, the outcomes of any trial are binary, success or failure, and each trial has the same probability of a success $p$. The shape of the binomial distribution needs to be similar to the shape of the normal distribution. To ensure this, the quantities $np^{\prime}$ and $nq^{\prime}$ must both be greater than five ($np^{\prime} > 5$ and $nq^{\prime} > 5$). In this case the binomial distribution of a sample (estimated) proportion can be approximated by the normal distribution with $\mu=np$ and $\sigma=\sqrt{n p q}$. Remember that $q=1–p$. There is no distribution that can correct for this small sample bias and thus if these conditions are not met we simply cannot test the hypothesis with the data available at that time. We met this condition when we first were estimating confidence intervals for $p$.
Again, we begin with the standardizing formula modified because this is the distribution of a binomial.
$Z=\frac{\mathrm{p}^{\prime}-p}{\sqrt{\frac{\mathrm{pq}}{n}}}\nonumber$
Substituting $p_0$, the hypothesized value of $p$, we have:
$Z_{c}=\frac{\mathrm{p}^{\prime}-p_{0}}{\sqrt{\frac{p_{0} q_{0}}{n}}}\nonumber$
This is the test statistic for testing hypothesized values of $p$, where the null and alternative hypotheses take one of the following forms:

Two-tailed test        One-tailed test        One-tailed test
$H_0: p = p_0$         $H_0: p \leq p_0$      $H_0: p \geq p_0$
$H_a: p \neq p_0$      $H_a: p > p_0$         $H_a: p < p_0$
Table $5$

The decision rule stated above applies here also: if the calculated value of $Z_c$ shows that the sample proportion is "too many" standard deviations from the hypothesized proportion, the null hypothesis cannot be accepted. The decision as to what is "too many" is pre-determined by the analyst depending on the level of significance required in the test.
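A short sketch of this test statistic in code may help; it is illustrative only. The numbers used are those of the first-time-borrower example that follows ($p_0 = 0.50$, $x = 53$ successes in $n = 100$), so the result can be checked against that worked solution.

```python
# Sketch of the test statistic for a hypothesis test of a single proportion.
import math
from scipy import stats

x, n, p0, alpha = 53, 100, 0.50, 0.05
p_prime = x / n

# Normal-approximation check: both n*p' and n*q' should exceed 5.
assert n * p_prime > 5 and n * (1 - p_prime) > 5

z_c = (p_prime - p0) / math.sqrt(p0 * (1 - p0) / n)   # calculated test statistic, about 0.60
z_crit = stats.norm.ppf(1 - alpha / 2)                # two-tailed critical value, about 1.96
p_value = 2 * (1 - stats.norm.cdf(abs(z_c)))

print(f"Z_c = {z_c:.2f}, critical value = {z_crit:.2f}, p-value = {p_value:.2f}")
# |Z_c| is well inside the critical values, so we cannot reject H0: p = 0.50.
```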
Example $11$
The mortgage department of a large bank is interested in the nature of loans of first-time borrowers. This information will be used to tailor their marketing strategy. They believe that 50% of first-time borrowers take out smaller loans than other borrowers. They perform a hypothesis test to determine if the percentage is the same or different from 50%. They sample 100 first-time borrowers and find that 53 of these loans are smaller than those of the other borrowers. For the hypothesis test, they choose a 5% level of significance.

Answer
STEP 1: Set the null and alternative hypothesis.
$H_0: p = 0.50$  $H_a: p \neq 0.50$
The words "is the same or different from" tell you this is a two-tailed test. The Type I and Type II errors are as follows: The Type I error is to conclude that the proportion of borrowers is different from 50% when, in fact, the proportion is actually 50%. (Reject the null hypothesis when the null hypothesis is true). The Type II error is there is not enough evidence to conclude that the proportion of first time borrowers differs from 50% when, in fact, the proportion does differ from 50%. (You fail to reject the null hypothesis when the null hypothesis is false.)
STEP 2: Decide the level of significance and draw the graph showing the critical value.
The level of significance has been set by the problem at 5%, a 95% level of confidence. Because this is a two-tailed test one-half of the alpha value will be in the upper tail and one-half in the lower tail as shown on the graph. The critical value for the normal distribution at the 95% level of confidence is 1.96. This can easily be found on the student’s t-table at the very bottom at infinite degrees of freedom, remembering that at infinity the t-distribution is the normal distribution. Of course the value can also be found on the normal table but you have to go looking for one-half of 95 (0.475) inside the body of the table and then read out to the sides and top for the number of standard deviations.
STEP 3: Calculate the sample parameters and critical value of the test statistic.
The test statistic is a normal distribution, $Z$, for testing proportions and is:
$Z=\frac{p^{\prime}-p_{0}}{\sqrt{\frac{p_{0} q_{0}}{n}}}=\frac{.53-.50}{\sqrt{\frac{.5(.5)}{100}}}=0.60\nonumber$
For this case, the sample of 100 found 53 first-time borrowers whose loans were smaller than those of the other borrowers. The sample proportion, $p^{\prime} = 53/100= 0.53$. The test question, therefore, is: “Is 0.53 significantly different from .50?” Putting these values into the formula for the test statistic we find that 0.53 is only 0.60 standard deviations away from .50. This is barely off of the mean of the standard normal distribution of zero. There is virtually no difference between the sample proportion and the hypothesized proportion in terms of standard deviations.
STEP 4: Compare the test statistic and the critical value.
The calculated value is well within the critical values of $\pm 1.96$ standard deviations and thus we cannot reject the null hypothesis. To reject the null hypothesis we need significant evidence of difference between the hypothesized value and the sample value. In this case the sample value is very nearly the same as the hypothesized value measured in terms of standard deviations.
STEP 5: Reach a conclusion
The formal conclusion would be “At a 95% level of significance we cannot reject the null hypothesis that 50% of first-time borrowers have the same size loans as other borrowers”.
Less formally we would say that “There is no evidence that one-half of first-time borrowers are significantly different in loan size from other borrowers”. Notice the length to which the conclusion goes to include all of the conditions that are attached to the conclusion. Statisticians, for all the criticism they receive, are careful to be very specific even when this seems trivial. Statisticians cannot say more than they know and the data constrain the conclusion to be within the metes and bounds of the data.

Exercise $11$
A teacher believes that 85% of students in the class will want to go on a field trip to the local zoo. She performs a hypothesis test to determine if the percentage is the same or different from 85%. The teacher samples 50 students and 39 reply that they would want to go to the zoo. For the hypothesis test, use a 1% level of significance.

Example $12$
Suppose a consumer group suspects that the proportion of households that have three or more cell phones is 30%. A cell phone company has reason to believe that the proportion is not 30%. Before they start a big advertising campaign, they conduct a hypothesis test. Their marketing people survey 150 households with the result that 43 of the households have three or more cell phones.

Answer
Here is an abbreviated version of the system to solve hypothesis tests applied to a test on a proportion.
$H_0 : p = 0.3 \nonumber$
$H_a : p \neq 0.3 \nonumber$
$n = 150\nonumber$
$\mathrm{p}^{\prime}=\frac{x}{n}=\frac{43}{150}=0.287\nonumber$
$Z_{c}=\frac{\mathrm{p}^{\prime}-p_{0}}{\sqrt{\frac{p_{0} q_{0}}{n}}}=\frac{0.287-0.3}{\sqrt{\frac{0.3(0.7)}{150}}}=-0.347\nonumber$

Example $13$
The National Institute of Standards and Technology provides exact data on conductivity properties of materials. Following are conductivity measurements for 11 randomly selected pieces of a particular type of glass.
1.11; 1.07; 1.11; 1.07; 1.12; 1.08; .98; .98; 1.02; .95; .95
Is there convincing evidence that the average conductivity of this type of glass is greater than one? Use a significance level of 0.05.

Answer
Let’s follow a four-step process to answer this statistical question.
State the Question: We need to determine if, at a 0.05 significance level, the average conductivity of the selected glass is greater than one. Our hypotheses will be
1. $H_0: \mu \leq 1$
2. $H_a: \mu > 1$
Plan: We are testing a sample mean without a known population standard deviation with less than 30 observations. Therefore, we need to use a Student's-t distribution. Assume the underlying population is normal.
Do the calculations and draw the graph.
State the Conclusions: We cannot accept the null hypothesis. It is reasonable to state that the data supports the claim that the average conductivity level is greater than one.

Example $14$
In a study of 420,019 cell phone users, 172 of the subjects developed brain cancer. Test the claim that cell phone users developed brain cancer at a greater rate than that for non-cell phone users (the rate of brain cancer for non-cell phone users is 0.0340%). Since this is a critical issue, use a 0.005 significance level. Explain why the significance level should be so low in terms of a Type I error.

Answer
1. We need to conduct a hypothesis test on the claimed cancer rate. Our hypotheses will be
1. $H_0: p \leq 0.00034$
2. $H_a: p > 0.00034$
If we commit a Type I error, we are essentially accepting a false claim. Since the claim describes cancer-causing environments, we want to minimize the chances of incorrectly identifying causes of cancer.
2.
We will be testing a sample proportion with $x = 172$ and $n = 420,019$. The sample is sufficiently large because we have $np_0 = 420,019(0.00034) = 142.8$ and $nq_0 = 420,019(0.99966) = 419,876.2$, two independent outcomes, and a fixed hypothesized probability of success $p_0 = 0.00034$. Thus we will be able to generalize our results to the population.
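The text stops after checking the conditions for this example. As an illustration only (not part of the original solution), the sketch below carries the same method through to the test statistic and the one-tailed $p$-value, which can then be compared with $\alpha = 0.005$.

```python
# Sketch finishing the cell-phone example with the proportion test statistic defined above.
import math
from scipy import stats

x, n, p0, alpha = 172, 420_019, 0.00034, 0.005
p_prime = x / n                                        # about 0.00041

z_c = (p_prime - p0) / math.sqrt(p0 * (1 - p0) / n)    # calculated test statistic
p_value = 1 - stats.norm.cdf(z_c)                      # right-tailed: Ha is p > 0.00034

print(f"p' = {p_prime:.5f}, Z_c = {z_c:.2f}, p-value = {p_value:.4f}")
# Decision rule: if the p-value is less than alpha we cannot accept H0; otherwise we cannot reject it.
```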
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/09%3A_Hypothesis_Testing_with_One_Sample/9.04%3A_Full_Hypothesis_Test_Examples.txt
9.3 Distribution Needed for Hypothesis Testing

Sample size                Test statistic
< 30 ($\sigma$ unknown)    $t_{c}=\frac{\overline{X}-\mu_{0}}{s / \sqrt{n}}$
< 30 ($\sigma$ known)      $Z_{c}=\frac{\overline{X}-\mu_{0}}{\sigma / \sqrt{n}}$
> 30 ($\sigma$ unknown)    $Z_{c}=\frac{\overline{X}-\mu_{0}}{s / \sqrt{n}}$
> 30 ($\sigma$ known)      $Z_{c}=\frac{\overline{X}-\mu_{0}}{\sigma / \sqrt{n}}$
Table $6$ Test Statistics for Test of Means, Varying Sample Size, Population Standard Deviation Known or Unknown

9.07: Chapter Key Terms

Binomial Distribution
a discrete random variable (RV) that arises from Bernoulli trials. There are a fixed number, $n$, of independent trials. “Independent” means that the result of any trial (for example, trial 1) does not affect the results of the following trials, and all trials are conducted under the same conditions. Under these circumstances the binomial RV $X$ is defined as the number of successes in $n$ trials. The notation is $X \sim B(n, p)$. The mean is $\mu = np$ and the standard deviation is $\sigma=\sqrt{n p q}$. The probability of exactly $x$ successes in $n$ trials is $P(X=x)=\binom{n}{x} p^{x} q^{n-x}$.

Central Limit Theorem
Given a random variable (RV) with known mean $\mu$ and known standard deviation $\sigma$. We are sampling with size $n$ and we are interested in a new RV, the sample mean, $\overline X$. If the size $n$ of the sample is sufficiently large, then $\overline{X} \sim N\left(\mu, \frac{\sigma}{\sqrt{n}}\right)$. If the size $n$ of the sample is sufficiently large, then the distribution of the sample means will approximate a normal distribution regardless of the shape of the population. The expected value of the mean of the sample means will equal the population mean. The standard deviation of the distribution of the sample means, $\frac{\sigma}{\sqrt{n}}$, is called the standard error of the mean.

Confidence Interval (CI)
an interval estimate for an unknown population parameter. This depends on:
• The desired confidence level.
• Information that is known about the distribution (for example, known standard deviation).
• The sample and its size.

Critical Value
The $t$ or $Z$ value set by the researcher that measures the probability of a Type I error, $\alpha$.

Hypothesis
a statement about the value of a population parameter; in the case of two hypotheses, the statement assumed to be true is called the null hypothesis (notation $H_0$) and the contradictory statement is called the alternative hypothesis (notation $H_a$).

Hypothesis Testing
Based on sample evidence, a procedure for determining whether the hypothesis stated is a reasonable statement and should not be rejected, or is unreasonable and should be rejected.

Normal Distribution
a continuous random variable (RV) with pdf $f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{\frac{-(x-\mu)^{2}}{2 \sigma^{2}}}$, where $\mu$ is the mean of the distribution, and $\sigma$ is the standard deviation, notation: $X \sim N(\mu, \sigma)$. If $\mu = 0$ and $\sigma = 1$, the RV is called the standard normal distribution.

Standard Deviation
a number that is equal to the square root of the variance and measures how far data values are from their mean; notation: s for sample standard deviation and σ for population standard deviation.

Student's t-Distribution
investigated and reported by William S. Gossett in 1908 and published under the pseudonym Student. The major characteristics of the random variable (RV) are:
• It is continuous and assumes any real values.
• The pdf is symmetrical about its mean of zero.
However, it is more spread out and flatter at the apex than the normal distribution.
• It approaches the standard normal distribution as $n$ gets larger.
• There is a "family" of $t$ distributions: every representative of the family is completely defined by the number of degrees of freedom, which is one less than the number of data items.

Test Statistic
The formula that counts the number of standard deviations on the relevant distribution that the estimated parameter is away from the hypothesized value.

Type I Error
The decision is to reject the null hypothesis when, in fact, the null hypothesis is true.

Type II Error
The decision is not to reject the null hypothesis when, in fact, the null hypothesis is false.
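A quick numerical check of the binomial facts quoted above can be done in a few lines. The values of $n$, $p$, and $x$ below are arbitrary illustrations, not from the text.

```python
# Sketch checking the binomial mean, standard deviation, and pmf formula from the key terms.
import math
from scipy import stats

n, p = 20, 0.3
q = 1 - p
mu, sigma = n * p, math.sqrt(n * p * q)               # mean np and standard deviation sqrt(npq)

x = 7
pmf_by_formula = math.comb(n, x) * p**x * q**(n - x)  # P(X = x) from the formula
pmf_by_scipy = stats.binom.pmf(x, n, p)               # same quantity from scipy

print(mu, round(sigma, 3))                            # 6.0 and about 2.049
print(round(pmf_by_formula, 4), round(pmf_by_scipy, 4))   # both about 0.1643
```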
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/09%3A_Hypothesis_Testing_with_One_Sample/9.05%3A_Chapter_Formula_Review.txt
9.3 Distribution Needed for Hypothesis Testing
21. Which two distributions can you use for hypothesis testing for this chapter?
22. Which distribution do you use when you are testing a population mean and the population standard deviation is known? Assume sample size is large. Assume a normal distribution with $n \geq 30$.
23. Which distribution do you use when the standard deviation is not known and you are testing one population mean? Assume a normal distribution, with $n \geq 30$.
24. A population mean is 13. The sample mean is 12.8, and the sample standard deviation is two. The sample size is 20. What distribution should you use to perform a hypothesis test? Assume the underlying population is normal.
25. A population has a mean of 25 and a standard deviation of five. The sample mean is 24, and the sample size is 108. What distribution should you use to perform a hypothesis test?
26. It is thought that 42% of respondents in a taste test would prefer Brand $A$. In a particular test of 100 people, 39% preferred Brand $A$. What distribution should you use to perform a hypothesis test?
27. You are performing a hypothesis test of a single population mean using a Student’s t-distribution. What must you assume about the distribution of the data?
28. You are performing a hypothesis test of a single population mean using a Student’s t-distribution. The data are not from a simple random sample. Can you accurately perform the hypothesis test?
29. You are performing a hypothesis test of a single population proportion. What must be true about the quantities of $np$ and $nq$?
30. You are performing a hypothesis test of a single population proportion. You find out that $np$ is less than five. What must you do to be able to perform a valid hypothesis test?
31. You are performing a hypothesis test of a single population proportion. The data come from which distribution?
9.4 Full Hypothesis Test Examples
32. Assume $H_0: \mu = 9$ and $H_a: \mu < 9$. Is this a left-tailed, right-tailed, or two-tailed test?
33. Assume $H_0: \mu \leq 6$ and $H_a: \mu > 6$. Is this a left-tailed, right-tailed, or two-tailed test?
34. Assume $H_0: p = 0.25$ and $H_a: p \neq 0.25$. Is this a left-tailed, right-tailed, or two-tailed test?
35. Draw the general graph of a left-tailed test.
36. Draw the graph of a two-tailed test.
37. A bottle of water is labeled as containing 16 fluid ounces of water. You believe it is less than that. What type of test would you use?
38. Your friend claims that his mean golf score is 63. You want to show that it is higher than that. What type of test would you use?
39. A bathroom scale claims to be able to identify correctly any weight within a pound. You think that it cannot be that accurate. What type of test would you use?
40. You flip a coin and record whether it shows heads or tails. You know the probability of getting heads is 50%, but you think it is less for this particular coin. What type of test would you use?
41. If the alternative hypothesis has a not equals ($\neq$) symbol, you know to use which type of test?
42. Assume the null hypothesis states that the mean is at least 18. Is this a left-tailed, right-tailed, or two-tailed test?
43. Assume the null hypothesis states that the mean is at most 12. Is this a left-tailed, right-tailed, or two-tailed test?
44. Assume the null hypothesis states that the mean is equal to 88. The alternative hypothesis states that the mean is not equal to 88. Is this a left-tailed, right-tailed, or two-tailed test?
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/09%3A_Hypothesis_Testing_with_One_Sample/9.08%3A_Chapter_Practice.txt
9.1 Null and Alternative Hypotheses
Data from the National Institute of Mental Health. Available online at http://www.nimh.nih.gov/publicat/depression.cfm.
9.4 Full Hypothesis Test Examples
Data from Amit Schitai. Director of Instructional Technology and Distance Learning. LBCC.
Data from Bloomberg Businessweek. Available online at http://www.businessweek.com/news/2011-09-15/nyc-smoking-rate-falls-to-record-low-of-14-bloomberg-says.html.
Data from energy.gov. Available online at http://energy.gov (accessed June 27, 2013).
Data from Gallup®. Available online at www.gallup.com (accessed June 27, 2013).
Data from Growing by Degrees by Allen and Seaman.
Data from La Leche League International. Available online at www.lalecheleague.org/Law/BAFeb01.html.
Data from the American Automobile Association. Available online at www.aaa.com (accessed June 27, 2013).
Data from the American Library Association. Available online at www.ala.org (accessed June 27, 2013).
Data from the Bureau of Labor Statistics. Available online at http://www.bls.gov/oes/current/oes291111.htm.
Data from the Centers for Disease Control and Prevention. Available online at www.cdc.gov (accessed June 27, 2013).
Data from the U.S. Census Bureau, available online at quickfacts.census.gov/qfd/states/00000.html (accessed June 27, 2013).
Data from the United States Census Bureau. Available online at www.census.gov/hhes/socdemo/language/.
Data from Toastmasters International. Available online at toastmasters.org/artisan/deta...eID=429&Page=1.
Data from Weather Underground. Available online at www.wunderground.com (accessed June 27, 2013).
Federal Bureau of Investigations. “Uniform Crime Reports and Index of Crime in Daviess in the State of Kentucky enforced by Daviess County from 1985 to 2005.” Available online at http://www.disastercenter.com/kentucky/crime/3868.htm (accessed June 27, 2013).
“Foothill-De Anza Community College District.” De Anza College, Winter 2006. Available online at research.fhda.edu/factbook/DA...t_da_2006w.pdf.
Johansen, C., J. Boice, Jr., J. McLaughlin, J. Olsen. “Cellular Telephones and Cancer—a Nationwide Cohort Study in Denmark.” Institute of Cancer Epidemiology and the Danish Cancer Society, 93(3):203-7. Available online at http://www.ncbi.nlm.nih.gov/pubmed/11158188 (accessed June 27, 2013).
Rape, Abuse & Incest National Network. “How often does sexual assault occur?” RAINN, 2009. Available online at http://www.rainn.org/get-information...sexual-assault (accessed June 27, 2013).

9.10: Chapter Review
9.1 Null and Alternative Hypotheses
In a hypothesis test, sample data is evaluated in order to arrive at a decision about some type of claim. If certain conditions about the sample are satisfied, then the claim can be evaluated for a population. In a hypothesis test, we:
1. Evaluate the null hypothesis, typically denoted with $H_0$. The null is not rejected unless the hypothesis test shows otherwise. The null statement must always contain some form of equality (=, ≤ or ≥).
2. Always write the alternative hypothesis, typically denoted with $H_a$ or $H_1$, using not equal, less than or greater than symbols, i.e., ($\neq$, <, or >).
3. If we reject the null hypothesis, then we can assume there is enough evidence to support the alternative hypothesis.
4. Never state that a claim is proven true or false. Keep in mind the underlying fact that hypothesis testing is based on probability laws; therefore, we can talk only in terms of non-absolute certainties.
9.2 Outcomes and the Type I and Type II Errors
In every hypothesis test, the outcomes are dependent on a correct interpretation of the data. Incorrect calculations or misunderstood summary statistics can yield errors that affect the results. A Type I error occurs when a true null hypothesis is rejected. A Type II error occurs when a false null hypothesis is not rejected. The probabilities of these errors are denoted by the Greek letters $\alpha$ and $\beta$, for a Type I and a Type II error respectively. The power of the test, $1 – \beta$, quantifies the likelihood that a test will yield the correct result of a true alternative hypothesis being accepted. A high power is desirable.
9.3 Distribution Needed for Hypothesis Testing
In order for a hypothesis test’s results to be generalized to a population, certain requirements must be satisfied. When testing for a single population mean:
1. A Student's $t$-test should be used if the data come from a simple, random sample and the population is approximately normally distributed, or the sample size is large, with an unknown standard deviation.
2. The normal test will work if the data come from a simple, random sample and the population is approximately normally distributed, or the sample size is large.
When testing a single population proportion, use a normal test for a single population proportion if the data come from a simple, random sample, fulfill the requirements for a binomial distribution, and the mean number of successes and the mean number of failures satisfy the conditions: $np > 5$ and $nq > 5$ where $n$ is the sample size, $p$ is the probability of a success, and $q$ is the probability of a failure.
9.4 Full Hypothesis Test Examples
The hypothesis test itself has an established process. This can be summarized as follows:
1. Determine $H_0$ and $H_a$. Remember, they are contradictory.
2. Determine the random variable.
3. Determine the distribution for the test.
4. Draw a graph and calculate the test statistic.
5. Compare the calculated test statistic with the $Z$ critical value determined by the level of significance required by the test and make a decision (cannot reject $H_0$ or cannot accept $H_0$), and write a clear conclusion using English sentences.
9.11: Chapter Solution (Practice Homework)
38. a right-tailed test
40. a left-tailed test
42. This is a left-tailed test.
44. This is a two-tailed test.
47. c
51. b d
55. d
73. c
85.
1. $H_0: \mu \geq 150$; $H_a: \mu < 150$
2. $p$-value = 0.0622
3. alpha = 0.01
4. Do not reject the null hypothesis.
5. At the 1% significance level, there is not enough evidence to conclude that freshmen students study less than 2.5 hours per day, on average.
6. The student academic group’s claim appears to be correct.
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/09%3A_Hypothesis_Testing_with_One_Sample/9.09%3A_Chapter_References.txt
Studies often compare two groups. For example, researchers are interested in the effect aspirin has in preventing heart attacks. Over the last few years, newspapers and magazines have reported various aspirin studies involving two groups. Typically, one group is given aspirin and the other group is given a placebo. Then, the heart attack rate is studied over several years. There are other situations that deal with the comparison of two groups. For example, studies compare various diet and exercise programs. Politicians compare the proportion of individuals from different income brackets who might vote for them. Students are interested in whether SAT or GRE preparatory courses really help raise their scores. Many business applications require comparing two groups. It may be the investment returns of two different investment strategies, or the differences in production efficiency of different management styles. To compare two means or two proportions, you work with two groups. The groups are classified either as independent or matched pairs. Independent groups consist of two samples that are independent, that is, sample values selected from one population are not related in any way to sample values selected from the other population. Matched pairs consist of two samples that are dependent. The parameter tested using matched pairs is the population mean. The parameters tested using independent groups are either population means or population proportions of each group. 10.01: Comparing Two Independent Population Means The comparison of two independent population means is very common and provides a way to test the hypothesis that the two groups differ from each other. Is the night shift less productive than the day shift, are the rates of return from fixed asset investments different from those from common stock investments, and so on? An observed difference between two sample means depends on both the means and the sample standard deviations. Very different means can occur by chance if there is great variation among the individual samples. The test statistic will have to account for this fact. The test comparing two independent population means with unknown and possibly unequal population standard deviations is called the Aspin-Welch $t$-test. The degrees of freedom formula we will see later was developed by Aspin-Welch. When we developed the hypothesis test for the mean and proportions we began with the Central Limit Theorem. We recognized that a sample mean came from a distribution of sample means, and sample proportions came from the sampling distribution of sample proportions. This made our sample parameters, the sample means and sample proportions, into random variables. It was important for us to know the distribution that these random variables came from. The Central Limit Theorem gave us the answer: the normal distribution. Our $Z$ and $t$ statistics came from this theorem. This provided us with the solution to our question of how to measure the probability that a sample mean came from a distribution with a particular hypothesized value of the mean or proportion. In both cases that was the question: what is the probability that the mean (or proportion) from our sample data came from a population distribution with the hypothesized value we are interested in? Now we are interested in whether or not two samples have the same mean. Our question has not changed: Do these two samples come from the same population distribution? To approach this problem we create a new random variable. 
We recognize that we have two sample means, one from each set of data, and thus we have two random variables coming from two unknown distributions. To solve the problem we create a new random variable, the difference between the sample means. This new random variable also has a distribution and, again, the Central Limit Theorem tells us that this new distribution is normally distributed, regardless of the underlying distributions of the original data. A graph may help to understand this concept. Pictured are two distributions of data, $X_1$ and $X_2$, with unknown means and standard deviations. The second panel shows the sampling distribution of the newly created random variable ($\overline{X}_{1}-\overline{X}_{2}$). This distribution is the theoretical distribution of many, many sample means from population 1 minus sample means from population 2. The Central Limit Theorem tells us that this theoretical sampling distribution of differences in sample means is normally distributed, regardless of the distribution of the actual population data shown in the top panel. Because the sampling distribution is normally distributed, we can develop a standardizing formula and calculate probabilities from the standard normal distribution in the bottom panel, the $Z$ distribution. We have seen this same analysis before in Chapter 7, Figure $2$. The Central Limit Theorem, as before, provides us with the standard deviation of the sampling distribution, and further, that the expected value of the mean of the distribution of differences in sample means is equal to the differences in the population means. Mathematically this can be stated: $E\left(\mu_{\overline{x}_{1}}-\mu_{\overline{x}_{2}}\right)=\mu_{1}-\mu_{2}\nonumber$ Because we do not know the population standard deviations, we estimate them using the two sample standard deviations from our independent samples. For the hypothesis test, we calculate the estimated standard deviation, or standard error, of the difference in sample means, $\overline{X}_{1}-\overline{X}_{2}$. $\textbf{The standard error is:}\nonumber$ $\sqrt{\frac{\left(s_{1}\right)^{2}}{n_{1}}+\frac{\left(s_{2}\right)^{2}}{n_{2}}}\nonumber$ We remember that substituting the sample variance for the unknown population variance was the technique we used when building the confidence interval and the test statistic for the hypothesis test for a single mean back in Confidence Intervals and Hypothesis Testing with One Sample. The test statistic (t-score) is calculated as follows: $t_{c}=\frac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\delta_{0}}{\sqrt{\frac{\left(s_{1}\right)^{2}}{n_{1}}+\frac{\left(s_{2}\right)^{2}}{n_{2}}}}\nonumber$ where: • $s_1$ and $s_2$, the sample standard deviations, are estimates of $\sigma_1$ and $\sigma_2$, respectively, and • $\sigma_1$ and $\sigma_2$ are the unknown population standard deviations. • $\overline{x}_{1}$ and $\overline{x}_{2}$ are the sample means. • $\mu_1$ and $\mu_2$ are the unknown population means. The number of degrees of freedom (df) requires a somewhat complicated calculation. The $df$ are not always a whole number. (A computational sketch of this test follows Example $3$ below.)
The test statistic above is approximated by the Student's $t$-distribution with $df$ as follows: Degrees of freedom $df=\frac{\left(\frac{\left(s_{1}\right)^{2}}{n_{1}}+\frac{\left(s_{2}\right)^{2}}{n_{2}}\right)^{2}}{\left(\frac{1}{n_{1}-1}\right)\left(\frac{\left(s_{1}\right)^{2}}{n_{1}}\right)^{2}+\left(\frac{1}{n_{2}-1}\right)\left(\frac{\left(s_{2}\right)^{2}}{n_{2}}\right)^{2}}\nonumber$ When both sample sizes $n_1$ and $n_2$ are 30 or larger, the Student's t approximation is very good. If each sample has more than 30 observations then the degrees of freedom can be calculated as $n_1 + n_2 - 2$. The format of the sampling distribution, differences in sample means, specifies that the format of the null and alternative hypothesis is: $H_{0} : \mu_{1}-\mu_{2}=\delta_{0}\nonumber$ $H_{\mathrm{a}} : \mu_{1}-\mu_{2} \neq \delta_{0}\nonumber$ where $\delta_{0}$ is the hypothesized difference between the two means. If the question is simply “is there any difference between the means?” then $\delta_{0} = 0$ and the null and alternative hypotheses become: $H_{0} : \mu_{1}=\mu_{2}\nonumber$ $H_{\mathrm{a}} : \mu_{1} \neq \mu_{2}\nonumber$ An example of when $\delta_{0}$ might not be zero is when the comparison of the two groups requires a specific difference for the decision to be meaningful. Imagine that you are making a capital investment. You are considering changing from your current model machine to another. You measure the productivity of your machines by the speed they produce the product. It may be that a contender to replace the old model is faster in terms of product throughput, but is also more expensive. The second machine may also have more maintenance costs, setup costs, etc. The null hypothesis would be set up so that the new machine would have to be better than the old one by enough to cover these extra costs in terms of speed and cost of production. This form of the null and alternative hypothesis shows how valuable this particular hypothesis test can be. For most of our work we will be testing simple hypotheses asking if there is any difference between the two distribution means.
Example $1$ INDEPENDENT GROUPS The Kona Iki Corporation produces coconut milk. They take coconuts and extract the milk inside by drilling a hole and pouring the milk into a vat for processing. They have both a day shift (called the B shift) and a night shift (called the G shift) to do this part of the process. They would like to know if the day shift and the night shift are equally efficient in processing the coconuts. A study is done sampling 9 shifts of the G shift and 16 shifts of the B shift. The resulting numbers of hours required to process 100 pounds of coconuts are presented in Table $1$.
        | Sample Size | Average Number of Hours to Process 100 Pounds of Coconuts | Sample Standard Deviation
G Shift | 9 | 2 | 0.866
B Shift | 16 | 3.2 | 1.00
Table $1$
Is there a difference in the mean amount of time for each shift to process 100 pounds of coconuts? Test at the 5% level of significance. Answer Solution 10.1 The population standard deviations are not known and cannot be assumed to equal each other. Let $g$ be the subscript for the G Shift and $b$ be the subscript for the B Shift. Then, $\mu_g$ is the population mean for G Shift and $\mu_b$ is the population mean for B Shift. This is a test of two independent groups, two population means.
Random variable: $\overline{X}_{g}-\overline{X}_{b}$ = difference in the sample mean amount of time that the G Shift and the B Shift take to process the coconuts. $H_{0}: \mu_g = \mu_b$  $H_{0}: \mu_g – \mu_b = 0$ $H_a: \mu_g \neq \mu_b$  $H_a: \mu_g – \mu_b \neq 0$ The words "the same" tell you $H_{0}$ has an "=". Since there are no other words to indicate $H_a$, it could be either faster or slower; this is a two-tailed test. Distribution for the test: Use $t_{df}$ where $df$ is calculated using the $df$ formula for independent groups, two population means above. Using a calculator, $df$ is approximately 18.8462. Graph: $\mathrm{t}_{\mathrm{c}}=\frac{\left(\overline{X}_{1}-\overline{X}_{2}\right)-\delta_{0}}{\sqrt{\frac{S_{1}^{2}}{n_{1}}+\frac{S_{2}^{2}}{n_{2}}}}=-3.01\nonumber$ We next find the critical value on the $t$-table using the degrees of freedom from above. The critical value, 2.093, is found in the .025 column (this is $\alpha/2$) at 19 degrees of freedom. (The convention is to round up the degrees of freedom to make the conclusion more conservative.) Next we calculate the test statistic and mark this on the $t$-distribution graph. Make a decision: Since the calculated $t$-value is in the tail we cannot accept the null hypothesis that there is no difference between the two groups. The means are different. The graph has included the sampling distribution of the differences in the sample means to show how the t-distribution aligns with the sampling distribution data. We see in the top panel that the calculated difference in the two means is -1.2 and the bottom panel shows that this is 3.01 standard deviations from the mean. Typically we do not need to show the sampling distribution graph and can rely on the graph of the test statistic, the t-distribution in this case, to reach our conclusion. Conclusion: At the 5% level of significance, the sample data show there is sufficient evidence to conclude that the mean number of hours that the G Shift takes to process 100 pounds of coconuts is different from the B Shift (mean number of hours for the B Shift is greater than the mean number of hours for the G Shift). NOTE When the sum of the sample sizes is larger than 30 $\left(n_{1}+n_{2}>30\right)$ you can use the normal distribution to approximate the Student's $t$.
Example $2$ A study is done to determine if Company A retains its workers longer than Company B. It is believed that Company A has a higher retention than Company B. The study finds that in a sample of 11 workers at Company A their average time with the company is four years with a standard deviation of 1.5 years. A sample of 9 workers at Company B finds that the average time with the company was 3.5 years with a standard deviation of 1 year. Test this proposition at the 1% level of significance. a. Is this a test of two means or two proportions? Answer Solution 10.2 a. two means because time is a continuous random variable. b. Are the population standard deviations known or unknown? Answer Solution 10.2 b. unknown c. Which distribution do you use to perform the test? Answer Solution 10.2 c. Student's $t$ d. What is the random variable? Answer Solution 10.2 d. $\overline{X}_{A}-\overline{X}_{B}$ e. What are the null and alternate hypotheses? Answer Solution 10.2 e. • $H_{0} : \mu_{A} \leq \mu_{B}$ • $H_{a} : \mu_{A}>\mu_{B}$ f. Is this test right-, left-, or two-tailed? Answer Solution 10.2 f. right one-tailed test g. What is the value of the test statistic? Answer Solution 10.2 g.
$t_{c}=\frac{\left(\overline{X}_{1}-\overline{X}_{2}\right)-\delta_{0}}{\sqrt{\frac{S_{1}^{2}}{n_{1}}+\frac{S_{2}^{2}}{n_{2}}}}=0.89$ h. Can you accept/reject the null hypothesis? Answer Solution 10.2 h. Cannot reject the null hypothesis that there is no difference between the two groups. The test statistic is not in the tail. The critical value of the t distribution is 2.764 with 10 degrees of freedom. This example shows how difficult it is to reject a null hypothesis with a very small sample. The critical values require very large test statistics to reach the tail. i. Conclusion: Answer Solution 10.2 i. At the 1% level of significance, from the sample data, there is not sufficient evidence to conclude that the retention of workers at Company A is longer than at Company B, on average.
Example $3$ An interesting research question is the effect, if any, that different types of teaching formats have on the grade outcomes of students. To investigate this issue one sample of students' grades was taken from a hybrid class and another sample taken from a standard lecture format class. Both classes were for the same subject. The mean course grade in percent for the 35 hybrid students is 74 with a standard deviation of 16. The mean grade of the 40 students from the standard lecture class was 76 percent with a standard deviation of 9. Test at 5% to see if there is any significant difference in the population mean grades between the standard lecture course and the hybrid class. Answer Solution 10.3 We begin by noting that we have two groups, students from a hybrid class and students from a standard lecture format class. We also note that the random variable, what we are interested in, is students' grades, a continuous random variable. We could have asked the research question in a different way and had a binary random variable. For example, we could have studied the percentage of students with a failing grade, or with an A grade. Both of these would be binary and thus a test of proportions and not a test of means as is the case here. Finally, there is no presumption as to which format might lead to higher grades so the hypothesis is stated as a two-tailed test. $H_{0}: \mu_1 = \mu_2$ $H_a: \mu_1 \neq \mu_2$ As would virtually always be the case, we do not know the population variances of the two distributions and thus our test statistic is: $t_{c}=\frac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\delta_{0}}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}=\frac{(74-76)-0}{\sqrt{\frac{16^{2}}{35}+\frac{9^{2}}{40}}}=-0.65\nonumber$ To determine the critical value of the Student's t we need the degrees of freedom. For this case we use: $df = n_1 + n_2 - 2 = 35 + 40 - 2 = 73$. This is large enough to consider it the normal distribution, thus $t_{\alpha /2} = 1.96$. Again as always we determine if the calculated value is in the tail determined by the critical value. In this case we do not even need to look up the critical value: the calculated value of the difference in these two average grades is not even one standard deviation apart. Certainly not in the tail. Conclusion: Cannot reject the null at $\bf{\alpha = 5\%}$. Therefore, evidence does not exist to prove that the grades in hybrid and standard classes differ.
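For readers who want to check these examples numerically, here is a small sketch, assuming scipy is available, that reproduces the test statistic of Example $3$ from its summary statistics and computes the degrees of freedom from the formula given earlier in this section. None of the numbers are new; they are copied from the example above.

```python
# Aspin-Welch t-test from summary statistics (Example 10.3):
# hybrid: mean 74, s = 16, n = 35; standard lecture: mean 76, s = 9, n = 40.
from scipy import stats

x1, s1, n1 = 74, 16, 35   # hybrid class
x2, s2, n2 = 76, 9, 40    # standard lecture class

# equal_var=False gives the unpooled (Welch) version used in this section.
t_stat, p_value = stats.ttest_ind_from_stats(x1, s1, n1, x2, s2, n2, equal_var=False)

# Degrees of freedom from the df formula earlier in this section.
v1, v2 = s1**2 / n1, s2**2 / n2
df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

print(f"t = {t_stat:.2f}, df = {df:.1f}, two-tailed p = {p_value:.3f}")
# t is about -0.65, matching the hand calculation, so we cannot reject H0.
```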
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/10%3A_Hypothesis_Testing_with_Two_Samples/10.00%3A_Introduction.txt
Cohen's $\bf{d}$ is a measure of "effect size" based on the differences between two means. Cohen’s $d$, named for United States statistician Jacob Cohen, measures the relative strength of the differences between the means of two populations based on sample data. The calculated value of effect size is then compared to Cohen’s standards of small, medium, and large effect sizes.
Size of effect | $d$
Small | 0.2
Medium | 0.5
Large | 0.8
Table 10.2 Cohen's Standard Effect Sizes
Cohen's $d$ is the measure of the difference between two means divided by the pooled standard deviation: $d=\frac{\overline{x}_{1}-\overline{x}_{2}}{s_{\text { pooled }}}$ where $s_{p o o l e d}=\sqrt{\frac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{n_{1}+n_{2}-2}}$ It is important to note that Cohen's $d$ does not provide a level of confidence as to the magnitude of the size of the effect comparable to the other tests of hypothesis we have studied. The sizes of the effects are simply indicative. For the Company A and Company B retention data in the example above, $d = 0.384$. The effect is small because 0.384 is between Cohen’s value of 0.2 for small effect size and 0.5 for medium effect size. The size of the difference of the means for the two companies is small, indicating that there is not a significant difference between them.
10.03: Test for Differences in Means- Assuming Equal Population Variances Typically we can never expect to know any of the population parameters, mean, proportion, or standard deviation. When testing hypotheses concerning differences in means we are faced with the difficulty of two unknown variances that play a critical role in the test statistic. We have been substituting the sample variances just as we did when testing hypotheses for a single mean. And as we did before, we used a Student's t to compensate for this lack of information on the population variance. There may be situations, however, when we do not know the population variances, but we can assume that the two populations have the same variance. If this is true then the pooled sample variance will be smaller than the individual sample variances. This will give more precise estimates and reduce the probability of discarding a good null. The null and alternative hypotheses remain the same, but the test statistic changes to: $t_{c}=\frac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\delta_{0}}{\sqrt{S_{p}^{2}\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)}}\nonumber$ where $S_{p}^{2}$ is the pooled variance given by the formula: $S_{p}^{2}=\frac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{n_{1}+n_{2}-2}\nonumber$ The test statistic is clearly in the tail, 2.31 is larger than the critical value of 1.703, and therefore we cannot maintain the null hypothesis. Thus, we conclude that there is significant evidence at the 95% level of confidence that the new medicine produces the effect desired.
10.04: Comparing Two Independent Population Proportions When conducting a hypothesis test that compares two independent population proportions, the following characteristics should be present: 1. The two samples are random samples and are independent. 2. The number of successes is at least five, and the number of failures is at least five, for each of the samples. 3. Growing literature states that the population must be at least ten or even perhaps 20 times the size of the sample. This keeps each population from being over-sampled and causing biased results. Comparing two proportions, like comparing two means, is common.
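Returning briefly to the effect-size discussion above: the pooled standard deviation and Cohen's $d$ take only a few lines to compute. The sketch below, plain Python with no external data, plugs in the Company A and Company B retention statistics from the earlier example (means 4 and 3.5, standard deviations 1.5 and 1, sample sizes 11 and 9), which is where the value of 0.384 quoted above comes from.

```python
# Pooled standard deviation and Cohen's d from summary statistics.
# Numbers are from the Company A / Company B retention example in this chapter.
import math

def cohens_d(x1, s1, n1, x2, s2, n2):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (x1 - x2) / s_pooled, s_pooled

d, s_pooled = cohens_d(4, 1.5, 11, 3.5, 1, 9)
print(f"pooled s = {s_pooled:.3f}, Cohen's d = {d:.3f}")   # d is about 0.384: a small effect
```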
If two estimated proportions are different, it may be due to a difference in the populations or it may be due to chance in the sampling. A hypothesis test can help determine if a difference in the estimated proportions reflects a difference in the two population proportions. Like the case of differences in sample means, we construct a sampling distribution for differences in sample proportions: $\left(p_{A}^{\prime}-p_{B}^{\prime}\right)$ where $p_{A}^{\prime}=\frac{X_{A}}{n_{A}}$ and $p_{B}^{\prime}=\frac{X_{B}}{n_{B}}$ are the sample proportions for the two sets of data in question. $X_A$ and $X_B$ are the number of successes in each sample group respectively, and $n_A$ and $n_B$ are the respective sample sizes from the two groups. Again we rely on the Central Limit Theorem, which tells us that this sampling distribution of differences in sample proportions is approximately normal. Generally, the null hypothesis allows for the test of a difference of a particular value, $\delta_{0}$, just as we did for the case of differences in means. $H_{0} : p_{1}-p_{2}=\delta_{0}\nonumber$ $H_{1} : p_{1}-p_{2} \neq \delta_{0}\nonumber$ Most common, however, is the test that the two proportions are the same. That is, $H_{0} : p_{\mathrm{A}}=p_{B}\nonumber$ $H_{a} : p_{\mathrm{A}} \neq p_{B}\nonumber$ To conduct the test, we use a pooled proportion, $p_c$. $\textbf{The pooled proportion is calculated as follows:}\nonumber$ $p_{c}=\frac{x_{A}+x_{B}}{n_{A}+n_{B}}\nonumber$ $\textbf{The test statistic (z-score) is:}\nonumber$ $Z_{c}=\frac{\left(p_{A}^{\prime}-p_{B}^{\prime}\right)-\delta_{0}}{\sqrt{p_{c}\left(1-p_{c}\right)\left(\frac{1}{n_{A}}+\frac{1}{n_{B}}\right)}}\nonumber$ where $\delta_{0}$ is the hypothesized difference between the two proportions and $p_c$ is the pooled proportion from the formula above.
Example $6$ A bank has recently acquired a new branch and thus has customers in this new territory. They are interested in the default rate in their new territory. They wish to test the hypothesis that the default rate is different from their current customer base. They sample 200 files in area A, their current customers, and find that 20 have defaulted. In area B, the new customers, another sample of 200 files shows 12 have defaulted on their loans. At a 10% level of significance can we say that the default rates are the same or different? Answer Solution 10.6 This is a test of proportions. We know this because the underlying random variable is binary, default or not default. Further, we know it is a test of differences in proportions because we have two sample groups, the current customer base and the newly acquired customer base. Let A and B be the subscripts for the two customer groups. Then $p_A$ and $p_B$ are the two population proportions we wish to test. Random Variable: $P_{A}^{\prime}-P_{B}^{\prime}$ = difference in the proportions of customers who defaulted in the two groups. $H_{0} : p_{A}=p_{B}$ $H_{a} : p_{A} \neq p_{B}$ The words "is a difference" tell you the test is two-tailed. Distribution for the test: Since this is a test of two binomial population proportions, the distribution is normal: $p_{c}=\frac{x_{A}+x_{B}}{n_{A}+n_{B}}=\frac{20+12}{200+200}=0.08$ $1-p_{c}=0.92$ $\left(p^{\prime}_{A}-p^{\prime}_{B}\right)=0.04$ follows an approximate normal distribution. Estimated proportion for group A: $p^{\prime}_{A}=\frac{x_{A}}{n_{A}}=\frac{20}{200}=0.1$ Estimated proportion for group B: $p^{\prime}_{B}=\frac{x_{B}}{n_{B}}=\frac{12}{200}=0.06$ The estimated difference between the two groups is: $p_{A}^{\prime}-p_{B}^{\prime}=0.1-0.06=0.04$.
$Z_{c}=\frac{\left(p_{A}^{\prime}-p_{B}^{\prime}\right)-\delta_{0}}{\sqrt{p_{c}\left(1-p_{c}\right)\left(\frac{1}{n_{A}}+\frac{1}{n_{B}}\right)}}=1.47\nonumber$ The calculated test statistic is 1.47 and is not in the tail of the distribution. Make a decision: Since the calculated test statistic is not in the tail of the distribution, we cannot reject $H_0$. Conclusion: At a 10% level of significance, from the sample data, there is not sufficient evidence to conclude that there is a difference between the proportions of customers who defaulted in the two groups.
Exercise $6$ Two types of valves are being tested to determine if there is a difference in pressure tolerances. Fifteen out of a random sample of 100 of Valve A cracked under 4,500 psi. Six out of a random sample of 100 of Valve B cracked under 4,500 psi. Test at a 5% level of significance.
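The pooled two-proportion test in Example $6$ can be verified with a short script. This is a sketch assuming scipy is available; the counts are exactly those given in the example.

```python
# Pooled two-proportion z-test, using the numbers from Example 10.6
# (area A: 20 defaults out of 200; area B: 12 defaults out of 200).
import math
from scipy.stats import norm

x_a, n_a = 20, 200
x_b, n_b = 12, 200

p_a, p_b = x_a / n_a, x_b / n_b                        # sample proportions 0.10 and 0.06
p_c = (x_a + x_b) / (n_a + n_b)                        # pooled proportion 0.08
se = math.sqrt(p_c * (1 - p_c) * (1 / n_a + 1 / n_b))  # standard error under H0
z = (p_a - p_b) / se
p_value = 2 * (1 - norm.cdf(abs(z)))                   # two-tailed p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")
# z is about 1.47, inside the 10% two-tailed critical values of +/-1.645,
# so we cannot reject H0 that the default rates are equal.
```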
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/10%3A_Hypothesis_Testing_with_Two_Samples/10.02%3A_Cohen%27s_Standards_for_Small_Medium_and_Large_Effect_Sizes.txt
Even though this situation is not likely (knowing the population standard deviations is very unlikely), the following example illustrates hypothesis testing for independent means with known population standard deviations. The sampling distribution for the difference between the means is normal in accordance with the central limit theorem. The random variable is $\overline{X_{1}}-\overline{X_{2}}$. The normal distribution has the following format: $\textbf{The standard deviation is:}\nonumber$ $\sqrt{\frac{\left(\sigma_{1}\right)^{2}}{n_{1}}+\frac{\left(\sigma_{2}\right)^{2}}{n_{2}}}\nonumber$ $\textbf{The test statistic (z-score) is:}\nonumber$ $Z_{c}=\frac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\delta_{0}}{\sqrt{\frac{\left(\sigma_{1}\right)^{2}}{n_{1}}+\frac{\left(\sigma_{2}\right)^{2}}{n_{2}}}}\nonumber$ At the 5% level of significance, from the sample data, there is not sufficient evidence to conclude that the mean age of Democratic senators is greater than the mean age of the Republican senators.
10.06: Matched or Paired Samples In most cases of economic or business data we have little or no control over the process of how the data are gathered. In this sense the data are not the result of a planned controlled experiment. In some cases, however, we can develop data that are part of a controlled experiment. This situation occurs frequently in quality control situations. Imagine that the production rates of two machines built to the same design, but at different manufacturing plants, are being tested for differences in some production metric such as speed of output or meeting some production specification such as strength of the product. The test is the same in format as what we have been testing, but here we can have matched pairs for which we can test if differences exist. Each observation has its matched pair against which differences are calculated. First, the differences in the metric to be tested between the two lists of observations must be calculated, and this is typically labeled with the letter "d." Then, the average of these matched differences, $\overline{X}_{d}$, is calculated, as is its standard deviation, $S_d$. We expect that the standard deviation of the differences of the matched pairs will be smaller than for unmatched pairs because, presumably, fewer differences exist due to the correlation between the two groups. When using a hypothesis test for matched or paired samples, the following characteristics may be present: 1. In a hypothesis test for matched or paired samples, subjects are matched in pairs and differences are calculated. The differences are the data. The population mean for the differences, $\mu_d$, is then tested using a Student's-t test for a single population mean with $n – 1$ degrees of freedom, where $n$ is the number of differences, that is, the number of pairs, not the number of observations. $\textbf{The null and alternative hypotheses for this test are:}\nonumber$ $H_{0} : \mu_{d} = 0\nonumber$ $H_{a} : \mu_{d} \neq 0\nonumber$ $\textbf{The test statistic is:}\nonumber$ $t_{c}=\frac{\overline{x}_{d}-\mu_{d}}{\left(\frac{s_{d}}{\sqrt{n}}\right)}\nonumber$ At a 5% level of significance, from the sample data, there is not sufficient evidence to conclude that the strength development class helped to make the players stronger, on average.
10.07: Homework Use the following information to answer the next ten exercises: indicate which of the following choices best identifies the hypothesis test. Table $25$ 115.
University of Michigan researchers reported in the Journal of the National Cancer Institute that quitting smoking is especially beneficial for those under age 49. In this American Cancer Society study, the risk (probability) of dying of lung cancer was about the same as for those who had never smoked. 116. Lesley E. Tan investigated the relationship between left-handedness vs. right-handedness and motor competence in preschool children. Random samples of 41 left-handed preschool children and 41 right-handed preschool children were given several tests of motor skills to determine if there is evidence of a difference between the children based on this experiment. The experiment produced the means and standard deviations shown in Table \(26\). Determine the appropriate test and best distribution to use for that test.
 | Left-handed | Right-handed
Sample size | 41 | 41
Sample mean | 97.5 | 98.1
Sample standard deviation | 17.5 | 19.2
Table \(26\)
1. This is: 1. a test of two independent means. 2. a test of two proportions. 3. a test of a single mean. 4. a test of a single proportion.
10.08: Chapter Formula Review
10.1 Comparing Two Independent Population Means Standard error: $S E=\sqrt{\frac{\left(s_{1}\right)^{2}}{n_{1}}+\frac{\left(s_{2}\right)^{2}}{n_{2}}}$ Test statistic (t-score): $t_{c}=\frac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\delta_{0}}{\sqrt{\frac{\left(s_{1}\right)^{2}}{n_{1}}+\frac{\left(s_{2}\right)^{2}}{n_{2}}}}$ Degrees of freedom: $d f=\frac{\left(\frac{\left(s_{1}\right)^{2}}{n_{1}}+\frac{\left(s_{2}\right)^{2}}{n_{2}}\right)^{2}}{\left(\frac{1}{n_{1}-1}\right)\left(\frac{\left(s_{1}\right)^{2}}{n_{1}}\right)^{2}+\left(\frac{1}{n_{2}-1}\right)\left(\frac{\left(s_{2}\right)^{2}}{n_{2}}\right)^{2}}$ where: $s_1$ and $s_2$ are the sample standard deviations, and $n_1$ and $n_2$ are the sample sizes. $\overline{x}_{1}$ and $\overline{x}_{2}$ are the sample means.
10.2 Cohen's Standards for Small, Medium, and Large Effect Sizes Cohen’s $d$ is the measure of effect size: $d=\frac{\overline{x}_{1}-\overline{x}_{2}}{s_{\text {pooled}}}$ where $s_{\text {pooled}}=\sqrt{\frac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{n_{1}+n_{2}-2}}$
10.3 Test for Differences in Means: Assuming Equal Population Variances $t_{c}=\frac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\delta_{0}}{\sqrt{S_{p}^{2}\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)}}\nonumber$ where $S_{p}^{2}$ is the pooled variance given by the formula: $S_{p}^{2}=\frac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{n_{1}+n_{2}-2}\nonumber$
10.4 Comparing Two Independent Population Proportions Pooled Proportion: $p_{c}=\frac{x_{A}+x_{B}}{n_{A}+n_{B}}$ Test Statistic (z-score): $Z_{c}=\frac{\left(p^{\prime}_{A}-p^{\prime}_{B}\right)}{\sqrt{p_{c}\left(1-p_{c}\right)\left(\frac{1}{n_{A}}+\frac{1}{n_{B}}\right)}}$ where $p_{A}^{\prime}$ and $p_{B}^{\prime}$ are the sample proportions, $p_A$ and $p_B$ are the population proportions, $p_c$ is the pooled proportion, and $n_A$ and $n_B$ are the sample sizes.
10.5 Two Population Means with Known Standard Deviations Test Statistic (z-score): $Z_{c}=\frac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\delta_{0}}{\sqrt{\frac{\left(\sigma_{1}\right)^{2}}{n_{1}}+\frac{\left(\sigma_{2}\right)^{2}}{n_{2}}}}$ where: $\sigma_1$ and $\sigma_2$ are the known population standard deviations. $n_1$ and $n_2$ are the sample sizes. $\overline{x}_{1}$ and $\overline{x}_{2}$ are the sample means. $\mu_1$ and $\mu_2$ are the population means.
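As a companion to the 10.5 formula just above, here is a minimal sketch of the two-sample z-test with known population standard deviations. Every number in it is hypothetical and serves only to show where the pieces of the formula go; scipy is assumed to be available for the normal cumulative probability.

```python
# Two-sample z-test with known population standard deviations (section 10.5 formula).
# All values below are hypothetical, for illustration only.
import math
from scipy.stats import norm

x1, sigma1, n1 = 61.2, 10.6, 30   # group 1: sample mean, known sigma, sample size
x2, sigma2, n2 = 58.4, 9.9, 30    # group 2

z = (x1 - x2) / math.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
p_right = 1 - norm.cdf(z)         # right-tailed p-value for Ha: mu1 > mu2
print(f"z = {z:.2f}, right-tailed p = {p_right:.4f}")
```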
10.6 Matched or Paired Samples Test Statistic (t-score): $t_{c}=\frac{\overline{x}_{d}-\mu_{d}}{\left(\frac{s_{d}}{\sqrt{n}}\right)}$ where: $\overline{x}_{d}$ is the mean of the sample differences. $\mu_d$ is the mean of the population differences. $s_d$ is the sample standard deviation of the differences. $n$ is the sample size.
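The matched-pairs statistic in 10.6 is simply the one-sample t-test applied to the differences. The sketch below uses made-up before/after measurements, not taken from any exercise in this chapter, to show the mechanics; scipy is assumed to be available.

```python
# Matched or paired samples t-test on hypothetical before/after data.
# The test is the one-sample t-test applied to the differences, with n - 1 df,
# where n is the number of pairs.
import numpy as np
from scipy import stats

before = np.array([205, 198, 187, 210, 195, 202, 190, 188])   # hypothetical measurements
after  = np.array([199, 194, 189, 201, 190, 200, 184, 186])

d = after - before
t_stat, p_two_sided = stats.ttest_1samp(d, popmean=0)   # equivalent to stats.ttest_rel(after, before)
print(f"mean difference = {d.mean():.2f}, t = {t_stat:.2f}, two-tailed p = {p_two_sided:.4f}")
```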
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/10%3A_Hypothesis_Testing_with_Two_Samples/10.05%3A_Two_Population_Means_with_Known_Standard_Deviations.txt
10.1 Comparing Two Independent Population Means 64. The mean number of English courses taken in a two–year time period by male and female college students is believed to be about the same. An experiment is conducted and data are collected from 29 males and 16 females. The males took an average of three English courses with a standard deviation of 0.8. The females took an average of four English courses with a standard deviation of 1.0. Are the means statistically the same? 65. A student at a four-year college claims that mean enrollment at four–year colleges is higher than at two–year colleges in the United States. Two surveys are conducted. Of the 35 two–year colleges surveyed, the mean enrollment was 5,068 with a standard deviation of 4,777. Of the 35 four-year colleges surveyed, the mean enrollment was 5,466 with a standard deviation of 8,191. 66. At Rachel’s 11th birthday party, eight girls were timed to see how long (in seconds) they could hold their breath in a relaxed position. After a two-minute rest, they timed themselves while jumping. The girls thought that the mean difference between their jumping and relaxed times would be zero. Test their hypothesis.
Relaxed time (seconds) | Jumping time (seconds)
26 | 21
47 | 40
30 | 28
22 | 21
23 | 25
45 | 43
37 | 35
29 | 32
Table 10.14
67. Mean entry-level salaries for college graduates with mechanical engineering degrees and electrical engineering degrees are believed to be approximately the same. A recruiting office thinks that the mean mechanical engineering salary is actually lower than the mean electrical engineering salary. The recruiting office randomly surveys 50 entry level mechanical engineers and 60 entry level electrical engineers. Their mean salaries were $46,100 and $46,700, respectively. Their standard deviations were $3,450 and $4,210, respectively. Conduct a hypothesis test to determine if you agree that the mean entry-level mechanical engineering salary is lower than the mean entry-level electrical engineering salary. 68. Marketing companies have collected data implying that teenage girls use more ring tones on their cellular phones than teenage boys do. In one particular study of 40 randomly chosen teenage girls and boys (20 of each) with cellular phones, the mean number of ring tones for the girls was 3.2 with a standard deviation of 1.5. The mean for the boys was 1.7 with a standard deviation of 0.8. Conduct a hypothesis test to determine if the means are approximately the same or if the girls’ mean is higher than the boys’ mean. Use the information from Appendix C: Data Sets to answer the next four exercises. 69. Using the data from Lap 1 only, conduct a hypothesis test to determine if the mean time for completing a lap in races is the same as it is in practices. 70. Repeat the test in Table $16$. Test at the 1% level of significance.
 | Number who are obese | Sample size
Men | 42,769 | 155,525
Women | 67,169 | 248,775
Table 10.16
87. Two computer users were discussing tablet computers. A higher proportion of people ages 16 to 29 use tablets than the proportion of people age 30 and older. Table $17$ details the number of tablet owners for each age group. Test at the 1% level of significance.
 | 16–29 year olds | 30 years old and older
Own a tablet | 69 | 231
Sample size | 628 | 2,309
Table $17$
88. A group of friends debated whether more men use smartphones than women. They consulted a research study of smartphone use among adults. The results of the survey indicate that of the 973 men randomly sampled, 379 use smartphones.
For women, 404 of the 1,304 who were randomly sampled use smartphones. Test at the 5% level of significance. 89. While her husband spent 2½ hours picking out new speakers, a statistician decided to determine whether the percent of men who enjoy shopping for electronic equipment is higher than the percent of women who enjoy shopping for electronic equipment. The population was Saturday afternoon shoppers. Out of 67 men, 24 said they enjoyed the activity. Eight of the 24 women surveyed claimed to enjoy the activity. Interpret the results of the survey. 90. We are interested in whether children’s educational computer software costs less, on average, than children’s entertainment software. Thirty-six educational software titles were randomly picked from a catalog. The mean cost was $31.14 with a standard deviation of $4.69. Thirty-five entertainment software titles were randomly picked from the same catalog. The mean cost was $33.86 with a standard deviation of $10.87. Decide whether children’s educational software costs less, on average, than children’s entertainment software. 91. Joan Nguyen recently claimed that the proportion of college-age males with at least one pierced ear is as high as the proportion of college-age females. She conducted a survey in her classes. Out of 107 males, 20 had at least one pierced ear. Out of 92 females, 47 had at least one pierced ear. Do you believe that the proportion of males has reached the proportion of females? 92. "To Breakfast or Not to Breakfast?" by Richard Ayore In the American society, birthdays are one of those days that everyone looks forward to. People of different ages and peer groups gather to mark the 18th, 20th, …, birthdays. During this time, one looks back to see what he or she has achieved for the past year and also focuses ahead for more to come. If, by any chance, I am invited to one of these parties, my experience is always different. Instead of dancing around with my friends while the music is booming, I get carried away by memories of my family back home in Kenya. I remember the good times I had with my brothers and sister while we did our daily routine. Every morning, I remember we went to the shamba (garden) to weed our crops. I remember one day arguing with my brother as to why he always remained behind just to join us an hour later. In his defense, he said that he preferred waiting for breakfast before he came to weed. He said, “This is why I always work more hours than you guys!” And so, to prove him wrong or right, we decided to give it a try. One day we went to work as usual without breakfast, and recorded the time we could work before getting tired and stopping. On the next day, we all ate breakfast before going to work. We recorded how long we worked again before getting tired and stopping. Of interest was our mean increase in work time. Though not sure, my brother insisted that it was more than two hours. Using the data in Table $18$, solve our problem.
Work hours with breakfast | Work hours without breakfast
8 | 6
7 | 5
9 | 5
5 | 4
9 | 7
8 | 7
10 | 7
7 | 5
6 | 6
9 | 5
Table $18$
NOTE If you are using a Student's $t$-distribution for one of the following homework problems, including for paired data, you may assume that the underlying population is normally distributed. (When using these tests in a real situation, you must first prove that assumption, however.) 93. A study is done to determine if students in the California state university system take longer to graduate, on average, than students enrolled in private universities.
One hundred students from both the California state university system and private universities are surveyed. Suppose that from years of research, it is known that the population standard deviations are 1.5811 years and 1 year, respectively. The following data are collected. The California state university system students took on average 4.5 years with a standard deviation of 0.8. The private university students took on average 4.1 years with a standard deviation of 0.3. 94. Parents of teenage boys often complain that auto insurance costs more, on average, for teenage boys than for teenage girls. A group of concerned parents examines a random sample of insurance bills. The mean annual cost for 36 teenage boys was $679. For 23 teenage girls, it was $559. From past years, it is known that the population standard deviation for each group is $180. Determine whether or not you believe that the mean cost for auto insurance for teenage boys is greater than that for teenage girls. 95. A group of transfer bound students wondered if they will spend the same mean amount on texts and supplies each year at their four-year university as they have at their community college. They conducted a random survey of 54 students at their community college and 66 students at their local four-year university. The sample means were $947 and $1,011, respectively. The population standard deviations are known to be $254 and $87, respectively. Conduct a hypothesis test to determine if the means are statistically the same. 96. Some manufacturers claim that non-hybrid sedan cars have a lower mean miles-per-gallon (mpg) than hybrid ones. Suppose that consumers test 21 hybrid sedans and get a mean of 31 mpg with a standard deviation of seven mpg. Thirty-one non-hybrid sedans get a mean of 22 mpg with a standard deviation of four mpg. Suppose that the population standard deviations are known to be six and three, respectively. Conduct a hypothesis test to evaluate the manufacturers' claim. 97. A baseball fan wanted to know if there is a difference between the number of games played in a World Series when the American League won the series versus when the National League won the series. From 1922 to 2012, the population standard deviation of games won by the American League was 1.14, and the population standard deviation of games won by the National League was 1.11. Of 19 randomly selected World Series games won by the American League, the mean number of games won was 5.76. The mean number of 17 randomly selected games won by the National League was 5.42. Conduct a hypothesis test. 98. One of the questions in a study of marital satisfaction of dual-career couples was to rate the statement “I’m pleased with the way we divide the responsibilities for childcare.” The ratings went from one (strongly agree) to five (strongly disagree). Table $19$ contains ten of the paired responses for husbands and wives. Conduct a hypothesis test to see if the mean difference in the husband’s versus the wife’s satisfaction level is negative (meaning that, within the partnership, the husband is happier than the wife).
Wife’s score | 2 | 2 | 3 | 3 | 4 | 2 | 1 | 1 | 2 | 4
Husband’s score | 2 | 2 | 1 | 3 | 2 | 1 | 1 | 1 | 2 | 4
Table 10.19
10.6 Matched or Paired Samples 99. Ten individuals went on a low–fat diet for 12 weeks to lower their cholesterol. The data are recorded in Table $20$. Do you think that their cholesterol levels were significantly lowered?
Starting cholesterol level | Ending cholesterol level
140 | 140
220 | 230
110 | 120
240 | 220
200 | 190
180 | 150
190 | 200
360 | 300
280 | 300
260 | 240
Table $20$
Use the following information to answer the next two exercises. A new AIDS prevention drug was tried on a group of 224 HIV positive patients. Forty-five patients developed AIDS after four years. In a control group of 224 HIV positive patients, 68 developed AIDS after four years. We want to test whether the method of treatment reduces the proportion of patients that develop AIDS after four years or if the proportions of the treated group and the untreated group stay the same. Let the subscript $t$ = treated patient and $ut$ = untreated patient. 100. The appropriate hypotheses are: 1. Use the following information to answer the next two exercises. An experiment is conducted to show that blood pressure can be consciously reduced in people trained in a “biofeedback exercise program.” Six subjects were randomly selected and blood pressure measurements were recorded before and after the training. The difference between blood pressures was calculated (after - before) producing the following results: $\overline{x}_{d}=-10.2$ $s_{d}=8.4$. Using the data, test the hypothesis that the blood pressure has decreased after the training. 101. The distribution for the test is: 1. The correct decision is: 1. Table $23$ 105. A politician asked his staff to determine whether the underemployment rate in the northeast decreased from 2011 to 2012. The results are in Table $24$.
Northeastern states | 2011 | 2012
Connecticut | 17.3 | 16.4
Delaware | 17.4 | 13.7
Maine | 19.3 | 16.1
Maryland | 16.0 | 15.5
Massachusetts | 17.6 | 18.2
New Hampshire | 15.4 | 13.5
New Jersey | 19.2 | 18.7
New York | 18.5 | 18.7
Ohio | 18.2 | 18.8
Pennsylvania | 16.5 | 16.9
Rhode Island | 20.7 | 22.4
Vermont | 14.7 | 12.3
West Virginia | 15.5 | 17.3
Table $24$
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/10%3A_Hypothesis_Testing_with_Two_Samples/10.09%3A_Chapter_Homework.txt
Cohen’s d a measure of effect size based on the differences between two means. If \(d\) is between 0 and 0.2 then the effect is small. If \(d\) approaches 0.5, then the effect is medium, and if \(d\) approaches 0.8, then it is a large effect.
Independent Groups two samples that are selected from two populations, and the values from one population are not related in any way to the values from the other population.
Matched Pairs two samples that are dependent. Differences between a before and after scenario are tested by testing one population mean of differences.
Pooled Variance a weighted average of two variances that can then be used when calculating standard error.
10.11: Chapter Practice
10.1 Comparing Two Independent Population Means Use the following information to answer the next 15 exercises: Indicate if the hypothesis test is for 1. Use the following information to answer the next three exercises: A study is done to determine which of two soft drinks has more sugar. There are 13 cans of Beverage A in a sample and six cans of Beverage B. The mean amount of sugar in Beverage A is 36 grams with a standard deviation of 0.6 grams. The mean amount of sugar in Beverage B is 38 grams with a standard deviation of 0.8 grams. The researchers believe that Beverage B has more sugar than Beverage A, on average. Both populations have normal distributions. 16. Are standard deviations known or unknown? 17. What is the random variable? 18. Is this a one-tailed or two-tailed test? 19. Is this a test of means or proportions? 20. State the null and alternative hypotheses. Table $8$ 44. What is the random variable? 45. State the null and alternative hypotheses. 46. What is the test statistic? 47. At the 1% significance level, what is your conclusion?
Plant group | Sample mean height of plants (inches) | Population standard deviation
Food | 16 | 2.5
No food | 14 | 1.5
Table \(9\)
48. Is the population standard deviation known or unknown? 49. State the null and alternative hypotheses. 50. At the 1% significance level, what is your conclusion? Use the following information to answer the next five exercises. Two metal alloys are being considered as material for ball bearings. The mean melting point of the two alloys is to be compared. 15 pieces of each metal are being tested. Both populations have normal distributions. The following table is the result. It is believed that Alloy Zeta has a different melting point.
 | Sample mean melting temperatures (°F) | Population standard deviation
Alloy Gamma | 800 | 95
Alloy Zeta | 900 | 105
Table 10.10
51. State the null and alternative hypotheses. 52. Is this a right-, left-, or two-tailed test? 53. At the 1% significance level, what is your conclusion?
10.6 Matched or Paired Samples Use the following information to answer the next five exercises. A study was conducted to test the effectiveness of a software patch in reducing system failures over a six-month period. Results for randomly selected installations are shown in Table \(11\). The “before” value is matched to an “after” value, and the differences are calculated. The differences have a normal distribution. Test at the 1% significance level.
Installation | A | B | C | D | E | F | G | H
Before | 3 | 6 | 4 | 2 | 5 | 8 | 2 | 6
After | 1 | 5 | 2 | 0 | 1 | 0 | 2 | 2
Table 10.11
54. What is the random variable? 55. State the null and alternative hypotheses. 56. What conclusion can you draw about the software patch? Use the following information to answer the next five exercises. A study was conducted to test the effectiveness of a juggling class.
Before the class started, six subjects juggled as many balls as they could at once. After the class, the same six subjects juggled as many balls as they could. The differences in the number of balls are calculated. The differences have a normal distribution. Test at the 1% significance level.
Subject | A | B | C | D | E | F
Before | 3 | 4 | 3 | 2 | 4 | 5
After | 4 | 5 | 6 | 4 | 5 | 7
Table \(12\)
57. State the null and alternative hypotheses. 58. What is the sample mean difference? 59. What conclusion can you draw about the juggling class? Use the following information to answer the next five exercises. A doctor wants to know if a blood pressure medication is effective. Six subjects have their blood pressures recorded. After twelve weeks on the medication, the same six subjects have their blood pressure recorded again. For this test, only systolic pressure is of concern. Test at the 1% significance level.
Patient | A | B | C | D | E | F
Before | 161 | 162 | 165 | 162 | 166 | 171
After | 158 | 159 | 166 | 160 | 167 | 169
Table 10.13
60. State the null and alternative hypotheses. 61. What is the test statistic? 62. What is the sample mean difference? 63. What is the conclusion?
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/10%3A_Hypothesis_Testing_with_Two_Samples/10.10%3A_Chapter_Key_Terms.txt
10.1 Comparing Two Independent Population Means Data from Graduating Engineer + Computer Careers. Available online at www.graduatingengineer.com Data from Microsoft Bookshelf. Data from the United States Senate website, available online at www.Senate.gov (accessed June 17, 2013). “List of current United States Senators by Age.” Wikipedia. Available online at en.Wikipedia.org/wiki/List_of...enators_by_age (accessed June 17, 2013). “Sectoring by Industry Groups.” Nasdaq. Available online at www.nasdaq.com/markets/barcha...&base=industry (accessed June 17, 2013). “Strip Clubs: Where Prostitution and Trafficking Happen.” Prostitution Research and Education, 2013. Available online at www.prostitutionresearch.com/ProsViolPosttrauStress.html (accessed June 17, 2013). “World Series History.” Baseball-Almanac, 2013. Available online at http://www.baseball-almanac.com/ws/wsmenu.shtml (accessed June 17, 2013).
10.4 Comparing Two Independent Population Proportions Data from Educational Resources, December catalog. Data from Hilton Hotels. Available online at http://www.hilton.com (accessed June 17, 2013). Data from Hyatt Hotels. Available online at hyatt.com (accessed June 17, 2013). Data from Statistics, United States Department of Health and Human Services. Data from Whitney Exhibit on loan to San Jose Museum of Art. Data from the American Cancer Society. Available online at http://www.cancer.org/index (accessed June 17, 2013). Data from the Chancellor’s Office, California Community Colleges, November 1994. “State of the States.” Gallup, 2013. Available online at http://www.gallup.com/poll/125066/St...ef=interactive (accessed June 17, 2013). “West Nile Virus.” Centers for Disease Control and Prevention. Available online at http://www.cdc.gov/ncidod/dvbid/westnile/index.htm (accessed June 17, 2013).
10.5 Two Population Means with Known Standard Deviations Data from the United States Census Bureau. Available online at www.census.gov/prod/cen2010/b...c2010br-02.pdf Hinduja, Sameer. “Sexting Research and Gender Differences.” Cyberbullying Research Center, 2013. Available online at http://cyberbullying.us/blog/sexting...r-differences/ (accessed June 17, 2013). “Smart Phone Users, By the Numbers.” Visually, 2013. Available online at http://visual.ly/smart-phone-users-numbers (accessed June 17, 2013). Smith, Aaron. “35% of American adults own a Smartphone.” Pew Internet, 2013. Available online at www.pewinternet.org/~/media/F...martphones.pdf (accessed June 17, 2013). “State-Specific Prevalence of Obesity Among Adults—United States, 2007.” MMWR, CDC. Available online at http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5728a1.htm (accessed June 17, 2013). “Texas Crime Rates 1960–2012.” FBI, Uniform Crime Reports, 2013. Available online at: http://www.disastercenter.com/crime/txcrime.htm (accessed June 17, 2013).
10.13: Chapter Review
10.1 Comparing Two Independent Population Means Two population means from independent samples where the population standard deviations are not known • Random Variable: $\overline{X}_{1}-\overline{X}_{2}$ = the difference of the sampling means • Distribution: Student's t-distribution with degrees of freedom (variances not pooled)
10.2 Cohen's Standards for Small, Medium, and Large Effect Sizes Cohen's d is a measure of “effect size” based on the differences between two means. It is important to note that Cohen's $d$ does not provide a level of confidence as to the magnitude of the size of the effect comparable to the other tests of hypothesis we have studied.
The sizes of the effects are simply indicative. 10.3 Test for Differences in Means: Assuming Equal Population Variances In situations when we do not know the population variances but assume the variances are the same, the pooled sample variance will be smaller than the individual sample variances. This will give more precise estimates and reduce the probability of discarding a good null. 10.4 Comparing Two Independent Population Proportions Test of two population proportions from independent samples. • Random variable: $\mathbf{p}^{\prime}_{A}-\mathbf{p}_{B}^{\prime}$= difference between the two estimated proportions • Distribution: normal distribution 10.5 Two Population Means with Known Standard Deviations A hypothesis test of two population means from independent samples where the population standard deviations are known (typically approximated with the sample standard deviations), will have these characteristics: • Random variable: $\overline{X}_{1}-\overline{X}_{2}$ = the difference of the means • Distribution: normal distribution 10.6 Matched or Paired Samples A hypothesis test for matched or paired samples (t-test) has these characteristics: • Test the differences by subtracting one measurement from the other measurement • Random Variable: $\overline{x}_{d}$ = mean of the differences • Distribution: Student’s t-distribution with $n – 1$ degrees of freedom • If the number of differences is small (less than 30), the differences must follow a normal distribution. • Two samples are drawn from the same set of objects. • Samples are dependent.
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/10%3A_Hypothesis_Testing_with_Two_Samples/10.12%3A_Chapter_References.txt
1. two proportions 3. matched or paired samples 5. single mean 7. independent group means, population standard deviations and/or variances unknown 9. two proportions 11. independent group means, population standard deviations and/or variances unknown 13. independent group means, population standard deviations and/or variances unknown 15. two proportions 17. The random variable is the difference between the mean amounts of sugar in the two soft drinks. 19. means 21. two-tailed 23. the difference between the mean life spans of whites and nonwhites 25. This is a comparison of two population means with unknown population standard deviations. 27. Check student’s solution. 28. 1. 31. $P^{\prime}_{OS1}-P^{\prime}_{OS2}$ = difference in the proportions of phones that had system failures within the first eight hours of operation with $OS_1$ and $OS_2$. proportions 36. right-tailed 38. The random variable is the difference in proportions (percents) of the populations that are of two or more races in Nevada and North Dakota. 40. Our sample sizes are much greater than five each, so we use the normal for two proportions distribution for this hypothesis test. 42. 1. 44. The difference in mean speeds of the fastball pitches of the two pitchers –2.46 47. At the 1% significance level, we can reject the null hypothesis. There is sufficient data to conclude that the mean speed of Rodriguez’s fastball is faster than Wesley’s. 49. Subscripts: 1 = Food, 2 = No Food $H_{0} : \mu_{1} \leq \mu_{2}$ $H_{a} : \mu_{1}>\mu_{2}$ 51. Subscripts: 1 = Gamma, 2 = Zeta $H_{0} : \mu_{1}=\mu_{2}$ $H_{a} : \mu_{1} \neq \mu_{2}$ 53. There is sufficient evidence so we cannot accept the null hypothesis. The data support that the melting point for Alloy Zeta is different from the melting point of Alloy Gamma. 54. the mean difference of the system failures 56. With a $p$-value of 0.0067, we cannot accept the null hypothesis. There is enough evidence to support that the software patch is effective in reducing the number of system failures. 60. $H_{0} : \mu_{d} \geq 0$ $H_{a} : \mu_{d}<0$ 63. We decline to reject the null hypothesis. There is not sufficient evidence to support that the medication is effective. 65. Subscripts: 1: two-year colleges; 2: four-year colleges 1. 67. Subscripts: 1: mechanical engineering; 2: electrical engineering 1. 69. 1. 71. 1. 74. c Test: two independent sample means, population standard deviations unknown. Random variable: $\overline{X}_{1}-\overline{X}_{2}$ Distribution: $H_{0} : \mu_{1}=\mu_{2}$ $H_{a} : \mu_{1}<\mu_{2}$. The mean age of entering prostitution in Canada is lower than the mean age in the United States. Graph: left-tailed $p$-value: 0.0151 Decision: Cannot reject $H_0$. Conclusion: At the 1% level of significance, from the sample data, there is not sufficient evidence to conclude that the mean age of entering prostitution in Canada is lower than the mean age in the United States. 78. d 80. 1. 82. Subscripts: 1 = Cabrillo College, 2 = Lake Tahoe College 1. 84. a Test: two independent sample proportions. Random variable: $p_{1}^{\prime}-p_{2}^{\prime}$ Distribution: $H_{0} : p_{1}=p_{2}$ $H_{a} : p_{1} \neq p_{2}$. The proportion of eReader users is different for the 16- to 29-year-old users from that of the 30 and older users. Graph: two-tailed 87. Test: two independent sample proportions Random variable: $p_{1}^{\prime}-p_{2}^{\prime}$ Distribution: $H_{0} : p_{1}=p_{2}$ $H_{a} : p_{1}>p_{2}$.
A higher proportion of tablet owners are aged 16 to 29 years old than are 30 years old and older. Graph: right-tailed Do not reject the $H_0$. Conclusion: At the 1% level of significance, from the sample data, there is not sufficient evidence to conclude that a higher proportion of tablet owners are aged 16 to 29 years old than are 30 years old and older. 89. Subscripts: 1: men; 2: women 1. 91. 1. 92. 1. 94. Subscripts: 1 = boys, 2 = girls 1. 96. Subscripts: 1 = non-hybrid sedans, 2 = hybrid sedans 1. 98. 1. 99. $p$-value = 0.1494 103. Test: two matched pairs or paired samples ($t$-test) Random variable: $\overline{X}_{d}$ Distribution: $t_{12}$ $H_{0} : \mu_{d}=0$ $H_{a} : \mu_{d}>0$ The mean of the differences of new female breast cancer cases in the south between 2013 and 2012 is greater than zero. The estimate for new female breast cancer cases in the south is higher in 2013 than in 2012. Graph: right-tailed $p$-value: 0.0004 Decision: Cannot accept $H_0$ Conclusion: At the 5% level of significance, from the sample data, there is sufficient evidence to conclude that there was a higher estimate of new female breast cancer cases in 2013 than in 2012. 105. Test: matched or paired samples ($t$-test) Difference data: $\{–0.9, –3.7, –3.2, –0.5, 0.6, –1.9, –0.5, 0.2, 0.6, 0.4, 1.7, –2.4, 1.8\}$ Random Variable: $\overline{X}_{d}$ Distribution: $H_{0} : \mu_{d}=0 H_{a} : \mu_{d}<0$ The mean of the differences of the rate of underemployment in the northeastern states between 2012 and 2011 is less than zero. The underemployment rate went down from 2011 to 2012. Graph: left-tailed. Decision: Cannot reject $H_0$. Conclusion: At the 5% level of significance, from the sample data, there is not sufficient evidence to conclude that there was a decrease in the underemployment rates of the northeastern states from 2011 to 2012. 107. e 109. d 111. f 113. e 115. f 117. a
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/10%3A_Hypothesis_Testing_with_Two_Samples/10.14%3A_Chapter_Solution_%28Practice__Homework%29.txt
Have you ever wondered if lottery winning numbers were evenly distributed or if some numbers occurred with a greater frequency? How about if the types of movies people preferred were different across different age groups? What about if a coffee machine was dispensing approximately the same amount of coffee each time? You could answer these questions by conducting a hypothesis test. You will now study a new distribution, one that is used to determine the answers to such questions. This distribution is called the chi-square distribution. In this chapter, you will learn the three major applications of the chi-square distribution: 1. the goodness-of-fit test, which determines if data fit a particular distribution, such as in the lottery example 2. the test of independence, which determines if events are independent, such as in the movie example 3. the test of a single variance, which tests variability, such as in the coffee example

11.01: Facts About the Chi-Square Distribution

The notation for the chi-square distribution is: $\chi \sim \chi_{d f}^{2}\nonumber$ where $df$ = degrees of freedom, which depends on how the chi-square distribution is being used. (If you want to practice calculating chi-square probabilities, then use $df = n - 1$. The degrees of freedom for the three major uses are each calculated differently.) For the $\chi^2$ distribution, the population mean is $\mu = df$ and the population standard deviation is $\sigma=\sqrt{2(d f)}$. The random variable is shown as $\chi^2$. The random variable for a chi-square distribution with $k$ degrees of freedom is the sum of $k$ independent, squared standard normal variables. $\chi^{2}=\left(Z_{1}\right)^{2}+\left(Z_{2}\right)^{2}+\ldots+\left(Z_{k}\right)^{2}\nonumber$ 1. The curve is non-symmetrical and skewed to the right. 2. There is a different chi-square curve for each value of $df$. 3. The test statistic for any test is always greater than or equal to zero. 4. When $df > 90$, the chi-square curve approximates the normal distribution. For $\chi \sim \chi_{1,000}^{2}$ the mean, $\mu = df = 1,000$, and the standard deviation, $\sigma=\sqrt{2(1,000)}=44.7$. Therefore, $\chi \sim N(1,000,44.7)$, approximately. 5. The mean, $\mu$, is located just to the right of the peak.

11.02: Test of a Single Variance

Thus far our interest has been exclusively on the population parameter $\mu$ or its counterpart in the binomial, $p$. Surely the mean of a population is the most critical piece of information to have, but in some cases we are interested in the variability of the outcomes of some distribution. In almost all production processes, quality is measured not only by how closely the machine matches the target, but also by the variability of the process. If one were filling bags with potato chips, not only would there be interest in the average weight of the bag, but also in how much variation there was in the weights. No one wants to be assured that the average weight is accurate when their bag has no chips. Electricity voltage may meet some average level, but great variability, spikes, can cause serious damage to electrical machines, especially computers. I would not only like to have a high mean grade in my classes, but also low variation about this mean. In short, statistical tests concerning the variance of a distribution have great value and many applications. A test of a single variance assumes that the underlying distribution is normal. The null and alternative hypotheses are stated in terms of the population variance.
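The facts listed in Section 11.01 can be checked quickly by simulation. The sketch below uses Python with NumPy, which are not part of this text and are only one possible tool: it draws many sums of $df$ squared standard normal variables and compares the sample mean and standard deviation with the theoretical values $df$ and $\sqrt{2(df)}$.

```python
# A minimal simulation sketch (not from the text): a chi-square random variable
# with df degrees of freedom is a sum of df squared standard normal variables,
# so its mean should be close to df and its std dev close to sqrt(2 * df).
import numpy as np

rng = np.random.default_rng(seed=0)
df = 24                                   # degrees of freedom (arbitrary choice)
z = rng.standard_normal(size=(100_000, df))
chi_sq = (z ** 2).sum(axis=1)             # 100,000 simulated chi-square values

print("simulated mean:      ", chi_sq.mean())   # close to df = 24
print("theoretical mean:    ", df)
print("simulated std dev:   ", chi_sq.std())    # close to sqrt(48), about 6.93
print("theoretical std dev: ", np.sqrt(2 * df))
```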
The test statistic is: $\chi_{c}^{2}=\frac{(n-1) s^{2}}{\sigma_{0}^{2}}\nonumber$ where: • $n$ = the total number of observations in the sample data • $s^2$ = sample variance • $\sigma_{0}^{2}$ = hypothesized value of the population variance • $H_{0} : \sigma^{2}=\sigma_{0}^{2}$ • $H_{a} : \sigma^{2} \neq \sigma_{0}^{2}$ You may think of $s$ as the random variable in this test. The number of degrees of freedom is $df = n - 1$. A test of a single variance may be right-tailed, left-tailed, or two-tailed. Example $1$ will show you how to set up the null and alternative hypotheses. The null and alternative hypotheses contain statements about the population variance.

Example $1$

Math instructors are not only interested in how their students do on exams, on average, but also in how the exam scores vary. To many instructors, the variance (or standard deviation) may be more important than the average. Suppose a math instructor believes that the standard deviation for his final exam is five points. One of his best students thinks otherwise. The student claims that the standard deviation is more than five points. If the student were to conduct a hypothesis test, what would the null and alternative hypotheses be?

Answer

Even though we are given the population standard deviation, we can set up the test using the population variance as follows. • $H_{0} : \sigma^{2} \leq 5^{2}$ • $H_{a} : \sigma^{2}>5^{2}$

Exercise $1$

A SCUBA instructor wants to record the collective depths of each of his students' dives during their checkout. He is interested in how the depths vary, even though everyone should have been at the same depth. He believes the standard deviation is three feet. His assistant thinks the standard deviation is less than three feet. If the instructor were to conduct a test, what would the null and alternative hypotheses be?

Example $2$

With individual lines at its various windows, a post office finds that the standard deviation for waiting times for customers on Friday afternoon is 7.2 minutes. The post office experiments with a single, main waiting line and finds that for a random sample of 25 customers, the waiting times for customers have a standard deviation of 3.5 minutes on a Friday afternoon. With a significance level of 5%, test the claim that a single line causes lower variation among waiting times for customers.

Answer

Since the claim is that a single line causes less variation, this is a test of a single variance. The parameter is the population variance, $\sigma^2$. Random Variable: The sample standard deviation, $s$, is the random variable. Let $s$ = standard deviation for the waiting times. • $H_{0} : \sigma^{2} \geq 7.2^{2}$ • $H_{a} : \sigma^{2}<7.2^{2}$ Distribution for the test: $\chi_{24}^{2}$, where: • $n$ = the number of customers sampled • $df = n – 1 = 25 – 1 = 24$ Calculate the test statistic: $\chi_{c}^{2}=\frac{(n-1) s^{2}}{\sigma_{0}^{2}}=\frac{(25-1)(3.5)^{2}}{7.2^{2}}=5.67$ where $n = 25$, $s = 3.5$, and $\sigma_0 = 7.2$. The graph of the chi-square distribution marks the critical value with 24 degrees of freedom at the 95% level of confidence, $\alpha = 0.05$, 13.85. The critical value of 13.85 came from the chi-square table, which is read very much like the Student's $t$ table. The difference is that the Student's $t$ distribution is symmetrical and the chi-square distribution is not. At the top of the chi-square table we see not only the familiar 0.05, 0.10, etc., but also 0.95, 0.975, etc. These are the columns used to find the left-hand critical value.
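As an optional illustration (not part of the original example), the same numbers can be reproduced with Python and SciPy; the text reads the critical value 13.85 from a chi-square table, but scipy.stats.chi2 returns the same value.

```python
# A sketch of Example 2, assuming SciPy is available (the textbook uses tables).
from scipy.stats import chi2

n, s, sigma0 = 25, 3.5, 7.2              # sample size, sample std dev, hypothesized std dev
df = n - 1                               # 24 degrees of freedom

test_stat = (n - 1) * s**2 / sigma0**2   # (24)(3.5^2)/(7.2^2) = 5.67
crit_value = chi2.ppf(0.05, df)          # left-tail critical value at alpha = 0.05, about 13.85
p_value = chi2.cdf(test_stat, df)        # left-tailed p-value

print(f"test statistic      = {test_stat:.2f}")
print(f"critical value      = {crit_value:.2f}")
print(f"left-tailed p-value = {p_value:.4f}")
# 5.67 < 13.85: the statistic falls in the left tail, so we cannot accept H0,
# matching the conclusion reached above.
```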
The graph also marks the calculated $\chi^2$ test statistic of 5.67. Comparing the test statistic with the critical value, as we have done with all other hypothesis tests, we reach the conclusion. The word "less" tells you this is a left-tailed test.

Make a decision: Because the calculated test statistic is in the tail, we cannot accept $H_0$. This means that you reject $\sigma^2 \geq 7.2^2$. In other words, you do not think the variation in waiting times is 7.2 minutes or more; you think the variation in waiting times is less.

Conclusion: At a 5% level of significance, from the data, there is sufficient evidence to conclude that a single line causes a lower variation among the waiting times; that is, with a single line, the customer waiting times vary less than 7.2 minutes.

Example $3$

Professor Hadley has a weakness for cream-filled donuts, but he believes that some bakeries are not properly filling the donuts. A sample of 24 donuts reveals a mean amount of filling equal to 0.04 cups, and the sample standard deviation is 0.11 cups. Professor Hadley has an interest in the average quantity of filling, of course, but he is particularly distressed if one donut is radically different from another. Professor Hadley does not like surprises. Test, at the 95% level of confidence, the null hypothesis that the population variance of the donut filling is 0.04 against the alternative that it is different from 0.04.

Answer

This is clearly a problem dealing with variances. In this case we are testing a single sample rather than comparing two samples from different populations. The null and alternative hypotheses are thus: $H_{0} : \sigma^{2}=0.04\nonumber$ $H_{a} : \sigma^{2} \neq 0.04\nonumber$ The test is set up as a two-tailed test because Professor Hadley has shown concern with too much variation in filling as well as too little: his dislike of a surprise is any level of filling outside the expected average of 0.04 cups. The test statistic is calculated to be: $\chi_{c}^{2}=\frac{(n-1) s^{2}}{\sigma_{0}^{2}}=\frac{(24-1)(0.11)^{2}}{0.04}=6.9575\nonumber$ The calculated $\chi^2$ test statistic, 6.96, is in the left tail; therefore, at a 0.05 level of significance, we cannot accept the null hypothesis that the variance in the donut filling is equal to 0.04 cups. It seems that Professor Hadley is destined to meet disappointment with each bite.

Exercise $3$

The FCC conducts broadband speed tests to measure how much data per second passes between a consumer’s computer and the internet. As of August of 2012, the standard deviation of Internet speeds across Internet Service Providers (ISPs) was 12.2 percent. Suppose a sample of 15 ISPs is taken, and the standard deviation is 13.2. An analyst claims that the standard deviation of speeds is more than what was reported. State the null and alternative hypotheses, compute the degrees of freedom and the test statistic, sketch the graph of the distribution and mark the area associated with the level of confidence, and draw a conclusion. Test at the 1% significance level.
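One way Exercise 3 might be set up computationally is sketched below, assuming Python with SciPy is available (neither is required by the text, and this is not an official solution). The right-tailed hypotheses follow from the analyst's claim that the standard deviation is more than 12.2.

```python
# A sketch of Exercise 3 (not an official solution).
# H0: sigma^2 <= 12.2^2   Ha: sigma^2 > 12.2^2   (right-tailed test)
from scipy.stats import chi2

n, s, sigma0, alpha = 15, 13.2, 12.2, 0.01
df = n - 1

test_stat = (n - 1) * s**2 / sigma0**2       # (14)(13.2^2)/(12.2^2)
crit_value = chi2.ppf(1 - alpha, df)         # right-tail critical value at the 1% level
p_value = chi2.sf(test_stat, df)             # right-tailed p-value

print(f"df = {df}, test statistic = {test_stat:.2f}")
print(f"critical value = {crit_value:.2f}, p-value = {p_value:.4f}")
# Reject H0 only if the test statistic exceeds the critical value
# (equivalently, if the p-value is below alpha).
```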
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/11%3A_The_Chi-Square_Distribution/11.00%3A_Prelude_to_the_Chi-Square_Distribution.txt
In this type of hypothesis test, you determine whether the data "fit" a particular distribution or not. For example, you may suspect your unknown data fit a binomial distribution. You use a chi-square test (meaning the distribution for the hypothesis test is chi-square) to determine if there is a fit or not. The null and the alternative hypotheses for this test may be written in sentences or may be stated as equations or inequalities. The test statistic for a goodness-of-fit test is: $\sum_{k} \frac{(O-E)^{2}}{E}\nonumber$ where: • $O$ = observed values (data) • $E$ = expected values (from theory) • $k$ = the number of different data cells or categories The observed values are the data values and the expected values are the values you would expect to get if the null hypothesis were true. There are $k$ terms of the form $\frac{(O-E)^{2}}{E}$. The number of degrees of freedom is $df$ = (number of categories – 1). The goodness-of-fit test is almost always right-tailed. If the observed values and the corresponding expected values are not close to each other, then the test statistic can get very large and will be way out in the right tail of the chi-square curve.

NOTE: The number of expected values inside each cell needs to be at least five in order to use this test.

Example $4$

Absenteeism of college students from math classes is a major concern to math instructors because missing class appears to increase the drop rate. Suppose that a study was done to determine if the actual student absenteeism rate follows faculty perception. The faculty expected that a group of 100 students would miss class according to Table $1$. Number of absences per term Expected number of students 0–2 50 3–5 30 6–8 12 9–11 6 12+ 2 Table $1$ A random survey across all mathematics courses was then done to determine the actual number (observed) of absences in a course. The chart in Table $2$ displays the results of that survey. Number of absences per term Actual number of students 0–2 35 3–5 40 6–8 20 9–11 1 12+ 4 Table $2$ Determine the null and alternative hypotheses needed to conduct a goodness-of-fit test.

$\bf{H_0}$: Student absenteeism fits faculty perception. The alternative hypothesis is the opposite of the null hypothesis. $\bf{H_a}$: Student absenteeism does not fit faculty perception.

a. Can you use the information as it appears in the charts to conduct the goodness-of-fit test?

Answer

Solution 11.4 a. No. Notice that the expected number of absences for the "12+" entry is less than five (it is two). Combine that group with the "9–11" group to create new tables where the number of students for each entry is at least five. The new results are in Table $3$ and Table $4$. Number of absences per term Expected number of students 0–2 50 3–5 30 6–8 12 9+ 8 Table $3$ Number of absences per term Actual number of students 0–2 35 3–5 40 6–8 20 9+ 5 Table $4$

b. What is the number of degrees of freedom ($df$)?

Answer

Solution 11.4 b. There are four "cells" or categories in each of the new tables. $df=\text { number of cells }-1=4-1=3$

Exercise $4$

A factory manager needs to understand how many products are defective versus how many are produced. The number of expected defects is listed in Table $5$. Number produced Number defective 0–100 5 101–200 6 201–300 7 301–400 8 401–500 10 Table $5$ A random sample was taken to determine the actual number of defects. Table $6$ shows the results of the survey.
Number produced Number defective 0–100 5 101–200 7 201–300 8 301–400 9 401–500 11 Table $6$ State the null and alternative hypotheses needed to conduct a goodness-of-fit test, and state the degrees of freedom.

Example $5$

Employers want to know which days of the week employees are absent in a five-day work week. Most employers would like to believe that employees are absent equally during the week. Suppose a random sample of 60 managers were asked on which day of the week they had the highest number of employee absences. The results were distributed as in Table $7$. For the population of employees, do the days for the highest number of absences occur with equal frequencies during a five-day work week? Test at a 5% significance level. Monday Tuesday Wednesday Thursday Friday Number of absences 15 12 9 9 15 Table $7$ Day of the Week Employees were Most Absent

Answer

Solution 11.5 The null and alternative hypotheses are: • $H_0$: The absent days occur with equal frequencies, that is, they fit a uniform distribution. • $H_a$: The absent days occur with unequal frequencies, that is, they do not fit a uniform distribution. If the absent days occur with equal frequencies, then, out of 60 absent days (the total in the sample: $15 + 12 + 9 + 9 + 15 = 60$), there would be 12 absences on Monday, 12 on Tuesday, 12 on Wednesday, 12 on Thursday, and 12 on Friday. These numbers are the expected ($E$) values. The values in the table are the observed ($O$) values or data. This time, calculate the $\chi^2$ test statistic by hand. Make a chart with the following headings and fill in the columns: • Expected ($E$) values $(12, 12, 12, 12, 12)$ • Observed ($O$) values $(15, 12, 9, 9, 15)$ • $(O – E)$ • $(O – E)^2$ • $\frac{(O-E)^{2}}{E}$ Now add (sum) the last column. The sum is three. This is the $\chi^2$ test statistic. The calculated test statistic is 3, and the critical value of the $\chi^2$ distribution with 4 degrees of freedom at the 0.05 level of significance is 9.48. This value is found in the $\chi^2$ table in the 0.05 column on the degrees of freedom row 4. $\text{The degrees of freedom are the number of cells }– 1 = 5 – 1 = 4$ Next, complete a graph like the following one with the proper labeling and shading. (You should shade the right tail.) $\chi_{c}^{2}=\sum_{k} \frac{(O-E)^{2}}{E}=3\nonumber$ The decision is not to reject the null hypothesis because the calculated value of the test statistic is not in the tail of the distribution.

Conclusion: At a 5% level of significance, from the sample data, there is not sufficient evidence to conclude that the absent days do not occur with equal frequencies.

Exercise $5$

Teachers want to know which night each week their students are doing most of their homework. Most teachers think that students do homework equally throughout the week. Suppose a random sample of 56 students were asked on which night of the week they did the most homework. The results were distributed as in Table $8$. Sunday Monday Tuesday Wednesday Thursday Friday Saturday Number of students 11 8 10 7 10 5 5 Table $8$ From the population of students, do the nights for the highest number of students doing the majority of their homework occur with equal frequencies during a week? What type of hypothesis test should you use?

Example $6$

One study indicates that the number of televisions that American families have is distributed (this is the given distribution for the American population) as in Table $9$.
Number of Televisions Percent 0 10 1 16 2 55 3 11 4+ 8 Table $9$ The table contains expected ($E$) percents. A random sample of 600 families in the far western United States resulted in the data in Table $10$. Number of Televisions Frequency Total = 600 0 66 1 119 2 340 3 60 4+ 15 Table $10$ The table contains observed ($O$) frequency values. At the 1% significance level, does it appear that the distribution "number of televisions" of far western United States families is different from the distribution for the American population as a whole? Answer Solution 11.6 This problem asks you to test whether the far western United States families distribution fits the distribution of the American families. This test is always right-tailed. The first table contains expected percentages. To get expected (E) frequencies, multiply the percentage by 600. The expected frequencies are shown in Table $11$. Number of televisions Percent Expected frequency 0 10 (0.10)(600) = 60 1 16 (0.16)(600) = 96 2 55 (0.55)(600) = 330 3 11 (0.11)(600) = 66 over 3 8 (0.08)(600) = 48 Table $11$ Therefore, the expected frequencies are 60, 96, 330, 66, and 48. $H_0$: The "number of televisions" distribution of far western United States families is the same as the "number of televisions" distribution of the American population. $H_a$: The "number of televisions" distribution of far western United States families is different from the "number of televisions" distribution of the American population. Distribution for the test: $\chi_{4}^{2} \text { where } d f=(\text { the number of cells })-1=5-1=4$. Calculate the test statistic: $\chi^2 = 29.65$ Graph: The graph of the Chi-square shows the distribution and marks the critical value with four degrees of freedom at 99% level of confidence, α = .01, 13.277. The graph also marks the calculated chi squared test statistic of 29.65. Comparing the test statistic with the critical value, as we have done with all other hypothesis tests, we reach the conclusion. Make a decision: Because the test statistic is in the tail of the distribution we cannot accept the null hypothesis. This means you reject the belief that the distribution for the far western states is the same as that of the American population as a whole. Conclusion: At the 1% significance level, from the data, there is sufficient evidence to conclude that the "number of televisions" distribution for the far western United States is different from the "number of televisions" distribution for the American population as a whole. Exercise $6$ The expected percentage of the number of pets students have in their homes is distributed (this is the given distribution for the student population of the United States) as in Table $12$. Number of pets Percent 0 18 1 25 2 30 3 18 4+ 9 Table $12$ A random sample of 1,000 students from the Eastern United States resulted in the data in Table $13$. Number of pets Frequency 0 210 1 240 2 320 3 140 4+ 90 Table $13$ At the 1% significance level, does it appear that the distribution “number of pets” of students in the Eastern United States is different from the distribution for the United States student population as a whole? Example $7$ Suppose you flip two coins 100 times. The results are $20 HH, 27 HT, 30 TH$, and $23 TT$. Are the coins fair? Test at a 5% significance level. Answer Solution 11.7 This problem can be set up as a goodness-of-fit problem. The sample space for flipping two fair coins is $\{HH, HT, TH, TT\}$. Out of 100 flips, you would expect 25 $HH, 25 HT, 25 TH$, and $25 TT$. 
This is the expected distribution from the binomial probability distribution. The question, "Are the coins fair?" is the same as saying, "Does the distribution of the coins $(20 HH, 27 HT, 30 TH, 23 TT)$ fit the expected distribution?" Random Variable: Let $X$ = the number of heads in one flip of the two coins. X takes on the values 0, 1, 2. (There are 0, 1, or 2 heads in the flip of two coins.) Therefore, the number of cells is three. Since $X$ = the number of heads, the observed frequencies are 20 (for two heads), 57 (for one head), and 23 (for zero heads or both tails). The expected frequencies are 25 (for two heads), 50 (for one head), and 25 (for zero heads or both tails). This test is right-tailed. $\bf{H_0}$: The coins are fair. $\bf{H_a}$: The coins are not fair. Distribution for the test: $\chi_2^2$ where $df = 3 – 1 = 2$. Calculate the test statistic: $\chi^2 = 2.14$. Graph: The graph of the Chi-square shows the distribution and marks the critical value with two degrees of freedom at 95% level of confidence, $\alpha = 0.05$, 5.991. The graph also marks the calculated $\chi^2$ test statistic of 2.14. Comparing the test statistic with the critical value, as we have done with all other hypothesis tests, we reach the conclusion. Conclusion: There is insufficient evidence to conclude that the coins are not fair: we cannot reject the null hypothesis that the coins are fair.
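For readers who prefer software to the by-hand chart, the goodness-of-fit calculation in Example 7 can be reproduced with scipy.stats.chisquare. This is an optional illustration, not part of the original text, and assumes Python with SciPy is available.

```python
# A sketch checking Example 7 (two-coin flips grouped by number of heads).
from scipy.stats import chisquare, chi2

observed = [20, 57, 23]                  # two heads, one head, zero heads
expected = [25, 50, 25]                  # fair-coin expectations out of 100 flips

stat, p_value = chisquare(f_obs=observed, f_exp=expected)   # df = 3 - 1 = 2
crit_value = chi2.ppf(0.95, df=2)        # about 5.991, the value shown in the graph

print(f"chi-square statistic = {stat:.2f}")     # 2.14, matching the text
print(f"p-value              = {p_value:.3f}")
print(f"critical value       = {crit_value:.3f}")
# 2.14 < 5.991: the statistic is not in the right tail, so we cannot reject H0;
# there is insufficient evidence that the coins are unfair.
```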
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/11%3A_The_Chi-Square_Distribution/11.03%3A_Goodness-of-Fit_Test.txt
Tests of independence involve using a contingency table of observed (data) values. The test statistic for a test of independence is similar to that of a goodness-of-fit test: $\sum_{(i \cdot j)} \frac{(O-E)^{2}}{E}\nonumber$ where: • $O$ = observed values • $E$ = expected values • $i$ = the number of rows in the table • $j$ = the number of columns in the table There are $i \cdot j$ terms of the form $\frac{(O-E)^{2}}{E}$. A test of independence determines whether two factors are independent or not. You first encountered the term independence earlier, in the chapter on probability topics. As a review, consider the following example.

NOTE: The expected value inside each cell needs to be at least five in order for you to use this test.

Example 11.8

Suppose $A$ = a speeding violation in the last year and $B$ = a cell phone user while driving. If $A$ and $B$ are independent, then $P(A \cap B)=P(A) P(B)$. $A \cap B$ is the event that a driver received a speeding violation last year and also used a cell phone while driving. Suppose, in a study of drivers who received speeding violations in the last year, and who used a cell phone while driving, that 755 people were surveyed. Out of the 755, 70 had a speeding violation and 685 did not; 305 used cell phones while driving and 450 did not. Let $y$ = the expected number of drivers who used a cell phone while driving and received speeding violations. If $A$ and $B$ are independent, then $P(A \cap B)=P(A) P(B)$. By substitution, $\frac{y}{755}=\left(\frac{70}{755}\right)\left(\frac{305}{755}\right)\nonumber$ Solve for $y$: $y=\frac{(70)(305)}{755}=28.3$ About 28 people from the sample are expected to use cell phones while driving and to receive speeding violations. In a test of independence, we state the null and alternative hypotheses in words. Since the contingency table consists of two factors, the null hypothesis states that the factors are independent and the alternative hypothesis states that they are not independent (dependent). If we do a test of independence using the example, then the null hypothesis is: $H_0$: Being a cell phone user while driving and receiving a speeding violation are independent events; in other words, they have no effect on each other. If the null hypothesis were true, we would expect about 28 people to use cell phones while driving and to receive a speeding violation. The test of independence is always right-tailed because of the calculation of the test statistic. If the expected and observed values are not close together, then the test statistic is very large and way out in the right tail of the chi-square curve, as it is in a goodness-of-fit test. The number of degrees of freedom for the test of independence is: $df=(\text { number of columns }-1)(\text { number of rows }-1)$ The following formula calculates the expected number ($E$): $E=\frac{(\text { row total })(\text { column total })}{\text { total number surveyed }}\nonumber$

Exercise 11.8

A sample of 300 students is taken. Of the students surveyed, 50 were music students, while 250 were not. Ninety-seven of the 300 surveyed were on the honor roll, while 203 were not. If we assume being a music student and being on the honor roll are independent events, what is the expected number of music students who are also on the honor roll?

Example 11.9

A volunteer group provides from one to nine hours each week with disabled senior citizens. The program recruits among community college students, four-year college students, and nonstudents.
Table 11.14 shows a sample of the adult volunteers and the number of hours they volunteer per week. The table contains the observed ($O$) values (data). Type of volunteer 1–3 Hours 4–6 Hours 7–9 Hours Row total Community college students 111 96 48 255 Four-year college students 96 133 61 290 Nonstudents 91 150 53 294 Column total 298 379 162 839 Table 11.14 Number of Hours Worked Per Week by Volunteer Type (Observed) Is the number of hours volunteered independent of the type of volunteer?

Answer

Solution 11.9 The observed table and the question at the end of the problem, "Is the number of hours volunteered independent of the type of volunteer?" tell you this is a test of independence. The two factors are number of hours volunteered and type of volunteer. This test is always right-tailed. $H_0$: The number of hours volunteered is independent of the type of volunteer. $H_a$: The number of hours volunteered is dependent on the type of volunteer. The expected results are in Table 11.15. The table contains the expected ($E$) values. Type of volunteer 1-3 Hours 4-6 Hours 7-9 Hours Community college students 90.57 115.19 49.24 Four-year college students 103.00 131.00 56.00 Nonstudents 104.42 132.81 56.77 Table 11.15 Number of Hours Worked Per Week by Volunteer Type (Expected) For example, the calculation for the expected frequency for the top left cell is $E=\frac{(\text { row total })(\text { column total })}{\text { total number surveyed }}=\frac{(255)(298)}{839}=90.57\nonumber$ Calculate the test statistic: $\chi^2 = 12.99$ (calculator or computer) Distribution for the test: $\chi_4^2$, where $df=(3 \text { columns }-1)(3 \text { rows }-1)=(2)(2)=4$

Graph: The graph of the chi-square distribution marks the critical value with four degrees of freedom at the 95% level of confidence, $\alpha = 0.05$, 9.488. The graph also marks the calculated $\chi_{c}^{2}$ test statistic of 12.99. Comparing the test statistic with the critical value, as we have done with all other hypothesis tests, we reach the conclusion.

Make a decision: Because the calculated test statistic is in the tail, we cannot accept $H_0$. This means that the factors are not independent.

Conclusion: At a 5% level of significance, from the data, there is sufficient evidence to conclude that the number of hours volunteered and the type of volunteer are dependent on one another. For the example in Table 11.15, if there had been another type of volunteer, teenagers, what would the degrees of freedom be?

Exercise 11.9

The Bureau of Labor Statistics gathers data about employment in the United States. A sample is taken to calculate the number of U.S. citizens working in one of several industry sectors over time. Table 11.16 shows the results: Industry sector 2000 2010 2020 Total Nonagriculture wage and salary 13,243 13,044 15,018 41,305 Goods-producing, excluding agriculture 2,457 1,771 1,950 6,178 Services-providing 10,786 11,273 13,068 35,127 Agriculture, forestry, fishing, and hunting 240 214 201 655 Nonagriculture self-employed and unpaid family worker 931 894 972 2,797 Secondary wage and salary jobs in agriculture and private household industries 14 11 11 36 Secondary jobs as a self-employed or unpaid family worker 196 144 152 492 Total 27,867 27,351 31,372 86,590 Table 11.16 We want to know if the change in the number of jobs is independent of the change in years. State the null and alternative hypotheses and the degrees of freedom.
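As an optional illustration (the text computes the statistic with "calculator or computer"), scipy.stats.chi2_contingency reproduces both the expected counts in Table 11.15 and the test statistic of Example 11.9. Python and SciPy are assumed here but are not required by the text.

```python
# A sketch of Example 11.9 using scipy.stats.chi2_contingency.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: community college, four-year college, nonstudents
# Columns: 1-3 hours, 4-6 hours, 7-9 hours (observed counts from Table 11.14)
observed = np.array([[111,  96, 48],
                     [ 96, 133, 61],
                     [ 91, 150, 53]])

stat, p_value, df, expected = chi2_contingency(observed)

print(f"test statistic     = {stat:.2f}")   # about 12.99, as in the solution
print(f"degrees of freedom = {df}")         # (3 - 1)(3 - 1) = 4
print(f"p-value            = {p_value:.4f}")
print(np.round(expected, 2))                # matches the expected counts in Table 11.15
```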
Example 11.10 De Anza College is interested in the relationship between anxiety level and the need to succeed in school. A random sample of 400 students took a test that measured anxiety level and need to succeed in school. Table 11.17 shows the results. De Anza College wants to know if anxiety level and need to succeed in school are independent events. Need to succeed in school High anxiety Med-high anxiety Medium anxiety Med-low anxiety Low anxiety Row total High need 35 42 53 15 10 155 Medium need 18 48 63 33 31 193 Low need 4 5 11 15 17 52 Column total 57 95 127 63 58 400 Table 11.17 Need to Succeed in School vs. Anxiety Level a. How many high anxiety level students are expected to have a high need to succeed in school? Answer Solution 11.10 a. The column total for a high anxiety level is 57. The row total for high need to succeed in school is 155. The sample size or total surveyed is 400. $E=\frac{(\text { row total })(\text { column total })}{\text { total surveyed }}=\frac{155 \cdot 57}{400}=22.09\nonumber$ The expected number of students who have a high anxiety level and a high need to succeed in school is about 22. b. If the two variables are independent, how many students do you expect to have a low need to succeed in school and a med-low level of anxiety? Answer Solution 11.10 b. The column total for a med-low anxiety level is 63. The row total for a low need to succeed in school is 52. The sample size or total surveyed is 400. c. $E=\frac{(\text { row total })(\text { column total })}{\text { total surveyed }}=$ ________ Answer Solution 11.10 c. $E=\frac{(\text { row total })(\text { column total })}{\text { total surveyed }}=8.19$ d. The expected number of students who have a med-low anxiety level and a low need to succeed in school is about ________. Answer Solution 11.10 d. 8
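The expected-count arithmetic in Example 11.10 is simple enough to check with a few lines of Python; this is only an illustration of the formula $E=\frac{(\text { row total })(\text { column total })}{\text { total surveyed }}$, not part of the original text.

```python
# Expected counts for two cells of Table 11.17 (anxiety level vs. need to succeed).
row_total_high_need = 155
col_total_high_anxiety = 57
row_total_low_need = 52
col_total_medlow_anxiety = 63
total_surveyed = 400

e_high_high = row_total_high_need * col_total_high_anxiety / total_surveyed
e_low_medlow = row_total_low_need * col_total_medlow_anxiety / total_surveyed

print(round(e_high_high, 2))    # 22.09 -> about 22 students
print(round(e_low_medlow, 2))   # 8.19  -> about 8 students
```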
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/11%3A_The_Chi-Square_Distribution/11.04%3A_Test_of_Independence.txt
The goodness-of-fit test can be used to decide whether a population fits a given distribution, but it will not suffice to decide whether two populations follow the same unknown distribution. A different test, called the test for homogeneity, can be used to draw a conclusion about whether two populations have the same distribution. To calculate the test statistic for a test for homogeneity, follow the same procedure as with the test of independence.

NOTE: The expected value inside each cell needs to be at least five in order for you to use this test.

Hypotheses • $H_0$: The distributions of the two populations are the same. • $H_a$: The distributions of the two populations are not the same.

Test Statistic Use a $\chi^2$ test statistic. It is computed in the same way as the test for independence.

Degrees of Freedom ($\bf{df}$) $df = \text{ number of columns }- 1$

Requirements All values in the table must be greater than or equal to five.

Common Uses Comparing two populations. For example: men vs. women, before vs. after, east vs. west. The variable is categorical with more than two possible response values.

Example $1$

Do male and female college students have the same distribution of living arrangements? Use a level of significance of 0.05. Suppose that 250 randomly selected male college students and 300 randomly selected female college students were asked about their living arrangements: dormitory, apartment, with parents, other. The results are shown in Table $18$. Do male and female college students have the same distribution of living arrangements? Dormitory Apartment With Parents Other Males 72 84 49 45 Females 91 86 88 35 Table $18$ Distribution of living arrangements for college males and college females

Answer

Solution 11.11 $H_0$: The distribution of living arrangements for male college students is the same as the distribution of living arrangements for female college students. $H_a$: The distribution of living arrangements for male college students is not the same as the distribution of living arrangements for female college students. Degrees of Freedom ($\bf{df}$): $df =\text{ number of columns }– 1 = 4 – 1 = 3$ Distribution for the test: $\chi_3^2$ Calculate the test statistic: $\chi_c^2 = 10.129$ The graph of the chi-square distribution marks the critical value with three degrees of freedom at the 95% level of confidence, $\alpha = 0.05$, 7.815. The graph also marks the calculated $\chi^2$ test statistic of 10.129. Comparing the test statistic with the critical value, as we have done with all other hypothesis tests, we reach the conclusion.

Make a decision: Because the calculated test statistic is in the tail, we cannot accept $H_0$. This means that the distributions are not the same.

Conclusion: At a 5% level of significance, from the data, there is sufficient evidence to conclude that the distributions of living arrangements for male and female college students are not the same. Notice that the conclusion is only that the distributions are not the same. We cannot use the test for homogeneity to draw any conclusions about how they differ.

Exercise $\PageIndex{1A}$

Do families and singles have the same distribution of cars? Use a level of significance of 0.05. Suppose that 100 randomly selected families and 200 randomly selected singles were asked what type of car they drove: sport, sedan, hatchback, truck, van/SUV. The results are shown in Table $19$. Do families and singles have the same distribution of cars? Test at a level of significance of 0.05.
Sport Sedan Hatchback Truck Van/SUV Family 5 15 35 17 28 Single 45 65 37 46 7 Table $19$ Exercise $\PageIndex{1B}$ Ivy League schools receive many applications, but only some can be accepted. At the schools listed in Table $20$, two types of applications are accepted: regular and early decision. Application type accepted Brown Columbia Cornell Dartmouth Penn Yale Regular 2,115 1,792 5,306 1,734 2,685 1,245 Early decision 577 627 1,228 444 1,195 761 Table $20$ We want to know if the number of regular applications accepted follows the same distribution as the number of early applications accepted. State the null and alternative hypotheses, the degrees of freedom and the test statistic, sketch the graph of the $\chi^2$ distribution and show the critical value and the calculated value of the test statistic, and draw a conclusion about the test of homogeneity.
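Because the homogeneity statistic is computed exactly as in the test of independence, the same scipy.stats.chi2_contingency call used earlier also handles the living-arrangements data from Example 1 above. The sketch below assumes Python and SciPy, which are not part of the text, and reproduces the statistic 10.129 and the degrees of freedom.

```python
# A sketch of the test for homogeneity applied to Table 11.18.
import numpy as np
from scipy.stats import chi2_contingency

#                     Dorm  Apt  Parents  Other
observed = np.array([[ 72,  84,  49,      45],    # males
                     [ 91,  86,  88,      35]])   # females

stat, p_value, df, expected = chi2_contingency(observed)

print(f"test statistic     = {stat:.3f}")   # about 10.129, as in the solution
print(f"degrees of freedom = {df}")         # (2 - 1)(4 - 1) = 3, i.e. columns - 1 here
print(f"p-value            = {p_value:.4f}")
# Compare with the 5% critical value 7.815 (df = 3): 10.129 > 7.815,
# so we cannot accept H0; the two distributions are not the same.
```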
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/11%3A_The_Chi-Square_Distribution/11.05%3A_Test_for_Homogeneity.txt
Above, the $\chi^2$ test statistic was used in three different circumstances. The following list is a summary of which $\chi^2$ test is the appropriate one to use in different circumstances.

Test for Goodness-of-Fit Use the goodness-of-fit test to decide whether a population with an unknown distribution "fits" a known distribution. In this case there will be a single qualitative survey question or a single outcome of an experiment from a single population. Goodness-of-fit is typically used to see if the population is uniform (all outcomes occur with equal frequency), the population is normal, or the population is the same as another population with a known distribution. The null and alternative hypotheses are: • $H_0$: The population fits the given distribution. • $H_a$: The population does not fit the given distribution.

Test for Independence Use the test for independence to decide whether two variables (factors) are independent or dependent. In this case there will be two qualitative survey questions or experiments and a contingency table will be constructed. The goal is to see if the two variables are unrelated (independent) or related (dependent). The null and alternative hypotheses are: • $H_0$: The two variables (factors) are independent. • $H_a$: The two variables (factors) are dependent.

Test for Homogeneity Use the test for homogeneity to decide if two populations with unknown distributions have the same distribution as each other. In this case there will be a single qualitative survey question or experiment given to two different populations. The null and alternative hypotheses are: • $H_0$: The two populations follow the same distribution. • $H_a$: The two populations have different distributions.

11.07: Homework

122. 1. Explain why a goodness-of-fit test and a test of independence are generally right-tailed tests. 2. If you did a left-tailed test, what would you be testing?

11.08: Chapter Formula Review

Facts About the Chi-Square Distribution $\chi^{2}=\left(Z_{1}\right)^{2}+\left(Z_{2}\right)^{2}+\ldots+\left(Z_{df}\right)^{2}$ chi-square distribution random variable $\mu_{\chi^{2}}=df$ chi-square distribution population mean $\sigma_{\chi^{2}}=\sqrt{2(df)}$ chi-square distribution population standard deviation

Test of a Single Variance $\chi^{2}=\frac{(n-1) s^{2}}{\sigma_{0}^{2}}$ Test of a single variance statistic where: $n$: sample size $s$: sample standard deviation $\sigma_{0}$: hypothesized value of the population standard deviation $df = n – 1$ Degrees of freedom

Test of a Single Variance • Use the test to determine variation. • The degrees of freedom is the sample size minus 1, that is, $df = n - 1$. • The test statistic is $\frac{(n-1) s^{2}}{\sigma_{0}^{2}}$, where $n$ = sample size, $s^2$ = sample variance, and $\sigma_{0}^{2}$ = hypothesized population variance. • The test may be left-, right-, or two-tailed.

Goodness-of-Fit Test $\sum_{k} \frac{(O-E)^{2}}{E}$ goodness-of-fit test statistic where: $O$: observed values $E$: expected values $k$: number of different data cells or categories $df = k − 1$ degrees of freedom

Test of Independence • The number of degrees of freedom is equal to (number of columns - 1)(number of rows - 1). • The test statistic is $\sum_{i \cdot j} \frac{(O-E)^{2}}{E}$ where $O$ = observed values, $E$ = expected values, $i$ = the number of rows in the table, and $j$ = the number of columns in the table. • If the null hypothesis is true, the expected number $E=\frac{(\text { row total })(\text { column total })}{\text { total surveyed }}$.
Test for Homogeneity $\sum_{i . j} \frac{(O-E)^{2}}{E}$ Homogeneity test statistic where: $O$ = observed values $E$ = expected values $i$ = number of rows in data contingency table $j$ = number of columns in data contingency table $df = (i −1)(j −1)$ Degrees of freedom
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/11%3A_The_Chi-Square_Distribution/11.06%3A_Comparison_of_the_Chi-Square_Tests.txt
Facts About the Chi-Square Distribution Decide whether the following statements are true or false. 63. As the number of degrees of freedom increases, the graph of the chi-square distribution looks more and more symmetrical. 64. The standard deviation of the chi-square distribution is twice the mean. 65. The mean and the median of the chi-square distribution are the same if $df$ = 24. Test of a Single Variance Use the following information to answer the next twelve exercises: Suppose an airline claims that its flights are consistently on time with an average delay of at most 15 minutes. It claims that the average delay is so consistent that the variance is no more than 150 minutes. Doubting the consistency part of the claim, a disgruntled traveler calculates the delays for his next 25 flights. The average delay for those 25 flights is 22 minutes with a standard deviation of 15 minutes. 66. Is the traveler disputing the claim about the average or about the variance? 67. A sample standard deviation of 15 minutes is the same as a sample variance of __________ minutes. 68. Is this a right-tailed, left-tailed, or two-tailed test? 69. $H_0$: __________ 70. $df$ = ________ 71. chi-square test statistic = ________ 72. Graph the situation. Label and scale the horizontal axis. Mark the mean and test statistic. Shade the area associated with the level of confidence. 73. Let $\alpha = 0.05$ Decision: ________ Conclusion (write out in a complete sentence.): ________ 74. How did you know to test the variance instead of the mean? 75. If an additional test were done on the claim of the average delay, which distribution would you use? 76. If an additional test were done on the claim of the average delay, but 45 flights were surveyed, which distribution would you use? 77. A plant manager is concerned her equipment may need recalibrating. It seems that the actual weight of the 15 oz. cereal boxes it fills has been fluctuating. The standard deviation should be at most 0.5 oz. In order to determine if the machine needs to be recalibrated, 84 randomly selected boxes of cereal from the next day’s production were weighed. The standard deviation of the 84 boxes was 0.54. Does the machine need to be recalibrated? 78. Consumers may be interested in whether the cost of a particular calculator varies from store to store. Based on surveying 43 stores, which yielded a sample mean of $84 and a sample standard deviation of$12, test the claim that the standard deviation is greater than $15. 79. Isabella, an accomplished Bay to Breakers runner, claims that the standard deviation for her time to run the 7.5 mile race is at most three minutes. To test her claim, Rupinder looks up five of her race times. They are 55 minutes, 61 minutes, 58 minutes, 63 minutes, and 57 minutes. 80. Airline companies are interested in the consistency of the number of babies on each flight, so that they have adequate safety equipment. They are also interested in the variation of the number of babies. Suppose that an airline executive believes the average number of babies on flights is six with a variance of nine at most. The airline conducts a survey. The results of the 18 flights surveyed give a sample average of 6.4 with a sample standard deviation of 3.9. Conduct a hypothesis test of the airline executive’s belief. 81. The number of births per woman in China is 1.6 down from 5.91 in 1966. This fertility rate has been attributed to the law passed in 1979 restricting births to one per woman. 
Suppose that a group of students studied whether or not the standard deviation of births per woman was greater than 0.75. They asked 50 women across China the number of births they had had. The results are shown in Table $28$. Does the students’ survey indicate that the standard deviation is greater than 0.75? Table $28$ # of births Frequency 0 5 1 30 2 10 3 5 82. According to an avid aquarist, the average number of fish in a 20-gallon tank is 10, with a standard deviation of two. His friend, also an aquarist, does not believe that the standard deviation is two. She counts the number of fish in 15 other 20-gallon tanks. Based on the results that follow, do you think that the standard deviation is different from two? Data: 11; 10; 9; 10; 10; 11; 11; 10; 12; 9; 7; 9; 11; 10; 11 83. The manager of "Frenchies" is concerned that patrons are not consistently receiving the same amount of French fries with each order. The chef claims that the standard deviation for a ten-ounce order of fries is at most 1.5 oz., but the manager thinks that it may be higher. He randomly weighs 49 orders of fries, which yields a mean of 11 oz. and a standard deviation of two oz. 84. You want to buy a specific computer. A sales representative of the manufacturer claims that retail stores sell this computer at an average price of$1,249 with a very narrow standard deviation of $25. You find a website that has a price comparison for the same computer at a series of stores as follows:$1,299; $1,229.99;$1,193.08; $1,279;$1,224.95; $1,229.99;$1,269.95; $1,249. Can you argue that pricing has a larger standard deviation than claimed by the manufacturer? Use the 5% significance level. As a potential buyer, what would be the practical conclusion from your analysis? 85. A company packages apples by weight. One of the weight grades is Class A apples. Class A apples have a mean weight of 150 g, and there is a maximum allowed weight tolerance of 5% above or below the mean for apples in the same consumer package. A batch of apples is selected to be included in a Class A apple package. Given the following apple weights of the batch, does the fruit comply with the Class A grade weight tolerance requirements. Conduct an appropriate hypothesis test. 1. at the 5% significance level 2. at the 1% significance level Weights in selected apple batch (in grams): 158; 167; 149; 169; 164; 139; 154; 150; 157; 171; 152; 161; 141; 166; 172; 11.3 Goodness-of-Fit Test 86. A six-sided die is rolled 120 times. Fill in the expected frequency column. Then, conduct a hypothesis test to determine if the die is fair. The data in Table $29$ are the result of the 120 rolls. Table $29$ Face value Frequency Expected frequency 1 15 2 29 3 16 4 15 5 30 6 15 87. The marital status distribution of the U.S. male population, ages 15 and older, is as shown in Table $30$. Marital status Percent Expected frequency Never married 31.3 Married 56.1 Widowed 2.5 Divorced/Separated 10.1 Table $30$ Suppose that a random sample of 400 U.S. young adult males, 18 to 24 years old, yielded the following frequency distribution. We are interested in whether this age group of males fits the distribution of the U.S. adult population. Calculate the frequency one would expect when surveying 400 people. Fill in Table $30$, rounding to two decimal places. Marital status Frequency Never married 140 Married 238 Widowed 2 Divorced/Separated 20 Table $31$ Use the following information to answer the next two exercises: The columns in Table $32$ contain the Race/Ethnicity of U.S. 
Public Schools for a recent year, the percentages for the Advanced Placement Examinee Population for that class, and the Overall Student Population. Suppose the right column contains the result of a survey of 1,000 local students from that year who took an AP Exam. Race/Ethnicity AP examinee population Overall student population Survey frequency Asian, Asian American, or Pacific Islander 10.2% 5.4% 113 Black or African-American 8.2% 14.5% 94 Hispanic or Latino 15.5% 15.9% 136 American Indian or Alaska Native 0.6% 1.2% 10 White 59.4% 61.6% 604 Not reported/other 6.1% 1.4% 43 Table $32$ 88. Perform a goodness-of-fit test to determine whether the local results follow the distribution of the U.S. overall student population based on ethnicity. 89. Perform a goodness-of-fit test to determine whether the local results follow the distribution of U.S. AP examinee population, based on ethnicity. 90. The City of South Lake Tahoe, CA, has an Asian population of 1,419 people, out of a total population of 23,609. Suppose that a survey of 1,419 self-reported Asians in the Manhattan, NY, area yielded the data in Table $33$. Conduct a goodness-of-fit test to determine if the self-reported sub-groups of Asians in the Manhattan area fit that of the Lake Tahoe area. Race Lake Tahoe frequency Manhattan frequency Asian Indian 131 174 Chinese 118 557 Filipino 1,045 518 Japanese 80 54 Korean 12 29 Vietnamese 9 21 Other 24 66 Table $33$ Use the following information to answer the next two exercises: UCLA conducted a survey of more than 263,000 college freshmen from 385 colleges in fall 2005. The results of students' expected majors by gender were reported in The Chronicle of Higher Education (2/2/2006). Suppose a survey of 5,000 graduating females and 5,000 graduating males was done as a follow-up last year to determine what their actual majors were. The results are shown in the tables for Table $36$ shows the business categories in the survey, the sample size of each category, and the number of businesses in each category that recycle one commodity. Based on the study, on average half of the businesses were expected to be recycling one commodity. As a result, the last column shows the expected number of businesses in each category that recycle one commodity. At the 5% significance level, perform a hypothesis test to determine if the observed number of businesses that recycle one commodity follows the uniform distribution of the expected values. Business type Number in class Observed number that recycle one commodity Expected number that recycle one commodity Office 35 19 17.5 Retail/Wholesale 48 27 24 Food/Restaurants 53 35 26.5 Manufacturing/Medical 52 21 26 Hotel/Mixed 24 9 12 Table $36$ 98. Table $37$ contains information from a survey among 499 participants classified according to their age groups. The second column shows the percentage of obese people per age class among the study participants. The last column comes from a different study at the national level that shows the corresponding percentages of obese people in the same age classes in the USA. Perform a hypothesis test at the 5% significance level to determine whether the survey participants are a representative sample of the USA obese population. Age class (years) Obese (percentage) Expected USA average (percentage) 20–30 75.0 32.6 31–40 26.5 32.6 41–50 13.6 36.6 51–60 21.9 36.6 61–70 21.0 39.7 Table $37$ 11.4 Test of Independence 99. A recent debate about where in the United States skiers believe the skiing is best prompted the following survey. 
Test to see if the best ski area is independent of the level of the skier. U.S. ski area Beginner Intermediate Advanced Tahoe 20 30 40 Utah 10 30 60 Colorado 10 40 50 Table 11.38 100. Car manufacturers are interested in whether there is a relationship between the size of car an individual drives and the number of people in the driver’s family (that is, whether car size and family size are independent). To test this, suppose that 800 car owners were randomly surveyed with the results in Table $39$. Conduct a test of independence. Family Size Sub & Compact Mid-size Full-size Van & Truck 1 20 35 40 35 2 20 50 70 80 3–4 20 50 100 90 5+ 20 30 70 70 Table $39$ 101. College students may be interested in whether or not their majors have any effect on starting salaries after graduation. Suppose that 300 recent graduates were surveyed as to their majors in college and their starting salaries after graduation. Table $40$ shows the data. Conduct a test of independence. Major <$50,000 $50,000 –$68,999 $69,000 + English 5 20 5 Engineering 10 30 60 Nursing 10 15 15 Business 10 20 30 Psychology 20 30 20 Table 11.40 102. Some travel agents claim that honeymoon hot spots vary according to age of the bride. Suppose that 280 recent brides were interviewed as to where they spent their honeymoons. The information is given in Table $41$. Conduct a test of independence. Location 20–29 30–39 40–49 50 and over Niagara Falls 15 25 25 20 Poconos 15 25 25 10 Europe 10 25 15 5 Virgin Islands 20 25 15 5 Table $41$ 103. A manager of a sports club keeps information concerning the main sport in which members participate and their ages. To test whether there is a relationship between the age of a member and his or her choice of sport, 643 members of the sports club are randomly selected. Conduct a test of independence. Sport 18 - 25 26 - 30 31 - 40 41 and over Racquetball 42 58 30 46 Tennis 58 76 38 65 Swimming 72 60 65 33 Table 11.42 104. A major food manufacturer is concerned that the sales for its skinny french fries have been decreasing. As a part of a feasibility study, the company conducts research into the types of fries sold across the country to determine if the type of fries sold is independent of the area of the country. The results of the study are shown in Table $43$. Conduct a test of independence. Type of Fries Northeast South Central West Skinny fries 70 50 20 25 Curly fries 100 60 15 30 Steak fries 20 40 10 10 Table $43$ 105. According to Dan Lenard, an independent insurance agent in the Buffalo, N.Y. area, the following is a breakdown of the amount of life insurance purchased by males in the following age groups. He is interested in whether the age of the male and the amount of life insurance purchased are independent events. Conduct a test for independence. Age of males None <$200,000 $200,000–$400,000 $401,001–$1,000,000 $1,000,001+ 20–29 40 15 40 0 5 30–39 35 5 20 20 10 40–49 20 0 30 0 30 50+ 40 30 15 15 10 Table 11.44 106. Suppose that 600 thirty-year-olds were surveyed to determine whether or not there is a relationship between the level of education an individual has and salary. Conduct a test of independence. Annual salary Not a high school graduate High school graduate College graduate Masters or doctorate <$30,000 15 25 10 5 $30,000–$40,000 20 40 70 30 $40,000–$50,000 10 20 40 55 $50,000–$60,000 5 10 20 60 \$60,000+ 0 5 10 150 Table $45$ Read the statement and decide whether it is true or false. 107. The number of degrees of freedom for a test of independence is equal to the sample size minus one. 
108. The test for independence uses tables of observed and expected data values. 109. The test to use when determining if the college or university a student chooses to attend is related to his or her socioeconomic status is a test for independence. 110. In a test of independence, the expected number is equal to the row total multiplied by the column total divided by the total surveyed. 111. An ice cream maker performs a nationwide survey about favorite flavors of ice cream in different geographic areas of the U.S. Based on Table $46$, do the numbers suggest that geographic location is independent of favorite ice cream flavors? Test at the 5% significance level. U.S. region/Flavor Strawberry Chocolate Vanilla Rocky road Mint chocolate chip Pistachio Row total West 12 21 22 19 15 8 97 Midwest 10 32 22 11 15 6 96 East 8 31 27 8 15 7 96 South 15 28 30 8 15 6 102 Column total 45 112 101 46 60 27 391 Table 11.46 112. Table $47$ provides a recent survey of the youngest online entrepreneurs whose net worth is estimated at one million dollars or more. Their ages range from 17 to 30. Each cell in the table illustrates the number of entrepreneurs who correspond to the specific age group and their net worth. Are the ages and net worth independent? Perform a test of independence at the 5% significance level. Age group\ Net worth value (in millions of US dollars) 1–5 6–24 ≥25 Row total 17–25 8 7 5 20 26–30 6 5 9 20 Column total 14 12 14 40 Table $47$ 113. A 2013 poll in California surveyed people about taxing sugar-sweetened beverages. The results are presented in Table $48$, and are classified by ethnic group and response type. Are the poll responses independent of the participants’ ethnic group? Conduct a test of independence at the 5% significance level. Opinion/Ethnicity Asian-American White/Non-Hispanic African-American Latino Row total Against tax 48 433 41 160 682 In favor of tax 54 234 24 147 459 No opinion 16 43 16 19 94 Column total 118 710 81 326 1235 Table 11.48 114. A psychologist is interested in testing whether there is a difference in the distribution of personality types for business majors and social science majors. The results of the study are shown in Table $49$. Conduct a test of homogeneity. Test at a 5% level of significance. Open Conscientious Extrovert Agreeable Neurotic Business 41 52 46 61 58 Social Science 72 75 63 80 65 Table $49$ 115. Do men and women select different breakfasts? The breakfasts ordered by randomly selected men and women at a popular breakfast place is shown in Table $50$. Conduct a test for homogeneity at a 5% level of significance. French toast Pancakes Waffles Omelettes Men 47 35 28 53 Women 65 59 55 60 Table 11.50 116. A fisherman is interested in whether the distribution of fish caught in Green Valley Lake is the same as the distribution of fish caught in Echo Lake. Of the 191 randomly selected fish caught in Green Valley Lake, 105 were rainbow trout, 27 were other trout, 35 were bass, and 24 were catfish. Of the 293 randomly selected fish caught in Echo Lake, 115 were rainbow trout, 58 were other trout, 67 were bass, and 53 were catfish. Perform a test for homogeneity at a 5% level of significance. 117. In 2007, the United States had 1.5 million homeschooled students, according to the U.S. National Center for Education Statistics. In Table $51$ you can see that parents decide to homeschool their children for different reasons, and some reasons are ranked by parents as more important than others. 
According to the survey results shown in the table, is the distribution of applicable reasons the same as the distribution of the most important reason? Provide your assessment at the 5% significance level. Did you expect the result you obtained? Reasons for fomeschooling Applicable reason (in thousands of respondents) Most important reason (in thousands of respondents) Row total Concern about the environment of other schools 1,321 309 1,630 Dissatisfaction with academic instruction at other schools 1,096 258 1,354 To provide religious or moral instruction 1,257 540 1,797 Child has special needs, other than physical or mental 315 55 370 Nontraditional approach to child’s education 984 99 1,083 Other reasons (e.g., finances, travel, family time, etc.) 485 216 701 Column total 5,458 1,477 6,935 Table 11.51 118. When looking at energy consumption, we are often interested in detecting trends over time and how they correlate among different countries. The information in Table $52$ shows the average energy use (in units of kg of oil equivalent per capita) in the USA and the joint European Union countries (EU) for the six-year period 2005 to 2010. Do the energy use values in these two areas come from the same distribution? Perform the analysis at the 5% significance level. Year European Union United States Row total 2010 3,413 7,164 10,557 2009 3,302 7,057 10,359 2008 3,505 7,488 10,993 2007 3,537 7,758 11,295 2006 3,595 7,697 11,292 2005 3,613 7,847 11,460 Column total 20,965 45,011 65,976 Table $52$ 119. The Insurance Institute for Highway Safety collects safety information about all types of cars every year, and publishes a report of Top Safety Picks among all cars, makes, and models. Table $53$ presents the number of Top Safety Picks in six car categories for the two years 2009 and 2013. Analyze the table data to conclude whether the distribution of cars that earned the Top Safety Picks safety award has remained the same between 2009 and 2013. Derive your results at the 5% significance level. Year \ Car type Small Mid-size Large Small SUV Mid-size SUV Large SUV Row total 2009 12 22 10 10 27 6 87 2013 31 30 19 11 29 4 124 Column total 43 52 29 21 56 10 211 Table 11.53 120. Is there a difference between the distribution of community college statistics students and the distribution of university statistics students in what technology they use on their homework? Of some randomly selected community college students, 43 used a computer, 102 used a calculator with built in statistics functions, and 65 used a table from the textbook. Of some randomly selected university students, 28 used a computer, 33 used a calculator with built in statistics functions, and 40 used a table from the textbook. Conduct an appropriate hypothesis test using a 0.05 level of significance. Read the statement and decide whether it is true or false. 121. If $df$ = 2, the chi-square distribution has a shape that reminds us of the exponential.
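These homework problems are intended to be worked with a calculator or statistical software. As one illustration of the mechanics (Python with scipy is used here only as an example tool; it is not required by the text), the ski-area table at the start of this problem set can be tested for independence as follows:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts from the ski-area exercise at the top of this homework set
# (rows: Tahoe, Utah, Colorado; columns: Beginner, Intermediate, Advanced)
observed = np.array([
    [20, 30, 40],
    [10, 30, 60],
    [10, 40, 50],
])

chi2, p_value, df, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.4f}, df = {df}, p-value = {p_value:.4f}")
print("expected counts:")
print(expected.round(2))

# Decision at the 5% level: reject independence when the p-value is below alpha
alpha = 0.05
print("reject H0" if p_value < alpha else "cannot reject H0")
```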
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/11%3A_The_Chi-Square_Distribution/11.09%3A_Chapter_Homework.txt
Contingency Table a table that displays sample values for two different factors that may be dependent or contingent on one another; it facilitates determining conditional probabilities. Goodness-of-Fit a hypothesis test that compares expected and observed values in order to look for significant differences within one non-parametric variable. The degrees of freedom used equals the (number of categories – 1). Test for Homogeneity a test used to draw a conclusion about whether two populations have the same distribution. The degrees of freedom used equals the (number of columns – 1). Test of Independence a hypothesis test that compares expected and observed values for contingency tables in order to test for independence between two variables. The degrees of freedom used equals the (number of columns – 1) multiplied by the (number of rows – 1). 11.11: Chapter Practice 1. 11.2 Test of a Single Variance Use the following information to answer the next three exercises: An archer’s standard deviation for his hits is six (data is measured in distance from the center of the target). An observer claims the standard deviation is less. 6. What type of test should be used? 7. State the null and alternative hypotheses. 8. Is this a right-tailed, left-tailed, or two-tailed test? Let $\alpha = 0.05$ Decision: ________________ Reason for the Decision: ________________ Conclusion (write out in complete sentences): ________________ 29. Does it appear that the pattern of AIDS cases in Santa Clara County corresponds to the distribution of ethnic groups in this county? Why or why not? 11.4 Test of Independence Determine the appropriate test to be used in the next three exercises. 30. A pharmaceutical company is interested in the relationship between age and presentation of symptoms for a common viral infection. A random sample is taken of 500 people with the infection across different age groups. 31. The owner of a baseball team is interested in the relationship between player salaries and team winning percentage. He takes a random sample of 100 players from different organizations. 32. A marathon runner is interested in the relationship between the brand of shoes runners wear and their run times. She takes a random sample of 50 runners and records their run times as well as the brand of shoes they were wearing. Use the following information to answer the next seven exercises: Transit Railroads is interested in the relationship between travel distance and the ticket class purchased. A random sample of 200 passengers is taken. Table $25$ shows the results. The railroad wants to know if a passenger’s choice in ticket class is independent of the distance they must travel. Traveling distance Third class Second class First class Total 1–100 miles 21 14 6 41 101–200 miles 18 16 8 42 201–300 miles 16 17 15 48 301–400 miles 12 14 21 47 401–500 miles 6 6 10 22 Total 73 67 60 200 Table $25$ 33. State the hypotheses. $H_0$: _______ $H_a$: _______ 34. $df$ = _______ 35. How many passengers are expected to travel between 201 and 300 miles and purchase second-class tickets? 36. How many passengers are expected to travel between 401 and 500 miles and purchase first-class tickets? 37. What is the test statistic? 38. What can you conclude at the 5% level of significance? Use the following information to answer the next eight exercises: An article in the New England Journal of Medicine, discussed a study on smokers in California and Hawaii. 
In one part of the report, the self-reported ethnicity and smoking levels per day were given. Of the people smoking at most ten cigarettes per day, there were 9,886 African Americans, 2,745 Native Hawaiians, 12,831 Latinos, 8,378 Japanese Americans and 7,650 whites. Of the people smoking 11 to 20 cigarettes per day, there were 6,514 African Americans, 3,062 Native Hawaiians, 4,932 Latinos, 10,680 Japanese Americans, and 9,877 whites. Of the people smoking 21 to 30 cigarettes per day, there were 1,671 African Americans, 1,419 Native Hawaiians, 1,406 Latinos, 4,715 Japanese Americans, and 6,062 whites. Of the people smoking at least 31 cigarettes per day, there were 759 African Americans, 788 Native Hawaiians, 800 Latinos, 2,305 Japanese Americans, and 3,970 whites. 39. Complete the table. Smoking level per day African American Native Hawaiian Latino Japanese Americans White Totals 1-10 11-20 21-30 31+ Totals Table $26$ Smoking Levels by Ethnicity (Observed) 40. State the hypotheses. $H_0$: _______ $H_a$: _______ 41. Enter expected values in Table $26$. Round to two decimal places. Calculate the following values: 42. $df$ = _______ 43. $\chi^2$ test statistic = ______ 44. Is this a right-tailed, left-tailed, or two-tailed test? Explain why. 45. Graph the situation. Label and scale the horizontal axis. Mark the mean and test statistic. Shade in the region corresponding to the confidence level. State the decision and conclusion (in a complete sentence) for the following preconceived levels of \alpha. 46. $\alpha = 0.05$ 1. Decision: ___________________ 2. Reason for the decision: ___________________ 3. Conclusion (write out in a complete sentence): ___________________ 47. $\alpha = 0.01$ 1. Decision: ___________________ 2. Reason for the decision: ___________________ 3. Conclusion (write out in a complete sentence): ___________________ 48. A math teacher wants to see if two of her classes have the same distribution of test scores. What test should she use? 49. What are the null and alternative hypotheses for Table $27$. 20–30 30–40 40–50 50–60 Private practice 16 40 38 6 Hospital 8 44 59 39 Table $27$ 53. State the null and alternative hypotheses. 54. $df$ = _______ 55. What is the test statistic? 56. What can you conclude at the 5% significance level? 57. Which test do you use to decide whether an observed distribution is the same as an expected distribution? 58. What is the null hypothesis for the type of test from Exercise $57$? 59. Which test would you use to decide whether two factors have a relationship? 60. Which test would you use to decide if two populations have the same distribution? 61. How are tests of independence similar to tests for homogeneity? 62. How are tests of independence different from tests for homogeneity?
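For the expected-count questions above, recall that each expected cell count equals (row total)(column total)/(overall total). The short sketch below shows that computation on a small made-up 2 × 3 table (the numbers are illustrative only, not data from any exercise); the same pattern applies to the railroad and smoking tables.

```python
import numpy as np

# Hypothetical 2 x 3 observed table (illustrative numbers only)
observed = np.array([
    [30, 45, 25],
    [20, 35, 45],
])

row_totals = observed.sum(axis=1, keepdims=True)   # shape (2, 1)
col_totals = observed.sum(axis=0, keepdims=True)   # shape (1, 3)
grand_total = observed.sum()

# Expected count for each cell: (row total)(column total) / (grand total)
expected = row_totals @ col_totals / grand_total
print(expected.round(2))

# Chi-square test statistic: sum over all cells of (O - E)^2 / E
chi_square = ((observed - expected) ** 2 / expected).sum()
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
print(f"test statistic = {chi_square:.3f}, df = {df}")
```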
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/11%3A_The_Chi-Square_Distribution/11.10%3A_Chapter_Key_Terms.txt
11.1 Facts About the Chi-Square Distribution Data from Parade Magazine. “HIV/AIDS Epidemiology Santa Clara County.”Santa Clara County Public Health Department, May 2011. 11.2 Test of a Single Variance “AppleInsider Price Guides.” Apple Insider, 2013. Available online at http://appleinsider.com/mac_price_guide (accessed May 14, 2013). Data from the World Bank, June 5, 2012. 11.3 Goodness-of-Fit Test Data from the U.S. Census Bureau Data from the College Board. Available online at http://www.collegeboard.com. Data from the U.S. Census Bureau, Current Population Reports. Ma, Y., E.R. Bertone, E.J. Stanek III, G.W. Reed, J.R. Hebert, N.L. Cohen, P.A. Merriam, I.S. Ockene, “Association between Eating Patterns and Obesity in a Free-living US Adult Population.” American Journal of Epidemiology volume 158, no. 1, pages 85-92. Ogden, Cynthia L., Margaret D. Carroll, Brian K. Kit, Katherine M. Flegal, “Prevalence of Obesity in the United States, 2009–2010.” NCHS Data Brief no. 82, January 2012. Available online at http://www.cdc.gov/nchs/data/databriefs/db82.pdf (accessed May 24, 2013). Stevens, Barbara J., “Multi-family and Commercial Solid Waste and Recycling Survey.” Arlington Count, VA. Available online at www.arlingtonva.us/department.../file84429.pdf (accessed May 24,2013). 11.4 Test of Independence DiCamilo, Mark, Mervin Field, “Most Californians See a Direct Linkage between Obesity and Sugary Sodas. Two in Three Voters Support Taxing Sugar-Sweetened Beverages If Proceeds are Tied to Improving School Nutrition and Physical Activity Programs.” The Field Poll, released Feb. 14, 2013. Available online at field.com/fieldpollonline/sub...rs/Rls2436.pdf (accessed May 24, 2013). Harris Interactive, “Favorite Flavor of Ice Cream.” Available online at http://www.statisticbrain.com/favori...r-of-ice-cream (accessed May 24, 2013) “Youngest Online Entrepreneurs List.” Available online at http://www.statisticbrain.com/younge...repreneur-list (accessed May 24, 2013). 11.5 Test for Homogeneity Data from the Insurance Institute for Highway Safety, 2013. Available online at www.iihs.org/iihs/ratings (accessed May 24, 2013). “Energy use (kg of oil equivalent per capita).” The World Bank, 2013. Available online at http://data.worldbank.org/indicator/...G.OE/countries (accessed May 24, 2013). “Parent and Family Involvement Survey of 2007 National Household Education Survey Program (NHES),” U.S. Department of Education, National Center for Education Statistics. Available online at http://nces.ed.gov/pubsearch/pubsinf...?pubid=2009030 (accessed May 24, 2013). “Parent and Family Involvement Survey of 2007 National Household Education Survey Program (NHES),” U.S. Department of Education, National Center for Education Statistics. Available online at http://nces.ed.gov/pubs2009/2009030_sup.pdf (accessed May 24, 2013). 11.13: Chapter Review 11.1 Facts About the Chi-Square Distribution The chi-square distribution is a useful tool for assessment in a series of problem categories. These problem categories include primarily (i) whether a data set fits a particular distribution, (ii) whether the distributions of two populations are the same, (iii) whether two events might be independent, and (iv) whether there is a different variability than expected within a population. An important parameter in a chi-square distribution is the degrees of freedom \(df\) in a given problem. The random variable in the chi-square distribution is the sum of squares of \(df\) standard normal variables, which must be independent. 
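That last statement can be checked by simulation: summing the squares of \(df\) independent standard normal draws produces values that follow a chi-square distribution with \(df\) degrees of freedom. The sketch below uses Python with numpy/scipy purely as an illustration; the number of draws and the seed are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)   # arbitrary seed
df = 5                           # degrees of freedom
n_draws = 100_000                # arbitrary simulation size

# Sum the squares of df independent standard normal variables
z = rng.standard_normal(size=(n_draws, df))
simulated = (z ** 2).sum(axis=1)

# A chi-square(df) random variable has mean df and variance 2*df
print(simulated.mean(), simulated.var())   # close to 5 and 10

# Kolmogorov-Smirnov check against the chi-square(df) distribution
print(stats.kstest(simulated, stats.chi2(df).cdf))
```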
The key characteristics of the chi-square distribution also depend directly on the degrees of freedom. The chi-square distribution curve is skewed to the right, and its shape depends on the degrees of freedom \(df\). For \(df > 90\), the curve approximates the normal distribution. Test statistics based on the chi-square distribution are always greater than or equal to zero. Such application tests are almost always right-tailed tests. 11.2 Test of a Single Variance To test variability, use the chi-square test of a single variance. The test may be left-, right-, or two-tailed, and its hypotheses are always expressed in terms of the variance (or standard deviation). 11.3 Goodness-of-Fit Test To assess whether a data set fits a specific distribution, you can apply the goodness-of-fit hypothesis test that uses the chi-square distribution. The null hypothesis for this test states that the data come from the assumed distribution. The test compares observed values against the values you would expect to have if your data followed the assumed distribution. The test is almost always right-tailed. Each observation or cell category must have an expected value of at least five. 11.4 Test of Independence To assess whether two factors are independent or not, you can apply the test of independence that uses the chi-square distribution. The null hypothesis for this test states that the two factors are independent. The test compares observed values to expected values. The test is right-tailed. Each observation or cell category must have an expected value of at least five. 11.5 Test for Homogeneity To assess whether two data sets are derived from the same distribution (which need not be known), you can apply the test for homogeneity that uses the chi-square distribution. The null hypothesis for this test states that the populations of the two data sets come from the same distribution. The test compares the observed values against the expected values if the two populations followed the same distribution. The test is right-tailed. Each observation or cell category must have an expected value of at least five. 11.6 Comparison of the Chi-Square Tests The goodness-of-fit test is typically used to determine if data fits a particular distribution. The test of independence makes use of a contingency table to determine the independence of two factors. The test for homogeneity determines whether two populations come from the same distribution, even if this distribution is unknown. 11.14: Chapter Solution (Practice Homework) 1. mean = 25 and standard deviation = 7.0711 3. when the number of degrees of freedom is greater than 90 5. $df = 2$ 6. a test of a single variance 8. a left-tailed test 10. $H_0: \sigma^2 = 0.812$; $H_a: \sigma^2 > 0.812$. 12. a test of a single variance 16. a goodness-of-fit test 18. 3 20. 2.04 21. We decline to reject the null hypothesis. There is not enough evidence to suggest that the observed test scores are significantly different from the expected test scores. 23. $H_0$: the distribution of AIDS cases follows the ethnicities of the general population of Santa Clara County. 25. right-tailed 27. 2016.136 28. • 30. a test of independence a test of independence 34. 8 36. 6.6 39. Smoking level per day African American Native Hawaiian Latino Japanese Americans White Totals 1-10 9,886 2,745 12,831 8,378 7,650 41,490 11-20 6,514 3,062 4,932 10,680 9,877 35,065 21-30 1,671 1,419 1,406 4,715 6,062 15,273 31+ 759 788 800 2,305 3,970 8,622 Totals 18,830 8,014 19,969 26,078 27,559 100,450 Table $54$ 41. 
Smoking level per day African American Native Hawaiian Latino Japanese Americans White 1-10 7777.57 3310.11 8248.02 10771.29 11383.01 11-20 6573.16 2797.52 6970.76 9103.29 9620.27 21-30 2863.02 1218.49 3036.20 3965.05 4190.23 31+ 1616.25 687.87 1714.01 2238.37 2365.49 Table $55$ 43. 10,301.8 44. right 46. 1. 48. test for homogeneity test for homogeneity 52. All values in the table must be greater than or equal to five. 54. 3 57. a goodness-of-fit test 59. a test for independence 61. Answers will vary. Sample answer: Tests of independence and tests for homogeneity both calculate the test statistic the same way $\sum_{(i j)} \frac{(O-E)^{2}}{E}$. In addition, all values must be greater than or equal to five. 63. true 65. false 67. 225 69. $H_0: \sigma^2 \leq 150$ 71. 36 72. Check student’s solution. 74. The claim is that the variance is no more than 150 minutes. 76. a Student's $t$- or normal distribution 78. 1. 80. 1. 82. 1. 84. 1. 87. Marital status Percent Expected frequency Never married 31.3 125.2 Married 56.1 224.4 Widowed 2.5 10 Divorced/Separated 10.1 40.4 Table $56$ 1. 89. 1. 91. 1. 94. true false 98. 1. 100. 1. 102. 1. 104. 1. 106. 1. 108. true true 112. 1. 114. 1. 116. 1. 118. 1. 120. 1. 122. 1. The test statistic is always positive and if the expected and observed values are not close together, the test statistic is large and the null hypothesis will be rejected. 2. Testing to see if the data fits the distribution “too well” or is too perfect.
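The expected counts in Table $55$ and the test statistic in solution 43 can be checked with a few lines of software. The sketch below (Python/scipy, used here only for verification) feeds the observed table from the exercise into a test of independence:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed smoking-level-by-ethnicity counts (rows: 1-10, 11-20, 21-30, 31+ per day)
observed = np.array([
    [9886, 2745, 12831,  8378,  7650],
    [6514, 3062,  4932, 10680,  9877],
    [1671, 1419,  1406,  4715,  6062],
    [ 759,  788,   800,  2305,  3970],
])

chi2, p_value, df, expected = chi2_contingency(observed)
print(np.round(expected, 2))                  # compare with Table 55 above
print(f"df = {df}, chi-square = {chi2:.1f}")  # compare with solution 43
```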
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/11%3A_The_Chi-Square_Distribution/11.12%3A_Chapter_References.txt
Many statistical applications in psychology, social science, business administration, and the natural sciences involve several groups. For example, an environmentalist is interested in knowing if the average amount of pollution varies in several bodies of water. A sociologist is interested in knowing if the amount of income a person earns varies according to his or her upbringing. A consumer looking for a new car might compare the average gas mileage of several models. For hypothesis tests comparing averages among more than two groups, statisticians have developed a method called "Analysis of Variance" (abbreviated ANOVA). In this chapter, you will study the simplest form of ANOVA called single factor or one-way ANOVA. You will also study the \(F\) distribution, used for one-way ANOVA, and the test for differences between two variances. This is just a very brief overview of one-way ANOVA. One-Way ANOVA, as it is presented here, relies heavily on a calculator or computer. 12.01: Test of Two Variances This chapter introduces a new probability density function, the $F$ distribution. This distribution is used for many applications including ANOVA and for testing equality across multiple means. We begin with the $F$ distribution and the test of hypothesis of differences in variances. It is often desirable to compare two variances rather than two averages. For instance, college administrators would like two college professors grading exams to have the same variation in their grading. In order for a lid to fit a container, the variation in the lid and the container should be approximately the same. A supermarket might be interested in the variability of check-out times for two checkers. In finance, the variance is a measure of risk and thus an interesting question would be to test the hypothesis that two different investment portfolios have the same variance, the volatility. In order to perform a $F$ test of two variances, it is important that the following are true: 1. The populations from which the two samples are drawn are approximately normally distributed. 2. The two populations are independent of each other. Unlike most other hypothesis tests in this book, the $F$ test for equality of two variances is very sensitive to deviations from normality. If the two distributions are not normal, or close, the test can give a biased result for the test statistic. Suppose we sample randomly from two independent normal populations. Let $\sigma_1^2$ and $\sigma_2^2$ be the unknown population variances and $s_1^2$and $s_2^2$ be the sample variances. Let the sample sizes be $n_1$ and $n_2$. Since we are interested in comparing the two sample variances, we use the $F$ ratio: $F=\frac{\left[\frac{s_{1}^{2}}{\sigma_{1}^{2}}\right]}{\left[\frac{s_{2}^{2}}{\sigma_{2}^{2}}\right]}$ $F$ has the distribution $F \sim F\left(n_{1}-1, n_{2}-1\right)$ where $n_1 – 1$ are the degrees of freedom for the numerator and $n_2 – 1$ are the degrees of freedom for the denominator. 
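In software, the two-variance test reduces to forming the ratio of the sample variances and comparing it with the appropriate $F$ distribution. The sketch below is illustrative only: the sample variances and sample sizes are made-up placeholders, not data from the text, and Python/scipy is simply one possible tool.

```python
from scipy import stats

# Hypothetical sample results (placeholders, not data from the text)
s1_sq, n1 = 4.8, 15    # sample variance and sample size, group 1
s2_sq, n2 = 2.9, 12    # sample variance and sample size, group 2

F = s1_sq / s2_sq                  # test statistic for H0: sigma1^2 = sigma2^2
df1, df2 = n1 - 1, n2 - 1

# Two-tailed p-value: twice the smaller tail area under the F(df1, df2) curve
p_value = 2 * min(stats.f.cdf(F, df1, df2), stats.f.sf(F, df1, df2))

# Critical values at alpha = 0.05; the lower one uses the reversed degrees of freedom
upper_crit = stats.f.ppf(1 - 0.025, df1, df2)
lower_crit = 1 / stats.f.ppf(1 - 0.025, df2, df1)

print(f"F = {F:.3f}, df = ({df1}, {df2}), p-value = {p_value:.4f}")
print(f"critical values: ({lower_crit:.3f}, {upper_crit:.3f})")
```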
If the null hypothesis is $\sigma_{1}^{2}=\sigma_{2}^{2}$, then the $F$ Ratio, test statistic, becomes $F_{c}=\frac{\left[\frac{s_{1}^{2}}{\sigma_{1}^{2}}\right]}{\left[\frac{s_{2}^{2}}{\sigma_{2}^{2}}\right]}=\frac{s_{1}^{2}}{s_{2}^{2}}$ The various forms of the hypotheses tested are: Two-Tailed Test One-Tailed Test One-Tailed Test $\mathrm{H}_{0} : \sigma_{1}^{2}=\sigma_{2}^{2}$ $\mathrm{H}_{0} : \sigma_{1}^{2} \leq \sigma_{2}^{2}$ $\mathrm{H}_{0} : \sigma_{1}^{2} \geq \sigma_{2}^{2}$ $\mathrm{H}_{1} : \sigma_{1}^{2} \neq \sigma_{2}^{2}$ $\mathrm{H}_{1} : \sigma_{1}^{2}>\sigma_{2}^{2}$ $\mathrm{H}_{1} : \sigma_{1}^{2}<\sigma_{2}^{2}$ Table 12.1 A more general form of the null and alternative hypothesis for a two tailed test would be : $H_{0} : \frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}=\delta_{0}\nonumber$ $H_{a} : \frac{\sigma_{1}^{2}}{\sigma_{2}^{2}} \neq \delta_{0}\nonumber$ Where if $\delta_{0}=1$ it is a simple test of the hypothesis that the two variances are equal. This form of the hypothesis does have the benefit of allowing for tests that are more than for simple differences and can accommodate tests for specific differences as we did for differences in means and differences in proportions. This form of the hypothesis also shows the relationship between the $F$ distribution and the $\chi^2$ : the $F$ is a ratio of two chi squared distributions a distribution we saw in the last chapter. This is helpful in determining the degrees of freedom of the resultant $F$ distribution. If the two populations have equal variances, then $s_1^2$ and $s_2^2$ are close in value and the test statistic, $F_{c}=\frac{s_{1}^{2}}{s_{2}^{2}}$ is close to one. But if the two population variances are very different, $s_1^2$ and $s_2^2$ tend to be very different, too. Choosing $s_1^2$ as the larger sample variance causes the ratio $\frac{s_{1}^{2}}{s_{2}^{2}}$ to be greater than one. If $s_1^2$ and $s_2^2$ are far apart, then $F_{c}=\frac{s_{1}^{2}}{s_{2}^{2}}$ is a large number. Therefore, if $F$ is close to one, the evidence favors the null hypothesis (the two population variances are equal). But if $F$ is much larger than one, then the evidence is against the null hypothesis. In essence, we are asking if the calculated F statistic, test statistic, is significantly different from one. To determine the critical points we have to find $F_{\alpha,df1,df2}$. See Appendix A for the $F$ table. This $F$ table has values for various levels of significance from 0.1 to 0.001 designated as "p" in the first column. To find the critical value choose the desired significance level and follow down and across to find the critical value at the intersection of the two different degrees of freedom. The $F$ distribution has two different degrees of freedom, one associated with the numerator, $_{df1}$, and one associated with the denominator, $_{df2}$ and to complicate matters the $F$ distribution is not symmetrical and changes the degree of skewness as the degrees of freedom change. The degrees of freedom in the numerator is $n_1-1$, where $n_1$ is the sample size for group 1, and the degrees of freedom in the denominator is $n_2-1$, where $n_2$ is the sample size for group 2. $F_{\alpha,df1,df2}$ will give the critical value on the upper end of the $F$ distribution. To find the critical value for the lower end of the distribution, reverse the degrees of freedom and divide the $F$-value from the table into one. 
• Upper tail critical value : $F_{\alpha,df1,df2}$ • Lower tail critical value : $1/F_{\alpha,df2,df1}$ When the calculated value of $F$ is between the critical values, not in the tail, we cannot reject the null hypothesis that the two variances came from a population with the same variance. If the calculated F-value is in either tail we cannot accept the null hypothesis just as we have been doing for all of the previous tests of hypothesis. An alternative way of finding the critical values of the $F$ distribution makes the use of the $F$-table easier. We note in the $F$-table that all the values of $F$ are greater than one therefore the critical $F$ value for the left hand tail will always be less than one because to find the critical value on the left tail we divide an $F$ value into the number one as shown above. We also note that if the sample variance in the numerator of the test statistic is larger than the sample variance in the denominator, the resulting $F$ value will be greater than one. The shorthand method for this test is thus to be sure that the larger of the two sample variances is placed in the numerator to calculate the test statistic. This will mean that only the right hand tail critical value will have to be found in the $F$-table. Example 12.1 Two college instructors are interested in whether or not there is any variation in the way they grade math exams. They each grade the same set of 10 exams. The first instructor's grades have a variance of 52.3. The second instructor's grades have a variance of 89.9. Test the claim that the first instructor's variance is smaller. (In most colleges, it is desirable for the variances of exam grades to be nearly the same among instructors.) The level of significance is 10%. Answer Solution 12.1 Let 1 and 2 be the subscripts that indicate the first and second instructor, respectively. $n_1 = n_2 = 10$. $H_{0} : \sigma_{1}^{2} \geq \sigma_{2}^{2}$ and $H_{a} : \sigma_{1}^{2}<\sigma_{2}^{2}$ Calculate the test statistic: By the null hypothesis ($\sigma_{1}^{2} \geq \sigma_{2}^{2}$), the $F$ statistic is: $F_{c}=\frac{s_{2}^{2}}{s_{1}^{2}}=\frac{89.9}{52.3}=1.719$ Critical value for the test: $F_{9,9}=5.35$ where $n_1 – 1 = 9$ and $n_2 – 1 = 9$. Make a decision: Since the calculated $F$ value is not in the tail we cannot reject $H_0$. Conclusion: With a 10% level of significance, from the data, there is insufficient evidence to conclude that the variance in grades for the first instructor is smaller. Exercise 12.1 The New York Choral Society divides male singers up into four categories from highest voices to lowest: Tenor1, Tenor2, Bass1, Bass2. In the table are heights of the men in the Tenor1 and Bass2 groups. One suspects that taller men will have lower voices, and that the variance of height may go up with the lower voices as well. Do we have good evidence that the variance of the heights of singers in each of these two groups (Tenor1 and Bass2) are different? Tenor1 Bass 2 Tenor 1 Bass 2 Tenor 1 Bass 2 69 72 67 72 68 67 72 75 70 74 67 70 71 67 65 70 64 70 66 75 72 66 69 76 74 70 68 72 74 72 68 75 71 71 72 64 68 74 66 74 73 70 75 68 72 66 72 Table 12.2 12.02: One-Way ANOVA The purpose of a one-way ANOVA test is to determine the existence of a statistically significant difference among several group means. The test actually uses variances to help determine if the means are equal or not. In order to perform a one-way ANOVA test, there are five basic assumptions to be fulfilled: 1. 
The null hypothesis is simply that all the group population means are the same. The alternative hypothesis is that at least one pair of means is different. For example, if there are k groups: $H_{0} : \mu_{1}=\mu_{2}=\mu_{3}=\ldots \mu_{k}$ The graphs, a set of box plots representing the distribution of values with the group means indicated by a horizontal line through the box, help in the understanding of the hypothesis test. In the first graph (red box plots), $H_{0} : \mu_{1}=\mu_{2}=\mu_{3}$ and the three populations have the same distribution if the null hypothesis is true. The variance of the combined data is approximately the same as the variance of each of the populations. If the null hypothesis is false, then the variance of the combined data is larger which is caused by the different means as shown in the second graph (green box plots).
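The box-plot intuition can also be checked numerically: when the group means truly differ, the variance of the combined data exceeds the typical within-group variance. The simulation sketch below uses made-up group sizes, means, and a common standard deviation (all arbitrary choices), with Python as an illustrative tool.

```python
import numpy as np

rng = np.random.default_rng(1)      # arbitrary seed
n_per_group, sigma = 200, 1.0       # arbitrary group size and common standard deviation

def combined_vs_within(means):
    """Return (variance of the combined data, average within-group variance)."""
    groups = [rng.normal(mu, sigma, n_per_group) for mu in means]
    combined = np.concatenate(groups)
    return combined.var(ddof=1), np.mean([g.var(ddof=1) for g in groups])

# Null hypothesis true: all group means equal, the two variances are about the same
print(combined_vs_within([5.0, 5.0, 5.0]))

# Null hypothesis false: different means inflate the variance of the combined data
print(combined_vs_within([4.0, 5.0, 6.0]))
```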
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/12%3A_F_Distribution_and_One-Way_ANOVA/12.00%3A_Introduction_to_F_Distribution_and_One-Way_ANOVA.txt
The distribution used for the hypothesis test is a new one. It is called the F-distribution, invented by George Snedecor but named in honor of Sir Ronald Fisher, an English statistician. The $F$ statistic is a ratio (a fraction). There are two sets of degrees of freedom; one for the numerator and one for the denominator. For example, if $F$ follows an $F$ distribution and the number of degrees of freedom for the numerator is four, and the number of degrees of freedom for the denominator is ten, then $F \sim F_{4,10}$. To calculate the $\bf{F}$ ratio, two estimates of the variance are made. 1. Variance between samples: An estimate of $\sigma^2$ that is the variance of the sample means multiplied by $n$ (when the sample sizes are the same). If the samples are different sizes, the variance between samples is weighted to account for the different sample sizes. The variance is also called variation due to treatment or explained variation. 2. Variance within samples: An estimate of $\sigma^2$ that is the average of the sample variances (also known as a pooled variance). When the sample sizes are different, the variance within samples is weighted. The variance is also called the variation due to error or unexplained variation. • $SS_{between}$ is the sum of squares that represents the variation among the different samples • $SS_{within}$ is the sum of squares that represents the variation within samples that is due to chance. To find a "sum of squares" means to add together squared quantities that, in some cases, may be weighted. We used sum of squares to calculate the sample variance and the sample standard deviation in Table 1.19. MS means "mean square." $MS_{between}$ is the variance between groups, and $MS_{within}$ is the variance within groups. Calculation of Sum of Squares and Mean Square • $k$ is the number of different groups • $n_j$ is the size of the $j^{th}$ group • $s_j$ = the sum of the values in the $j^{th}$ group • $n$ is the total number of all the values combined (total sample size: $\Sigma n_{j}$) • $x$ is one value: $\sum x=\sum s_{j} \nonumber$ • Sum of squares of all values from every group combined: $\sum x^{2} \nonumber$ • Total sum of squares: $SS_{total} =\sum x^{2}-\frac{\left(\sum x\right)^{2}}{n} \nonumber$ • Explained variation: sum of squares representing variation among the different samples: $SS_{between} =\sum\left[\frac{\left(s_{j}\right)^{2}}{n_{j}}\right]-\frac{\left(\sum s_{j}\right)^{2}}{n} \nonumber$ • Unexplained variation: sum of squares representing variation within samples due to chance: $S S_{\text { within }}=S S_{\text { total }}-S S_{\text { between }} \nonumber$ • $df$'s for different groups ($df$'s for the numerator): $df = k – 1 \nonumber$ • Equation for errors within samples ($df$'s for the denominator): $df_{within} = n – k \nonumber$ • Mean square (variance estimate) explained by the different groups: $M S_{\text { between }}=\frac{S S_{\text { between }}}{d f_{\text { between }}} \nonumber$ • Mean square (variance estimate) that is due to chance (unexplained): $M S_{\mathrm{within}}=\frac{S S_{\mathrm{within}}}{d f_{\mathrm{within}}} \nonumber$ $MS_{between}$ and $MS_{within}$ can be written as follows: \begin{align*} M S_{\mathrm{between}} & =\frac{S S_{\mathrm{between}}}{d f_{\mathrm{between}}}=\frac{S S_{\mathrm{between}}}{k-1} \\[4pt] M S_{\mathrm{within}} &=\frac{SS_{\mathrm{within}}}{df_{\mathrm{within}}}=\frac{SS_{\mathrm{within}}}{n-k}\end{align*} The one-way ANOVA test 
depends on the fact that $M S_{between}$ can be influenced by population differences among means of the several groups. Since $M S_{within}$ compares values of each group to its own group mean, the fact that group means might be different does not affect $M S_{within}$. The null hypothesis says that all groups are samples from populations having the same normal distribution. The alternate hypothesis says that at least two of the sample groups come from populations with different normal distributions. If the null hypothesis is true, $M S_{between}$ and $M S_{within}$ should both estimate the same value. Note The null hypothesis says that all the group population means are equal. The hypothesis of equal means implies that the populations have the same normal distribution, because it is assumed that the populations are normal and that they have equal variances. Definition: F-Ratio or F Statistic $F=\frac{M S_{\text { between }}}{M S_{\text { within }}}$ If $M S_{between}$ and $M S_{within}$ estimate the same value (following the belief that $H_0$ is true), then the $F$-ratio should be approximately equal to one. Mostly, just sampling errors would contribute to variations away from one. As it turns out, $M S_{between}$ consists of the population variance plus a variance produced from the differences between the samples. $M S_{within}$ is an estimate of the population variance. Since variances are always positive, if the null hypothesis is false, $M S_{between}$ will generally be larger than $MS_{within}$.Then the $F$-ratio will be larger than one. However, if the population effect is small, it is not unlikely that $M S_{within}$ will be larger in a given sample. The foregoing calculations were done with groups of different sizes. If the groups are the same size, the calculations simplify somewhat and the F-ratio can be written as: F-Ratio Formula when the groups are the same size The foregoing calculations were done with groups of different sizes. If the groups are the same size, the calculations simplify somewhat and the F-ratio can be written as $F=\frac{n \cdot s_{\overline{x}}^{2}}{s^{2}_{ pooled }}$ where • $n$ = the sample size • $d f_{\text {numerator}}=k-1$ • $d f_{\text {denominator}}=n-k$ • $s_{pooled}^2$ = the mean of the sample variances (pooled variance) • $s_{\overline x}^2$ = the variance of the sample means Data are typically put into a table for easy viewing. One-Way ANOVA results are often displayed in this manner by computer software. Table $1$ Source of variation Sum of squares ($SS$) Degrees of freedom ($df$) Mean square ($MS$) $F$ Factor (Between) $SS$(Factor) $k – 1$ $MS(Factor) = \dfrac{SS(Factor)}{k– 1}$ $F = \dfrac{MS(Factor)}{MS(Error)}$ Error (Within) $SS$(Error) $n – k$ $MS(Error) = \dfrac{SS(Error)}{n – k}$ Total $SS$(Total) $n – 1$ Example 12.2 Three different diet plans are to be tested for mean weight loss. The entries in the table are the weight losses for the different plans. The one-way ANOVA results are shown in Table $2$. Table $2$ Plan 1: $n_1 = 4$ Plan 2: $n_2 = 3$ Plan 3: $n_3 = 3$ 5 3.5 8 4.5 7 4 4   3.5 3 4.5 $s_{1}=16.5, s_{2}=15, s_{3}=15.5$ Following are the calculations needed to fill in the one-way ANOVA table. The table is used to conduct a hypothesis test. 
\begin{align*} S(\text { between }) &=\sum\left[\frac{\left(s_{j}\right)^{2}}{n_{j}}\right]-\frac{\left(\displaystyle \sum s_{j}\right)^{2}}{n} \[4pt] &=\frac{s_{1}^{2}}{4}+\frac{s_{2}^{2}}{3}+\frac{s_{3}^{2}}{3}-\frac{\left(s_{1}+s_{2}+s_{3}\right)^{2}}{10}\end{align*} where $n_{1}=4, n_{2}=3, n_{3}=3$ and $n=n_{1}+n_{2}+n_{3}=10$. \begin{align*} S(\text { between }) &= \frac{(16.5)^{2}}{4}+\frac{(15)^{2}}{3}+\frac{(15.5)^{2}}{3}-\frac{(16.5+15+15.5)^{2}}{10} \[4pt] &=2.2458 \[4pt] S(\text {total}) &=\sum x^{2}-\frac{\left(\sum x\right)^{2}}{n} \[4pt] &=\left(5^{2}+4.5^{2}+4^{2}+3^{2}+3.5^{2}+7^{2}+4.5^{2}+8^{2}+4^{2}+3.5^{2}\right) -\frac{(5+4.5+4+3+3.5+7+4.5+8+4+3.5)^{2}}{10}\[4pt] &=244-\frac{47^{2}}{10} \[4pt] &=244-220.9 \[4pt] & =23.1 \[4pt] S(\text {within}) & = S(\text {total})-S S(\text {between}) \[4pt] &=23.1-2.2458 \[4pt] &=20.8542 \end{align*} Table $3$ Source of variation Sum of squares ($SS$) Degrees of freedom ($df$) Mean square ($MS$) $F$ Factor (Between) $SS(Factor) = SS(Between) \= 2.2458$ $k – 1 = 3 groups – 1 \= 2$ $MS(Factor) = \dfrac{SS(Factor)}{k – 1} \= 2.2458/2 \= 1.1229$ $F = \dfrac{MS(Factor)}{MS(Error)} \ = \dfrac{1.1229}{2.9792} \= 0.3769$ Error (Within) $SS(Error) = SS(Within) \ = 20.8542$ $n – k = 10 total data – 3 groups \= 7$ $MS(Error) = \dfrac{SS(Error)}{n – k} \= \dfrac{20.8542}{7} \= 2.9792$ Total $SS(Total) = 2.2458 + 20.8542 \= 23.1$ $n – 1 = 10 total data – 1 \= 9$ Exercise 12.2 As part of an experiment to see how different types of soil cover would affect slicing tomato production, Marist College students grew tomato plants under different soil cover conditions. Groups of three plants each had one of the following treatments • bare soil • a commercial ground cover • black plastic • straw • compost All plants grew under the same conditions and were the same variety. Students recorded the weight (in grams) of tomatoes produced by each of the n = 15 plants: Bare: $n_1 = 3$ Ground Cover: $n_2 = 3$ Plastic: $n_3 = 3$ Straw: $n_4 = 3$ Compost: $n_5 = 3$ 2,625 5,348 6,583 7,285 6,277 2,997 5,682 8,560 6,897 7,818 4,915 5,482 3,830 9,230 8,677 Table $4$ Create the one-way ANOVA table. The one-way ANOVA hypothesis test is always right-tailed because larger $F$-values are way out in the right tail of the F-distribution curve and tend to make us reject $H_0$. Example 12.3 Let’s return to the slicing tomato exercise in Try It. The means of the tomato yields under the five mulching conditions are represented by $\mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}, \mu_{5}$. We will conduct a hypothesis test to determine if all means are the same or at least one is different. Using a significance level of 5%, test the null hypothesis that there is no difference in mean yields among the five groups against the alternative hypothesis that at least one mean is different from the rest. 
Answer The null and alternative hypotheses are: $H_{0} : \mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}=\mu_{5}$ $H_{a} : \mu_{i} \neq \mu_{j}$ some $i \neq j$ The one-way ANOVA results are shown in Table $5$ Table $5$ Source of variation Sum of squares ($SS$) Degrees of freedom ($df$) Mean square ($MS$) F Factor (Between) 36,648,561 $5 – 1 = 4$ $\frac{36,648,561}{4}=9,162,140$ $\frac{9,162,140}{2,044,672.6}=4.4810$ Error (Within) 20,446,726 $15 – 5 = 10$ $\frac{20,446,726}{10}=2,044,672.6$ Total 57,095,287 $15 – 1 = 14$ Distribution for the test: $F_{4,10}$ $df(num) = 5 – 1 = 4$ $df(denom) = 15 – 5 = 10$ Test statistic: $F = 4.4810$ Probability Statement: $p\text{-value }= P(F > 4.481) = 0.0248.$ Compare $\bf{\alpha}$ and the $\bf p$-value: $\alpha = 0.05$, $p\text{-value }= 0.0248$ Make a decision: Since $\alpha > p$-value, we cannot accept $H_0$. Conclusion: At the 5% significance level, we have reasonably strong evidence that differences in mean yields for slicing tomato plants grown under different mulching conditions are unlikely to be due to chance alone. We may conclude that at least some of mulches led to different mean yields. Exercise 12.3 MRSA, or Staphylococcus aureus, can cause a serious bacterial infections in hospital patients. Table $6$ shows various colony counts from different patients who may or may not have MRSA. The data from the table is plotted in FIgure $2$. Table $6$ Conc = 0.6 Conc = 0.8 Conc = 1.0 Conc = 1.2 Conc = 1.4 9 16 22 30 27 66 93 147 199 168 98 82 120 148 132 Plot of the data for the different concentrations: Test whether the mean number of colonies are the same or are different. Construct the ANOVA table, find the p-value, and state your conclusion. Use a 5% significance level. Example 12.4 Four sororities took a random sample of sisters regarding their grade means for the past term. The results are shown in Table $7$. Table $7$: Mean grades for four sororities Sorority 1 Sorority 2 Sorority 3 Sorority 4 2.17 2.63 2.63 3.79 1.85 1.77 3.78 3.45 2.83 3.25 4.00 3.08 1.69 1.86 2.55 2.26 3.33 2.21 2.45 3.18 Using a significance level of 1%, is there a difference in mean grades among the sororities? Answer Let $\mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}$ be the population means of the sororities. Remember that the null hypothesis claims that the sorority groups are from the same normal distribution. The alternate hypothesis says that at least two of the sorority groups come from populations with different normal distributions. Notice that the four sample sizes are each five. Note: This is an example of a balanced design, because each factor (i.e., sorority) has the same number of observations. $H_{0}: \mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}$ $H_a$: Not all of the means $\mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}$ are equal. Distribution for the test: $F_{3,16}$ where $k = 4$ groups and $n = 20$ samples in total $df(num)= k – 1 = 4 – 1 = 3$ $df(denom) = n – k = 20 – 4 = 16$ Calculate the test statistic: $F = 2.23$ Graph: Probability statement: $p\text{-value }= P(F > 2.23) = 0.1241$ Compare $\bf{\alpha}$ and the $\bf p$-value: $\alpha = 0.01$ $p\text{-value }= 0.1241$ $\alpha < p$-value Make a decision: Since $\alpha < p$-value, you cannot reject $H_0$. Conclusion: There is not sufficient evidence to conclude that there is a difference among the mean grades for the sororities. Exercise 12.4 Four sports teams took a random sample of players regarding their GPAs for the last year. The results are shown in Table $8$. 
Table $8$ GPAs for four sports teams Basketball Baseball Hockey Lacrosse 3.6 2.1 4.0 2.0 2.9 2.6 2.0 3.6 2.5 3.9 2.6 3.9 3.3 3.1 3.2 2.7 3.8 3.4 3.2 2.5 Use a significance level of 5%, and determine if there is a difference in GPA among the teams. Example 12.5 A fourth grade class is studying the environment. One of the assignments is to grow bean plants in different soils. Tommy chose to grow his bean plants in soil found outside his classroom mixed with dryer lint. Tara chose to grow her bean plants in potting soil bought at the local nursery. Nick chose to grow his bean plants in soil from his mother's garden. No chemicals were used on the plants, only water. They were grown inside the classroom next to a large window. Each child grew five plants. At the end of the growing period, each plant was measured, producing the data (in inches) in Table $9$. Tommy's plants Tara's plants Nick's plants 24 25 23 21 31 27 23 23 22 30 20 30 23 28 20 Table $9$ Does it appear that the three media in which the bean plants were grown produce the same mean height? Test at a 3% level of significance. Answer This time, we will perform the calculations that lead to the $F$ statistic. Notice that each group has the same number of plants, so we will use the formula $F=\frac{n \cdot s_{\overline{x}}^{2}}{s^{2}_{pooled}}$. First, calculate the sample mean and sample variance of each group. Tommy's plants Tara's plants Nick's plants Sample mean 24.2 25.4 24.4 Sample variance 11.7 18.3 16.3 Table $10$ Next, calculate the variance of the three group means (calculate the variance of 24.2, 25.4, and 24.4). Variance of the group means = 0.413 = $s_{\overline{x}}^{2}$ Then $M S_{between}=n s_{\overline{x}}^{2}=(5)(0.413)$ where $n = 5$ is the sample size (number of plants each child grew). Calculate the mean of the three sample variances (calculate the mean of 11.7, 18.3, and 16.3). Mean of the sample variances = 15.433 = $s^{2}_{pooled}$ Then $M S_{\text {within}}=s^{2}_{pooled}=15.433$. The $F$ statistic (or $F$ ratio) is $F=\frac{M S_{\text { between }}}{M S_{\text { within }}}=\frac{n s_{\overline{x}}^{2}}{s^{2}_{pooled}}=\frac{(5)(0.413)}{15.433}=0.134$ The $df$s for the numerator = the number of groups $– 1 = 3 – 1 = 2$. The $df$s for the denominator = the total number of samples – the number of groups $= 15 – 3 = 12$. The distribution for the test is $F_{2,12}$ and the $F$ statistic is $F = 0.134$ The $p$-value is $P(F > 0.134) = 0.8759$. Decision: Since $\alpha = 0.03$ and the $p\text{-value }= 0.8759$, you cannot reject $H_0$. (Why?) Conclusion: With a 3% level of significance, from the sample data, the evidence is not sufficient to conclude that the mean heights of the bean plants are different. Notation The notation for the $F$ distribution is $F \sim F_{d f(n u m), d f(d e n o m)}$ where $df(num) = df_{between}$ and $df(denom) = df_{within}$. The mean for the $F$ distribution is $\mu=\frac{df(\text {denom})}{df(\text {denom})-2}$, provided $df(denom) > 2$.
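The hand calculations in Example 12.2 can be verified with software. The sketch below (Python/scipy, shown only as one possible tool) reproduces the sums of squares and the $F$ statistic for the three diet plans:

```python
import numpy as np
from scipy.stats import f_oneway

# Weight losses for the three diet plans in Example 12.2
plan1 = [5, 4.5, 4, 3]
plan2 = [3.5, 7, 4.5]
plan3 = [8, 4, 3.5]

F, p_value = f_oneway(plan1, plan2, plan3)
print(f"F = {F:.4f}, p-value = {p_value:.4f}")   # F matches the 0.3769 in the ANOVA table

# Rebuild the sums of squares from the definitions used in the text
all_data = np.concatenate([plan1, plan2, plan3])
n = all_data.size
ss_total = (all_data ** 2).sum() - all_data.sum() ** 2 / n
ss_between = sum(np.sum(g) ** 2 / len(g) for g in (plan1, plan2, plan3)) - all_data.sum() ** 2 / n
ss_within = ss_total - ss_between
print(round(ss_between, 4), round(ss_within, 4), round(ss_total, 1))   # 2.2458, 20.8542, 23.1
```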
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/12%3A_F_Distribution_and_One-Way_ANOVA/12.03%3A_The_F_Distribution_and_the_F-Ratio.txt
Here are some facts about the $\bf F$ distribution. 1. The curve is not symmetrical but skewed to the right. 2. There is a different curve for each set of degrees of freedom. 3. The $F$ statistic is greater than or equal to zero. 4. As the degrees of freedom for the numerator and for the denominator get larger, the curve approximates the normal as can be seen in the two figures below. Figure (b) with more degrees of freedom is more closely approaching the normal distribution, but remember that the $F$ cannot ever be less than zero so the distribution does not have a tail that goes to infinity on the left as the normal distribution does. 5. Other uses for the $F$ distribution include comparing two variances and two-way Analysis of Variance. Two-Way Analysis is beyond the scope of this chapter. 12.05: Chapter Formula Review 12.1 Test of Two Variances $H_{0} : \frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}=\delta_{0}\nonumber$ $H_{a} : \frac{\sigma_{1}^{2}}{\sigma_{2}^{2}} \neq \delta_{0}\nonumber$ if $\delta_{0}=1$ then $H_{0} : \sigma_{1}^{2}=\sigma_{2}^{2}\nonumber$ $H_{a} : \sigma_{1}^{2} \neq \sigma_{2}\nonumber$ Test statistic is : $F_{c}=\frac{S_{1}^{2}}{S_{2}^{2}}\nonumber$ 12.3 The F Distribution and the F-Ratio $S S_{\mathrm{between}}=\sum\left[\frac{\left(s_{j}\right)^{2}}{n_{j}}\right]-\frac{\left(\sum s_{j}\right)^{2}}{n}$ $S S_{\mathrm{total}}=\sum x^{2}-\frac{\left(\sum x\right)^{2}}{n}$ $S S_{\text {within}}=S S_{\text {total}}-S S_{\text {between}}$ $d f_{\mathrm{between}}=d f(n u m)=k-1$ $d f_{\text {within}}=d f(\text {denom})=n-k$ $M S_{\text {between}}=\frac{S S_{\text {between}}}{d f_{\text {between}}}$ $M S_{\text {within}}=\frac{S S_{\text {within}}}{d f_{\text {within}}}$ $F=\frac{M S_{\text {between}}}{M S_{\text {within}}}$ • $k$ = the number of groups • $n_j$ = the size of the jth group • $s_j$ = the sum of the values in the jth group • $n$ = the total number of all values (observations) combined • $x$ = one value (one observation) from the data • $s_{\overline{x}}^{2}$ = the variance of the sample means • $s^2_{pooled}$ = the mean of the sample variances (pooled variance) 12.06: Chapter Homework 12.1 Test of Two Variances 55. Three students, Linda, Tuan, and Javier, are given five laboratory rats each for a nutritional experiment. Each rat’s weight is recorded in grams. Linda feeds her rats Formula A, Tuan feeds his rats Formula B, and Javier feeds his rats Formula C. At the end of a specified time period, each rat is weighed again and the net gain in grams is recorded. Linda's ratsTuan's ratsJavier's rats 43.547.051.2 39.440.540.9 41.338.937.9 46.046.345.0 38.244.248.6 Table $18$ Determine whether or not the variance in weight gain is statistically the same among Javier’s and Linda’s rats. Test at a significance level of 10%. 56. A grassroots group opposed to a proposed increase in the gas tax claimed that the increase would hurt working-class people the most, since they commute the farthest to work. Suppose that the group randomly surveyed 24 individuals and asked them their daily one-way commuting mileage. The results are as follows. Working-classProfessional (middle incomes)Professional (wealthy) 17.816.58.5 26.717.46.3 49.422.04.6 9.47.412.6 65.49.411.0 47.12.128.6 19.56.415.4 51.213.99.3 Table $19$ Determine whether or not the variance in mileage driven is statistically the same among the working class and professional (middle income) groups. Use a 5% significance level. Use the following information to answer the next two exercises. 
The following table lists the number of pages in four different types of magazines. Home decoratingNewsHealthComputer 1728782104 28694153136 1631238798 205106103207 19710196146 Table $20$ 57. Which two magazine types do you think have the same variance in length? 58. Which two magazine types do you think have different variances in length? 59. Is the variance for the amount of money, in dollars, that shoppers spend on Saturdays at the mall the same as the variance for the amount of money that shoppers spend on Sundays at the mall? Suppose that the Table $21$ shows the results of a study. SaturdaySundaySaturdaySunday 754462137 1858082 1506112439 941950127 629931141 736011873 89 Table 12.21 60. Are the variances for incomes on the East Coast and the West Coast the same? Suppose that Table $22$ shows the results of a study. Income is shown in thousands of dollars. Assume that both distributions are normal. Use a level of significance of 0.05. EastWest 3871 47126 3042 8251 7544 5290 11588 67 Table $22$ 61. Thirty men in college were taught a method of finger tapping. They were randomly assigned to three groups of ten, with each receiving one of three doses of caffeine: 0 mg, 100 mg, 200 mg. This is approximately the amount in no, one, or two cups of coffee. Two hours after ingesting the caffeine, the men had the rate of finger tapping per minute recorded. The experiment was double blind, so neither the recorders nor the students knew which group they were in. Does caffeine affect the rate of tapping, and if so how? Here are the data: 0 mg100 mg200 mg0 mg100 mg200 mg 242248246245246248 244245250248247252 247248248248250250 242247246244246248 246243245242244250 Table 12.23 62. King Manuel I, Komnenus ruled the Byzantine Empire from Constantinople (Istanbul) during the years 1145 to 1180 A.D. The empire was very powerful during his reign, but declined significantly afterwards. Coins minted during his era were found in Cyprus, an island in the eastern Mediterranean Sea. Nine coins were from his first coinage, seven from the second, four from the third, and seven from a fourth. These spanned most of his reign. We have data on the silver content of the coins: First coinageSecond coinageThird coinageFourth coinage 5.96.94.95.3 6.89.05.55.6 6.46.64.65.5 7.08.14.55.1 6.69.3 6.2 7.79.2 5.8 7.28.6 5.8 6.9 6.2 Table $24$ Did the silver content of the coins change over the course of Manuel’s reign? Here are the means and variances of each coinage. The data are unbalanced. FirstSecondThirdFourth Mean6.74448.24294.8755.6143 Variance0.29531.20950.20250.1314 Table 12.25 63. The American League and the National League of Major League Baseball are each divided into three divisions: East, Central, and West. Many years, fans talk about some divisions being stronger (having better teams) than other divisions. This may have consequences for the postseason. For instance, in 2012 Tampa Bay won 90 games and did not play in the postseason, while Detroit won only 88 and did play in the postseason. This may have been an oddity, but is there good evidence that in the 2012 season, the American League divisions were significantly different in overall records? Use the following data to test whether the mean number of wins per team in the three American League divisions were the same or not. Note that the data are not balanced, as two divisions had five teams, while one had only four. 
DivisionTeamWins EastNY Yankees95 EastBaltimore93 EastTampa Bay90 EastToronto73 EastBoston69 Table $26$ DivisionTeamWins CentralDetroit88 CentralChicago Sox85 CentralKansas City72 CentralCleveland68 CentralMinnesota66 Table $27$ DivisionTeamWins WestOakland94 WestTexas93 WestLA Angels89 WestSeattle75 Table $28$ 12.2 One-Way ANOVA 64. Three different traffic routes are tested for mean driving time. The entries in the Table $29$ are the driving times in minutes on the three different routes. Route 1Route 2Route 3 302716 322941 272822 353631 Table $29$ State $SS_{between}$, $SS_{within}$, and the $F$ statistic. 65. Suppose a group is interested in determining whether teenagers obtain their drivers licenses at approximately the same average age across the country. Suppose that the following data are randomly collected from five teenagers in each region of the country. The numbers represent the age at which teenagers obtained their drivers licenses. NortheastSouthWestCentralEast 16.316.916.416.217.1 16.116.516.516.617.2 16.416.416.616.516.6 16.516.216.116.416.8 $\overline x$=________________________________________ $s^2=$________________________________________ Table $30$ State the hypotheses. $H_0$: ____________ $H_a$: ____________ 12.3 The F Distribution and the F-Ratio Use the following information to answer the next three exercises. Suppose a group is interested in determining whether teenagers obtain their drivers licenses at approximately the same average age across the country. Suppose that the following data are randomly collected from five teenagers in each region of the country. The numbers represent the age at which teenagers obtained their drivers licenses. NortheastSouthWestCentralEast 16.316.916.416.217.1 16.116.516.516.617.2 16.416.416.616.516.6 16.516.216.116.416.8 $\overline x$=________________________________________ $s^2=$________________________________________ Table $31$ $H_{0} : \mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}=\mu_{5}$ $H_a$: At least any two of the group means $\mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}=\mu_{5}$ are not equal. 66. degrees of freedom – numerator: $df(num)$ = _________ 67. degrees of freedom – denominator: $df(denom)$ = ________ 68. $F$ statistic = ________ 12.4 Facts About the F Distribution 69. Three students, Linda, Tuan, and Javier, are given five laboratory rats each for a nutritional experiment. Each rat's weight is recorded in grams. Linda feeds her rats Formula A, Tuan feeds his rats Formula B, and Javier feeds his rats Formula C. At the end of a specified time period, each rat is weighed again, and the net gain in grams is recorded. Using a significance level of 10%, test the hypothesis that the three formulas produce the same mean weight gain. Linda's ratsTuan's ratsJavier's rats 43.547.051.2 39.440.540.9 41.338.937.9 46.046.345.0 38.244.248.6 Table $32$ Weights of Student Lab Rats 70. A grassroots group opposed to a proposed increase in the gas tax claimed that the increase would hurt working-class people the most, since they commute the farthest to work. Suppose that the group randomly surveyed 24 individuals and asked them their daily one-way commuting mileage. The results are in Table $33$. Using a 5% significance level, test the hypothesis that the three mean commuting mileages are the same. Working-classProfessional (middle incomes)Professional (wealthy) 17.816.58.5 26.717.46.3 49.422.04.6 9.47.412.6 65.49.411.0 47.12.128.6 19.56.415.4 51.213.99.3 Table 12.33 Use the following information to answer the next two exercises. 
Table $34$ lists the number of pages in four different types of magazines. Home decoratingNewsHealthComputer 1728782104 28694153136 1631238798 205106103207 19710196146 Table $34$ 71. Using a significance level of 5%, test the hypothesis that the four magazine types have the same mean length. 72. Eliminate one magazine type that you now feel has a mean length different from the others. Redo the hypothesis test, testing that the remaining three means are statistically the same. Use a new solution sheet. Based on this test, are the mean lengths for the remaining three magazines statistically the same? 73. A researcher wants to know if the mean times (in minutes) that people watch their favorite news station are the same. Suppose that Table $35$ shows the results of a study. CNNFOXLocal 451572 124337 186856 385060 233151 3522 Table $35$ Assume that all distributions are normal, the four population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05. 74. Are the means for the final exams the same for all statistics class delivery types? Table $36$ shows the scores on final exams from several randomly selected classes that used the different delivery types. OnlineHybridFace-to-Face 728380 847378 778484 808181 81 86 79 82 Table $36$ Assume that all distributions are normal, the four population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05. 75. Are the mean number of times a month a person eats out the same for whites, blacks, Hispanics and Asians? Suppose that TableTable $38$ shows the results of a study. PowderMachine MadeHard Packed 1,2102,1072,846 1,0801,1491,638 1,5378622,019 9411,8701,178 1,5282,233 1,382 Table $38$ Assume that all distributions are normal, the four population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05. 77. Sanjay made identical paper airplanes out of three different weights of paper, light, medium and heavy. He made four airplanes from each of the weights, and launched them himself across the room. Here are the distances (in meters) that his planes flew. Paper type/TrialTrial 1Trial 2Trial 3Trial 4 Heavy5.1 meters3.1 meters4.7 meters5.3 meters Medium4 meters3.5 meters4.5 meters6.1 meters Light3.1 meters3.3 meters2.1 meters1.9 meters Table 12.39 1. An experiment was conducted on the number of eggs (fecundity) laid by female fruit flies. There are three groups of flies. One group was bred to be resistant to DDT (the RS group). Another was bred to be especially susceptible to DDT (SS). Finally there was a control line of non-selected or typical fruitflies (NS). Here are the data: RSSSNSRSSSNS 12.838.435.422.423.122.6 21.632.927.427.529.440.4 14.848.519.320.31634.4 23.120.941.838.720.130.4 34.611.620.326.423.314.9 19.722.337.623.722.951.8 22.630.236.926.122.533.8 29.633.437.329.515.137.9 16.426.728.238.63129.5 20.33923.444.416.942.4 29.312.833.723.216.136.6 14.914.629.223.610.847.4 27.312.241.7 Table $40$ Here is a chart of the three groups: 79. The data shown is the recorded body temperatures of 130 subjects as estimated from available histograms. Traditionally we are taught that the normal human body temperature is 98.6 F. This is not quite correct for everyone. Are the mean temperatures among the four groups different? 
Calculate 95% confidence intervals for the mean body temperature in each group and comment about the confidence intervals. FLFHMLMHFLFHMLMH 96.496.896.396.998.498.698.198.6 96.797.796.79798.798.698.198.6 97.297.897.197.198.798.698.298.7 97.297.997.297.198.798.798.298.8 97.49897.397.498.798.798.298.8 97.69897.497.598.898.898.298.8 97.79897.497.698.898.898.398.9 97.89897.497.798.898.898.499 97.898.197.597.898.898.998.499 97.998.397.697.999.29998.599 97.998.397.69899.39998.599.2 9898.397.898 99.198.699.5 98.298.497.898 99.198.6 98.298.497.898.3 99.298.7 98.298.497.998.4 99.499.1 98.298.49898.4 99.999.3 98.298.59898.6 10099.4 98.298.69898.6 100.8 Table $41$
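For readers who want to check the requested confidence intervals with software, one possible approach is a $t$-based interval for each group's mean. The short Python sketch below assumes numpy and scipy are available; the sample values shown are placeholders rather than a column from the table above, and each group's recorded temperatures would be substituted in turn.

```python
import numpy as np
from scipy import stats

def mean_ci(sample, confidence=0.95):
    """Return a confidence interval for a population mean using the t distribution."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    se = sample.std(ddof=1) / np.sqrt(n)  # standard error of the sample mean
    return stats.t.interval(confidence, n - 1, loc=sample.mean(), scale=se)

# Hypothetical temperatures for one group (placeholders, not the table values)
group = [97.2, 97.8, 98.1, 98.4, 98.6, 98.7, 99.0]
print(mean_ci(group))
```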
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/12%3A_F_Distribution_and_One-Way_ANOVA/12.04%3A_Facts_About_the_F_Distribution.txt
Analysis of Variance also referred to as ANOVA, is a method of testing whether or not the means of three or more populations are equal. The method is applicable if: • The test statistic for analysis of variance is the $F$-ratio. One-Way ANOVA a method of testing whether or not the means of three or more populations are equal; the method is applicable if: • The test statistic for analysis of variance is the $F$-ratio. Variance mean of the squared deviations from the mean; the square of the standard deviation. For a set of data, a deviation can be represented as $x – \overline{x}$ where $x$ is a value of the data and $\overline{x}$ is the sample mean. The sample variance is equal to the sum of the squares of the deviations divided by the difference of the sample size and one. 12.08: Chapter Practice 12.1 Test of Two Variances Use the following information to answer the next two exercises. There are two assumptions that must be true in order to perform an $F$ test of two variances. 1. Name one assumption that must be true. 2. What is the other assumption that must be true? Use the following information to answer the next five exercises. Two coworkers commute from the same building. They are interested in whether or not there is any variation in the time it takes them to drive to work. They each record their times for 20 commutes. The first worker’s times have a variance of 12.1. The second worker’s times have a variance of 16.9. The first worker thinks that he is more consistent with his commute times. Test the claim at the 10% level. Assume that commute times are normally distributed. 3. State the null and alternative hypotheses. 4. What is $s_1$ in this problem? 5. What is $s_2$ in this problem? 6. What is $n$? 7. What is the $F$ statistic? 8. What is the critical value? 9. Is the claim accurate? Use the following information to answer the next four exercises. Two students are interested in whether or not there is variation in their test scores for math class. There are 15 total math tests they have taken so far. The first student’s grades have a standard deviation of 38.1. The second student’s grades have a standard deviation of 22.5. The second student thinks his scores are more consistent. 10. State the null and alternative hypotheses. 11. What is the $F$ Statistic? 12. What is the critical value? 13. At the 5% significance level, do we reject the null hypothesis? Use the following information to answer the next three exercises. Two cyclists are comparing the variances of their overall paces going uphill. Each cyclist records his or her speeds going up 35 hills. The first cyclist has a variance of 23.8 and the second cyclist has a variance of 32.1. The cyclists want to see if their variances are the same or different. Assume that commute times are normally distributed. 14. State the null and alternative hypotheses. 15. What is the $F$ Statistic? 16. At the 5% significance level, what can we say about the cyclists’ variances? 12.2 One-Way ANOVA Use the following information to answer the next five exercises. There are five basic assumptions that must be fulfilled in order to perform a one-way ANOVA test. What are they? 17. Write one assumption. 18. Write another assumption. 19. Write a third assumption. 20. Write a fourth assumption. 12.3 The F Distribution and the F-Ratio Use the following information to answer the next eight exercises. Groups of men from three different areas of the country are to be tested for mean weight. 
The entries in Table $13$ are the weights for the different groups.

Group 1   Group 2   Group 3
216       202       170
198       213       165
240       284       182
187       228       197
176       210       201
Table $13$

21. What is the Sum of Squares Factor?
22. What is the Sum of Squares Error?
23. What is the $df$ for the numerator?
24. What is the $df$ for the denominator?
25. What is the Mean Square Factor?
26. What is the Mean Square Error?
27. What is the $F$ statistic?

Use the following information to answer the next eight exercises. Girls from four different soccer teams are to be tested for mean goals scored per game. The entries in Table $14$ are the goals per game for the different teams.

Team 1   Team 2   Team 3   Team 4
1        2        0        3
2        3        1        4
0        2        1        4
3        4        0        3
2        4        0        2
Table $14$

28. What is $SS_{between}$?
29. What is the $df$ for the numerator?
30. What is $MS_{between}$?
31. What is $SS_{within}$?
32. What is the $df$ for the denominator?
33. What is $MS_{within}$?
34. What is the $F$ statistic?
35. Judging by the $F$ statistic, do you think it is likely or unlikely that you will reject the null hypothesis?

12.4 Facts About the F Distribution
36. An $F$ statistic can have what values?
37. What happens to the curves as the degrees of freedom for the numerator and the denominator get larger?

Use the following information to answer the next seven exercises. Four basketball teams took a random sample of players regarding how high each player can jump (in inches). The results are shown in Table $15$.

Team 1   Team 2   Team 3   Team 4   Team 5
36       32       48       38       41
42       35       50       44       39
51       38       39       46       40
Table $15$

38. What is the $df(num)$?
39. What is the $df(denom)$?
40. What are the Sum of Squares and Mean Squares Factors?
41. What are the Sum of Squares and Mean Squares Errors?
42. What is the $F$ statistic?
43. What is the $p$-value?
44. At the 5% significance level, is there a difference in the mean jump heights among the teams?

Use the following information to answer the next seven exercises. A video game developer is testing a new game on three different groups. Each group represents a different target market for the game. The developer collects scores from a random sample from each group. The results are shown in Table $16$.

Group A   Group B   Group C
101       151       101
108       149       109
98        160       198
107       112       186
111       126       160
Table $16$

45. What is the $df(num)$?
46. What is the $df(denom)$?
47. What are the $SS_{between}$ and $MS_{between}$?
48. What are the $SS_{within}$ and $MS_{within}$?
49. What is the $F$ Statistic?
50. What is the p-value?
51. At the 10% significance level, are the scores among the different groups different?

Use the following information to answer the next three exercises. Suppose a group is interested in determining whether teenagers obtain their drivers licenses at approximately the same average age across the country. Suppose that the following data are randomly collected from five teenagers in each region of the country. The numbers represent the age at which teenagers obtained their drivers licenses.

Northeast   South   West   Central   East
16.3        16.9    16.4   16.2      17.1
16.1        16.5    16.5   16.6      17.2
16.4        16.4    16.6   16.5      16.6
16.5        16.2    16.1   16.4      16.8
$\overline x$=________   $s^2$=________
Table $17$

Enter the data into your calculator or computer.
52. $p$-value = ______
State the decisions and conclusions (in complete sentences) for the following preconceived levels of $\alpha$.
53. $\alpha = 0.05$
a. Decision: ____________________________
b. Conclusion: ____________________________
54. $\alpha = 0.01$
a. Decision: ____________________________
b. Conclusion: ____________________________
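The quantities asked for in these exercises ($SS_{between}$, $SS_{within}$, the mean squares, and the $F$ statistic) can be checked with a short program. The Python sketch below is one way to organize the calculation, assuming numpy and scipy are available; it is shown with the Table $13$ weights as input, and any other set of groups can be substituted.

```python
import numpy as np
from scipy import stats

def one_way_anova(groups):
    """Return SS_between, SS_within, mean squares, F statistic and p-value for a list of samples."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    k = len(groups)            # number of groups
    n = all_data.size          # total number of observations
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    f_stat = ms_between / ms_within
    p_value = stats.f.sf(f_stat, k - 1, n - k)   # area in the right tail of the F distribution
    return ss_between, ss_within, ms_between, ms_within, f_stat, p_value

# Table 13 weights; replace with the groups from whichever exercise you are working
print(one_way_anova([[216, 198, 240, 187, 176],
                     [202, 213, 284, 228, 210],
                     [170, 165, 182, 197, 201]]))
```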
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/12%3A_F_Distribution_and_One-Way_ANOVA/12.07%3A_Chapter_Key_Terms.txt
12.1 Test of Two Variances
“MLB Vs. Division Standings – 2012.” Available online at http://espn.go.com/mlb/standings/_/y...ion/order/true.

12.3 The F Distribution and the F-Ratio
Tomato Data, Marist College School of Science (unpublished student research).

12.4 Facts About the F Distribution
Data from a fourth grade classroom in 1994 in a private K – 12 school in San Jose, CA.
Hand, D.J., F. Daly, A.D. Lunn, K.J. McConway, and E. Ostrowski. A Handbook of Small Datasets: Data for Fruitfly Fecundity. London: Chapman & Hall, 1994.
Hand, D.J., F. Daly, A.D. Lunn, K.J. McConway, and E. Ostrowski. A Handbook of Small Datasets. London: Chapman & Hall, 1994, pg. 50.
Hand, D.J., F. Daly, A.D. Lunn, K.J. McConway, and E. Ostrowski. A Handbook of Small Datasets. London: Chapman & Hall, 1994, pg. 118.
“MLB Standings – 2012.” Available online at http://espn.go.com/mlb/standings/_/year/2012.
Mackowiak, P. A., Wasserman, S. S., and Levine, M. M. (1992), "A Critical Appraisal of 98.6 Degrees F, the Upper Limit of the Normal Body Temperature, and Other Legacies of Carl Reinhold August Wunderlich," Journal of the American Medical Association, 268, 1578-1580.

12.10: Chapter Review

12.1 Test of Two Variances
The $F$ test for the equality of two variances rests heavily on the assumption of normal distributions. The test is unreliable if this assumption is not met. If both distributions are normal, then the ratio of the two sample variances is distributed as an $F$ statistic, with numerator and denominator degrees of freedom that are one less than the sample sizes of the corresponding two groups. A test of two variances hypothesis test determines if two variances are the same. The distribution for the hypothesis test is the $F$ distribution with two different degrees of freedom.
Assumptions:
1. The populations from which the two samples are drawn are approximately normally distributed.
2. The two populations are independent of each other.

12.2 One-Way ANOVA
Analysis of variance extends the comparison of two groups to several, each a level of a categorical variable (factor). Samples from each group are independent, and must be randomly selected from normal populations with equal variances. We test the null hypothesis of equal means of the response in every group versus the alternative hypothesis of one or more group means being different from the others. A one-way ANOVA hypothesis test determines if several population means are equal. The distribution for the test is the $F$ distribution with two different degrees of freedom.
Assumptions:
1. Each population from which a sample is taken is assumed to be normal.
2. All samples are randomly selected and independent.
3. The populations are assumed to have equal standard deviations (or variances).
4. The factor is a categorical variable.
5. The response is a numerical variable.

12.3 The F Distribution and the F-Ratio
Analysis of variance compares the means of a response variable for several groups. ANOVA compares the variation within each group to the variation of the mean of each group. The ratio of these two is the $F$ statistic from an $F$ distribution with (number of groups – 1) as the numerator degrees of freedom and (number of observations – number of groups) as the denominator degrees of freedom. These statistics are summarized in the ANOVA table.

12.4 Facts About the $\bf F$ Distribution
When the data have unequal group sizes (unbalanced data), then techniques from Figure $3$ need to be used for hand calculations. In the case of balanced data (the groups are the same size) however, simplified calculations based on group means and variances may be used. In practice, of course, software is usually employed in the analysis. As in any analysis, graphs of various sorts should be used in conjunction with numerical techniques. Always look at your data!

12.11: Chapter Solution (Practice Homework)
While there are differences in spread, it is not unreasonable to use ANOVA techniques. 
Here is the completed ANOVA table:

Source of variation   Sum of squares ($SS$)   Degrees of freedom ($df$)   Mean square ($MS$)   $F$
Factor (Between)      $37.748$                $4 – 1 = 3$                 $12.5825$            $26.272$
Error (Within)        $11.015$                $27 – 4 = 23$               $0.4789$
Total                 $48.763$                $27 – 1 = 26$
Table $42$

Table $43$

$P(F > 1.5521) = 0.2548$

Since the p-value is so large, there is not good evidence against the null hypothesis of equal means. We cannot reject the null hypothesis. Thus, for 2012, we do not have any good evidence of a significant difference in mean number of wins between the divisions of the American League.

64. $SS_{between} = 26$
$SS_{within} = 441$
$F = 0.2653$

67. $df(denom) = 15$

69. 1.

72. 1.

74. 1.

76. 1.

78. The data appear normally distributed from the chart and of similar spread. There do not appear to be any serious outliers, so we may proceed with our ANOVA calculations, to see if we have good evidence of a difference between the three groups. Define $\mu_{1}, \mu_{2}, \mu_{3}$ as the population mean number of eggs laid by the three groups of fruit flies. $F$ statistic = 8.6657; $p$-value = 0.0004
Decision: Since the $p$-value is less than the level of significance of 0.01, we reject the null hypothesis.
Conclusion: We have good evidence that the average numbers of eggs laid during the first 14 days of life for these three strains of fruitflies are different.
Interestingly, if you perform a two sample $t$-test to compare the RS and NS groups they are significantly different ($p = 0.0013$). Similarly, SS and NS are significantly different ($p = 0.0006$). However, the two selected groups, RS and SS, are not significantly different ($p = 0.5176$). Thus we appear to have good evidence that selection either for resistance or for susceptibility involves a reduced rate of egg production (for these specific strains) as compared to flies that were not selected for resistance or susceptibility to DDT. Here, genetic selection has apparently involved a loss of fecundity.
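Results of this kind are normally produced with software rather than by hand. The following Python sketch, assuming scipy is available, shows the general pattern for the fruit fly solution above; the abbreviated samples are placeholders standing in for the full RS, SS and NS columns of Table $40$, which would be used to reproduce the reported values.

```python
from scipy import stats

# Abbreviated placeholder samples; the full RS, SS and NS columns from Table 40 go here.
rs = [12.8, 21.6, 14.8, 23.1, 34.6, 19.7, 22.6, 29.6, 16.4, 20.3]
ss = [38.4, 32.9, 48.5, 20.9, 11.6, 22.3, 30.2, 33.4, 26.7, 39.0]
ns = [35.4, 27.4, 19.3, 41.8, 20.3, 37.6, 36.9, 37.3, 28.2, 23.4]

# One-way ANOVA across the three strains
f_stat, p_value = stats.f_oneway(rs, ss, ns)
print(f_stat, p_value)

# Pairwise comparisons like those discussed in the solution
print(stats.ttest_ind(rs, ns))
print(stats.ttest_ind(ss, ns))
print(stats.ttest_ind(rs, ss))
```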
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/12%3A_F_Distribution_and_One-Way_ANOVA/12.09%3A_Chapter_Reference.txt
Professionals often want to know how two or more numeric variables are related. For example, is there a relationship between the grade on the second math exam a student takes and the grade on the final exam? If there is a relationship, what is the relationship and how strong is it? In another example, your income may be determined by your education, your profession, your years of experience, and your ability, or your gender or color. The amount you pay a repair person for labor is often determined by an initial amount plus an hourly fee. These examples may or may not be tied to a model, meaning that some theory suggested that a relationship exists. This link between a cause and an effect, often referred to as a model, is the foundation of the scientific method and is the core of how we determine what we believe about how the world works. Beginning with a theory and developing a model of the theoretical relationship should result in a prediction, what we have called a hypothesis earlier. Now the hypothesis concerns a full set of relationships. As an example, in Economics the model of consumer choice is based upon assumptions concerning human behavior: a desire to maximize something called utility, knowledge about the benefits of one product over another, likes and dislikes, referred to generally as preferences, and so on. These combined to give us the demand curve. From that we have the prediction that as prices rise the quantity demanded will fall. Economics has models concerning the relationship between what prices are charged for goods and the market structure in which the firm operates, monopoly verse competition, for example. Models for who would be most likely to be chosen for an on-the-job training position, the impacts of Federal Reserve policy changes and the growth of the economy and on and on. Models are not unique to Economics, even within the social sciences. In political science, for example, there are models that predict behavior of bureaucrats to various changes in circumstances based upon assumptions of the goals of the bureaucrats. There are models of political behavior dealing with strategic decision making both for international relations and domestic politics. The so-called hard sciences are, of course, the source of the scientific method as they tried through the centuries to explain the confusing world around us. Some early models today make us laugh; spontaneous generation of life for example. These early models are seen today as not much more than the foundational myths we developed to help us bring some sense of order to what seemed chaos. The foundation of all model building is the perhaps the arrogant statement that we know what caused the result we see. This is embodied in the simple mathematical statement of the functional form that \(y = f(x)\). The response, \(Y\), is caused by the stimulus, \(X\). Every model will eventually come to this final place and it will be here that the theory will live or die. Will the data support this hypothesis? If so then fine, we shall believe this version of the world until a better theory comes to replace it. This is the process by which we moved from flat earth to round earth, from earth-center solar system to sun-center solar system, and on and on. The scientific method does not confirm a theory for all time: it does not prove “truth”. All theories are subject to review and may be overturned. These are lessons we learned as we first developed the concept of the hypothesis test earlier in this book. 
Here, as we begin this section, these concepts deserve review because the tool we will develop here is the cornerstone of the scientific method and the stakes are higher. Full theories will rise or fall because of this statistical tool: regression and its more advanced versions, called econometrics. In this chapter we will begin with correlation, the investigation of relationships among variables that may or may not be founded on a cause and effect model. The variables simply move in the same, or opposite, direction; that is to say, they do not move randomly. Correlation provides a measure of the degree to which this is true. From there we develop a tool to measure cause and effect relationships: regression analysis. We will be able to formulate models and tests to determine if they are statistically sound. If they are found to be so, then we can use them to make predictions: if, as a matter of policy, we changed the value of this variable, what would happen to this other variable? If we imposed a gasoline tax of 50 cents per gallon, how would that affect carbon emissions, sales of Hummers and hybrids, use of mass transit, and so on? The ability to provide answers to these types of questions is the value of regression, both as a tool to help us understand our world and as a tool to make thoughtful policy decisions.
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/13%3A_Linear_Regression_and_Correlation/13.00%3A_Introduction_to_Linear_Regression_and_Correlation.txt
As we begin this section we note that the type of data we will be working with has changed. Perhaps unnoticed, all the data we have been using is for a single variable. It may be from two samples, but it is still a univariate variable. The type of data described in the examples above and for any model of cause and effect is bivariate data — "bi" for two variables. In reality, statisticians use multivariate data, meaning many variables. For our work we can classify data into three broad categories, time series data, cross-section data, and panel data. We met the first two very early on. Time series data measures a single unit of observation; say a person, or a company or a country, as time passes. What are measured will be at least two characteristics, say the person’s income, the quantity of a particular good they buy and the price they paid. This would be three pieces of information in one time period, say 1985. If we followed that person across time we would have those same pieces of information for 1985,1986, 1987, etc. This would constitute a times series data set. If we did this for 10 years we would have 30 pieces of information concerning this person’s consumption habits of this good for the past decade and we would know their income and the price they paid. A second type of data set is for cross-section data. Here the variation is not across time for a single unit of observation, but across units of observation during one point in time. For a particular period of time we would gather the price paid, amount purchased, and income of many individual people. A third type of data set is panel data. Here a panel of units of observation is followed across time. If we take our example from above we might follow 500 people, the unit of observation, through time, ten years, and observe their income, price paid and quantity of the good purchased. If we had 500 people and data for ten years for price, income and quantity purchased we would have 15,000 pieces of information. These types of data sets are very expensive to construct and maintain. They do, however, provide a tremendous amount of information that can be used to answer very important questions. As an example, what is the effect on the labor force participation rate of women as their family of origin, mother and father, age? Or are there differential effects on health outcomes depending upon the age at which a person started smoking? Only panel data can give answers to these and related questions because we must follow multiple people across time. The work we do here however will not be fully appropriate for data sets such as these. Beginning with a set of data with two independent variables we ask the question: are these related? One way to visually answer this question is to create a scatter plot of the data. We could not do that before when we were doing descriptive statistics because those data were univariate. Now we have bivariate data so we can plot in two dimensions. Three dimensions are possible on a flat piece of paper, but become very hard to fully conceptualize. Of course, more than three dimensions cannot be graphed although the relationships can be measured mathematically. To provide mathematical precision to the measurement of what we see we use the correlation coefficient. The correlation tells us something about the co-movement of two variables, but nothing about why this movement occurred. Formally, correlation analysis assumes that both variables being analyzed are independent variables. 
This means that neither one causes the movement in the other. Further, it means that neither variable is dependent on the other, or for that matter, on any other variable. Even with these limitations, correlation analysis can yield some interesting results.

The correlation coefficient, $\rho$ (pronounced rho), is the mathematical statistic for a population that provides us with a measurement of the strength of a linear relationship between the two variables. For a sample of data, the statistic, $r$, developed by Karl Pearson in the early 1900s, is an estimate of the population correlation and is defined mathematically as:

$r=\frac{\frac{1}{n-1} \Sigma\left(X_{1 i}-\overline{X}_{1}\right)\left(X_{2 i}-\overline{X}_{2}\right)}{s_{x_{1}} s_{x_{2}}}\nonumber$

OR

$r=\frac{\Sigma X_{1 i} X_{2 i}-n \overline{X}_{1} \overline{X}_{2}}{\sqrt{\left(\Sigma X_{1 i}^{2}-n \overline{X}_{1}^{2}\right)\left(\Sigma X_{2 i}^{2}-n \overline{X}_{2}^{2}\right)}}\nonumber$

where $s_{x_1}$ and $s_{x_2}$ are the standard deviations of the two independent variables $X_1$ and $X_2$, $\overline{X}_{1}$ and $\overline{X}_{2}$ are the sample means of the two variables, and $X_{1i}$ and $X_{2i}$ are the individual observations of $X_1$ and $X_2$. The correlation coefficient $r$ ranges in value from -1 to 1. The second equivalent formula is often used because it may be computationally easier. As scary as these formulas look they are really just the ratio of the covariance between the two variables and the product of their two standard deviations. That is to say, it is a measure of relative variances.

In practice all correlation and regression analysis will be provided through computer software designed for these purposes. Anything more than perhaps half a dozen observations creates immense computational problems. It was because of this fact that correlation, and even more so regression, were not widely used research tools until after the advent of “computing machines”. Now the computing power required to analyze data using regression packages is deemed almost trivial by comparison to just a decade ago.

To visualize any linear relationship that may exist, review a scatter diagram of the standardized data. Figure $2$ presents several scatter diagrams and the calculated value of r. In panels (a) and (b) notice that the data generally trend together, (a) upward and (b) downward. Panel (a) is an example of a positive correlation and panel (b) is an example of a negative correlation, or relationship. The sign of the correlation coefficient tells us if the relationship is a positive or negative (inverse) one. If all the values of $X_1$ and $X_2$ are on a straight line the correlation coefficient will be either $1$ or $-1$ depending on whether the line has a positive or negative slope and the closer to one or negative one the stronger the relationship between the two variables. BUT ALWAYS REMEMBER THAT THE CORRELATION COEFFICIENT DOES NOT TELL US THE SLOPE. Remember, all the correlation coefficient tells us is whether or not the data are linearly related. In panel (d) the variables obviously have some type of very specific relationship to each other, but the correlation coefficient is zero, indicating no linear relationship exists.

If you suspect a linear relationship between $X_1$ and $X_2$ then $r$ can measure how strong the linear relationship is.

What the VALUE of $r$ tells us:
• The value of $r$ is always between –1 and +1: $-1 \leq r \leq 1$.
• The size of the correlation $r$ indicates the strength of the linear relationship. Values of $r$ close to –1 or to +1 indicate a stronger linear relationship; values of $r$ close to zero indicate a weak linear relationship.

What the SIGN of $r$ tells us:
• A positive value of $r$ means that when $X_1$ increases, $X_2$ tends to increase, and when $X_1$ decreases, $X_2$ tends to decrease (positive correlation).
• A negative value of $r$ means that when $X_1$ increases, $X_2$ tends to decrease, and when $X_1$ decreases, $X_2$ tends to increase (negative correlation).
• Strong correlation does not suggest that $X_1$ causes $X_2$ or that $X_2$ causes $X_1$. We say "correlation does not imply causation."
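As noted above, software normally does this arithmetic. A minimal Python sketch, assuming numpy is available and using made-up paired observations, shows that the defining formula and a library routine give the same value of $r$.

```python
import numpy as np

# Hypothetical paired observations (x1, x2); any two numeric columns of equal length work
x1 = np.array([2.0, 4.0, 5.0, 7.0, 9.0, 11.0])
x2 = np.array([3.1, 5.0, 6.2, 7.8, 10.1, 12.5])
n = x1.size

# Direct use of the first formula: sample covariance over the product of standard deviations
cov = ((x1 - x1.mean()) * (x2 - x2.mean())).sum() / (n - 1)
r_formula = cov / (x1.std(ddof=1) * x2.std(ddof=1))

# Library check
r_library = np.corrcoef(x1, x2)[0, 1]
print(r_formula, r_library)   # the two values should agree
```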
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/13%3A_Linear_Regression_and_Correlation/13.01%3A_The_Correlation_Coefficient_r.txt
The correlation coefficient, $r$, tells us about the strength and direction of the linear relationship between $X_1$ and $X_2$. The sample data are used to compute $r$, the correlation coefficient for the sample. If we had data for the entire population, we could find the population correlation coefficient. But because we have only sample data, we cannot calculate the population correlation coefficient. The sample correlation coefficient, $r$, is our estimate of the unknown population correlation coefficient.

• The hypothesis test lets us decide whether the value of the population correlation coefficient $\rho$ is "close to zero" or "significantly different from zero". We decide this based on the sample correlation coefficient $r$ and the sample size $n$. If the test concludes that the correlation coefficient is significantly different from zero, we say that the correlation coefficient is "significant."

• What the Hypotheses Mean in Words: The null hypothesis $H_0: \rho = 0$ says that the population correlation coefficient is not significantly different from zero; there is not a significant linear relationship (correlation) between $X_1$ and $X_2$ in the population. The alternative hypothesis $H_a: \rho \neq 0$ says that the population correlation coefficient is significantly different from zero; there is a significant linear relationship between $X_1$ and $X_2$ in the population.

• Drawing a Conclusion: There are two methods of making the decision concerning the hypothesis. The test statistic to test this hypothesis is:

$t_{c}=\frac{r}{\sqrt{\left(1-r^{2}\right) /(n-2)}}\nonumber$

$t_{c}=\frac{r \sqrt{n-2}}{\sqrt{1-r^{2}}}\nonumber$

Where the second formula is an equivalent form of the test statistic, $n$ is the sample size and the degrees of freedom are $n-2$. This is a $t$-statistic and operates in the same way as other $t$ tests. Calculate the $t$-value and compare that with the critical value from the $t$-table at the appropriate degrees of freedom and the level of confidence you wish to maintain. If the calculated value is in the tail, then we cannot accept the null hypothesis that there is no linear relationship between these two independent random variables. If the calculated $t$-value is NOT in the tail, then we cannot reject the null hypothesis that there is no linear relationship between the two variables.

A quick shorthand way to test correlations is the relationship between the sample size and the correlation. If:

$|r| \geq \frac{2}{\sqrt{n}}\nonumber$

then this implies that the correlation between the two variables demonstrates that a linear relationship exists and is statistically significant at approximately the 0.05 level of significance. As the formula indicates, there is an inverse relationship between the sample size and the required correlation for significance of a linear relationship. With only 10 observations, the required correlation for significance is 0.6325; for 30 observations the required correlation for significance decreases to 0.3651; and at 100 observations the required level is only 0.2000.

Correlations may be helpful in visualizing the data, but are not appropriately used to "explain" a relationship between two variables. Perhaps no single statistic is more misused than the correlation coefficient. Citing correlations between health conditions and everything from place of residence to eye color has the effect of implying a cause and effect relationship. This simply cannot be accomplished with a correlation coefficient. The correlation coefficient is, of course, innocent of this misinterpretation. It is the duty of the analyst to use a statistic that is designed to test for cause and effect relationships and report only those results if they are intending to make such a claim. The problem is that passing this more rigorous test is difficult, so lazy and/or unscrupulous "researchers" fall back on correlations when they cannot make their case legitimately. 
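A small Python sketch, assuming scipy is available, illustrates both the $t$ test described above and the $2/\sqrt{n}$ shorthand; the sample correlation and sample size used here are hypothetical.

```python
import numpy as np
from scipy import stats

def correlation_significant(r, n, alpha=0.05):
    """Two-tailed t test of H0: rho = 0 for a sample correlation r based on n pairs."""
    t_c = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)       # test statistic with n - 2 df
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)         # critical value from the t table
    p_value = 2 * stats.t.sf(abs(t_c), df=n - 2)
    return t_c, t_crit, p_value, abs(t_c) > t_crit

# Hypothetical sample correlation and sample size
print(correlation_significant(r=0.45, n=30))

# Quick shorthand check described above
print(abs(0.45) >= 2 / np.sqrt(30))
```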
13.03: Linear Equations

Linear regression for two variables is based on a linear equation with one independent variable. The equation has the form:

$y=a+b x\nonumber$

where $a$ and $b$ are constant numbers.

The variable $\bf x$ is the independent variable, and $\bf y$ is the dependent variable. Another way to think about this equation is a statement of cause and effect. The $X$ variable is the cause and the $Y$ variable is the hypothesized effect. Typically, you choose a value to substitute for the independent variable and then solve for the dependent variable.

Example $1$

The following examples are linear equations.

$y=3+2x$

$y=–0.01+1.2x$

The graph of a linear equation of the form $y = a + bx$ is a straight line. Any line that is not vertical can be described by this equation.

Example $2$

Graph the equation $y = –1 + 2x$.

Exercise $2$

Is the following an example of a linear equation? Why or why not?

Example $3$

Aaron's Word Processing Service (AWPS) does word processing. The rate for services is $32 per hour plus a $31.50 one-time charge. The total cost to a customer depends on the number of hours it takes to complete the job. Find the equation that expresses the total cost in terms of the number of hours required to complete the job.

Answer

Solution 13.3

Let $x$ = the number of hours it takes to get the job done.

Let $y$ = the total cost to the customer.

The $31.50 is a fixed cost. If it takes $x$ hours to complete the job, then $(32)(x)$ is the cost of the word processing only. The total cost is: $y = 31.50 + 32x$
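If it helps to see the cost equation as a small program, the sketch below uses the equation $y = 31.50 + 32x$ from the solution above; the four-hour job in the example call is hypothetical.

```python
def awps_cost(hours):
    """Total cost: $31.50 one-time charge plus $32 for each hour of work."""
    return 31.50 + 32 * hours

print(awps_cost(4))   # total cost of a four-hour job
```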
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/13%3A_Linear_Regression_and_Correlation/13.02%3A_Testing_the_Significance_of_the_Correlation_Coefficient.txt
Regression analysis is a statistical technique that can test the hypothesis that a variable is dependent upon one or more other variables. Further, regression analysis can provide an estimate of the magnitude of the impact of a change in one variable on another. This last feature, of course, is all important in predicting future values. Regression analysis is based upon a functional relationship among variables and further, assumes that the relationship is linear. This linearity assumption is required because, for the most part, the theoretical statistical properties of non-linear estimation are not well worked out yet by the mathematicians and econometricians. This presents us with some difficulties in economic analysis because many of our theoretical models are nonlinear. The marginal cost curve, for example, is decidedly nonlinear as is the total cost function, if we are to believe in the effect of specialization of labor and the Law of Diminishing Marginal Product. There are techniques for overcoming some of these difficulties, exponential and logarithmic transformation of the data for example, but at the outset we must recognize that standard ordinary least squares (OLS) regression analysis will always use a linear function to estimate what might be a nonlinear relationship. The general linear regression model can be stated by the equation: $y_{i}=\beta_{0}+\beta_{1} X_{1 i}+\beta_{2} X_{2 i}+\cdots+\beta_{k} X_{k i}+\varepsilon_{i}\nonumber$ where $\beta_0$ is the intercept, $\beta_i$'s are the slope between $Y$ and the appropriate $X_i$, and $\epsilon$ (pronounced epsilon), is the error term that captures errors in measurement of $Y$ and the effect on $Y$ of any variables missing from the equation that would contribute to explaining variations in $Y$. This equation is the theoretical population equation and therefore uses Greek letters. The equation we will estimate will have the Roman equivalent symbols. This is parallel to how we kept track of the population parameters and sample parameters before. The symbol for the population mean was $\mu$ and for the sample mean $\overline{X}$ and for the population standard deviation was $\sigma$ and for the sample standard deviation was $s$. The equation that will be estimated with a sample of data for two independent variables will thus be: $y_{i}=b_{0}+b_{1} x_{1 i}+b_{2} x_{2 i}+e_{i}\nonumber$ As with our earlier work with probability distributions, this model works only if certain assumptions hold. These are that the $Y$ is normally distributed, the errors are also normally distributed with a mean of zero and a constant standard deviation, and that the error terms are independent of the size of $X$ and independent of each other. Assumptions of the Ordinary Least Squares Regression Model Each of these assumptions needs a bit more explanation. If one of these assumptions fails to be true, then it will have an effect on the quality of the estimates. Some of the failures of these assumptions can be fixed while others result in estimates that quite simply provide no insight into the questions the model is trying to answer or worse, give biased estimates. 1. The independent variables, $x_i$, are all measured without error, and are fixed numbers that are independent of the error term. This assumption is saying in effect that $Y$ is deterministic, the result of a fixed component “$X$” and a random error component “$\epsilon$.” 2. The error term is a random variable with a mean of zero and a constant variance. 
The meaning of this is that the variances of the independent variables are independent of the value of the variable. Consider the relationship between personal income and the quantity of a good purchased as an example of a case where the variance is dependent upon the value of the independent variable, income. It is plausible that as income increases the variation around the amount purchased will also increase simply because of the flexibility provided with higher levels of income. The assumption is for constant variance with respect to the magnitude of the independent variable called homoscedasticity. If the assumption fails, then it is called heteroscedasticity. Figure 13.6 shows the case of homoscedasticity where all three distributions have the same variance around the predicted value of $Y$ regardless of the magnitude of $X$. 3. While the independent variables are all fixed values they are from a probability distribution that is normally distributed. This can be seen in Figure 13.6 by the shape of the distributions placed on the predicted line at the expected value of the relevant value of $Y$. 4. The independent variables are independent of $Y$, but are also assumed to be independent of the other $X$ variables. The model is designed to estimate the effects of independent variables on some dependent variable in accordance with a proposed theory. The case where some or more of the independent variables are correlated is not unusual. There may be no cause and effect relationship among the independent variables, but nevertheless they move together. Take the case of a simple supply curve where quantity supplied is theoretically related to the price of the product and the prices of inputs. There may be multiple inputs that may over time move together from general inflationary pressure. The input prices will therefore violate this assumption of regression analysis. This condition is called multicollinearity, which will be taken up in detail later. 5. The error terms are uncorrelated with each other. This situation arises from an effect on one error term from another error term. While not exclusively a time series problem, it is here that we most often see this case. An $X$ variable in time period one has an effect on the $Y$ variable, but this effect then has an effect in the next time period. This effect gives rise to a relationship among the error terms. This case is called autocorrelation, “self-correlated.” The error terms are now not independent of each other, but rather have their own effect on subsequent error terms. Figure 13.6 does not show all the assumptions of the regression model, but it helps visualize these important ones. This is the general form that is most often called the multiple regression model. So-called "simple" regression analysis has only one independent (right-hand) variable rather than many independent variables. Simple regression is just a special case of multiple regression. There is some value in beginning with simple regression: it is easy to graph in two dimensions, difficult to graph in three dimensions, and impossible to graph in more than three dimensions. Consequently, our graphs will be for the simple regression case. Figure 13.7 presents the regression problem in the form of a scatter plot graph of the data set where it is hypothesized that $Y$ is dependent upon the single independent variable $X$. A basic relationship from Macroeconomic Principles is the consumption function. 
This theoretical relationship states that as a person's income rises, their consumption rises, but by a smaller amount than the rise in income. If $Y$ is consumption and $X$ is income in the equation below Figure 13.7, the regression problem is, first, to establish that this relationship exists, and second, to determine the impact of a change in income on a person's consumption. The parameter $\beta_1$ was called the Marginal Propensity to Consume in Macroeconomics Principles. Each "dot" in Figure 13.7 represents the consumption and income of different individuals at some point in time. This was called cross-section data earlier; observations on variables at one point in time across different people or other units of measurement. This analysis is often done with time series data, which would be the consumption and income of one individual or country at different points in time. For macroeconomic problems it is common to use times series aggregated data for a whole country. For this particular theoretical concept these data are readily available in the annual report of the President’s Council of Economic Advisors. Figure 13.8. Regression analysis is sometimes called "least squares" analysis because the method of determining which line best "fits" the data is to minimize the sum of the squared residuals of a line put through the data. This figure shows the assumed relationship between consumption and income from macroeconomic theory. Here the data are plotted as a scatter plot and an estimated straight line has been drawn. From this graph we can see an error term, $e_1$. Each data point also has an error term. Again, the error term is put into the equation to capture effects on consumption that are not caused by income changes. Such other effects might be a person’s savings or wealth, or periods of unemployment. We will see how by minimizing the sum of these errors we can get an estimate for the slope and intercept of this line. Consider the graph below. The notation has returned to that for the more general model rather than the specific case of the Macroeconomic consumption function in our example. The $\hat{\mathrm{y}}$ is read "$\bf y$ hat" and is the estimated value of $\bf y$. (In Figure 13.8 $\hat{C}$ represents the estimated value of consumption because it is on the estimated line.) It is the value of $y$ obtained using the regression line. $\hat{\mathrm{y}}$ is not generally equal to $y$ from the data. The term $y_{0}-\hat{y}_{0}=e_{0}$ is called the "error" or residual. It is not an error in the sense of a mistake. The error term was put into the estimating equation to capture missing variables and errors in measurement that may have occurred in the dependent variables. The absolute value of a residual measures the vertical distance between the actual value of $y$ and the estimated value of $y$. In other words, it measures the vertical distance between the actual data point and the predicted point on the line as can be seen on the graph at point $X_0$. If the observed data point lies above the line, the residual is positive, and the line underestimates the actual data value for $y$. If the observed data point lies below the line, the residual is negative, and the line overestimates that actual data value for $y$. In the graph, $y_{0}-\hat{y}_{0}=e_{0}$ is the residual for the point shown. Here the point lies above the line and the residual is positive. 
For each data point the residuals, or errors, are calculated $y_{i}-\hat{y}_{i}=e_{i}$ for $i = 1, 2, 3, ..., n$ where $n$ is the sample size. Each $|e|$ is a vertical distance. The sum of the errors squared is the term obviously called Sum of Squared Errors (SSE). Using calculus, you can determine the straight line that has the parameter values of $b_0$ and $b_1$ that minimizes the SSE. When you make the SSE a minimum, you have determined the points that are on the line of best fit. It turns out that the line of best fit has the equation: $\hat{y}=b_{0}+b_{1} x\nonumber$ where $b_{0}=\overline{y}-b_{1} \overline{x}$ and $b_{1}=\frac{\Sigma(x-\overline{x})(y-\overline{y})}{\Sigma(x-\overline{x})^{2}}=\frac{\operatorname{cov}(x, y)}{s_{x}^{2}}$ The sample means of the $x$ values and the $y$ values are $\overline{x}$ and $\overline{y}$, respectively. The best fit line always passes through the point ($\overline{y}$, $\overline{x}$) called the points of means. The slope $b$ can also be written as: $b_{1}=r_{\mathrm{y}, \mathrm{x}}\left(\frac{s_{y}}{s_{x}}\right)\nonumber$ where $s_y$ = the standard deviation of the $y$ values and $s_x$ = the standard deviation of the $x$ values and $r$ is the correlation coefficient between $x$ and $y$. These equations are called the Normal Equations and come from another very important mathematical finding called the Gauss-Markov Theorem without which we could not do regression analysis. The Gauss-Markov Theorem tells us that the estimates we get from using the ordinary least squares (OLS) regression method will result in estimates that have some very important properties. In the Gauss-Markov Theorem it was proved that a least squares line is BLUE, which is, Best, Linear, Unbiased, Estimator. Best is the statistical property that an estimator is the one with the minimum variance. Linear refers to the property of the type of line being estimated. An unbiased estimator is one whose estimating function has an expected mean equal to the mean of the population. (You will remember that the expected value of $\mu_{\overline{x}}$ was equal to the population mean $\mu$ in accordance with the Central Limit Theorem. This is exactly the same concept here). Both Gauss and Markov were giants in the field of mathematics, and Gauss in physics too, in the 18th century and early 19th century. They barely overlapped chronologically and never in geography, but Markov’s work on this theorem was based extensively on the earlier work of Carl Gauss. The extensive applied value of this theorem had to wait until the middle of this last century. Using the OLS method we can now find the estimate of the error variance which is the variance of the squared errors, e2. This is sometimes called the standard error of the estimate. (Grammatically this is probably best said as the estimate of the error’svariance) The formula for the estimate of the error variance is: $s_{e}^{2}=\frac{\Sigma\left(y_{i}-\hat{y}_{i}\right)^{2}}{n-k}=\frac{\Sigma e_{i}^{2}}{n-k}\nonumber$ where $\hat{y}$ is the predicted value of $y$ and $y$ is the observed value, and thus the term $\left(y_{i}-\hat{y}_{i}\right)^{2}$ is the squared errors that are to be minimized to find the estimates of the regression line parameters. This is really just the variance of the error terms and follows our regular variance formula. One important note is that here we are dividing by $(n−k)$, which is the degrees of freedom. 
The degrees of freedom of a regression equation will be the number of observations, $n$, reduced by the number of estimated parameters, which includes the intercept as a parameter. The variance of the errors is fundamental in testing hypotheses for a regression. It tells us just how “tight” the dispersion is about the line. As we will see shortly, the greater the dispersion about the line, meaning the larger the variance of the errors, the less probable that the hypothesized independent variable will be found to have a significant effect on the dependent variable. In short, the theory being tested will more likely fail if the variance of the error term is high. Upon reflection this should not be a surprise. As we tested hypotheses about a mean we observed that large variances reduced the calculated test statistic and thus it failed to reach the tail of the distribution. In those cases, the null hypotheses could not be rejected. If we cannot reject the null hypothesis in a regression problem, we must conclude that the hypothesized independent variable has no effect on the dependent variable. A way to visualize this concept is to draw two scatter plots of $x$ and $y$ data along a predetermined line. The first will have little variance of the errors, meaning that all the data points will move close to the line. Now do the same except the data points will have a large estimate of the error variance, meaning that the data points are scattered widely along the line. Clearly the confidence about a relationship between $x$ and $y$ is effected by this difference between the estimate of the error variance. Testing the Parameters of the Line The whole goal of the regression analysis was to test the hypothesis that the dependent variable, $Y$, was in fact dependent upon the values of the independent variables as asserted by some foundation theory, such as the consumption function example. Looking at the estimated equation under Figure 13.8, we see that this amounts to determining the values of $b_0$ and $b_1$. Notice that again we are using the convention of Greek letters for the population parameters and Roman letters for their estimates. The regression analysis output provided by the computer software will produce an estimate of $b_0$ and $b_1$, and any other $b$'s for other independent variables that were included in the estimated equation. The issue is how good are these estimates? In order to test a hypothesis concerning any estimate, we have found that we need to know the underlying sampling distribution. It should come as no surprise at his stage in the course that the answer is going to be the normal distribution. This can be seen by remembering the assumption that the error term in the population, $\epsilon$, is normally distributed. If the error term is normally distributed and the variance of the estimates of the equation parameters, $b_0$ and $b_1$, are determined by the variance of the error term, it follows that the variances of the parameter estimates are also normally distributed. And indeed this is just the case. We can see this by the creation of the test statistic for the test of hypothesis for the slope parameter, $\beta_1$ in our consumption function equation. To test whether or not $Y$ does indeed depend upon $X$, or in our example, that consumption depends upon income, we need only test the hypothesis that $\beta_1$ equals zero. 
This hypothesis would be stated formally as: $H_{0} : \beta_{1}=0\nonumber$ $H_{a} : \beta_{1} \neq 0\nonumber$ If we cannot reject the null hypothesis, we must conclude that our theory has no validity. If we cannot reject the null hypothesis that $\beta_1 = 0$ then $b_1$, the coefficient of Income, is zero and zero times anything is zero. Therefore the effect of Income on Consumption is zero. There is no relationship as our theory had suggested. Notice that we have set up the presumption, the null hypothesis, as "no relationship". This puts the burden of proof on the alternative hypothesis. In other words, if we are to validate our claim of finding a relationship, we must do so with a level of significance greater than 90, 95, or 99 percent. The status quo is ignorance, no relationship exists, and to be able to make the claim that we have actually added to our body of knowledge we must do so with significant probability of being correct. John Maynard Keynes got it right and thus was born Keynesian economics starting with this basic concept in 1936. The test statistic for this test comes directly from our old friend the standardizing formula: $t_{c}=\frac{b_{1}-\beta_{1}}{S_{b_{1}}}\nonumber$ where $b_1$ is the estimated value of the slope of the regression line, $\beta_1$ is the hypothesized value of beta, in this case zero, and $S_{b_1}$ is the standard deviation of the estimate of $b_1$. In this case we are asking how many standard deviations is the estimated slope away from the hypothesized slope. This is exactly the same question we asked before with respect to a hypothesis about a mean: how many standard deviations is the estimated mean, the sample mean, from the hypothesized mean? The test statistic is written as a student's t distribution, but if the sample size is larger enough so that the degrees of freedom are greater than 30 we may again use the normal distribution. To see why we can use the student's t or normal distribution we have only to look at $S_{b_1}$,the formula for the standard deviation of the estimate of $b_1$: $S_{b_{1}}=\frac{S_{e}^{2}}{\sqrt{\left(x_{i}-\overline{x}\right)^{2}}}\nonumber$ $\text{or}\nonumber$ $S_{b_{1}}=\frac{S_{e}^{2}}{(n-1) S_{x}^{2}}\nonumber$ Where $S_e$ is the estimate of the error variance and $S^2_x$ is the variance of $x$ values of the coefficient of the independent variable being tested. We see that $S_e$, the estimate of the error variance, is part of the computation. Because the estimate of the error variance is based on the assumption of normality of the error terms, we can conclude that the sampling distribution of the $b$'s, the coefficients of our hypothesized regression line, are also normally distributed. One last note concerns the degrees of freedom of the test statistic, $ν=n-k$. Previously we subtracted 1 from the sample size to determine the degrees of freedom in a student's t problem. Here we must subtract one degree of freedom for each parameter estimated in the equation. For the example of the consumption function we lose 2 degrees of freedom, one for $b_0$, the intercept, and one for $b_1$, the slope of the consumption function. The degrees of freedom would be $n - k - 1$, where k is the number of independent variables and the extra one is lost because of the intercept. If we were estimating an equation with three independent variables, we would lose 4 degrees of freedom: three for the independent variables, $k$, and one more for the intercept. 
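A compact Python sketch, assuming numpy and scipy are available and using hypothetical income and consumption observations, shows how the slope estimate, its standard error, and the test statistic $t_c$ fit together; the decision rule applied to $t_c$ is discussed next.

```python
import numpy as np
from scipy import stats

# Hypothetical income (x) and consumption (y) observations
x = np.array([55, 60, 65, 70, 75, 80, 85, 90], dtype=float)
y = np.array([48, 52, 55, 60, 62, 66, 70, 72], dtype=float)

n, k = x.size, 1   # k = number of independent variables
b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()  # cov(x, y) / var(x)
b0 = y.mean() - b1 * x.mean()

residuals = y - (b0 + b1 * x)
s_e2 = (residuals ** 2).sum() / (n - k - 1)           # estimate of the error variance
s_b1 = np.sqrt(s_e2 / ((x - x.mean()) ** 2).sum())    # standard error of the slope estimate

t_c = (b1 - 0) / s_b1                                  # test of H0: beta_1 = 0
p_value = 2 * stats.t.sf(abs(t_c), df=n - k - 1)
print(b0, b1, t_c, p_value)
```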
The decision rule for acceptance or rejection of the null hypothesis follows exactly the same form as in all our previous test of hypothesis. Namely, if the calculated value of $t$ (or $Z$) falls into the tails of the distribution, where the tails are defined by $\alpha$, the required significance level in the test, we cannot accept the null hypothesis. If on the other hand, the calculated value of the test statistic is within the critical region, we cannot reject the null hypothesis. If we conclude that we cannot accept the null hypothesis, we are able to state with $(1−\alpha)$ level of confidence that the slope of the line is given by $b_1$. This is an extremely important conclusion. Regression analysis not only allows us to test if a cause and effect relationship exists, we can also determine the magnitude of that relationship, if one is found to exist. It is this feature of regression analysis that makes it so valuable. If models can be developed that have statistical validity, we are then able to simulate the effects of changes in variables that may be under our control with some degree of probability , of course. For example, if advertising is demonstrated to effect sales, we can determine the effects of changing the advertising budget and decide if the increased sales are worth the added expense. Multicollinearity Our discussion earlier indicated that like all statistical models, the OLS regression model has important assumptions attached. Each assumption, if violated, has an effect on the ability of the model to provide useful and meaningful estimates. The Gauss-Markov Theorem has assured us that the OLS estimates are unbiased and minimum variance, but this is true only under the assumptions of the model. Here we will look at the effects on OLS estimates if the independent variables are correlated. The other assumptions and the methods to mitigate the difficulties they pose if they are found to be violated are examined in Econometrics courses. We take up multicollinearity because it is so often prevalent in Economic models and it often leads to frustrating results. The OLS model assumes that all the independent variables are independent of each other. This assumption is easy to test for a particular sample of data with simple correlation coefficients. Correlation, like much in statistics, is a matter of degree: a little is not good, and a lot is terrible. The goal of the regression technique is to tease out the independent impacts of each of a set of independent variables on some hypothesized dependent variable. If two 2 independent variables are interrelated, that is, correlated, then we cannot isolate the effects on $Y$ of one from the other. In an extreme case where $x_1$ is a linear combination of $x_2$, correlation equal to one, both variables move in identical ways with $Y$. In this case it is impossible to determine the variable that is the true cause of the effect on $Y$. (If the two variables were actually perfectly correlated, then mathematically no regression results could actually be calculated.) The normal equations for the coefficients show the effects of multicollinearity on the coefficients. 
$b_{1}=\frac{s_{y}\left(r_{x_{1} y}-r_{x_{1} x_{2}} r_{x_{2} y}\right)}{s_{x_{1}}\left(1-r_{x_{1} x_{2}}^{2}\right)}\nonumber$

$b_{2}=\frac{s_{y}\left(r_{x_{2} y}-r_{x_{1} x_{2}} r_{x_{1} y}\right)}{s_{x_{2}}\left(1-r_{x_{1} x_{2}}^{2}\right)}\nonumber$

$b_{0}=\overline{y}-b_{1} \overline{x}_{1}-b_{2} \overline{x}_{2}\nonumber$

The correlation between $x_1$ and $x_2$, $r_{x_{1} x_{2}}^{2}$, appears in the denominator of both of the estimating formulas for $b_1$ and $b_2$. If the assumption of independence holds, then this term is zero. This indicates that there is no effect of the correlation on the coefficient. On the other hand, as the correlation between the two independent variables increases the denominator decreases, and thus the estimate of the coefficient increases. The correlation has the same effect on both of the coefficients of these two variables. In essence, each variable is “taking” part of the effect on Y that should be attributed to the collinear variable. This results in biased estimates. Multicollinearity has a further deleterious impact on the OLS estimates. The correlation between the two independent variables also shows up in the formulas for the estimate of the variance for the coefficients.

$s_{b_{1}}^{2}=\frac{s_{e}^{2}}{(n-1) s_{x_{1}}^{2}\left(1-r_{x_{1} x_{2}}^{2}\right)}\nonumber$

$s_{b_{2}}^{2}=\frac{s_{e}^{2}}{(n-1) s_{x_{2}}^{2}\left(1-r_{x_{1} x_{2}}^{2}\right)}\nonumber$

Here again we see the correlation between $x_1$ and $x_2$ in the denominator of the estimates of the variance for the coefficients for both variables. If the correlation is zero as assumed in the regression model, then the formula collapses to the familiar ratio of the variance of the errors to the variance of the relevant independent variable. If however the two independent variables are correlated, then the variance of the estimate of the coefficient increases. This results in a smaller $t$-value for the test of hypothesis of the coefficient. In short, multicollinearity results in failing to reject the null hypothesis that the $X$ variable has no impact on $Y$ when in fact $X$ does have a statistically significant impact on $Y$. Said another way, the large standard errors of the estimated coefficient created by multicollinearity suggest statistical insignificance even when the hypothesized relationship is strong.

How Good is the Equation?

In the last section we concerned ourselves with testing the hypothesis that the dependent variable did indeed depend upon the hypothesized independent variable or variables. It may be that we find an independent variable that has some effect on the dependent variable, but it may not be the only one, and it may not even be the most important one. Remember that the error term was placed in the model to capture the effects of any missing independent variables. It follows that the error term may be used to give a measure of the "goodness of fit" of the equation taken as a whole in explaining the variation of the dependent variable, $Y$.

The multiple correlation coefficient, also called the coefficient of multiple determination or the coefficient of determination, is given by the formula:

$R^{2}=\frac{\mathrm{SSR}}{\mathrm{SST}}\nonumber$

where SSR is the regression sum of squares, the squared deviation of the predicted value of $y$ from the mean value of $y$, $(\hat{y}-\overline{y})$, and SST is the total sum of squares. Figure 13.10 shows how the total deviation of the dependent variable, $y$, is partitioned into these two pieces. 
Figure 13.10 shows the estimated regression line and a single observation, $x_1$. Regression analysis tries to explain the variation of the data about the mean value of the dependent variable, $y$. The question is, why do the observations of $y$ vary from the average level of $y$? The value of $y$ at observation $x_1$ varies from the mean of $y$ by the difference $\left(y_{i}-\overline{y}\right)$. The sum of these differences squared is SST, the sum of squares total. The actual value of $y$ at $x_1$ deviates from the estimated value, $\hat{y}$, by the difference between the estimated value and the actual value, $\left(y_{i}-\hat{y}\right)$. We recall that this is the error term, $e$, and the sum of these errors squared is SSE, the sum of squared errors. The deviation of the predicted value of $y$, $\hat y$, from the mean value of $y$ is $(\hat{y}-\overline{y})$ and is the SSR, sum of squares regression. It is called "regression" because it is the deviation explained by the regression. (Sometimes the SSR is called SSM for sum of squares mean because it measures the deviation from the mean value of the dependent variable, $y$, as shown on the graph.)

Because SST = SSR + SSE, we see that the multiple correlation coefficient is the percent of the variance, or deviation in $y$ from its mean value, that is explained by the equation when taken as a whole. $R^2$ will vary between zero and 1, with zero indicating that none of the variation in $y$ was explained by the equation and a value of 1 indicating that 100% of the variation in $y$ was explained by the equation. For time series studies expect a high $R^2$, while for cross-section data expect a low $R^2$.

While a high $R^2$ is desirable, remember that it is the tests of the hypothesis concerning the existence of a relationship between a set of independent variables and a particular dependent variable that was the motivating factor in using the regression model. It is validating a cause and effect relationship developed by some theory that is the true reason that we chose the regression analysis. Increasing the number of independent variables will have the effect of increasing $R^2$. To account for this effect the proper measure of the coefficient of determination is the $\overline{R}^{2}$, adjusted for degrees of freedom, to keep down mindless addition of independent variables.

There is no statistical test for the $R^2$ and thus little can be said about the model using $R^2$ with our characteristic confidence level. Two models that have the same size of SSE, that is sum of squared errors, may have very different $R^2$ if the competing models have different SST, total sum of squared deviations. The goodness of fit of the two models is the same; they both have the same sum of squares unexplained, errors squared, but because of the larger total sum of squares on one of the models the $R^2$ differs. Again, the real value of regression as a tool is to examine hypotheses developed from a model that predicts certain relationships among the variables. These are tests of hypotheses on the coefficients of the model and not a game of maximizing $R^2$.

Another way to test the general quality of the overall model is to test the coefficients as a group rather than independently. Because this is multiple regression (more than one $X$), we use the F-test to determine if our coefficients collectively affect $Y$.
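A minimal numeric sketch of these quantities may help. It assumes a small made-up data set with two independent variables (all numbers are illustrative, not from the text), computes SSR, SSE, SST, $R^2$, and the adjusted $R^2$, and also computes the overall F statistic discussed next.

```python
import numpy as np
from scipy import stats

# Illustrative (made-up) data: y regressed on two X variables.
X = np.array([[1, 2.0, 5.0],
              [1, 3.0, 4.0],
              [1, 5.0, 6.0],
              [1, 7.0, 9.0],
              [1, 8.0, 7.0],
              [1, 9.0, 11.0]])            # first column of 1s for the intercept
y = np.array([6.0, 8.0, 11.0, 14.0, 15.0, 19.0])
n, k = len(y), X.shape[1] - 1             # k = number of independent variables

b, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficients b0, b1, b2
y_hat = X @ b

SST = np.sum((y - y.mean())**2)           # total sum of squares
SSR = np.sum((y_hat - y.mean())**2)       # regression (explained) sum of squares
SSE = np.sum((y - y_hat)**2)              # sum of squared errors

R2 = SSR / SST
R2_adj = 1 - (1 - R2) * (n - 1) / (n - k - 1)   # adjusted for degrees of freedom

# Overall F test of H0: beta1 = beta2 = 0
F = (SSR / k) / (SSE / (n - k - 1))
p_value = stats.f.sf(F, k, n - k - 1)     # upper-tail area ("Significance F")

print(R2, R2_adj, F, p_value)
```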
The hypothesis is:

$H_{0} : \beta_{1}=\beta_{2}=\ldots=\beta_{i}=0$

$H_a$: "at least one of the $\beta_i$ is not equal to 0"

If the null hypothesis cannot be rejected, then we conclude that none of the independent variables contribute to explaining the variation in $Y$. Reviewing Figure 13.10 we see that SSR, the explained sum of squares, is a measure of just how much of the variation in $Y$ is explained by all the variables in the model. SSE, the sum of the errors squared, measures just how much is unexplained. It follows that the ratio of these two can provide us with a statistical test of the model as a whole. Remembering that the $F$ distribution is a ratio of chi-squared distributions, that variances are distributed according to chi-squared, and that the sum of squared errors and the regression sum of squares are both measures of variance, we have the test statistic for this hypothesis:

$F_{c}=\frac{\left(\frac{S S R}{k}\right)}{\left(\frac{S S E}{n-k-1}\right)}\nonumber$

where $n$ is the number of observations and $k$ is the number of independent variables. It can be shown that this is equivalent to:

$F_{c}=\frac{n-k-1}{k} \cdot \frac{R^{2}}{1-R^{2}}\nonumber$

where $R^2$ is the coefficient of determination, which is also a measure of the "goodness" of the model.

As with all our tests of hypothesis, we reach a conclusion by comparing the calculated $F$ statistic with the critical value given our desired level of confidence. If the calculated test statistic, an $F$ statistic in this case, is in the tail of the distribution, then we cannot accept the null hypothesis. By not being able to accept the null hypothesis we conclude that this specification of this model has validity, because at least one of the estimated coefficients is significantly different from zero.

An alternative way to reach this conclusion is to use the p-value comparison rule. The $p$-value is the area in the tail, given the calculated $F$ statistic. In essence, the computer is finding the $F$ value in the table for us. The computer regression output for the calculated $F$ statistic is typically found in the ANOVA table section labeled "significance F". How to read the output of an Excel regression is presented below. This $p$-value is the probability of observing an $F$ statistic this large if the null hypothesis were in fact true. If this probability is less than our pre-determined alpha error, then the conclusion is that we cannot accept the null hypothesis.

Dummy Variables

Thus far the analysis of the OLS regression technique assumed that the independent variables in the models tested were continuous random variables. There are, however, no restrictions in the regression model against independent variables that are binary. This opens the regression model for testing hypotheses concerning categorical variables such as gender, race, region of the country, before or after a certain date, and innumerable others. These categorical variables take on only two values, 1 and 0, success or failure, from the binomial probability distribution. The form of the equation becomes:

$\hat{y}=b_{0}+b_{2} x_{2}+b_{1} x_{1}\nonumber$

where $x_2 = 0$ or 1. $X_2$ is the dummy variable and $X_1$ is some continuous random variable. The constant, $b_0$, is the y-intercept, the value where the line crosses the $y$-axis. When the value of $X_2 = 0$, the estimated line crosses at $b_0$. When the value of $X_2 = 1$ then the estimated line crosses at $b_0 + b_2$.
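As an illustration of this shift effect, here is a minimal sketch with hypothetical data (the variable names and numbers are made up for the example, not taken from the text). It fits one common slope on the continuous variable and one intercept shift for the 0/1 dummy.

```python
import numpy as np

# Hypothetical data: x1 is continuous (e.g., years of experience),
# x2 is a 0/1 dummy (e.g., group membership).
x1 = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
x2 = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=float)
y  = np.array([11.0, 14.5, 15.2, 18.4, 19.1, 22.3, 23.0, 26.6])

X = np.column_stack([np.ones_like(x1), x1, x2])   # intercept, x1, dummy
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# The fitted line for the x2 = 0 group crosses the y-axis at b0;
# for the x2 = 1 group it crosses at b0 + b2: a parallel shift of size b2.
print("intercept when x2 = 0:", b0)
print("intercept when x2 = 1:", b0 + b2)
print("common slope on x1:   ", b1)
```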
In effect the dummy variable causes the estimated line to shift either up or down by the size of the effect of the characteristic captured by the dummy variable. Note that this is a simple parallel shift and does not affect the impact of the other independent variable, $X_1$. This variable is a continuous random variable and predicts different values of $y$ at different values of $X_1$, holding constant the condition of the dummy variable.

An example of the use of a dummy variable is the work estimating the impact of gender on salaries. There is a full body of literature on this topic and dummy variables are used extensively. For this example the salaries of elementary and secondary school teachers for a particular state are examined. Using a homogeneous job category, school teachers, and a single state reduces many of the variations that naturally affect salaries such as differential physical risk, cost of living in a particular state, and other working conditions. The estimating equation in its simplest form specifies salary as a function of various teacher characteristics that economic theory would suggest could affect salary. These would include education level as a measure of potential productivity, and age and/or experience to capture on-the-job training, again as a measure of productivity. Because the data are for school teachers employed in public school districts rather than workers in a for-profit company, the school district's average revenue per average daily student attendance is included as a measure of ability to pay. The results of the regression analysis using data on 24,916 school teachers are presented below.

Variable | Regression Coefficient (b) | Standard Error of the estimate for the teachers' earnings function ($s_b$)
Intercept | 4269.9 |
Gender (male = 1) | 632.38 | 13.39
Total Years of Experience | 52.32 | 1.10
Years of Experience in Current District | 29.97 | 1.52
Education | 629.33 | 13.16
Total Revenue per ADA | 90.24 | 3.76
$\overline{R}^{2}$ | .725 |
$n$ | 24,916 |

Table 13.1 Earnings Estimate for Elementary and Secondary School Teachers

The coefficients for all the independent variables are significantly different from zero as indicated by the standard errors. Dividing each coefficient by its standard error results in a t-value greater than 1.96, which is the required level for 95% significance. The binary variable, our dummy variable of interest in this analysis, is gender, where male is given a value of 1 and female is given a value of 0. The coefficient is significantly different from zero with a dramatic t-statistic of 47 standard deviations. We thus cannot accept the null hypothesis that the coefficient is equal to zero. Therefore we conclude that there is a premium paid to male teachers of $632 after holding constant experience, education and the wealth of the school district in which the teacher is employed. It is important to note that these data are from some time ago and the $632 represents a six percent salary premium at that time.

A graph of this example of dummy variables is presented below. In two dimensions, salary is the dependent variable on the vertical axis and total years of experience was chosen for the continuous independent variable on the horizontal axis. Any of the other independent variables could have been chosen to illustrate the effect of the dummy variable. The relationship between total years of experience and salary has a slope of $52.32 per year of experience and the estimated line has an intercept of $4,269 if the gender variable is equal to zero, for female.
If the gender variable is equal to 1, for male, the coefficient for the gender variable is added to the intercept and thus the relationship between total years of experience and salary is shifted upward parallel as indicated on the graph. Also marked on the graph are various points for reference. A female school teacher with 10 years of experience receives a salary of $4,792 on the basis of her experience only, but this is still $109 less than a male teacher with zero years of experience.

A more complex interaction between a dummy variable and the dependent variable can also be estimated. It may be that the dummy variable has more than a simple shift effect on the dependent variable, but also interacts with one or more of the other continuous independent variables. While not tested in the example above, it could be hypothesized that the impact of gender on salary was not a one-time shift, but impacted the value of additional years of experience on salary also. That is, female school teachers' salaries were discounted at the start, and further did not grow at the same rate from the effect of experience as for male school teachers. This would show up as a different slope for the relationship between total years of experience for males than for females. If this is so, then female school teachers would not just start behind their male colleagues (as measured by the shift in the estimated regression line), but would fall further and further behind as time and experience increased.

The graph below shows how this hypothesis can be tested with the use of dummy variables and an interaction variable. The estimating equation takes the form

$\hat{y}=b_{0}+b_{1} X_{1}+b_{2} X_{2}+b_{3} X_{2} X_{1}\nonumber$

which shows how the slope of $X_1$, the continuous random variable experience, contains two parts, $b_1$ and $b_3$. This occurs because the new variable $X_2 X_1$, called the interaction variable, was created to allow for an effect on the slope of $X_1$ from changes in $X_2$, the binary dummy variable. Note that when the dummy variable, $X_2 = 0$, the interaction variable has a value of 0, but when $X_2 = 1$ the interaction variable has a value of $X_1$. The coefficient $b_3$ is an estimate of the difference in the coefficient of $X_1$ when $X_2 = 1$ compared to when $X_2 = 0$. In the example of teachers' salaries, if there is a premium paid to male teachers that affects the rate of increase in salaries from experience, then the rate at which male teachers' salaries rise would be $b_1 + b_3$ and the rate at which female teachers' salaries rise would be simply $b_1$. This hypothesis can be tested with the hypothesis:

$H_{0} : \beta_{3}=0 | \beta_{1}=0, \beta_{2}=0\nonumber$

$H_{a} : \beta_{3} \neq 0 | \beta_{1} \neq 0, \beta_{2} \neq 0\nonumber$

This is a $t$-test using the test statistic for the parameter $\beta_3$. If we cannot accept the null hypothesis that $\beta_3=0$ we conclude there is a difference between the rate of increase for the group for whom the value of the binary variable is set to 1, males in this example. This estimating equation can be combined with our earlier one. Figure 13.13 is drawn for this case, with a shift in the earnings function and a difference in the slope of the function with respect to total years of experience.

Example 13.5

A random sample of 11 statistics students produced the following data, where $x$ is the third exam score out of 80, and $y$ is the final exam score out of 200. Can you predict the final exam score of a randomly selected student if you know the third exam score?
Table showing the scores on the final exam based on scores from the third exam.

$x$ (third exam score) | $y$ (final exam score)
65 | 175
67 | 133
71 | 185
71 | 163
66 | 126
75 | 198
67 | 153
70 | 163
71 | 159
69 | 151
69 | 159

Table 13.2
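A short sketch, not part of the original text, that fits the least-squares line to the data in Table 13.2. The slope and intercept it produces (approximately 4.83 and −173.51) are the values used later in the chapter for this example.

```python
import numpy as np

# Third exam (x) and final exam (y) scores from Table 13.2.
x = np.array([65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69], dtype=float)
y = np.array([175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159], dtype=float)

# Least-squares slope and intercept for y = a + b*x.
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
a = y.mean() - b * x.mean()

r = np.corrcoef(x, y)[0, 1]   # sample correlation coefficient

print(f"y-hat = {a:.2f} + {b:.2f} x")   # approximately -173.51 + 4.83 x
print(f"r = {r:.4f}")
```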
As we have seen, the coefficient of an equation estimated using OLS regression analysis provides an estimate of the slope of a straight line that is assumed to be the relationship between the dependent variable and at least one independent variable. From the calculus, the slope of the line is the first derivative and tells us the magnitude of the impact of a one unit change in the $X$ variable upon the value of the $Y$ variable measured in the units of the $Y$ variable. As we saw in the case of dummy variables, this can show up as a parallel shift in the estimated line or even a change in the slope of the line through an interactive variable. Here we wish to explore the concept of elasticity and how we can use a regression analysis to estimate the various elasticities in which economists have an interest.

The concept of elasticity is borrowed from engineering and physics where it is used to measure a material's responsiveness to a force, typically a physical force such as a stretching/pulling force. It is from here that we get the term an "elastic" band. In economics, the force in question is some market force such as a change in price or income. Elasticity is measured as a percentage change/response in both engineering applications and in economics. The value of measuring in percentage terms is that the units of measurement do not play a role in the value of the measurement and thus allows direct comparison between elasticities.

As an example, if the price of gasoline increased, say, 50 cents from an initial price of $3.00 and generated a decline in monthly consumption for a consumer from 50 gallons to 48 gallons, we calculate the elasticity to be 0.25. The price elasticity is the percentage change in quantity resulting from some percentage change in price. A 16 percent increase in price has generated only a 4 percent decrease in demand: 16% price change $\rightarrow$ 4% quantity change or $.04/.16 = .25$. This is called an inelastic demand, meaning a small response to the price change. This comes about because there are few if any real substitutes for gasoline; perhaps public transportation, a bicycle or walking. Technically, of course, the percentage change in demand from a price increase is a decline in demand, thus price elasticity is a negative number. The common convention, however, is to talk about elasticity as the absolute value of the number. Some goods have many substitutes: pears for apples, for plums, for grapes, and so on. The elasticity for such goods is larger than one and they are called elastic in demand. Here a small percentage change in price will induce a large percentage change in quantity demanded. The consumer will easily shift the demand to the close substitute.

While this discussion has been about price changes, any of the independent variables in a demand equation will have an associated elasticity. Thus, there is an income elasticity that measures the sensitivity of demand to changes in income: not much for the demand for food, but very sensitive for yachts. If the demand equation contains a term for substitute goods, say candy bars in a demand equation for cookies, then the responsiveness of demand for cookies from changes in prices of candy bars can be measured. This is called the cross-price elasticity of demand and to an extent can be thought of as brand loyalty from a marketing view. How responsive is the demand for Coca-Cola to changes in the price of Pepsi? Now imagine the demand for a product that is very expensive.
Again, the measure of elasticity is in percentage terms, thus the elasticity can be directly compared to that for gasoline: an elasticity of 0.25 for gasoline conveys the same information as an elasticity of 0.25 for a $25,000 car. Both goods are considered by the consumer to have few substitutes and thus have inelastic demand curves, elasticities less than one.

The mathematical formulae for various elasticities are:

$\text { Price elasticity: } \eta_{\mathrm{p}}=\frac{(\% \Delta \mathrm{Q})}{(\% \Delta \mathrm{P})}\nonumber$

Where $\eta$ is the Greek small case letter eta used to designate elasticity. $\Delta$ is read as "change".

$\text { Income elasticity: } \eta_{\mathrm{Y}}=\frac{(\% \Delta \mathrm{Q})}{(\% \Delta \mathrm{Y})}\nonumber$

Where $Y$ is used as the symbol for income.

$\text { Cross-Price elasticity: } \eta_{\mathrm{p} 1}=\frac{\left(\% \Delta \mathrm{Q}_{1}\right)}{\left(\% \Delta \mathrm{P}_{2}\right)}\nonumber$

Where $P_2$ is the price of the substitute good.

Examining closer the price elasticity we can write the formula as:

$\eta_{\mathrm{p}}=\frac{(\% \Delta \mathrm{Q})}{(\% \Delta \mathrm{P})}=\frac{\mathrm{d} \mathrm{Q}}{\mathrm{dP}}\left(\frac{\mathrm{P}}{\mathrm{Q}}\right)=\mathrm{b}\left(\frac{\mathrm{P}}{\mathrm{Q}}\right)\nonumber$

Where $b$ is the estimated coefficient for price in the OLS regression.

The first form of the equation demonstrates the principle that elasticities are measured in percentage terms. Of course, the ordinary least squares coefficients provide an estimate of the impact of a unit change in the independent variable, $X$, on the dependent variable measured in units of $Y$. These coefficients are not elasticities, however, and are shown in the second way of writing the formula for elasticity as $\left(\frac{\mathrm{d} Q}{\mathrm{d} P}\right)$, the derivative of the estimated demand function, which is simply the slope of the regression line. Multiplying the slope times $\frac{P}{Q}$ provides an elasticity measured in percentage terms.

Along a straight-line demand curve the percentage change, thus elasticity, changes continuously as the scale changes, while the slope, the estimated regression coefficient, remains constant. Going back to the demand for gasoline, a change in price from $3.00 to $3.50 was a 16 percent increase in price. If the beginning price were $5.00 then the same 50¢ increase would be only a 10 percent increase, generating a different elasticity. Every straight-line demand curve has a range of elasticities starting at the top left, high prices, with large elasticity numbers, elastic demand, and decreasing as one goes down the demand curve, inelastic demand.

In order to provide a meaningful estimate of the elasticity of demand the convention is to estimate the elasticity at the point of means. Remember that all OLS regression lines will go through the point of means. At this point is the greatest weight of the data used to estimate the coefficient. The formula to estimate an elasticity when an OLS demand curve has been estimated becomes:

$\eta_{\mathrm{p}}=\mathrm{b}\left(\frac{\overline{\mathrm{P}}}{\overline{\mathrm{Q}}}\right)\nonumber$

Where $\overline{\mathrm{P}}$ and $\overline{\mathrm{Q}}$ are the mean values of the data used to estimate $b$, the price coefficient. The same method can be used to estimate the other elasticities for the demand function by using the appropriate mean values of the other variables; income and price of substitute goods for example.
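Here is a minimal sketch of this calculation using hypothetical price and quantity data (the numbers are made up for illustration). It fits a simple linear demand curve and evaluates the price elasticity at the point of means.

```python
import numpy as np

# Hypothetical demand data: price (P) and quantity demanded (Q).
P = np.array([2.50, 2.75, 3.00, 3.25, 3.50, 3.75, 4.00])
Q = np.array([55.0, 53.0, 50.0, 49.0, 48.0, 45.0, 44.0])

# Simple OLS demand curve: Q = a + b*P  (b is the slope, expected negative).
b, a = np.polyfit(P, Q, 1)          # polyfit returns [slope, intercept]

# Price elasticity evaluated at the point of means.
eta_p = b * (P.mean() / Q.mean())

print(f"slope b = {b:.3f}")
print(f"price elasticity at the means = {eta_p:.3f}")
```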
Logarithmic Transformation of the Data

Ordinary least squares estimates typically assume that the population relationship among the variables is linear, thus of the form presented in The Regression Equation. In this form the interpretation of the coefficients is as discussed above; quite simply, the coefficient provides an estimate of the impact of a one unit change in $X$ on $Y$ measured in units of $Y$. It does not matter just where along the line one wishes to make the measurement because it is a straight line with a constant slope, thus a constant estimated level of impact per unit change. It may be, however, that the analyst wishes to estimate not the simple unit measured impact on the $Y$ variable, but the magnitude of the percentage impact on $Y$ of a one unit change in the $X$ variable. Such a case might be how a unit change in experience, say one year, affects not the absolute amount of a worker's wage, but the percentage impact on the worker's wage. Alternatively, it may be that the question asked is the unit measured impact on $Y$ of a specific percentage increase in $X$. An example may be "by how many dollars will sales increase if the firm spends $X$ percent more on advertising?" The third possibility is the case of elasticity discussed above. Here we are interested in the percentage impact on quantity demanded for a given percentage change in price, or income, or perhaps the price of a substitute good. All three of these cases can be estimated by transforming the data to logarithms before running the regression. The resulting coefficients will then provide a percentage change measurement of the relevant variable.

To summarize, there are four cases:

1. $\text { Unit } \Delta X \rightarrow \text { Unit } \Delta Y$ (Standard OLS case)
2. $\text { Unit } \Delta X \rightarrow \% \Delta Y$
3. $\% \Delta X \rightarrow \text { Unit } \Delta Y$
4. $\% \Delta X \rightarrow \% \Delta Y$ (elasticity case)

Case 1: The ordinary least squares case begins with the linear model developed above:

$Y=a+b X\nonumber$

where the coefficient of the independent variable $b=\frac{d Y}{d X}$ is the slope of a straight line and thus measures the impact of a unit change in $X$ on $Y$ measured in units of $Y$.

Case 2: The underlying estimated equation is:

$\log (Y)=a+b X\nonumber$

The equation is estimated by converting the $Y$ values to logarithms and using OLS techniques to estimate the coefficient of the $X$ variable, $b$. This is called a semi-log estimation. Again, differentiating both sides of the equation allows us to develop the interpretation of the $X$ coefficient $b$:

$\mathrm{d}(\log Y)=b \, \mathrm{d} X\nonumber$

$\frac{\mathrm{d} Y}{Y}=b \, \mathrm{d} X\nonumber$

Multiplying by 100 to convert to percentages and rearranging terms gives:

$100 b=\frac{\% \Delta Y}{\text { Unit } \Delta X}\nonumber$

$100b$ is thus the percentage change in $Y$ resulting from a unit change in $X$.

Case 3: In this case the question is "what is the unit change in $Y$ resulting from a percentage change in $X$?" What is the dollar loss in revenues of a five percent increase in price or what is the total dollar cost impact of a five percent increase in labor costs?
The estimated equation for this case would be:

$Y=a+b \log (X)\nonumber$

Here the calculus differential of the estimated equation is:

$\mathrm{d} Y=b \, \mathrm{d}(\log X)\nonumber$

$\mathrm{d} Y=b \frac{\mathrm{d} X}{X}\nonumber$

Dividing by 100 to convert to percentages and rearranging terms gives:

$\frac{b}{100}=\frac{\mathrm{d} Y}{100 \frac{\mathrm{d} X}{X}}=\frac{\text { Unit } \Delta \mathrm{Y}}{\% \Delta \mathrm{X}}\nonumber$

Therefore, $\frac{b}{100}$ is the increase in $Y$ measured in units from a one percent increase in $X$.

Case 4: This is the elasticity case where both the dependent and independent variables are converted to logs before the OLS estimation. This is known as the log-log case or double log case, and provides us with direct estimates of the elasticities of the independent variables. The estimated equation is:

$\log Y=a+b \log X\nonumber$

Differentiating we have:

$\mathrm{d}(\log Y)=b \, \mathrm{d}(\log X)\nonumber$

$\mathrm{d}(\log Y)=b \frac{1}{X} \mathrm{d} X\nonumber$

thus:

$\frac{1}{Y} \mathrm{d} Y=b \frac{1}{X} \mathrm{d} X \quad \text { OR } \quad \frac{\mathrm{d} Y}{Y}=b \frac{\mathrm{d} X}{X} \quad \text { OR } \quad b=\frac{\mathrm{d} Y}{\mathrm{d} X}\left(\frac{X}{Y}\right)\nonumber$

and $b=\frac{\% \Delta Y}{\% \Delta X}$, our definition of elasticity. We conclude that we can directly estimate the elasticity of a variable through double log transformation of the data. The estimated coefficient is the elasticity. It is common to use double log transformation of all variables in the estimation of demand functions to get estimates of all the various elasticities of the demand curve.
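A minimal sketch of the semi-log and log-log transformations, using hypothetical price and quantity data (the numbers are made up for illustration). The point is only that the transformation happens before the regression is run; the fitted coefficients then carry the percentage interpretations described above.

```python
import numpy as np

# Hypothetical data: price (X) and quantity demanded (Y), both positive.
X = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
Y = np.array([60.0, 52.0, 47.0, 43.0, 40.0, 37.0, 35.0])

# Case 2 (semi-log): log(Y) = a + b*X ; 100*b is the % change in Y per unit change in X.
b_semilog, a_semilog = np.polyfit(X, np.log(Y), 1)
print("percent change in Y per unit change in X:", 100 * b_semilog)

# Case 4 (log-log): log(Y) = a + b*log(X) ; b is the elasticity directly.
b_loglog, a_loglog = np.polyfit(np.log(X), np.log(Y), 1)
print("estimated elasticity:", b_loglog)
```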
One important value of an estimated regression equation is its ability to predict the effects on $Y$ of a change in one or more values of the independent variables. The value of this is obvious. Careful policy cannot be made without estimates of the effects that may result. Indeed, it is the desire for particular results that drives the formation of most policy. Regression models can be, and have been, invaluable aids in forming such policies.

The Gauss-Markov theorem assures us that the point estimate of the impact on the dependent variable derived by putting into the equation the hypothetical values of the independent variables one wishes to simulate will result in an estimate of the dependent variable which is minimum variance and unbiased. That is to say that from this equation comes the best unbiased point estimate of $y$ given the values of $x$.

$\hat{y}=b_{0}+b_{1} X_{1 i}+\cdots+b_{k} X_{k i}\nonumber$

Remember that point estimates do not carry a particular level of probability, or level of confidence, because points have no "width" above which there is an area to measure. This was why we developed confidence intervals for the mean and proportion earlier. The same concern arises here also.

There are actually two different approaches to the issue of developing estimates of the effects of changes in the independent variable, or variables, on the dependent variable. The first approach wishes to measure the expected mean value of $y$ from a specific change in the value of $x$; that is, the expected value of $y$ at this specific value of $x$. Here the question is: what is the mean impact on $y$ that would result from multiple hypothetical experiments on $y$ at this specific value of $x$? Remember that there is a variance around the estimated parameter of $x$ and thus each experiment will result in a bit of a different estimate of the predicted value of $y$.

The second approach to estimating the effect of a specific value of $x$ on $y$ treats the event as a single experiment: you choose $x$ and multiply it times the coefficient and that provides a single estimate of $y$. Because this approach acts as if there were a single experiment, the variance that exists in the parameter estimate is larger than the variance associated with the expected value approach.

The conclusion is that we have two different ways to predict the effect of values of the independent variable(s) on the dependent variable and thus we have two different intervals. Both are correct answers to the question being asked, but there are two different questions. To avoid confusion, the first case, where we are asking for the expected value of the mean of the estimated $y$, is called a confidence interval as we have named this concept before. The second case, where we are asking for the estimate of the impact on the dependent variable $y$ of a single experiment using a value of $x$, is called the prediction interval. The test statistics for these two interval measures within which the estimated value of $y$ will fall are:

$\text { Confidence Interval for the Expected Value of the Mean Value of y for } x=x_{p}\nonumber$

$\hat{y} \pm t_{\alpha / 2} s_{e}\left(\sqrt{\frac{1}{n}+\frac{\left(x_{p}-\overline{x}\right)^{2}}{s_{x}}}\right)\nonumber$

$\text { Prediction Interval for an Individual y for } x=x_{p}\nonumber$

$\hat{y} \pm t_{\alpha / 2} s_{e}\left(\sqrt{1+\frac{1}{n}+\frac{\left(x_{p}-\overline{x}\right)^{2}}{s_{x}}}\right)\nonumber$

Where $s_e$ is the standard deviation of the error term and $s_x$ is the standard deviation of the $x$ variable.
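A minimal sketch of both intervals for the simple regression case. It follows the usual statistical convention of putting $S_{xx}=\sum(x_i-\overline{x})^2$, the sum of squared deviations of $x$, in the denominator of the second term (the quantity the formulas above denote $s_x$); the exam data from Table 13.2 and the value $x_p = 73$ are used only as a worked illustration.

```python
import numpy as np
from scipy import stats

def intervals(x, y, x_p, alpha=0.05):
    """Confidence and prediction intervals for y-hat at x = x_p (simple regression)."""
    n = len(x)
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
    a = y.mean() - b * x.mean()
    y_hat = a + b * x_p
    resid = y - (a + b * x)
    s_e = np.sqrt(np.sum(resid**2) / (n - 2))      # std. error of the estimate
    Sxx = np.sum((x - x.mean())**2)                # sum of squared deviations of x
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)
    ci = t * s_e * np.sqrt(1/n + (x_p - x.mean())**2 / Sxx)       # mean response
    pi = t * s_e * np.sqrt(1 + 1/n + (x_p - x.mean())**2 / Sxx)   # single new y
    return y_hat, (y_hat - ci, y_hat + ci), (y_hat - pi, y_hat + pi)

# Worked illustration: the exam data from Table 13.2, predicting at x = 73.
x = np.array([65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69], dtype=float)
y = np.array([175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159], dtype=float)
print(intervals(x, y, 73))
```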
The mathematical computations of these two test statistics are complex. Various computer regression software packages provide these interval calculations as part of their regression functions. Figure $15$ shows visually the difference the standard deviation makes in the size of the estimated intervals. The confidence interval, measuring the expected value of the dependent variable, is smaller than the prediction interval for the same level of confidence. The expected value method assumes that the experiment is conducted multiple times rather than just once as in the other method. The logic here is similar, although not identical, to that discussed when developing the relationship between the sample size and the confidence interval using the Central Limit Theorem. There, as the number of experiments increased, the distribution narrowed and the confidence interval became tighter around the expected value of the mean.

It is also important to note that the intervals around a point estimate are highly dependent upon the range of data used to estimate the equation, regardless of which approach is being used for prediction. Remember that all regression equations go through the point of means, that is, the mean value of $y$ and the mean values of all independent variables in the equation. As the value of $x$ chosen to estimate the associated value of $y$ is further from the point of means, the width of the estimated interval around the point estimate increases. Figure $16$ shows this relationship.

Figure $16$ demonstrates the concern for the quality of the estimated interval whether it is a prediction interval or a confidence interval. As the value chosen to predict $y$, $X_p$ in the graph, is further from the central weight of the data, $\overline X$, we see the interval expand in width even while holding constant the level of confidence. This shows that the precision of any estimate will diminish as one tries to predict beyond the largest weight of the data and most certainly will degrade rapidly for predictions beyond the range of the data. Unfortunately, this is just where most predictions are desired. They can be made, but the width of the confidence interval may be so large as to render the prediction useless. Only actual calculation and the particular application can determine this, however.

Example $6$

Recall the third exam/final exam example. We found the equation of the best-fit line for the final exam grade as a function of the grade on the third exam. We can now use the least-squares regression line for prediction. Assume the coefficient for $X$ was determined to be significantly different from zero. Suppose you want to estimate, or predict, the mean final exam score of statistics students who received 73 on the third exam. The exam scores ($\bf x$-values) range from 65 to 75. Since 73 is between the $x$-values 65 and 75, we feel comfortable substituting $x = 73$ into the equation. Then:

$\hat{y}=-173.51+4.83(73)=179.08\nonumber$

We predict that statistics students who earn a grade of 73 on the third exam will earn a grade of 179.08 on the final exam, on average.

a. What would you predict the final exam score to be for a student who scored a 66 on the third exam?

Answer

Solution 13.6

a. 145.27

b. What would you predict the final exam score to be for a student who scored a 90 on the third exam?

Answer

Solution 13.6

b. The $x$ values in the data are between 65 and 75.
Ninety is outside of the domain of the observed $x$ values in the data (independent variable), so you cannot reliably predict the final exam score for this student. (Even though it is possible to enter 90 into the equation for $x$ and calculate a corresponding $y$ value, the $y$ value that you get will have a confidence interval that may not be meaningful.)

To understand just how unreliable the prediction can be outside of the observed $x$ values in the data, make the substitution $x = 90$ into the equation.

$\hat{y}=-173.51+4.83(90)=261.19$

The final exam score is predicted to be 261.19. The largest the final exam score can be is 200.

13.07: Chapter Key Terms

a is the symbol for the Y-Intercept
Sometimes written as $b_0$, because when writing the theoretical linear model $\beta_0$ is used to represent a coefficient for a population.

b is the symbol for Slope
The word coefficient will be used regularly for the slope, because it is a number that will always be next to the letter "$x$." It will be written as $b_1$ when a sample is used, and $\beta_1$ will be used with a population or when writing the theoretical linear model.

Bivariate
two variables are present in the model where one is the "cause" or independent variable and the other is the "effect" or dependent variable.

Linear
a model that takes data and regresses it into a straight line equation.

Multivariate
a system or model where more than one independent variable is being used to predict an outcome. There can only ever be one dependent variable, but there is no limit to the number of independent variables.

$R^2$ – Coefficient of Determination
This is a number between 0 and 1 that represents the percentage variation of the dependent variable that can be explained by the variation in the independent variable. Sometimes calculated by the equation $R^{2}=\frac{S S R}{S S T}$ where $SSR$ is the "Sum of Squares Regression" and $SST$ is the "Sum of Squares Total." The appropriate coefficient of determination to be reported should always be adjusted for degrees of freedom first.

Residual or "error"
the value calculated from subtracting $y_{0}-\hat{y}_{0}=e_{0}$. The absolute value of a residual measures the vertical distance between the actual value of $y$ and the estimated value of $y$ that appears on the best-fit line.

$R$ – Correlation Coefficient
A number between −1 and 1 that represents the strength and direction of the relationship between "$X$" and "$Y$." The value for "$r$" will equal 1 or −1 only if all the plotted points form a perfectly straight line.

Sum of Squared Errors (SSE)
the calculated value from adding up all the squared residual terms. The hope is that this value is very small when creating a model.

X – the independent variable
This will sometimes be referred to as the "predictor" variable, because these values were measured in order to determine what possible outcomes could be predicted.

Y – the dependent variable
Also, using the letter "$y$" represents actual values while $\hat{y}$ represents predicted or estimated values. Predicted values will come from plugging in observed "$x$" values into a linear model.
13.1 The Correlation Coefficient r

1. In order to have a correlation coefficient between traits $A$ and $B$, it is necessary to have:
1. one group of subjects, some of whom possess characteristics of trait $A$, the remainder possessing those of trait $B$
2. measures of trait $A$ on one group of subjects and of trait $B$ on another group
3. two groups of subjects, one which could be classified as $A$ or not $A$, the other as $B$ or not $B$
4. measures of both trait $A$ and trait $B$ on each subject in one group

2. Define the Correlation Coefficient and give a unique example of its use.

3. If the correlation between age of an auto and money spent for repairs is +.90
1. 81% of the variation in the money spent for repairs is explained by the age of the auto
2. 81% of money spent for repairs is unexplained by the age of the auto
3. 90% of the money spent for repairs is explained by the age of the auto
4. none of the above

4. Suppose that college grade-point average and verbal portion of an IQ test had a correlation of .40. What percentage of the variance do these two have in common?
1. 20
2. 16
3. 40
4. 80

5. True or false? If false, explain why: The coefficient of determination can have values between -1 and +1.

6. True or False: Whenever r is calculated on the basis of a sample, the value which we obtain for r is only an estimate of the true correlation coefficient which we would obtain if we calculated it for the entire population.

7. Under a "scatter diagram" there is a notation that the coefficient of correlation is .10. What does this mean?
1. plus and minus 10% from the means includes about 68% of the cases
2. one-tenth of the variance of one variable is shared with the other variable
3. one-tenth of one variable is caused by the other variable
4. on a scale from -1 to +1, the degree of linear relationship between the two variables is +.10

8. The correlation coefficient for $X$ and $Y$ is known to be zero. We then can conclude that:
1. $X$ and $Y$ have standard distributions
2. the variances of $X$ and $Y$ are equal
3. there exists no relationship between $X$ and $Y$
4. there exists no linear relationship between $X$ and $Y$
5. none of these

9. What would you guess the value of the correlation coefficient to be for the pair of variables: "number of man-hours worked" and "number of units of work completed"?
1. Approximately 0.9
2. Approximately 0.4
3. Approximately 0.0
4. Approximately -0.4
5. Approximately -0.9

10. In a given group, the correlation between height measured in feet and weight measured in pounds is +.68. Which of the following would alter the value of r?
1. height is expressed in centimeters.
2. weight is expressed in kilograms.
3. both of the above will affect r.
4. neither of the above changes will affect r.

13.2 Testing the Significance of the Correlation Coefficient

11. Define a $t$ Test of a Regression Coefficient, and give a unique example of its use.

12. The correlation between scores on a neuroticism test and scores on an anxiety test is high and positive; therefore
1. anxiety causes neuroticism
2. those who score low on one test tend to score high on the other.
3. those who score low on one test tend to score low on the other.
4. no prediction from one test to the other can be meaningfully made.

13.3 Linear Equations

13. True or False? If False, correct it: Suppose a 95% confidence interval for the slope $\beta$ of the straight line regression of $Y$ on $X$ is given by $-3.5 < \beta < -0.5$.
Then a two-sided test of the hypothesis $H_{0} : \beta=-1$ would result in rejection of $H_0$ at the 1% level of significance.

14. True or False: It is safer to interpret correlation coefficients as measures of association rather than causation because of the possibility of spurious correlation.

15. We are interested in finding the linear relation between the number of widgets purchased at one time and the cost per widget. The following data has been obtained:
$X$: Number of widgets purchased – 1, 3, 6, 10, 15
$Y$: Cost per widget (in dollars) – 55, 52, 46, 32, 25
Suppose the regression line is $\hat{y}=-2.5 x+60$. We compute the average price per widget if 30 are purchased and observe which of the following?
1. $\hat{y}=15$ dollars; obviously, we are mistaken; the prediction $\hat y$ is actually +15 dollars.
2. $\hat{y}=15$ dollars, which seems reasonable judging by the data.
3. $\hat{y}=-15$ dollars, which is obvious nonsense. The regression line must be incorrect.
4. $\hat{y}=-15$ dollars, which is obvious nonsense. This reminds us that predicting $Y$ outside the range of $X$ values in our data is a very poor practice.

16. Discuss briefly the distinction between correlation and causality.

17. True or False: If $r$ is close to + or -1, we shall say there is a strong correlation, with the tacit understanding that we are referring to a linear relationship and nothing else.

13.4 The Regression Equation

18. Suppose that you have at your disposal the information below for each of 30 drivers. Propose a model (including a very brief indication of symbols used to represent independent variables) to explain how miles per gallon vary from driver to driver on the basis of the factors measured.
Information:
1. miles driven per day
2. weight of car
3. number of cylinders in car
4. average speed
5. miles per gallon
6. number of passengers

19. Consider a sample least squares regression analysis between a dependent variable ($Y$) and an independent variable ($X$). A sample correlation coefficient of −1 (minus one) tells us that
1. there is no relationship between $Y$ and $X$ in the sample
2. there is no relationship between $Y$ and $X$ in the population
3. there is a perfect negative relationship between $Y$ and $X$ in the population
4. there is a perfect negative relationship between $Y$ and $X$ in the sample.

20. In correlational analysis, when the points scatter widely about the regression line, this means that the correlation is
1. negative.
2. low.
3. heterogeneous.
4. between two measures that are unreliable.

13.5 Interpretation of Regression Coefficients: Elasticity and Logarithmic Transformation

21. In a linear regression, why do we need to be concerned with the range of the independent ($X$) variable?

22. Suppose one collected the following information where $X$ is diameter of tree trunk and $Y$ is tree height.

X | Y
4 | 8
2 | 4
8 | 18
6 | 22
10 | 30
6 | 8

Table $3$

Regression equation: $\hat{y}_{i}=-3.6+3.1 \cdot X_{i}$
What is your estimate of the average height of all trees having a trunk diameter of 7 inches?

23. The manufacturers of a chemical used in flea collars claim that under standard test conditions each additional unit of the chemical will bring about a reduction of 5 fleas (i.e.
where $X_{j}=$ amount of chemical and $Y_{j}=B_{0}+B_{1} \cdot X_{j}+E_{j}$, $H_{0} : B_{1}=-5$).

Suppose that a test has been conducted and results from a computer include:
Intercept = 60
Slope = −4
Standard error of the regression coefficient = 1.0
Degrees of Freedom for Error = 2000
95% Confidence Interval for the slope: −5.96 to −2.04
Is this evidence consistent with the claim that the number of fleas is reduced at a rate of 5 fleas per unit chemical?

13.6 Predicting with a Regression Equation

24. True or False? If False, correct it: Suppose you are performing a simple linear regression of $Y$ on $X$ and you test the hypothesis that the slope $\beta$ is zero against a two-sided alternative. You have $n=25$ observations and your computed test ($t$) statistic is 2.6. Then your P-value is given by $.01 < P < .02$, which gives borderline significance (i.e. you would reject $H_0$ at $\alpha=.02$ but fail to reject $H_0$ at $\alpha=.01$).

25. An economist is interested in the possible influence of "Miracle Wheat" on the average yield of wheat in a district. To do so he fits a linear regression of average yield per year against year after introduction of "Miracle Wheat" for a ten year period.
The fitted trend line is $\hat{y}_{j}=80+1.5 \cdot X_{j}$
($Y_j$: average yield in the $j$th year after introduction)
($X_j$: the $j$th year after introduction).
1. What is the estimated average yield for the fourth year after introduction?
2. Do you want to use this trend line to estimate yield for, say, 20 years after introduction? Why? What would your estimate be?

26. An interpretation of $r=0.5$ is that the following part of the $Y$-variation is associated with which variation in $X$:
1. most
2. half
3. very little
4. one quarter
5. none of these

27. Which of the following values of $r$ indicates the most accurate prediction of one variable from another?
1. $r=1.18$
2. $r=−.77$
3. $r=.68$

13.7 How to Use Microsoft Excel® for Regression Analysis

28. A computer program for multiple regression has been used to fit $\hat{y}_{j}=b_{0}+b_{1} \cdot X_{1 j}+b_{2} \cdot X_{2 j}+b_{3} \cdot X_{3 j}$.
Part of the computer output includes:

i | $b_i$ | $S_{b_i}$
0 | 8 | 1.6
1 | 2.2 | .24
2 | -.72 | .32
3 | 0.005 | 0.002

Table $4$

1. Calculation of confidence interval for $b_2$ consists of _______ $\pm$ (a student's $t$ value) (_______)
2. The confidence level for this interval is reflected in the value used for _______.
3. The degrees of freedom available for estimating the variance are directly concerned with the value used for _______

29. An investigator has used a multiple regression program on 20 data points to obtain a regression equation with 3 variables. Part of the computer output is:

Variable | Coefficient | Standard Error of $b_i$
1 | 0.45 | 0.21
2 | 0.80 | 0.10
3 | 3.10 | 0.86

Table $5$

1. 0.80 is an estimate of ___________.
2. 0.10 is an estimate of ___________.
3. Assuming the responses satisfy the normality assumption, we can be 95% confident that the value of $\beta_2$ is in the interval, _______ ± [$t_{.025} \cdot$ _______], where $t_{.025}$ is the critical value of the student's t distribution with ____ degrees of freedom.
13.3 Linear Equations The most basic type of association is a linear association. This type of relationship can be defined algebraically by the equations used, numerically with actual or predicted data values, or graphically from a plotted curve. (Lines are classified as straight curves.) Algebraically, a linear equation typically takes the form $\bf{y = mx + b}$, where $\bf m$ and $\bf b$ are constants, $\bf x$ is the independent variable, $\bf y$ is the dependent variable. In a statistical context, a linear equation is written in the form $\bf{y = a + bx}$, where $\bf a$ and $\bf b$ are the constants. This form is used to help readers distinguish the statistical context from the algebraic context. In the equation $y = a + bx$, the constant $b$ that multiplies the $\bf x$ variable ($b$ is called a coefficient) is called the slope. The slope describes the rate of change between the independent and dependent variables; in other words, the slope describes the change that occurs in the dependent variable as the independent variable is changed. In the equation $y = a + bx$, the constant a is called the y-intercept. The slope of a line is a value that describes the rate of change between the independent and dependent variables. The slope tells us how the dependent variable ($y$) changes for every one unit increase in the independent ($x$) variable, on average. The $\bf y$-intercept is used to describe the dependent variable when the independent variable equals zero. Graphically, the slope is represented by three line types in elementary statistics. 13.4 The Regression Equation It is hoped that this discussion of regression analysis has demonstrated the tremendous potential value it has as a tool for testing models and helping to better understand the world around us. The regression model has its limitations, especially the requirement that the underlying relationship be approximately linear. To the extent that the true relationship is nonlinear it may be approximated with a linear relationship or nonlinear forms of transformations that can be estimated with linear techniques. Double logarithmic transformation of the data will provide an easy way to test this particular shape of the relationship. A reasonably good quadratic form (the shape of the total cost curve from Microeconomics Principles) can be generated by the equation: $Y=a+b_{1} X+b_{2} X^{2}\nonumber$ where the values of $X$ are simply squared and put into the equation as a separate variable. There is much more in the way of econometric "tricks" that can bypass some of the more troublesome assumptions of the general regression model. This statistical technique is so valuable that further study would provide any student significant, statistically significant, dividends. 13.10: Chapter Solution 1. d 2. A measure of the degree to which variation of one variable is related to variation in one or more other variables. The most commonly used correlation coefficient indicates the degree to which variation in one variable is described by a straight line relation with another variable. Suppose that sample information is available on family income and Years of schooling of the head of the household. A correlation coefficient = 0 would indicate no linear association at all between these two variables. A correlation of 1 would indicate perfect linear association (where all variation in family income could be associated with schooling and vice versa). 3. a. 81% of the variation in the money spent for repairs is explained by the age of the auto 4. 
b. 16

5. The coefficient of determination is $r^{2}$, with $0 \leq r^{2} \leq 1$, since $-1 \leq r \leq 1$.

6. True

7. d. on a scale from -1 to +1, the degree of linear relationship between the two variables is +.10

8. d. there exists no linear relationship between X and Y

9. Approximately 0.9

10. d. neither of the above changes will affect $r$.

11. Definition: A $t$ test is obtained by dividing a regression coefficient by its standard error and then comparing the result to critical values for Students' t with Error $df$. It provides a test of the claim that $\beta_{i}=0$ when all other variables have been included in the relevant regression model.
Example: Suppose that 4 variables are suspected of influencing some response. Suppose that the results of fitting $Y_{i}=\beta_{0}+\beta_{1} X_{1 i}+\beta_{2} X_{2 i}+\beta_{3} X_{3 i}+\beta_{4} X_{4 i}+e_{i}$ include:

Variable | Regression coefficient | Standard error of regression coefficient
1 | -3 | .5
2 | +2 | .4
3 | +1 | .02
4 | -.5 | .6

Table $6$

$t$ calculated for variables 1, 2, and 3 would be 5 or larger in absolute value while that for variable 4 would be less than 1. For most significance levels, the hypothesis $\beta_{1}=0$ would be rejected. But, notice that this is for the case when $X_2$, $X_3$, and $X_4$ have been included in the regression. For most significance levels, the hypothesis $\beta_{4}=0$ would be continued (retained) for the case where $X_1$, $X_2$, and $X_3$ are in the regression. Often this pattern of results will result in computing another regression involving only $X_1$, $X_2$, $X_3$, and examination of the t ratios produced for that case.

12. c. those who score low on one test tend to score low on the other.

13. False. Since $H_{0} : \beta=-1$ would not be rejected at $\alpha=0.05$, it would not be rejected at $\alpha=0.01$.

14. True

15. d

16. Some variables seem to be related, so that knowing one variable's status allows us to predict the status of the other. This relationship can be measured and is called correlation. However, a high correlation between two variables in no way proves that a cause-and-effect relation exists between them. It is entirely possible that a third factor causes both variables to vary together.

17. True

18. $Y_{j}=b_{0}+b_{1} \cdot X_{1}+b_{2} \cdot X_{2}+b_{3} \cdot X_{3}+b_{4} \cdot X_{4}+b_{5} \cdot X_{6}+e_{j}$

19. d. there is a perfect negative relationship between $Y$ and $X$ in the sample.

20. b. low

21. The precision of the estimate of the $Y$ variable depends on the range of the independent ($X$) variable explored. If we explore a very small range of the $X$ variable, we won't be able to make much use of the regression. Also, extrapolation is not recommended.

22. $\hat{y}=-3.6+(3.1 \cdot 7)=18.1$

23. Most simply, since −5 is included in the confidence interval for the slope, we can conclude that the evidence is consistent with the claim at the 95% confidence level.
Using a t test:
$H_{0} : B_{1}=-5$
$H_{A} : B_{1} \neq-5$
$t_{\text { calculated }}=\frac{-5-(-4)}{1}=-1$
$t_{\text { critical }}=\pm 1.96$
Since $\left|t_{\mathrm{calc}}\right|<\left|t_{\mathrm{crit}}\right|$ we retain the null hypothesis that $B_{1}=-5$.

24. True. $t_{\text {critical}, df=23, \text { two-tailed, } \alpha=.02}=\pm 2.5$ and $t_{\text {critical}, df=23, \text { two-tailed, } \alpha=.01}=\pm 2.8$

25.
1. $80+1.5 \cdot 4=86$
2. No. Most business statisticians would not want to extrapolate that far. If someone did, the estimate would be 110, but some other factors probably come into play with 20 years.
26. d. one quarter

27. b. $r=−.77$

28.
1. $−.72$, $.32$
2. the $t$ value
3. the $t$ value

29.
1. The population value for $\beta_2$, the change that occurs in $Y$ with a unit change in $X_2$, when the other variables are held constant.
2. The population value for the standard error of the distribution of estimates of $\beta_2$.
3. $.8$, $.1$, $16 = 20 − 4$.
This section of this chapter is here in recognition that what we are now asking requires much more than a quick calculation of a ratio or a square root. Indeed, the use of regression analysis was almost non-existent before the middle of the last century and did not really become a widely used tool until perhaps the late 1960's and early 1970's. Even then the computational ability of even the largest IBM machines is laughable by today's standards. In the early days programs were developed by the researchers and shared. There was no market for something called "software" and certainly nothing called "apps", an entrant into the market only a few years old.

With the advent of the personal computer and the explosion of a vital software market we have a number of regression and statistical analysis packages to choose from. Each has its merits. We have chosen Microsoft Excel because of the wide-spread availability both on college campuses and in the post-college marketplace. Stata is an alternative and has features that will be important for more advanced econometrics study if you choose to follow this path. Even more advanced packages exist, but typically require the analyst to do some significant amount of programming to conduct their analysis.

The goal of this section is to demonstrate how to use Excel to run a regression and then to do so with an example of a simple version of a demand curve.

The first step to doing a regression using Excel is to load the program into your computer. If you have Excel you have the Analysis ToolPak, although you may not have it activated. The program calls upon a significant amount of space so is not loaded automatically.

To activate the Analysis ToolPak follow these steps: Click "File" > "Options" > "Add-ins" to bring up a menu of the add-in "ToolPaks". Select "Analysis ToolPak" and click "GO" next to "Manage: excel add-ins" near the bottom of the window. This will open a new window where you click "Analysis ToolPak" (make sure there is a green check mark in the box) and then click "OK". Now there should be an Analysis tab under the data menu. These steps are presented in the following screen shots.

Click "Data" then "Data Analysis" and then click "Regression" and "OK". Congratulations, you have made it to the regression window. The window asks for your inputs. Clicking the box next to the $Y$ and $X$ ranges will allow you to use the click and drag feature of Excel to select your input ranges. Excel has one odd quirk and that is that the click and drag feature requires that the independent variables, the $X$ variables, are all together, meaning that they form a single matrix. If your data are set up with the $Y$ variable between two columns of $X$ variables, Excel will not allow you to use click and drag. As an example, say Column A and Column C are independent variables and Column B is the $Y$ variable, the dependent variable. Excel will not allow you to click and drag the data ranges. The solution is to move the column with the $Y$ variable to column A and then you can click and drag. The same problem arises again if you want to run the regression with only some of the $X$ variables. You will need to set up the matrix so all the $X$ variables you wish to regress are in a tightly formed matrix. These steps are presented in the following screen shots.
Once you have selected the data for your regression analysis and told Excel which one is the dependent variable ($Y$) and which ones are the independent variables ($X$'s), you have several choices as to the parameters and how the output will be displayed. Refer to the screen shot in Figure $22$ under the "Input" section. If you check the "labels" box, the program will use the entry in the first row of each variable's column as its name in the output. You can enter an actual name, such as price or income in a demand analysis, in row one of the Excel spreadsheet for each variable and it will be displayed in the output. The level of significance can also be set by the analyst. This will not change the calculated t statistic, called the t stat, but it will alter the boundaries of the confidence intervals for the coefficients. A 95 percent confidence interval is always presented, but with a change in this setting you will also get intervals at other levels of confidence. Excel also will allow you to suppress the intercept. This forces the regression program to minimize the residual sum of squares under the condition that the estimated line must go through the origin. This is done in cases where the model has no meaning at any value other than zero for the intercept, that is, where the line should start at the origin. An example is an economic production function, a relationship between the number of units of an input, say hours of labor, and output. There is no meaning to positive output with zero workers. Once the data are entered and the choices are made, click "OK" and the results will be sent to a separate new worksheet by default. The output from Excel is presented in a way typical of other regression package programs. The first block of information gives the overall statistics of the regression: Multiple $R$, $R$ squared, and the $R$ squared adjusted for degrees of freedom, which is the one you want to report. You also get the standard error (of the estimate) and the number of observations in the regression. The second block of information is titled ANOVA, which stands for Analysis of Variance. Our interest in this section is the column marked $F$. This is the calculated $F$ statistic for the null hypothesis that all of the coefficients are equal to zero versus the alternative that at least one of the coefficients is not equal to zero. This hypothesis test was presented in 13.4 under "How Good is the Equation?" The next column gives the p value for this test under the title "Significance F". If the p value is less than, say, 0.05 (the calculated $F$ statistic is in the tail), we can say with 95 percent confidence that we cannot accept the null hypothesis that all the coefficients are equal to zero. This is a good thing: it means that at least one of the coefficients is significantly different from zero and thus does have an effect on the value of $Y$. The last block of information contains the hypothesis tests for the individual coefficients. The estimated coefficients, the intercept and the slopes, are listed first, then each standard error (of the estimated coefficient), followed by the t stat (the calculated Student's t statistic for the null hypothesis that the coefficient is equal to zero). We compare the t stat and the critical value of the Student's t, dependent on the degrees of freedom, and determine if we have enough evidence to reject the null that the variable has no effect on $Y$.
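For readers who want to see the same output blocks generated outside of Excel, here is a minimal sketch in Python using pandas and statsmodels (an assumption; the chapter itself uses only Excel). The file name rose_data.csv and its column names are hypothetical placeholders, not part of the original example.

```python
# A sketch of the regression workflow described above, run in Python instead of Excel.
# Assumes a CSV with hypothetical columns: quantity, price_roses, price_carnations, income.
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("rose_data.csv")                       # hypothetical file name

y = data["quantity"]                                      # dependent variable (Y)
X = data[["price_roses", "price_carnations", "income"]]   # independent variables (X's)
X = sm.add_constant(X)                                    # adds the intercept term b0

model = sm.OLS(y, X).fit()
print(model.summary())   # reports R-squared, adjusted R-squared, the overall F statistic and
                         # its p-value, and for each coefficient: the estimate, standard error,
                         # t stat, p-value, and 95% confidence interval -- the same blocks
                         # Excel's regression output displays
```

Suppressing the intercept, as described above, would correspond to omitting the add_constant step so that the fitted line is forced through the origin.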
Remember that we have set up the null hypothesis as the status quo, and our claim that we know what caused the $Y$ to change is in the alternative hypothesis. We want to reject the status quo and substitute our version of the world, the alternative hypothesis. The next column contains the p values for this hypothesis test, followed by the estimated lower and upper bounds of the confidence interval of the estimated slope parameter for the levels of confidence set by us at the beginning.
Estimating the Demand for Roses
Here is an example of using the Excel program to run a regression for a specific case: estimating the demand for roses. We are trying to estimate a demand curve; from economic theory we expect certain variables to affect how much of a good we buy. The relationship between the price of a good and the quantity demanded is the demand curve. Beyond that we have the demand function that includes other relevant variables: a person's income, the price of substitute goods, and perhaps other variables such as season of the year or the price of complementary goods. Quantity demanded will be our $Y$ variable, and price of roses, price of carnations and income will be our independent variables, the $X$ variables. For all of these variables theory tells us the expected relationship. For the price of the good in question, roses, theory predicts an inverse relationship, the negatively sloped demand curve. Theory also predicts the relationship between the quantity demanded of one good, here roses, and the price of a substitute, carnations in this example. Theory predicts that this should be a positive or direct relationship; as the price of the substitute falls, we substitute away from roses toward the cheaper carnations. A reduction in the price of the substitute generates a reduction in the demand for the good being analyzed, roses here; a reduction matched by a reduction is a positive relationship. For normal goods, theory also predicts a positive relationship; as our incomes rise we buy more of the good, roses. We expect these results because that is what is predicted by a hundred years of economic theory and research. Essentially we are testing these century-old hypotheses. The data gathered were determined by the model being tested. This should always be the case. One is not doing inferential statistics by throwing a mountain of data into a computer and asking the machine for a theory. Theory first, test follows. The data here are national average prices, and income is the nation's per capita personal income. Quantity demanded is total national annual sales of roses. These are annual time series data; we are tracking the rose market for the United States from 1984-2017, 33 observations. Because of the quirky way Excel requires the data to be entered into the regression package, it is best to have the independent variables (price of roses, price of carnations and income) next to each other on the spreadsheet. Once your data are entered into the spreadsheet it is always good to look at the data. Examine the range, the means and the standard deviations. Use your understanding of descriptive statistics from the very first part of this course. In large data sets you will not be able to "scan" the data. The Analysis ToolPak makes it easy to get the range, mean, standard deviations and other parameters of the distributions. You can also quickly get the correlations among the variables. Examine for outliers. Review the history. Did something happen?
Was there a labor strike, a change in import fees, or something else that makes these observations unusual? Do not take the data without question. There may have been a typo somewhere; who knows without review. Go to the regression window, enter the data, select the 95% confidence level, and click "OK". You can include the labels in the input range if you have put a title at the top of each column, but be sure to click the "labels" box on the main regression page if you do. The regression output should show up automatically on a new worksheet. The first result presented is the R-square, a measure of the strength of the correlation between $Y$ and $X_1$, $X_2$, and $X_3$ taken as a group. Our R-square here of 0.699, adjusted for degrees of freedom, means that 70% of the variation in $Y$, demand for roses, can be explained by variations in $X_1$, $X_2$, and $X_3$, price of roses, price of carnations and income. There is no statistical test to determine the "significance" of an $R^2$. Of course a higher $R^2$ is preferred, but it is really the significance of the coefficients that will determine the value of the theory being tested and which will become part of any policy discussion if they are demonstrated to be significantly different from zero. Looking at the third panel of output we can write the equation as: $Y=b_{0}+b_{1} X_{1}+b_{2} X_{2}+b_{3} X_{3}+e\nonumber$ where $b_0$ is the intercept, $b_1$ is the estimated coefficient on the price of roses, $b_2$ is the estimated coefficient on the price of carnations, $b_3$ is the estimated effect of income, and $e$ is the error term. The equation is written in Roman letters indicating that these are the estimated values and not the population parameters, the $\beta$'s. Our estimated equation is: $\text { Quantity of roses sold }=183,475-1.76 \text { Price of roses }+1.33 \text { Price of carnations }+3.03 \text { Income }\nonumber$ We first observe that the signs of the coefficients are as expected from theory. The demand curve is downward sloping with the negative sign for the price of roses. Further, the signs of both the price of carnations and income coefficients are positive, as would be expected from economic theory. Interpreting the coefficients can tell us the magnitude of the impact of a change in each variable on the demand for roses. It is the ability to do this which makes regression analysis such a valuable tool. The estimated coefficients tell us that an increase in the price of roses of one dollar will lead to a 1.76 unit reduction in the number of roses purchased. The price of carnations seems to play an important role in the demand for roses, as we see that increasing the price of carnations by one dollar would increase the demand for roses by 1.33 units as consumers substitute away from the now more expensive carnations. Similarly, increasing per capita income by one dollar will lead to a 3.03 unit increase in roses purchased. These results are in line with the predictions of economic theory with respect to all three variables included in this estimate of the demand for roses. It is important to have a theory first that predicts the significance or at least the direction of the coefficients. Without a theory to test, this research tool is not much more helpful than the correlation coefficients we learned about earlier. We cannot stop there, however. We need to first check whether our coefficients are significantly different from zero.
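To make the interpretation of these coefficients concrete, the short sketch below simply evaluates the estimated equation. The coefficient values are the ones reported above; the prices and income plugged in are made-up illustrative numbers, not data from the study.

```python
# Evaluate the estimated demand equation at illustrative (made-up) values.
b0, b1, b2, b3 = 183_475, -1.76, 1.33, 3.03      # estimates reported in the text

def predicted_roses(price_roses, price_carnations, income):
    return b0 + b1 * price_roses + b2 * price_carnations + b3 * income

base = predicted_roses(20.0, 15.0, 40_000)           # hypothetical starting point
dearer_roses = predicted_roses(21.0, 15.0, 40_000)   # price of roses one dollar higher

# The difference is just b1: a one-dollar rise in the price of roses
# lowers predicted purchases by 1.76 units, holding the other variables constant.
print(round(dearer_roses - base, 2))                 # -1.76
```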
We set up a hypothesis of: $H_{0} : \beta_{1}=0\nonumber$ $H_{\mathrm{a}} : \beta_{1} \neq 0\nonumber$ for all three coefficients in the regression. Recall from earlier that we will not be able to say definitively that our estimated $b_1$ is the actual population value $\beta_1$; rather, we can only say, with a $(1-\alpha)$ level of confidence, whether we can reject the null hypothesis that $\beta_1$ is zero. The analyst is making a claim that the price of roses causes an impact on quantity demanded; indeed, that each of the included variables has an impact on the quantity of roses demanded. The claim is therefore in the alternative hypothesis. It will take a high level of confidence, 0.95 in this case, to overthrow the null hypothesis, the status quo, that $\beta = 0$. In all regression hypothesis tests the claim is in the alternative, and the claim is that the theory has found a variable that has a significant impact on the $Y$ variable. The test statistic for this hypothesis follows the familiar standardizing formula, which counts the number of standard deviations, $t$, that the estimated value of the parameter, $b_1$, is away from the hypothesized value, $\beta_0$, which is zero in this case: $t_{c}=\frac{b_{1}-\beta_{0}}{S_{b_{1}}}\nonumber$ The computer calculates this test statistic and presents it as "t stat". You can find this value to the right of the standard error of the coefficient estimate. The standard error of the coefficient for $b_1$ is $S_{b_1}$ in the formula. To reach a conclusion we compare this test statistic with the critical value of the Student's $t$ at degrees of freedom $n-3-1=29$ and an area of 0.025 in each tail (a 5% significance level for a two-tailed test). Our $t$ stat for $b_1$ is approximately 5.90, which is greater than 2.045 (the critical value we looked up in the $t$-table), so we cannot accept our null hypothesis of no effect. We conclude that the price of roses has a significant effect because the calculated t value is in the tail. We conduct the same test for $b_2$ and $b_3$. For each variable, we find that we cannot accept the null hypothesis of no relationship because the calculated t-statistics are in the tail for each case, that is, greater than the critical value. All variables in this regression have been determined to have a significant effect on the demand for roses. These tests tell us whether or not an individual coefficient is significantly different from zero, but they do not address the overall quality of the model. We have seen that the R squared adjusted for degrees of freedom indicates this model with these three variables explains 70% of the variation in quantity of roses demanded. We can also conduct a second test of the model taken as a whole. This is the $F$ test presented in section 13.4 of this chapter. Because this is a multiple regression (more than one $X$), we use the $F$-test to determine if our coefficients collectively affect $Y$. The hypothesis is: $H_{0} : \beta_{1}=\beta_{2}=\ldots=\beta_{i}=0\nonumber$ $H_{a} : \text { at least one of the } \beta_{i} \text { is not equal to } 0\nonumber$ Under the ANOVA section of the output we find the calculated $F$ statistic for this hypothesis. For this example the $F$ statistic is 21.9. Again, comparing the calculated $F$ statistic with the critical value, given our desired level of significance and the degrees of freedom, will allow us to reach a conclusion. The best way to reach a conclusion for this statistical test is to use the p-value comparison rule.
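The critical values and p-values used in these tests can be pulled directly from the t and F distributions in software rather than from printed tables. The sketch below uses Python's scipy (an assumption; any statistical package will do) with the figures from this example: 33 observations, 3 independent variables, a t stat of roughly 5.90 for the price of roses, and an F statistic of 21.9.

```python
# Critical values and p-values for the coefficient t tests and the overall F test.
from scipy import stats

n, k = 33, 3                  # observations and number of independent variables
df_error = n - k - 1          # 29 degrees of freedom for the coefficient tests

alpha = 0.05
t_critical = stats.t.ppf(1 - alpha / 2, df_error)    # two-tailed critical value, about 2.045
print(t_critical)

t_stat_price = 5.90           # calculated t stat for the price of roses (from the output)
print(2 * stats.t.sf(abs(t_stat_price), df_error))   # two-tailed p-value, far below 0.05

f_stat = 21.9                 # calculated F statistic from the ANOVA block
print(stats.f.sf(f_stat, k, df_error))               # right-tail area ("Significance F"), well below 0.05
```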
The p-value is the area in the tail, given the calculated $F$ statistic. In essence the computer is finding the $F$ value in the table for us and calculating the p-value. In the Summary Output, under "Significance F", is this probability. For this example, it is calculated to be $2.6 \times 10^{-5}$, or 0.000026 (take 2.6 and move the decimal point five places to the left). This is an almost infinitesimal level of probability and is certainly less than our alpha level of .05 for a 5 percent level of significance. By not being able to accept the null hypothesis we conclude that this specification of this model has validity, because at least one of the estimated coefficients is significantly different from zero. Since $F$-calculated is greater than $F$-critical, we cannot accept $H_0$, meaning that $X_1$, $X_2$ and $X_3$ together have a significant effect on $Y$. The development of computing machinery and the software useful for academic and business research has made it possible to answer questions that just a few years ago we could not even formulate. Data are available in electronic format and can be moved into place for analysis in ways and at speeds that were unimaginable a decade ago. The sheer magnitude of data sets that can today be used for research and analysis gives us a higher quality of results than in days past. Even with only an Excel spreadsheet we can conduct very high level research. This section gives you the tools to conduct some of this very interesting research, with the only limit being your imagination.
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/13%3A_Linear_Regression_and_Correlation/13.11%3A_How_to_Use_Microsoft_Excel_for_Regression_Analysis.txt
English Phrases Written Mathematically When the English says: Interpret this as: $X$ is at least 4. $X \geq 4$ The minimum of $X$ is 4. $X \geq 4$ $X$ is no less than 4. $X \geq 4$ $X$ is greater than or equal to 4. $X \geq 4$ $X$ is at most 4. $X \leq 4$ The maximum of $X$ is 4. $X \leq 4$ $X$ is no more than 4. $X \leq 4$ $X$ is less than or equal to 4. $X \leq 4$ $X$ does not exceed 4. $X \leq 4$ $X$ is greater than 4. $X > 4$ $X$ is more than 4. $X > 4$ $X$ exceeds 4. $X > 4$ $X$ is less than 4. $X < 4$ There are fewer $X$ than 4. $X < 4$ $X$ is 4. $X = 4$ $X$ is equal to 4. $X = 4$ $X$ is the same as 4. $X = 4$ $X$ is not 4. $X \neq 4$ $X$ is not equal to 4. $X \neq 4$ $X$ is not the same as 4. $X \neq 4$ $X$ is different than 4. $X \neq 4$ Table B1 Symbols and Their Meanings Chapter (1st used) Symbol Spoken Meaning Sampling and Data $\sqrt{ }$ The square root of same Sampling and Data $\pi$ Pi 3.14159… (a specific number) Descriptive Statistics $Q_1$ Quartile one the first quartile Descriptive Statistics $Q_2$ Quartile two the second quartile Descriptive Statistics $Q_3$ Quartile three the third quartile Descriptive Statistics $IQR$ interquartile range $Q_3 – Q_1 = IQR$ Descriptive Statistics $\overline X$ $x$-bar sample mean Descriptive Statistics $\mu$ mu population mean Descriptive Statistics $s$ s sample standard deviation Descriptive Statistics $s^2$ $s$ squared sample variance Descriptive Statistics $\sigma$ sigma population standard deviation Descriptive Statistics $\sigma^2$ sigma squared population variance Descriptive Statistics $\Sigma$ capital sigma sum Probability Topics $\{ \}$ brackets set notation Probability Topics $S$ S sample space Probability Topics $A$ Event A event A Probability Topics $P(A)$ probability of A probability of A occurring Probability Topics $P(A|B)$ probability of A given B prob. of A occurring given B has occurred Probability Topics $P(A\cup B)$ prob. of A or B prob. of A or B or both occurring Probability Topics $P(A\cap B)$ prob. of A and B prob. of both A and B occurring (same time) Probability Topics $A^{\prime}$ A-prime, complement of A complement of A, not A Probability Topics $P(A^{\prime})$ prob. of complement of A same Probability Topics $G_1$ green on first pick same Probability Topics $P(G_1)$ prob. of green on first pick same Discrete Random Variables $PDF$ prob. density function same Discrete Random Variables $X$ X the random variable X Discrete Random Variables $X \sim$ the distribution of X same Discrete Random Variables $\geq$ greater than or equal to same Discrete Random Variables $\leq$ less than or equal to same Discrete Random Variables $=$ equal to same Discrete Random Variables $\neq$ not equal to same Continuous Random Variables $f(x)$ f of x function of x Continuous Random Variables $pdf$ prob. density function same Continuous Random Variables $U$ uniform distribution same Continuous Random Variables $Exp$ exponential distribution same Continuous Random Variables $f(x) =$ f of $X$ equals same Continuous Random Variables $m$ m decay rate (for exp. dist.) The Normal Distribution $N$ normal distribution same The Normal Distribution $z$ z-score same The Normal Distribution $Z$ standard normal dist. 
same The Central Limit Theorem $\overline X$ X-bar the random variable X-bar The Central Limit Theorem $\mu_{\overline{x}}$ mean of X-bars the average of X-bars The Central Limit Theorem $\sigma_{\overline{x}}$ standard deviation of X-bars same Confidence Intervals $CL$ confidence level same Confidence Intervals $CI$ confidence interval same Confidence Intervals $EBM$ error bound for a mean same Confidence Intervals $EBP$ error bound for a proportion same Confidence Intervals $t$ Student's t-distribution same Confidence Intervals $df$ degrees of freedom same Confidence Intervals $t_{\frac{\alpha}{2}}$ student t with α/2 area in right tail same Confidence Intervals $p^{\prime}$ p-prime sample proportion of success Confidence Intervals $q^{\prime}$ q-prime sample proportion of failure Hypothesis Testing $H_0$ H-naught, H-sub 0 null hypothesis Hypothesis Testing $H_a$ H-a, H-sub a alternate hypothesis Hypothesis Testing $H_1$ H-1, H-sub 1 alternate hypothesis Hypothesis Testing $\alpha$ alpha probability of Type I error Hypothesis Testing $\beta$ beta probability of Type II error Hypothesis Testing $\overline{X 1}-\overline{X 2}$ X1-bar minus X2-bar difference in sample means Hypothesis Testing $\mu_{1}-\mu_{2}$ mu-1 minus mu-2 difference in population means Hypothesis Testing $P_{1}^{\prime}-P_{2}^{\prime}$ P1-prime minus P2-prime difference in sample proportions Hypothesis Testing $p_{1}-p_{2}$ p1 minus p2 difference in population proportions Chi-Square Distribution $X^2$ Ky-square Chi-square Chi-Square Distribution $O$ Observed Observed frequency Chi-Square Distribution $E$ Expected Expected frequency Linear Regression and Correlation $y = a + bx$ y equals a plus b-x equation of a straight line Linear Regression and Correlation $\hat y$ y-hat estimated value of y Linear Regression and Correlation $r$ sample correlation coefficient same Linear Regression and Correlation $\varepsilon$ error term for a regression line same Linear Regression and Correlation $SSE$ Sum of Squared Errors same F-Distribution and ANOVA $F$ F-ratio F-ratio Table B2 Symbols and their Meanings Formulas Symbols you must know Population Sample $N$ Size $n$ $\mu$ Mean $\overline x$ $\sigma^2$ Variance $s^2$ $\sigma$ Standard deviation $s$ $p$ Proportion $p^{\prime}$ Single data set formulae Population Sample $\mu=E(x)=\frac{1}{N} \sum_{i=1}^{N}\left(x_{i}\right)$ Arithmetic mean $\overline{x}=\frac{1}{n} \sum_{i=1}^{n}\left(x_{i}\right)$ Geometric mean $\tilde{x}=\left(\prod_{i=1}^{n} X_{i}\right)^{\frac{1}{n}}$ $Q_{3}=\frac{3(n+1)}{4}, Q_{1}=\frac{(n+1)}{4}$ Inter-quartile range $I Q R=Q_{3}-Q_{1}$ $Q_{3}=\frac{3(n+1)}{4}, Q_{1}=\frac{(n+1)}{4}$ $\sigma^{2}=\frac{1}{N} \sum_{i=1}^{N}\left(x_{i}-\mu\right)^{2}$ Variance $s^{2}=\frac{1}{n} \sum_{i=1}^{n}\left(x_{i}-\overline{x}\right)^{2}$ Single data set formulae Population Sample $\mu=E(x)=\frac{1}{N} \sum_{i=1}^{N}\left(m_{i} \cdot f_{i}\right)$ Arithmetic mean $\overline{x}=\frac{1}{n} \sum_{i=1}^{n}\left(m_{i} \cdot f_{i}\right)$ Geometric mean $\tilde{x}=\left(\prod_{i=1}^{n} X_{i}\right)^{\frac{1}{n}}$ $\sigma^{2}=\frac{1}{N} \sum_{i=1}^{N}\left(m_{i}-\mu\right)^{2} \cdot f_{i}$ Variance $s^{2}=\frac{1}{n} \sum_{i=1}^{n}\left(m_{i}-\overline{x}\right)^{2} \cdot f_{i}$ $C V=\frac{\sigma}{\mu} \cdot 100$ Coefficient of variation $C V=\frac{s}{\overline{x}} \cdot 100$ Table B3 Basic probability rules $P(A \cap B)=P(A | B) \cdot P(B)$ Multiplication rule $P(A \cup B)=P(A)+P(B)-P(A \cap B)$ Addition rule $P(A \cap B)=P(A) \cdot P(B) \text { or } P(A | B)=P(A)$ 
Independence test Hypergeometric distribution formulae $n C x=\left(\begin{array}{c}{n} \ {x}\end{array}\right)=\frac{n !}{x !(n-x) !}$ Combinatorial equation $P(x)=\frac{\left(\begin{array}{c}{A} \ {x}\end{array}\right)\left(\begin{array}{c}{N-A} \ {n-x}\end{array}\right)}{\left(\begin{array}{c}{N} \ {n}\end{array}\right)}$ Probability equation $E(X)=\mu=n p$ Mean $\sigma^{2}=\left(\frac{N-n}{N-1}\right) n p(q)$ Variance Binomial distribution formulae $P(x)=\frac{n !}{x !(n-x) !} p^{x}(q)^{n-x}$ Probability density function $E(X)=\mu=n p$ Arithmetic mean $\sigma^{2}=n p(q)$ Variance Geometric distribution formulae $P(X=x)=(1-p)^{x-1}(p)$ Probability when $x$ is the first success. Probability when $x$ is the number of failures before first success $P(X=x)=(1-p)^{x}(p)$ $\mu=\frac{1}{p}$ Mean Mean $\mu=\frac{1-p}{p}$ $\sigma^{2}=\frac{(1-p)}{p^{2}}$ Variance Variance $\sigma^{2}=\frac{(1-p)}{p^{2}}$ Poisson distribution formulae $P(x)=\frac{e^{-\mu_{\mu} x}}{x !}$ Probability equation $E(X)=\mu$ Mean $\sigma^{2}=\mu$ Variance Uniform distribution formulae $f(x)=\frac{1}{b-a} \text { for } a \leq x \leq b$ PDF $E(X)=\mu=\frac{a+b}{2}$ Mean $\sigma^{2}=\frac{(b-a)^{2}}{12}$ Variance Exponential distribution formulae $P(X \leq x)=1-e^{-m x}$ Cumulative probability $E(X)=\mu=\frac{1}{m} \text { or } m=\frac{1}{\mu}$ Mean and decay factor $\sigma^{2}=\frac{1}{m^{2}}=\mu^{2}$ Variance Table B4 The following page of formulae requires the use of the "$Z$", "$t$", "$\chi^2$" or "$F$" tables. $Z=\frac{x-\mu}{\sigma}$ Z-transformation for normal distribution $Z=\frac{x-n p^{\prime}}{\sqrt{n p^{\prime}\left(q^{\prime}\right)}}$ Normal approximation to the binomial Probability (ignores subscripts) Hypothesis testing Confidence intervals [bracketed symbols equal margin of error] (subscripts denote locations on respective distribution tables) $Z_{c}=\frac{\overline{x}-\mu_{0}}{\frac{\sigma}{\sqrt{n}}}$ Interval for the population mean when sigma is known $\overline{x} \pm\left[Z_{(\alpha / 2)} \frac{\sigma}{\sqrt{n}}\right]$ $Z_{c}=\frac{\overline{x}-\mu_{0}}{\frac{s}{\sqrt{n}}}$ Interval for the population mean when sigma is unknown but $n>30$ $\overline{x} \pm\left[Z_{(\alpha / 2)} \frac{s}{\sqrt{n}}\right]$ $t_{c}=\frac{\overline{x}-\mu_{0}}{\frac{s}{\sqrt{n}}}$ Interval for the population mean when sigma is unknown but $n<30$ $\overline{x} \pm\left[t_{(n-1),(\alpha / 2)} \frac{s}{\sqrt{n}}\right]$ $Z_{c}=\frac{p^{\prime}-p_{0}}{\sqrt{\frac{p_{0} q_{0}}{n}}}$ Interval for the population proportion $p^{\prime} \pm\left[Z_{(\alpha / 2)} \sqrt{\frac{p^{\prime} q^{\prime}}{n}}\right]$ $t_{c}=\frac{\overline{d}-\delta_{0}}{s_{d}}$ Interval for difference between two means with matched pairs $\overline{d} \pm\left[t_{(n-1),(\alpha / 2)} \frac{s_{d}}{\sqrt{n}}\right]$ where $s_d$ is the deviation of the differences $Z_{c}=\frac{\left(\overline{x_{1}}-\overline{x_{2}}\right)-\delta_{0}}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}$ Interval for difference between two means when sigmas are known $\left(\overline{x}_{1}-\overline{x}_{2}\right) \pm\left[Z_{(\alpha / 2)} \sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}\right]$ $t_{c}=\frac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\delta_{0}}{\sqrt{\left(\frac{\left(s_{1}\right)^{2}}{n_{1}}+\frac{\left(s_{2}\right)^{2}}{n_{2}}\right)}}$ Interval for difference between two means with equal variances when sigmas are unknown $\left(\overline{x}_{1}-\overline{x}_{2}\right) \pm\left[t_{d f,(\alpha / 2)} 
\sqrt{\left(\frac{\left(s_{1}\right)^{2}}{n_{1}}+\frac{\left(s_{2}\right)^{2}}{n_{2}}\right)}\right] \text { where } d f=\frac{\left(\frac{\left(s_{1}\right)^{2}}{n_{1}}+\frac{\left(s_{2}\right)^{2}}{n_{2}}\right)^{2}}{\left(\frac{1}{n_{1}-1}\right)\left(\frac{\left(s_{1}\right)^{2}}{n_{1}}\right)+\left(\frac{1}{n_{2}-1}\right)\left(\frac{\left(s_{2}\right)^{2}}{n_{2}}\right)}$ $Z_{c}=\frac{\left(p_{1}^{\prime}-p_{2}^{\prime}\right)-\delta_{0}}{\sqrt{\frac{p_{1}^{\prime}\left(q_{1}^{\prime}\right)}{n_{1}}+\frac{p_{2}^{\prime}\left(q_{2}^{\prime}\right)}{n_{2}}}}$ Interval for difference between two population proportions $\left(p_{1}^{\prime}-p_{2}^{\prime}\right) \pm\left[Z_{(\alpha / 2)} \sqrt{\frac{p_{1}^{\prime}\left(q_{1}^{\prime}\right)}{n_{1}}+\frac{p_{2}^{\prime}\left(q_{2}^{\prime}\right)}{n_{2}}}\right]$ $\chi_{c}^{2}=\frac{(n-1) s^{2}}{\sigma_{0}^{2}}$ Tests for $GOF$, Independence, and Homogeneity $\chi_{c}^{2}=\sum \frac{(O-E)^{2}}{E}$where $O =$ observed values and $E =$ expected values $F_{c}=\frac{s_{1}^{2}}{s_{2}^{2}}$ Where $s_{1}^{2}$ is the sample variance which is the larger of the two sample variances The next 3 formule are for determining sample size with confidence intervals. (note: $E$ represents the margin of error) $n=\frac{Z^{2}\left(\frac{a}{2}\right)^{\sigma^{2}}}{E^{2}}$ Use when sigma is known $E=\overline{x}-\mu$ $n=\frac{Z^{2}\left(\frac{a}{2}\right)^{(0.25)}}{E^{2}}$ Use when $p^{\prime}$ is unknown $E=p^{\prime}-p$ $n=\frac{Z^{2}\left(\frac{a}{2}\right)^{\left[p^{\prime}\left(q^{\prime}\right)\right]}}{E^{2}}$ Use when p'p′ is uknown $E=p^{\prime}-p$ Table B5 Simple linear regression formulae for $y=a+b(x)$ $r=\frac{\Sigma[(x-\overline{x})(y-\overline{y})]}{\sqrt{\Sigma(x-\overline{x})^{2} * \Sigma(y-\overline{y})^{2}}}=\frac{S_{x y}}{S_{x} S_{y}}=\sqrt{\frac{S S R}{S S T}}$ Correlation coefficient $b=\frac{\Sigma[(x-\overline{x})(y-\overline{y})]}{\Sigma(x-\overline{x})^{2}}=\frac{S_{x y}}{S S_{x}}=r_{y, x}\left(\frac{s_{y}}{s_{x}}\right)$ Coefficient $b$ (slope) $a=\overline{y}-b(\overline{x})$ $y$-intercept $s_{e}^{2}=\frac{\Sigma\left(y_{i}-\hat{y}_{i}\right)^{2}}{n-k}=\frac{\sum_{i=1}^{n} e_{i}^{2}}{n-k}$ Estimate of the error variance $S_{b}=\frac{s_{e}^{2}}{\sqrt{\left(x_{i}-\overline{x}\right)^{2}}}=\frac{s_{e}^{2}}{(n-1) s_{x}^{2}}$ Standard error for coefficient $b$ $t_{c}=\frac{b-\beta_{0}}{s_b}$ Hypothesis test for coefficient $\beta$ $b \pm\left[t_{n-2, \alpha / 2} S_{b}\right]$ Interval for coefficient $\beta$ $\hat{y} \pm\left[t_{\alpha / 2} * s_{e}\left(\sqrt{\frac{1}{n}+\frac{\left(x_{p}-\overline{x}\right)^{2}}{s_{x}}}\right)\right]$ Interval for expected value of $y$ $\hat{y} \pm\left[t_{\alpha / 2} * s_{e}\left(\sqrt{1+\frac{1}{n}+\frac{\left(x_{p}-\overline{x}\right)^{2}}{s_{x}}}\right)\right]$ Prediction interval for an individual $y$ ANOVA formulae $S S R=\sum_{i=1}^{n}\left(\hat{y}_{i}-\overline{y}\right)^{2}$ Sum of squares regression $S S E=\sum_{i=1}^{n}\left(\hat{y}_{i}-\overline{y}_{i}\right)^{2}$ Sum of squares error $S S T=\sum_{i=1}^{n}\left(y_{i}-\overline{y}\right)^{2}$ Sum of squares total $R^{2}=\frac{S S R}{S S T}$ Coefficient of determination Table B6 The following is the breakdown of a one-way ANOVA table for linear regression. Source of variation Sum of squares Degrees of freedom Mean squares $F$-ratio Regression $SSR$ $1$ or $k−1$ $M S R=\frac{S S R}{d f_{R}}$ $F=\frac{M S R}{M S E}$ Error $SSE$ $n-k$ $M S E=\frac{S S E}{d f_{E}}$ Total $SST$ $n−1$ Table B7
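The distribution formulas collected in Table B4 can be checked numerically. The following sketch (Python with scipy, an assumption, and made-up parameter values) evaluates the binomial formula directly and compares it with a built-in distribution; the Poisson check uses the standard form of its probability function, $P(x)=\frac{\mu^{x} e^{-\mu}}{x!}$.

```python
# Check the binomial and Poisson probability formulas against scipy's distributions.
from math import comb, exp, factorial
from scipy import stats

# Binomial: P(x) = n!/(x!(n-x)!) * p^x * q^(n-x), with made-up n, p, x
n, p, x = 10, 0.3, 4
q = 1 - p
print(comb(n, x) * p**x * q**(n - x), stats.binom.pmf(x, n, p))   # the two values agree
print(n * p, n * p * q)                                           # mean np and variance npq

# Poisson: P(x) = mu^x * e^(-mu) / x!, with a made-up mean mu
mu, x = 2.5, 3
print(mu**x * exp(-mu) / factorial(x), stats.poisson.pmf(x, mu))  # the two values agree
```

The simple linear regression formulas of Table B6 can be verified the same way. This sketch computes the correlation coefficient, slope, and intercept directly from the sums on a small made-up data set and compares them with numpy's built-in routines.

```python
# Verify the correlation and least-squares formulas of Table B6 on made-up data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_bar, y_bar = x.mean(), y.mean()
s_xy = np.sum((x - x_bar) * (y - y_bar))

r = s_xy / np.sqrt(np.sum((x - x_bar) ** 2) * np.sum((y - y_bar) ** 2))   # correlation coefficient
b = s_xy / np.sum((x - x_bar) ** 2)                                       # slope
a = y_bar - b * x_bar                                                     # y-intercept

print(r, np.corrcoef(x, y)[0, 1])         # same value both ways
print(b, a, np.polyfit(x, y, 1))          # polyfit returns [slope, intercept]
```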
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/14%3A_Apppendices/14.00%3A_B__Mathematical_Phrases_Symbols_and_Formulas.txt
$F$ Distribution Degrees of freedom in the numerator Degrees of freedom in the denominator $p$ 1 2 3 4 5 6 7 8 9 1 .100 39.86 49.50 53.59 55.83 57.24 58.20 58.91 59.44 59.86 .050 161.45 199.50 215.71 224.58 230.16 233.99 236.77 238.88 240.54 .025 647.79 799.50 864.16 899.58 921.85 937.11 948.22 956.66 963.28 .010 4052.2 4999.5 5403.4 5624.6 5763.6 5859.0 5928.4 5981.1 6022.5 .001 405284 500000 540379 562500 576405 585937 592873 598144 602284 2 .100 8.53 9.00 9.16 9.24 9.29 9.33 9.35 9.37 9.38 .050 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38 .025 38.51 39.00 39.17 39.25 39.30 39.33 39.36 39.37 39.39 .010 98.50 99.00 99.17 99.25 99.30 99.33 99.36 99.37 99.39 .001 998.50 999.00 999.17 999.25 999.30 999.33 999.36 999.37 999.39 3 .100 5.54 5.46 5.39 5.34 5.31 5.28 5.27 5.25 5.24 .050 10.13 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 .025 17.44 16.04 15.44 15.10 14.88 14.73 14.62 14.54 14.47 .010 34.12 30.82 29.46 28.71 28.24 27.91 27.67 27.49 27.35 .001 167.03 148.50 141.11 137.10 134.58 132.85 131.58 130.62 129.86 4 .100 4.54 4.32 4.19 4.11 4.05 4.01 3.98 3.95 3.94 .050 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 .025 12.22 10.65 9.98 9.60 9.36 9.20 9.07 8.98 8.90 .010 21.20 18.00 16.69 15.98 15.52 15.21 14.98 14.80 14.66 .001 74.14 61.25 56.18 53.44 51.71 50.53 49.66 49.00 48.47 5 .100 4.06 3.78 3.62 3.52 3.45 3.40 3.37 3.34 3.32 .050 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 .025 10.01 8.43 7.76 7.39 7.15 6.98 6.85 6.76 6.68 .010 16.26 13.27 12.06 11.39 10.97 10.67 10.46 10.29 10.16 .001 47.18 37.12 33.20 31.09 29.75 28.83 28.16 27.65 27.24 6 .100 3.78 3.46 3.29 3.18 3.11 3.05 3.01 2.98 2.96 .050 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 .025 8.81 7.26 6.60 6.23 5.99 5.82 5.70 5.60 5.52 .010 13.75 10.92 9.78 9.15 8.75 8.47 8.26 8.10 7.98 .001 35.51 27.00 23.70 21.92 20.80 20.03 19.46 19.03 18.69 7 .100 3.59 3.26 3.07 2.96 2.88 2.83 2.78 2.75 2.72 .050 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 .025 8.07 6.54 5.89 5.52 5.29 5.12 4.99 4.90 4.82 .010 12.25 9.55 8.45 7.85 7.46 7.19 6.99 6.84 6.72 .001 29.25 21.69 18.77 17.20 16.21 15.52 15.02 14.63 14.33 Table A1 $F$ critical values Degrees of freedom in the numerator Degrees of freedom in the denominator $p$ 10 12 15 20 25 30 40 50 60 120 1000 1 .100 60.19 60.71 61.22 61.74 62.05 62.26 62.53 62.69 62.79 63.06 63.30 .050 241.88 243.91 245.95 248.01 249.26 250.10 251.14 251.77 252.20 253.25 254.19 .025 968.63 976.71 984.87 993.10 998.08 1001.4 1005.6 1008.1 1009.8 1014.0 1017.7 .010 6055.8 6106.3 6157.3 6208.7 6239.8 6260.6 6286.8 6302.5 6313.0 6339.4 6362.7 .001 605621 610668 615764 620908 624017 626099 628712 630285 631337 633972 636301 2 .100 9.39 9.41 9.42 9.44 9.45 9.46 9.47 9.47 9.47 9.48 9.49 .050 19.40 19.41 19.43 19.45 19.46 19.46 19.47 19.48 19.48 19.49 19.49 .025 39.40 39.41 39.43 39.45 39.46 39.46 39.47 39.48 39.48 39.49 39.50 .010 99.40 99.42 99.43 99.45 99.46 99.47 99.47 99.48 99.48 99.49 99.50 .001 999.40 999.42 999.43 999.45 999.46 999.47 999.47 999.48 999.48 999.49 999.50 3 .100 5.23 5.22 5.20 5.18 5.17 5.17 5.16 5.15 5.15 5.14 5.13 .050 8.79 8.74 8.70 8.66 8.63 8.62 8.59 8.58 8.57 8.55 8.53 .025 14.42 14.34 14.25 14.17 14.12 14.08 14.04 14.01 13.99 13.95 13.91 .010 27.23 27.05 26.87 26.69 26.58 26.50 26.41 26.35 26.32 26.22 26.14 .001 129.25 128.32 127.37 126.42 125.84 125.45 124.96 124.66 124.47 123.97 123.53 4 .100 3.92 3.90 3.87 3.84 3.83 3.82 3.80 3.80 3.79 3.78 3.76 .050 5.96 5.91 5.86 5.80 5.77 5.75 5.72 5.70 5.69 5.66 5.63 .025 8.84 8.75 8.66 8.56 8.50 8.46 8.41 8.38 8.36 8.31 8.26 .010 14.55 14.37 14.20 
14.02 13.91 13.84 13.75 13.69 13.65 13.56 13.47 .001 48.05 47.41 46.76 46.10 45.70 45.43 45.09 44.88 44.75 44.40 44.09 5 .100 3.30 3.27 3.24 3.21 3.19 3.17 3.16 3.15 3.14 3.12 3.11 .050 4.74 4.68 4.62 4.56 4.52 4.50 4.46 4.44 4.43 4.40 4.37 .025 6.62 6.52 6.43 6.33 6.27 6.23 6.18 6.14 6.12 6.07 6.02 .010 10.05 9.89 9.72 9.55 9.45 9.38 9.29 9.24 9.20 9.11 9.03 .001 26.92 26.42 25.91 25.39 25.08 24.87 24.60 24.44 24.33 24.06 23.82 6 .100 2.94 2.90 2.87 2.84 2.81 2.80 2.78 2.77 2.76 2.74 2.72 .050 4.06 4.00 3.94 3.87 3.83 3.81 3.77 3.75 3.74 3.70 3.67 .025 5.46 5.37 5.27 5.17 5.11 5.07 5.01 4.98 4.96 4.90 4.86 .010 7.87 7.72 7.56 7.40 7.30 7.23 7.14 7.09 7.06 6.97 6.89 .001 18.41 17.99 17.56 17.12 16.85 16.67 16.44 16.31 16.21 15.98 15.77 7 .100 2.70 2.67 2.63 2.59 2.57 2.56 2.54 2.52 2.51 2.49 2.47 .050 3.64 3.57 3.51 3.44 3.40 3.38 3.34 3.32 3.30 3.27 3.23 .025 4.76 4.67 4.57 4.47 4.40 4.36 4.31 4.28 4.25 4.20 4.15 .010 6.62 6.47 6.31 6.16 6.06 5.99 5.91 5.86 5.82 5.74 5.66 .001 14.08 13.71 13.32 12.93 12.69 12.53 12.33 12.20 12.12 11.91 11.72 Table A2 $F$ critical values (continued) Degrees of freedom in the numerator Degrees of freedom in the denominator $p$ 1 2 3 4 5 6 7 8 9 8 .100 3.46 3.11 2.92 2.81 2.73 2.67 2.62 2.59 2.56 .050 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 .025 7.57 6.06 5.42 5.05 4.82 4.65 4.53 4.43 4.36 .010 11.26 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91 .001 25.41 18.49 15.83 14.39 13.48 12.86 12.40 12.05 11.77 9 .100 3.36 3.01 2.81 2.69 2.61 2.55 2.51 2.47 2.44 .050 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 .025 7.21 5.71 5.08 4.72 4.48 4.32 4.20 4.10 4.03 .010 10.56 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35 .001 22.86 16.39 13.90 12.56 11.71 11.13 10.70 10.37 10.11 10 .100 3.29 2.92 2.73 2.61 2.52 2.46 2.41 2.38 2.35 .050 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 .025 6.94 5.46 4.83 4.47 4.24 4.07 3.95 3.85 3.78 .010 10.04 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94 .001 21.04 14.91 12.55 11.28 10.48 9.93 9.52 9.20 8.96 11 .100 3.23 2.86 2.66 2.54 2.45 2.39 2.34 2.30 2.27 .050 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 .025 6.72 5.26 4.63 4.28 4.04 3.88 3.76 3.66 3.59 .010 9.65 7.21 6.22 5.67 5.32 5.07 4.89 4.74 4.63 .001 19.69 13.81 11.56 10.35 9.58 9.05 8.66 8.35 8.12 12 .100 3.18 2.81 2.61 2.48 2.39 2.33 2.28 2.24 2.21 .050 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 .025 6.55 5.10 4.47 4.12 3.89 3.73 3.61 3.51 3.44 .010 9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39 .001 18.64 12.97 10.80 9.63 8.89 8.38 8.00 7.71 7.48 13 .100 3.14 2.76 2.56 2.43 2.35 2.28 2.23 2.20 2.16 .050 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 .025 6.41 4.97 4.35 4.00 3.77 3.60 3.48 3.39 3.31 .010 9.07 6.70 5.74 5.21 4.86 4.62 4.44 4.30 4.19 .001 17.82 12.31 10.21 9.07 8.35 7.86 7.49 7.21 6.98 14 .100 3.10 2.73 2.52 2.39 2.31 2.24 2.19 2.15 2.12 .050 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 .025 6.30 4.86 4.24 3.89 3.66 3.50 3.38 3.29 3.21 .010 8.86 6.51 5.56 5.04 4.69 4.46 4.28 4.14 4.03 .001 17.14 11.78 9.73 8.62 7.92 7.44 7.08 6.80 6.58 15 .100 3.07 2.70 2.49 2.36 2.27 2.21 2.16 2.12 2.09 .050 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 .025 6.20 4.77 4.15 3.80 3.58 3.41 3.29 3.20 3.12 .010 8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89 .001 16.59 11.34 9.34 8.25 7.57 7.09 6.74 6.47 6.26 Table A3 $F$ critical values (continued) Degrees of freedom in the numerator Degrees of freedom in the denominator $p$ 10 12 15 20 25 30 40 50 60 120 1000 8 .100 2.54 2.50 2.46 2.42 2.40 2.38 2.36 2.35 2.34 2.32 2.30 .050 3.35 3.28 3.22 3.15 3.11 3.08 3.04 3.02 3.01 2.97 2.93 .025 4.30 4.20 4.10 
4.00 3.94 3.89 3.84 3.81 3.78 3.73 3.68 .010 5.81 5.67 5.52 5.36 5.26 5.20 5.12 5.07 5.03 4.95 4.87 .001 11.54 11.19 10.84 10.48 10.26 10.11 9.92 9.80 9.73 9.53 9.36 9 .100 2.42 2.38 2.34 2.30 2.27 2.25 2.23 2.22 2.21 2.18 2.16 .050 3.14 3.07 3.01 2.94 2.89 2.86 2.83 2.80 2.79 2.75 2.71 .025 3.96 3.87 3.77 3.67 3.60 3.56 3.51 3.47 3.45 3.39 3.34 .010 5.26 5.11 4.96 4.81 4.71 4.65 4.57 4.52 4.48 4.40 4.32 .001 9.89 9.57 9.24 8.90 8.69 8.55 8.37 8.26 8.19 8.00 7.84 10 .100 2.32 2.28 2.24 2.20 2.17 2.16 2.13 2.12 2.11 2.08 2.06 .050 2.98 2.91 2.85 2.77 2.73 2.70 2.66 2.64 2.62 2.58 2.54 .025 3.72 3.62 3.52 3.42 3.35 3.31 3.26 3.22 3.20 3.14 3.09 .010 4.85 4.71 4.56 4.41 4.31 4.25 4.17 4.12 4.08 4.00 3.92 .001 8.75 8.45 8.13 7.80 7.60 7.47 7.30 7.19 7.12 6.94 6.78 11 .100 2.25 2.21 2.17 2.12 2.10 2.08 2.05 2.04 2.03 2.00 1.98 .050 2.85 2.79 2.72 2.65 2.60 2.57 2.53 2.51 2.49 2.45 2.41 .025 3.53 3.43 3.33 3.23 3.16 3.12 3.06 3.03 3.00 2.94 2.89 .010 4.54 4.40 4.25 4.10 4.01 3.94 3.86 3.81 3.78 3.69 3.61 .001 7.92 7.63 7.32 7.01 6.81 6.68 6.52 6.42 6.35 6.18 6.02 12 .100 2.19 2.15 2.10 2.06 2.03 2.01 1.99 1.97 1.96 1.93 1.91 .050 2.75 2.69 2.62 2.54 2.50 2.47 2.43 2.40 2.38 2.34 2.30 .025 3.37 3.28 3.18 3.07 3.01 2.96 2.91 2.87 2.85 2.79 2.73 .010 4.30 4.16 4.01 3.86 3.76 3.70 3.62 3.57 3.54 3.45 3.37 .001 7.29 7.00 6.71 6.40 6.22 6.09 5.93 5.83 5.76 5.59 5.44 13 .100 2.14 2.10 2.05 2.01 1.98 1.96 1.93 1.92 1.90 1.88 1.85 .050 2.67 2.60 2.53 2.46 2.41 2.38 2.34 2.31 2.30 2.25 2.21 .025 3.25 3.15 3.05 2.95 2.88 2.84 2.78 2.74 2.72 2.66 2.60 .010 4.10 3.96 3.82 3.66 3.57 3.51 3.43 3.38 3.34 3.25 3.18 .001 6.80 6.52 6.23 5.93 5.75 5.63 5.47 5.37 5.30 5.14 4.99 14 .100 2.10 2.05 2.01 1.96 1.93 1.91 1.89 1.87 1.86 1.83 1.80 .050 2.60 2.53 2.46 2.39 2.34 2.31 2.27 2.24 2.22 2.18 2.14 .025 3.15 3.05 2.95 2.84 2.78 2.73 2.67 2.64 2.61 2.55 2.50 .010 3.94 3.80 3.66 3.51 3.41 3.35 3.27 3.22 3.18 3.09 3.02 .001 6.40 6.13 5.85 5.56 5.38 5.25 5.10 5.00 4.94 4.77 4.62 15 .100 2.06 2.02 1.97 1.92 1.89 1.87 1.85 1.83 1.82 1.79 1.76 .050 2.54 2.48 2.40 2.33 2.28 2.25 2.20 2.18 2.16 2.11 2.07 .025 3.06 2.96 2.86 2.76 2.69 2.64 2.59 2.55 2.52 2.46 2.40 .010 3.80 3.67 3.52 3.37 3.28 3.21 3.13 3.08 3.05 2.96 2.88 .001 6.08 5.81 5.54 5.25 5.07 4.95 4.80 4.70 4.64 4.47 4.33 Table A4 $F$ critical values (continued) Degrees of freedom in the numerator Degrees of freedom in the denominator $p$ 1 2 3 4 5 6 7 8 9 16 .100 3.05 2.67 2.46 2.33 2.24 2.18 2.13 2.09 2.06 .050 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54 .025 6.12 4.69 4.08 3.73 3.50 3.34 3.22 3.12 3.05 .010 8.53 6.23 5.29 4.77 4.44 4.20 4.03 3.89 3.78 .001 16.12 10.97 9.01 7.94 7.27 6.80 6.46 6.19 5.98 17 .100 3.03 2.64 2.44 2.31 2.22 2.15 2.10 2.06 2.03 .050 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49 .025 6.04 4.62 4.01 3.66 3.44 3.28 3.16 3.06 2.98 .010 8.40 6.11 5.19 4.67 4.34 4.10 3.93 3.79 3.68 .001 15.72 10.66 8.73 7.68 7.02 6.56 6.22 5.96 5.75 18 .100 3.01 2.62 2.42 2.29 2.20 2.13 2.08 2.04 2.00 .050 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46 .025 5.98 4.56 3.95 3.61 3.38 3.22 3.10 3.01 2.93 .010 8.29 6.01 5.09 4.58 4.25 4.01 3.84 3.71 3.60 .001 15.38 10.39 8.49 7.46 6.81 6.35 6.02 5.76 5.56 19 .100 3.36 3.01 2.81 2.69 2.61 2.55 2.51 2.47 2.44 .050 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 .025 7.21 5.71 5.08 4.72 4.48 4.32 4.20 4.10 4.03 .010 10.56 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35 .001 22.86 16.39 13.90 12.56 11.71 11.13 10.70 10.37 10.11 20 .100 2.97 2.59 2.38 2.25 2.16 2.09 2.04 2.00 1.96 .050 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 
.025 5.87 4.46 3.86 3.51 3.29 3.13 3.01 2.91 2.84 .010 8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46 .001 14.82 9.95 8.10 7.10 6.46 6.02 5.69 5.44 5.24 21 .100 2.96 2.57 2.36 2.23 2.14 2.08 2.02 1.98 1.95 .050 4.32 3.47 3.07 2.84 2.68 2.57 2.49 2.42 2.37 .025 5.83 4.42 3.82 3.48 3.25 3.09 2.97 2.87 2.80 .010 8.02 5.78 4.87 4.37 4.04 3.81 3.64 3.51 3.40 .001 14.59 9.77 7.94 6.95 6.32 5.88 5.56 5.31 5.11 22 .100 2.95 2.56 2.35 2.22 2.13 2.06 2.01 1.97 1.93 .050 4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34 .025 5.79 4.38 3.78 3.44 3.22 3.05 2.93 2.84 2.76 .010 7.95 5.72 4.82 4.31 3.99 3.76 3.59 3.45 3.35 .001 14.38 9.61 7.80 6.81 6.19 5.76 5.44 5.19 4.99 23 .100 2.94 2.55 2.34 2.21 2.11 2.05 1.99 1.95 1.92 .050 4.28 3.42 3.03 2.80 2.64 2.53 2.44 2.37 2.32 .025 5.75 4.35 3.75 3.41 3.18 3.02 2.90 2.81 2.73 .010 7.88 5.66 4.76 4.26 3.94 3.71 3.54 3.41 3.30 .001 14.20 9.47 7.67 6.70 6.08 5.65 5.33 5.09 4.89 Table A5 $F$ critical values (continued) Degrees of freedom in the numerator Degrees of freedom in the denominator $p$ 10 12 15 20 25 30 40 50 60 120 1000 16 .100 2.03 1.99 1.94 1.89 1.86 1.84 1.81 1.79 1.78 1.75 1.72 .050 2.49 2.42 2.35 2.28 2.23 2.19 2.15 2.12 2.11 2.06 2.02 .025 2.99 2.89 2.79 2.68 2.61 2.57 2.51 2.47 2.45 2.38 2.32 .010 3.69 3.55 3.41 3.26 3.16 3.10 3.02 2.97 2.93 2.84 2.76 .001 5.81 5.55 5.27 4.99 4.82 4.70 4.54 4.45 4.39 4.23 4.08 17 .100 2.00 1.96 1.91 1.86 1.83 1.81 1.78 1.76 1.75 1.72 1.69 .050 2.45 2.38 2.31 2.23 2.18 2.15 2.10 2.08 2.06 2.01 1.97 .025 2.92 2.82 2.72 2.62 2.55 2.50 2.44 2.41 2.38 2.32 2.26 .010 3.59 3.46 3.31 3.16 3.07 3.00 2.92 2.87 2.83 2.75 2.66 .001 5.58 5.32 5.05 4.78 4.60 4.48 4.33 4.24 4.18 4.02 3.87 18 .100 1.98 1.93 1.89 1.84 1.80 1.78 1.75 1.74 1.72 1.69 1.66 .050 2.41 2.34 2.27 2.19 2.14 2.11 2.06 2.04 2.02 1.97 1.92 .025 2.87 2.77 2.67 2.56 2.49 2.44 2.38 2.35 2.32 2.26 2.20 .010 3.51 3.37 3.23 3.08 2.98 2.92 2.84 2.78 2.75 2.66 2.58 .001 5.39 5.13 4.87 4.59 4.42 4.30 4.15 4.06 4.00 3.84 3.69 19 .100 1.96 1.91 1.86 1.81 1.78 1.76 1.73 1.71 1.70 1.67 1.64 .050 2.38 2.31 2.23 2.16 2.11 2.07 2.03 2.00 1.98 1.93 1.88 .025 2.82 2.72 2.62 2.51 2.44 2.39 2.33 2.30 2.27 2.20 2.14 .010 3.43 3.30 3.15 3.00 2.91 2.84 2.76 2.71 2.67 2.58 2.50 .001 5.22 4.97 4.70 4.43 4.26 4.14 3.99 3.90 3.84 3.68 3.53 20 .100 1.94 1.89 1.84 1.79 1.76 1.74 1.71 1.69 1.68 1.64 1.61 .050 2.35 2.28 2.20 2.12 2.07 2.04 1.99 1.97 1.95 1.90 1.85 .025 2.77 2.68 2.57 2.46 2.40 2.35 2.29 2.25 2.22 2.16 2.09 .010 3.37 3.23 3.09 2.94 2.84 2.78 2.69 2.64 2.61 2.52 2.43 .001 5.08 4.82 4.56 4.29 4.12 4.00 3.86 3.77 3.70 3.54 3.40 21 .100 1.92 1.87 1.83 1.78 1.74 1.72 1.69 1.67 1.66 1.62 1.59 .050 2.32 2.25 2.18 2.10 2.05 2.01 1.96 1.94 1.92 1.87 1.82 .025 2.73 2.64 2.53 2.42 2.36 2.31 2.25 2.21 2.18 2.11 2.05 .010 3.31 3.17 3.03 2.88 2.79 2.72 2.64 2.58 2.55 2.46 2.37 .001 4.95 4.70 4.44 4.17 4.00 3.88 3.74 3.64 3.58 3.42 3.28 22 .100 1.90 1.86 1.81 1.76 1.73 1.70 1.67 1.65 1.64 1.60 1.57 .050 2.30 2.23 2.15 2.07 2.02 1.98 1.94 1.91 1.89 1.84 1.79 .025 2.70 2.60 2.50 2.39 2.32 2.27 2.21 2.17 2.14 2.08 2.01 .010 3.26 3.12 2.98 2.83 2.73 2.67 2.58 2.53 2.50 2.40 2.32 .001 4.83 4.58 4.33 4.06 3.89 3.78 3.63 3.54 3.48 3.32 3.17 23 .100 1.89 1.84 1.80 1.74 1.71 1.69 1.66 1.64 1.62 1.59 1.55 .050 2.27 2.20 2.13 2.05 2.00 1.96 1.91 1.88 1.86 1.81 1.76 .025 2.67 2.57 2.47 2.36 2.29 2.24 2.18 2.14 2.11 2.04 1.98 .010 3.21 3.07 2.93 2.78 2.69 2.62 2.54 2.48 2.45 2.35 2.27 .001 4.73 4.48 4.23 3.96 3.79 3.68 3.53 3.44 3.38 3.22 3.08 Table A6 $F$ critical values (continued) Degrees of freedom in 
the numerator Degrees of freedom in the denominator $p$ 1 2 3 4 5 6 7 8 9 24 .100 2.93 2.54 2.33 2.19 2.10 2.04 1.98 1.94 1.91 .050 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30 .025 5.72 4.32 3.72 3.38 3.15 2.99 2.87 2.78 2.70 .010 7.82 5.61 4.72 4.22 3.90 3.67 3.50 3.36 3.26 .001 14.03 9.34 7.55 6.59 5.98 5.55 5.23 4.99 4.80 25 .100 2.92 2.53 2.32 2.18 2.09 2.02 1.97 1.93 1.89 .050 4.24 3.39 2.99 2.76 2.60 2.49 2.40 2.34 2.28 .025 5.69 4.29 3.69 3.35 3.13 2.97 2.85 2.75 2.68 .010 7.77 5.57 4.68 4.18 3.85 3.63 3.46 3.32 3.22 .001 13.88 9.22 7.45 6.49 5.89 5.46 5.15 4.91 4.71 26 .100 2.91 2.52 2.31 2.17 2.08 2.01 1.96 1.92 1.88 .050 4.23 3.37 2.98 2.74 2.59 2.47 2.39 2.32 2.27 .025 5.66 4.27 3.67 3.33 3.10 2.94 2.82 2.73 2.65 .010 7.72 5.53 4.64 4.14 3.82 3.59 3.42 3.29 3.18 .001 13.74 9.12 7.36 6.41 5.80 5.38 5.07 4.83 4.64 27 .100 2.90 2.51 2.30 2.17 2.07 2.00 1.95 1.91 1.87 .050 4.21 3.35 2.96 2.73 2.57 2.46 2.37 2.31 2.25 .025 5.63 4.24 3.65 3.31 3.08 2.92 2.80 2.71 2.63 .010 7.68 5.49 4.60 4.11 3.78 3.56 3.39 3.26 3.15 .001 13.61 9.02 7.27 6.33 5.73 5.31 5.00 4.76 4.57 28 .100 2.89 2.50 2.29 2.16 2.06 2.00 1.94 1.90 1.87 .050 4.20 3.34 2.95 2.71 2.56 2.45 2.36 2.29 2.24 .025 5.61 4.22 3.63 3.29 3.06 2.90 2.78 2.69 2.61 .010 7.64 5.45 4.57 4.07 3.75 3.53 3.36 3.23 3.12 .001 13.50 8.93 7.19 6.25 5.66 5.24 4.93 4.69 4.50 29 .100 2.89 2.50 2.28 2.15 2.06 1.99 1.93 1.89 1.86 .050 4.18 3.33 2.93 2.70 2.55 2.43 2.35 2.28 2.22 .025 5.59 4.20 3.61 3.27 3.04 2.88 2.76 2.67 2.59 .010 7.60 5.42 4.54 4.04 3.73 3.50 3.33 3.20 3.09 .001 13.39 8.85 7.12 6.19 5.59 5.18 4.87 4.64 4.45 30 .100 2.88 2.49 2.28 2.14 2.05 1.98 1.93 1.88 1.85 .050 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 .025 5.57 4.18 3.59 3.25 3.03 2.87 2.75 2.65 2.57 .010 7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07 .001 13.29 8.77 7.05 6.12 5.53 5.12 4.82 4.58 4.39 40 .100 2.84 2.44 2.23 2.09 2.00 1.93 1.87 1.83 1.79 .050 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 .025 5.42 4.05 3.46 3.13 2.90 2.74 2.62 2.53 2.45 .010 7.31 5.18 4.31 3.83 3.51 3.29 3.12 2.99 2.89 .001 12.61 8.25 6.59 5.70 5.13 4.73 4.44 4.21 4.02 Table A7 $F$ critical values (continued) Degrees of freedom in the numerator Degrees of freedom in the denominator $p$ 10 12 15 20 25 30 40 50 60 120 1000 24 .100 1.88 1.83 1.78 1.73 1.70 1.67 1.64 1.62 1.61 1.57 1.54 .050 2.25 2.18 2.11 2.03 1.97 1.94 1.89 1.86 1.84 1.79 1.74 .025 2.64 2.54 2.44 2.33 2.26 2.21 2.15 2.11 2.08 2.01 1.94 .010 3.17 3.03 2.89 2.74 2.64 2.58 2.49 2.44 2.40 2.31 2.22 .001 4.64 4.39 4.14 3.87 3.71 3.59 3.45 3.36 3.29 3.14 2.99 25 .100 1.87 1.82 1.77 1.72 1.68 1.66 1.63 1.61 1.59 1.56 1.52 .050 2.24 2.16 2.09 2.01 1.96 1.92 1.87 1.84 1.82 1.77 1.72 .025 2.61 2.51 2.41 2.30 2.23 2.18 2.12 2.08 2.05 1.98 1.91 .010 3.13 2.99 2.85 2.70 2.60 2.54 2.45 2.40 2.36 2.27 2.18 .001 4.56 4.31 4.06 3.79 3.63 3.52 3.37 3.28 3.22 3.06 2.91 26 .100 1.86 1.81 1.76 1.71 1.67 1.65 1.61 1.59 1.58 1.54 1.51 .050 2.22 2.15 2.07 1.99 1.94 1.90 1.85 1.82 1.80 1.75 1.70 .025 2.59 2.49 2.39 2.28 2.21 2.16 2.09 2.05 2.03 1.95 1.89 .010 3.09 2.96 2.81 2.66 2.57 2.50 2.42 2.36 2.33 2.23 2.14 .001 4.48 4.24 3.99 3.72 3.56 3.44 3.30 3.21 3.15 2.99 2.84 27 .100 1.85 1.80 1.75 1.70 1.66 1.64 1.60 1.58 1.57 1.53 1.50 .050 2.20 2.13 2.06 1.97 1.92 1.88 1.84 1.81 1.79 1.73 1.68 .025 2.57 2.47 2.36 2.25 2.18 2.13 2.07 2.03 2.00 1.93 1.86 .010 3.06 2.93 2.78 2.63 2.54 2.47 2.38 2.33 2.29 2.20 2.11 .001 4.41 4.17 3.92 3.66 3.49 3.38 3.23 3.14 3.08 2.92 2.78 28 .100 1.84 1.79 1.74 1.69 1.65 1.63 1.59 1.57 1.56 1.52 1.48 .050 2.19 2.12 
2.04 1.96 1.91 1.87 1.82 1.79 1.77 1.71 1.66 .025 2.55 2.45 2.34 2.23 2.16 2.11 2.05 2.01 1.98 1.91 1.84 .010 3.03 2.90 2.75 2.60 2.51 2.44 2.35 2.30 2.26 2.17 2.08 .001 4.35 4.11 3.86 3.60 3.43 3.32 3.18 3.09 3.02 2.86 2.72 29 .100 1.83 1.78 1.73 1.68 1.64 1.62 1.58 1.56 1.55 1.51 1.47 .050 2.18 2.10 2.03 1.94 1.89 1.85 1.81 1.77 1.75 1.70 1.65 .025 2.53 2.43 2.32 2.21 2.14 2.09 2.03 1.99 1.96 1.89 1.82 .010 3.00 2.87 2.73 2.57 2.48 2.41 2.33 2.27 2.23 2.14 2.05 .001 4.29 4.05 3.80 3.54 3.38 3.27 3.12 3.03 2.97 2.81 2.66 30 .100 1.82 1.77 1.72 1.67 1.63 1.61 1.57 1.55 1.54 1.50 1.46 .050 2.16 2.09 2.01 1.93 1.88 1.84 1.79 1.76 1.74 1.68 1.63 .025 2.51 2.41 2.31 2.20 2.12 2.07 2.01 1.97 1.94 1.87 1.80 .010 2.98 2.84 2.70 2.55 2.45 2.39 2.30 2.25 2.21 2.11 2.02 .001 4.24 4.00 3.75 3.49 3.33 3.22 3.07 2.98 2.92 2.76 2.61 40 .100 1.76 1.71 1.66 1.61 1.57 1.54 1.51 1.48 1.47 1.42 1.38 .050 2.08 2.00 1.92 1.84 1.78 1.74 1.69 1.66 1.64 1.58 1.52 .025 2.39 2.29 2.18 2.07 1.99 1.94 1.88 1.83 1.80 1.72 1.65 .010 2.80 2.66 2.52 2.37 2.27 2.20 2.11 2.06 2.02 1.92 1.82 .001 3.87 3.64 3.40 3.14 2.98 2.87 2.73 2.64 2.57 2.41 2.25 Table A8 $F$ critical values (continued) Degrees of freedom in the numerator Degrees of freedom in the denominator $p$ 1 2 3 4 5 6 7 8 9 50 .100 2.81 2.41 2.20 2.06 1.97 1.90 1.84 1.80 1.76 .050 4.03 3.18 2.79 2.56 2.40 2.29 2.20 2.13 2.07 .025 5.34 3.97 3.39 3.05 2.83 2.67 2.55 2.46 2.38 .010 7.17 5.06 4.20 3.72 3.41 3.19 3.02 2.89 2.78 .001 12.22 7.96 6.34 5.46 4.90 4.51 4.22 4.00 3.82 60 .100 2.79 2.39 2.18 2.04 1.95 1.87 1.82 1.77 1.74 .050 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 .025 5.29 3.93 3.34 3.01 2.79 2.63 2.51 2.41 2.33 .010 7.08 4.98 4.13 3.65 3.34 3.12 2.95 2.82 2.72 .001 11.97 7.77 6.17 5.31 4.76 4.37 4.09 3.86 3.69 100 .100 2.76 2.36 2.14 2.00 1.91 1.83 1.78 1.73 1.69 .050 3.94 3.09 2.70 2.46 2.31 2.19 2.10 2.03 1.97 .025 5.18 3.83 3.25 2.92 2.70 2.54 2.42 2.32 2.24 .010 6.90 4.82 3.98 3.51 3.21 2.99 2.82 2.69 2.59 .001 11.50 7.41 5.86 5.02 4.48 4.11 3.83 3.61 3.44 200 .100 2.73 2.33 2.11 1.97 1.88 1.80 1.75 1.70 1.66 .050 3.89 3.04 2.65 2.42 2.26 2.14 2.06 1.98 1.93 .025 5.10 3.76 3.18 2.85 2.63 2.47 2.35 2.26 2.18 .010 6.76 4.71 3.88 3.41 3.11 2.89 2.73 2.60 2.50 .001 11.15 7.15 5.63 4.81 4.29 3.92 3.65 3.43 3.26 1000 .100 2.71 2.31 2.09 1.95 1.85 1.78 1.72 1.68 1.64 .050 3.85 3.00 2.61 2.38 2.22 2.11 2.02 1.95 1.89 .025 5.04 3.70 3.13 2.80 2.58 2.42 2.30 2.20 2.13 .010 6.66 4.63 3.80 3.34 3.04 2.82 2.66 2.53 2.43 .001 10.89 6.96 5.46 4.65 4.14 3.78 3.51 3.30 3.13 Table A9 $F$ critical values (continued) Degrees of freedom in the numerator Degrees of freedom in the denominator $p$ 10 12 15 20 25 30 40 50 60 120 1000 50 .100 1.73 1.68 1.63 1.57 1.53 1.50 1.46 1.44 1.42 1.38 1.33 .050 2.03 1.95 1.87 1.78 1.73 1.69 1.63 1.60 1.58 1.51 1.45 .025 2.32 2.22 2.11 1.99 1.92 1.87 1.80 1.75 1.72 1.64 1.56 .010 2.70 2.56 2.42 2.27 2.17 2.10 2.01 1.95 1.91 1.80 1.70 .001 3.67 3.44 3.20 2.95 2.79 2.68 2.53 2.44 2.38 2.21 2.05 60 .100 1.71 1.66 1.60 1.54 1.50 1.48 1.44 1.41 1.40 1.35 1.30 .050 1.99 1.92 1.84 1.75 1.69 1.65 1.59 1.56 1.53 1.47 1.40 .025 2.27 2.17 2.06 1.94 1.87 1.82 1.74 1.70 1.67 1.58 1.49 .010 2.63 2.50 2.35 2.20 2.10 2.03 1.94 1.88 1.84 1.73 1.62 .001 3.54 3.32 3.08 2.83 2.67 2.55 2.41 2.32 2.25 2.08 1.92 100 .100 1.66 1.61 1.56 1.49 1.45 1.42 1.38 1.35 1.34 1.28 1.22 .050 1.93 1.85 1.77 1.68 1.62 1.57 1.52 1.48 1.45 1.38 1.30 .025 2.18 2.08 1.97 1.85 1.77 1.71 1.64 1.59 1.56 1.46 1.36 .010 2.50 2.37 2.22 2.07 1.97 1.89 1.80 1.74 1.69 1.57 1.45 .001 
3.30 3.07 2.84 2.59 2.43 2.32 2.17 2.08 2.01 1.83 1.64 200 .100 1.63 1.58 1.52 1.46 1.41 1.38 1.34 1.31 1.29 1.23 1.16 .050 1.88 1.80 1.72 1.62 1.56 1.52 1.46 1.41 1.39 1.30 1.21 .025 2.11 2.01 1.90 1.78 1.70 1.64 1.56 1.51 1.47 1.37 1.25 .010 2.41 2.27 2.13 1.97 1.87 1.79 1.69 1.63 1.58 1.45 1.30 .001 3.12 2.90 2.67 2.42 2.26 2.15 2.00 1.90 1.83 1.64 1.43 1000 .100 1.61 1.55 1.49 1.43 1.38 1.35 1.30 1.27 1.25 1.18 1.08 .050 1.84 1.76 1.68 1.58 1.52 1.47 1.41 1.36 1.33 1.24 1.11 .025 2.06 1.96 1.85 1.72 1.64 1.58 1.50 1.45 1.41 1.29 1.13 .010 2.34 2.20 2.06 1.90 1.79 1.72 1.61 1.54 1.50 1.35 1.16 .001 2.99 2.77 2.54 2.30 2.14 2.02 1.87 1.77 1.69 1.49 1.22 Table A10 $F$ critical values (continued) Numerical entries represent the probability that a standard normal random variable is between $0$ and $z$ where $z=\frac{x-\mu}{\sigma}$. Standard Normal Probability Distribution: $Z$ Table $z$ 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.0 0.0000 0.0040 0.0080 0.0120 0.0160 0.0199 0.0239 0.0279 0.0319 0.0359 0.1 0.0398 0.0438 0.0478 0.0517 0.0557 0.0596 0.0636 0.0675 0.0714 0.0753 0.2 0.0793 0.0832 0.0871 0.0910 0.0948 0.0987 0.1026 0.1064 0.1103 0.1141 0.3 0.1179 0.1217 0.1255 0.1293 0.1331 0.1368 0.1406 0.1443 0.1480 0.1517 0.4 0.1554 0.1591 0.1628 0.1664 0.1700 0.1736 0.1772 0.1808 0.1844 0.1879 0.5 0.1915 0.1950 0.1985 0.2019 0.2054 0.2088 0.2123 0.2157 0.2190 0.2224 0.6 0.2257 0.2291 0.2324 0.2357 0.2389 0.2422 0.2454 0.2486 0.2517 0.2549 0.7 0.2580 0.2611 0.2642 0.2673 0.2704 0.2734 0.2764 0.2794 0.2823 0.2852 0.8 0.2881 0.2910 0.2939 0.2967 0.2995 0.3023 0.3051 0.3078 0.3106 0.3133 0.9 0.3159 0.3186 0.3212 0.3238 0.3264 0.3289 0.3315 0.3340 0.3365 0.3389 1.0 0.3413 0.3438 0.3461 0.3485 0.3508 0.3531 0.3554 0.3577 0.3599 0.3621 1.1 0.3643 0.3665 0.3686 0.3708 0.3729 0.3749 0.3770 0.3790 0.3810 0.3830 1.2 0.3849 0.3869 0.3888 0.3907 0.3925 0.3944 0.3962 0.3980 0.3997 0.4015 1.3 0.4032 0.4049 0.4066 0.4082 0.4099 0.4115 0.4131 0.4147 0.4162 0.4177 1.4 0.4192 0.4207 0.4222 0.4236 0.4251 0.4265 0.4279 0.4292 0.4306 0.4319 1.5 0.4332 0.4345 0.4357 0.4370 0.4382 0.4394 0.4406 0.4418 0.4429 0.4441 1.6 0.4452 0.4463 0.4474 0.4484 0.4495 0.4505 0.4515 0.4525 0.4535 0.4545 1.7 0.4554 0.4564 0.4573 0.4582 0.4591 0.4599 0.4608 0.4616 0.4625 0.4633 1.8 0.4641 0.4649 0.4656 0.4664 0.4671 0.4678 0.4686 0.4693 0.4699 0.4706 1.9 0.4713 0.4719 0.4726 0.4732 0.4738 0.4744 0.4750 0.4756 0.4761 0.4767 2.0 0.4772 0.4778 0.4783 0.4788 0.4793 0.4798 0.4803 0.4808 0.4812 0.4817 2.1 0.4821 0.4826 0.4830 0.4834 0.4838 0.4842 0.4846 0.4850 0.4854 0.4857 2.2 0.4861 0.4864 0.4868 0.4871 0.4875 0.4878 0.4881 0.4884 0.4887 0.4890 2.3 0.4893 0.4896 0.4898 0.4901 0.4904 0.4906 0.4909 0.4911 0.4913 0.4916 2.4 0.4918 0.4920 0.4922 0.4925 0.4927 0.4929 0.4931 0.4932 0.4934 0.4936 2.5 0.4938 0.4940 0.4941 0.4943 0.4945 0.4946 0.4948 0.4949 0.4951 0.4952 2.6 0.4953 0.4955 0.4956 0.4957 0.4959 0.4960 0.4961 0.4962 0.4963 0.4964 2.7 0.4965 0.4966 0.4967 0.4968 0.4969 0.4970 0.4971 0.4972 0.4973 0.4974 2.8 0.4974 0.4975 0.4976 0.4977 0.4977 0.4978 0.4979 0.4979 0.4980 0.4981 2.9 0.4981 0.4982 0.4982 0.4983 0.4984 0.4984 0.4985 0.4985 0.4986 0.4986 3.0 0.4987 0.4987 0.4987 0.4988 0.4988 0.4989 0.4989 0.4989 0.4990 0.4990 3.1 0.4990 0.4991 0.4991 0.4991 0.4992 0.4992 0.4992 0.4992 0.4993 0.4993 3.2 0.4993 0.4993 0.4994 0.4994 0.4994 0.4994 0.4994 0.4995 0.4995 0.4995 3.3 0.4995 0.4995 0.4995 0.4996 0.4996 0.4996 0.4996 0.4996 0.4996 0.4997 3.4 0.4997 0.4997 0.4997 0.4997 0.4997 0.4997 0.4997 0.4997 0.4997 0.4998 Table A11 
Standard Normal Distribution

Student's $t$ Distribution
For selected probabilities, a, the table shows the values $t_{v,a}$ such that $P(t_v > t_{v,a}) = a$, where $t_v$ is a Student's $t$ random variable with $v$ degrees of freedom. For example, the probability is .10 that a Student's $t$ random variable with 10 degrees of freedom exceeds 1.372.

$v$   0.10    0.05    0.025   0.01    0.005   0.001
1     3.078   6.314   12.706  31.821  63.657  318.313
2     1.886   2.920   4.303   6.965   9.925   22.327
3     1.638   2.353   3.182   4.541   5.841   10.215
4     1.533   2.132   2.776   3.747   4.604   7.173
5     1.476   2.015   2.571   3.365   4.032   5.893
6     1.440   1.943   2.447   3.143   3.707   5.208
7     1.415   1.895   2.365   2.998   3.499   4.782
8     1.397   1.860   2.306   2.896   3.355   4.499
9     1.383   1.833   2.262   2.821   3.250   4.296
10    1.372   1.812   2.228   2.764   3.169   4.143
11    1.363   1.796   2.201   2.718   3.106   4.024
12    1.356   1.782   2.179   2.681   3.055   3.929
13    1.350   1.771   2.160   2.650   3.012   3.852
14    1.345   1.761   2.145   2.624   2.977   3.787
15    1.341   1.753   2.131   2.602   2.947   3.733
16    1.337   1.746   2.120   2.583   2.921   3.686
17    1.333   1.740   2.110   2.567   2.898   3.646
18    1.330   1.734   2.101   2.552   2.878   3.610
19    1.328   1.729   2.093   2.539   2.861   3.579
20    1.325   1.725   2.086   2.528   2.845   3.552
21    1.323   1.721   2.080   2.518   2.831   3.527
22    1.321   1.717   2.074   2.508   2.819   3.505
23    1.319   1.714   2.069   2.500   2.807   3.485
24    1.318   1.711   2.064   2.492   2.797   3.467
25    1.316   1.708   2.060   2.485   2.787   3.450
26    1.315   1.706   2.056   2.479   2.779   3.435
27    1.314   1.703   2.052   2.473   2.771   3.421
28    1.313   1.701   2.048   2.467   2.763   3.408
29    1.311   1.699   2.045   2.462   2.756   3.396
30    1.310   1.697   2.042   2.457   2.750   3.385
40    1.303   1.684   2.021   2.423   2.704   3.307
60    1.296   1.671   2.000   2.390   2.660   3.232
100   1.290   1.660   1.984   2.364   2.626   3.174
∞     1.282   1.645   1.960   2.326   2.576   3.090
Table A12 Probability of Exceeding the Critical Value

$\chi^2$ Probability Distribution
df    0.995   0.990   0.975   0.950   0.900   0.100   0.050   0.025   0.010   0.005
1     0.000   0.000   0.001   0.004   0.016   2.706   3.841   5.024   6.635   7.879
2     0.010   0.020   0.051   0.103   0.211   4.605   5.991   7.378   9.210   10.597
3     0.072   0.115   0.216   0.352   0.584   6.251   7.815   9.348   11.345  12.838
4     0.207   0.297   0.484   0.711   1.064   7.779   9.488   11.143  13.277  14.860
5     0.412   0.554   0.831   1.145   1.610   9.236   11.070  12.833  15.086  16.750
6     0.676   0.872   1.237   1.635   2.204   10.645  12.592  14.449  16.812  18.548
7     0.989   1.239   1.690   2.167   2.833   12.017  14.067  16.013  18.475  20.278
8     1.344   1.646   2.180   2.733   3.490   13.362  15.507  17.535  20.090  21.955
9     1.735   2.088   2.700   3.325   4.168   14.684  16.919  19.023  21.666  23.589
10    2.156   2.558   3.247   3.940   4.865   15.987  18.307  20.483  23.209  25.188
11    2.603   3.053   3.816   4.575   5.578   17.275  19.675  21.920  24.725  26.757
12    3.074   3.571   4.404   5.226   6.304   18.549  21.026  23.337  26.217  28.300
13    3.565   4.107   5.009   5.892   7.042   19.812  22.362  24.736  27.688  29.819
14    4.075   4.660   5.629   6.571   7.790   21.064  23.685  26.119  29.141  31.319
15    4.601   5.229   6.262   7.261   8.547   22.307  24.996  27.488  30.578  32.801
16    5.142   5.812   6.908   7.962   9.312   23.542  26.296  28.845  32.000  34.267
17    5.697   6.408   7.564   8.672   10.085  24.769  27.587  30.191  33.409  35.718
18    6.265   7.015   8.231   9.390   10.865  25.989  28.869  31.526  34.805  37.156
19    6.844   7.633   8.907   10.117  11.651  27.204  30.144  32.852  36.191  38.582
20    7.434   8.260   9.591   10.851  12.443  28.412  31.410  34.170  37.566  39.997
21    8.034   8.897   10.283  11.591  13.240  29.615  32.671  35.479  38.932  41.401
22    8.643   9.542   10.982  12.338  14.041  30.813  33.924  36.781  40.289  42.796
23    9.260   10.196  11.689  13.091  14.848  32.007  35.172  38.076  41.638  44.181
24    9.886   10.856  12.401  13.848  15.659  33.196  36.415  39.364  42.980  45.559
25    10.520  11.524  13.120  14.611  16.473  34.382  37.652  40.646  44.314  46.928
26    11.160  12.198  13.844  15.379  17.292  35.563  38.885  41.923  45.642  48.290
27    11.808  12.879  14.573  16.151  18.114  36.741  40.113  43.195  46.963  49.645
28    12.461  13.565  15.308  16.928  18.939  37.916  41.337  44.461  48.278  50.993
29    13.121  14.256  16.047  17.708  19.768  39.087  42.557  45.722  49.588  52.336
30    13.787  14.953  16.791  18.493  20.599  40.256  43.773  46.979  50.892  53.672
40    20.707  22.164  24.433  26.509  29.051  51.805  55.758  59.342  63.691  66.766
50    27.991  29.707  32.357  34.764  37.689  63.167  67.505  71.420  76.154  79.490
60    35.534  37.485  40.482  43.188  46.459  74.397  79.082  83.298  88.379  91.952
70    43.275  45.442  48.758  51.739  55.329  85.527  90.531  95.023  100.425 104.215
80    51.172  53.540  57.153  60.391  64.278  96.578  101.879 106.629 112.329 116.321
90    59.196  61.754  65.647  69.126  73.291  107.565 113.145 118.136 124.116 128.299
100   67.328  70.065  74.222  77.929  82.358  118.498 124.342 129.561 135.807 140.169
Table A13 Area to the Right of the Critical Value of $\chi^2$
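If you have R (or any other statistics package) to hand, these critical values do not have to be looked up at all; the standard quantile functions reproduce them directly. A minimal sketch, where the particular degrees of freedom and tail areas are just examples:

qt(0.10, df = 10, lower.tail = FALSE)       # 1.372, the v = 10, a = 0.10 entry above
qt(0.025, df = 25, lower.tail = FALSE)      # 2.060
qchisq(0.05, df = 2, lower.tail = FALSE)    # 5.991, the df = 2 entry in the 0.050 column
qchisq(0.995, df = 10, lower.tail = FALSE)  # 2.156, the df = 10 entry in the 0.995 column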
textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/14%3A_Apppendices/14.01%3A_A__Statistical_Tables.txt
“Thou shalt not answer questionnaires Or quizzes upon World Affairs, Nor with compliance Take any test. Thou shalt not sit With statisticians nor commit A social science” – W.H. Auden1

01: Why Do We Learn Statistics

To the surprise of many students, statistics is a fairly significant part of a psychological education. To the surprise of no-one, statistics is very rarely the favourite part of one’s psychological education. After all, if you really loved the idea of doing statistics, you’d probably be enrolled in a statistics class right now, not a psychology class. So, not surprisingly, there’s a pretty large proportion of the student base that isn’t happy about the fact that psychology has so much statistics in it. In view of this, I thought that the right place to start might be to answer some of the more common questions that people have about stats…

A big part of the issue at hand relates to the very idea of statistics. What is it? What’s it there for? And why are scientists so bloody obsessed with it? These are all good questions, when you think about it. So let’s start with the last one. As a group, scientists seem to be bizarrely fixated on running statistical tests on everything. In fact, we use statistics so often that we sometimes forget to explain to people why we do. It’s a kind of article of faith among scientists – and especially social scientists – that your findings can’t be trusted until you’ve done some stats. Undergraduate students might be forgiven for thinking that we’re all completely mad, because no-one takes the time to answer one very simple question: Why do you do statistics? Why don’t scientists just use common sense?

It’s a naive question in some ways, but most good questions are. There’s a lot of good answers to it,2 but for my money, the best answer is a really simple one: we don’t trust ourselves enough. We worry that we’re human, and susceptible to all of the biases, temptations and frailties that humans suffer from. Much of statistics is basically a safeguard. Using “common sense” to evaluate evidence means trusting gut instincts, relying on verbal arguments and on using the raw power of human reason to come up with the right answer. Most scientists don’t think this approach is likely to work.

In fact, come to think of it, this sounds a lot like a psychological question to me, and since I do work in a psychology department, it seems like a good idea to dig a little deeper here. Is it really plausible to think that this “common sense” approach is very trustworthy? Verbal arguments have to be constructed in language, and all languages have biases – some things are harder to say than others, and not necessarily because they’re false (e.g., quantum electrodynamics is a good theory, but hard to explain in words). The instincts of our “gut” aren’t designed to solve scientific problems, they’re designed to handle day to day inferences – and given that biological evolution is slower than cultural change, we should say that they’re designed to solve the day to day problems for a different world than the one we live in. Most fundamentally, reasoning sensibly requires people to engage in “induction”, making wise guesses and going beyond the immediate evidence of the senses to make generalisations about the world. If you think that you can do that without being influenced by various distractors, well, I have a bridge in Brooklyn I’d like to sell you.
Heck, as the next section shows, we can’t even solve “deductive” problems (ones where no guessing is required) without being influenced by our pre-existing biases.

The curse of belief bias

People are mostly pretty smart. We’re certainly smarter than the other species that we share the planet with (though many people might disagree). Our minds are quite amazing things, and we seem to be capable of the most incredible feats of thought and reason. That doesn’t make us perfect though. And among the many things that psychologists have shown over the years is that we really do find it hard to be neutral, to evaluate evidence impartially and without being swayed by pre-existing biases. A good example of this is the belief bias effect in logical reasoning: if you ask people to decide whether a particular argument is logically valid (i.e., conclusion would be true if the premises were true), we tend to be influenced by the believability of the conclusion, even when we shouldn’t. For instance, here’s a valid argument where the conclusion is believable:

No cigarettes are inexpensive (Premise 1)
Some addictive things are inexpensive (Premise 2)
Therefore, some addictive things are not cigarettes (Conclusion)

And here’s a valid argument where the conclusion is not believable:

No addictive things are inexpensive (Premise 1)
Some cigarettes are inexpensive (Premise 2)
Therefore, some cigarettes are not addictive (Conclusion)

The logical structure of argument #2 is identical to the structure of argument #1, and they’re both valid. However, in the second argument, there are good reasons to think that premise 1 is incorrect, and as a result it’s probably the case that the conclusion is also incorrect. But that’s entirely irrelevant to the topic at hand: an argument is deductively valid if the conclusion is a logical consequence of the premises. That is, a valid argument doesn’t have to involve true statements. On the other hand, here’s an invalid argument that has a believable conclusion:

No addictive things are inexpensive (Premise 1)
Some cigarettes are inexpensive (Premise 2)
Therefore, some addictive things are not cigarettes (Conclusion)

And finally, an invalid argument with an unbelievable conclusion:

No cigarettes are inexpensive (Premise 1)
Some addictive things are inexpensive (Premise 2)
Therefore, some cigarettes are not addictive (Conclusion)

Now, suppose that people really are perfectly able to set aside their pre-existing biases about what is true and what isn’t, and purely evaluate an argument on its logical merits. We’d expect 100% of people to say that the valid arguments are valid, and 0% of people to say that the invalid arguments are valid. So if you ran an experiment looking at this, you’d expect to see data like this:

                      conclusion feels true   conclusion feels false
argument is valid     100% say “valid”        100% say “valid”
argument is invalid   0% say “valid”          0% say “valid”

If the psychological data looked like this (or even a good approximation to this), we might feel safe in just trusting our gut instincts. That is, it’d be perfectly okay just to let scientists evaluate data based on their common sense, and not bother with all this murky statistics stuff. However, you guys have taken psych classes, and by now you probably know where this is going . . . In a classic study, J. S. B. T. Evans, Barston, and Pollard (1983) ran an experiment looking at exactly this.
What they found is that when pre-existing biases (i.e., beliefs) were in agreement with the structure of the data, everything went the way you’d hope:

                      conclusion feels true   conclusion feels false
argument is valid     92% say “valid”
argument is invalid                           8% say “valid”

Not perfect, but that’s pretty good. But look what happens when our intuitive feelings about the truth of the conclusion run against the logical structure of the argument:

                      conclusion feels true   conclusion feels false
argument is valid     92% say “valid”         46% say “valid”
argument is invalid   92% say “valid”         8% say “valid”

Oh dear, that’s not as good. Apparently, when people are presented with a strong argument that contradicts our pre-existing beliefs, we find it pretty hard to even perceive it to be a strong argument (people only did so 46% of the time). Even worse, when people are presented with a weak argument that agrees with our pre-existing biases, almost no-one can see that the argument is weak (people got that one wrong 92% of the time!)3

If you think about it, it’s not as if these data are horribly damning. Overall, people did do better than chance at compensating for their prior biases, since about 60% of people’s judgements were correct (you’d expect 50% by chance). Even so, if you were a professional “evaluator of evidence”, and someone came along and offered you a magic tool that improves your chances of making the right decision from 60% to (say) 95%, you’d probably jump at it, right? Of course you would. Thankfully, we actually do have a tool that can do this. But it’s not magic, it’s statistics. So that’s reason #1 why scientists love statistics. It’s just too easy for us to “believe what we want to believe”; so if we want to “believe in the data” instead, we’re going to need a bit of help to keep our personal biases under control. That’s what statistics does: it helps keep us honest.

1 The quote comes from Auden’s 1946 poem Under Which Lyre: A Reactionary Tract for the Times, delivered as part of a commencement address at Harvard University. The history of the poem is kind of interesting: harvardmagazine.com/2007/11/a-poets-warning.html
2 Including the suggestion that common sense is in short supply among scientists.
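Looking back at those figures, the “about 60% correct” number is just an average of how often people got each of the four cells right. A quick sketch of the arithmetic in R, assuming (my assumption, not something stated above) that the four cells are weighted equally:

p_correct <- c(.92,       # valid & believable: 92% correctly said "valid"
               .46,       # valid & unbelievable: 46% correctly said "valid"
               1 - .92,   # invalid & believable: only 8% correctly withheld "valid"
               1 - .08)   # invalid & unbelievable: 92% correctly withheld "valid"
mean(p_correct)           # 0.595, i.e. roughly 60%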
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/01%3A_Why_Do_We_Learn_Statistics/1.01%3A_On_the_Psychology_of_Statistics.txt
The following is a true story (I think…). In 1973, the University of California, Berkeley had some worries about the admissions of students into their postgraduate courses. Specifically, the thing that caused the problem was the gender breakdown of their admissions, which looked like this…

          Number of applicants   Percent admitted
Males     8442                   46%
Females   4321                   35%

…and they were worried about being sued.4 Given that there were nearly 13,000 applicants, a difference of 9% in admission rates between males and females is just way too big to be a coincidence. Pretty compelling data, right? And if I were to say to you that these data actually reflect a weak bias in favour of women (sort of!), you’d probably think that I was either crazy or sexist.

Oddly, it’s actually sort of true …when people started looking more carefully at the admissions data (Bickel, Hammel, and O’Connell 1975) they told a rather different story. Specifically, when they looked at it on a department by department basis, it turned out that most of the departments actually had a slightly higher success rate for female applicants than for male applicants. Table 1.1 shows the admission figures for the six largest departments (with the names of the departments removed for privacy reasons):

Table 1.1: Admission figures for the six largest departments by gender
Department   Male Applicants   Male Percent Admitted   Female Applicants   Female Percent Admitted
A            825               62%                     108                 82%
B            560               63%                     25                  68%
C            325               37%                     593                 34%
D            417               33%                     375                 35%
E            191               28%                     393                 24%
F            272               6%                      341                 7%

Remarkably, most departments had a higher rate of admissions for females than for males! Yet the overall rate of admission across the university for females was lower than for males. How can this be? How can both of these statements be true at the same time?

Here’s what’s going on. Firstly, notice that the departments are not equal to one another in terms of their admission percentages: some departments (e.g., engineering, chemistry) tended to admit a high percentage of the qualified applicants, whereas others (e.g., English) tended to reject most of the candidates, even if they were high quality. So, among the six departments shown above, notice that department A is the most generous, followed by B, C, D, E and F in that order. Next, notice that males and females tended to apply to different departments. If we rank the departments in terms of the total number of male applicants, we get A>B>D>C>F>E (the “easy” departments are in bold). On the whole, males tended to apply to the departments that had high admission rates. Now compare this to how the female applicants distributed themselves. Ranking the departments in terms of the total number of female applicants produces a quite different ordering C>E>D>F>A>B. In other words, what these data seem to be suggesting is that the female applicants tended to apply to “harder” departments. And in fact, if we look at Figure 1.1 we see that this trend is systematic, and quite striking. This effect is known as Simpson’s paradox. It’s not common, but it does happen in real life, and most people are very surprised by it when they first encounter it, and many people refuse to even believe that it’s real. It is very real. And while there are lots of very subtle statistical lessons buried in there, I want to use it to make a much more important point …doing research is hard, and there are lots of subtle, counterintuitive traps lying in wait for the unwary. That’s reason #2 why scientists love statistics, and why we teach research methods.
Because science is hard, and the truth is sometimes cunningly hidden in the nooks and crannies of complicated data.

Before leaving this topic entirely, I want to point out something else really critical that is often overlooked in a research methods class. Statistics only solves part of the problem. Remember that we started all this with the concern that Berkeley’s admissions processes might be unfairly biased against female applicants. When we looked at the “aggregated” data, it did seem like the university was discriminating against women, but when we “disaggregated” and looked at the individual behaviour of all the departments, it turned out that the actual departments were, if anything, slightly biased in favour of women. The gender bias in total admissions was caused by the fact that women tended to self-select for harder departments. From a legal perspective, that would probably put the university in the clear. Postgraduate admissions are determined at the level of the individual department (and there are good reasons to do that), and at the level of individual departments, the decisions are more or less unbiased (the weak bias in favour of females at that level is small, and not consistent across departments). Since the university can’t dictate which departments people choose to apply to, and the decision making takes place at the level of the department it can hardly be held accountable for any biases that those choices produce.

That was the basis for my somewhat glib remarks earlier, but that’s not exactly the whole story, is it? After all, if we’re interested in this from a more sociological and psychological perspective, we might want to ask why there are such strong gender differences in applications. Why do males tend to apply to engineering more often than females, and why is this reversed for the English department? And why is it the case that the departments that tend to have a female-application bias tend to have lower overall admission rates than those departments that have a male-application bias? Might this not still reflect a gender bias, even though every single department is itself unbiased? It might. Suppose, hypothetically, that males preferred to apply to “hard sciences” and females prefer “humanities”. And suppose further that the reason for why the humanities departments have low admission rates is because the government doesn’t want to fund the humanities (Ph.D. places, for instance, are often tied to government funded research projects). Does that constitute a gender bias? Or just an unenlightened view of the value of the humanities? What if someone at a high level in the government cut the humanities funds because they felt that the humanities are “useless chick stuff”? That seems pretty blatantly gender biased.

None of this falls within the purview of statistics, but it matters to the research project. If you’re interested in the overall structural effects of subtle gender biases, then you probably want to look at both the aggregated and disaggregated data. If you’re interested in the decision making process at Berkeley itself then you’re probably only interested in the disaggregated data. In short there are a lot of critical questions that you can’t answer with statistics, but the answers to those questions will have a huge impact on how you analyze and interpret data. And this is the reason why you should always think of statistics as a tool to help you learn about your data, no more and no less.
It’s a powerful tool to that end, but there’s no substitute for careful thought.
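If you want to see the Simpson’s paradox arithmetic for yourself, here is a minimal sketch in R using the percentages from Table 1.1 (so the admission counts it implies are only approximate):

dept        <- c("A", "B", "C", "D", "E", "F")
male_apps   <- c(825, 560, 325, 417, 191, 272)
male_rate   <- c(.62, .63, .37, .33, .28, .06)
female_apps <- c(108,  25, 593, 375, 393, 341)
female_rate <- c(.82, .68, .34, .35, .24, .07)

# within most departments, women do at least as well as men
female_rate - male_rate

# but the aggregated rates tell the opposite story, because women applied in
# much larger numbers to the departments with low admission rates
sum(male_apps * male_rate) / sum(male_apps)         # roughly 0.46
sum(female_apps * female_rate) / sum(female_apps)   # roughly 0.30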
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/01%3A_Why_Do_We_Learn_Statistics/1.02%3A_The_Cautionary_Tale_of_Simpsons_Paradox.txt
I hope that the discussion above helped explain why science in general is so focused on statistics. But I’m guessing that you have a lot more questions about what role statistics plays in psychology, and specifically why psychology classes always devote so many lectures to stats. So here’s my attempt to answer a few of them…

Why does psychology have so much statistics?

To be perfectly honest, there’s a few different reasons, some of which are better than others. The most important reason is that psychology is a statistical science. What I mean by that is that the “things” that we study are people. Real, complicated, gloriously messy, infuriatingly perverse people. The “things” of physics include objects like electrons, and while there are all sorts of complexities that arise in physics, electrons don’t have minds of their own. They don’t have opinions, they don’t differ from each other in weird and arbitrary ways, they don’t get bored in the middle of an experiment, and they don’t get angry at the experimenter and then deliberately try to sabotage the data set (not that I’ve ever done that…). At a fundamental level psychology is harder than physics.5 Basically, we teach statistics to you as psychologists because you need to be better at stats than physicists. There’s actually a saying used sometimes in physics, to the effect that “if your experiment needs statistics, you should have done a better experiment”. They have the luxury of being able to say that because their objects of study are pathetically simple in comparison to the vast mess that confronts social scientists. It’s not just psychology, really: most social sciences are desperately reliant on statistics. Not because we’re bad experimenters, but because we’ve picked a harder problem to solve. We teach you stats because you really, really need it.

Can’t someone else do the statistics?

To some extent, but not completely. It’s true that you don’t need to become a fully trained statistician just to do psychology, but you do need to reach a certain level of statistical competence. In my view, there’s three reasons that every psychological researcher ought to be able to do basic statistics:
• Firstly, there’s the fundamental reason: statistics is deeply intertwined with research design. If you want to be good at designing psychological studies, you need to at least understand the basics of stats.
• Secondly, if you want to be good at the psychological side of the research, then you need to be able to understand the psychological literature, right? But almost every paper in the psychological literature reports the results of statistical analyses. So if you really want to understand the psychology, you need to be able to understand what other people did with their data. And that means understanding a certain amount of statistics.
• Thirdly, there’s a big practical problem with being dependent on other people to do all your statistics: statistical analysis is expensive. If you ever get bored and want to look up how much the Australian government charges for university fees, you’ll notice something interesting: statistics is designated as a “national priority” category, and so the fees are much, much lower than for any other area of study. This is because there’s a massive shortage of statisticians out there. So, from your perspective as a psychological researcher, the laws of supply and demand aren’t exactly on your side here!
As a result, in almost any real life situation where you want to do psychological research, the cruel facts will be that you don’t have enough money to afford a statistician. So the economics of the situation mean that you have to be pretty self-sufficient.

Note that a lot of these reasons generalise beyond researchers. If you want to be a practicing psychologist and stay on top of the field, it helps to be able to read the scientific literature, which relies pretty heavily on statistics.

I don’t care about jobs, research, or clinical work. Do I need statistics?

Okay, now you’re just messing with me. Still, I think it should matter to you too. Statistics should matter to you in the same way that statistics should matter to everyone: we live in the 21st century, and data are everywhere. Frankly, given the world in which we live these days, a basic knowledge of statistics is pretty damn close to a survival tool! Which is the topic of the next section…
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/01%3A_Why_Do_We_Learn_Statistics/1.03%3A_Statistics_in_Psychology.txt
“We are drowning in information, but we are starved for knowledge” – Various authors, original probably John Naisbitt

When I started writing up my lecture notes I took the 20 most recent news articles posted to the ABC news website. Of those 20 articles, it turned out that 8 of them involved a discussion of something that I would call a statistical topic; 6 of those made a mistake. The most common error, if you’re curious, was failing to report baseline data (e.g., the article mentions that 5% of people in situation X have some characteristic Y, but doesn’t say how common the characteristic is for everyone else!) The point I’m trying to make here isn’t that journalists are bad at statistics (though they almost always are), it’s that a basic knowledge of statistics is very helpful for trying to figure out when someone else is either making a mistake or even lying to you. In fact, one of the biggest things that a knowledge of statistics does to you is cause you to get angry at the newspaper or the internet on a far more frequent basis: you can find a good example of this in Section 5.1.5. In later versions of this book I’ll try to include more anecdotes along those lines.

1.05: There’s More to Research Methods Than Statistics

So far, most of what I’ve talked about is statistics, and so you’d be forgiven for thinking that statistics is all I care about in life. To be fair, you wouldn’t be far wrong, but research methodology is a broader concept than statistics. So most research methods courses will cover a lot of topics that relate much more to the pragmatics of research design, and in particular the issues that you encounter when trying to do research with humans. However, about 99% of student fears relate to the statistics part of the course, so I’ve focused on the stats in this discussion, and hopefully I’ve convinced you that statistics matters, and more importantly, that it’s not to be feared.

That being said, it’s pretty typical for introductory research methods classes to be very stats-heavy. This is not (usually) because the lecturers are evil people. Quite the contrary, in fact. Introductory classes focus a lot on the statistics because you almost always find yourself needing statistics before you need the other research methods training. Why? Because almost all of your assignments in other classes will rely on statistical training, to a much greater extent than they rely on other methodological tools. It’s not common for undergraduate assignments to require you to design your own study from the ground up (in which case you would need to know a lot about research design), but it is common for assignments to ask you to analyse and interpret data that were collected in a study that someone else designed (in which case you need statistics). In that sense, from the perspective of allowing you to do well in all your other classes, the statistics is more urgent. But note that “urgent” is different from “important” – they both matter.

I really do want to stress that research design is just as important as data analysis, and this book does spend a fair amount of time on it. However, while statistics has a kind of universality, and provides a set of core tools that are useful for most types of psychological research, the research methods side isn’t quite so universal. There are some general principles that everyone should think about, but a lot of research design is very idiosyncratic, and is specific to the area of research that you want to engage in.
To the extent that it’s the details that matter, those details don’t usually show up in an introductory stats and research methods class.

References

Evans, J. St. B. T., J. L. Barston, and P. Pollard. 1983. “On the Conflict Between Logic and Belief in Syllogistic Reasoning.” Memory and Cognition 11: 295–306.
Bickel, P. J., E. A. Hammel, and J. W. O’Connell. 1975. “Sex Bias in Graduate Admissions: Data from Berkeley.” Science 187: 398–404.

1. The quote comes from Auden’s 1946 poem Under Which Lyre: A Reactionary Tract for the Times, delivered as part of a commencement address at Harvard University. The history of the poem is kind of interesting: http://harvardmagazine.com/2007/11/a-poets-warning.html
2. Including the suggestion that common sense is in short supply among scientists.
3. In my more cynical moments I feel like this fact alone explains 95% of what I read on the internet.
4. Earlier versions of these notes incorrectly suggested that they actually were sued – apparently that’s not true. There’s a nice commentary on this here: https://www.refsmmat.com/posts/2016-05-08-simpsons-paradox-berkeley.html. A big thank you to Wilfried Van Hirtum for pointing this out to me!
5. Which might explain why physics is just a teensy bit further advanced as a science than we are.
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/01%3A_Why_Do_We_Learn_Statistics/1.04%3A_Statistics_in_Everyday_Life.txt
To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of. – Sir Ronald Fisher6

In this chapter, we’re going to start thinking about the basic ideas that go into designing a study, collecting data, checking whether your data collection works, and so on. It won’t give you enough information to allow you to design studies of your own, but it will give you a lot of the basic tools that you need to assess the studies done by other people. However, since the focus of this book is much more on data analysis than on data collection, I’m only giving a very brief overview. Note that this chapter is “special” in two ways. Firstly, it’s much more psychology-specific than the later chapters. Secondly, it focuses much more heavily on the scientific problem of research methodology, and much less on the statistical problem of data analysis. Nevertheless, the two problems are related to one another, so it’s traditional for stats textbooks to discuss the problem in a little detail. This chapter relies heavily on Campbell and Stanley (1963) for the discussion of study design, and Stevens (1946) for the discussion of scales of measurement. Later versions will attempt to be more precise in the citations.

02: A Brief Introduction to Research Design

The first thing to understand is that data collection can be thought of as a kind of measurement. That is, what we’re trying to do here is measure something about human behaviour or the human mind. What do I mean by “measurement”?

Some thoughts about psychological measurement

Measurement itself is a subtle concept, but basically it comes down to finding some way of assigning numbers, or labels, or some other kind of well-defined descriptions to “stuff”. So, any of the following would count as a psychological measurement:
• My age is 33 years.
• I do not like anchovies.
• My chromosomal gender is male.
• My self-identified gender is male.7

In the short list above, the bolded part is “the thing to be measured”, and the italicised part is “the measurement itself”. In fact, we can expand on this a little bit, by thinking about the set of possible measurements that could have arisen in each case:
• My age (in years) could have been 0, 1, 2, 3 …, etc. The upper bound on what my age could possibly be is a bit fuzzy, but in practice you’d be safe in saying that the largest possible age is 150, since no human has ever lived that long.
• When asked if I like anchovies, I might have said that I do, or I do not, or I have no opinion, or I sometimes do.
• My chromosomal gender is almost certainly going to be male (XY) or female (XX), but there are a few other possibilities. I could also have Klinefelter’s syndrome (XXY), which is more similar to male than to female. And I imagine there are other possibilities too.
• My self-identified gender is also very likely to be male or female, but it doesn’t have to agree with my chromosomal gender. I may also choose to identify with neither, or to explicitly call myself transgender.

As you can see, for some things (like age) it seems fairly obvious what the set of possible measurements should be, whereas for other things it gets a bit tricky. But I want to point out that even in the case of someone’s age, it’s much more subtle than this. For instance, in the example above, I assumed that it was okay to measure age in years.
But if you’re a developmental psychologist, that’s way too crude, and so you often measure age in years and months (if a child is 2 years and 11 months, this is usually written as “2;11”). If you’re interested in newborns, you might want to measure age in days since birth, maybe even hours since birth. In other words, the way in which you specify the allowable measurement values is important.

Looking at this a bit more closely, you might also realise that the concept of “age” isn’t actually all that precise. In general, when we say “age” we implicitly mean “the length of time since birth”. But that’s not always the right way to do it. Suppose you’re interested in how newborn babies control their eye movements. If you’re interested in kids that young, you might also start to worry that “birth” is not the only meaningful point in time to care about. If Baby Alice is born 3 weeks premature and Baby Bianca is born 1 week late, would it really make sense to say that they are the “same age” if we encountered them “2 hours after birth”? In one sense, yes: by social convention, we use birth as our reference point for talking about age in everyday life, since it defines the amount of time the person has been operating as an independent entity in the world, but from a scientific perspective that’s not the only thing we care about. When we think about the biology of human beings, it’s often useful to think of ourselves as organisms that have been growing and maturing since conception, and from that perspective Alice and Bianca aren’t the same age at all. So you might want to define the concept of “age” in two different ways: the length of time since conception, and the length of time since birth. When dealing with adults, it won’t make much difference, but when dealing with newborns it might.

Moving beyond these issues, there’s the question of methodology. What specific “measurement method” are you going to use to find out someone’s age? As before, there are lots of different possibilities:
• You could just ask people “how old are you?” The method of self-report is fast, cheap and easy, but it only works with people old enough to understand the question, and some people lie about their age.
• You could ask an authority (e.g., a parent) “how old is your child?” This method is fast, and when dealing with kids it’s not all that hard since the parent is almost always around. It doesn’t work as well if you want to know “age since conception”, since a lot of parents can’t say for sure when conception took place. For that, you might need a different authority (e.g., an obstetrician).
• You could look up official records, like birth certificates. This is time consuming and annoying, but it has its uses (e.g., if the person is now dead).

Operationalisation: defining your measurement

All of the ideas discussed in the previous section relate to the concept of operationalisation. To be a bit more precise about the idea, operationalisation is the process by which we take a meaningful but somewhat vague concept, and turn it into a precise measurement. The process of operationalisation can involve several different things:
• Being precise about what you are trying to measure. For instance, does “age” mean “time since birth” or “time since conception” in the context of your research?
• Determining what method you will use to measure it. Will you use self-report to measure age, ask a parent, or look up an official record? If you’re using self-report, how will you phrase the question?
• Defining the set of the allowable values that the measurement can take. Note that these values don’t always have to be numerical, though they often are. When measuring age, the values are numerical, but we still need to think carefully about what numbers are allowed. Do we want age in years, years and months, days, hours? Etc. For other types of measurements (e.g., gender), the values aren’t numerical. But, just as before, we need to think about what values are allowed. If we’re asking people to self-report their gender, what options do we allow them to choose between? Is it enough to allow only “male” or “female”? Do you need an “other” option? Or should we not give people any specific options, and let them answer in their own words? And if you open up the set of possible values to include all verbal responses, how will you interpret their answers?

Operationalisation is a tricky business, and there’s no “one, true way” to do it. The way in which you choose to operationalise the informal concept of “age” or “gender” into a formal measurement depends on what you need to use the measurement for. Often you’ll find that the community of scientists who work in your area have some fairly well-established ideas for how to go about it. In other words, operationalisation needs to be thought through on a case by case basis. Nevertheless, while there are a lot of issues that are specific to each individual research project, there are some aspects to it that are pretty general.

Before moving on, I want to take a moment to clear up our terminology, and in the process introduce one more term. Here are four different things that are closely related to each other:
• A theoretical construct. This is the thing that you’re trying to take a measurement of, like “age”, “gender” or an “opinion”. A theoretical construct can’t be directly observed, and often they’re actually a bit vague.
• A measure. The measure refers to the method or the tool that you use to make your observations. A question in a survey, a behavioural observation or a brain scan could all count as a measure.
• An operationalisation. The term “operationalisation” refers to the logical connection between the measure and the theoretical construct, or to the process by which we try to derive a measure from a theoretical construct.
• A variable. Finally, a new term. A variable is what we end up with when we apply our measure to something in the world. That is, variables are the actual “data” that we end up with in our data sets.

In practice, even scientists tend to blur the distinction between these things, but it’s very helpful to try to understand the differences.
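As a toy illustration of the “allowable values” idea, here is a small sketch in R; the variable names and coding scheme are invented for the example, not taken from any real study:

# two different operationalisations of the same vague concept, "age"
age_since_birth      <- 33       # whole years, by self-report
age_since_conception <- 33.75    # roughly, also in years

# for gender, we decide on the allowable response options in advance
gender_options <- c("male", "female", "other", "prefer not to say")
gender <- factor("male", levels = gender_options)
levels(gender)    # the set of values this variable is allowed to take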
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/02%3A_A_Brief_Introduction_to_Research_Design/2.01%3A__Introduction_to_Psychological_Measurement.txt
As the previous section indicates, the outcome of a psychological measurement is called a variable. But not all variables are of the same qualitative type, and it’s very useful to understand what types there are. A very useful concept for distinguishing between different types of variables is what’s known as scales of measurement.

Nominal scale

A nominal scale variable (also referred to as a categorical variable) is one in which there is no particular relationship between the different possibilities: for these kinds of variables it doesn’t make any sense to say that one of them is “bigger” or “better” than any other one, and it absolutely doesn’t make any sense to average them. The classic example for this is “eye colour”. Eyes can be blue, green and brown, among other possibilities, but none of them is any “better” than any other one. As a result, it would feel really weird to talk about an “average eye colour”. Similarly, gender is nominal too: male isn’t better or worse than female, neither does it make sense to try to talk about an “average gender”. In short, nominal scale variables are those for which the only thing you can say about the different possibilities is that they are different. That’s it.

Let’s take a slightly closer look at this. Suppose I was doing research on how people commute to and from work. One variable I would have to measure would be what kind of transportation people use to get to work. This “transport type” variable could have quite a few possible values, including: “train”, “bus”, “car”, “bicycle”, etc. For now, let’s suppose that these four are the only possibilities, and suppose that when I ask 100 people how they got to work today, I get this:

Transportation   Number of people
(1) Train        12
(2) Bus          30
(3) Car          48
(4) Bicycle      10

So, what’s the average transportation type? Obviously, the answer here is that there isn’t one. It’s a silly question to ask. You can say that travel by car is the most popular method, and travel by train is the least popular method, but that’s about all. Similarly, notice that the order in which I list the options isn’t very interesting. I could have chosen to display the data like this

Transportation   Number of people
(3) Car          48
(1) Train        12
(4) Bicycle      10
(2) Bus          30

and nothing really changes.

Ordinal scale

Ordinal scale variables have a bit more structure than nominal scale variables, but not by a lot. An ordinal scale variable is one in which there is a natural, meaningful way to order the different possibilities, but you can’t do anything else. The usual example given of an ordinal variable is “finishing position in a race”. You can say that the person who finished first was faster than the person who finished second, but you don’t know how much faster. As a consequence we know that 1st > 2nd, and we know that 2nd > 3rd, but the difference between 1st and 2nd might be much larger than the difference between 2nd and 3rd.

Here’s a more psychologically interesting example. Suppose I’m interested in people’s attitudes to climate change, and I ask them to pick one of these four statements that most closely matches their beliefs:
1. Temperatures are rising, because of human activity
2. Temperatures are rising, but we don’t know why
3. Temperatures are rising, but not because of humans
4. Temperatures are not rising

Notice that these four statements actually do have a natural ordering, in terms of “the extent to which they agree with the current science”.
Statement 1 is a close match, statement 2 is a reasonable match, statement 3 isn’t a very good match, and statement 4 is in strong opposition to the science. So, in terms of the thing I’m interested in (the extent to which people endorse the science), I can order the items as 1 > 2 > 3 > 4. Since this ordering exists, it would be very weird to list the options like this…
1. Temperatures are rising, but not because of humans
2. Temperatures are rising, because of human activity
3. Temperatures are not rising
4. Temperatures are rising, but we don’t know why

… because it seems to violate the natural “structure” to the question. So, let’s suppose I asked 100 people these questions, and got the following answers:

Response                                                  Number
(1) Temperatures are rising, because of human activity   51
(2) Temperatures are rising, but we don’t know why       20
(3) Temperatures are rising, but not because of humans   10
(4) Temperatures are not rising                          19

When analysing these data, it seems quite reasonable to try to group (1), (2) and (3) together, and say that 81 of 100 people were willing to at least partially endorse the science. And it’s also quite reasonable to group (2), (3) and (4) together and say that 49 of 100 people registered at least some disagreement with the dominant scientific view. However, it would be entirely bizarre to try to group (1), (2) and (4) together and say that 90 of 100 people said… what? There’s nothing sensible that allows you to group those responses together at all. That said, notice that while we can use the natural ordering of these items to construct sensible groupings, what we can’t do is average them. For instance, in my simple example here, the “average” response to the question is 1.97. If you can tell me what that means, I’d love to know. Because that sounds like gibberish to me!

Interval scale

In contrast to nominal and ordinal scale variables, interval scale and ratio scale variables are variables for which the numerical value is genuinely meaningful. In the case of interval scale variables, the differences between the numbers are interpretable, but the variable doesn’t have a “natural” zero value. A good example of an interval scale variable is measuring temperature in degrees celsius. For instance, if it was 15° yesterday and 18° today, then the 3° difference between the two is genuinely meaningful. Moreover, that 3° difference is exactly the same as the 3° difference between 7° and 10°. In short, addition and subtraction are meaningful for interval scale variables.8 However, notice that 0° does not mean “no temperature at all”: it actually means “the temperature at which water freezes”, which is pretty arbitrary. As a consequence, it becomes pointless to try to multiply and divide temperatures. It is wrong to say that 20° is twice as hot as 10°, just as it is weird and meaningless to try to claim that 20° is negative two times as hot as −10°.

Again, let’s look at a more psychological example. Suppose I’m interested in looking at how the attitudes of first-year university students have changed over time. Obviously, I’m going to want to record the year in which each student started. This is an interval scale variable. A student who started in 2003 did arrive 5 years before a student who started in 2008. However, it would be completely insane for me to divide 2008 by 2003 and say that the second student started “1.0024 times later” than the first one. That doesn’t make any sense at all.
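If it helps to see those ideas in code, here is a rough sketch in R using the made-up data from the examples above:

# nominal: counting is fine, averaging is not
transport <- factor(c(rep("train", 12), rep("bus", 30), rep("car", 48), rep("bicycle", 10)))
table(transport)       # counts by category
# mean(transport)      # meaningless; R refuses and returns NA with a warning

# ordinal: an ordered factor, so "greater than" comparisons make sense
place <- factor(c("1st", "2nd", "3rd"), levels = c("3rd", "2nd", "1st"), ordered = TRUE)
place[1] > place[2]    # TRUE: 1st place beats 2nd place

# interval: differences are meaningful, ratios are not
2008 - 2003            # a 5 year gap is interpretable
2008 / 2003            # "1.0024 times later" is not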
Ratio scale

The fourth and final type of variable to consider is a ratio scale variable, in which zero really means zero, and it’s okay to multiply and divide. A good psychological example of a ratio scale variable is response time (RT). In a lot of tasks it’s very common to record the amount of time somebody takes to solve a problem or answer a question, because it’s an indicator of how difficult the task is. Suppose that Alan takes 2.3 seconds to respond to a question, whereas Ben takes 3.1 seconds. As with an interval scale variable, addition and subtraction are both meaningful here. Ben really did take 3.1 - 2.3 = 0.8 seconds longer than Alan did. However, notice that multiplication and division also make sense here too: Ben took 3.1 / 2.3 = 1.35 times as long as Alan did to answer the question. And the reason why you can do this is that, for a ratio scale variable such as RT, “zero seconds” really does mean “no time at all”.

Continuous versus discrete variables

There’s a second kind of distinction that you need to be aware of, regarding what types of variables you can run into. This is the distinction between continuous variables and discrete variables. The difference between these is as follows:
• A continuous variable is one in which, for any two values that you can think of, it’s always logically possible to have another value in between.
• A discrete variable is, in effect, a variable that isn’t continuous. For a discrete variable, it’s sometimes the case that there’s nothing in the middle.

These definitions probably seem a bit abstract, but they’re pretty simple once you see some examples. For instance, response time is continuous. If Alan takes 3.1 seconds and Ben takes 2.3 seconds to respond to a question, then it’s possible for Cameron’s response time to lie in between, by taking 3.0 seconds. And of course it would also be possible for David to take 3.031 seconds to respond, meaning that his RT would lie in between Cameron’s and Alan’s. And while in practice it might be impossible to measure RT that precisely, it’s certainly possible in principle. Because we can always find a new value for RT in between any two other ones, we say that RT is continuous.

Discrete variables occur when this rule is violated. For example, nominal scale variables are always discrete: there isn’t a type of transportation that falls “in between” trains and bicycles, not in the strict mathematical way that 2.3 falls in between 2 and 3. So transportation type is discrete. Similarly, ordinal scale variables are always discrete: although “2nd place” does fall between “1st place” and “3rd place”, there’s nothing that can logically fall in between “1st place” and “2nd place”. Interval scale and ratio scale variables can go either way. As we saw above, response time (a ratio scale variable) is continuous. Temperature in degrees celsius (an interval scale variable) is also continuous. However, the year you went to school (an interval scale variable) is discrete. There’s no year in between 2002 and 2003. The number of questions you get right on a true-or-false test (a ratio scale variable) is also discrete: since a true-or-false question doesn’t allow you to be “partially correct”, there’s nothing in between 5/10 and 6/10. Table 2.1 summarises the relationship between the scales of measurement and the discrete/continuity distinction. Cells with a tick mark correspond to things that are possible.
I’m trying to hammer this point home, because (a) some textbooks get this wrong, and (b) people very often say things like “discrete variable” when they mean “nominal scale variable”. It’s very unfortunate.

Table 2.1: The relationship between the scales of measurement and the discrete/continuity distinction. Cells with a tick mark correspond to things that are possible.
           continuous   discrete
nominal                 ✓
ordinal                 ✓
interval   ✓            ✓
ratio      ✓            ✓

Some complexities

Okay, I know you’re going to be shocked to hear this, but … the real world is much messier than this little classification scheme suggests. Very few variables in real life actually fall into these nice neat categories, so you need to be kind of careful not to treat the scales of measurement as if they were hard and fast rules. It doesn’t work like that: they’re guidelines, intended to help you think about the situations in which you should treat different variables differently. Nothing more.

So let’s take a classic example, maybe the classic example, of a psychological measurement tool: the Likert scale. The humble Likert scale is the bread and butter tool of all survey design. You yourself have filled out hundreds, maybe thousands of them, and odds are you’ve even used one yourself. Suppose we have a survey question that looks like this:

Which of the following best describes your opinion of the statement that “all pirates are freaking awesome” …

and then the options presented to the participant are these:
(1) Strongly disagree
(2) Disagree
(3) Neither agree nor disagree
(4) Agree
(5) Strongly agree

This set of items is an example of a 5-point Likert scale: people are asked to choose among one of several (in this case 5) clearly ordered possibilities, generally with a verbal descriptor given in each case. However, it’s not necessary that all items be explicitly described. This is a perfectly good example of a 5-point Likert scale too:
(1) Strongly disagree
(2)
(3)
(4)
(5) Strongly agree

Likert scales are very handy, if somewhat limited, tools. The question is, what kind of variable are they? They’re obviously discrete, since you can’t give a response of 2.5. They’re obviously not nominal scale, since the items are ordered; and they’re not ratio scale either, since there’s no natural zero. But are they ordinal scale or interval scale? One argument says that we can’t really prove that the difference between “strongly agree” and “agree” is of the same size as the difference between “agree” and “neither agree nor disagree”. In fact, in everyday life it’s pretty obvious that they’re not the same at all. So this suggests that we ought to treat Likert scales as ordinal variables. On the other hand, in practice most participants do seem to take the whole “on a scale from 1 to 5” part fairly seriously, and they tend to act as if the differences between the five response options were fairly similar to one another. As a consequence, a lot of researchers treat Likert scale data as if it were interval scale. It’s not interval scale, but in practice it’s close enough that we usually think of it as being quasi-interval scale.
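To make the “ordinal versus quasi-interval” distinction concrete, here is a small R sketch with some made-up responses to the pirate item (the numbers are invented purely for illustration):

pirate_item <- c(5, 4, 4, 2, 5, 3, 1, 4)   # raw 1-5 responses

# the ordinal view: an ordered factor, so we rely only on the ordering
pirate_ord <- factor(pirate_item, levels = 1:5, ordered = TRUE,
                     labels = c("strongly disagree", "disagree",
                                "neither", "agree", "strongly agree"))
table(pirate_ord)

# the quasi-interval view: treat the 1-5 codes as numbers and (cautiously) average them
mean(pirate_item)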
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/02%3A_A_Brief_Introduction_to_Research_Design/2.02%3A_Scales_of_Measurement.txt
At this point we’ve thought a little bit about how to operationalise a theoretical construct and thereby create a psychological measure; and we’ve seen that by applying psychological measures we end up with variables, which can come in many different types. At this point, we should start discussing the obvious question: is the measurement any good? We’ll do this in terms of two related ideas: reliability and validity. Put simply, the reliability of a measure tells you how precisely you are measuring something, whereas the validity of a measure tells you how accurate the measure is. In this section I’ll talk about reliability; we’ll talk about validity in the next chapter.

Reliability is actually a very simple concept: it refers to the repeatability or consistency of your measurement. The measurement of my weight by means of a “bathroom scale” is very reliable: if I step on and off the scales over and over again, it’ll keep giving me the same answer. Measuring my intelligence by means of “asking my mum” is very unreliable: some days she tells me I’m a bit thick, and other days she tells me I’m a complete moron. Notice that this concept of reliability is different to the question of whether the measurements are correct (the correctness of a measurement relates to its validity). If I’m holding a sack of potatoes when I step on and off of the bathroom scales, the measurement will still be reliable: it will always give me the same answer. However, this highly reliable answer doesn’t match up to my true weight at all, therefore it’s wrong. In technical terms, this is a reliable but invalid measurement. Similarly, while my mum’s estimate of my intelligence is a bit unreliable, she might be right. Maybe I’m just not too bright, and so while her estimate of my intelligence fluctuates pretty wildly from day to day, it’s basically right. So that would be an unreliable but valid measure. Of course, to some extent, notice that if my mum’s estimates are too unreliable, it’s going to be very hard to figure out which one of her many claims about my intelligence is actually the right one. To some extent, then, a very unreliable measure tends to end up being invalid for practical purposes; so much so that many people would say that reliability is necessary (but not sufficient) to ensure validity.

Okay, now that we’re clear on the distinction between reliability and validity, let’s have a think about the different ways in which we might measure reliability:
• Test-retest reliability. This relates to consistency over time: if we repeat the measurement at a later date, do we get the same answer?
• Inter-rater reliability. This relates to consistency across people: if someone else repeats the measurement (e.g., someone else rates my intelligence) will they produce the same answer?
• Parallel forms reliability. This relates to consistency across theoretically-equivalent measurements: if I use a different set of bathroom scales to measure my weight, does it give the same answer?
• Internal consistency reliability. If a measurement is constructed from lots of different parts that perform similar functions (e.g., a personality questionnaire result is added up across several questions) do the individual parts tend to give similar answers?

Not all measurements need to possess all forms of reliability. For instance, educational assessment can be thought of as a form of measurement.
One of the subjects that I teach, Computational Cognitive Science, has an assessment structure that has a research component and an exam component (plus other things). The exam component is intended to measure something different from the research component, so the assessment as a whole has low internal consistency. However, within the exam there are several questions that are intended to (approximately) measure the same things, and those tend to produce similar outcomes; so the exam on its own has a fairly high internal consistency. Which is as it should be. You should only demand reliability in those situations where you want to be measuring the same thing!

Table 2.2: The terminology used to distinguish between different roles that a variable can play when analysing a data set. Note that this book will tend to avoid the classical terminology in favour of the newer names.
role of the variable      classical name              modern name
“to be explained”         dependent variable (DV)     outcome
“to do the explaining”    independent variable (IV)   predictor

2.04: The Role of Variables - Predictors and Outcomes

Okay, I’ve got one last piece of terminology that I need to explain to you before moving away from variables. Normally, when we do some research we end up with lots of different variables. Then, when we analyse our data we usually try to explain some of the variables in terms of some of the other variables. It’s important to keep the two roles “thing doing the explaining” and “thing being explained” distinct. So let’s be clear about this now.

Firstly, we might as well get used to the idea of using mathematical symbols to describe variables, since it’s going to happen over and over again. Let’s denote the “to be explained” variable Y, and denote the variables “doing the explaining” as X1, X2, etc. Now, when we’re doing an analysis, we have different names for X and Y, since they play different roles in the analysis. The classical names for these roles are independent variable (IV) and dependent variable (DV). The IV is the variable that you use to do the explaining (i.e., X) and the DV is the variable being explained (i.e., Y). The logic behind these names goes like this: if there really is a relationship between X and Y then we can say that Y depends on X, and if we have designed our study “properly” then X isn’t dependent on anything else. However, I personally find those names horrible: they’re hard to remember and they’re highly misleading, because (a) the IV is never actually “independent of everything else” and (b) if there’s no relationship, then the DV doesn’t actually depend on the IV. And in fact, because I’m not the only person who thinks that IV and DV are just awful names, there are a number of alternatives that I find more appealing. The terms that I’ll use in these notes are predictors and outcomes. The idea here is that what you’re trying to do is use X (the predictors) to make guesses about Y (the outcomes).4 This is summarised in Table 2.2.

4 Annoyingly, though, there’s a lot of different names used out there. I won’t list all of them – there would be no point in doing that – other than to note that R often uses “response variable” where I’ve used “outcome”, and a traditionalist would use “dependent variable”. Sigh. This sort of terminological confusion is very common, I’m afraid.
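In R itself, the outcome/predictor distinction is baked into formula notation: the outcome goes on the left of the ~ and the predictors go on the right. A tiny illustration with invented variable names and numbers:

my_data <- data.frame(exam_score  = c(62, 71, 55, 80, 67),
                      study_hours = c(10, 14,  6, 18, 12))
lm(exam_score ~ study_hours, data = my_data)   # "explain exam_score using study_hours"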
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/02%3A_A_Brief_Introduction_to_Research_Design/2.03%3A_Assessing_the_Reliability_of_a_Measurement.txt
One of the big distinctions that you should be aware of is the distinction between "experimental research" and "non-experimental research". When we make this distinction, what we're really talking about is the degree of control that the researcher exercises over the people and events in the study. Experimental research The key feature of experimental research is that the researcher controls all aspects of the study, especially what participants experience during the study. In particular, the researcher manipulates or varies the predictor variables (IVs), and then allows the outcome variable (DV) to vary naturally. The idea here is to deliberately vary the predictors (IVs) to see if they have any causal effects on the outcomes. Moreover, in order to ensure that there's no chance that something other than the predictor variables is causing the outcomes, everything else is kept constant or is in some other way "balanced" to ensure that they have no effect on the results. In practice, it's almost impossible to think of everything else that might have an influence on the outcome of an experiment, much less keep it constant. The standard solution to this is randomisation: that is, we randomly assign people to different groups, and then give each group a different treatment (i.e., assign them different values of the predictor variables). We'll talk more about randomisation later in this course, but for now, it's enough to say that what randomisation does is minimise (but not eliminate) the chances that there are any systematic differences between groups. Let's consider a very simple, completely unrealistic and grossly unethical example. Suppose you wanted to find out if smoking causes lung cancer. One way to do this would be to find people who smoke and people who don't smoke, and look to see if smokers have a higher rate of lung cancer. This is not a proper experiment, since the researcher doesn't have a lot of control over who is and isn't a smoker. And this really matters: for instance, it might be that people who choose to smoke cigarettes also tend to have poor diets, or maybe they tend to work in asbestos mines, or whatever. The point here is that the groups (smokers and non-smokers) actually differ on lots of things, not just smoking. So it might be that the higher incidence of lung cancer among smokers is caused by something else, not by smoking per se. In technical terms, these other things (e.g. diet) are called "confounds", and we'll talk about those in just a moment. In the meantime, let's now consider what a proper experiment might look like. Recall that our concern was that smokers and non-smokers might differ in lots of ways. The solution, as long as you have no ethics, is to control who smokes and who doesn't. Specifically, if we randomly divide participants into two groups, and force half of them to become smokers, then it's very unlikely that the groups will differ in any respect other than the fact that half of them smoke. That way, if our smoking group gets cancer at a higher rate than the non-smoking group, then we can feel pretty confident that (a) smoking does cause cancer and (b) we're murderers. Non-experimental research Non-experimental research is a broad term that covers "any study in which the researcher doesn't have quite as much control as they do in an experiment". Obviously, control is something that scientists like to have, but as the previous example illustrates, there are lots of situations in which you can't or shouldn't try to obtain that control. 
Since it's grossly unethical (and almost certainly criminal) to force people to smoke in order to find out if they get cancer, this is a good example of a situation in which you really shouldn't try to obtain experimental control. But there are other reasons too. Even leaving aside the ethical issues, our "smoking experiment" does have a few other issues. For instance, when I suggested that we "force" half of the people to become smokers, I must have been talking about starting with a sample of non-smokers, and then forcing them to become smokers. While this sounds like the kind of solid, evil experimental design that a mad scientist would love, it might not be a very sound way of investigating the effect in the real world. For instance, suppose that smoking only causes lung cancer when people have poor diets, and suppose also that people who normally smoke do tend to have poor diets. However, since the "smokers" in our experiment aren't "natural" smokers (i.e., we forced non-smokers to become smokers; they didn't take on all of the other normal, real life characteristics that smokers might tend to possess) they probably have better diets. As such, in this silly example they wouldn't get lung cancer, and our experiment will fail, because it violates the structure of the "natural" world (the technical name for this is an "artifactual" result; see later). One distinction worth making between two types of non-experimental research is the difference between quasi-experimental research and case studies. The example I discussed earlier – in which we wanted to examine incidence of lung cancer among smokers and non-smokers, without trying to control who smokes and who doesn't – is a quasi-experimental design. That is, it's the same as an experiment, but we don't control the predictors (IVs). We can still use statistics to analyse the results, it's just that we have to be a lot more careful. The alternative approach, case studies, aims to provide a very detailed description of one or a few instances. In general, you can't use statistics to analyse the results of case studies, and it's usually very hard to draw any general conclusions about "people in general" from a few isolated examples. However, case studies are very useful in some situations. Firstly, there are situations where you don't have any alternative: neuropsychology has this issue a lot. Sometimes, you just can't find a lot of people with brain damage in a specific area, so the only thing you can do is describe those cases that you do have in as much detail and with as much care as you can. However, there are also some genuine advantages to case studies: because you don't have as many people to study, you have the ability to invest lots of time and effort trying to understand the specific factors at play in each case. This is a very valuable thing to do. As a consequence, case studies can complement the more statistically-oriented approaches that you see in experimental and quasi-experimental designs. We won't talk much about case studies in these lectures, but they are nevertheless very valuable tools!
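As a small aside on the randomisation idea raised in the discussion of experimental research above, here is a minimal sketch of what random assignment to conditions might look like in R. The participant numbers and condition labels are made up purely for illustration, and R itself isn't introduced until the next chapter:

```r
# 20 hypothetical participant IDs
participants <- 1:20

# Randomly shuffle 10 "treatment" and 10 "control" labels across the
# participants. Because the assignment is random, any pre-existing
# differences (diet, occupation, etc.) should end up spread roughly
# evenly across the two groups.
set.seed(1)   # only so that the example gives the same answer each time
condition <- sample(rep(c("treatment", "control"), each = 10))

# One row per participant, showing which condition they were assigned to
data.frame(participant = participants, condition = condition)
```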
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/02%3A_A_Brief_Introduction_to_Research_Design/2.05%3A_Experimental_and_Non-experimental_Research.txt
More than any other thing, a scientist wants their research to be "valid". The conceptual idea behind validity is very simple: can you trust the results of your study? If not, the study is invalid. However, while it's easy to state, in practice it's much harder to check validity than it is to check reliability. And in all honesty, there's no precise, clearly agreed upon notion of what validity actually is. In fact, there's lots of different kinds of validity, each of which raises its own issues, and not all forms of validity are relevant to all studies. I'm going to talk about five different types: • Internal validity • External validity • Construct validity • Face validity • Ecological validity To give you a quick guide as to what matters here...(1) Internal and external validity are the most important, since they tie directly to the fundamental question of whether your study really works. (2) Construct validity asks whether you're measuring what you think you are. (3) Face validity isn't terribly important except insofar as you care about "appearances". (4) Ecological validity is a special case of face validity that corresponds to a kind of appearance that you might care about a lot. Internal validity Internal validity refers to the extent to which you are able to draw the correct conclusions about the causal relationships between variables. It's called "internal" because it refers to the relationships between things "inside" the study. Let's illustrate the concept with a simple example. Suppose you're interested in finding out whether a university education makes you write better. To do so, you get a group of first-year students, ask them to write a 1000 word essay, and count the number of spelling and grammatical errors they make. Then you find some third-year students, who obviously have had more of a university education than the first-years, and repeat the exercise. And let's suppose it turns out that the third-year students produce fewer errors. And so you conclude that a university education improves writing skills. Right? Except... the big problem that you have with this experiment is that the third-year students are older, and they've had more experience with writing things. So it's hard to know for sure what the causal relationship is: Do older people write better? Or people who have had more writing experience? Or people who have had more education? Which of the above is the true cause of the superior performance of the third-years? Age? Experience? Education? You can't tell. This is an example of a failure of internal validity, because your study doesn't properly tease apart the causal relationships between the different variables. External validity External validity relates to the generalisability of your findings. That is, to what extent do you expect to see the same pattern of results in "real life" as you saw in your study. To put it a bit more precisely, any study that you do in psychology will involve a fairly specific set of questions or tasks, will occur in a specific environment, and will involve participants that are drawn from a particular subgroup. So, if it turns out that the results don't actually generalise to people and situations beyond the ones that you studied, then what you've got is a lack of external validity. The classic example of this issue is the fact that a very large proportion of studies in psychology will use undergraduate psychology students as the participants. 
Obviously, however, the researchers don't care only about psychology students; they care about people in general. Given that, a study that uses only psych students as participants always carries a risk of lacking external validity. That is, if there's something "special" about psychology students that makes them different to the general populace in some relevant respect, then we may start worrying about a lack of external validity. That said, it is absolutely critical to realise that a study that uses only psychology students does not necessarily have a problem with external validity. I'll talk about this again later, but it's such a common mistake that I'm going to mention it here. The external validity is threatened by the choice of population if (a) the population from which you sample your participants is very narrow (e.g., psych students), and (b) the narrow population that you sampled from is systematically different from the general population, in some respect that is relevant to the psychological phenomenon that you intend to study. The italicised part is the bit that lots of people forget: it is true that psychology undergraduates differ from the general population in lots of ways, and so a study that uses only psych students may have problems with external validity. However, if those differences aren't very relevant to the phenomenon that you're studying, then there's nothing to worry about. To make this a bit more concrete, here's two extreme examples: • You want to measure "attitudes of the general public towards psychotherapy", but all of your participants are psychology students. This study would almost certainly have a problem with external validity. • You want to measure the effectiveness of a visual illusion, and your participants are all psychology students. This study is very unlikely to have a problem with external validity. Having just spent the last couple of paragraphs focusing on the choice of participants (since that's the big issue that everyone tends to worry most about), it's worth remembering that external validity is a broader concept. The following are also examples of things that might pose a threat to external validity, depending on what kind of study you're doing: • People might answer a "psychology questionnaire" in a manner that doesn't reflect what they would do in real life. • Your lab experiment on (say) "human learning" has a different structure to the learning problems people face in real life. Construct validity Construct validity is basically a question of whether you're measuring what you want to be measuring. A measurement has good construct validity if it is actually measuring the correct theoretical construct, and bad construct validity if it doesn't. To give a very simple (if ridiculous) example, suppose I'm trying to investigate the rates with which university students cheat on their exams. And the way I attempt to measure it is by asking the cheating students to stand up in the lecture theatre so that I can count them. When I do this with a class of 300 students, 0 people claim to be cheaters. So I therefore conclude that the proportion of cheaters in my class is 0%. Clearly this is a bit ridiculous. But the point here is not that this is a very deep methodological example, but rather to explain what construct validity is. 
The problem with my measure is that while I'm trying to measure "the proportion of people who cheat" what I'm actually measuring is "the proportion of people stupid enough to own up to cheating, or bloody minded enough to pretend that they do". Obviously, these aren't the same thing! So my study has gone wrong, because my measurement has very poor construct validity. Face validity Face validity simply refers to whether or not a measure "looks like" it's doing what it's supposed to, nothing more. If I design a test of intelligence, and people look at it and they say "no, that test doesn't measure intelligence", then the measure lacks face validity. It's as simple as that. Obviously, face validity isn't very important from a pure scientific perspective. After all, what we care about is whether or not the measure actually does what it's supposed to do, not whether it looks like it does what it's supposed to do. As a consequence, we generally don't care very much about face validity. That said, the concept of face validity serves three useful pragmatic purposes: • Sometimes, an experienced scientist will have a "hunch" that a particular measure won't work. While these sorts of hunches have no strict evidentiary value, it's often worth paying attention to them. Because oftentimes people have knowledge that they can't quite verbalise, there might be something to worry about even if you can't quite say why. In other words, when someone you trust criticises the face validity of your study, it's worth taking the time to think more carefully about your design to see if you can think of reasons why it might go awry. Mind you, if you don't find any reason for concern, then you should probably not worry: after all, face validity really doesn't matter much. • Often (very often), completely uninformed people will also have a "hunch" that your research is crap. And they'll criticise it on the internet or something. On close inspection, you'll often notice that these criticisms are actually focused entirely on how the study "looks", but not on anything deeper. The concept of face validity is useful for gently explaining to people that they need to substantiate their arguments further. • Expanding on the last point, if the beliefs of untrained people are critical (e.g., this is often the case for applied research where you actually want to convince policy makers of something or other) then you have to care about face validity. Simply because – whether you like it or not – a lot of people will use face validity as a proxy for real validity. If you want the government to change a law on scientific, psychological grounds, then it won't matter how good your studies "really" are. If they lack face validity, you'll find that politicians ignore you. Of course, it's somewhat unfair that policy often depends more on appearance than fact, but that's how things go. Ecological validity Ecological validity is a different notion of validity, which is similar to external validity, but less important. The idea is that, in order to be ecologically valid, the entire set up of the study should closely approximate the real world scenario that is being investigated. In a sense, ecological validity is a kind of face validity – it relates mostly to whether the study "looks" right, but with a bit more rigour to it. To be ecologically valid, the study has to look right in a fairly specific way. The idea behind it is the intuition that a study that is ecologically valid is more likely to be externally valid. 
It's no guarantee, of course. But the nice thing about ecological validity is that it's much easier to check whether a study is ecologically valid than it is to check whether a study is externally valid. A simple example would be eyewitness identification studies. Most of these studies tend to be done in a university setting, often with a fairly simple array of faces to look at rather than a line up. The length of time between seeing the "criminal" and being asked to identify the suspect in the "line up" is usually shorter. The "crime" isn't real, so there's no chance of the witness being scared, and there are no police officers present, so there's not as much chance of feeling pressured. These things all mean that the study definitely lacks ecological validity. They might (but might not) mean that it also lacks external validity.
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/02%3A_A_Brief_Introduction_to_Research_Design/2.06%3A_Assessing_the_Validity_of_a_Study.txt
If we look at the issue of validity in the most general fashion, the two biggest worries that we have are confounds and artifacts. These two terms are defined in the following way: • Confound: A confound is an additional, often unmeasured variable10 that turns out to be related to both the predictors and the outcomes. The existence of confounds threatens the internal validity of the study because you can't tell whether the predictor causes the outcome, or if the confounding variable causes it, etc. • Artifact: A result is said to be "artifactual" if it only holds in the special situation that you happened to test in your study. The possibility that your result is an artifact describes a threat to your external validity, because it raises the possibility that you can't generalise your results to the actual population that you care about. As a general rule, confounds are a bigger concern for non-experimental studies, precisely because they're not proper experiments: by definition, you're leaving lots of things uncontrolled, so there's a lot of scope for confounds working their way into your study. Experimental research tends to be much less vulnerable to confounds: the more control you have over what happens during the study, the more you can prevent confounds from appearing. However, there's always swings and roundabouts, and when we start thinking about artifacts rather than confounds, the shoe is very firmly on the other foot. For the most part, artifactual results tend to be more of a concern for experimental studies than for non-experimental studies. To see this, it helps to realise that the reason that a lot of studies are non-experimental is precisely because what the researcher is trying to do is examine human behaviour in a more naturalistic context. By working in a more real-world context, you lose experimental control (making yourself vulnerable to confounds) but because you tend to be studying human psychology "in the wild" you reduce the chances of getting an artifactual result. Or, to put it another way, when you take psychology out of the wild and bring it into the lab (which we usually have to do to gain our experimental control), you always run the risk of accidentally studying something different than you wanted to study: which is more or less the definition of an artifact. Be warned though: the above is a rough guide only. It's absolutely possible to have confounds in an experiment, and to get artifactual results with non-experimental studies. This can happen for all sorts of reasons, not least of which is researcher error. In practice, it's really hard to think everything through ahead of time, and even very good researchers make mistakes. But other times it's unavoidable, simply because the researcher has ethics (e.g., see "differential attrition"). Okay. There's a sense in which almost any threat to validity can be characterised as a confound or an artifact: they're pretty vague concepts. So let's have a look at some of the most common examples… History effects History effects refer to the possibility that specific events may occur during the study itself that might influence the outcomes. For instance, something might happen in between a pre-test and a post-test. Or, in between testing participant 23 and participant 24. Alternatively, it might be that you're looking at an older study, which was perfectly valid for its time, but the world has changed enough since then that the conclusions are no longer trustworthy. 
Examples of things that would count as history effects: • You're interested in how people think about risk and uncertainty. You started your data collection in December 2010. But finding participants and collecting data takes time, so you're still finding new people in February 2011. Unfortunately for you (and even more unfortunately for others), the Queensland floods occurred in January 2011, causing billions of dollars of damage and killing many people. Not surprisingly, the people tested in February 2011 express quite different beliefs about handling risk than the people tested in December 2010. Which (if any) of these reflects the "true" beliefs of participants? I think the answer is probably both: the Queensland floods genuinely changed the beliefs of the Australian public, though possibly only temporarily. The key thing here is that the "history" of the people tested in February is quite different to people tested in December. • You're testing the psychological effects of a new anti-anxiety drug. So what you do is measure anxiety before administering the drug (e.g., by self-report, and taking physiological measures, let's say), then you administer the drug, and then you take the same measures afterwards. In the middle, however, because your labs are in Los Angeles, there's an earthquake, which increases the anxiety of the participants. Maturation effects As with history effects, maturational effects are fundamentally about change over time. However, maturation effects aren't in response to specific events. Rather, they relate to how people change on their own over time: we get older, we get tired, we get bored, etc. Some examples of maturation effects: • When doing developmental psychology research, you need to be aware that children grow up quite rapidly. So, suppose that you want to find out whether some educational trick helps with vocabulary size among 3 year olds. One thing that you need to be aware of is that the vocabulary size of children that age is growing at an incredible rate (multiple words per day), all on its own. If you design your study without taking this maturational effect into account, then you won't be able to tell if your educational trick works. • When running a very long experiment in the lab (say, something that goes for 3 hours), it's very likely that people will begin to get bored and tired, and that this maturational effect will cause performance to decline, regardless of anything else going on in the experiment. Repeated testing effects An important type of history effect is the effect of repeated testing. Suppose I want to take two measurements of some psychological construct (e.g., anxiety). One thing I might be worried about is if the first measurement has an effect on the second measurement. In other words, this is a history effect in which the "event" that influences the second measurement is the first measurement itself! This is not at all uncommon. Examples of this include: • Learning and practice: e.g., "intelligence" at time 2 might appear to go up relative to time 1 because participants learned the general rules of how to solve "intelligence-test-style" questions during the first testing session. • Familiarity with the testing situation: e.g., if people are nervous at time 1, this might make performance go down; after sitting through the first testing situation, they might calm down a lot precisely because they've seen what the testing looks like. 
• Auxiliary changes caused by testing: e.g., if a questionnaire assessing mood is boring, then the mood measured at time 2 is more likely to be "bored", precisely because of the boring measurement made at time 1. Selection bias Selection bias is a pretty broad term. Suppose that you're running an experiment with two groups of participants, where each group gets a different "treatment", and you want to see if the different treatments lead to different outcomes. However, suppose that, despite your best efforts, you've ended up with a gender imbalance across groups (say, group A has 80% females and group B has 50% females). It might sound like this could never happen, but trust me, it can. This is an example of a selection bias, in which the people "selected into" the two groups have different characteristics. If any of those characteristics turns out to be relevant (say, your treatment works better on females than males) then you're in a lot of trouble. Differential attrition One quite subtle danger to be aware of is called differential attrition, which is a kind of selection bias that is caused by the study itself. Suppose that, for the first time ever in the history of psychology, I manage to find the perfectly balanced and representative sample of people. I start running "Dan's incredibly long and tedious experiment" on my perfect sample, but then, because my study is incredibly long and tedious, lots of people start dropping out. I can't stop this: as we'll discuss later in the chapter on research ethics, participants absolutely have the right to stop doing any experiment, any time, for whatever reason they feel like, and as researchers we are morally (and professionally) obliged to remind people that they do have this right. So, suppose that "Dan's incredibly long and tedious experiment" has a very high drop out rate. What do you suppose the odds are that this drop out is random? Answer: zero. Almost certainly, the people who remain are more conscientious, more tolerant of boredom, etc., than those who leave. To the extent that (say) conscientiousness is relevant to the psychological phenomenon that I care about, this attrition can decrease the validity of my results. When thinking about the effects of differential attrition, it is sometimes helpful to distinguish between two different types. The first is homogeneous attrition, in which the attrition effect is the same for all groups, treatments or conditions. In the example I gave above, the differential attrition would be homogeneous if (and only if) the easily bored participants are dropping out of all of the conditions in my experiment at about the same rate. In general, the main effect of homogeneous attrition is likely to be that it makes your sample unrepresentative. As such, the biggest worry that you'll have is that the generalisability of the results decreases: in other words, you lose external validity. The second type of differential attrition is heterogeneous attrition, in which the attrition effect is different for different groups. This is a much bigger problem: not only do you have to worry about your external validity, you also have to worry about your internal validity too. To see why this is the case, let's consider a very dumb study in which I want to see if insulting people makes them act in a more obedient way. Why anyone would actually want to study that I don't know, but let's suppose I really, deeply cared about this. So, I design my experiment with two conditions. 
In the “treatment” condition, the experimenter insults the participant and then gives them a questionnaire designed to measure obedience. In the “control” condition, the experimenter engages in a bit of pointless chitchat and then gives them the questionnaire. Leaving aside the questionable scientific merits and dubious ethics of such a study, let’s have a think about what might go wrong here. As a general rule, when someone insults me to my face, I tend to get much less co-operative. So, there’s a pretty good chance that a lot more people are going to drop out of the treatment condition than the control condition. And this drop out isn’t going to be random. The people most likely to drop out would probably be the people who don’t care all that much about the importance of obediently sitting through the experiment. Since the most bloody minded and disobedient people all left the treatment group but not the control group, we’ve introduced a confound: the people who actually took the questionnaire in the treatment group were already more likely to be dutiful and obedient than the people in the control group. In short, in this study insulting people doesn’t make them more obedient: it makes the more disobedient people leave the experiment! The internal validity of this experiment is completely shot. Non-response bias Non-response bias is closely related to selection bias, and to differential attrition. The simplest version of the problem goes like this. You mail out a survey to 1000 people, and only 300 of them reply. The 300 people who replied are almost certainly not a random subsample. People who respond to surveys are systematically different to people who don’t. This introduces a problem when trying to generalise from those 300 people who replied, to the population at large; since you now have a very non-random sample. The issue of non-response bias is more general than this, though. Among the (say) 300 people that did respond to the survey, you might find that not everyone answers every question. If (say) 80 people chose not to answer one of your questions, does this introduce problems? As always, the answer is maybe. If the question that wasn’t answered was on the last page of the questionnaire, and those 80 surveys were returned with the last page missing, there’s a good chance that the missing data isn’t a big deal: probably the pages just fell off. However, if the question that 80 people didn’t answer was the most confrontational or invasive personal question in the questionnaire, then almost certainly you’ve got a problem. In essence, what you’re dealing with here is what’s called the problem of missing data. If the data that is missing was “lost” randomly, then it’s not a big problem. If it’s missing systematically, then it can be a big problem. Regression to the mean Regression to the mean is a curious variation on selection bias. It refers to any situation where you select data based on an extreme value on some measure. Because the measure has natural variation, it almost certainly means that when you take a subsequent measurement, that later measurement will be less extreme than the first one, purely by chance. Here’s an example. Suppose I’m interested in whether a psychology education has an adverse effect on very smart kids. To do this, I find the 20 psych I students with the best high school grades and look at how well they’re doing at university. 
It turns out that they’re doing a lot better than average, but they’re not topping the class at university, even though they did top their classes at high school. What’s going on? The natural first thought is that this must mean that the psychology classes must be having an adverse effect on those students. However, while that might very well be the explanation, it’s more likely that what you’re seeing is an example of “regression to the mean”. To see how it works, let’s take a moment to think about what is required to get the best mark in a class, regardless of whether that class be at high school or at university. When you’ve got a big class, there are going to be lots of very smart people enrolled. To get the best mark you have to be very smart, work very hard, and be a bit lucky. The exam has to ask just the right questions for your idiosyncratic skills, and you have to not make any dumb mistakes (we all do that sometimes) when answering them. And that’s the thing: intelligence and hard work are transferrable from one class to the next. Luck isn’t. The people who got lucky in high school won’t be the same as the people who get lucky at university. That’s the very definition of “luck”. The consequence of this is that, when you select people at the very extreme values of one measurement (the top 20 students), you’re selecting for hard work, skill and luck. But because the luck doesn’t transfer to the second measurement (only the skill and work), these people will all be expected to drop a little bit when you measure them a second time (at university). So their scores fall back a little bit, back towards everyone else. This is regression to the mean. Regression to the mean is surprisingly common. For instance, if two very tall people have kids, their children will tend to be taller than average, but not as tall as the parents. The reverse happens with very short parents: two very short parents will tend to have short children, but nevertheless those kids will tend to be taller than the parents. It can also be extremely subtle. For instance, there have been studies done that suggested that people learn better from negative feedback than from positive feedback. However, the way that people tried to show this was to give people positive reinforcement whenever they did good, and negative reinforcement when they did bad. And what you see is that after the positive reinforcement, people tended to do worse; but after the negative reinforcement they tended to do better. But! Notice that there’s a selection bias here: when people do very well, you’re selecting for “high” values, and so you should expect (because of regression to the mean) that performance on the next trial should be worse, regardless of whether reinforcement is given. Similarly, after a bad trial, people will tend to improve all on their own. The apparent superiority of negative feedback is an artifact caused by regression to the mean (see Kahneman and Tversky 1973 for discussion). Experimenter bias Experimenter bias can come in multiple forms. The basic idea is that the experimenter, despite the best of intentions, can accidentally end up influencing the results of the experiment by subtly communicating the “right answer” or the “desired behaviour” to the participants. Typically, this occurs because the experimenter has special knowledge that the participant does not – either the right answer to the questions being asked, or knowledge of the expected pattern of performance for the condition that the participant is in, and so on. 
The classic example of this happening is the case study of "Clever Hans", which dates back to 1907 (Pfungst 1911; Hothersall 2004). Clever Hans was a horse that apparently was able to read and count, and perform other human-like feats of intelligence. After Clever Hans became famous, psychologists started examining his behaviour more closely. It turned out that – not surprisingly – Hans didn't know how to do maths. Rather, Hans was responding to the human observers around him: they did know how to count, and the horse had learned to change its behaviour when people changed theirs. The general solution to the problem of experimenter bias is to engage in double-blind studies, where neither the experimenter nor the participant knows which condition the participant is in, or knows what the desired behaviour is. This provides a very good solution to the problem, but it's important to recognise that it's not quite ideal, and hard to pull off perfectly. For instance, the obvious way that I could try to construct a double-blind study is to have one of my Ph.D. students (one who doesn't know anything about the experiment) run the study. That feels like it should be enough. The only person (me) who knows all the details (e.g., correct answers to the questions, assignments of participants to conditions) has no interaction with the participants, and the person who does all the talking to people (the Ph.D. student) doesn't know anything. Except, that last part is very unlikely to be true. In order for the Ph.D. student to run the study effectively, they need to have been briefed by me, the researcher. And, as it happens, the Ph.D. student also knows me, and knows a bit about my general beliefs about people and psychology (e.g., I tend to think humans are much smarter than psychologists give them credit for). As a result of all this, it's almost impossible for the experimenter to avoid knowing a little bit about what expectations I have. And even a little bit of knowledge can have an effect: suppose the experimenter accidentally conveys the fact that the participants are expected to do well in this task. Well, there's a thing called the "Pygmalion effect": if you expect great things of people, they'll rise to the occasion; but if you expect them to fail, they'll do that too. In other words, the expectations become a self-fulfilling prophecy. Demand effects and reactivity When talking about experimenter bias, the worry is that the experimenter's knowledge or desires for the experiment are communicated to the participants, and that these affect people's behaviour (Rosenthal 1966). However, even if you manage to stop this from happening, it's almost impossible to stop people from knowing that they're part of a psychological study. And the mere fact of knowing that someone is watching/studying you can have a pretty big effect on behaviour. This is generally referred to as reactivity or demand effects. The basic idea is captured by the Hawthorne effect: people alter their performance because of the attention that the study focuses on them. The effect takes its name from the "Hawthorne Works" factory outside of Chicago (see Adair 1984). A study done in the 1920s looking at the effects of lighting on worker productivity at the factory found that the improvements in productivity were due to the fact that the workers knew they were being studied, rather than to the lighting. 
To get a bit more specific about some of the ways in which the mere fact of being in a study can change how people behave, it helps to think like a social psychologist and look at some of the roles that people might adopt during an experiment, but might not adopt if the corresponding events were occurring in the real world: • The good participant tries to be too helpful to the researcher: he or she seeks to figure out the experimenter's hypotheses and confirm them. • The negative participant does the exact opposite of the good participant: he or she seeks to break or destroy the study or the hypothesis in some way. • The faithful participant is unnaturally obedient: he or she seeks to follow instructions perfectly, regardless of what might have happened in a more realistic setting. • The apprehensive participant gets nervous about being tested or studied, so much so that his or her behaviour becomes highly unnatural, or overly socially desirable. Placebo effects The placebo effect is a specific type of demand effect that we worry a lot about. It refers to the situation where the mere fact of being treated causes an improvement in outcomes. The classic example comes from clinical trials: if you give people a completely chemically inert drug and tell them that it's a cure for a disease, they will tend to get better faster than people who aren't treated at all. In other words, it is people's belief that they are being treated that causes the improved outcomes, not the drug. Situation, measurement and subpopulation effects In some respects, these are catch-all terms for "all other threats to external validity". They refer to the fact that the choice of subpopulation from which you draw your participants, the location, timing and manner in which you run your study (including who collects the data) and the tools that you use to make your measurements might all be influencing the results. Specifically, the worry is that these things might be influencing the results in such a way that the results won't generalise to a wider array of people, places and measures. Fraud, deception and self-deception It is difficult to get a man to understand something, when his salary depends on his not understanding it. – Upton Sinclair One final thing that I feel like I should mention. While reading what the textbooks often have to say about assessing the validity of the study, I couldn't help but notice that they seem to make the assumption that the researcher is honest. I find this hilarious. While the vast majority of scientists are honest, in my experience at least, some are not.11 Not only that, as I mentioned earlier, scientists are not immune to belief bias – it's easy for a researcher to end up deceiving themselves into believing the wrong thing, and this can lead them to conduct subtly flawed research, and then hide those flaws when they write it up. So you need to consider not only the (probably unlikely) possibility of outright fraud, but also the (probably quite common) possibility that the research is unintentionally "slanted". I opened a few standard textbooks and didn't find much of a discussion of this problem, so here's my own attempt to list a few ways in which these issues can arise: • Data fabrication. Sometimes, people just make up the data. This is occasionally done with "good" intentions. For instance, the researcher believes that the fabricated data do reflect the truth, and may actually reflect "slightly cleaned up" versions of actual data. 
On other occasions, the fraud is deliberate and malicious. Some high-profile examples where data fabrication has been alleged or shown include Cyril Burt (a psychologist who is thought to have fabricated some of his data), Andrew Wakefield (who has been accused of fabricating his data connecting the MMR vaccine to autism) and Hwang Woo-suk (who falsified a lot of his data on stem cell research). • Hoaxes. Hoaxes share a lot of similarities with data fabrication, but they differ in the intended purpose. A hoax is often a joke, and many of them are intended to be (eventually) discovered. Often, the point of a hoax is to discredit someone or some field. There's quite a few well-known scientific hoaxes that have occurred over the years (e.g., Piltdown man), some of which were deliberate attempts to discredit particular fields of research (e.g., the Sokal affair). • Data misrepresentation. While fraud gets most of the headlines, it's much more common in my experience to see data being misrepresented. When I say this, I'm not referring to newspapers getting it wrong (which they do, almost always). I'm referring to the fact that often, the data don't actually say what the researchers think they say. My guess is that, almost always, this isn't the result of deliberate dishonesty, it's due to a lack of sophistication in the data analyses. For instance, think back to the example of Simpson's paradox that I discussed in the beginning of these notes. It's very common to see people present "aggregated" data of some kind; and sometimes, when you dig deeper and find the raw data yourself, you find that the aggregated data tell a different story to the disaggregated data. Alternatively, you might find that some aspect of the data is being hidden, because it tells an inconvenient story (e.g., the researcher might choose not to refer to a particular variable). There's a lot of variants on this, many of which are very hard to detect. • Study "misdesign". Okay, this one is subtle. Basically, the issue here is that a researcher designs a study that has built-in flaws, and those flaws are never reported in the paper. The data that are reported are completely real, and are correctly analysed, but they are produced by a study that is actually quite wrongly put together. The researcher really wants to find a particular effect, and so the study is set up in such a way as to make it "easy" to (artifactually) observe that effect. One sneaky way to do this – in case you're feeling like dabbling in a bit of fraud yourself – is to design an experiment in which it's obvious to the participants what they're "supposed" to be doing, and then let reactivity work its magic for you. If you want, you can add all the trappings of double-blind experimentation etc. It won't make a difference, since the study materials themselves are subtly telling people what you want them to do. When you write up the results, the fraud won't be obvious to the reader: what's obvious to the participant when they're in the experimental context isn't always obvious to the person reading the paper. Of course, the way I've described this makes it sound like it's always fraud: probably there are cases where this is done deliberately, but in my experience the bigger concern has been with unintentional misdesign. The researcher believes … and so the study just happens to end up with a built-in flaw, and that flaw then magically erases itself when the study is written up for publication. • Data mining & post hoc hypothesising. 
Another way in which the authors of a study can more or less lie about what they found is by engaging in what's referred to as "data mining". As we'll discuss later in the class, if you keep trying to analyse your data in lots of different ways, you'll eventually find something that "looks" like a real effect but isn't. This is referred to as "data mining". It used to be quite rare because data analysis used to take weeks, but now that everyone has very powerful statistical software on their computers, it's becoming very common. Data mining per se isn't "wrong", but the more that you do it, the bigger the risk you're taking. The thing that is wrong, and I suspect is very common, is unacknowledged data mining. That is, the researcher runs every possible analysis known to humanity, finds the one that works, and then pretends that this was the only analysis that they ever conducted. Worse yet, they often "invent" a hypothesis after looking at the data, to cover up the data mining. To be clear: it's not wrong to change your beliefs after looking at the data, and to reanalyse your data using your new "post hoc" hypotheses. What is wrong (and, I suspect, common) is failing to acknowledge that you've done so. If you acknowledge that you did it, then other researchers are able to take your behaviour into account. If you don't, then they can't. And that makes your behaviour deceptive. Bad! • Publication bias & self-censoring. Finally, a pervasive bias is "non-reporting" of negative results. This is almost impossible to prevent. Journals don't publish every article that is submitted to them: they prefer to publish articles that find "something". So, if 20 people run an experiment looking at whether reading Finnegans Wake causes insanity in humans, and 19 of them find that it doesn't, which one do you think is going to get published? Obviously, it's the one study that did find that Finnegans Wake causes insanity.12 This is an example of a publication bias: since no-one ever published the 19 studies that didn't find an effect, a naive reader would never know that they existed. Worse yet, most researchers "internalise" this bias, and end up self-censoring their research. Knowing that negative results aren't going to be accepted for publication, they never even try to report them. As a friend of mine says "for every experiment that you get published, you also have 10 failures". And she's right. The catch is, while some (maybe most) of those studies are failures for boring reasons (e.g., you stuffed something up), others might be genuine "null" results that you ought to acknowledge when you write up the "good" experiment. And telling which is which is often hard to do. A good place to start is a paper by Ioannidis (2005) with the depressing title "Why most published research findings are false". I'd also suggest taking a look at work by Kühberger, Fritz, and Scherndl (2014) presenting statistical evidence that this actually happens in psychology. There's probably a lot more issues like this to think about, but that'll do to start with. What I really want to point out is the blindingly obvious truth that real world science is conducted by actual humans, and only the most gullible of people automatically assumes that everyone else is honest and impartial. Actual scientists aren't usually that naive, but for some reason the world likes to pretend that we are, and the textbooks we usually write seem to reinforce that stereotype.
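Before leaving these threats behind, it may help to make one of the subtler ones discussed above, regression to the mean, a little more tangible. The following is a minimal simulation sketch in R; the functions it uses (such as rnorm()) are not introduced until much later in the book, and every number in it is arbitrary. The point is simply that people selected for extreme scores on one occasion tend to score closer to the average on the next occasion, even though nothing has actually happened to them:

```r
# Toy simulation of regression to the mean. Each score is a stable
# "ability" component plus luck that does not carry over between occasions.
set.seed(10)
n       <- 1000
ability <- rnorm(n, mean = 100, sd = 10)   # transfers from one occasion to the next
score1  <- ability + rnorm(n, sd = 10)     # high school: ability plus luck
score2  <- ability + rnorm(n, sd = 10)     # university: ability plus fresh luck

# Select the 20 best performers at time 1...
top <- order(score1, decreasing = TRUE)[1:20]

# ...and compare their average scores on the two occasions. The time 2
# average sits noticeably closer to 100, even though nothing was "done" to them.
mean(score1[top])
mean(score2[top])
```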
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/02%3A_A_Brief_Introduction_to_Research_Design/2.07%3A_Confounds_Artifacts_and_Other_Threats_to_Validity.txt
This chapter isn’t really meant to provide a comprehensive discussion of psychological research methods: it would require another volume just as long as this one to do justice to the topic. However, in real life statistics and study design are tightly intertwined, so it’s very handy to discuss some of the key topics. In this chapter, I’ve briefly discussed the following topics: • Introduction to psychological measurement. What does it mean to operationalise a theoretical construct? What does it mean to have variables and take measurements? • Scales of measurement and types of variables. Remember that there are two different distinctions here: there’s the difference between discrete and continuous data, and there’s the difference between the four different scale types (nominal, ordinal, interval and ratio). • Reliability of a measurement. If I measure the “same” thing twice, should I expect to see the same result? Only if my measure is reliable. But what does it mean to talk about doing the “same” thing? Well, that’s why we have different types of reliability. Make sure you remember what they are. • Terminology: predictors and outcomes. What roles do variables play in an analysis? Can you remember the difference between predictors and outcomes? Dependent and independent variables? Etc. • Experimental and non-experimental research designs. What makes an experiment an experiment? Is it a nice white lab coat, or does it have something to do with researcher control over variables? • Validity and its threats. Does your study measure what you want it to? How might things go wrong? And is it my imagination, or was that a very long list of possible ways in which things can go wrong? All this should make clear to you that study design is a critical part of research methodology. I built this chapter from the classic little book by Campbell and Stanley (1963), but there are of course a large number of textbooks out there on research design. Spend a few minutes with your favourite search engine and you’ll find dozens. References Campbell, D. T., and J. C. Stanley. 1963. Experimental and Quasi-Experimental Designs for Research. Boston, MA: Houghton Mifflin. Stevens, S. S. 1946. “On the Theory of Scales of Measurement.” Science 103: 677–80. Kahneman, D., and A. Tversky. 1973. “On the Psychology of Prediction.” Psychological Review 80: 237–51. Pfungst, O. 1911. Clever Hans (the Horse of Mr. von Osten): A Contribution to Experimental Animal and Human Psychology. Translated by C. L. Rahn. New York: Henry Holt. Hothersall, D. 2004. History of Psychology. McGraw-Hill. Rosenthal, R. 1966. Experimenter Effects in Behavioral Research. New York: Appleton. Adair, G. 1984. “The Hawthorne Effect: A Reconsideration of the Methodological Artifact.” Journal of Applied Psychology 69: 334–45. Ioannidis, John P. A. 2005. “Why Most Published Research Findings Are False.” PLoS Med 2 (8). Public Library of Science: 697–701. Kühberger, A, A Fritz, and T. Scherndl. 2014. “Publication Bias in Psychology: A Diagnosis Based on the Correlation Between Effect Size and Sample Size.” Public Library of Science One 9: 1–8. 1. Presidential Address to the First Indian Statistical Congress, 1938. Source: http://en.wikiquote.org/wiki/Ronald_Fisher 2. Well… now this is awkward, isn’t it? This section is one of the oldest parts of the book, and it’s outdated in a rather embarrassing way. I wrote this in 2010, at which point all of those facts were true. Revisiting this in 2018… well I’m not 33 any more, but that’s not surprising I suppose. 
I can't imagine my chromosomes have changed, so I'm going to guess my karyotype was then and is now XY. The self-identified gender, on the other hand… ah. I suppose the fact that the title page now refers to me as Danielle rather than Daniel might possibly be a giveaway, but I don't typically identify as "male" on a gender questionnaire these days, and I prefer "she/her" pronouns as a default (it's a long story)! I did think a little about how I was going to handle this in the book, actually. The book has a somewhat distinct authorial voice to it, and I feel like it would be a rather different work if I went back and wrote everything as Danielle and updated all the pronouns in the work. Besides, it would be a lot of work, so I've left my name as "Dan" throughout the book, and in any case "Dan" is a perfectly good nickname for "Danielle", don't you think? In any case, it's not a big deal. I only wanted to mention it to make life a little easier for readers who aren't sure how to refer to me. I still don't like anchovies though :-) 3. Actually, I've been informed by readers with greater physics knowledge than I that temperature isn't strictly an interval scale, in the sense that the amount of energy required to heat something up by 3° depends on its current temperature. So in the sense that physicists care about, temperature isn't actually an interval scale. But it still makes a cute example, so I'm going to ignore this little inconvenient truth. 4. Annoyingly, though, there's a lot of different names used out there. I won't list all of them – there would be no point in doing that – other than to note that R often uses "response variable" where I've used "outcome", and a traditionalist would use "dependent variable". Sigh. This sort of terminological confusion is very common, I'm afraid. 5. The reason why I say that it's unmeasured is that if you have measured it, then you can use some fancy statistical tricks to deal with the confound. Because of the existence of these statistical solutions to the problem of confounds, we often refer to a confound that we have measured and dealt with as a covariate. Dealing with covariates is a topic for a more advanced course, but I thought I'd mention it in passing, since it's kind of comforting to at least know that this stuff exists. 6. Some people might argue that if you're not honest then you're not a real scientist. Which does have some truth to it I guess, but that's disingenuous (google the "No true Scotsman" fallacy). The fact is that there are lots of people who are employed ostensibly as scientists, and whose work has all of the trappings of science, but who are outright fraudulent. Pretending that they don't exist by saying that they're not scientists is just childish. 7. Clearly, the real effect is that only insane people would even try to read Finnegans Wake.
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/02%3A_A_Brief_Introduction_to_Research_Design/2.08%3A_Summary.txt
Robots are nice to work with. –Roger Zelazny13 In this chapter I'll discuss how to get started in R. I'll briefly talk about how to download and install R, but most of the chapter will be focused on getting you started typing R commands. Our goal in this chapter is not to learn any statistical concepts: we're just trying to learn the basics of how R works and get comfortable interacting with the system. To do this, we'll spend a bit of time using R as a simple calculator, since that's the easiest thing to do with R. In doing so, you'll get a bit of a feel for what it's like to work in R. From there I'll introduce some very basic programming ideas: in particular, I'll talk about the idea of defining variables to store information, and a few things that you can do with these variables. However, before going into any of the specifics, it's worth talking a little about why you might want to use R at all. Given that you're reading this, you've probably got your own reasons. However, if those reasons are "because that's what my stats class uses", it might be worth explaining a little why your lecturer has chosen to use R for the class. Of course, I don't really know why other people choose R, so I'm really talking about why I use it. • It's sort of obvious, but worth saying anyway: doing your statistics on a computer is faster, easier and more powerful than doing statistics by hand. Computers excel at mindless repetitive tasks, and a lot of statistical calculations are both mindless and repetitive. For most people, the only reason to ever do statistical calculations with pencil and paper is for learning purposes. In my class I do occasionally suggest doing some calculations that way, but the only real value to it is pedagogical. It does help you to get a "feel" for statistics to do some calculations yourself, so it's worth doing it once. But only once! • Doing statistics in a spreadsheet (e.g., Microsoft Excel) is generally a bad idea in the long run. Although many people are likely to feel more familiar with them, spreadsheets are very limited in terms of what analyses they allow you to do. If you get into the habit of trying to do your real life data analysis using spreadsheets, then you've dug yourself into a very deep hole. • Avoiding proprietary software is a very good idea. There are a lot of commercial packages out there that you can buy, some of which I like and some of which I don't. They're usually very glossy in their appearance, and generally very powerful (much more powerful than spreadsheets). However, they're also very expensive: usually, the company sells "student versions" (crippled versions of the real thing) very cheaply; they sell full-powered "educational versions" at a price that makes me wince; and they sell commercial licences with a staggeringly high price tag. The business model here is to suck you in during your student days, and then leave you dependent on their tools when you go out into the real world. It's hard to blame them for trying, but personally I'm not in favour of shelling out thousands of dollars if I can avoid it. And you can avoid it: if you make use of packages like R that are open source and free, you never get trapped having to pay exorbitant licensing fees. • Something that you might not appreciate now, but will love later on if you do anything involving data analysis, is the fact that R is highly extensible. When you download and install R, you get all the basic "packages", and those are very powerful on their own. 
However, because R is so open and so widely used, it’s become something of a standard tool in statistics, and so lots of people write their own packages that extend the system. And these are freely available too. One of the consequences of this, I’ve noticed, is that if you open up an advanced textbook (a recent one, that is), rather than an introductory textbook, a lot of them use R. In other words, if you learn how to do your basic statistics in R, then you’re a lot closer to being able to use the state of the art methods than you would be if you’d started out with a “simpler” system: so if you want to become a genuine expert in psychological data analysis, learning R is a very good use of your time.
• Related to the previous point: R is a real programming language. As you get better at using R for data analysis, you’re also learning to program. To some people this might seem like a bad thing, but in truth, programming is a core research skill across a lot of the social and behavioural sciences. Think about how many surveys and experiments are done online, or presented on computers. Think about all those online social environments which you might be interested in studying; and maybe collecting data from in an automated fashion. Think about artificial intelligence systems, computer vision and speech recognition. If any of these are things that you think you might want to be involved in – as someone “doing research in psychology”, that is – you’ll need to know a bit of programming. And if you don’t already know how to program, then learning how to do statistics using R is a nice way to start.
Those are the main reasons I use R. It’s not without its flaws: it’s not easy to learn, and it has a few very annoying quirks to it that we’re all pretty much stuck with, but on the whole I think the strengths outweigh the weaknesses; more so than any other option I’ve encountered so far.
03: Getting Started with R
Okay, enough with the sales pitch. Let’s get started. Just as with any piece of software, R needs to be installed on a “computer”, which is a magical box that does cool things and delivers free ponies. Or something along those lines: I may be confusing computers with the iPad marketing campaigns. Anyway, R is freely distributed online, and you can download it from the R homepage, which is: http://cran.r-project.org/ At the top of the page – under the heading “Download and Install R” – you’ll see separate links for Windows users, Mac users, and Linux users. If you follow the relevant link, you’ll see that the online instructions are pretty self-explanatory, but I’ll walk you through the installation anyway. As of this writing, the current version of R is 3.0.2 (“Frisbee Sailing”), but they usually issue updates every six months, so you’ll probably have a newer version.14
Installing R on a Windows computer
The CRAN homepage changes from time to time, and it’s not particularly pretty, or all that well-designed quite frankly. But it’s not difficult to find what you’re after. In general you’ll find a link at the top of the page with the text “Download R for Windows”. If you click on that, it will take you to a page that offers you a few options. Again, at the very top of the page you’ll be told to click on a link that says to click here if you’re installing R for the first time. That’s probably what you want. This will take you to a page that has a prominent link at the top called “Download R 3.0.2 for Windows”. That’s the one you want.
Click on that and your browser should start downloading a file called `R-3.0.2-win.exe`, or whatever the equivalent version number is by the time you read this. The file for version 3.0.2 is about 54MB in size, so it may take some time depending on how fast your internet connection is. Once you’ve downloaded the file, double click to install it. As with any software you download online, Windows will ask you some questions about whether you trust the file and so on. After you click through those, it’ll ask you where you want to install it, and what components you want to install. The default values should be fine for most people, so again, just click through. Once all that is done, you should have R installed on your system. You can access it from the Start menu, or from the desktop if you asked it to add a shortcut there. You can now open up R in the usual way if you want to, but what I’m going to suggest is that instead of doing that you should now install RStudio. Installing R on a Mac When you click on the Mac OS X link, you should find yourself on a page with the title “R for Mac OS X”. The vast majority of Mac users will have a fairly recent version of the operating system: as long as you’re running Mac OS X 10.6 (Snow Leopard) or higher, then you’ll be fine.15 There’s a fairly prominent link on the page called “R-3.0.2.pkg”, which is the one you want. Click on that link and you’ll start downloading the installer file, which is (not surprisingly) called `R-3.0.2.pkg`. It’s about 61MB in size, so the download can take a while on slower internet connections. Once you’ve downloaded `R-3.0.2.pkg`, all you need to do is open it by double clicking on the package file. The installation should go smoothly from there: just follow all the instructions just like you usually do when you install something. Once it’s finished, you’ll find a file called `R.app` in the Applications folder. You can now open up R in the usual way16 if you want to, but what I’m going to suggest is that instead of doing that you should now install RStudio. Installing R on a Linux computer If you’re successfully managing to run a Linux box, regardless of what distribution, then you should find the instructions on the website easy enough. You can compile R from source yourself if you want, or install it through your package management system, which will probably have R in it. Alternatively, the CRAN site has precompiled binaries for Debian, Red Hat, Suse and Ubuntu and has separate instructions for each. Once you’ve got R installed, you can run it from the command line just by typing `R`. However, if you’re feeling envious of Windows and Mac users for their fancy GUIs, you can download RStudio too. Downloading and installing RStudio Okay, so regardless of what operating system you’re using, the last thing that I told you to do is to download RStudio. To understand why I’ve suggested this, you need to understand a little bit more about R itself. The term R doesn’t really refer to a specific application on your computer. Rather, it refers to the underlying statistical language. You can use this language through lots of different applications. When you install R initially, it comes with one application that lets you do this: it’s the R.exe application on a Windows machine, and the R.app application on a Mac. But that’s not the only way to do it. There are lots of different applications that you can use that will let you interact with R. One of those is called RStudio, and it’s the one I’m going to suggest that you use. 
RStudio provides a clean, professional interface to R that I find much nicer to work with than either the Windows or Mac defaults. Like R itself, RStudio is free software: you can find all the details on their webpage. In the meantime, you can download it here: http://www.RStudio.org/ When you visit the RStudio website, you’ll probably be struck by how much cleaner and simpler it is than the CRAN website,17 and how obvious it is what you need to do: click the big green button that says “Download”. When you click on the download button on the homepage it will ask you to choose whether you want the desktop version or the server version. You want the desktop version. After choosing the desktop version it will take you to a page (http://www.RStudio.org/download/desktop) that shows several possible downloads: there’s a different one for each operating system. However, the nice people at RStudio have designed the webpage so that it automatically recommends the download that is most appropriate for your computer. Click on the appropriate link, and the RStudio installer file will start downloading. Once it’s finished downloading, open the installer file in the usual way to install RStudio. After it’s finished installing, you can start R by opening RStudio. You don’t need to open R.app or R.exe in order to access R. RStudio will take care of that for you. To illustrate what RStudio looks like, Figure 3.1 shows a screenshot of an R session in progress. In this screenshot, you can see that it’s running on a Mac, but it looks almost identical no matter what operating system you have. The Windows version looks more like a Windows application (e.g., the menus are attached to the application window and the colour scheme is slightly different), but it’s more or less identical. There are a few minor differences in where things are located in the menus (I’ll point them out as we go along) and in the shortcut keys, because RStudio is trying to “feel” like a proper Mac application or a proper Windows application, and this means that it has to change its behaviour a little bit depending on what computer it’s running on. Even so, these differences are very small: I started out using the Mac version of RStudio and then started using the Windows version as well in order to write these notes. The only “shortcoming” I’ve found with RStudio is that – as of this writing – it’s still a work in progress. The “problem” is that they keep improving it. New features keep turning up in the more recent releases, so there’s a good chance that by the time you read this book there will be a version out that has some really neat things that weren’t in the version that I’m using now.
Starting up R
One way or another, regardless of what operating system you’re using and regardless of whether you’re using RStudio, or the default GUI, or even the command line, it’s time to open R and get started. When you do that, the first thing you’ll see (assuming that you’re looking at the R console, that is) is a whole lot of text that doesn’t make much sense. It should look something like this:
``````
R version 3.0.2 (2013-09-25) -- "Frisbee Sailing"
Copyright (C) 2013 The R Foundation for Statistical Computing
Platform: x86_64-apple-darwin10.8.0 (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

>
``````
Most of this text is pretty uninteresting, and when doing real data analysis you’ll never really pay much attention to it. The important part of it is this… ``>`` … which has a flashing cursor next to it. That’s the command prompt. When you see this, it means that R is waiting patiently for you to do something!
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.01%3A__Installing_R.txt
One of the easiest things you can do with R is use it as a simple calculator, so it’s a good place to start. For instance, try typing `10 + 20`, and hitting enter.18 When you do this, you’ve entered a command, and R will “execute” that command. What you see on screen now will be this:
``````
> 10 + 20
[1] 30
``````
Not a lot of surprises in this extract. But there’s a few things worth talking about, even with such a simple example. Firstly, it’s important that you understand how to read the extract. In this example, what I typed was the `10 + 20` part. I didn’t type the `>` symbol: that’s just the R command prompt and isn’t part of the actual command. And neither did I type the `[1] 30` part. That’s what R printed out in response to my command. Secondly, it’s important to understand how the output is formatted. Obviously, the correct answer to the sum `10 + 20` is `30`, and not surprisingly R has printed that out as part of its response. But it’s also printed out this `[1]` part, which probably doesn’t make a lot of sense to you right now. You’re going to see that a lot. I’ll talk about what this means in a bit more detail later on, but for now you can think of `[1] 30` as if R were saying “the answer to the 1st question you asked is 30”. That’s not quite the truth, but it’s close enough for now. And in any case it’s not really very interesting at the moment: we only asked R to calculate one thing, so obviously there’s only one answer printed on the screen. Later on this will change, and the `[1]` part will start to make a bit more sense. For now, I just don’t want you to get confused or concerned by it.
An important digression about formatting
Now that I’ve taught you these rules I’m going to change them pretty much immediately. That is because I want you to be able to copy code from the book directly into R if you want to test things or conduct your own analyses. However, if you copy this kind of code (that shows the command prompt and the results) directly into R you will get an error
``````
> 10 + 20
[1] 30
``````
``````
## Error: <text>:1:1: unexpected '>'
## 1: >
##     ^
``````
So instead, I’m going to provide code in a slightly different format so that it looks like this…
``10 + 20``
``## [1] 30``
There are two main differences.
• In your console, you type after the >, but from now on I won’t show the command prompt in the book.
• In the book, output is commented out with ##, in your console it appears directly after your code.
These two differences mean that if you’re working with an electronic version of the book, you can easily copy code out of the book and into the console. So for example if you copied the two lines of code from the book you’d get this
``10 + 20``
``## [1] 30``
``## [1] 30``
Be very careful to avoid typos
Before we go on to talk about other types of calculations that we can do with R, there’s a few other things I want to point out. The first thing is that, while R is good software, it’s still software. It’s pretty stupid, and because it’s stupid it can’t handle typos. It takes it on faith that you meant to type exactly what you did type. For example, suppose that you forgot to hit the shift key when trying to type `+`, and as a result your command ended up being `10 = 20` rather than `10 + 20`. Here’s what happens:
``10 = 20``
``## Error in 10 = 20: invalid (do_set) left-hand side to assignment``
What’s happened here is that R has attempted to interpret `10 = 20` as a command, and spits out an error message because the command doesn’t make any sense to it.
When a human looks at this, and then looks down at his or her keyboard and sees that `+` and `=` are on the same key, it’s pretty obvious that the command was a typo. But R doesn’t know this, so it gets upset. And, if you look at it from its perspective, this makes sense. All that R “knows” is that `10` is a legitimate number, `20` is a legitimate number, and `=` is a legitimate part of the language too. In other words, from its perspective this really does look like the user meant to type `10 = 20`, since all the individual parts of that statement are legitimate and it’s too stupid to realise that this is probably a typo. Therefore, R takes it on faith that this is exactly what you meant… it only “discovers” that the command is nonsense when it tries to follow your instructions, typo and all. And then it whinges, and spits out an error. Even more subtle is the fact that some typos won’t produce errors at all, because they happen to correspond to “well-formed” R commands. For instance, suppose that not only did I forget to hit the shift key when trying to type `10 + 20`, I also managed to press the key next to the one I meant to press. The resulting typo would produce the command `10 - 20`. Clearly, R has no way of knowing that you meant to add 20 to 10, not subtract 20 from 10, so what happens this time is this:
``10 - 20``
``## [1] -10``
In this case, R produces the right answer, but to the wrong question. To some extent, I’m stating the obvious here, but it’s important. The people who wrote R are smart. You, the user, are smart. But R itself is dumb. And because it’s dumb, it has to be mindlessly obedient. It does exactly what you ask it to do. There is no equivalent to “autocorrect” in R, and for good reason. When doing advanced stuff – and even the simplest of statistics is pretty advanced in a lot of ways – it’s dangerous to let a mindless automaton like R try to overrule the human user. But because of this, it’s your responsibility to be careful. Always make sure you type exactly what you mean. When dealing with computers, it’s not enough to type “approximately” the right thing. In general, you absolutely must be precise in what you say to R … like all machines it is too stupid to be anything other than absurdly literal in its interpretation.
R is (a bit) flexible with spacing
Of course, now that I’ve been so uptight about the importance of always being precise, I should point out that there are some exceptions. Or, more accurately, there are some situations in which R does show a bit more flexibility than my previous description suggests. The first thing R is smart enough to do is ignore redundant spacing. What I mean by this is that, when I typed `10 + 20` before, I could equally have done this
``10    + 20``
``## [1] 30``
or this
``10+20``
``## [1] 30``
and I would get exactly the same answer. However, that doesn’t mean that you can insert spaces in any old place. When we looked at the startup documentation in Section 3.1.5 it suggested that you could type `citation()` to get some information about how to cite R. If I do so…
``citation()``
``````
## 
## To cite R in publications use:
## 
##   R Core Team (2018). R: A language and environment for
##   statistical computing. R Foundation for Statistical Computing,
##   Vienna, Austria. URL https://www.R-project.org/.
## 
## A BibTeX entry for LaTeX users is
## 
##   @Manual{,
##     title = {R: A Language and Environment for Statistical Computing},
##     author = {{R Core Team}},
##     organization = {R Foundation for Statistical Computing},
##     address = {Vienna, Austria},
##     year = {2018},
##     url = {https://www.R-project.org/},
##   }
## 
## We have invested a lot of time and effort in creating R, please
## cite it when using it for data analysis. See also
## 'citation("pkgname")' for citing R packages.
``````
… it tells me to cite the R manual (R Core Team 2013). Let’s see what happens when I try changing the spacing. If I insert spaces in between the word and the parentheses, or inside the parentheses themselves, then all is well. That is, either of these two commands
``citation ()``
``citation( )``
will produce exactly the same response. However, what I can’t do is insert spaces in the middle of the word. If I try to do this, R gets upset:
``citat ion()``
``````
## Error: <text>:1:7: unexpected symbol
## 1: citat ion
##           ^
``````
Throughout this book I’ll vary the way I use spacing a little bit, just to give you a feel for the different ways in which spacing can be used. I’ll try not to do it too much though, since it’s generally considered to be good practice to be consistent in how you format your commands.
R can sometimes tell that you’re not finished yet (but not often)
One more thing I should point out. If you hit enter in a situation where it’s “obvious” to R that you haven’t actually finished typing the command, R is just smart enough to keep waiting. For example, if you type `10 +` and then press enter, even R is smart enough to realise that you probably wanted to type in another number. So here’s what happens (for illustrative purposes I’m breaking my own code formatting rules in this section):
``````
> 10 +
+ 
``````
and there’s a blinking cursor next to the plus sign. What this means is that R is still waiting for you to finish. It “thinks” you’re still typing your command, so it hasn’t tried to execute it yet. In other words, this plus sign is actually another command prompt. It’s different from the usual one (i.e., the `>` symbol) to remind you that R is going to “add” whatever you type now to what you typed last time. For example, if I then go on to type `20` and hit enter, what I get is this:
``````
> 10 +
+ 20
[1] 30
``````
And as far as R is concerned, this is exactly the same as if you had typed `10 + 20`. Similarly, consider the `citation()` command that we talked about in the previous section. Suppose you hit enter after typing `citation(`. Once again, R is smart enough to realise that there must be more coming – since you need to add the `)` character – so it waits. I can even hit enter several times and it will keep waiting:
``````
> citation(
+ 
+ 
+ )
``````
I’ll make use of this a lot in this book. A lot of the commands that we’ll have to type are pretty long, and they’re visually a bit easier to read if I break it up over several lines. If you start doing this yourself, you’ll eventually get yourself in trouble (it happens to us all). Maybe you start typing a command, and then you realise you’ve screwed up. For example,
``````
> citblation(
+ 
+ 
``````
You’d probably prefer R not to try running this command, right? If you want to get out of this situation, just hit the ‘escape’ key.19 R will return you to the normal command prompt (i.e. `>`) without attempting to execute the botched command. That being said, it’s not often the case that R is smart enough to tell that there’s more coming.
For instance, in the same way that I can’t add a space in the middle of a word, I can’t hit enter in the middle of a word either. If I hit enter after typing `citat` I get an error, because R thinks I’m interested in an “object” called `citat` and can’t find it:
``````
> citat
Error: object 'citat' not found
``````
What about if I typed `citation` and hit enter? In this case we get something very odd, something that we definitely don’t want, at least at this stage. Here’s what happens:
``````
> citation
function (package = "base", lib.loc = NULL, auto = NULL) 
{
    dir <- system.file(package = package, lib.loc = lib.loc)
    if (dir == "") 
        stop(gettextf("package '%s' not found", package), domain = NA)
BLAH BLAH BLAH
``````
where the `BLAH BLAH BLAH` goes on for rather a long time, and you don’t know enough R yet to understand what all this gibberish actually means (of course, it doesn’t actually say BLAH BLAH BLAH - it says some other things we don’t understand or need to know that I’ve edited for length). This incomprehensible output can be quite intimidating to novice users, and unfortunately it’s very easy to forget to type the parentheses; so almost certainly you’ll do this by accident. Do not panic when this happens. Simply ignore the gibberish. As you become more experienced this gibberish will start to make sense, and you’ll find it quite handy to print this stuff out.20 But for now just try to remember to add the parentheses when typing your commands.
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.02%3A_Typing_Commands_at_the_R_Console.txt
Okay, now that we’ve discussed some of the tedious details associated with typing R commands, let’s get back to learning how to use the most powerful piece of statistical software in the world as a \$2 calculator. So far, all we know how to do is addition. Clearly, a calculator that only did addition would be a bit stupid, so I should tell you about how to perform other simple calculations using R. But first, some more terminology. Addition is an example of an “operation” that you can perform (specifically, an arithmetic operation), and the operator that performs it is `+`. To people with a programming or mathematics background, this terminology probably feels pretty natural, but to other people it might feel like I’m trying to make something very simple (addition) sound more complicated than it is (by calling it an arithmetic operation). To some extent, that’s true: if addition was the only operation that we were interested in, it’d be a bit silly to introduce all this extra terminology. However, as we go along, we’ll start using more and more different kinds of operations, so it’s probably a good idea to get the language straight now, while we’re still talking about very familiar concepts like addition!
Adding, subtracting, multiplying and dividing
So, now that we have the terminology, let’s learn how to perform some arithmetic operations in R. To that end, Table 3.1 lists the operators that correspond to the basic arithmetic we learned in primary school: addition, subtraction, multiplication and division.
Table 3.1: Basic arithmetic operations in R. These five operators are used very frequently throughout the text, so it’s important to be familiar with them at the outset.
operation        operator   example input   example output
addition         +          10 + 2          12
subtraction      -          9 - 3           6
multiplication   *          5 * 5           25
division         /          10 / 3          3.333333
power            ^          5 ^ 2           25
As you can see, R uses fairly standard symbols to denote each of the different operations you might want to perform: addition is done using the `+` operator, subtraction is performed by the `-` operator, and so on. So if I wanted to find out what 57 times 61 is (and who wouldn’t?), I can use R instead of a calculator, like so:
``57 * 61``
``## [1] 3477``
So that’s handy.
Taking powers
The first four operations listed in Table 3.1 are things we all learned in primary school, but they aren’t the only arithmetic operations built into R. There are three other arithmetic operations that I should probably mention: taking powers, doing integer division, and calculating a modulus. Of the three, the only one that is of any real importance for the purposes of this book is taking powers, so I’ll discuss that one here: the other two are discussed in Chapter 7. For those of you who can still remember your high school maths, this should be familiar. But for some people high school maths was a long time ago, and others of us didn’t listen very hard in high school. It’s not complicated. As I’m sure everyone will probably remember the moment they read this, the act of multiplying a number x by itself n times is called “raising x to the n-th power”. Mathematically, this is written as \(x^n\). Some values of n have special names: in particular \(x^2\) is called x-squared, and \(x^3\) is called x-cubed. So, the 4th power of 5 is calculated like this: \(5^4 = 5 \times 5 \times 5 \times 5\). One way that we could calculate \(5^4\) in R would be to type in the complete multiplication as it is shown in the equation above. That is, we could do this
``5 * 5 * 5 * 5``
``## [1] 625``
but it does seem a bit tedious.
It would be very annoying indeed if you wanted to calculate \(5^{15}\), since the command would end up being quite long. Therefore, to make our lives easier, we use the power operator instead. When we do that, our command to calculate \(5^4\) goes like this:
``5 ^ 4``
``## [1] 625``
Much easier.
Doing calculations in the right order
Okay. At this point, you know how to take one of the most powerful pieces of statistical software in the world, and use it as a \$2 calculator. And as a bonus, you’ve learned a few very basic programming concepts. That’s not nothing (you could argue that you’ve just saved yourself \$2) but on the other hand, it’s not very much either. In order to use R more effectively, we need to introduce more programming concepts. In most situations where you would want to use a calculator, you might want to do multiple calculations. R lets you do this, just by typing in longer commands.21 In fact, we’ve already seen an example of this earlier, when I typed in `5 * 5 * 5 * 5`. However, let’s try a slightly different example:
``1 + 2 * 4``
``## [1] 9``
Clearly, this isn’t a problem for R either. However, it’s worth stopping for a second, and thinking about what R just did. Clearly, since it gave us an answer of `9` it must have multiplied `2 * 4` (to get an interim answer of 8) and then added 1 to that. But, suppose it had decided to just go from left to right: if R had decided instead to add `1+2` (to get an interim answer of 3) and then multiplied by 4, it would have come up with an answer of `12`. To understand why R chose the first of these, you need to know the order of operations that R uses. If you remember back to your high school maths classes, it’s actually the same order that you got taught when you were at school: the “BEDMAS” order.22 That is, first calculate things inside Brackets `()`, then calculate Exponents `^`, then Division `/` and Multiplication `*`, then Addition `+` and Subtraction `-`. So, to continue the example above, if we want to force R to calculate the `1+2` part before the multiplication, all we would have to do is enclose it in brackets:
``(1 + 2) * 4``
``## [1] 12``
This is a fairly useful thing to be able to do. The only other thing I should point out about order of operations is what to expect when you have two operations that have the same priority: that is, how does R resolve ties? For instance, multiplication and division are actually the same priority, but what should we expect when we give R a problem like `4 / 2 * 3` to solve? If it evaluates the multiplication first and then the division, it would calculate a value of two-thirds. But if it evaluates the division first it calculates a value of 6. The answer, in this case, is that R goes from left to right, so in this case the division step would come first:
``4 / 2 * 3``
``## [1] 6``
All of the above being said, it’s helpful to remember that brackets always come first. So, if you’re ever unsure about what order R will do things in, an easy solution is to enclose the thing you want it to do first in brackets. There’s nothing stopping you from typing `(4 / 2) * 3`. By enclosing the division in brackets we make it clear which thing is supposed to happen first. In this instance you wouldn’t have needed to, since R would have done the division first anyway, but when you’re first starting out it’s better to make sure R does what you want!
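Earlier in this section, integer division and the modulus were mentioned but deferred to Chapter 7. Purely as a brief, hedged preview (the two operators below are standard R, but nothing else in this chapter depends on them), they look like this:
``42 %/% 10  # integer division: how many whole times does 10 go into 42?``
``## [1] 4``
``42 %% 10   # modulus: what is left over after that division?``
``## [1] 2``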
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.03%3A_Doing_Simple_Calculations_with_R.txt
One of the most important things to be able to do in R (or any programming language, for that matter) is to store information in variables. Variables in R aren’t exactly the same thing as the variables we talked about in the last chapter on research methods, but they are similar. At a conceptual level you can think of a variable as a label for a certain piece of information, or even several different pieces of information. When doing statistical analysis in R all of your data (the variables you measured in your study) will be stored as variables in R, but as we’ll see later in the book you’ll find that you end up creating variables for other things too. However, before we delve into all the messy details of data sets and statistical analysis, let’s look at the very basics for how we create variables and work with them.
Variable assignment using `<-` and `->`
Since we’ve been working with numbers so far, let’s start by creating variables to store our numbers. And since most people like concrete examples, let’s invent one. Suppose I’m trying to calculate how much money I’m going to make from this book. There’s several different numbers I might want to store. Firstly, I need to figure out how many copies I’ll sell. This isn’t exactly Harry Potter, so let’s assume I’m only going to sell one copy per student in my class. That’s 350 sales, so let’s create a variable called `sales`. What I want to do is assign a value to my variable `sales`, and that value should be `350`. We do this by using the assignment operator, which is `<-`. Here’s how we do it:
``sales <- 350``
When you hit enter, R doesn’t print out any output.23 It just gives you another command prompt. However, behind the scenes R has created a variable called `sales` and given it a value of `350`. You can check that this has happened by asking R to print the variable on screen. And the simplest way to do that is to type the name of the variable and hit enter24.
``sales``
``## [1] 350``
So that’s nice to know. Anytime you can’t remember what R has got stored in a particular variable, you can just type the name of the variable and hit enter. Okay, so now we know how to assign variables. Actually, there’s a bit more you should know. Firstly, one of the curious features of R is that there are several different ways of making assignments. In addition to the `<-` operator, we can also use `->` and `=`, and it’s pretty important to understand the differences between them.25 Let’s start by considering `->`, since that’s the easy one (we’ll discuss the use of `=` in Section 3.5.1). As you might expect from just looking at the symbol, it’s almost identical to `<-`. It’s just that the arrow (i.e., the assignment) goes from left to right. So if I wanted to define my `sales` variable using `->`, I would write it like this:
``350 -> sales``
This has the same effect: and it still means that I’m only going to sell `350` copies. Sigh. Apart from this superficial difference, `<-` and `->` are identical. In fact, as far as R is concerned, they’re actually the same operator, just in a “left form” and a “right form.”26
Doing calculations using variables
Okay, let’s get back to my original story. In my quest to become rich, I’ve written this textbook. To figure out how good a strategy this is, I’ve started creating some variables in R. In addition to defining a `sales` variable that counts the number of copies I’m going to sell, I can also create a variable called `royalty`, indicating how much money I get per copy.
Let’s say that my royalties are about \$7 per book:
``````
sales <- 350
royalty <- 7
``````
The nice thing about variables (in fact, the whole point of having variables) is that we can do anything with a variable that we ought to be able to do with the information that it stores. That is, since R allows me to multiply `350` by `7`
``350 * 7``
``## [1] 2450``
it also allows me to multiply `sales` by `royalty`
``sales * royalty``
``## [1] 2450``
As far as R is concerned, the `sales * royalty` command is the same as the `350 * 7` command. Not surprisingly, I can assign the output of this calculation to a new variable, which I’ll call `revenue`. And when we do this, the new variable `revenue` gets the value `2450`. So let’s do that, and then get R to print out the value of `revenue` so that we can verify that it’s done what we asked:
``````
revenue <- sales * royalty
revenue
``````
``## [1] 2450``
That’s fairly straightforward. A slightly more subtle thing we can do is reassign the value of my variable, based on its current value. For instance, suppose that one of my students (no doubt under the influence of psychotropic drugs) loves the book so much that he or she donates me an extra \$550. The simplest way to capture this is by a command like this:
``````
revenue <- revenue + 550
revenue
``````
``## [1] 3000``
In this calculation, R has taken the old value of `revenue` (i.e., 2450) and added 550 to that value, producing a value of 3000. This new value is assigned to the `revenue` variable, overwriting its previous value. In any case, we now know that I’m expecting to make \$3000 off this. Pretty sweet, I thinks to myself. Or at least, that’s what I thinks until I do a few more calculations and work out what the implied hourly wage I’m making off this looks like.
Rules and conventions for naming variables
In the examples that we’ve seen so far, my variable names (`sales` and `revenue`) have just been English-language words written using lowercase letters. However, R allows a lot more flexibility when it comes to naming your variables, as the following list of rules27 illustrates:
• Variable names can only use the upper case alphabetic characters `A`-`Z` as well as the lower case characters `a`-`z`. You can also include numeric characters `0`-`9` in the variable name, as well as the period `.` or underscore `_` character. In other words, you can use `SaL.e_s` as a variable name (though I can’t think why you would want to), but you can’t use `Sales?`.
• Variable names cannot include spaces: therefore `my sales` is not a valid name, but `my.sales` is.
• Variable names are case sensitive: that is, `Sales` and `sales` are different variable names.
• Variable names must start with a letter or a period. You can’t use something like `_sales` or `1sales` as a variable name. You can use `.sales` as a variable name if you want, but it’s not usually a good idea. By convention, variables starting with a `.` are used for special purposes, so you should avoid doing so.
• Variable names cannot be one of the reserved keywords. These are special names that R needs to keep “safe” from us mere users, so you can’t use them as the names of variables. The keywords are: `if`, `else`, `repeat`, `while`, `function`, `for`, `in`, `next`, `break`, `TRUE`, `FALSE`, `NULL`, `Inf`, `NaN`, `NA`, `NA_integer_`, `NA_real_`, `NA_complex_`, and finally, `NA_character_`.
Don’t feel especially obliged to memorise these: if you make a mistake and try to use one of the keywords as a variable name, R will complain about it like the whiny little automaton it is. In addition to those rules that R enforces, there are some informal conventions that people tend to follow when naming variables. One of them you’ve already seen: i.e., don’t use variables that start with a period. But there are several others. You aren’t obliged to follow these conventions, and there are many situations in which it’s advisable to ignore them, but it’s generally a good idea to follow them when you can: • Use informative variable names. As a general rule, using meaningful names like `sales` and `revenue` is preferred over arbitrary ones like `variable1` and `variable2`. Otherwise it’s very hard to remember what the contents of different variables are, and it becomes hard to understand what your commands actually do. • Use short variable names. Typing is a pain and no-one likes doing it. So we much prefer to use a name like `sales` over a name like `sales.for.this.book.that.you.are.reading`. Obviously there’s a bit of a tension between using informative names (which tend to be long) and using short names (which tend to be meaningless), so use a bit of common sense when trading off these two conventions. • Use one of the conventional naming styles for multi-word variable names. Suppose I want to name a variable that stores “my new salary”. Obviously I can’t include spaces in the variable name, so how should I do this? There are three different conventions that you sometimes see R users employing. Firstly, you can separate the words using periods, which would give you `my.new.salary` as the variable name. Alternatively, you could separate words using underscores, as in `my_new_salary`. Finally, you could use capital letters at the beginning of each word (except the first one), which gives you `myNewSalary` as the variable name. I don’t think there’s any strong reason to prefer one over the other,28 but it’s important to be consistent.
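To make these conventions concrete, here is a small sketch (the salary figure and the `my...salary` variable names are invented purely for illustration) showing the three multi-word naming styles side by side, along with a reminder that R treats differently-cased names as different variables:
``my.new.salary <- 50000   # period-separated style``
``my_new_salary <- 50000   # underscore style``
``myNewSalary <- 50000     # "camelCase" style``
``Sales <- 99              # a different variable from the lowercase sales``
``sales``
``## [1] 350``
``Sales``
``## [1] 99``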
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.04%3A_Storing_a_Number_As_a_Variable.txt
The symbols `+`, `-`, `*` and so on are examples of operators. As we’ve seen, you can do quite a lot of calculations just by using these operators. However, in order to do more advanced calculations (and later on, to do actual statistics), you’re going to need to start using functions.29 (As a side note for readers with a programming background: operators in R are themselves functions, so `10 + 20` is equivalent to the function call `"+"(10, 20)`. Not surprisingly, no-one ever uses this version. Because that would be stupid.) I’ll talk in more detail about functions and how they work in Section 8.4, but for now let’s just dive in and use a few. To get started, suppose I wanted to take the square root of 225. The square root, in case your high school maths is a bit rusty, is just the opposite of squaring a number. So, for instance, since “5 squared is 25” I can say that “5 is the square root of 25”. The usual notation for this is \(\sqrt{25} = 5\), though sometimes you’ll also see it written like this: \(25^{0.5} = 5\). This second way of writing it is kind of useful to “remind” you of the mathematical fact that “square root of x” is actually the same as “raising x to the power of 0.5”. Personally, I’ve never found this to be terribly meaningful psychologically, though I have to admit it’s quite convenient mathematically. Anyway, it’s not important. What is important is that you remember what a square root is, since we’re going to need it later on. To calculate the square root of 25, I can do it in my head pretty easily, since I memorised my multiplication tables when I was a kid. It gets harder when the numbers get bigger, and pretty much impossible if they’re not whole numbers. This is where something like R comes in very handy. Let’s say I wanted to calculate \(\sqrt{225}\), the square root of 225. There’s two ways I could do this using R. Firstly, since the square root of 225 is the same thing as raising 225 to the power of 0.5, I could use the power operator `^`, just like we did earlier:
``225 ^ 0.5``
``## [1] 15``
However, there’s a second way that we can do this, since R also provides a square root function, `sqrt()`. To calculate the square root of 225 using this function, what I do is insert the number `225` in the parentheses. That is, the command I type is this:
``sqrt( 225 )``
``## [1] 15``
and as you might expect from our previous discussion, the spaces in between the parentheses are purely cosmetic. I could have typed `sqrt(225)` or `sqrt( 225 )` and gotten the same result. When we use a function to do something, we generally refer to this as calling the function, and the values that we type into the function (there can be more than one) are referred to as the arguments of that function. Obviously, the `sqrt()` function doesn’t really give us any new functionality, since we already knew how to do square root calculations by using the power operator `^`, though I do think it looks nicer when we use `sqrt()`. However, there are lots of other functions in R: in fact, almost everything of interest that I’ll talk about in this book is an R function of some kind. For example, one function that we will need to use in this book is the absolute value function. Compared to the square root function, it’s extremely simple: it just converts negative numbers to positive numbers, and leaves positive numbers alone. Mathematically, the absolute value of x is written |x| or sometimes abs(x). Calculating absolute values in R is pretty easy, since R provides the `abs()` function that you can use for this purpose.
When you feed it a positive number…
``abs( 21 )``
``## [1] 21``
the absolute value function does nothing to it at all. But when you feed it a negative number, it spits out the positive version of the same number, like this:
``abs( -13 )``
``## [1] 13``
In all honesty, there’s nothing that the absolute value function does that you couldn’t do just by looking at the number and erasing the minus sign if there is one. However, there’s a few places later in the book where we have to use absolute values, so I thought it might be a good idea to explain the meaning of the term early on. Before moving on, it’s worth noting that – in the same way that R allows us to put multiple operations together into a longer command, like `1 + 2*4` for instance – it also lets us put functions together and even combine functions with operators if we so desire. For example, the following is a perfectly legitimate command:
``sqrt( 1 + abs(-8) )``
``## [1] 3``
When R executes this command, it starts out by calculating the value of `abs(-8)`, which produces an intermediate value of `8`. Having done so, the command simplifies to `sqrt( 1 + 8 )`. To solve the square root30 it first needs to add `1 + 8` to get `9`, at which point it evaluates `sqrt(9)`, and so it finally outputs a value of `3`.
Function Arguments, Their Names and Their Defaults
There are two more fairly important things that you need to understand about how functions work in R, and that’s the use of “named” arguments, and “default values” for arguments. That’s not to say that this is the last we’ll hear about how functions work, but these are the last things we desperately need to discuss in order to get you started. To understand what these two concepts are all about, I’ll introduce another function. The `round()` function can be used to round some value to the nearest whole number. For example, I could type this:
``round( 3.1415 )``
``## [1] 3``
Pretty straightforward, really. However, suppose I only wanted to round it to two decimal places: that is, I want to get `3.14` as the output. The `round()` function supports this, by allowing you to input a second argument to the function that specifies the number of decimal places that you want to round the number to. In other words, I could do this:
``round( 3.1415, 2 )``
``## [1] 3.14``
What’s happening here is that I’ve specified two arguments: the first argument is the number that needs to be rounded (i.e., `3.1415`), the second argument is the number of decimal places that it should be rounded to (i.e., `2`), and the two arguments are separated by a comma. In this simple example, it’s quite easy to remember which one argument comes first and which one comes second, but for more complicated functions this is not easy. Fortunately, most R functions make use of argument names. For the `round()` function, for example, the number that needs to be rounded is specified using the `x` argument, and the number of decimal points that you want it rounded to is specified using the `digits` argument. Because we have these names available to us, we can specify the arguments to the function by name. We do so like this:
``round( x = 3.1415, digits = 2 )``
``## [1] 3.14``
Notice that this is kind of similar in spirit to variable assignment (Section 3.4), except that I used `=` here, rather than `<-`. In both cases we’re specifying specific values to be associated with a label.
However, there are some differences between what I was doing earlier on when creating variables, and what I’m doing here when specifying arguments, and so as a consequence it’s important that you use `=` in this context. As you can see, specifying the arguments by name involves a lot more typing, but it’s also a lot easier to read. Because of this, the commands in this book will usually specify arguments by name,31 since that makes it clearer to you what I’m doing. However, one important thing to note is that when specifying the arguments using their names, it doesn’t matter what order you type them in. But if you don’t use the argument names, then you have to input the arguments in the correct order. In other words, these three commands all produce the same output…
``round( 3.1415, 2 )``
``## [1] 3.14``
``round( x = 3.1415, digits = 2 )``
``## [1] 3.14``
``round( digits = 2, x = 3.1415 )``
``## [1] 3.14``
but this one does not…
``round( 2, 3.1415 )``
``## [1] 2``
How do you find out what the correct order is? There’s a few different ways, but the easiest one is to look at the help documentation for the function (see Section 4.12). However, if you’re ever unsure, it’s probably best to actually type in the argument name. Okay, so that’s the first thing I said you’d need to know: argument names. The second thing you need to know about is default values. Notice that the first time I called the `round()` function I didn’t actually specify the `digits` argument at all, and yet R somehow knew that this meant it should round to the nearest whole number. How did that happen? The answer is that the `digits` argument has a default value of `0`, meaning that if you decide not to specify a value for `digits` then R will act as if you had typed `digits = 0`. This is quite handy: the vast majority of the time when you want to round a number you want to round it to the nearest whole number, and it would be pretty annoying to have to specify the `digits` argument every single time. On the other hand, sometimes you actually do want to round to something other than the nearest whole number, and it would be even more annoying if R didn’t allow this! Thus, by having `digits = 0` as the default value, we get the best of both worlds.
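One last small illustration, which is my addition rather than part of the original discussion: you can also mix the two styles, supplying the first argument by position and later ones by name, and lean on the default whenever you’re happy with it. The answers are the same as before, only the way of writing the call differs:
``round( 3.1415 )              # no digits argument, so the default of 0 is used``
``## [1] 3``
``round( 3.1415, digits = 2 )  # x given by position, digits given by name``
``## [1] 3.14``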
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.05%3A_Using_Functions_to_Do_Calculations.txt
Time for a bit of a digression. At this stage you know how to type in basic commands, including how to use R functions. And it’s probably beginning to dawn on you that there are a lot of R functions, all of which have their own arguments. You’re probably also worried that you’re going to have to remember all of them! Thankfully, it’s not that bad. In fact, very few data analysts bother to try to remember all the commands. What they really do is use tricks to make their lives easier. The first (and arguably most important one) is to use the internet. If you don’t know how a particular R function works, Google it. Second, you can look up the R help documentation. I’ll talk more about these two tricks in Section 4.12. But right now I want to call your attention to a couple of simple tricks that RStudio makes available to you. Autocomplete using “tab” The first thing I want to call your attention to is the autocomplete ability in RStudio.32 Let’s stick to our example above and assume that what you want to do is to round a number. This time around, start typing the name of the function that you want, and then hit the “tab” key. RStudio will then display a little window like the one shown in Figure 3.2. In this figure, I’ve typed the letters `ro` at the command line, and then hit tab. The window has two panels. On the left, there’s a list of variables and functions that start with the letters that I’ve typed shown in black text, and some grey text that tells you where that variable/function is stored. Ignore the grey text for now: it won’t make much sense to you until we’ve talked about packages in Section 4.2. In Figure 3.2 you can see that there’s quite a few things that start with the letters `ro`: there’s something called `rock`, something called `round`, something called `round.Date` and so on. The one we want is `round`, but if you’re typing this yourself you’ll notice that when you hit the tab key the window pops up with the top entry (i.e., `rock`) highlighted. You can use the up and down arrow keys to select the one that you want. Or, if none of the options look right to you, you can hit the escape key (“esc”) or the left arrow key to make the window go away. In our case, the thing we want is the `round` option, so we’ll select that. When you do this, you’ll see that the panel on the right changes. Previously, it had been telling us something about the `rock` data set (i.e., “Measurements on 48 rock samples…”) that is distributed as part of R. But when we select `round`, it displays information about the `round()` function, exactly as it is shown in Figure 3.2. This display is really handy. The very first thing it says is `round(x, digits = 0)`: what this is telling you is that the `round()` function has two arguments. The first argument is called `x`, and it doesn’t have a default value. The second argument is `digits`, and it has a default value of 0. In a lot of situations, that’s all the information you need. But RStudio goes a bit further, and provides some additional information about the function underneath. Sometimes that additional information is very helpful, sometimes it’s not: RStudio pulls that text from the R help documentation, and my experience is that the helpfulness of that documentation varies wildly. Anyway, if you’ve decided that `round()` is the function that you want to use, you can hit the right arrow or the enter key, and RStudio will finish typing the rest of the function name for you. Start typing the name of a function or a variable, and hit the “tab” key. 
RStudio brings up a little dialog box that lets you select the one you want, and even prints out a little information about it. The RStudio autocomplete tool works slightly differently if you’ve already got the name of the function typed and you’re now trying to type the arguments. For instance, suppose I’ve typed `round(` into the console, and then I hit tab. RStudio is smart enough to recognise that I already know the name of the function that I want, because I’ve already typed it! Instead, it figures that what I’m interested in is the arguments to that function. So that’s what pops up in the little window. Again, the window has two panels, and you can interact with this window in exactly the same way that you did with the window shown in Figure 3.2. On the left hand panel, you can see a list of the argument names. On the right hand side, it displays some information about what the selected argument does.
Browsing your command history
One thing that R does automatically is keep track of your “command history”. That is, it remembers all the commands that you’ve previously typed. You can access this history in a few different ways. The simplest way is to use the up and down arrow keys. If you hit the up key, the R console will show you the most recent command that you’ve typed. Hit it again, and it will show you the command before that. If you want the text on the screen to go away, hit escape.33 Using the up and down keys can be really handy if you’ve typed a long command that had one typo in it. Rather than having to type it all again from scratch, you can use the up key to bring up the command and fix it. The second way to get access to your command history is to look at the history panel in RStudio. On the upper right hand side of the RStudio window you’ll see a tab labelled “History”. Click on that, and you’ll see a list of all your recent commands displayed in that panel. If you double click on one of the commands, it will be copied to the R console. (You can achieve the same result by selecting the command you want with the mouse and then clicking the “To Console” button).34
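As a hedged aside that the chapter doesn’t rely on: base R also has a `history()` function that you can call from the console in an interactive session to list your recently typed commands (in RStudio this typically just brings the History panel into focus):
``history()                # list recently typed commands``
``history( max.show = 10 ) # or only the last 10 of them``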
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.06%3A_Letting_RStudio_Help_You_with_Your_Commands.txt
At this point we’ve covered functions in enough detail to get us safely through the next couple of chapters (with one small exception: see Section 4.11), so let’s return to our discussion of variables. When I introduced variables in Section 3.4 I showed you how we can use variables to store a single number. In this section, we’ll extend this idea and look at how to store multiple numbers within the one variable. In R, the name for a variable that can store multiple values is a vector. So let’s create one.
Creating a vector
Let’s stick to my silly “get rich quick by textbook writing” example. Suppose the textbook company (if I actually had one, that is) sends me sales data on a monthly basis. Since my class starts in late February, we might expect most of the sales to occur towards the start of the year. Let’s suppose that I have 100 sales in February, 200 sales in March and 50 sales in April, and no other sales for the rest of the year. What I would like to do is have a variable – let’s call it `sales.by.month` – that stores all this sales data. The first number stored should be `0` since I had no sales in January, the second should be `100`, and so on. The simplest way to do this in R is to use the combine function, `c()`. To do so, all we have to do is type all the numbers you want to store in a comma separated list, like this:35
``````
sales.by.month <- c(0, 100, 200, 50, 0, 0, 0, 0, 0, 0, 0, 0)
sales.by.month
``````
``## [1] 0 100 200 50 0 0 0 0 0 0 0 0``
To use the correct terminology here, we have a single variable here called `sales.by.month`: this variable is a vector that consists of 12 elements.
A handy digression
Now that we’ve learned how to put information into a vector, the next thing to understand is how to pull that information back out again. However, before I do so it’s worth taking a slight detour. If you’ve been following along, typing all the commands into R yourself, it’s possible that the output that you saw when we printed out the `sales.by.month` vector was slightly different to what I showed above. This would have happened if the window (or the RStudio panel) that contains the R console is really, really narrow. If that were the case, you might have seen output that looks something like this:
``sales.by.month``
``````
## [1]   0 100 200  50   0   0   0   0
## [9]   0   0   0   0
``````
Because there wasn’t much room on the screen, R has printed out the results over two lines. But that’s not the important thing to notice. The important point is that the first line has a `[1]` in front of it, whereas the second line starts with `[9]`. It’s pretty clear what’s happening here. For the first row, R has printed out the 1st element through to the 8th element, so it starts that row with a `[1]`. For the second row, R has printed out the 9th element of the vector through to the 12th one, and so it begins that row with a `[9]` so that you can tell where it’s up to at a glance. It might seem a bit odd to you that R does this, but in some ways it’s a kindness, especially when dealing with larger data sets!
Getting information out of vectors
To get back to the main story, let’s consider the problem of how to get information out of a vector. At this point, you might have a sneaking suspicion that the answer has something to do with the `[1]` and `[9]` things that R has been printing out. And of course you are correct. Suppose I want to pull out the February sales data only. February is the second month of the year, so let’s try this:
``sales.by.month[2]``
``## [1] 100``
Yep, that’s the February sales all right.
But there’s a subtle detail to be aware of here: notice that R outputs `[1] 100`, not `[2] 100`. This is because R is being extremely literal. When we typed in `sales.by.month[2]`, we asked R to find exactly one thing, and that one thing happens to be the second element of our `sales.by.month` vector. So, when it outputs `[1] 100` what R is saying is that the first number that we just asked for is `100`. This behaviour makes more sense when you realise that we can use this trick to create new variables. For example, I could create a `february.sales` variable like this: ``````february.sales <- sales.by.month[2]
february.sales`````` ``## [1] 100`` Obviously, the new variable `february.sales` should only have one element and so when I print out this new variable, the R output begins with a `[1]` because `100` is the value of the first (and only) element of `february.sales`. The fact that this also happens to be the value of the second element of `sales.by.month` is irrelevant. We’ll pick this topic up again shortly (Section 3.10). Altering the elements of a vector Sometimes you’ll want to change the values stored in a vector. Imagine my surprise when the publisher rings me up to tell me that the sales data for May are wrong. There were actually an additional 25 books sold in May, but there was an error or something so they hadn’t told me about it. How can I fix my `sales.by.month` variable? One possibility would be to assign the whole vector again from the beginning, using `c()`. But that’s a lot of typing. Also, it’s a little wasteful: why should R have to redefine the sales figures for all 12 months, when only the 5th one is wrong? Fortunately, we can tell R to change only the 5th element, using this trick: ``````sales.by.month[5] <- 25
sales.by.month`````` ``## [1] 0 100 200 50 25 0 0 0 0 0 0 0`` Another way to edit variables is to use the `edit()` or `fix()` functions. I won’t discuss them in detail right now, but you can check them out on your own. Useful things to know about vectors Before moving on, I want to mention a couple of other things about vectors. Firstly, you often find yourself wanting to know how many elements there are in a vector (usually because you’ve forgotten). You can use the `length()` function to do this. It’s quite straightforward: ``length( x = sales.by.month )`` ``## [1] 12`` Secondly, you often want to alter all of the elements of a vector at once. For instance, suppose I wanted to figure out how much money I made in each month. Since I’m earning an exciting \$7 per book (no seriously, that’s actually pretty close to what authors get on the very expensive textbooks that you’re expected to purchase), what I want to do is multiply each element in the `sales.by.month` vector by `7`. R makes this pretty easy, as the following example shows: ``sales.by.month * 7`` ``## [1] 0 700 1400 350 175 0 0 0 0 0 0 0`` In other words, when you multiply a vector by a single number, all elements in the vector get multiplied. The same is true for addition, subtraction, division and taking powers. So that’s neat. On the other hand, suppose I wanted to know how much money I was making per day, rather than per month. Since not every month has the same number of days, I need to do something slightly different. Firstly, I’ll create two new vectors: ``````days.per.month <- c(31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
profit <- sales.by.month * 7`````` Obviously, the `profit` variable stores the same monthly earnings we calculated a moment ago, and the `days.per.month` variable is pretty straightforward.
What I want to do is divide every element of `profit` by the corresponding element of `days.per.month`. Again, R makes this pretty easy: ``profit / days.per.month`` ``````## [1]  0.000000 25.000000 45.161290 11.666667  5.645161  0.000000  0.000000
## [8]  0.000000  0.000000  0.000000  0.000000  0.000000`````` I still don’t like all those zeros, but that’s not what matters here. Notice that the second element of the output is 25, because R has divided the second element of `profit` (i.e. 700) by the second element of `days.per.month` (i.e. 28). Similarly, the third element of the output is equal to 1400 divided by 31, and so on. We’ll talk more about calculations involving vectors later on (and in particular a thing called the “recycling rule”; Section 7.12.2), but that’s enough detail for now.
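As a quick, purely illustrative preview of that recycling rule (the details really do come later): when the two vectors in an arithmetic operation have different lengths, R reuses the shorter one from the beginning until it runs out of elements. A minimal sketch, using made-up numbers rather than anything from the sales example:
``````# the shorter vector c(10, 20) is "recycled" to c(10, 20, 10, 20)
# before the subtraction is carried out element by element
c(100, 200, 50, 25) - c(10, 20)``````
``## [1]  90 180  40   5``
Because the length of the longer vector (4) is a multiple of the length of the shorter one (2), R does this silently; if the lengths don’t divide evenly, R still recycles but prints a warning.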
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.07%3A_Storing_Many_Numbers_As_a_Vector.txt
A lot of the time your data will be numeric in nature, but not always. Sometimes your data really needs to be described using text, not using numbers. To address this, we need to consider the situation where our variables store text. To create a variable that stores the word “hello”, we can type this: ``````greeting <- "hello"
greeting`````` ``## [1] "hello"`` When interpreting this, it’s important to recognise that the quote marks here aren’t part of the string itself. They’re just something that we use to make sure that R knows to treat the characters that they enclose as a piece of text data, known as a character string. In other words, R treats `"hello"` as a string containing the word “hello”; but if I had typed `hello` instead, R would go looking for a variable by that name! You can also use `'hello'` to specify a character string. Okay, so that’s how we store the text. Next, it’s important to recognise that when we do this, R stores the entire word `"hello"` as a single element: our `greeting` variable is not a vector of five different letters. Rather, it has only the one element, and that element corresponds to the entire character string `"hello"`. To illustrate this, if I actually ask R to find the first element of `greeting`, it prints the whole string: ``greeting[1]`` ``## [1] "hello"`` Of course, there’s no reason why I can’t create a vector of character strings. For instance, if we were to continue with the example of my attempts to look at the monthly sales data for my book, one variable I might want would include the names of all 12 months. (Though actually there’s no real need to do this, since R has an inbuilt variable called `month.name` that you can use for this purpose.) To do so, I could type in a command like this: ``````months <- c("January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December")`````` This is a character vector containing 12 elements, each of which is the name of a month. So if I wanted R to tell me the name of the fourth month, all I would do is this: ``months[4]`` ``## [1] "April"`` Working with text Working with text data is somewhat more complicated than working with numeric data, and I discuss some of the basic ideas in Section 7.8, but for the purposes of the current chapter we only need this bare bones sketch. The only other thing I want to do before moving on is show you an example of a function that can be applied to text data. So far, most of the functions that we have seen (i.e., `sqrt()`, `abs()` and `round()`) only make sense when applied to numeric data (e.g., you can’t calculate the square root of “hello”), and we’ve seen one function that can be applied to pretty much any variable or vector (i.e., `length()`). So it might be nice to see an example of a function that can be applied to text. The function I’m going to introduce you to is called `nchar()`, and what it does is count the number of individual characters that make up a string. Note that if we ask for the `length()` of our `greeting` variable, R returns a value of `1`: the `greeting` variable contains only the one string, which happens to be `"hello"`. But what if I want to know how many letters there are in the word? Sure, I could count them, but that’s boring, and more to the point it’s a terrible strategy if what I wanted to know was the number of letters in War and Peace.
That’s where the `nchar()` function is helpful: ``nchar( x = greeting )`` ``## [1] 5`` That makes sense, since there are in fact 5 letters in the string `"hello"`. Better yet, you can apply `nchar()` to whole vectors. So, for instance, if I want R to tell me how many letters there are in the names of each of the 12 months, I can do this: ``nchar( x = months )`` ``## [1] 7 8 5 5 3 4 4 6 9 7 8 8`` So that’s nice to know. The `nchar()` function can do a bit more than this, and there are a lot of other functions that you can use to extract more information from text or do all sorts of fancy things. However, the goal here is not to teach any of that! The goal right now is just to see an example of a function that actually does work when applied to text.
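If you’re curious, here’s a small taste of two more of those text functions, purely for illustration (neither is needed anywhere else in this chapter): `toupper()` converts a string to upper case, and `substr()` pulls out part of a string by character position. Both are part of base R.
``````# convert a string to upper case
toupper( x = greeting )

# extract characters 1 through 3 of each month name
substr( x = months, start = 1, stop = 3 )``````
``## [1] "HELLO"``
``## [1] "Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec"``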
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.08%3A_Storing_Text_Data.txt
Time to move onto a third kind of data. A key concept that a lot of R relies on is the idea of a logical value. A logical value is an assertion about whether something is true or false. This is implemented in R in a pretty straightforward way. There are two logical values, namely `TRUE` and `FALSE`. Despite the simplicity, logical values are very useful things. Let’s see how they work. Assessing mathematical truths In George Orwell’s classic book 1984, one of the slogans used by the totalitarian Party was “two plus two equals five”, the idea being that the political domination of human freedom becomes complete when it is possible to subvert even the most basic of truths. It’s a terrifying thought, especially when the protagonist Winston Smith finally breaks down under torture and agrees to the proposition. “Man is infinitely malleable”, the book says. I’m pretty sure that this isn’t true of humans36 but it’s definitely not true of R. R is not infinitely malleable. It has rather firm opinions on the topic of what is and isn’t true, at least as regards basic mathematics. If I ask it to calculate `2 + 2`, it always gives the same answer, and it’s not bloody 5: ``2 + 2`` ``## [1] 4`` Of course, so far R is just doing the calculations. I haven’t asked it to explicitly assert that 2 + 2 = 4 is a true statement. If I want R to make an explicit judgement, I can use a command like this: ``2 + 2 == 4`` ``## [1] TRUE`` What I’ve done here is use the equality operator, `==`, to force R to make a “true or false” judgement.37 Okay, let’s see what R thinks of the Party slogan: ``2 + 2 == 5`` ``## [1] FALSE`` Booyah! Freedom and ponies for all! Or something like that. Anyway, it’s worth having a look at what happens if I try to force R to believe that two plus two is five by making an assignment statement like `2 + 2 = 5` or `2 + 2 <- 5`. When I do this, here’s what happens: ``2 + 2 = 5`` ``## Error in 2 + 2 = 5: target of assignment expands to non-language object`` R doesn’t like this very much. It recognises that `2 + 2` is not a variable (that’s what the “non-language object” part is saying), and it won’t let you try to “reassign” it. While R is pretty flexible, and actually does let you do some quite remarkable things to redefine parts of R itself, there are just some basic, primitive truths that it refuses to give up. It won’t change the laws of addition, and it won’t change the definition of the number `2`. That’s probably for the best. Logical operations So now we’ve seen logical operations at work, but so far we’ve only seen the simplest possible example. You probably won’t be surprised to discover that we can combine logical operations with other operations and functions in a more complicated way, like this: ``3*3 + 4*4 == 5*5`` ``## [1] TRUE`` or this ``sqrt( 25 ) == 5`` ``## [1] TRUE`` Not only that, but as Table 3.2 illustrates, there are several other logical operators that you can use, corresponding to some basic mathematical concepts.

Table 3.2: Some logical operators. Technically I should be calling these “binary relational operators”, but quite frankly I don’t want to. It’s my book so no-one can make me.

operation | operator | example input | answer
less than | `<` | `2 < 3` | `TRUE`
less than or equal to | `<=` | `2 <= 2` | `TRUE`
greater than | `>` | `2 > 3` | `FALSE`
greater than or equal to | `>=` | `2 >= 2` | `TRUE`
equal to | `==` | `2 == 3` | `FALSE`
not equal to | `!=` | `2 != 3` | `TRUE`

Hopefully these are all pretty self-explanatory: for example, the less than operator `<` checks to see if the number on the left is less than the number on the right.
If it’s less, then R returns an answer of `TRUE`: ``99 < 100`` ``## [1] TRUE`` but if the two numbers are equal, or if the one on the right is larger, then R returns an answer of `FALSE`, as the following two examples illustrate: ``100 < 100`` ``## [1] FALSE`` ``100 < 99`` ``## [1] FALSE`` In contrast, the less than or equal to operator `<=` will do exactly what it says. It returns a value of `TRUE` if the number on the left hand side is less than or equal to the number on the right hand side. So if we repeat the previous two examples using `<=`, here’s what we get: ``100 <= 100`` ``## [1] TRUE`` ``100 <= 99`` ``## [1] FALSE`` And at this point I hope it’s pretty obvious what the greater than operator `>` and the greater than or equal to operator `>=` do! Next on the list of logical operators is the not equal to operator `!=` which – as with all the others – does what it says it does. It returns a value of `TRUE` when things on either side are not identical to each other. Therefore, since 2 + 2 isn’t equal to 5, we get: ``2 + 2 != 5`` ``## [1] TRUE`` We’re not quite done yet. There are three more logical operations that are worth knowing about, listed in Table 3.3.

Table 3.3: Some more logical operators.

operation | operator | example input | answer
not | `!` | `!(1==1)` | `FALSE`
or | `|` | `(1==1) | (2==3)` | `TRUE`
and | `&` | `(1==1) & (2==3)` | `FALSE`

These are the not operator `!`, the and operator `&`, and the or operator `|`. Like the other logical operators, their behaviour is more or less exactly what you’d expect given their names. For instance, if I ask you to assess the claim that “either 2+2=4 or 2+2=5” you’d say that it’s true. Since it’s an “either-or” statement, all we need is for one of the two parts to be true. That’s what the `|` operator does: ``(2+2 == 4) | (2+2 == 5)`` ``## [1] TRUE`` On the other hand, if I ask you to assess the claim that “both 2+2=4 and 2+2=5” you’d say that it’s false. Since this is an and statement we need both parts to be true. And that’s what the `&` operator does: ``(2+2 == 4) & (2+2 == 5)`` ``## [1] FALSE`` Finally, there’s the not operator, which is simple but annoying to describe in English. If I ask you to assess my claim that “it is not true that 2+2=5” then you would say that my claim is true; because my claim is that “2+2=5 is false”. And I’m right. If we write this as an R command we get this: ``! (2+2 == 5)`` ``## [1] TRUE`` In other words, since `2+2 == 5` is a `FALSE` statement, it must be the case that `!(2+2 == 5)` is a `TRUE` one. Essentially, what we’ve really done is claim that “not false” is the same thing as “true”. Obviously, this isn’t really quite right in real life. But R lives in a much more black or white world: for R everything is either true or false. No shades of gray are allowed. We can actually see this much more explicitly, like this: ``! FALSE`` ``## [1] TRUE`` Of course, in our 2+2=5 example, we didn’t really need to use “not” `!` and “equals to” `==` as two separate operators. We could have just used the “not equals to” operator `!=` like this: ``2+2 != 5`` ``## [1] TRUE`` But there are many situations where you really do need to use the `!` operator. We’ll see some later on.38 Storing and using logical data Up to this point, I’ve introduced numeric data (in Sections 3.4 and 3.7) and character data (in Section 3.8). So you might not be surprised to discover that these `TRUE` and `FALSE` values that R has been producing are actually a third kind of data, called logical data.
That is, when I asked R if `2 + 2 == 5` and it said `[1] FALSE` in reply, it was actually producing information that we can store in variables. For instance, I could create a variable called `is.the.Party.correct`, which would store R’s opinion: ``````is.the.Party.correct <- 2 + 2 == 5
is.the.Party.correct`````` ``## [1] FALSE`` Alternatively, you can assign the value directly, by typing `TRUE` or `FALSE` in your command. Like this: ``````is.the.Party.correct <- FALSE
is.the.Party.correct`````` ``## [1] FALSE`` Better yet, because it’s kind of tedious to type `TRUE` or `FALSE` over and over again, R provides you with a shortcut: you can use `T` and `F` instead (but it’s case sensitive: `t` and `f` won’t work).39 So this works: ``````is.the.Party.correct <- F
is.the.Party.correct`````` ``## [1] FALSE`` but this doesn’t: ``is.the.Party.correct <- f`` ``## Error in eval(expr, envir, enclos): object 'f' not found`` Vectors of logicals The next thing to mention is that you can store vectors of logical values in exactly the same way that you can store vectors of numbers (Section 3.7) and vectors of text data (Section 3.8). Again, we can define them directly via the `c()` function, like this: ``````x <- c(TRUE, TRUE, FALSE)
x`````` ``## [1]  TRUE  TRUE FALSE`` or you can produce a vector of logicals by applying a logical operator to a vector. This might not make a lot of sense to you, so let’s unpack it slowly. First, let’s suppose we have a vector of numbers (i.e., a “non-logical vector”). For instance, we could use the `sales.by.month` vector that we were using in Section 3.7. Suppose I wanted R to tell me, for each month of the year, whether I actually sold a book in that month. I can do that by typing this: ``sales.by.month > 0`` ``````##  [1] FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE
## [12] FALSE`````` and again, I can store this in a vector if I want, as the example below illustrates: ``````any.sales.this.month <- sales.by.month > 0
any.sales.this.month`````` ``````##  [1] FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE
## [12] FALSE`````` In other words, `any.sales.this.month` is a logical vector whose elements are `TRUE` only if the corresponding element of `sales.by.month` is greater than zero. For instance, since I sold zero books in January, the first element is `FALSE`. Applying logical operations to text In a moment (Section 3.10) I’ll show you why these logical operations and logical vectors are so handy, but before I do so I want to very briefly point out that you can apply them to text as well as to logical data. It’s just that we need to be a bit more careful in understanding how R interprets the different operations. In this section I’ll talk about how the equal to operator `==` applies to text, since this is the most important one. Obviously, the not equal to operator `!=` gives the exact opposite answers to `==` so I’m implicitly talking about that one too, but I won’t give specific commands showing the use of `!=`. As for the other operators, I’ll defer a more detailed discussion of this topic to Section 7.8.5. Okay, let’s see how it works. In one sense, it’s very simple. For instance, I can ask R if the word `"cat"` is the same as the word `"dog"`, like this: ``"cat" == "dog"`` ``## [1] FALSE`` That’s pretty obvious, and it’s good to know that even R can figure that out. Similarly, R does recognise that a `"cat"` is a `"cat"`: ``"cat" == "cat"`` ``## [1] TRUE`` Again, that’s exactly what we’d expect.
However, what you need to keep in mind is that R is not at all tolerant when it comes to grammar and spacing. If two strings differ in any way whatsoever, R will say that they’re not equal to each other, as the following examples indicate: ``" cat" == "cat"`` ``## [1] FALSE`` ``"cat" == "CAT"`` ``## [1] FALSE`` ``"cat" == "c a t"`` ``## [1] FALSE``
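One small extra observation, added here for illustration and consistent with what we already know about vectors: the `==` operator also works element by element when one side is a whole character vector, which is exactly the behaviour that logical indexing (next section) relies on. A minimal sketch using a made-up vector of animal names:
``````# comparing a character vector against a single string
# produces one TRUE/FALSE value per element
animals <- c("cat", "dog", "cat", "caterpillar")
animals == "cat"``````
``## [1]  TRUE FALSE  TRUE FALSE``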
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.09%3A_Storing_True_or_False_Data.txt
One last thing to add before finishing up this chapter. So far, whenever I’ve had to get information out of a vector, all I’ve done is typed something like `months[4]`; and when I do this R prints out the fourth element of the `months` vector. In this section, I’ll show you two additional tricks for getting information out of the vector. Extracting multiple elements One very useful thing we can do is pull out more than one element at a time. In the previous example, we only used a single number (i.e., `2`) to indicate which element we wanted. Alternatively, we can use a vector. So, suppose I wanted the data for February, March and April. What I could do is use the vector `c(2,3,4)` to indicate which elements I want R to pull out. That is, I’d type this: ``sales.by.month[ c(2,3,4) ]`` ``## [1] 100 200 50`` Notice that the order matters here. If I asked for the data in the reverse order (i.e., April first, then March, then February) by using the vector `c(4,3,2)`, then R outputs the data in the reverse order: ``sales.by.month[ c(4,3,2) ]`` ``## [1] 50 200 100`` A second thing to be aware of is that R provides you with handy shortcuts for very common situations. For instance, suppose that I wanted to extract everything from the 2nd month through to the 8th month. One way to do this is to do the same thing I did above, and use the vector `c(2,3,4,5,6,7,8)` to indicate the elements that I want. That works just fine ``sales.by.month[ c(2,3,4,5,6,7,8) ]`` ``## [1] 100 200 50 25 0 0 0`` but it’s kind of a lot of typing. To help make this easier, R lets you use `2:8` as shorthand for `c(2,3,4,5,6,7,8)`, which makes things a lot simpler. First, let’s just check that this is true: ``2:8`` ``## [1] 2 3 4 5 6 7 8`` Next, let’s check that we can use the `2:8` shorthand as a way to pull out the 2nd through 8th elements of `sales.by.month`: ``sales.by.month[2:8]`` ``## [1] 100 200 50 25 0 0 0`` So that’s kind of neat. Logical indexing At this point, I can introduce an extremely useful tool called logical indexing. In the last section, I created a logical vector `any.sales.this.month`, whose elements are `TRUE` for any month in which I sold at least one book, and `FALSE` for all the others. However, that big long list of `TRUE`s and `FALSE`s is a little bit hard to read, so what I’d like to do is to have R select the names of the `months` for which I sold any books. Earlier on, I created a vector `months` that contains the names of each of the months. This is where logical indexing is handy. What I need to do is this: ``months[ sales.by.month > 0 ]`` ``## [1] "February" "March" "April" "May"`` To understand what’s happening here, it’s helpful to notice that `sales.by.month > 0` is the same logical expression that we used to create the `any.sales.this.month` vector in the last section. In fact, I could have just done this: ``months[ any.sales.this.month ]`` ``## [1] "February" "March" "April" "May"`` and gotten exactly the same result. In order to figure out which elements of `months` to include in the output, what R does is look to see if the corresponding element in `any.sales.this.month` is `TRUE`. Thus, since element 1 of `any.sales.this.month` is `FALSE`, R does not include `"January"` as part of the output; but since element 2 of `any.sales.this.month` is `TRUE`, R does include `"February"` in the output. Note that there’s no reason why I can’t use the same trick to find the actual sales numbers for those months.
The command to do that would just be this: ``sales.by.month[ sales.by.month > 0 ]`` ``## [1] 100 200 50 25`` In fact, we can do the same thing with text. Here’s an example. Suppose that – to continue the saga of the textbook sales – I later find out that the bookshop only had sufficient stocks for a few months of the year. They tell me that early in the year they had `"high"` stocks, which then dropped to `"low"` levels, and in fact for one month they were `"out"` of copies of the book for a while before they were able to replenish them. Thus I might have a variable called `stock.levels` which looks like this: ``````stock.levels <- c("high", "high", "low", "out", "out", "high", "high", "high", "high", "high", "high", "high")
stock.levels`````` ``````##  [1] "high" "high" "low"  "out"  "out"  "high" "high" "high" "high" "high"
## [11] "high" "high"`````` Thus, if I want to know the months for which the bookshop was out of my book, I could apply the logical indexing trick, but with the character vector `stock.levels`, like this: ``months[stock.levels == "out"]`` ``## [1] "April" "May"`` Alternatively, if I want to know when the bookshop was either low on copies or out of copies, I could do this: ``months[stock.levels == "out" | stock.levels == "low"]`` ``## [1] "March" "April" "May"`` or this ``months[stock.levels != "high" ]`` ``## [1] "March" "April" "May"`` Either way, I get the answer I want. At this point, I hope you can see why logical indexing is such a useful thing. It’s a very basic, yet very powerful way to manipulate data. We’ll talk a lot more about how to manipulate data in Chapter 7, since it’s a critical skill for real world research that is often overlooked in introductory research methods classes (or at least, that’s been my experience). It does take a bit of practice to become completely comfortable using logical indexing, so it’s a good idea to play around with these sorts of commands. Try creating a few different variables of your own, and then ask yourself questions like “how do I get R to spit out all the elements that are [blah]”. Practice makes perfect, and it’s only by practicing logical indexing that you’ll perfect the art of yelling frustrated insults at your computer.40
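While you’re playing around, two more indexing tricks are worth a purely illustrative mention (neither is needed for the rest of this chapter): the `which()` function converts a logical vector into the positions of its `TRUE` elements, and putting a minus sign in front of an index tells R to drop that element rather than keep it. Both are standard base R behaviour.
``````# positions of the months in which at least one book was sold
which( sales.by.month > 0 )

# everything EXCEPT the first element (i.e., drop January)
sales.by.month[ -1 ]``````
``## [1] 2 3 4 5``
``## [1] 100 200  50  25   0   0   0   0   0   0   0``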
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.10%3A_Indexing_Vectors.txt
There’s one last thing I should cover in this chapter: how to quit R. When I say this, I’m not trying to imply that R is some kind of pathological addiction and that you need to call the R QuitLine or wear patches to control the cravings (although you certainly might argue that there’s something seriously pathological about being addicted to R). I just mean how to exit the program. Assuming you’re running R in the usual way (i.e., through RStudio or the default GUI on a Windows or Mac computer), then you can just shut down the application in the normal way. However, R also has a function, called `q()`, that you can use to quit, which is pretty handy if you’re running R in a terminal window. Regardless of what method you use to quit R, when you do so for the first time R will probably ask you if you want to save the “workspace image”. We’ll talk a lot more about loading and saving data in Section 4.5, but I figured we’d better quickly cover this now otherwise you’re going to get annoyed when you close R at the end of the chapter. If you’re using RStudio, you’ll see a dialog box that looks like the one shown in Figure 3.5. If you’re using a text based interface you’ll see this: ``````q()
## Save workspace image? [y/n/c]: `````` The `y/n/c` part here is short for “yes / no / cancel”. Type `y` if you want to save, `n` if you don’t, and `c` if you’ve changed your mind and you don’t want to quit after all. What does this actually mean? What’s going on is that R wants to know if you want to save all those variables that you’ve been creating, so that you can use them later. This sounds like a great idea, so it’s really tempting to type `y` or click the “Save” button. To be honest though, I very rarely do this, and it kind of annoys me a little bit… what R is really asking is if you want it to store these variables in a “default” data file, which it will automatically reload for you next time you open R. And quite frankly, if I’d wanted to save the variables, then I’d have already saved them before trying to quit. Not only that, I’d have saved them to a location of my choice, so that I can find them again later. So I personally never bother with this. In fact, every time I install R on a new machine one of the first things I do is change the settings so that it never asks me again. You can do this in RStudio really easily: use the menu system to open the RStudio options; the dialog box that comes up will give you an option to tell R never to whine about this again (see Figure 3.6). On a Mac, you can open this window by going to the “RStudio” menu and selecting “Preferences”. On a Windows machine you go to the “Tools” menu and select “Global Options”. Under the “General” tab you’ll see an option that reads “Save workspace to .Rdata on exit”. By default this is set to “ask”. If you want R to stop asking, change it to “never”.
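In case you’re wondering what “saving them to a location of my choice” might look like in practice, here is a minimal, purely illustrative sketch using the base R `save()` and `load()` functions (the file name here is made up; Section 4.5 covers saving and loading properly):
``````# write these particular variables to a file of my choosing...
save( sales.by.month, months, file = "mysales.Rdata" )

# ...and, in some later session, read them back in
load( file = "mysales.Rdata" )``````
Unlike the automatic workspace image, this only stores the variables you explicitly name, in a file you can actually find again later.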
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.11%3A_Quitting_R.txt
Every book that tries to introduce basic programming ideas to novices has to cover roughly the same topics, and in roughly the same order. Mine is no exception, and so in the grand tradition of doing it just the same way everyone else did it, this chapter covered the following topics: • Getting started. We downloaded and installed R and RStudio (Section 3.1). • Basic commands. We talked a bit about the logic of how R works and in particular how to type commands into the R console (Section 3.2), and in doing so learned how to perform basic calculations using the arithmetic operators `+`, `-`, `*`, `/` and `^`. • Introduction to functions. We saw several different functions, three that are used to perform numeric calculations (`sqrt()`, `abs()`, `round()`), one that applies to text (`nchar()`; Section 3.8), and one that works on any variable (`length()`; Section 3.7). In doing so, we talked a bit about how argument names work, and learned about default values for arguments (Section 3.5). • Introduction to variables. We learned the basic idea behind variables, and how to assign values to variables using the assignment operator `<-` (Section 3.4). We also learned how to create vectors using the combine function `c()` (Section 3.7). • Data types. Learned the distinction between numeric, character and logical data, including the basics of how to enter and use each of them (Sections 3.4 to 3.9). • Logical operations. Learned how to use the logical operators `==`, `!=`, `<`, `>`, `<=`, `>=`, `!`, `&` and `|`. And learned how to use logical indexing (Section 3.10). We still haven’t arrived at anything that resembles a “data set”, of course. Maybe the next chapter will get us a bit closer… 1. Source: Dismal Light (1968). 2. Although R is updated frequently, it doesn’t usually make much of a difference for the sort of work we’ll do in this book. In fact, during the writing of the book I upgraded several times, and didn’t have to change much except these sections describing the downloading. 3. If you’re running an older version of the Mac OS, then you need to follow the link to the “old” page (http://cran.r-project.org/bin/macosx/old/). You should be able to find the installer file that you need at the bottom of the page. 4. Tip for advanced Mac users. You can run R from the terminal if you want to. The command is just “R”. It behaves like the normal desktop version, except that help documentation behaves like a “man” page instead of opening in a new window. 5. This is probably no coincidence: the people who design and distribute the core R language itself are focused on technical stuff. And sometimes they almost seem to forget that there’s an actual human user at the end. The people who design and distribute RStudio are focused on user interface. They want to make R as usable as possible. The two websites reflect that difference. 6. Seriously. If you’re in a position to do so, open up R and start typing. The simple act of typing it rather than “just reading” makes a big difference. It makes the concepts more concrete, and it ties the abstract ideas (programming and statistics) to the actual context in which you need to use them. Statistics is something you do, not just something you read about in a textbook. 7.
If you’re running R from the terminal rather than from RStudio, escape doesn’t work: use CTRL-C instead. 8. For advanced users: yes, as you’ve probably guessed, R is printing out the source code for the function. 9. If you’re reading this with R open, a good learning trick is to try typing in a few different variations on what I’ve done here. If you experiment with your commands, you’ll quickly learn what works and what doesn’t. 10. For advanced users: if you want a table showing the complete order of operator precedence in R, type `?Syntax`. I haven’t included it in this book since there are quite a few different operators, and we don’t need that much detail. Besides, in practice most people seem to figure it out from seeing examples: until writing this book I never looked at the formal statement of operator precedence for any language I ever coded in, and never ran into any difficulties. 11. If you are using RStudio, and the “environment” panel (formerly known as the “workspace” panel) is visible when you typed the command, then you probably saw something happening there. That’s to be expected, and is quite helpful. However, there’s two things to note here (1) I haven’t yet explained what that panel does, so for now just ignore it, and (2) this is one of the helpful things RStudio does, not a part of R itself. 12. As we’ll discuss later, by doing this we are implicitly using the `print()` function. 13. Actually, in keeping with the R tradition of providing you with a billion different screwdrivers (even when you’re actually looking for a hammer) these aren’t the only options. There’s also the `assign()` function, and the `<<-` and `->>` operators. However, we won’t be using these at all in this book. 14. A quick reminder: when using operators like `<-` and `->` that span multiple characters, you can’t insert spaces in the middle. That is, if you type `- >` or `< -`, R will interpret your command the wrong way. And I will cry. 15. Actually, you can override any of these rules if you want to, and quite easily. All you have to do is add quote marks or backticks around your non-standard variable name. For instance `` `my sales` <- 350 `` would work just fine, but it’s almost never a good idea to do this. 16. For very advanced users: there is one exception to this. If you’re naming a function, don’t use `.` in the name unless you are intending to make use of the S3 object oriented programming system in R. If you don’t know what S3 is, then you definitely don’t want to be using it! For function naming, there’s been a trend among R users to prefer `myFunctionName`. 17. A side note for students with a programming background. Technically speaking, operators are functions in R: the addition operator `+` is actually a convenient way of calling the addition function `+()`. 18. A note for the mathematically inclined: R does support complex numbers, but unless you explicitly specify that you want them it assumes all calculations must be real valued. By default, the square root of a negative number is treated as undefined: `sqrt(-9)` will produce `NaN` (not a number) as its output. To get complex numbers, you would type `sqrt(-9+0i)` and R would now return `0+3i`. However, since we won’t have any need for complex numbers in this book, I won’t refer to them again. 19. The two functions discussed previously, `sqrt()` and `abs()`, both only have a single argument, `x`. So I could have typed something like `sqrt(x = 225)` or `abs(x = -13)` earlier.
The fact that all these functions use `x` as the name of the argument that corresponds to the “main” variable that you’re working with is no coincidence. That’s a fairly widely used convention. Quite often, the writers of R functions will try to use conventional names like this to make your life easier. Or at least that’s the theory. In practice it doesn’t always work as well as you’d hope. 20. For advanced users: obviously, this isn’t just an RStudio thing. If you’re running R in a terminal window, tab autocomplete still works, and does so in exactly the way you’d expect. It’s not as visually pretty as the RStudio version, of course, and lacks some of the cooler features that RStudio provides. I don’t bother to document that here: my assumption is that if you are running R in the terminal then you’re already familiar with using tab autocomplete. 21. Incidentally, that always works: if you’ve started typing a command and you want to clear it and start again, hit escape. 22. Another method is to start typing some text and then hit the Control key and the up arrow together (on Windows or Linux) or the Command key and the up arrow together (on a Mac). This will bring up a window showing all your recent commands that started with the same text as what you’ve currently typed. That can come in quite handy sometimes. 23. Notice that I didn’t specify any argument names here. The `c()` function is one of those cases where we don’t use names. We just type all the numbers, and R just dumps them all in a single variable. 24. I offer up my teenage attempts to be “cool” as evidence that some things just can’t be done. 25. Note that this is a very different operator to the assignment operator `=` that I talked about in Section 3.4. A common typo that people make when trying to write logical commands in R (or other languages, since the “`=` versus `==`” distinction is important in most programming languages) is to accidentally type `=` when you really mean `==`. Be especially cautious with this – I’ve been programming in various languages since I was a teenager, and I still screw this up a lot. Hm. I think I see why I wasn’t cool as a teenager. And why I’m still not cool. 26. A note for those of you who have taken a computer science class: yes, R does have a function for exclusive-or, namely `xor()`. Also worth noting is the fact that R makes the distinction between element-wise operators `&` and `|` and operators that look only at the first element of the vector, namely `&&` and `||`. To see the distinction, compare the behaviour of a command like `c(FALSE,TRUE) & c(TRUE,TRUE)` to the behaviour of something like `c(FALSE,TRUE) && c(TRUE,TRUE)`. If this doesn’t mean anything to you, ignore this footnote entirely. It’s not important for the content of this book. 27. Warning! `TRUE` and `FALSE` are reserved keywords in R, so you can trust that they always mean what they say they do. Unfortunately, the shortcut versions `T` and `F` do not have this property. It’s even possible to create variables that set up the reverse meanings, by typing commands like `T <- FALSE` and `F <- TRUE`. This is kind of insane, and something that is generally thought to be a design flaw in R. Anyway, the long and short of it is that it’s safer to use `TRUE` and `FALSE`. 28. Well, I say that… but in my personal experience it wasn’t until I started learning “regular expressions” that my loathing of computers reached its peak.
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/03%3A_Getting_Started_with_R/3.12%3A_Summary.txt
Form follows function – Louis Sullivan In Chapter 3 our main goal was to get started in R. As we go through the book we’ll run into a lot of new R concepts, which I’ll explain alongside the relevant data analysis concepts. However, there’s still quite a few things that I need to talk about now, otherwise we’ll run into problems when we start trying to work with data and do statistics. So that’s the goal in this chapter: to build on the introductory content from the last chapter, to get you to the point that we can start using R for statistics. Broadly speaking, the chapter comes in two parts. The first half of the chapter is devoted to the “mechanics” of R: installing and loading packages, managing the workspace, navigating the file system, and loading and saving data. In the second half, I’ll talk more about what kinds of variables exist in R, and introduce three new kinds of variables: factors, data frames and formulas. I’ll finish up by talking a little bit about the help documentation in R as well as some other avenues for finding assistance. In general, I’m not trying to be comprehensive in this chapter, I’m trying to make sure that you’ve got the basic foundations needed to tackle the content that comes later in the book. However, a lot of the topics are revisited in more detail later, especially in Chapters 7 and 8. 04: Additional R Concepts Before discussing any of the more complicated stuff, I want to introduce the comment character, `#`. It has a simple meaning: it tells R to ignore everything else you’ve written on this line. You won’t have much need of the `#` character immediately, but it’s very useful later on when writing scripts (see Chapter 8). However, while you don’t need to use it, I want to be able to include comments in my R extracts. For instance, if you read this:41 ``````seeker <- 3.1415 # create the first variable
lover <- 2.7183 # create the second variable
keeper <- seeker * lover # now multiply them to create a third one
print( keeper ) # print out the value of 'keeper'`````` ``## [1] 8.539539`` it’s a lot easier to understand what I’m doing than if I just write this: ``````seeker <- 3.1415
lover <- 2.7183
keeper <- seeker * lover
print( keeper ) `````` ``## [1] 8.539539`` You might have noticed that the code extracts in Chapter 3 didn’t include any `#` characters, but from now on, you’ll start seeing `#` characters appearing in the extracts, with some human-readable explanatory remarks next to them. These are still perfectly legitimate commands, since R knows that it should ignore the `#` character and everything after it. But hopefully they’ll help make things a little easier to understand. 4.02: Installing and Loading Packages In this section I discuss R packages, since almost all of the functions you might want to use in R come in packages. A package is basically just a big collection of functions, data sets and other R objects that are all grouped together under a common name. Some packages are already installed when you put R on your computer, but the vast majority of R packages are out there on the internet, waiting for you to download, install and use them. When I first started writing this book, Rstudio didn’t really exist as a viable option for using R, and as a consequence I wrote a very lengthy section that explained how to do package management using raw R commands. It’s not actually terribly hard to work with packages that way, but it’s clunky and unpleasant.
In this section, I’ll describe how to work with packages using the Rstudio tools, because they’re so much simpler. Along the way, you’ll see that whenever you get Rstudio to do something (e.g., install a package), you’ll actually see the R commands that get created. I’ll explain them as we go, because I think that helps you understand what’s going on. However, before we get started, there’s a critical distinction that you need to understand, which is the difference between having a package installed on your computer, and having a package loaded in R. As of this writing, there are just over 5000 R packages freely available “out there” on the internet.42 When you install R on your computer, you don’t get all of them: only about 30 or so come bundled with the basic R installation. So right now there are about 30 packages “installed” on your computer, and another 5000 or so that are not installed. So that’s what installed means: it means “it’s on your computer somewhere”. The critical thing to remember is that just because something is on your computer doesn’t mean R can use it. In order for R to be able to use one of your 30 or so installed packages, that package must also be “loaded”. Generally, when you open up R, only a few of these packages (about 7 or 8) are actually loaded. Basically what it boils down to is this: A package must be installed before it can be loaded. A package must be loaded before it can be used. This two step process might seem a little odd at first, but the designers of R had very good reasons to do it this way,43 and you get the hang of it pretty quickly. The package panel in Rstudio Right, let’s get started. The first thing you need to do is look in the lower right hand panel in Rstudio. You’ll see a tab labelled “Packages”. Click on the tab, and you’ll see a list of packages that looks something like Figure 4.1. Every row in the panel corresponds to a different package, and every column is a useful piece of information about that package.44 Going from left to right, here’s what each column is telling you: • The check box on the far left column indicates whether or not the package is loaded. • The one word of text immediately to the right of the check box is the name of the package. • The short passage of text next to the name is a brief description of the package. • The number next to the description tells you what version of the package you have installed. • The little x-mark next to the version number is a button that you can push to uninstall the package from your computer (you almost never need this). Loading a package That seems straightforward enough, so let’s try loading and unloading packages. For this example, I’ll use the `foreign` package. The `foreign` package is a collection of tools that are very handy when R needs to interact with files that are produced by other software packages (e.g., SPSS). It comes bundled with R, so it’s one of the ones that you have installed already, but it won’t be one of the ones loaded. Inside the `foreign` package is a function called `read.spss()`. It’s a handy little function that you can use to import an SPSS data file into R, so let’s pretend we want to use it. Currently, the `foreign` package isn’t loaded, so if I ask R to tell me if it knows about a function called `read.spss()` it tells me that there’s no such thing… ``exists( "read.spss" )`` ``## [1] FALSE`` Now let’s load the package.
In Rstudio, the process is dead simple: go to the package tab, find the entry for the `foreign` package, and check the box on the left hand side. The moment that you do this, you’ll see a command like this appear in the R console: ``library("foreign", lib.loc="/Library/Frameworks/R.framework/Versions/3.0/Resources/library")`` The `lib.loc` bit will look slightly different on Macs versus on Windows, because that part of the command is just Rstudio telling R where to look to find the installed packages. What I’ve shown you above is the Mac version. On a Windows machine, you’ll probably see something that looks like this: ``library("foreign", lib.loc="C:/Program Files/R/R-3.0.2/library")`` But actually it doesn’t matter much. The `lib.loc` bit is almost always unnecessary. Unless you’ve taken to installing packages in idiosyncratic places (which is something that you can do if you really want) R already knows where to look. So in the vast majority of cases, the command to load the `foreign` package is just this: ``library("foreign")`` Throughout this book, you’ll often see me typing in `library()` commands. You don’t actually have to type them in yourself: you can use the Rstudio package panel to do all your package loading for you. The only reason I include the `library()` commands sometimes is as a reminder to you to make sure that you have the relevant package loaded. Oh, and I suppose we should check to see if our attempt to load the package actually worked. Let’s see if R now knows about the existence of the `read.spss()` function… ``exists( "read.spss" )`` ``## [1] TRUE`` Yep. All good. Unloading a package Sometimes, especially after a long session of working with R, you find yourself wanting to get rid of some of those packages that you’ve loaded. The Rstudio package panel makes this exactly as easy as loading the package in the first place. Find the entry corresponding to the package you want to unload, and uncheck the box. When you do that for the `foreign` package, you’ll see this command appear on screen: ``detach("package:foreign", unload=TRUE)`` And the package is unloaded. We can verify this by seeing if the `read.spss()` function still `exists()`: ``exists( "read.spss" )`` ``## [1] FALSE`` Nope. Definitely gone. A few extra comments Sections 4.2.2 and 4.2.3 cover the main things you need to know about loading and unloading packages. However, there’s a couple of other details that I want to draw your attention to. A concrete example is the best way to illustrate. One of the other packages that you already have installed on your computer is the `Matrix` package, so let’s load that one and see what happens: ``````library( Matrix )
## Loading required package: lattice`````` This is slightly more complex than the output that we got last time, but it’s not too complicated. The `Matrix` package makes use of some of the tools in the `lattice` package, and R has kept track of this dependency. So when you try to load the `Matrix` package, R recognises that you’re also going to need to have the `lattice` package loaded too. As a consequence, both packages get loaded, and R prints out a helpful little note on screen to tell you that it’s done so. R is pretty aggressive about enforcing these dependencies. Suppose, for example, I try to unload the `lattice` package while the `Matrix` package is still loaded. This is easy enough to try: all I have to do is uncheck the box next to “lattice” in the packages panel.
But if I try this, here’s what happens: ``````detach("package:lattice", unload=TRUE)
## Error: package `lattice' is required by `Matrix' so will not be detached`````` R refuses to do it. This can be quite useful, since it stops you from accidentally removing something that you still need. So, if I want to remove both `Matrix` and `lattice`, I need to do it in the correct order: unload `Matrix` first, and only then unload `lattice`. Something else you should be aware of. Sometimes you’ll attempt to load a package, and R will print out a message on screen telling you that something or other has been “masked”. This will be confusing to you if I don’t explain it now, and it actually ties very closely to the whole reason why R forces you to load packages separately from installing them. Here’s an example. Two of the packages that I’ll refer to a lot in this book are called `car` and `psych`. The `car` package is short for “Companion to Applied Regression” (which is a really great book, I’ll add), and it has a lot of tools that I’m quite fond of. The `car` package was written by a guy called John Fox, who has written a lot of great statistical tools for social science applications. The `psych` package was written by William Revelle, and it has a lot of functions that are very useful for psychologists in particular, especially in regards to psychometric techniques. For the most part, `car` and `psych` are quite unrelated to each other. They do different things, so not surprisingly almost all of the function names are different. But… there’s one exception to that. The `car` package and the `psych` package both contain a function called `logit()`.45 This creates a naming conflict. If I load both packages into R, an ambiguity is created. If the user types in `logit(100)`, should R use the `logit()` function in the `car` package, or the one in the `psych` package? The answer is: R uses whichever package you loaded most recently, and it tells you this very explicitly. Here’s what happens when I load the `car` package, and then afterwards load the `psych` package: ``library(car)`` ``## Loading required package: carData`` ``library(psych)`` ``````##
## Attaching package: 'psych'`````` ``````## The following object is masked from 'package:car':
##
##     logit`````` The output here is telling you that the `logit` object (i.e., function) in the `car` package is no longer accessible to you. It’s been hidden (or “masked”) from you by the one in the `psych` package.46 Downloading new packages One of the main selling points for R is that there are thousands of packages that have been written for it, and these are all available online. So whereabouts online are these packages to be found, and how do we download and install them? There is a big repository of packages called the “Comprehensive R Archive Network” (CRAN), and the easiest way of getting and installing a new package is from one of the many CRAN mirror sites. Conveniently for us, R provides a function called `install.packages()` that you can use to do this. Even more conveniently, the Rstudio team runs its own CRAN mirror and Rstudio has a clean interface that lets you install packages without having to learn how to use the `install.packages()` command.47 Using the Rstudio tools is, again, dead simple. In the top left hand corner of the packages panel (Figure 4.1) you’ll see a button called “Install Packages”. If you click on that, it will bring up a window like the one shown in Figure 4.2. There are a few different buttons and boxes you can play with. Ignore most of them.
Just go to the line that says “Packages” and start typing the name of the package that you want. As you type, you’ll see a dropdown menu appear (Figure 4.3), listing names of packages that start with the letters that you’ve typed so far. You can select from this list, or just keep typing. Either way, once you’ve got the package name that you want, click on the install button at the bottom of the window. When you do, you’ll see the following command appear in the R console: ``install.packages("psych")`` This is the R command that does all the work. R then goes off to the internet, has a conversation with CRAN, downloads some stuff, and installs it on your computer. You probably don’t care about all the details of R’s little adventure on the web, but the `install.packages()` function is rather chatty, so it reports a bunch of gibberish that you really aren’t all that interested in: ``````trying URL 'http://cran.rstudio.com/bin/macosx/contrib/3.0/psych_1.4.1.tgz'
Content type 'application/x-gzip' length 2737873 bytes (2.6 Mb)
opened URL
==================================================
downloaded 2.6 Mb

The downloaded binary packages are in
/var/folders/cl/thhsyrz53g73q0w1kb5z3l_80000gn/T//RtmpmQ9VT3/downloaded_packages`````` Despite the long and tedious response, all that really means is “I’ve installed the psych package”. I find it best to humour the talkative little automaton. I don’t actually read any of this garbage, I just politely say “thanks” and go back to whatever I was doing. Updating R and R packages Every now and then the authors of packages release updated versions. The updated versions often add new functionality, fix bugs, and so on. It’s generally a good idea to update your packages periodically. There’s an `update.packages()` function that you can use to do this, but it’s probably easier to stick with the Rstudio tool. In the packages panel, click on the “Update Packages” button. This will bring up a window that looks like the one shown in Figure 4.4. In this window, each row refers to a package that needs to be updated. You can tell R which updates you want to install by checking the boxes on the left. If you’re feeling lazy and just want to update everything, click the “Select All” button, and then click the “Install Updates” button. R then prints out a lot of garbage on the screen, individually downloading and installing all the new packages. This might take a while to complete depending on how good your internet connection is. Go make a cup of coffee. Come back, and all will be well. About every six months or so, a new version of R is released. You can’t update R from within Rstudio (not to my knowledge, at least): to get the new version you can go to the CRAN website and download the most recent version of R, and install it in the same way you did when you originally installed R on your computer. This used to be a slightly frustrating event, because whenever you downloaded the new version of R, you would lose all the packages that you’d downloaded and installed, and would have to repeat the process of re-installing them. This was pretty annoying, and there were some neat tricks you could use to get around this. However, newer versions of R don’t have this problem so I no longer bother explaining the workarounds for that issue. What packages does this book use? There are several packages that I make use of in this book.
The most prominent ones are:

• `lsr`. This is the Learning Statistics with R package that accompanies this book. It doesn’t contain a lot of interesting high-powered tools: it’s just a small collection of handy little things that I think can be useful to novice users. As you get more comfortable with R this package should start to feel pretty useless to you.
• `psych`. This package, written by William Revelle, includes a lot of tools that are of particular use to psychologists. In particular, there’s several functions that are particularly convenient for producing analyses or summaries that are very common in psych, but less common in other disciplines.
• `car`. This is the Companion to Applied Regression package, which accompanies the excellent book of the same name by Fox and Weisberg (2011). It provides a lot of very powerful tools, only some of which we’ll touch on in this book.

Besides these three, there are a number of packages that I use in a more limited fashion: `gplots`, `sciplot`, `foreign`, `effects`, `R.matlab`, `gdata`, `lmtest`, and probably one or two others that I’ve missed. There are also a number of packages that I refer to but don’t actually use in this book, such as `reshape`, `compute.es`, `HistData` and `multcomp` among others. Finally, there are a number of packages that provide more advanced tools that I hope to talk about in future versions of the book, such as `sem`, `ez`, `nlme` and `lme4`. In any case, whenever I’m using a function that isn’t in the core packages, I’ll make sure to note this in the text.
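If you’d rather get these packages onto your own machine in one go, instead of clicking through the Rstudio dialog one package at a time, a single `install.packages()` call will do it. The short sketch below is just a convenience, not something the book requires you to run: the package names are the three listed above, and the optional `update.packages()` line at the end is simply the command-line counterpart of the “Update Packages” button discussed earlier.

```
# Install the main packages used in this book in one go.
# install.packages() accepts a character vector of package names.
install.packages(c("lsr", "psych", "car"))

# Optionally, bring everything already installed up to date.
# Setting ask = FALSE skips the per-package confirmation prompts.
update.packages(ask = FALSE)
```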
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/04%3A_Additional_R_Concepts/4.01%3A_Using_Comments.txt
Let’s suppose that you’re reading through this book, and what you’re doing is sitting down with it once a week and working through a whole chapter in each sitting. Not only that, you’ve been following my advice and typing in all these commands into R. So far during this chapter, you’d have typed quite a few commands, although the only ones that actually involved creating variables were the ones you typed during Section 4.1. As a result, you currently have three variables: `seeker`, `lover`, and `keeper`. These three variables are the contents of your workspace, also referred to as the global environment. The workspace is a key concept in R, so in this section we’ll talk a lot about what it is and how to manage its contents.

Listing the contents of the workspace

The first thing that you need to know how to do is examine the contents of the workspace. If you’re using Rstudio, you will probably find that the easiest way to do this is to use the “Environment” panel in the top right hand corner. Click on that, and you’ll see a list that looks very much like the one shown in Figures 4.5 and 4.6. If you’re using the command line, then the `objects()` function may come in handy:

```
objects()
## [1] "keeper" "lover"  "seeker"
```

Of course, in the true R tradition, the `objects()` function has a lot of fancy capabilities that I’m glossing over in this example. Moreover there are also several other functions that you can use, including `ls()` which is pretty much identical to `objects()`, and `ls.str()` which you can use to get a fairly detailed description of all the variables in the workspace. In fact, the `lsr` package actually includes its own function that you can use for this purpose, called `who()`. The reason for using the `who()` function is pretty straightforward: in my everyday work I find that the output produced by the `objects()` command isn’t quite informative enough, because the only thing it prints out is the name of each variable; but the `ls.str()` function is too informative, because it prints out a lot of additional information that I really don’t like to look at. The `who()` function is a compromise between the two. First, now that we’ve got the `lsr` package installed, we need to load it:

```
library(lsr)
## Warning: package 'lsr' was built under R version 3.5.2
```

and now we can use the `who()` function:

```
who()
## -- Name --   -- Class --   -- Size --
##    keeper     numeric       1
##    lover      numeric       1
##    seeker     numeric       1
```

As you can see, the `who()` function lists all the variables and provides some basic information about what kind of variable each one is and how many elements it contains. Personally, I find this output much more useful than the very compact output of the `objects()` function, but less overwhelming than the extremely verbose `ls.str()` function. Throughout this book you’ll see me using the `who()` function a lot. You don’t have to use it yourself: in fact, I suspect you’ll find it easier to look at the Rstudio environment panel. But for the purposes of writing a textbook I found it handy to have a nice text based description: otherwise there would be about another 100 or so screenshots added to the book.48

Removing variables from the workspace

Looking over that list of variables, it occurs to me that I really don’t need them any more. I created them originally just to make a point, but they don’t serve any useful purpose anymore, and now I want to get rid of them.
I’ll show you how to do this, but first I want to warn you – there’s no “undo” option for variable removal. Once a variable is removed, it’s gone forever unless you save it to disk. I’ll show you how to do that in Section 4.5, but quite clearly we have no need for these variables at all, so we can safely get rid of them.

In Rstudio, the easiest way to remove variables is to use the environment panel. Assuming that you’re in grid view (i.e., Figure 4.6), check the boxes next to the variables that you want to delete, then click on the “Clear” button at the top of the panel. When you do this, Rstudio will show a dialog box asking you to confirm that you really do want to delete the variables. It’s always worth checking that you really do, because as Rstudio is at pains to point out, you can’t undo this. Once a variable is deleted, it’s gone.49 In any case, if you click “yes”, that variable will disappear from the workspace: it will no longer appear in the environment panel, and it won’t show up when you use the `who()` command.

Suppose you don’t have access to Rstudio, and you still want to remove variables. This is where the remove function `rm()` comes in handy. The simplest way to use `rm()` is just to type in a (comma separated) list of all the variables you want to remove. Let’s say I want to get rid of `seeker` and `lover`, but I would like to keep `keeper`. To do this, all I have to do is type:

``rm( seeker, lover )``

There’s no visible output, but if I now inspect the workspace

```
who()
## -- Name --   -- Class --   -- Size --
##    keeper     numeric       1
```

I see that there’s only the `keeper` variable left. As you can see, `rm()` can be very handy for keeping the workspace tidy.
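One extra trick that isn’t covered above, but which you may find handy once your workspace starts filling up: `rm()` also has a `list` argument that accepts a character vector of variable names. Combined with `ls()`, this gives you a way to wipe the workspace clean from the command line. The sketch below is just an illustration of that idiom; only run it when you genuinely want everything gone, because (as noted above) there’s no undo.

```
# Remove every variable in the workspace in one go.
# ls() returns a character vector of all variable names, and rm()
# deletes whatever names are supplied via its list argument.
rm(list = ls())
```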
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/04%3A_Additional_R_Concepts/4.03%3A_Managing_the_Workspace.txt
In this section I talk a little about how R interacts with the file system on your computer. It’s not a terribly interesting topic, but it’s useful. As background to this discussion, I’ll talk a bit about how file system locations work in Section 4.4.1. Once upon a time everyone who used computers could safely be assumed to understand how the file system worked, because it was impossible to successfully use a computer if you didn’t! However, modern operating systems are much more “user friendly”, and as a consequence of this they go to great lengths to hide the file system from users. So these days it’s not at all uncommon for people to have used computers most of their life and not be familiar with the way that computers organise files. If you already know this stuff, skip straight to Section 4.4.2. Otherwise, read on. I’ll try to give a brief introduction that will be useful for those of you who have never been forced to learn how to navigate around a computer using a DOS or UNIX shell.

The file system itself

In this section I describe the basic idea behind file locations and file paths. Regardless of whether you’re using Windows, Mac OS or Linux, every file on the computer is assigned a (fairly) human readable address, and every address has the same basic structure: it describes a path that starts from a root location, through a series of folders (or if you’re an old-school computer user, directories), and finally ends up at the file. On a Windows computer the root is the physical drive50 on which the file is stored, and for most home computers the name of the hard drive that stores all your files is C: and therefore most file names on Windows begin with C:. After that comes the folders, and on Windows the folder names are separated by a `\` symbol. So, the complete path to this book on my Windows computer might be something like this:

``C:\Users\dan\Rbook\LSR.pdf``

and what that means is that the book is called LSR.pdf, and it’s in a folder called `Rbook` which itself is in a folder called `dan` which itself is … well, you get the idea. On Linux, Unix and Mac OS systems, the addresses look a little different, but they’re more or less identical in spirit. Instead of using the backslash, folders are separated using a forward slash, and unlike Windows, they don’t treat the physical drive as being the root of the file system. So, the path to this book on my Mac might be something like this:

``/Users/dan/Rbook/LSR.pdf``

So that’s what we mean by the “path” to a file. The next concept to grasp is the idea of a working directory and how to change it. For those of you who have used command line interfaces previously, this should be obvious already. But if not, here’s what I mean. The working directory is just “whatever folder I’m currently looking at”. Suppose that I’m currently looking for files in Explorer (if you’re using Windows) or using Finder (on a Mac). The folder I currently have open is my user directory (i.e., `C:\Users\dan` or `/Users/dan`). That’s my current working directory. The fact that we can imagine that the program is “in” a particular directory means that we can talk about moving from our current location to a new one. What that means is that we might want to specify a new location in relation to our current location. To do so, we need to introduce two new conventions. Regardless of what operating system you’re using, we use `.` to refer to the current working directory, and `..` to refer to the directory above it.
This allows us to specify a path to a new location in relation to our current location, as the following examples illustrate. Let’s assume that I’m using my Windows computer, and my working directory is `C:\Users\dan\Rbook`. The table below shows several addresses in relation to my current one:

Table 4.1: Several file system locations, expressed both as absolute paths (i.e., from the root) and as paths relative to the working directory `C:\Users\dan\Rbook`.

```
absolute path (i.e., from root)     relative path (i.e., from C:\Users\dan\Rbook)
C:\Users\dan                        ..
C:\Users                            ..\..
C:\Users\dan\Rbook\source           .\source
C:\Users\dan\nerdstuff              ..\nerdstuff
```

There’s one last thing I want to call attention to: the `~` directory. I normally wouldn’t bother, but R makes reference to this concept sometimes. It’s quite common on computers that have multiple users to define `~` to be the user’s home directory. On my Mac, for instance, the home directory `~` for the “dan” user is `/Users/dan/`. And so, not surprisingly, it is possible to define other directories in terms of their relationship to the home directory. For example, an alternative way to describe the location of the `LSR.pdf` file on my Mac would be

``~/Rbook/LSR.pdf``

That’s about all you really need to know about file paths. And since this section already feels too long, it’s time to look at how to navigate the file system in R.

Navigating the file system using the R console

In this section I’ll talk about how to navigate this file system from within R itself. It’s not particularly user friendly, and so you’ll probably be happy to know that Rstudio provides you with an easier method, and I will describe it in Section 4.4.4. So in practice, you won’t really need to use the commands that I babble on about in this section, but I do think it helps to see them in operation at least once before forgetting about them forever.

Okay, let’s get started. When you want to load or save a file in R it’s important to know what the working directory is. You can find out by using the `getwd()` command. For the moment, let’s assume that I’m using Mac OS or Linux, since there are some subtleties to Windows. Here’s what happens:

```
getwd()
## [1] "/Users/dan"
```

We can change the working directory quite easily using `setwd()`. The `setwd()` function has only one argument, `dir`, which is a character string specifying a path to a directory, or a path relative to the working directory. Since I’m currently located at `/Users/dan`, the following two are equivalent:

```
setwd("/Users/dan/Rbook/data")
setwd("./Rbook/data")
```

Now that we’re here, we can use the `list.files()` command to get a listing of all the files in that directory.
Since this is the directory in which I store all of the data files that we’ll use in this book, here’s what we get as the result:

```
list.files()
##  [1] "afl24.Rdata"             "aflsmall.Rdata"          "aflsmall2.Rdata"
##  [4] "agpp.Rdata"              "all.zip"                 "annoying.Rdata"
##  [7] "anscombesquartet.Rdata"  "awesome.Rdata"           "awesome2.Rdata"
## [10] "booksales.csv"           "booksales.Rdata"         "booksales2.csv"
## [13] "cakes.Rdata"             "cards.Rdata"             "chapek9.Rdata"
## [16] "chico.Rdata"             "clinicaltrial_old.Rdata" "clinicaltrial.Rdata"
## [19] "coffee.Rdata"            "drugs.wmc.rt.Rdata"      "dwr_all.Rdata"
## [22] "effort.Rdata"            "happy.Rdata"             "harpo.Rdata"
## [25] "harpo2.Rdata"            "likert.Rdata"            "nightgarden.Rdata"
## [28] "nightgarden2.Rdata"      "parenthood.Rdata"        "parenthood2.Rdata"
## [31] "randomness.Rdata"        "repeated.Rdata"          "rtfm.Rdata"
## [34] "salem.Rdata"             "zeppo.Rdata"
```

Not terribly exciting, I’ll admit, but it’s useful to know about. In any case, there’s only one more thing I want to make a note of, which is that R also makes use of the home directory. You can find out what it is by using the `path.expand()` function, like this:

```
path.expand("~")
## [1] "/Users/dan"
```

You can change the home directory if you want, but we’re not going to make use of it very much so there’s no reason to. The only reason I’m even bothering to mention it at all is that when you use Rstudio to open a file, you’ll see output on screen that defines the path to the file relative to the `~` directory. I’d prefer you not to be confused when you see it.51

Why do the Windows paths use the wrong slash?

Let’s suppose I’m on Windows. As before, I can find out what my current working directory is like this:

```
getwd()
## [1] "C:/Users/dan"
```

This seems about right, but you might be wondering why R is displaying a Windows path using the wrong type of slash. The answer is slightly complicated, and has to do with the fact that R treats the `\` character as “special” (see Section 7.8.7). If you’re deeply wedded to the idea of specifying a path using the Windows style slashes, then what you need to do is either type `/` whenever you mean `\`, or else “escape” each backslash by typing it twice. In other words, if you want to specify the working directory on a Windows computer, you need to use one of the following commands:

```
setwd( "C:/Users/dan" )
setwd( "C:\\Users\\dan" )
```

It’s kind of annoying to have to do it this way, but as you’ll see later on in Section 7.8.7 it’s a necessary evil. Fortunately, as we’ll see in the next section, Rstudio provides a much simpler way of changing directories (and the short sketch at the end of this section shows a way of building paths that sidesteps the slash problem entirely)…

Navigating the file system using the Rstudio file panel

Although I think it’s important to understand how all this command line stuff works, in many (maybe even most) situations there’s an easier way. For our purposes, the easiest way to navigate the file system is to make use of Rstudio’s built in tools. The “file” panel – the lower right hand area in Figure 4.7 – is actually a pretty decent file browser. Not only can you just point and click on the names to move around the file system, you can also use it to set the working directory, and even load files. Here’s what you need to do to change the working directory using the file panel. Let’s say I’m looking at the actual screen shown in Figure 4.7. At the top of the file panel you see some text that says “Home > Rbook > data”. What that means is that it’s displaying the files that are stored in the `/Users/dan/Rbook/data` directory on my computer. It does not mean that this is the R working directory.
If you want to change the R working directory using the file panel, you need to click on the button that reads “More”. This will bring up a little menu, and one of the options will be “Set as Working Directory”. If you select that option, then R really will change the working directory. You can tell that it has done so because this command appears in the console:

``setwd("~/Rbook/data")``

In other words, Rstudio sends a command to the R console, exactly as if you’d typed it yourself. The file panel can be used to do other things too. If you want to move “up” to the parent folder (e.g., from `/Users/dan/Rbook/data` to `/Users/dan/Rbook`), click on the “..” link in the file panel. To move to a subfolder, click on the name of the folder that you want to open. You can open some types of file by clicking on them. You can delete files from your computer using the “delete” button, rename them with the “rename” button, and so on. As you can tell, the file panel is a very handy little tool for navigating the file system. But it can do more than just navigate. As we’ll see later, it can be used to open files. And if you look at the buttons and menu options that it presents, you can even use it to rename, delete, copy or move files, and create new folders. However, since most of that functionality isn’t critical to the basic goals of this book, I’ll let you discover those on your own.
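As promised above, here’s a small, purely optional sketch of one way to sidestep the Windows slash problem: rather than typing separators by hand, you can let R’s built-in `file.path()` function glue folder names together. The folder and file names used here (`Rbook`, `data`, `booksales.Rdata`) are just the examples from this chapter; substitute your own.

```
# Build a path without typing any slashes yourself. file.path() joins its
# arguments using a separator that R understands on every operating system.
data.dir <- file.path("~", "Rbook", "data")
data.dir
## [1] "~/Rbook/data"

# The same trick works for individual files, and the result can be handed
# straight to functions such as setwd() or load().
file.path(data.dir, "booksales.Rdata")
## [1] "~/Rbook/data/booksales.Rdata"
```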
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/04%3A_Additional_R_Concepts/4.04%3A_Navigating_the_File_System.txt
There are several different types of files that are likely to be relevant to us when doing data analysis. There are three in particular that are especially important from the perspective of this book:

• Workspace files are those with a .Rdata file extension. This is the standard kind of file that R uses to store data and variables. They’re called “workspace files” because you can use them to save your whole workspace.
• Comma separated value (CSV) files are those with a .csv file extension. These are just regular old text files, and they can be opened with almost any software. It’s quite typical for people to store data in CSV files, precisely because they’re so simple.
• Script files are those with a .R file extension. These aren’t data files at all; rather, they’re used to save a collection of commands that you want R to execute later. They’re just text files, but we won’t make use of them until Chapter 8.

There are also several other types of file that R makes use of,52 but they’re not really all that central to our interests. There are also several other kinds of data file that you might want to import into R. For instance, you might want to open Microsoft Excel spreadsheets (.xlsx files), or data files that have been saved in the native file formats for other statistics software, such as SPSS, SAS, Minitab, Stata or Systat. Finally, you might have to handle databases. R tries hard to play nicely with other software, so it has tools that let you open and work with any of these and many others. I’ll discuss some of these other possibilities elsewhere in this book (Section 7.9), but for now I want to focus primarily on the two kinds of data file that you’re most likely to need: .Rdata files and .csv files. In this section I’ll talk about how to load a workspace file, how to import data from a CSV file, and how to save your workspace to a workspace file. Throughout this section I’ll first describe the (sometimes awkward) R commands that do all the work, and then I’ll show you the (much easier) way to do it using Rstudio.

Loading workspace files using R

When I used the `list.files()` command to list the contents of the `/Users/dan/Rbook/data` directory (in Section 4.4.2), the output referred to a file called booksales.Rdata. Let’s say I want to load the data from this file into my workspace. The way I do this is with the `load()` function. There are two arguments to this function, but the only one we’re interested in is

• `file`. This should be a character string that specifies a path to the file that needs to be loaded. You can use an absolute path or a relative path to do so.

Using the absolute file path, the command would look like this:

``load( file = "/Users/dan/Rbook/data/booksales.Rdata" )``

but this is pretty lengthy. Given that the working directory (remember, we changed the directory at the end of Section 4.4.4) is `/Users/dan/Rbook/data`, I could use a relative file path, like so:

``load( file = "../data/booksales.Rdata" )``

However, my preference is usually to change the working directory first, and then load the file. What that would look like is this:

```
setwd( "../data" )          # move to the data directory
load( "booksales.Rdata" )   # load the data
```

If I were then to type `who()` I’d see that there are several new variables in my workspace now.
Throughout this book, whenever you see me loading a file, I will assume that the file is actually stored in the working directory, or that you’ve changed the working directory so that R is pointing at the directory that contains the file. Obviously, you don’t need to type that command yourself: you can use the Rstudio file panel to do the work.

Loading workspace files using Rstudio

Okay, so how do we open an .Rdata file using the Rstudio file panel? It’s terribly simple. First, use the file panel to find the folder that contains the file you want to load. If you look at Figure 4.7, you can see that there are several .Rdata files listed. Let’s say I want to load the `booksales.Rdata` file. All I have to do is click on the file name. Rstudio brings up a little dialog box asking me to confirm that I do want to load this file. I click yes. The following command then turns up in the console,

``load("~/Rbook/data/booksales.Rdata")``

and the new variables will appear in the workspace (you’ll see them in the Environment panel in Rstudio, or if you type `who()`). So easy it barely warrants having its own section.

Importing data from CSV files using R

One quite commonly used data format is the humble “comma separated value” file, also called a CSV file, and usually bearing the file extension .csv. CSV files are just plain old-fashioned text files, and what they store is basically just a table of data. This is illustrated in Figure 4.8, which shows a file called booksales.csv that I’ve created. As you can see, each column corresponds to a variable, and each row represents the book sales data for one month. The first row doesn’t contain actual data though: it has the names of the variables. If Rstudio were not available to you, the easiest way to open this file would be to use the `read.csv()` function.53 This function is pretty flexible, and I’ll talk a lot more about its capabilities in Section 7.9, but for now there are only two arguments to the function that I’ll mention:

• `file`. This should be a character string that specifies a path to the file that needs to be loaded. You can use an absolute path or a relative path to do so.
• `header`. This is a logical value indicating whether or not the first row of the file contains variable names. The default value is `TRUE`.

Therefore, to import the CSV file, the command I need is:

``books <- read.csv( file = "booksales.csv" )``

There are two very important points to notice here. Firstly, notice that I didn’t try to use the `load()` function, because that function is only meant to be used for .Rdata files. If you try to use `load()` on other types of data, you get an error. Secondly, notice that when I imported the CSV file I assigned the result to a variable, which I imaginatively called `books`.54 There’s a reason for this. The idea behind an `.Rdata` file is that it stores a whole workspace. So, if you had the ability to look inside the file yourself you’d see that the data file keeps track of all the variables and their names. So when you `load()` the file, R restores all those original names. CSV files are treated differently: as far as R is concerned, the CSV only stores one variable, but that variable is a big table. So when you import that table into the workspace, R expects you to give it a name.
Let’s have a look at what we’ve got:

```
print( books )
##        Month Days Sales Stock.Levels
## 1    January   31     0         high
## 2   February   28   100         high
## 3      March   31   200          low
## 4      April   30    50          out
## 5        May   31     0          out
## 6       June   30     0         high
## 7       July   31     0         high
## 8     August   31     0         high
## 9  September   30     0         high
## 10   October   31     0         high
## 11  November   30     0         high
## 12  December   31     0         high
```

Clearly, it’s worked, but the format of this output is a bit unfamiliar. We haven’t seen anything like this before. What you’re looking at is a data frame, which is a very important kind of variable in R, and one I’ll discuss in Section 4.8. For now, let’s just be happy that we imported the data and that it looks about right.

Importing data from CSV files using Rstudio

Yet again, it’s easier in Rstudio. In the environment panel in Rstudio you should see a button called “Import Dataset”. Click on that, and it will give you a couple of options: select the “From Text File…” option, and it will open up a very familiar dialog box asking you to select a file: if you’re on a Mac, it’ll look like the usual Finder window that you use to choose a file; on Windows it looks like an Explorer window. An example of what it looks like on a Mac is shown in Figure 4.9. I’m assuming that you’re familiar with your own computer, so you should have no problem finding the CSV file that you want to import! Find the one you want, then click on the “Open” button. When you do this, you’ll see a window that looks like the one in Figure 4.10.

The import data set window is relatively straightforward to understand. In the top left corner, you need to type the name of the variable you want R to create. By default, that will be the same as the file name: our file is called `booksales.csv`, so Rstudio suggests the name `booksales`. If you’re happy with that, leave it alone. If not, type something else. Immediately below this are a few things that you can tweak to make sure that the data gets imported correctly:

• Heading. Does the first row of the file contain raw data, or does it contain headings for each variable? The `booksales.csv` file has a header at the top, so I selected “yes”.
• Separator. What character is used to separate different entries? In most CSV files this will be a comma (it is “comma separated” after all). But you can change this if your file is different.
• Decimal. What character is used to specify the decimal point? In English speaking countries, this is almost always a period (i.e., `.`). That’s not universally true: many European countries use a comma. So you can change that if you need to.
• Quote. What character is used to denote a block of text? That’s usually going to be a double quote mark. It is for the `booksales.csv` file, so that’s what I selected.

The nice thing about the Rstudio window is that it shows you the raw data file at the top of the window, and it shows you a preview of the data at the bottom. If the data at the bottom doesn’t look right, try changing some of the settings on the left hand side. Once you’re happy, click “Import”. When you do, two commands appear in the R console:

```
booksales <- read.csv("~/Rbook/data/booksales.csv")
View(booksales)
```

The first of these commands is the one that loads the data. The second one will display a pretty table showing the data in Rstudio.

Saving a workspace file using `save()`

Not surprisingly, saving data is very similar to loading data. Although Rstudio provides a simple way to save files (see below), it’s worth understanding the actual commands involved.
There are two commands you can use to do this, `save()` and `save.image()`. If you’re happy to save all of the variables in your workspace into the data file, then you should use `save.image()`. And if you’re happy for R to save the file into the current working directory, all you have to do is this:

``save.image( file = "myfile.Rdata" )``

Since `file` is the first argument, you can shorten this to `save.image("myfile.Rdata")`; and if you want to save to a different directory, then (as always) you need to be more explicit about specifying the path to the file, just as we discussed in Section 4.4. Suppose, however, I have several variables in my workspace, and I only want to save some of them. For instance, I might have this as my workspace:

```
who()
## -- Name --   -- Class --     -- Size --
##    data       data.frame      3 x 2
##    handy      character       1
##    junk       numeric         1
```

I want to save `data` and `handy`, but not `junk`. But I don’t want to delete `junk` right now, because I want to use it for something else later on. This is where the `save()` function is useful, since it lets me indicate exactly which variables I want to save. Here is one way I can use the `save` function to solve my problem:

``save(data, handy, file = "myfile.Rdata")``

Importantly, you must specify the `file` argument by name. The reason is that if you don’t do so, R will think that `"myfile.Rdata"` is actually a variable that you want to save, and you’ll get an error message. Finally, I should mention a second way to specify which variables the `save()` function should save, which is to use the `list` argument. You do so like this:

```
save.me <- c("data", "handy")                      # the variables to be saved
save( file = "booksales2.Rdata", list = save.me )  # the command to save them
```

Saving a workspace file using Rstudio

Rstudio allows you to save the workspace pretty easily. In the environment panel (Figures 4.5 and 4.6) you can see the “save” button. There’s no text, but it’s the same icon that gets used on every computer everywhere: it’s the one that looks like a floppy disk. You know, those things that haven’t been used in about 20 years. Alternatively, go to the “Session” menu and click on the “Save Workspace As…” option.55 This will bring up the standard “save” dialog box for your operating system (e.g., on a Mac it’ll look a little bit like the loading dialog box in Figure 4.9). Type in the name of the file that you want to save it to, and all the variables in your workspace will be saved to disk. You’ll see an R command like this one

``save.image("~/Desktop/Untitled.RData")``

Pretty straightforward, really.

Other things you might want to save

Until now, we’ve talked mostly about loading and saving data. Other things you might want to save include:

• The output. Sometimes you might also want to keep a copy of all your interactions with R, including everything that you typed in and everything that R did in response. There are some functions that you can use to get R to write its output to a file rather than to print onscreen (e.g., `sink()`; a short sketch of this appears at the end of this section), but to be honest, if you do want to save the R output, the easiest thing to do is to use the mouse to select the relevant text in the R console, go to the “Edit” menu in Rstudio and select “Copy”. The output has now been copied to the clipboard. Now open up your favourite text editor or word processing software, and paste it. And you’re done. However, this will only save the contents of the console, not the plots you’ve drawn (assuming you’ve drawn some).
We’ll talk about saving images later on.

• A script. While it is possible – and sometimes handy – to save the R output as a method for keeping a copy of your statistical analyses, another option that people use a lot (especially when you move beyond simple “toy” analyses) is to write scripts. A script is a text file in which you write out all the commands that you want R to run. You can write your script using whatever software you like. In real-world data analysis writing scripts is a key skill – and as you become familiar with R you’ll probably find that most of what you do involves scripting rather than typing commands at the R prompt. However, you won’t need to do much scripting initially, so we’ll leave that until Chapter 8.
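Since `sink()` gets a passing mention above but no example, here’s a minimal sketch of how console-output capture works. The file name `my_output.txt` is just a made-up example; the important part is the pattern of turning the diversion on, running some commands, and then turning it off again with a second call to `sink()`.

```
# Divert console output to a text file (created in the working directory).
sink("my_output.txt")

# Anything printed from here on is written to the file instead of the console.
print("book sales summary")
print(summary(c(0, 100, 200, 50)))

# Turn the diversion off again, so output returns to the console.
sink()
```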
textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/04%3A_Additional_R_Concepts/4.05%3A_Loading_and_Saving_Data.txt