11.5: Test for Homogeneity

The goodness-of-fit test can be used to decide whether a population fits a given distribution, but it will not suffice to decide whether two populations follow the same unknown distribution. A different test, called the test for homogeneity, can be used to draw a conclusion about whether two populations have the same distribution. To calculate the test statistic for a test for homogeneity, follow the same procedure as with the test of independence. The expected value for each cell needs to be at least five in order for you to use this test.

Hypotheses

• $H_{0}$: The distributions of the two populations are the same.
• $H_{a}$: The distributions of the two populations are not the same.

Test Statistic

• Use a $\chi^{2}$ test statistic. It is computed in the same way as the test for independence.

Degrees of Freedom ($df$)

• $df = \text{number of columns} - 1$

Requirements

• All values in the table must be greater than or equal to five.

Common Uses

• Comparing two populations. For example: men vs. women, before vs. after, east vs. west. The variable is categorical with more than two possible response values.

Example $1$

Do male and female college students have the same distribution of living arrangements? Use a level of significance of 0.05. Suppose that 250 randomly selected male college students and 300 randomly selected female college students were asked about their living arrangements: dormitory, apartment, with parents, other. The results are shown in Table $1$. Do male and female college students have the same distribution of living arrangements?

Table $1$: Distribution of Living Arrangements for College Males and College Females

 | Dormitory | Apartment | With Parents | Other
Males | 72 | 84 | 49 | 45
Females | 91 | 86 | 88 | 35

Answer

• $H_{0}$: The distribution of living arrangements for male college students is the same as the distribution of living arrangements for female college students.
• $H_{a}$: The distribution of living arrangements for male college students is not the same as the distribution of living arrangements for female college students.

Degrees of freedom ($df$): $df = \text{number of columns} - 1 = 4 - 1 = 3$

Distribution for the test: $\chi^{2}_{3}$

Calculate the test statistic: $\chi^{2} = 10.1287$ (calculator or computer)

Probability statement: $p\text{-value} = P(\chi^{2} > 10.1287) = 0.0175$

Using the TI-83, 83+, 84, 84+ calculator: Press the MATRX key and arrow over to EDIT. Press 1:[A]. Press 2 ENTER 4 ENTER. Enter the table values by row. Press ENTER after each. Press 2nd QUIT. Press STAT and arrow over to TESTS. Arrow down to C:χ2-TEST. Press ENTER. You should see Observed:[A] and Expected:[B]. Arrow down to Calculate. Press ENTER. The test statistic is 10.1287 and the $p\text{-value} = 0.0175$. Do the procedure a second time, but arrow down to Draw instead of Calculate.

Compare $\alpha$ and the $p\text{-value}$: Since no $\alpha$ is given, assume $\alpha = 0.05$. $p\text{-value} = 0.0175$. $\alpha > p\text{-value}$.

Make a decision: Since $\alpha > p\text{-value}$, reject $H_{0}$. This means that the distributions are not the same.

Conclusion: At a 5% level of significance, from the data, there is sufficient evidence to conclude that the distributions of living arrangements for male and female college students are not the same.

Notice that the conclusion is only that the distributions are not the same. We cannot use the test for homogeneity to draw any conclusions about how they differ.

Exercise $1$

Do families and singles have the same distribution of cars? Use a level of significance of 0.05.
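The text computes this statistic with a TI-83/84 calculator; as a software alternative, here is a minimal Python sketch (an assumption, not part of the original lesson) that reproduces Example 1 with scipy.

```python
# A minimal sketch (not part of the original text) of the homogeneity test
# from Example 1, using scipy.  The rows and columns match Table 1.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [72, 84, 49, 45],   # males:   dormitory, apartment, with parents, other
    [91, 86, 88, 35],   # females: dormitory, apartment, with parents, other
])

# chi2_contingency computes expected counts from the row and column totals
# and returns the chi-square statistic, p-value, df, and expected table.
stat, p_value, df, expected = chi2_contingency(observed, correction=False)

print(f"chi2 = {stat:.4f}, df = {df}, p-value = {p_value:.4f}")
# chi2 = 10.1287, df = 3, p-value = 0.0175, matching the worked example.
```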
Suppose that 100 randomly selected families and 200 randomly selected singles were asked what type of car they drove: sport, sedan, hatchback, truck, van/SUV. The results are shown in Table $2$. Do families and singles have the same distribution of cars? Test at a level of significance of 0.05.

Table $2$

 | Sport | Sedan | Hatchback | Truck | Van/SUV
Family | 5 | 15 | 35 | 17 | 28
Single | 45 | 65 | 37 | 46 | 7

Answer

With a $p\text{-value}$ of almost zero, we reject the null hypothesis. The data show that the distribution of cars is not the same for families and singles.

Example $2$

Both before and after a recent earthquake, surveys were conducted asking voters which of the three candidates they planned on voting for in the upcoming city council election. Has there been a change since the earthquake? Use a level of significance of 0.05. Table shows the results of the survey. Has there been a change in the distribution of voter preferences since the earthquake?

 | Perez | Chung | Stevens
Before | 167 | 128 | 135
After | 214 | 197 | 225

Answer

$H_{0}$: The distribution of voter preferences was the same before and after the earthquake.

$H_{a}$: The distribution of voter preferences was not the same before and after the earthquake.

Degrees of freedom ($df$): $df = \text{number of columns} - 1 = 3 - 1 = 2$

Distribution for the test: $\chi^{2}_{2}$

Calculate the test statistic: $\chi^{2} = 3.2603$ (calculator or computer)

Probability statement: $p\text{-value} = P(\chi^{2} > 3.2603) = 0.1959$

Using the TI-83, 83+, 84, 84+ calculator: Press the MATRX key and arrow over to EDIT. Press 1:[A]. Press 2 ENTER 3 ENTER. Enter the table values by row. Press ENTER after each. Press 2nd QUIT. Press STAT and arrow over to TESTS. Arrow down to C:χ2-TEST. Press ENTER. You should see Observed:[A] and Expected:[B]. Arrow down to Calculate. Press ENTER. The test statistic is 3.2603 and the $p\text{-value} = 0.1959$. Do the procedure a second time, but arrow down to Draw instead of Calculate.

Compare $\alpha$ and the $p\text{-value}$: $\alpha = 0.05$ and the $p\text{-value} = 0.1959$. $\alpha < p\text{-value}$.

Make a decision: Since $\alpha < p\text{-value}$, do not reject $H_{0}$.

Conclusion: At a 5% level of significance, from the data, there is insufficient evidence to conclude that the distribution of voter preferences was not the same before and after the earthquake.

Exercise $2$

Ivy League schools receive many applications, but only some can be accepted. At the schools listed in Table, two types of applications are accepted: regular and early decision.

Application Type Accepted | Brown | Columbia | Cornell | Dartmouth | Penn | Yale
Regular | 2,115 | 1,792 | 5,306 | 1,734 | 2,685 | 1,245
Early Decision | 577 | 627 | 1,228 | 444 | 1,195 | 761

We want to know if the number of regular applications accepted follows the same distribution as the number of early applications accepted. State the null and alternative hypotheses, the degrees of freedom and the test statistic, sketch the graph of the $p$-value, and draw a conclusion about the test of homogeneity.

Answer

$H_{0}$: The distribution of regular applications accepted is the same as the distribution of early applications accepted.

$H_{a}$: The distribution of regular applications accepted is not the same as the distribution of early applications accepted.

$df = 5$

$\chi^{2} \text{ test statistic} = 430.06$

Using the TI-83, 83+, 84, 84+ calculator: Press the MATRX key and arrow over to EDIT. Press 1:[A]. Press 2 ENTER 6 ENTER. Enter the table values by row. Press ENTER after each. Press 2nd QUIT. Press STAT and arrow over to TESTS. Arrow down to C:χ2-TEST. Press ENTER. You should see Observed:[A] and Expected:[B].
Arrow down to Calculate. Press ENTER. The test statistic is 430.06 and the $p\text{-value} = 9.80E-91$. Do the procedure a second time, but arrow down to Draw instead of Calculate.

Review

To assess whether two data sets are derived from the same distribution (which need not be known), you can apply the test for homogeneity, which uses the chi-square distribution. The null hypothesis for this test states that the populations of the two data sets come from the same distribution. The test compares the observed values against the values that would be expected if the two populations followed the same distribution. The test is right-tailed. Each observation or cell category must have an expected value of at least five.

Formula Review

$\sum_{i \cdot j} \frac{(O-E)^{2}}{E}$ — homogeneity test statistic, where: $O =$ observed values, $E =$ expected values, $i =$ number of rows in the data contingency table, and $j =$ number of columns in the data contingency table

$df = (i - 1)(j - 1)$ — degrees of freedom

Exercise $3$

A math teacher wants to see if two of her classes have the same distribution of test scores. What test should she use?

Answer

test for homogeneity

Exercise $4$

What are the null and alternative hypotheses for Exercise $3$?

Exercise $5$

A market researcher wants to see if two different stores have the same distribution of sales throughout the year. What type of test should he use?

Answer

test for homogeneity

Exercise $6$

A meteorologist wants to know if East and West Australia have the same distribution of storms. What type of test should she use?

Exercise $7$

What condition must be met to use the test for homogeneity?

Answer

All values in the table must be greater than or equal to five.

Use the following information to answer the next five exercises: Do private practice doctors and hospital doctors have the same distribution of working hours? Suppose that a sample of 100 private practice doctors and 150 hospital doctors are selected at random and asked about the number of hours a week they work. The results are shown in Table.

 | 20–30 | 30–40 | 40–50 | 50–60
Private Practice | 16 | 40 | 38 | 6
Hospital | 8 | 44 | 59 | 39

Exercise $8$

State the null and alternative hypotheses.

Exercise $9$

$df =$ _______

Answer

3

Exercise $10$

What is the test statistic?

Exercise $11$

What is the $p\text{-value}$?

Answer

0.00005

Exercise $12$

What can you conclude at the 5% significance level?
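For readers who prefer software to the calculator steps above, here is a short Python sketch (an assumption; not part of the original exercises) that checks the doctors' working-hours table.

```python
# A small sketch checking the doctors' working-hours exercise with scipy.
# Rows are private practice and hospital; columns are the hour ranges.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [16, 40, 38, 6],    # private practice: 20-30, 30-40, 40-50, 50-60 hours
    [8, 44, 59, 39],    # hospital
])

stat, p_value, df, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {stat:.2f}, df = {df}, p-value = {p_value:.5f}")
# df = 3 and p-value of about 0.00005, matching the answers given above.
```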
11.6: Comparison of the Chi-Square Tests

You have seen the $\chi^{2}$ test statistic used in three different circumstances. The following bulleted list is a summary that will help you decide which $\chi^{2}$ test is the appropriate one to use.

• Goodness-of-Fit: Use the goodness-of-fit test to decide whether a population with an unknown distribution "fits" a known distribution. In this case there will be a single qualitative survey question or a single outcome of an experiment from a single population. Goodness-of-fit is typically used to see if the population is uniform (all outcomes occur with equal frequency), the population is normal, or the population is the same as another population with a known distribution. The null and alternative hypotheses are:
  • $H_{0}$: The population fits the given distribution.
  • $H_{a}$: The population does not fit the given distribution.
• Independence: Use the test for independence to decide whether two variables (factors) are independent or dependent. In this case there will be two qualitative survey questions or experiments and a contingency table will be constructed. The goal is to see if the two variables are unrelated (independent) or related (dependent). The null and alternative hypotheses are:
  • $H_{0}$: The two variables (factors) are independent.
  • $H_{a}$: The two variables (factors) are dependent.
• Homogeneity: Use the test for homogeneity to decide if two populations with unknown distributions have the same distribution as each other. In this case there will be a single qualitative survey question or experiment given to two different populations. The null and alternative hypotheses are:
  • $H_{0}$: The two populations follow the same distribution.
  • $H_{a}$: The two populations have different distributions.

Review

The goodness-of-fit test is typically used to determine if data fit a particular distribution. The test of independence makes use of a contingency table to determine the independence of two factors. The test for homogeneity determines whether two populations come from the same distribution, even if this distribution is unknown.

Exercise $1$

Which test do you use to decide whether an observed distribution is the same as an expected distribution?

Answer

a goodness-of-fit test

Exercise $2$

What is the null hypothesis for the type of test from Exercise $1$?

Exercise $3$

Which test would you use to decide whether two factors have a relationship?

Answer

a test for independence

Exercise $4$

Which test would you use to decide if two populations have the same distribution?

Exercise $5$

How are tests of independence similar to tests for homogeneity?

Answer

Answers will vary. Sample answer: Tests of independence and tests for homogeneity both calculate the test statistic the same way, $\sum_{i \cdot j} \frac{(O-E)^{2}}{E}$. In addition, all values must be greater than or equal to five.

Exercise $6$

How are tests of independence different from tests for homogeneity?

Bringing It Together

Exercise $7$

1. Explain why a goodness-of-fit test and a test of independence are generally right-tailed tests.
2. If you did a left-tailed test, what would you be testing?

Answer a

The test statistic is always positive, and if the expected and observed values are not close together, the test statistic is large and the null hypothesis will be rejected.

Answer b

Testing to see if the data fits the distribution "too well" or is too perfect.
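To make the three-way comparison concrete, here is an illustrative Python sketch (an assumption; scipy is not referenced in the original text) showing how each test is typically invoked in software. The die counts come from the exercises later in this chapter; the small 2x2 independence table is hypothetical.

```python
# An illustrative sketch of the three chi-square tests summarized above.
import numpy as np
from scipy.stats import chisquare, chi2_contingency

# Goodness-of-fit: one population, observed counts vs. a claimed distribution.
# Data: the 120 die rolls from the exercises later in this chapter.
observed = np.array([15, 29, 16, 15, 30, 15])
expected = np.full(6, 120 / 6)            # a fair die: 20 rolls per face
gof_stat, gof_p = chisquare(observed, f_exp=expected)

# Independence: two factors measured on one population, arranged as a
# contingency table (hypothetical 2x2 counts here).
table = np.array([[20, 30], [25, 25]])
ind_stat, ind_p, ind_df, ind_exp = chi2_contingency(table, correction=False)

# Homogeneity: one factor measured on two populations; each row is one
# population.  The arithmetic is identical to the independence test; only
# the sampling design and the wording of the hypotheses differ.
rows = np.array([[72, 84, 49, 45], [91, 86, 88, 35]])   # living arrangements
hom_stat, hom_p, hom_df, hom_exp = chi2_contingency(rows, correction=False)

print(gof_p, ind_p, hom_p)
```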
11.7: Test of a Single Variance

A test of a single variance assumes that the underlying distribution is normal. The null and alternative hypotheses are stated in terms of the population variance (or population standard deviation). The test statistic is:

$\chi^{2} = \frac{(n-1)s^{2}}{\sigma^{2}} \label{test}$

where:

• $n$ is the total number of data values
• $s^{2}$ is the sample variance
• $\sigma^{2}$ is the population variance

You may think of $s$ as the random variable in this test. The number of degrees of freedom is $df = n - 1$. A test of a single variance may be right-tailed, left-tailed, or two-tailed. The next example will show you how to set up the null and alternative hypotheses. The null and alternative hypotheses contain statements about the population variance.

Example $1$

Math instructors are not only interested in how their students do on exams, on average, but how the exam scores vary. To many instructors, the variance (or standard deviation) may be more important than the average. Suppose a math instructor believes that the standard deviation for his final exam is five points. One of his best students thinks otherwise. The student claims that the standard deviation is more than five points. If the student were to conduct a hypothesis test, what would the null and alternative hypotheses be?

Answer

Even though we are given the population standard deviation, we can set up the test using the population variance as follows.

• $H_{0}: \sigma^{2} = 5^{2}$
• $H_{a}: \sigma^{2} > 5^{2}$

Exercise $1$

A SCUBA instructor wants to record the collective depths each of his students dives during their checkout. He is interested in how the depths vary, even though everyone should have been at the same depth. He believes the standard deviation is three feet. His assistant thinks the standard deviation is less than three feet. If the instructor were to conduct a test, what would the null and alternative hypotheses be?

Answer

• $H_{0}: \sigma^{2} = 3^{2}$
• $H_{a}: \sigma^{2} < 3^{2}$

Example $2$

With individual lines at its various windows, a post office finds that the standard deviation for normally distributed waiting times for customers on Friday afternoon is 7.2 minutes. The post office experiments with a single, main waiting line and finds that for a random sample of 25 customers, the waiting times for customers have a standard deviation of 3.5 minutes. With a significance level of 5%, test the claim that a single line causes lower variation among waiting times (shorter waiting times) for customers.

Answer

Since the claim is that a single line causes less variation, this is a test of a single variance. The parameter is the population variance, $\sigma^{2}$, or the population standard deviation, $\sigma$.

Random Variable: The sample standard deviation, $s$, is the random variable. Let $s = \text{standard deviation for the waiting times}$.

• $H_{0}: \sigma^{2} = 7.2^{2}$
• $H_{a}: \sigma^{2} < 7.2^{2}$

The word "less" tells you this is a left-tailed test.

Distribution for the test: $\chi^{2}_{24}$, where:

• $n = \text{the number of customers sampled}$
• $df = n - 1 = 25 - 1 = 24$

Calculate the test statistic (Equation \ref{test}):

$\chi^{2} = \frac{(n-1)s^{2}}{\sigma^{2}} = \frac{(25-1)(3.5)^{2}}{7.2^{2}} = 5.67 \nonumber$

where $n = 25$, $s = 3.5$, and $\sigma = 7.2$.
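The statistic and the left-tailed p-value can also be verified in software; here is a minimal Python sketch (an assumption; the text itself uses the TI-83/84).

```python
# A minimal sketch of the single-variance test from Example 2 with scipy.
from scipy.stats import chi2

n, s, sigma = 25, 3.5, 7.2              # sample size, sample sd, claimed sd
df = n - 1
test_stat = (n - 1) * s**2 / sigma**2   # chi2 = (n-1)s^2 / sigma^2

# Left-tailed test, so the p-value is the area below the statistic.
p_value = chi2.cdf(test_stat, df)
print(f"chi2 = {test_stat:.2f}, p-value = {p_value:.6f}")
# chi2 = 5.67, p-value = 0.000042, matching the probability statement below.
```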
Graph: (sketch the $\chi^{2}_{24}$ curve and shade the left-tail area below 5.67)

Probability statement: $p\text{-value} = P(\chi^{2} < 5.67) = 0.000042$

Compare $\alpha$ and the $p\text{-value}$: $\alpha = 0.05$ and $p\text{-value} = 0.000042$, so $\alpha > p\text{-value}$.

Make a decision: Since $\alpha > p\text{-value}$, reject $H_{0}$. This means that you reject $\sigma^{2} = 7.2^{2}$. In other words, you do not think the variation in waiting times is 7.2 minutes; you think the variation in waiting times is less.

Conclusion: At a 5% level of significance, from the data, there is sufficient evidence to conclude that a single line causes a lower variation among the waiting times; that is, with a single line, the customer waiting times vary less than 7.2 minutes.

In 2nd DISTR, use 7:χ2cdf. The syntax is (lower, upper, df) for the parameter list. For this example, χ2cdf(-1E99,5.67,24). The $p\text{-value} = 0.000042$.

Exercise $2$

The FCC conducts broadband speed tests to measure how much data per second passes between a consumer's computer and the internet. As of August of 2012, the standard deviation of Internet speeds across Internet Service Providers (ISPs) was 12.2 percent. Suppose a sample of 15 ISPs is taken, and the standard deviation is 13.2. An analyst claims that the standard deviation of speeds is more than what was reported. State the null and alternative hypotheses, compute the degrees of freedom, the test statistic, sketch the graph of the $p$-value, and draw a conclusion. Test at the 1% significance level.

Answer

• $H_{0}: \sigma^{2} = 12.2^{2}$
• $H_{a}: \sigma^{2} > 12.2^{2}$

$df = 14$

$\chi^{2} \text{ test statistic} = 16.39$

In 2nd DISTR, use 7:χ2cdf. The syntax is (lower, upper, df) for the parameter list. χ2cdf(16.39,10^99,14). The $p\text{-value} = 0.2902$.

The $p\text{-value}$ is 0.2902, so we do not reject the null hypothesis. There is not enough evidence to suggest that the variance is greater than $12.2^{2}$.

Review

To test variability, use the chi-square test of a single variance. The test may be left-, right-, or two-tailed, and its hypotheses are always expressed in terms of the variance (or standard deviation).

Formula Review

$\chi^{2} = \frac{(n-1) \cdot s^{2}}{\sigma^{2}}$ — test of a single variance statistic, where: $n = \text{sample size}$, $s = \text{sample standard deviation}$, and $\sigma = \text{population standard deviation}$

$df = n - 1$ — degrees of freedom

Test of a Single Variance

• Use the test to determine variation.
• The degrees of freedom is the sample size minus 1.
• The test statistic is $\frac{(n-1) \cdot s^{2}}{\sigma^{2}}$, where $n = \text{the total number of data}$, $s^{2} = \text{sample variance}$, and $\sigma^{2} = \text{population variance}$.
• The test may be left-, right-, or two-tailed.

Use the following information to answer the next three exercises: An archer's standard deviation for his hits is six (data is measured in distance from the center of the target). An observer claims the standard deviation is less.

Exercise $3$

What type of test should be used?

Answer

a test of a single variance

Exercise $4$

State the null and alternative hypotheses.

Exercise $5$

Is this a right-tailed, left-tailed, or two-tailed test?

Answer

a left-tailed test

Use the following information to answer the next three exercises: The standard deviation of heights for students in a school is 0.81. A random sample of 50 students is taken, and the standard deviation of heights of the sample is 0.96. A researcher in charge of the study believes the standard deviation of heights for the school is greater than 0.81.
Exercise $6$

What type of test should be used?

Exercise $7$

State the null and alternative hypotheses.

Answer

$H_{0}: \sigma^{2} = 0.81^{2}$; $H_{a}: \sigma^{2} > 0.81^{2}$

Exercise $8$

$df =$ ________

Use the following information to answer the next four exercises: The average waiting time in a doctor's office varies. The standard deviation of waiting times in a doctor's office is 3.4 minutes. A random sample of 30 patients in the doctor's office has a standard deviation of waiting times of 4.1 minutes. One doctor believes the variance of waiting times is greater than originally thought.

Exercise $9$

What type of test should be used?

Answer

a test of a single variance

Exercise $10$

What is the test statistic?

Exercise $11$

What is the $p\text{-value}$?

Answer

0.0542

Exercise $12$

What can you conclude at the 5% significance level?
11.8: Lab 1 - Chi-Square Goodness-of-Fit (Worksheet)

Name: ______________________________
Section: _____________________________
Student ID#: __________________________

Work in groups on these problems. You should try to answer the questions without referring to your textbook. If you get stuck, try asking another group for help.

Student Learning Outcome

• The student will evaluate data collected to determine if they fit either the uniform or exponential distribution.

Collect the Data

Go to your local supermarket. Ask 30 people as they leave for the total amount on their grocery receipts. (Or, ask three cashiers for the last ten amounts. Be sure to include the express lane, if it is open.) You may need to combine two categories so that each cell has an expected value of at least five.

1. Record the values.
__________ __________ __________ __________ __________
__________ __________ __________ __________ __________
__________ __________ __________ __________ __________
__________ __________ __________ __________ __________
__________ __________ __________ __________ __________
__________ __________ __________ __________ __________
2. Construct a histogram of the data. Make five to six intervals. Sketch the graph using a ruler and pencil. Scale the axes.
3. Calculate the following:
   1. $\bar{x} =$ ________
   2. $s =$ ________
   3. $s^{2} =$ ________

Uniform Distribution

Test to see if grocery receipts follow the uniform distribution.

1. Using your lowest and highest values, $X \sim U($_______, _______$)$
2. Divide the distribution into fifths.
3. Calculate the following:
   1. lowest value = _________
   2. 20th percentile = _________
   3. 40th percentile = _________
   4. 60th percentile = _________
   5. 80th percentile = _________
   6. highest value = _________
4. For each fifth, count the observed number of receipts and record it. Then determine the expected number of receipts and record that.

Fifth | Observed | Expected
1st | |
2nd | |
3rd | |
4th | |
5th | |

5. $H_{0}$: ________
6. $H_{a}$: ________
7. What distribution should you use for a hypothesis test?
8. Why did you choose this distribution?
9. Calculate the test statistic.
10. Find the $p\text{-value}$.
11. Sketch a graph of the situation. Label and scale the x-axis. Shade the area corresponding to the $p\text{-value}$.
12. State your decision.
13. State your conclusion in a complete sentence.

Exponential Distribution

Test to see if grocery receipts follow the exponential distribution with decay parameter $\frac{1}{\bar{x}}$.

1. Using $\frac{1}{\bar{x}}$ as the decay parameter, $X \sim \text{Exp}($_________$)$.
2. Calculate the following:
   1. lowest value = ________
   2. first quartile = ________
   3. 37th percentile = ________
   4. median = ________
   5. 63rd percentile = ________
   6. 3rd quartile = ________
   7. highest value = ________
3. For each cell, count the observed number of receipts and record it. Then determine the expected number of receipts and record that.

Cell | Observed | Expected
1st | |
2nd | |
3rd | |
4th | |
5th | |
6th | |

4. $H_{0}$: ________
5. $H_{a}$: ________
6. What distribution should you use for a hypothesis test?
7. Why did you choose this distribution?
8. Calculate the test statistic.
9. Find the $p\text{-value}$.
10. Sketch a graph of the situation. Label and scale the x-axis. Shade the area corresponding to the $p\text{-value}$.
11. State your decision.
12. State your conclusion in a complete sentence.

Discussion Questions

1. Did your data fit either distribution? If so, which?
2. In general, do you think it's likely that data could fit more than one distribution? In complete sentences, explain why or why not.
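If your class uses software rather than a calculator, the uniform-fifths portion of this lab can be sketched as follows (an assumption; the placeholder data are hypothetical, so substitute the 30 amounts you actually record).

```python
# A sketch of the lab's uniform-distribution test in Python.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(7)
receipts = np.round(rng.uniform(5, 120, size=30), 2)  # placeholder amounts

# Divide the range from the lowest to the highest value into fifths.
edges = np.linspace(receipts.min(), receipts.max(), 6)
observed, _ = np.histogram(receipts, bins=edges)      # counts per fifth
expected = np.full(5, len(receipts) / 5)              # uniform: 6 per fifth

stat, p_value = chisquare(observed, f_exp=expected)   # df = 5 - 1 = 4
print(f"chi2 = {stat:.3f}, p-value = {p_value:.3f}")
```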
11.9: Lab 2 - Chi-Square Test of Independence (Worksheet)

Name: ______________________________
Section: _____________________________
Student ID#: __________________________

Work in groups on these problems. You should try to answer the questions without referring to your textbook. If you get stuck, try asking another group for help.

Student Learning Outcome

• The student will evaluate if there is a significant relationship between favorite type of snack and gender.

Collect the Data

1. Using your class as a sample, complete the following chart. Ask each other what your favorite snack is, then total the results. NOTE: You may need to combine two food categories so that each cell has an expected value of at least five.

Favorite type of snack | sweets (candy & baked goods) | ice cream | chips & pretzels | fruits & vegetables | Total
male | | | | |
female | | | | |
Total | | | | |

2. Looking at the table, does it appear to you that there is a dependence between gender and favorite type of snack food? Why or why not?

Hypothesis Test

Conduct a hypothesis test to determine if the factors are independent:

1. $H_{0}$: ________
2. $H_{a}$: ________
3. What distribution should you use for a hypothesis test?
4. Why did you choose this distribution?
5. Calculate the test statistic.
6. Find the $p\text{-value}$.
7. Sketch a graph of the situation. Label and scale the $x$-axis. Shade the area corresponding to the $p\text{-value}$.
8. State your decision.
9. State your conclusion in a complete sentence.

Discussion Questions

1. Is the conclusion of your study the same as or different from your answer to question two under Collect the Data?
2. Why do you think that occurred?
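As with Lab 1, the computation can be done in software; here is a Python sketch with hypothetical counts (an assumption; replace them with your class chart totals). It also checks the expected-value condition noted above.

```python
# A sketch of this lab's independence test in Python.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [10, 8, 12, 6],    # male:   sweets, ice cream, chips & pretzels, fruits & vegetables
    [14, 9, 7, 11],    # female
])

stat, p_value, df, expected = chi2_contingency(observed, correction=False)

# The test is reliable only if every expected cell count is at least five;
# otherwise combine two food categories, as the NOTE above says.
if (expected < 5).any():
    print("Combine categories: some expected counts are below five.")

print(f"chi2 = {stat:.3f}, df = {df}, p-value = {p_value:.4f}")
```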
11.E: The Chi-Square Distribution (Exercises)

These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax.

11.2: Facts about the Chi-Square Distribution

Decide whether the following statements are true or false.

Q 11.2.1

As the number of degrees of freedom increases, the graph of the chi-square distribution looks more and more symmetrical.

S 11.2.1

true

Q 11.2.2

The standard deviation of the chi-square distribution is twice the mean.

Q 11.2.3

The mean and the median of the chi-square distribution are the same if $df = 24$.

S 11.2.3

false

11.3: Goodness-of-Fit Test

For each problem, use a solution sheet to solve the hypothesis test problem. Go to [link] for the chi-square solution sheet. Round expected frequency to two decimal places.

Q 11.3.1

A six-sided die is rolled 120 times. Fill in the expected frequency column. Then, conduct a hypothesis test to determine if the die is fair. The data in Table are the result of the 120 rolls.

Face Value | Frequency | Expected Frequency
1 | 15 |
2 | 29 |
3 | 16 |
4 | 15 |
5 | 30 |
6 | 15 |

Q 11.3.2

The marital status distribution of the U.S. male population, ages 15 and older, is as shown in Table.

Marital Status | Percent | Expected Frequency
never married | 31.3 |
married | 56.1 |
widowed | 2.5 |
divorced/separated | 10.1 |

Suppose that a random sample of 400 U.S. young adult males, 18 to 24 years old, yielded the following frequency distribution. We are interested in whether this age group of males fits the distribution of the U.S. adult population. Calculate the frequency one would expect when surveying 400 people. Fill in Table, rounding to two decimal places.

Marital Status | Frequency
never married | 140
married | 238
widowed | 2
divorced/separated | 20

S 11.3.2

Marital Status | Percent | Expected Frequency
never married | 31.3 | 125.2
married | 56.1 | 224.4
widowed | 2.5 | 10
divorced/separated | 10.1 | 40.4

1. $H_{0}$: The data fit the distribution.
2. $H_{a}$: The data do not fit the distribution.
3. $df = 3$
4. chi-square distribution with $df = 3$
5. test statistic = 19.27
6. $p\text{-value} = 0.0002$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Reject the null hypothesis; (iii) Reason for decision: $p\text{-value} < \alpha$; (iv) Conclusion: The data do not fit the distribution.

Use the following information to answer the next two exercises: The columns in Table contain the Race/Ethnicity of U.S. Public Schools for a recent year, the percentages for the Advanced Placement Examinee Population for that class, and the Overall Student Population. Suppose the right column contains the result of a survey of 1,000 local students from that year who took an AP Exam.

Race/Ethnicity | AP Examinee Population | Overall Student Population | Survey Frequency
Asian, Asian American, or Pacific Islander | 10.2% | 5.4% | 113
Black or African-American | 8.2% | 14.5% | 94
Hispanic or Latino | 15.5% | 15.9% | 136
American Indian or Alaska Native | 0.6% | 1.2% | 10
White | 59.4% | 61.6% | 604
Not reported/other | 6.1% | 1.4% | 43

Q 11.3.3

Perform a goodness-of-fit test to determine whether the local results follow the distribution of the U.S. overall student population based on ethnicity.

Q 11.3.4

Perform a goodness-of-fit test to determine whether the local results follow the distribution of the U.S. AP examinee population, based on ethnicity.

S 11.3.4

1. $H_{0}$: The local results follow the distribution of the U.S. AP examinee population.
2. $H_{a}$: The local results do not follow the distribution of the U.S. AP examinee population.
3. $df = 5$
4. chi-square distribution with $df = 5$
5. chi-square test statistic = 13.4
6. $p\text{-value} = 0.0199$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Reject the null hypothesis when $\alpha = 0.05$;
(iii) Reason for decision: $p\text{-value} < \alpha$; (iv) Conclusion: Local data do not fit the AP examinee distribution. (v) Decision: Do not reject the null hypothesis when $\alpha = 0.01$; (vi) Conclusion: There is insufficient evidence to conclude that local data do not follow the distribution of the U.S. AP examinee population.

Q 11.3.5

The City of South Lake Tahoe, CA, has an Asian population of 1,419 people, out of a total population of 23,609. Suppose that a survey of 1,419 self-reported Asians in the Manhattan, NY, area yielded the data in Table. Conduct a goodness-of-fit test to determine if the self-reported sub-groups of Asians in the Manhattan area fit that of the Lake Tahoe area.

Race | Lake Tahoe Frequency | Manhattan Frequency
Asian Indian | 131 | 174
Chinese | 118 | 557
Filipino | 1,045 | 518
Japanese | 80 | 54
Korean | 12 | 29
Vietnamese | 9 | 21
Other | 24 | 66

Use the following information to answer the next two exercises: UCLA conducted a survey of more than 263,000 college freshmen from 385 colleges in fall 2005. The results of students' expected majors by gender were reported in The Chronicle of Higher Education (2/2/2006). Suppose a survey of 5,000 graduating females and 5,000 graduating males was done as a follow-up last year to determine what their actual majors were. The results are shown in the tables for Exercise 11.3.6 and Exercise 11.3.7. The second column in each table does not add to 100% because of rounding.

Q 11.3.6

Conduct a goodness-of-fit test to determine if the actual college majors of graduating females fit the distribution of their expected majors.

Major | Women - Expected Major | Women - Actual Major
Arts & Humanities | 14.0% | 670
Biological Sciences | 8.4% | 410
Business | 13.1% | 685
Education | 13.0% | 650
Engineering | 2.6% | 145
Physical Sciences | 2.6% | 125
Professional | 18.9% | 975
Social Sciences | 13.0% | 605
Technical | 0.4% | 15
Other | 5.8% | 300
Undecided | 8.0% | 420

S 11.3.6

1. $H_{0}$: The actual college majors of graduating females fit the distribution of their expected majors.
2. $H_{a}$: The actual college majors of graduating females do not fit the distribution of their expected majors.
3. $df = 10$
4. chi-square distribution with $df = 10$
5. $\text{test statistic} = 11.48$
6. $p\text{-value} = 0.3211$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Do not reject the null hypothesis when $\alpha = 0.05$ or $\alpha = 0.01$; (iii) Reason for decision: $p\text{-value} > \alpha$; (iv) Conclusion: There is insufficient evidence to conclude that the distribution of actual college majors of graduating females fits the distribution of their expected majors.

Q 11.3.7

Conduct a goodness-of-fit test to determine if the actual college majors of graduating males fit the distribution of their expected majors.

Major | Men - Expected Major | Men - Actual Major
Arts & Humanities | 11.0% | 600
Biological Sciences | 6.7% | 330
Business | 22.7% | 1130
Education | 5.8% | 305
Engineering | 15.6% | 800
Physical Sciences | 3.6% | 175
Professional | 9.3% | 460
Social Sciences | 7.6% | 370
Technical | 1.8% | 90
Other | 8.2% | 400
Undecided | 6.6% | 340

Read the statement and decide whether it is true or false.

Q 11.3.8

In a goodness-of-fit test, the expected values are the values we would expect if the null hypothesis were true.

S 11.3.8

true

Q 11.3.9

In general, if the observed values and expected values of a goodness-of-fit test are not close together, then the test statistic can get very large and on a graph will be way out in the right tail.

Q 11.3.10

Use a goodness-of-fit test to determine if high school principals believe that students are absent equally during the week or not.
S 11.3.10

true

Q 11.3.11

The test to use to determine if a six-sided die is fair is a goodness-of-fit test.

Q 11.3.12

In a goodness-of-fit test, if the $p\text{-value}$ is 0.0113, in general, do not reject the null hypothesis.

S 11.3.12

false

Q 11.3.13

A sample of 212 commercial businesses was surveyed for recycling one commodity; a commodity here means any one type of recyclable material such as plastic or aluminum. Table shows the business categories in the survey, the sample size of each category, and the number of businesses in each category that recycle one commodity. Based on the study, on average half of the businesses were expected to be recycling one commodity. As a result, the last column shows the expected number of businesses in each category that recycle one commodity. At the 5% significance level, perform a hypothesis test to determine if the observed number of businesses that recycle one commodity follows the uniform distribution of the expected values.

Business Type | Number in class | Observed number that recycle one commodity | Expected number that recycle one commodity
Office | 35 | 19 | 17.5
Retail/Wholesale | 48 | 27 | 24
Food/Restaurants | 53 | 35 | 26.5
Manufacturing/Medical | 52 | 21 | 26
Hotel/Mixed | 24 | 9 | 12

Q 11.3.14

Table contains information from a survey among 499 participants classified according to their age groups. The second column shows the percentage of obese people per age class among the study participants. The last column comes from a different study at the national level that shows the corresponding percentages of obese people in the same age classes in the USA. Perform a hypothesis test at the 5% significance level to determine whether the survey participants are a representative sample of the USA obese population.

Age Class (Years) | Obese (Percentage) | Expected USA Average (Percentage)
20–30 | 75.0 | 32.6
31–40 | 26.5 | 32.6
41–50 | 13.6 | 36.6
51–60 | 21.9 | 36.6
61–70 | 21.0 | 39.7

S 11.3.14

1. $H_{0}$: Surveyed obese fit the distribution of expected obese.
2. $H_{a}$: Surveyed obese do not fit the distribution of expected obese.
3. $df = 4$
4. chi-square distribution with $df = 4$
5. $\text{test statistic} = 54.01$
6. $p\text{-value} = 0$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Reject the null hypothesis; (iii) Reason for decision: $p\text{-value} < \alpha$; (iv) Conclusion: At the 5% level of significance, from the data, there is sufficient evidence to conclude that the surveyed obese do not fit the distribution of expected obese.

11.4: Test of Independence

For each problem, use a solution sheet to solve the hypothesis test problem. Go to Appendix E for the chi-square solution sheet. Round expected frequency to two decimal places.

Q 11.4.1

A recent debate about where in the United States skiers believe the skiing is best prompted the following survey. Test to see if the best ski area is independent of the level of the skier.

U.S. Ski Area | Beginner | Intermediate | Advanced
Tahoe | 20 | 30 | 40
Utah | 10 | 30 | 60
Colorado | 10 | 40 | 50

Q 11.4.2

Car manufacturers are interested in whether there is a relationship between the size of car an individual drives and the number of people in the driver's family (that is, whether car size and family size are independent). To test this, suppose that 800 car owners were randomly surveyed with the results in Table. Conduct a test of independence.

Family Size | Sub & Compact | Mid-size | Full-size | Van & Truck
1 | 20 | 35 | 40 | 35
2 | 20 | 50 | 70 | 80
3–4 | 20 | 50 | 100 | 90
5+ | 20 | 30 | 70 | 70

S 11.4.2

1. $H_{0}$: Car size is independent of family size.
2. $H_{a}$: Car size is dependent on family size.
3. $df = 9$
4. chi-square distribution with $df = 9$
5. $\text{test statistic} = 15.8284$
6. $p\text{-value} = 0.0706$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Do not reject the null hypothesis; (iii) Reason for decision: $p\text{-value} > \alpha$; (iv) Conclusion: At the 5% significance level, there is insufficient evidence to conclude that car size and family size are dependent.

Q 11.4.3

College students may be interested in whether or not their majors have any effect on starting salaries after graduation. Suppose that 300 recent graduates were surveyed as to their majors in college and their starting salaries after graduation. Table shows the data. Conduct a test of independence.

Major | < \$50,000 | \$50,000–\$68,999 | \$69,000+
English | 5 | 20 | 5
Engineering | 10 | 30 | 60
Nursing | 10 | 15 | 15
Business | 10 | 20 | 30
Psychology | 20 | 30 | 20

Q 11.4.4

Some travel agents claim that honeymoon hot spots vary according to age of the bride. Suppose that 280 recent brides were interviewed as to where they spent their honeymoons. The information is given in Table. Conduct a test of independence.

Location | 20–29 | 30–39 | 40–49 | 50 and over
Niagara Falls | 15 | 25 | 25 | 20
Poconos | 15 | 25 | 25 | 10
Europe | 10 | 25 | 15 | 5
Virgin Islands | 20 | 25 | 15 | 5

S 11.4.4

1. $H_{0}$: Honeymoon locations are independent of bride's age.
2. $H_{a}$: Honeymoon locations are dependent on bride's age.
3. $df = 9$
4. chi-square distribution with $df = 9$
5. $\text{test statistic} = 15.7027$
6. $p\text{-value} = 0.0734$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Do not reject the null hypothesis; (iii) Reason for decision: $p\text{-value} > \alpha$; (iv) Conclusion: At the 5% significance level, there is insufficient evidence to conclude that honeymoon location and bride age are dependent.

Q 11.4.5

A manager of a sports club keeps information concerning the main sport in which members participate and their ages. To test whether there is a relationship between the age of a member and his or her choice of sport, 643 members of the sports club are randomly selected. Conduct a test of independence.

Sport | 18–25 | 26–30 | 31–40 | 41 and over
racquetball | 42 | 58 | 30 | 46
tennis | 58 | 76 | 38 | 65
swimming | 72 | 60 | 65 | 33

Q 11.4.6

A major food manufacturer is concerned that the sales for its skinny french fries have been decreasing. As a part of a feasibility study, the company conducts research into the types of fries sold across the country to determine if the type of fries sold is independent of the area of the country. The results of the study are shown in Table. Conduct a test of independence.

Type of Fries | Northeast | South | Central | West
skinny fries | 70 | 50 | 20 | 25
curly fries | 100 | 60 | 15 | 30
steak fries | 20 | 40 | 10 | 10

S 11.4.6

1. $H_{0}$: The types of fries sold are independent of the location.
2. $H_{a}$: The types of fries sold are dependent on the location.
3. $df = 6$
4. chi-square distribution with $df = 6$
5. $\text{test statistic} = 18.8369$
6. $p\text{-value} = 0.0044$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Reject the null hypothesis; (iii) Reason for decision: $p\text{-value} < \alpha$; (iv) Conclusion: At the 5% significance level, there is sufficient evidence to conclude that types of fries and location are dependent.

Q 11.4.7

According to Dan Lenard, an independent insurance agent in the Buffalo, N.Y. area, the following is a breakdown of the amount of life insurance purchased by males in the following age groups. He is interested in whether the age of the male and the amount of life insurance purchased are independent events. Conduct a test for independence.
Age of Males | None | < \$200,000 | \$200,000–\$400,000 | \$400,001–\$1,000,000 | \$1,000,001+
20–29 | 40 | 15 | 40 | 0 | 5
30–39 | 35 | 5 | 20 | 20 | 10
40–49 | 20 | 0 | 30 | 0 | 30
50+ | 40 | 30 | 15 | 15 | 10

Q 11.4.8

Suppose that 600 thirty-year-olds were surveyed to determine whether or not there is a relationship between the level of education an individual has and salary. Conduct a test of independence.

Annual Salary | Not a high school graduate | High school graduate | College graduate | Masters or doctorate
< \$30,000 | 15 | 25 | 10 | 5
\$30,000–\$40,000 | 20 | 40 | 70 | 30
\$40,000–\$50,000 | 10 | 20 | 40 | 55
\$50,000–\$60,000 | 5 | 10 | 20 | 60
\$60,000+ | 0 | 5 | 10 | 150

S 11.4.8

1. $H_{0}$: Salary is independent of level of education.
2. $H_{a}$: Salary is dependent on level of education.
3. $df = 12$
4. chi-square distribution with $df = 12$
5. $\text{test statistic} = 255.7704$
6. $p\text{-value} = 0$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Reject the null hypothesis; (iii) Reason for decision: $p\text{-value} < \alpha$; (iv) Conclusion: At the 5% significance level, there is sufficient evidence to conclude that salary and level of education are dependent.

Read the statement and decide whether it is true or false.

Q 11.4.9

The number of degrees of freedom for a test of independence is equal to the sample size minus one.

Q 11.4.10

The test for independence uses tables of observed and expected data values.

S 11.4.10

true

Q 11.4.11

The test to use when determining if the college or university a student chooses to attend is related to his or her socioeconomic status is a test for independence.

Q 11.4.12

In a test of independence, the expected number is equal to the row total multiplied by the column total divided by the total surveyed.

S 11.4.12

true

Q 11.4.13

An ice cream maker performs a nationwide survey about favorite flavors of ice cream in different geographic areas of the U.S. Based on Table, do the numbers suggest that geographic location is independent of favorite ice cream flavors? Test at the 5% significance level.

U.S. Region/Flavor | Strawberry | Chocolate | Vanilla | Rocky Road | Mint Chocolate Chip | Pistachio | Row Total
East | 8 | 31 | 27 | 8 | 15 | 7 | 96
Midwest | 10 | 32 | 22 | 11 | 15 | 6 | 96
West | 12 | 21 | 22 | 19 | 15 | 8 | 97
South | 15 | 28 | 30 | 8 | 15 | 6 | 102
Column Total | 45 | 112 | 101 | 46 | 60 | 27 | 391

Q 11.4.14

Table provides a recent survey of the youngest online entrepreneurs whose net worth is estimated at one million dollars or more. Their ages range from 17 to 30. Each cell in the table illustrates the number of entrepreneurs who correspond to the specific age group and their net worth. Are the ages and net worth independent? Perform a test of independence at the 5% significance level.

Age Group / Net Worth Value (in millions of US dollars) | 1–5 | 6–24 | ≥25 | Row Total
17–25 | 8 | 7 | 5 | 20
26–30 | 6 | 5 | 9 | 20
Column Total | 14 | 12 | 14 | 40

S 11.4.14

1. $H_{0}$: Age is independent of the youngest online entrepreneurs' net worth.
2. $H_{a}$: Age is dependent on the net worth of the youngest online entrepreneurs.
3. $df = 2$
4. chi-square distribution with $df = 2$
5. $\text{test statistic} = 1.76$
6. $p\text{-value} = 0.4144$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Do not reject the null hypothesis; (iii) Reason for decision: $p\text{-value} > \alpha$; (iv) Conclusion: At the 5% significance level, there is insufficient evidence to conclude that age and net worth for the youngest online entrepreneurs are dependent.

Q 11.4.15

A 2013 poll in California surveyed people about taxing sugar-sweetened beverages. The results are presented in Table, and are classified by ethnic group and response type. Are the poll responses independent of the participants' ethnic group?
Conduct a test of independence at the 5% significance level.

Opinion/Ethnicity | Asian-American | White/Non-Hispanic | African-American | Latino | Row Total
Against tax | 48 | 433 | 41 | 106 | 628
In favor of tax | 54 | 234 | 24 | 147 | 459
No opinion | 16 | 43 | 6 | 19 | 84
Column Total | 118 | 710 | 71 | 272 | 1171

11.5: Test for Homogeneity

For each word problem, use a solution sheet to solve the hypothesis test problem. Go to [link] for the chi-square solution sheet. Round expected frequency to two decimal places.

Q 11.5.1

A psychologist is interested in testing whether there is a difference in the distribution of personality types for business majors and social science majors. The results of the study are shown in Table. Conduct a test of homogeneity. Test at a 5% level of significance.

 | Open | Conscientious | Extrovert | Agreeable | Neurotic
Business | 41 | 52 | 46 | 61 | 58
Social Science | 72 | 75 | 63 | 80 | 65

S 11.5.1

1. $H_{0}$: The distribution for personality types is the same for both majors.
2. $H_{a}$: The distribution for personality types is not the same for both majors.
3. $df = 4$
4. chi-square distribution with $df = 4$
5. $\text{test statistic} = 3.01$
6. $p\text{-value} = 0.5568$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Do not reject the null hypothesis; (iii) Reason for decision: $p\text{-value} > \alpha$; (iv) Conclusion: There is insufficient evidence to conclude that the distribution of personality types is different for business and social science majors.

Q 11.5.2

Do men and women select different breakfasts? The breakfasts ordered by randomly selected men and women at a popular breakfast place are shown in Table. Conduct a test for homogeneity at a 5% level of significance.

 | French Toast | Pancakes | Waffles | Omelettes
Men | 47 | 35 | 28 | 53
Women | 65 | 59 | 55 | 60

Q 11.5.3

A fisherman is interested in whether the distribution of fish caught in Green Valley Lake is the same as the distribution of fish caught in Echo Lake. Of the 191 randomly selected fish caught in Green Valley Lake, 105 were rainbow trout, 27 were other trout, 35 were bass, and 24 were catfish. Of the 293 randomly selected fish caught in Echo Lake, 115 were rainbow trout, 58 were other trout, 67 were bass, and 53 were catfish. Perform a test for homogeneity at a 5% level of significance.

S 11.5.3

1. $H_{0}$: The distribution for fish caught is the same in Green Valley Lake and in Echo Lake.
2. $H_{a}$: The distribution for fish caught is not the same in Green Valley Lake and in Echo Lake.
3. $df = 3$
4. chi-square distribution with $df = 3$
5. $\text{test statistic} = 11.75$
6. $p\text{-value} = 0.0083$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Reject the null hypothesis; (iii) Reason for decision: $p\text{-value} < \alpha$; (iv) Conclusion: There is evidence to conclude that the distribution of fish caught is different in Green Valley Lake and in Echo Lake.

Q 11.5.4

In 2007, the United States had 1.5 million homeschooled students, according to the U.S. National Center for Education Statistics. In Table you can see that parents decide to homeschool their children for different reasons, and some reasons are ranked by parents as more important than others. According to the survey results shown in the table, is the distribution of applicable reasons the same as the distribution of the most important reason? Provide your assessment at the 5% significance level. Did you expect the result you obtained?
Reasons for Homeschooling | Applicable Reason (in thousands of respondents) | Most Important Reason (in thousands of respondents) | Row Total
Concern about the environment of other schools | 1,321 | 309 | 1,630
Dissatisfaction with academic instruction at other schools | 1,096 | 258 | 1,354
To provide religious or moral instruction | 1,257 | 540 | 1,797
Child has special needs, other than physical or mental | 315 | 55 | 370
Nontraditional approach to child's education | 984 | 99 | 1,083
Other reasons (e.g., finances, travel, family time, etc.) | 485 | 216 | 701
Column Total | 5,458 | 1,477 | 6,935

Q 11.5.5

When looking at energy consumption, we are often interested in detecting trends over time and how they correlate among different countries. The information in Table shows the average energy use (in units of kg of oil equivalent per capita) in the USA and the joint European Union countries (EU) for the six-year period 2005 to 2010. Do the energy use values in these two areas come from the same distribution? Perform the analysis at the 5% significance level.

Year | European Union | United States | Row Total
2010 | 3,413 | 7,164 | 10,577
2009 | 3,302 | 7,057 | 10,359
2008 | 3,505 | 7,488 | 10,993
2007 | 3,537 | 7,758 | 11,295
2006 | 3,595 | 7,697 | 11,292
2005 | 3,613 | 7,847 | 11,460
Column Total | 20,965 | 45,011 | 65,976

S 11.5.5

1. $H_{0}$: The distribution of average energy use in the USA is the same as in Europe between 2005 and 2010.
2. $H_{a}$: The distribution of average energy use in the USA is not the same as in Europe between 2005 and 2010.
3. $df = 5$
4. chi-square distribution with $df = 5$
5. $\text{test statistic} = 2.7434$
6. $p\text{-value} = 0.7395$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Do not reject the null hypothesis; (iii) Reason for decision: $p\text{-value} > \alpha$; (iv) Conclusion: At the 5% significance level, there is insufficient evidence to conclude that the average energy use values in the US and EU came from different distributions for the period from 2005 to 2010.

Q 11.5.6

The Insurance Institute for Highway Safety collects safety information about all types of cars every year, and publishes a report of Top Safety Picks among all cars, makes, and models. Table presents the number of Top Safety Picks in six car categories for the two years 2009 and 2013. Analyze the table data to conclude whether the distribution of cars that earned the Top Safety Picks safety award has remained the same between 2009 and 2013. Derive your results at the 5% significance level.

Year / Car Type | Small | Mid-Size | Large | Small SUV | Mid-Size SUV | Large SUV | Row Total
2009 | 12 | 22 | 10 | 10 | 27 | 6 | 87
2013 | 31 | 30 | 19 | 11 | 29 | 4 | 124
Column Total | 43 | 52 | 29 | 21 | 56 | 10 | 211

11.6: Comparison of the Chi-Square Tests

For each word problem, use a solution sheet to solve the hypothesis test problem. Go to [link] for the chi-square solution sheet. Round expected frequency to two decimal places.

Q 11.6.1

Is there a difference between the distribution of community college statistics students and the distribution of university statistics students in what technology they use on their homework? Of some randomly selected community college students, 43 used a computer, 102 used a calculator with built-in statistics functions, and 65 used a table from the textbook. Of some randomly selected university students, 28 used a computer, 33 used a calculator with built-in statistics functions, and 40 used a table from the textbook. Conduct an appropriate hypothesis test using a 0.05 level of significance.

S 11.6.1
1. $H_{0}$: The distribution for technology use is the same for community college students and university students.
2. $H_{a}$: The distribution for technology use is not the same for community college students and university students.
3. $df = 2$
4. chi-square distribution with $df = 2$
5. $\text{test statistic} = 7.05$
6. $p\text{-value} = 0.0294$
7. Check student's solution.
8. (i) $\alpha = 0.05$; (ii) Decision: Reject the null hypothesis; (iii) Reason for decision: $p\text{-value} < \alpha$; (iv) Conclusion: There is sufficient evidence to conclude that the distribution of technology use for statistics homework is not the same for statistics students at community colleges and at universities.

Read the statement and decide whether it is true or false.

Q 11.6.2

If $df = 2$, the chi-square distribution has a shape that reminds us of the exponential.

11.7: Test of a Single Variance

Use the following information to answer the next twelve exercises: Suppose an airline claims that its flights are consistently on time with an average delay of at most 15 minutes. It claims that the average delay is so consistent that the variance is no more than 150 minutes. Doubting the consistency part of the claim, a disgruntled traveler calculates the delays for his next 25 flights. The average delay for those 25 flights is 22 minutes with a standard deviation of 15 minutes.

Q 11.7.1

Is the traveler disputing the claim about the average or about the variance?

Q 11.7.2

A sample standard deviation of 15 minutes is the same as a sample variance of __________ minutes.

S 11.7.2

225

Q 11.7.3

Is this a right-tailed, left-tailed, or two-tailed test?

Q 11.7.4

$H_{0}$: __________

S 11.7.4

$H_{0}: \sigma^{2} \leq 150$

Q 11.7.5

$df =$ ________

Q 11.7.6

chi-square test statistic = ________

S 11.7.6

36

Q 11.7.7

$p\text{-value} =$ ________

Q 11.7.8

Graph the situation. Label and scale the horizontal axis. Mark the mean and test statistic. Shade the $p\text{-value}$.

S 11.7.8

Check student's solution.

Q 11.7.9

Let $\alpha = 0.05$. Decision: ________ Conclusion (write out in a complete sentence): ________

Q 11.7.10

How did you know to test the variance instead of the mean?

S 11.7.10

The claim is that the variance is no more than 150 minutes.

Q 11.7.11

If an additional test were done on the claim of the average delay, which distribution would you use?

Q 11.7.12

If an additional test were done on the claim of the average delay, but 45 flights were surveyed, which distribution would you use?

S 11.7.12

a Student's $t$- or normal distribution

For each word problem, use a solution sheet to solve the hypothesis test problem. Go to [link] for the chi-square solution sheet. Round expected frequency to two decimal places.

Q 11.7.13

A plant manager is concerned her equipment may need recalibrating. It seems that the actual weight of the 15 oz. cereal boxes it fills has been fluctuating. The standard deviation should be at most 0.5 oz. In order to determine if the machine needs to be recalibrated, 84 randomly selected boxes of cereal from the next day's production were weighed. The standard deviation of the 84 boxes was 0.54. Does the machine need to be recalibrated?

S 11.7.20

1. $H_{0}: \sigma^{2} = 25^{2}$
2. $H_{a}: \sigma^{2} > 25^{2}$
3. $df = n - 1 = 7$
4. test statistic: $\chi^{2} = \chi^{2}_{7} = \frac{(n-1)s^{2}}{25^{2}} = \frac{(8-1)(34.29)^{2}}{25^{2}} = 13.169$
5. $p\text{-value}: P(\chi^{2}_{7} > 13.169) = 1 - P(\chi^{2}_{7} \leq 13.169) = 0.0681$
6. (i) $\alpha = 0.05$; (ii) Decision: Do not reject the null hypothesis; (iii) Reason for decision: $p\text{-value} > \alpha$;
(iv) Conclusion: At the 5% level, there is insufficient evidence to conclude that the variance is more than 625.

Q 11.7.21

A company packages apples by weight. One of the weight grades is Class A apples. Class A apples have a mean weight of 150 g, and there is a maximum allowed weight tolerance of 5% above or below the mean for apples in the same consumer package. A batch of apples is selected to be included in a Class A apple package. Given the following apple weights of the batch, does the fruit comply with the Class A grade weight tolerance requirements? Conduct an appropriate hypothesis test

1. at the 5% significance level
2. at the 1% significance level

Weights in selected apple batch (in grams): 158; 167; 149; 169; 164; 139; 154; 150; 157; 171; 152; 161; 141; 166; 172;
12: Linear Regression and Correlation

Regression analysis is a statistical process for estimating the relationships among variables; it includes many techniques for modeling and analyzing several variables when the focus is on the relationship between a dependent variable and one or more independent variables.

• 12.1: Prelude to Linear Regression and Correlation
In this chapter, you will be studying the simplest form of regression, "linear regression" with one independent variable (x). This involves data that fits a line in two dimensions. You will also study correlation, which measures how strong the relationship is.
• 12.2: Linear Equations
Linear regression for two variables is based on a linear equation with one independent variable. The equation has the form y = a + bx, where a and b are constant numbers. The variable x is the independent variable, and y is the dependent variable. Typically, you choose a value to substitute for the independent variable and then solve for the dependent variable.
• 12.3: Scatter Plots
A scatter plot shows the direction of a relationship between the variables. A clear direction happens when there is either: high values of one variable occurring with high values of the other variable, or low values of one variable occurring with low values of the other variable; or high values of one variable occurring with low values of the other variable.
• 12.4: The Regression Equation
A regression line, or a line of best fit, can be drawn on a scatter plot and used to predict outcomes for the x and y variables in a given data set or sample data. There are several ways to find a regression line, but usually the least-squares regression line is used because it creates a uniform line. Residuals measure the distance from the actual value of y to the estimated value of y. The Sum of Squared Errors, when set to its minimum, calculates the points on the line of best fit.
• 12.5: Testing the Significance of the Correlation Coefficient
The correlation coefficient tells us about the strength and direction of the linear relationship between x and y. However, the reliability of the linear model also depends on how many observed data points are in the sample. We need to look at both the value of the correlation coefficient r and the sample size n, and perform a hypothesis test of the "significance of the correlation coefficient" to decide whether the linear relationship in the sample data is strong enough to use the linear model.
• 12.6: Prediction
After determining the presence of a strong correlation coefficient and calculating the line of best fit, you can use the least squares regression line to make predictions about your data. Predicting within the range of the observed x-values is called interpolation; predicting outside that range is called extrapolation.
• 12.7: Outliers
In some data sets, there are values (observed data points) called outliers. Outliers are observed data points that are far from the least squares line. They have large "errors", where the "error" or residual is the vertical distance from the line to the point.
• 12.8: Regression - Distance from School (Worksheet)
A statistics worksheet: The student will calculate and construct the line of best fit between two variables. The student will evaluate the relationship between two variables to determine if that relationship is significant.
• 12.9: Regression - Textbook Cost (Worksheet) A statistics Worksheet: The student will calculate and construct the line of best fit between two variables. The student will evaluate the relationship between two variables to determine if that relationship is significant.
• 12.10: Regression - Fuel Efficiency (Worksheet) A statistics Worksheet: The student will calculate and construct the line of best fit between two variables. The student will evaluate the relationship between two variables to determine if that relationship is significant.
• 12.E: Linear Regression and Correlation (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax.

12: Linear Regression and Correlation

CHAPTER OBJECTIVES
By the end of this chapter, the student should be able to:
• Discuss basic ideas of linear regression and correlation.
• Create and interpret a line of best fit.
• Calculate and interpret the correlation coefficient.
• Calculate and interpret outliers.

Professionals often want to know how two or more numeric variables are related. For example, is there a relationship between the grade on the second math exam a student takes and the grade on the final exam? If there is a relationship, what is the relationship and how strong is it? In another example, your income may be determined by your education, your profession, your years of experience, and your ability. The amount you pay a repair person for labor is often determined by an initial amount plus an hourly fee.

The type of data described in the examples is bivariate data — "bi" for two variables. In reality, statisticians use multivariate data, meaning many variables. In this chapter, you will be studying the simplest form of regression, "linear regression" with one independent variable (\(x\)). This involves data that fits a line in two dimensions. You will also study correlation, which measures how strong the relationship is.
Linear regression for two variables is based on a linear equation with one independent variable. The equation has the form:

$y = a + bx\nonumber$

where $a$ and $b$ are constant numbers. The variable $x$ is the independent variable, and $y$ is the dependent variable. Typically, you choose a value to substitute for the independent variable and then solve for the dependent variable.

Example $1$
The following examples are linear equations.
$y = 3 + 2x\nonumber$
$y = -0.01 + 1.2x\nonumber$

Exercise $1$
Is the following an example of a linear equation?
$y = -0.125 - 3.5x\nonumber$
Answer
yes

The graph of a linear equation of the form $y = a + bx$ is a straight line. Any line that is not vertical can be described by this equation.

Example $2$
Graph the equation $y = -1 + 2x$.

Exercise $2$
Is the following an example of a linear equation? Why or why not?
Answer
No, the graph is not a straight line; therefore, it is not a linear equation.

Example $3$
Aaron's Word Processing Service (AWPS) does word processing. The rate for services is \$32 per hour plus a \$31.50 one-time charge. The total cost to a customer depends on the number of hours it takes to complete the job. Find the equation that expresses the total cost in terms of the number of hours required to complete the job.
Answer
Let $x =$ the number of hours it takes to get the job done. Let $y =$ the total cost to the customer. Then $y = 31.50 + 32x$.

Summary
The most basic type of association is a linear association. This type of relationship can be defined algebraically by the equations used, numerically with actual or predicted data values, or graphically from a plotted curve. (Lines are classified as straight curves.) Algebraically, a linear equation typically takes the form $y = mx + b$, where $m$ and $b$ are constants, $x$ is the independent variable, and $y$ is the dependent variable. In a statistical context, a linear equation is written in the form $y = a + bx$, where $a$ and $b$ are the constants. This form is used to help readers distinguish the statistical context from the algebraic context.

In the equation $y = a + bx$, the constant $b$ that multiplies the $x$ variable ($b$ is called a coefficient) is called the slope. The constant $a$ is called the $y$-intercept. The slope of a line is a value that describes the rate of change between the independent and dependent variables. The slope tells us how the dependent variable ($y$) changes for every one unit increase in the independent ($x$) variable, on average. The $y$-intercept is used to describe the dependent variable when the independent variable equals zero.

Formula Review
$y = a + bx$, where $a$ is the $y$-intercept and $b$ is the slope. The variable $x$ is the independent variable and $y$ is the dependent variable.

12.02: Linear Equations

Use the following information to answer the next three exercises. A vacation resort rents SCUBA equipment to certified divers. The resort charges an up-front fee of \$25 and another fee of \$12.50 an hour.

Exercise 12.2.5
What are the dependent and independent variables?
Answer
dependent variable: fee amount; independent variable: time

Exercise 12.2.6
Find the equation that expresses the total fee in terms of the number of hours the equipment is rented.

Exercise 12.2.7
Graph the equation from Exercise 12.2.6.
Answer

Use the following information to answer the next two exercises. A credit card company charges \$10 when a payment is late, and \$5 a day each day the payment remains unpaid.
Exercise 12.2.8
Find the equation that expresses the total fee in terms of the number of days the payment is late.

Exercise 12.2.9
Graph the equation from Exercise 12.2.8.
Answer

Exercise 12.2.10
Is the equation \(y = 10 + 5x – 3x^{2}\) linear? Why or why not?

Exercise 12.2.11
Which of the following equations are linear?
1. \(y = 6x + 8\)
2. \(y + 7 = 3x\)
3. \(y – x = 8x^{2}\)
4. \(4y = 8\)
Answer
\(y = 6x + 8\), \(4y = 8\), and \(y + 7 = 3x\) are all linear equations.

Exercise 12.2.12
Does the graph show a linear equation? Why or why not?

Table contains real data for the first two decades of AIDS reporting.

Adults and Adolescents only, United States
Year  # AIDS cases diagnosed  # AIDS deaths
Pre-1981  91  29
1981  319  121
1982  1,170  453
1983  3,076  1,482
1984  6,240  3,466
1985  11,776  6,878
1986  19,032  11,987
1987  28,564  16,162
1988  35,447  20,868
1989  42,674  27,591
1990  48,634  31,335
1991  59,660  36,560
1992  78,530  41,055
1993  78,834  44,730
1994  71,874  49,095
1995  68,505  49,456
1996  59,347  38,510
1997  47,149  20,736
1998  38,393  19,005
1999  25,174  18,454
2000  25,522  17,347
2001  25,643  17,402
2002  26,464  16,371
Total  802,118  489,093

Exercise 12.2.13
Use the columns "year" and "# AIDS cases diagnosed." Why is "year" the independent variable and "# AIDS cases diagnosed" the dependent variable (instead of the reverse)?
Answer
The number of AIDS cases depends on the year. Therefore, year becomes the independent variable and the number of AIDS cases is the dependent variable.

Use the following information to answer the next two exercises. A specialty cleaning company charges an equipment fee and an hourly labor fee. A linear equation that expresses the total amount of the fee the company charges for each session is \(y = 50 + 100x\).

Exercise 12.2.14
What are the independent and dependent variables?

Exercise 12.2.15
What is the y-intercept and what is the slope? Interpret them using complete sentences.
Answer
The \(y\)-intercept is 50 (\(a = 50\)). At the start of the cleaning, the company charges a one-time fee of \$50 (this is when \(x = 0\)). The slope is 100 (\(b = 100\)). For each session, the company charges \$100 for each hour they clean.

Use the following information to answer the next three questions. Due to erosion, a river shoreline is losing several thousand pounds of soil each year. A linear equation that expresses the total amount of soil lost per year is \(y = 12,000x\).

Exercise 12.2.16
What are the independent and dependent variables?

Exercise 12.2.17
How many pounds of soil does the shoreline lose in a year?
Answer
12,000 pounds of soil

Exercise 12.2.18
What is the \(y\)-intercept? Interpret its meaning.

Use the following information to answer the next two exercises. The price of a single issue of stock can fluctuate throughout the day. A linear equation that represents the price of stock for Shipment Express is \(y = 15 – 1.5x\) where \(x\) is the number of hours passed in an eight-hour day of trading.

Exercise 12.2.19
What are the slope and y-intercept? Interpret their meaning.
Answer
The slope is -1.5 (\(b = -1.5\)). This means the stock is losing value at a rate of \$1.50 per hour. The \(y\)-intercept is \$15 (\(a = 15\)). This means the price of stock before the trading day was \$15.

Exercise 12.2.20
If you owned this stock, would you want a positive or negative slope? Why?
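The slope-and-intercept interpretations in these exercises translate directly into code. Below is a minimal Python sketch (my own illustration, not part of the text) that evaluates the cleaning company's fee equation \(y = 50 + 100x\) from the exercises above.

```python
# A minimal sketch (not from the text): evaluating the cleaning company's
# linear fee equation y = 50 + 100x for a few session lengths.
def total_fee(hours):
    # $50 one-time equipment fee (y-intercept) + $100 per hour (slope)
    return 50 + 100 * hours

for hours in [1, 2.5, 4]:
    print(f"{hours} hours -> ${total_fee(hours):.2f}")
# 1 hours -> $150.00, 2.5 hours -> $300.00, 4 hours -> $450.00
```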
Before we take up the discussion of linear regression and correlation, we need to examine a way to display the relation between two variables x and y. The most common and easiest way is a scatter plot. The following example illustrates a scatter plot.

Example \(1\)
In Europe and Asia, m-commerce is popular. M-commerce users have special mobile phones that work like electronic wallets as well as provide phone and Internet services. Users can do everything from paying for parking to buying a TV set or soda from a machine to banking to checking sports scores on the Internet. For the years 2000 through 2004, was there a relationship between the year and the number of m-commerce users? Construct a scatter plot. Let \(x =\) the year and let \(y =\) the number of m-commerce users, in millions.

Table \(1\): The number of m-commerce users (in millions) by year.
\(x\) (year)  \(y\) (# of users)
2000  0.5
2002  20.0
2003  33.0
2004  47.0

To create a scatter plot
1. Enter your \(X\) data into list L1 and your \(Y\) data into list L2.
2. Press 2nd STATPLOT ENTER to use Plot 1. On the input screen for PLOT 1, highlight On and press ENTER. (Make sure the other plots are OFF.)
3. For TYPE: highlight the very first icon, which is the scatter plot, and press ENTER.
4. For Xlist:, enter L1 ENTER and for Ylist: L2 ENTER.
5. For Mark: it does not matter which symbol you highlight, but the square is the easiest to see. Press ENTER.
6. Make sure there are no other equations that could be plotted. Press Y = and clear any equations out.
7. Press the ZOOM key and then the number 9 (for menu item "ZoomStat"); the calculator will fit the window to the data. You can press WINDOW to see the scaling of the axes.

Exercise \(1\)
Amelia plays basketball for her high school. She wants to improve to play at the college level. She notices that the number of points she scores in a game goes up in response to the number of hours she practices her jump shot each week. She records the following data:

\(X\) (hours practicing jump shot)  \(Y\) (points scored in a game)
5  15
7  22
9  28
10  31
11  33
12  36

Construct a scatter plot and state whether what Amelia thinks appears to be true.
Answer
Figure \(2\)
Yes, Amelia’s assumption appears to be correct. The number of points Amelia scores per game goes up when she practices her jump shot more.

A scatter plot shows the direction of a relationship between the variables. A clear direction happens when there is either:
• High values of one variable occurring with high values of the other variable or low values of one variable occurring with low values of the other variable.
• High values of one variable occurring with low values of the other variable.

You can determine the strength of the relationship by looking at the scatter plot and seeing how close the points are to a line, a power function, an exponential function, or to some other type of function. For a linear relationship there is an exception. Consider a scatter plot where all the points fall on a horizontal line providing a "perfect fit." The horizontal line would in fact show no relationship.

When you look at a scatter plot, you want to notice the overall pattern and any deviations from the pattern. The following scatterplot examples illustrate these concepts. In this chapter, we are interested in scatter plots that show a linear pattern. Linear patterns are quite common. The linear relationship is strong if the points are close to a straight line, except in the case of a horizontal line where there is no relationship.
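The calculator steps above can be mirrored in software. Here is a minimal Python sketch (my own illustration, assuming matplotlib is available) that draws the m-commerce scatter plot from Example 1.

```python
# A minimal sketch (not from the text): drawing the m-commerce scatter plot
# from Example 1 with matplotlib instead of a TI calculator.
import matplotlib.pyplot as plt

year = [2000, 2002, 2003, 2004]      # x: year
users = [0.5, 20.0, 33.0, 47.0]      # y: m-commerce users, in millions

plt.scatter(year, users)
plt.xlabel("Year")
plt.ylabel("Users (millions)")
plt.title("M-commerce users by year")
plt.show()
```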
If we think that the points show a linear relationship, we would like to draw a line on the scatter plot. This line can be calculated through a process called linear regression. However, we only calculate a regression line if one of the variables helps to explain or predict the other variable. If \(x\) is the independent variable and \(y\) the dependent variable, then we can use a regression line to predict \(y\) for a given value of \(x\).

Summary
Scatter plots are particularly helpful graphs when we want to see if there is a linear relationship among data points. They indicate both the direction of the relationship between the \(x\) variables and the \(y\) variables, and the strength of the relationship. We calculate the strength of the relationship between an independent variable and a dependent variable using linear regression.

12.03: Scatter Plots

Exercise \(1\)
Does the scatter plot appear linear? Strong or weak? Positive or negative?
Answer
The data appear to be linear with a strong, positive correlation.

Exercise \(3\)
Does the scatter plot appear linear? Strong or weak? Positive or negative?

Exercise \(4\)
Does the scatter plot appear linear? Strong or weak? Positive or negative?
Answer
The data appear to have no correlation.
Data rarely fit a straight line exactly. Usually, you must be satisfied with rough predictions. Typically, you have a set of data whose scatter plot appears to "fit" a straight line. This is called a Line of Best Fit or Least-Squares Line.

COLLABORATIVE EXERCISE
If you know a person's pinky (smallest) finger length, do you think you could predict that person's height? Collect data from your class (pinky finger length, in inches). The independent variable, $x$, is pinky finger length and the dependent variable, $y$, is height. For each set of data, plot the points on graph paper. Make your graph big enough and use a ruler. Then "by eye" draw a line that appears to "fit" the data. For your line, pick two convenient points and use them to find the slope of the line. Find the $y$-intercept of the line by extending your line so it crosses the $y$-axis. Using the slopes and the $y$-intercepts, write your equation of "best fit." Do you think everyone will have the same equation? Why or why not? According to your equation, what is the predicted height for a pinky length of 2.5 inches?

Example $1$
A random sample of 11 statistics students produced the following data, where $x$ is the third exam score out of 80, and $y$ is the final exam score out of 200. Can you predict the final exam score of a random student if you know the third exam score?

Table 1a: Scores on the final exam based on scores from the third exam.
$x$ (third exam score)  $y$ (final exam score)
65  175
67  133
71  185
71  163
66  126
75  198
67  153
70  163
71  159
69  151
69  159

Exercise $1$
SCUBA divers have maximum dive times they cannot exceed when going to different depths. The data in the table show different depths with the maximum dive times in minutes. Use your calculator to find the least squares regression line and predict the maximum dive time for 110 feet.

$X$ (depth in feet)  $Y$ (maximum dive time)
50  80
60  55
70  45
80  35
90  25
100  22

Answer
$\hat{y} = 127.24 – 1.11x$
At 110 feet, a diver could dive for only five minutes.

The third exam score, $x$, is the independent variable and the final exam score, $y$, is the dependent variable. We will plot a regression line that best "fits" the data. If each of you were to fit a line "by eye," you would draw different lines. We can use what is called a least-squares regression line to obtain the best fit line.

Consider the following diagram. Each point of data is of the form ($x, y$) and each point of the line of best fit using least-squares linear regression has the form ($x, \hat{y}$). The $\hat{y}$ is read "$y$ hat" and is the estimated value of $y$. It is the value of $y$ obtained using the regression line. It is not generally equal to $y$ from the data.

The term $y_{0} – \hat{y}_{0} = \varepsilon_{0}$ is called the "error" or residual. It is not an error in the sense of a mistake. The absolute value of a residual measures the vertical distance between the actual value of $y$ and the estimated value of $y$. In other words, it measures the vertical distance between the actual data point and the predicted point on the line. If the observed data point lies above the line, the residual is positive, and the line underestimates the actual data value for $y$. If the observed data point lies below the line, the residual is negative, and the line overestimates the actual data value for $y$. In the diagram, $y_{0} – \hat{y}_{0} = \varepsilon_{0}$ is the residual for the point shown. Here the point lies above the line and the residual is positive.
$\varepsilon =$ the Greek letter epsilon

For each data point, you can calculate the residuals or errors, $y_{i} - \hat{y}_{i} = \varepsilon_{i}$ for $i = 1, 2, 3, ..., 11$. Each $|\varepsilon|$ is a vertical distance.

For the example about the third exam scores and the final exam scores for the 11 statistics students, there are 11 data points. Therefore, there are 11 $\varepsilon$ values. If you square each $\varepsilon$ and add, you get

$(\varepsilon_{1})^{2} + (\varepsilon_{2})^{2} + \dotso + (\varepsilon_{11})^{2} = \sum^{11}_{i = 1} \varepsilon_{i}^{2} \label{SSE}$

Equation \ref{SSE} is called the Sum of Squared Errors (SSE). Using calculus, you can determine the values of $a$ and $b$ that make the SSE a minimum. When you make the SSE a minimum, you have determined the points that are on the line of best fit. It turns out that the line of best fit has the equation:

$\hat{y} = a + bx$

where
• $a = \bar{y} - b\bar{x}$ and
• $b = \dfrac{\sum(x - \bar{x})(y - \bar{y})}{\sum(x - \bar{x})^{2}}$.

The sample means of the $x$ values and the $y$ values are $\bar{x}$ and $\bar{y}$, respectively. The best fit line always passes through the point $(\bar{x}, \bar{y})$.

The slope $b$ can be written as $b = r\left(\dfrac{s_{y}}{s_{x}}\right)$ where $s_{y} =$ the standard deviation of the $y$ values and $s_{x} =$ the standard deviation of the $x$ values. $r$ is the correlation coefficient, which is discussed in the next section.

Least Squares Criterion for Best Fit
The process of fitting the best-fit line is called linear regression. The idea behind finding the best-fit line is based on the assumption that the data are scattered about a straight line. The criterion for the best-fit line is that the sum of the squared errors (SSE) is minimized, that is, made as small as possible. Any other line you might choose would have a higher SSE than the best-fit line. This best-fit line is called the least-squares regression line.

Note
Computer spreadsheets, statistical software, and many calculators can quickly calculate the best-fit line and create the graphs. The calculations tend to be tedious if done by hand. Instructions to use the TI-83, TI-83+, and TI-84+ calculators to find the best-fit line and create a scatterplot are shown at the end of this section.

THIRD EXAM vs FINAL EXAM EXAMPLE:
The graph of the line of best fit for the third-exam/final-exam example is as follows. The least squares regression line (best-fit line) for the third-exam/final-exam example has the equation:

$\hat{y} = -173.51 + 4.83x$

REMINDER
Remember, it is always important to plot a scatter diagram first. If the scatter plot indicates that there is a linear relationship between the variables, then it is reasonable to use a best fit line to make predictions for $y$ given $x$ within the domain of $x$-values in the sample data, but not necessarily for $x$-values outside that domain. You could use the line to predict the final exam score for a student who earned a grade of 73 on the third exam. You should NOT use the line to predict the final exam score for a student who earned a grade of 50 on the third exam, because 50 is not within the domain of the $x$-values in the sample data, which are between 65 and 75.

Understanding Slope
The slope of the line, $b$, describes how changes in the variables are related. It is important to interpret the slope of the line in the context of the situation represented by the data. You should be able to write a sentence interpreting the slope in plain English.
INTERPRETATION OF THE SLOPE: The slope of the best-fit line tells us how the dependent variable ($y$) changes for every one unit increase in the independent ($x$) variable, on average.

THIRD EXAM vs FINAL EXAM EXAMPLE
Slope: The slope of the line is $b = 4.83$.
Interpretation: For a one-point increase in the score on the third exam, the final exam score increases by 4.83 points, on average.

USING THE TI-83, 83+, 84, 84+ CALCULATOR
Using the Linear Regression T Test: LinRegTTest
1. In the STAT list editor, enter the $X$ data in list L1 and the Y data in list L2, paired so that the corresponding ($x,y$) values are next to each other in the lists. (If a particular pair of values is repeated, enter it as many times as it appears in the data.)
2. On the STAT TESTS menu, scroll down with the cursor to select the LinRegTTest. (Be careful to select LinRegTTest, as some calculators may also have a different item called LinRegTInt.)
3. On the LinRegTTest input screen enter: Xlist: L1 ; Ylist: L2 ; Freq: 1
4. On the next line, at the prompt $\beta$ or $\rho$, highlight "$\neq 0$" and press ENTER
5. Leave the line for "RegEq:" blank
6. Highlight Calculate and press ENTER.

The output screen contains a lot of information. For now we will focus on a few items from the output, and will return later to the other items. The second line says $y = a + bx$. Scroll down to find the values $a = -173.513$, and $b = 4.8273$; the equation of the best fit line is $\hat{y} = -173.51 + 4.83x$. The two items at the bottom are $r^{2} = 0.43969$ and $r = 0.663$. For now, just note where to find these values; we will discuss them in the next two sections.

Graphing the Scatterplot and Regression Line
1. We are assuming your $X$ data is already entered in list L1 and your $Y$ data is in list L2
2. Press 2nd STATPLOT ENTER to use Plot 1
3. On the input screen for PLOT 1, highlight On, and press ENTER
4. For TYPE: highlight the very first icon which is the scatterplot and press ENTER
5. Indicate Xlist: L1 and Ylist: L2
6. For Mark: it does not matter which symbol you highlight.
7. Press the ZOOM key and then the number 9 (for menu item "ZoomStat"); the calculator will fit the window to the data
8. To graph the best-fit line, press the "$Y =$" key and type the equation $-173.5 + 4.83X$ into equation Y1. (The $X$ key is immediately left of the STAT key). Press ZOOM 9 again to graph it.
9. Optional: If you want to change the viewing window, press the WINDOW key. Enter your desired window using Xmin, Xmax, Ymin, Ymax

Note
Another way to graph the line after you create a scatter plot is to use LinRegTTest.
1. Make sure you have done the scatter plot. Check it on your screen.
2. Go to LinRegTTest and enter the lists.
3. At RegEq: press VARS and arrow over to Y-VARS. Press 1 for 1:Function. Press 1 for 1:Y1. Then arrow down to Calculate and do the calculation for the line of best fit.
4. Press Y= (you will see the regression equation).
5. Press GRAPH. The line will be drawn.

The Correlation Coefficient $r$
Besides looking at the scatter plot and seeing that a line seems reasonable, how can you tell if the line is a good predictor? Use the correlation coefficient as another indicator (besides the scatterplot) of the strength of the relationship between $x$ and $y$. The correlation coefficient, $r$, developed by Karl Pearson in the early 1900s, is numerical and provides a measure of strength and direction of the linear association between the independent variable $x$ and the dependent variable $y$.
The correlation coefficient is calculated as

$r = \dfrac{n \sum(xy) - \left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[n \sum x^{2} - \left(\sum x\right)^{2}\right] \left[n \sum y^{2} - \left(\sum y\right)^{2}\right]}}$

where $n =$ the number of data points. If you suspect a linear relationship between $x$ and $y$, then $r$ can measure how strong the linear relationship is.

What the VALUE of $r$ tells us:
• The value of $r$ is always between –1 and +1: $-1 \leq r \leq 1$.
• The size of the correlation $r$ indicates the strength of the linear relationship between $x$ and $y$. Values of $r$ close to –1 or to +1 indicate a stronger linear relationship between $x$ and $y$.
• If $r = 0$ there is absolutely no linear relationship between $x$ and $y$ (no linear correlation).
• If $r = 1$, there is perfect positive correlation. If $r = -1$, there is perfect negative correlation. In both these cases, all of the original data points lie on a straight line. Of course, in the real world, this will not generally happen.

What the SIGN of $r$ tells us:
• A positive value of $r$ means that when $x$ increases, $y$ tends to increase and when $x$ decreases, $y$ tends to decrease (positive correlation).
• A negative value of $r$ means that when $x$ increases, $y$ tends to decrease and when $x$ decreases, $y$ tends to increase (negative correlation).
• The sign of $r$ is the same as the sign of the slope, $b$, of the best-fit line.

Note
Strong correlation does not suggest that $x$ causes $y$ or $y$ causes $x$. We say "correlation does not imply causation."

The formula for $r$ looks formidable. However, computer spreadsheets, statistical software, and many calculators can quickly calculate $r$. The correlation coefficient $r$ is the bottom item in the output screens for the LinRegTTest on the TI-83, TI-83+, or TI-84+ calculator (see previous section for instructions).

The Coefficient of Determination
The variable $r^{2}$ is called the coefficient of determination and is the square of the correlation coefficient, but is usually stated as a percent, rather than in decimal form. It has an interpretation in the context of the data:
• $r^{2}$, when expressed as a percent, represents the percent of variation in the dependent (predicted) variable $y$ that can be explained by variation in the independent (explanatory) variable $x$ using the regression (best-fit) line.
• $1 - r^{2}$, when expressed as a percentage, represents the percent of variation in $y$ that is NOT explained by variation in $x$ using the regression line. This can be seen as the scattering of the observed data points about the regression line.

Consider the third exam/final exam example introduced in the previous section
• The line of best fit is: $\hat{y} = -173.51 + 4.83x$
• The correlation coefficient is $r = 0.6631$
• The coefficient of determination is $r^{2} = 0.6631^{2} = 0.4397$
• Interpretation of $r^{2}$ in the context of this example:
• Approximately 44% of the variation (0.4397 is approximately 0.44) in the final-exam grades can be explained by the variation in the grades on the third exam, using the best-fit regression line.
• Therefore, approximately 56% of the variation ($1 - 0.44 = 0.56$) in the final exam grades can NOT be explained by the variation in the grades on the third exam, using the best-fit regression line. (This is seen as the scattering of the points about the line.)
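As a check on these formulas, here is a short Python sketch (my own illustration, not part of the text) that computes $a$, $b$, $r$, and $r^{2}$ for the third-exam/final-exam data directly from the formulas in this section. It should reproduce the values quoted above, up to rounding.

```python
# A sketch (not from the text) of the least-squares and correlation formulas,
# applied to the third-exam/final-exam data. Expected output, up to rounding:
# a = -173.51, b = 4.83, r = 0.6631, r^2 = 0.4397.
import math

x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]
n = len(x)

x_bar, y_bar = sum(x) / n, sum(y) / n

# b = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2), then a = y_bar - b*x_bar
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sum(
    (xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar

# r from the computational formula given in this section
sx, sy = sum(x), sum(y)
sxy = sum(xi * yi for xi, yi in zip(x, y))
sxx = sum(xi ** 2 for xi in x)
syy = sum(yi ** 2 for yi in y)
r = (n * sxy - sx * sy) / math.sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

print(round(a, 2), round(b, 2), round(r, 4), round(r ** 2, 4))
```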
Summary
A regression line, or a line of best fit, can be drawn on a scatter plot and used to predict outcomes for the $x$ and $y$ variables in a given data set or sample data. There are several ways to find a regression line, but usually the least-squares regression line is used because it creates a uniform line. Residuals, also called “errors,” measure the distance between the actual value of $y$ and the estimated value of $y$. The Sum of Squared Errors, when set to its minimum, calculates the points on the line of best fit. Regression lines can be used to predict values within the given set of data, but should not be used to make predictions for values outside the set of data.

The correlation coefficient $r$ measures the strength of the linear association between $x$ and $y$. The variable $r$ has to be between –1 and +1. When $r$ is positive, the $x$ and $y$ will tend to increase and decrease together. When $r$ is negative, $x$ will increase and $y$ will decrease, or the opposite, $x$ will decrease and $y$ will increase. The coefficient of determination $r^{2}$ is equal to the square of the correlation coefficient. When expressed as a percent, $r^{2}$ represents the percent of variation in the dependent variable $y$ that can be explained by variation in the independent variable $x$ using the regression line.

Glossary
Coefficient of Correlation
a measure developed by Karl Pearson (early 1900s) that gives the strength of association between the independent variable and the dependent variable; the formula is:
$r = \dfrac{n \sum xy - \left(\sum x\right) \left(\sum y\right)}{\sqrt{\left[n \sum x^{2} - \left(\sum x\right)^{2}\right] \left[n \sum y^{2} - \left(\sum y\right)^{2}\right]}}$
where $n$ is the number of data points. The coefficient cannot be more than 1 or less than –1. The closer the coefficient is to ±1, the stronger the evidence of a significant linear relationship between $x$ and $y$.

12.04: The Regression Equation

Use the following information to answer the next five exercises. A random sample of ten professional athletes produced the following data where $x$ is the number of endorsements the player has and $y$ is the amount of money made (in millions of dollars).

$x$  $y$    $x$  $y$
0  2    5  12
3  8    4  9
2  7    3  9
1  3    0  3
5  13    4  10

Exercise 12.4.2
Draw a scatter plot of the data.

Exercise 12.4.3
Use regression to find the equation for the line of best fit.
Answer
$\hat{y} = 2.23 + 1.99x$

Exercise 12.4.4
Draw the line of best fit on the scatter plot.

Exercise 12.4.5
What is the slope of the line of best fit? What does it represent?
Answer
The slope is 1.99 ($b = 1.99$). It means that for every endorsement deal a professional player gets, he gets an average of another \$1.99 million in pay each year.

Exercise 12.4.6
What is the $y$-intercept of the line of best fit? What does it represent?

Exercise 12.4.7
What does an $r$ value of zero mean?
Answer
It means that there is no linear correlation between the data sets.

Exercise 12.4.8
When $n = 2$ and $r = 1$, are the data significant? Explain.

Exercise 12.4.9
When $n = 100$ and $r = -0.89$, is there a significant correlation? Explain.
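For Exercise 12.4.3, the fit can be checked with a one-liner. The sketch below (my own illustration, assuming numpy is available) fits the endorsements data and should reproduce the answer $\hat{y} = 2.23 + 1.99x$, up to rounding.

```python
# A sketch (not from the text): checking Exercise 12.4.3 with numpy.
import numpy as np

x = [0, 3, 2, 1, 5, 5, 4, 3, 0, 4]       # endorsements
y = [2, 8, 7, 3, 13, 12, 9, 9, 3, 10]    # pay, millions of dollars

b, a = np.polyfit(x, y, 1)   # degree-1 fit returns [slope, intercept]
print(f"y-hat = {a:.2f} + {b:.2f}x")     # y-hat = 2.23 + 1.99x
```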
The correlation coefficient, $r$, tells us about the strength and direction of the linear relationship between $x$ and $y$. However, the reliability of the linear model also depends on how many observed data points are in the sample. We need to look at both the value of the correlation coefficient $r$ and the sample size $n$, together. We perform a hypothesis test of the "significance of the correlation coefficient" to decide whether the linear relationship in the sample data is strong enough to use to model the relationship in the population.

The sample data are used to compute $r$, the correlation coefficient for the sample. If we had data for the entire population, we could find the population correlation coefficient. But because we have only sample data, we cannot calculate the population correlation coefficient. The sample correlation coefficient, $r$, is our estimate of the unknown population correlation coefficient.
• The symbol for the population correlation coefficient is $\rho$, the Greek letter "rho."
• $\rho =$ population correlation coefficient (unknown)
• $r =$ sample correlation coefficient (known; calculated from sample data)

The hypothesis test lets us decide whether the value of the population correlation coefficient $\rho$ is "close to zero" or "significantly different from zero". We decide this based on the sample correlation coefficient $r$ and the sample size $n$.

If the test concludes that the correlation coefficient is significantly different from zero, we say that the correlation coefficient is "significant."
• Conclusion: There is sufficient evidence to conclude that there is a significant linear relationship between $x$ and $y$ because the correlation coefficient is significantly different from zero.
• What the conclusion means: There is a significant linear relationship between $x$ and $y$. We can use the regression line to model the linear relationship between $x$ and $y$ in the population.

If the test concludes that the correlation coefficient is not significantly different from zero (it is close to zero), we say that the correlation coefficient is "not significant".
• Conclusion: "There is insufficient evidence to conclude that there is a significant linear relationship between $x$ and $y$ because the correlation coefficient is not significantly different from zero."
• What the conclusion means: There is not a significant linear relationship between $x$ and $y$. Therefore, we CANNOT use the regression line to model a linear relationship between $x$ and $y$ in the population.

NOTE
• If $r$ is significant and the scatter plot shows a linear trend, the line can be used to predict the value of $y$ for values of $x$ that are within the domain of observed $x$ values.
• If $r$ is not significant OR if the scatter plot does not show a linear trend, the line should not be used for prediction.
• If $r$ is significant and if the scatter plot shows a linear trend, the line may NOT be appropriate or reliable for prediction OUTSIDE the domain of observed $x$ values in the data.

PERFORMING THE HYPOTHESIS TEST
• Null Hypothesis: $H_{0}: \rho = 0$
• Alternate Hypothesis: $H_{a}: \rho \neq 0$

WHAT THE HYPOTHESES MEAN IN WORDS:
• Null Hypothesis $H_{0}$: The population correlation coefficient IS NOT significantly different from zero. There IS NOT a significant linear relationship (correlation) between $x$ and $y$ in the population.
• Alternate Hypothesis $H_{a}$: The population correlation coefficient IS significantly DIFFERENT FROM zero.
There IS A SIGNIFICANT LINEAR RELATIONSHIP (correlation) between $x$ and $y$ in the population.

DRAWING A CONCLUSION: There are two methods of making the decision. The two methods are equivalent and give the same result.
• Method 1: Using the $p\text{-value}$
• Method 2: Using a table of critical values

In this chapter of this textbook, we will always use a significance level of 5%, $\alpha = 0.05$.

NOTE
Using the $p\text{-value}$ method, you could choose any appropriate significance level you want; you are not limited to using $\alpha = 0.05$. But the table of critical values provided in this textbook assumes that we are using a significance level of 5%, $\alpha = 0.05$. (If we wanted to use a different significance level than 5% with the critical value method, we would need different tables of critical values that are not provided in this textbook.)

METHOD 1: Using a $p\text{-value}$ to make a decision

Using the TI83, 83+, 84, 84+ CALCULATOR
To calculate the $p\text{-value}$ using LinRegTTEST: On the LinRegTTEST input screen, on the line prompt for $\beta$ or $\rho$, highlight "$\neq 0$". The output screen shows the $p\text{-value}$ on the line that reads "$p =$". (Most computer statistical software can calculate the $p\text{-value}$.)

If the $p\text{-value}$ is less than the significance level ($\alpha = 0.05$):
• Decision: Reject the null hypothesis.
• Conclusion: "There is sufficient evidence to conclude that there is a significant linear relationship between $x$ and $y$ because the correlation coefficient is significantly different from zero."

If the $p\text{-value}$ is NOT less than the significance level ($\alpha = 0.05$):
• Decision: DO NOT REJECT the null hypothesis.
• Conclusion: "There is insufficient evidence to conclude that there is a significant linear relationship between $x$ and $y$ because the correlation coefficient is NOT significantly different from zero."

Calculation Notes:
• You will use technology to calculate the $p\text{-value}$. The following describes the calculations to compute the test statistic and the $p\text{-value}$:
• The $p\text{-value}$ is calculated using a $t$-distribution with $n - 2$ degrees of freedom.
• The formula for the test statistic is $t = \frac{r\sqrt{n-2}}{\sqrt{1-r^{2}}}$. The value of the test statistic, $t$, is shown in the computer or calculator output along with the $p\text{-value}$. The test statistic $t$ has the same sign as the correlation coefficient $r$.
• The $p\text{-value}$ is the combined area in both tails.

An alternative way to calculate the $p\text{-value}$ ($p$) given by LinRegTTest is the command 2*tcdf(abs(t),10^99, n-2) in 2nd DISTR.

THIRD-EXAM vs FINAL-EXAM EXAMPLE: $p\text{-value}$ method
• Consider the third exam/final exam example.
• The line of best fit is: $\hat{y} = -173.51 + 4.83x$ with $r = 0.6631$ and there are $n = 11$ data points.
• Can the regression line be used for prediction? Given a third exam score ($x$ value), can we use the line to predict the final exam score (predicted $y$ value)?

$H_{0}: \rho = 0$
$H_{a}: \rho \neq 0$
$\alpha = 0.05$

• The $p\text{-value}$ is 0.026 (from LinRegTTest on your calculator or from computer software).
• The $p\text{-value}$, 0.026, is less than the significance level of $\alpha = 0.05$.
• Decision: Reject the Null Hypothesis $H_{0}$
• Conclusion: There is sufficient evidence to conclude that there is a significant linear relationship between the third exam score ($x$) and the final exam score ($y$) because the correlation coefficient is significantly different from zero.
Because $r$ is significant and the scatter plot shows a linear trend, the regression line can be used to predict final exam scores.

METHOD 2: Using a table of Critical Values to make a decision
The 95% Critical Values of the Sample Correlation Coefficient Table can be used to give you a good idea of whether the computed value of $r$ is significant or not. Compare $r$ to the appropriate critical value in the table. If $r$ is not between the positive and negative critical values, then the correlation coefficient is significant. If $r$ is significant, then you may want to use the line for prediction.

Example $1$
Suppose you computed $r = 0.801$ using $n = 10$ data points. $df = n - 2 = 10 - 2 = 8$. The critical values associated with $df = 8$ are $-0.632$ and $+0.632$. If $r <$ negative critical value or $r >$ positive critical value, then $r$ is significant. Since $r = 0.801$ and $0.801 > 0.632$, $r$ is significant and the line may be used for prediction. If you view this example on a number line, it will help you.

Exercise $1$
For a given line of best fit, you computed that $r = 0.6501$ using $n = 12$ data points and the critical value is 0.576. Can the line be used for prediction? Why or why not?
Answer
If the scatter plot looks linear then, yes, the line can be used for prediction, because $r >$ the positive critical value.

Example $2$
Suppose you computed $r = –0.624$ with 14 data points. $df = 14 – 2 = 12$. The critical values are $-0.532$ and $0.532$. Since $-0.624 < -0.532$, $r$ is significant and the line can be used for prediction.

Exercise $2$
For a given line of best fit, you compute that $r = 0.5204$ using $n = 9$ data points, and the critical value is $0.666$. Can the line be used for prediction? Why or why not?
Answer
No, the line cannot be used for prediction, because $r <$ the positive critical value.

Example $3$
Suppose you computed $r = 0.776$ and $n = 6$. $df = 6 - 2 = 4$. The critical values are $-0.811$ and $0.811$. Since $-0.811 < 0.776 < 0.811$, $r$ is not significant, and the line should not be used for prediction.

Exercise $3$
For a given line of best fit, you compute that $r = -0.7204$ using $n = 8$ data points, and the critical value is $0.707$. Can the line be used for prediction? Why or why not?
Answer
Yes, the line can be used for prediction, because $r <$ the negative critical value.

THIRD-EXAM vs FINAL-EXAM EXAMPLE: critical value method
Consider the third exam/final exam example. The line of best fit is: $\hat{y} = -173.51 + 4.83x$ with $r = 0.6631$ and there are $n = 11$ data points. Can the regression line be used for prediction? Given a third-exam score ($x$ value), can we use the line to predict the final exam score (predicted $y$ value)?
• $H_{0}: \rho = 0$
• $H_{a}: \rho \neq 0$
• $\alpha = 0.05$
• Use the "95% Critical Value" table for $r$ with $df = n - 2 = 11 - 2 = 9$.
• The critical values are $-0.602$ and $+0.602$
• Since $0.6631 > 0.602$, $r$ is significant.
• Decision: Reject the null hypothesis.
• Conclusion: There is sufficient evidence to conclude that there is a significant linear relationship between the third exam score ($x$) and the final exam score ($y$) because the correlation coefficient is significantly different from zero.
Because $r$ is significant and the scatter plot shows a linear trend, the regression line can be used to predict final exam scores.

Example $4$
Suppose you computed the following correlation coefficients.
Using the table at the end of the chapter, determine if $r$ is significant and the line of best fit associated with each $r$ can be used to predict a $y$ value. If it helps, draw a number line.
1. $r = –0.567$ and the sample size, $n$, is $19$. The $df = n - 2 = 17$. The critical value is $-0.456$. $-0.567 < -0.456$, so $r$ is significant.
2. $r = 0.708$ and the sample size, $n$, is $9$. The $df = n - 2 = 7$. The critical value is $0.666$. $0.708 > 0.666$, so $r$ is significant.
3. $r = 0.134$ and the sample size, $n$, is $14$. The $df = 14 - 2 = 12$. The critical value is $0.532$. $0.134$ is between $-0.532$ and $0.532$, so $r$ is not significant.
4. $r = 0$ and the sample size, $n$, is five. No matter what the $df$ is, $r = 0$ is between the two critical values, so $r$ is not significant.

Exercise $4$
For a given line of best fit, you compute that $r = 0$ using $n = 100$ data points. Can the line be used for prediction? Why or why not?
Answer
No, the line cannot be used for prediction no matter what the sample size is.

Assumptions in Testing the Significance of the Correlation Coefficient
Testing the significance of the correlation coefficient requires that certain assumptions about the data are satisfied. The premise of this test is that the data are a sample of observed points taken from a larger population. We have not examined the entire population because it is not possible or feasible to do so. We are examining the sample to draw a conclusion about whether the linear relationship that we see between $x$ and $y$ in the sample data provides strong enough evidence so that we can conclude that there is a linear relationship between $x$ and $y$ in the population.

The regression line equation that we calculate from the sample data gives the best-fit line for our particular sample. We want to use this best-fit line for the sample as an estimate of the best-fit line for the population. Examining the scatter plot and testing the significance of the correlation coefficient helps us determine if it is appropriate to do this.

The assumptions underlying the test of significance are:
• There is a linear relationship in the population that models the average value of $y$ for varying values of $x$. In other words, the expected value of $y$ for each particular value of $x$ lies on a straight line in the population. (We do not know the equation for the line for the population. Our regression line from the sample is our best estimate of this line in the population.)
• The $y$ values for any particular $x$ value are normally distributed about the line. This implies that there are more $y$ values scattered closer to the line than are scattered farther away. Assumption (1) implies that these normal distributions are centered on the line: the means of these normal distributions of $y$ values lie on the line.
• The standard deviations of the population $y$ values about the line are equal for each value of $x$. In other words, each of these normal distributions of $y$ values has the same shape and spread about the line.
• The residual errors are mutually independent (no pattern).
• The data are produced from a well-designed, random sample or randomized experiment.

Summary
Linear regression is a procedure for fitting a straight line of the form $\hat{y} = a + bx$ to data. The conditions for regression are:
• Linear In the population, there is a linear relationship that models the average value of $y$ for different values of $x$.
• Independent The residuals are assumed to be independent.
• Normal The $y$ values are distributed normally for any value of $x$.
• Equal variance The standard deviation of the $y$ values is equal for each $x$ value.
• Random The data are produced from a well-designed random sample or randomized experiment.

The slope $b$ and intercept $a$ of the least-squares line estimate the slope $\beta$ and intercept $\alpha$ of the population (true) regression line. To estimate the population standard deviation of $y$, $\sigma$, use the standard deviation of the residuals, $s$: $s = \sqrt{\frac{SSE}{n-2}}$.

The variable $\rho$ (rho) is the population correlation coefficient. To test the null hypothesis $H_{0}: \rho =$ hypothesized value, use a linear regression t-test. The most common null hypothesis is $H_{0}: \rho = 0$ which indicates there is no linear relationship between $x$ and $y$ in the population. The TI-83, 83+, 84, 84+ calculator function LinRegTTest can perform this test (STAT TESTS LinRegTTest).

Formula Review
Least Squares Line or Line of Best Fit: $\hat{y} = a + bx$ where $a = y\text{-intercept}$ and $b = \text{slope}$
Standard deviation of the residuals: $s = \sqrt{\frac{SSE}{n-2}}$ where $SSE = \text{sum of squared errors}$ and $n = \text{the number of data points}$

12.05: Testing the Significance of the Correlation Coefficient

Exercise $5$
When testing the significance of the correlation coefficient, what is the null hypothesis?

Exercise $6$
When testing the significance of the correlation coefficient, what is the alternative hypothesis?
Answer
$H_{a}: \rho \neq 0$

Exercise $7$
If the level of significance is 0.05 and the $p\text{-value}$ is $0.04$, what conclusion can you draw?
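Both decision methods from this section are easy to script. The sketch below (my own illustration; it assumes scipy is installed) computes the test statistic $t = r\sqrt{n-2}/\sqrt{1-r^{2}}$ and its two-tailed $p$-value for the third-exam example, and also applies the critical-value rule; it should reproduce $p \approx 0.026$.

```python
# A sketch (not from the text) of the two decision methods, applied to the
# third-exam example (r = 0.6631, n = 11). Expected: t ≈ 2.66, p ≈ 0.026.
import math
from scipy import stats

r, n = 0.6631, 11
df = n - 2

# Method 1: t statistic and two-tailed p-value from the t-distribution
t = r * math.sqrt(df) / math.sqrt(1 - r ** 2)
p_value = 2 * stats.t.sf(abs(t), df)
print(round(t, 3), round(p_value, 3), p_value < 0.05)   # reject H0: True

# Method 2: critical-value rule; 0.602 is the table value for df = 9
critical = 0.602
print(r < -critical or r > critical)   # True: r is significant
```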
Recall the third exam/final exam example. We examined the scatter plot and showed that the correlation coefficient is significant. We found the equation of the best-fit line for the final exam grade as a function of the grade on the third exam. We can now use the least-squares regression line for prediction.

Suppose you want to estimate, or predict, the mean final exam score of statistics students who received 73 on the third exam. The exam scores ($x$-values) range from 65 to 75. Since 73 is between the $x$-values 65 and 75, substitute $x = 73$ into the equation. Then:
$\hat{y} = -173.51 + 4.83(73) = 179.08\nonumber$
We predict that statistics students who earn a grade of 73 on the third exam will earn a grade of 179.08 on the final exam, on average.

Example $1$
Recall the third exam/final exam example.
1. What would you predict the final exam score to be for a student who scored a 66 on the third exam?
2. What would you predict the final exam score to be for a student who scored a 90 on the third exam?
Answer
a. 145.27
b. The $x$ values in the data are between 65 and 75. Ninety is outside of the domain of the observed $x$ values in the data (independent variable), so you cannot reliably predict the final exam score for this student. (Even though it is possible to enter 90 into the equation for $x$ and calculate a corresponding $y$ value, the $y$ value that you get will not be reliable.)

To really understand how unreliable the prediction can be outside of the observed $x$ values in the data, make the substitution $x = 90$ into the equation.
$\hat{y} = -173.51 + 4.83(90) = 261.19\nonumber$
The final-exam score is predicted to be 261.19. The largest the final-exam score can be is 200.

The process of predicting inside the range of observed $x$ values in the data is called interpolation. The process of predicting outside the range of observed $x$ values in the data is called extrapolation.

Exercise $1$
Data are collected on the relationship between the number of hours per week practicing a musical instrument and scores on a math test. The line of best fit is as follows:
$\hat{y} = 72.5 + 2.8x \nonumber$
What would you predict the score on a math test would be for a student who practices a musical instrument for five hours a week?
Answer
86.5

Summary
After determining the presence of a strong correlation coefficient and calculating the line of best fit, you can use the least squares regression line to make predictions about your data.

12.06: Prediction

Use the following information to answer the next two exercises. An electronics retailer used regression to find a simple model to predict sales growth in the first quarter of the new year (January through March). The model is good for 90 days, where $x$ is the day. The model can be written as follows:
$\hat{y} = 101.32 + 2.48x$ where $\hat{y}$ is in thousands of dollars.

Exercise 12.6.2
What would you predict the sales to be on day 60?
Answer
\$250,120

Exercise 12.6.3
What would you predict the sales to be on day 90?

Use the following information to answer the next three exercises. A landscaping company is hired to mow the grass for several large properties. The total area of the properties combined is 1,345 acres. The rate at which one person can mow is as follows:
$\hat{y} = 1350 - 1.2x$ where $x$ is the number of hours and $\hat{y}$ represents the number of acres left to mow.

Exercise 12.6.4
How many acres will be left to mow after 20 hours of work?
Answer
1,326 acres

Exercise 12.6.5
How many acres will be left to mow after 100 hours of work?

Exercise 12.6.7
How many hours will it take to mow all of the lawns? (When is $\hat{y} = 0$?)
Answer
1,125 hours, or when $x = 1,125$

Table contains real data for the first two decades of AIDS reporting.

Adults and Adolescents only, United States
Year  # AIDS cases diagnosed  # AIDS deaths
Pre-1981  91  29
1981  319  121
1982  1,170  453
1983  3,076  1,482
1984  6,240  3,466
1985  11,776  6,878
1986  19,032  11,987
1987  28,564  16,162
1988  35,447  20,868
1989  42,674  27,591
1990  48,634  31,335
1991  59,660  36,560
1992  78,530  41,055
1993  78,834  44,730
1994  71,874  49,095
1995  68,505  49,456
1996  59,347  38,510
1997  47,149  20,736
1998  38,393  19,005
1999  25,174  18,454
2000  25,522  17,347
2001  25,643  17,402
2002  26,464  16,371
Total  802,118  489,093

Exercise 12.6.8
Graph “year” versus “# AIDS cases diagnosed” (plot the scatter plot). Do not include pre-1981 data.

Exercise 12.6.9
Perform linear regression. What is the linear equation? Round to the nearest whole number.
Answer
Check student’s solution.

Exercise 12.6.10
Write the equations:
1. Linear equation: __________
2. $a =$ ________
3. $b =$ ________
4. $r =$ ________
5. $n =$ ________

Exercise 12.6.11
Solve.
1. When $x = 1985$, $\hat{y} =$ _____
2. When $x = 1990$, $\hat{y} =$ _____
3. When $x = 1970$, $\hat{y} =$ ______ Why doesn’t this answer make sense?
Answer
1. When $x = 1985$, $\hat{y} = 25,525$
2. When $x = 1990$, $\hat{y} = 34,275$
3. When $x = 1970$, $\hat{y} = –725$ Why doesn’t this answer make sense? The range of $x$ values was 1981 to 2002; the year 1970 is not in this range. The regression equation does not apply, because predicting for the year 1970 is extrapolation, which requires a different process. Also, a negative number does not make sense in this context, where we are predicting AIDS cases diagnosed.

Exercise 12.6.11
Does the line seem to fit the data? Why or why not?

Exercise 12.6.12
What does the correlation imply about the relationship between time (years) and the number of diagnosed AIDS cases reported in the U.S.?
Answer
The correlation is $r = 0.4526$. If $r$ is compared to the value in the 95% Critical Values of the Sample Correlation Coefficient Table, because $r > 0.423$, $r$ is significant, and you would think that the line could be used for prediction. But the scatter plot indicates otherwise.

Exercise 12.6.13
Plot the two given points on the following graph. Then, connect the two points to form the regression line. Obtain the graph on your calculator or computer.

Exercise 12.6.14
Write the equation: $\hat{y} =$ ____________
Answer
$\hat{y} = -3,448,225 + 1750x$

Exercise 12.6.15
Hand draw a smooth curve on the graph that shows the flow of the data.

Exercise 12.6.16
Does the line seem to fit the data? Why or why not?
Answer
There was an increase in AIDS cases diagnosed until 1993. From 1993 through 2002, the number of AIDS cases diagnosed declined each year. It is not appropriate to use a linear regression line to fit to the data.

Exercise 12.6.17
Do you think a linear fit is best? Why or why not?

Exercise 12.6.18
What does the correlation imply about the relationship between time (years) and the number of diagnosed AIDS cases reported in the U.S.?
Answer
Since there is no linear association between year and # of AIDS cases diagnosed, it is not appropriate to calculate a linear correlation coefficient. When there is a linear association and it is appropriate to calculate a correlation, we cannot say that one variable “causes” the other variable.
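The extrapolation warning in Exercise 12.6.11 can be made explicit in code. Below is a minimal Python sketch (my own illustration; the function name and domain bounds are mine) that evaluates a regression line only after checking whether the input lies inside the observed $x$-domain. Here it uses the third-exam line from the section prose.

```python
# A minimal sketch (not from the text): a prediction helper that flags
# extrapolation. Uses the third-exam line y-hat = -173.51 + 4.83x, whose
# observed x-domain is 65 to 75.
def predict(x, a=-173.51, b=4.83, x_min=65, x_max=75):
    y_hat = a + b * x
    if not (x_min <= x <= x_max):
        print(f"Warning: x = {x} is outside [{x_min}, {x_max}]; "
              "this is extrapolation and may be unreliable.")
    return y_hat

print(round(predict(73), 2))   # 179.08 (interpolation)
print(round(predict(90), 2))   # 261.19, printed after an extrapolation warning
```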
Exercise 12.6.19
Graph “year” vs. “# AIDS cases diagnosed.” Do not include pre-1981. Label both axes with words. Scale both axes.

Exercise 12.6.20
Enter your data into your calculator or computer. The pre-1981 data should not be included. Why is that so? Write the linear equation, rounding to four decimal places:
Answer
We don’t know if the pre-1981 data was collected from a single year. So we don’t have an accurate x value for this figure.
Regression equation: $\hat{y} \text{(#AIDS Cases)} = -3,448,225 + 1749.777 \text{(year)}$
Coefficients
Intercept  –3,448,225
$X$ Variable 1  1,749.777

Exercise 12.6.21
Calculate the following:
1. $a =$ _____
2. $b =$ _____
3. correlation = _____
4. $n =$ _____
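As a check on Exercise 12.6.20, the coefficients can be recomputed in a few lines. The sketch below (my own illustration, assuming numpy is available) fits year against cases for 1981 through 2002 and should reproduce the intercept and slope quoted above, up to rounding.

```python
# A sketch (not from the text): refitting the AIDS data (1981-2002) with
# numpy. Expected, up to rounding: intercept ≈ -3,448,225, slope ≈ 1,749.777.
import numpy as np

years = list(range(1981, 2003))
cases = [319, 1170, 3076, 6240, 11776, 19032, 28564, 35447, 42674, 48634,
         59660, 78530, 78834, 71874, 68505, 59347, 47149, 38393, 25174,
         25522, 25643, 26464]

b, a = np.polyfit(years, cases, 1)   # degree-1 fit returns [slope, intercept]
print(round(a), round(b, 3))
```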
In some data sets, there are values (observed data points) called outliers. Outliers are observed data points that are far from the least squares line. They have large "errors", where the "error" or residual is the vertical distance from the line to the point.

Outliers need to be examined closely. Sometimes, for some reason or another, they should not be included in the analysis of the data. It is possible that an outlier is a result of erroneous data. Other times, an outlier may hold valuable information about the population under study and should remain included in the data. The key is to examine carefully what causes a data point to be an outlier.

Besides outliers, a sample may contain one or a few points that are called influential points. Influential points are observed data points that are far from the other observed data points in the horizontal direction. These points may have a big effect on the slope of the regression line. To begin to identify an influential point, you can remove it from the data set and see if the slope of the regression line is changed significantly.

Computers and many calculators can be used to identify outliers from the data. Computer output for regression analysis will often identify both outliers and influential points so that you can examine them.

Identifying Outliers
We could guess at outliers by looking at a graph of the scatter plot and best fit-line. However, we would like some guideline as to how far away a point needs to be in order to be considered an outlier. As a rough rule of thumb, we can flag any point that is located further than two standard deviations above or below the best-fit line as an outlier. The standard deviation used is the standard deviation of the residuals or errors.

We can do this visually in the scatter plot by drawing an extra pair of lines that are two standard deviations above and below the best-fit line. Any data points that are outside this extra pair of lines are flagged as potential outliers. Or we can do this numerically by calculating each residual and comparing it to twice the standard deviation. On the TI-83, 83+, or 84+, the graphical approach is easier. The graphical procedure is shown first, followed by the numerical calculations. You would generally need to use only one of these methods.

Example $1$
In the third exam/final exam example, you can determine if there is an outlier or not. If there is an outlier, as an exercise, delete it and fit the remaining data to a new line. For this example, the new line ought to fit the remaining data better. This means the SSE should be smaller and the correlation coefficient ought to be closer to 1 or -1.
Answer
Graphical Identification of Outliers
With the TI-83, 83+, 84+ graphing calculators, it is easy to identify the outliers graphically and visually. If we were to measure the vertical distance from any data point to the corresponding point on the line of best fit and that distance were equal to 2s or more, then we would consider the data point to be "too far" from the line of best fit. We need to find and graph the lines that are two standard deviations below and above the regression line. Any points that are outside these two lines are outliers. We will call these lines Y2 and Y3.

As we did with the equation of the regression line and the correlation coefficient, we will use technology to calculate this standard deviation for us. Using the LinRegTTest with this data, scroll down through the output screens to find $s = 16.412$.
Line $Y2 = -173.5 + 4.83x - 2(16.4)$ and line $Y3 = -173.5 + 4.83x + 2(16.4)$, where $\hat{y} = -173.5 + 4.83x$ is the line of best fit. $Y2$ and $Y3$ have the same slope as the line of best fit.

Graph the scatterplot with the best fit line in equation $Y1$, then enter the two extra lines as $Y2$ and $Y3$ in the "$Y=$" equation editor and press ZOOM 9. You will find that the only data point that is not between lines $Y2$ and $Y3$ is the point $x = 65$, $y = 175$. On the calculator screen it is just barely outside these lines. The outlier is the student who had a grade of 65 on the third exam and 175 on the final exam; this point is further than two standard deviations away from the best-fit line.

Sometimes a point is so close to the lines used to flag outliers on the graph that it is difficult to tell if the point is between or outside the lines. On a computer, enlarging the graph may help; on a small calculator screen, zooming in may make the graph clearer. Note that when the graph does not give a clear enough picture, you can use the numerical comparisons to identify outliers.

Exercise $1$

Identify the potential outlier in the scatter plot. The standard deviation of the residuals or errors is approximately 8.6.

Answer

The outlier appears to be at (6, 58). The expected $y$ value on the line for the point (6, 58) is approximately 82. Fifty-eight is 24 units from 82. Twenty-four is more than two standard deviations ($2s = (2)(8.6) = 17.2$). So 58 is more than two standard deviations from 82, which makes $(6, 58)$ a potential outlier.

Numerical Identification of Outliers

In the table below, the first two columns are the third-exam and final-exam data. The third column shows the predicted $\hat{y}$ values calculated from the line of best fit: $\hat{y} = -173.5 + 4.83x$. The residuals, or errors, have been calculated in the fourth column of the table: observed $y$ value − predicted $y$ value $= y - \hat{y}$.

$s$ is the standard deviation of all the $y - \hat{y} = \varepsilon$ values where $n = \text{the total number of data points}$. If each residual is calculated and squared, and the results are added, we get the $SSE$. The standard deviation of the residuals is calculated from the $SSE$ as:

$s = \sqrt{\dfrac{SSE}{n-2}}\nonumber$

NOTE

We divide by ($n – 2$) because the regression model involves two estimates.

Rather than calculate the value of $s$ ourselves, we can find $s$ using the computer or calculator. For this example, the calculator function LinRegTTest found $s = 16.4$ as the standard deviation of the residuals: 35; –17; 16; –6; –19; 9; 3; –1; –10; –9; –1.

$x$   $y$   $\hat{y}$   $y - \hat{y}$
65   175   140   175 – 140 = 35
67   133   150   133 – 150 = –17
71   185   169   185 – 169 = 16
71   163   169   163 – 169 = –6
66   126   145   126 – 145 = –19
75   198   189   198 – 189 = 9
67   153   150   153 – 150 = 3
70   163   164   163 – 164 = –1
71   159   169   159 – 169 = –10
69   151   160   151 – 160 = –9
69   159   160   159 – 160 = –1

We are looking for all data points for which the residual is greater than $2s = 2(16.4) = 32.8$ or less than $-32.8$. Compare these values to the residuals in column four of the table. The only such data point is the student who had a grade of 65 on the third exam and 175 on the final exam; the residual for this student is 35.

How does the outlier affect the best fit line?

Numerically and graphically, we have identified the point (65, 175) as an outlier. We should re-examine the data for this point to see if there are any problems with the data. If there is an error, we should fix the error if possible, or delete the data.
If the data is correct, we would leave it in the data set. For this problem, we will suppose that we examined the data and found that this outlier data was an error. Therefore we will continue on and delete the outlier, so that we can explore how it affects the results, as a learning experience.

Compute a new best-fit line and correlation coefficient using the ten remaining points.

On the TI-83, TI-83+, TI-84+ calculators, delete the outlier from L1 and L2. Using the LinRegTTest, the new line of best fit and the correlation coefficient are:

$\hat{y} = -355.19 + 7.39x\nonumber$ and $r = 0.9121\nonumber$

The new line with $r = 0.9121$ shows a stronger correlation than the original ($r = 0.6631$) because $r = 0.9121$ is closer to one. This means that the new line is a better fit to the ten remaining data values. The line can better predict the final exam score given the third exam score.

Numerical Identification of Outliers: Calculating s and Finding Outliers Manually

If you do not have the function LinRegTTest, then you can identify the outlier in the first example by doing the following.

First, square each residual $|y - \hat{y}|$. The squares are $35^{2}$; $17^{2}$; $16^{2}$; $6^{2}$; $19^{2}$; $9^{2}$; $3^{2}$; $1^{2}$; $10^{2}$; $9^{2}$; $1^{2}$.

Then, add (sum) all the squared $|y - \hat{y}|$ terms using the formula

$\sum^{11}_{i = 1} \left(|y_{i} - \hat{y}_{i}|\right)^{2} = \sum^{11}_{i = 1} \varepsilon^{2}_{i}\nonumber$

(recall that $y_{i} - \hat{y}_{i} = \varepsilon_{i}$):

\begin{align*} SSE &= 35^{2} + 17^{2} + 16^{2} + 6^{2} + 19^{2} + 9^{2} + 3^{2} + 1^{2} + 10^{2} + 9^{2} + 1^{2} \\ &= 2440. \end{align*}

The result, $SSE$, is the Sum of Squared Errors.

Next, calculate $s$, the standard deviation of all the $y - \hat{y} = \varepsilon$ values where $n = \text{the total number of data points}$. The calculation is

$s = \sqrt{\dfrac{SSE}{n-2}}.\nonumber$

For the third exam/final exam problem:

$s = \sqrt{\dfrac{2440}{11 - 2}} = 16.47.\nonumber$

Next, multiply $s$ by $2$:

$(2)(16.47) = 32.94\nonumber$

$32.94$ is $2$ standard deviations away from the mean of the $y - \hat{y}$ values.

If we were to measure the vertical distance from any data point to the corresponding point on the line of best fit and that distance is at least $2s$, then we would consider the data point to be "too far" from the line of best fit. We call that point a potential outlier.

For the example, if any of the $|y - \hat{y}|$ values are at least 32.94, the corresponding ($x, y$) data point is a potential outlier. For the third exam/final exam problem, all the $|y - \hat{y}|$ values are less than 32.94 except for the first one, which is 35.

$35 > 32.94$. That is, $|y - \hat{y}| \geq (2)(s)$.

The point which corresponds to $|y - \hat{y}| = 35$ is $(65, 175)$. Therefore, the data point $(65,175)$ is a potential outlier. For this example, we will delete it. (Remember, we do not always delete an outlier.)

NOTE

When outliers are deleted, the researcher should either record that data was deleted, and why, or the researcher should provide results both with and without the deleted data. If data is erroneous and the correct values are known (e.g., student one actually scored a 70 instead of a 65), then this correction can be made to the data.

The next step is to compute a new best-fit line using the ten remaining points.
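For readers working in software rather than on a calculator, here is a minimal Python sketch (assuming the same eleven data pairs) that deletes the outlier and refits; it should reproduce the new line and correlation quoted next:

```python
import numpy as np
from scipy import stats

x = np.array([65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69])
y = np.array([175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159])

# Remove the identified outlier (65, 175) and refit the remaining ten points
keep = ~((x == 65) & (y == 175))
result = stats.linregress(x[keep], y[keep])

print(f"new line: y-hat = {result.intercept:.2f} + {result.slope:.2f}x")
print(f"new correlation: r = {result.rvalue:.4f}")
# Per the text, approximately y-hat = -355.19 + 7.39x with r = 0.9121
```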
The new line of best fit and the correlation coefficient are:

$\hat{y} = -355.19 + 7.39x\nonumber$ and $r = 0.9121\nonumber$

Example $2$

Using this new line of best fit (based on the remaining ten data points in the third exam/final exam example), what would a student who receives a 73 on the third exam expect to receive on the final exam? Is this the same as the prediction made using the original line?

Answer

Using the new line of best fit, $\hat{y} = -355.19 + 7.39(73) = 184.28$. A student who scored 73 points on the third exam would expect to earn 184 points on the final exam. The original line predicted $\hat{y} = -173.51 + 4.83(73) = 179.08$, so the prediction using the new line with the outlier eliminated differs from the original prediction.

Exercise $2$

The data points for a study that was done are as follows: (1, 5), (2, 7), (2, 6), (3, 9), (4, 12), (4, 13), (5, 18), (6, 19), (7, 12), and (7, 21). Remove the outlier and recalculate the line of best fit. Find the value of $\hat{y}$ when $x = 10$.

Answer

$\hat{y} = 1.04 + 2.96x$; $30.64$

Example $3$: The Consumer Price Index

The Consumer Price Index (CPI) measures the average change over time in the prices paid by urban consumers for consumer goods and services. The CPI affects nearly all Americans because of the many ways it is used. One of its biggest uses is as a measure of inflation. By providing information about price changes in the Nation's economy to government, business, and labor, the CPI helps them to make economic decisions. The President, Congress, and the Federal Reserve Board use the CPI's trends to formulate monetary and fiscal policies. In the following table, $x$ is the year and $y$ is the CPI.

$x$ (year)   $y$ (CPI)
1915   10.1
1926   17.7
1935   13.7
1940   14.7
1947   24.1
1952   26.5
1964   31.0
1969   36.7
1975   49.3
1979   72.6
1980   82.4
1986   109.6
1991   130.7
1999   166.6

1. Draw a scatterplot of the data.
2. Calculate the least squares line. Write the equation in the form $\hat{y} = a + bx$.
3. Draw the line on the scatterplot.
4. Find the correlation coefficient. Is it significant?
5. What is the average CPI for the year 1990?

Answer

1. See Figure.
2. $\hat{y} = -3204 + 1.662x$ is the equation of the line of best fit.
3. Check student's solution; the line is drawn on the scatterplot.
4. $r = 0.8694$. The number of data points is $n = 14$. Use the 95% Critical Values of the Sample Correlation Coefficient table at the end of Chapter 12. $n - 2 = 12$. The corresponding critical value is 0.532. Since $0.8694 > 0.532$, $r$ is significant.
5. $\hat{y} = -3204 + 1.662(1990) = 103.4 \text{ CPI}\nonumber$

Using the calculator LinRegTTest, we find that $s = 25.4$; graphing the lines $Y2 = -3204 + 1.662X - 2(25.4)$ and $Y3 = -3204 + 1.662X + 2(25.4)$ shows that no data values are outside those lines, identifying no outliers. (Note that the year 1999 was very close to the upper line, but still inside it.)

NOTE

In the example, notice the pattern of the points compared to the line. Although the correlation coefficient is significant, the pattern in the scatterplot indicates that a curve would be a more appropriate model to use than a line. In this example, a statistician should prefer to use other methods to fit a curve to this data, rather than model the data with the line we found. In addition to doing the calculations, it is always important to look at the scatterplot when deciding whether a linear model is appropriate.

If you are interested in seeing more years of data, visit the Bureau of Labor Statistics CPI website ftp://ftp.bls.gov/pub/special.requests/cpi/cpiai.txt; our data is taken from the column entitled "Annual Avg."
(third column from the right). For example, you could add more current years of data. Try adding the more recent years: 2004: $\text{CPI} = 188.9$; 2008: $\text{CPI} = 215.3$; 2011: $\text{CPI} = 224.9$. See how it affects the model. (Check: $\hat{y} = -4436 + 2.295x$; $r = 0.9018$. Is $r$ significant? Is the fit better with the addition of the new points?)

Exercise $3$

The following table shows economic development measured in per capita income PCINC.

Year   PCINC
1870   340
1880   499
1890   592
1900   757
1910   927
1920   1050
1930   1170
1940   1364
1950   1836
1960   2132

1. What are the independent and dependent variables?
2. Draw a scatter plot.
3. Use regression to find the line of best fit and the correlation coefficient.
4. Interpret the significance of the correlation coefficient.
5. Is there a linear relationship between the variables?
6. Find the coefficient of determination and interpret it.
7. What is the slope of the regression equation? What does it mean?
8. Use the line of best fit to estimate PCINC for 1900, for 2000.
9. Determine if there are any outliers.

Answer a

The independent variable ($x$) is the year and the dependent variable ($y$) is the per capita income.

Answer b

Check student's solution (scatter plot).

Answer c

$\hat{y} = 18.61x - 34574$; $r = 0.9732$

Answer d

At $df = 8$, the critical value is $0.632$. The $r$ value is significant because it is greater than the critical value.

Answer e

There does appear to be a linear relationship between the variables.

Answer f

The coefficient of determination is $0.947$, which means that 94.7% of the variation in PCINC is explained by the variation in the years.

Answer g

The slope of the regression equation is 18.61, and it means that per capita income increases by \$18.61 for each passing year.

Answer h

$\hat{y} = 785$ when the year is 1900, and $\hat{y} = 2,646$ when the year is 2000.

Answer i

There do not appear to be any outliers.

95% Critical Values of the Sample Correlation Coefficient Table

Degrees of Freedom: $n - 2$   Critical Values: (+ and –)
1   0.997
2   0.950
3   0.878
4   0.811
5   0.754
6   0.707
7   0.666
8   0.632
9   0.602
10   0.576
11   0.555
12   0.532
13   0.514
14   0.497
15   0.482
16   0.468
17   0.456
18   0.444
19   0.433
20   0.423
21   0.413
22   0.404
23   0.396
24   0.388
25   0.381
26   0.374
27   0.367
28   0.361
29   0.355
30   0.349
40   0.304
50   0.273
60   0.250
70   0.232
80   0.217
90   0.205
100   0.195

Summary

To determine if a point is an outlier, do one of the following:

1. Input the following equations into the TI 83, 83+, 84, 84+:

$y_{1} = a + bx\nonumber$
$y_{2} = a + bx + 2s\nonumber$
$y_{3} = a + bx - 2s\nonumber$

where $s$ is the standard deviation of the residuals. If any point is above $y_{2}$ or below $y_{3}$ then the point is considered to be an outlier.

2. Use the residuals and compare their absolute values to $2s$ where $s$ is the standard deviation of the residuals. If the absolute value of any residual is greater than or equal to $2s$, then the corresponding point is an outlier.

Note: The calculator function LinRegTTest (STAT TESTS LinRegTTest) calculates $s$.

Glossary

Outlier
an observation that does not fit the rest of the data

12.07: Outliers

Use the following information to answer the next four exercises. The scatter plot shows the relationship between hours spent studying and exam scores. The line shown is the calculated line of best fit. The correlation coefficient is $0.69$.

Exercise 12.7.4

Do there appear to be any outliers?

Answer

Yes, there appears to be an outlier at $(6, 58)$.

Exercise 12.7.5

A point is removed, and the line of best fit is recalculated.
The new correlation coefficient is 0.98. Does the point appear to have been an outlier? Why?

Exercise 12.7.6

What effect did the potential outlier have on the line of best fit?

Answer

The potential outlier flattened the slope of the line of best fit because it was below the data set. It made the line of best fit less accurate as a predictor for the data.

Exercise 12.7.7

Are you more or less confident in the predictive ability of the new line of best fit?

Exercise 12.7.8

The Sum of Squared Errors for a data set of 18 numbers is 49. What is the standard deviation?

Answer

$s = 1.75$

Exercise 12.7.9

The standard deviation of the residuals (calculated from the Sum of Squared Errors) for a data set is 9.8. What is the cutoff for the vertical distance that a point can be from the line of best fit to be considered an outlier?

Bring It Together

Exercise 12.7.10

The average number of people in a family that received welfare for various years is given in the table below.

Year   Welfare family size
1969   4.0
1973   3.6
1975   3.2
1979   3.0
1983   3.0
1988   3.0
1991   2.9

1. Using “year” as the independent variable and “welfare family size” as the dependent variable, draw a scatter plot of the data.
2. Calculate the least-squares line. Put the equation in the form of: $\hat{y} = a + bx$
3. Find the correlation coefficient. Is it significant?
4. Pick two years between 1969 and 1991 and find the estimated welfare family sizes.
5. Based on the data in the table, is there a linear relationship between the year and the average number of people in a welfare family?
6. Using the least-squares line, estimate the welfare family sizes for 1960 and 1995. Does the least-squares line give an accurate estimate for those years? Explain why or why not.
7. Are there any outliers in the data?
8. What is the estimated average welfare family size for 1986? Does the least squares line give an accurate estimate for that year? Explain why or why not.
9. What is the slope of the least squares (best-fit) line? Interpret the slope.

Exercise 12.7.11

The percent of female wage and salary workers who are paid hourly rates is given in the table below for the years 1979 to 1992.

Year   Percent of workers paid hourly rates
1979   61.2
1980   60.7
1981   61.3
1982   61.3
1983   61.8
1984   61.7
1985   61.8
1986   62.0
1987   62.7
1990   62.8
1992   62.9

1. Using “year” as the independent variable and “percent” as the dependent variable, draw a scatter plot of the data.
2. Does it appear from inspection that there is a relationship between the variables? Why or why not?
3. Calculate the least-squares line. Put the equation in the form of: $\hat{y} = a + bx$
4. Find the correlation coefficient. Is it significant?
5. Find the estimated percents for 1991 and 1988.
6. Based on the data, is there a linear relationship between the year and the percent of female wage and salary earners who are paid hourly rates?
7. Are there any outliers in the data?
8. What is the estimated percent for the year 2050? Does the least-squares line give an accurate estimate for that year? Explain why or why not.
9. What is the slope of the least-squares (best-fit) line? Interpret the slope.

Answer

1. Check student's solution.
2. yes
3. $\hat{y} = -266.8863 + 0.1656x$
4. $0.9448$; Yes
5. $62.8233$; $62.3265$
6. yes
7. yes; $(1987, 62.7)$
8. $72.5937$; no
9. $\text{slope} = 0.1656$. As the year increases by one, the percent of workers paid hourly rates tends to increase by 0.1656.

Use the following information to answer the next two exercises. The cost of a leading liquid laundry detergent in different sizes is given in the table below.
Size (ounces)   Cost (\$)   Cost per ounce
16   3.99
32   4.99
64   5.99
200   10.99

Exercise 12.7.12

1. Using “size” as the independent variable and “cost” as the dependent variable, draw a scatter plot.
2. Does it appear from inspection that there is a relationship between the variables? Why or why not?
3. Calculate the least-squares line. Put the equation in the form of: $\hat{y} = a + bx$
4. Find the correlation coefficient. Is it significant?
5. If the laundry detergent were sold in a 40-ounce size, find the estimated cost.
6. If the laundry detergent were sold in a 90-ounce size, find the estimated cost.
7. Does it appear that a line is the best way to fit the data? Why or why not?
8. Are there any outliers in the given data?
9. Is the least-squares line valid for predicting what a 300-ounce size of the laundry detergent would cost? Why or why not?
10. What is the slope of the least-squares (best-fit) line? Interpret the slope.

Exercise 12.7.13

1. Complete the table above for the cost per ounce of the different sizes.
2. Using “size” as the independent variable and “cost per ounce” as the dependent variable, draw a scatter plot of the data.
3. Does it appear from inspection that there is a relationship between the variables? Why or why not?
4. Calculate the least-squares line. Put the equation in the form of: $\hat{y} = a + bx$
5. Find the correlation coefficient. Is it significant?
6. If the laundry detergent were sold in a 40-ounce size, find the estimated cost per ounce.
7. If the laundry detergent were sold in a 90-ounce size, find the estimated cost per ounce.
8. Does it appear that a line is the best way to fit the data? Why or why not?
9. Are there any outliers in the data?
10. Is the least-squares line valid for predicting what a 300-ounce size of the laundry detergent would cost per ounce? Why or why not?
11. What is the slope of the least-squares (best-fit) line? Interpret the slope.

Answer

1.
Size (ounces)   Cost (\$)   cents/oz
16   3.99   24.94
32   4.99   15.59
64   5.99   9.36
200   10.99   5.50
2. Check student’s solution.
3. There is a linear relationship for the sizes 16 through 64, but that linear trend does not continue to the 200-oz size.
4. $\hat{y} = 20.2368 - 0.0819x$
5. $r = -0.8086$
6. 40-oz: 16.96 cents/oz
7. 90-oz: 12.87 cents/oz
8. The relationship is not linear; the least squares line is not appropriate.
9. no outliers
10. No, you would be extrapolating. The 300-oz size is outside the range of $x$.
11. $\text{slope} = -0.08194$; for each additional ounce in size, the cost per ounce decreases by 0.082 cents.

Exercise 12.7.14

According to a flyer by a Prudential Insurance Company representative, the costs of approximate probate fees and taxes for selected net taxable estates are as follows:

Net Taxable Estate (\$)   Approximate Probate Fees and Taxes (\$)
600,000   30,000
750,000   92,500
1,000,000   203,000
1,500,000   438,000
2,000,000   688,000
2,500,000   1,037,000
3,000,000   1,350,000

1. Decide which variable should be the independent variable and which should be the dependent variable.
2. Draw a scatter plot of the data.
3. Does it appear from inspection that there is a relationship between the variables? Why or why not?
4. Calculate the least-squares line. Put the equation in the form of: $\hat{y} = a + bx$.
5. Find the correlation coefficient. Is it significant?
6. Find the estimated total cost for a net taxable estate of \$1,000,000. Find the cost for \$2,500,000.
7. Does it appear that a line is the best way to fit the data? Why or why not?
8. Are there any outliers in the data?
9. Based on these results, what would be the probate fees and taxes for an estate that does not have any assets?
10. What is the slope of the least-squares (best-fit) line? Interpret the slope.

Exercise 12.7.15

The following are advertised sale prices of color televisions at Anderson’s.

Size (inches)   Sale Price (\$)
9   147
20   197
27   297
31   447
35   1177
40   2177
60   2497

1. Decide which variable should be the independent variable and which should be the dependent variable.
2. Draw a scatter plot of the data.
3. Does it appear from inspection that there is a relationship between the variables? Why or why not?
4. Calculate the least-squares line. Put the equation in the form of: $\hat{y} = a + bx$.
5. Find the correlation coefficient. Is it significant?
6. Find the estimated sale price for a 32 inch television. Find the cost for a 50 inch television.
7. Does it appear that a line is the best way to fit the data? Why or why not?
8. Are there any outliers in the data?
9. What is the slope of the least-squares (best-fit) line? Interpret the slope.

Answer

1. Size is $x$, the independent variable; price is $y$, the dependent variable.
2. Check student’s solution.
3. The relationship does not appear to be linear.
4. $\hat{y} = -745.252 + 54.75569x$
5. $r = 0.8944$; yes, it is significant.
6. 32-inch: \$1,006.93; 50-inch: \$1,992.53
7. No, the relationship does not appear to be linear. However, $r$ is significant.
8. yes, the 60-inch TV
9. For each additional inch, the price increases by \$54.76.

Exercise 12.7.16

The table below shows the average heights for American boys in 1990.

Age (years)   Height (cm)
birth   50.8
2   83.8
3   91.4
5   106.6
7   119.3
10   137.1
14   157.5

1. Decide which variable should be the independent variable and which should be the dependent variable.
2. Draw a scatter plot of the data.
3. Does it appear from inspection that there is a relationship between the variables? Why or why not?
4. Calculate the least-squares line. Put the equation in the form of: $\hat{y} = a + bx$.
5. Find the correlation coefficient. Is it significant?
6. Find the estimated average height for a one-year-old. Find the estimated average height for an eleven-year-old.
7. Does it appear that a line is the best way to fit the data? Why or why not?
8. Are there any outliers in the data?
9. Use the least squares line to estimate the average height for a sixty-two-year-old man. Do you think that your answer is reasonable? Why or why not?
10. What is the slope of the least-squares (best-fit) line? Interpret the slope.

Exercise 12.7.17

State   # letters in name   Year entered the Union   Rank for entering the Union   Area (square miles)
Alabama   7   1819   22   52,423
Colorado   8   1876   38   104,100
Hawaii   6   1959   50   10,932
Iowa   4   1846   29   56,276
Maryland   8   1788   7   12,407
Missouri   8   1821   24   69,709
New Jersey   9   1787   3   8,722
Ohio   4   1803   17   44,828
South Carolina   13   1788   8   32,008
Utah   4   1896   45   84,904
Wisconsin   9   1848   30   65,499

We are interested in whether there is a relationship between the ranking of a state and the area of the state.

1. What are the independent and dependent variables?
2. What do you think the scatter plot will look like? Make a scatter plot of the data.
3. Does it appear from inspection that there is a relationship between the variables? Why or why not?
4. Calculate the least-squares line. Put the equation in the form of: $\hat{y} = a + bx$.
5. Find the correlation coefficient. What does it imply about the significance of the relationship?
6. Find the estimated areas for Alabama and for Colorado. Are they close to the actual areas?
7. Use the two points in part f to plot the least-squares line on your graph from part b.
8. Does it appear that a line is the best way to fit the data? Why or why not?
9. Are there any outliers?
10. Use the least squares line to estimate the area of a new state that enters the Union. Can the least-squares line be used to predict it? Why or why not?
11. Delete “Hawaii” and substitute “Alaska” for it. Alaska is the forty-ninth state, with an area of 656,424 square miles.
12. Calculate the new least-squares line.
13. Find the estimated area for Alabama. Is it closer to the actual area with this new least-squares line or with the previous one that included Hawaii? Why do you think that’s the case?
14. Do you think that, in general, newer states are larger than the original states?

Answer

1. Let rank be the independent variable and area be the dependent variable.
2. Check student’s solution.
3. There appears to be a linear relationship, with one outlier.
4. $\hat{y} \text{ (area)} = 24177.06 + 1010.478x$
5. $r = 0.50047$; $r$ is not significant, so we cannot conclude that there is a relationship between the variables.
6. Alabama: 46407.576; Colorado: 62575.224. The Alabama estimate is closer than the Colorado estimate.
7. Check student’s solution.
8. If the outlier is removed, there is a linear relationship.
9. There is one outlier (Hawaii).
10. rank 51: 75711.4; no
11.
Alabama   7   1819   22   52,423
Colorado   8   1876   38   104,100
Alaska   6   1959   51   656,424
Iowa   4   1846   29   56,276
Maryland   8   1788   7   12,407
Missouri   8   1821   24   69,709
New Jersey   9   1787   3   8,722
Ohio   4   1803   17   44,828
South Carolina   13   1788   8   32,008
Utah   4   1896   45   84,904
Wisconsin   9   1848   30   65,499
12. $\hat{y} = -87065.3 + 7828.532x$
13. Alabama: 85,162.404; the prior estimate was closer. Alaska is an outlier.
14. yes, with the exception of Hawaii
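The remove-and-refit check for influential points described earlier in this section can be scripted. Here is a hedged Python sketch using the rank/area data from the table above; the first fit should reproduce the slope 1010.478 and $r = 0.50047$ reported in the answer, and the second shows how much the slope moves once Hawaii is dropped:

```python
import numpy as np
from scipy import stats

# Rank entering the Union (x) and area in square miles (y), from the table above
rank = np.array([22, 38, 50, 29, 7, 24, 3, 17, 8, 45, 30])
area = np.array([52423, 104100, 10932, 56276, 12407, 69709,
                 8722, 44828, 32008, 84904, 65499])

full = stats.linregress(rank, area)

# Drop Hawaii (rank 50) and refit to gauge its effect on the slope
keep = rank != 50
reduced = stats.linregress(rank[keep], area[keep])

print(f"with Hawaii:    slope = {full.slope:10.1f}, r = {full.rvalue:.4f}")
print(f"without Hawaii: slope = {reduced.slope:10.1f}, r = {reduced.rvalue:.4f}")
```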
Name: ______________________________
Section: _____________________________
Student ID#: __________________________

Work in groups on these problems. You should try to answer the questions without referring to your textbook. If you get stuck, try asking another group for help.

Student Learning Outcomes
• The student will calculate and construct the line of best fit between two variables.
• The student will evaluate the relationship between two variables to determine if that relationship is significant.

Collect the Data

Use eight members of your class for the sample. Collect bivariate data (distance an individual lives from school, the cost of supplies for the current term).

1. Complete the table.

Distance from school   Cost of supplies this term

2. Which variable should be the dependent variable and which should be the independent variable? Why?
3. Graph “distance” vs. “cost.” Plot the points on the graph. Label both axes with words. Scale both axes.

Analyze the Data

Enter your data into your calculator or computer. Write the linear equation, rounding to four decimal places.

1. Calculate the following:
   1. $a =$ ______
   2. $b =$ ______
   3. correlation = ______
   4. $n =$ ______
   5. equation: $\hat{y} =$ ______
   6. Is the correlation significant? Why or why not? (Answer in one to three complete sentences.)
2. Supply an answer for the following scenarios:
   1. For a person who lives eight miles from campus, predict the total cost of supplies this term:
   2. For a person who lives eighty miles from campus, predict the total cost of supplies this term:
3. Obtain the graph on your calculator or computer. Sketch the regression line.

Discussion Questions

1. Answer each question in complete sentences.
   1. Does the line seem to fit the data? Why?
   2. What does the correlation imply about the relationship between the distance and the cost?
2. Are there any outliers? If so, which point is an outlier?
3. Should the outlier, if it exists, be removed? Why or why not?

12.09: Regression - Textbook Cost (Worksheet)

Name: ______________________________
Section: _____________________________
Student ID#: __________________________

Work in groups on these problems. You should try to answer the questions without referring to your textbook. If you get stuck, try asking another group for help.

Student Learning Outcomes
• The student will calculate and construct the line of best fit between two variables.
• The student will evaluate the relationship between two variables to determine if that relationship is significant.

Collect the Data

Survey ten textbooks. Collect bivariate data (number of pages in a textbook, the cost of the textbook).

1. Complete the table.

Number of pages   Cost of textbook

2. Which variable should be the dependent variable and which should be the independent variable? Why?
3. Graph “pages” vs. “cost.” Plot the points on the graph in Analyze the Data. Label both axes with words. Scale both axes.

Analyze the Data

Enter your data into your calculator or computer. Write the linear equation, rounding to four decimal places.

1. Calculate the following:
   1. $a =$ ______
   2. $b =$ ______
   3. correlation = ______
   4. $n =$ ______
   5. equation: $\hat{y} =$ ______
   6. Is the correlation significant? Why or why not? (Answer in complete sentences.)
2. Supply an answer for the following scenarios:
   1. For a textbook with 400 pages, predict the cost.
   2. For a textbook with 600 pages, predict the cost.
3. Obtain the graph on your calculator or computer. Sketch the regression line.

Discussion Questions
1. Answer each question in complete sentences.
   1. Does the line seem to fit the data? Why?
   2. What does the correlation imply about the relationship between the number of pages and the cost?
2. Are there any outliers? If so, which point(s) is an outlier?
3. Should the outlier, if it exists, be removed? Why or why not?

12.10: Regression - Fuel Efficiency (Worksheet)

Name: ______________________________
Section: _____________________________
Student ID#: __________________________

Work in groups on these problems. You should try to answer the questions without referring to your textbook. If you get stuck, try asking another group for help.

Student Learning Outcomes
• The student will calculate and construct the line of best fit between two variables.
• The student will evaluate the relationship between two variables to determine if that relationship is significant.

Collect the Data

Use the most recent April issue of Consumer Reports. It will give the total fuel efficiency (in miles per gallon) and weight (in pounds) of new model cars with automatic transmissions. We will use this data to determine the relationship, if any, between the fuel efficiency of a car and its weight.

1. Using your random number generator, randomly select 20 cars from the list and record their weights and fuel efficiency in the table below.

Weight   Fuel Efficiency

2. Which variable should be the dependent variable and which should be the independent variable? Why?
3. By hand, do a scatterplot of “weight” vs. “fuel efficiency”. Plot the points on graph paper. Label both axes with words. Scale both axes accurately.

Analyze the Data

Enter your data into your calculator or computer. Write the linear equation, rounding to 4 decimal places.

1. Calculate the following:
   1. $a =$ ______
   2. $b =$ ______
   3. correlation = ______
   4. $n =$ ______
   5. equation: $\hat{y} =$ ______
2. Obtain the graph of the regression line on your calculator. Sketch the regression line on the same axes as your scatter plot.

Discussion Questions

1. Is the correlation significant? Explain how you determined this in complete sentences.
2. Is the relationship a positive one or a negative one? Explain how you can tell and what this means in terms of weight and fuel efficiency.
3. In one or two complete sentences, what is the practical interpretation of the slope of the least squares line in terms of fuel efficiency and weight?
4. For a car that weighs 4,000 pounds, predict its fuel efficiency. Include units.
5. Can we predict the fuel efficiency of a car that weighs 10,000 pounds using the least squares line? Explain why or why not.
6. Answer each question in complete sentences.
   1. Does the line seem to fit the data? Why or why not?
   2. What does the correlation imply about the relationship between fuel efficiency and weight of a car? Is this what you expected?
7. Are there any outliers? If so, which point is an outlier?
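If your group enters its worksheet data on a computer instead of a TI calculator, a short Python sketch like the one below produces the slope, intercept, correlation, and a prediction. The arrays hold made-up placeholder values (hypothetical weights and mileages); substitute the pairs your group actually collects:

```python
import numpy as np
from scipy import stats

# Placeholder worksheet data: replace with your group's measurements,
# e.g. (weight in pounds, fuel efficiency in mpg)
x = np.array([2500, 2800, 3000, 3200, 3500, 3800, 4000, 4200])
y = np.array([34, 31, 30, 28, 26, 24, 23, 21])

fit = stats.linregress(x, y)
print(f"a = {fit.intercept:.4f}, b = {fit.slope:.4f}, "
      f"r = {fit.rvalue:.4f}, n = {len(x)}")
print(f"equation: y-hat = {fit.intercept:.4f} + {fit.slope:.4f}x")

# Example prediction for a 4,000-pound car (hypothetical data above)
print(f"predicted mpg at 4,000 lb: {fit.intercept + fit.slope * 4000:.1f}")
```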
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax.

12.2: Linear Equations

Q 12.2.1

For each of the following situations, state the independent variable and the dependent variable.

1. A study is done to determine if elderly drivers are involved in more motor vehicle fatalities than other drivers. The number of fatalities per 100,000 drivers is compared to the age of drivers.
2. A study is done to determine if the weekly grocery bill changes based on the number of family members.
3. Insurance companies base life insurance premiums partially on the age of the applicant.
4. Utility bills vary according to power consumption.
5. A study is done to determine if a higher education reduces the crime rate in a population.

S 12.2.1

1. independent variable: age; dependent variable: fatalities
2. independent variable: # of family members; dependent variable: grocery bill
3. independent variable: age of applicant; dependent variable: insurance premium
4. independent variable: power consumption; dependent variable: utility bill
5. independent variable: higher education (years); dependent variable: crime rates

Q 12.2.2

Piece-rate systems are widely debated incentive payment plans. In a recent study of loan officer effectiveness, the following piece-rate system was examined:

% of goal reached   Incentive
< 80   n/a
80   \$4,000 with an additional \$125 added per percentage point from 81–99%
100   \$6,500 with an additional \$125 added per percentage point from 101–119%
120   \$9,500 with an additional \$125 added per percentage point starting at 121%

If a loan officer makes 95% of his or her goal, write the linear function that applies based on the incentive plan table. In context, explain the y-intercept and slope.

12.3: Scatter Plots

Q 12.3.1

The Gross Domestic Product Purchasing Power Parity is an indication of a country’s currency value compared to another country. The table below shows the GDP PPP of Cuba as compared to US dollars. Construct a scatter plot of the data.

Year   Cuba’s PPP
1999   1,700
2000   1,700
2002   2,300
2003   2,900
2004   3,000
2005   3,500
2006   4,000
2007   11,000
2008   9,500
2009   9,700
2010   9,900

S 12.3.1

Check student’s solution.

Q 12.3.2

The following table shows the poverty rates and cell phone usage in the United States. Construct a scatter plot of the data.

Year   Poverty Rate   Cellular Usage per Capita
2003   12.7   54.67
2005   12.6   74.19
2007   12   84.86
2009   12   90.82

Q 12.3.3

Does the higher cost of tuition translate into higher-paying jobs? The table lists the top ten colleges based on mid-career salary and the associated yearly tuition costs. Construct a scatter plot of the data.

School   Mid-Career Salary (in thousands)   Yearly Tuition
Princeton   137   28,540
Harvey Mudd   135   40,133
CalTech   127   39,900
US Naval Academy   122   0
West Point   120   0
MIT   118   42,050
Lehigh University   118   43,220
NYU-Poly   117   39,565
Babson College   117   40,400
Stanford   114   54,506

S 12.3.3

For graph: check student’s solution. Note that tuition is the independent variable and salary is the dependent variable.

Q 12.3.4

If the level of significance is 0.05 and the $p\text{-value}$ is 0.06, what conclusion can you draw?

Q 12.3.5

If there are 15 data points in a set of data, what is the number of degrees of freedom?

S 12.3.5

13

12.4: The Regression Equation

Q 12.4.1

What is the process through which we can calculate a line that goes through a scatter plot with a linear pattern?

Q 12.4.2

Explain what it means when a correlation has an $r^{2}$ of 0.72.
S 12.4.2

It means that 72% of the variation in the dependent variable ($y$) can be explained by the variation in the independent variable ($x$).

Q 12.4.3

Can a coefficient of determination be negative? Why or why not?

12.5: Testing the Significance of the Correlation Coefficient

Q 12.5.1

If the level of significance is 0.05 and the $p\text{-value}$ is $0.06$, what conclusion can you draw?

S 12.5.1

We do not reject the null hypothesis. There is not sufficient evidence to conclude that there is a significant linear relationship between $x$ and $y$ because the correlation coefficient is not significantly different from zero.

Q 12.5.2

If there are 15 data points in a set of data, what is the number of degrees of freedom?

12.6: Prediction

Q 12.6.1

Recently, the annual number of driver deaths per 100,000 for the selected age groups was as follows:

Age   Number of Driver Deaths per 100,000
17.5   38
22   36
29.5   24
44.5   20
64.5   18
80   28

1. For each age group, pick the midpoint of the interval for the x value. (For the 75+ group, use 80.)
2. Using “ages” as the independent variable and “Number of driver deaths per 100,000” as the dependent variable, make a scatter plot of the data.
3. Calculate the least squares (best-fit) line. Put the equation in the form of: $\hat{y} = a + bx$
4. Find the correlation coefficient. Is it significant?
5. Predict the number of deaths for ages 40 and 60.
6. Based on the given data, is there a linear relationship between age of a driver and driver fatality rate?
7. What is the slope of the least squares (best-fit) line? Interpret the slope.

S 12.6.1

1.
Age   Number of Driver Deaths per 100,000
16–19   38
20–24   36
25–34   24
35–54   20
55–74   18
75+   28
2. Check student’s solution.
3. $\hat{y} = 35.5818045 - 0.19182491x$
4. $r = -0.57874$. For $df = 4$ and $\alpha = 0.05$, the LinRegTTest gives $p\text{-value} = 0.2288$, so we do not reject the null hypothesis; there is not a significant linear relationship between deaths and age. Using the table of critical values for the correlation coefficient, with $df = 4$, the critical value is 0.811. The correlation coefficient $r = -0.57874$ is not less than –0.811, so we do not reject the null hypothesis.
5. If age = 40, $\hat{y}\text{ (deaths)} = 35.5818045 - 0.19182491(40) = 27.9$; if age = 60, $\hat{y}\text{ (deaths)} = 35.5818045 - 0.19182491(60) = 24.1$.
6. For the entire dataset, there is a linear relationship for the ages up to age 74. The oldest age group shows an increase in deaths from the prior group, which is not consistent with the younger ages.
7. $\text{slope} = -0.19182491$

Q 12.6.2

The table below shows the life expectancy for an individual born in the United States in certain years.

Year of Birth   Life Expectancy
1930   59.7
1940   62.9
1950   70.2
1965   69.7
1973   71.4
1982   74.5
1987   75
1992   75.7
2010   78.7

1. Decide which variable should be the independent variable and which should be the dependent variable.
2. Draw a scatter plot of the ordered pairs.
3. Calculate the least squares line. Put the equation in the form of: $\hat{y} = a + bx$
4. Find the correlation coefficient. Is it significant?
5. Find the estimated life expectancy for an individual born in 1950 and for one born in 1982.
6. Why aren’t the answers to part e the same as the values in the table that correspond to those years?
7. Use the two points in part e to plot the least squares line on your graph from part b.
8. Based on the data, is there a linear relationship between the year of birth and life expectancy?
9. Are there any outliers in the data?
10. Using the least squares line, find the estimated life expectancy for an individual born in 1850. Does the least squares line give an accurate estimate for that year? Explain why or why not.
11. What is the slope of the least-squares (best-fit) line? Interpret the slope.

Q 12.6.3

The maximum discount value of the Entertainment® card for the “Fine Dining” section, Edition ten, for various pages is given in Table.

Q 12.6.4

The table below gives the gold medal times for every other Summer Olympics for the women’s 100-meter freestyle (swimming).

Year   Time (seconds)
1912   82.2
1924   72.4
1932   66.8
1952   66.8
1960   61.2
1968   60.0
1976   55.65
1984   55.92
1992   54.64
2000   53.8
2008   53.1

1. Decide which variable should be the independent variable and which should be the dependent variable.
2. Draw a scatter plot of the data.
3. Does it appear from inspection that there is a relationship between the variables? Why or why not?
4. Calculate the least squares line. Put the equation in the form of: $\hat{y} = a + bx$.
5. Find the correlation coefficient. Is the decrease in times significant?
6. Find the estimated gold medal time for 1932. Find the estimated time for 1984.
7. Why are the answers from part f different from the chart values?
8. Does it appear that a line is the best way to fit the data? Why or why not?
9. Use the least-squares line to estimate the gold medal time for the next Summer Olympics. Do you think that your answer is reasonable? Why or why not?

Q 12.6.5

State   # letters in name   Year entered the Union   Rank for entering the Union   Area (square miles)
Alabama   7   1819   22   52,423
Colorado   8   1876   38   104,100
Hawaii   6   1959   50   10,932
Iowa   4   1846   29   56,276
Maryland   8   1788   7   12,407
Missouri   8   1821   24   69,709
New Jersey   9   1787   3   8,722
Ohio   4   1803   17   44,828
South Carolina   13   1788   8   32,008
Utah   4   1896   45   84,904
Wisconsin   9   1848   30   65,499

We are interested in whether or not the number of letters in a state name depends upon the year the state entered the Union.

1. Decide which variable should be the independent variable and which should be the dependent variable.
2. Draw a scatter plot of the data.
3. Does it appear from inspection that there is a relationship between the variables? Why or why not?
4. Calculate the least-squares line. Put the equation in the form of: $\hat{y} = a + bx$.
5. Find the correlation coefficient. What does it imply about the significance of the relationship?
6. Find the estimated number of letters (to the nearest integer) a state would have if it entered the Union in 1900. Find the estimated number of letters a state would have if it entered the Union in 1940.
7. Does it appear that a line is the best way to fit the data? Why or why not?
8. Use the least-squares line to estimate the number of letters a new state that enters the Union this year would have. Can the least squares line be used to predict it? Why or why not?

S 12.6.5

1. Year is the independent or $x$ variable; the number of letters is the dependent or $y$ variable.
2. Check student’s solution.
3. no
4. $\hat{y} = 47.03 - 0.0216x$
5. $-0.4280$
6. 6; 5
7. No, the relationship does not appear to be linear; the correlation is not significant.
8. current year, 2013: 3.55 or four letters; this is not an appropriate use of the least squares line. It is extrapolation.

12.7: Outliers

Q 12.7.1

The height (sidewalk to roof) of notable tall buildings in America is compared to the number of stories of the building (beginning at street level).
Height (in feet)   Stories
1,050   57
428   28
362   26
529   40
790   60
401   22
380   38
1,454   110
1,127   100
700   46

1. Using “stories” as the independent variable and “height” as the dependent variable, make a scatter plot of the data.
2. Does it appear from inspection that there is a relationship between the variables?
3. Calculate the least squares line. Put the equation in the form of: $\hat{y} = a + bx$
4. Find the correlation coefficient. Is it significant?
5. Find the estimated heights for 32 stories and for 94 stories.
6. Based on the data in the table, is there a linear relationship between the number of stories in tall buildings and the height of the buildings?
7. Are there any outliers in the data? If so, which point(s)?
8. What is the estimated height of a building with six stories? Does the least squares line give an accurate estimate of height? Explain why or why not.
9. Based on the least squares line, adding an extra story is predicted to add about how many feet to a building?
10. What is the slope of the least squares (best-fit) line? Interpret the slope.

Q 12.7.2

Ornithologists, scientists who study birds, tag sparrow hawks in 13 different colonies to study their population. They gather data for the percent of new sparrow hawks in each colony and the percent of those that have returned from migration.

Percent return: 74; 66; 81; 52; 73; 62; 52; 45; 62; 46; 60; 46; 38
Percent new: 5; 6; 8; 11; 12; 15; 16; 17; 18; 18; 19; 20; 20

1. Enter the data into your calculator and make a scatter plot.
2. Use your calculator’s regression function to find the equation of the least-squares regression line. Add this to your scatter plot from part a.
3. Explain in words what the slope and $y$-intercept of the regression line tell us.
4. How well does the regression line fit the data? Explain your response.
5. Which point has the largest residual? Explain what the residual means in context. Is this point an outlier? An influential point? Explain.
6. An ecologist wants to predict how many birds will join another colony of sparrow hawks to which 70% of the adults from the previous year have returned. What is the prediction?

S 12.7.2

1. Check student’s solution.
2. Check student’s solution.
3. The slope of the regression line is -0.3179 with a $y$-intercept of 32.966. In context, the $y$-intercept indicates that when there are no returning sparrow hawks, there will be almost 33% new sparrow hawks, which doesn’t make sense since if there are no returning birds, then the new percentage would have to be 100% (this is an example of why we do not extrapolate). The slope tells us that for each percentage increase in returning birds, the percentage of new birds in the colony decreases by 0.3179%.
4. If we examine $r^{2}$, we see that only 50.238% of the variation in the percent of new birds is explained by the model, and the correlation coefficient, $r = -0.71$, indicates only a somewhat strong correlation between returning and new percentages.
5. The ordered pair $(66, 6)$ generates the largest residual of 6.0. This means that when the observed return percentage is 66%, our observed new percentage, 6%, is almost 6% less than the predicted new value of 11.98%. If we remove this data pair, we see only an adjusted slope of -0.2723 and an adjusted intercept of 30.606. In other words, even though this data generates the largest residual, it is not an outlier, nor is the data pair an influential point.
6. If there are 70% returning birds, we would expect to see $y = -0.2723(70) + 30.606 = 11.545$, or about 11.5% new birds in the colony.

Q 12.7.3

The following table shows data on average per capita wine consumption and heart disease rate in a random sample of 10 countries.

Yearly wine consumption in liters: 2.5; 3.9; 2.9; 2.4; 2.9; 0.8; 9.1; 2.7; 0.8; 0.7
Death from heart diseases: 221; 167; 131; 191; 220; 297; 71; 172; 211; 300

1. Enter the data into your calculator and make a scatter plot.
2. Use your calculator’s regression function to find the equation of the least-squares regression line. Add this to your scatter plot from part a.
3. Explain in words what the slope and $y$-intercept of the regression line tell us.
4. How well does the regression line fit the data? Explain your response.
5. Which point has the largest residual? Explain what the residual means in context. Is this point an outlier? An influential point? Explain.
6. Do the data provide convincing evidence that there is a linear relationship between the amount of alcohol consumed and the heart disease death rate? Carry out an appropriate test at a significance level of 0.05 to help answer this question.

Q 12.7.4

The following table consists of one student athlete’s time (in minutes) to swim 2000 yards and the student’s heart rate (beats per minute) after swimming on a random sample of 10 days:

Swim Time   Heart Rate
34.12   144
35.72   152
34.72   124
34.05   140
34.13   152
35.73   146
36.17   128
35.57   136
35.37   144
35.57   148

1. Enter the data into your calculator and make a scatter plot.
2. Use your calculator’s regression function to find the equation of the least-squares regression line. Add this to your scatter plot from part a.
3. Explain in words what the slope and $y$-intercept of the regression line tell us.
4. How well does the regression line fit the data? Explain your response.
5. Which point has the largest residual? Explain what the residual means in context. Is this point an outlier? An influential point? Explain.

S 12.7.4

1. Check student’s solution.
2. Check student’s solution.
3. We have a slope of $-1.4946$ with a $y$-intercept of 193.88. The slope, in context, indicates that for each additional minute added to the swim time, the heart rate will decrease by 1.5 beats per minute. If the student is not swimming at all, the $y$-intercept indicates that his heart rate will be 193.88 beats per minute. While the slope has meaning (the longer it takes to swim 2,000 meters, the less effort the heart puts out), the $y$-intercept does not make sense. If the athlete is not swimming (resting), then his heart rate should be very low.
4. Since only 1.5% of the heart rate variation is explained by this regression equation, we must conclude that this association is not explained with a linear relationship.
5. The point $(34.72, 124)$ generates the largest residual of $-11.82$. This means that our observed heart rate is almost 12 beats less than our predicted rate of 136 beats per minute. When this point is removed, the slope becomes $1.6914$ with the $y$-intercept changing to $83.694$. While the linear association is still very weak, we see that the removed data pair can be considered an influential point in the sense that the $y$-intercept becomes more meaningful.

Q 12.7.5

A researcher is investigating whether non-white minorities commit a disproportionate number of homicides. He uses demographic data from Detroit, MI to compare homicide rates and the number of the population that are white males.
White Males   Homicide rate per 100,000 people
558,724   8.6
538,584   8.9
519,171   8.52
500,457   8.89
482,418   13.07
465,029   14.57
448,267   21.36
432,109   28.03
416,533   31.49
401,518   37.39
387,046   46.26
373,095   47.24
359,647   52.33

1. Use your calculator to construct a scatter plot of the data. What should the independent variable be? Why?
2. Use your calculator’s regression function to find the equation of the least-squares regression line. Add this to your scatter plot.
3. Discuss what the following mean in context.
   1. The slope of the regression equation
   2. The $y$-intercept of the regression equation
   3. The correlation $r$
   4. The coefficient of determination $r^{2}$
4. Do the data provide convincing evidence that there is a linear relationship between the number of white males in the population and the homicide rate? Carry out an appropriate test at a significance level of 0.05 to help answer this question.

Q 12.7.6

School   Mid-Career Salary (in thousands)   Yearly Tuition
Princeton   137   28,540
Harvey Mudd   135   40,133
CalTech   127   39,900
US Naval Academy   122   0
West Point   120   0
MIT   118   42,050
Lehigh University   118   43,220
NYU-Poly   117   39,565
Babson College   117   40,400
Stanford   114   54,506

Use the data to determine the linear-regression line equation with the outliers removed. Is there a linear correlation for the data set with outliers removed? Justify your answer.

S 12.7.6

If we remove the two service academies (the tuition is \$0.00), we construct a new regression equation of $y = -0.0009x + 160$ with a correlation coefficient of $0.71397$ and a coefficient of determination of $0.50976$. This allows us to say there is a fairly strong linear association between tuition costs and salaries if the service academies are removed from the data set.
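As a software counterpart to this answer, the following hedged Python sketch (data taken from the table in the exercise, with salaries in thousands) removes the two zero-tuition service academies and refits. The text reports the resulting line as $y = -0.0009x + 160$ with a correlation of magnitude about 0.714; since the slope is negative, the computed $r$ will come out negative:

```python
import numpy as np
from scipy import stats

# Yearly tuition (x) and mid-career salary in thousands (y) from the exercise
tuition = np.array([28540, 40133, 39900, 0, 0, 42050, 43220, 39565, 40400, 54506])
salary = np.array([137, 135, 127, 122, 120, 118, 118, 117, 117, 114])

# Drop the two service academies (tuition of $0.00) and refit
keep = tuition > 0
fit = stats.linregress(tuition[keep], salary[keep])

print(f"y-hat = {fit.intercept:.0f} + {fit.slope:.4f}x, r = {fit.rvalue:.4f}")
# The text reports approximately y = -0.0009x + 160, |r| close to 0.714
```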
13: F Distribution and One-Way ANOVA

CHAPTER OBJECTIVES

By the end of this chapter, the student should be able to:
• Interpret the F probability distribution as the number of groups and the sample size change.
• Discuss two uses for the F distribution: one-way ANOVA and the test of two variances.
• Conduct and interpret one-way ANOVA.
• Conduct and interpret hypothesis tests of two variances.

Many statistical applications in psychology, social science, business administration, and the natural sciences involve several groups. For example, an environmentalist is interested in knowing if the average amount of pollution varies in several bodies of water. A sociologist is interested in knowing if the amount of income a person earns varies according to his or her upbringing. A consumer looking for a new car might compare the average gas mileage of several models.

For hypothesis tests comparing averages among more than two groups, statisticians have developed a method called "Analysis of Variance" (abbreviated ANOVA). In this chapter, you will study the simplest form of ANOVA, called single factor or one-way ANOVA. You will also study the $F$ distribution, used for one-way ANOVA, and the test of two variances. This is just a very brief overview of one-way ANOVA; you will study this topic in much greater detail in future statistics courses. One-way ANOVA, as it is presented here, relies heavily on a calculator or computer.

Contributors and Attributions

Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/[email protected].

13.02: One-Way ANOVA

The purpose of a one-way ANOVA test is to determine the existence of a statistically significant difference among several group means. The test actually uses variances to help determine if the means are equal or not. To perform a one-way ANOVA test, there are several basic assumptions to be fulfilled:

Five basic assumptions of one-way ANOVA to be fulfilled

1. Each population from which a sample is taken is assumed to be normal.
2. All samples are randomly selected and independent.
3. The populations are assumed to have equal standard deviations (or variances).
4. The factor is a categorical variable.
5. The response is a numerical variable.

The Null and Alternative Hypotheses

The null hypothesis is simply that all the group population means are the same. The alternative hypothesis is that at least one pair of means is different.
For example, if there are $k$ groups:

• $H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \dotsc = \mu_{k}$
• $H_{a}$: At least two of the group means $\mu_{1}, \mu_{2}, \ldots, \mu_{k}$ are not equal.

The graphs, a set of box plots representing the distribution of values with the group means indicated by a horizontal line through the box, help in the understanding of the hypothesis test. In the first graph (red box plots), $H_{0}: \mu_{1} = \mu_{2} = \mu_{3}$ and the three populations have the same distribution if the null hypothesis is true. The variance of the combined data is approximately the same as the variance of each of the populations. If the null hypothesis is false, then the variance of the combined data is larger, which is caused by the different means as shown in the second graph (green box plots).

Review

Analysis of variance extends the comparison of two groups to several, each a level of a categorical variable (factor). Samples from each group are independent, and must be randomly selected from normal populations with equal variances. We test the null hypothesis of equal means of the response in every group versus the alternative hypothesis of one or more group means being different from the others. A one-way ANOVA hypothesis test determines if several population means are equal. The distribution for the test is the $F$ distribution with two different degrees of freedom.

Assumptions:

1. Each population from which a sample is taken is assumed to be normal.
2. All samples are randomly selected and independent.
3. The populations are assumed to have equal standard deviations (or variances).

Glossary

Analysis of Variance
also referred to as ANOVA, is a method of testing whether or not the means of three or more populations are equal. The method is applicable if:
• all populations of interest are normally distributed.
• the populations have equal standard deviations.
• samples (not necessarily of the same size) are randomly and independently selected from each population.
The test statistic for analysis of variance is the $F$-ratio.

One-Way ANOVA
a method of testing whether or not the means of three or more populations are equal; the method is applicable if:
• all populations of interest are normally distributed.
• the populations have equal standard deviations.
• samples (not necessarily of the same size) are randomly and independently selected from each population.
The test statistic for analysis of variance is the $F$-ratio.

Variance
mean of the squared deviations from the mean; the square of the standard deviation. For a set of data, a deviation can be represented as $x - \bar{x}$ where $x$ is a value of the data and $\bar{x}$ is the sample mean. The sample variance is equal to the sum of the squares of the deviations divided by the difference of the sample size and one.

Contributors and Attributions

Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/[email protected].
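As a quick software illustration of the one-way ANOVA test described above, here is a minimal Python sketch. The three groups and their values are made up for illustration (e.g., pollution measurements from three lakes); scipy.stats.f_oneway carries out the test of equal group means:

```python
from scipy import stats

# Three hypothetical groups; the values are invented for illustration
group1 = [18.2, 20.1, 17.6, 16.8, 18.8]
group2 = [22.4, 20.9, 23.1, 22.6, 21.7]
group3 = [17.9, 18.5, 19.3, 17.1, 18.7]

# H0: all group means are equal; Ha: at least two differ
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.3f}, p-value = {p_value:.4f}")

# Reject H0 at the 5% level if the p-value is below 0.05
```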
The distribution used for the hypothesis test is a new one. It is called the $F$ distribution, named after Sir Ronald Fisher, an English statistician. The $F$ statistic is a ratio (a fraction). There are two sets of degrees of freedom; one for the numerator and one for the denominator. For example, if $F$ follows an $F$ distribution and the number of degrees of freedom for the numerator is four, and the number of degrees of freedom for the denominator is ten, then $F \sim F_{4,10}$.

The $F$ distribution is related to the Student's $t$-distribution: the values of an $F$ distribution with one numerator degree of freedom are the squares of the corresponding values of the $t$-distribution. One-Way ANOVA extends the $t$-test to comparisons of more than two groups. The scope of that derivation is beyond the level of this course.

To calculate the $F$ ratio, two estimates of the variance are made.

1. Variance between samples: An estimate of $\sigma^{2}$ that is the variance of the sample means multiplied by $n$ (when the sample sizes are the same). If the samples are different sizes, the variance between samples is weighted to account for the different sample sizes. The variance is also called variation due to treatment or explained variation.

2. Variance within samples: An estimate of $\sigma^{2}$ that is the average of the sample variances (also known as a pooled variance). When the sample sizes are different, the variance within samples is weighted. The variance is also called the variation due to error or unexplained variation.

• $SS_{\text{between}} = \text{the sum of squares that represents the variation among the different samples}$ • $SS_{\text{within}} = \text{the sum of squares that represents the variation within samples that is due to chance}$

To find a "sum of squares" means to add together squared quantities that, in some cases, may be weighted. We used sums of squares to calculate the sample variance and the sample standard deviation, as discussed previously.

$MS$ means "mean square." $MS_{\text{between}}$ is the variance between groups, and $MS_{\text{within}}$ is the variance within groups.
Calculation of Sum of Squares and Mean Square

• $k =$ the number of different groups • $n_{j} =$ the size of the $j^{th}$ group • $s_{j} =$ the sum of the values in the $j^{th}$ group • $n =$ total number of all the values combined (total sample size): $n = \sum n_{j}$ • $x =$ one value: $\sum x = \sum s_{j}$ • Sum of squares of all values from every group combined: $\sum x^{2}$ • Total sum of squares: $SS_{\text{total}} = \sum x^{2} - \dfrac{\left(\sum x\right)^{2}}{n}$ • Explained variation: sum of squares representing variation among the different samples: $SS_{\text{between}} = \sum \left[\dfrac{(s_{j})^{2}}{n_{j}}\right] - \dfrac{\left(\sum s_{j}\right)^{2}}{n}$ • Unexplained variation: sum of squares representing variation within samples due to chance: $SS_{\text{within}} = SS_{\text{total}} - SS_{\text{between}}$ • $df$'s for the different groups ($df$'s for the numerator): $df_{\text{between}} = k - 1$ • $df$'s for errors within samples ($df$'s for the denominator): $df_{\text{within}} = n - k$ • Mean square (variance estimate) explained by the different groups: $MS_{\text{between}} = \dfrac{SS_{\text{between}}}{df_{\text{between}}}$ • Mean square (variance estimate) that is due to chance (unexplained): $MS_{\text{within}} = \dfrac{SS_{\text{within}}}{df_{\text{within}}}$

$MS_{\text{between}}$ and $MS_{\text{within}}$ can be written as follows:

$MS_{\text{between}} = \dfrac{SS_{\text{between}}}{df_{\text{between}}} = \dfrac{SS_{\text{between}}}{k - 1}$

$MS_{\text{within}} = \dfrac{SS_{\text{within}}}{df_{\text{within}}} = \dfrac{SS_{\text{within}}}{n - k}$

The one-way ANOVA test depends on the fact that $MS_{\text{between}}$ can be influenced by population differences among means of the several groups. Since $MS_{\text{within}}$ compares values of each group to its own group mean, the fact that group means might be different does not affect $MS_{\text{within}}$.

The null hypothesis says that all groups are samples from populations having the same normal distribution. The alternate hypothesis says that at least two of the sample groups come from populations with different normal distributions. If the null hypothesis is true, $MS_{\text{between}}$ and $MS_{\text{within}}$ should both estimate the same value. The null hypothesis says that all the group population means are equal. The hypothesis of equal means implies that the populations have the same normal distribution, because it is assumed that the populations are normal and that they have equal variances.

$F$-Ratio or $F$ Statistic $F = \dfrac{MS_{\text{between}}}{MS_{\text{within}}}$

If $MS_{\text{between}}$ and $MS_{\text{within}}$ estimate the same value (following the belief that $H_{0}$ is true), then the $F$-ratio should be approximately equal to one. Mostly, just sampling errors would contribute to variations away from one. As it turns out, $MS_{\text{between}}$ consists of the population variance plus a variance produced from the differences between the samples. $MS_{\text{within}}$ is an estimate of the population variance. Since variances are always positive, if the null hypothesis is false, $MS_{\text{between}}$ will generally be larger than $MS_{\text{within}}$. Then the $F$-ratio will be larger than one. However, if the population effect is small, it is not unlikely that $MS_{\text{within}}$ will be larger in a given sample.

The foregoing calculations were done with groups of different sizes.
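For readers following along in software, here is a minimal Python sketch of the sums-of-squares bookkeeping above; the groups are small made-up data, and the variable names (ss_between, ms_within, and so on) are ours, not the text's.

    # Minimal sketch of the one-way ANOVA sums of squares, using hypothetical data.
    groups = [[4.0, 6.0, 8.0], [5.0, 7.0, 9.0, 11.0], [10.0, 12.0, 14.0]]

    k = len(groups)                                # number of different groups
    n = sum(len(g) for g in groups)                # total sample size
    sum_x = sum(x for g in groups for x in g)      # sum of all values
    sum_x2 = sum(x**2 for g in groups for x in g)  # sum of all squared values

    ss_total = sum_x2 - sum_x**2 / n
    ss_between = sum(sum(g)**2 / len(g) for g in groups) - sum_x**2 / n
    ss_within = ss_total - ss_between

    ms_between = ss_between / (k - 1)   # df(num) = k - 1
    ms_within = ss_within / (n - k)     # df(denom) = n - k
    print(ms_between / ms_within)       # the F ratio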
If the groups are the same size, the calculations simplify somewhat and the $F$-ratio can be written as:

$F$-Ratio Formula when the groups are the same size

$F = \dfrac{n \cdot s_{\bar{x}}^{2}}{s^{2}_{\text{pooled}}}$

where ... • $n = \text{the sample size of each group (the groups are the same size)}$ • $df_{\text{numerator}} = k - 1$ • $df_{\text{denominator}} = \text{total number of observations} - k$ • $s^{2}_{\text{pooled}} = \text{the mean of the sample variances (pooled variance)}$ • $s_{\bar{x}}^{2} = \text{the variance of the sample means}$

Data are typically put into a table for easy viewing. One-Way ANOVA results are often displayed in this manner by computer software.

Source of Variation Sum of Squares ($SS$) Degrees of Freedom ($df$) Mean Square ($MS$) $F$ Factor (Between) $SS(\text{Factor})$ $k - 1$ $MS(\text{Factor}) = \dfrac{SS(\text{Factor})}{(k - 1)}$ $F = \dfrac{MS(\text{Factor})}{MS(\text{Error})}$ Error (Within) $SS(\text{Error})$ $n - k$ $MS(\text{Error}) = \dfrac{SS(\text{Error})}{(n - k)}$ Total $SS(\text{Total})$ $n - 1$

Example $1$

Three different diet plans are to be tested for mean weight loss. The entries in the table are the weight losses for the different plans. The one-way ANOVA results are shown in Table.

Plan 1: $n_{1} = 4$, with weight losses 5, 4.5, 4, 3. Plan 2: $n_{2} = 3$, with weight losses 3.5, 7, 4.5. Plan 3: $n_{3} = 3$, with weight losses 8, 4, 3.5.

The group sums are $s_{1} = 16.5, s_{2} = 15, s_{3} = 15.5$.

Following are the calculations needed to fill in the one-way ANOVA table. The table is used to conduct a hypothesis test.

$SS(\text{between}) = \sum \left[\dfrac{(s_{j})^{2}}{n_{j}}\right] - \dfrac{\left(\sum s_{j}\right)^{2}}{n} = \dfrac{s^{2}_{1}}{4} + \dfrac{s^{2}_{2}}{3} + \dfrac{s^{2}_{3}}{3} - \dfrac{(s_{1} + s_{2} + s_{3})^{2}}{10}$

where $n_{1} = 4, n_{2} = 3, n_{3} = 3$ and $n = n_{1} + n_{2} + n_{3} = 10$, so

$SS(\text{between}) = \dfrac{(16.5)^{2}}{4} + \dfrac{(15)^{2}}{3} + \dfrac{(15.5)^{2}}{3} - \dfrac{(16.5 + 15 + 15.5)^{2}}{10} = 2.2458$

$SS(\text{total}) = \sum x^{2} - \dfrac{\left(\sum x\right)^{2}}{n} = (5^{2} + 4.5^{2} + 4^{2} + 3^{2} + 3.5^{2} + 7^{2} + 4.5^{2} + 8^{2} + 4^{2} + 3.5^{2}) - \dfrac{(5 + 4.5 + 4 + 3 + 3.5 + 7 + 4.5 + 8 + 4 + 3.5)^{2}}{10} = 244 - \dfrac{47^{2}}{10} = 244 - 220.9 = 23.1$

$SS(\text{within}) = SS(\text{total}) - SS(\text{between}) = 23.1 - 2.2458 = 20.8542$

One-Way ANOVA Table: The formulas for $SS(\text{Total})$, $SS(\text{Factor}) = SS(\text{Between})$ and $SS(\text{Error}) = SS(\text{Within})$ are as shown previously. The same information is provided by the TI calculator hypothesis test function ANOVA in STAT TESTS (syntax is ANOVA(L1, L2, L3) where L1, L2, L3 have the data from Plan 1, Plan 2, Plan 3 respectively).
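As a cross-check on these hand calculations, SciPy's f_oneway function (assuming the scipy package is available) runs the entire test at once; its F statistic should match the value in the ANOVA table that follows.

    from scipy import stats

    # One-way ANOVA on the diet-plan data from Example 1.
    plan1 = [5, 4.5, 4, 3]
    plan2 = [3.5, 7, 4.5]
    plan3 = [8, 4, 3.5]

    f_stat, p_value = stats.f_oneway(plan1, plan2, plan3)
    print(f_stat)    # should be about 0.3769, matching MS(Factor)/MS(Error)
    print(p_value)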
Source of Variation Sum of Squares ($SS$) Degrees of Freedom ($df$) Mean Square ($MS$) $F$ Factor (Between) $SS(\text{Factor}) = SS(\text{Between}) = 2.2458$ $k - 1 = 3 \text{ groups} - 1 = 2$ $MS(\text{Factor}) = \dfrac{SS(\text{Factor})}{(k - 1)} = \dfrac{2.2458}{2} = 1.1229$ $F = \dfrac{MS(\text{Factor})}{MS(\text{Error})} = \dfrac{1.1229}{2.9792} = 0.3769$ Error (Within) $SS(\text{Error}) = SS(\text{Within}) = 20.8542$ $n - k = 10 \text{ total data} - 3 \text{ groups} = 7$ $MS(\text{Error}) = \dfrac{SS(\text{Error})}{(n - k)} = \dfrac{20.8542}{7} = 2.9792$ Total $SS(\text{Total}) = 2.2458 + 20.8542 = 23.1$ $n - 1 = 10 \text{ total data} - 1 = 9$

Exercise $1$

As part of an experiment to see how different types of soil cover would affect slicing tomato production, Marist College students grew tomato plants under different soil cover conditions. Groups of three plants each had one of the following treatments • bare soil • a commercial ground cover • black plastic • straw • compost

All plants grew under the same conditions and were the same variety. Students recorded the weight (in grams) of tomatoes produced by each of the $n = 15$ plants:

Bare: $n_{1} = 3$ Ground Cover: $n_{2} = 3$ Plastic: $n_{3} = 3$ Straw: $n_{4} = 3$ Compost: $n_{5} = 3$ 2,625 5,348 6,583 7,285 6,277 2,997 5,682 8,560 6,897 7,818 4,915 5,482 3,830 9,230 8,677

Create the one-way ANOVA table.

Answer

Enter the data into lists L1, L2, L3, L4 and L5. Press STAT and arrow over to TESTS. Arrow down to ANOVA. Press ENTER and enter (L1, L2, L3, L4, L5). Press ENTER. The table was filled in with the results from the calculator.

One-Way ANOVA table Source of Variation Sum of Squares ($SS$) Degrees of Freedom ($df$) Mean Square ($MS$) $F$ Factor (Between) 36,648,561 $5 - 1 = 4$ $\dfrac{36,648,561}{4} = 9,162,140$ $\dfrac{9,162,140}{2,044,672.6} = 4.4810$ Error (Within) 20,446,726 $15 - 5 = 10$ $\dfrac{20,446,726}{10} = 2,044,672.6$ Total 57,095,287 $15 - 1 = 14$

The one-way ANOVA hypothesis test is always right-tailed because larger $F$-values are way out in the right tail of the $F$-distribution curve and tend to make us reject $H_{0}$.

Notation The notation for the $F$ distribution is $F \sim F_{df(\text{num}), df(\text{denom})}$ where $df(\text{num}) = df_{\text{between}}$ and $df(\text{denom}) = df_{\text{within}}$.

The mean for the $F$ distribution is $\mu = \dfrac{df(\text{denom})}{df(\text{denom}) - 2}$, provided $df(\text{denom}) > 2$.

Review Analysis of variance compares the means of a response variable for several groups. ANOVA compares the variation within each group to the variation of the mean of each group. The ratio of these two is the $F$ statistic from an $F$ distribution with (number of groups – 1) as the numerator degrees of freedom and (number of observations – number of groups) as the denominator degrees of freedom. These statistics are summarized in the ANOVA table.
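The p-value in such a table is always a right-tail area of the F distribution. Here is a one-line computation with SciPy's F distribution (an assumption on our part; the TI Fcdf command does the same job), using the tomato-plant statistics above.

    from scipy import stats

    # Right-tail area P(F > 4.4810) for F ~ F(4, 10), from the tomato-plant table.
    p_value = stats.f.sf(4.4810, 4, 10)
    print(p_value)   # about 0.025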
Formula Review

$SS_{\text{between}} = \sum \left[\dfrac{(s_{j})^{2}}{n_{j}}\right] - \dfrac{\left(\sum s_{j}\right)^{2}}{n}$

$SS_{\text{total}} = \sum x^{2} - \dfrac{\left(\sum x\right)^{2}}{n}$

$SS_{\text{within}} = SS_{\text{total}} - SS_{\text{between}}$

$df_{\text{between}} = df(\text{num}) = k - 1$

$df_{\text{within}} = df(\text{denom}) = n - k$

$MS_{\text{between}} = \dfrac{SS_{\text{between}}}{df_{\text{between}}}$

$MS_{\text{within}} = \dfrac{SS_{\text{within}}}{df_{\text{within}}}$

$F = \dfrac{MS_{\text{between}}}{MS_{\text{within}}}$

$F$ ratio when the groups are the same size: $F = \dfrac{n s_{\bar{x}}^{2}}{s^{2}_{\text{pooled}}}$

Mean of the $F$ distribution: $\mu = \dfrac{df(\text{denom})}{df(\text{denom}) - 2}$, for $df(\text{denom}) > 2$

where: • $k =$ the number of groups • $n_{j} =$ the size of the $j^{th}$ group • $s_{j} =$ the sum of the values in the $j^{th}$ group • $n =$ the total number of all values (observations) combined • $x =$ one value (one observation) from the data • $s_{\bar{x}}^{2} =$ the variance of the sample means • $s^{2}_{\text{pooled}} =$ the mean of the sample variances (pooled variance)

Contributors and Attributions Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/[email protected].
Here are some facts about the $F$ distribution:

1. The curve is not symmetrical but skewed to the right. 2. There is a different curve for each set of $dfs$. 3. The $F$ statistic is greater than or equal to zero. 4. As the degrees of freedom for the numerator and for the denominator get larger, the curve approximates the normal. 5. Other uses for the $F$ distribution include comparing two variances and two-way Analysis of Variance. Two-Way Analysis is beyond the scope of this chapter.

Example $1$

Let’s return to the slicing tomato exercise. The means of the tomato yields under the five mulching conditions are represented by $\mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}, \mu_{5}$. We will conduct a hypothesis test to determine if all means are the same or at least one is different. Using a significance level of 5%, test the null hypothesis that there is no difference in mean yields among the five groups against the alternative hypothesis that at least one mean is different from the rest.

Answer

The null and alternative hypotheses are: • $H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4} = \mu_{5}$ • $H_{a}: \mu_{i} \neq \mu_{j}$ for some $i \neq j$

The one-way ANOVA results are shown in Table

one-way ANOVA results Source of Variation Sum of Squares ($SS$) Degrees of Freedom ($df$) Mean Square ($MS$) $F$ Factor (Between) 36,648,561 $5 - 1 = 4$ $\dfrac{36,648,561}{4} = 9,162,140$ $\dfrac{9,162,140}{2,044,672.6} = 4.4810$ Error (Within) 20,446,726 $15 - 5 = 10$ $\dfrac{20,446,726}{10} = 2,044,672.6$ Total 57,095,287 $15 - 1 = 14$

Distribution for the test: $F_{4,10}$

$df(\text{num}) = 5 - 1 = 4$ $df(\text{denom}) = 15 - 5 = 10$

Test statistic: $F = 4.4810$

Probability Statement: $p\text{-value} = P(F > 4.481) = 0.0248$.

Compare $\alpha$ and the $p\text{-value}$: $\alpha = 0.05, p\text{-value} = 0.0248$

Make a decision: Since $\alpha > p\text{-value}$, we reject $H_{0}$.

Conclusion: At the 5% significance level, we have reasonably strong evidence that differences in mean yields for slicing tomato plants grown under different mulching conditions are unlikely to be due to chance alone. We may conclude that at least some of the mulches led to different mean yields.

To find these results on the calculator: Press STAT. Press 1:EDIT. Put the data into the lists L1, L2, L3, L4, L5. Press STAT, and arrow over to TESTS, and arrow down to ANOVA. Press ENTER, and then enter (L1, L2, L3, L4, L5). Press ENTER. You will see that the values in the foregoing ANOVA table are easily produced by the calculator, including the test statistic and the p-value of the test.

The calculator displays: • $F = 4.4810$ • $p = 0.0248$ ($p\text{-value}$) Factor • $df = 4$ • $SS = 36648560.9$ • $MS = 9162140.23$ Error • $df = 10$ • $SS = 20446726$ • $MS = 2044672.6$

Exercise $1$

MRSA, or methicillin-resistant Staphylococcus aureus, can cause serious bacterial infections in hospital patients. Table shows various colony counts from different patients who may or may not have MRSA.

Conc = 0.6 Conc = 0.8 Conc = 1.0 Conc = 1.2 Conc = 1.4 9 16 22 30 27 66 93 147 199 168 98 82 120 148 132

Plot of the data for the different concentrations:

Test whether the mean number of colonies are the same or are different. Construct the ANOVA table (by hand or by using a TI-83, 83+, or 84+ calculator), find the p-value, and state your conclusion. Use a 5% significance level.

Answer

While there are differences in the spreads between the groups (Figure $1$), the differences do not appear to be big enough to cause concern.
We test for the equality of mean number of colonies:

$H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4} = \mu_{5}$ $H_{a}: \mu_{i} \neq \mu_{j}$ for some $i \neq j$

The one-way ANOVA table results are shown in Table.

Table $1$ Source of Variation Sum of Squares ($SS$) Degrees of Freedom ($df$) Mean Square ($MS$) $F$ Factor (Between) 10,233 $5 - 1 = 4$ $\dfrac{10,233}{4} = 2,558.25$ $\dfrac{2,558.25}{4,194.9} = 0.6099$ Error (Within) 41,949 $15 - 5 = 10$ $\dfrac{41,949}{10} = 4,194.9$ Total 52,182 $15 - 1 = 14$

Figure $2$

Distribution for the test: $F_{4,10}$

Probability Statement: $p\text{-value} = P(F > 0.6099) = 0.6649$.

Compare $\alpha$ and the $p\text{-value}$: $\alpha = 0.05, p\text{-value} = 0.6649, \alpha < p\text{-value}$

Make a decision: Since $\alpha < p\text{-value}$, we do not reject $H_{0}$.

Conclusion: At the 5% significance level, there is insufficient evidence from these data that different levels of tryptone will cause a significant difference in the mean number of bacterial colonies formed.

Example $2$

Four sororities took a random sample of sisters regarding their grade means for the past term. The results are shown in Table.

Table: MEAN GRADES FOR FOUR SORORITIES Sorority 1 Sorority 2 Sorority 3 Sorority 4 2.17 2.63 2.63 3.79 1.85 1.77 3.78 3.45 2.83 3.25 4.00 3.08 1.69 1.86 2.55 2.26 3.33 2.21 2.45 3.18

Using a significance level of 1%, is there a difference in mean grades among the sororities?

Answer

Let $\mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}$ be the population means of the sororities. Remember that the null hypothesis claims that the sorority groups are from the same normal distribution. The alternate hypothesis says that at least two of the sorority groups come from populations with different normal distributions. Notice that the four sample sizes are each five. This is an example of a balanced design, because each factor (i.e., sorority) has the same number of observations.

$H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4}$ $H_{a}$: Not all of the means $\mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}$ are equal.

Distribution for the test: $F_{3,16}$ where $k = 4$ groups and $n = 20$ samples in total

$df(\text{num}) = k - 1 = 4 - 1 = 3$ $df(\text{denom}) = n - k = 20 - 4 = 16$

Calculate the test statistic: $F = 2.23$

Graph:

Probability statement: $p\text{-value} = P(F > 2.23) = 0.1241$

Compare $\alpha$ and the $p\text{-value}$: $\alpha = 0.01$ $p\text{-value} = 0.1241$ $\alpha < p\text{-value}$

Make a decision: Since $\alpha < p\text{-value}$, you cannot reject $H_{0}$.

Conclusion: There is not sufficient evidence to conclude that there is a difference among the mean grades for the sororities.

Put the data into lists L1, L2, L3, and L4. Press STAT and arrow over to TESTS. Arrow down to F:ANOVA. Press ENTER and enter (L1,L2,L3,L4). The calculator displays the F statistic, the $p\text{-value}$ and the values for the one-way ANOVA table: $F = 2.2303$ $p = 0.1241$ ($p\text{-value}$) Factor $df = 3$ $SS = 2.88732$ $MS = 0.96244$ Error $df = 16$ $SS = 6.9044$ $MS = 0.431525$

Exercise $2$

Four sports teams took a random sample of players regarding their GPAs for the last year. The results are shown in Table.

GPAs FOR FOUR SPORTS TEAMS Basketball Baseball Hockey Lacrosse 3.6 2.1 4.0 2.0 2.9 2.6 2.0 3.6 2.5 3.9 2.6 3.9 3.3 3.1 3.2 2.7 3.8 3.4 3.2 2.5

Use a significance level of 5%, and determine if there is a difference in GPA among the teams.

Answer

With a $p\text{-value}$ of $0.9271$, we decline to reject the null hypothesis.
There is not sufficient evidence to conclude that there is a difference among the GPAs for the sports teams.

Example $3$

A fourth grade class is studying the environment. One of the assignments is to grow bean plants in different soils. Tommy chose to grow his bean plants in soil found outside his classroom mixed with dryer lint. Tara chose to grow her bean plants in potting soil bought at the local nursery. Nick chose to grow his bean plants in soil from his mother's garden. No chemicals were used on the plants, only water. They were grown inside the classroom next to a large window. Each child grew five plants. At the end of the growing period, each plant was measured, producing the data (in inches) in Table $3$.

Table $3$ Tommy's Plants Tara's Plants Nick's Plants 24 25 23 21 31 27 23 23 22 30 20 30 23 28 20

Does it appear that the three media in which the bean plants were grown produce the same mean height? Test at a 3% level of significance.

Answer

This time, we will perform the calculations that lead to the $F'$ statistic. Notice that each group has the same number of plants, so we will use the formula $F' = \dfrac{n \cdot s_{\bar{x}}^{2}}{s^{2}_{\text{pooled}}}.$

First, calculate the sample mean and sample variance of each group.

Tommy's Plants Tara's Plants Nick's Plants Sample Mean 24.2 25.4 24.4 Sample Variance 11.7 18.3 16.3

Next, calculate the variance of the three group means (Calculate the variance of 24.2, 25.4, and 24.4). Variance of the group means $= 0.413 = s_{\bar{x}}^{2}$

Then $MS_{\text{between}} = ns_{\bar{x}}^{2} = (5)(0.413)$ where $n = 5$ is the sample size (number of plants each child grew).

Calculate the mean of the three sample variances (Calculate the mean of 11.7, 18.3, and 16.3). Mean of the sample variances $= 15.433 = s^{2}_{\text{pooled}}$

Then $MS_{\text{within}} = s^{2}_{\text{pooled}} = 15.433$.

The $F$ statistic (or $F$ ratio) is $F = \dfrac{MS_{\text{between}}}{MS_{\text{within}}} = \dfrac{ns_{\bar{x}}^{2}}{s^{2}_{\text{pooled}}} = \dfrac{(5)(0.413)}{15.433} = 0.134$

The $dfs$ for the numerator $= \text{the number of groups} - 1 = 3 - 1 = 2$.

The $dfs$ for the denominator $= \text{the total number of samples} - \text{the number of groups} = 15 - 3 = 12$

The distribution for the test is $F_{2,12}$ and the $F$ statistic is $F = 0.134$

The $p\text{-value}$ is $P(F > 0.134) = 0.8759$.

Decision: Since $\alpha = 0.03$ and the $p\text{-value} = 0.8759$, do not reject $H_{0}$. (Why?)

Conclusion: With a 3% level of significance, from the sample data, the evidence is not sufficient to conclude that the mean heights of the bean plants are different.

To calculate the $p\text{-value}$: Press 2nd DISTR. Arrow down to Fcdf( and press ENTER. Enter 0.134, E99, 2, 12). Press ENTER. The $p\text{-value}$ is $0.8759$.

Exercise $3$

Another fourth grader also grew bean plants, but this time in a jelly-like mass. The heights were (in inches) 24, 28, 25, 30, and 32. Do a one-way ANOVA test on the four groups. Are the heights of the bean plants different? Use the same method as shown in Example $3$.

Answer

• $F = 0.9496$ • $p\text{-value} = 0.4402$

From the sample data, the evidence is not sufficient to conclude that the mean heights of the bean plants are different.

Collaborative Exercise

From the class, create four groups of the same size as follows: men under 22, men at least 22, women under 22, women at least 22. Have each member of each group record the number of states in the United States he or she has visited.
Run an ANOVA test to determine if the average number of states visited is the same in the four groups. Test at a 1% level of significance. Use one of the solution sheets in [link].

Review

The graph of the $F$ distribution is always positive and skewed right, though the shape can be mounded or exponential depending on the combination of numerator and denominator degrees of freedom. The $F$ statistic is the ratio of a measure of the variation in the group means to a similar measure of the variation within the groups. If the null hypothesis is correct, then the numerator should be small compared to the denominator. A small $F$ statistic will result, and the area under the $F$ curve to the right will be large, representing a large $p\text{-value}$. When the null hypothesis of equal group means is incorrect, then the numerator should be large compared to the denominator, giving a large $F$ statistic and a small area (small $p\text{-value}$) to the right of the statistic under the $F$ curve.

When the data have unequal group sizes (unbalanced data), then techniques discussed earlier need to be used for hand calculations. In the case of balanced data (the groups are the same size), however, simplified calculations based on group means and variances may be used. In practice, of course, software is usually employed in the analysis. As in any analysis, graphs of various sorts should be used in conjunction with numerical techniques. Always look at your data!
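To close the section, the balanced-data shortcut can be checked with nothing but Python's standard library; this sketch reproduces the bean-plant F statistic from Example 3.

    from statistics import mean, variance

    # Balanced design: F = n * (variance of group means) / (mean of group variances).
    tommy = [24, 21, 23, 30, 23]
    tara = [25, 31, 23, 20, 28]
    nick = [23, 27, 22, 30, 20]
    groups = [tommy, tara, nick]

    n = len(tommy)                                   # common group size
    s2_xbar = variance([mean(g) for g in groups])    # variance of the sample means
    s2_pooled = mean([variance(g) for g in groups])  # mean of the sample variances
    print(n * s2_xbar / s2_pooled)                   # about 0.134, as in Example 3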
Another of the uses of the $F$ distribution is testing two variances. It is often desirable to compare two variances rather than two averages. For instance, college administrators would like two college professors grading exams to have the same variation in their grading. In order for a lid to fit a container, the variation in the lid and the container should be the same. A supermarket might be interested in the variability of check-out times for two checkers.

To perform an $F$ test of two variances, it is important that the following are true: • The populations from which the two samples are drawn are normally distributed. • The two populations are independent of each other.

Unlike most other tests in this book, the $F$ test for equality of two variances is very sensitive to deviations from normality. If the two distributions are not normal, the test can give higher $p\text{-values}$ than it should, or lower ones, in ways that are unpredictable. Many texts suggest that students not use this test at all, but in the interest of completeness we include it here.

Suppose we sample randomly from two independent normal populations. Let $\sigma^{2}_{1}$ and $\sigma^{2}_{2}$ be the population variances and $s^{2}_{1}$ and $s^{2}_{2}$ be the sample variances. Let the sample sizes be $n_{1}$ and $n_{2}$. Since we are interested in comparing the two sample variances, we use the $F$ ratio:

$F = \dfrac{\left[\dfrac{(s_{1})^{2}}{(\sigma_{1})^{2}}\right]}{\left[\dfrac{(s_{2})^{2}}{(\sigma_{2})^{2}}\right]}$

$F$ has the distribution $F \sim F(n_{1} - 1, n_{2} - 1)$ where $n_{1} - 1$ are the degrees of freedom for the numerator and $n_{2} - 1$ are the degrees of freedom for the denominator.

If the null hypothesis is $\sigma^{2}_{1} = \sigma^{2}_{2}$, then the $F$ Ratio becomes

$F = \dfrac{\left[\dfrac{(s_{1})^{2}}{(\sigma_{1})^{2}}\right]}{\left[\dfrac{(s_{2})^{2}}{(\sigma_{2})^{2}}\right]} = \dfrac{(s_{1})^{2}}{(s_{2})^{2}}.$

The $F$ ratio could also be $\dfrac{(s_{2})^{2}}{(s_{1})^{2}}$. It depends on $H_{a}$ and on which sample variance is larger.

If the two populations have equal variances, then $s^{2}_{1}$ and $s^{2}_{2}$ are close in value and $F = \dfrac{(s_{1})^{2}}{(s_{2})^{2}}$ is close to one. But if the two population variances are very different, $s^{2}_{1}$ and $s^{2}_{2}$ tend to be very different, too. Choosing $s^{2}_{1}$ as the larger sample variance causes the ratio $\dfrac{(s_{1})^{2}}{(s_{2})^{2}}$ to be greater than one. If $s^{2}_{1}$ and $s^{2}_{2}$ are far apart, then $F = \dfrac{(s_{1})^{2}}{(s_{2})^{2}}$ is a large number. Therefore, if $F$ is close to one, the evidence favors the null hypothesis (the two population variances are equal). But if $F$ is much larger than one, then the evidence is against the null hypothesis. A test of two variances may be left, right, or two-tailed.

Example $1$

Two college instructors are interested in whether or not there is any variation in the way they grade math exams. They each grade the same set of 30 exams. The first instructor's grades have a variance of 52.3. The second instructor's grades have a variance of 89.9. Test the claim that the first instructor's variance is smaller. (In most colleges, it is desirable for the variances of exam grades to be nearly the same among instructors.) The level of significance is 10%.

Answer

Let 1 and 2 be the subscripts that indicate the first and second instructor, respectively.

• $n_{1} = n_{2} = 30$.
• $H_{0}: \sigma^{2}_{1} = \sigma^{2}_{2}$ and $H_{a}: \sigma^{2}_{1} < \sigma^{2}_{2}$

Calculate the test statistic: By the null hypothesis ($\sigma^{2}_{1} = \sigma^{2}_{2}$), the $F$ statistic is:

$F = \dfrac{\left[\dfrac{(s_{1})^{2}}{(\sigma_{1})^{2}}\right]}{\left[\dfrac{(s_{2})^{2}}{(\sigma_{2})^{2}}\right]} = \dfrac{(s_{1})^{2}}{(s_{2})^{2}} = \dfrac{52.3}{89.9} = 0.5818$

Distribution for the test: $F_{29,29}$ where $n_{1} - 1 = 29$ and $n_{2} - 1 = 29$.

Graph: This test is left tailed. Draw the graph labeling and shading appropriately.

Probability statement: $p\text{-value} = P(F < 0.5818) = 0.0753$

Compare $\alpha$ and the $p\text{-value}$: $\alpha = 0.10$, so $\alpha > p\text{-value}$.

Make a decision: Since $\alpha > p\text{-value}$, reject $H_{0}$.

Conclusion: With a 10% level of significance, from the data, there is sufficient evidence to conclude that the variance in grades for the first instructor is smaller.

Press STAT and arrow over to TESTS. Arrow down to D:2-SampFTest. Press ENTER. Arrow to Stats and press ENTER. For Sx1, n1, Sx2, and n2, enter $\sqrt{52.3}$, 30, $\sqrt{89.9}$, and 30. Press ENTER after each. Arrow to σ1: and select <σ2. Press ENTER. Arrow down to Calculate and press ENTER. $F = 0.5818$ and $p\text{-value} = 0.0753$. Do the procedure again and try Draw instead of Calculate.

Exercise $1$

The New York Choral Society divides male singers up into four categories from highest voices to lowest: Tenor1, Tenor2, Bass1, Bass2. In the table are heights of the men in the Tenor1 and Bass2 groups. One suspects that taller men will have lower voices, and that the variance of height may go up with the lower voices as well. Do we have good evidence that the variances of the heights of singers in these two groups (Tenor1 and Bass2) are different?

Tenor1 Bass2 Tenor1 Bass2 Tenor1 Bass2 69 72 67 72 68 67 72 75 70 74 67 70 71 67 65 70 64 70 66 75 72 66   69 76 74 70 68   72 74 72 68 75   71 71 72 64 68   74 66 74 73 70   75 68 72 66 72

Answer

The histograms are not as normal as one might like. Plot them to verify. However, we proceed with the test in any case.

Subscripts: $\text{T1} =$ tenor 1 and $\text{B2} =$ bass 2

The standard deviations of the samples are $s_{\text{T1}} = 3.3302$ and $s_{\text{B2}} = 2.7208$.

The hypotheses are $H_{0}: \sigma^{2}_{\text{T1}} = \sigma^{2}_{\text{B2}}$ and $H_{a}: \sigma^{2}_{\text{T1}} \neq \sigma^{2}_{\text{B2}}$ (two-tailed test)

The $F$ statistic is $1.4894$ with 20 and 25 degrees of freedom.

The $p\text{-value}$ is $0.3430$. If we assume alpha is 0.05, then we cannot reject the null hypothesis. We have no good evidence from the data that the heights of Tenor1 and Bass2 singers have different variances (despite there being a significant difference in mean heights of about 2.5 inches.)

Review

The F test for the equality of two variances rests heavily on the assumption of normal distributions. The test is unreliable if this assumption is not met. If both distributions are normal, then the ratio of the two sample variances is distributed as an F statistic, with numerator and denominator degrees of freedom that are one less than the sample sizes of the corresponding two groups. A test of two variances hypothesis test determines if two variances are the same. The distribution for the hypothesis test is the $F$ distribution with two different degrees of freedom.

Assumptions: 1. The populations from which the two samples are drawn are normally distributed. 2. The two populations are independent of each other.
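A minimal sketch of the left-tailed two-variance test from Example 1, assuming SciPy for the F distribution; only the two sample variances and sample sizes are needed.

    from scipy import stats

    # F test of two variances: instructor grading example (left-tailed).
    s1_sq, n1 = 52.3, 30   # first instructor's sample variance, sample size
    s2_sq, n2 = 89.9, 30   # second instructor's sample variance, sample size

    f_stat = s1_sq / s2_sq
    p_value = stats.f.cdf(f_stat, n1 - 1, n2 - 1)   # left tail: P(F < f_stat)
    print(f_stat, p_value)   # about 0.5818 and 0.0753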
Formula Review $F$ has the distribution $F \sim F(n_{1} - 1, n_{2} - 1)$ $F = \dfrac{\dfrac{s^{2}_{1}}{\sigma^{2}_{1}}}{\dfrac{s^{2}_{2}}{\sigma^{2}_{2}}}$ If $\sigma_{1} = \sigma_{2}$, then $F = \dfrac{s^{2}_{1}}{s^{2}_{2}}$ 13.06: Lab- One-Way ANOVA Name: ______________________________ Section: _____________________________ Student ID#:__________________________ Work in groups on these problems. You should try to answer the questions without referring to your textbook. If you get stuck, try asking another group for help. Student Learning Outcome • The student will conduct a simple one-way $ANOVA$ test involving three variables. Collect the Data Record the price per pound of eight fruits, eight vegetables, and eight breads in your local supermarket. Fruits Vegetables Breads Explain how you could try to collect the data randomly. Analyze the Data and Conduct a Hypothesis Test 1. Compute the following: 1. Fruit: 1. $\bar{x} =$ ______ 2. $s_{x} =$ ______ 3. $n =$ ______ 2. Vegetables: 1. $\bar{x} =$ ______ 2. $s_{x} =$ ______ 3. $n =$ ______ 3. Bread: 1. $\bar{x} =$ ______ 2. $s_{x} =$ ______ 3. $n =$ ______ 2. Find the following: 1. $df(\text{num}) =$ ______ 2. $df(\text{denom}) =$ ______ 3. State the approximate distribution for the test. 4. Test statistic: $F =$ ______ 5. Sketch a graph of this situation. CLEARLY, label and scale the horizontal axis and shade the region(s) corresponding to the $p\text{-value}$. 6. $p\text{-value} =$ ______ 7. Test at $\alpha = 0.05$. State your decision and conclusion. 1. Decision: Why did you make this decision? 2. Conclusion (write a complete sentence). 3. Based on the results of your study, is there a need to investigate any of the food groups’ prices? Why or why not?
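Students working the lab with software rather than a TI calculator could use a sketch like the following; the price lists are hypothetical placeholders to be replaced with the data you collect, and SciPy is assumed for the test itself.

    from statistics import mean, stdev
    from scipy import stats

    # Hypothetical price-per-pound data; replace with your own eight values per group.
    fruits = [1.29, 0.99, 2.49, 1.79, 3.09, 1.49, 2.19, 0.89]
    vegetables = [0.99, 1.19, 2.29, 1.09, 1.99, 0.79, 1.49, 1.89]
    breads = [2.49, 3.19, 1.99, 2.89, 3.49, 2.59, 1.89, 2.99]

    for name, g in (("fruit", fruits), ("vegetables", vegetables), ("bread", breads)):
        print(name, mean(g), stdev(g), len(g))   # x-bar, s_x, n for each group

    f_stat, p_value = stats.f_oneway(fruits, vegetables, breads)
    print(f_stat, p_value)   # compare the p-value to alpha = 0.05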
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax.

13.2: One-Way ANOVA

Q 13.2.1 Three different traffic routes are tested for mean driving time. The entries in the table are the driving times in minutes on the three different routes. The one-way $ANOVA$ results are shown in Table. Route 1 Route 2 Route 3 30 27 16 32 29 41 27 28 22 35 36 31 State $SS_{\text{between}}$, $SS_{\text{within}}$, and the $F$ statistic.

S 13.2.1 $SS_{\text{between}} = 26$ $SS_{\text{within}} = 441$ $F = 0.2653$

Q 13.2.2 Suppose a group is interested in determining whether teenagers obtain their drivers licenses at approximately the same average age across the country. Suppose that the following data are randomly collected from four teenagers in each region of the country. The numbers represent the age at which teenagers obtained their drivers licenses. Northeast South West Central East 16.3 16.9 16.4 16.2 17.1 16.1 16.5 16.5 16.6 17.2 16.4 16.4 16.6 16.5 16.6 16.5 16.2 16.1 16.4 16.8 $\bar{x} =$ ________ ________ ________ ________ ________ $s^{2} =$ ________ ________ ________ ________ ________ State the hypotheses. $H_{0}$: ____________ $H_{a}$: ____________

13.3: The F-Distribution and the F-Ratio

Use the following information to answer the next five exercises. There are five basic assumptions that must be fulfilled in order to perform a one-way $ANOVA$ test. What are they?

Exercise 13.2.1 Write one assumption. Answer Each population from which a sample is taken is assumed to be normal.

Exercise 13.2.2 Write another assumption.

Exercise 13.2.3 Write a third assumption. Answer The populations are assumed to have equal standard deviations (or variances).

Exercise 13.2.4 Write a fourth assumption.

Exercise 13.2.5 Write the final assumption. Answer The response is a numerical variable.

Exercise 13.2.6 State the null hypothesis for a one-way $ANOVA$ test if there are four groups.

Exercise 13.2.7 State the alternative hypothesis for a one-way $ANOVA$ test if there are three groups. Answer $H_{a}: \text{At least two of the group means } \mu_{1}, \mu_{2}, \mu_{3} \text{ are not equal.}$

Exercise 13.2.8 When do you use an $ANOVA$ test?

Use the following information to answer the next three exercises. Suppose a group is interested in determining whether teenagers obtain their drivers licenses at approximately the same average age across the country. Suppose that the following data are randomly collected from four teenagers in each region of the country. The numbers represent the age at which teenagers obtained their drivers licenses. Northeast South West Central East 16.3 16.9 16.4 16.2 17.1 16.1 16.5 16.5 16.6 17.2 16.4 16.4 16.6 16.5 16.6 16.5 16.2 16.1 16.4 16.8 $\bar{x} =$ ________ ________ ________ ________ ________ $s^{2} =$ ________ ________ ________ ________ ________

$H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4} = \mu_{5}$ $H_{a}$: At least any two of the group means $\mu_{1}, \mu_{2}, \dotsc, \mu_{5}$ are not equal.

Q 13.3.1 degrees of freedom – numerator: $df(\text{num}) =$ _________

Q 13.3.2 degrees of freedom – denominator: $df(\text{denom}) =$ ________

S 13.3.2 $df(\text{denom}) = 15$

Q 13.3.3 $F$ statistic = ________

13.4: Facts About the F Distribution

Exercise 13.4.4 An $F$ statistic can have what values?

Exercise 13.4.5 What happens to the curves as the degrees of freedom for the numerator and the denominator get larger? Answer The curves approximate the normal distribution.

Use the following information to answer the next seven exercises.
Five basketball teams took a random sample of players regarding how high each player can jump (in inches). The results are shown in Table. Team 1 Team 2 Team 3 Team 4 Team 5 36 32 48 38 41 42 35 50 44 39 51 38 39 46 40

Exercise 13.4.6 What is the $df(\text{num})$?

Exercise 13.4.7 What is the $df(\text{denom})$? Answer ten

Exercise 13.4.8 What are the Sum of Squares and Mean Squares Factors?

Exercise 13.4.9 What are the Sum of Squares and Mean Squares Errors? Answer $SS = 237.33; MS = 23.73$

Exercise 13.4.10 What is the $F$ statistic?

Exercise 13.4.11 What is the $p\text{-value}$? Answer 0.1614

Exercise 13.4.12 At the 5% significance level, is there a difference in the mean jump heights among the teams?

Use the following information to answer the next seven exercises. A video game developer is testing a new game on three different groups. Each group represents a different target market for the game. The developer collects scores from a random sample from each group. The results are shown in Table Group A Group B Group C 101 151 101 108 149 109 98 160 198 107 112 186 111 126 160

Exercise 13.4.13 What is the $df(\text{num})$? Answer two

Exercise 13.4.14 What is the $df(\text{denom})$?

Exercise 13.4.15 What are the $SS_{\text{between}}$ and $MS_{\text{between}}$? Answer $SS_{\text{between}} = 5,700.4$; $MS_{\text{between}} = 2,850.2$

Exercise 13.4.16 What are the $SS_{\text{within}}$ and $MS_{\text{within}}$?

Exercise 13.4.17 What is the $F$ Statistic? Answer 3.6101

Exercise 13.4.18 What is the $p\text{-value}$?

Exercise 13.4.19 At the 10% significance level, are the scores among the different groups different? Answer Yes, there is enough evidence to show that the scores among the groups are statistically significant at the 10% level.

Use the following information to answer the next three exercises. Suppose a group is interested in determining whether teenagers obtain their drivers licenses at approximately the same average age across the country. Suppose that the following data are randomly collected from four teenagers in each region of the country. The numbers represent the age at which teenagers obtained their drivers licenses. Northeast South West Central East 16.3 16.9 16.4 16.2 17.1 16.1 16.5 16.5 16.6 17.2 16.4 16.4 16.6 16.5 16.6 16.5 16.2 16.1 16.4 16.8 $\bar{x} =$ ________ ________ ________ ________ ________ $s^{2} =$ ________ ________ ________ ________ ________

Enter the data into your calculator or computer.

Exercise 13.4.20 $p\text{-value} =$ ______

State the decisions and conclusions (in complete sentences) for the following preconceived levels of $\alpha$.

Exercise 13.4.21 $\alpha = 0.05$ 1. Decision: ____________________________ 2. Conclusion: ____________________________

Exercise 13.4.22 $\alpha = 0.01$ 1. Decision: ____________________________ 2. Conclusion: ____________________________

Use the following information to answer the next eight exercises. Groups of men from three different areas of the country are to be tested for mean weight. The entries in the table are the weights for the different groups. The one-way $ANOVA$ results are shown in Table. Group 1 Group 2 Group 3 216 202 170 198 213 165 240 284 182 187 228 197 176 210 201

Exercise 13.3.2 What is the Sum of Squares Factor? Answer 4,939.2

Exercise 13.3.3 What is the Sum of Squares Error?

Exercise 13.3.4 What is the $df$ for the numerator? Answer 2

Exercise 13.3.5 What is the $df$ for the denominator?

Exercise 13.3.6 What is the Mean Square Factor? Answer 2,469.6

Exercise 13.3.7 What is the Mean Square Error?
Exercise 13.3.8 What is the $F$ statistic? Answer 3.7416

Use the following information to answer the next eight exercises. Girls from four different soccer teams are to be tested for mean goals scored per game. The entries in the table are the goals per game for the different teams. The one-way $ANOVA$ results are shown in Table. Team 1 Team 2 Team 3 Team 4 1 2 0 3 2 3 1 4 0 2 1 4 3 4 0 3 2 4 0 2

Exercise 13.3.9 What is $SS_{\text{between}}$?

Exercise 13.3.10 What is the $df$ for the numerator? Answer 3

Exercise 13.3.11 What is $MS_{\text{between}}$?

Exercise 13.3.12 What is $SS_{\text{within}}$? Answer 13.2

Exercise 13.3.13 What is the $df$ for the denominator?

Exercise 13.3.14 What is $MS_{\text{within}}$? Answer 0.825

Exercise 13.3.15 What is the $F$ statistic?

Exercise 13.3.16 Judging by the $F$ statistic, do you think it is likely or unlikely that you will reject the null hypothesis? Answer Because a one-way $ANOVA$ test is always right-tailed, a high $F$ statistic corresponds to a low $p\text{-value}$, so it is likely that we will reject the null hypothesis.

DIRECTIONS Use a solution sheet to conduct the following hypothesis tests. The solution sheet can be found in [link].

Q 13.4.1 Three students, Linda, Tuan, and Javier, are given five laboratory rats each for a nutritional experiment. Each rat's weight is recorded in grams. Linda feeds her rats Formula A, Tuan feeds his rats Formula B, and Javier feeds his rats Formula C. At the end of a specified time period, each rat is weighed again, and the net gain in grams is recorded. Using a significance level of 10%, test the hypothesis that the three formulas produce the same mean weight gain. Weights of Student Lab Rats Linda's rats Tuan's rats Javier's rats 43.5 47.0 51.2 39.4 40.5 40.9 41.3 38.9 37.9 46.0 46.3 45.0 38.2 44.2 48.6

1. $H_{0}: \mu_{L} = \mu_{T} = \mu_{J}$ 2. At least any two of the means are different 3. $df(\text{num}) = 2; df(\text{denom}) = 12$ 4. $F$ distribution 5. 0.67 6. 0.5305 7. Check student’s solution. 8. Decision: Do not reject null hypothesis; Conclusion: There is insufficient evidence to conclude that the means are different.

Q 13.4.2 A grassroots group opposed to a proposed increase in the gas tax claimed that the increase would hurt working-class people the most, since they commute the farthest to work. Suppose that the group randomly surveyed 24 individuals and asked them their daily one-way commuting mileage. The results are in Table. Using a 5% significance level, test the hypothesis that the three mean commuting mileages are the same. working-class professional (middle incomes) professional (wealthy) 17.8 16.5 8.5 26.7 17.4 6.3 49.4 22.0 4.6 9.4 7.4 12.6 65.4 9.4 11.0 47.1 2.1 28.6 19.5 6.4 15.4 51.2 13.9 9.3

Q 13.4.3 Examine the seven practice laps from [link]. Determine whether the mean lap time is statistically the same for the seven practice laps, or if there is at least one lap that has a different mean time from the others.

S 13.4.3 1. $H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4} = \mu_{5} = \mu_{6} = \mu_{7}$ 2. At least two mean lap times are different. 3. $df(\text{num}) = 6; df(\text{denom}) = 98$ 4. $F$ distribution 5. 1.69 6. 0.1319 7. Check student’s solution. 8. Decision: Do not reject null hypothesis; Conclusion: There is insufficient evidence to conclude that the mean lap times are different.

Use the following information to answer the next two exercises. Table lists the number of pages in four different types of magazines.
home decorating news health computer 172 87 82 104 286 94 153 136 163 123 87 98 205 106 103 207 197 101 96 146

Q 13.4.4 Using a significance level of 5%, test the hypothesis that the four magazine types have the same mean length.

Q 13.4.5 Eliminate one magazine type that you now feel has a mean length different from the others. Redo the hypothesis test, testing that the remaining three means are statistically the same. Use a new solution sheet. Based on this test, are the mean lengths for the remaining three magazines statistically the same?

S 13.4.6 1. $H_{0}: \mu_{d} = \mu_{n} = \mu_{h}$ 2. At least any two of the magazines have different mean lengths. 3. $df(\text{num}) = 2, df(\text{denom}) = 12$ 4. $F$ distribution 5. $F = 15.28$ 6. $p\text{-value} = 0.001$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the Null Hypothesis. 3. Reason for decision: $p\text{-value} < \alpha$ 4. Conclusion: There is sufficient evidence to conclude that the mean lengths of the magazines are different.

Q 13.4.7 A researcher wants to know if the mean times (in minutes) that people watch their favorite news station are the same. Suppose that Table shows the results of a study. CNN FOX Local 45 15 72 12 43 37 18 68 56 38 50 60 23 31 51 35 22 Assume that all distributions are normal, the three population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05.

Q 13.4.8 Are the means for the final exams the same for all statistics class delivery types? Table shows the scores on final exams from several randomly selected classes that used the different delivery types. Online Hybrid Face-to-Face 72 83 80 84 73 78 77 84 84 80 81 81 81   86 79 82 Assume that all distributions are normal, the three population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05.

S 13.4.8 1. $H_{0}: \mu_{o} = \mu_{h} = \mu_{f}$ 2. At least two of the means are different. 3. $df(\text{n}) = 2, df(\text{d}) = 13$ 4. $F_{2,13}$ 5. 0.64 6. 0.5437 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: The mean scores of the different class delivery types are not different.

Q 13.4.9 Are the mean number of times a month a person eats out the same for whites, blacks, Hispanics and Asians? Suppose that Table shows the results of a study. White Black Hispanic Asian 6 4 7 8 8 1 3 3 2 5 5 5 4 2 4 1 6   6 7 Assume that all distributions are normal, the four population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05.

Q 13.4.10 Are the mean numbers of daily visitors to a ski resort the same for the three types of snow conditions? Suppose that Table shows the results of a study. Powder Machine Made Hard Packed 1,210 2,107 2,846 1,080 1,149 1,638 1,537 862 2,019 941 1,870 1,178 1,528 2,233 1,382 Assume that all distributions are normal, the three population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05.

S 13.4.11 1. $H_{0}: \mu_{p} = \mu_{m} = \mu_{h}$ 2. At least any two of the means are different. 3. $df(\text{n}) = 2, df(\text{d}) = 12$ 4. $F_{2,12}$ 5. 3.13 6. 0.0807 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3.
Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: There is not sufficient evidence to conclude that the mean numbers of daily visitors are different.

Q 13.4.12 Sanjay made identical paper airplanes out of three different weights of paper, light, medium and heavy. He made four airplanes from each of the weights, and launched them himself across the room. Here are the distances (in meters) that his planes flew. Paper Type/Trial Trial 1 Trial 2 Trial 3 Trial 4 Heavy 5.1 meters 3.1 meters 4.7 meters 5.3 meters Medium 4 meters 3.5 meters 4.5 meters 6.1 meters Light 3.1 meters 3.3 meters 2.1 meters 1.9 meters

Figure 13.4.1.

1. Take a look at the data in the graph. Look at the spread of data for each group (light, medium, heavy). Does it seem reasonable to assume a normal distribution with the same variance for each group? Yes or No. 2. Why is this a balanced design? 3. Calculate the sample mean and sample standard deviation for each group. 4. Does the weight of the paper have an effect on how far the plane will travel? Use a 1% level of significance. Complete the test using the method shown in the bean plant example. • variance of the group means __________ • $MS_{\text{between}} =$ ___________ • mean of the three sample variances ___________ • $MS_{\text{within}} =$ _____________ • $F$ statistic = ____________ • $df(\text{num}) =$ __________, $df(\text{denom}) =$ ___________ • number of groups _______ • number of observations _______ • $p\text{-value} =$ __________ ($P(F >$ _______$) =$ __________) • Graph the $p\text{-value}$. • decision: _______________________ • conclusion: _______________________________________________________________

Q 13.4.13 DDT is a pesticide that has been banned from use in the United States and most other areas of the world. It is quite effective, but persisted in the environment and over time became seen as harmful to higher-level organisms. Famously, egg shells of eagles and other raptors were believed to be thinner and prone to breakage in the nest because of ingestion of DDT in the food chain of the birds. An experiment was conducted on the number of eggs (fecundity) laid by female fruit flies. There are three groups of flies. One group was bred to be resistant to DDT (the RS group). Another was bred to be especially susceptible to DDT (SS). Finally there was a control line of non-selected or typical fruitflies (NS). Here are the data: RS SS NS RS SS NS 12.8 38.4 35.4 22.4 23.1 22.6 21.6 32.9 27.4 27.5 29.4 40.4 14.8 48.5 19.3 20.3 16 34.4 23.1 20.9 41.8 38.7 20.1 30.4 34.6 11.6 20.3 26.4 23.3 14.9 19.7 22.3 37.6 23.7 22.9 51.8 22.6 30.2 36.9 26.1 22.5 33.8 29.6 33.4 37.3 29.5 15.1 37.9 16.4 26.7 28.2 38.6 31 29.5 20.3 39 23.4 44.4 16.9 42.4 29.3 12.8 33.7 23.2 16.1 36.6 14.9 14.6 29.2 23.6 10.8 47.4 27.3 12.2 41.7 The values are the average number of eggs laid daily for each of 75 flies (25 in each group) over the first 14 days of their lives. Using a 1% level of significance, are the mean rates of egg selection for the three strains of fruitfly different? If so, in what way? Specifically, the researchers were interested in whether or not the selectively bred strains were different from the nonselected line, and whether the two selected lines were different from each other. Here is a chart of the three groups:

S 13.4.13 The data appear normally distributed from the chart and of similar spread.
There do not appear to be any serious outliers, so we may proceed with our ANOVA calculations, to see if we have good evidence of a difference between the three groups. $H_{0}: \mu_{1} = \mu_{2} = \mu_{3}$; $H_{a}: \mu_{i} \neq \mu_{j}$ some $i \neq j$. Define $\mu_{1}, \mu_{2}, \mu_{3}$, as the population mean number of eggs laid by the three groups of fruit flies. $F$ statistic $= 8.6657$; $p\text{-value} = 0.0004$ Decision: Since the $p\text{-value}$ is less than the level of significance of 0.01, we reject the null hypothesis. Conclusion: We have good evidence that the average number of eggs laid during the first 14 days of life for these three strains of fruitflies are different. Interestingly, if you perform a two sample $t$-test to compare the RS and NS groups they are significantly different ($p = 0.0013$). Similarly, SS and NS are significantly different ($p = 0.0006$). However, the two selected groups, RS and SS are not significantly different ($p = 0.5176$). Thus we appear to have good evidence that selection either for resistance or for susceptibility involves a reduced rate of egg production (for these specific strains) as compared to flies that were not selected for resistance or susceptibility to DDT. Here, genetic selection has apparently involved a loss of fecundity. Q 13.4.14 The data shown is the recorded body temperatures of 130 subjects as estimated from available histograms. Traditionally we are taught that the normal human body temperature is 98.6 F. This is not quite correct for everyone. Are the mean temperatures among the four groups different? Calculate 95% confidence intervals for the mean body temperature in each group and comment about the confidence intervals. FL FH ML MH FL FH ML MH 96.4 96.8 96.3 96.9 98.4 98.6 98.1 98.6 96.7 97.7 96.7 97 98.7 98.6 98.1 98.6 97.2 97.8 97.1 97.1 98.7 98.6 98.2 98.7 97.2 97.9 97.2 97.1 98.7 98.7 98.2 98.8 97.4 98 97.3 97.4 98.7 98.7 98.2 98.8 97.6 98 97.4 97.5 98.8 98.8 98.2 98.8 97.7 98 97.4 97.6 98.8 98.8 98.3 98.9 97.8 98 97.4 97.7 98.8 98.8 98.4 99 97.8 98.1 97.5 97.8 98.8 98.9 98.4 99 97.9 98.3 97.6 97.9 99.2 99 98.5 99 97.9 98.3 97.6 98 99.3 99 98.5 99.2 98 98.3 97.8 98   99.1 98.6 99.5 98.2 98.4 97.8 98   99.1 98.6 98.2 98.4 97.8 98.3   99.2 98.7 98.2 98.4 97.9 98.4   99.4 99.1 98.2 98.4 98 98.4   99.9 99.3 98.2 98.5 98 98.6   100 99.4 98.2 98.6 98 98.6   100.8 13.5: Test of Two Variances Use the following information to answer the next two exercises. There are two assumptions that must be true in order to perform an $F$ test of two variances. Exercise 13.5.2 Name one assumption that must be true. Answer The populations from which the two samples are drawn are normally distributed. Exercise 13.5.3 What is the other assumption that must be true? Use the following information to answer the next five exercises. Two coworkers commute from the same building. They are interested in whether or not there is any variation in the time it takes them to drive to work. They each record their times for 20 commutes. The first worker’s times have a variance of 12.1. The second worker’s times have a variance of 16.9. The first worker thinks that he is more consistent with his commute times and that his commute time is shorter. Test the claim at the 10% level. Exercise 13.5.4 State the null and alternative hypotheses. Answer $H_{0}: \sigma_{1} = \sigma_{2}$ $H_{a}: \sigma_{1} < \sigma_{2}$ or $H_{0}: \sigma^{2}_{1} = \sigma^{2}_{2}$ $H_{a}: \sigma^{2}_{1} < \sigma^{2}_{2}$ Exercise 13.5.5 What is $s_{1}$ in this problem? 
Exercise 13.5.6 What is $s_{2}$ in this problem? Answer 4.11

Exercise 13.5.7 What is $n$?

Exercise 13.5.8 What is the $F$ statistic? Answer 0.7159

Exercise 13.5.9 What is the $p\text{-value}$?

Exercise 13.5.10 Is the claim accurate? Answer No, at the 10% level of significance, we do not reject the null hypothesis and state that the data do not show that the variation in drive times for the first worker is less than the variation in drive times for the second worker.

Use the following information to answer the next four exercises. Two students are interested in whether or not there is variation in their test scores for math class. There are 15 total math tests they have taken so far. The first student’s grades have a standard deviation of 38.1. The second student’s grades have a standard deviation of 22.5. The second student thinks his scores are lower.

Exercise 13.5.11 State the null and alternative hypotheses.

Exercise 13.5.12 What is the $F$ Statistic? Answer 2.8674

Exercise 13.5.13 What is the $p\text{-value}$?

Exercise 13.5.14 At the 5% significance level, do we reject the null hypothesis? Answer Reject the null hypothesis. There is enough evidence to say that the variance of the grades for the first student is higher than the variance in the grades for the second student.

Use the following information to answer the next three exercises. Two cyclists are comparing the variances of their overall paces going uphill. Each cyclist records his or her speeds going up 35 hills. The first cyclist has a variance of 23.8 and the second cyclist has a variance of 32.1. The cyclists want to see if their variances are the same or different.

Exercise 13.5.15 State the null and alternative hypotheses.

Exercise 13.5.16 What is the $F$ Statistic? Answer 0.7414

Exercise 13.5.17 At the 5% significance level, what can we say about the cyclists’ variances?

Q 13.5.1 Three students, Linda, Tuan, and Javier, are given five laboratory rats each for a nutritional experiment. Each rat’s weight is recorded in grams. Linda feeds her rats Formula A, Tuan feeds his rats Formula B, and Javier feeds his rats Formula C. At the end of a specified time period, each rat is weighed again and the net gain in grams is recorded. Linda's rats Tuan's rats Javier's rats 43.5 47.0 51.2 39.4 40.5 40.9 41.3 38.9 37.9 46.0 46.3 45.0 38.2 44.2 48.6 Determine whether or not the variance in weight gain is statistically the same among Javier’s and Linda’s rats. Test at a significance level of 10%.

S 13.5.1 1. $H_{0}: \sigma^{2}_{1} = \sigma^{2}_{2}$ 2. $H_{a}: \sigma^{2}_{1} \neq \sigma^{2}_{2}$ 3. $df(\text{num}) = 4; df(\text{denom}) = 4$ 4. $F_{4, 4}$ 5. 3.00 6. $2(0.1563) = 0.3126$. Using the TI-83+/84+ function 2-SampFtest, you get the test statistic as 2.9986 and p-value directly as 0.3127. If you input the lists in a different order, you get a test statistic of 0.3335 but the $p\text{-value}$ is the same because this is a two-tailed test. 7. Check student's solution. 8. Decision: Do not reject the null hypothesis; Conclusion: There is insufficient evidence to conclude that the variances are different.

Q 13.5.2 A grassroots group opposed to a proposed increase in the gas tax claimed that the increase would hurt working-class people the most, since they commute the farthest to work. Suppose that the group randomly surveyed 24 individuals and asked them their daily one-way commuting mileage. The results are as follows.
working-class professional (middle incomes) professional (wealthy) 17.8 16.5 8.5 26.7 17.4 6.3 49.4 22.0 4.6 9.4 7.4 12.6 65.4 9.4 11.0 47.1 2.1 28.6 19.5 6.4 15.4 51.2 13.9 9.3 Determine whether or not the variance in mileage driven is statistically the same among the working class and professional (middle income) groups. Use a 5% significance level. Q 13.5.3 Refer to the data from [link]. Examine practice laps 3 and 4. Determine whether or not the variance in lap time is statistically the same for those practice laps. Use the following information to answer the next two exercises. The following table lists the number of pages in four different types of magazines. home decorating news health computer 172 87 82 104 286 94 153 136 163 123 87 98 205 106 103 207 197 101 96 146 S 13.5.3 1. $H_{0}: \sigma^{2}_{1} = \sigma^{2}_{2}$ 2. $H_{a}: \sigma^{2}_{1} \neq \sigma^{2}_{2}$ 3. $df(\text{n}) = 19, df(\text{d}) = 19$ 4. $F_{19,19}$ 5. 1.13 6. 0.786 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: There is not sufficient evidence to conclude that the variances are different. Q 13.5.4 Which two magazine types do you think have the same variance in length? Q 13.5.5 Which two magazine types do you think have different variances in length? S 13.5.5 The answers may vary. Sample answer: Home decorating magazines and news magazines have different variances. Q 13.5.6 Is the variance for the amount of money, in dollars, that shoppers spend on Saturdays at the mall the same as the variance for the amount of money that shoppers spend on Sundays at the mall? Suppose that the following table shows the results of a study. Saturday Sunday Saturday Sunday 75 44 62 137 18 58 0 82 150 61 124 39 94 19 50 127 62 99 31 141 73 60 118 73 89 Q 13.5.7 Are the variances for incomes on the East Coast and the West Coast the same? Suppose that the following table shows the results of a study. Income is shown in thousands of dollars. Assume that both distributions are normal. Use a level of significance of 0.05. East West 38 71 47 126 30 42 82 51 75 44 52 90 115 88 67 S 13.5.7 1. $H_{0}: \sigma^{2}_{1} = \sigma^{2}_{2}$ 2. $H_{a}: \sigma^{2}_{1} \neq \sigma^{2}_{2}$ 3. $df(\text{n}) = 7, df(\text{d}) = 6$ 4. $F_{7,6}$ 5. 0.8117 6. 0.7825 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: There is not sufficient evidence to conclude that the variances are different. Q 13.5.8 Thirty men in college were taught a method of finger tapping. They were randomly assigned to three groups of ten, with each receiving one of three doses of caffeine: 0 mg, 100 mg, 200 mg. This is approximately the amount in no, one, or two cups of coffee. Two hours after ingesting the caffeine, the men had the rate of finger tapping per minute recorded. The experiment was double blind, so neither the recorders nor the students knew which group they were in. Does caffeine affect the rate of tapping, and if so how? Here are the data: 0 mg 100 mg 200 mg 0 mg 100 mg 200 mg 242 248 246 245 246 248 244 245 250 248 247 252 247 248 248 248 250 250 242 247 246 244 246 248 246 243 245 242 244 250 Q 13.5.9 King Manuel I Komnenus ruled the Byzantine Empire from Constantinople (Istanbul) during the years 1145 to 1180 A.D. The empire was very powerful during his reign, but declined significantly afterwards.
Coins minted during his era were found in Cyprus, an island in the eastern Mediterranean Sea. Nine coins were from his first coinage, seven from the second, four from the third, and seven from a fourth. These spanned most of his reign. We have data on the silver content of the coins: First Coinage Second Coinage Third Coinage Fourth Coinage 5.9 6.9 4.9 5.3 6.8 9.0 5.5 5.6 6.4 6.6 4.6 5.5 7.0 8.1 4.5 5.1 6.6 9.3   6.2 7.7 9.2   5.8 7.2 8.6   5.8 6.9 6.2 Did the silver content of the coins change over the course of Manuel’s reign? Here are the means and variances of each coinage. The data are unbalanced. First Second Third Fourth Mean 6.7444 8.2429 4.875 5.6143 Variance 0.2953 1.2095 0.2025 0.1314 S 13.5.9 Here is a strip chart of the silver content of the coins: While there are differences in spread, it is not unreasonable to use $ANOVA$ techniques. Here is the completed $ANOVA$ table: Source of Variation Sum of Squares ($SS$) Degrees of Freedom ($df$) Mean Square ($MS$) $F$ Factor (Between) 37.748 $4 - 1 = 3$ 12.5825 26.272 Error (Within) 11.015 $27 - 4 = 23$ 0.4789 Total 48.763 $27 - 1 = 26$ $P(F > 26.272) \approx 0$; Reject the null hypothesis for any alpha. There is sufficient evidence to conclude that the mean silver contents of the four coinages are not all the same. From the strip chart, it appears that the first and second coinages had higher silver contents than the third and fourth. Q 13.5.10 The American League and the National League of Major League Baseball are each divided into three divisions: East, Central, and West. In many years, fans talk about some divisions being stronger (having better teams) than other divisions. This may have consequences for the postseason. For instance, in 2012 Tampa Bay won 90 games and did not play in the postseason, while Detroit won only 88 and did play in the postseason. This may have been an oddity, but is there good evidence that in the 2012 season, the American League divisions were significantly different in overall records? Use the following data to test whether the mean numbers of wins per team in the three American League divisions were the same or not. Note that the data are not balanced, as two divisions had five teams, while one had only four. Division Team Wins East NY Yankees 95 East Baltimore 93 East Tampa Bay 90 East Toronto 73 East Boston 69 Division Team Wins Central Detroit 88 Central Chicago Sox 85 Central Kansas City 72 Central Cleveland 68 Central Minnesota 66 Division Team Wins West Oakland 94 West Texas 93 West LA Angels 89 West Seattle 75 S 13.5.10 Here is a strip chart of the number of wins for the 14 teams in the AL for the 2012 season. While the spread seems similar, there may be some question about the normality of the data, given the wide gaps in the middle near the 0.500 mark of 81 games (teams play 162 games each season in MLB). However, one-way $ANOVA$ is robust. Here is the $ANOVA$ table for the data: Source of Variation Sum of Squares ($SS$) Degrees of Freedom ($df$) Mean Square ($MS$) $F$ Factor (Between) 344.16 $3 - 1 = 2$ 172.08 1.5521 Error (Within) 1,219.55 $14 - 3 = 11$ 110.87 Total 1,563.71 $14 - 1 = 13$ $P(F > 1.5521) = 0.2548$ Since the $p\text{-value}$ is so large, there is not good evidence against the null hypothesis of equal means. We decline to reject the null hypothesis. Thus, for 2012, we do not have any good evidence of a significant difference in mean number of wins between the divisions of the American League.
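The one-way ANOVA in S 13.5.10 can be reproduced computationally. Here is a minimal sketch, assuming Python with SciPy; the list names are ours.

```python
from scipy.stats import f_oneway

# 2012 American League wins, grouped by division (data from Q 13.5.10)
east    = [95, 93, 90, 73, 69]
central = [88, 85, 72, 68, 66]
west    = [94, 93, 89, 75]

# One-way ANOVA; unbalanced group sizes (5, 5, and 4 teams) are allowed
F, p = f_oneway(east, central, west)

print(F, p)  # should agree with the table: F of about 1.5521, p of about 0.2548
```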
In this chapter we will introduce some basic terminology and lay the groundwork for the course. We will explain in general terms what statistics and probability are and the problems that these two areas of study are designed to solve. • 1.1: Basic Definitions and Concepts Statistics is a study of data: describing properties of data (descriptive statistics) and drawing conclusions about a population based on information in a sample (inferential statistics). The distinction between a population together with its parameters and a sample together with its statistics is a fundamental concept in inferential statistics. Information in a sample is used to make inferences about the population from which the sample was drawn. • 1.2: Overview Statistics computed from samples vary randomly from sample to sample. Conclusions made about population parameters are statements of probability. • 1.3: Presentation of Data In this book we will use two formats for presenting data sets. Data could be presented as the data list or in set notation. • 1.E: Introduction to Statistics (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 01: Introduction to Statistics Learning Objectives • To learn the basic definitions used in statistics and some of its key concepts. We begin with a simple example. There are millions of passenger automobiles in the United States. What is their average value? It is obviously impractical to attempt to solve this problem directly by assessing the value of every single car in the country, adding up all those values, and then dividing by the number of values, one for each car. In practice the best we can do would be to estimate the average value. A natural way to do so would be to randomly select some of the cars, say $200$ of them, ascertain the value of each of those cars, and find the average of those $200$ values. The set of all those millions of vehicles is called the population of interest, and the number attached to each one, its value, is a measurement. The average value is a parameter: a number that describes a characteristic of the population, in this case monetary worth. The set of $200$ cars selected from the population is called a sample, and the $200$ numbers, the monetary values of the cars we selected, are the sample data. The average of the data is called a statistic: a number calculated from the sample data. This example illustrates the meaning of the following definitions. Definitions: populations and samples A population is any specific collection of objects of interest. A sample is any subset or subcollection of the population, including the case that the sample consists of the whole population, in which case it is termed a census. Definitions: measurements and sample data A measurement is a number or attribute computed for each member of a population or of a sample. The measurements of sample elements are collectively called the sample data. Definitions: parameters and statistics A parameter is a number that summarizes some aspect of the population as a whole. A statistic is a number computed from the sample data. Continuing with our example, if the average value of the cars in our sample was $8,357$, then it seems reasonable to conclude that the average value of all cars is about $8,357$. In reasoning this way we have drawn an inference about the population based on information obtained from the sample.
In general, statistics is a study of data: describing properties of the data, which is called descriptive statistics, and drawing conclusions about a population of interest from information extracted from a sample, which is called inferential statistics. Computing the single number $8,357$ to summarize the data was an operation of descriptive statistics; using it to make a statement about the population was an operation of inferential statistics. Definition: Statistics Statistics is a collection of methods for collecting, displaying, analyzing, and drawing conclusions from data. Definition: Descriptive statistics Descriptive statistics is the branch of statistics that involves organizing, displaying, and describing data. Definition: Inferential statistics Inferential statistics is the branch of statistics that involves drawing conclusions about a population based on information contained in a sample taken from that population. The measurement made on each element of a sample need not be numerical. In the case of automobiles, what is noted about each car could be its color, its make, its body type, and so on. Such data are categorical or qualitative, as opposed to numerical or quantitative data such as value or age. This is a general distinction. Definition: Qualitative data Qualitative data are measurements for which there is no natural numerical scale, but which consist of attributes, labels, or other non-numerical characteristics. Definition: Quantitative data Quantitative data are numerical measurements that arise from a natural numerical scale. Qualitative data can generate numerical sample statistics. In the automobile example, for instance, we might be interested in the proportion of all cars that are less than six years old. In our same sample of $200$ cars we could note for each car whether it is less than six years old or not, which is a qualitative measurement. If $172$ of the $200$ cars in the sample are less than six years old, then the sample proportion is $172/200 = 0.86$, or $86\%$, and we would estimate the parameter of interest, the population proportion, to be about the same as the sample statistic, the sample proportion, that is, about $0.86$. The relationship between a population of interest and a sample drawn from that population is perhaps the most important concept in statistics, since everything else rests on it. This relationship is illustrated graphically in Figure $1$. The circles in the large box represent elements of the population. In the figure there was room for only a small number of them but in actual situations, like our automobile example, they could very well number in the millions. The solid black circles represent the elements of the population that are selected at random and that together form the sample. For each element of the sample there is a measurement of interest, denoted by a lower case $x$ (which we have indexed as $x_1 , \ldots, x_n$ to tell them apart); these measurements collectively form the sample data set. From the data we may calculate various statistics. To anticipate the notation that will be used later, we might compute the sample mean $\bar{x}$ and the sample proportion $\hat{p}$, and take them as approximations to the population mean $\mu$ (this is the lower case Greek letter mu, the traditional symbol for this parameter) and the population proportion $p$, respectively. The other symbols in the figure stand for other parameters and statistics that we will encounter.
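The relationship between a parameter and the statistic that estimates it can also be seen in a short simulation. The sketch below is illustrative only: the population of car values is simulated, since the real values are unknown, and all the numbers used are assumptions.

```python
import random

random.seed(1)

# A simulated population of one million car values (an assumption made
# purely for illustration; the true population values are unknown)
population = [random.gauss(8500, 2500) for _ in range(1_000_000)]
mu = sum(population) / len(population)   # the population mean, a parameter

# A simple random sample of n = 200 cars, as in the text
sample = random.sample(population, 200)
x_bar = sum(sample) / len(sample)        # the sample mean, a statistic

print(round(mu, 2), round(x_bar, 2))     # x_bar approximates mu
```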
Key Takeaway • Statistics is a study of data: describing properties of data (descriptive statistics) and drawing conclusions about a population based on information in a sample (inferential statistics). • The distinction between a population together with its parameters and a sample together with its statistics is a fundamental concept in inferential statistics. • Information in a sample is used to make inferences about the population from which the sample was drawn.
Learning Objectives • To obtain an overview of the material in the text. The example we gave in the first section seems fairly simple, but it illustrates some significant problems. We supposed that the $200$ cars of the sample had an average value of $8,357$ (a number that is precisely known), and concluded that the population has an average of about the same amount, although its precise value is still unknown. What would happen if someone else were to take another sample of exactly the same size from exactly the same population? Would he or she get the same sample average as we did, $8,357$? Almost surely not. In fact, if the investigator who took the second sample reported precisely the same value, we would immediately become suspicious of his result. The sample average is an example of what is called a random variable: a number that varies from trial to trial of an experiment (in this case, from sample to sample), and does so in a way that cannot be predicted precisely. Random variables will be a central object of study for us, beginning in Chapter 4. Another issue that arises is that different samples have different levels of reliability. We have supposed that our sample of size $200$ had an average of $8,357$. If a sample of size $1,000$ yielded an average value of $7,832$, then we would naturally regard this latter number as probably a better estimate of the average value of all cars, since it came from a larger sample. How can this be expressed? An important idea developed in Chapter 7 is the confidence interval: from the data we will construct an interval of values using a process that has a certain chance, say a $95\%$ chance, of generating an interval that contains the true population average. Thus, instead of reporting a single estimate, $8,357$, for the population mean we might say that, based on our sample data, we are $95\%$ certain that the true average is within $100$ of our sample mean, that is, we are $95\%$ certain that the true average is between $8,257$ and $8,457$. The number $100$ will be computed from the sample data just as the sample mean $8,357$ was. This "$95\%$ confidence interval" will automatically indicate the reliability of the estimate that we obtained from the sample. Moreover, to obtain the same chance of containing the unknown parameter, a large sample will typically produce a shorter interval than a small sample will. Thus large samples usually give more accurate results. Unless we perform a census, which is a "sample" that includes the entire population, we can never be completely sure of the exact average value of the population. The best that we can do is to make statements of probability, an important concept that we will begin to study formally in Chapter 3. Sampling may be done not only to estimate a population parameter, but to test a claim that is made about that parameter. Suppose a food package asserts that the amount of sugar in one serving of the product is $14$ grams. A consumer group might suspect that it actually contains more. How would they test the competing claims about the amount of sugar, "$14$ grams" versus "more than $14$ grams"? They might take a random sample of perhaps $20$ food packages, measure the amount of sugar in one serving of each one, and average those amounts. They are not interested in measuring the average amount of sugar in a serving for its own sake; their interest is simply whether the claim about the true amount is accurate.
Stated another way, they are sampling not in order to estimate the average amount of sugar in one serving, but to see whether that amount, whatever it may be, is larger than $14$ grams. Again because one can have certain knowledge only by taking a census, ideas of probability enter into the analysis. We will examine tests of hypotheses beginning in Chapter 8. Several times in this introduction we have used the term “random sample.” Generally the value of our data is only as good as the sample that produced it. For example, suppose we wish to estimate the proportion of all students at a large university who are females, which we denote by $p$. If we select $50$ students at random and $27$ of them are female, then a natural estimate is $p \approx \hat{p} = 27/50 = 0.54$ or $54\%$. How much confidence we can place in this estimate depends not only on the size of the sample, but on its quality, whether or not it is truly random, or at least truly representative of the whole population. If all $50$ students in our sample were drawn from a College of Nursing, then the proportion of female students in the sample is likely higher than that of the entire campus. If all $50$ students were selected from a College of Engineering Sciences, then the proportion of students in the entire student body who are females could be underestimated. In either case, the estimate would be distorted or biased. In statistical practice an unbiased sampling scheme is important but in most cases not easy to produce. For this introductory course we will assume that all samples are either random or at least representative. Key Takeaway • Statistics computed from samples vary randomly from sample to sample. Conclusions made about population parameters are statements of probability. 1.03: Presentation of Data Learning Objectives • To learn two ways that data will be presented in the text. In this book we will use two formats for presenting data sets. The first is a data list, which is an explicit listing of all the individual measurements, either as a display with space between the individual measurements, or in set notation with individual measurements separated by commas. Example $1$ The data obtained by measuring the age of $21$ randomly selected students enrolled in freshman courses at a university could be presented as the data list: $\begin{array}{ccccccccccc}18 & 18 & 19 & 19 & 19 & 18 & 22 & 20 & 18 & 18 & 17 \ 19 & 18 & 24 & 18 & 20 & 18 & 21 & 20 & 17 & 19 &\end{array} \nonumber$ or in set notation as: $\{18,18,19,19,19,18,22,20,18,18,17,19,18,24,18,20,18,21,20,17,19\} \nonumber$ A data set can also be presented by means of a data frequency table, a table in which each distinct value $x$ is listed in the first row and its frequency $f$, which is the number of times the value $x$ appears in the data set, is listed below it in the second row. Example $2$ The data set of the previous example is represented by the data frequency table $\begin{array}{c|ccccccc}x & 17 & 18 & 19 & 20 & 21 & 22 & 24 \ \hline f & 2 & 8 & 5 & 3 & 1 & 1 & 1\end{array} \nonumber$ The data frequency table is especially convenient when data sets are large and the number of distinct values is not too large.
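For readers who like to check such tables by machine, a data frequency table takes only a few lines of Python using the standard library; this sketch uses the ages from Example 1.

```python
from collections import Counter

# Ages of the 21 randomly selected students from Example 1
ages = [18, 18, 19, 19, 19, 18, 22, 20, 18, 18, 17,
        19, 18, 24, 18, 20, 18, 21, 20, 17, 19]

# Each distinct value x paired with its frequency f, as in Example 2
freq = Counter(ages)
for x in sorted(freq):
    print(x, freq[x])   # prints 17 2, 18 8, 19 5, 20 3, 21 1, 22 1, 24 1
```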
Key Takeaway • Data sets can be presented either by listing all the elements or by giving a table of values and frequencies.
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 1.1: Basic Definitions and Concepts Questions 1. Explain what is meant by the term population. 2. Explain what is meant by the term sample. 3. Explain how a sample differs from a population. 4. Explain what is meant by the term sample data. 5. Explain what a parameter is. 6. Explain what a statistic is. 7. Give an example of a population and two different characteristics that may be of interest. 8. Describe the difference between descriptive statistics and inferential statistics. Illustrate with an example. 9. Identify each of the following data sets as either a population or a sample: 1. The grade point averages (GPAs) of all students at a college. 2. The GPAs of a randomly selected group of students on a college campus. 3. The ages of the nine Supreme Court Justices of the United States on $\text {January}\: 1, 1842$. 4. The gender of every second customer who enters a movie theater. 5. The lengths of Atlantic croakers caught on a fishing trip to the beach. 10. Identify the following measures as either quantitative or qualitative: 1. The $30$ high-temperature readings of the last $30$ days. 2. The scores of $40$ students on an English test. 3. The blood types of $120$ teachers in a middle school. 4. The last four digits of social security numbers of all students in a class. 5. The numbers on the jerseys of $53$ football players on a team. 11. Identify the following measures as either quantitative or qualitative: 1. The genders of the first $40$ newborns in a hospital one year. 2. The natural hair color of $20$ randomly selected fashion models. 3. The ages of $20$ randomly selected fashion models. 4. The fuel economy in miles per gallon of $20$ new cars purchased last month. 5. The political affiliation of $500$ randomly selected voters. 12. A researcher wishes to estimate the average amount spent per person by visitors to a theme park. He takes a random sample of forty visitors and obtains an average of $28$ per person. 1. What is the population of interest? 2. What is the parameter of interest? 3. Based on this sample, do we know the average amount spent per person by visitors to the park? Explain fully. 13. A researcher wishes to estimate the average weight of newborns in South America in the last five years. He takes a random sample of $235$ newborns and obtains an average of $3.27$ kilograms. 1. What is the population of interest? 2. What is the parameter of interest? 3. Based on this sample, do we know the average weight of newborns in South America? Explain fully. 14. A researcher wishes to estimate the proportion of all adults who own a cell phone. He takes a random sample of $1,572$ adults; $1,298$ of them own a cell phone, hence $1298∕1572 ≈ .83$ or about $83\%$ own a cell phone. 1. What is the population of interest? 2. What is the parameter of interest? 3. What is the statistic involved? 4. Based on this sample, do we know the proportion of all adults who own a cell phone? Explain fully. 15. A sociologist wishes to estimate the proportion of all adults in a certain region who have never married.
In a random sample of $1,320$ adults, $145$ have never married, hence $145∕1320 ≈ .11$ or about $11\%$ have never married. 1. What is the population of interest? 2. What is the parameter of interest? 3. What is the statistic involved? 4. Based on this sample, do we know the proportion of all adults who have never married? Explain fully. 1. What must be true of a sample if it is to give a reliable estimate of the value of a particular population parameter? 2. What must be true of a sample if it is to give certain knowledge of the value of a particular population parameter? Answers 1. A population is the total collection of objects that are of interest in a statistical study. 2. A sample, being a subset, is typically smaller than the population. In a statistical study, all elements of a sample are available for observation, which is not typically the case for a population. 3. A parameter is a value describing a characteristic of a population. In a statistical study the value of a parameter is typically unknown. 4. All currently registered students at a particular college form a population. Two population characteristics of interest could be the average GPA and the proportion of students over $23$ years. 1. Population. 2. Sample. 3. Population. 4. Sample. 5. Sample. 1. Qualitative. 2. Qualitative. 3. Quantitative. 4. Quantitative. 5. Qualitative. 1. All newborn babies in South America in the last five years. 2. The average birth weight of all newborn babies in South America in the last five years. 3. No, not exactly, but we know the approximate value of the average. 1. All adults in the region. 2. The proportion of the adults in the region who have never married. 3. The proportion computed from the sample, $0.1$. 4. No, not exactly, but we know the approximate value of the proportion.
Statistics naturally divides into two branches, descriptive statistics and inferential statistics. Our main interest is in inferential statistics: trying to infer from the data what the population might think, or evaluating the probability that an observed difference between groups is a dependable one rather than one that might have happened by chance in this study. Nevertheless, the starting point for dealing with a collection of data is to organize, display, and summarize it effectively. These are the objectives of descriptive statistics, the topic of this chapter. • 2.1: Three Popular Data Displays Graphical representations of large data sets provide a quick overview of the nature of the data. A population or a very large data set may be represented by a smooth curve. This curve is a very fine relative frequency histogram in which the exceedingly narrow vertical bars have been omitted. When a curve derived from a relative frequency histogram is used to describe a data set, the proportion of data with values between two numbers a and b is the area under the curve between a and b, as illustrated in Figure $6$ of that section. • 2.2: Measures of Central Location - Three Kinds of Averages The mean, the median, and the mode each answer the question “Where is the center of the data set?” The nature of the data set, as indicated by a relative frequency histogram, determines which one gives the best answer. • 2.3: Measures of Variability The range, the standard deviation, and the variance each give a quantitative answer to the question “How variable are the data?” • 2.4: Relative Position of Data The percentile rank and z-score of a measurement indicate its relative position with regard to the other measurements in a data set. The three quartiles divide a data set into fourths. The five-number summary and its associated box plot summarize the location and distribution of the data. • 2.5: The Empirical Rule and Chebyshev's Theorem The Empirical Rule is an approximation that applies only to data sets with a bell-shaped relative frequency histogram. It estimates the proportion of the measurements that lie within one, two, and three standard deviations of the mean. Chebyshev’s Theorem is a fact that applies to all possible data sets. It describes the minimum proportion of the measurements that must lie within one, two, or more standard deviations of the mean. • 2.E: Descriptive Statistics (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 02: Descriptive Statistics Learning Objectives • To learn to interpret the meaning of three graphical representations of sets of data: stem and leaf diagrams, frequency histograms, and relative frequency histograms. A well-known adage is that “a picture is worth a thousand words.” This saying proves true when it comes to presenting statistical information in a data set. There are many effective ways to present data graphically. The three graphical tools that are introduced in this section are among the most commonly used and are relevant to the subsequent presentation of the material in this book.
Stem and Leaf Diagrams Suppose $30$ students in a statistics class took a test and made the following scores: $\begin{array}{r}86 & 80 & 25 & 77 & 73 & 76 & 100 & 90 & 69 & 93 \ 90 & 83 & 70 & 73 & 73 & 70 & 90 & 83 & 71 & 95 \ 40 & 58 & 68 & 69 & 100 & 78 & 87 & 97 & 92 & 74\end{array} \nonumber$ How did the class do on the test? A quick glance at the set of $30$ numbers does not immediately give a clear answer. However the data set may be reorganized and rewritten to make relevant information more visible. One way to do so is to construct a stem and leaf diagram as shown in Figure $1$. The numbers in the tens place, from $2$ through $9$, and additionally the number $10$, are the “stems,” and are arranged in numerical order from top to bottom to the left of a vertical line. The number in the units place in each measurement is a “leaf,” and is placed in a row to the right of the corresponding stem, the number in the tens place of that measurement. Thus the three leaves $9, 8, \text{and} \; 9$ in the row headed with the stem $6$ correspond to the three exam scores in the $60s, 69$ (in the first row of data), $68$ (in the third row), and $69$ (also in the third row). The display is made even more useful for some purposes by rearranging the leaves in numerical order, as shown in Figure $2$. Either way, with the data reorganized certain information of interest becomes apparent immediately. There are two perfect scores; three students made scores under $60$; most students scored in the $70s, 80s\; \text{and} \; 90s$; and the overall average is probably in the high $70s\; \text{or low}\; 80s$. In this example the scores have a natural stem (the tens place) and leaf (the ones place). One could spread the diagram out by splitting each tens place number into lower and upper categories. For example, all the scores in the $80s$ may be represented on two separate stems, lower $80s$ and upper $80s$: $\begin{array}{r|lcc}8 & 0 & 3 & 3 \ 8 & 6 & 7 &\end{array} \nonumber$ The definitions of stems and leaves are flexible in practice. The general purpose of a stem and leaf diagram is to provide a quick display of how the data are distributed across the range of their values; some improvisation could be necessary to obtain a diagram that best meets that goal. Note that all of the original data can be recovered from the stem and leaf diagram. This will not be true in the next two types of graphical displays. Frequency Histograms The stem and leaf diagram is not practical for large data sets, so we need a different, purely graphical way to represent data. A frequency histogram is such a device. We will illustrate it using the same data set from the previous subsection. For the $30$ scores on the exam, it is natural to group the scores on the standard ten-point scale, and count the number of scores in each group. Thus there are two $100s$, seven scores in the $90s$, five in the $80s$, and so on. We then construct the diagram shown in Figure $3$ by drawing for each group, or class, a vertical bar whose length is the number of observations in that group. In our example, the bar labeled $100$ is $2$ units long, the bar labeled $90$ is $7$ units long, and so on. While the individual data values are lost, we know the number in each class. This number is called the frequency of the class, hence the name frequency histogram. The same procedure can be applied to any collection of numerical data. Observations are grouped into several classes and the frequency (the number of observations) of each class is noted.
These classes are arranged and indicated in order on the horizontal axis (called the x-axis), and for each group a vertical bar, whose length is the number of observations in that group, is drawn. The resulting display is a frequency histogram for the data. The similarity in Figure $1$ and Figure $3$ is apparent, particularly if you imagine turning the stem and leaf diagram on its side by rotating it a quarter turn counterclockwise. In general, the definition of the classes in the frequency histogram is flexible. The general purpose of a frequency histogram is very much the same as that of a stem and leaf diagram, to provide a graphical display that gives a sense of data distribution across the range of values that appear. We will not discuss the process of constructing a histogram from data since in actual practice it is done automatically with statistical software or even handheld calculators. Relative Frequency Histograms In our example of the exam scores in a statistics class, five students scored in the $80s$. The number $5$ is the frequency of the group labeled “$80s$.” Since there are $30$ students in the entire statistics class, the proportion who scored in the $80s$ is $5/30$. The number $5/30$, which could also be expressed as $0.1\bar{6} \approx 0.1667$, or as $16.67\%$, is the relative frequency of the group labeled “$80s$.” Every group (the $70s$, the $80s$, and so on) has a relative frequency. We can thus construct a diagram by drawing for each group, or class, a vertical bar whose length is the relative frequency of that group. For example, the bar for the $80s$ will have length $5/30$ unit, not $5$ units. The diagram is a relative frequency histogram for the data, and is shown in Figure $4$. It is exactly the same as the frequency histogram except that the vertical axis in the relative frequency histogram is not frequency but relative frequency. The same procedure can be applied to any collection of numerical data. Classes are selected, the relative frequency of each class is noted, the classes are arranged and indicated in order on the horizontal axis, and for each class a vertical bar, whose length is the relative frequency of the class, is drawn. The resulting display is a relative frequency histogram for the data. A key point is that now if each vertical bar has width $1$ unit, then the total area of all the bars is $1$ or $100\%$. Although the histograms in Figure $3$ and Figure $4$ have the same appearance, the relative frequency histogram is more important for us, and it will be relative frequency histograms that will be used repeatedly to represent data in this text. To see why this is so, reflect on what it is that you are actually seeing in the diagrams that quickly and effectively communicates information to you about the data. It is the relative sizes of the bars. The bar labeled “$70s$” in either figure takes up $1/3$ of the total area of all the bars, and although we may not think of this consciously, we perceive the proportion $1/3$ in the figures, indicating that a third of the grades were in the $70s$. The relative frequency histogram is important because the labeling on the vertical axis reflects what is important visually: the relative sizes of the bars. When the size $n$ of a sample is small, only a few classes can be used in constructing a relative frequency histogram. Such a histogram might look something like the one in panel (a) of Figure $5$.
If the sample size $n$ were increased, then more classes could be used in constructing a relative frequency histogram and the vertical bars of the resulting histogram would be finer, as indicated in panel (b) of Figure $5$. For a very large sample the relative frequency histogram would look very fine, like the one in (c) of Figure $5$. If the sample size were to increase indefinitely then the corresponding relative frequency histogram would be so fine that it would look like a smooth curve, such as the one in panel (d) of Figure $5$. It is common in statistics to represent a population or a very large data set by a smooth curve. It is good to keep in mind that such a curve is actually just a very fine relative frequency histogram in which the exceedingly narrow vertical bars have disappeared. Because the area of each such vertical bar is the proportion of the data that lies in the interval of numbers over which that bar stands, this means that for any two numbers $a$ and $b$, the proportion of the data that lies between the two numbers $a$ and $b$ is the area under the curve that is above the interval ($a,b$) in the horizontal axis. This is the area shown in Figure $6$. In particular the total area under the curve is $1$, or $100\%$. Key Takeaway • Graphical representations of large data sets provide a quick overview of the nature of the data. • A population or a very large data set may be represented by a smooth curve. This curve is a very fine relative frequency histogram in which the exceedingly narrow vertical bars have been omitted. • When a curve derived from a relative frequency histogram is used to describe a data set, the proportion of data with values between two numbers $a$ and $b$ is the area under the curve between $a$ and $b$, as illustrated in Figure $6$.
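The class frequencies and relative frequencies behind the histograms of this section can be tabulated directly. Here is a minimal sketch in Python, assuming the scores are grouped on the standard ten-point scale with $100$ as its own class:

```python
from collections import Counter

# The 30 exam scores from this section
scores = [86, 80, 25, 77, 73, 76, 100, 90, 69, 93,
          90, 83, 70, 73, 73, 70, 90, 83, 71, 95,
          40, 58, 68, 69, 100, 78, 87, 97, 92, 74]

# Assign each score to its ten-point class (100 kept as a separate class)
classes = Counter(100 if s == 100 else 10 * (s // 10) for s in scores)

n = len(scores)
for c in sorted(classes):
    print(c, classes[c], classes[c] / n)  # class, frequency, relative frequency
```

The relative frequencies printed for the classes sum to $1$, which is exactly the total-area property of the relative frequency histogram noted above.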
Learning Objectives • To learn the concept of the “center” of a data set. • To learn the meaning of each of three measures of the center of a data set—the mean, the median, and the mode—and how to compute each one. This section could be titled “three kinds of averages” because any kind of average could be used to answer the question "where is the center of the data?". We will see that the nature of the data set, as indicated by a relative frequency histogram, will determine what constitutes a good answer. Different shapes of the histogram call for different measures of central location. The Mean The first measure of central location is the usual “average” that is familiar to everyone: add up all the values, then divide by the number of values. Before writing a formula for the mean let us introduce some handy mathematical notation. notations: $\sum$ "sum" and $n$ "sample size" The Greek letter $\sum$, pronounced "sigma", is a handy mathematical shorthand that stands for "add up all the values" or "sum". For example $\sum x$ means "add up all the values of $x$", and $\sum x^2$ means "add up all the values of $x^2$". In these expressions $x$ usually stands for a value of the data, so $\sum x$ stands for "the sum of all the data values" and $\sum x^2$ means "the sum of the squares of all the data values". $\mathbf{n}$ stands for the sample size, the number of data values. An example will help make this clear. Example $1$ Find $n$, $\sum x$, $\sum x^2$ and $\sum (x - 1)^2$ for the data: $1,\, 3,\, 4 \nonumber$ Solution $\begin{array}{rcl} n & = & 3 \quad \mbox{ because there are three data values} \ \sum x & = & 1 + 3 + 4 = 8 \ \sum x^2 & = & 1^2 + 3^2 + 4^2 = 1 + 9 + 16 = 26 \ \sum {(x - 1)}^2 & = & {(1 - 1)}^2 + {(3 - 1)}^2 + {(4 - 1)}^2 = 0^2 + 2^2 + 3^2 = 13\end{array} \nonumber$ Using these handy notations it's easy to write a formula defining the mean $\bar{x}$ of a sample. Definition: Sample Mean The sample mean of a set of $n$ sample data values is the number $\bar x$ defined by the formula $\bar x = \dfrac{\sum x}{n} \label{samplemean}$ Example $2$ Find the mean of the following sample data: $2$, $-1$, $0$, $2$ Solution This is an application of Equation \ref{samplemean}: $\bar x = \dfrac{\sum x}{n} = \dfrac{2 + (-1) + 0 + 2}{4} = \dfrac{3}{4} = 0.75 \nonumber$ Example $3$ A random sample of ten students is taken from the student body of a college and their GPAs are recorded as follows: $1.90, 3.00, 2.53, 3.71, 2.12, 1.76, 2.71, 1.39, 4.00, 3.33\nonumber$ Find the mean. Solution This is an application of Equation \ref{samplemean}: $\begin{array}{rcl}\bar x = \dfrac{\sum x}{n} = \dfrac{1.90 + 3.00 + 2.53 + 3.71 + 2.12 + 1.76 + 2.71 + 1.39 + 4.00 + 3.33}{10} = \dfrac{26.45}{10} = 2.645\end{array} \nonumber$ Example $4$ A random sample of $19$ women beyond child-bearing age gave the following data, where $x$ is the number of children and $f$ is the frequency, or the number of times it occurred in the data set. $\begin{array}{c|ccccc}x & 0 & 1 & 2 & 3 & 4 \ \hline f & 3 & 6 & 6 & 3 & 1\end{array} \nonumber$ Find the sample mean. Solution In this example the data are presented by means of a data frequency table, introduced in Chapter 1. Each number in the first line of the table is a number that appears in the data set; the number below it is how many times it occurs. Thus the value $0$ is observed three times, that is, three of the measurements in the data set are $0$, the value $1$ is observed six times, and so on.
In the context of the problem this means that three women in the sample have had no children, six have had exactly one child, and so on. The explicit list of all the observations in this data set is therefore: $0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 4 \nonumber$ The sample size can be read directly from the table, without first listing the entire data set, as the sum of the frequencies: $n = 3 + 6 + 6 + 3 + 1 = 19$. The sample mean can be computed directly from the table as well: $\bar x = \dfrac{\sum x}{n} = \dfrac{0 \times 3 + 1 \times 6 + 2 \times 6 + 3 \times 3 + 4 \times 1}{19} = \dfrac{31}{19} = 1.6316 \nonumber$ In the examples above the data sets were described as samples. Therefore the means were sample means $\bar x$. If the data come from a census, so that there is a measurement for every element of the population, then the mean is calculated by exactly the same process of summing all the measurements and dividing by how many of them there are, but it is now the population mean and is denoted by $\mu$, the lower case Greek letter mu. Definition: Population Mean The population mean of a set of $N$ population data is the number $\mu$ defined by the formula: $\displaystyle \mu=\frac{\sum x}{N}. \nonumber$ The mean of two numbers is the number that is halfway between them. For example, the average of the numbers $5$ and $17$ is $(5 + 17) ∕ 2 = 11$, which is $6$ units above $5$ and $6$ units below $17$. In this sense the average $11$ is the “center” of the data set $\{5,17\}$. For larger data sets the mean can similarly be regarded as the “center” of the data. The Median To see why another concept of average is needed, consider the following situation. Suppose we are interested in the average yearly income of employees at a large corporation. We take a random sample of seven employees, obtaining the sample data (rounded to the nearest hundred dollars, and expressed in thousands of dollars). $24.8, 22.8, 24.6, 192.5, 25.2, 18.5, 23.7 \nonumber$ The mean (rounded to one decimal place) is $\bar x = 47.4$, but the statement “the average income of employees at this corporation is $47,400$” is surely misleading. It is approximately twice what six of the seven employees in the sample make and is nowhere near what any of them makes. It is easy to see what went wrong: the presence of the one executive in the sample, whose salary is so large compared to everyone else’s, caused the numerator in the formula for the sample mean to be far too large, pulling the mean far to the right of where we think that the average “ought” to be, namely around $24,000$ or $25,000$. The number $192.5$ in our data set is called an outlier, a number that is far removed from most or all of the remaining measurements. Many times an outlier is the result of some sort of error, but not always, as is the case here. We would get a better measure of the “center” of the data if we were to arrange the data in numerical order: $18.5, 22.8, 23.7, 24.6, 24.8, 25.2, 192.5 \nonumber$ then select the middle number in the list, in this case $24.6$. The result is called the median of the data set, and has the property that roughly half of the measurements are larger than it is, and roughly half are smaller. In this sense it locates the center of the data. If there are an even number of measurements in the data set, then there will be two middle elements when all are lined up in order, so we take the mean of the middle two as the median. Thus we have the following definition. 
Definition: Sample Median The sample median $\tilde{x}$ of a set of sample data for which there are an odd number of measurements is the middle measurement when the data are arranged in numerical order. The sample median of a set of sample data for which there are an even number of measurements is the mean of the two middle measurements when the data are arranged in numerical order. Definition: Population Median The population median is defined in the same way as the sample median, except that it is computed for the entire population. The median is a value that divides the observations in a data set so that $50\%$ of the data are on its left and the other $50\%$ on its right. In accordance with the interpretation of area under a distribution curve introduced in Section 2.1, therefore, in the curve that represents the distribution of the data, a vertical line drawn at the median divides the area in two, area $0.5$ ($50\%$ of the total area $1$) to the left and area $0.5$ ($50\%$ of the total area $1$) to the right, as shown in Figure $1$. In our income example the median, $24,600$, clearly gave a much better measure of the middle of the data set than did the mean $47,400$. This is typical for situations in which the distribution is skewed. (Skewness and symmetry of distributions are discussed at the end of this subsection.) Example $5$ Compute the sample median for the data from Example $2$ Solution The data in numerical order are $−1, 0, 2, 2$. The two middle measurements are $0$ and $2$, so $\tilde{x}= (0+2)/2 = 1$. Example $6$ Compute the sample median for the data from Example $3$ Solution The data in numerical order are $1.39, 1.76, 1.90, 2.12, 2.53, 2.71, 3.00, 3.33, 3.71, 4.00 \nonumber$ The number of observations is ten, which is even, so there are two middle measurements, the fifth and sixth, which are $2.53$ and $2.71$. Therefore the median of these data is $\tilde{x} = (2.53+2.71)/2 = 2.62$. Example $7$ Compute the sample median for the data from Example $4$ Solution The data in numerical order are: $0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 4 \nonumber$ The number of observations is $19$, which is odd, so there is one middle measurement, the tenth. Since the tenth measurement is $2$, the median is $\tilde{x} = 2$. In the last example it is important to note that we could have computed the median directly from the frequency table, without first explicitly listing all the observations in the data set. We already saw in Example $4$ how to find the number of observations directly from the frequencies listed in the table: $n = 3+6+6+3+1 = 19$. Thus the median is the tenth observation. The second line of the table in Example $4$ shows that when the data are listed in order there will be three $0s$ followed by six $1s$, so the tenth observation, the median, is $2$. The relationship between the mean and the median for several common shapes of distributions is shown in Figure $2$. The distributions in panels (a) and (b) are said to be symmetric because of the symmetry that they exhibit. The distributions in the remaining two panels are said to be skewed. In each distribution we have drawn a vertical line that divides the area under the curve in half, which in accordance with Figure $1$ is located at the median. The following facts are true in general: • When the distribution is symmetric, as in panels (a) and (b) of Figure $2$, the mean and the median are equal. • When the distribution is as shown in panel (c), it is said to be skewed right.
The mean has been pulled to the right of the median by the long “right tail” of the distribution, the few relatively large data values. • When the distribution is as shown in panel (d), it is said to be skewed left. The mean has been pulled to the left of the median by the long “left tail” of the distribution, the few relatively small data values. The Mode Perhaps you have heard a statement like “The average number of automobiles owned by households in the United States is $1.37$,” and have been amused at the thought of a fraction of an automobile sitting in a driveway. In such a context the following measure for central location might make more sense. Definition: Sample Mode The sample mode of a set of sample data is the most frequently occurring value. On a relative frequency histogram, the highest point of the histogram corresponds to the mode of the data set. Figure $3$ illustrates the mode. Figure $3$: Mode For any data set there is always exactly one mean and exactly one median. This need not be true of the mode; several different values could occur with the highest frequency, as we will see. It could even happen that every value occurs with the same frequency, in which case the concept of the mode does not make much sense. Example $8$ Find the mode of the following data set: $-1,\; 0,\; 2,\; 0$. Solution The value $0$ is most frequently observed in the data set, so the mode is $0$. Example $9$ Compute the sample mode for the data of Example $4$ Solution The two most frequently observed values in the data set are $1$ and $2$. Therefore the mode is the set of two values: $\{1,2\}$. The mode is a measure of central location since most real-life data sets have more observations near the center of the data range and fewer observations on the lower and upper ends. The value with the highest frequency is often in the middle of the data range. Key Takeaway • The mean, the median, and the mode each answer the question “Where is the center of the data set?” The nature of the data set, as indicated by a relative frequency histogram, determines which one gives the best answer.
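All three measures of central location are available in Python's standard library. A minimal sketch using the data of Example 4, with the frequency table expanded into an explicit list:

```python
import statistics

# Number of children for the 19 women in Example 4
kids = [0]*3 + [1]*6 + [2]*6 + [3]*3 + [4]*1

print(statistics.mean(kids))       # 1.6315..., the sample mean from Example 4
print(statistics.median(kids))     # 2, the sample median from Example 7
print(statistics.multimode(kids))  # [1, 2], the two modes from Example 9
```

Note that `statistics.mode` raises an error in Python versions before 3.8 when the mode is not unique, which is why `multimode` (available from Python 3.8 onward) is used here.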
Learning Objectives • To learn the concept of the variability of a data set. • To learn how to compute three measures of the variability of a data set: the range, the variance, and the standard deviation. Look at the two data sets in Table $1$ and the graphical representation of each, called a dot plot, in Figure $1$. Table $1$: Two Data Sets Data Set I: 40 38 42 40 39 39 43 40 39 40 Data Set II: 46 37 40 33 42 36 40 47 34 45 The two sets of ten measurements each center at the same value: they both have mean, median, and mode $40$. Nevertheless a glance at the figure shows that they are markedly different. In Data Set I the measurements vary only slightly from the center, while for Data Set II the measurements vary greatly. Just as we have attached numbers to a data set to locate its center, we now wish to associate to each data set numbers that measure quantitatively how the data either scatter away from the center or cluster close to it. These new quantities are called measures of variability, and we will discuss three of them. The Range First we discuss the simplest measure of variability. Definition: range The range $R$ of a data set is the difference between its largest and smallest values $R=x_{\text{max}}−x_{\text{min}} \nonumber$ where $\displaystyle x_{\text{max}}$ is the largest measurement in the data set and $\displaystyle x_{\text{min}}$ is the smallest. Example $1$: Identifying the Range of a Dataset Find the range of each data set in Table $1$. Solution • For Data Set I the maximum is $43$ and the minimum is $38$, so the range is $R=43−38=5$. • For Data Set II the maximum is $47$ and the minimum is $33$, so the range is $R=47−33=14$. The range is a measure of variability because it indicates the size of the interval over which the data points are distributed. A smaller range indicates less variability (less dispersion) among the data, whereas a larger range indicates the opposite. The Variance and the Standard Deviation The other two measures of variability that we will consider are more elaborate and also depend on whether the data set is just a sample drawn from a much larger population or is the whole population itself (that is, a census). Definition: sample variance and sample standard deviation The sample variance of a set of $n$ sample data is the number $\mathbf{s^2}$ defined by the formula $s^2 = \dfrac{\sum (x-\bar x)^2}{n-1} \nonumber$ which by algebra is equivalent to the formula $s^2=\dfrac{\sum x^2 - \dfrac{1}{n}\left(\sum x\right)^2}{n-1} \nonumber$ The square root $\mathbf s$ of the sample variance is called the sample standard deviation of a set of $n$ sample data. It is given by the formulas $s = \sqrt{s^2} = \sqrt{\dfrac{\sum (x-\bar x)^2}{n-1} } = \sqrt{\dfrac{\sum x^2 - \dfrac{1}{n}\left(\sum x\right)^2}{n-1}}. \nonumber$ Although the first formula in each case looks less complicated than the second, the latter is easier to use in hand computations, and is called a shortcut formula. Example $2$: Identifying the Variance and Standard Deviation of a Dataset Find the sample variance and the sample standard deviation of Data Set II in Table $1$. Solution To use the defining formula (the first formula) in the definition we first compute for each observation $x$ its deviation $x-\bar x$ from the sample mean.
Since the mean of the data is $\bar x =40$, we obtain the ten numbers displayed in the second line of the supplied table $\begin{array}{c|cccccccccc} x & 46 & 37 & 40 & 33 & 42 & 36 & 40 & 47 & 34 & 45 \ \hline x−\bar{x} & 6 & -3 & 0 & -7 & 2 & -4 & 0 & 7 & -6 & 5 \end{array} \nonumber$ Thus $\sum (x-\bar{x})^2=6^2+(-3)^2+0^2+(-7)^2+2^2+(-4)^2+0^2+7^2+(-6)^2+5^2=224\nonumber$ so the variance is $s^2=\dfrac{\sum (x-\bar{x})^2}{n-1}=\dfrac{224}{9}=24.\bar{8} \nonumber$ and the standard deviation is $s=\sqrt{24.\bar{8}} \approx 4.99 \nonumber$ The student is encouraged to compute the ten deviations for Data Set I and verify that their squares add up to $20$, so that the sample variance and standard deviation of Data Set I are the much smaller numbers $s^2=20/9=2.\bar{2} \nonumber$ and $s=\sqrt{20/9} \approx 1.49 \nonumber$ Example $3$ Find the sample variance and the sample standard deviation of the ten GPAs in "Example 2.2.3" in Section 2.2. $1.90\; \; 3.00\; \; 2.53\; \; 3.71\; \; 2.12\; \; 1.76\; \; 2.71\; \; 1.39\; \; 4.00\; \; 3.33\nonumber$ Solution Since $\sum x = 1.90 + 3.00 + 2.53 + 3.71 + 2.12 + 1.76 + 2.71 + 1.39 + 4.00 + 3.33 = 26.45 \nonumber$ and $\sum x^2 = 1.90^2 + 3.00^2 + 2.53^2 + 3.71^2 + 2.12^2 + 1.76^2 + 2.71^2 + 1.39^2 + 4.00^2 + 3.33^2 = 76.7321 \nonumber$ the shortcut formula gives $s^2=\dfrac{\sum x^2−\left(\sum x\right)^2/n}{n−1}=\dfrac{76.7321−(26.45)^2/10}{10−1}=\dfrac{6.77185}{9}=0.75242\bar{7} \nonumber$ and $s=\sqrt{0.75242\bar{7}}\approx 0.867 \nonumber$ The sample variance has different units from the data. For example, if the units in the data set were inches, the new units would be inches squared, or square inches. It is thus primarily of theoretical importance and will not be considered further in this text, except in passing. If the data set comprises the whole population, then the population standard deviation, denoted $\sigma$ (the lower case Greek letter sigma), and its square, the population variance $\sigma ^2$, are defined as follows. Definitions: The population variance $\mathbf{\sigma^2}$ and population standard deviation $\mathbf \sigma$ The variability of a set of $N$ population data is measured by the population variance $\sigma^2=\dfrac{\sum (x−\mu)^2}{N} \label{popVar}$ and its square root, the population standard deviation $\sigma =\sqrt{\dfrac{\sum (x−\mu)^2}{N}}\label{popSTD}$ where $\mu$ is the population mean as defined above. Note that the denominator in the fraction is the full number of observations, not that number reduced by one, as is the case with the sample standard deviation. Since most data sets are samples, we will always work with the sample standard deviation and variance. Finally, in many real-life situations the most important statistical issues have to do with comparing the means and standard deviations of two data sets. Figure $2$ illustrates how a difference in one or both of the sample mean and the sample standard deviation is reflected in the appearance of the data set as shown by the curves derived from the relative frequency histograms built using the data. Key Takeaway The range, the standard deviation, and the variance each give a quantitative answer to the question “How variable are the data?”
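The computations of this section are easy to check by machine. A minimal sketch in Python, applying the shortcut formula to Data Set II and comparing with the standard library:

```python
import statistics

# Data Set II from Table 1
data = [46, 37, 40, 33, 42, 36, 40, 47, 34, 45]
n = len(data)

# Shortcut formula for the sample variance
s2 = (sum(x**2 for x in data) - sum(data)**2 / n) / (n - 1)
s = s2**0.5

print(s2, s)   # 24.888..., 4.988..., as computed in the example above

# Cross-check against the library's sample (not population) versions
print(statistics.variance(data), statistics.stdev(data))
```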
Learning Objectives
• To learn the concept of the relative position of an element of a data set.
• To learn the meaning of each of two measures, the percentile rank and the $z$-score, of the relative position of a measurement and how to compute each one.
• To learn the meaning of the three quartiles associated to a data set and how to compute them.
• To learn the meaning of the five-number summary of a data set, how to construct the box plot associated to it, and how to interpret the box plot.

When you take an exam, what is often as important as your actual score on the exam is the way your score compares to other students' performance. If you made a $70$ but the average score (whether the mean, median, or mode) was $85$, you did relatively poorly. If you made a $70$ but the average score was only $55$ then you did relatively well. In general, the significance of one observed value in a data set strongly depends on how that value compares to the other observed values in a data set. Therefore we wish to attach to each observed value a number that measures its relative position.

Percentiles and Quartiles

Anyone who has taken a national standardized test is familiar with the idea of being given both a score on the exam and a "percentile ranking" of that score. You may be told that your score was $625$ and that it is the $85^{th}$ percentile. The first number tells how you actually did on the exam; the second says that $85\%$ of the scores on the exam were less than or equal to your score, $625$.

Definition: percentile of data

Given an observed value $x$ in a data set, $x$ is the $P^{th}$ percentile of the data if $P\%$ of the data are less than or equal to $x$. The number $P$ is the percentile rank of $x$.

Example $1$

What percentile is the value $1.39$ in the data set of ten GPAs considered in a previous Example? What percentile is the value $3.33$?

Solution

The data, written in increasing order, are $\begin{array}{cccccccccc} 1.39 & 1.76 & 1.90 & 2.12 & 2.53 & 2.71 & 3.00 & 3.33 & 3.71 & 4.00 \end{array} \nonumber$

The only data value that is less than or equal to $1.39$ is $1.39$ itself. Since $1$ out of ten, or $1/10=10\%$, of the data points are less than or equal to $1.39$, $1.39$ is the $10^{th}$ percentile. Eight data values are less than or equal to $3.33$. Since $8$ out of ten, or $8∕10 = .80 = 80\%$, of the data values are less than or equal to $3.33$, the value $3.33$ is the $80^{th}$ percentile of the data.

The $P^{th}$ percentile cuts the data set in two so that approximately $P\%$ of the data lie below it and $(100-P)\%$ of the data lie above it. In particular, the three percentiles that cut the data into fourths, as shown in Figure $1$, are called the quartiles of a data set. The quartiles are the three numbers $Q_1$, $Q_2$, $Q_3$ that divide the data set approximately into fourths. The following simple computational definition of the three quartiles works well in practice.

Definition: quartile

For any data set:
1. The second quartile $Q_2$ of the data set is its median.
2. Define two subsets:
   1. the lower set: all observations that are strictly less than $Q_2$
   2. the upper set: all observations that are strictly greater than $Q_2$
3. The first quartile $Q_1$ of the data set is the median of the lower set.
4. The third quartile $Q_3$ of the data set is the median of the upper set.

Example $2$

Find the quartiles of the data set of GPAs discussed in a previous Example.
Solution

As in the previous example we first list the data in numerical order: $\begin{array}{cccccccccc} 1.39 & 1.76 & 1.90 & 2.12 & 2.53 & 2.71 & 3.00 & 3.33 & 3.71 & 4.00 \end{array} \nonumber$

This data set has $n=10$ observations. Since $10$ is an even number, the median is the mean of the two middle observations: $\tilde x=(2.53+2.71)∕2=2.62. \nonumber$ Thus the second quartile is $Q_2=2.62$. The lower and upper subsets are
• Lower: $L=\{1.39,1.76,1.90,2.12,2.53\}$,
• Upper: $U=\{2.71,3.00,3.33,3.71,4.00\}$.

Each has an odd number of elements, so the median of each is its middle observation. Thus the first quartile is $Q_1=1.90$, the median of $L$, and the third quartile is $Q_3=3.33$, the median of $U$.

Example $3$

Adjoin the observation $3.88$ to the data set of the previous example and find the quartiles of the new set of data.

Solution

As in the previous example we first list the data in numerical order: $\begin{array}{ccccccccccc} 1.39 & 1.76 & 1.90 & 2.12 & 2.53 & 2.71 & 3.00 & 3.33 & 3.71 & 3.88 & 4.00 \end{array} \nonumber$

This data set has $11$ observations. The second quartile is its median, the middle value $2.71$. Thus $Q_2=2.71$. The lower and upper subsets are now
• Lower: $L=\{1.39,1.76,1.90,2.12,2.53\}$
• Upper: $U=\{3.00,3.33,3.71,3.88,4.00\}$.

The lower set $L$ has median $1.90$, its middle value, so $Q_1=1.90$. The upper set has median $3.71$, so $Q_3=3.71$.

In addition to the three quartiles, the two extreme values, the minimum $x_{min}$ and the maximum $x_{max}$, are also useful in describing the entire data set. Together these five numbers are called the five-number summary of a data set, $\{x_{min},\; Q_1,\; Q_2,\; Q_3,\; x_{max}\}$

The five-number summary is used to construct a box plot, as in Figure $2$. Each of the five numbers is represented by a vertical line segment, a box is formed using the line segments at $Q_1$ and $Q_3$ as its two vertical sides, and two horizontal line segments are extended from the vertical segments marking $Q_1$ and $Q_3$ to the adjacent extreme values. (The two horizontal line segments are referred to as "whiskers," and the diagram is sometimes called a "box and whisker plot.") We caution the reader that there are other types of box plots that differ somewhat from the ones we are constructing, although all are based on the three quartiles.

Note that the distance from $Q_1$ to $Q_3$ is the length of the interval over which the middle half of the data range. Thus it has the following special name.

Definition: interquartile range

The interquartile range $IQR$ is the quantity $IQR = Q_3-Q_1 \nonumber$

Example $4$

Construct a box plot and find the $IQR$ for the data in Example $2$.

Solution

From our work in Example $2$, we know that the five-number summary is
• $x_{min} = 1.39$
• $Q_1 = 1.90$
• $Q_2 = 2.62$
• $Q_3 = 3.33$
• $x_{max} = 4.00$

The box plot is:

The interquartile range is: $IQR=3.33-1.90=1.43$.

$z$-Scores

Another way to locate a particular observation $x$ in a data set is to compute its distance from the mean in units of standard deviation. The $z$-score indicates how many standard deviations an individual observation $x$ lies from the center of the data set, its mean. Converting observations to $z$-scores standardizes them, so that measurements drawn from data sets with different means and standard deviations can be compared on a common scale. If $z$ is negative then $x$ is below average. If $z$ is $0$ then $x$ is equal to the average.
If $z$ is positive then $x$ is above average.

Definition: $z$-score

The $z$-score of an observation $x$ is the number $z$ given by the computational formula $z = \dfrac{x - \bar{x}}{s}\; \; \text{or}\; \; z = \dfrac{x - \mu}{\sigma} \nonumber$ according as the data set is a sample with mean $\bar{x}$ and standard deviation $s$, or the whole population with mean $\mu$ and standard deviation $\sigma$. Solving for $x$ gives the equivalent formulas $x=\bar{x}+zs$ and $x=\mu +z\sigma$, which recover $x$ when its $z$-score is known.

Figure $3$: $x$-Scale versus $z$-Score

Example $5$

Suppose the mean and standard deviation of the GPA's of all currently registered students at a college are $\mu = 2.70$ and $\sigma = 0.50$. The $z$-scores of the GPA's of two students, Antonio and Beatrice, are $z=-0.62$ and $z=1.28$, respectively. What are their GPAs?

Solution

Using the formula $x=\mu +z\sigma$ we compute the GPA's as
• Antonio: $x=\mu +z\sigma =2.70+(-0.62)(0.50)=2.39$
• Beatrice: $x=\mu +z\sigma =2.70+(1.28)(0.50)=3.34$

Key Takeaways
• The percentile rank and $z$-score of a measurement indicate its relative position with regard to the other measurements in a data set.
• The three quartiles divide a data set into fourths.
• The five-number summary and its associated box plot summarize the location and distribution of the data.
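Statistical software uses several competing conventions for quartiles, so packaged routines may not reproduce the median-of-halves rule defined in this section; many packages interpolate between order statistics instead. The short Python sketch below (our illustration; the helper name five_number_summary is invented) implements exactly the rule given above, checks it against the ten GPAs, and computes a sample $z$-score for each observation.

from statistics import mean, median, stdev

def five_number_summary(data):
    """Five-number summary using this section's median-of-halves quartile rule."""
    xs = sorted(data)
    q2 = median(xs)
    lower = [x for x in xs if x < q2]  # observations strictly below the median
    upper = [x for x in xs if x > q2]  # observations strictly above the median
    return min(xs), median(lower), q2, median(upper), max(xs)

gpas = [1.90, 3.00, 2.53, 3.71, 2.12, 1.76, 2.71, 1.39, 4.00, 3.33]

xmin, q1, q2, q3, xmax = five_number_summary(gpas)
print(xmin, q1, q2, q3, xmax)  # 1.39 1.9 2.62 3.33 4.0 (up to floating-point rounding)
print(q3 - q1)                 # IQR, about 1.43

# Sample z-score of each GPA: z = (x - xbar) / s
xbar, s = mean(gpas), stdev(gpas)
z_scores = [(x - xbar) / s for x in gpas]

Because the rule here excludes the median itself from both halves, quartiles reported by other tools for the same data can differ slightly; neither answer is wrong, they simply follow different conventions.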
Learning Objectives
• To learn what the value of the standard deviation of a data set implies about how the data scatter away from the mean as described by the Empirical Rule and Chebyshev's Theorem.
• To use the Empirical Rule and Chebyshev's Theorem to draw conclusions about a data set.

You probably have a good intuitive grasp of what the average of a data set says about that data set. In this section we begin to learn what the standard deviation has to tell us about the nature of the data set.

The Empirical Rule

We start by examining a specific set of data. Table $1$ shows the heights in inches of $100$ randomly selected adult men. A relative frequency histogram for the data is shown in Figure $1$. The mean and standard deviation of the data are, rounded to two decimal places, $\bar{x}=69.92$ and $s = 1.70$.

Table $1$: Heights of Men
68.7 72.3 71.3 72.5 70.6 68.2 70.1 68.4 68.6 70.6
73.7 70.5 71.0 70.9 69.3 69.4 69.7 69.1 71.5 68.6
70.9 70.0 70.4 68.9 69.4 69.4 69.2 70.7 70.5 69.9
69.8 69.8 68.6 69.5 71.6 66.2 72.4 70.7 67.7 69.1
68.8 69.3 68.9 74.8 68.0 71.2 68.3 70.2 71.9 70.4
71.9 72.2 70.0 68.7 67.9 71.1 69.0 70.8 67.3 71.8
70.3 68.8 67.2 73.0 70.4 67.8 70.0 69.5 70.1 72.0
72.2 67.6 67.0 70.3 71.2 65.6 68.1 70.8 71.4 70.2
70.1 67.5 71.3 71.5 71.0 69.1 69.5 71.1 66.8 71.8
69.6 72.7 72.8 69.6 65.9 68.0 69.7 68.7 69.8 69.7

If we go through the data and count the number of observations that are within one standard deviation of the mean, that is, that are between $69.92-1.70=68.22$ and $69.92+1.70=71.62$ inches, there are $69$ of them. If we count the number of observations that are within two standard deviations of the mean, that is, that are between $69.92-2(1.70)=66.52$ and $69.92+2(1.70)=73.32$ inches, there are $95$ of them. All of the measurements are within three standard deviations of the mean, that is, between $69.92-3(1.70)=64.82$ and $69.92+3(1.70)=75.02$ inches. These tallies are not coincidences, but are in agreement with the following result that has been found to be widely applicable.

The Empirical Rule

If a data set has an approximately bell-shaped relative frequency histogram, then (Figure $2$)
• approximately $68\%$ of the data lie within one standard deviation of the mean, that is, in the interval with endpoints $\bar{x}\pm s$ for samples and with endpoints $\mu \pm \sigma$ for populations;
• approximately $95\%$ of the data lie within two standard deviations of the mean, that is, in the interval with endpoints $\bar{x}\pm 2s$ for samples and with endpoints $\mu \pm 2\sigma$ for populations; and
• approximately $99.7\%$ of the data lie within three standard deviations of the mean, that is, in the interval with endpoints $\bar{x}\pm 3s$ for samples and with endpoints $\mu \pm 3\sigma$ for populations.

Two key points in regard to the Empirical Rule are that the data distribution must be approximately bell-shaped and that the percentages are only approximately true. The Empirical Rule does not apply to data sets with severely asymmetric distributions, and the actual percentage of observations in any of the intervals specified by the rule could be either greater or less than those given in the rule. We see this with the example of the heights of the men: the Empirical Rule suggested $68$ observations between $68.22$ and $71.62$ inches, but we counted $69$.

Example $1$

Heights of $18$-year-old males have a bell-shaped distribution with mean $69.6$ inches and standard deviation $1.4$ inches.
1. About what proportion of all such men are between $68.2$ and $71$ inches tall?
2. What interval centered on the mean should contain about $95\%$ of all such men?

Solution

A sketch of the distribution of heights is given in Figure $3$.
1. Since the interval from $68.2$ to $71.0$ has endpoints $\bar{x}-s$ and $\bar{x}+s$, by the Empirical Rule about $68\%$ of all $18$-year-old males should have heights in this range.
2. By the Empirical Rule the shortest such interval has endpoints $\bar{x}-2s$ and $\bar{x}+2s$. Since $\bar{x}-2s=69.6-2(1.4)=66.8 \nonumber$ and $\bar{x}+2s=69.6+2(1.4)=72.4 \nonumber$ the interval in question is the interval from $66.8$ inches to $72.4$ inches.

Example $2$

Scores on IQ tests have a bell-shaped distribution with mean $\mu =100$ and standard deviation $\sigma =10$. Discuss what the Empirical Rule implies concerning individuals with IQ scores of $110$, $120$, and $130$.

Solution

A sketch of the IQ distribution is given in Figure $3$. The Empirical Rule states that
1. approximately $68\%$ of the IQ scores in the population lie between $90$ and $110$,
2. approximately $95\%$ of the IQ scores in the population lie between $80$ and $120$, and
3. approximately $99.7\%$ of the IQ scores in the population lie between $70$ and $130$.

1. Since $68\%$ of the IQ scores lie within the interval from $90$ to $110$, it must be the case that $32\%$ lie outside that interval. By symmetry approximately half of that $32\%$, or $16\%$ of all IQ scores, will lie above $110$. If $16\%$ lie above $110$, then $84\%$ lie below. We conclude that the IQ score $110$ is the $84^{th}$ percentile.
2. The same analysis applies to the score $120$. Since approximately $95\%$ of all IQ scores lie within the interval from $80$ to $120$, only $5\%$ lie outside it, and half of them, or $2.5\%$ of all scores, are above $120$. The IQ score $120$ is thus higher than $97.5\%$ of all IQ scores, and is quite a high score.
3. By a similar argument, only $15/100$ of $1\%$ of all adults, or about one or two in every thousand, would have an IQ score above $130$. This fact makes the score $130$ extremely high.

Chebyshev's Theorem

The Empirical Rule does not apply to all data sets, only to those that are bell-shaped, and even then is stated in terms of approximations. A result that applies to every data set is known as Chebyshev's Theorem.

Chebyshev's Theorem

For any numerical data set,
• at least $3/4$ of the data lie within two standard deviations of the mean, that is, in the interval with endpoints $\bar{x}\pm 2s$ for samples and with endpoints $\mu \pm 2\sigma$ for populations;
• at least $8/9$ of the data lie within three standard deviations of the mean, that is, in the interval with endpoints $\bar{x}\pm 3s$ for samples and with endpoints $\mu \pm 3\sigma$ for populations;
• at least $1-1/k^2$ of the data lie within $k$ standard deviations of the mean, that is, in the interval with endpoints $\bar{x}\pm ks$ for samples and with endpoints $\mu \pm k\sigma$ for populations, where $k$ is any positive whole number that is greater than $1$.

Figure $4$ gives a visual illustration of Chebyshev's Theorem.

It is important to pay careful attention to the words "at least" at the beginning of each of the three parts of Chebyshev's Theorem. The theorem gives the minimum proportion of the data which must lie within a given number of standard deviations of the mean; the true proportions found within the indicated regions could be greater than what the theorem guarantees.

Example $3$

A sample of size $n=50$ has mean $\bar{x}=28$ and standard deviation $s=3$.
Without knowing anything else about the sample, what can be said about the number of observations that lie in the interval $(22,34)$? What can be said about the number of observations that lie outside that interval?

Solution

The interval $(22,34)$ is the one that is formed by adding and subtracting two standard deviations from the mean. By Chebyshev's Theorem, at least $3/4$ of the data are within this interval. Since $3/4$ of $50$ is $37.5$, this means that at least $37.5$ observations are in the interval. But one cannot take a fractional observation, so we conclude that at least $38$ observations must lie inside the interval $(22,34)$.

If at least $3/4$ of the observations are in the interval, then at most $1/4$ of them are outside it. Since $1/4$ of $50$ is $12.5$, at most $12.5$ observations are outside the interval. Since again a fraction of an observation is impossible, we conclude that at most $12$ observations lie outside the interval $(22,34)$.

Example $4$

The number of vehicles passing through a busy intersection between $8:00\; a.m.$ and $10:00\; a.m.$ was observed and recorded on every weekday morning of the last year. The data set contains $n=251$ numbers. The sample mean is $\bar{x}=725$ and the sample standard deviation is $s=25$. Identify which of the following statements must be true.
1. On approximately $95\%$ of the weekday mornings last year the number of vehicles passing through the intersection from $8:00\; a.m.$ to $10:00\; a.m.$ was between $675$ and $775$.
2. On at least $75\%$ of the weekday mornings last year the number of vehicles passing through the intersection from $8:00\; a.m.$ to $10:00\; a.m.$ was between $675$ and $775$.
3. On at least $189$ weekday mornings last year the number of vehicles passing through the intersection from $8:00\; a.m.$ to $10:00\; a.m.$ was between $675$ and $775$.
4. On at most $25\%$ of the weekday mornings last year the number of vehicles passing through the intersection from $8:00\; a.m.$ to $10:00\; a.m.$ was either less than $675$ or greater than $775$.
5. On at most $12.5\%$ of the weekday mornings last year the number of vehicles passing through the intersection from $8:00\; a.m.$ to $10:00\; a.m.$ was less than $675$.
6. On at most $25\%$ of the weekday mornings last year the number of vehicles passing through the intersection from $8:00\; a.m.$ to $10:00\; a.m.$ was less than $675$.

Solution
1. Since it is not stated that the relative frequency histogram of the data is bell-shaped, the Empirical Rule does not apply. Statement (1) is based on the Empirical Rule and therefore it might not be correct.
2. Statement (2) is a direct application of part (1) of Chebyshev's Theorem because $(\bar{x}-2s,\; \bar{x}+2s) = (675,775)$. It must be correct.
3. Statement (3) says the same thing as statement (2) because $75\%$ of $251$ is $188.25$, so the minimum whole number of observations in this interval is $189$. Thus statement (3) is definitely correct.
4. Statement (4) says the same thing as statement (2) but in different words, and therefore is definitely correct.
5. Statement (4), which is definitely correct, states that at most $25\%$ of the time either fewer than $675$ or more than $775$ vehicles passed through the intersection. Statement (5) says that half of that $25\%$ corresponds to days of light traffic. This would be correct if the relative frequency histogram of the data were known to be symmetric. But this is not stated; perhaps all of the observations outside the interval $(675,775)$ are less than $675$. Thus statement (5) might not be correct.
6.
Statement (4) is definitely correct, and statement (4) implies statement (6): even if every measurement outside the interval $(675,775)$ were less than $675$ (which is conceivable, since symmetry is not known to hold), still at most $25\%$ of all observations would be less than $675$. Thus statement (6) must definitely be correct.

Key Takeaway
• The Empirical Rule is an approximation that applies only to data sets with a bell-shaped relative frequency histogram. It estimates the proportion of the measurements that lie within one, two, and three standard deviations of the mean.
• Chebyshev's Theorem is a fact that applies to all possible data sets. It describes the minimum proportion of the measurements that must lie within two, three, or more standard deviations of the mean.
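The gap between what Chebyshev's Theorem guarantees and what the Empirical Rule predicts for bell-shaped data can be tabulated directly. The following minimal Python sketch (our illustration; the helper name chebyshev_min_proportion is invented) computes the guaranteed minimum proportions $1-1/k^2$ and reproduces the "at least $189$ mornings" count from Example $4$.

import math

def chebyshev_min_proportion(k):
    """Chebyshev's Theorem: at least 1 - 1/k**2 of any data set lies
    within k standard deviations of the mean (k > 1)."""
    return 1 - 1 / k ** 2

# Empirical Rule approximations, valid only for bell-shaped data
empirical_rule = {1: 0.68, 2: 0.95, 3: 0.997}

for k in (2, 3):
    print(k, chebyshev_min_proportion(k), empirical_rule[k])
# k = 2: Chebyshev guarantees at least 0.75; bell-shaped data give about 0.95
# k = 3: Chebyshev guarantees at least 8/9 = 0.888...; bell-shaped data give about 0.997

# Example 4, statement (3): at least 3/4 of the n = 251 mornings lie in
# (675, 775); a fractional morning is impossible, so round up.
print(math.ceil(0.75 * 251))  # 189

Note how much weaker Chebyshev's bounds are; that is the price of a guarantee that holds for every data set, not just bell-shaped ones.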
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang.

Basic
1. Describe one difference between a frequency histogram and a relative frequency histogram.
2. Describe one advantage of a stem and leaf diagram over a frequency histogram.
3. Construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram for the following data set. For the histograms use classes $51-60$, $61-70$, and so on. $\begin{array}{ccccc}69 & 92 & 68 & 77 & 80 \ 70 & 85 & 88 & 85 & 96 \ 93 & 75 & 76 & 82 & 100 \ 53 & 70 & 70 & 82 & 85\end{array}$
4. Construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram for the following data set. For the histograms use classes $6.0-6.9$, $7.0-7.9$, and so on. $\begin{array}{ccccc}8.5 & 8.2 & 7.0 & 7.0 & 4.9 \ 6.5 & 8.2 & 7.6 & 1.5 & 9.3 \ 9.6 & 8.5 & 8.8 & 8.5 & 8.7 \ 8.0 & 7.7 & 2.9 & 9.2 & 6.9\end{array}$
5. A data set contains $n = 10$ observations. The values $x$ and their frequencies $f$ are summarized in the following data frequency table. $\begin{array}{c|cccc}x & -1 & 0 & 1 & 2 \ \hline f & 3 & 4 & 2 & 1\end{array}$ Construct a frequency histogram and a relative frequency histogram for the data set.
6. A data set contains $n=20$ observations. The values $x$ and their frequencies $f$ are summarized in the following data frequency table. $\begin{array}{c|cccc}x & -1 & 0 & 1 & 2 \ \hline f & 3 & a & 2 & 1\end{array}$ The frequency of the value $0$ is missing. Find $a$ and then sketch a frequency histogram and a relative frequency histogram for the data set.
7. A data set has the following frequency distribution table: $\begin{array}{c|cccc}x & 1 & 2 & 3 & 4 \ \hline f & 3 & a & 2 & 1\end{array}$ The number $a$ is unknown. Can you construct a frequency histogram? If so, construct it. If not, say why not.
8. A table of some of the relative frequencies computed from a data set is $\begin{array}{c|cccc}x & 1 & 2 & 3 & 4 \ \hline f ∕ n & 0.3 & p & 0.2 & 0.1\end{array}$ The number $p$ is yet to be computed. Finish the table and construct the relative frequency histogram for the data set.

Applications
1. The IQ scores of ten students randomly selected from an elementary school are given. $\begin{array}{ccccc}108 & 100 & 99 & 125 & 87 \ 105 & 107 & 105 & 119 & 118\end{array}$ Grouping the measures in the $80s$, the $90s$, and so on, construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram.
2. The IQ scores of ten students randomly selected from an elementary school for academically gifted students are given. $\begin{array}{ccccc}133 & 140 & 152 & 142 & 137 \ 145 & 160 & 138 & 139 & 138\end{array}$ Grouping the measures by their common hundreds and tens digits, construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram.
3. During a one-day blood drive $300$ people donated blood at a mobile donation center. The blood types of these $300$ donors are summarized in the table. $\begin{array}{c|cccc}\text{Blood Type} & O & A & B & AB \ \hline \text{Frequency} & 136 & 120 & 32 & 12\end{array}$ Construct a relative frequency histogram for the data set.
4. In a particular kitchen appliance store an electric automatic rice cooker is a popular item. The weekly sales for the last $20$ weeks are shown. $\begin{array}{ccccc}20 & 15 & 14 & 14 & 18 \ 15 & 17 & 16 & 16 & 18 \ 15 & 19 & 12 & 13 & 9 \ 19 & 15 & 15 & 16 & 15\end{array}$ Construct a relative frequency histogram with classes $6-10$, $11-15$, and $16-20$.

Additional Exercises
1.
Random samples, each of size $n = 10$, were taken of the lengths in centimeters of three kinds of commercial fish, with the following results: $\begin{array}{lcccccccccc} \text{Sample 1}: & 108 & 100 & 99 & 125 & 87 & 105 & 107 & 105 & 119 & 118 \ \text{Sample 2}: & 133 & 140 & 152 & 142 & 137 & 145 & 160 & 138 & 139 & 138 \ \text{Sample 3}: & 82 & 60 & 83 & 82 & 82 & 74 & 79 & 82 & 80 & 80\end{array}$ Grouping the measures by their common hundreds and tens digits, construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram for each of the samples. Compare the histograms and describe any patterns they exhibit.
2. During a one-day blood drive $300$ people donated blood at a mobile donation center. The blood types of these $300$ donors are summarized below. $\begin{array}{c|cccc}\text{Blood Type} & O & A & B & AB \ \hline \text{Frequency} & 136 & 120 & 32 & 12\end{array}$ Identify the blood type that has the highest relative frequency for these $300$ people. Can you conclude that the blood type you identified is also most common for all people in the population at large? Explain.
3. In a particular kitchen appliance store, the weekly sales of an electric automatic rice cooker for the last $20$ weeks are as follows. $\begin{array}{ccccc}20 & 15 & 14 & 14 & 18 \ 15 & 17 & 16 & 16 & 18 \ 15 & 19 & 12 & 13 & 9 \ 19 & 15 & 15 & 16 & 15\end{array}$ In retail sales, too large an inventory ties up capital, while too small an inventory costs lost sales and customer satisfaction. Using the relative frequency histogram for these data, find approximately how many rice cookers must be in stock at the beginning of each week if
   1. the store is not to run out of stock by the end of a week for more than $15\%$ of the weeks; and
   2. the store is not to run out of stock by the end of a week for more than $5\%$ of the weeks.

Answers
1. The vertical scale on one is the frequencies and on the other is the relative frequencies.
2. $\begin{array}{r|cccccc}5 & 3 & & & & & & \ 6 & 8 & 9 & & & & & \ 7 & 0 & 0 & 0 & 5 & 6 & 7 & \ 8 & 0 & 2 & 3 & 5 & 5 & 5 & 8 \ 9 & 2 & 3 & 6 & & & & \ 10 & 0 & & & & & &\end{array}$
3. Noting that $n = 10$ the relative frequency table is: $\begin{array}{c|cccc}x & -1 & 0 & 1 & 2 \ \hline f ∕ n & 0.3 & 0.4 & 0.2 & 0.1\end{array}$
4. Since $n$ is unknown, $a$ is unknown, so the histogram cannot be constructed.
5. $\begin{array}{r|cccc}8 & 7 & & & & \ 9 & 9 & & & & \ 10 & 0 & 5 & 5 & 7 & 8 \ 11 & 8 & 9 & & \ 12 & 5 & & & &\end{array}$ Frequency and relative frequency histograms are similarly generated.
6. Noting $n = 300$, the relative frequency table is therefore: $\begin{array}{c|cccc}\text{Blood Type} & O & A & B & AB \ \hline f ∕ n & 0.4533 & 0.4 & 0.1067 & 0.04\end{array}$ A relative frequency histogram is then generated.
7.
The stem and leaf diagrams listed for Samples $1,\, 2,\; \text{and}\; 3$ in that order: $\begin{array}{c|ccccc}6 & & & & & \ 7 & & & & & \ 8 & 7 & & & & \ 9 & 9 & & & & \ 10 & 0 & 5 & 5 & 7 & 8 \ 11 & 8 & 9 & & & \ 12 & 5 & & & & \ 13 & & & & & \ 14 & & & & & \ 15 & & & & & \ 16 & & & & &\end{array}$ $\begin{array}{c|ccccc}6 & & & & & \ 7 & & & & & \ 8 & & & & & \ 9 & & & & & \ 10 & & & & & \ 11 & & & & & \ 12 & & & & & \ 13 & 3 & 7 & 8 & 8 & 9 \ 14 & 0 & 2 & 5 & & \ 15 & 2 & & & & \ 16 & 0 & & & &\end{array}$ $\begin{array}{c|ccccccc}6 & 0 & & & & \ 7 & 4 & 9 & & & \ 8 & 0 & 0 & 2 & 2 & 2 & 2 & 3 \ 9 & & & & & \ 10 & & & & & \ 11 & & & & & \ 12 & & & & & \ 13 & & & & & \ 14 & & & & & \ 15 & & & & & \ 16 & & & & &\end{array}$ The frequency tables are given below in the same order: $\begin{array}{c|ccc}Length\hspace{0.167em} & 80 \sim 89 & 90 \sim 99 & 100 \sim 109 \ \hline f & 1 & 1 & 5\end{array}$ $\begin{array}{c|cc}Length\hspace{0.167em} & 110 \sim 119 & 120 \sim 129 \ \hline f & 2 & 1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 130 \sim 139 & 140 \sim 149 & 150 \sim 159 \ \hline f & 5 & 3 & 1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 160 \sim 169 \ \hline f & 1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 60 \sim 69 & 70 \sim 79 & 80 \sim 89 \ \hline f & 1 & 2 & 7\end{array}$ The relative frequency tables are also given below in the same order: $\begin{array}{c|ccc}Length\hspace{0.167em} & 80 \sim 89 & 90 \sim 99 & 100 \sim 109 \ \hline f ∕ n & 0.1 & 0.1 & 0.5\end{array}$ $\begin{array}{c|cc}Length\hspace{0.167em} & 110 \sim 119 & 120 \sim 129 \ \hline f ∕ n & 0.2 & 0.1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 130 \sim 139 & 140 \sim 149 & 150 \sim 159 \ \hline f ∕ n & 0.5 & 0.3 & 0.1\end{array}$ $\begin{array}{c|c}Length\hspace{0.167em} & 160 \sim 169 \ \hline f ∕ n & 0.1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 60 \sim 69 & 70 \sim 79 & 80 \sim 89 \ \hline f ∕ n & 0.1 & 0.2 & 0.7\end{array}$ 1. 19 2. 20 2.2: Measures of Central Location Basic 1. For the sample data set $\{1,2,6\}$ find 1. $\sum x$ 2. $\sum x^2$ 3. $\sum (x-3)$ 4. $\sum (x-3)^2$ 2. For the sample data set $\{-1,0,1,4\}$ find 1. $\sum x$ 2. $\sum x^2$ 3. $\sum (x-1)$ 4. $\sum (x-1)^2$ 3. Find the mean, the median, and the mode for the sample $1\; 2\; 3\; 4$ 4. Find the mean, the median, and the mode for the sample $3\; 3\; 4\; 4$ 5. Find the mean, the median, and the mode for the sample $2\; 1\; 2\; 7$ 6. Find the mean, the median, and the mode for the sample $-1\; 0\; 1\; 4\; 1\; 1$ 7. Find the mean, the median, and the mode for the sample data represented by the table $\begin{array}{c|c c c}x & 1 & 2 & 7 \ \hline f & 1 & 2 & 1\ \end{array}$ 8. Find the mean, the median, and the mode for the sample data represented by the table $\begin{array}{c|c c c c}x & -1 & 0 & 1 & 4 \ \hline f & 1 & 1 & 3 & 1\ \end{array}$ 9. Create a sample data set of size $n=3$ for which the mean $\bar{x}$ is greater than the median $\tilde{x}$. 10. Create a sample data set of size $n=3$ for which the mean $\bar{x}$ is less than the median $\tilde{x}$. 11. Create a sample data set of size $n=4$ for which the mean $\bar{x}$, the median $\tilde{x}$, and the mode are all identical. 12. Create a sample data set of size $n=4$ for which the median $\tilde{x}$ and the mode are identical but the mean $\bar{x}$ is different. Applications 1. Find the mean and the median for the LDL cholesterol level in a sample of ten heart patients. 
$\begin{matrix} 132 & 162 & 133 & 145 & 148\ 139 & 147 & 160 & 150 & 153 \end{matrix}$
2. Find the mean and the median for the LDL cholesterol level in a sample of ten heart patients on a special diet. $\begin{matrix} 127 & 152 & 138 & 110 & 152\ 113 & 131 & 148 & 135 & 158 \end{matrix}$
3. Find the mean, the median, and the mode for the number of vehicles owned in a survey of $52$ households. $\begin{array}{c|c c c c c c c c} x & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\ \hline f &2 &12 &15 &11 &6 &3 &1 &2\ \end{array}$
4. The number of passengers in each of $120$ randomly observed vehicles during morning rush hour was recorded, with the following results. $\begin{array}{c|c c c c c } x & 1 & 2 & 3 & 4 & 5\ \hline f &84 &29 &3 &3 &1\ \end{array}$ Find the mean, the median, and the mode of this data set.
5. Twenty-five $1-lb$ boxes of $16d$ nails were randomly selected and the number of nails in each box was counted, with the following results. $\begin{array}{c|c c c c c } x & 47 & 48 & 49 & 50 & 51\ \hline f &1 &3 &18 &2 &1\ \end{array}$ Find the mean, the median, and the mode of this data set.

Additional Exercises
1. Five laboratory mice with thymus leukemia are observed for a predetermined period of $500$ days. After $500$ days, four mice have died but the fifth one survives. The recorded survival times for the five mice are $\begin{matrix} 493 & 421 & 222 & 378 & 500^* \end{matrix}$ where $500^*$ indicates that the fifth mouse survived for at least $500$ days but the survival time (i.e., the exact value of the observation) is unknown.
   1. Can you find the sample mean for the data set? If so, find it. If not, why not?
   2. Can you find the sample median for the data set? If so, find it. If not, why not?
2. Five laboratory mice with thymus leukemia are observed for a predetermined period of $500$ days. After $450$ days, three mice have died, and one of the remaining mice is sacrificed for analysis. By the end of the observational period, the last remaining mouse still survives. The recorded survival times for the five mice are $\begin{matrix} 222 & 421 & 378 & 450^* & 500^* \end{matrix}$ where $^*$ indicates that the mouse survived for at least the given number of days but the exact value of the observation is unknown.
   1. Can you find the sample mean for the data set? If so, find it. If not, explain why not.
   2. Can you find the sample median for the data set? If so, find it. If not, explain why not.
3. A player keeps track of all the rolls of a pair of dice when playing a board game and obtains the following data. $\begin{array}{c|c c c c c c } x & 2 & 3 & 4 & 5 & 6 & 7\ \hline f &10 &29 &40 &56 &68 &77 \ \end{array}$ $\begin{array}{c|c c c c c } x & 8 & 9 & 10 & 11 & 12 \ \hline f &67 &55 &39 &28 &11 \ \end{array}$ Find the mean, the median, and the mode.
4. Cordelia records her daily commute time to work each day, to the nearest minute, for two months, and obtains the following data. $\begin{array}{c|c c c c c c c } x & 26 & 27 & 28 & 29 & 30 & 31 & 32\ \hline f &3 &4 &16 &12 &6 &2 &1 \ \end{array}$
   1. Based on the frequencies, do you expect the mean and the median to be about the same or markedly different, and why?
   2. Compute the mean, the median, and the mode.
5. An ordered stem and leaf diagram gives the scores of $71$ students on an exam.
$\begin{array}{c|c c c c c c c c c c c c c c c c c c } 10 & 0 & 0 \ 9 &1 &1 &1 &1 &2 &3\ 8 &0 &1 &1 &2 &2 &3 &4 &5 &7 &8 &8 &9\ 7 &0 &0 &0 &1 &1 &2 &4 &4 &5 &6 &6 &6 &7 &7 &7 &8 &8 &9\ 6 &0 &1 &2 &2 &2 &3 &4 &4 &5 &7 &7 &7 &7 &8 &8\ 5 &0 &2 &3 &3 &4 &4 &6 &7 &7 &8 &9\ 4 &2 &5 &6 &8 &8\ 3 &9 &9 \end{array}$ 1. Based on the shape of the display, do you expect the mean and the median to be about the same or markedly different, and why? 2. Compute the mean, the median, and the mode. 6. A man tosses a coin repeatedly until it lands heads and records the number of tosses required. (For example, if it lands heads on the first toss he records a $1$; if it lands tails on the first two tosses and heads on the third he records a $3$.) The data are shown. $\begin{array}{c|c c c c c c c c c c } x & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \ \hline f &384 &208 &98 &56 &28 &12 &8 &2 &3 &1 \end{array}$ 1. Find the mean of the data. 2. Find the median of the data. 1. Construct a data set consisting of ten numbers, all but one of which is above average, where the average is the mean. 2. Is it possible to construct a data set as in part (a) when the average is the median? Explain. 7. Show that no matter what kind of average is used (mean, median, or mode) it is impossible for all members of a data set to be above average. 1. Twenty sacks of grain weigh a total of $1,003\; lb$. What is the mean weight per sack? 2. Can the median weight per sack be calculated based on the information given? If not, construct two data sets with the same total but different medians. 8. Begin with the following set of data, call it $\text{Data Set I}$. $\begin{matrix} 5 & -2 & 6 & 14 & -3 & 0 & 1 & 4 & 3 & 2 & 5 \end{matrix}$ 1. Compute the mean, median, and mode. 2. Form a new data set, $\text{Data Set II}$, by adding $3$ to each number in $\text{Data Set I}$. Calculate the mean, median, and mode of $\text{Data Set II}$. 3. Form a new data set, $\text{Data Set III}$, by subtracting $6$ from each number in $\text{Data Set I}$. Calculate the mean, median, and mode of $\text{Data Set III}$. 4. Comparing the answers to parts (a), (b), and (c), can you guess the pattern? State the general principle that you expect to be true. Large Data Set Exercises Note: For Large Data Set Exercises below, all of the data sets associated with these questions are missing, but the questions themselves are included here for reference. 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. 1. Compute the mean and median of the $1,000$ SAT scores. 2. Compute the mean and median of the $1,000$ GPAs. 2. Large $\text{Data Set 1}$ lists the SAT scores of $1,000$ students. 1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population mean $\mu$. 2. Regard the first $25$ observations as a random sample drawn from this population. Compute the sample mean $\bar{x}$ and compare it to $\mu$. 3. Regard the next $25$ observations as a random sample drawn from this population. Compute the sample mean $\bar{x}$ and compare it to $\mu$. 3. Large $\text{Data Set 1}$ lists the GPAs of $1,000$ students. 1. Regard the data as arising from a census of all freshman at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured. Compute the population mean $\mu$. 2. Regard the first $25$ observations as a random sample drawn from this population. 
Compute the sample mean $\bar{x}$ and compare it to $\mu$. 3. Regard the next $25$ observations as a random sample drawn from this population. Compute the sample mean $\bar{x}$ and compare it to $\mu$. 4. Large $\text{Data Sets}\: 7,\: 7A,\: \text{and}\: 7B$ list the survival times in days of $140$ laboratory mice with thymic leukemia from onset to death. 1. Compute the mean and median survival time for all mice, without regard to gender. 2. Compute the mean and median survival time for the $65$ male mice (separately recorded in Large $\text{Data Set 7A}$). 3. Compute the mean and median survival time for the $75$ female mice (separately recorded in Large $\text{Data Set 7B}$). Answers 1. 9 2. 41 3. 0 4. 14 1. $\bar x= 2.5,\; \tilde{x} = 2.5,\; \text{mode} = \{1,2,3,4\}$ 2. $\bar x= 3,\; \tilde{x} = 2,\; \text{mode} = 2$ 3. $\bar x= 3,\; \tilde{x} = 2,\; \text{mode} = 2$ 4. $\{0, 0, 3\}$ 5. $\{0, 1, 1, 2\}$ 6. $\bar x = 146.9,\; \tilde x = 147.5$ 7. $\bar x=2.6 ,\; \tilde{x} = 2,\; \text{mode} = 2$ 8. $\bar x= 48.96,\; \tilde{x} = 49,\; \text{mode} = 49$ 1. No, the survival times of the fourth and fifth mice are unknown. 2. Yes, $\tilde{x}=421$. 9. $\bar x= 28.55,\; \tilde{x} = 28,\; \text{mode} = 28$ 10. $\bar x= 2.05,\; \tilde{x} = 2,\; \text{mode} = 1$ 11. Mean: $nx_{min}\leq \sum x$ so dividing by $n$ yields $x_{min}\leq \bar{x}$, so the minimum value is not above average. Median: the middle measurement, or average of the two middle measurements, $\tilde{x}$, is at least as large as $x_{min}$, so the minimum value is not above average. Mode: the mode is one of the measurements, and is not greater than itself 1. $\bar x= 3.18,\; \tilde{x} = 3,\; \text{mode} = 5$ 2. $\bar x= 6.18,\; \tilde{x} = 6,\; \text{mode} = 8$ 3. $\bar x= -2.81,\; \tilde{x} = -3,\; \text{mode} = -1$ 4. If a number is added to every measurement in a data set, then the mean, median, and mode all change by that number. 1. $\mu = 1528.74$ 2. $\bar{x}=1502.8$ 3. $\bar{x}=1532.2$ 1. $\bar x= 553.4286,\; \tilde{x} = 552.5$ 2. $\bar x= 665.9692,\; \tilde{x} = 667$ 3. $\bar x= 455.8933,\; \tilde{x} = 448$ 2.3 Measures of Variability Basic 1. Find the range, the variance, and the standard deviation for the following sample. $1\; 2\; 3\; 4$ 2. Find the range, the variance, and the standard deviation for the following sample. $2\; -3\; 6\; 0\; 3\; 1$ 3. Find the range, the variance, and the standard deviation for the following sample. $2\; 1\; 2\; 7$ 4. Find the range, the variance, and the standard deviation for the following sample. $-1\; 0\; 1\; 4\; 1\; 1$ 5. Find the range, the variance, and the standard deviation for the sample represented by the data frequency table. $\begin{array}{c|c c c} x & 1 & 2 & 7 \ \hline f &1 &2 &1\ \end{array}$ 6. Find the range, the variance, and the standard deviation for the sample represented by the data frequency table. $\begin{array}{c|c c c c} x & -1 & 0 & 1 & 4 \ \hline f &1 &1 &3 &1\ \end{array}$ Applications 1. Find the range, the variance, and the standard deviation for the sample of ten IQ scores randomly selected from a school for academically gifted students. $\begin{matrix} 132 & 162 & 133 & 145 & 148\ 139 & 147 & 160 & 150 & 153 \end{matrix}$ 2. Find the range, the variance and the standard deviation for the sample of ten IQ scores randomly selected from a school for academically gifted students. $\begin{matrix} 142 & 152 & 138 & 145 & 148\ 139 & 147 & 155 & 150 & 153 \end{matrix}$ Additional Exercises 1. 
Consider the data set represented by the table $\begin{array}{c|c c c c c c c} x & 26 & 27 & 28 & 29 & 30 & 31 & 32 \ \hline f &3 &4 &16 &12 &6 &2 &1\ \end{array}$
   1. Use the frequency table to find that $\sum x=1256$ and $\sum x^2=35,926$.
   2. Use the information in part (a) to compute the sample mean and the sample standard deviation.
2. Find the sample standard deviation for the data $\begin{array}{c|c c c c c} x & 1 & 2 & 3 & 4 & 5 \ \hline f &384 &208 &98 &56 &28 \ \end{array}$ $\begin{array}{c|c c c c c} x & 6 & 7 & 8 & 9 & 10 \ \hline f &12 &8 &2 &3 &1 \ \end{array}$
3. A random sample of $49$ invoices for repairs at an automotive body shop is taken. The data are arrayed in the stem and leaf diagram shown. (Stems are thousands of dollars, leaves are hundreds, so that for example the largest observation is $3,800$.) $\begin{array}{c|c c c c c c c c c c c} 3 & 5 & 6 & 8 \ 3 &0 &0 &1 &1 &2 &4 \ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \ 2 &0 &0 &0 &0 &1 &2 &2 &4 \ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \ 1 &0 &0 &1 &3 &4 &4 &4 \ 0 &5 &6 &8 &8 \ 0 &4 \end{array}$ For these data, $\sum x=101,100$, $\sum x^2=244,830,000$.
   1. Compute the mean, median, and mode.
   2. Compute the range.
   3. Compute the sample standard deviation.
4. What must be true of a data set if its standard deviation is $0$?
5. A data set consisting of $25$ measurements has standard deviation $0$. One of the measurements has value $17$. What are the other $24$ measurements?
6. Create a sample data set of size $n=3$ for which the range is $0$ and the sample mean is $2$.
7. Create a sample data set of size $n=3$ for which the sample variance is $0$ and the sample mean is $1$.
8. The sample $\{-1,0,1\}$ has mean $\bar{x}=0$ and standard deviation $s=1$. Create a sample data set of size $n=3$ for which $\bar{x}=0$ and $s$ is greater than $1$.
9. The sample $\{-1,0,1\}$ has mean $\bar{x}=0$ and standard deviation $s=1$. Create a sample data set of size $n=3$ for which $\bar{x}=0$ and the standard deviation $s$ is less than $1$.
10. Begin with the following set of data, call it $\text{Data Set I}$. $5\; -2\; 6\; 1\; 4\; -3\; 0\; 1\; 4\; 3\; 2\; 5$
   1. Compute the sample standard deviation of $\text{Data Set I}$.
   2. Form a new data set, $\text{Data Set II}$, by adding $3$ to each number in $\text{Data Set I}$. Calculate the sample standard deviation of $\text{Data Set II}$.
   3. Form a new data set, $\text{Data Set III}$, by subtracting $6$ from each number in $\text{Data Set I}$. Calculate the sample standard deviation of $\text{Data Set III}$.
   4. Comparing the answers to parts (a), (b), and (c), can you guess the pattern? State the general principle that you expect to be true.

Large Data Set Exercises

Note: For Large Data Set Exercises below, all of the data sets associated with these questions are missing, but the questions themselves are included here for reference.
1. $\text{Large Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students.
   1. Compute the range and sample standard deviation of the $1,000$ SAT scores.
   2. Compute the range and sample standard deviation of the $1,000$ GPAs.
2. $\text{Large Data Set 1}$ lists the SAT scores of $1,000$ students.
   1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population range and population standard deviation $\sigma$.
   2. Regard the first $25$ observations as a random sample drawn from this population.
Compute the sample range and sample standard deviation $s$ and compare them to the population range and $\sigma$. 3. Regard the next $25$ observations as a random sample drawn from this population. Compute the sample range and sample standard deviation $s$ and compare them to the population range and $\sigma$. 3. $\text{Large Data Set 1}$ lists the GPAs of $1,000$ students. 1. Regard the data as arising from a census of all freshman at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured. Compute the population range and population standard deviation $\sigma$. 2. Regard the first $25$ observations as a random sample drawn from this population. Compute the sample range and sample standard deviation $s$ and compare them to the population range and $\sigma$. 3. Regard the next $25$ observations as a random sample drawn from this population. Compute the sample range and sample standard deviation $s$ and compare them to the population range and $\sigma$. 4. $\text{Large Data Set 7, 7A, and 7B }$ list the survival times in days of $140$ laboratory mice with thymic leukemia from onset to death. 1. Compute the range and sample standard deviation of survival time for all mice, without regard to gender. 2. Compute the range and sample standard deviation of survival time for the $65$ male mice (separately recorded in $\text{Large Data Set 7A}$). 3. Compute the range and sample standard deviation of survival time for the $75$ female mice (separately recorded in $\text{Large Data Set 7B}$). Do you see a difference in the results for male and female mice? Does it appear to be significant? Answers 1. $R = 3,\; s^2 = 1.7,\; s = 1.3$. 2. $R = 6,\; s^2=7.\bar{3},\; s = 2.7$. 3. $R = 6,\; s^2=7.3,\; s = 2.7$. 1. $R = 30,\; s^2 = 103.2,\; s = 10.2$. 1. $\bar{x}=28.55,\; s = 1.3$. 1. $\bar{x}=2063,\; \tilde{x} =2000,\; \text{mode}=2000$. 2. $R = 3400$. 3. $s = 869$. 2. All are $17$. 3. $\{1,1,1\}$ 4. One example is $\{-.5,0,.5\}$. 1. $R = 1350$ and $s = 212.5455$ 2. $R = 4.00$ and $s = 0.7407$ 1. $R = 4.00$ and $\sigma = 0.740375$ 2. $R = 3.04$ and $s = 0.808045$ 3. $R = 2.49$ and $s = 0.657843$ 2.4 Relative Position of Data Basic 1. Consider the data set $\begin{matrix} 69 & 92 & 68 & 77 & 80\ 93 & 75 & 76 & 82 & 100\ 70 & 85 & 88 & 85 & 96\ 53 & 70 & 70 & 82 & 85 \end{matrix}$ 1. Find the percentile rank of $82$. 2. Find the percentile rank of $68$. 2. Consider the data set $\begin{matrix} 8.5 & 8.2 & 7.0 & 7.0 & 4.9\ 9.6 & 8.5 & 8.8 & 8.5 & 8.7\ 6.5 & 8.2 & 7.6 & 1.5 & 9.3\ 8.0 & 7.7 & 2.9 & 9.2 & 6.9 \end{matrix}$ 1. Find the percentile rank of $6.5$. 2. Find the percentile rank of $7.7$. 3. Consider the data set represented by the ordered stem and leaf diagram $\begin{array}{c|c c c c c c c c c c c c c c c c c c} 10 & 0 & 0 \ 9 &1 &1 &1 &1 &2 &3\ 8 &0 &1 &1 &2 &2 &3 &4 &5 &7 &8 &8 &9\ 7 &0 &0 &0 &1 &1 &2 &4 &4 &5 &6 &6 &6 &7 &7 &7 &8 &8 &9\ 6 &0 &1 &2 &2 &2 &3 &4 &4 &5 &7 &7 &7 &7 &8 &8\ 5 &0 &2 &3 &3 &4 &4 &6 &7 &7 &8 &9\ 4 &2 &5 &6 &8 &8\ 3 &9 &9 \end{array}$ 1. Find the percentile rank of the grade $75$. 2. Find the percentile rank of the grade $57$. 4. Is the $90^{th}$ percentile of a data set always equal to $90\%$? Why or why not? 5. The $29^{th}$ percentile in a large data set is $5$. 1. Approximately what percentage of the observations are less than $5$? 2. Approximately what percentage of the observations are greater than $5$? 6. The $54^{th}$ percentile in a large data set is $98.6$. 1. 
Approximately what percentage of the observations are less than $98.6$?
   2. Approximately what percentage of the observations are greater than $98.6$?
7. In a large data set the $29^{th}$ percentile is $5$ and the $79^{th}$ percentile is $10$. Approximately what percentage of observations lie between $5$ and $10$?
8. In a large data set the $40^{th}$ percentile is $125$ and the $82^{nd}$ percentile is $158$. Approximately what percentage of observations lie between $125$ and $158$?
9. Find the five-number summary and the IQR and sketch the box plot for the sample represented by the stem and leaf diagram in Figure 2.1.2 "Ordered Stem and Leaf Diagram".
10. Find the five-number summary and the IQR and sketch the box plot for the sample explicitly displayed in "Example 2.2.7" in Section 2.2.
11. Find the five-number summary and the IQR and sketch the box plot for the sample represented by the data frequency table $\begin{array}{c|c c c c c} x & 1 & 2 & 5 & 8 & 9 \ \hline f &5 &2 &3 &6 &4\ \end{array}$
12. Find the five-number summary and the IQR and sketch the box plot for the sample represented by the data frequency table $\begin{array}{c|c c c c c c c c c} x & -5 & -3 & -2 & -1 & 0 & 1 & 3 & 4 & 5 \ \hline f &2 &1 &3 &2 &4 &1 &1 &2 &1\ \end{array}$
13. Find the $z$-score of each measurement in the following sample data set. $-5\; \; 6\; \; 2\; \; -1\; \; 0$
14. Find the $z$-score of each measurement in the following sample data set. $1.6\; \; 5.2\; \; 2.8\; \; 3.7\; \; 4.0$
15. The sample with data frequency table $\begin{array}{c|c c c} x & 1 & 2 & 7 \ \hline f &1 &2 &1\ \end{array}$ has mean $\bar{x}=3$ and standard deviation $s\approx 2.71$. Find the $z$-score for every value in the sample.
16. The sample with data frequency table $\begin{array}{c|c c c c} x & -1 & 0 & 1 & 4 \ \hline f &1 &1 &3 &1\ \end{array}$ has mean $\bar{x}=1$ and standard deviation $s\approx 1.67$. Find the $z$-score for every value in the sample.
17. For the population $0\; \; 0\; \; 2\; \; 2$ compute each of the following.
   1. The population mean $\mu$.
   2. The population variance $\sigma ^2$.
   3. The population standard deviation $\sigma$.
   4. The $z$-score for every value in the population data set.
18. For the population $0.5\; \; 2.1\; \; 4.4\; \; 1.0$ compute each of the following.
   1. The population mean $\mu$.
   2. The population variance $\sigma ^2$.
   3. The population standard deviation $\sigma$.
   4. The $z$-score for every value in the population data set.
19. A measurement $x$ in a sample with mean $\bar{x}=10$ and standard deviation $s=3$ has $z$-score $z=2$. Find $x$.
20. A measurement $x$ in a sample with mean $\bar{x}=10$ and standard deviation $s=3$ has $z$-score $z=-1$. Find $x$.
21. A measurement $x$ in a population with mean $\mu =2.3$ and standard deviation $\sigma =1.3$ has $z$-score $z=2$. Find $x$.
22. A measurement $x$ in a population with mean $\mu =2.3$ and standard deviation $\sigma =1.3$ has $z$-score $z=-1.2$. Find $x$.

Applications
1. The weekly sales for the last $20$ weeks in a kitchen appliance store for an electric automatic rice cooker are $\begin{matrix} 20 & 15 & 14 & 14 & 18\ 15 & 19 & 12 & 13 & 9\ 15 & 17 & 16 & 16 & 18\ 19 & 15 & 15 & 16 & 15 \end{matrix}$
   1. Find the percentile rank of $15$.
   2. If the sample accurately reflects the population, then what percentage of weeks would an inventory of $15$ rice cookers be adequate?
2. The table shows the number of vehicles owned in a survey of 52 households.
$\begin{array}{c|c c c c c c c c} x & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \ \hline f &2 &12 &15 &11 &6 &3 &1 &2\ \end{array}$
   1. Find the percentile rank of $2$.
   2. If the sample accurately reflects the population, then what percentage of households have at most two vehicles?
3. For two months Cordelia records her daily commute time to work each day to the nearest minute and obtains the following data: $\begin{array}{c|c c c c c c c} x & 26 & 27 & 28 & 29 & 30 & 31 & 32 \ \hline f &3 &4 &16 &12 &6 &2 &1 \ \end{array}$ Cordelia is supposed to be at work at $8:00\; a.m.$ but refuses to leave her house before $7:30\; a.m.$
   1. Find the percentile rank of $30$, the time she has to get to work.
   2. Assuming that the sample accurately reflects the population of all of Cordelia's commute times, use your answer to part (a) to predict the proportion of the work days she is late for work.
4. The mean score on a standardized grammar exam is $49.6$; the standard deviation is $1.35$. Dromio is told that the $z$-score of his exam score is $-1.19$.
   1. Is Dromio's score above average or below average?
   2. What was Dromio's actual score on the exam?
5. A random sample of $49$ invoices for repairs at an automotive body shop is taken. The data are arrayed in the stem and leaf diagram shown. (Stems are thousands of dollars, leaves are hundreds, so that for example the largest observation is $3,800$.) $\begin{array}{c|c c c c c c c c c c c} 3 & 5 & 6 & 8 \ 3 &0 &0 &1 &1 &2 &4 \ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \ 2 &0 &0 &0 &0 &1 &2 &2 &4 \ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \ 1 &0 &0 &1 &3 &4 &4 &4 \ 0 &5 &6 &8 &8 \ 0 &4 \end{array}$ For these data, $\sum x=101,100$, $\sum x^2=244,830,000$.
   1. Find the $z$-score of the repair that cost $\$1,100$.
   2. Find the $z$-score of the repairs that cost $\$2,700$.
6. The stem and leaf diagram shows the time in seconds that callers to a telephone-order center were on hold before their call was taken. $\begin{array}{c|c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c} 0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &1 &1 &1 &1 &1 &2 &2 &2 &2 &2 &3 &3 &3 &3 &3 &3 &3 &4 &4 &4 &4 &4 \ 0 &5 &5 &5 &5 &5 &5 &5 &5 &5 &6 &6 &6 &6 &6 &6 &6 &6 &6 &6 &7 &7 &7 &7 &7 &7 &8 &8 &8 &9 &9 \ 1 &0 &0 &1 &1 &1 &1 &2 &2 &2 &2 &4 &4 \ 1 &5 &6 &6 &8 &9 \ 2 &2 &4 \ 2 &5 \ 3 &0 \ \end{array}$
   1. Find the quartiles.
   2. Give the five-number summary of the data.
   3. Find the range and the IQR.

Additional Exercises
1. Consider the data set represented by the ordered stem and leaf diagram $\begin{array}{c|c c c c c c c c c c c c c c c c c c} 10 &0 &0 \ 9 &1 &1 &1 &1 &2 &3\ 8 &0 &1 &1 &2 &2 &3 &4 &5 &7 &8 &8 &9\ 7 &0 &0 &0 &1 &1 &2 &4 &4 &5 &6 &6 &6 &7 &7 &7 &8 &8 &9\ 6 &0 &1 &2 &2 &2 &3 &4 &4 &5 &7 &7 &7 &7 &8 &8\ 5 &0 &2 &3 &3 &4 &4 &6 &7 &7 &8 &9\ 4 &2 &5 &6 &8 &8\ 3 &9 &9 \end{array}$
   1. Find the three quartiles.
   2. Give the five-number summary of the data.
   3. Find the range and the IQR.
2. For the following stem and leaf diagram the units on the stems are thousands and the units on the leaves are hundreds, so that for example the largest observation is $3,800$. $\begin{array}{c|c c c c c c c c c c c} 3 &5 &6 &8 \ 3 &0 &0 &1 &1 &2 &4\ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \ 2 &0 &0 &0 &0 &1 &2 &2 &4 \ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \ 1 &0 &0 &1 &3 &4 &4 &4 \ 0 &5 &6 &8 &8\ 0 &4 \end{array}$
   1. Find the percentile rank of $800$.
   2. Find the percentile rank of $3,200$.
3. Find the five-number summary for the following sample data.
$\begin{array}{c|c c c c c c c} x &26 &27 &28 &29 &30 &31 &32 \ \hline f &3 &4 &16 &12 &6 &2 &1\ \end{array}$
4. Find the five-number summary for the following sample data. $\begin{array}{c|c c c c c c c c c c} x &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 \ \hline f &384 &208 &98 &56 &28 &12 &8 &2 &3 &1\ \end{array}$
5. For the following stem and leaf diagram the units on the stems are thousands and the units on the leaves are hundreds, so that for example the largest observation is $3,800$. $\begin{array}{c|c c c c c c c c c c c} 3 &5 &6 &8 \ 3 &0 &0 &1 &1 &2 &4\ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \ 2 &0 &0 &0 &0 &1 &2 &2 &4 \ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \ 1 &0 &0 &1 &3 &4 &4 &4 \ 0 &5 &6 &8 &8\ 0 &4 \end{array}$
   1. Find the three quartiles.
   2. Find the IQR.
   3. Give the five-number summary of the data.
6. Determine whether the following statement is true. "In any data set, if an observation $x_1$ is greater than another observation $x_2$, then the $z$-score of $x_1$ is greater than the $z$-score of $x_2$."
7. Emilia and Ferdinand took the same freshman chemistry course, Emilia in the fall, Ferdinand in the spring. Emilia made an $83$ on the common final exam that she took, on which the mean was $76$ and the standard deviation $8$. Ferdinand made a $79$ on the common final exam that he took, which was more difficult, since the mean was $65$ and the standard deviation $12$. The one who has a higher $z$-score did relatively better. Was it Emilia or Ferdinand?
8. Refer to the previous exercise. On the final exam in the same course the following semester, the mean is $68$ and the standard deviation is $9$. What grade on the exam matches Emilia's performance? Ferdinand's?
9. Rosencrantz and Guildenstern are on a weight-reducing diet. Rosencrantz, who weighs $178\; lb$, belongs to an age and body-type group for which the mean weight is $145\; lb$ and the standard deviation is $15\; lb$. Guildenstern, who weighs $204\; lb$, belongs to an age and body-type group for which the mean weight is $165\; lb$ and the standard deviation is $20\; lb$. Assuming $z$-scores are good measures for comparison in this context, who is more overweight for his age and body type?

Large Data Set Exercises

Note: For Large Data Set Exercises below, all of the data sets associated with these questions are missing, but the questions themselves are included here for reference.
1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students.
   1. Compute the three quartiles and the interquartile range of the $1,000$ SAT scores.
   2. Compute the three quartiles and the interquartile range of the $1,000$ GPAs.
2. Large $\text{Data Set 10}$ records the scores of $72$ students on a statistics exam.
   1. Compute the five-number summary of the data.
   2. Describe in words the performance of the class on the exam in the light of the result in part (a).
3. Large $\text{Data Sets 3 and 3A}$ list the heights of $174$ customers entering a shoe store.
   1. Compute the five-number summary of the heights, without regard to gender.
   2. Compute the five-number summary of the heights of the men in the sample.
   3. Compute the five-number summary of the heights of the women in the sample.
4. Large $\text{Data Sets 7, 7A, and 7B}$ list the survival times in days of $140$ laboratory mice with thymic leukemia from onset to death.
   1. Compute the three quartiles and the interquartile range of the survival times for all mice, without regard to gender.
   2.
Compute the three quartiles and the interquartile range of the survival times for the $65$ male mice (separately recorded in $\text{Data Set 7A}$). 3. Compute the three quartiles and the interquartile range of the survival times for the $75$ female mice (separately recorded in $\text{Data Set 7B}$). Answer 1. 60 2. 10 1. 59 2. 23 1. 29 2. 71 1. $50\%$ 2. $x_{min}=25,\; \; Q_1=70,\; \; Q_2=77.5,\; \; Q_3=90,\; \; x_{max}=100, \; \; IQR=20$ 3. $x_{min}=1,\; \; Q_1=1.5,\; \; Q_2=6.5,\; \; Q_3=8,\; \; x_{max}=9, \; \; IQR=6.5$ 4. $-1.3,\; 1.39,\; 0.4,\; -0.35,\; -0.11$ 5. $z=-0.74\; \text{for}\; x = 1,\; z=-0.37\; \text{for}\; x = 2,\; z = 1.48\; \text{for}\; x = 7$ 1. 1 2. 1 3. 1 4. $z=-1\; \text{for}\; x = 0,\; z=1\; \text{for}\; x = 2$ 6. 16 7. 4.9 1. 55 2. 55 1. 93 2. 0.07 1. -1.11 2. 0.73 1. $Q_1=59,\; Q_2=70,\; Q_3=81$ 2. $x_{min}=39,\; Q_1=59,\; Q_2=70,\; Q_3=81,\; x_{max}=100$ 3. $R = 61,\; IQR=22$ 8. $x_{min}=26,\; Q_1=28,\; Q_2=28,\; Q_3=29,\; x_{max}=32$ 1. $Q_1=1450,\; Q_2=2000,\; Q_3=2800$ 2. $IQR=1350$ 3. $x_{min}=400,\; Q_1=1450,\; Q_2=2000,\; Q_3=2800,\; x_{max}=3800$ 9. Emilia: $z=0.875$, Ferdinand: $z=1.1\bar{6}$ 10. Rosencrantz: $z=2.2$, Guildenstern: $z=1.95$. Rosencrantz is more overweight for his age and body type. 1. $x_{min}=15,\; Q_1=51,\; Q_2=67,\; Q_3=82,\; x_{max}=97$ 2. The data set appears to be skewed to the left. 1. $Q_1=440,\; Q_2=552.5,\; Q_3=661\; \; \text{and}\; \; IQR=221$ 2. $Q_1=641,\; Q_2=667,\; Q_3=700\; \; \text{and}\; \; IQR=59$ 3. $Q_1=407,\; Q_2=448,\; Q_3=504\; \; \text{and}\; \; IQR=97$ 2.5 The Empirical Rule and Chebyshev's Theorem Basic 1. State the Empirical Rule. 2. Describe the conditions under which the Empirical Rule may be applied. 3. State Chebyshev’s Theorem. 4. Describe the conditions under which Chebyshev’s Theorem may be applied. 5. A sample data set with a bell-shaped distribution has mean $\bar{x}=6$ and standard deviation $s=2$. Find the approximate proportion of observations in the data set that lie: 1. between $4$ and $8$; 2. between $2$ and $10$; 3. between $0$ and $12$. 6. A population data set with a bell-shaped distribution has mean $\mu =6$ and standard deviation $\sigma =2$. Find the approximate proportion of observations in the data set that lie: 1. between $4$ and $8$; 2. between $2$ and $10$; 3. between $0$ and $12$. 7. A population data set with a bell-shaped distribution has mean $\mu =2$ and standard deviation $\sigma =1.1$. Find the approximate proportion of observations in the data set that lie: 1. above $2$; 2. above $3.1$; 3. between $2$ and $3.1$. 8. A sample data set with a bell-shaped distribution has mean $\bar{x}=2$ and standard deviation $s=1.1$. Find the approximate proportion of observations in the data set that lie: 1. below $-0.2$; 2. below $3.1$; 3. between $-1.3$ and $0.9$. 9. A population data set with a bell-shaped distribution and size $N=500$ has mean $\mu =2$ and standard deviation $\sigma =1.1$. Find the approximate number of observations in the data set that lie: 1. above $2$; 2. above $3.1$; 3. between $2$ and $3.1$. 10. A sample data set with a bell-shaped distribution and size $n=128$ has mean $\bar{x}=2$ and standard deviation $s=1.1$. Find the approximate number of observations in the data set that lie: 1. below $-0.2$; 2. below $3.1$; 3. between $-1.3$ and $0.9$. 11. A sample data set has mean $\bar{x}=6$ and standard deviation $s=2$. Find the minimum proportion of observations in the data set that must lie: 1. between $2$ and $10$; 2. between $0$ and $12$; 3. between $4$ and $8$. 12.
A population data set has mean $\mu =2$ and standard deviation $\sigma =1.1$. Find the minimum proportion of observations in the data set that must lie: 1. between $-0.2$ and $4.2$; 2. between $-1.3$ and $5.3$. 13. A population data set of size $N=500$ has mean $\mu =5.2$ and standard deviation $\sigma =1.1$. Find the minimum number of observations in the data set that must lie: 1. between $3$ and $7.4$; 2. between $1.9$ and $8.5$. 14. A sample data set of size $n=128$ has mean $\bar{x}=2$ and standard deviation $s=2$. Find the minimum number of observations in the data set that must lie: 1. between $-2$ and $6$ (including $-2$ and $6$); 2. between $-4$ and $8$ (including $-4$ and $8$). 15. A sample data set of size $n=30$ has mean $\bar{x}=6$ and standard deviation $s=2$. 1. What is the maximum proportion of observations in the data set that can lie outside the interval $(2,10)$? 2. What can be said about the proportion of observations in the data set that are below $2$? 3. What can be said about the proportion of observations in the data set that are above $10$? 4. What can be said about the number of observations in the data set that are above $10$? 16. A population data set has mean $\mu =2$ and standard deviation $\sigma =1.1$. 1. What is the maximum proportion of observations in the data set that can lie outside the interval $(-1.3,5.3)$? 2. What can be said about the proportion of observations in the data set that are below $-1.3$? 3. What can be said about the proportion of observations in the data set that are above $5.3$? Applications 1. Scores on a final exam taken by $1,200$ students have a bell-shaped distribution with mean $72$ and standard deviation $9$. 1. What is the median score on the exam? 2. About how many students scored between $63$ and $81$? 3. About how many students scored between $72$ and $90$? 4. About how many students scored below $54$? 2. Lengths of fish caught by a commercial fishing boat have a bell-shaped distribution with mean $23$ inches and standard deviation $1.5$ inches. 1. About what proportion of all fish caught are between $20$ inches and $26$ inches long? 2. About what proportion of all fish caught are between $20$ inches and $23$ inches long? 3. About how long is the longest fish caught (only a small fraction of a percent are longer)? 3. Hockey pucks used in professional hockey games must weigh between $5.5$ and $6$ ounces. If the weight of pucks manufactured by a particular process is bell-shaped, has mean $5.75$ ounces and standard deviation $0.125$ ounce, what proportion of the pucks will be usable in professional games? 4. Hockey pucks used in professional hockey games must weigh between $5.5$ and $6$ ounces. If the weight of pucks manufactured by a particular process is bell-shaped and has mean $5.75$ ounces, how large can the standard deviation be if $99.7\%$ of the pucks are to be usable in professional games? 5. Speeds of vehicles on a section of highway have a bell-shaped distribution with mean $60\; mph$ and standard deviation $2.5\; mph$. 1. If the speed limit is $55\; mph$, about what proportion of vehicles are speeding? 2. What is the median speed for vehicles on this highway? 3. What is the percentile rank of the speed $65\; mph$? 4. What speed corresponds to the $16^{th}$ percentile? 6. Suppose that, as in the previous exercise, speeds of vehicles on a section of highway have mean $60\; mph$ and standard deviation $2.5\; mph$, but now the distribution of speeds is unknown. 1.
If the speed limit is $55\; mph$, at least what proportion of vehicles must be speeding? 2. What can be said about the proportion of vehicles going $65\; mph$ or faster? 7. An instructor announces to the class that the scores on a recent exam had a bell-shaped distribution with mean $75$ and standard deviation $5$. 1. What is the median score? 2. Approximately what proportion of students in the class scored between $70$ and $80$? 3. Approximately what proportion of students in the class scored above $85$? 4. What is the percentile rank of the score $85$? 8. The GPAs of all currently registered students at a large university have a bell-shaped distribution with mean $2.7$ and standard deviation $0.6$. Students with a GPA below $1.5$ are placed on academic probation. Approximately what percentage of currently registered students at the university are on academic probation? 9. Thirty-six students took an exam on which the average was $80$ and the standard deviation was $6$. A rumor says that five students had scores $61$ or below. Can the rumor be true? Why or why not? Additional Exercises 1. For the sample data $\begin{array}{c|c c c c c c c} x &26 &27 &28 &29 &30 &31 &32 \\ \hline f &3 &4 &16 &12 &6 &2 &1\\ \end{array}$ $\sum x=1,256\; \; \text{and}\; \; \sum x^2=35,926$ 1. Compute the mean and the standard deviation. 2. About how many of the measurements does the Empirical Rule predict will be in the interval $\left (\bar{x}-s,\bar{x}+s \right )$, the interval $\left (\bar{x}-2s,\bar{x}+2s \right )$, and the interval $\left (\bar{x}-3s,\bar{x}+3s \right )$? 3. Compute the number of measurements that are actually in each of the intervals listed in part (b), and compare to the predicted numbers. 2. A sample of size $n = 80$ has mean $139$ and standard deviation $13$, but nothing else is known about it. 1. What can be said about the number of observations that lie in the interval $(126,152)$? 2. What can be said about the number of observations that lie in the interval $(113,165)$? 3. What can be said about the number of observations that exceed $165$? 4. What can be said about the number of observations that either exceed $165$ or are less than $113$? 3. For the sample data $\begin{array}{c|c c c c c } x &1 &2 &3 &4 &5 \\ \hline f &84 &29 &3 &3 &1\\ \end{array}$ $\sum x=168\; \; \text{and}\; \; \sum x^2=300$ 1. Compute the sample mean and the sample standard deviation. 2. Considering the shape of the data set, do you expect the Empirical Rule to apply? Count the number of measurements within one standard deviation of the mean and compare it to the number predicted by the Empirical Rule. 3. What does Chebyshev’s Rule say about the number of measurements within one standard deviation of the mean? 4. Count the number of measurements within two standard deviations of the mean and compare it to the minimum number guaranteed by Chebyshev’s Theorem to lie in that interval. 4. For the sample data set $\begin{array}{c|c c c c c } x &47 &48 &49 &50 &51 \\ \hline f &1 &3 &18 &2 &1\\ \end{array}$ $\sum x=1224\; \; \text{and}\; \; \sum x^2=59,940$ 1. Compute the sample mean and the sample standard deviation. 2. Considering the shape of the data set, do you expect the Empirical Rule to apply? Count the number of measurements within one standard deviation of the mean and compare it to the number predicted by the Empirical Rule. 3. What does Chebyshev’s Rule say about the number of measurements within one standard deviation of the mean? 4.
Count the number of measurements within two standard deviations of the mean and compare it to the minimum number guaranteed by Chebyshev’s Theorem to lie in that interval. Answers 1. See the displayed statement in the text. 2. See the displayed statement in the text. 1. $0.68$ 2. $0.95$ 3. $0.997$ 1. $0.5$ 2. $0.16$ 3. $0.34$ 1. $250$ 2. $80$ 3. $170$ 1. $3/4$ 2. $8/9$ 3. $0$ 1. $375$ 2. $445$ 1. At most $0.25$. 2. At most $0.25$. 3. At most $0.25$. 4. At most $7$. 1. $72$ 2. $816$ 3. $570$ 4. $30$ 3. $0.95$ 1. $0.975$ 2. $60$ 3. $97.5$ 4. $57.5$ 1. $75$ 2. $0.68$ 3. $0.025$ 4. $0.975$ 4. By Chebyshev’s Theorem at most $1/9$ of the scores can be below $62$, so the rumor is impossible. 1. Nothing. 2. It is at least $60$. 3. It is at most $20$. 4. It is at most $20$. 1. $\bar{x}=48.96$, $s = 0.7348$. 2. Roughly bell-shaped, the Empirical Rule should apply. True count: $18$, Predicted: $17$. 3. Nothing. 4. True count: $23$, Guaranteed: at least $18.75$, hence at least $19$. • Anonymous
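The frequency-table exercises above all reduce to the same mechanical steps: expand the table into a data list, compute the mean and standard deviation, and count the observations within $k$ standard deviations of the mean. The following Python sketch is an editorial illustration (not part of the original exercise set) using the commute-time data of Additional Exercise 1; it can be adapted to check any of the answers above.

```python
from math import sqrt

# Frequency table from Additional Exercise 1 (commute times, Section 2.5).
values = [26, 27, 28, 29, 30, 31, 32]
freqs = [3, 4, 16, 12, 6, 2, 1]

# Expand the table into an explicit data list.
data = [x for x, f in zip(values, freqs) for _ in range(f)]
n = len(data)
mean = sum(data) / n
s = sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample standard deviation
print(f"n = {n}, mean = {mean:.2f}, s = {s:.4f}")

# Actual counts within k standard deviations, to compare with the
# Empirical Rule predictions (about 68%, 95%, 99.7% of n).
for k in (1, 2, 3):
    count = sum(1 for x in data if mean - k * s <= x <= mean + k * s)
    print(f"within {k} standard deviation(s): {count} of {n}")
```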
Suppose a polling organization questions 1,200 voters in order to estimate the proportion of all voters who favor a particular bond issue. We would expect the proportion of the 1,200 voters in the survey who are in favor to be close to the proportion of all voters who are in favor, but this need not be true. There is a degree of randomness associated with the survey result. If the survey result is highly likely to be close to the true proportion, then we have confidence in the survey result. If it is not particularly likely to be close to the population proportion, then we would perhaps not take the survey result too seriously. The likelihood that the survey proportion is close to the population proportion determines our confidence in the survey result. For that reason, we would like to be able to compute that likelihood. The task of computing it belongs to the realm of probability, which we study in this chapter. • 3.1: Sample Spaces, Events, and Their Probabilities The sample space of a random experiment is the collection of all possible outcomes. An event associated with a random experiment is a subset of the sample space. The probability of any outcome is a number between 0 and 1. The probabilities of all the outcomes add up to 1. The probability of any event A is the sum of the probabilities of the outcomes in A. • 3.2: Complements, Intersections, and Unions Some events can be naturally expressed in terms of other, sometimes simpler, events. • 3.3: Conditional Probability and Independent Events A conditional probability is the probability that an event has occurred, taking into account additional information about the result of the experiment. A conditional probability can always be computed using the formula in the definition. Sometimes it can be computed by discarding part of the sample space. Two events A and B are independent if the probability P(A∩B) of their intersection A ∩ B is equal to the product P(A)⋅P(B) of their individual probabilities. • 3.E: Basic Concepts of Probability (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 03: Basic Concepts of Probability Learning Objectives • To learn the concept of the sample space associated with a random experiment. • To learn the concept of an event associated with a random experiment. • To learn the concept of the probability of an event. Sample Spaces and Events Rolling an ordinary six-sided die is a familiar example of a random experiment, an action for which all possible outcomes can be listed, but for which the actual outcome on any given trial of the experiment cannot be predicted with certainty. In such a situation we wish to assign to each outcome, such as rolling a two, a number, called the probability of the outcome, that indicates how likely it is that the outcome will occur. Similarly, we would like to assign a probability to any event, or collection of outcomes, such as rolling an even number, which indicates how likely it is that the event will occur if the experiment is performed. This section provides a framework for discussing probability problems, using the terms just mentioned. Definition: random experiment A random experiment is a mechanism that produces a definite outcome that cannot be predicted with certainty. The sample space associated with a random experiment is the set of all possible outcomes. An event is a subset of the sample space. 
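To make the definitions concrete, here is a minimal Python sketch (an editorial illustration, not part of the original text) that models the die-rolling experiment: the sample space is a set, an event is a subset, and an event occurs on a trial exactly when the observed outcome is an element of it.

```python
# Sample space for rolling a single die, modeled as a Python set.
S = {1, 2, 3, 4, 5, 6}

# An event is a subset of the sample space.
E = {x for x in S if x % 2 == 0}  # "an even number is rolled"

outcome = 4                # the result of one trial of the experiment
print(E.issubset(S))       # True: every event is a subset of S
print(outcome in E)        # True: the event E occurred on this trial
```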
Definition: Element and Occurrence An event $E$ is said to occur on a particular trial of the experiment if the outcome observed is an element of the set $E$. Example $1$: Sample Space for a single coin Construct a sample space for the experiment that consists of tossing a single coin. Solution The outcomes could be labeled $h$ for heads and $t$ for tails. Then the sample space is the set: $S = \{h,t\}$ Example $2$: Sample Space for a single die Construct a sample space for the experiment that consists of rolling a single die. Find the events that correspond to the phrases “an even number is rolled” and “a number greater than two is rolled.” Solution The outcomes could be labeled according to the number of dots on the top face of the die. Then the sample space is the set $S = \{1,2,3,4,5,6\}$ The outcomes that are even are $2, 4,\; \; \text{and}\; \; 6$, so the event that corresponds to the phrase “an even number is rolled” is the set $\{2,4,6\}$, which it is natural to denote by the letter $E$. We write $E=\{2,4,6\}$. Similarly the event that corresponds to the phrase “a number greater than two is rolled” is the set $T=\{3,4,5,6\}$, which we have denoted $T$. A graphical representation of a sample space and events is a Venn diagram, as shown in Figure $1$. In general the sample space $S$ is represented by a rectangle, outcomes by points within the rectangle, and events by ovals that enclose the outcomes that compose them. Example $3$: Sample Spaces for two coins A random experiment consists of tossing two coins. 1. Construct a sample space for the situation that the coins are indistinguishable, such as two brand new pennies. 2. Construct a sample space for the situation that the coins are distinguishable, such as one a penny and the other a nickel. Solution 1. After the coins are tossed one sees either two heads, which could be labeled $2h$, two tails, which could be labeled $2t$, or coins that differ, which could be labeled $d$. Thus a sample space is $S=\{2h, 2t, d\}$. 2. Since we can tell the coins apart, there are now two ways for the coins to differ: the penny heads and the nickel tails, or the penny tails and the nickel heads. We can label each outcome as a pair of letters, the first of which indicates how the penny landed and the second of which indicates how the nickel landed. A sample space is then $S' = \{hh, ht, th, tt\}$. A device that can be helpful in identifying all possible outcomes of a random experiment, particularly one that can be viewed as proceeding in stages, is what is called a tree diagram. It is described in the following example. Example $4$: Tree diagram Construct a sample space that describes all three-child families according to the genders of the children with respect to birth order. Solution Two of the outcomes are “two boys then a girl,” which we might denote $bbg$, and “a girl then two boys,” which we would denote $gbb$. Clearly there are many outcomes, and when we try to list all of them it could be difficult to be sure that we have found them all unless we proceed systematically. The tree diagram shown in Figure $2$ gives a systematic approach. The diagram was constructed as follows.
There are two possibilities for the first child, boy or girl, so we draw two line segments coming out of a starting point, one ending in a $b$ for “boy” and the other ending in a $g$ for “girl.” For each of these two possibilities for the first child there are two possibilities for the second child, “boy” or “girl,” so from each of the $b$ and $g$ we draw two line segments, one segment ending in a $b$ and one in a $g$. For each of the four ending points now in the diagram there are two possibilities for the third child, so we repeat the process once more. The line segments are called branches of the tree. The right ending point of each branch is called a node. The nodes on the extreme right are the final nodes; to each one there corresponds an outcome, as shown in the figure. From the tree it is easy to read off the eight outcomes of the experiment, so the sample space is, reading from the top to the bottom of the final nodes in the tree, $S=\{bbb,\; bbg,\; bgb,\; bgg,\; gbb,\; gbg,\; ggb,\; ggg\} \nonumber$ Probability Definition: probability The probability of an outcome $e$ in a sample space $S$ is a number $P$ between $0$ and $1$ that measures the likelihood that $e$ will occur on a single trial of the corresponding random experiment. The value $P=0$ corresponds to the outcome $e$ being impossible and the value $P=1$ corresponds to the outcome $e$ being certain. Definition: probability of an event The probability of an event $A$ is the sum of the probabilities of the individual outcomes of which it is composed. It is denoted $P(A)$. The following formula expresses the content of the definition of the probability of an event: If an event $E$ is $E=\{e_1,e_2,...,e_k\}$, then $P(E)=P(e_1)+P(e_2)+...+P(e_k) \nonumber$ Since the whole sample space $S$ is an event that is certain to occur, the sum of the probabilities of all the outcomes must be the number $1$. In ordinary language probabilities are frequently expressed as percentages. For example, we would say that there is a $70\%$ chance of rain tomorrow, meaning that the probability of rain is $0.70$. We will use this practice here, but in all the computational formulas that follow we will use the form $0.70$ and not $70\%$. Example $5$ A coin is called “balanced” or “fair” if each side is equally likely to land up. Assign a probability to each outcome in the sample space for the experiment that consists of tossing a single fair coin. Solution With the outcomes labeled $h$ for heads and $t$ for tails, the sample space is the set $S=\{h,t\} \nonumber$ Since the outcomes have the same probabilities, which must add up to $1$, each outcome is assigned probability $1/2$. Example $6$ A die is called “balanced” or “fair” if each side is equally likely to land on top. Assign a probability to each outcome in the sample space for the experiment that consists of rolling a single fair die. Find the probabilities of the events $E$: “an even number is rolled” and $T$: “a number greater than two is rolled.” Solution With outcomes labeled according to the number of dots on the top face of the die, the sample space is the set $S=\{1,2,3,4,5,6\} \nonumber$ Since there are six equally likely outcomes, whose probabilities must add up to $1$, each is assigned probability $1/6$.
Since $E = \{2,4,6\}$, $P(E) = \dfrac{1}{6} + \dfrac{1}{6} + \dfrac{1}{6} = \dfrac{3}{6} = \dfrac{1}{2} \nonumber$ Since $T = \{3,4,5,6\}$, $P(T) = \dfrac{4}{6} = \dfrac{2}{3} \nonumber$ Example $7$ Two fair coins are tossed. Find the probability that the coins match, i.e., either both land heads or both land tails. Solution In Example $3$ we constructed the sample space $S=\{2h,2t,d\}$ for the situation in which the coins are identical and the sample space $S′=\{hh,ht,th,tt\}$ for the situation in which the two coins can be told apart. The theory of probability does not tell us how to assign probabilities to the outcomes, only what to do with them once they are assigned. Specifically, using sample space $S$, matching coins is the event $M=\{2h, 2t\}$ which has probability $P(2h)+P(2t)$. Using sample space $S'$, matching coins is the event $M'=\{hh, tt\}$, which has probability $P(hh)+P(tt)$. In the physical world it should make no difference whether the coins are identical or not, and so we would like to assign probabilities to the outcomes so that the numbers $P(M)$ and $P(M')$ are the same and best match what we observe when actual physical experiments are performed with coins that seem to be fair. Actual experience suggests that the outcomes in $S'$ are equally likely, so we assign to each probability $\frac{1}{4}$, and then... $P(M') = P(hh) + P(tt) = \frac{1}{4} + \frac{1}{4} = \frac{1}{2} \nonumber$ Similarly, from experience appropriate choices for the outcomes in $S$ are: $P(2h) = \frac{1}{4} \nonumber$ $P(2t) = \frac{1}{4} \nonumber$ $P(d) = \frac{1}{2} \nonumber$ The previous three examples illustrate how probabilities can be computed simply by counting when the sample space consists of a finite number of equally likely outcomes. In some situations the individual outcomes of any sample space that represents the experiment are unavoidably unequally likely, in which case probabilities cannot be computed merely by counting, but the computational formula given in the definition of the probability of an event must be used. Example $8$ The breakdown of the student body in a local high school according to race and ethnicity is $51\%$ white, $27\%$ black, $11\%$ Hispanic, $6\%$ Asian, and $5\%$ for all others. A student is randomly selected from this high school. (To select “randomly” means that every student has the same chance of being selected.) Find the probabilities of the following events: 1. $B$: the student is black, 2. $M$: the student is minority (that is, not white), 3. $N$: the student is not black. Solution The experiment is the action of randomly selecting a student from the student population of the high school. An obvious sample space is $S=\{w,b,h,a,o\}$. Since $51\%$ of the students are white and all students have the same chance of being selected, $P(w)=0.51$, and similarly for the other outcomes. This information is summarized in the following table: $\begin{array}{l|ccccc}Outcome & w & b & h & a & o \\ Probability & 0.51 & 0.27 & 0.11 & 0.06 & 0.05\end{array} \nonumber$ 1. Since $B=\{b\},\; \; P(B)=P(b)=0.27$ 2. Since $M=\{b,h,a,o\},\; \; P(M)=P(b)+P(h)+P(a)+P(o)=0.27+0.11+0.06+0.05=0.49$ 3.
Since $N=\{w,h,a,o\},\; \; P(N)=P(w)+P(h)+P(a)+P(o)=0.51+0.11+0.06+0.05=0.73$ Example $9$ The student body in the high school considered in the last example may be broken down into ten categories as follows: $25\%$ white male, $26\%$ white female, $12\%$ black male, $15\%$ black female, $6\%$ Hispanic male, $5\%$ Hispanic female, $3\%$ Asian male, $3\%$ Asian female, $1\%$ male of other minorities combined, and $4\%$ female of other minorities combined. A student is randomly selected from this high school. Find the probabilities of the following events: 1. $B$: the student is black 2. $MF$: the student is a non-white female 3. $FN$: the student is female and is not black Solution Now the sample space is $S=\{wm, bm, hm, am, om, wf, bf, hf, af, of\}$. The information given in the example can be summarized in the following table, called a two-way contingency table: Gender Race / Ethnicity White Black Hispanic Asian Others Male 0.25 0.12 0.06 0.03 0.01 Female 0.26 0.15 0.05 0.03 0.04 1. Since $B=\{bm, bf\},\; \; P(B)=P(bm)+P(bf)=0.12+0.15=0.27$ 2. Since $MF=\{bf, hf, af, of\},\; \; P(MF)=P(bf)+P(hf)+P(af)+P(of)=0.15+0.05+0.03+0.04=0.27$ 3. Since $FN=\{wf, hf, af, of\},\; \; P(FN)=P(wf)+P(hf)+P(af)+P(of)=0.26+0.05+0.03+0.04=0.38$ Key Takeaway • The sample space of a random experiment is the collection of all possible outcomes. • An event associated with a random experiment is a subset of the sample space. • The probability of any outcome is a number between $0$ and $1$. The probabilities of all the outcomes add up to $1$. • The probability of any event $A$ is the sum of the probabilities of the outcomes in $A$.
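Computations like those in Examples $8$ and $9$ are just sums of outcome probabilities. The short Python sketch below (an editorial illustration using the figures of Example $9$) makes that bookkeeping explicit.

```python
# Outcome probabilities from Example 9 (gender by race/ethnicity).
p = {"wm": 0.25, "bm": 0.12, "hm": 0.06, "am": 0.03, "om": 0.01,
     "wf": 0.26, "bf": 0.15, "hf": 0.05, "af": 0.03, "of": 0.04}

def prob(event):
    """P(event) is the sum of the probabilities of its outcomes."""
    return sum(p[o] for o in event)

B = {"bm", "bf"}               # the student is black
MF = {"bf", "hf", "af", "of"}  # the student is a non-white female
FN = {"wf", "hf", "af", "of"}  # the student is female and not black

# Prints 0.27, 0.27, 0.38 (rounded to avoid floating-point noise).
print(round(prob(B), 2), round(prob(MF), 2), round(prob(FN), 2))
```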
Learning Objectives • To learn how some events are naturally expressible in terms of other events. • To learn how to use special formulas for the probability of an event that is expressed in terms of one or more other events. Some events can be naturally expressed in terms of other, sometimes simpler, events. Complements Definition: Complement The complement of an event $A$ in a sample space $S$, denoted $A^c$, is the collection of all outcomes in $S$ that are not elements of the set $A$. It corresponds to negating any description in words of the event $A$. Example $1$ Two events connected with the experiment of rolling a single die are $E$: “the number rolled is even” and $T$: “the number rolled is greater than two.” Find the complement of each. Solution In the sample space $S=\{1,2,3,4,5,6\}$ the corresponding sets of outcomes are $E=\{2,4,6\}$ and $T=\{3,4,5,6\}$. The complements are $E^c=\{1,3,5\}$ and $T^c=\{1,2\}$. In words the complements are described by “the number rolled is not even” and “the number rolled is not greater than two.” Of course easier descriptions would be “the number rolled is odd” and “the number rolled is less than three.” If there is a $60\%$ chance of rain tomorrow, what is the probability of fair weather? The obvious answer, $40\%$, is an instance of the following general rule. Definition: Probability Rule for Complements The Probability Rule for Complements states that $P(A^c) = 1 - P(A) \nonumber$ This formula is particularly useful when finding the probability of an event directly is difficult. Example $2$ Find the probability that at least one heads will appear in five tosses of a fair coin. Solution Identify outcomes by lists of five $h$s and $t$s, such as $tthtt$ and $hhttt$. Although it is tedious to list them all, it is not difficult to count them. Think of using a tree diagram to do so. There are two choices for the first toss. For each of these there are two choices for the second toss, hence $2\times 2 = 4$ outcomes for two tosses. For each of these four outcomes, there are two possibilities for the third toss, hence $4\times 2 = 8$ outcomes for three tosses. Similarly, there are $8\times 2 = 16$ outcomes for four tosses and finally $16\times 2 = 32$ outcomes for five tosses. Let $O$ denote the event “at least one heads.” There are many ways to obtain at least one heads, but only one way to fail to do so: all tails. Thus although it is difficult to list all the outcomes that form $O$, it is easy to write $O^c = \{ttttt\}$. Since there are $32$ equally likely outcomes, each has probability $\frac{1}{32}$, so $P(O^c)=1/32$, hence $P(O) = 1-\frac{1}{32}\approx 0.97$ or about a $97\%$ chance. Intersection of Events Definition: intersections The intersection of events $A$ and $B$, denoted $A\cap B$, is the collection of all outcomes that are elements of both of the sets $A$ and $B$. It corresponds to combining descriptions of the two events using the word “and.” To say that the event $A\cap B$ occurred means that on a particular trial of the experiment both $A$ and $B$ occurred. A visual representation of the intersection of events $A$ and $B$ in a sample space $S$ is given in Figure $1$. The intersection corresponds to the shaded lens-shaped region that lies within both ovals. Example $3$ In the experiment of rolling a single die, find the intersection $E\cap T$ of the events $E$: “the number rolled is even” and $T$: “the number rolled is greater than two.” Solution The sample space is $S=\{1,2,3,4,5,6\}$.
Since the outcomes that are common to $E=\{2,4,6\}$ and $T=\{3,4,5,6\}$ are $4$ and $6$, $E\cap T=\{4,6\}$. In words the intersection is described by “the number rolled is even and is greater than two.” The only numbers between one and six that are both even and greater than two are four and six, corresponding to $E\cap T$ given above. Example $4$ A single die is rolled. 1. Suppose the die is fair. Find the probability that the number rolled is both even and greater than two. 2. Suppose the die has been “loaded” so that $P(1)=\frac{1}{12}$, $P(6)=\frac{3}{12}$, and the remaining four outcomes are equally likely with one another. Now find the probability that the number rolled is both even and greater than two. Solution In both cases the sample space is $S=\{1,2,3,4,5,6\}$ and the event in question is the intersection $E\cap T=\{4,6\}$ of the previous example. 1. Since the die is fair, all outcomes are equally likely, so by counting we have $P(E\cap T)=\frac{2}{6}$. 2. The information on the probabilities of the six outcomes that we have so far is $\begin{array}{l|cccccc}Outcome & 1 & 2 & 3 & 4 & 5 & 6 \\ Probability & \frac{1}{12} & p & p & p & p & \frac{3}{12}\end{array} \nonumber$ Since $P(1)+P(6)=\frac{1}{12}+\frac{3}{12}=\frac{4}{12}=\frac{1}{3}$, $P(2) + P(3) + P(4) + P(5) = 1 - \frac{1}{3} = \frac{2}{3} \nonumber$ Thus $4p=\frac{2}{3}$, so $p=\frac{1}{6}$. In particular $P(4)=\frac{1}{6}$; therefore: $P(E\cap T) = P(4) + P(6) = \frac{1}{6} + \frac{3}{12} = \frac{5}{12} \nonumber$ Definition: mutually exclusive Events $A$ and $B$ are mutually exclusive (cannot both occur at once) if they have no elements in common. For $A$ and $B$ to have no outcomes in common means precisely that it is impossible for both $A$ and $B$ to occur on a single trial of the random experiment. This gives the following rule: Definition: Probability Rule for Mutually Exclusive Events Events $A$ and $B$ are mutually exclusive if and only if $P(A\cap B) = 0 \nonumber$ Any event $A$ and its complement $A^c$ are mutually exclusive, but $A$ and $B$ can be mutually exclusive without being complements. Example $5$ In the experiment of rolling a single die, find three choices for an event $A$ so that the events $A$ and $E$: “the number rolled is even” are mutually exclusive. Solution Since $E=\{2,4,6\}$ and we want $A$ to have no elements in common with $E$, any event that does not contain any even number will do. Three choices are $\{1,3,5\}$ (the complement $E^c$, the odds), $\{1,3\}$, and $\{5\}$. Union of Events Definition: Union of Events The union of events $A$ and $B,$ denoted $A\cup B$, is the collection of all outcomes that are elements of one or the other of the sets $A$ and $B$, or of both of them. It corresponds to combining descriptions of the two events using the word “or.” To say that the event $A\cup B$ occurred means that on a particular trial of the experiment either $A$ or $B$ occurred (or both did). A visual representation of the union of events $A$ and $B$ in a sample space $S$ is given in Figure $2$. The union corresponds to the shaded region. Figure $2$: The Union of Events A and B Example $6$ In the experiment of rolling a single die, find the union of the events $E$: “the number rolled is even” and $T$: “the number rolled is greater than two.” Solution Since the outcomes that are in either $E=\{2,4,6\}$ or $T=\{3,4,5,6\}$ (or both) are $2, 3, 4, 5,$ and $6$, that means $E\cup T=\{2,3,4,5,6\}$.
Note that an outcome such as $4$ that is in both sets is still listed only once (although strictly speaking it is not incorrect to list it twice). In words the union is described by “the number rolled is even or is greater than two.” Every number between one and six except the number one is either even or is greater than two, corresponding to $E\cup T$ given above. Example $7$ A two-child family is selected at random. Let $B$ denote the event that at least one child is a boy, let $D$ denote the event that the genders of the two children differ, and let $M$ denote the event that the genders of the two children match. Find $B\cup D$ and $B\cup M$. Solution A sample space for this experiment is $S=\{bb,bg,gb,gg\}$, where the first letter denotes the gender of the firstborn child and the second letter denotes the gender of the second child. The events $B, D,$ and $M$ are $B=\{bb,bg,gb\}$, $D=\{bg,gb\}$, $M=\{bb,gg\}$. Each outcome in $D$ is already in $B$, so the outcomes that are in at least one or the other of the sets $B$ and $D$ is just the set $B$ itself: $B\cup D=\{bb,bg,gb\}=B$. Every outcome in the whole sample space $S$ is in at least one or the other of the sets $B$ and $M$, so $B\cup M=\{bb,bg,gb,gg\}=S$. Definition: Additive Rule of Probability A useful property to know is the Additive Rule of Probability, which is $P(A\cup B) = P(A) + P(B) - P(A\cap B) \nonumber$ The next example, in which we compute the probability of a union both by counting and by using the formula, shows why the last term in the formula is needed. Example $8$ Two fair dice are thrown. Find the probabilities of the following events: 1. both dice show a four 2. at least one die shows a four Solution As was the case with tossing two identical coins, actual experience dictates that for the sample space to have equally likely outcomes we should list outcomes as if we could distinguish the two dice. We could imagine that one of them is red and the other is green. Then any outcome can be labeled as a pair of numbers as in the following display, where the first number in the pair is the number of dots on the top face of the green die and the second number in the pair is the number of dots on the top face of the red die. $\begin{array}{cccccc}11 & 12 & 13 & 14 & 15 & 16 \\ 21 & 22 & 23 & 24 & 25 & 26 \\ 31 & 32 & 33 & 34 & 35 & 36 \\ 41 & 42 & 43 & 44 & 45 & 46 \\ 51 & 52 & 53 & 54 & 55 & 56 \\ 61 & 62 & 63 & 64 & 65 & 66\end{array} \nonumber$ 1. There are $36$ equally likely outcomes, of which exactly one corresponds to two fours, so the probability of a pair of fours is $1/36$. 2. From the table we can see that there are $11$ pairs that correspond to the event in question: the six pairs in the fourth row (the green die shows a four) plus the additional five pairs other than the pair $44$, already counted, in the fourth column (the red die is four), so the answer is $11/36$. To see how the formula gives the same number, let $A_G$ denote the event that the green die is a four and let $A_R$ denote the event that the red die is a four. Then clearly by counting we get: $P(A_G) = 6/36$ and $P(A_R) = 6/36$. Since $A_G\cap A_R = \{44\}$, $P(A_G\cap A_R) = 1/36$. This is the computation from part 1, of course. Thus by the Additive Rule of Probability we get: $P(A_G\cup A_R) = P(A_G) + P(A_R) - P(A_G\cap A_R) = 6/36 + 6/36 - 1/36 = \frac{11}{36} \nonumber$ Example $9$ A tutoring service specializes in preparing adults for high school equivalence tests.
Among all the students seeking help from the service, $63\%$ need help in mathematics, $34\%$ need help in English, and $27\%$ need help in both mathematics and English. What is the percentage of students who need help in either mathematics or English? Solution Imagine selecting a student at random, that is, in such a way that every student has the same chance of being selected. Let $M$ denote the event “the student needs help in mathematics” and let $E$ denote the event “the student needs help in English.” The information given is that $P(M) = 0.63$, $P(E) = 0.34$ and $P(M\cap E) = 0.27$. Thus the Additive Rule of Probability gives: $P(M\cup E) = P(M) + P(E) - P(M\cap E) = 0.63 + 0.34 - 0.27 = 0.70 \nonumber$ Note how the naïve reasoning that if $63\%$ need help in mathematics and $34\%$ need help in English then $63$ plus $34$ or $97\%$ need help in one or the other gives a number that is too large. The percentage that need help in both subjects must be subtracted off, else the people needing help in both are counted twice, once for needing help in mathematics and once again for needing help in English. The simple sum of the probabilities would work if the events in question were mutually exclusive, for then $P(A\cap B)$ is zero, and subtracting it makes no difference. Example $10$ Volunteers for a disaster relief effort were classified according to both specialty ($C$: construction, $E$: education, $M$: medicine) and language ability ($S$: speaks a single language fluently, $T$: speaks two or more languages fluently). The results are shown in the following two-way classification table: Specialty Language Ability $S$ $T$ $C$ 12 1 $E$ 4 3 $M$ 6 2 The first row of numbers means that $12$ volunteers whose specialty is construction speak a single language fluently, and $1$ volunteer whose specialty is construction speaks at least two languages fluently. Similarly for the other two rows. A volunteer is selected at random, meaning that each one has an equal chance of being chosen. Find the probability that: 1. his specialty is medicine and he speaks two or more languages; 2. either his specialty is medicine or he speaks two or more languages; 3. his specialty is something other than medicine. Solution When information is presented in a two-way classification table it is typically convenient to adjoin to the table the row and column totals, to produce a new table like this: Specialty Language Ability Total $S$ $T$ $C$ 12 1 13 $E$ 4 3 7 $M$ 6 2 8 Total 22 6 28 1. The probability sought is $P(M\cap T)$. The table shows that there are $2$ such people, out of $28$ in all, hence $P(M\cap T) = 2/28 \approx 0.07$ or about a $7\%$ chance. 2. The probability sought is $P(M\cup T)$. The third row total and the grand total in the sample give $P(M) = 8/28$. The second column total and the grand total give $P(T) = 6/28$. Thus using the result from part (1), $P(M\cup T) = P(M) + P(T) - P(M\cap T) = \frac{8}{28} + \frac{6}{28} - \frac{2}{28} = \frac{12}{28}\approx 0.43 \nonumber$ or about a $43\%$ chance. 3. This probability can be computed in two ways. Since the event of interest can be viewed as the event $C\cup E$ and the events $C$ and $E$ are mutually exclusive, the answer is, using the first two row totals, $P(C\cup E) = P(C) + P(E) - P(C\cap E) = \frac{13}{28} + \frac{7}{28} - \frac{0}{28} = \frac{20}{28}\approx 0.71 \nonumber$ On the other hand, the event of interest can be thought of as the complement $M^c$ of $M$, hence using the value of $P(M)$ computed in part (2), $P(M^c) = 1 - P(M) = 1 - \frac{8}{28} = \frac{20}{28}\approx 0.71 \nonumber$ as before.
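The counting in Example $10$ mechanizes nicely. The following Python sketch (an editorial illustration; the cell counts are those of Example $10$) verifies the Additive Rule computation with exact fractions.

```python
from fractions import Fraction

# Two-way classification of the 28 volunteers in Example 10:
# (specialty, language ability) -> count.
counts = {("C", "S"): 12, ("C", "T"): 1,
          ("E", "S"): 4, ("E", "T"): 3,
          ("M", "S"): 6, ("M", "T"): 2}
total = sum(counts.values())  # 28

def P(pred):
    """Probability, by counting, of the event whose cells satisfy pred."""
    return Fraction(sum(c for cell, c in counts.items() if pred(cell)), total)

P_M = P(lambda cell: cell[0] == "M")                      # 8/28 = 2/7
P_T = P(lambda cell: cell[1] == "T")                      # 6/28 = 3/14
P_MT = P(lambda cell: cell[0] == "M" and cell[1] == "T")  # 2/28 = 1/14
print(P_M + P_T - P_MT)  # Additive Rule: P(M or T) = 12/28 = 3/7
```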
Key Takeaway • The probability of an event that is a complement or union of events of known probability can be computed using formulas.
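The complement trick of Example $2$ in this section can also be checked by brute-force enumeration. Here is a short Python sketch (an editorial addition) that lists all $32$ outcomes of five coin tosses and computes the probability of at least one heads both directly and via the Probability Rule for Complements.

```python
from itertools import product

# All 2**5 = 32 equally likely outcomes of five tosses of a fair coin.
outcomes = list(product("ht", repeat=5))

direct = sum(1 for o in outcomes if "h" in o) / len(outcomes)
by_complement = 1 - sum(1 for o in outcomes if "h" not in o) / len(outcomes)
print(direct, by_complement)  # both 0.96875, i.e. 1 - 1/32
```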
Learning Objectives • To learn the concept of a conditional probability and how to compute it. • To learn the concept of independence of events, and how to apply it. Suppose a fair die has been rolled and you are asked to give the probability that it was a five. There are six equally likely outcomes, so your answer is $1/6$. But suppose that before you give your answer you are given the extra information that the number rolled was odd. Since there are only three odd numbers that are possible, one of which is five, you would certainly revise your estimate of the likelihood that a five was rolled from $1/6$ to $1/3$. In general, the revised probability that an event $A$ has occurred, taking into account the additional information that another event $B$ has definitely occurred on this trial of the experiment, is called the conditional probability of $A$ given $B$ and is denoted by $P(A\mid B)$. The reasoning employed in this example can be generalized to yield the computational formula in the following definition. Definition: conditional probability The conditional probability of $A$ given $B$, denoted $P(A\mid B)$, is the probability that event $A$ has occurred in a trial of a random experiment for which it is known that event $B$ has definitely occurred. It may be computed by means of the following formula: $P(A\mid B)=\dfrac{P(A\cap B)}{P(B)} \label{CondProb}$ Example $1$: Rolling a Die A fair (unbiased) die is rolled. 1. Find the probability that the number rolled is a five, given that it is odd. 2. Find the probability that the number rolled is odd, given that it is a five. Solution The sample space for this experiment is the set $S=\{1,2,3,4,5,6\}$ consisting of six equally likely outcomes. Let $F$ denote the event “a five is rolled” and let $O$ denote the event “an odd number is rolled,” so that $F=\{5\}\; \; \text{and}\; \; O=\{1,3,5\} \nonumber$ 1. This is the introductory example, so we already know that the answer is $1/3$. To use Equation \ref{CondProb} to confirm this we must replace $A$ in the formula (the event whose likelihood we seek to estimate) by $F$ and replace $B$ (the event we know for certain has occurred) by $O$: $P(F\mid O)=\dfrac{P(F\cap O)}{P(O)}\nonumber$ Since $F\cap O=\{5\}\cap \{1,3,5\}=\{5\},\; P(F\cap O)=1/6 \nonumber$Since $O=\{1,3,5\}, \; P(O)=3/6. \nonumber$Thus $P(F\mid O)=\dfrac{P(F\cap O)}{P(O)}=\dfrac{1/6}{3/6}=\dfrac{1}{3} \nonumber$ 2. This is the same problem, but with the roles of $F$ and $O$ reversed. Since we are given that the number that was rolled is five, which is odd, the probability in question must be $1$. To apply Equation \ref{CondProb} to this case we must now replace $A$ (the event whose likelihood we seek to estimate) by $O$ and $B$ (the event we know for certain has occurred) by $F$:$P(O\mid F)=\dfrac{P(O\cap F)}{P(F)} \nonumber$Obviously $P(F)=1/6$. In part (a) we found that $P(F\cap O)=1/6$. Thus$P(O\mid F)=\dfrac{P(O\cap F)}{P(F)}=\dfrac{1/6}{1/6}=1 \nonumber$ Just as we did not need the computational formula in this example, we do not need it when the information is presented in a two-way classification table, as in the next example. Example $2$: Marriage and Gender In a sample of $902$ individuals under $40$ who were or had previously been married, each person was classified according to gender and age at first marriage.
The results are summarized in the following two-way classification table, where the meaning of the labels is: • $M$: male • $F$: female • $E$: a teenager when first married • $W$: in one’s twenties when first married • $H$: in one’s thirties when first married $E$ $W$ $H$ Total $M$ 43 293 114 450 $F$ 82 299 71 452 Total 125 592 185 902 The numbers in the first row mean that $43$ people in the sample were men who were first married in their teens, $293$ were men who were first married in their twenties, $114$ men who were first married in their thirties, and a total of $450$ people in the sample were men. Similarly for the numbers in the second row. The numbers in the last row mean that, irrespective of gender, $125$ people in the sample were married in their teens, $592$ in their twenties, $185$ in their thirties, and that there were $902$ people in the sample in all. Suppose that the proportions in the sample accurately reflect those in the population of all individuals who are under $40$ and who are or have previously been married. Suppose such a person is selected at random. 1. Find the probability that the individual selected was a teenager at first marriage. 2. Find the probability that the individual selected was a teenager at first marriage, given that the person is male. Solution It is natural to let $E$ also denote the event that the person selected was a teenager at first marriage and to let $M$ denote the event that the person selected is male. 1. According to the table, the proportion of individuals in the sample who were in their teens at their first marriage is $125/902$. This is the relative frequency of such people in the population, hence $P(E)=125/902\approx 0.139$ or about $14\%$. 2. Since it is known that the person selected is male, all the females may be removed from consideration, so that only the row in the table corresponding to men in the sample applies: $E$ $W$ $H$ Total $M$ 43 293 114 450 The proportion of males in the sample who were in their teens at their first marriage is $43/450$. This is the relative frequency of such people in the population of males, hence $P(E\mid M)=43/450\approx 0.096$ or about $10\%$. In the next example, the computational formula in the definition must be used. Example $3$: Body Weight and Hypertension Suppose that in an adult population the proportion of people who are both overweight and suffer hypertension is $0.09$; the proportion of people who are not overweight but suffer hypertension is $0.11$; the proportion of people who are overweight but do not suffer hypertension is $0.02$; and the proportion of people who are neither overweight nor suffer hypertension is $0.78$. An adult is randomly selected from this population. 1. Find the probability that the person selected suffers hypertension given that he is overweight. 2. Find the probability that the selected person suffers hypertension given that he is not overweight. 3. Compare the two probabilities just found to give an answer to the question as to whether overweight people tend to suffer from hypertension. Solution Let $H$ denote the event “the person selected suffers hypertension.” Let $O$ denote the event “the person selected is overweight.” The probability information given in the problem may be organized into the following contingency table: $O$ $O^c$ $H$ 0.09 0.11 $H^c$ 0.02 0.78 1. Using the formula in the definition of conditional probability (Equation \ref{CondProb}), $P(H|O)=\dfrac{P(H\cap O)}{P(O)}=\dfrac{0.09}{0.09+0.02}=0.8182 \nonumber$ 2.
Using the formula in the definition of conditional probability (Equation \ref{CondProb}), $P(H|O^c)=\dfrac{P(H\cap O^c)}{P(O^c)}=\dfrac{0.11}{0.11+0.78}=0.1236 \nonumber$ 3. $P(H|O)=0.8182$ is over six times as large as $P(H|O^c)=0.1236$, which indicates a much higher rate of hypertension among people who are overweight than among people who are not overweight. It might be interesting to note that a direct comparison of $P(H\cap O)=0.09$ and $P(H\cap O^c)=0.11$ does not answer the same question. Independent Events Although typically we expect the conditional probability $P(A\mid B)$ to be different from the probability $P(A)$ of $A$, it does not have to be different from $P(A)$. When $P(A\mid B)=P(A)$, the occurrence of $B$ has no effect on the likelihood of $A$. Whether or not the event $A$ has occurred is independent of the event $B$. Using algebra it can be shown that the equality $P(A\mid B)=P(A)$ holds if and only if the equality $P(A\cap B)=P(A)\cdot P(B)$ holds, which in turn is true if and only if $P(B\mid A)=P(B)$. This is the basis for the following definition. Definition: Independent and Dependent Events Events $A$ and $B$ are independent if $P(A\cap B)=P(A)\cdot P(B) \nonumber$ that is, if the probability that both events occur is the product of their individual probabilities. If $A$ and $B$ are not independent then they are dependent. The formula in the definition has two practical but exactly opposite uses: • In a situation in which we can compute all three probabilities $P(A), P(B)\; \text{and}\; P(A\cap B)$, it is used to check whether or not the events $A$ and $B$ are independent: • If $P(A\cap B)=P(A)\cdot P(B)$, then $A$ and $B$ are independent. • If $P(A\cap B)\neq P(A)\cdot P(B)$, then $A$ and $B$ are not independent. • In a situation in which each of $P(A)$ and $P(B)$ can be computed and it is known that $A$ and $B$ are independent, then we can compute $P(A\cap B)$ by multiplying together $P(A) \; \text{and}\; P(B)$: $P(A\cap B)=P(A)\cdot P(B)$. Example $4$: Rolling a Die again A single fair die is rolled. Let $A=\{3\}$ and $B=\{1,3,5\}$. Are $A$ and $B$ independent? Solution In this example we can compute all three probabilities $P(A)=1/6$, $P(B)=1/2$, and $P(A\cap B)=P(\{3\})=1/6$. Since the product $P(A)\cdot P(B)=(1/6)(1/2)=1/12$ is not the same number as $P(A\cap B)=1/6$, the events $A$ and $B$ are not independent. Example $5$ The two-way classification of married or previously married adults under $40$ according to gender and age at first marriage produced the table E W H Total M 43 293 114 450 F 82 299 71 452 Total 125 592 185 902 Determine whether or not the events $F$: “female” and $E$: “was a teenager at first marriage” are independent. Solution The table shows that in the sample of $902$ such adults, $452$ were female, $125$ were teenagers at their first marriage, and $82$ were females who were teenagers at their first marriage, so that \begin{align*} P(F) &=\dfrac{452}{902},\\[4pt] P(E) &=\dfrac{125}{902} \\[4pt] P(F\cap E) &=\dfrac{82}{902} \end{align*} \nonumber Since \begin{align*} P(F)\cdot P(E) &=\dfrac{452}{902}\cdot \dfrac{125}{902} \\[4pt] &=0.069 \end{align*} \nonumber is not the same as $P(F\cap E)=\dfrac{82}{902}=0.091 \nonumber$ we conclude that the two events are not independent. Example $6$ Many diagnostic tests for detecting diseases do not test for the disease directly but for a chemical or biological product of the disease, hence are not perfectly reliable.
The sensitivity of a test is the probability that the test will be positive when administered to a person who has the disease. The higher the sensitivity, the greater the detection rate and the lower the false negative rate. Suppose the sensitivity of a diagnostic procedure to test whether a person has a particular disease is $92\%$. A person who actually has the disease is tested for it using this procedure by two independent laboratories. 1. What is the probability that both test results will be positive? 2. What is the probability that at least one of the two test results will be positive? Solution 1. Let $A_1$ denote the event “the test by the first laboratory is positive” and let $A_2$ denote the event “the test by the second laboratory is positive.” Since $A_1$ and $A_2$ are independent, \begin{align*} P(A_1\cap A_2) &=P(A_1)\cdot P(A_2) \\[4pt] &=0.92\times 0.92 \\[4pt] &=0.8464 \end{align*} \nonumber 2. Using the Additive Rule for Probability and the probability just computed, \begin{align*}P(A_1\cup A_2) &= P(A_1)+P(A_2)-P(A_1\cap A_2) \\[4pt] &=0.92+0.92-0.8464 \\[4pt] &=0.9936 \end{align*} \nonumber Example $7$: specificity of a diagnostic test The specificity of a diagnostic test for a disease is the probability that the test will be negative when administered to a person who does not have the disease. The higher the specificity, the lower the false positive rate. Suppose the specificity of a diagnostic procedure to test whether a person has a particular disease is $89\%$. 1. A person who does not have the disease is tested for it using this procedure. What is the probability that the test result will be positive? 2. A person who does not have the disease is tested for it by two independent laboratories using this procedure. What is the probability that both test results will be positive? Solution 1. Let $B$ denote the event “the test result is positive.” The complement of $B$ is that the test result is negative, and has probability the specificity of the test, $0.89$. Thus $P(B)=1-P(B^c)=1-0.89=0.11 \nonumber$ 2. Let $B_1$ denote the event “the test by the first laboratory is positive” and let $B_2$ denote the event “the test by the second laboratory is positive.” Since $B_1$ and $B_2$ are independent, by part (a) of the example $P(B_1\cap B_2)=P(B_1)\cdot P(B_2)=0.11\times 0.11=0.0121 \nonumber$ The concept of independence applies to any number of events. For example, three events $A,\; B,\; \text{and}\; C$ are independent if $P(A\cap B\cap C)=P(A)\cdot P(B)\cdot P(C)$. Note carefully that, as is the case with just two events, this is not a formula that is always valid, but holds precisely when the events in question are independent. Example $8$: redundancy The reliability of a system can be enhanced by redundancy, which means building two or more independent devices to do the same job, such as two independent braking systems in an automobile. Suppose a particular species of trained dogs has a $90\%$ chance of detecting contraband in airline luggage. If the luggage is checked three times by three different dogs independently of one another, what is the probability that contraband will be detected? Solution Let $D_1$ denote the event that the contraband is detected by the first dog, $D_2$ the event that it is detected by the second dog, and $D_3$ the event that it is detected by the third. Since each dog has a $90\%$ chance of detecting the contraband, by the Probability Rule for Complements it has a $10\%$ chance of failing.
In symbols, $P(D_{1}^{c})=0.10,\; \; P(D_{2}^{c})=0.10,\; \; P(D_{3}^{c})=0.10 \nonumber$ Let $D$ denote the event that the contraband is detected. We seek $P(D)$. It is easier to find $P(D^c)$, because although there are several ways for the contraband to be detected, there is only one way for it to go undetected: all three dogs must fail. Thus $D^c=D_{1}^{c}\cap D_{2}^{c}\cap D_{3}^{c}$ and $P(D)=1-P(D^c)=1-P(D_{1}^{c}\cap D_{2}^{c}\cap D_{3}^{c}) \nonumber$But the events $D_1$, $D_2$, and $D_3$ are independent, which implies that their complements are independent, so $P(D_{1}^{c}\cap D_{2}^{c}\cap D_{3}^{c})=P(D_{1}^{c})\cdot P(D_{2}^{c})\cdot P(D_{3}^{c})=0.10\times 0.10\times 0.10=0.001 \nonumber$ Using this number in the previous display we obtain $P(D)=1-0.001=0.999 \nonumber$ That is, although any one dog has only a $90\%$ chance of detecting the contraband, three dogs working independently have a $99.9\%$ chance of detecting it. Probabilities on Tree Diagrams Some probability problems are made much simpler when approached using a tree diagram. The next example illustrates how to place probabilities on a tree diagram and use it to solve a problem. Example $9$: A jar of Marbles A jar contains $10$ marbles, $7$ black and $3$ white. Two marbles are drawn without replacement, which means that the first one is not put back before the second one is drawn. 1. What is the probability that both marbles are black? 2. What is the probability that exactly one marble is black? 3. What is the probability that at least one marble is black? Solution A tree diagram for the situation of drawing one marble after the other without replacement is shown in Figure $1$. The circle and rectangle will be explained later, and should be ignored for now. The numbers on the two leftmost branches are the probabilities of getting either a black marble, $7$ out of $10$, or a white marble, $3$ out of $10$, on the first draw. The number on each remaining branch is the probability of the event corresponding to the node on the right end of the branch occurring, given that the event corresponding to the node on the left end of the branch has occurred. Thus for the top branch, connecting the two Bs, it is $P(B_2\mid B_1)$, where $B_1$ denotes the event “the first marble drawn is black” and $B_2$ denotes the event “the second marble drawn is black.” Since after drawing a black marble out there are $9$ marbles left, of which $6$ are black, this probability is $6/9$. The number to the right of each final node is computed as shown, using the principle that if the formula in the Conditional Rule for Probability is multiplied by $P(B)$, then the result is $P(B\cap A)=P(B)\cdot P(A\mid B) \nonumber$ 1. The event “both marbles are black” is $B_1\cap B_2$ and corresponds to the top right node in the tree, which has been circled. Thus as indicated there, it is $0.47$. 2. The event “exactly one marble is black” corresponds to the two nodes of the tree enclosed by the rectangle. The events that correspond to these two nodes are mutually exclusive: black followed by white is incompatible with white followed by black. Thus in accordance with the Additive Rule for Probability we merely add the two probabilities next to these nodes, since what would be subtracted from the sum is zero. Thus the probability of drawing exactly one black marble in two tries is $0.23+0.23=0.46$. 3. The event “at least one marble is black” corresponds to the three nodes of the tree enclosed by either the circle or the rectangle. 
The events that correspond to these nodes are mutually exclusive, so as in part (b) we merely add the probabilities next to these nodes. Thus the probability of drawing at least one black marble in two tries is $0.47+0.23+0.23=0.93$. Of course, this answer could have been found more easily using the Probability Rule for Complements, simply subtracting the probability of the complementary event, “two white marbles are drawn,” from 1 to obtain $1-0.07=0.93$. As this example shows, finding the probability for each branch is fairly straightforward, since we compute it knowing everything that has happened in the sequence of steps so far. Two principles that are true in general emerge from this example: Probabilities on Tree Diagrams • The probability of the event corresponding to any node on a tree is the product of the numbers on the unique path of branches that leads to that node from the start. • If an event corresponds to several final nodes, then its probability is obtained by adding the numbers next to those nodes. Key Takeaway • A conditional probability is the probability that an event has occurred, taking into account additional information about the result of the experiment. • A conditional probability can always be computed using the formula in the definition. Sometimes it can be computed by discarding part of the sample space. • Two events $A$ and $B$ are independent if the probability $P(A\cap B)$ of their intersection $A\cap B$ is equal to the product $P(A)\cdot P(B)$ of their individual probabilities.
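The tree-diagram arithmetic above is easy to check by machine. The following short Python sketch (an illustration added here, not part of the original text) computes the three marble probabilities of Example 9 exactly and then estimates them by simulation; the trial count is an arbitrary choice.

```python
from fractions import Fraction
import random

# Exact tree-diagram probabilities for drawing two marbles
# without replacement from a jar of 7 black and 3 white.
p_bb = Fraction(7, 10) * Fraction(6, 9)   # both black
p_bw = Fraction(7, 10) * Fraction(3, 9)   # black, then white
p_wb = Fraction(3, 10) * Fraction(7, 9)   # white, then black
p_ww = Fraction(3, 10) * Fraction(2, 9)   # both white

print(float(p_bb))          # 0.4667, rounded to 0.47 in the text
print(float(p_bw + p_wb))   # 0.4667; the text's 0.46 adds branch
                            # values already rounded to 0.23
print(float(1 - p_ww))      # 0.9333, rounded to 0.93 in the text

# Monte Carlo check: random.sample draws without replacement.
trials = 100_000
jar = ["b"] * 7 + ["w"] * 3
draws = [tuple(random.sample(jar, 2)) for _ in range(trials)]
print(sum(d == ("b", "b") for d in draws) / trials)    # about 0.47
print(sum(d.count("b") == 1 for d in draws) / trials)  # about 0.47
print(sum("b" in d for d in draws) / trials)           # about 0.93
```

Note that the exact probability of exactly one black marble is $0.4\bar{6}$; the $0.46$ in the text arises only because the branch probabilities were rounded to $0.23$ before being added.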
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 3.1: Sample Spaces, Events, and Their Probabilities Basic Q3.1.1 A box contains $10$ white and $10$ black marbles. Construct a sample space for the experiment of randomly drawing out, with replacement, two marbles in succession and noting the color each time. (To draw “with replacement” means that the first marble is put back before the second marble is drawn.) Q3.1.2 A box contains $16$ white and $16$ black marbles. Construct a sample space for the experiment of randomly drawing out, with replacement, three marbles in succession and noting the color each time. (To draw “with replacement” means that each marble is put back before the next marble is drawn.) Q3.1.3 A box contains $8$ red, $8$ yellow, and $8$ green marbles. Construct a sample space for the experiment of randomly drawing out, with replacement, two marbles in succession and noting the color each time. Q3.1.4 A box contains $6$ red, $6$ yellow, and $6$ green marbles. Construct a sample space for the experiment of randomly drawing out, with replacement, three marbles in succession and noting the color each time. Q3.1.5 In the situation of Exercise 1, list the outcomes that comprise each of the following events. 1. At least one marble of each color is drawn. 2. No white marble is drawn. Q3.1.6 In the situation of Exercise 2, list the outcomes that comprise each of the following events. 1. At least one marble of each color is drawn. 2. No white marble is drawn. 3. More black than white marbles are drawn. Q3.1.7 In the situation of Exercise 3, list the outcomes that comprise each of the following events. 1. No yellow marble is drawn. 2. The two marbles drawn have the same color. 3. At least one marble of each color is drawn. Q3.1.8 In the situation of Exercise 4, list the outcomes that comprise each of the following events. 1. No yellow marble is drawn. 2. The three marbles drawn have the same color. 3. At least one marble of each color is drawn. Q3.1.9 Assuming that each outcome is equally likely, find the probability of each event in Exercise 5. Q3.1.10 Assuming that each outcome is equally likely, find the probability of each event in Exercise 6. Q3.1.11 Assuming that each outcome is equally likely, find the probability of each event in Exercise 7. Q3.1.12 Assuming that each outcome is equally likely, find the probability of each event in Exercise 8. Q3.1.13 A sample space is $S=\{a,b,c,d,e\}$. Identify two events as $U=\{a,b,d\}$ and $V=\{b,c,d\}$. Suppose $P(a)$ and $P(b)$ are each $0.2$ and $P(c)$ and $P(d)$ are each $0.1$. 1. Determine what $P(e)$ must be. 2. Find $P(U)$. 3. Find $P(V)$ Q3.1.14 A sample space is $S=\{u,v,w,x\}$. Identify two events as $A=\{v,w\}$ and $B=\{u,w,x\}$. Suppose $P(u)=0.22$, $P(w)=0.36$, and $P(x)=0.27$. 1. Determine what $P(v)$ must be. 2. Find $P(A)$. 3. Find $P(B)$. Q3.1.15 A sample space is $S=\{m,n,q,r,s\}$. Identify two events as $U=\{m,q,s\}$ and $V=\{n,q,r\}$. The probabilities of some of the outcomes are given by the following table: $\begin{array}{c|c c c c c} Outcome &m &n &q &r &s \ \hline Probability &0.18 &0.16 & &0.24 &0.21\ \end{array}$ 1. Determine what $P(q)$ must be. 2. Find $P(U)$. 3. Find $P(V)$. Q3.1.16 A sample space is $S=\{d,e,f,g,h\}$. Identify two events as $M=\{e,f,g,h\}$ and $N=\{d,g\}$. 
The probabilities of some of the outcomes are given by the following table: $\begin{array}{c|c c c c c} Outcome &d &e &f &g &h \ \hline Probability &0.22 &0.13 &0.27 & &0.19\ \end{array}$ 1. Determine what $P(g)$ must be. 2. Find $P(M)$. 3. Find $P(N)$. Applications Q3.1.17 The sample space that describes all three-child families according to the genders of the children with respect to birth order was constructed in "Example 3.1.4". Identify the outcomes that comprise each of the following events in the experiment of selecting a three-child family at random. 1. At least one child is a girl. 2. At most one child is a girl. 3. All of the children are girls. 4. Exactly two of the children are girls. 5. The first born is a girl. Q3.1.18 The sample space that describes three tosses of a coin is the same as the one constructed in "Example 3.1.4" with “boy” replaced by “heads” and “girl” replaced by “tails.” Identify the outcomes that comprise each of the following events in the experiment of tossing a coin three times. 1. The coin lands heads more often than tails. 2. The coin lands heads the same number of times as it lands tails. 3. The coin lands heads at least twice. 4. The coin lands heads on the last toss. Q3.1.19 Assuming that the outcomes are equally likely, find the probability of each event in Exercise 17. Q3.1.20 Assuming that the outcomes are equally likely, find the probability of each event in Exercise 18. Additional Exercises Q3.1.21 The following two-way contingency table gives the breakdown of the population in a particular locale according to age and tobacco usage: Age Tobacco Use Smoker Non-smoker Under $30$ $0.05$ $0.20$ Over $30$ $0.20$ $0.55$ A person is selected at random. Find the probability of each of the following events. 1. The person is a smoker. 2. The person is under $30$. 3. The person is a smoker who is under $30$. Q3.1.22 The following two-way contingency table gives the breakdown of the population in a particular locale according to party affiliation ($A, B, C,\; \text{or None}$) and opinion on a bond issue: Affiliation Opinion Favors Opposes Undecided $A$ $0.12$ $0.09$ $0.07$ $B$ $0.16$ $0.12$ $0.14$ $C$ $0.04$ $0.03$ $0.06$ None $0.08$ $0.06$ $0.03$ A person is selected at random. Find the probability of each of the following events. 1. The person is affiliated with party $B$. 2. The person is affiliated with some party. 3. The person is in favor of the bond issue. 4. The person has no party affiliation and is undecided about the bond issue. Q3.1.23 The following two-way contingency table gives the breakdown of the population of married or previously married women beyond child-bearing age in a particular locale according to age at first marriage and number of children: Age Number of Children $0$ $1\; or\; 2$ $3\; \text{or More}$ $Under\; 20$ $0.02$ $0.14$ $0.08$ $20-29$ $0.07$ $0.37$ $0.11$ $30\; \text{and above}$ $0.10$ $0.10$ $0.01$ A woman is selected at random. Find the probability of each of the following events. 1. The woman was in her twenties at her first marriage. 2. The woman was $20$ or older at her first marriage. 3. The woman had no children. 4. The woman was in her twenties at her first marriage and had at least three children. 
Q3.1.24 The following two-way contingency table gives the breakdown of the population of adults in a particular locale according to highest level of education and whether or not the individual regularly takes dietary supplements: Education Use of Supplements Takes Does Not Take No High School Diploma $0.04$ $0.06$ High School Diploma $0.06$ $0.44$ Undergraduate Degree $0.09$ $0.28$ Graduate Degree $0.01$ $0.02$ An adult is selected at random. Find the probability of each of the following events. 1. The person has a high school diploma and takes dietary supplements regularly. 2. The person has an undergraduate degree and takes dietary supplements regularly. 3. The person takes dietary supplements regularly. 4. The person does not take dietary supplements regularly. Large Data Set Exercises Q3.1.25 Large Data Set 4 and Data Set 4A record the results of $500$ tosses of a coin. Find the relative frequency of each outcome $1, 2, 3, 4, 5,\; and\; 6$. Does the coin appear to be “balanced” or “fair”? Q3.1.26 Large Data Set 6, Data Set 6A, and Data Set 6B record results of a random survey of $200$ voters in each of two regions, in which they were asked to express whether they prefer Candidate $A$ for a U.S. Senate seat or prefer some other candidate. 1. Find the probability that a randomly selected voter among these $400$ prefers Candidate $A$. 2. Find the probability that a randomly selected voter among the $200$ who live in Region $1$ prefers Candidate $A$ (separately recorded in $\text{Large Data Set 6A}$). 3. Find the probability that a randomly selected voter among the $200$ who live in Region $2$ prefers Candidate $A$ (separately recorded in $\text{Large Data Set 6B}$). Answers S3.1.1 $S=\{bb,bw,wb,ww\}$ S3.1.3 $S=\{rr,ry,rg,yr,yy,yg,gr,gy,gg\}$ S3.1.5 1. $\{bw,wb\}$ 2. $\{bb\}$ S3.1.7 1. $\{rr,rg,gr,gg\}$ 2. $\{rr,yy,gg\}$ 3. $\varnothing$ S3.1.9 1. $1/4$ 2. $2/4$ S3.1.11 1. $4/9$ 2. $3/9$ 3. $0$ S3.1.13 1. $0.4$ 2. $0.5$ 3. $0.4$ S3.1.15 1. $0.61$ 2. $0.6$ 3. $0.21$ S3.1.17 1. $\{gbb,gbg,ggb,ggg\}$ 2. $\{bgg,gbg,ggb\}$ 3. $\{ggg\}$ 4. $\{bbb,bbg,bgb,gbb\}$ 5. $\{bbg,bgb,bgg,gbb,gbg,ggb,ggg\}$ S3.1.19 1. $4/8$ 2. $3/8$ 3. $1/8$ 4. $4/8$ 5. $7/8$ S3.1.21 1. $0.05$ 2. $0.25$ 3. $0.25$ S3.1.23 1. $0.11$ 2. $0.19$ 3. $0.76$ 4. $0.55$ S3.1.25 The relative frequencies for $1$ through $6$ are $0.16, 0.194, 0.162, 0.164, 0.154\; and\; 0.166$. It would appear that the die is not balanced. 3.2: Complements, Intersections and Unions Basic 1. For the sample space $S=\{a,b,c,d,e\}$ identify the complement of each event given. 1. $A=\{a,d,e\}$ 2. $B=\{b,c,d,e\}$ 3. $S$ 2. For the sample space $S=\{r,s,t,u,v\}$ identify the complement of each event given. 1. $R=\{t,u\}$ 2. $T=\{r\}$ 3. $\varnothing$ (the “empty” set that has no elements) 3. The sample space for three tosses of a coin is $S=\{hhh,hht,hth,htt,thh,tht,tth,ttt\}$ Define events $\text{H:at least one head is observed}\ \text{M:more heads than tails are observed}$ 1. List the outcomes that comprise $H$ and $M$. 2. List the outcomes that comprise $H\cap M$, $H\cup M$, and $H^c$. 3. Assuming all outcomes are equally likely, find $P(H\cap M)$, $P(H\cup M)$, and $P(H^c)$. 4. Determine whether or not $H^c$ and $M$ are mutually exclusive. Explain why or why not. 4. For the experiment of rolling a single six-sided die once, define events $\text{T:the number rolled is three}\ \text{G:the number rolled is four or greater}$ 1. List the outcomes that comprise $T$ and $G$. 2. List the outcomes that comprise $T\cap G$, $T\cup G$, $T^c$, and $(T\cup G)^c$. 3. 
Assuming all outcomes are equally likely, find $P(T\cap G)$, $P(T\cup G)$, and $P(T^c)$. 4. Determine whether or not $T$ and $G$ are mutually exclusive. Explain why or why not. 5. A special deck of $16$ cards has $4$ that are blue, $4$ yellow, $4$ green, and $4$ red. The four cards of each color are numbered from one to four. A single card is drawn at random. Define events $\text{B:the card is blue}\ \text{R:the card is red}\ \text{N:the number on the card is at most two}$ 1. List the outcomes that comprise $B$, $R$, and $N$. 2. List the outcomes that comprise $B\cap R$, $B\cup R$, $B\cap N$, $R\cup N$, $B^c$, and $(B\cup R)^c$. 3. Assuming all outcomes are equally likely, find the probabilities of the events in the previous part. 4. Determine whether or not $B$ and $N$ are mutually exclusive. Explain why or why not. 6. In the context of the previous problem, define events $\text{Y:the card is yellow}\ \text{I:the number on the card is not a one}\ \text{J:the number on the card is a two or a four}$ 1. List the outcomes that comprise $Y$, $I$, and $J$. 2. List the outcomes that comprise $Y\cap I$, $Y\cup J$, $I\cap J$, $I^c$, and $(Y\cup J)^c$. 3. Assuming all outcomes are equally likely, find the probabilities of the events in the previous part. 4. Determine whether or not $I^c$ and $J$ are mutually exclusive. Explain why or why not. 7. The Venn diagram provided shows a sample space and two events $A$ and $B$. Suppose $P(a)=0.13, P(b)=0.09, P(c)=0.27, P(d)=0.20,\; \text{and}\; P(e)=0.31$. Confirm that the probabilities of the outcomes add up to $1$, then compute the following probabilities. 1. $P(A)$. 2. $P(B)$. 3. $P(A^c)$. Two ways: (i) by finding the outcomes in $A^c$ and adding their probabilities, and (ii) using the Probability Rule for Complements. 4. $P(A\cap B)$. 5. $P(A\cup B)$ Two ways: (i) by finding the outcomes in $A\cup B$ and adding their probabilities, and (ii) using the Additive Rule of Probability. 1. The Venn diagram provided shows a sample space and two events $A$ and $B$. Suppose $P(a)=0.32, P(b)=0.17, P(c)=0.28,\; \text{and}\; P(d)=0.23$. Confirm that the probabilities of the outcomes add up to $1$, then compute the following probabilities. 1. $P(A)$. 2. $P(B)$. 3. $P(A^c)$. Two ways: (i) by finding the outcomes in $A^c$ and adding their probabilities, and (ii) using the Probability Rule for Complements. 4. $P(A\cap B)$. 5. $P(A\cup B)$ Two ways: (i) by finding the outcomes in $A\cup B$ and adding their probabilities, and (ii) using the Additive Rule of Probability. 1. Confirm that the probabilities in the two-way contingency table add up to $1$, then use it to find the probabilities of the events indicated. $U$ $V$ $W$ $A$ $0.15$ $0.00$ $0.23$ $B$ $0.22$ $0.30$ $0.10$ 1. $P(A), P(B), P(A\cap B)$. 2. $P(U), P(W), P(U\cap W)$. 3. $P(U\cup W)$. 4. $P(V^c)$. 5. Determine whether or not the events $A$ and $U$ are mutually exclusive; the events $A$ and $V$. 1. Confirm that the probabilities in the two-way contingency table add up to $1$, then use it to find the probabilities of the events indicated. $R$ $S$ $T$ $M$ $0.09$ $0.25$ $0.19$ $N$ $0.31$ $0.16$ $0.00$ 1. $P(R), P(S), P(R\cap S)$. 2. $P(M), P(N), P(M\cap N)$. 3. $P(R\cup S)$. 4. $P(R^c)$. 5. Determine whether or not the events $N$ and $S$ are mutually exclusive; the events $N$ and $T$. Applications 1. Make a statement in ordinary English that describes the complement of each event (do not simply insert the word “not”). 1. In the roll of a die: “five or more.” 2. In a roll of a die: “an even number.” 3. 
In two tosses of a coin: “at least one heads.” 4. In the random selection of a college student: “Not a freshman.” 2. Make a statement in ordinary English that describes the complement of each event (do not simply insert the word “not”). 1. In the roll of a die: “two or less.” 2. In the roll of a die: “one, three, or four.” 3. In two tosses of a coin: “at most one heads.” 4. In the random selection of a college student: “Neither a freshman nor a senior.” 3. The sample space that describes all three-child families according to the genders of the children with respect to birth order is $S=\{bbb,bbg,bgb,bgg,gbb,gbg,ggb,ggg\}$. For each of the following events in the experiment of selecting a three-child family at random, state the complement of the event in the simplest possible terms, then find the outcomes that comprise the event and its complement. 1. At least one child is a girl. 2. At most one child is a girl. 3. All of the children are girls. 4. Exactly two of the children are girls. 5. The first born is a girl. 4. The sample space that describes the two-way classification of citizens according to gender and opinion on a political issue is $S=\{mf,ma,mn,ff,fa,fn\}$, where the first letter denotes gender ($\text{m: male, f: female}$) and the second opinion ($\text{f: for, a: against, n: neutral}$). For each of the following events in the experiment of selecting a citizen at random, state the complement of the event in the simplest possible terms, then find the outcomes that comprise the event and its complement. 1. The person is male. 2. The person is not in favor. 3. The person is either male or in favor. 4. The person is female and neutral. 5. A tourist who speaks English and German but no other language visits a region of Slovenia. If $35\%$ of the residents speak English, $15\%$ speak German, and $3\%$ speak both English and German, what is the probability that the tourist will be able to talk with a randomly encountered resident of the region? 6. In a certain country $43\%$ of all automobiles have airbags, $27\%$ have anti-lock brakes, and $13\%$ have both. What is the probability that a randomly selected vehicle will have both airbags and anti-lock brakes? 7. A manufacturer examines its records over the last year on a component part received from outside suppliers. The breakdown on source (supplier $A$, supplier $B$) and quality ($\text{H: high, U: usable, D: defective}$) is shown in the two-way contingency table. $H$ $U$ $D$ $A$ $0.6937$ $0.0049$ $0.0014$ $B$ $0.2982$ $0.0009$ $0.0009$ The record of a part is selected at random. Find the probability of each of the following events. 1. The part was defective. 2. The part was either of high quality or was at least usable, in two ways: (i) by adding numbers in the table, and (ii) using the answer to (a) and the Probability Rule for Complements. 3. The part was defective and came from supplier $B$. 4. The part was defective or came from supplier $B$, in two ways: by finding the cells in the table that correspond to this event and adding their probabilities, and (ii) using the Additive Rule of Probability. 1. Individuals with a particular medical condition were classified according to the presence ($T$) or absence ($N$) of a potential toxin in their blood and the onset of the condition ($\text{E: early, M: midrange, L: late}$). The breakdown according to this classification is shown in the two-way contingency table. $E$ $M$ $L$ $T$ $0.012$ $0.124$ $0.013$ $N$ $0.170$ $0.638$ $0.043$ One of these individuals is selected at random. 
Find the probability of each of the following events. 1. The person experienced early onset of the condition. 2. The onset of the condition was either midrange or late, in two ways: (i) by adding numbers in the table, and (ii) using the answer to (a) and the Probability Rule for Complements. 3. The toxin is present in the person’s blood. 4. The person experienced early onset of the condition and the toxin is present in the person’s blood. 5. The person experienced early onset of the condition or the toxin is present in the person’s blood, in two ways: (i) by finding the cells in the table that correspond to this event and adding their probabilities, and (ii) using the Additive Rule of Probability. 1. The breakdown of the students enrolled in a university course by class ($\text{F: freshman, So: sophomore, J: junior, Se: senior}$) and academic major ($\text{S: science, mathematics, or engineering, L: liberal arts, O: other}$) is shown in the two-way classification table. Major Class $F$ $So$ $J$ $Se$ $S$ $92$ $42$ $20$ $13$ $L$ $368$ $167$ $80$ $53$ $O$ $460$ $209$ $100$ $67$ A student enrolled in the course is selected at random. Adjoin the row and column totals to the table and use the expanded table to find the probability of each of the following events. 1. The student is a freshman. 2. The student is a liberal arts major. 3. The student is a freshman liberal arts major. 4. The student is either a freshman or a liberal arts major. 5. The student is not a liberal arts major. 1. The table relates the response to a fund-raising appeal by a college to its alumni to the number of years since graduation. Response Years Since Graduation $0-5$ $6-20$ $21-35$ Over $35$ Positive $120$ $440$ $210$ $90$ None $1380$ $3560$ $3290$ $910$ An alumnus is selected at random. Adjoin the row and column totals to the table and use the expanded table to find the probability of each of the following events. 1. The alumnus responded. 2. The alumnus did not respond. 3. The alumnus graduated at least $21$ years ago. 4. The alumnus graduated at least $21$ years ago and responded. Additional Exercises 1. The sample space for tossing three coins is $S=\{hhh,hht,hth,htt,thh,tht,tth,ttt\}$ 1. List the outcomes that correspond to the statement “All the coins are heads.” 2. List the outcomes that correspond to the statement “Not all the coins are heads.” 3. List the outcomes that correspond to the statement “All the coins are not heads.” Answers 1. $\{b,c\}$ 2. $\{a\}$ 3. $\varnothing$ 1. $H=\{hhh,hht,hth,htt,thh,tht,tth\},\; M=\{hhh,hht,hth,thh\}$ 2. $H\cap M=\{hhh,hht,hth,thh\}, H\cup M=H, H^c=\{ttt\}$ 3. $P(H\cap M)=4/8, P(H\cup M)=7/8, P(H^c)=1/8$ 4. Mutually exclusive because they have no elements in common. 1. $B=\{b1,b2,b3,b4\},\; R=\{r1,r2,r3,r4\},\; N=\{b1,b2,y1,y2,g1,g2,r1,r2\}$ 2. $B\cap R=\varnothing , B\cup R=\{b1,b2,b3,b4,r1,r2,r3,r4\},\; B\cap N=\{b1,b2\},\ R\cup N=\{b1,b2,y1,y2,g1,g2,r1,r2,r3,r4\},\ B^c=\{y1,y2,y3,y4,g1,g2,g3,g4,r1,r2,r3,r4\},\; (B\cup R)^c=\{y1,y2,y3,y4,g1,g2,g3,g4\}$ 3. $P(B\cap R)=0,\; P(B\cup R)=8/16,\; P(B\cap N)=2/16,\; P(R\cup N)=10/16,\; P(B^c)=12/16,\; P((B\cup R)^c)=8/16$ 4. Not mutually exclusive because they have an element in common. 1. $0.36$ 2. $0.78$ 3. $0.64$ 4. $0.27$ 5. $0.87$ 1. $P(A)=0.38,\; P(B)=0.62,\; P(A\cap B)=0$ 2. $P(U)=0.37,\; P(W)=0.33,\; P(U\cap W)=0$ 3. $0.7$ 4. $0.7$ 5. $A$ and $U$ are not mutually exclusive because $P(A\cap U)$ is the nonzero number $0.15$. $A$ and $V$ are mutually exclusive because $P(A\cap V)=0$. 1. “four or less” 2. “an odd number” 3. 
“no heads” or “all tails” 4. “a freshman” 1. “All the children are boys.” Event: $\{bbg,bgb,bgg,gbb,gbg,ggb,ggg\}$, Complement: $\{bbb\}$ 2. “At least two of the children are girls” or “There are two or three girls.” Event: $\{bbb,bbg,bgb,gbb\}$, Complement: $\{bgg,gbg,ggb,ggg\}$ 3. “At least one child is a boy.” Event: $\{ggg\}$, Complement: $\{bbb,bbg,bgb,bgg,gbb,gbg,ggb\}$ 4. “There are either no girls, exactly one girl, or three girls.” Event: $\{bgg,gbg,ggb\}$, Complement: $\{bbb,bbg,bgb,gbb,ggg\}$ 5. “The first born is a boy.” Event: $\{gbb,gbg,ggb,ggg\}$, Complement: $\{bbb,bbg,bgb,bgg\}$ 1. $0.47$ 1. $0.0023$ 2. $0.9977$ 3. $0.0009$ 4. $0.3014$ 1. $920/1671$ 2. $668/1671$ 3. $368/1671$ 4. $1220/1671$ 5. $1003/1671$ 1. $\{hhh\}$ 2. $\{hht,hth,htt,thh,tht,tth,ttt\}$ 3. $\{ttt\}$ 3.3: Conditional Probability and Independent Events Basic 1. For two events $A$ and $B$, $P(A)=0.73,\; P(B)=0.48\; \text{and}\; P(A\cap B)=0.29$. 1. Find $P(A\mid B)$. 2. Find $P(B\mid A)$. 3. Determine whether or not $A$ and $B$ are independent. 2. For two events $A$ and $B$, $P(A)=0.26,\; P(B)=0.37\; \text{and}\; P(A\cap B)=0.11$. 1. Find $P(A\mid B)$. 2. Find $P(B\mid A)$. 3. Determine whether or not $A$ and $B$ are independent. 3. For independent events $A$ and $B$, $P(A)=0.81$ and $P(B)=0.27$. 1. Find $P(A\cap B)$. 2. Find $P(A\mid B)$. 3. Find $P(B\mid A)$. 4. For independent events $A$ and $B$, $P(A)=0.68$ and $P(B)=0.37$. 1. Find $P(A\cap B)$. 2. Find $P(A\mid B)$. 3. Find $P(B\mid A)$. 5. For mutually exclusive events $A$ and $B$, $P(A)=0.17$ and $P(B)=0.32$. 1. Find $P(A\mid B)$. 2. Find $P(B\mid A)$. 6. For mutually exclusive events $A$ and $B$, $P(A)=0.45$ and $P(B)=0.09$. 1. Find $P(A\mid B)$. 2. Find $P(B\mid A)$. 7. Compute the following probabilities in connection with the roll of a single fair die. 1. The probability that the roll is even. 2. The probability that the roll is even, given that it is not a two. 3. The probability that the roll is even, given that it is not a one. 8. Compute the following probabilities in connection with two tosses of a fair coin. 1. The probability that the second toss is heads. 2. The probability that the second toss is heads, given that the first toss is heads. 3. The probability that the second toss is heads, given that at least one of the two tosses is heads. 9. A special deck of $16$ cards has $4$ that are blue, $4$ yellow, $4$ green, and $4$ red. The four cards of each color are numbered from one to four. A single card is drawn at random. Find the following probabilities. 1. The probability that the card drawn is red. 2. The probability that the card is red, given that it is not green. 3. The probability that the card is red, given that it is neither red nor yellow. 4. The probability that the card is red, given that it is not a four. 10. A special deck of $16$ cards has $4$ that are blue, $4$ yellow, $4$ green, and $4$ red. The four cards of each color are numbered from one to four. A single card is drawn at random. Find the following probabilities. 1. The probability that the card drawn is a two or a four. 2. The probability that the card is a two or a four, given that it is not a one. 3. The probability that the card is a two or a four, given that it is either a two or a three. 4. The probability that the card is a two or a four, given that it is red or green. 11. A random experiment gave rise to the two-way contingency table shown. Use it to compute the probabilities indicated.
$R$ $S$ $A$ $0.12$ $0.18$ $B$ $0.28$ $0.42$ 1. $P(A),\; P(R),\; P(A\cap R)$. 2. Based on the answer to (a), determine whether or not the events $A$ and $R$ are independent. 3. Based on the answer to (b), determine whether or not $P(A\mid R)$ can be predicted without any computation. If so, make the prediction. In any case, compute $P(A\mid R)$ using the Rule for Conditional Probability. 12. A random experiment gave rise to the two-way contingency table shown. Use it to compute the probabilities indicated. $R$ $S$ $A$ $0.13$ $0.07$ $B$ $0.61$ $0.19$ 1. $P(A),\; P(R),\; P(A\cap R)$. 2. Based on the answer to (a), determine whether or not the events $A$ and $R$ are independent. 3. Based on the answer to (b), determine whether or not $P(A\mid R)$ can be predicted without any computation. If so, make the prediction. In any case, compute $P(A\mid R)$ using the Rule for Conditional Probability. 13. Suppose for events $A$ and $B$ in a random experiment $P(A)=0.70$ and $P(B)=0.30$. Compute the indicated probability, or explain why there is not enough information to do so. 1. $P(A\cap B)$. 2. $P(A\cap B)$, with the extra information that $A$ and $B$ are independent. 3. $P(A\cap B)$, with the extra information that $A$ and $B$ are mutually exclusive. 14. Suppose for events $A$ and $B$ in a random experiment $P(A)=0.50$ and $P(B)=0.50$. Compute the indicated probability, or explain why there is not enough information to do so. 1. $P(A\cap B)$. 2. $P(A\cap B)$, with the extra information that $A$ and $B$ are independent. 3. $P(A\cap B)$, with the extra information that $A$ and $B$ are mutually exclusive. 15. Suppose for events $A,\; B,\; \text{and}\; C$ connected to some random experiment, $A,\; B,\; \text{and}\; C$ are independent and $P(A)=0.50$, $P(B)=0.50\; \text{and}\; P(C)=0.44$. Compute the indicated probability, or explain why there is not enough information to do so. 1. $P(A\cap B\cap C)$. 2. $P(A^c\cap B^c\cap C^c)$. 16. Suppose for events $A,\; B,\; \text{and}\; C$ connected to some random experiment, $A,\; B,\; \text{and}\; C$ are independent and $P(A)=0.95$, $P(B)=0.73\; \text{and}\; P(C)=0.62$. Compute the indicated probability, or explain why there is not enough information to do so. 1. $P(A\cap B\cap C)$. 2. $P(A^c\cap B^c\cap C^c)$. Applications Q3.3.17 The sample space that describes all three-child families according to the genders of the children with respect to birth order is $S=\{bbb,bbg,bgb,bgg,gbb,gbg,ggb,ggg\}$. In the experiment of selecting a three-child family at random, compute each of the following probabilities, assuming all outcomes are equally likely. 1. The probability that the family has at least two boys. 2. The probability that the family has at least two boys, given that not all of the children are girls. 3. The probability that at least one child is a boy. 4. The probability that at least one child is a boy, given that the first born is a girl. Q3.3.18 The following two-way contingency table gives the breakdown of the population in a particular locale according to age and number of vehicular moving violations in the past three years: Age Violations $0$ $1$ $2+$ Under $21$ $0.04$ $0.06$ $0.02$ $21-40$ $0.25$ $0.16$ $0.01$ $41-60$ $0.23$ $0.10$ $0.02$ $60+$ $0.08$ $0.03$ $0.00$ A person is selected at random. Find the following probabilities. 1. The person is under $21$. 2. The person has had at least two violations in the past three years. 3. The person has had at least two violations in the past three years, given that he is under $21$. 4.
The person is under $21$, given that he has had at least two violations in the past three years. 5. Determine whether the events “the person is under $21$” and “the person has had at least two violations in the past three years” are independent or not. Q3.3.19 The following two-way contingency table gives the breakdown of the population in a particular locale according to party affiliation ($A, B, C, \text{or None}$) and opinion on a bond issue: Affiliation Opinion Favors Opposes Undecided $A$ $0.12$ $0.09$ $0.07$ $B$ $0.16$ $0.12$ $0.14$ $C$ $0.04$ $0.03$ $0.06$ None $0.08$ $0.06$ $0.03$ A person is selected at random. Find each of the following probabilities. 1. The person is in favor of the bond issue. 2. The person is in favor of the bond issue, given that he is affiliated with party $A$. 3. The person is in favor of the bond issue, given that he is affiliated with party $B$. Q3.3.20 The following two-way contingency table gives the breakdown of the population of patrons at a grocery store according to the number of items purchased and whether or not the patron made an impulse purchase at the checkout counter: Number of Items Impulse Purchase Made Not Made Few $0.01$ $0.19$ Many $0.04$ $0.76$ A patron is selected at random. Find each of the following probabilities. 1. The patron made an impulse purchase. 2. The patron made an impulse purchase, given that the total number of items purchased was many. 3. Determine whether or not the events “few purchases” and “made an impulse purchase at the checkout counter” are independent. Q3.3.21 The following two-way contingency table gives the breakdown of the population of adults in a particular locale according to employment type and level of life insurance: Employment Type Level of Insurance Low Medium High Unskilled $0.07$ $0.19$ $0.00$ Semi-skilled $0.04$ $0.28$ $0.08$ Skilled $0.03$ $0.18$ $0.05$ Professional $0.01$ $0.05$ $0.02$ An adult is selected at random. Find each of the following probabilities. 1. The person has a high level of life insurance. 2. The person has a high level of life insurance, given that he does not have a professional position. 3. The person has a high level of life insurance, given that he has a professional position. 4. Determine whether or not the events “has a high level of life insurance” and “has a professional position” are independent. Q3.3.22 The sample space of equally likely outcomes for the experiment of rolling two fair dice is $\begin{matrix} 11 & 12 & 13 & 14 & 15 & 16 \\ 21 & 22 & 23 & 24 & 25 & 26 \\ 31 & 32 & 33 & 34 & 35 & 36 \\ 41 & 42 & 43 & 44 & 45 & 46 \\ 51 & 52 & 53 & 54 & 55 & 56 \\ 61 & 62 & 63 & 64 & 65 & 66 \end{matrix}$ Identify the events $\text{N: the sum is at least nine, T: at least one of the dice is a two, and F: at least one of the dice is a five}$. 1. Find $P(N)$. 2. Find $P(N\mid F)$. 3. Find $P(N\mid T)$. 4. Determine from the previous answers whether or not the events $N$ and $F$ are independent; whether or not $N$ and $T$ are. Q3.3.23 The sensitivity of a drug test is the probability that the test will be positive when administered to a person who has actually taken the drug. Suppose that there are two independent tests to detect the presence of a certain type of banned drugs in athletes. One has sensitivity $0.75$; the other has sensitivity $0.85$. If both are applied to an athlete who has taken this type of drug, what is the chance that his usage will go undetected? Q3.3.24 A man has two lights in his well house to keep the pipes from freezing in winter. He checks the lights daily.
Each light has probability $0.002$ of burning out before it is checked the next day (independently of the other light). 1. If the lights are wired in parallel one will continue to shine even if the other burns out. In this situation, compute the probability that at least one light will continue to shine for the full $24$ hours. Note the greatly increased reliability of the system of two bulbs over that of a single bulb. 2. If the lights are wired in series neither one will continue to shine even if only one of them burns out. In this situation, compute the probability that at least one light will continue to shine for the full $24$ hours. Note the slightly decreased reliability of the system of two bulbs over that of a single bulb. Q3.3.25 An accountant has observed that $5\%$ of all copies of a particular two-part form have an error in Part I, and $2\%$ have an error in Part II. If the errors occur independently, find the probability that a randomly selected form will be error-free. Q3.3.26 A box contains $20$ screws which are identical in size, but $12$ of which are zinc coated and $8$ of which are not. Two screws are selected at random, without replacement. 1. Find the probability that both are zinc coated. 2. Find the probability that at least one is zinc coated. Additional Exercises Q3.3.27 Events $A$ and $B$ are mutually exclusive. Find $P(A\mid B)$. Q3.3.28 The city council of a particular city is composed of five members of party $A$, four members of party $B$, and three independents. Two council members are randomly selected to form an investigative committee. 1. Find the probability that both are from party $A$. 2. Find the probability that at least one is an independent. 3. Find the probability that the two have different party affiliations (that is, not both $A$, not both $B$, and not both independent). Q3.3.29 A basketball player makes $60\%$ of the free throws that he attempts, except that if he has just tried and missed a free throw then his chances of making a second one go down to only $30\%$. Suppose he has just been awarded two free throws. 1. Find the probability that he makes both. 2. Find the probability that he makes at least one. (A tree diagram could help.) Q3.3.30 An economist wishes to ascertain the proportion $p$ of the population of individual taxpayers who have purposely submitted fraudulent information on an income tax return. To truly guarantee anonymity of the taxpayers in a random survey, taxpayers questioned are given the following instructions. 1. Flip a coin. 2. If the coin lands heads, answer “Yes” to the question “Have you ever submitted fraudulent information on a tax return?” even if you have not. 3. If the coin lands tails, give a truthful “Yes” or “No” answer to the question “Have you ever submitted fraudulent information on a tax return?” The questioner is not told how the coin landed, so he does not know if a “Yes” answer is the truth or is given only because of the coin toss. 1. Using the Probability Rule for Complements and the independence of the coin toss and the taxpayers’ status fill in the empty cells in the two-way contingency table shown. Assume that the coin is fair. Each cell except the two in the bottom row will contain the unknown proportion (or probability) $p$. Status Coin Probability $H$ $T$ Fraud $p$ No fraud Probability $1$ 2. 
The only information that the economist sees consists of the entries in the following table: $\begin{array}{c|c|c} Response & "Yes" & "No" \\ \hline Proportion &r &s \\ \end{array}$ Equate the entry in the one cell in the table in (a) that corresponds to the answer “No” to the number $s$ to obtain the formula that expresses the unknown number $p$ in terms of the known number $s$. 3. Equate the sum of the entries in the three cells in the table in (a) that together correspond to the answer “Yes” to the number $r$ to obtain the formula that expresses the unknown number $p$ in terms of the known number $r$. 4. Use the fact that $r+s=1$ (since they are the probabilities of complementary events) to verify that the formulas in (b) and (c) give the same value for $p$. (For example, insert $s=1-r$ into the formula in (b) to obtain the formula in (c)). 5. Suppose a survey of $1,200$ taxpayers is conducted and $690$ respond “Yes” (truthfully or not) to the question “Have you ever submitted fraudulent information on a tax return?” Use the answer to either (b) or (c) to estimate the true proportion $p$ of all individual taxpayers who have purposely submitted fraudulent information on an income tax return. Answers 1. $0.6$ 2. $0.4$ 3. not independent 1. $0.22$ 2. $0.81$ 3. $0.27$ 1. $0$ 2. $0$ 1. $0.5$ 2. $0.4$ 3. $0.6$ 1. $0.25$ 2. $0.33$ 3. $0$ 4. $0.25$ 1. $P(A)=0.3,\; P(R)=0.4,\; P(A\cap R)=0.12$ 2. independent 3. without computation $0.3$ 1. Insufficient information. The events $A$ and $B$ are not known to be either independent or mutually exclusive. 2. $0.21$ 3. $0$ 1. $0.25$ 2. $0.02$ 1. $0.5$ 2. $0.57$ 3. $0.875$ 4. $0.75$ 1. $0.4$ 2. $0.43$ 3. $0.38$ 1. $0.15$ 2. $0.14$ 3. $0.25$ 4. not independent 1. $0.0375$ 2. $0.931$ 3. $0$ 1. $0.36$ 2. $0.72$
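Many of the contingency-table exercises above reduce to the same two computations: a marginal probability (add across a row or down a column) and a conditional probability (divide a cell entry by a row or column total). The following minimal Python sketch (an illustration, not part of the exercises) performs both for the party-affiliation table of Q3.3.19 and reproduces the answers listed above; the dictionary layout is our own choice.

```python
# Two-way contingency table of Q3.3.19, keyed by (affiliation, opinion).
table = {
    ("A", "Favors"): 0.12, ("A", "Opposes"): 0.09, ("A", "Undecided"): 0.07,
    ("B", "Favors"): 0.16, ("B", "Opposes"): 0.12, ("B", "Undecided"): 0.14,
    ("C", "Favors"): 0.04, ("C", "Opposes"): 0.03, ("C", "Undecided"): 0.06,
    ("None", "Favors"): 0.08, ("None", "Opposes"): 0.06, ("None", "Undecided"): 0.03,
}

def p_opinion(opinion):
    """Marginal probability of an opinion: add down that column."""
    return sum(v for (aff, op), v in table.items() if op == opinion)

def p_opinion_given_party(opinion, party):
    """P(opinion | party) = P(opinion and party) / P(party)."""
    p_party = sum(v for (aff, op), v in table.items() if aff == party)
    return table[(party, opinion)] / p_party

print(round(p_opinion("Favors"), 2))                   # 0.4
print(round(p_opinion_given_party("Favors", "A"), 2))  # 0.43
print(round(p_opinion_given_party("Favors", "B"), 2))  # 0.38
```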
It is often the case that a number is naturally associated to the outcome of a random experiment: the number of boys in a three-child family, the number of defective light bulbs in a case of 100 bulbs, the length of time until the next customer arrives at the drive-through window at a bank. Such a number varies from trial to trial of the corresponding experiment, and does so in a way that cannot be predicted with certainty; hence, it is called a random variable. In this chapter and the next we study such variables. • 4.1: Random Variables A random variable is a number generated by a random experiment. A random variable is called discrete if its possible values form a finite or countable set. A random variable is called continuous if its possible values contain a whole interval of numbers. • 4.2: Probability Distributions for Discrete Random Variables The probability distribution of a discrete random variable X is a list of each possible value of X together with the probability that X takes that value in one trial of the experiment. The probabilities in the probability distribution of a random variable X must satisfy the following two conditions: Each probability P(x) must be between 0 and 1  and the sum of all the probabilities is 1 . • 4.3: The Binomial Distribution Suppose a random experiment has the following characteristics. There are n identical and independent trials of a common procedure. There are exactly two possible outcomes for each trial, one termed “success” and the other “failure.” The probability of success on any one trial is the same number p. Then the discrete random variable X that counts the number of successes in the n trials is the binomial random variable with parameters n and p. We also say that X has a binomial distribution • 4.E: Discrete Random Variables (Exercises) 04: Discrete Random Variables Learning Objectives • To learn the concept of a random variable. • To learn the distinction between discrete and continuous random variables. Definition: random variable A random variable is a numerical quantity that is generated by a random experiment. We will denote random variables by capital letters, such as $X$ or $Z$, and the actual values that they can take by lowercase letters, such as $x$ and $z$. Table $1$ gives four examples of random variables. In the second example, the three dots indicates that every counting number is a possible value for $X$. Although it is highly unlikely, for example, that it would take $50$ tosses of the coin to observe heads for the first time, nevertheless it is conceivable, hence the number $50$ is a possible value. The set of possible values is infinite, but is still at least countable, in the sense that all possible values can be listed one after another. In the last two examples, by way of contrast, the possible values cannot be individually listed, but take up a whole interval of numbers. In the fourth example, since the light bulb could conceivably continue to shine indefinitely, there is no natural greatest value for its lifetime, so we simply place the symbol $\infty$ for infinity as the right endpoint of the interval of possible values. 
Table $1$: Four Random Variables Experiment Number X Possible Values of X Roll two fair dice Sum of the number of dots on the top faces 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 Flip a fair coin repeatedly Number of tosses until the coin lands heads 1, 2, 3,4, … Measure the voltage at an electrical outlet Voltage measured 118 ≤ x ≤ 122 Operate a light bulb until it burns out Time until the bulb burns out 0 ≤ x < ∞ Definition: discrete random variable A random variable is called discrete if it has either a finite or a countable number of possible values. A random variable is called continuous if its possible values contain a whole interval of numbers. The examples in the table are typical in that discrete random variables typically arise from a counting process, whereas continuous random variables typically arise from a measurement. Key Takeaway • A random variable is a number generated by a random experiment. • A random variable is called discrete if its possible values form a finite or countable set. • A random variable is called continuous if its possible values contain a whole interval of numbers.
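One way to internalize the distinction is to simulate such variables. The following minimal Python sketch (an illustration, not part of the original text) generates values of three of the random variables in Table $1$; the exponential model for the bulb lifetime is an added assumption used only to produce a value from a whole interval of numbers.

```python
import random

def sum_two_dice():
    """Discrete: the sum of two fair dice, a value in {2, ..., 12}."""
    return random.randint(1, 6) + random.randint(1, 6)

def tosses_until_heads():
    """Discrete but countably infinite: tosses until the first heads."""
    count = 1
    while random.random() >= 0.5:   # each toss lands heads with probability 0.5
        count += 1
    return count

def bulb_lifetime(mean_hours=1000.0):
    """Continuous: any value in [0, infinity). The exponential
    distribution here is an illustrative modeling assumption."""
    return random.expovariate(1.0 / mean_hours)

print([sum_two_dice() for _ in range(5)])       # e.g. [7, 4, 11, 6, 9]
print([tosses_until_heads() for _ in range(5)]) # e.g. [1, 3, 1, 2, 1]
print(bulb_lifetime())                          # e.g. 842.7103...
```

The first two functions can return only values from a list that could be written out one after another; the third can return any number in a whole interval, which is exactly the discrete versus continuous distinction.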
Learning Objectives • To learn the concept of the probability distribution of a discrete random variable. • To learn the concepts of the mean, variance, and standard deviation of a discrete random variable, and how to compute them. Associated to each possible value $x$ of a discrete random variable $X$ is the probability $P(x)$ that $X$ will take the value $x$ in one trial of the experiment. Definition: probability distribution The probability distribution of a discrete random variable $X$ is a list of each possible value of $X$ together with the probability that $X$ takes that value in one trial of the experiment. The probabilities in the probability distribution of a random variable $X$ must satisfy the following two conditions: • Each probability $P(x)$ must be between $0$ and $1$: $0\leq P(x)\leq 1. \nonumber$ • The sum of all the probabilities is $1$: $\sum P(x)=1. \nonumber$ Example $1$: two Fair Coins A fair coin is tossed twice. Let $X$ be the number of heads that are observed. 1. Construct the probability distribution of $X$. 2. Find the probability that at least one head is observed. Solution 1. The possible values that $X$ can take are $0$, $1$, and $2$. Each of these numbers corresponds to an event in the sample space $S=\{hh,ht,th,tt\}$ of equally likely outcomes for this experiment: $X = 0\; \text{to}\; \{tt\},\; X = 1\; \text{to}\; \{ht,th\}, \; \text{and}\; X = 2\; \text{to}\; \{hh\}. \nonumber$ The probability of each of these events, hence of the corresponding value of $X$, can be found simply by counting, to give $\begin{array}{c|ccc} x & 0 & 1 & 2 \\ \hline P(x) & 0.25 & 0.50 & 0.25 \\ \end{array} \nonumber$ This table is the probability distribution of $X$. 2. “At least one head” is the event $X\geq 1$, which is the union of the mutually exclusive events $X = 1$ and $X = 2$. Thus \begin{align*} P(X\geq 1)&=P(1)+P(2)=0.50+0.25 \\[5pt] &=0.75 \end{align*} \nonumber A histogram that graphically illustrates the probability distribution is given in Figure $1$. Example $2$: Two Fair Dice A pair of fair dice is rolled. Let $X$ denote the sum of the number of dots on the top faces. 1. Construct the probability distribution of $X$ for a pair of fair dice. 2. Find $P(X\geq 9)$. 3. Find the probability that $X$ takes an even value. Solution The sample space of equally likely outcomes is $\begin{matrix} 11 & 12 & 13 & 14 & 15 & 16 \\ 21 & 22 & 23 & 24 & 25 & 26 \\ 31 & 32 & 33 & 34 & 35 & 36 \\ 41 & 42 & 43 & 44 & 45 & 46 \\ 51 & 52 & 53 & 54 & 55 & 56 \\ 61 & 62 & 63 & 64 & 65 & 66 \end{matrix}$ where the first digit is die 1 and the second digit is die 2. 1. The possible values for $X$ are the numbers $2$ through $12$. $X= 2$ is the event $\{11\}$, so $P(2)=1/36$. $X= 3$ is the event $\{12,21\}$, so $P(3)=2/36$. Continuing this way we obtain the following table $\begin{array}{c|ccccccccccc} x &2 &3 &4 &5 &6 &7 &8 &9 &10 &11 &12 \\ \hline P(x) &\dfrac{1}{36} &\dfrac{2}{36} &\dfrac{3}{36} &\dfrac{4}{36} &\dfrac{5}{36} &\dfrac{6}{36} &\dfrac{5}{36} &\dfrac{4}{36} &\dfrac{3}{36} &\dfrac{2}{36} &\dfrac{1}{36} \\ \end{array} \nonumber$ This table is the probability distribution of $X$. 2. The event $X\geq 9$ is the union of the mutually exclusive events $X = 9$, $X = 10$, $X = 11$, and $X = 12$. Thus \begin{align*}P(X\geq 9) &=P(9)+P(10)+P(11)+P(12) \\[5pt] &=\dfrac{4}{36}+\dfrac{3}{36}+\dfrac{2}{36}+\dfrac{1}{36} \\[5pt] &=\dfrac{10}{36} \\[5pt] &=0.2\bar{7} \end{align*} \nonumber 3.
Before we immediately jump to the conclusion that the probability that $X$ takes an even value must be $0.5$, note that $X$ takes six different even values but only five different odd values. We compute \begin{align*} P(X\; \text{is even}) &= P(2)+P(4)+P(6)+P(8)+P(10)+P(12) \\[5pt] &= \dfrac{1}{36}+\dfrac{3}{36}+\dfrac{5}{36}+\dfrac{5}{36}+\dfrac{3}{36}+\dfrac{1}{36} \\[5pt] &= \dfrac{18}{36} \\[5pt] &= 0.5 \end{align*} \nonumber A histogram that graphically illustrates the probability distribution is given in Figure $2$. The Mean and Standard Deviation of a Discrete Random Variable Definition: mean The mean (also called the "expectation value" or "expected value") of a discrete random variable $X$ is the number $\mu =E(X)=\sum x P(x) \label{mean}$ The mean of a random variable may be interpreted as the average of the values assumed by the random variable in repeated trials of the experiment. Example $3$ Find the mean of the discrete random variable $X$ whose probability distribution is $\begin{array}{c|cccc} x &-2 &1 &2 &3.5 \\ \hline P(x) &0.21 &0.34 &0.24 &0.21 \\ \end{array} \nonumber$ Solution Using the definition of mean (Equation \ref{mean}) gives \begin{align*} \mu &= \sum x P(x) \\[5pt] &= (-2)(0.21)+(1)(0.34)+(2)(0.24)+(3.5)(0.21) \\[5pt] &= 1.135 \end{align*} \nonumber Example $4$ A service organization in a large town organizes a raffle each month. One thousand raffle tickets are sold for $\1$ each. Each has an equal chance of winning. First prize is $\300$, second prize is $\200$, and third prize is $\100$. Let $X$ denote the net gain from the purchase of one ticket. 1. Construct the probability distribution of $X$. 2. Find the probability of winning any money in the purchase of one ticket. 3. Find the expected value of $X$, and interpret its meaning. Solution 1. If a ticket is selected as the first prize winner, the net gain to the purchaser is the $\300$ prize less the $\1$ that was paid for the ticket, hence $X = 300-1 = 299$. There is one such ticket, so $P(299) = 0.001$. Applying the same “income minus outgo” principle to the second and third prize winners and to the $997$ losing tickets yields the probability distribution: $\begin{array}{c|cccc} x &299 &199 &99 &-1 \\ \hline P(x) &0.001 &0.001 &0.001 &0.997 \\ \end{array} \nonumber$ 2. Let $W$ denote the event that a ticket is selected to win one of the prizes. Using the table \begin{align*} P(W)&=P(299)+P(199)+P(99)=0.001+0.001+0.001 \\[5pt] &=0.003 \end{align*} \nonumber 3. Using the definition of expected value (Equation \ref{mean}), \begin{align*}E(X)&=(299)\cdot (0.001)+(199)\cdot (0.001)+(99)\cdot (0.001)+(-1)\cdot (0.997) \\[5pt] &=-0.4 \end{align*} \nonumber The negative value means that one loses money on the average. In particular, if someone were to buy tickets repeatedly, then although he would win now and then, on average he would lose $40$ cents per ticket purchased. The concept of expected value is also basic to the insurance industry, as the following simplified example illustrates. Example $5$ A life insurance company will sell a $\200,000$ one-year term life insurance policy to an individual in a particular risk group for a premium of $\195$. Find the expected value to the company of a single policy if a person in this risk group has a $99.97\%$ chance of surviving one year. Solution Let $X$ denote the net gain to the company from the sale of one such policy. There are two possibilities: the insured person lives the whole year or the insured person dies before the year is up.
Applying the “income minus outgo” principle, in the former case the value of $X$ is $195-0=195$; in the latter case it is $195-200,000=-199,805$. Since the probability in the first case is $0.9997$ and in the second case is $1-0.9997=0.0003$, the probability distribution for $X$ is: $\begin{array}{c|cc} x &195 &-199,805 \\ \hline P(x) &0.9997 &0.0003 \\ \end{array}\nonumber$ Therefore \begin{align*} E(X) &=\sum x P(x) \\[5pt] &=(195)\cdot (0.9997)+(-199,805)\cdot (0.0003) \\[5pt] &=135 \end{align*} \nonumber Occasionally (in fact, $3$ times in $10,000$) the company loses a large amount of money on a policy, but typically it gains $\195$, which by our computation of $E(X)$ works out to a net gain of $\135$ per policy sold, on average. Definition: variance The variance ($\sigma ^2$) of a discrete random variable $X$ is the number $\sigma ^2=\sum (x-\mu )^2P(x) \label{var1}$ which by algebra is equivalent to the formula $\sigma ^2=\left [ \sum x^2 P(x)\right ]-\mu ^2 \label{var2}$ Definition: standard deviation The standard deviation, $\sigma$, of a discrete random variable $X$ is the square root of its variance, hence is given by the formulas $\sigma =\sqrt{\sum (x-\mu )^2P(x)}=\sqrt{\left [ \sum x^2 P(x)\right ]-\mu ^2} \label{std}$ The variance and standard deviation of a discrete random variable $X$ may be interpreted as measures of the variability of the values assumed by the random variable in repeated trials of the experiment. The units on the standard deviation match those of $X$. Example $6$ A discrete random variable $X$ has the following probability distribution: $\begin{array}{c|cccc} x &-1 &0 &1 &4 \\ \hline P(x) &0.2 &0.5 &a &0.1 \\ \end{array} \label{Ex61}$ A histogram that graphically illustrates the probability distribution is given in Figure $3$. Compute each of the following quantities. 1. $a$. 2. $P(0)$. 3. $P(X> 0)$. 4. $P(X\geq 0)$. 5. $P(X\leq -2)$. 6. The mean $\mu$ of $X$. 7. The variance $\sigma ^2$ of $X$. 8. The standard deviation $\sigma$ of $X$. Solution 1. Since all probabilities must add up to 1, $a=1-(0.2+0.5+0.1)=0.2 \nonumber$ 2. Directly from the table, $P(0)=0.5 \nonumber$ 3. From Table \ref{Ex61}, $P(X> 0)=P(1)+P(4)=0.2+0.1=0.3 \nonumber$ 4. From Table \ref{Ex61}, $P(X\geq 0)=P(0)+P(1)+P(4)=0.5+0.2+0.1=0.8 \nonumber$ 5. Since none of the numbers listed as possible values for $X$ is less than or equal to $-2$, the event $X\leq -2$ is impossible, so $P(X\leq -2)=0 \nonumber$ 6. Using the formula in the definition of $\mu$ (Equation \ref{mean}) \begin{align*}\mu &=\sum x P(x) \\[5pt] &=(-1)\cdot (0.2)+(0)\cdot (0.5)+(1)\cdot (0.2)+(4)\cdot (0.1) \\[5pt] &=0.4 \end{align*} \nonumber 7. Using the formula in the definition of $\sigma ^2$ (Equation \ref{var1}) and the value of $\mu$ that was just computed, \begin{align*} \sigma ^2 &=\sum (x-\mu )^2P(x) \\ &= (-1-0.4)^2\cdot (0.2)+(0-0.4)^2\cdot (0.5)+(1-0.4)^2\cdot (0.2)+(4-0.4)^2\cdot (0.1) \\ &= 1.84 \end{align*} \nonumber 8. Using the result of part (g), $\sigma =\sqrt{1.84}\approx 1.3565$ Summary • The probability distribution of a discrete random variable $X$ is a listing of each possible value $x$ taken by $X$ along with the probability $P(x)$ that $X$ takes that value in one trial of the experiment. • The mean $\mu$ of a discrete random variable $X$ is a number that indicates the average value of $X$ over numerous trials of the experiment. It is computed using the formula $\mu =\sum xP(x)$.
• The variance $\sigma ^2$ and standard deviation $\sigma$ of a discrete random variable $X$ are numbers that indicate the variability of $X$ over numerous trials of the experiment. They may be computed using the formula $\sigma ^2=\left [ \sum x^2P(x) \right ]-\mu ^2$.
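These summary formulas are straightforward to automate. The following minimal Python sketch (an illustration, not part of the original text) recomputes the mean, variance, and standard deviation of the distribution in Example 6, using the value $a=0.2$ found in part 1 of its solution.

```python
from math import sqrt

# Probability distribution of Example 6, with a = 0.2 filled in.
dist = {-1: 0.2, 0: 0.5, 1: 0.2, 4: 0.1}

# Sanity check: the probabilities must sum to 1 (up to rounding error).
assert abs(sum(dist.values()) - 1.0) < 1e-12

mu = sum(x * p for x, p in dist.items())              # mean: 0.4
var = sum(x**2 * p for x, p in dist.items()) - mu**2  # shortcut formula: 1.84
sigma = sqrt(var)                                     # about 1.3565

print(round(mu, 4), round(var, 4), round(sigma, 4))
```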
Learning Objectives • To learn the concept of a binomial random variable. • To learn how to recognize a random variable as being a binomial random variable. The experiment of tossing a fair coin three times and the experiment of observing the genders according to birth order of the children in a randomly selected three-child family are completely different, but the random variables that count the number of heads in the coin toss and the number of boys in the family (assuming the two genders are equally likely) are the same random variable, the one with probability distribution $\begin{array}{c|cccc} x& 0& 1& 2& 3 \\ \hline P(x)& 0.125& 0.375& 0.375& 0.125 \\ \end{array} \nonumber$ A histogram that graphically illustrates this probability distribution is given in Figure $1$. What is common to the two experiments is that we perform three identical and independent trials of the same action, each trial has only two outcomes (heads or tails, boy or girl), and the probability of success is the same number, $0.5$, on every trial. The random variable that is generated is called the binomial random variable with parameters $n=3$ and $p=0.5$. This is just one case of a general situation. Definition: binomial distribution Suppose a random experiment has the following characteristics. • There are $n$ identical and independent trials of a common procedure. • There are exactly two possible outcomes for each trial, one termed “success” and the other “failure.” • The probability of success on any one trial is the same number $p$. Then the discrete random variable $X$ that counts the number of successes in the $n$ trials is the binomial random variable with parameters $n$ and $p$. We also say that $X$ has a binomial distribution with parameters $n$ and $p$. The following four examples illustrate the definition. Note how in every case “success” is the outcome that is counted, not the outcome that we prefer or think is better in some sense. 1. A random sample of $125$ students is selected from a large college in which the proportion of students who are females is $57\%$. Suppose $X$ denotes the number of female students in the sample. In this situation there are $n=125$ identical and independent trials of a common procedure, selecting a student at random; there are exactly two possible outcomes for each trial, “success” (what we are counting, that the student be female) and “failure;” and finally the probability of success on any one trial is the same number $p = 0.57$. $X$ is a binomial random variable with parameters $n = 125$ and $p = 0.57$. 2. A multiple-choice test has $15$ questions, each of which has five choices. An unprepared student taking the test answers each of the questions completely randomly by choosing an arbitrary answer from the five provided. Suppose $X$ denotes the number of answers that the student gets right. $X$ is a binomial random variable with parameters $n = 15$ and $p=1/5=0.20$. 3. In a survey of $1,000$ registered voters, each voter is asked if he intends to vote for candidate Titania Queen in the upcoming election. Suppose $X$ denotes the number of voters in the survey who intend to vote for Titania Queen. $X$ is a binomial random variable with $n = 1000$ and $p$ equal to the true proportion of voters (surveyed or not) who intend to vote for Titania Queen. 4. An experimental medication was given to $30$ patients with a certain medical condition. Suppose $X$ denotes the number of patients who develop severe side effects.
$X$ is a binomial random variable with $n = 30$ and $p$ equal to the true probability that a patient with the underlying condition will experience severe side effects if given that medication.

Probability Formula for a Binomial Random Variable

Often the most difficult aspect of working a problem that involves the binomial random variable is recognizing that the random variable in question has a binomial distribution. Once that is known, probabilities can be computed using the following formula.

If $X$ is a binomial random variable with parameters $n$ and $p$, then $P(x)=\dfrac{n!}{x!(n−x)!}p^xq^{n−x} \nonumber$ where $q=1-p$ and where for any counting number $m$, $m!$ (read “m factorial”) is defined by $0!=1,1!=1,2!=1⋅2,3!=1⋅2⋅3 \nonumber$ and in general $m!=1\cdot 2\cdots (m-1)\cdot m \nonumber$

Example $1$

Seventeen percent of victims of financial fraud know the perpetrator of the fraud personally.

1. Use the formula to construct the probability distribution for the number $X$ of people in a random sample of five victims of financial fraud who knew the perpetrator personally.
2. An investigator examines five cases of financial fraud every day. Find the most frequent number of cases each day in which the victim knew the perpetrator.
3. An investigator examines five cases of financial fraud every day. Find the average number of cases per day in which the victim knew the perpetrator.

Solution

The random variable $X$ is binomial with parameters $n = 5$ and $p = 0.17$; $q=1-p=0.83$. The possible values of $X$ are $0, 1, 2, 3, 4,\; \text{and}\; 5$.

\begin{align*} P(0) &= \frac{5!}{0!5!}(0.17)^0(0.83)^5\ &= \frac{1\cdot 2\cdot 3\cdot 4\cdot 5}{(1)(1\cdot 2\cdot 3\cdot 4\cdot 5)}1\cdot (0.3939040643)\ &= 0.3939040643\approx 0.3939 \end{align*} \nonumber

\begin{align*} P(1) &= \frac{5!}{1!4!}(0.17)^1(0.83)^4\ &= \frac{1\cdot 2\cdot 3\cdot 4\cdot 5}{(1)(1\cdot 2\cdot 3\cdot 4)}(0.17)\cdot (0.47458321)\ &= 5\cdot (0.17)\cdot (0.47458321)\ &= 0.4033957285 \approx 0.4034 \end{align*} \nonumber

\begin{align*} P(2) &= \frac{5!}{2!3!}(0.17)^2(0.83)^3\ &= \frac{1\cdot 2\cdot 3\cdot 4\cdot 5}{(1\cdot 2)(1\cdot 2\cdot 3)}(0.0289)\cdot (0.571787)\ &= 10\cdot (0.0289)\cdot (0.571787)\ &= 0.165246443 \approx 0.1652 \end{align*} \nonumber

The remaining three probabilities are computed similarly, to give the probability distribution

$\begin{array}{c|cccccc} x& 0& 1& 2& 3& 4& 5\ \hline P(x)& 0.3939& 0.4034& 0.1652& 0.0338& 0.0035& 0.0001 \ \end{array} \nonumber$

The probabilities do not add up to exactly $1$ because of rounding. This probability distribution is represented by the histogram in Figure $2$, which graphically illustrates just how improbable the events $X = 4$ and $X = 5$ are. The corresponding bar in the histogram above the number $4$ is barely visible, if visible at all, and the bar above $5$ is far too short to be visible.

The value of $X$ that is most likely is $X = 1$, so the most frequent number of cases seen each day in which the victim knew the perpetrator is one.

The average number of cases per day in which the victim knew the perpetrator is the mean of $X$, which is \begin{align} μ&=\sum xP(x) \ &=0⋅0.3939+1⋅0.4034+2⋅0.1652+3⋅0.0338+4⋅0.0035+5⋅0.0001 \ &= 0.8497 \end{align} \nonumber

Special Formulas for the Mean and Standard Deviation of a Binomial Random Variable

Since a binomial random variable is a discrete random variable, the formulas for its mean, variance, and standard deviation given in the previous section apply to it, as we just saw in Example $1$ in the case of the mean.
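Computations like those in Example $1$ are also easy to check with software. The following is a minimal Python sketch (our own illustration, not part of the original exposition; it assumes Python 3.8 or later for math.comb, and the names n, p, and dist are ours) that evaluates the probability formula at every possible value of $X$:

from math import comb

n, p = 5, 0.17
q = 1 - p

# P(x) = n!/(x!(n-x)!) * p^x * q^(n-x), the formula stated above
dist = {x: comb(n, x) * p**x * q**(n - x) for x in range(n + 1)}

for x, prob in dist.items():
    print(x, round(prob, 4))  # 0.3939, 0.4034, 0.1652, 0.0338, 0.0035, 0.0001

# The mean computed from the unrounded probabilities is exactly 0.85;
# summing the rounded table values instead gives 0.8497, as in the text.
print(round(sum(x * prob for x, prob in dist.items()), 4))  # 0.85

That the exact mean is $0.85=(5)(0.17)$ is no accident; it anticipates the special formula $\mu =np$ developed next.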
However, for the binomial random variable there are much simpler formulas.

If $X$ is a binomial random variable with parameters $n$ and $p$, then $\mu=np \nonumber$ $\sigma ^2=npq \nonumber$ $\sigma =\sqrt{npq} \nonumber$ where $q=1-p$.

Example $2$

Find the mean and standard deviation of the random variable $X$ of Example $1$.

Solution

The random variable $X$ is binomial with parameters $n = 5$ and $p = 0.17$, and $q=1-p=0.83$. Thus its mean and standard deviation are $\mu =np=(5)\cdot (0.17)=0.85 \; \; \text{(exactly)} \nonumber$ and $\sigma =\sqrt{npq}=\sqrt{(5)\cdot (0.17)\cdot (0.83)}=\sqrt{0.7055}\approx 0.8399 \nonumber$

The Cumulative Probability Distribution of a Binomial Random Variable

In order to allow a broader range of more realistic problems to be worked, cumulative probability tables for binomial random variables are provided for various choices of the parameters $n$ and $p$. These tables are not the probability distributions that we have seen so far, but are cumulative probability distributions. In the place of the probability $P(x)$ the table contains the probability $P(X≤x) = P(0) + P(1) + \ldots + P(x) \nonumber$ This is illustrated in Figure $3$. The probability entered in the table corresponds to the area of the shaded region.

The reason for providing a cumulative table is that in practical problems that involve a binomial random variable the probability that is sought is typically of the form $P(X≤x)$ or $P(X≥x)$. The cumulative table is much easier to use for computing $P(X≤x)$ since all the individual probabilities have already been computed and added. The one table suffices for both $P(X≤x)$ and $P(X≥x)$, and can be used to readily obtain probabilities of the form $P(x)$, too, because of the following formulas. The first is just the Probability Rule for Complements.

If $X$ is a discrete random variable, then $P(X≥x)=1−P(X≤x−1) \nonumber$ and $P(x)=P(X≤x)−P(X≤x−1) \nonumber$

Example $3$

A student takes a ten-question true/false exam.

1. Find the probability that the student gets exactly six of the questions right simply by guessing the answer on every question.
2. Find the probability that the student will obtain a passing grade of $60\%$ or greater simply by guessing.

Solution

Let $X$ denote the number of questions that the student guesses correctly. Then $X$ is a binomial random variable with parameters $n = 10$ and $p= 0.50$.

1. The probability sought is $P(6)$. The formula gives $P(6)=\dfrac{10!}{(6!)(4!)}(0.5)^6(0.5)^4=0.205078125\nonumber$ Using the table, $P(6)=P(X≤6)−P(X≤5)=0.8281−0.6230=0.2051\nonumber$
2. The student must guess correctly on at least $60\%$ of the questions, which is $(0.60)\cdot (10)=6$ questions. The probability sought is not $P(6)$ (an easy mistake to make), but $P(X≥6)=P(6)+P(7)+P(8)+P(9)+P(10)\nonumber$ Instead of computing each of these five numbers using the formula and adding them we can use the table to obtain $P(X≥6)=1−P(X≤5)=1−0.6230=0.3770\nonumber$ which is much less work and of sufficient accuracy for the situation at hand.

Example $4$

An appliance repairman services five washing machines on site each day. One-third of the service calls require installation of a particular part.

1. The repairman has only one such part on his truck today. Find the probability that the one part will be enough today, that is, that at most one washing machine he services will require installation of this particular part.
2. Find the minimum number of such parts he should take with him each day in order that the probability that he have enough for the day's service calls is at least $95\%$.
Solution

Let $X$ denote the number of service calls today on which the part is required. Then $X$ is a binomial random variable with parameters $n = 5$ and $p=1/3=0.\bar{3}$.

1. Note that the probability in question is not $P(1)$, but rather $P(X\leq 1)$. Using the cumulative distribution table, $P(X≤1)=0.4609\nonumber$
2. The answer is the smallest number $x$ such that the table entry $P(X\leq x)$ is at least $0.9500$. Since $P(X\leq 2)=0.7901$ is less than $0.95$, two parts are not enough. Since $P(X\leq 3)=0.9547$ is at least $0.95$, three parts will suffice at least $95\%$ of the time. Thus the minimum needed is three.

Summary

• The discrete random variable $X$ that counts the number of successes in $n$ identical, independent trials of a procedure that always results in either of two outcomes, “success” or “failure,” and in which the probability of success on each trial is the same number $p$, is called the binomial random variable with parameters $n$ and $p$.
• There is a formula for the probability that the binomial random variable with parameters $n$ and $p$ will take a particular value $x$.
• There are special formulas for the mean, variance, and standard deviation of the binomial random variable with parameters $n$ and $p$ that are much simpler than the general formulas that apply to all discrete random variables.
• Cumulative probability distribution tables, when available, facilitate computation of probabilities encountered in typical practical situations.
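The cumulative table values used in Examples $3$ and $4$ can likewise be regenerated with a few lines of code. A minimal sketch, again assuming Python 3.8 or later for math.comb (the helper name binom_cdf is ours, not a standard one):

from math import comb

def binom_cdf(x, n, p):
    # P(X <= x) = P(0) + P(1) + ... + P(x)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

n, p = 5, 1/3
for x in range(n + 1):
    print(x, round(binom_cdf(x, n, p), 4))
# 0.1317, 0.4609, 0.7901, 0.9547, 0.9959, 1.0 -- the entries used in Example 4

# smallest x with P(X <= x) >= 0.95: the minimum number of parts needed
print(min(x for x in range(n + 1) if binom_cdf(x, n, p) >= 0.95))  # 3

The search in the last line mirrors the reasoning of Example $4$ exactly: scan down the cumulative column until the entry first reaches $0.95$.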
textbooks/stats/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/04%3A_Discrete_Random_Variables/4.03%3A_The_Binomial_Distribution.txt
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang.

4.1: Random Variables

Basic

1. Classify each random variable as either discrete or continuous. 1. The number of arrivals at an emergency room between midnight and $6:00\; a.m$. 2. The weight of a box of cereal labeled “$18$ ounces.” 3. The duration of the next outgoing telephone call from a business office. 4. The number of kernels of popcorn in a $1$-pound container. 5. The number of applicants for a job.
2. Classify each random variable as either discrete or continuous. 1. The time between customers entering a checkout lane at a retail store. 2. The weight of refuse on a truck arriving at a landfill. 3. The number of passengers in a passenger vehicle on a highway at rush hour. 4. The number of clerical errors on a medical chart. 5. The number of accident-free days in one month at a factory.
3. Classify each random variable as either discrete or continuous. 1. The number of boys in a randomly selected three-child family. 2. The temperature of a cup of coffee served at a restaurant. 3. The number of no-shows for every $100$ reservations made with a commercial airline. 4. The number of vehicles owned by a randomly selected household. 5. The average amount spent on electricity each July by a randomly selected household in a certain state.
4. Classify each random variable as either discrete or continuous. 1. The number of patrons arriving at a restaurant between $5:00\; p.m$. and $6:00\; p.m$. 2. The number of new cases of influenza in a particular county in a coming month. 3. The air pressure of a tire on an automobile. 4. The amount of rain recorded at an airport one day. 5. The number of students who actually register for classes at a university next semester.
5. Identify the set of possible values for each random variable. (Make a reasonable estimate based on experience, where necessary.) 1. The number of heads in two tosses of a coin. 2. The average weight of newborn babies born in a particular county one month. 3. The amount of liquid in a $12$-ounce can of soft drink. 4. The number of games in the next World Series (best of up to seven games). 5. The number of coins that match when three coins are tossed at once.
6. Identify the set of possible values for each random variable. (Make a reasonable estimate based on experience, where necessary.) 1. The number of hearts in a five-card hand drawn from a deck of $52$ cards that contains $13$ hearts in all. 2. The number of pitches made by a starting pitcher in a major league baseball game. 3. The number of breakdowns of city buses in a large city in one week. 4. The distance a rental car rented on a daily rate is driven each day. 5. The amount of rainfall at an airport next month.

Answers

1. discrete 2. continuous 3. continuous 4. discrete 5. discrete
1. discrete 2. continuous 3. discrete 4. discrete 5. continuous
1. $\{0,1,2\}$ 2. an interval $(a,b)$ (answers vary) 3. an interval $(a,b)$ (answers vary) 4. $\{4,5,6,7\}$ 5. $\{2,3\}$

4.2: Probability Distributions for Discrete Random Variables

Basic

1. Determine whether or not the table is a valid probability distribution of a discrete random variable. Explain fully. 1. $\begin{array}{c|c c c c} x &-2 &0 &2 &4 \ \hline P(x) &0.3 &0.5 &0.2 &0.1\ \end{array}$ 2. $\begin{array}{c|c c c} x &0.5 &0.25 &0.25\ \hline P(x) &-0.4 &0.6 &0.8\ \end{array}$ 3. $\begin{array}{c|c c c c c} x &1.1 &2.5 &4.1 &4.6 &5.3\ \hline P(x) &0.16 &0.14 &0.11 &0.27 &0.22\ \end{array}$
2.
Determine whether or not the table is a valid probability distribution of a discrete random variable. Explain fully. 1. $\begin{array}{c|c c c c c} x &0 &1 &2 &3 &4\ \hline P(x) &-0.25 &0.50 &0.35 &0.10 &0.30\ \end{array}$ 2. $\begin{array}{c|c c c } x &1 &2 &3 \ \hline P(x) &0.325 &0.406 &0.164 \ \end{array}$ 3. $\begin{array}{c|c c c c c} x &25 &26 &27 &28 &29 \ \hline P(x) &0.13 &0.27 &0.28 &0.18 &0.14 \ \end{array}$ 3. A discrete random variable $X$ has the following probability distribution: $\begin{array}{c|c c c c c} x &77 &78 &79 &80 &81 \ \hline P(x) &0.15 &0.15 &0.20 &0.40 &0.10 \ \end{array}$Compute each of the following quantities. 1. $P(80)$. 2. $P(X>80)$. 3. $P(X\leq 80)$. 4. The mean $\mu$ of $X$. 5. The variance $\sigma ^2$ of $X$. 6. The standard deviation $\sigma$ of $X$. 4. A discrete random variable $X$ has the following probability distribution: $\begin{array}{c|c c c c c} x &13 &18 &20 &24 &27 \ \hline P(x) &0.22 &0.25 &0.20 &0.17 &0.16 \ \end{array}$Compute each of the following quantities. 1. $P(18)$. 2. $P(X>18)$. 3. $P(X\leq 18)$. 4. The mean $\mu$ of $X$. 5. The variance $\sigma ^2$ of $X$. 6. The standard deviation $\sigma$ of $X$. 5. If each die in a pair is “loaded” so that one comes up half as often as it should, six comes up half again as often as it should, and the probabilities of the other faces are unaltered, then the probability distribution for the sum X of the number of dots on the top faces when the two are rolled is $\begin{array}{c|c c c c c c} x &2 &3 &4 &5 &6 &7 \ \hline P(x) &\frac{1}{144} &\frac{4}{144} &\frac{8}{144} &\frac{12}{144} &\frac{16}{144} &\frac{22}{144}\ \end{array}$ $\begin{array}{c|c c c c c } x &8 &9 &10 &11 &12 \ \hline P(x) &\frac{24}{144} &\frac{20}{144} &\frac{16}{144} &\frac{12}{144} &\frac{9}{144} \ \end{array}$Compute each of the following. 1. $P(5\leq X\leq 9)$. 2. $P(X\geq 7)$. 3. The mean $\mu$ of $X$. (For fair dice this number is $7$). 4. The standard deviation $\sigma$ of $X$. (For fair dice this number is about $2.415$). Applications 1. Borachio works in an automotive tire factory. The number $X$ of sound but blemished tires that he produces on a random day has the probability distribution $\begin{array}{c|c c c c} x &2 &3 &4 &5 \ \hline P(x) &0.48 &0.36 &0.12 &0.04\ \end{array}$ 1. Find the probability that Borachio will produce more than three blemished tires tomorrow. 2. Find the probability that Borachio will produce at most two blemished tires tomorrow. 3. Compute the mean and standard deviation of $X$. Interpret the mean in the context of the problem. 2. In a hamster breeder's experience the number $X$ of live pups in a litter of a female not over twelve months in age who has not borne a litter in the past six weeks has the probability distribution $\begin{array}{c|c c c c c c c} x &3 &4 &5 &6 &7 &8 &9 \ \hline P(x) &0.04 &0.10 &0.26 &0.31 &0.22 &0.05 &0.02\ \end{array}$ 1. Find the probability that the next litter will produce five to seven live pups. 2. Find the probability that the next litter will produce at least six live pups. 3. Compute the mean and standard deviation of $X$. Interpret the mean in the context of the problem. 3. The number $X$ of days in the summer months that a construction crew cannot work because of the weather has the probability distribution $\begin{array}{c|c c c c c} x &6 &7 &8 &9 &10\ \hline P(x) &0.03 &0.08 &0.15 &0.20 &0.19 \ \end{array}$ $\begin{array}{c|c c c c } x &11 &12 &13 &14 \ \hline P(x) &0.16 &0.10 &0.07 &0.02 \ \end{array}$ 1. 
Find the probability that no more than ten days will be lost next summer. 2. Find the probability that from $8$ to $12$ days will be lost next summer. 3. Find the probability that no days at all will be lost next summer. 4. Compute the mean and standard deviation of $X$. Interpret the mean in the context of the problem. 4. Let $X$ denote the number of boys in a randomly selected three-child family. Assuming that boys and girls are equally likely, construct the probability distribution of $X$. 5. Let $X$ denote the number of times a fair coin lands heads in three tosses. Construct the probability distribution of $X$. 6. Five thousand lottery tickets are sold for $\1$ each. One ticket will win $\1,000$, two tickets will win $\500$ each, and ten tickets will win $\100$ each. Let $X$ denote the net gain from the purchase of a randomly selected ticket. 1. Construct the probability distribution of $X$. 2. Compute the expected value $E(X)$ of $X$. Interpret its meaning. 3. Compute the standard deviation $\sigma$ of $X$. 7. Seven thousand lottery tickets are sold for $\5$ each. One ticket will win $\2,000$, two tickets will win $\750$ each, and five tickets will win $\100$ each. Let $X$ denote the net gain from the purchase of a randomly selected ticket. 1. Construct the probability distribution of $X$. 2. Compute the expected value $E(X)$ of $X$. Interpret its meaning. 3. Compute the standard deviation $\sigma$ of $X$. 8. An insurance company will sell a $\90,000$ one-year term life insurance policy to an individual in a particular risk group for a premium of $\478$. Find the expected value to the company of a single policy if a person in this risk group has a $99.62\%$ chance of surviving one year. 9. An insurance company will sell a $\10,000$ one-year term life insurance policy to an individual in a particular risk group for a premium of $\368$. Find the expected value to the company of a single policy if a person in this risk group has a $97.25\%$ chance of surviving one year. 10. An insurance company estimates that the probability that an individual in a particular risk group will survive one year is $0.9825$. Such a person wishes to buy a $\150,000$ one-year term life insurance policy. Let $C$ denote how much the insurance company charges such a person for such a policy. 1. Construct the probability distribution of $X$. (Two entries in the table will contain $C$). 2. Compute the expected value $E(X)$ of $X$. 3. Determine the value $C$ must have in order for the company to break even on all such policies (that is, to average a net gain of zero per policy on such policies). 4. Determine the value $C$ must have in order for the company to average a net gain of $\250$ per policy on all such policies. 11. An insurance company estimates that the probability that an individual in a particular risk group will survive one year is $0.99$. Such a person wishes to buy a $\75,000$ one-year term life insurance policy. Let $C$ denote how much the insurance company charges such a person for such a policy. 1. Construct the probability distribution of $X$. (Two entries in the table will contain $C$). 2. Compute the expected value $E(X)$ of $X$. 3. Determine the value $C$ must have in order for the company to break even on all such policies (that is, to average a net gain of zero per policy on such policies). 4. Determine the value $C$ must have in order for the company to average a net gain of $\150$ per policy on all such policies. 12. A roulette wheel has $38$ slots. 
Thirty-six slots are numbered from $1$ to $36$; half of them are red and half are black. The remaining two slots are numbered $0$ and $00$ and are green. In a $\1$ bet on red, the bettor pays $\1$ to play. If the ball lands in a red slot, he receives back the dollar he bet plus an additional dollar. If the ball does not land on red he loses his dollar. Let $X$ denote the net gain to the bettor on one play of the game. 1. Construct the probability distribution of $X$. 2. Compute the expected value $E(X)$ of $X$, and interpret its meaning in the context of the problem. 3. Compute the standard deviation of $X$. 13. A roulette wheel has $38$ slots. Thirty-six slots are numbered from $1$ to $36$; the remaining two slots are numbered $0$ and $00$. Suppose the “number” $00$ is considered not to be even, but the number $0$ is still even. In a $\1$ bet on even, the bettor pays $\1$ to play. If the ball lands in an even numbered slot, he receives back the dollar he bet plus an additional dollar. If the ball does not land on an even numbered slot, he loses his dollar. Let $X$ denote the net gain to the bettor on one play of the game. 1. Construct the probability distribution of $X$. 2. Compute the expected value $E(X)$ of $X$, and explain why this game is not offered in a casino (where 0 is not considered even). 3. Compute the standard deviation of $X$. 14. The time, to the nearest whole minute, that a city bus takes to go from one end of its route to the other has the probability distribution shown. As sometimes happens with probabilities computed as empirical relative frequencies, probabilities in the table add up only to a value other than $1.00$ because of round-off error. $\begin{array}{c|c c c c c c} x &42 &43 &44 &45 &46 &47 \ \hline P(x) &0.10 &0.23 &0.34 &0.25 &0.05 &0.02\ \end{array}$ 1. Find the average time the bus takes to drive the length of its route. 2. Find the standard deviation of the length of time the bus takes to drive the length of its route. 15. Tybalt receives in the mail an offer to enter a national sweepstakes. The prizes and chances of winning are listed in the offer as: $\5$ million, one chance in $65$ million; $\150,000$, one chance in $6.5$ million; $\5,000$, one chance in $650,000$; and $\1,000$, one chance in $65,000$. If it costs Tybalt $44$ cents to mail his entry, what is the expected value of the sweepstakes to him? Additional Exercises 1. The number $X$ of nails in a randomly selected $1$-pound box has the probability distribution shown. Find the average number of nails per pound. $\begin{array}{c|c c c } x &100 &101 &102 \ \hline P(x) &0.01 &0.96 &0.03 \ \end{array}$ 2. Three fair dice are rolled at once. Let $X$ denote the number of dice that land with the same number of dots on top as at least one other die. The probability distribution for $X$ is $\begin{array}{c|c c c } x &0 &u &3 \ \hline P(x) &p &\frac{15}{36} &\frac{1}{36} \ \end{array}$ 1. Find the missing value $u$ of $X$. 2. Find the missing probability $p$. 3. Compute the mean of $X$. 4. Compute the standard deviation of $X$. 3. Two fair dice are rolled at once. Let $X$ denote the difference in the number of dots that appear on the top faces of the two dice. Thus for example if a one and a five are rolled, $X=4$, and if two sixes are rolled, $X=0$. 1. Construct the probability distribution for $X$. 2. Compute the mean $\mu$ of $X$. 3. Compute the standard deviation $\sigma$ of $X$. 4. 
A fair coin is tossed repeatedly until either it lands heads or a total of five tosses have been made, whichever comes first. Let $X$ denote the number of tosses made. 1. Construct the probability distribution for $X$. 2. Compute the mean $\mu$ of $X$. 3. Compute the standard deviation $\sigma$ of $X$.
5. A manufacturer receives a certain component from a supplier in shipments of $100$ units. Two units in each shipment are selected at random and tested. If either one of the units is defective the shipment is rejected. Suppose a shipment has $5$ defective units. 1. Construct the probability distribution for the number $X$ of defective units in such a sample. (A tree diagram is helpful). 2. Find the probability that such a shipment will be accepted.
6. Shylock enters a local branch bank at $4:30\; p.m$. every payday, at which time there are always two tellers on duty. The number $X$ of customers in the bank who are either at a teller window or are waiting in a single line for the next available teller has the following probability distribution. $\begin{array}{c|c c c c} x &0 &1 &2 &3 \ \hline P(x) &0.135 &0.192 &0.284 &0.230 \ \end{array}$ $\begin{array}{c|c c c } x &4 &5 &6 \ \hline P(x) &0.103 &0.051 &0.005 \ \end{array}$ 1. What number of customers does Shylock most often see in the bank the moment he enters? 2. What number of customers waiting in line does Shylock most often see the moment he enters? 3. What is the average number of customers who are waiting in line the moment Shylock enters?
7. The owner of a proposed outdoor theater must decide whether to include a cover that will allow shows to be performed in all weather conditions. Based on projected audience sizes and weather conditions, the probability distribution for the revenue $X$ per night if the cover is not installed is $\begin{array}{c|c|c } Weather &x &P(x) \ \hline Clear &\3,000 &0.61 \ Threatening &\2,800 &0.17 \ Light Rain &\1,975 &0.11 \ Show-cancelling\; rain &\0 &0.11 \ \end{array}$ The additional cost of the cover is \$410,000. The owner will have it built if this cost can be recovered from the increased revenue the cover affords in the first ten 90-night seasons. 1. Compute the mean revenue per night if the cover is not installed. 2. Use the answer to (a) to compute the projected total revenue per $90$-night season if the cover is not installed. 3. Compute the projected total revenue per season when the cover is in place. To do so assume that if the cover were in place the revenue each night of the season would be the same as the revenue on a clear night. 4. Using the answers to (b) and (c), decide whether or not the additional cost of the installation of the cover will be recovered from the increased revenue over the first ten years. Will the owner have the cover installed?

Answers

1. no: the sum of the probabilities exceeds $1$ 2. no: a negative probability 3. no: the sum of the probabilities is less than $1$
1. $0.4$ 2. $0.1$ 3. $0.9$ 4. $79.15$ 5. $\sigma ^2=1.5275$ 6. $\sigma =1.2359$
1. $0.6528$ 2. $0.7153$ 3. $\mu =7.8333$ 4. $\sigma ^2=5.4866$ 5. $\sigma =2.3424$
1. $0.79$ 2. $0.60$ 3. $\mu =5.8$, $\sigma =1.2570$
1. $\begin{array}{c|c c c c} x &0 &1 &2 &3 \ \hline P(x) &1/8 &3/8 &3/8 &1/8\ \end{array}$
1. $\begin{array}{c|c c c c} x &-1 &999 &499 &99 \ \hline P(x) &\frac{4987}{5000} &\frac{1}{5000} &\frac{2}{5000} &\frac{10}{5000}\ \end{array}$ 2. $-0.4$ 3. $17.8785$
2. $136$
1. $\begin{array}{c|c c } x &C &C-150,000 \ \hline P(x) &0.9825 &0.0175 \ \end{array}$ 2. $C-2625$ 3. $C \geq 2625$
4.
$C \geq 2875$
1. $\begin{array}{c|c c } x &-1 &1 \ \hline P(x) &\frac{20}{38} &\frac{18}{38} \ \end{array}$ 2. $E(X)=-0.0526$. In many bets the bettor sustains an average loss of about $5.26$ cents per bet. 3. $0.9986$
1. $43.54$ 2. $1.2046$ 3. $101.02$
1. $\begin{array}{c|c c c c c c} x &0 &1 &2 &3 &4 &5 \ \hline P(x) &\frac{6}{36} &\frac{10}{36} &\frac{8}{36} &\frac{6}{36} &\frac{4}{36} &\frac{2}{36} \ \end{array}$ 2. $1.9444$ 3. $1.4326$
1. $\begin{array}{c|c c c } x &0 &1 &2 \ \hline P(x) &0.902 &0.096 &0.002 \ \end{array}$ 2. $0.902$
1. $2523.25$ 2. $227,092.5$ 3. $270,000$ 4. The owner will install the cover.

4.3: The Binomial Distribution

Basic

1. Determine whether or not the random variable $X$ is a binomial random variable. If so, give the values of $n$ and $p$. If not, explain why not. 1. $X$ is the number of dots on the top face of a fair die that is rolled. 2. $X$ is the number of hearts in a five-card hand drawn (without replacement) from a well-shuffled ordinary deck. 3. $X$ is the number of defective parts in a sample of ten randomly selected parts coming from a manufacturing process in which $0.02\%$ of all parts are defective. 4. $X$ is the number of times the number of dots on the top face of a fair die is even in six rolls of the die. 5. $X$ is the number of dice that show an even number of dots on the top face when six dice are rolled at once.
2. Determine whether or not the random variable $X$ is a binomial random variable. If so, give the values of $n$ and $p$. If not, explain why not. 1. $X$ is the number of black marbles in a sample of $5$ marbles drawn randomly and without replacement from a box that contains $25$ white marbles and $15$ black marbles. 2. $X$ is the number of black marbles in a sample of $5$ marbles drawn randomly and with replacement from a box that contains $25$ white marbles and $15$ black marbles. 3. $X$ is the number of voters in favor of a proposed law in a sample of $1,200$ randomly selected voters drawn from the entire electorate of a country in which $35\%$ of the voters favor the law. 4. $X$ is the number of fish of a particular species, among the next ten landed by a commercial fishing boat, that are more than $13$ inches in length, when $17\%$ of all such fish exceed $13$ inches in length. 5. $X$ is the number of coins that match at least one other coin when four coins are tossed at once.
3. $X$ is a binomial random variable with parameters $n=12$ and $p=0.82$. Compute the probability indicated. 1. $P(11)$ 2. $P(9)$ 3. $P(0)$ 4. $P(13)$
4. $X$ is a binomial random variable with parameters $n=16$ and $p=0.74$. Compute the probability indicated. 1. $P(14)$ 2. $P(4)$ 3. $P(0)$ 4. $P(20)$
5. $X$ is a binomial random variable with parameters $n=5$, $p=0.5$. Use the tables of cumulative binomial probabilities to compute the probability indicated. 1. $P(X \leq 3)$ 2. $P(X \geq 3)$ 3. $P(3)$ 4. $P(0)$ 5. $P(5)$
6. $X$ is a binomial random variable with parameters $n=5$, $p=0.\bar{3}$. Use the tables of cumulative binomial probabilities to compute the probability indicated. 1. $P(X \leq 2)$ 2. $P(X \geq 2)$ 3. $P(2)$ 4. $P(0)$ 5. $P(5)$
7. $X$ is a binomial random variable with the parameters shown. Use the tables of cumulative binomial probabilities to compute the probability indicated. 1. $n = 10, p = 0.25, P(X \leq 6)$ 2. $n = 10, p = 0.75, P(X \leq 6)$ 3. $n = 15, p = 0.75, P(X \leq 6)$ 4. $n = 15, p = 0.75, P(12)$ 5. $n = 15, p=0.\bar{6}, P(10\leq X\leq 12)$
8.
$X$ is a binomial random variable with the parameters shown. Use the tables of cumulative binomial probabilities to compute the probability indicated. 1. $n = 5, p = 0.05, P(X \leq 1)$ 2. $n = 5, p = 0.5, P(X \leq 1)$ 3. $n = 10, p = 0.75, P(X \leq 5)$ 4. $n = 10, p = 0.75, P(12)$ 5. $n = 10, p=0.\bar{6}, P(5\leq X\leq 8)$
9. $X$ is a binomial random variable with the parameters shown. Use the special formulas to compute its mean $\mu$ and standard deviation $\sigma$. 1. $n = 8, p = 0.43$ 2. $n = 47, p = 0.82$ 3. $n = 1200, p = 0.44$ 4. $n = 2100, p = 0.62$
10. $X$ is a binomial random variable with the parameters shown. Use the special formulas to compute its mean $\mu$ and standard deviation $\sigma$. 1. $n = 14, p = 0.55$ 2. $n = 83, p = 0.05$ 3. $n = 957, p = 0.35$ 4. $n = 1750, p = 0.79$
11. $X$ is a binomial random variable with the parameters shown. Compute its mean $\mu$ and standard deviation $\sigma$ in two ways, first using the tables of cumulative binomial probabilities in conjunction with the general formulas $\mu =\sum xP(x)$ and $\sigma =\sqrt{\left [ \sum x^2P(x) \right ]-\mu ^2}$, then using the special formulas $\mu =np$ and $\sigma =\sqrt{npq}$. 1. $n = 5, p=0.\bar{3}$ 2. $n = 10, p = 0.75$
12. $X$ is a binomial random variable with the parameters shown. Compute its mean $\mu$ and standard deviation $\sigma$ in two ways, first using the tables of cumulative binomial probabilities in conjunction with the general formulas $\mu =\sum xP(x)$ and $\sigma =\sqrt{\left [ \sum x^2P(x) \right ]-\mu ^2}$, then using the special formulas $\mu =np$ and $\sigma =\sqrt{npq}$. 1. $n = 10, p = 0.25$ 2. $n = 15, p = 0.1$
13. $X$ is a binomial random variable with parameters $n=10$ and $p=1/3$. Use the tabulated cumulative probability distribution for $X$ to construct the probability distribution of $X$.
14. $X$ is a binomial random variable with parameters $n=15$ and $p=1/2$. Use the tabulated cumulative probability distribution for $X$ to construct the probability distribution of $X$.
15. In a certain board game a player's turn begins with three rolls of a pair of dice. If the player rolls doubles all three times there is a penalty. The probability of rolling doubles in a single roll of a pair of fair dice is $1/6$. Find the probability of rolling doubles all three times.
16. A coin is bent so that the probability that it lands heads up is $2/3$. The coin is tossed ten times. 1. Find the probability that it lands heads up at most five times. 2. Find the probability that it lands heads up more times than it lands tails up.

Applications

1. An English-speaking tourist visits a country in which $30\%$ of the population speaks English. He needs to ask someone directions. 1. Find the probability that the first person he encounters will be able to speak English. 2. The tourist sees four local people standing at a bus stop. Find the probability that at least one of them will be able to speak English.
2. The probability that an egg in a retail package is cracked or broken is $0.025$. 1. Find the probability that a carton of one dozen eggs contains no eggs that are either cracked or broken. 2. Find the probability that a carton of one dozen eggs has (i) at least one that is either cracked or broken; (ii) at least two that are cracked or broken. 3. Find the average number of cracked or broken eggs in one dozen cartons.
3.
An appliance store sells $20$ refrigerators each week. Ten percent of all purchasers of a refrigerator buy an extended warranty. Let $X$ denote the number of the next $20$ purchasers who do so. 1. Verify that $X$ satisfies the conditions for a binomial random variable, and find $n$ and $p$. 2. Find the probability that $X$ is zero. 3. Find the probability that $X$ is two, three, or four. 4. Find the probability that $X$ is at least five.
4. Adverse growing conditions have caused $5\%$ of grapefruit grown in a certain region to be of inferior quality. Grapefruit are sold by the dozen. 1. Find the average number of inferior quality grapefruit per box of a dozen. 2. A box that contains two or more grapefruit of inferior quality will cause a strong adverse customer reaction. Find the probability that a box of one dozen grapefruit will contain two or more grapefruit of inferior quality.
5. The probability that a $7$-ounce skein of a discount worsted weight knitting yarn contains a knot is $0.25$. Goneril buys ten skeins to crochet an afghan. 1. Find the probability that (i) none of the ten skeins will contain a knot; (ii) at most one will. 2. Find the expected number of skeins that contain knots. 3. Find the most likely number of skeins that contain knots.
6. One-third of all patients who undergo a non-invasive but unpleasant medical test require a sedative. A laboratory performs $20$ such tests daily. Let $X$ denote the number of patients on any given day who require a sedative. 1. Verify that $X$ satisfies the conditions for a binomial random variable, and find $n$ and $p$. 2. Find the probability that on any given day between five and nine patients will require a sedative (include five and nine). 3. Find the average number of patients each day who require a sedative. 4. Using the cumulative probability distribution for $X$, find the minimum number $x_{min}$ of doses of the sedative that should be on hand at the start of the day so that there is a $99\%$ chance that the laboratory will not run out.
7. About $2\%$ of alumni give money upon receiving a solicitation from the college or university from which they graduated. Find the average number of monetary gifts a college can expect from every $2,000$ solicitations it sends.
8. Of all college students who are eligible to give blood, about $18\%$ do so on a regular basis. Each month a local blood bank sends an appeal to give blood to $250$ randomly selected students. Find the average number of appeals in such mailings that are made to students who already give blood.
9. About $12\%$ of all individuals write with their left hands. A class of $130$ students meets in a classroom with $130$ individual desks, exactly $14$ of which are constructed for people who write with their left hands. Find the probability that exactly $14$ of the students enrolled in the class write with their left hands.
10. A traveling salesman makes a sale on $65\%$ of his calls on regular customers. He makes four sales calls each day. 1. Construct the probability distribution of $X$, the number of sales made each day. 2. Find the probability that, on a randomly selected day, the salesman will make a sale. 3. Assuming that the salesman makes $20$ sales calls per week, find the mean and standard deviation of the number of sales made per week.
11. A corporation has advertised heavily to try to insure that over half the adult population recognizes the brand name of its products.
In a random sample of $20$ adults, $14$ recognized its brand name. What is the probability that $14$ or more people in such a sample would recognize its brand name if the actual proportion $p$ of all adults who recognize the brand name were only $0.50$? Additional Exercises 1. When dropped on a hard surface a thumbtack lands with its sharp point touching the surface with probability $2/3$; it lands with its sharp point directed up into the air with probability $1/3$. The tack is dropped and its landing position observed $15$ times. 1. Find the probability that it lands with its point in the air at least $7$ times. 2. If the experiment of dropping the tack $15$ times is done repeatedly, what is the average number of times it lands with its point in the air? 2. A professional proofreader has a $98\%$ chance of detecting an error in a piece of written work (other than misspellings, double words, and similar errors that are machine detected). A work contains four errors. 1. Find the probability that the proofreader will miss at least one of them. 2. Show that two such proofreaders working independently have a $99.96\%$ chance of detecting an error in a piece of written work. 3. Find the probability that two such proofreaders working independently will miss at least one error in a work that contains four errors. 3. A multiple choice exam has $20$ questions; there are four choices for each question. 1. A student guesses the answer to every question. Find the chance that he guesses correctly between four and seven times. 2. Find the minimum score the instructor can set so that the probability that a student will pass just by guessing is $20\%$ or less. 4. In spite of the requirement that all dogs boarded in a kennel be inoculated, the chance that a healthy dog boarded in a clean, well-ventilated kennel will develop kennel cough from a carrier is $0.008$. 1. If a carrier (not known to be such, of course) is boarded with three other dogs, what is the probability that at least one of the three healthy dogs will develop kennel cough? 2. If a carrier is boarded with four other dogs, what is the probability that at least one of the four healthy dogs will develop kennel cough? 3. The pattern evident from parts (a) and (b) is that if $K+1$ dogs are boarded together, one a carrier and $K$ healthy dogs, then the probability that at least one of the healthy dogs will develop kennel cough is $P(X\geq 1)=1-(0.992)^K$, where $X$ is the binomial random variable that counts the number of healthy dogs that develop the condition. Experiment with different values of $K$ in this formula to find the maximum number $K+1$ of dogs that a kennel owner can board together so that if one of the dogs has the condition, the chance that another dog will be infected is less than $0.05$. 5. Investigators need to determine which of $600$ adults have a medical condition that affects $2\%$ of the adult population. A blood sample is taken from each of the individuals. 1. Show that the expected number of diseased individuals in the group of $600$ is $12$ individuals. 2. Instead of testing all $600$ blood samples to find the expected $12$ diseased individuals, investigators group the samples into $60$ groups of $10$ each, mix a little of the blood from each of the $10$ samples in each group, and test each of the $60$ mixtures. Show that the probability that any such mixture will contain the blood of at least one diseased person, hence test positive, is about $0.18$. 3. 
Based on the result in (b), show that the expected number of mixtures that test positive is about $11$. (Supposing that indeed $11$ of the $60$ mixtures test positive, then we know that none of the $490$ persons whose blood was in the remaining $49$ samples that tested negative has the disease. We have eliminated $490$ persons from our search while performing only $60$ tests.) Answers 1. not binomial; not success/failure. 2. not binomial; trials are not independent. 3. binomial; $n = 10, p = 0.0002$ 4. binomial; $n = 6, p = 0.5$ 5. binomial; $n = 6, p = 0.5$ 1. $0.2434$ 2. $0.2151$ 3. $0.18^{12}\approx 0$ 4. $0$ 1. $0.8125$ 2. $0.5000$ 3. $0.3125$ 4. $0.0313$ 5. $0.0312$ 1. $0.9965$ 2. $0.2241$ 3. $0.0042$ 4. $0.2252$ 5. $0.5390$ 1. $\mu = 3.44, \sigma = 1.4003$ 2. $\mu = 38.54, \sigma = 2.6339$ 3. $\mu = 528, \sigma = 17.1953$ 4. $\mu = 1302, \sigma = 22.2432$ 1. $\mu = 1.6667, \sigma = 1.0541$ 2. $\mu = 7.5, \sigma = 1.3693$ 1. $\begin{array}{c|c c c c} x &0 &1 &2 &3 \ \hline P(x) &0.0173 &0.0867 &0.1951 &0.2602\ \end{array}$ $\begin{array}{c|c c c c} x &4 &5 &6 &7 \ \hline P(x) &0.2276 &0.1365 &0.0569 &0.0163\ \end{array}$ $\begin{array}{c|c c c } x &8 &9 &10 \ \hline P(x) &0.0030 &0.0004 &0.0000 \ \end{array}$ 2. $0.0046$ 1. $0.3$ 2. $0.7599$ 1. $n = 20, p = 0.1$ 2. $0.1216$ 3. $0.5651$ 4. $0.0432$ 1. $0.0563$ and $0.2440$ 2. $2.5$ 3. $2$ 3. $40$ 4. $0.1019$ 5. $0.0577$ 1. $0.0776$ 2. $0.9996$ 3. $0.0016$ 1. $0.0238$ 2. $0.0316$ 3. $6$ • Anonymous
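For readers who prefer software to tables, spot-checks of answers like those above take only a few lines. As one hedged illustration (ours, not part of the exercise set; Python 3.8 or later is assumed for math.comb), the following verifies the first two answers for the exercise with $n=12$ and $p=0.82$, together with the proofreader probability $1-0.98^4$:

from math import comb

def binom_pmf(x, n, p):
    # P(x) = C(n, x) * p^x * (1 - p)^(n - x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(round(binom_pmf(11, 12, 0.82), 4))  # 0.2434
print(round(binom_pmf(9, 12, 0.82), 4))   # 0.2151
print(round(1 - 0.98**4, 4))              # 0.0776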
textbooks/stats/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/04%3A_Discrete_Random_Variables/4.E%3A_Discrete_Random_Variables_%28Exercises%29.txt
A random variable is called continuous if its set of possible values contains a whole interval of decimal numbers. In this chapter we investigate such random variables.

• 5.1: Continuous Random Variables For a discrete random variable X the probability that X assumes one of its possible values on a single trial of the experiment makes good sense. This is not the case for a continuous random variable. With continuous random variables one is concerned not with the event that the variable assumes a single particular value, but with the event that the random variable assumes a value in a particular interval.
• 5.2: The Standard Normal Distribution A standard normal random variable $Z$ is a normally distributed random variable with mean $\mu =0$ and standard deviation $\sigma =1$.
• 5.3: Probability Computations for General Normal Random Variables Probabilities for a general normal random variable are computed after converting $x$-values to $z$-scores.
• 5.4: Areas of Tails of Distributions The left tail of a density curve y=f(x) of a continuous random variable X cut off by a value x* of X is the region under the curve that is to the left of x*. The right tail cut off by x* is defined similarly.
• 5.E: Continuous Random Variables (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang.

05: Continuous Random Variables

Learning Objectives • To learn the concept of the probability distribution of a continuous random variable, and how it is used to compute probabilities. • To learn basic facts about the family of normally distributed random variables.

The Probability Distribution of a Continuous Random Variable

For a discrete random variable $X$ the probability that $X$ assumes one of its possible values on a single trial of the experiment makes good sense. This is not the case for a continuous random variable. For example, suppose $X$ denotes the length of time a commuter just arriving at a bus stop has to wait for the next bus. If buses run every $30$ minutes without fail, then the set of possible values of $X$ is the interval denoted $\left [ 0,30 \right ]$, the set of all decimal numbers between $0$ and $30$. But although the number $7.211916$ is a possible value of $X$, there is little or no meaning to the concept of the probability that the commuter will wait precisely $7.211916$ minutes for the next bus. If anything the probability should be zero, since if we could meaningfully measure the waiting time to the nearest millionth of a minute it is practically inconceivable that we would ever get exactly $7.211916$ minutes. More meaningful questions are those of the form: What is the probability that the commuter's waiting time is less than $10$ minutes, or is between $5$ and $10$ minutes? In other words, with continuous random variables one is concerned not with the event that the variable assumes a single particular value, but with the event that the random variable assumes a value in a particular interval.
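The contrast between a single value and an interval is easy to see in a simulation. The sketch below (a hypothetical illustration of ours; the seed and the sample size are arbitrary choices) draws a million uniform waiting times from the bus-stop scenario just described:

import random

random.seed(1)  # fixed seed so the run is reproducible
N = 1_000_000
waits = [random.uniform(0, 30) for _ in range(N)]

# A single exact value essentially never occurs ...
print(sum(w == 7.211916 for w in waits) / N)  # 0.0

# ... but an interval has substantial probability: here 5/30, or about 0.1667
print(sum(5 < w < 10 for w in waits) / N)     # approximately 0.1667

No simulated commuter ever waits exactly $7.211916$ minutes, while about one sixth of them wait between $5$ and $10$ minutes — precisely the behavior the definition below formalizes.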
Definition: density function The probability distribution of a continuous random variable $X$ is an assignment of probabilities to intervals of decimal numbers using a function $f(x)$, called a density function, in the following way: the probability that $X$ assumes a value in the interval $\left [ a,b\right ]$ is equal to the area of the region that is bounded above by the graph of the equation $y=f(x)$, bounded below by the x-axis, and bounded on the left and right by the vertical lines through $a$ and $b$, as illustrated in Figure $1$. This definition can be understood as a natural outgrowth of the discussion in Section 2.1.3. There we saw that if we have in view a population (or a very large sample) and make measurements with greater and greater precision, then as the bars in the relative frequency histogram become exceedingly fine their vertical sides merge and disappear, and what is left is just the curve formed by their tops, as shown in Figure 2.1.5. Moreover the total area under the curve is $1$, and the proportion of the population with measurements between two numbers $a$ and $b$ is the area under the curve and between $a$ and $b$, as shown in Figure 2.1.6. If we think of $X$ as a measurement to infinite precision arising from the selection of any one member of the population at random, then $P(a<X<b)$is simply the proportion of the population with measurements between $a$ and $b$, the curve in the relative frequency histogram is the density function for $X$, and we arrive at the definition just above. • Every density function $f(x)$ must satisfy the following two conditions: • For all numbers $x$, $f(x)\geq 0$, so that the graph of $y=f(x)$ never drops below the x-axis. • The area of the region under the graph of $y=f(x)$ and above the $x$-axis is $1$. Because the area of a line segment is $0$, the definition of the probability distribution of a continuous random variable implies that for any particular decimal number, say $a$, the probability that $X$ assumes the exact value a is $0$. This property implies that whether or not the endpoints of an interval are included makes no difference concerning the probability of the interval. For any continuous random variable $X$: $P(a\leq X\leq b)=P(a<X\leq b)=P(a\leq X<b)=P(a<X<b) \nonumber$ Example $1$ A random variable $X$ has the uniform distribution on the interval $\left [ 0,1\right ]$: the density function is $f(x)=1$ if $x$ is between $0$ and $1$ and $f(x)=0$ for all other values of $x$, as shown in Figure $2$. 1. Find $P(X > 0.75)$, the probability that $X$ assumes a value greater than $0.75$. 2. Find $P(X \leq 0.2)$, the probability that $X$ assumes a value less than or equal to $0.2$. 3. Find $P(0.4 < X < 0.7)$, the probability that $X$ assumes a value between $0.4$ and $0.7$. Solution 1. $P(X > 0.75)$ is the area of the rectangle of height $1$ and base length $1-0.75=0.25$, hence is $base\times height=(0.25)\cdot (1)=0.25$. See Figure $\PageIndex{3a}$. 2. $P(X \leq 0.2)$ is the area of the rectangle of height $1$ and base length $0.2-0=0.2$, hence is $base\times height=(0.2)\cdot (1)=0.2$. See Figure $\PageIndex{3b}$. 3. $P(0.4 < X < 0.7)$ is the area of the rectangle of height $1$ and length $0.7-0.4=0.3$, hence is $base\times height=(0.3)\cdot (1)=0.3$. See Figure $\PageIndex{3c}$. Example $2$ A man arrives at a bus stop at a random time (that is, with no regard for the scheduled service) to catch the next bus. 
Buses run every $30$ minutes without fail, hence the next bus will come any time during the next $30$ minutes with evenly distributed probability (a uniform distribution). Find the probability that a bus will come within the next $10$ minutes. Solution The graph of the density function is a horizontal line above the interval from $0$ to $30$ and is the $x$-axis everywhere else. Since the total area under the curve must be $1$, the height of the horizontal line is $1/30$ (Figure $4$). The probability sought is $P(0\leq X\leq 10)$.By definition, this probability is the area of the rectangular region bounded above by the horizontal line $f(x)=1/30$, bounded below by the $x$-axis, bounded on the left by the vertical line at $0$ (the $y$-axis), and bounded on the right by the vertical line at $10$. This is the shaded region in Figure $4$. Its area is the base of the rectangle times its height, $(10)\cdot (1/30)=1/3$. Thus $P(0\leq X\leq 10)=1/3$. Normal Distributions Most people have heard of the “bell curve.” It is the graph of a specific density function $f(x)$ that describes the behavior of continuous random variables as different as the heights of human beings, the amount of a product in a container that was filled by a high-speed packing machine, or the velocities of molecules in a gas. The formula for $f(x)$ contains two parameters $\mu$ and $\sigma$ that can be assigned any specific numerical values, so long as $\sigma$ is positive. We will not need to know the formula for $f(x)$, but for those who are interested it is $f(x)=\frac{1}{\sqrt{2\pi \sigma ^2}}e^{-\frac{1}{2}(\mu -x)^2/\sigma ^2} \nonumber$ where $\pi \approx 3.14159$ and $e\approx 2.71828$ is the base of the natural logarithms. Each different choice of specific numerical values for the pair $\mu$ and $\sigma$ gives a different bell curve. The value of $\mu$ determines the location of the curve, as shown in Figure $5$. In each case the curve is symmetric about $\mu$. The value of $\sigma$ determines whether the bell curve is tall and thin or short and squat, subject always to the condition that the total area under the curve be equal to $1$. This is shown in Figure $6$, where we have arbitrarily chosen to center the curves at $\mu=6$. Definition: normal distribution The probability distribution corresponding to the density function for the bell curve with parameters $\mu$ and $\sigma$ is called the normal distribution with mean $\mu$ and standard deviation $\sigma$. Definition: normally distributed random variable A continuous random variable whose probabilities are described by the normal distribution with mean $\mu$ and standard deviation $\sigma$ is called a normally distributed random variable, or a normal random variable for short, with mean $\mu$ and standard deviation $\sigma$. Figure $7$ shows the density function that determines the normal distribution with mean $\mu$ and standard deviation $\sigma$. We repeat an important fact about this curve: The density curve for the normal distribution is symmetric about the mean. Example $3$ Heights of $25$-year-old men in a certain region have mean $69.75$ inches and standard deviation $2.59$ inches. These heights are approximately normally distributed. Thus the height $X$ of a randomly selected $25$-year-old man is a normal random variable with mean $\mu = 69.75$ and standard deviation $\sigma = 2.59$. Sketch a qualitatively accurate graph of the density function for $X$. Find the probability that a randomly selected $25$-year-old man is more than $69.75$ inches tall. 
Solution The distribution of heights looks like the bell curve in Figure $8$. The important point is that it is centered at its mean, $69.75$, and is symmetric about the mean. Since the total area under the curve is $1$, by symmetry the area to the right of $69.75$ is half the total, or $0.5$. But this area is precisely the probability $P(X > 69.75)$, the probability that a randomly selected $25$-year-old man is more than $69.75$ inches tall. We will learn how to compute other probabilities in the next two sections. Key Takeaway • For a continuous random variable $X$ the only probabilities that are computed are those of $X$ taking a value in a specified interval. • The probability that $X$ take a value in a particular interval is the same whether or not the endpoints of the interval are included. • The probability $P(a<X<b)$, that $X$ take a value in the interval from $a$ to $b$, is the area of the region between the vertical lines through $a$ and $b$, above the $x$-axis, and below the graph of a function $f(x)$ called the density function. • A normally distributed random variable is one whose density function is a bell curve. • Every bell curve is symmetric about its mean and lies everywhere above the $x$-axis, which it approaches asymptotically (arbitrarily closely without touching).
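Two facts used in this section — that the total area under a density curve is $1$, and that the bell curve is symmetric about its mean — can be checked numerically from the density formula quoted earlier. A rough midpoint-rule sketch (our own illustration; the grid spacing dx is an arbitrary choice), using the parameters of the heights example:

from math import exp, pi, sqrt

mu, sigma = 69.75, 2.59  # the heights of 25-year-old men from Example 3

def f(x):
    # the bell-curve density with parameters mu and sigma, as given earlier
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / sqrt(2 * pi * sigma ** 2)

dx = 0.001
xs = [mu - 10 * sigma + (i + 0.5) * dx for i in range(int(20 * sigma / dx))]
print(round(sum(f(x) * dx for x in xs), 4))            # 1.0: total area under the curve
print(round(sum(f(x) * dx for x in xs if x > mu), 4))  # 0.5: area to the right of the mean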
textbooks/stats/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/05%3A_Continuous_Random_Variables/5.01%3A_Continuous_Random_Variables.txt
Learning Objectives

• To learn what a standard normal random variable is.
• To learn how to compute probabilities related to a standard normal random variable.

Definition: standard normal random variable

A standard normal random variable is a normally distributed random variable with mean $\mu =0$ and standard deviation $\sigma =1$. It will always be denoted by the letter $Z$.

The density function for a standard normal random variable is shown in Figure $1$. To compute probabilities for $Z$ we will not work with its density function directly but instead read probabilities out of Figure $2$. The tables are tables of cumulative probabilities; their entries are probabilities of the form $P(Z< z)$. The use of the tables will be explained by the following series of examples.

Example $1$

Find the probabilities indicated, where as always $Z$ denotes a standard normal random variable. 1. $P(Z< 1.48)$. 2. $P(Z< -0.25)$.

Solution

1. Figure $3$ shows how this probability is read directly from the table without any computation required. The digits in the ones and tenths places of $1.48$, namely $1.4$, are used to select the appropriate row of the table; the hundredths part of $1.48$, namely $0.08$, is used to select the appropriate column of the table. The four decimal place number in the interior of the table that lies in the intersection of the row and column selected, $0.9306$, is the probability sought: $P(Z< 1.48)=0.9306 \nonumber$
2. The minus sign in $-0.25$ makes no difference in the procedure; the table is used in exactly the same way as in part (a): the probability sought is the number that is in the intersection of the row with heading $-0.2$ and the column with heading $0.05$, the number $0.4013$. Thus $P(Z< -0.25)=0.4013$.

Example $2$

Find the probabilities indicated. 1. $P(Z> 1.60)$. 2. $P(Z> -1.02)$.

Solution

1. Because the events $Z> 1.60$ and $Z\leq 1.60$ are complements, the Probability Rule for Complements implies that $P(Z> 1.60)=1-P(Z\leq 1.60) \nonumber$ Since inclusion of the endpoint makes no difference for the continuous random variable $Z$, $P(Z\leq 1.60)=P(Z< 1.60)$, which we know how to find from the table in Figure $2$. The number in the row with heading $1.6$ and in the column with heading $0.00$ is $0.9452$. Thus $P(Z< 1.60)=0.9452$ so $P(Z> 1.60)=1-P(Z\leq 1.60)=1-0.9452=0.0548 \nonumber$ Figure $4$ illustrates the ideas geometrically. Since the total area under the curve is $1$ and the area of the region to the left of $1.60$ is (from the table) $0.9452$, the area of the region to the right of $1.60$ must be $1-0.9452=0.0548$.
2. The minus sign in $-1.02$ makes no difference in the procedure; the table is used in exactly the same way as in part (a). The number in the intersection of the row with heading $-1.0$ and the column with heading $0.02$ is $0.1539$. This means that $P(Z<-1.02)=P(Z\leq -1.02)=0.1539$. Hence $P(Z>-1.02)=1-P(Z\leq -1.02)=1-0.1539=0.8461 \nonumber$

Example $3$

Find the probabilities indicated. 1. $P(0.5<Z<1.57)$. 2. $P(-2.55<Z<0.09)$.

Solution

1. Figure $5$ illustrates the ideas involved for intervals of this type. First look up the areas in the table that correspond to the numbers $0.5$ (which we think of as $0.50$ to use the table) and $1.57$. We obtain $0.6915$ and $0.9418$, respectively. From the figure it is apparent that we must take the difference of these two numbers to obtain the probability desired. In symbols, $P(0.5<Z<1.57)=P(Z<1.57)-P(Z<0.50)=0.9418-0.6915=0.2503 \nonumber$
2.
The procedure for finding the probability that $Z$ takes a value in a finite interval whose endpoints have opposite signs is exactly the same procedure used in part (a), and is illustrated in Figure $6$ "Computing a Probability for an Interval of Finite Length". In symbols the computation is $P(-2.55<Z<0.09)=P(Z<0.09)-P(Z<-2.55)=0.5359-0.0054=0.5305 \nonumber$ The next example shows what to do if the value of $Z$ that we want to look up in the table is not present there. Example $4$ Find the probabilities indicated. 1. $P(1.13<Z<4.16)$. 2. $P(-5.22<Z<2.15)$. Solution 1. We attempt to compute the probability exactly as in Example $3$ by looking up the numbers $1.13$ and $4.16$ in the table. We obtain the value $0.8708$ for the area of the region under the density curve to the left of $1.13$ without any problem, but when we go to look up the number $4.16$ in the table, it is not there. We can see from the last row of numbers in the table that the area to the left of $4.16$ must be so close to $1$ that to four decimal places it rounds to $1.0000$. Therefore $P(1.13<Z<4.16)=1.0000-0.8708=0.1292 \nonumber$ 2. Similarly, here we can read directly from the table that the area under the density curve and to the left of $2.15$ is $0.9842$, but $-5.22$ is too far to the left on the number line to be in the table. We can see from the first line of the table that the area to the left of $-5.22$ must be so close to $0$ that to four decimal places it rounds to $0.0000$. Therefore $P(-5.22<Z<2.15)=0.9842-0.0000=0.9842 \nonumber$ The final example of this section explains the origin of the proportions given in the Empirical Rule. Example $5$ Find the probabilities indicated. 1. $P(-1<Z<1)$. 2. $P(-2<Z<2)$. 3. $P(-3<Z<3)$. Solution 1. Using the table as was done in Example $3$ we obtain $P(-1<Z<1)=0.8413-0.1587=0.6826 \nonumber$ Since $Z$ has mean $0$ and standard deviation $1$, for $Z$ to take a value between $-1$ and $1$ means that $Z$ takes a value that is within one standard deviation of the mean. Our computation shows that the probability that this happens is about $0.68$, the proportion given by the Empirical Rule for histograms that are mound shaped and symmetrical, like the bell curve. 2. Using the table in the same way, $P(-2<Z<2)=0.9772-0.0228=0.9544 \nonumber$ This corresponds to the proportion $0.95$ for data within two standard deviations of the mean. 3. Similarly, $P(-3<Z<3)=0.9987-0.0013=0.9974 \nonumber$ which corresponds to the proportion $0.997$ for data within three standard deviations of the mean. Key Takeaway • A standard normal random variable $Z$ is a normally distributed random variable with mean $\mu =0$ and standard deviation $\sigma =1$. • Probabilities for a standard normal random variable are computed using Figure $2$.
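Software can stand in for the printed cumulative table. The following sketch, which assumes Python with SciPy installed (a tool this text does not otherwise use), reproduces the probabilities of Examples 1 through 5 with the standard normal cumulative distribution function:

```python
# A minimal check of this section's table lookups using SciPy's
# standard normal CDF (assumes scipy is available).
from scipy.stats import norm

# Example 1: left-tail probabilities read directly from the table.
print(norm.cdf(1.48))     # P(Z < 1.48)  ~ 0.9306
print(norm.cdf(-0.25))    # P(Z < -0.25) ~ 0.4013

# Example 2: right tails via the Probability Rule for Complements.
print(1 - norm.cdf(1.60))    # P(Z > 1.60)  ~ 0.0548
print(1 - norm.cdf(-1.02))   # P(Z > -1.02) ~ 0.8461

# Examples 3-5: intervals as differences of cumulative probabilities.
print(norm.cdf(1.57) - norm.cdf(0.50))   # ~ 0.2503
print(norm.cdf(0.09) - norm.cdf(-2.55))  # ~ 0.5305
print(norm.cdf(1) - norm.cdf(-1))        # ~ 0.6827 (Empirical Rule)
```

Small discrepancies in the last decimal place come from the table's four-decimal rounding.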
Learning Objectives • To learn how to compute probabilities related to any normal random variable. If $X$ is any normally distributed random variable then Figure $1$ can also be used to compute a probability of the form $P(a<X<b)$ by means of the following equality. Equality If $X$ is a normally distributed random variable with mean $\mu$ and standard deviation $\sigma$, then $P(a<X<b)=P\left ( \frac{a-\mu }{\sigma }<Z<\frac{b-\mu }{\sigma } \right ) \nonumber$ where $Z$ denotes a standard normal random variable. $a$ can be any decimal number or $-\infty$; $b$ can be any decimal number or $\infty$. The new endpoints $\frac{(a-\mu )}{\sigma }$ and $\frac{(b-\mu )}{\sigma }$ are the $z$-scores of $a$ and $b$ as defined in Chapter 2. Figure $2$ illustrates the meaning of the equality geometrically: the two shaded regions, one under the density curve for $X$ and the other under the density curve for $Z$, have the same area. Instead of drawing both bell curves, though, we will always draw a single generic bell-shaped curve with both an $x$-axis and a $z$-axis below it. Example $1$ Let $X$ be a normal random variable with mean $\mu =10$ and standard deviation $\sigma =2.5$. Compute the following probabilities. 1. $P(X<14)$. 2. $P(8<X<14)$. Solution 1. See Figure $3$ "Probability Computation for a General Normal Random Variable". \begin{align*} P(X<14) &= P\left ( Z<\frac{14-\mu }{\sigma } \right )\\ &= P\left ( Z<\frac{14-10}{2.5} \right )\\ &= P(Z<1.60)\\ &= 0.9452 \end{align*} \nonumber 2. See Figure $4$ "Probability Computation for a General Normal Random Variable". \begin{align*} P(8<X<14) &= P\left ( \frac{8-10}{2.5}<Z<\frac{14-10}{2.5} \right )\\ &= P\left ( -0.80<Z<1.60 \right )\\ &= 0.9452-0.2119\\ &= 0.7333 \end{align*} \nonumber Example $2$ The lifetimes of the tread of a certain automobile tire are normally distributed with mean $37,500$ miles and standard deviation $4,500$ miles. Find the probability that the tread life of a randomly selected tire will be between $30,000$ and $40,000$ miles. Solution Let $X$ denote the tread life of a randomly selected tire. To make the numbers easier to work with we will choose thousands of miles as the units. Thus $\mu =37.5,\; \sigma =4.5$, and the problem is to compute $P(30<X<40)$. Figure $5$ "Probability Computation for Tire Tread Wear" illustrates the following computation: \begin{align*} P(30<X<40) &= P\left ( \frac{30-\mu }{\sigma }<Z<\frac{40-\mu }{\sigma } \right )\\ &= P\left ( \frac{30-37.5}{4.5}<Z<\frac{40-37.5}{4.5} \right )\\ &= P\left ( -1.67<Z<0.56\right )\\ &= 0.7123-0.0475\\ &= 0.6648 \end{align*} \nonumber Note that the two $z$-scores were rounded to two decimal places in order to use Figure $1$ "Cumulative Normal Probability". Example $3$ Scores on a standardized college entrance examination (CEE) are normally distributed with mean $510$ and standard deviation $60$. A selective university considers for admission only applicants with CEE scores over $650$. Find the percentage of all individuals who took the CEE who meet the university's CEE requirement for consideration for admission. Solution Let $X$ denote the score made on the CEE by a randomly selected individual. Then $X$ is normally distributed with mean $510$ and standard deviation $60$. The probability that $X$ lies in a particular interval is the same as the proportion of all exam scores that lie in that interval. Thus the solution to the problem is $P(X>650)$, expressed as a percentage.
Figure $6$ "Probability Computation for Exam Scores" illustrates the following computation: \begin{align*} P(X>650) &= P\left ( Z>\frac{650-\mu }{\sigma } \right )\ &= P\left ( Z>\frac{650-510}{60} \right )\ &= P(Z>2.33)\ &= 1-0.9901\ &= 0.0099 \end{align*} \nonumber The proportion of all CEE scores that exceed $650$ is $0.0099$, hence $0.99\%$ or about $1\%$ do. key takeaway • Probabilities for a general normal random variable are computed using Figure $1$ after converting $x$-values to $z$-scores.
Learning Objectives • To learn how to find, for a normal random variable $X$ and an area $a$, the value $x^\ast$ of $X$ so that $P(X<x^\ast )=a$ or that $P(X>x^\ast )=a$, whichever is required. Definition: Left and Right Tails The left tail of a density curve $y=f(x)$ of a continuous random variable $X$ cut off by a value $x^\ast$ of $X$ is the region under the curve that is to the left of $x^\ast$, as shown by the shading in Figure $1$(a). The right tail cut off by $x^\ast$ is defined similarly, as indicated by the shading in Figure $1$(b). The probabilities tabulated in Figure 5.3.1 are areas of left tails in the standard normal distribution. Tails of the Standard Normal Distribution At times it is important to be able to solve the kind of problem illustrated by Figure $2$. We have a certain specific area in mind, in this case the area $0.0125$ of the shaded region in the figure, and we want to find the value $z^\ast$ of $Z$ that produces it. This is exactly the reverse of the kind of problems encountered so far. Instead of knowing a value $z^\ast$ of $Z$ and finding a corresponding area, we know the area and want to find $z^\ast$. In the case at hand, in the terminology of the definition just above, we wish to find the value $z^\ast$ that cuts off a left tail of area $0.0125$ in the standard normal distribution. The idea for solving such a problem is fairly simple, although sometimes its implementation can be a bit complicated. In a nutshell, one reads the cumulative probability table for $Z$ in reverse, looking up the relevant area in the interior of the table and reading off the value of $Z$ from the margins. Example $1$ Find the value $z^\ast$ of $Z$ as determined by Figure $2$: the value $z^\ast$ that cuts off a left tail of area $0.0125$ in the standard normal distribution. In symbols, find the number $z^\ast$ such that $P(Z<z^\ast )=0.0125$. Solution The number that is known, $0.0125$, is the area of a left tail, and as already mentioned the probabilities tabulated in Figure 5.3.1 are areas of left tails. Thus to solve this problem we need only search in the interior of Figure 5.3.1 for the number $0.0125$. It lies in the row with the heading $-2.2$ and in the column with the heading $0.04$. This means that $P(Z < -2.24)= 0.0125$, hence $z^\ast=-2.24$. Example $2$ Find the value $z^\ast$ of $Z$ as determined by Figure $3$: the value $z^\ast$ that cuts off a right tail of area $0.0250$ in the standard normal distribution. In symbols, find the number $z^\ast$ such that $P(Z >z^\ast)= 0.0250$. Solution The important distinction between this example and the previous one is that here it is the area of a right tail that is known. In order to be able to use Figure 5.3.1 we must first find the area of the left tail cut off by the unknown number $z^\ast$. Since the total area under the density curve is $1$, that area is $1-0.0250=0.9750$. This is the number we look for in the interior of Figure 5.3.1. It lies in the row with the heading $1.9$ and in the column with the heading $0.06$. Therefore $z^\ast=1.96$. Definition: critical value $z_c$ The value of the standard normal random variable $Z$ that cuts off a right tail of area $c$ is denoted $z_c$. By symmetry, the value of $Z$ that cuts off a left tail of area $c$ is $-z_c$. See Figure $4$. The previous two examples were atypical because the areas we were looking for in the interior of Figure 5.3.1 were actually there. The following example illustrates the situation that is more common.
Example $3$ Find $z_{.01}$ and $-z_{.01}$, the values of $Z$ that cut off right and left tails of area $0.01$ in the standard normal distribution. Solution Since $-z_{.01}$ cuts off a left tail of area $0.01$ and Figure 5.3.1 is a table of left tails, we look for the number $0.0100$ in the interior of the table. It is not there, but falls between the two numbers $0.0102$ and $0.0099$ in the row with heading $-2.3$. The number $0.0099$ is closer to $0.0100$ than $0.0102$ is, so for the hundredths place in $-z_{.01}$ we use the heading of the column that contains $0.0099$, namely, $0.03$, and write $-z_{.01}\approx -2.33$. The answer to the second half of the problem is automatic: since $-z_{.01}\approx -2.33$, we conclude immediately that $z_{.01}\approx 2.33$. We could just as well have solved this problem by looking for $z_{.01}$ first, and it is instructive to rework the problem this way. To begin with, we must first subtract $0.01$ from $1$ to find the area $1-0.0100=0.9900$ of the left tail cut off by the unknown number $z_{.01}$. See Figure $5$. Then we search for the area $0.9900$ in the interior of Figure 5.3.1. It is not there, but falls between the numbers $0.9898$ and $0.9901$ in the row with heading $2.3$. Since $0.9901$ is closer to $0.9900$ than $0.9898$ is, we use the column heading above it, $0.03$, to obtain the approximation $z_{.01}\approx 2.33$. Then finally $-z_{.01}\approx -2.33$. Tails of General Normal Distributions The problem of finding the value $x^\ast$ of a general normally distributed random variable $X$ that cuts off a tail of a specified area also arises. This problem may be solved in two steps. Suppose $X$ is a normally distributed random variable with mean $\mu$ and standard deviation $\sigma$. To find the value $x^\ast$ of $X$ that cuts off a left or right tail of area $c$ in the distribution of $X$: 1. find the value $z^\ast$ of $Z$ that cuts off a left or right tail of area $c$ in the standard normal distribution; $z^\ast$ is the $z$-score of $x^\ast$; 2. compute $x^\ast$ using the destandardization formula $x^\ast =\mu +z^\ast \sigma \nonumber$ In short, solve the corresponding problem for the standard normal distribution, thereby obtaining the $z$-score of $x^\ast$, then destandardize to obtain $x^\ast$. Example $4$ Find $x^\ast$ such that $P(X<x^\ast )=0.9332$, where $X$ is a normal random variable with mean $\mu =10$ and standard deviation $\sigma =2.5$. Solution All the ideas for the solution are illustrated in Figure $6$. Since $0.9332$ is the area of a left tail, we can find $z^\ast$ simply by looking for $0.9332$ in the interior of Figure 5.3.1. It is in the row and column with headings $1.5$ and $0.00$, hence $z^\ast=1.50$. Thus $x^\ast$ is $1.50$ standard deviations above the mean, so $x^\ast =\mu +z^\ast \sigma =10+(1.50)\cdot (2.5)=13.75 \nonumber$ Example $5$ Find $x^\ast$ such that $P(X>x^\ast )=0.65$, where $X$ is a normal random variable with mean $\mu =175$ and standard deviation $\sigma =12$. Solution The situation is illustrated in Figure $7$. Since $0.65$ is the area of a right tail, we first subtract it from $1$ to obtain $1-0.65=0.35$, the area of the complementary left tail. We find $z^\ast$ by looking for $0.3500$ in the interior of Figure 5.3.1. It is not present, but lies between table entries $0.3520$ and $0.3483$. The entry $0.3483$ with row and column headings $-0.3$ and $0.09$ is closer to $0.3500$ than the other entry is, so $z^\ast \approx -0.39$.
Thus $x^\ast$ is $0.39$ standard deviations below the mean, so $x^\ast =\mu +z^\ast \sigma =175+(-0.39)\cdot (12)=170.32 \nonumber$ Example $6$ Scores on a standardized college entrance examination (CEE) are normally distributed with mean $510$ and standard deviation $60$. A selective university decides to give serious consideration for admission to applicants whose CEE scores are in the top $5\%$ of all CEE scores. Find the minimum score that meets this criterion for serious consideration for admission. Solution Let $X$ denote the score made on the CEE by a randomly selected individual. Then $X$ is normally distributed with mean $510$ and standard deviation $60$. The probability that $X$ lies in a particular interval is the same as the proportion of all exam scores that lie in that interval. Thus the minimum score that is in the top $5\%$ of all CEE scores is the score $x^\ast$ that cuts off a right tail in the distribution of $X$ of area $0.05$ ($5\%$ expressed as a proportion). See Figure $8$. Since $0.0500$ is the area of a right tail, we first subtract it from $1$ to obtain $1-0.0500=0.9500$, the area of the complementary left tail. We find $z^\ast =z_{.05}$ by looking for $0.9500$ in the interior of Figure 5.3.1. It is not present, and lies exactly halfway between the two nearest entries, $0.9495$ and $0.9505$. In the case of a tie like this, we will always average the values of $Z$ corresponding to the two table entries, obtaining here the value $z^\ast =1.645$. Using this value, we conclude that $x^\ast$ is $1.645$ standard deviations above the mean, so $x^\ast =\mu +z^\ast \sigma =510+(1.645)\cdot (60)=608.7 \nonumber$ Example $7$ All boys at a military school must run a fixed course as fast as they can as part of a physical examination. Finishing times are normally distributed with mean $29$ minutes and standard deviation $2$ minutes. The middle $75\%$ of all finishing times are classified as “average.” Find the range of times that are average finishing times by this definition. Solution Let $X$ denote the finish time of a randomly selected boy. Then $X$ is normally distributed with mean $29$ and standard deviation $2$. The probability that $X$ lies in a particular interval is the same as the proportion of all finish times that lie in that interval. Thus the situation is as shown in Figure $9$. Because the area in the middle corresponding to “average” times is $0.75$, the areas of the two tails add up to $1 - 0.75 = 0.25$ in all. By the symmetry of the density curve each tail must have half of this total, or area $0.125$ each. Thus the fastest time that is “average” has $z$-score $-z_{.125}$, which by Figure 5.3.1 is $-1.15$, and the slowest time that is “average” has $z$-score $z_{.125}=1.15$. The fastest and slowest times that are still considered average are $x_{fast}=\mu +(-z_{.125})\sigma =29+(-1.15)\cdot (2)=26.7 \nonumber$ and $x_{slow}=\mu +z_{.125}\sigma =29+(1.15)\cdot (2)=31.3 \nonumber$ A boy has an average finishing time if he runs the course with a time between $26.7$ and $31.3$ minutes, or equivalently between $26$ minutes $42$ seconds and $31$ minutes $18$ seconds. Key Takeaways • The problem of finding the number $z^\ast$ so that the probability $P(Z<z^\ast )$ is a specified value $c$ is solved by looking for the number $c$ in the interior of Figure 5.3.1 and reading $z^\ast$ from the margins.
• The problem of finding the number $z^\ast$ so that the probability $P(Z>z^\ast )$ is a specified value $c$ is solved by looking for the complementary probability $1-c$ in the interior of Figure 5.3.1 and reading $z^\ast$ from the margins. • For a normal random variable $X$ with mean $\mu$ and standard deviation $\sigma$, the problem of finding the number $x^\ast$ so that $P(X<x^\ast )$ is a specified value $c$ (or so that $P(X>x^\ast )$ is a specified value $c$) is solved in two steps: • (1) solve the corresponding problem for $Z$ with the same value of $c$, thereby obtaining the $z$-score, $z^\ast$, of $x^\ast$; • (2) find $x^\ast$ using $x^\ast =\mu +z^\ast \sigma$. • The value of $Z$ that cuts off a right tail of area $c$ in the standard normal distribution is denoted $z_c$.
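The reverse table lookup has a software counterpart: the inverse of the cumulative distribution function (in SciPy, the percent point function `ppf`). A sketch, assuming SciPy, reproducing the tail computations of this section:

```python
# Inverse lookups with the percent point function (inverse CDF),
# assuming scipy is available.
from scipy.stats import norm

# Example 1: z* cutting off a LEFT tail of area 0.0125.
print(norm.ppf(0.0125))       # ~ -2.2414 (table: -2.24)

# Critical values: z_c cuts off a RIGHT tail of area c,
# so look up the complementary left-tail area 1 - c.
print(norm.ppf(1 - 0.0250))   # z_.025 ~ 1.96
print(norm.ppf(1 - 0.05))     # z_.05  ~ 1.6449 (table tie-average: 1.645)

# Examples 4-6: destandardize with x* = mu + z* * sigma.
mu, sigma = 510, 60
x_star = mu + norm.ppf(1 - 0.05) * sigma
print(x_star)                 # ~ 608.7, the CEE cutoff for the top 5%
```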
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 5.1: Continuous Random Variables Basic 1. A continuous random variable $X$ has a uniform distribution on the interval $[5,12]$. Sketch the graph of its density function. 2. A continuous random variable $X$ has a uniform distribution on the interval $[-3,3]$. Sketch the graph of its density function. 3. A continuous random variable $X$ has a normal distribution with mean $100$ and standard deviation $10$. Sketch a qualitatively accurate graph of its density function. 4. A continuous random variable $X$ has a normal distribution with mean $73$ and standard deviation $2.5$. Sketch a qualitatively accurate graph of its density function. 5. A continuous random variable $X$ has a normal distribution with mean $73$. The probability that $X$ takes a value greater than $80$ is $0.212$. Use this information and the symmetry of the density function to find the probability that $X$ takes a value less than $66$. Sketch the density curve with relevant regions shaded to illustrate the computation. 6. A continuous random variable $X$ has a normal distribution with mean $169$. The probability that $X$ takes a value greater than $180$ is $0.17$. Use this information and the symmetry of the density function to find the probability that $X$ takes a value less than $158$. Sketch the density curve with relevant regions shaded to illustrate the computation. 7. A continuous random variable $X$ has a normal distribution with mean $50.5$. The probability that $X$ takes a value less than $54$ is $0.76$. Use this information and the symmetry of the density function to find the probability that $X$ takes a value greater than $47$. Sketch the density curve with relevant regions shaded to illustrate the computation. 8. A continuous random variable $X$ has a normal distribution with mean $12.25$. The probability that $X$ takes a value less than $13$ is $0.82$. Use this information and the symmetry of the density function to find the probability that $X$ takes a value greater than $11.50$. Sketch the density curve with relevant regions shaded to illustrate the computation. 9. The figure provided shows the density curves of three normally distributed random variables $X_A,\; X_B\; \text{and}\; X_C$. Their standard deviations (in no particular order) are $15$, $7$, and $20$. Use the figure to identify the values of the means $\mu _A,\: \mu _B,\; \text{and}\; \mu _C$ and standard deviations $\sigma _A,\: \sigma _B,\; \text{and}\; \sigma _C$ of the three random variables. 10. The figure provided shows the density curves of three normally distributed random variables $X_A,\; X_B\; \text{and}\; X_C$. Their standard deviations (in no particular order) are $20$, $5$, and $10$. Use the figure to identify the values of the means $\mu _A,\: \mu _B,\; \text{and}\; \mu _C$ and standard deviations $\sigma _A,\: \sigma _B,\; \text{and}\; \sigma _C$ of the three random variables. Applications 1. Dogberry's alarm clock is battery operated. The battery could fail with equal probability at any time of the day or night. Every day Dogberry sets his alarm for $6:30\; a.m.$ and goes to bed at $10:00\; p.m.$. Find the probability that when the clock battery finally dies, it will do so at the most inconvenient time, between $10:00\; p.m.$ and $6:30\; a.m.$. 2. Buses running a bus line near Desdemona's house run every $15$ minutes. Without paying attention to the schedule she walks to the nearest stop to take the bus to town.
Find the probability that she waits more than $10$ minutes. 3. The amount $X$ of orange juice in a randomly selected half-gallon container varies according to a normal distribution with mean $64$ ounces and standard deviation $0.25$ ounce. 1. Sketch the graph of the density function for $X$. 2. What proportion of all containers contain less than a half gallon ($64$ ounces)? Explain. 3. What is the median amount of orange juice in such containers? Explain. 4. The weight $X$ of grass seed in bags marked $50$ lb varies according to a normal distribution with mean $50$ lb and standard deviation $1$ ounce ($0.0625$ lb). 1. Sketch the graph of the density function for $X$. 2. What proportion of all bags weigh less than $50$ pounds? Explain. 3. What is the median weight of such bags? Explain. Answers 1. The graph is a horizontal line with height $1/7$ from $x = 5$ to $x = 12$. 2. 3. The graph is a bell-shaped curve centered at $100$ and extending from about $70$ to $130$. 4. 5. $0.212$ 6. 7. $0.76$ 8. 9. $\mu _A=100,\; \mu _B=200,\; \mu _C=300,\; \sigma _A=7,\; \sigma _B=20,\; \sigma _C=15$ 10. 11. $0.3542$ 12. 1. The graph is a bell-shaped curve centered at $64$ and extending from about $63.25$ to $64.75$. 2. $0.5$ 3. $64$ 5.2: The Standard Normal Distribution Basic 1. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated. 1. $P(Z < -1.72)$ 2. $P(Z < 2.05)$ 3. $P(Z < 0)$ 4. $P(Z > -2.11)$ 5. $P(Z > 1.63)$ 6. $P(Z > 2.36)$ 2. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated. 1. $P(Z < -1.17)$ 2. $P(Z < -0.05)$ 3. $P(Z < 0.66)$ 4. $P(Z > -2.43)$ 5. $P(Z > -1.00)$ 6. $P(Z > 2.19)$ 3. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated. 1. $P(-2.15 < Z < -1.09)$ 2. $P(-0.93 < Z < 0.55)$ 3. $P(0.68 < Z < 2.11)$ 4. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated. 1. $P(-1.99 < Z < -1.03)$ 2. $P(-0.87 < Z < 1.58)$ 3. $P(0.33 < Z < 0.96)$ 5. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated. 1. $P(-4.22 < Z < -1.39)$ 2. $P(-1.37 < Z < 5.11)$ 3. $P(Z < -4.31)$ 4. $P(Z < 5.02)$ 6. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated. 1. $P(Z > -5.31)$ 2. $P(-4.08 < Z < 0.58)$ 3. $P(Z < -6.16)$ 4. $P(-0.51< Z < 5.63)$ 7. Use Figure 7.1.5: Cumulative Normal Probability to find the probability listed. Find the second probability without referring to the table, but using the symmetry of the standard normal density curve instead. Sketch the density curve with relevant regions shaded to illustrate the computation. 1. $P(Z < -1.08),\; P(Z > 1.08)$ 2. $P(Z < -0.36),\; P(Z > 0.36)$ 3. $P(Z < 1.25),\; P(Z > -1.25)$ 4. $P(Z < 2.03),\; P(Z > -2.03)$ 8. Use Figure 7.1.5: Cumulative Normal Probability to find the probability listed. Find the second probability without referring to the table, but using the symmetry of the standard normal density curve instead. Sketch the density curve with relevant regions shaded to illustrate the computation. 1. $P(Z < -2.11),\; P(Z > 2.11)$ 2. $P(Z < -0.88),\; P(Z > 0.88)$ 3. $P(Z < 2.44),\; P(Z > -2.44)$ 4. $P(Z < 3.07),\; P(Z > -3.07)$ 9. The probability that a standard normal random variable $Z$ takes a value in the union of intervals $(-\infty ,-a]\cup [a,\infty )$, which arises in applications, will be denoted $P(Z \leq -a\; or\; Z \geq a)$. Use Figure 7.1.5: Cumulative Normal Probability to find the following probabilities of this type.
Sketch the density curve with relevant regions shaded to illustrate the computation. Because of the symmetry of the standard normal density curve you need to use Figure 7.1.5: Cumulative Normal Probability only one time for each part. 1. $P(Z < -1.29\; or\; Z > 1.29)$ 2. $P(Z < -2.33\; or\; Z > 2.33)$ 3. $P(Z < -1.96\; or\; Z > 1.96)$ 4. $P(Z < -3.09\; or\; Z > 3.09)$ 10. The probability that a standard normal random variable $Z$ takes a value in the union of intervals $(-\infty ,-a]\cup [a,\infty )$, which arises in applications, will be denoted $P(Z \leq -a\; or\; Z \geq a)$. Use Figure 7.1.5: Cumulative Normal Probability to find the following probabilities of this type. Sketch the density curve with relevant regions shaded to illustrate the computation. Because of the symmetry of the standard normal density curve you need to use Figure 7.1.5: Cumulative Normal Probability only one time for each part. 1. $P(Z < -2.58 \; or\; Z > 2.58 )$ 2. $P(Z < -2.81 \; or\; Z > 2.81 )$ 3. $P(Z < -1.65 \; or\; Z > 1.65 )$ 4. $P(Z < -2.43 \; or\; Z > 2.43 )$ Answers 1. $0.0427$ 2. $0.9798$ 3. $0.5$ 4. $0.9826$ 5. $0.0516$ 6. $0.0091$ 1. 1. $0.1221$ 2. $0.5326$ 3. $0.2309$ 2. 1. $0.0823$ 2. $0.9147$ 3. $0.0000$ 4. $1.0000$ 3. 1. $0.1401,\; 0.1401$ 2. $0.3594,\; 0.3594$ 3. $0.8944,\; 0.8944$ 4. $0.9788,\; 0.9788$ 4. 1. $0.1970$ 2. $0.0198$ 3. $0.0500$ 4. $0.0020$ 5.3: Probability Computations for General Normal Random Variables Basic 1. $X$ is a normally distributed random variable with mean $57$ and standard deviation $6$. Find the probability indicated. 1. $P(X < 59.5)$ 2. $P(X < 46.2)$ 3. $P(X > 52.2)$ 4. $P(X > 70)$ 2. $X$ is a normally distributed random variable with mean $-25$ and standard deviation $4$. Find the probability indicated. 1. $P(X < -27.2)$ 2. $P(X < -14.8)$ 3. $P(X > -33.1)$ 4. $P(X > -16.5)$ 3. $X$ is a normally distributed random variable with mean $112$ and standard deviation $15$. Find the probability indicated. 1. $P(100<X<125)$ 2. $P(91<X<107)$ 3. $P(118<X<160)$ 4. $X$ is a normally distributed random variable with mean $72$ and standard deviation $22$. Find the probability indicated. 1. $P(78<X<127)$ 2. $P(60<X<90)$ 3. $P(49<X<71)$ 5. $X$ is a normally distributed random variable with mean $500$ and standard deviation $25$. Find the probability indicated. 1. $P(X < 400)$ 2. $P(466<X<625)$ 6. $X$ is a normally distributed random variable with mean $0$ and standard deviation $0.75$. Find the probability indicated. 1. $P(-4.02 < X < 3.82)$ 2. $P(X > 4.11)$ 7. $X$ is a normally distributed random variable with mean $15$ and standard deviation $1$. Use Figure 7.1.5: Cumulative Normal Probability to find the first probability listed. Find the second probability using the symmetry of the density curve. Sketch the density curve with relevant regions shaded to illustrate the computation. 1. $P(X < 12),\; P(X > 18)$ 2. $P(X < 14),\; P(X > 16)$ 3. $P(X < 11.25),\; P(X > 18.75)$ 4. $P(X < 12.67),\; P(X > 17.33)$ 8. $X$ is a normally distributed random variable with mean $100$ and standard deviation $10$. Use Figure 7.1.5: Cumulative Normal Probability to find the first probability listed. Find the second probability using the symmetry of the density curve. Sketch the density curve with relevant regions shaded to illustrate the computation. 1. $P(X < 80),\; P(X > 120)$ 2. $P(X < 75),\; P(X > 125)$ 3. $P(X < 84.55),\; P(X > 115.45)$ 4. $P(X < 77.42),\; P(X > 122.58)$ 9. $X$ is a normally distributed random variable with mean $67$ and standard deviation $13$.
The probability that $X$ takes a value in the union of intervals $(-\infty ,67-a]\cup [67+a,\infty )$ will be denoted $P(X\leq 67-a\; or\; X\geq 67+a)$. Use Figure 7.1.5: Cumulative Normal Probability to find the following probabilities of this type. Sketch the density curve with relevant regions shaded to illustrate the computation. Because of the symmetry of the density curve you need to use Figure 7.1.5: Cumulative Normal Probability only one time for each part. 1. $P(X<57\; or\; X>77)$ 2. $P(X<47\; or\; X>87)$ 3. $P(X<49\; or\; X>85)$ 4. $P(X<37\; or\; X>97)$ 10. $X$ is a normally distributed random variable with mean $288$ and standard deviation $6$. The probability that $X$ takes a value in the union of intervals $(-\infty ,288-a]\cup [288+a,\infty )$ will be denoted $P(X\leq 288-a\; or\; X\geq 288+a)$. Use Figure 7.1.5: Cumulative Normal Probability to find the following probabilities of this type. Sketch the density curve with relevant regions shaded to illustrate the computation. Because of the symmetry of the density curve you need to use Figure 7.1.5: Cumulative Normal Probability only one time for each part. 1. $P(X<278\; or\; X>298)$ 2. $P(X<268\; or\; X>308)$ 3. $P(X<273\; or\; X>303)$ 4. $P(X<280\; or\; X>296)$ Applications 1. The amount $X$ of beverage in a can labeled $12$ ounces is normally distributed with mean $12.1$ ounces and standard deviation $0.05$ ounce. A can is selected at random. 1. Find the probability that the can contains at least $12$ ounces. 2. Find the probability that the can contains between $11.9$ and $12.1$ ounces. 2. The length of gestation for swine is normally distributed with mean $114$ days and standard deviation $0.75$ day. Find the probability that a litter will be born within one day of the mean of $114$. 3. The systolic blood pressure $X$ of adults in a region is normally distributed with mean $112$ mm Hg and standard deviation $15$ mm Hg. A person is considered “prehypertensive” if his systolic blood pressure is between $120$ and $130$ mm Hg. Find the probability that the blood pressure of a randomly selected person is prehypertensive. 4. Heights $X$ of adult women are normally distributed with mean $63.7$ inches and standard deviation $2.71$ inches. Romeo, who is $69.25$ inches tall, wishes to date only women who are shorter than he but within $4$ inches of his height. Find the probability that the next woman he meets will have such a height. 5. Heights $X$ of adult men are normally distributed with mean $69.1$ inches and standard deviation $2.92$ inches. Juliet, who is $63.25$ inches tall, wishes to date only men who are taller than she but within $6$ inches of her height. Find the probability that the next man she meets will have such a height. 6. A regulation hockey puck must weigh between $5.5$ and $6$ ounces. The weights $X$ of pucks made by a particular process are normally distributed with mean $5.75$ ounces and standard deviation $0.11$ ounce. Find the probability that a puck made by this process will meet the weight standard. 7. A regulation golf ball may not weigh more than $1.620$ ounces. The weights $X$ of golf balls made by a particular process are normally distributed with mean $1.361$ ounces and standard deviation $0.09$ ounce. Find the probability that a golf ball made by this process will meet the weight standard. 8.
The length of time that the battery in Hippolyta's cell phone will hold enough charge to operate acceptably is normally distributed with mean $25.6$ hours and standard deviation $0.32$ hour. Hippolyta forgot to charge her phone yesterday, so that at the moment she first wishes to use it today it has been $26$ hours $18$ minutes since the phone was last fully charged. Find the probability that the phone will operate properly. 9. The amount of non-mortgage debt per household for households in a particular income bracket in one part of the country is normally distributed with mean $\$28,350$ and standard deviation $\$3,425$. Find the probability that a randomly selected such household has between $\$20,000$ and $\$30,000$ in non-mortgage debt. 10. Birth weights of full-term babies in a certain region are normally distributed with mean $7.125$ lb and standard deviation $1.290$ lb. Find the probability that a randomly selected newborn will weigh less than $5.5$ lb, the historic definition of prematurity. 11. The distance from the seat back to the front of the knees of seated adult males is normally distributed with mean $23.8$ inches and standard deviation $1.22$ inches. The distance from the seat back to the back of the next seat forward in all seats on aircraft flown by a budget airline is $26$ inches. Find the proportion of adult men flying with this airline whose knees will touch the back of the seat in front of them. 12. The distance from the seat to the top of the head of seated adult males is normally distributed with mean $36.5$ inches and standard deviation $1.39$ inches. The distance from the seat to the roof of a particular make and model car is $40.5$ inches. Find the proportion of adult men who when sitting in this car will have at least one inch of headroom (distance from the top of the head to the roof). Additional Exercises 1. The useful life of a particular make and type of automotive tire is normally distributed with mean $57,500$ miles and standard deviation $950$ miles. 1. Find the probability that such a tire will have a useful life of between $57,000$ and $58,000$ miles. 2. Hamlet buys four such tires. Assuming that their lifetimes are independent, find the probability that all four will last between $57,000$ and $58,000$ miles. (If so, the best tire will have no more than $1,000$ miles left on it when the first tire fails.) Hint: There is a binomial random variable here, whose value of $p$ comes from part (a). 2. A machine produces large fasteners whose length must be within $0.5$ inch of $22$ inches. The lengths are normally distributed with mean $22.0$ inches and standard deviation $0.17$ inch. 1. Find the probability that a randomly selected fastener produced by the machine will have an acceptable length. 2. The machine produces $20$ fasteners per hour. The length of each one is inspected. Assuming lengths of fasteners are independent, find the probability that all $20$ will have acceptable length. Hint: There is a binomial random variable here, whose value of $p$ comes from part (a). 3. The lengths of time taken by students on an algebra proficiency exam (if not forced to stop before completing it) are normally distributed with mean $28$ minutes and standard deviation $1.5$ minutes. 1. Find the proportion of students who will finish the exam if a $30$-minute time limit is set. 2. Six students are taking the exam today. Find the probability that all six will finish the exam within the $30$-minute limit, assuming that times taken by students are independent.
Hint: There is a binomial random variable here, whose value of $p$ comes from part (a). 4. Heights of adult men between $18$ and $34$ years of age are normally distributed with mean $69.1$ inches and standard deviation $2.92$ inches. One requirement for enlistment in the military is that men must stand between $60$ and $80$ inches tall. 1. Find the probability that a randomly selected man meets the height requirement for military service. 2. Twenty-three men independently contact a recruiter this week. Find the probability that all of them meet the height requirement. Hint: There is a binomial random variable here, whose value of $p$ comes from part (a). 5. A regulation hockey puck must weigh between $5.5$ and $6$ ounces. In an alternative manufacturing process the mean weight of pucks produced is $5.75$ ounces. The weights of pucks have a normal distribution whose standard deviation can be decreased by increasingly stringent (and expensive) controls on the manufacturing process. Find the maximum allowable standard deviation so that at most $0.005$ of all pucks will fail to meet the weight standard. (Hint: The distribution is symmetric and is centered at the middle of the interval of acceptable weights.) 6. The amount of gasoline $X$ delivered by a metered pump when it registers $5$ gallons is a normally distributed random variable. The standard deviation $\sigma$ of $X$ measures the precision of the pump; the smaller $\sigma$ is the smaller the variation from delivery to delivery. A typical standard for pumps is that when they show that $5$ gallons of fuel has been delivered the actual amount must be between $4.97$ and $5.03$ gallons (which corresponds to being off by at most about half a cup). Supposing that the mean of $X$ is $5$, find the largest that $\sigma$ can be so that $P(4.97 < X < 5.03)$ is $1.0000$ to four decimal places when computed using Figure 7.1.5: Cumulative Normal Probability, which means that the pump is sufficiently accurate. (Hint: The $z$-score of $5.03$ will be the smallest value of $Z$ so that Figure 7.1.5: Cumulative Normal Probability gives $P(Z<z)=1.0000$.) Answers 1. $0.6628$ 2. $0.0359$ 3. $0.7881$ 4. $0.0150$ 1. 1. $0.5959$ 2. $0.2899$ 3. $0.3439$ 2. 1. $0.0000$ 2. $0.9131$ 3. 1. $0.0013,\; 0.0013$ 2. $0.1587,\; 0.1587$ 3. $0.0001,\; 0.0001$ 4. $0.0099,\; 0.0099$ 4. 1. $0.4412$ 2. $0.1236$ 3. $0.1676$ 4. $0.0208$ 5. 1. $0.9772$ 2. $0.5000$ 6. 7. $0.1830$ 8. 9. $0.4971$ 10. 11. $0.9980$ 12. 13. $0.6771$ 14. 15. $0.0359$ 16. 1. $0.4038$ 2. $0.0266$ 17. 1. $0.9082$ 2. $0.5612$ 18. 19. $0.089$ 5.4: Areas of Tails of Distributions Basic 1. Find the value of $z^\ast$ that yields the probability shown. 1. $P(Z<z^\ast )=0.0075$ 2. $P(Z<z^\ast )=0.9850$ 3. $P(Z>z^\ast )=0.8997$ 4. $P(Z>z^\ast )=0.0110$ 2. Find the value of $z^\ast$ that yields the probability shown. 1. $P(Z<z^\ast )=0.3300$ 2. $P(Z<z^\ast )=0.9901$ 3. $P(Z>z^\ast )=0.0055$ 4. $P(Z>z^\ast )=0.7995$ 3. Find the value of $z^\ast$ that yields the probability shown. 1. $P(Z<z^\ast )=0.1500$ 2. $P(Z<z^\ast )=0.7500$ 3. $P(Z>z^\ast )=0.3333$ 4. $P(Z>z^\ast )=0.8000$ 4. Find the value of $z^\ast$ that yields the probability shown. 1. $P(Z<z^\ast )=0.2200$ 2. $P(Z<z^\ast )=0.6000$ 3. $P(Z>z^\ast )=0.0750$ 4. $P(Z>z^\ast )=0.8200$ 5. Find the indicated value of $Z$. (It is easier to find $-z_c$ and negate it.) 1. $z_{0.025}$ 2. $z_{0.20}$ 6. Find the indicated value of $Z$. (It is easier to find $-z_c$ and negate it.) 1. $z_{0.002}$ 2. $z_{0.02}$ 7.
Find the value of $x^\ast$ that yields the probability shown, where $X$ is a normally distributed random variable with mean $83$ and standard deviation $4$. 1. $P(X<x^\ast )=0.8700$ 2. $P(X>x^\ast )=0.0500$ 8. Find the value of $x^\ast$ that yields the probability shown, where $X$ is a normally distributed random variable with mean $54$ and standard deviation $12$. 1. $P(X<x^\ast )=0.0900$ 2. $P(X>x^\ast )=0.6500$ 9. $X$ is a normally distributed random variable with mean $15$ and standard deviation $0.25$. Find the values $X_L$ and $X_R$ of $X$ that are symmetrically located with respect to the mean of $X$ and satisfy $P(X_L < X < X_R) = 0.80$. (Hint: First solve the corresponding problem for $Z$.) 10. $X$ is a normally distributed random variable with mean $28$ and standard deviation $3.7$. Find the values $X_L$ and $X_R$ of $X$ that are symmetrically located with respect to the mean of $X$ and satisfy $P(X_L < X < X_R) = 0.65$. (Hint: First solve the corresponding problem for $Z$.) Applications 1. Scores on a national exam are normally distributed with mean $382$ and standard deviation $26$. 1. Find the score that is the $50^{th}$ percentile. 2. Find the score that is the $90^{th}$ percentile. 2. Heights of women are normally distributed with mean $63.7$ inches and standard deviation $2.47$ inches. 1. Find the height that is the $10^{th}$ percentile. 2. Find the height that is the $80^{th}$ percentile. 3. The monthly amount of water used per household in a small community is normally distributed with mean $7,069$ gallons and standard deviation $58$ gallons. Find the three quartiles for the amount of water used. 4. The quantity of gasoline purchased in a single sale at a chain of filling stations in a certain region is normally distributed with mean $11.6$ gallons and standard deviation $2.78$ gallons. Find the three quartiles for the quantity of gasoline purchased in a single sale. 5. Scores on the common final exam given in a large enrollment multiple section course were normally distributed with mean $69.35$ and standard deviation $12.93$. The department has the rule that in order to receive an $A$ in the course a student's score must be in the top $10\%$ of all exam scores. Find the minimum exam score that meets this requirement. 6. The average finishing time among all high school boys in a particular track event in a certain state is $5$ minutes $17$ seconds. Times are normally distributed with standard deviation $12$ seconds. 1. The qualifying time in this event for participation in the state meet is to be set so that only the fastest $5\%$ of all runners qualify. Find the qualifying time. (Hint: Convert seconds to minutes.) 2. In the western region of the state the times of all boys running in this event are normally distributed with standard deviation $12$ seconds, but with mean $5$ minutes $22$ seconds. Find the proportion of boys from this region who qualify to run in this event in the state meet. 7. Tests of a new tire developed by a tire manufacturer led to an estimated mean tread life of $67,350$ miles and standard deviation of $1,120$ miles. The manufacturer will advertise the lifetime of the tire (for example, a “$50,000$ mile tire”) using the largest value for which it is expected that $98\%$ of the tires will last at least that long. Assuming tire life is normally distributed, find that advertised value. 8. Tests of a new light bulb led to an estimated mean life of $1,321$ hours and standard deviation of $106$ hours.
The manufacturer will advertise the lifetime of the bulb using the largest value for which it is expected that $90\%$ of the bulbs will last at least that long. Assuming bulb life is normally distributed, find that advertised value. 9. The weights $X$ of eggs produced at a particular farm are normally distributed with mean $1.72$ ounces and standard deviation $0.12$ ounce. Eggs whose weights lie in the middle $75\%$ of the distribution of weights of all eggs are classified as “medium.” Find the maximum and minimum weights of such eggs. (These weights are endpoints of an interval that is symmetric about the mean and in which the weights of $75\%$ of the eggs produced at this farm lie.) 10. The lengths $X$ of hardwood flooring strips are normally distributed with mean $28.9$ inches and standard deviation $6.12$ inches. Strips whose lengths lie in the middle $80\%$ of the distribution of lengths of all strips are classified as “average-length strips.” Find the maximum and minimum lengths of such strips. (These lengths are endpoints of an interval that is symmetric about the mean and in which the lengths of $80\%$ of the hardwood strips lie.) 11. All students in a large enrollment multiple section course take common in-class exams and a common final, and submit common homework assignments. Course grades are assigned based on students' final overall scores, which are approximately normally distributed. The department assigns a $C$ to students whose scores constitute the middle $2/3$ of all scores. If scores this semester had mean $72.5$ and standard deviation $6.14$, find the interval of scores that will be assigned a $C$. 12. Researchers wish to investigate the overall health of individuals with abnormally high or low levels of glucose in the blood stream. Suppose glucose levels are normally distributed with mean $96$ and standard deviation $8.5\; mg/dl$, and that “normal” is defined as the middle $90\%$ of the population. Find the interval of normal glucose levels, that is, the interval centered at $96$ that contains $90\%$ of all glucose levels in the population. Additional Exercises 1. A machine for filling $2$-liter bottles of soft drink delivers an amount to each bottle that varies from bottle to bottle according to a normal distribution with standard deviation $0.002$ liter and mean whatever amount the machine is set to deliver. 1. If the machine is set to deliver $2$ liters (so the mean amount delivered is $2$ liters) what proportion of the bottles will contain at least $2$ liters of soft drink? 2. Find the minimum setting of the mean amount delivered by the machine so that at least $99\%$ of all bottles will contain at least $2$ liters. 2. A nursery has observed that the mean number of days it must darken the environment of a certain species of poinsettia plant daily in order to have it ready for market is $71$ days. Suppose the lengths of such periods of darkening are normally distributed with standard deviation $2$ days. Find the number of days in advance of the projected delivery dates of the plants to market that the nursery must begin the daily darkening process in order that at least $95\%$ of the plants will be ready on time. (Poinsettias are so long-lived that once ready for market the plant remains salable indefinitely.) Answers 1. $-2.43$ 2. $2.17$ 3. $-1.28$ 4. $2.29$ 1. 1. $-1.04$ 2. $0.67$ 3. $0.43$ 4. $-0.84$ 2. 1. $1.96$ 2. $0.84$ 3. 1. $87.52$ 2. $89.58$ 4. 5. $14.68,\; 15.32$ 6. 1. $382$ 2. $415$ 7. 8. $7030.14,\; 7069,\; 7107.86$ 9. 10. $85.90$ 11. 12. $65,054$ 13. 14. $1.58,\; 1.86$ 15.
16. $66.5,\; 78.5$ 17. 1. $0.5$ 2. $2.005$ • Anonymous
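Readers who want to spot-check the table-based answers above can do so in software. A minimal sketch, assuming SciPy is available (small discrepancies reflect the table's two-decimal $z$-score rounding):

```python
# Spot-checking two of the answers above with SciPy.
from scipy.stats import norm

# 5.3 Applications 1(a): P(can holds at least 12 oz), X ~ N(12.1, 0.05).
print(1 - norm.cdf((12 - 12.1) / 0.05))   # ~ 0.9772

# 5.4 Additional Exercise 1(b): smallest mean setting so that
# P(X >= 2) >= 0.99 when sigma = 0.002 liter.
z = norm.ppf(0.99)                        # ~ 2.326 (table: 2.33)
print(2 + z * 0.002)                      # ~ 2.005 liters
```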
A statistic, such as the sample mean or the sample standard deviation, is a number computed from a sample. Since a sample is random, every statistic is a random variable: it varies from sample to sample in a way that cannot be predicted with certainty. As a random variable it has a mean, a standard deviation, and a probability distribution. The probability distribution of a statistic is called its sampling distribution. Typically sample statistics are not ends in themselves, but are computed in order to estimate the corresponding population parameters. This chapter introduces the concepts of the mean, the standard deviation, and the sampling distribution of a sample statistic, with an emphasis on the sample mean. • 6.1: The Mean and Standard Deviation of the Sample Mean The sample mean is a random variable and as a random variable, the sample mean has a probability distribution, a mean, and a standard deviation. There are formulas that relate the mean and standard deviation of the sample mean to the mean and standard deviation of the population from which the sample is drawn. • 6.2: The Sampling Distribution of the Sample Mean This phenomenon of the sampling distribution of the mean taking on a bell shape even though the population distribution is not bell-shaped happens in general. The importance of the Central Limit Theorem is that it allows us to make probability statements about the sample mean, specifically in relation to its value in comparison to the population mean, as we will see in the examples. • 6.3: The Sample Proportion Often sampling is done in order to estimate the proportion of a population that has a specific characteristic. • 6.E: Sampling Distributions (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 06: Sampling Distributions Learning Objectives • To become familiar with the concept of the probability distribution of the sample mean. • To understand the meaning of the formulas for the mean and standard deviation of the sample mean. Suppose we wish to estimate the mean $μ$ of a population. In actual practice we would typically take just one sample. Imagine however that we take sample after sample, all of the same size $n$, and compute the sample mean $\bar{x}$ each time. The sample mean $\bar{X}$ is a random variable: it varies from sample to sample in a way that cannot be predicted with certainty. We will write $\bar{X}$ when the sample mean is thought of as a random variable, and write $\bar{x}$ for the values that it takes. The random variable $\bar{X}$ has a mean, denoted $μ_{\bar{X}}$, and a standard deviation, denoted $σ_{\bar{X}}$. Here is an example with such a small population and small sample size that we can actually write down every single sample. Example $1$ A rowing team consists of four rowers who weigh $152$, $156$, $160$, and $164$ pounds. Find all possible random samples with replacement of size two and compute the sample mean for each one. Use them to find the probability distribution, the mean, and the standard deviation of the sample mean $\bar{X}$.
Solution The following table shows all possible samples with replacement of size two, along with the mean of each: Sample Mean Sample Mean Sample Mean Sample Mean 152, 152 152   156, 152 154   160, 152 156   164, 152 158 152, 156 154   156, 156 156   160, 156 158   164, 156 160 152, 160 156   156, 160 158   160, 160 160   164, 160 162 152, 164 158   156, 164 160   160, 164 162   164, 164 164 The table shows that there are seven possible values of the sample mean $\bar{X}$. The value $\bar{x}=152$ happens only one way (the rower weighing $152$ pounds must be selected both times), as does the value $\bar{x}=164$, but the other values happen more than one way, hence are more likely to be observed than $152$ and $164$ are. Since the $16$ samples are equally likely, we obtain the probability distribution of the sample mean just by counting: $\begin{array}{c|c c c c c c c} \bar{x} & 152 & 154 & 156 & 158 & 160 & 162 & 164\\ \hline P(\bar{x}) &\frac{1}{16} &\frac{2}{16} &\frac{3}{16} &\frac{4}{16} &\frac{3}{16} &\frac{2}{16} &\frac{1}{16}\\ \end{array} \nonumber$ Now we apply the formulas from Section 4.2 to $\bar{X}$. For $\mu_{\bar{X}}$, we obtain \begin{align*} μ_{\bar{X}} &=\sum \bar{x} P(\bar{x}) \\[4pt] &=152\left ( \dfrac{1}{16}\right )+154\left ( \dfrac{2}{16}\right )+156\left ( \dfrac{3}{16}\right )+158\left ( \dfrac{4}{16}\right )+160\left ( \dfrac{3}{16}\right )+162\left ( \dfrac{2}{16}\right )+164\left ( \dfrac{1}{16}\right ) \\[4pt] &=158 \end{align*} \nonumber For $σ_{\bar{X}}$, we first compute $\sum \bar{x}^2P(\bar{x})$: \begin{align*} \sum \bar{x}^2P(\bar{x})= 152^2\left ( \dfrac{1}{16}\right )+154^2\left ( \dfrac{2}{16}\right )+156^2\left ( \dfrac{3}{16}\right )+158^2\left ( \dfrac{4}{16}\right )+160^2\left ( \dfrac{3}{16}\right )+162^2\left ( \dfrac{2}{16}\right )+164^2\left ( \dfrac{1}{16}\right ) \end{align*} \nonumber which is $24,974$, so that \begin{align*} \sigma _{\bar{X}}&=\sqrt{\sum \bar{x}^2P(\bar{x})-\mu _{\bar{X}}^{2}} \\[4pt] &=\sqrt{24,974-158^2} \\[4pt] &=\sqrt{10} \end{align*} \nonumber The mean and standard deviation of the population $\{152,156,160,164\}$ in the example are $μ = 158$ and $σ=\sqrt{20}$. The mean of the sample mean $\bar{X}$ that we have just computed is exactly the mean of the population. The standard deviation of the sample mean $\bar{X}$ that we have just computed is the standard deviation of the population divided by the square root of the sample size: $\sqrt{10} = \sqrt{20}/\sqrt{2}$. These relationships are not coincidences, but are illustrations of the following formulas. Definition: Sample mean and sample standard deviation Suppose random samples of size $n$ are drawn from a population with mean $μ$ and standard deviation $σ$. The mean $\mu_{\bar{X}}$ and standard deviation $σ_{\bar{X}}$ of the sample mean $\bar{X}$ satisfy $μ_{\bar{X}} =μ \label{average}$ and $σ_{\bar{X}}=\dfrac{σ}{\sqrt{n}} \label{std}$ Equation $\ref{average}$ says that if we could take every possible sample from the population and compute the corresponding sample mean, then those numbers would center at the number we wish to estimate, the population mean $μ$. Equation $\ref{std}$ says that averages computed from samples vary less than individual measurements on the population do, and quantifies the relationship. Example $2$ The mean and standard deviation of the tax value of all vehicles registered in a certain state are $μ=\$13,525$ and $σ=\$4,180$. Suppose random samples of size $100$ are drawn from the population of vehicles.
What are the mean $\mu_{\bar{X}}$ and standard deviation $σ_{\bar{X}}$ of the sample mean $\bar{X}$? Solution Since $n = 100$, the formulas yield $\mu _{\bar{X}} =\mu = \$13,525 \nonumber$ and $\sigma _{\bar{X}}=\frac{\sigma }{\sqrt{n}}=\frac{\$4,180}{\sqrt{100}}=\$418 \nonumber$ Key Takeaway • The sample mean is a random variable; as such it is written $\bar{X}$, and $\bar{x}$ stands for individual values it takes. • As a random variable the sample mean has a probability distribution, a mean $μ_{\bar{X}}$, and a standard deviation $σ_{\bar{X}}$. • There are formulas that relate the mean and standard deviation of the sample mean to the mean and standard deviation of the population from which the sample is drawn.
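Because the rowing-team population is so small, the two formulas can be verified by brute-force enumeration. A sketch, assuming Python with NumPy:

```python
# Verify mu_xbar = mu and sigma_xbar = sigma / sqrt(n) for the
# four-rower population, by enumerating all 16 samples of size 2.
from itertools import product
import numpy as np

population = np.array([152, 156, 160, 164])
mu, sigma = population.mean(), population.std()   # 158, sqrt(20)

# All 16 equally likely samples of size 2, drawn with replacement.
means = np.array([np.mean(s) for s in product(population, repeat=2)])

print(means.mean())          # 158.0   = mu
print(means.std())           # 3.1623  = sqrt(10)
print(sigma / np.sqrt(2))    # 3.1623  = sigma / sqrt(n)
```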
Learning Objectives • To learn what the sampling distribution of $\overline{X}$ is when the sample size is large. • To learn what the sampling distribution of $\overline{X}$ is when the population is normal. In Example 6.1.1, we constructed the probability distribution of the sample mean for samples of size two drawn from the population of four rowers. The probability distribution is: $\begin{array}{c|c c c c c c c} \bar{x} & 152 & 154 & 156 & 158 & 160 & 162 & 164\\ \hline P(\bar{x}) &\dfrac{1}{16} &\dfrac{2}{16} &\dfrac{3}{16} &\dfrac{4}{16} &\dfrac{3}{16} &\dfrac{2}{16} &\dfrac{1}{16}\\ \end{array} \nonumber$ Figure $1$ shows a side-by-side comparison of a histogram for the original population and a histogram for this distribution. Whereas the distribution of the population is uniform, the sampling distribution of the mean has a shape approaching the shape of the familiar bell curve. This phenomenon of the sampling distribution of the mean taking on a bell shape even though the population distribution is not bell-shaped happens in general. Here is a somewhat more realistic example. Suppose we take samples of size $1$, $5$, $10$, or $20$ from a population that consists entirely of the numbers $0$ and $1$, half the population $0$, half $1$, so that the population mean is $0.5$. The sampling distributions are: $n = 1$: $\begin{array}{c|c c } \bar{x} & 0 & 1 \\ \hline P(\bar{x}) &0.5 &0.5 \\ \end{array} \nonumber$ $n = 5$: $\begin{array}{c|c c c c c c} \bar{x} & 0 & 0.2 & 0.4 & 0.6 & 0.8 & 1 \\ \hline P(\bar{x}) &0.03 &0.16 &0.31 &0.31 &0.16 &0.03 \\ \end{array} \nonumber$ $n = 10$: $\begin{array}{c|c c c c c c c c c c c} \bar{x} & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1 \\ \hline P(\bar{x}) &0.00 &0.01 &0.04 &0.12 &0.21 &0.25 &0.21 &0.12 &0.04 &0.01 &0.00 \\ \end{array} \nonumber$ $n = 20$: $\begin{array}{c|c c c c c c c c c c c} \bar{x} & 0 & 0.05 & 0.10 & 0.15 & 0.20 & 0.25 & 0.30 & 0.35 & 0.40 & 0.45 & 0.50 \\ \hline P(\bar{x}) &0.00 &0.00 &0.00 &0.00 &0.00 &0.01 &0.04 &0.07 &0.12 &0.16 &0.18 \\ \end{array} \nonumber$ and $\begin{array}{c|c c c c c c c c c c } \bar{x} & 0.55 & 0.60 & 0.65 & 0.70 & 0.75 & 0.80 & 0.85 & 0.90 & 0.95 & 1 \\ \hline P(\bar{x}) &0.16 &0.12 &0.07 &0.04 &0.01 &0.00 &0.00 &0.00 &0.00 &0.00 \\ \end{array} \nonumber$ Histograms illustrating these distributions are shown in Figure $2$. As $n$ increases the sampling distribution of $\overline{X}$ evolves in an interesting way: the probabilities on the lower and the upper ends shrink and the probabilities in the middle become larger in relation to them. If we were to continue to increase $n$ then the shape of the sampling distribution would become smoother and more bell-shaped. What we are seeing in these examples does not depend on the particular population distributions involved. In general, one may start with any distribution and the sampling distribution of the sample mean will increasingly resemble the bell-shaped normal curve as the sample size increases. This is the content of the Central Limit Theorem. The Central Limit Theorem For samples of size $30$ or more, the sample mean is approximately normally distributed, with mean $\mu _{\overline{X}}=\mu$ and standard deviation $\sigma _{\overline{X}}=\dfrac{\sigma }{\sqrt{n}}$, where $n$ is the sample size. The larger the sample size, the better the approximation. The Central Limit Theorem is illustrated for several common population distributions in Figure $3$. The dashed vertical lines in the figures locate the population mean.
Regardless of the distribution of the population, as the sample size is increased the shape of the sampling distribution of the sample mean becomes increasingly bell-shaped, centered on the population mean. Typically by the time the sample size is $30$ the distribution of the sample mean is practically the same as a normal distribution. The importance of the Central Limit Theorem is that it allows us to make probability statements about the sample mean, specifically about how close it is likely to lie to the population mean, as we will see in the examples. But to use the result properly we must first realize that there are two separate random variables (and therefore two probability distributions) at play: 1. $X$, the measurement of a single element selected at random from the population; the distribution of $X$ is the distribution of the population, with mean the population mean $\mu$ and standard deviation the population standard deviation $\sigma$; 2. $\overline{X}$, the mean of the measurements in a sample of size $n$; the distribution of $\overline{X}$ is its sampling distribution, with mean $\mu _{\overline{X}}=\mu$ and standard deviation $\sigma _{\overline{X}}=\dfrac{\sigma }{\sqrt{n}}$. Example $1$ Let $\overline{X}$ be the mean of a random sample of size $50$ drawn from a population with mean $112$ and standard deviation $40$. 1. Find the mean and standard deviation of $\overline{X}$. 2. Find the probability that $\overline{X}$ assumes a value between $110$ and $114$. 3. Find the probability that $\overline{X}$ assumes a value greater than $113$. Solution 1. By the formulas in the previous section $\mu _{\overline{X}}=\mu=112 \nonumber$ and $\sigma_{\overline{X}}=\dfrac{\sigma}{\sqrt{n}}=\dfrac{40} {\sqrt{50}}=5.65685 \nonumber$ 2. Since the sample size is at least $30$, the Central Limit Theorem applies: $\overline{X}$ is approximately normally distributed. We compute probabilities using Figure 5.3.1 in the usual way, just being careful to use $\sigma _{\overline{X}}$ and not $\sigma$ when we standardize: \begin{align*} P(110<\overline{X}<114)&= P\left ( \dfrac{110-\mu _{\overline{X}}}{\sigma _{\overline{X}}} <Z<\dfrac{114-\mu _{\overline{X}}}{\sigma _{\overline{X}}}\right )\[4pt] &= P\left ( \dfrac{110-112}{5.65685} <Z<\dfrac{114-112}{5.65685}\right )\[4pt] &= P(-0.35<Z<0.35)\[4pt] &= 0.6368-0.3632\[4pt] &= 0.2736 \end{align*} \nonumber 3. Similarly \begin{align*} P(\overline{X}> 113)&= P\left ( Z>\dfrac{113-\mu _{\overline{X}}}{\sigma _{\overline{X}}}\right )\[4pt] &= P\left ( Z>\dfrac{113-112}{5.65685}\right )\[4pt] &= P(Z>0.18)\[4pt] &= 1-P(Z<0.18)\[4pt] &= 1-0.5714\[4pt] &= 0.4286 \end{align*} \nonumber Note that if in the above example we had been asked to compute the probability that the value of a single randomly selected element of the population exceeds $113$, that is, to compute the number $P(X>113)$, we would not have been able to do so, since we do not know the distribution of $X$, but only that its mean is $112$ and its standard deviation is $40$. By contrast we could compute $P(\overline{X}>113)$ even without complete knowledge of the distribution of $X$ because the Central Limit Theorem guarantees that $\overline{X}$ is approximately normal. Example $2$ The numerical population of grade point averages at a college has mean $2.61$ and standard deviation $0.5$. If a random sample of size $100$ is taken from the population, what is the probability that the sample mean will be between $2.51$ and $2.71$?
Solution The sample mean $\overline{X}$ has mean $\mu _{\overline{X}}=\mu =2.61$ and standard deviation $\sigma _{\overline{X}}=\dfrac{\sigma }{\sqrt{n}}=\dfrac{0.5}{10}=0.05$, so \begin{align*} P(2.51<\overline{X}<2.71)&= P\left ( \dfrac{2.51-\mu _{\overline{X}}}{\sigma _{\overline{X}}} <Z<\dfrac{2.71-\mu _{\overline{X}}}{\sigma _{\overline{X}}}\right )\[4pt] &= P\left ( \dfrac{2.51-2.61}{0.05} <Z<\dfrac{2.71-2.61}{0.05}\right )\[4pt] &= P(-2<Z<2)\[4pt] &= P(Z<2)-P(Z<-2)\[4pt] &= 0.9772-0.0228\[4pt] &= 0.9544 \end{align*} \nonumber Normally Distributed Populations The Central Limit Theorem says that no matter what the distribution of the population is, as long as the sample is “large,” meaning of size $30$ or more, the sample mean is approximately normally distributed. If the population is normal to begin with then the sample mean also has a normal distribution, regardless of the sample size. For samples of any size drawn from a normally distributed population, the sample mean is normally distributed, with mean $\mu _{\overline{X}}=\mu$ and standard deviation $\sigma _{\overline{X}}=\sigma /\sqrt{n}$, where $n$ is the sample size. The effect of increasing the sample size is shown in Figure $4$. Example $3$ A prototype automotive tire has a design life of $38,500$ miles with a standard deviation of $2,500$ miles. Five such tires are manufactured and tested. On the assumption that the actual population mean is $38,500$ miles and the actual population standard deviation is $2,500$ miles, find the probability that the sample mean will be less than $36,000$ miles. Assume that the distribution of lifetimes of such tires is normal. Solution For simplicity we use units of thousands of miles. Then the sample mean $\overline{X}$ has mean $\mu _{\overline{X}}=\mu =38.5$ and standard deviation $\sigma _{\overline{X}}=\dfrac{\sigma }{\sqrt{n}}=\dfrac{2.5}{\sqrt{5}}=1.11803$. Since the population is normally distributed, so is $\overline{X}$, hence \begin{align*} P(\overline{X}<36)&= P\left ( Z<\dfrac{36-\mu _{\overline{X}}}{\sigma _{\overline{X}}}\right )\[4pt] &= P\left ( Z<\dfrac{36-38.5}{1.11803}\right )\[4pt] &= P(Z<-2.24)\[4pt] &= 0.0125 \end{align*} \nonumber That is, if the tires perform as designed, there is only about a $1.25\%$ chance that the average of a sample of this size would be so low. Example $4$ An automobile battery manufacturer claims that its midgrade battery has a mean life of $50$ months with a standard deviation of $6$ months. Suppose the distribution of battery lives of this particular brand is approximately normal. 1. On the assumption that the manufacturer’s claims are true, find the probability that a randomly selected battery of this type will last less than $48$ months. 2. On the same assumption, find the probability that the mean of a random sample of $36$ such batteries will be less than $48$ months. Solution 1. Since the population is known to have a normal distribution \begin{align*} P(X<48)&= P\left ( Z<\dfrac{48-\mu }{\sigma }\right )\[4pt] &= P\left ( Z<\dfrac{48-50}{6}\right )\[4pt] &= P(Z<-0.33)\[4pt] &= 0.3707 \end{align*} \nonumber 2. The sample mean has mean $\mu _{\overline{X}}=\mu =50$ and standard deviation $\sigma _{\overline{X}}=\dfrac{\sigma }{\sqrt{n}}=\dfrac{6}{\sqrt{36}}=1$. Thus \begin{align*} P(\overline{X}<48)&= P\left ( Z<\dfrac{48-\mu _{\overline{X}}}{\sigma _{\overline{X}}}\right )\[4pt] &= P\left ( Z<\dfrac{48-50}{1}\right )\[4pt] &= P(Z<-2)\[4pt] &= 0.0228 \end{align*} \nonumber Key Takeaway • When the sample size is at least $30$ the sample mean is approximately normally distributed.
• When the population is normal the sample mean is normally distributed regardless of the sample size.
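As a numerical cross-check of Example $1$ above (the sample of size $50$), here is a short Python sketch (assuming scipy is available) that computes the same probabilities from the normal distribution directly rather than from a printed table; the small differences from the text's answers come from the text's rounding of $z$-values to two decimal places.

```python
# Checking Example 1: mu = 112, sigma = 40, n = 50.
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 112, 40, 50
sd_xbar = sigma / sqrt(n)  # standard deviation of the sample mean, 5.65685...

p2 = norm.cdf(114, loc=mu, scale=sd_xbar) - norm.cdf(110, loc=mu, scale=sd_xbar)
p3 = 1 - norm.cdf(113, loc=mu, scale=sd_xbar)

print(round(sd_xbar, 5))  # 5.65685
print(round(p2, 4))       # ~0.2763 (text: 0.2736, using z rounded to 0.35)
print(round(p3, 4))       # ~0.4298 (text: 0.4286, using z rounded to 0.18)
```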
textbooks/stats/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/06%3A_Sampling_Distributions/6.02%3A_The_Sampling_Distribution_of_the_Sample_Mean.txt
Learning Objectives • To recognize that the sample proportion $\hat{p}$ is a random variable. • To understand the meaning of the formulas for the mean and standard deviation of the sample proportion. • To learn what the sampling distribution of $\hat{p}$ is when the sample size is large. Often sampling is done in order to estimate the proportion of a population that has a specific characteristic, such as the proportion of all items coming off an assembly line that are defective or the proportion of all people entering a retail store who make a purchase before leaving. The population proportion is denoted $p$ and the sample proportion is denoted $\hat{p}$. Thus if in reality $43\%$ of people entering a store make a purchase before leaving, $p = 0.43 \nonumber$ and if in a sample of $200$ people entering the store, $78$ make a purchase, then $\hat{p}=\dfrac{78}{200}=0.39. \nonumber$ The sample proportion is a random variable: it varies from sample to sample in a way that cannot be predicted with certainty. Viewed as a random variable it will be written $\hat{P}$. It has a mean $μ_{\hat{P}}$ and a standard deviation $σ_{\hat{P}}$. Here are formulas for their values. mean and standard deviation of the sample proportion Suppose random samples of size $n$ are drawn from a population in which the proportion with a characteristic of interest is $p$. The mean $μ_{\hat{P}}$ and standard deviation $σ_{\hat{P}}$ of the sample proportion $\hat{P}$ satisfy $μ_{\hat{P}}=p \nonumber$ and $σ_{\hat{P}}= \sqrt{\dfrac{pq}{n}} \nonumber$ where $q=1−p$. The Central Limit Theorem has an analogue for the sample proportion $\hat{p}$. To see how, imagine that every element of the population that has the characteristic of interest is labeled with a $1$, and that every element that does not is labeled with a $0$. This gives a numerical population consisting entirely of zeros and ones. Clearly the proportion of the population with the special characteristic is the proportion of the numerical population that are ones; in symbols, $p=\dfrac{\text{number of 1s}}{N} \nonumber$ But of course the sum of all the zeros and ones is simply the number of ones, so the mean $μ$ of the numerical population is $μ=\dfrac{ \sum x}{N}= \dfrac{\text{number of 1s}}{N} \nonumber$ Thus the population proportion $p$ is the same as the mean $μ$ of the corresponding population of zeros and ones. In the same way the sample proportion $\hat{p}$ is the same as the sample mean $\bar{x}$. Thus the Central Limit Theorem applies to $\hat{p}$. However, the condition that the sample be large is a little more complicated than just being of size at least $30$. The Sampling Distribution of the Sample Proportion For large samples, the sample proportion is approximately normally distributed, with mean $μ_{\hat{P}}=p$ and standard deviation $\sigma _{\hat{P}}=\sqrt{\frac{pq}{n}}$. A sample is large if the interval $\left [ p-3\sigma _{\hat{p}},\, p+3\sigma _{\hat{p}} \right ]$ lies wholly within the interval $[0,1]$. In actual practice $p$ is not known, hence neither is $σ_{\hat{P}}$. In that case in order to check that the sample is sufficiently large we substitute the known quantity $\hat{p}$ for $p$. This means checking that the interval $\left [ \hat{p}-3\sqrt{\frac{\hat{p}(1-\hat{p})}{n}},\, \hat{p}+3\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \right ] \nonumber$ lies wholly within the interval $[0,1]$. This is illustrated in the examples. Figure $1$ shows that when $p = 0.1$, a sample of size $15$ is too small but a sample of size $100$ is acceptable.
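These checks (and the one in Figure $2$ below) amount to a single inequality, sketched here in Python; the function name is ours, introduced only for illustration.

```python
# Large-sample check for the normal approximation to the sample proportion:
# the interval [p - 3*sigma_phat, p + 3*sigma_phat] must lie inside [0, 1].
from math import sqrt

def sample_is_large(p, n):
    sigma_phat = sqrt(p * (1 - p) / n)
    return p - 3 * sigma_phat >= 0 and p + 3 * sigma_phat <= 1

print(sample_is_large(0.1, 15))   # False: too small, as in Figure 1
print(sample_is_large(0.1, 100))  # True, as in Figure 1
print(sample_is_large(0.5, 15))   # True, as in Figure 2
```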
Figure $2$ shows that when $p=0.5$ a sample of size $15$ is acceptable. Example $1$ Suppose that in a population of voters in a certain region $38\%$ are in favor of a particular bond issue. Nine hundred randomly selected voters are asked if they favor the bond issue. 1. Verify that the sample proportion $\hat{p}$ computed from samples of size $900$ meets the condition that its sampling distribution be approximately normal. 2. Find the probability that the sample proportion computed from a sample of size $900$ will be within $5$ percentage points of the true population proportion. Solution 1. The information given is that $p=0.38$, hence $q=1-p=0.62$. First we use the formulas to compute the mean and standard deviation of $\hat{p}$: $\mu _{\hat{p}}=p=0.38\; \text{and}\; \sigma _{\hat{P}}=\sqrt{\frac{pq}{n}}=\sqrt{\frac{(0.38)(0.62)}{900}}=0.01618 \nonumber$ Then $3\sigma _{\hat{P}}=3(0.01618)=0.04854\approx 0.05$ so $\left [ \hat{p} - 3\sqrt{\frac{\hat{p}(1-\hat{p})}{n}},\, \hat{p}+3\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \right ]=[0.38-0.05,0.38+0.05]=[0.33,0.43] \nonumber$ which lies wholly within the interval $[0,1]$, so it is safe to assume that $\hat{p}$ is approximately normally distributed. 2. To be within $5$ percentage points of the true population proportion $0.38$ means to be between $0.38-0.05=0.33$ and $0.38+0.05=0.43$. Thus \begin{align*} P(0.33<\hat{P}<0.43) &= P\left ( \frac{0.33-\mu _{\hat{P}}}{\sigma _{\hat{P}}} <Z< \frac{0.43-\mu _{\hat{P}}}{\sigma _{\hat{P}}} \right )\[4pt] &= P\left ( \frac{0.33-0.38}{0.01618} <Z< \frac{0.43-0.38}{0.01618}\right )\[4pt] &= P(-3.09<Z<3.09)\[4pt] &= P(Z<3.09)-P(Z<-3.09)\[4pt] &= 0.9990-0.0010\[4pt] &= 0.9980 \end{align*} \nonumber Example $2$ An online retailer claims that $90\%$ of all orders are shipped within $12$ hours of being received. A consumer group placed $121$ orders of different sizes and at different times of day; $102$ orders were shipped within $12$ hours. 1. Compute the sample proportion of items shipped within $12$ hours. 2. Confirm that the sample is large enough to assume that the sample proportion is normally distributed. Use $p=0.90$, corresponding to the assumption that the retailer’s claim is valid. 3. Assuming the retailer’s claim is true, find the probability that a sample of size $121$ would produce a sample proportion so low as was observed in this sample. 4. Based on the answer to part (c), draw a conclusion about the retailer’s claim. Solution 1. The sample proportion is the number $x$ of orders that are shipped within $12$ hours divided by the number $n$ of orders in the sample: $\hat{p} =\frac{x}{n}=\frac{102}{121}=0.84\nonumber$ 2. Since $p=0.90$, $q=1-p=0.10$, and $n=121$, $\sigma _{\hat{P}}=\sqrt{\frac{(0.90)(0.10)}{121}}=0.0\overline{27}\nonumber$ hence $\left [ p-3\sigma _{\hat{P}},\, p+3\sigma _{\hat{P}} \right ]=[0.90-0.08,0.90+0.08]=[0.82,0.98]\nonumber$ Because $[0.82,0.98]⊂[0,1]\nonumber$ it is appropriate to use the normal distribution to compute probabilities related to the sample proportion $\hat{P}$. 3. Using the value of $\hat{P}$ from part (a) and the computation in part (b), \begin{align*} P(\hat{P}\leq 0.84) &= P\left ( Z\leq \frac{0.84-\mu _{\hat{P}}}{\sigma _{\hat{P}}} \right )\[4pt] &= P\left ( Z\leq \frac{0.84-0.90}{0.0\overline{27}} \right )\[4pt] &= P(Z\leq -2.20)\[4pt] &= 0.0139 \end{align*} \nonumber 4.
The computation shows that a random sample of size $121$ has only about a $1.4\%$ chance of producing a sample proportion as low as the one that was observed, $\hat{p} =0.84$, when taken from a population in which the actual proportion is $0.90$. This is so unlikely that it is reasonable to conclude that the actual value of $p$ is less than the $90\%$ claimed. Key Takeaway • The sample proportion is a random variable $\hat{P}$. • There are formulas for the mean $μ_{\hat{P}}$ and standard deviation $σ_{\hat{P}}$ of the sample proportion. • When the sample size is large the sample proportion is approximately normally distributed.
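As a cross-check of part (c) of Example $2$ above, the following Python sketch (assuming scipy) carries the unrounded sample proportion through the computation; the text rounds $\hat{p}$ to $0.84$ first, which is why its answer differs slightly.

```python
# Example 2(c): P(P-hat <= 102/121) when p = 0.90 and n = 121.
from math import sqrt
from scipy.stats import norm

p, n = 0.90, 121
phat = 102 / 121                    # ~0.8430 (the text rounds to 0.84)
sigma_phat = sqrt(p * (1 - p) / n)  # 0.02727...

z = (phat - p) / sigma_phat         # ~-2.09 (text: -2.20 after rounding)
print(round(norm.cdf(z), 4))        # ~0.0183 (text: 0.0139)
```

Either way the probability is small, so the conclusion in part (d) is unchanged.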
textbooks/stats/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/06%3A_Sampling_Distributions/6.03%3A_The_Sample_Proportion.txt
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 6.1: The Mean and Standard Deviation of the Sample Mean Basic Q6.1.1 Random samples of size $225$ are drawn from a population with mean $100$ and standard deviation $20$. Find the mean and standard deviation of the sample mean. Q6.1.2 Random samples of size $64$ are drawn from a population with mean $32$ and standard deviation $5$. Find the mean and standard deviation of the sample mean. Q6.1.3 A population has mean $75$ and standard deviation $12$. 1. Random samples of size $121$ are taken. Find the mean and standard deviation of the sample mean. 2. How would the answers to part (a) change if the size of the samples were $400$ instead of $121$? Q6.1.4 A population has mean $5.75$ and standard deviation $1.02$. 1. Random samples of size $81$ are taken. Find the mean and standard deviation of the sample mean. 2. How would the answers to part (a) change if the size of the samples were $25$ instead of $81$? Answers S6.1.1 $\mu _{\bar{X}}=100,\; \sigma _{\bar{X}}=1.33$ S6.1.3 1. $\mu _{\bar{X}}=75,\; \sigma _{\bar{X}}=1.09$ 2. $\mu _{\bar{X}}$ stays the same but $\sigma _{\bar{X}}$ decreases to $0.6$ 6.2: The Sampling Distribution of the Sample Mean Basic 1. A population has mean $128$ and standard deviation $22$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $36$. 2. Find the probability that the mean of a sample of size $36$ will be within $10$ units of the population mean, that is, between $118$ and $138$. 2. A population has mean $1,542$ and standard deviation $246$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $100$. 2. Find the probability that the mean of a sample of size $100$ will be within $100$ units of the population mean, that is, between $1,442$ and $1,642$. 3. A population has mean $73.5$ and standard deviation $2.5$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $30$. 2. Find the probability that the mean of a sample of size $30$ will be less than $72$. 4. A population has mean $48.4$ and standard deviation $6.3$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $64$. 2. Find the probability that the mean of a sample of size $64$ will be less than $46.7$. 5. A normally distributed population has mean $25.6$ and standard deviation $3.3$. 1. Find the probability that a single randomly selected element $X$ of the population exceeds $30$. 2. Find the mean and standard deviation of $\overline{X}$ for samples of size $9$. 3. Find the probability that the mean of a sample of size $9$ drawn from this population exceeds $30$. 6. A normally distributed population has mean $57.7$ and standard deviation $12.1$. 1. Find the probability that a single randomly selected element $X$ of the population is less than $45$. 2. Find the mean and standard deviation of $\overline{X}$ for samples of size $16$. 3. Find the probability that the mean of a sample of size $16$ drawn from this population is less than $45$. 7. A population has mean $557$ and standard deviation $35$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $50$. 2. Find the probability that the mean of a sample of size $50$ will be more than $570$. 8. A population has mean $16$ and standard deviation $1.7$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $80$. 2. 
Find the probability that the mean of a sample of size $80$ will be more than $16.4$. 9. A normally distributed population has mean $1,214$ and standard deviation $122$. 1. Find the probability that a single randomly selected element $X$ of the population is between $1,100$ and $1,300$. 2. Find the mean and standard deviation of $\overline{X}$ for samples of size $25$. 3. Find the probability that the mean of a sample of size $25$ drawn from this population is between $1,100$ and $1,300$. 10. A normally distributed population has mean $57,800$ and standard deviation $750$. 1. Find the probability that a single randomly selected element $X$ of the population is between $57,000$ and $58,000$. 2. Find the mean and standard deviation of $\overline{X}$ for samples of size $100$. 3. Find the probability that the mean of a sample of size $100$ drawn from this population is between $57,000$ and $58,000$. 11. A population has mean $72$ and standard deviation $6$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $45$. 2. Find the probability that the mean of a sample of size $45$ will differ from the population mean $72$ by at least $2$ units, that is, is either less than $70$ or more than $74$. (Hint: One way to solve the problem is to first find the probability of the complementary event.) 12. A population has mean $12$ and standard deviation $1.5$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $90$. 2. Find the probability that the mean of a sample of size $90$ will differ from the population mean $12$ by at least $0.3$ unit, that is, is either less than $11.7$ or more than $12.3$. (Hint: One way to solve the problem is to first find the probability of the complementary event.) Applications 1. Suppose the mean number of days to germination of a variety of seed is $22$, with standard deviation $2.3$ days. Find the probability that the mean germination time of a sample of $160$ seeds will be within $0.5$ day of the population mean. 2. Suppose the mean length of time that a caller is placed on hold when telephoning a customer service center is $23.8$ seconds, with standard deviation $4.6$ seconds. Find the probability that the mean length of time on hold in a sample of $1,200$ calls will be within $0.5$ second of the population mean. 3. Suppose the mean amount of cholesterol in eggs labeled “large” is $186$ milligrams, with standard deviation $7$ milligrams. Find the probability that the mean amount of cholesterol in a sample of $144$ eggs will be within $2$ milligrams of the population mean. 4. Suppose that in one region of the country the mean amount of credit card debt per household in households having credit card debt is \$15,250, with standard deviation \$7,125. Find the probability that the mean amount of credit card debt in a sample of $1,600$ such households will be within \$300 of the population mean. 5. Suppose speeds of vehicles on a particular stretch of roadway are normally distributed with mean $36.6$ mph and standard deviation $1.7$ mph. 1. Find the probability that the speed $X$ of a randomly selected vehicle is between $35$ and $40$ mph. 2. Find the probability that the mean speed $\overline{X}$ of $20$ randomly selected vehicles is between $35$ and $40$ mph. 6. Many sharks enter a state of tonic immobility when inverted.
Suppose that in a particular species of sharks the time a shark remains in a state of tonic immobility when inverted is normally distributed with mean $11.2$ minutes and standard deviation $1.1$ minutes. 1. If a biologist induces a state of tonic immobility in such a shark in order to study it, find the probability that the shark will remain in this state for between $10$ and $13$ minutes. 2. When a biologist wishes to estimate the mean time that such sharks stay immobile by inducing tonic immobility in each of a sample of $12$ sharks, find the probability that the mean time of immobility in the sample will be between $10$ and $13$ minutes. 7. Suppose the mean cost across the country of a $30$-day supply of a generic drug is \$46.58, with standard deviation \$4.84. Find the probability that the mean of a sample of $100$ prices of $30$-day supplies of this drug will be between \$45 and \$50. 8. Suppose the mean length of time between submission of a state tax return requesting a refund and the issuance of the refund is $47$ days, with standard deviation $6$ days. Find the probability that in a sample of $50$ returns requesting a refund, the mean such time will be more than $50$ days. 9. Scores on a common final exam in a large enrollment, multiple-section freshman course are normally distributed with mean $72.7$ and standard deviation $13.1$. 1. Find the probability that the score $X$ on a randomly selected exam paper is between $70$ and $80$. 2. Find the probability that the mean score $\overline{X}$ of $38$ randomly selected exam papers is between $70$ and $80$. 10. Suppose the mean weight of school children’s bookbags is $17.4$ pounds, with standard deviation $2.2$ pounds. Find the probability that the mean weight of a sample of $30$ bookbags will exceed $17$ pounds. 11. Suppose that in a certain region of the country the mean duration of first marriages that end in divorce is $7.8$ years, standard deviation $1.2$ years. Find the probability that in a sample of $75$ divorces, the mean age of the marriages is at most $8$ years. 12. Borachio eats at the same fast food restaurant every day. Suppose the time $X$ between the moment Borachio enters the restaurant and the moment he is served his food is normally distributed with mean $4.2$ minutes and standard deviation $1.3$ minutes. 1. Find the probability that when he enters the restaurant today it will be at least $5$ minutes until he is served. 2. Find the probability that the average time until he is served in eight randomly selected visits to the restaurant will be at least $5$ minutes. Additional Exercises 1. A high-speed packing machine can be set to deliver between $11$ and $13$ ounces of a liquid. For any delivery setting in this range the amount delivered is normally distributed with mean some amount $\mu$ and with standard deviation $0.08$ ounce. To calibrate the machine it is set to deliver a particular amount, many containers are filled, and $25$ containers are randomly selected and the amount they contain is measured. Find the probability that the sample mean will be within $0.05$ ounce of the actual mean amount being delivered to all containers. 2. A tire manufacturer states that a certain type of tire has a mean lifetime of $60,000$ miles. Suppose lifetimes are normally distributed with standard deviation $\sigma =3,500$ miles. 1. Find the probability that if you buy one such tire, it will last only $57,000$ or fewer miles. If you had this experience, is it particularly strong evidence that the tire is not as good as claimed? 2.
A consumer group buys five such tires and tests them. Find the probability that average lifetime of the five tires will be $57,000$ miles or less. If the mean is so low, is that particularly strong evidence that the tire is not as good as claimed? Answers 1. $\mu _{\overline{X}}=128,\; \sigma _{\overline{X}}=3.67$ 2. $0.9936$ 1. 1. $\mu _{\overline{X}}=73.5,\; \sigma _{\overline{X}}=0.456$ 2. $0.0005$ 2. 1. $0.0918$ 2. $\mu _{\overline{X}}=25.6,\; \sigma _{\overline{X}}=1.1$ 3. $0.0000$ 3. 1. $\mu _{\overline{X}}=557,\; \sigma _{\overline{X}}=4.9497$ 2. $0.0043$ 4. 1. $0.5818$ 2. $\mu _{\overline{X}}=1214\; \sigma _{\overline{X}}=24.4$ 3. $0.9998$ 5. 1. $\mu _{\overline{X}}=72\; \sigma _{\overline{X}}=0.8944$ 2. $0.0250$ 6. 7. $0.9940$ 8. 9. $0.9994$ 10. 1. $0.8036$ 2. $1.0000$ 11. 12. $0.9994$ 13. 1. $0.2955$ 2. $0.8977$ 14. 15. $0.9251$ 16. 17. $0.9982$ 6.3: The Sample Proportion Basic 1. The proportion of a population with a characteristic of interest is $p = 0.37$. Find the mean and standard deviation of the sample proportion $\widehat{P}$ obtained from random samples of size $1,600$. 2. The proportion of a population with a characteristic of interest is $p = 0.82$. Find the mean and standard deviation of the sample proportion $\widehat{P}$ obtained from random samples of size $900$. 3. The proportion of a population with a characteristic of interest is $p = 0.76$. Find the mean and standard deviation of the sample proportion $\widehat{P}$ obtained from random samples of size $1,200$. 4. The proportion of a population with a characteristic of interest is $p = 0.37$. Find the mean and standard deviation of the sample proportion $\widehat{P}$ obtained from random samples of size $125$. 5. Random samples of size $225$ are drawn from a population in which the proportion with the characteristic of interest is $0.25$. Decide whether or not the sample size is large enough to assume that the sample proportion $\widehat{P}$ is normally distributed. 6. Random samples of size $1,600$ are drawn from a population in which the proportion with the characteristic of interest is $0.05$. Decide whether or not the sample size is large enough to assume that the sample proportion $\widehat{P}$ is normally distributed. 7. Random samples of size $n$ produced sample proportions $\hat{p}$ as shown. In each case decide whether or not the sample size is large enough to assume that the sample proportion $\widehat{P}$ is normally distributed. 1. $n = 50,\; \hat{p}=0.48$ 2. $n = 50,\; \hat{p}=0.12$ 3. $n = 100,\; \hat{p}=0.12$ 8. Samples of size $n$ produced sample proportions $\hat{p}$ as shown. In each case decide whether or not the sample size is large enough to assume that the sample proportion $\widehat{P}$ is normally distributed. 1. $n = 30,\; \hat{p}=0.72$ 2. $n = 30,\; \hat{p}=0.84$ 3. $n = 75,\; \hat{p}=0.84$ 9. A random sample of size $121$ is taken from a population in which the proportion with the characteristic of interest is $p = 0.47$. Find the indicated probabilities. 1. $P(0.45\leq \widehat{P}\leq 0.50)$ 2. $P(\widehat{P}\geq 0.50)$ 10. A random sample of size $225$ is taken from a population in which the proportion with the characteristic of interest is $p = 0.34$. Find the indicated probabilities. 1. $P(0.25\leq \widehat{P}\leq 0.40)$ 2. $P(\widehat{P}\geq 0.35)$ 11. A random sample of size 900 is taken from a population in which the proportion with the characteristic of interest is p = 0.62. Find the indicated probabilities. 1. $P(0.60\leq \widehat{P}\leq 0.64)$ 2. 
$P(0.57\leq \widehat{P}\leq 0.67)$ 12. A random sample of size 1,100 is taken from a population in which the proportion with the characteristic of interest is p = 0.28. Find the indicated probabilities. 1. $P(0.27\leq \widehat{P}\leq 0.29)$ 2. $P(0.23\leq \widehat{P}\leq 0.33)$ Applications 1. Suppose that $8\%$ of all males suffer some form of color blindness. Find the probability that in a random sample of $250$ men at least $10\%$ will suffer some form of color blindness. First verify that the sample is sufficiently large to use the normal distribution. 2. Suppose that $29\%$ of all residents of a community favor annexation by a nearby municipality. Find the probability that in a random sample of $50$ residents at least $35\%$ will favor annexation. First verify that the sample is sufficiently large to use the normal distribution. 3. Suppose that $2\%$ of all cell phone connections by a certain provider are dropped. Find the probability that in a random sample of $1,500$ calls at most $40$ will be dropped. First verify that the sample is sufficiently large to use the normal distribution. 4. Suppose that in $20\%$ of all traffic accidents involving an injury, driver distraction in some form (for example, changing a radio station or texting) is a factor. Find the probability that in a random sample of $275$ such accidents between $15\%$ and $25\%$ involve driver distraction in some form. First verify that the sample is sufficiently large to use the normal distribution. 5. An airline claims that $72\%$ of all its flights to a certain region arrive on time. In a random sample of $30$ recent arrivals, $19$ were on time. You may assume that the normal distribution applies. 1. Compute the sample proportion. 2. Assuming the airline’s claim is true, find the probability of a sample of size $30$ producing a sample proportion so low as was observed in this sample. 6. A humane society reports that $19\%$ of all pet dogs were adopted from an animal shelter. Assuming the truth of this assertion, find the probability that in a random sample of $80$ pet dogs, between $15\%$ and $20\%$ were adopted from a shelter. You may assume that the normal distribution applies. 7. In one study it was found that $86\%$ of all homes have a functional smoke detector. Suppose this proportion is valid for all homes. Find the probability that in a random sample of $600$ homes, between $80\%$ and $90\%$ will have a functional smoke detector. You may assume that the normal distribution applies. 8. A state insurance commission estimates that $13\%$ of all motorists in its state are uninsured. Suppose this proportion is valid. Find the probability that in a random sample of $50$ motorists, at least $5$ will be uninsured. You may assume that the normal distribution applies. 9. An outside financial auditor has observed that about $4\%$ of all documents he examines contain an error of some sort. Assuming this proportion to be accurate, find the probability that a random sample of $700$ documents will contain at least $30$ with some sort of error. You may assume that the normal distribution applies. 10. Suppose $7\%$ of all households have no home telephone but depend completely on cell phones. Find the probability that in a random sample of $450$ households, between $25$ and $35$ will have no home telephone. You may assume that the normal distribution applies. Additional Exercises 1. 
Some countries allow individual packages of prepackaged goods to weigh less than what is stated on the package, subject to certain conditions, such as the average of all packages being the stated weight or greater. Suppose that one requirement is that at most $4\%$ of all packages marked $500$ grams can weigh less than $490$ grams. Assuming that a product actually meets this requirement, find the probability that in a random sample of $150$ such packages the proportion weighing less than $490$ grams is at least $3\%$. You may assume that the normal distribution applies. 2. An economist wishes to investigate whether people are keeping cars longer now than in the past. He knows that five years ago, $38\%$ of all passenger vehicles in operation were at least ten years old. He commissions a study in which $325$ automobiles are randomly sampled. Of them, $132$ are ten years old or older. 1. Find the sample proportion. 2. Find the probability that, when a sample of size $325$ is drawn from a population in which the true proportion is $0.38$, the sample proportion will be as large as the value you computed in part (a). You may assume that the normal distribution applies. 3. Give an interpretation of the result in part (b). Is there strong evidence that people are keeping their cars longer than was the case five years ago? 3. A state public health department wishes to investigate the effectiveness of a campaign against smoking. Historically $22\%$ of all adults in the state regularly smoked cigars or cigarettes. In a survey commissioned by the public health department, $279$ of $1,500$ randomly selected adults stated that they smoke regularly. 1. Find the sample proportion. 2. Find the probability that, when a sample of size $1,500$ is drawn from a population in which the true proportion is $0.22$, the sample proportion will be no larger than the value you computed in part (a). You may assume that the normal distribution applies. 3. Give an interpretation of the result in part (b). How strong is the evidence that the campaign to reduce smoking has been effective? 4. In an effort to reduce the population of unwanted cats and dogs, a group of veterinarians set up a low-cost spay/neuter clinic. At the inception of the clinic a survey of pet owners indicated that $78\%$ of all pet dogs and cats in the community were spayed or neutered. After the low-cost clinic had been in operation for three years, that figure had risen to $86\%$. 1. What information is missing that you would need to compute the probability that a sample drawn from a population in which the proportion is $78\%$ (corresponding to the assumption that the low-cost clinic had had no effect) is as high as $86\%$? 2. Knowing that the size of the original sample three years ago was $150$ and that the size of the recent sample was $125$, compute the probability mentioned in part (a). You may assume that the normal distribution applies. 3. Give an interpretation of the result in part (b). How strong is the evidence that the presence of the low-cost clinic has increased the proportion of pet dogs and cats that have been spayed or neutered? 5. An ordinary die is “fair” or “balanced” if each face has an equal chance of landing on top when the die is rolled. Thus the proportion of times a three is observed in a large number of tosses is expected to be close to $1/6$ or $0.1\bar{6}$. Suppose a die is rolled $240$ times and shows three on top $36$ times, for a sample proportion of $0.15$. 1. 
Find the probability that a fair die would produce a proportion of $0.15$ or less. You may assume that the normal distribution applies. 2. Give an interpretation of the result in part (b). How strong is the evidence that the die is not fair? 3. Suppose the sample proportion $0.15$ came from rolling the die $2,400$ times instead of only $240$ times. Rework part (a) under these circumstances. 4. Give an interpretation of the result in part (c). How strong is the evidence that the die is not fair? Answers 1. $\mu _{\widehat{P}}=0.37,\; \sigma _{\widehat{P}}=0.012$ 2. 3. $\mu _{\widehat{P}}=0.76,\; \sigma _{\widehat{P}}=0.012$ 4. 5. $p\pm 3\sqrt{\frac{pq}{n}}=0.25\pm 0.087,\; \text{yes}$ 6. 1. $\hat{p}\pm 3\sqrt{\frac{\hat{p}\hat{q}}{n}}=0.48\pm 0.21,\; \text{yes}$ 2. $\hat{p}\pm 3\sqrt{\frac{\hat{p}\hat{q}}{n}}=0.12\pm 0.14,\; \text{no}$ 3. $\hat{p}\pm 3\sqrt{\frac{\hat{p}\hat{q}}{n}}=0.12\pm 0.10,\; \text{yes}$ 7. 1. $0.4154$ 2. $0.2546$ 8. 1. $0.7850$ 2. $0.9980$ 9. 10. $p\pm 3\sqrt{\frac{pq}{n}}=0.08\pm 0.05$ and $[0.03,0.13]\subset [0,1],0.1210$ 11. 12. $p\pm 3\sqrt{\frac{pq}{n}}=0.02\pm 0.01$ and $[0.01,0.03]\subset [0,1],0.9671$ 13. 1. $0.63$ 2. $0.1446$ 14. 15. $0.9977$ 16. 17. $0.3483$ 18. 19. $0.7357$ 20. 1. $0.186$ 2. $0.0007$ 3. In a population in which the true proportion is $22\%$ the chance that a random sample of size $1,500$ would produce a sample proportion of $18.6\%$ or less is only $7/100$ of $1\%$. This is strong evidence that currently a smaller proportion than $22\%$ smoke. 21. 1. $0.2451$ 2. We would expect a sample proportion of $0.15$ or less in about $24.5\%$ of all samples of size $240$, so this is practically no evidence at all that the die is not fair. 3. $0.0139$ 4. We would expect a sample proportion of $0.15$ or less in only about $1.4\%$ of all samples of size $2,400$, so this is strong evidence that the die is not fair. • Anonymous
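For readers checking their work by computer rather than by table, here is a hypothetical helper (ours, not part of the text) for the recurring computation in the Section 6.2 exercises.

```python
# P(a < X-bar < b) for samples of size n from a population with the given
# mean and standard deviation, using the Central Limit Theorem.
from math import sqrt
from scipy.stats import norm

def prob_mean_between(mu, sigma, n, a, b):
    sd_xbar = sigma / sqrt(n)
    return norm.cdf(b, loc=mu, scale=sd_xbar) - norm.cdf(a, loc=mu, scale=sd_xbar)

# Exercise 1 of Section 6.2: mu = 128, sigma = 22, n = 36, between 118 and 138.
print(round(prob_mean_between(128, 22, 36, 118, 138), 4))  # 0.9936, matching the answer above
```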
textbooks/stats/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/06%3A_Sampling_Distributions/6.E%3A_Sampling_Distributions_%28Exercises%29.txt
If we wish to estimate the mean $\mu$ of a population for which a census is impractical, say the average height of all 18-year-old men in the country, a reasonable strategy is to take a sample, compute its mean $\bar{x}$, and estimate the unknown number $\mu$ by the known number $\bar{x}$. For example, if the average height of 100 randomly selected men aged 18 is 70.6 inches, then we would say that the average height of all 18-year-old men is (at least approximately) 70.6 inches. Estimating a population parameter by a single number like this is called point estimation; in the case at hand the statistic $\bar{x}$ is a point estimate of the parameter $\mu$. The terminology arises because a single number corresponds to a single point on the number line. A problem with a point estimate is that it gives no indication of how reliable the estimate is. In contrast, in this chapter we learn about interval estimation. In brief, in the case of estimating a population mean $\mu$ we use a formula to compute from the data a number $E$, called the margin of error of the estimate, and form the interval $[\bar{x}-E,\bar{x}+E]$. We do this in such a way that a certain proportion, say 95%, of all the intervals constructed from sample data by means of this formula contain the unknown parameter $\mu$. Such an interval is called a 95% confidence interval for $\mu$. Continuing with the example of the average height of 18-year-old men, suppose that the sample of 100 men mentioned above for which $\bar{x}=70.6$ inches also had sample standard deviation $s = 1.7$ inches. It then turns out that $E = 0.33$ and we would state that we are 95% confident that the average height of all 18-year-old men is in the interval formed by $70.6\pm 0.33$ inches, that is, the average is between 70.27 and 70.93 inches. If the sample statistics had come from a smaller sample, say a sample of 50 men, the lower reliability would show up in the 95% confidence interval being longer, hence less precise in its estimate. In this example the 95% confidence interval for the same sample statistics but with $n = 50$ is $70.6\pm 0.47$ inches, or from 70.13 to 71.07 inches. • 7.1: Large Sample Estimation of a Population Mean A confidence interval for a population mean is an estimate of the population mean together with an indication of reliability. There are different formulas for a confidence interval based on the sample size and whether or not the population standard deviation is known. The confidence intervals are constructed entirely from the sample data (or sample data and the population standard deviation, when it is known). • 7.2: Small Sample Estimation of a Population Mean In selecting the correct formula for construction of a confidence interval for a population mean ask two questions: is the population standard deviation σ known or unknown, and is the sample large or small? We can construct confidence intervals with small samples only if the population is normal. • 7.3: Large Sample Estimation of a Population Proportion We have a single formula for a confidence interval for a population proportion, which is valid when the sample is large. The condition that a sample be large is not that its size n be at least 30, but that the density function fit inside the interval [0,1]. • 7.4: Sample Size Considerations Sampling is typically done with a set of clear objectives in mind. Since sampling costs time, effort, and money, it would be useful to be able to estimate the smallest size sample that is likely to meet these criteria.
• 7.E: Estimation (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 07: Estimation Learning Objectives • To become familiar with the concept of an interval estimate of the population mean. • To understand how to apply formulas for a confidence interval for a population mean. The Central Limit Theorem says that, for large samples (samples of size $n \ge 30$), when viewed as a random variable the sample mean $\overline{X}$ is approximately normally distributed with mean $\mu_{ \overline{X}}=\mu$ and standard deviation $\sigma_{\overline{X}}=\frac{\sigma}{\sqrt{n}}$. The Empirical Rule says that we must go about two standard deviations from the mean to capture $95\%$ of the values of $\overline{X}$ generated by sample after sample. A more precise distance based on the normality of $\overline{X}$ is $1.960$ standard deviations, which is $E=\frac{1.960 \sigma}{\sqrt{n}}$. The key idea in the construction of the $95\%$ confidence interval is this, as illustrated in Figure $1$: because in sample after sample $95\%$ of the values of $\overline{X}$ lie in the interval $[\mu -E,\mu +E]$, if we adjoin to each side of the point estimate $\bar{x}$ a “wing” of length $E$, $95\%$ of the intervals formed by the winged dots contain $\mu$. The $95\%$ confidence interval is thus $\bar{x}\pm 1.960\frac{\sigma }{\sqrt{n}}$. For a different level of confidence, say $90\%$ or $99\%$, the number $1.960$ will change, but the idea is the same. Figure $2$ shows the intervals generated by a computer simulation of drawing $40$ samples from a normally distributed population and constructing the $95\%$ confidence interval for each one. We expect that about $(0.05)(40)=2$ of the intervals so constructed would fail to contain the population mean $\mu$, and in this simulation two of the intervals, shown in red, do. It is standard practice to identify the level of confidence in terms of the area $α$ in the two tails of the distribution of $\overline{X}$ when the middle part specified by the level of confidence is taken out. This is shown in Figure $3$, drawn for the general situation, and in Figure $4$, drawn for $95\%$ confidence. Remember from Section 5.4 that the $z$-value that cuts off a right tail of area $c$ is denoted $z_c$. Thus the number $1.960$ in the example is $z_{.025}$, which is $z_{\frac{\alpha }{2}}$ for $\alpha =1-0.95=0.05$. For $95\%$ confidence the area in each tail is $\alpha /2=0.025$. The level of confidence can be any number between $0$ and $100\%$, but the most common values are probably $90\%$ $(\alpha =0.10)$, $95\%$ $(\alpha =0.05)$, and $99\%$ $(\alpha =0.01)$. Thus in general for a $100(1-\alpha )\%$ confidence interval, $E=z_{\alpha /2}(\sigma /\sqrt{n})$, so the formula for the confidence interval is $\bar{x}\pm z_{\alpha /2}(\sigma /\sqrt{n})$. While sometimes the population standard deviation $\sigma$ is known, typically it is not. If not, for $n\geq 30$ it is generally safe to approximate $\sigma$ by the sample standard deviation $s$. Large Sample $100(1-\alpha )\%$ Confidence Interval for a Population Mean • If $\sigma$ is known: $\bar{x}\pm z_{\alpha /2}\left ( \dfrac{\sigma }{\sqrt{n}} \right ) \nonumber$ • If $\sigma$ is unknown: $\bar{x}\pm z_{\alpha /2}\left ( \dfrac{s}{\sqrt{n}} \right ) \nonumber$ A sample is considered large when $n\geq 30$.
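The boxed formulas translate directly into code. Below is a minimal Python sketch (scipy assumed; the function name is ours) of the large-sample interval; it reproduces, for instance, the interval of Example $3$ below.

```python
# Large-sample confidence interval x-bar ± z_{alpha/2} * s / sqrt(n).
from math import sqrt
from scipy.stats import norm

def large_sample_ci(xbar, s, n, confidence):
    alpha = 1 - confidence
    z = norm.ppf(1 - alpha / 2)  # z_{alpha/2}
    E = z * s / sqrt(n)          # margin of error
    return xbar - E, xbar + E

# Example 3 below: n = 49, x-bar = 35, s = 14, 98% confidence.
lo, hi = large_sample_ci(35, 14, 49, 0.98)
print(round(lo, 1), round(hi, 1))  # 30.3 39.7
```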
As mentioned earlier, the number $E=z_{\alpha /2}\left ( \frac{\sigma }{\sqrt{n}} \right ) \nonumber$ or $E=z_{\alpha /2}\left ( \frac{s}{\sqrt{n}} \right ) \nonumber$ is called the margin of error of the estimate. Example $1$ Find the number $z_{\alpha /2}$ needed in construction of a confidence interval: 1. when the level of confidence is $90\%$; 2. when the level of confidence is $99\%$. using the tables in Figure $5$ below. Solution: 1. For confidence level $90\%$, $\alpha =1-0.90=0.10$, so $z_{\alpha /2}=z_{0.05}$. Since the area under the standard normal curve to the right of $z_{0.05}$ is $0.05$, the area to the left of $z_{0.05}$ is $0.95$. We search for the area $0.9500$ in Figure $5$. The closest entries in the table are $0.9495$ and $0.9505$, corresponding to $z$-values $1.64$ and $1.65$. Since $0.95$ is halfway between $0.9495$ and $0.9505$ we use the average $1.645$ of the $z$-values for $z_{0.05}$. 2. For confidence level $99\%$, $\alpha =1-0.99=0.01$, so $z_{\alpha /2}=z_{0.005}$. Since the area under the standard normal curve to the right of $z_{0.005}$ is $0.005$, the area to the left of $z_{0.005}$ is $0.9950$. We search for the area $0.9950$ in Figure $5$. The closest entries in the table are $0.9949$ and $0.9951$, corresponding to $z$-values $2.57$ and $2.58$. Since $0.995$ is halfway between $0.9949$ and $0.9951$ we use the average $2.575$ of the $z$-values for $z_{0.005}$. Example $2$ Use Figure $6$ below to find the number $z_{\alpha /2}$ needed in construction of a confidence interval: 1. when the level of confidence is $90\%$; 2. when the level of confidence is $99\%$. Solution: 1. In the next section we will learn about a continuous random variable that has a probability distribution called the Student $t$-distribution. Figure $6$ gives the value $t_c$ that cuts off a right tail of area $c$ for different values of $c$. The last line of that table, the one whose heading is the symbol $\infty$ for infinity and $[z]$, gives the corresponding $z$-value $z_c$ that cuts off a right tail of the same area $c$. In particular, $z_{0.05}$ is the number in that row and in the column with the heading $t_{0.05}$. We read off directly that $z_{0.05}=1.645$. 2. In Figure $6$ $z_{0.005}$ is the number in the last row and in the column headed $t_{0.005}$, namely $2.576$. Figure $6$ can be used to find $z_c$ only for those values of $c$ for which there is a column with the heading $t_c$ appearing in the table; otherwise we must use Figure $5$ in reverse. But when it can be done it is both faster and more accurate to use the last line of Figure $6$ to find $z_c$ than it is to do so using Figure $5$ in reverse. Example $3$ A sample of size $49$ has sample mean $35$ and sample standard deviation $14$. Construct a $98\%$ confidence interval for the population mean using this information. Interpret its meaning. Solution: For confidence level $98\%$, $\alpha =1-0.98=0.02$, so $z_{\alpha /2}=z_{0.01}$. From Figure $6$ we read directly that $z_{0.01}=2.326$.Thus $\bar{x}\pm z_{\alpha /2}\frac{s}{\sqrt{n}}=35\pm 2.326\left ( \frac{14}{\sqrt{49}} \right )=35\pm 4.652\approx 35\pm 4.7 \nonumber$ We are $98\%$ confident that the population mean $\mu$ lies in the interval $[30.3,39.7]$, in the sense that in repeated sampling $98\%$ of all intervals constructed from the sample data in this manner will contain $\mu$. Example $4$ A random sample of $120$ students from a large university yields mean GPA $2.71$ with sample standard deviation $0.51$. 
Construct a $90\%$ confidence interval for the mean GPA of all students at the university. Solution: For confidence level $90\%$, $\alpha =1-0.90=0.10$, so $z_{\alpha /2}=z_{0.05}$. From Figure $6$ we read directly that $z_{0.05}=1.645$. Since $n=120$, $\bar{x}=2.71$, and $s=0.51$, $\bar{x}\pm z_{\alpha /2}\frac{s}{\sqrt{n}}=2.71\pm 1.645\left ( \frac{0.51}{\sqrt{120}} \right )=2.71\pm 0.0766 \nonumber$ One may be $90\%$ confident that the true average GPA of all students at the university is contained in the interval $(2.71-0.08,2.71+0.08)=(2.63,2.79)$. Key Takeaway • A confidence interval for a population mean is an estimate of the population mean together with an indication of reliability. • There are different formulas for a confidence interval based on the sample size and whether or not the population standard deviation is known. • The confidence intervals are constructed entirely from the sample data (or sample data and the population standard deviation, when it is known).
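Examples $1$ and $2$ find $z_c$ from printed tables; the same values can be cross-checked by computer. A short sketch (scipy assumed):

```python
# z_c cuts off a right tail of area c under the standard normal curve,
# so z_c = norm.ppf(1 - c).
from scipy.stats import norm

for c in (0.05, 0.025, 0.01, 0.005):
    print(f"z_{c} = {norm.ppf(1 - c):.3f}")
# z_0.05 = 1.645, z_0.025 = 1.960, z_0.01 = 2.326, z_0.005 = 2.576
```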
textbooks/stats/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/07%3A_Estimation/7.01%3A_Large_Sample_Estimation_of_a_Population_Mean.txt
Learning Objectives 1. To become familiar with Student’s $t$-distribution. 2. To understand how to apply additional formulas for a confidence interval for a population mean. The confidence interval formulas in the previous section are based on the Central Limit Theorem, the statement that for large samples $\overline{X}$ is normally distributed with mean $\mu$ and standard deviation $\sigma /\sqrt{n}$. When the population mean $\mu$ is estimated with a small sample ($n<30$), the Central Limit Theorem does not apply. In order to proceed we assume that the numerical population from which the sample is taken has a normal distribution to begin with. If this condition is satisfied then when the population standard deviation $\sigma$ is known the old formula $\bar{x}\pm z_{\alpha /2}(\sigma /\sqrt{n})$ can still be used to construct a $100(1-\alpha )\%$ confidence interval for $\mu$. If the population standard deviation is unknown and the sample size $n$ is small then when we substitute the sample standard deviation $s$ for $\sigma$ the normal approximation is no longer valid. The solution is to use a different distribution, called Student’s $t$-distribution with $n-1$ degrees of freedom. Student’s $t$-distribution is very much like the standard normal distribution in that it is centered at $0$ and has the same qualitative bell shape, but it has heavier tails than the standard normal distribution does, as indicated by Figure $1$, in which the curve (in brown) that meets the dashed vertical line at the lowest point is the $t$-distribution with two degrees of freedom, the next curve (in blue) is the $t$-distribution with five degrees of freedom, and the thin curve (in red) is the standard normal distribution. As also indicated by the figure, as the sample size $n$ increases, Student’s $t$-distribution ever more closely resembles the standard normal distribution. Although there is a different $t$-distribution for every value of $n$, once the sample size is $30$ or more it is typically acceptable to use the standard normal distribution instead, as we will always do in this text. Just as the symbol $z_c$ stands for the value that cuts off a right tail of area $c$ in the standard normal distribution, so the symbol $t_c$ stands for the value that cuts off a right tail of area $c$ in Student’s $t$-distribution. This gives us the following confidence interval formulas. Small Sample $100(1−α)\%$ Confidence Interval for a Population Mean If $σ$ is known: $\overline{x} ± z_{α/2} \left( \dfrac{σ}{\sqrt{n}}\right) \nonumber$ If $σ$ is unknown: $\overline{x} ± t_{α/2} \left( \dfrac{s}{\sqrt{n}}\right) \label{tdist}$ with the degrees of freedom $df=n−1$. The population must be normally distributed and a sample is considered small when $n < 30$. To use the new formula we use the line in Figure 7.1.6 that corresponds to the relevant sample size. Example $1$ A sample of size $15$ drawn from a normally distributed population has sample mean $35$ and sample standard deviation $14$. Construct a $95\%$ confidence interval for the population mean, and interpret its meaning. Solution Since the population is normally distributed, the sample is small, and the population standard deviation is unknown, the formula that applies is Equation \ref{tdist}. Confidence level $95\%$ means that $α=1−0.95=0.05 \nonumber$ so $α/2=0.025$. Since the sample size is $n = 15$, there are $n−1=14$ degrees of freedom. By Figure 7.1.6 $t_{0.025}=2.145$.
Thus \begin{align} \overline{x} ± t_{α/2} \left( \dfrac{s}{\sqrt{n}}\right) &=35 ± 2.145 \left( \dfrac{14}{\sqrt{15}} \right) \ &=35 ±7.8 \end{align} \nonumber One may be $95\%$ confident that the true value of $μ$ is contained in the interval $(35−7.8, 35+7.8) = (27.2,42.8). \nonumber$ Example $2$ A random sample of $12$ students from a large university yields mean GPA $2.71$ with sample standard deviation $0.51$. Construct a $90\%$ confidence interval for the mean GPA of all students at the university. Assume that the numerical population of GPAs from which the sample is taken has a normal distribution. Solution Since the population is normally distributed, the sample is small, and the population standard deviation is unknown, the formula that applies is Equation \ref{tdist}. Confidence level $90\%$ means that $α=1−0.90=0.10 \nonumber$ so $α/2=0.05$. Since the sample size is $n = 12$, there are $n−1=11$ degrees of freedom. By Figure 7.1.6 $t_{0.05}=1.796$. Thus \begin{align} \overline{x} ± t_{α/2} \left( \dfrac{s}{\sqrt{n}}\right) &=2.71 ± 1.796 \left( \dfrac{0.51}{\sqrt{12}} \right) \ &=2.71 ±0.26 \end{align} \nonumber One may be $90\%$ confident that the true average GPA of all students at the university is contained in the interval $(2.71−0.26,2.71+0.26)=(2.45,2.97). \nonumber$ Compare "Example 4" in Section 7.1 with Example 2 here. The summary statistics in the two samples are the same, but the $90\%$ confidence interval for the average GPA of all students at the university in "Example 4" in Section 7.1, $(2.63,2.79)$, is shorter than the $90\%$ confidence interval, $(2.45,2.97)$, in Example 2 here. This is partly because in "Example 4" in Section 7.1 the sample size is larger; there is more information pertaining to the true value of $\mu$ in the large data set than in the small one. Key Takeaway • In selecting the correct formula for construction of a confidence interval for a population mean ask two questions: is the population standard deviation $\sigma$ known or unknown, and is the sample large or small? • We can construct confidence intervals with small samples only if the population is normal. 7.03: Large Sample Estimation of a Population Proportion Learning Objectives • To understand how to apply the formula for a confidence interval for a population proportion. Since from Section 6.3 we know the mean, standard deviation, and sampling distribution of the sample proportion $\hat{p}$, the ideas of the previous two sections can be applied to produce a confidence interval for a population proportion. Here is the formula. Large Sample $100(1−\alpha)\%$ Confidence Interval for a Population Proportion $\hat{p}\pm z_{\alpha /2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \nonumber$ A sample is large if the interval $[p-3\sigma_{\hat{p}},p+3\sigma _{\hat{p}}]$ lies wholly within the interval $[0,1]$. In actual practice the value of $p$ is not known, hence neither is $\sigma_{\hat{p}}$. In that case we substitute the known quantity $\hat{p}$ for $p$ in making the check; this means checking that the interval $\left [ \hat{p}-3\sqrt{\frac{\hat{p}(1-\hat{p})}{n}},\: \hat{p}+3\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\right ] \nonumber$ lies wholly within the interval $[0,1]$. Example $1$ To estimate the proportion of students at a large college who are female, a random sample of $120$ students is selected. There are $69$ female students in the sample. Construct a $90\%$ confidence interval for the proportion of all students at the college who are female.
Solution The proportion of students in the sample who are female is $\hat{p} =69/120=0.575 \nonumber$ Confidence level $90\%$ means that $\alpha =1-0.90=0.10$ so $\alpha /2=0.05$. From the last line of Figure 7.1.6 we obtain $z_{0.05}=1.645$. Thus $\hat{p}\pm z_{\alpha /2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}=0.575\pm 1.645\sqrt{\frac{(0.575)(0.425)}{120}}=0.575\pm 0.074 \nonumber$ One may be $90\%$ confident that the true proportion of all students at the college who are female is contained in the interval $(0.575-0.074,0.575+0.074)=(0.501,0.649)$. Summary • We have a single formula for a confidence interval for a population proportion, which is valid when the sample is large. • The condition that a sample be large is not that its size $n$ be at least $30$, but that the density function fit inside the interval $[0,1]$.
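A quick computational cross-check of Example $1$ (a sketch, scipy assumed):

```python
# 90% confidence interval for the proportion of female students.
from math import sqrt
from scipy.stats import norm

phat, n = 69 / 120, 120
z = norm.ppf(0.95)                   # z_{0.05} = 1.645
E = z * sqrt(phat * (1 - phat) / n)  # margin of error, ~0.074
print(round(phat - E, 3), round(phat + E, 3))  # 0.501 0.649
```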
7.04: Sample Size Considerations
Learning Objectives • To learn how to apply formulas for estimating the minimum sample size needed to construct a confidence interval for a population mean or proportion that meets given criteria. Sampling is typically done with a set of clear objectives in mind. For example, an economist might wish to estimate the mean yearly income of workers in a particular industry at $90\%$ confidence and to within $\500$. Since sampling costs time, effort, and money, it would be useful to be able to estimate the smallest sample size that is likely to meet these criteria. Estimating $μ$ The confidence interval formulas for estimating a population mean $\mu$ have the form $\overline{x} \pm E$. When the population standard deviation $σ$ is known, $E=\dfrac{z_{\alpha/2}σ}{\sqrt{n}} \nonumber$ The number $z_{\alpha/2}$ is determined by the desired level of confidence. To say that we wish to estimate the mean to within a certain number of units means that we want the margin of error $E$ to be no larger than that number. Thus we obtain the minimum sample size needed by solving the displayed equation for $n$: from $E=z_{\alpha/2}σ/\sqrt{n}$ we get $\sqrt{n}=z_{\alpha/2}σ/E$, and squaring both sides gives the formula below. Minimum Sample Size for Estimating a Population Mean The estimated minimum sample size $n$ needed to estimate a population mean $μ$ to within $E$ units at $100(1−\alpha)\%$ confidence is $n=\dfrac{(z_{\alpha/2})^2σ^2}{E^2} \, \text{(rounded up)} \label{estimate}$ To apply Equation \ref{estimate}, we must have prior knowledge of the population in order to have an estimate of its standard deviation $σ$. In all the examples and exercises the population standard deviation will be given. Example $1$ Find the minimum sample size necessary to construct a $99\%$ confidence interval for $μ$ with a margin of error $E = 0.2$. Assume that the population standard deviation is $σ = 1.3$. Solution Confidence level $99\%$ means that $α=1−0.99=0.01$ so $α/2=0.005$. From the last line of Figure 7.1.6 we obtain $z_{0.005}=2.576$. Thus $n=\dfrac{(z_{\alpha/2})^2σ^2}{E^2} = \dfrac{(2.576)^2(1.3)^2}{(0.2)^2}=280.361536 \nonumber$ which we round up to $281$, since it is impossible to take a fractional observation. Example $2$ An economist wishes to estimate, with a $95\%$ confidence interval, the yearly income of welders with at least five years' experience to within $\1,000$. He estimates that the range of incomes is no more than $\24,000$, so using the Empirical Rule he estimates the population standard deviation to be about one-sixth as much, or about $\4,000$. Find the estimated minimum sample size required. Solution Confidence level $95\%$ means that $α=1−0.95=0.05$ so $α/2=0.025$. From the last line of Figure 7.1.6 we obtain $z_{0.025}=1.960$. To say that the estimate is to be “to within $\1,000$” means that $E = 1000$. Thus $n=\dfrac{(z_{\alpha/2})^2σ^2}{E^2} = \dfrac{(1.960)^2(4000)^2}{(1000)^2}=61.4656 \nonumber$ which we round up to $62$. Estimating $p$ The confidence interval formula for estimating a population proportion $p$ is $\hat{p} ±E$, where $E=z_{\alpha/2} \sqrt{\dfrac{\hat{p} (1− \hat{p} )}{n}} \nonumber$ The number $z_{α/2}$ is determined by the desired level of confidence. To say that we wish to estimate the population proportion to within a certain number of percentage points means that we want the margin of error $E$ to be no larger than that number (expressed as a proportion). Thus we obtain the minimum sample size needed by solving the displayed equation for $n$.
Minimum Sample Size for Estimating a Population Proportion The estimated minimum sample size $n$ needed to estimate a population proportion $p$ to within $E$ at $100(1−\alpha)\%$ confidence is $n=\dfrac{(z_{\alpha/2})^2 \hat{p} (1− \hat{p} )}{E^2} \, \text{(rounded up)} \nonumber$ There is a dilemma here: the formula for estimating how large a sample to take contains the number $\hat{p}$, which we know only after we have taken the sample. There are two ways out of this dilemma. Typically the researcher will have some idea as to the value of the population proportion $p$, hence of what the sample proportion $\hat{p}$ is likely to be. For example, if last month $37\%$ of all voters thought that state taxes are too high, then it is likely that the proportion with that opinion this month will not be dramatically different, and we would use the value $0.37$ for $\hat{p}$ in the formula. The second approach to resolving the dilemma is simply to replace $\hat{p}$ in the formula by $0.5$. This is because if $\hat{p}$ is large then $1− \hat{p}$ is small, and vice versa, which limits their product to a maximum value of $0.25$, which occurs when $\hat{p} =0.5$. This is called the most conservative estimate, since it gives the largest possible estimate of $n$. Example $3$ Find the necessary minimum sample size to construct a $98\%$ confidence interval for $p$ with a margin of error $E=0.05$, 1. assuming that no prior knowledge about $p$ is available; and 2. assuming that prior studies suggest that $p$ is about $0.1$. Solution Confidence level $98\%$ means that $\alpha =1-0.98=0.02$ so $\alpha /2=0.01$. From the last line of Figure 7.1.6 we obtain $z_{0.01}=2.326$. 1. Since there is no prior knowledge of $p$ we make the most conservative estimate that $\hat{p} =0.5$. Then $n=\dfrac{(z_{\alpha/2})^2 \hat{p} (1− \hat{p} )}{E^2}= \dfrac{(2.326)^2(0.5)(1−0.5)}{0.05^2}=541.0276 \nonumber$ which we round up to $542$. 2. Since $p\approx 0.1$ we estimate $\hat{p}$ by $0.1$, and obtain $n=\dfrac{(z_{\alpha/2})^2 \hat{p} (1− \hat{p} )}{E^2}=\dfrac{(2.326)^2(0.1)(1−0.1)}{0.05^2}=194.769936 \nonumber$ which we round up to $195$. Example $4$ A dermatologist wishes to estimate the proportion of young adults who apply sunscreen regularly before going out in the sun in the summer. Find the minimum sample size required to estimate the proportion to within three percentage points, at $90\%$ confidence. Solution Confidence level $90\%$ means that $\alpha=1−0.90=0.10$ so $α/2=0.05$. From the last line of Figure 7.1.6 we obtain $z_{0.05}=1.645$. Since there is no prior knowledge of $p$ we make the most conservative estimate that $\hat{p} =0.5$. To estimate “to within three percentage points” means that $E = 0.03$. Then $n=\dfrac{(z_{\alpha/2})^2 \hat{p} (1− \hat{p} )}{E^2} = \dfrac{(1.645)^2(0.5)(1−0.5)}{0.03^2}=751.6736111 \nonumber$ which we round up to $752$. Key Takeaway • If the population standard deviation $σ$ is known or can be estimated, then the minimum sample size needed to obtain a confidence interval for the population mean with a given maximum error of the estimate and a given level of confidence can be estimated. • The minimum sample size needed to obtain a confidence interval for a population proportion with a given maximum error of the estimate and a given level of confidence can always be estimated. If there is prior knowledge of the population proportion $p$ then the estimate can be sharpened.
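The two sample-size formulas of this section are one-liners in code. Below is a minimal sketch in Python (the function names are ours, not the text's) that checks the arithmetic of Examples 1 through 4; scipy supplies the $z_{\alpha/2}$ values that the text reads from Figure 7.1.6.

```python
# Sketch: the two minimum-sample-size formulas of this section,
# checked against Examples 1-4 above.
import math
from scipy import stats

def n_for_mean(sigma, E, conf):
    """Minimum n to estimate a mean to within E at the given confidence."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)    # z_{alpha/2}
    return math.ceil((z * sigma / E) ** 2)    # round up

def n_for_proportion(E, conf, p_hat=0.5):
    """Minimum n to estimate a proportion to within E; p_hat = 0.5 is
    the most conservative choice when nothing is known about p."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    return math.ceil(z ** 2 * p_hat * (1 - p_hat) / E ** 2)

print(n_for_mean(1.3, 0.2, 0.99))         # 281 (Example 1)
print(n_for_mean(4000, 1000, 0.95))       # 62  (Example 2)
print(n_for_proportion(0.05, 0.98))       # 542 (Example 3, part 1)
print(n_for_proportion(0.05, 0.98, 0.1))  # 195 (Example 3, part 2)
print(n_for_proportion(0.03, 0.90))       # 752 (Example 4)
```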
7.E: Estimation (Exercises)
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 7.1: Large Sample Estimation of a Population Mean Basic 1. A random sample is drawn from a population of known standard deviation $11.3$. Construct a $90\%$ confidence interval for the population mean based on the information given (not all of the information given need be used). 1. $n = 36,\; \bar{x}=105.2,\; s = 11.2$ 2. $n = 100,\; \bar{x}=105.2,\; s = 11.2$ 2. A random sample is drawn from a population of known standard deviation $22.1$. Construct a $95\%$ confidence interval for the population mean based on the information given (not all of the information given need be used). 1. $n =121 ,\; \bar{x}=82.4,\; s = 21.9$ 2. $n =81 ,\; \bar{x}=82.4,\; s = 21.9$ 3. A random sample is drawn from a population of unknown standard deviation. Construct a $99\%$ confidence interval for the population mean based on the information given. 1. $n =49 ,\; \bar{x}=17.1,\; s = 2.1$ 2. $n =169 ,\; \bar{x}=17.1,\; s = 2.1$ 4. A random sample is drawn from a population of unknown standard deviation. Construct a $98\%$ confidence interval for the population mean based on the information given. 1. $n =225 ,\; \bar{x}=92.0,\; s = 8.4$ 2. $n =64 ,\; \bar{x}=92.0,\; s = 8.4$ 5. A random sample of size $144$ is drawn from a population whose distribution, mean, and standard deviation are all unknown. The summary statistics are $\bar{x}=58.2$ and $s = 2.6$. 1. Construct an $80\%$ confidence interval for the population mean $\mu$. 2. Construct a $90\%$ confidence interval for the population mean $\mu$. 3. Comment on why one interval is longer than the other. 6. A random sample of size $256$ is drawn from a population whose distribution, mean, and standard deviation are all unknown. The summary statistics are $\bar{x}=1011$ and $s = 34$. 1. Construct a $90\%$ confidence interval for the population mean $\mu$. 2. Construct a $99\%$ confidence interval for the population mean $\mu$. 3. Comment on why one interval is longer than the other. Applications 1. A government agency was charged by the legislature with estimating the length of time it takes citizens to fill out various forms. Two hundred randomly selected adults were timed as they filled out a particular form. The times required had mean $12.8$ minutes with standard deviation $1.7$ minutes. Construct a $90\%$ confidence interval for the mean time taken for all adults to fill out this form. 2. Four hundred randomly selected working adults in a certain state, including those who worked at home, were asked the distance from their home to their workplace. The average distance was $8.84$ miles with standard deviation $2.70$ miles. Construct a $99\%$ confidence interval for the mean distance from home to work for all residents of this state. 3. On every passenger vehicle that it tests an automotive magazine measures, at true speed $55$ mph, the difference between the true speed of the vehicle and the speed indicated by the speedometer. For $36$ vehicles tested the mean difference was $-1.2$ mph with standard deviation $0.2$ mph. Construct a $90\%$ confidence interval for the mean difference between true speed and indicated speed for all vehicles. 4. A corporation monitors time spent by office workers browsing the web on their computers instead of working. In a sample of computer records of $50$ workers, the average amount of time spent browsing in an eight-hour work day was $27.8$ minutes with standard deviation $8.2$ minutes. 
Construct a $99.5\%$ confidence interval for the mean time spent by all office workers in browsing the web in an eight-hour day. 5. A sample of $250$ workers aged $16$ and older produced an average length of time with the current employer (“job tenure”) of $4.4$ years with standard deviation $3.8$ years. Construct a $99.9\%$ confidence interval for the mean job tenure of all workers aged $16$ or older. 6. The amount of a particular biochemical substance related to bone breakdown was measured in $30$ healthy women. The sample mean and standard deviation were $3.3$ nanograms per milliliter (ng/mL) and $1.4$ ng/mL. Construct an $80\%$ confidence interval for the mean level of this substance in all healthy women. 7. A corporation that owns apartment complexes wishes to estimate the average length of time residents remain in the same apartment before moving out. A sample of $150$ rental contracts gave a mean length of occupancy of $3.7$ years with standard deviation $1.2$ years. Construct a $95\%$ confidence interval for the mean length of occupancy of apartments owned by this corporation. 8. The designer of a garbage truck that lifts roll-out containers must estimate the mean weight the truck will lift at each collection point. A random sample of $325$ containers of garbage on current collection routes yielded $\bar{x}=75.3\ \text{lb},\; s = 12.8\ \text{lb}$. Construct a $99.8\%$ confidence interval for the mean weight the trucks must lift each time. 9. In order to estimate the mean amount of damage sustained by vehicles when a deer is struck, an insurance company examined the records of $50$ such occurrences, and obtained a sample mean of $\2,785$ with sample standard deviation $\221$. Construct a $95\%$ confidence interval for the mean amount of damage in all such accidents. 10. In order to estimate the mean FICO credit score of its members, a credit union samples the scores of $95$ members, and obtains a sample mean of $738.2$ with sample standard deviation $64.2$. Construct a $99\%$ confidence interval for the mean FICO score of all of its members. Additional Exercises 1. For all settings a packing machine delivers a precise amount of liquid; the amount dispensed always has standard deviation $0.07$ ounce. To calibrate the machine its setting is fixed and it is operated $50$ times. The mean amount delivered is $6.02$ ounces with sample standard deviation $0.04$ ounce. Construct a $99.5\%$ confidence interval for the mean amount delivered at this setting. Hint: Not all the information provided is needed. 2. A power wrench used on an assembly line applies a precise, preset amount of torque; the torque applied has standard deviation $0.73$ foot-pound at every torque setting. To check that the wrench is operating within specifications it is used to tighten $100$ fasteners. The mean torque applied is $36.95$ foot-pounds with sample standard deviation $0.62$ foot-pound. Construct a $99.9\%$ confidence interval for the mean amount of torque applied by the wrench at this setting. Hint: Not all the information provided is needed. 3. The number of trips to a grocery store per week was recorded for a randomly selected collection of households, with the results shown in the table. $\begin{matrix} 2 & 2 & 2 & 1 & 4 & 2 & 3 & 2 & 5 & 4\\ 2 & 3 & 5 & 0 & 3 & 2 & 3 & 1 & 4 & 3\\ 3 & 2 & 1 & 6 & 2 & 3 & 3 & 2 & 4 & 4 \end{matrix}$ Construct a $95\%$ confidence interval for the average number of trips to a grocery store per week of all households. 4.
For each of $40$ high school students in one county the number of days absent from school in the previous year was counted, with the results shown in the frequency table. $\begin{array}{c|c c c c c c} x &0 &1 &2 &3 &4 &5 \\ \hline f &24 &7 &5 &2 &1 &1 \end{array}$ Construct a $90\%$ confidence interval for the average number of days absent from school of all students in the county. 5. A town council commissioned a random sample of $85$ households to estimate the number of four-wheel vehicles per household in the town. The results are shown in the following frequency table. $\begin{array}{c|c c c c c c} x &0 &1 &2 &3 &4 &5 \\ \hline f &1 &16 &28 &22 &12 &6 \end{array}$ Construct a $98\%$ confidence interval for the average number of four-wheel vehicles per household in the town. 6. The number of hours per day that a television set was operating was recorded for a randomly selected collection of households, with the results shown in the table. $\begin{matrix} 3.7 & 4.2 & 1.5 & 3.6 & 5.9\\ 4.7 & 8.2 & 3.9 & 2.5 & 4.4\\ 2.1 & 3.6 & 1.1 & 7.3 & 4.2\\ 3.0 & 3.8 & 2.2 & 4.2 & 3.8\\ 4.3 & 2.1 & 2.4 & 6.0 & 3.7\\ 2.5 & 1.3 & 2.8 & 3.0 & 5.6 \end{matrix}$ Construct a $99.8\%$ confidence interval for the mean number of hours that a television set is in operation in all households. Large Data Set Exercises Large Data Set missing from the original 1. Large $\text{Data Set 1}$ records the SAT scores of $1,000$ students. Regarding it as a random sample of all high school students, use it to construct a $99\%$ confidence interval for the mean SAT score of all students. 2. Large $\text{Data Set 1}$ records the GPAs of $1,000$ college students. Regarding it as a random sample of all college students, use it to construct a $95\%$ confidence interval for the mean GPA of all students. 3. Large $\text{Data Set 1}$ lists the SAT scores of $1,000$ students. 1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population mean $\mu$. 2. Regard the first $36$ students as a random sample and use it to construct a $99\%$ confidence interval for the mean $\mu$ of all $1,000$ SAT scores. Does it actually capture the mean $\mu$? 4. Large $\text{Data Set 1}$ lists the GPAs of $1,000$ students. 1. Regard the data as arising from a census of all freshmen at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured. Compute the population mean $\mu$. 2. Regard the first $36$ students as a random sample and use it to construct a $95\%$ confidence interval for the mean $\mu$ of all $1,000$ GPAs. Does it actually capture the mean $\mu$? Answers 1. $105.2\pm 3.10$ 2. $105.2\pm 1.86$ 1. $17.1\pm 0.77$ 2. $17.1\pm 0.42$ 1. $58.2\pm 0.28$ 2. $58.2\pm 0.36$ 3. Asking for greater confidence requires a longer interval. 1. $12.8\pm 0.20$ 2. $-1.2\pm 0.05$ 3. $4.4\pm 0.79$ 4. $3.7\pm 0.19$ 5. $2785\pm 61$ 6. $6.02\pm 0.03$ 7. $2.8\pm 0.48$ 8. $2.54\pm 0.30$ 9. $(1511.43,1546.05)$ 1. $\mu = 1528.74$ 2. $(1428.22,1602.89)$ 7.2: Small Sample Estimation of a Population Mean Basic 1. A random sample is drawn from a normally distributed population of known standard deviation $5$. Construct a $99.8\%$ confidence interval for the population mean based on the information given (not all of the information given need be used). 1. $n = 16,\; \bar{x}=98,\; s = 5.6$ 2. $n = 9,\; \bar{x}=98,\; s = 5.6$ 2. A random sample is drawn from a normally distributed population of known standard deviation $10.7$.
Construct a $95\%$ confidence interval for the population mean based on the information given (not all of the information given need be used). 1. $n = 25,\; \bar{x}=103.3,\; s = 11.0$ 2. $n = 4,\; \bar{x}=103.3,\; s = 11.0$ 3. A random sample is drawn from a normally distributed population of unknown standard deviation. Construct a $99\%$ confidence interval for the population mean based on the information given. 1. $n = 18,\; \bar{x}=386,\; s = 24$ 2. $n = 7,\; \bar{x}=386,\; s = 24$ 4. A random sample is drawn from a normally distributed population of unknown standard deviation. Construct a $98\%$ confidence interval for the population mean based on the information given. 1. $n = 8,\; \bar{x}=58.3,\; s = 4.1$ 2. $n = 27,\; \bar{x}=58.3,\; s = 4.1$ 5. A random sample of size $14$ is drawn from a normal population. The summary statistics are $\bar{x}=933$ and $s = 18$. 1. Construct an $80\%$ confidence interval for the population mean $\mu$. 2. Construct a $90\%$ confidence interval for the population mean $\mu$. 3. Comment on why one interval is longer than the other. 6. A random sample of size $28$ is drawn from a normal population. The summary statistics are $\bar{x}=68.6$ and $s = 1.28$. 1. Construct a $95\%$ confidence interval for the population mean $\mu$. 2. Construct a $99.5\%$ confidence interval for the population mean $\mu$. 3. Comment on why one interval is longer than the other. Application Exercises 1. City planners wish to estimate the mean lifetime of the most commonly planted trees in urban settings. A sample of $16$ recently felled trees yielded mean age $32.7$ years with standard deviation $3.1$ years. Assuming the lifetimes of all such trees are normally distributed, construct a $99.8\%$ confidence interval for the mean lifetime of all such trees. 2. To estimate the number of calories in a cup of diced chicken breast meat, the number of calories in a sample of four separate cups of meat is measured. The sample mean is $211.8$ calories with sample standard deviation $0.9$ calorie. Assuming the caloric content of all such chicken meat is normally distributed, construct a $95\%$ confidence interval for the mean number of calories in one cup of meat. 3. A college athletic program wishes to estimate the average increase in the total weight an athlete can lift in three different lifts after following a particular training program for six weeks. Twenty-five randomly selected athletes when placed on the program exhibited a mean gain of $47.3$ lb with standard deviation $6.4$ lb. Construct a $90\%$ confidence interval for the mean increase in lifting capacity all athletes would experience if placed on the training program. Assume increases among all athletes are normally distributed. 4. To test a new tread design with respect to stopping distance, a tire manufacturer manufactures a set of prototype tires and measures the stopping distance from $70$ mph on a standard test car. A sample of $25$ stopping distances yielded a sample mean $173$ feet with sample standard deviation $8$ feet. Construct a $98\%$ confidence interval for the mean stopping distance for these tires. Assume a normal distribution of stopping distances. 5. A manufacturer of chokes for shotguns tests a choke by shooting $15$ patterns at targets $40$ yards away with a specified load of shot. The mean number of shot in a $30$-inch circle is $53.5$ with standard deviation $1.6$.
Construct an $80\%$ confidence interval for the mean number of shot in a $30$-inch circle at $40$ yards for this choke with the specified load. Assume a normal distribution of the number of shot in a $30$-inch circle at $40$ yards for this choke. 6. In order to estimate the speaking vocabulary of three-year-old children in a particular socioeconomic class, a sociologist studies the speech of four children. The mean and standard deviation of the sample are $\bar{x}=1120$ and $s = 215$ words. Assuming that speaking vocabularies are normally distributed, construct an $80\%$ confidence interval for the mean speaking vocabulary of all three-year-old children in this socioeconomic group. 7. A thread manufacturer tests a sample of eight lengths of a certain type of thread made of blended materials and obtains a mean tensile strength of $8.2$ lb with standard deviation $0.06$ lb. Assuming tensile strengths are normally distributed, construct a $90\%$ confidence interval for the mean tensile strength of this thread. 8. An airline wishes to estimate the weight of the paint on a fully painted aircraft of the type it flies. In a sample of four repaintings the average weight of the paint applied was $239$ pounds, with sample standard deviation $8$ pounds. Assuming that weights of paint on aircraft are normally distributed, construct a $99.8\%$ confidence interval for the mean weight of paint on all such aircraft. 9. In a study of dummy foal syndrome, the average time between birth and onset of noticeable symptoms in a sample of six foals was $18.6$ hours, with standard deviation $1.7$ hours. Assuming that the time to onset of symptoms in all foals is normally distributed, construct a $90\%$ confidence interval for the mean time between birth and onset of noticeable symptoms. 10. A sample of $26$ women’s size $6$ dresses had mean waist measurement $25.25$ inches with sample standard deviation $0.375$ inch. Construct a $95\%$ confidence interval for the mean waist measurement of all size $6$ women’s dresses. Assume waist measurements are normally distributed. Additional Exercises 1. Botanists studying attrition among saplings in new growth areas of forests diligently counted stems in six plots in five-year-old new growth areas, obtaining the following counts of stems per acre: $\begin{matrix} 9,432 & 11,026 & 10,539\\ 8,773 & 9,868 & 10,247 \end{matrix}$ Construct an $80\%$ confidence interval for the mean number of stems per acre in all five-year-old new growth areas of forests. Assume that the number of stems per acre is normally distributed. 2. Nutritionists are investigating the efficacy of a diet plan designed to increase the caloric intake of elderly people. The increase in daily caloric intake in $12$ individuals who are put on the plan is (a minus sign signifies that calories consumed went down): $\begin{matrix} 121 & 284 & -94 & 295 & 183 & 312\\ 188 & -102 & 259 & 226 & 152 & 167 \end{matrix}$ Construct a $99.8\%$ confidence interval for the mean increase in caloric intake for all people who are put on this diet. Assume that the population of differences in intake is normally distributed. 3. A machine for making precision cuts in dimension lumber produces studs with lengths that vary with standard deviation $0.003$ inch. Five trial cuts are made to check the machine’s calibration. The mean length of the studs produced is $104.998$ inches with sample standard deviation $0.004$ inch. Construct a $99.5\%$ confidence interval for the mean length of all studs cut by this machine.
Assume lengths are normally distributed. Hint: Not all the numbers given in the problem are used. 4. The variation in time for a baked good to go through a conveyor oven at a large scale bakery has standard deviation $0.017$ minute at every time setting. To check the bake time of the oven, periodically four batches of goods are carefully timed. The recent check gave a mean of $27.2$ minutes with sample standard deviation $0.012$ minute. Construct a $99.8\%$ confidence interval for the mean bake time of all batches baked in this oven. Assume bake times are normally distributed. Hint: Not all the numbers given in the problem are used. 5. Wildlife researchers tranquilized and weighed three adult male polar bears. The data (in pounds) are: $926, 742, 1109$. Assume the weights of all bears are normally distributed. 1. Construct an $80\%$ confidence interval for the mean weight of all adult male polar bears using these data. 2. Convert the three weights in pounds to weights in kilograms using the conversion $1\; lb = 0.453\; kg$ (so the first datum changes to $(926)(0.453)=419$). Use the converted data to construct an $80\%$ confidence interval for the mean weight of all adult male polar bears expressed in kilograms. 3. Convert your answer in part (a) into kilograms directly and compare it to your answer in (b). This illustrates that if you construct a confidence interval in one system of units you can convert it directly into another system of units without having to convert all the data to the new units. 6. Wildlife researchers trapped and measured six adult male collared lemmings. The data (in millimeters) are: $104, 99, 112, 115, 96, 109$. Assume the lengths of all lemmings are normally distributed. 1. Construct a $90\%$ confidence interval for the mean length of all adult male collared lemmings using these data. 2. Convert the six lengths in millimeters to lengths in inches using the conversion $1\; mm = 0.039\; in$ (so the first datum changes to $(104)(0.039) = 4.06$). Use the converted data to construct a $90\%$ confidence interval for the mean length of all adult male collared lemmings expressed in inches. 3. Convert your answer in part (a) into inches directly and compare it to your answer in (b). This illustrates that if you construct a confidence interval in one system of units you can convert it directly into another system of units without having to convert all the data to the new units. Answers 1. $98\pm 3.9$ 2. $98\pm 5.2$ 1. $386\pm 16.4$ 2. $386\pm 33.6$ 1. $933\pm 6.5$ 2. $933\pm 8.5$ 3. Asking for greater confidence requires a longer interval. 1. $32.7\pm 2.9$ 2. $47.3\pm 2.19$ 3. $53.5\pm 0.56$ 4. $8.2\pm 0.04$ 5. $18.6\pm 1.4$ 6. $9981\pm 486$ 7. $104.998\pm 0.004$ 1. $926\pm 200$ 2. $419\pm 90$ 3. $419\pm 91$ 7.3: Large Sample Estimation of a Population Proportion Basic 1. Information about a random sample is given. Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. Then construct a $90\%$ confidence interval for the population proportion. 1. $n = 25, \hat{p}=0.7$ 2. $n = 50, \hat{p}=0.7$ 2. Information about a random sample is given. Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. Then construct a $95\%$ confidence interval for the population proportion. 1. $n = 2500, \hat{p}=0.22$ 2. $n = 1200, \hat{p}=0.22$ 3. Information about a random sample is given.
Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. Then construct a $98\%$ confidence interval for the population proportion. 1. $n = 80, \hat{p}=0.4$ 2. $n = 325, \hat{p}=0.4$ 4. Information about a random sample is given. Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. Then construct a $99.5\%$ confidence interval for the population proportion. 1. $n = 200, \hat{p}=0.85$ 2. $n = 75, \hat{p}=0.85$ 5. In a random sample of size $1,100$, $338$ have the characteristic of interest. 1. Compute the sample proportion $\hat{p}$ with the characteristic of interest. 2. Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. 3. Construct an $80\%$ confidence interval for the population proportion $p$. 4. Construct a $90\%$ confidence interval for the population proportion $p$. 5. Comment on why one interval is longer than the other. 6. In a random sample of size $2,400$, $420$ have the characteristic of interest. 1. Compute the sample proportion $\hat{p}$ with the characteristic of interest. 2. Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. 3. Construct a $90\%$ confidence interval for the population proportion $p$. 4. Construct a $99\%$ confidence interval for the population proportion $p$. 5. Comment on why one interval is longer than the other. Q7.3.7 A security feature on some web pages is graphic representations of words that are readable by human beings but not machines. When a certain design format was tested on $450$ subjects, by having them attempt to read ten disguised words, $448$ subjects could read all the words. 1. Give a point estimate of the proportion $p$ of all people who could read words disguised in this way. 2. Show that the sample is not sufficiently large to construct a confidence interval for the proportion of all people who could read words disguised in this way. Q7.3.8 In a random sample of $900$ adults, $42$ defined themselves as vegetarians. 1. Give a point estimate of the proportion of all adults who would define themselves as vegetarians. 2. Verify that the sample is sufficiently large to use it to construct a confidence interval for that proportion. 3. Construct an $80\%$ confidence interval for the proportion of all adults who would define themselves as vegetarians. Q7.3.9 In a random sample of $250$ employed people, $61$ said that they bring work home with them at least occasionally. 1. Give a point estimate of the proportion of all employed people who bring work home with them at least occasionally. 2. Construct a $99\%$ confidence interval for that proportion. Q7.3.10 In a random sample of $1,250$ household moves, $822$ were moves to a location within the same county as the original residence. 1. Give a point estimate of the proportion of all household moves that are to a location within the same county as the original residence. 2. Construct a $98\%$ confidence interval for that proportion. Q7.3.11 In a random sample of $12,447$ hip replacement or revision surgery procedures nationwide, $162$ patients developed a surgical site infection. 1. Give a point estimate of the proportion of all patients undergoing a hip surgery procedure who develop a surgical site infection. 2. Verify that the sample is sufficiently large to use it to construct a confidence interval for that proportion. 3.
Construct a $95\%$ confidence interval for the proportion of all patients undergoing a hip surgery procedure who develop a surgical site infection. Q7.3.12 In a certain region prepackaged products labeled $500$ g must contain on average at least $500$ grams of the product, and at least $90\%$ of all packages must weigh at least $490$ grams. In a random sample of $300$ packages, $288$ weighed at least $490$ grams. 1. Give a point estimate of the proportion of all packages that weigh at least $490$ grams. 2. Verify that the sample is sufficiently large to use it to construct a confidence interval for that proportion. 3. Construct a $99.8\%$ confidence interval for the proportion of all packages that weigh at least $490$ grams. Q7.3.13 A survey of $50$ randomly selected adults in a small town asked their opinion on a proposed “no cruising” restriction late at night. Responses were coded $1$ for in favor, $0$ for indifferent, and $2$ for opposed, with the results shown in the table. $\begin{matrix} 1 & 0 & 2 & 0 & 1 & 0 & 0 & 1 & 1 & 2\\ 0 & 2 & 0 & 0 & 0 & 1 & 0 & 2 & 0 & 0 \\ 0 & 2 & 1 & 2 & 0 & 0 & 0 & 2 & 0 & 1\\ 0 & 2 & 0 & 2 & 0 & 1 & 0 & 0 & 2 & 0\\ 1 & 0 & 0 & 1 & 2 & 0 & 0 & 2 & 1 & 2 \end{matrix}$ 1. Give a point estimate of the proportion of all adults in the community who are indifferent concerning the proposed restriction. 2. Assuming that the sample is sufficiently large, construct a $90\%$ confidence interval for the proportion of all adults in the community who are indifferent concerning the proposed restriction. Q7.3.14 To try to understand the reason for returned goods, the manager of a store examines the records on $40$ products that were returned in the last year. Reasons were coded by $1$ for “defective,” $2$ for “unsatisfactory,” and $0$ for all other reasons, with the results shown in the table. $\begin{matrix} 0 & 2 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2\\ 0 & 0 & 2 & 0 & 0 & 0 & 0 & 2 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{matrix}$ 1. Give a point estimate of the proportion of all returns that are because of something wrong with the product, that is, either defective or performed unsatisfactorily. 2. Assuming that the sample is sufficiently large, construct an $80\%$ confidence interval for the proportion of all returns that are because of something wrong with the product. Q7.3.15 In order to estimate the proportion of entering students who graduate within six years, the administration at a state university examined the records of $600$ randomly selected students who entered the university six years ago, and found that $312$ had graduated. 1. Give a point estimate of the six-year graduation rate, the proportion of entering students who graduate within six years. 2. Assuming that the sample is sufficiently large, construct a $98\%$ confidence interval for the six-year graduation rate. Q7.3.16 In a random sample of $2,300$ mortgages taken out in a certain region last year, $187$ were adjustable-rate mortgages. 1. Give a point estimate of the proportion of all mortgages taken out in this region last year that were adjustable-rate mortgages. 2. Assuming that the sample is sufficiently large, construct a $99.9\%$ confidence interval for the proportion of all mortgages taken out in this region last year that were adjustable-rate mortgages.
Q7.3.17 In a research study in cattle breeding, $159$ of $273$ cows in several herds that were in estrus were detected by means of an intensive once a day, one-hour observation of the herds in early morning. 1. Give a point estimate of the proportion of all cattle in estrus who are detected by this method. 2. Assuming that the sample is sufficiently large, construct a $90\%$ confidence interval for the proportion of all cattle in estrus who are detected by this method. Q7.3.18 A survey of $21,250$ households concerning telephone service gave the results shown in the table. $\begin{array}{l|c|c} & \text{Landline} & \text{No Landline}\\ \hline \text{Cell phone} & 12,474 & 5,844\\ \text{No cell phone} & 2,529 & 403 \end{array}$ 1. Give a point estimate for the proportion of all households in which there is a cell phone but no landline. 2. Assuming the sample is sufficiently large, construct a $99.9\%$ confidence interval for the proportion of all households in which there is a cell phone but no landline. 3. Give a point estimate for the proportion of all households in which there is no telephone service of either kind. 4. Assuming the sample is sufficiently large, construct a $99.9\%$ confidence interval for the proportion of all households in which there is no telephone service of either kind. Additional Exercises 1. In a random sample of $900$ adults, $42$ defined themselves as vegetarians. Of these $42$, $29$ were women. 1. Give a point estimate of the proportion of all self-described vegetarians who are women. 2. Verify that the sample is sufficiently large to use it to construct a confidence interval for that proportion. 3. Construct a $90\%$ confidence interval for the proportion of all self-described vegetarians who are women. 2. A random sample of $185$ college soccer players who had suffered injuries that resulted in loss of playing time was made with the results shown in the table. Injuries are classified according to severity of the injury and the condition under which it was sustained. $\begin{array}{l|c|c|c} & \text{Minor} & \text{Moderate} & \text{Serious}\\ \hline \text{Practice} & 48 & 20 & 6\\ \text{Game} & 62 & 32 & 17 \end{array}$ 1. Give a point estimate for the proportion $p$ of all injuries to college soccer players that are sustained in practice. 2. Construct a $95\%$ confidence interval for the proportion $p$ of all injuries to college soccer players that are sustained in practice. 3. Give a point estimate for the proportion $p$ of all injuries to college soccer players that are either moderate or serious. 4. Construct a $95\%$ confidence interval for the proportion $p$ of all injuries to college soccer players that are either moderate or serious. 3. The body mass index (BMI) was measured in $1,200$ randomly selected adults, with the results shown in the table. $\begin{array}{l|c|c|c} \text{BMI} & \text{Under } 18.5 & 18.5\text{–}25 & \text{Over } 25\\ \hline \text{Men} & 36 & 165 & 315\\ \text{Women} & 75 & 274 & 335 \end{array}$ 1. Give a point estimate for the proportion of all men whose BMI is over $25$. 2. Assuming the sample is sufficiently large, construct a $99\%$ confidence interval for the proportion of all men whose BMI is over $25$. 3. Give a point estimate for the proportion of all adults, regardless of gender, whose BMI is over $25$. 4. Assuming the sample is sufficiently large, construct a $99\%$ confidence interval for the proportion of all adults, regardless of gender, whose BMI is over $25$. 4. Confidence intervals constructed using the formula in this section often do not do as well as expected unless $n$ is quite large, especially when the true population proportion is close to either $0$ or $1$.
In such cases a better result is obtained by adding two successes and two failures to the actual data and then computing the confidence interval. This is the same as using the formula $\tilde{p}\pm z_{\alpha /2}\sqrt{\frac{\tilde{p}(1-\tilde{p})}{\tilde{n}}}\ \text{where}\ \tilde{p}=\frac{x+2}{n+4}\; \text{and}\; \tilde{n}=n+4$ Suppose that in a random sample of $600$ households, $12$ had no telephone service of any kind. Use the adjusted confidence interval procedure just described to form a $99.9\%$ confidence interval for the proportion of all households that have no telephone service of any kind. Large Data Set Exercises Large Data Set missing from the original 1. Large $\text{Data Sets 4 and 4A}$ list the results of $500$ tosses of a die. Let $p$ denote the proportion of all tosses of this die that would result in a four. Use the sample data to construct a $90\%$ confidence interval for $p$. 2. Large $\text{Data Set 6}$ records results of a random survey of $200$ voters in each of two regions, in which they were asked to express whether they prefer Candidate $A$ for a U.S. Senate seat or prefer some other candidate. Use the full data set ($400$ observations) to construct a $98\%$ confidence interval for the proportion $p$ of all voters who prefer Candidate $A$. 3. Lines $2$ through $536$ in Large $\text{Data Set 11}$ are a sample of $535$ real estate sales in a certain region in $2008$. Those that were foreclosure sales are identified with a $1$ in the second column. 1. Use these data to construct a point estimate $\hat{p}$ of the proportion $p$ of all real estate sales in this region in $2008$ that were foreclosure sales. 2. Use these data to construct a $90\%$ confidence interval for $p$. 4. Lines $537$ through $1106$ in Large $\text{Data Set 11}$ are a sample of $570$ real estate sales in a certain region in $2010$. Those that were foreclosure sales are identified with a $1$ in the second column. 1. Use these data to construct a point estimate $\hat{p}$ of the proportion $p$ of all real estate sales in this region in $2010$ that were foreclosure sales. 2. Use these data to construct a $90\%$ confidence interval for $p$. Answers 1. $(0.5492, 0.8508)$ 2. $(0.5934, 0.8066)$ 1. $(0.2726, 0.5274)$ 2. $(0.3368, 0.4632)$ 1. $0.3073$ 2. $\hat{p}\pm 3\sqrt{\frac{\hat{p}\hat{q}}{n}}=0.31\pm 0.04\; \text{and}\; [0.27,0.35]\subset [0,1]$ 3. $(0.2895, 0.3251)$ 4. $(0.2844, 0.3302)$ 5. Asking for greater confidence requires a longer interval. 1. $0.9956$ 2. $(0.9862, 1.005)$ 1. $0.244$ 2. $(0.1740, 0.3140)$ 1. $0.013$ 2. $(0.01, 0.016)$ 3. $(0.011, 0.015)$ 1. $0.52$ 2. $(0.4038, 0.6362)$ 1. $0.52$ 2. $(0.4726, 0.5674)$ 1. $0.5824$ 2. $(0.5333, 0.6315)$ 1. $0.69$ 2. $\hat{p}\pm 3\sqrt{\frac{\hat{p}\hat{q}}{n}}=0.69\pm 0.21\; \text{and}\; [0.48,0.90]\subset [0,1]$ 3. $0.69\pm 0.12$ 1. $0.6105$ 2. $(0.5552, 0.6658)$ 3. $0.5583$ 4. $(0.5214, 0.5952)$ 1. $(0.1368,0.1912)$ 1. $\hat{p}=0.2280$ 2. $(0.1982,0.2579)$ 7.4: Sample Size Considerations Basic 1. Estimate the minimum sample size needed to form a confidence interval for the mean of a population having the standard deviation shown, meeting the criteria given. 1. $\sigma = 30, 95\%$ confidence, $E = 10$ 2. $\sigma = 30, 99\%$ confidence, $E = 10$ 3. $\sigma = 30, 95\%$ confidence, $E = 5$ 2. Estimate the minimum sample size needed to form a confidence interval for the mean of a population having the standard deviation shown, meeting the criteria given. 1. $\sigma = 4, 95\%$ confidence, $E = 1$ 2. $\sigma = 4, 99\%$ confidence, $E = 1$ 3.
$\sigma = 4, 95\%$ confidence, $E = 0.5$ 3. Estimate the minimum sample size needed to form a confidence interval for the proportion of a population that has a particular characteristic, meeting the criteria given. 1. $p\approx 0.37, 80\%$ confidence, $E = 0.05$ 2. $p\approx 0.37, 90\%$ confidence, $E = 0.05$ 3. $p\approx 0.37, 80\%$ confidence, $E = 0.01$ 4. Estimate the minimum sample size needed to form a confidence interval for the proportion of a population that has a particular characteristic, meeting the criteria given. 1. $p\approx 0.81, 95\%$ confidence, $E = 0.02$ 2. $p\approx 0.81, 99\%$ confidence, $E = 0.02$ 3. $p\approx 0.81, 95\%$ confidence, $E = 0.01$ 5. Estimate the minimum sample size needed to form a confidence interval for the proportion of a population that has a particular characteristic, meeting the criteria given. 1. $80\%$ confidence, $E = 0.05$ 2. $90\%$ confidence, $E = 0.05$ 3. $80\%$ confidence, $E = 0.01$ 6. Estimate the minimum sample size needed to form a confidence interval for the proportion of a population that has a particular characteristic, meeting the criteria given. 1. $95\%$ confidence, $E = 0.02$ 2. $99\%$ confidence, $E = 0.02$ 3. $95\%$ confidence, $E = 0.01$ Applications 1. A software engineer wishes to estimate, to within $5$ seconds, the mean time that a new application takes to start up, with $95\%$ confidence. Estimate the minimum sample size required if the standard deviation of start-up times for similar software is $12$ seconds. 2. A real estate agent wishes to estimate, to within $\2.50$, the mean retail cost per square foot of newly built homes, with $80\%$ confidence. He estimates the standard deviation of such costs at $\5.00$. Estimate the minimum sample size required. 3. An economist wishes to estimate, to within $2$ minutes, the mean time that employed persons spend commuting each day, with $95\%$ confidence. On the assumption that the standard deviation of commuting times is $8$ minutes, estimate the minimum sample size required. 4. A motor club wishes to estimate, to within $1$ cent, the mean price of $1$ gallon of regular gasoline in a certain region, with $98\%$ confidence. Historically the variability of prices is measured by $\sigma =\0.03$. Estimate the minimum sample size required. 5. A bank wishes to estimate, to within $\25$, the mean average monthly balance in its checking accounts, with $99.8\%$ confidence. Assuming $\sigma =\250$, estimate the minimum sample size required. 6. A retailer wishes to estimate, to within $15$ seconds, the mean duration of telephone orders taken at its call center, with $99.5\%$ confidence. In the past the standard deviation of call length has been about $1.25$ minutes. Estimate the minimum sample size required. (Be careful to express all the information in the same units.) 7. The administration at a college wishes to estimate, to within two percentage points, the proportion of all its entering freshmen who graduate within four years, with $90\%$ confidence. Estimate the minimum sample size required. 8. A chain of automotive repair stores wishes to estimate, to within five percentage points, the proportion of all passenger vehicles in operation that are at least five years old, with $98\%$ confidence. Estimate the minimum sample size required. 9. An internet service provider wishes to estimate, to within one percentage point, the current proportion of all email that is spam, with $99.9\%$ confidence. Last year the proportion that was spam was $71\%$. Estimate the minimum sample size required.
10. An agronomist wishes to estimate, to within one percentage point, the proportion of a new variety of seed that will germinate when planted, with $95\%$ confidence. A typical germination rate is $97\%$. Estimate the minimum sample size required. 11. A charitable organization wishes to estimate, to within half a percentage point, the proportion of all telephone solicitations to its donors that result in a gift, with $90\%$ confidence. Estimate the minimum sample size required, using the information that in the past the response rate has been about $30\%$. 12. A government agency wishes to estimate the proportion of drivers aged $16-24$ who have been involved in a traffic accident in the last year. It wishes to make the estimate to within one percentage point and at $90\%$ confidence. Find the minimum sample size required, using the information that several years ago the proportion was $0.12$. Additional Exercises 1. An economist wishes to estimate, to within six months, the mean time between sales of existing homes, with $95\%$ confidence. Estimate the minimum sample size required. In his experience virtually all houses are re-sold within $40$ months, so using the Empirical Rule he will estimate $\sigma$ by one-sixth the range, or $40/6=6.7$. 2. A wildlife manager wishes to estimate the mean length of fish in a large lake, to within one inch, with $80\%$ confidence. Estimate the minimum sample size required. In his experience virtually no fish caught in the lake is over $23$ inches long, so using the Empirical Rule he will estimate $\sigma$ by one-sixth the range, or $23/6=3.8$. 3. You wish to estimate the current mean birth weight of all newborns in a certain region, to within $1$ ounce ($1/16$ pound) and with $95\%$ confidence. A sample will cost $\400$ plus $\1.50$ for every newborn weighed. You believe the standard deviation of weights to be no more than $1.25$ pounds. You have $\2,500$ to spend on the study. 1. Can you afford the sample required? 2. If not, what are your options? 4. You wish to estimate a population proportion to within three percentage points, at $95\%$ confidence. A sample will cost $\500$ plus $50$ cents for every sample element measured. You have $\1,000$ to spend on the study. 1. Can you afford the sample required? 2. If not, what are your options? Answers 1. $35$ 2. $60$ 3. $139$ 1. $154$ 2. $253$ 3. $3832$ 1. $165$ 2. $271$ 3. $4109$ 1. $23$ 2. $62$ 3. $955$ 4. $1692$ 5. $22,301$ 6. $22,731$ 7. $5$ 1. no 2. decrease the confidence level
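As a closing computational note, the adjusted (“plus four”) interval described in the additional exercises of Section 7.3 above is straightforward to compute. Here is a minimal sketch in Python using the numbers from that exercise ($12$ of $600$ households, $99.9\%$ confidence); the variable names are ours, not the text's.

```python
# Sketch: the adjusted ("plus four") proportion interval from the
# additional exercises of Section 7.3 - add two successes and two
# failures, then apply the usual large-sample formula.
from math import sqrt
from scipy import stats

x, n, conf = 12, 600, 0.999
n_tilde = n + 4                            # n-tilde = n + 4
p_tilde = (x + 2) / n_tilde                # p-tilde = (x + 2) / (n + 4)
z = stats.norm.ppf(1 - (1 - conf) / 2)     # z_{alpha/2}, about 3.29
E = z * sqrt(p_tilde * (1 - p_tilde) / n_tilde)
print(f"({p_tilde - E:.4f}, {p_tilde + E:.4f})")
```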
In the sampling that we have studied so far the goal has been to estimate a population parameter. But the sampling done by the government agency has a somewhat different objective, not so much to estimate the population mean μ as to test an assertion—or a hypothesis—about it, namely, whether it is as large as 75 or not. The agency is not necessarily interested in the actual value of μ, just whether it is as claimed. Their sampling is done to perform a test of hypotheses, the subject of this chapter. • 8.1: The Elements of Hypothesis Testing A hypothesis about the value of a population parameter is an assertion about its value. As in the introductory example we will be concerned with testing the truth of two competing hypotheses, only one of which can be true. • 8.2: Large Sample Tests for a Population Mean In this section we describe and demonstrate the procedure for conducting a test of hypotheses about the mean of a population in the case that the sample size $n$ is at least $30$. • 8.3: The Observed Significance of a Test The conceptual basis of our testing procedure is that we reject the null hypothesis only if the data that we obtained would constitute a rare event if the null hypothesis were actually true. The level of significance α specifies what is meant by “rare.” The observed significance of the test is a measure of how rare the value of the test statistic that we have just observed would be if the null hypothesis were true. • 8.4: Small Sample Tests for a Population Mean Hypothesis testing for population means was previously described in the case of large samples. The statistical validity of the tests was ensured by the Central Limit Theorem, with essentially no assumptions on the distribution of the population. When sample sizes are small, as is often the case in practice, the Central Limit Theorem does not apply. One must then impose stricter assumptions on the population to give statistical validity to the test procedure. • 8.5: Large Sample Tests for a Population Proportion Both the critical value approach and the p-value approach can be applied to test hypotheses about a population proportion. • 8.E: Testing Hypotheses (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 08: Testing Hypotheses Learning Objectives • To understand the logical framework of tests of hypotheses. • To learn basic terminology connected with hypothesis testing. • To learn fundamental facts about hypothesis testing. Types of Hypotheses A hypothesis about the value of a population parameter is an assertion about its value. As in the introductory example we will be concerned with testing the truth of two competing hypotheses, only one of which can be true. Definition: null hypothesis and alternative hypothesis • The null hypothesis, denoted $H_0$, is the statement about the population parameter that is assumed to be true unless there is convincing evidence to the contrary. • The alternative hypothesis, denoted $H_a$, is a statement about the population parameter that is contradictory to the null hypothesis, and is accepted as true only if there is convincing evidence in favor of it. Definition: statistical procedure Hypothesis testing is a statistical procedure in which a choice is made between a null hypothesis and an alternative hypothesis based on information in a sample. The end result of a hypothesis testing procedure is a choice of one of the following two possible conclusions: 1. Reject $H_0$ (and therefore accept $H_a$), or 2.
Fail to reject $H_0$ (and therefore fail to accept $H_a$). The null hypothesis typically represents the status quo, or what has historically been true. In the example of the respirators, we would believe the claim of the manufacturer unless there is reason not to do so, so the null hypothesis is $H_0:\mu =75$. The alternative hypothesis in the example is the contradictory statement $H_a:\mu <75$. The null hypothesis will always be an assertion containing an equals sign, but depending on the situation the alternative hypothesis can have any one of three forms: with the symbol $<$, as in the example just discussed, with the symbol $>$, or with the symbol $\neq$. The following two examples illustrate the latter two cases. Example $1$ A publisher of college textbooks claims that the average price of all hardbound college textbooks is $\127.50$. A student group believes that the actual mean is higher and wishes to test their belief. State the relevant null and alternative hypotheses. Solution The default option is to accept the publisher’s claim unless there is compelling evidence to the contrary. Thus the null hypothesis is $H_0:\mu =127.50$. Since the student group thinks that the average textbook price is greater than the publisher’s figure, the alternative hypothesis in this situation is $H_a:\mu >127.50$. Example $2$ The recipe for a bakery item is designed to result in a product that contains $8$ grams of fat per serving. The quality control department samples the product periodically to ensure that the production process is working as designed. State the relevant null and alternative hypotheses. Solution The default option is to assume that the product contains the amount of fat it was formulated to contain unless there is compelling evidence to the contrary. Thus the null hypothesis is $H_0:\mu =8.0$. Since containing either more fat than desired or less fat than desired is an indication of a faulty production process, the alternative hypothesis in this situation is that the mean is different from $8.0$, so $H_a:\mu \neq 8.0$. In Example $1$, the textbook example, it might seem more natural that the publisher’s claim be that the average price is at most $\127.50$, not exactly $\127.50$. If the claim were made this way, then the null hypothesis would be $H_0:\mu \leq 127.50$, and the value $\127.50$ given in the example would be the one that is least favorable to the publisher’s claim, the null hypothesis. It is always true that if the null hypothesis is retained for its least favorable value, then it is retained for every other value. Thus in order to make the null and alternative hypotheses easy for the student to distinguish, in every example and problem in this text we will always present one of the two competing claims about the value of a parameter with an equality. The claim expressed with an equality is the null hypothesis. This is the same as always stating the null hypothesis in the least favorable light. So in the introductory example about the respirators, we stated the manufacturer’s claim as “the average is $75$ minutes” instead of the perhaps more natural “the average is at least $75$ minutes,” essentially reducing the presentation of the null hypothesis to its worst case. The first step in hypothesis testing is to identify the null and alternative hypotheses.
The Logic of Hypothesis Testing Although we will study hypothesis testing in situations other than for a single population mean (for example, for a population proportion instead of a mean or in comparing the means of two different populations), in this section the discussion will always be given in terms of a single population mean $\mu$. The null hypothesis always has the form $H_0:\mu =\mu _0$ for a specific number $\mu _0$ (in the respirator example $\mu _0=75$, in the textbook example $\mu _0=127.50$, and in the baked goods example $\mu _0=8.0$). Since the null hypothesis is accepted unless there is strong evidence to the contrary, the test procedure is based on the initial assumption that $H_0$ is true. This point is so important that we will repeat it in a display: The test procedure is based on the initial assumption that $H_0$ is true. The criterion for judging between $H_0$ and $H_a$ based on the sample data is: if the value of $\overline{X}$ would be highly unlikely to occur if $H_0$ were true, but favors the truth of $H_a$, then we reject $H_0$ in favor of $H_a$. Otherwise we do not reject $H_0$. Supposing for now that $\overline{X}$ follows a normal distribution, when the null hypothesis is true the density function for the sample mean $\overline{X}$ must be as in Figure $1$: a bell curve centered at $\mu _0$. Thus if $H_0$ is true then $\overline{X}$ is likely to take a value near $\mu _0$ and is unlikely to take values far away. Our decision procedure therefore reduces simply to: • if $H_a$ has the form $H_a:\mu <\mu _0$ then reject $H_0$ if $\bar{x}$ is far to the left of $\mu _0$; • if $H_a$ has the form $H_a:\mu >\mu _0$ then reject $H_0$ if $\bar{x}$ is far to the right of $\mu _0$; • if $H_a$ has the form $H_a:\mu \neq \mu _0$ then reject $H_0$ if $\bar{x}$ is far away from $\mu _0$ in either direction. Think of the respirator example, for which the null hypothesis is $H_0:\mu =75$, the claim that the average time air is delivered for all respirators is $75$ minutes. If the sample mean is $75$ or greater then we certainly would not reject $H_0$ (since there is no issue with an emergency respirator delivering air even longer than claimed). If the sample mean is slightly less than $75$ then we would logically attribute the difference to sampling error and not reject $H_0$ either. Values of the sample mean that are smaller and smaller are less and less likely to come from a population for which the population mean is $75$. Thus if the sample mean is far less than $75$, say around $60$ minutes or less, then we would certainly reject $H_0$, because we know that it is highly unlikely that the average of a sample would be so low if the population mean were $75$. This is the rare event criterion for rejection: what we actually observed $(\overline{X}<60)$ would be so rare an event if $\mu =75$ were true that we regard it as much more likely that the alternative hypothesis $\mu <75$ holds. In summary, to decide between $H_0$ and $H_a$ in this example we would select a “rejection region” of values sufficiently far to the left of $75$, based on the rare event criterion, and reject $H_0$ if the sample mean $\overline{X}$ lies in the rejection region, but not reject $H_0$ if it does not.
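The left-tailed decision rule just described is short to express in code. The following is a minimal sketch in Python; $\mu_0=75$ is the respirator example's null value, but the $\sigma$, $n$, $\alpha$, and sample mean shown are our own illustrative assumptions, since the example does not supply them.

```python
# Sketch of the left-tailed decision rule described above. mu0 = 75 is
# the respirator example's null value; sigma, n, alpha, and xbar are
# illustrative assumptions, not values given in the text.
from scipy import stats

mu0, sigma, n, alpha = 75.0, 10.0, 36, 0.05    # assumed sigma, n, alpha

# Critical value C: the point cutting off a left-tail area alpha in the
# distribution of X-bar under H0 (normal, mean mu0, sd sigma/sqrt(n)).
C = stats.norm.ppf(alpha, loc=mu0, scale=sigma / n ** 0.5)

xbar = 71.2                                    # a hypothetical sample mean
print(f"rejection region: (-inf, {C:.2f}]")
print("reject H0" if xbar <= C else "do not reject H0")
```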
The Rejection Region Each different form of the alternative hypothesis $H_a$ has its own kind of rejection region: • if (as in the respirator example) $H_a$ has the form $H_a:\mu <\mu _0$, we reject $H_0$ if $\bar{x}$ is far to the left of $\mu _0$, that is, to the left of some number $C$, so the rejection region has the form of an interval $(-\infty ,C]$; • if (as in the textbook example) $H_a$ has the form $H_a:\mu >\mu _0$, we reject $H_0$ if $\bar{x}$ is far to the right of $\mu _0$, that is, to the right of some number $C$, so the rejection region has the form of an interval $[C,\infty )$; • if (as in the baked goods example) $H_a$ has the form $H_a:\mu \neq \mu _0$, we reject $H_0$ if $\bar{x}$ is far away from $\mu _0$ in either direction, that is, either to the left of some number $C$ or to the right of some other number $C′$, so the rejection region has the form of the union of two intervals $(-\infty ,C]\cup [C',\infty )$. The key issue in our line of reasoning is the question of how to determine the number $C$ or numbers $C$ and $C′$, called the critical value or critical values of the statistic, that determine the rejection region. Definition: critical values The critical value or critical values of a test of hypotheses are the number or numbers that determine the rejection region. Suppose the rejection region is a single interval, so we need to select a single number $C$. Here is the procedure for doing so. We select a small probability, denoted $\alpha$, say $1\%$, which we take as our definition of “rare event:” an event is “rare” if its probability of occurrence is less than $\alpha$. (In all the examples and problems in this text the value of $\alpha$ will be given already.) The probability that $\overline{X}$ takes a value in an interval is the area under its density curve and above that interval, so as shown in Figure $2$ (drawn under the assumption that $H_0$ is true, so that the curve centers at $\mu _0$) the critical value $C$ is the value of $\overline{X}$ that cuts off a tail area $\alpha$ in the probability density curve of $\overline{X}$. When the rejection region is in two pieces, that is, composed of two intervals, the total area above both of them must be $\alpha$, so the area above each one is $\alpha /2$, as also shown in Figure $2$. The number $\alpha$ is the total area of a tail or a pair of tails. Example $3$ In the context of Example $2$, suppose that it is known that the population is normally distributed with standard deviation $\sigma =0.15$ gram, and suppose that the test of hypotheses $H_0:\mu =8.0$ versus $H_a:\mu \neq 8.0$ will be performed with a sample of size $5$. Construct the rejection region for the test for the choice $\alpha =0.10$. Explain the decision procedure and interpret it. Solution If $H_0$ is true then the sample mean $\overline{X}$ is normally distributed with mean and standard deviation \begin{align} \mu _{\overline{X}} &=\mu \nonumber \\[5pt] &=8.0 \nonumber \end{align} \nonumber \begin{align} \sigma _{\overline{X}}&=\dfrac{\sigma}{\sqrt{n}} \nonumber \\[5pt] &= \dfrac{0.15}{\sqrt{5}} \nonumber\\[5pt] &=0.067 \nonumber \end{align} \nonumber Since $H_a$ contains the $\neq$ symbol the rejection region will be in two pieces, each one corresponding to a tail of area $\alpha /2=0.10/2=0.05$.
From Figure 7.1.6, $z_{0.05}=1.645$, so $C$ and $C′$ are $1.645$ standard deviations of $\overline{X}$ to the left and right of its mean $8.0$: $C=8.0-(1.645)(0.067) = 7.89 \; \; \text{and}\; \; C'=8.0 + (1.645)(0.067) = 8.11 \nonumber$ The result is shown in Figure $3$. The decision procedure is: take a sample of size $5$ and compute the sample mean $\bar{x}$. If $\bar{x}$ is either $7.89$ grams or less or $8.11$ grams or more then reject the hypothesis that the average amount of fat in all servings of the product is $8.0$ grams in favor of the alternative that it is different from $8.0$ grams. Otherwise do not reject the hypothesis that the average amount is $8.0$ grams. The reasoning is that if the true average amount of fat per serving were $8.0$ grams then there would be less than a $10\%$ chance that a sample of size $5$ would produce a mean of either $7.89$ grams or less or $8.11$ grams or more. Hence if that happened it would be more likely that the value $8.0$ is incorrect (always assuming that the population standard deviation is $0.15$ gram). Because the rejection regions are computed based on areas in tails of distributions, as shown in Figure $2$, hypothesis tests are classified according to the form of the alternative hypothesis in the following way. Definitions: Test classifications • If $H_a$ has the form $\mu \neq \mu _0$ the test is called a two-tailed test. • If $H_a$ has the form $\mu < \mu _0$ the test is called a left-tailed test. • If $H_a$ has the form $\mu > \mu _0$ the test is called a right-tailed test. Each of the last two forms is also called a one-tailed test. Two Types of Errors The format of the testing procedure in general terms is to take a sample and use the information it contains to come to a decision about the two hypotheses. As stated before, our decision will always be either 1. reject the null hypothesis $H_0$ in favor of the alternative $H_a$ presented, or 2. do not reject the null hypothesis $H_0$ in favor of the alternative $H_a$ presented. There are four possible outcomes of the hypothesis testing procedure, as shown in the following table:

| Our Decision | True State of Nature: $H_0$ is true | True State of Nature: $H_0$ is false |
|---|---|---|
| Do not reject $H_0$ | Correct decision | Type II error |
| Reject $H_0$ | Type I error | Correct decision |

As the table shows, there are two ways to be right and two ways to be wrong. Typically to reject $H_0$ when it is actually true is a more serious error than to fail to reject it when it is false, so the former error is labeled “Type I” and the latter error “Type II”. Definition: Type I and Type II errors In a test of hypotheses: • A Type I error is the decision to reject $H_0$ when it is in fact true. • A Type II error is the decision not to reject $H_0$ when it is in fact not true. Unless we perform a census we do not have certain knowledge, so we do not know whether our decision matches the true state of nature or if we have made an error. We reject $H_0$ if what we observe would be a “rare” event if $H_0$ were true. But rare events are not impossible: they occur with probability $\alpha$. Thus when $H_0$ is true, a rare event will be observed in the proportion $\alpha$ of repeated similar tests, and $H_0$ will be erroneously rejected in those tests. Thus $\alpha$ is the probability that in following the testing procedure to decide between $H_0$ and $H_a$ we will make a Type I error. Definition: level of significance The number $\alpha$ that is used to determine the rejection region is called the level of significance of the test. It is the probability that the test procedure will result in a Type I error.
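The claim that $\alpha$ equals the probability of a Type I error can be checked empirically. The following Python sketch (assuming the numpy library is available) repeatedly draws samples of size $5$ from a population in which $H_0$ of Example $3$ is actually true and records how often the decision procedure rejects; the observed proportion should be close to $\alpha =0.10$.

```python
# Sketch: estimating the Type I error rate of Example 3's decision rule
# by simulation, using samples drawn from a population where H0 is true.
import numpy as np

rng = np.random.default_rng(0)
mu_0, sigma, n = 8.0, 0.15, 5
C_left, C_right = 7.89, 8.11   # rejection region found in Example 3

trials = 100_000
samples = rng.normal(loc=mu_0, scale=sigma, size=(trials, n))
xbars = samples.mean(axis=1)

type_I_rate = np.mean((xbars <= C_left) | (xbars >= C_right))
print(f"Proportion of (erroneous) rejections: {type_I_rate:.3f}")  # ~ 0.10
```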
The probability of making a Type II error is too complicated to discuss in a beginning text, so we will say no more about it than this: for a fixed sample size, choosing $\alpha$ smaller in order to reduce the chance of making a Type I error has the effect of increasing the chance of making a Type II error. The only way to simultaneously reduce the chances of making either kind of error is to increase the sample size. Standardizing the Test Statistic Hypothesis testing will be considered in a number of contexts, and great unification as well as simplification results when the relevant sample statistic is standardized by subtracting its mean from it and then dividing by its standard deviation. The resulting statistic is called a standardized test statistic. In every situation treated in this and the following two chapters the standardized test statistic will have either the standard normal distribution or Student’s $t$-distribution. Definition: standardized test statistic A standardized test statistic for a hypothesis test is the statistic that is formed by subtracting from the statistic of interest its mean and dividing by its standard deviation. For example, reviewing Example $3$, if instead of working with the sample mean $\overline{X}$ we work with the test statistic $\frac{\overline{X}-8.0}{0.067} \nonumber$ then the distribution involved is standard normal and the critical values are just $\pm z_{0.05}$. The extra work that was done to find that $C=7.89$ and $C′=8.11$ is eliminated. In every hypothesis test in this book the standardized test statistic will be governed by either the standard normal distribution or Student’s $t$-distribution. Information about rejection regions is summarized in the following tables:

Table $1$: When the test statistic has the standard normal distribution

| Symbol in $H_a$ | Terminology | Rejection Region |
|---|---|---|
| $<$ | Left-tailed test | $(-\infty ,-z_\alpha ]$ |
| $>$ | Right-tailed test | $[z_\alpha ,\infty )$ |
| $\neq$ | Two-tailed test | $(-\infty ,-z_{\alpha /2}]\cup [z_{\alpha /2},\infty )$ |

Table $2$: When the test statistic has Student’s $t$-distribution

| Symbol in $H_a$ | Terminology | Rejection Region |
|---|---|---|
| $<$ | Left-tailed test | $(-\infty ,-t_\alpha ]$ |
| $>$ | Right-tailed test | $[t_\alpha ,\infty )$ |
| $\neq$ | Two-tailed test | $(-\infty ,-t_{\alpha /2}]\cup [t_{\alpha /2},\infty )$ |

Every instance of hypothesis testing discussed in this and the following two chapters will have a rejection region like one of the six forms tabulated in the tables above. No matter what the context a test of hypotheses can always be performed by applying the following systematic procedure, which will be illustrated in the examples in the succeeding sections. Systematic Hypothesis Testing Procedure: Critical Value Approach 1. Identify the null and alternative hypotheses. 2. Identify the relevant test statistic and its distribution. 3. Compute from the data the value of the test statistic. 4. Construct the rejection region. 5. Compare the value computed in Step 3 to the rejection region constructed in Step 4 and make a decision. Formulate the decision in the context of the problem, if applicable. The procedure that we have outlined in this section is called the “Critical Value Approach” to hypothesis testing to distinguish it from an alternative but equivalent approach that will be introduced at the end of Section 8.3. Key Takeaway • A test of hypotheses is a statistical process for deciding between two competing assertions about a population parameter.
• The testing procedure is formalized in a five-step procedure.
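Example $3$ can be revisited with the standardized test statistic to see the simplification in action. The Python sketch below (assuming scipy is available) compares $Z$ directly to $\pm z_{0.05}$, and recovers $C$ and $C′$ only to confirm that the two decision rules agree.

```python
# Sketch: Example 3 reworked with the standardized test statistic.
# Comparing Z to +/- z_{alpha/2} gives the same decision as comparing
# xbar to the critical values C = 7.89 and C' = 8.11.
from scipy.stats import norm

mu_0, sigma, n, alpha = 8.0, 0.15, 5, 0.10
sd_xbar = sigma / n ** 0.5          # 0.15/sqrt(5), about 0.067
z_crit = norm.ppf(1 - alpha / 2)    # z_{0.05}, about 1.645

# The unstandardized critical values, recovered for comparison:
print(f"C = {mu_0 - z_crit * sd_xbar:.2f}, C' = {mu_0 + z_crit * sd_xbar:.2f}")

def decide(xbar):
    Z = (xbar - mu_0) / sd_xbar     # standardized test statistic
    return "reject H0" if abs(Z) >= z_crit else "do not reject H0"

print(decide(7.95))  # inside (7.89, 8.11): do not reject H0
print(decide(7.80))  # at or below C = 7.89: reject H0
```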
8.2: Large Sample Tests for a Population Mean
Learning Objectives • To learn how to apply the five-step test procedure for a test of hypotheses concerning a population mean when the sample size is large. • To learn how to interpret the result of a test of hypotheses in the context of the original narrated situation. In this section we describe and demonstrate the procedure for conducting a test of hypotheses about the mean of a population in the case that the sample size $n$ is at least $30$. The Central Limit Theorem states that $\overline{X}$ is approximately normally distributed, and has mean $\mu _{\overline{X}}=\mu$ and standard deviation $\sigma _{\overline{X}}=\sigma /\sqrt{n}$, where $\mu$ and $\sigma$ are the mean and the standard deviation of the population. This implies that the statistic $\frac{\bar{x}-\mu }{\sigma /\sqrt{n}} \nonumber$ has the standard normal distribution, which means that probabilities related to it are given in Figure 7.1.5 and the last line in Figure 7.1.6. If we know $\sigma$ then the statistic in the display is our test statistic. If, as is typically the case, we do not know $\sigma$, then we replace it by the sample standard deviation $s$. Since the sample is large the resulting test statistic still has a distribution that is approximately standard normal. Standardized Test Statistics for Large Sample Hypothesis Tests Concerning a Single Population Mean •  If $\sigma$ is known: $Z=\frac{\bar{x}-\mu _0}{\sigma /\sqrt{n}}$ • If $\sigma$ is unknown: $Z=\frac{\bar{x}-\mu _0}{s /\sqrt{n}}$ The test statistic has the standard normal distribution. The distribution of the standardized test statistic and the corresponding rejection region for each form of the alternative hypothesis (left-tailed, right-tailed, or two-tailed), is shown in Figure $1$. Example $1$ It is hoped that a newly developed pain reliever will more quickly produce perceptible reduction in pain to patients after minor surgeries than a standard pain reliever. The standard pain reliever is known to bring relief in an average of $3.5$ minutes with standard deviation $2.1$ minutes. To test whether the new pain reliever works more quickly than the standard one, $50$ patients with minor surgeries were given the new pain reliever and their times to relief were recorded. The experiment yielded sample mean $\bar{x}=3.1$ minutes and sample standard deviation $s=1.5$ minutes. Is there sufficient evidence in the sample to indicate, at the $5\%$ level of significance, that the newly developed pain reliever does deliver perceptible relief more quickly? Solution We perform the test of hypotheses using the five-step procedure given at the end of Section 8.1. • Step 1. The natural assumption is that the new drug is no better than the old one, but must be proved to be better. Thus if $\mu$ denotes the average time until all patients who are given the new drug experience pain relief, the hypothesis test is $H_0: \mu =3.5\ \text{vs}\ H_a:\mu <3.5\; @\; \alpha =0.05 \nonumber$ • Step 2. The sample is large, but the population standard deviation is unknown (the $2.1$ minutes pertains to the old drug, not the new one). Thus the test statistic is $Z=\frac{\bar{x}-\mu _0}{s /\sqrt{n}} \nonumber$ and has the standard normal distribution. • Step 3. Inserting the data into the formula for the test statistic gives $Z=\frac{\bar{x}-\mu _0}{s /\sqrt{n}}=\frac{3.1-3.5}{1.5/\sqrt{50}}=-1.886 \nonumber$ • Step 4. 
Since the symbol in $H_a$ is “$<$” this is a left-tailed test, so there is a single critical value, $-z_\alpha =-z_{0.05}$, which from the last line in Figure 7.1.6 we read off as $-1.645$. The rejection region is $(-\infty ,-1.645]$. • Step 5. As shown in Figure $2$ the test statistic falls in the rejection region. The decision is to reject $H_0$. In the context of the problem our conclusion is: The data provide sufficient evidence, at the $5\%$ level of significance, to conclude that the average time until patients experience perceptible relief from pain using the new pain reliever is smaller than the average time for the standard pain reliever. Example $2$ A cosmetics company fills its best-selling $8$ ounce jars of facial cream by an automatic dispensing machine. The machine is set to dispense a mean of $8.1$ ounces per jar. Uncontrollable factors in the process can shift the mean away from $8.1$ and cause either underfill or overfill, both of which are undesirable. In such a case the dispensing machine is stopped and recalibrated. Regardless of the mean amount dispensed, the standard deviation of the amount dispensed always has value $0.22$ ounce. A quality control engineer routinely selects $30$ jars from the assembly line to check the amounts filled. On one occasion, the sample mean is $\bar{x}=8.2$ ounces and the sample standard deviation is $s=0.25$ ounce. Determine if there is sufficient evidence in the sample to indicate, at the $1\%$ level of significance, that the machine should be recalibrated. Solution • Step 1. The natural assumption is that the machine is working properly. Thus if $\mu$ denotes the mean amount of facial cream being dispensed, the hypothesis test is $H_0: \mu =8.1\ \text{vs}\ H_a:\mu \neq 8.1\; @\; \alpha =0.01 \nonumber$ • Step 2. The sample is large and the population standard deviation is known. Thus the test statistic is $Z=\frac{\bar{x}-\mu _0}{\sigma /\sqrt{n}} \nonumber$ and has the standard normal distribution. • Step 3. Inserting the data into the formula for the test statistic gives $Z=\frac{\bar{x}-\mu _0}{\sigma /\sqrt{n}}=\frac{8.2-8.1}{0.22/\sqrt{30}}=2.490 \nonumber$ • Step 4. Since the symbol in $H_a$ is “$\neq$” this is a two-tailed test, so there are two critical values, $\pm z_{\alpha /2}=\pm z_{0.005}$, which from the last line in Figure 7.1.6 we read off as $\pm 2.576$. The rejection region is $(-\infty ,-2.576]\cup [2.576,\infty )$. • Step 5. As shown in Figure $3$ the test statistic does not fall in the rejection region. The decision is not to reject $H_0$. In the context of the problem our conclusion is: The data do not provide sufficient evidence, at the $1\%$ level of significance, to conclude that the average amount of product dispensed is different from $8.1$ ounces. We conclude that the machine does not need to be recalibrated. Key Takeaway • There are two formulas for the test statistic in testing hypotheses about a population mean with large samples. Both test statistics follow the standard normal distribution. • The population standard deviation is used if it is known, otherwise the sample standard deviation is used. • The same five-step procedure is used with either test statistic.
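For readers who wish to check Example $1$ with software, the following minimal Python sketch (assuming scipy is available) carries out the same left-tailed large-sample test; scipy is used only to look up the critical value.

```python
# Sketch: critical value approach for Example 1 (the new pain reliever).
from scipy.stats import norm

mu_0, n, xbar, s, alpha = 3.5, 50, 3.1, 1.5, 0.05

Z = (xbar - mu_0) / (s / n ** 0.5)  # test statistic, about -1.886
z_crit = -norm.ppf(1 - alpha)       # -z_{0.05}, about -1.645

print(f"Z = {Z:.3f}, rejection region (-inf, {z_crit:.3f}]")
if Z <= z_crit:
    print("Reject H0: evidence the new reliever brings relief more quickly.")
else:
    print("Do not reject H0.")
```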
8.3: The Observed Significance of a Test
Learning Objectives • To learn what the observed significance of a test is. • To learn how to compute the observed significance of a test. • To learn how to apply the $p$-value approach to hypothesis testing. The Observed Significance The conceptual basis of our testing procedure is that we reject $H_0$ only if the data that we obtained would constitute a rare event if $H_0$ were actually true. The level of significance $\alpha$ specifies what is meant by “rare.” The observed significance of the test is a measure of how rare the value of the test statistic that we have just observed would be if the null hypothesis were true. That is, the observed significance of the test just performed is the probability that, if the test were repeated with a new sample, the result of the new test would be at least as contrary to $H_0$ and in support of $H_a$ as what was observed in the original test. Definition: observed significance The observed significance or $p$-value of a specific test of hypotheses is the probability, on the supposition that $H_0$ is true, of obtaining a result at least as contrary to $H_0$ and in favor of $H_a$ as the result actually observed in the sample data. Think back to "Example 8.2.1", Section 8.2 concerning the effectiveness of a new pain reliever. This was a left-tailed test in which the value of the test statistic was $-1.886$. To be as contrary to $H_0$ and in support of $H_a$ as the result $Z=-1.886$ actually observed means to obtain a value of the test statistic in the interval $(-\infty ,-1.886]$. Rounding $-1.886$ to $-1.89$, we can read directly from Figure 7.1.5 that $P(Z\leq -1.89)=0.0294$. Thus the $p$-value or observed significance of the test in "Example 8.2.1", Section 8.2 is $0.0294$ or about $3\%$. Under repeated sampling from this population, if $H_0$ were true then only about $3\%$ of all samples of size $50$ would give a result as contrary to $H_0$ and in favor of $H_a$ as the sample we observed. Note that the probability $0.0294$ is the area of the left tail cut off by the test statistic in this left-tailed test. Analogous reasoning applies to a right-tailed or a two-tailed test, except that in the case of a two-tailed test being as far from $0$ as the observed value of the test statistic but on the opposite side of $0$ is just as contrary to $H_0$ as being the same distance away and on the same side of $0$, hence the corresponding tail area is doubled. Computational Definition of the Observed Significance of a Test of Hypotheses The observed significance of a test of hypotheses is the area of the tail of the distribution cut off by the test statistic (times two in the case of a two-tailed test). Example $1$ Compute the observed significance of the test performed in "Example 8.2.2", Section 8.2. Solution The value of the test statistic was $z=2.490$, which by Figure 7.1.5 cuts off a tail of area $0.0064$, as shown in Figure $1$. Since the test was two-tailed, the observed significance is $2\times 0.0064=0.0128$. The $p$-value Approach to Hypothesis Testing In "Example 8.2.1", Section 8.2 the test was performed at the $5\%$ level of significance: the definition of “rare” event was probability $\alpha =0.05$ or less. We saw above that the observed significance of the test was $p=0.0294$ or about $3\%$. Since $p=0.0294<0.05=\alpha$ (or $3\%$ is less than $5\%$), the decision turned out to be to reject: what was observed was sufficiently unlikely to qualify as an event so rare as to be regarded as (practically) incompatible with $H_0$.
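The tail areas above can also be computed exactly rather than read from a table. The following minimal Python sketch (assuming scipy is available) computes both; note that the printed figures round the test statistic to two decimal places first, so the software values can differ slightly in the last digit.

```python
# Sketch: observed significance (p-value) as a tail area, for the two
# tests discussed above.
from scipy.stats import norm

# Left-tailed test (pain reliever), z = -1.886: area of the left tail
p_left = norm.cdf(-1.886)
print(f"left-tailed p-value ~ {p_left:.4f}")  # ~ 0.0296 (table: 0.0294)

# Two-tailed test (dispensing machine), z = 2.490: double the tail area
p_two = 2 * (1 - norm.cdf(2.490))
print(f"two-tailed p-value  ~ {p_two:.4f}")   # ~ 0.0128
```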
In "Example 8.2.2", Section 8.2 the test was performed at the $1\%$ level of significance: the definition of “rare” event was probability $\alpha =0.01$ or less. The observed significance of the test was computed in "Example $1$" as $p=0.0128$ or about $1.3\%$. Since $p=0.0128>0.01=\alpha$ (or $1.3\%$ is greater than $1\%$), the decision turned out to be not to reject. The event observed was unlikely, but not sufficiently unlikely to lead to rejection of the null hypothesis. The reasoning just presented is the basis for a slightly different but equivalent formulation of the hypothesis testing process. The first three steps are the same as before, but instead of using $\alpha$ to compute critical values and construct a rejection region, one computes the $p$-value $p$ of the test and compares it to $\alpha$, rejecting $H_0$ if $p\leq \alpha$ and not rejecting if $p>\alpha$. Systematic Hypothesis Testing Procedure: p-Value Approach 1. Identify the null and alternative hypotheses. 2. Identify the relevant test statistic and its distribution. 3. Compute from the data the value of the test statistic. 4. Compute the $p$-value of the test. 5. Compare the value computed in Step 4 to significance level α and make a decision: reject $H_0$ if $p\leq \alpha$ and do not reject $H_0$ if $p>\alpha$. Formulate the decision in the context of the problem, if applicable. Example $2$ The total score in a professional basketball game is the sum of the scores of the two teams. An expert commentator claims that the average total score for NBA games is $202.5$. A fan suspects that this is an overstatement and that the actual average is less than $202.5$. He selects a random sample of $85$ games and obtains a mean total score of $199.2$ with standard deviation $19.63$. Determine, at the $5\%$ level of significance, whether there is sufficient evidence in the sample to reject the expert commentator’s claim. Solution • Step 1. Let $\mu$ be the true average total game score of all NBA games. The relevant test is $H_0: \mu =202.5\ \text{vs}\ H_a: \mu <202.5\; @\; \alpha =0.05 \nonumber$ • Step 2. The sample is large and the population standard deviation is unknown. Thus the test statistic is $Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}} \nonumber$ and has the standard normal distribution. • Step 3. Inserting the data into the formula for the test statistic gives $Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}=\frac{199.2-202.5}{19.63/\sqrt{85}}=-1.55 \nonumber$ • Step 4. The area of the left tail cut off by $z=-1.55$ is, by Figure 7.1.5, $0.0606$, as illustrated in Figure $2$. Since the test is left-tailed, the $p$-value is just this number, $p=0.0606$. • Step 5. Since $p=0.0606>0.05=\alpha$, the decision is not to reject $H_0$. In the context of the problem our conclusion is: The data do not provide sufficient evidence, at the $5\%$ level of significance, to conclude that the average total score of NBA games is less than $202.5$. Example $3$ Mr. Prospero has been teaching Algebra II from a particular textbook at Remote Isle High School for many years. Over the years students in his Algebra II classes have consistently scored an average of $67$ on the end of course exam (EOC). This year Mr. Prospero used a new textbook in the hope that the average score on the EOC test would be higher. The average EOC test score of the $64$ students who took Algebra II from Mr. Prospero this year had mean $69.4$ and sample standard deviation $6.1$. 
Determine whether these data provide sufficient evidence, at the $1\%$ level of significance, to conclude that the average EOC test score is higher with the new textbook. Solution • Step 1. Let $\mu$ be the true average score on the EOC exam of all Mr. Prospero’s students who take the Algebra II course with the new textbook. The natural statement that would be assumed true unless there were strong evidence to the contrary is that the new book is about the same as the old one. The alternative, which it takes evidence to establish, is that the new book is better, which corresponds to a higher value of $\mu$. Thus the relevant test is $H_0: \mu =67\ \text{vs}\ H_a: \mu >67\; @\; \alpha =0.01 \nonumber$ • Step 2. The sample is large and the population standard deviation is unknown. Thus the test statistic is $Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}} \nonumber$ and has the standard normal distribution. • Step 3. Inserting the data into the formula for the test statistic gives $Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}=\frac{69.4-67}{6.1/\sqrt{64}}=3.15 \nonumber$ • Step 4. The area of the right tail cut off by $z=3.15$ is, by Figure 7.1.5, $1-0.9992=0.0008$, as shown in Figure $3$. Since the test is right-tailed, the $p$-value is just this number, $p=0.0008$. • Step 5. Since $p=0.0008<0.01=\alpha$, the decision is to reject $H_0$. In the context of the problem our conclusion is: The data provide sufficient evidence, at the $1\%$ level of significance, to conclude that the average EOC exam score of students taking the Algebra II course from Mr. Prospero using the new book is higher than the average score of those taking the course from him but using the old book. Example $4$ For the surface water in a particular lake, local environmental scientists would like to maintain an average pH level at $7.4$. Water samples are routinely collected to monitor the average pH level. If there is evidence of a shift in pH value, in either direction, then remedial action will be taken. On a particular day $30$ water samples are taken and yield an average pH reading of $7.7$ with sample standard deviation $0.5$. Determine, at the $1\%$ level of significance, whether there is sufficient evidence in the sample to indicate that remedial action should be taken. Solution • Step 1. Let $\mu$ be the true average pH level at the time the samples were taken. The relevant test is $H_0: \mu =7.4\ \text{vs}\ H_a: \mu \neq 7.4\; @\; \alpha =0.01 \nonumber$ • Step 2. The sample is large and the population standard deviation is unknown. Thus the test statistic is $Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}} \nonumber$ and has the standard normal distribution. • Step 3. Inserting the data into the formula for the test statistic gives $Z=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}=\frac{7.7-7.4}{0.5/\sqrt{30}}=3.29 \nonumber$ • Step 4. The area of the right tail cut off by $z=3.29$ is, by Figure 7.1.5, $1-0.9995=0.0005$, as illustrated in Figure $4$. Since the test is two-tailed, the $p$-value is double this number, $p=2\times 0.0005=0.0010$. • Step 5. Since $p=0.0010<0.01=\alpha$, the decision is to reject $H_0$. In the context of the problem our conclusion is: The data provide sufficient evidence, at the $1\%$ level of significance, to conclude that the average pH of surface water in the lake is different from $7.4$. That is, remedial action is indicated. Key Takeaway • The observed significance or $p$-value of a test is a measure of how inconsistent the sample result is with $H_0$ and in favor of $H_a$.
• The $p$-value approach to hypothesis testing means that one merely compares the $p$-value to $\alpha$ instead of constructing a rejection region. • There is a systematic five-step procedure for the $p$-value approach to hypothesis testing.
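The five-step $p$-value procedure lends itself to a small reusable helper. The Python sketch below (assuming scipy is available; the function name and its tail argument are our own labels) reproduces Example $2$, the NBA total scores.

```python
# Sketch: the p-value approach as a helper function, checked against
# Example 2 (NBA total scores). The name z_test_pvalue is illustrative.
from scipy.stats import norm

def z_test_pvalue(xbar, mu_0, s, n, tail):
    """Return (Z, p) for a large-sample z test; tail is 'left', 'right' or 'two'."""
    Z = (xbar - mu_0) / (s / n ** 0.5)
    if tail == "left":
        p = norm.cdf(Z)
    elif tail == "right":
        p = 1 - norm.cdf(Z)
    else:  # two-tailed: double the area of the tail cut off by |Z|
        p = 2 * (1 - norm.cdf(abs(Z)))
    return Z, p

Z, p = z_test_pvalue(199.2, 202.5, 19.63, 85, tail="left")
print(f"Z = {Z:.2f}, p = {p:.4f}")  # Z ~ -1.55, p ~ 0.0606
# Since p > alpha = 0.05, H0 is not rejected, matching the worked solution.
```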
8.4: Small Sample Tests for a Population Mean
Learning Objectives • To learn how to apply the five-step test procedure for test of hypotheses concerning a population mean when the sample size is small. In the previous section hypothesis testing for population means was described in the case of large samples. The statistical validity of the tests was ensured by the Central Limit Theorem, with essentially no assumptions on the distribution of the population. When sample sizes are small, as is often the case in practice, the Central Limit Theorem does not apply. One must then impose stricter assumptions on the population to give statistical validity to the test procedure. One common assumption is that the population from which the sample is taken has a normal probability distribution to begin with. Under such circumstances, if the population standard deviation is known, then the test statistic $\frac{(\bar{x}-\mu _0)}{\sigma /\sqrt{n}} \nonumber$ still has the standard normal distribution, as in the previous two sections. If $\sigma$ is unknown and is approximated by the sample standard deviation $s$, then the resulting test statistic $\dfrac{(\bar{x}-\mu _0)}{s/\sqrt{n}} \nonumber$ follows Student’s $t$-distribution with $n-1$ degrees of freedom. Standardized Test Statistics for Small Sample Hypothesis Tests Concerning a Single Population Mean If $\sigma$ is known: $Z=\frac{\bar{x}-\mu _0}{\sigma /\sqrt{n}} \nonumber$ If $\sigma$ is unknown: $T=\frac{\bar{x}-\mu _0}{s /\sqrt{n}} \nonumber$ • The first test statistic ($\sigma$ known) has the standard normal distribution. • The second test statistic ($\sigma$ unknown) has Student’s $t$-distribution with $n-1$ degrees of freedom. • The population must be normally distributed. The distribution of the second standardized test statistic (the one containing $s$) and the corresponding rejection region for each form of the alternative hypothesis (left-tailed, right-tailed, or two-tailed), is shown in Figure $1$. This is just like Figure 8.2.1 except that now the critical values are from the $t$-distribution. Figure 8.2.1 still applies to the first standardized test statistic (the one containing $\sigma$) since it follows the standard normal distribution. The $p$-value of a test of hypotheses for which the test statistic has Student’s $t$-distribution can be computed using statistical software, but it is impractical to do so using tables, since that would require $30$ tables analogous to Figure 7.1.5, one for each degree of freedom from $1$ to $30$. Figure 7.1.6 can be used to approximate the $p$-value of such a test, and this is typically adequate for making a decision using the $p$-value approach to hypothesis testing, although not always. For this reason the tests in the two examples in this section will be made following the critical value approach to hypothesis testing summarized at the end of Section 8.1, but after each one we will show how the $p$-value approach could have been used. Example $1$ The price of a popular tennis racket at a national chain store is $\$179$. Portia bought five of the same racket at an online auction site for the following prices: $\$155\; \$179\; \$175\; \$175\; \$161 \nonumber$ Assuming that the auction prices of rackets are normally distributed, determine whether there is sufficient evidence in the sample, at the $5\%$ level of significance, to conclude that the average price of the racket is less than $\$179$ if purchased at an online auction. Solution • Step 1.
The assertion for which evidence must be provided is that the average online price $\mu$ is less than the average price in retail stores, so the hypothesis test is $H_0: \mu =179\ \text{vs}\ H_a: \mu <179\; @\; \alpha =0.05 \nonumber$ • Step 2. The sample is small and the population standard deviation is unknown. Thus the test statistic is $T=\frac{\bar{x}-\mu _0}{s /\sqrt{n}} \nonumber$ and has the Student $t$-distribution with $n-1=5-1=4$ degrees of freedom. • Step 3. From the data we compute $\bar{x}=169$ and $s=10.39$. Inserting these values into the formula for the test statistic gives $T=\frac{\bar{x}-\mu _0}{s /\sqrt{n}}=\frac{169-179}{10.39/\sqrt{5}}=-2.152 \nonumber$ • Step 4. Since the symbol in $H_a$ is “$<$” this is a left-tailed test, so there is a single critical value, $-t_\alpha =-t_{0.05}[df=4]$. Reading from the row labeled $df=4$ in Figure 7.1.6 its value is $-2.132$. The rejection region is $(-\infty ,-2.132]$. • Step 5. As shown in Figure $2$ the test statistic falls in the rejection region. The decision is to reject $H_0$. In the context of the problem our conclusion is: The data provide sufficient evidence, at the $5\%$ level of significance, to conclude that the average price of such rackets purchased at online auctions is less than $\$179$. To perform the test in Example $1$ using the $p$-value approach, look in the row in Figure 7.1.6 with the heading $df=4$ and search for the two $t$-values that bracket the unsigned value $2.152$ of the test statistic. They are $2.132$ and $2.776$, in the columns with headings $t_{0.050}$ and $t_{0.025}$. They cut off right tails of area $0.050$ and $0.025$, so because $2.152$ is between them it must cut off a tail of area between $0.050$ and $0.025$. By symmetry $-2.152$ cuts off a left tail of area between $0.050$ and $0.025$, hence the $p$-value corresponding to $t=-2.152$ is between $0.025$ and $0.05$. Although its precise value is unknown, it must be less than $\alpha =0.05$, so the decision is to reject $H_0$. Example $2$ A small component in an electronic device has two small holes where another tiny part is fitted. In the manufacturing process the average distance between the two holes must be tightly controlled at $0.02$ mm, else many units would be defective and wasted. Many times throughout the day quality control engineers take a small sample of the components from the production line, measure the distance between the two holes, and make adjustments if needed. Suppose at one time four units are taken and the distances are measured as $0.021\; \; 0.019\; \; 0.023\; \; 0.020$ Determine, at the $1\%$ level of significance, if there is sufficient evidence in the sample to conclude that an adjustment is needed. Assume the distances of interest are normally distributed. Solution • Step 1. The assumption is that the process is under control unless there is strong evidence to the contrary. Since a deviation of the average distance to either side is undesirable, the relevant test is $H_0: \mu =0.02\ \text{vs}\ H_a: \mu \neq 0.02\; @\; \alpha =0.01 \nonumber$ where $\mu$ denotes the mean distance between the holes. • Step 2. The sample is small and the population standard deviation is unknown. Thus the test statistic is $T=\frac{\bar{x}-\mu _0}{s /\sqrt{n}} \nonumber$ and has the Student $t$-distribution with $n-1=4-1=3$ degrees of freedom. • Step 3. From the data we compute $\bar{x}=0.02075$ and $s=0.00171$.
Inserting these values into the formula for the test statistic gives $T=\frac{\bar{x}-\mu _0}{s /\sqrt{n}}=\frac{0.02075-0.02}{0.00171/\sqrt{4}}=0.877 \nonumber$ • Step 4. Since the symbol in $H_a$ is “$\neq$” this is a two-tailed test, so there are two critical values, $\pm t_{\alpha /2}=\pm t_{0.005}[df=3]$. Reading from the row in Figure 7.1.6 labeled $df=3$ their values are $\pm 5.841$. The rejection region is $(-\infty ,-5.841]\cup [5.841,\infty )$. • Step 5. As shown in Figure $3$ the test statistic does not fall in the rejection region. The decision is not to reject $H_0$. In the context of the problem our conclusion is: The data do not provide sufficient evidence, at the $1\%$ level of significance, to conclude that the mean distance between the holes in the component differs from $0.02$ mm. To perform the test in "Example $2$" using the $p$-value approach, look in the row in Figure 7.1.6 with the heading $df=3$ and search for the two $t$-values that bracket the value $0.877$ of the test statistic. Actually $0.877$ is smaller than the smallest number in the row, which is $0.978$, in the column with heading $t_{0.200}$. The value $0.978$ cuts off a right tail of area $0.200$, so because $0.877$ is to its left it must cut off a tail of area greater than $0.200$. Thus the $p$-value, which is double the area cut off (since the test is two-tailed), is greater than $0.400$. Although its precise value is unknown, it must be greater than $\alpha =0.01$, so the decision is not to reject $H_0$. Key Takeaway • There are two formulas for the test statistic in testing hypotheses about a population mean with small samples. One test statistic follows the standard normal distribution, the other Student’s $t$-distribution. • The population standard deviation is used if it is known, otherwise the sample standard deviation is used. • Either five-step procedure, critical value or $p$-value approach, is used with either test statistic.
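With software the exact $p$-value of a $t$ test is available, where the printed table can only bracket it. The following Python sketch (assuming scipy is available) reworks Example $1$, the racket prices.

```python
# Sketch: small-sample t test of Example 1 (racket prices), with the
# exact p-value that the table brackets between 0.025 and 0.050.
from statistics import mean, stdev
from scipy.stats import t

prices = [155, 179, 175, 175, 161]
mu_0, alpha = 179, 0.05
n = len(prices)
xbar, s = mean(prices), stdev(prices)  # 169 and about 10.39

T = (xbar - mu_0) / (s / n ** 0.5)     # about -2.152
t_crit = -t.ppf(1 - alpha, df=n - 1)   # -t_{0.05}[df=4], about -2.132
p = t.cdf(T, df=n - 1)                 # left-tail area, just under 0.05

print(f"T = {T:.3f}, critical value {t_crit:.3f}, p = {p:.3f}")
# T falls in (-inf, -2.132] and p < 0.05: reject H0, as in the text.
```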
8.5: Large Sample Tests for a Population Proportion
Learning Objectives • To learn how to apply the five-step critical value test procedure for test of hypotheses concerning a population proportion. • To learn how to apply the five-step $p$-value test procedure for test of hypotheses concerning a population proportion. Both the critical value approach and the $p$-value approach can be applied to test hypotheses about a population proportion $p$. The null hypothesis will have the form $H_0 : p = p_0$ for some specific number $p_0$ between $0$ and $1$. The alternative hypothesis will be one of the three inequalities • $p <p_0$, • $p>p_0$, or • $p\neq p_0$ for the same number $p_0$ that appears in the null hypothesis. The information in Section 6.3 gives the following formula for the test statistic and its distribution. In the formula $p_0$ is the numerical value of $p$ that appears in the two hypotheses, $q_0=1-p_0$, $\hat{p}$ is the sample proportion, and $n$ is the sample size. Remember that the condition that the sample be large is not that $n$ be at least $30$ but that the interval $\left[ \hat{p} -3 \sqrt{ \dfrac{\hat{p} (1-\hat{p} )}{n}} , \hat{p} + 3 \sqrt{ \dfrac{\hat{p} (1-\hat{p} )}{n}} \right] \nonumber$ lie wholly within the interval $[0,1]$. Standardized Test Statistic for Large Sample Hypothesis Tests Concerning a Single Population Proportion $Z = \dfrac{\hat{p} - p_0}{\sqrt{\dfrac{p_0q_0}{n}}} \label{eq2}$ The test statistic has the standard normal distribution. The distribution of the standardized test statistic and the corresponding rejection region for each form of the alternative hypothesis (left-tailed, right-tailed, or two-tailed), is shown in Figure $1$. Example $1$ A soft drink maker claims that a majority of adults prefer its leading beverage over that of its main competitor’s. To test this claim $500$ randomly selected people were given the two beverages in random order to taste. Among them, $270$ preferred the soft drink maker’s brand, $211$ preferred the competitor’s brand, and $19$ could not make up their minds. Determine whether there is sufficient evidence, at the $5\%$ level of significance, to support the soft drink maker’s claim against the default that the population is evenly split in its preference. Solution We will use the critical value approach to perform the test. The same test will be performed using the $p$-value approach in Example $3$. We must check that the sample is sufficiently large to validly perform the test. Since $\hat{p} =270/500=0.54$, $\sqrt{ \dfrac{\hat{p} (1-\hat{p} )}{n}} =\sqrt{ \dfrac{(0.54)(0.46)}{500}} \approx 0.02 \nonumber$ hence \begin{align} & \left[ \hat{p} -3\sqrt{ \dfrac{\hat{p} (1-\hat{p} )}{n}} ,\hat{p} +3\sqrt{ \dfrac{\hat{p} (1-\hat{p} )}{n}} \right] \\ &=[0.54-(3)(0.02),0.54+(3)(0.02)] \\ &=[0.48, 0.60] \subset [0,1] \end{align} \nonumber so the sample is sufficiently large. • Step 1. The relevant test is $H_0 : p = 0.50\ \text{vs}\ H_a : p > 0.50\; @\; \alpha =0.05 \nonumber$ where $p$ denotes the proportion of all adults who prefer the company’s beverage over that of its competitor’s beverage. • Step 2. The test statistic (Equation \ref{eq2}) is $Z=\dfrac{\hat{p} -p_0}{\sqrt{ \dfrac{p_0q_0}{n}}} \nonumber$ and has the standard normal distribution. • Step 3. The value of the test statistic is \begin{align} Z &=\dfrac{\hat{p} -p_0}{\sqrt{ \dfrac{p_0q_0}{n}}} \\[6pt] &= \dfrac{0.54-0.50}{\sqrt{\dfrac{(0.50)(0.50)}{500}}} \\[6pt] &=1.789 \end{align} \nonumber • Step 4. Since the symbol in $H_a$ is “$>$” this is a right-tailed test, so there is a single critical value, $z_{\alpha }=z_{0.05}$.
Reading from the last line in Figure 7.1.6 its value is $1.645$. The rejection region is $[1.645,\infty )$. • Step 5. As shown in Figure $2$ the test statistic falls in the rejection region. The decision is to reject $H_0$. In the context of the problem our conclusion is: The data provide sufficient evidence, at the $5\%$ level of significance, to conclude that a majority of adults prefer the company’s beverage to that of their competitor’s. Example $2$ Globally the long-term proportion of newborns who are male is $51.46\%$. A researcher believes that the proportion of boys at birth changes under severe economic conditions. To test this belief randomly selected birth records of $5,000$ babies born during a period of economic recession were examined. It was found in the sample that $52.55\%$ of the newborns were boys. Determine whether there is sufficient evidence, at the $10\%$ level of significance, to support the researcher’s belief. Solution We will use the critical value approach to perform the test. The same test will be performed using the $p$-value approach in Example $4$. The sample is sufficiently large to validly perform the test since $\sqrt{ \dfrac{\hat{p} (1-\hat{p} )}{n}} =\sqrt{ \dfrac{(0.5255)(0.4745)}{5000}} \approx 0.01 \nonumber$ hence \begin{align} & \left[ \hat{p} -3\sqrt{ \dfrac{\hat{p} (1-\hat{p} )}{n}} ,\hat{p} +3\sqrt{ \dfrac{\hat{p} (1-\hat{p} )}{n}} \right] \\ &=[0.5255-0.03,0.5255+0.03] \\ &=[0.4955,0.5555] \subset [0,1] \end{align} \nonumber • Step 1. Let $p$ be the true proportion of boys among all newborns during the recession period. The burden of proof is to show that severe economic conditions change it from the historic long-term value of $0.5146$ rather than to show that it stays the same, so the hypothesis test is $H_0 : p = 0.5146\ \text{vs}\ H_a : p \neq 0.5146\; @\; \alpha =0.10 \nonumber$ • Step 2. The test statistic (Equation \ref{eq2}) is $Z=\dfrac{\hat{p} -p_0}{\sqrt{ \dfrac{p_0q_0}{n}}} \nonumber$ and has the standard normal distribution. • Step 3. The value of the test statistic is \begin{align} Z &=\dfrac{\hat{p} -p_0}{\sqrt{ \dfrac{p_0q_0}{n}}} \\[6pt] &= \dfrac{0.5255-0.5146}{\sqrt{\dfrac{(0.5146)(0.4854)}{5000}}} \\[6pt] &=1.542 \end{align} \nonumber • Step 4. Since the symbol in $H_a$ is “$\neq$” this is a two-tailed test, so there is a pair of critical values, $\pm z_{\alpha /2}=\pm z_{0.05}=\pm 1.645$. The rejection region is $(-\infty ,-1.645]\cup [1.645,\infty )$. • Step 5. As shown in Figure $3$ the test statistic does not fall in the rejection region. The decision is not to reject $H_0$. In the context of the problem our conclusion is: The data do not provide sufficient evidence, at the $10\%$ level of significance, to conclude that the proportion of newborns who are male differs from the historic proportion in times of economic recession. Example $3$ Perform the test of Example $1$ using the $p$-value approach. Solution We already know that the sample size is sufficiently large to validly perform the test. • Steps 1–3 of the five-step procedure described in Section 8.3 have already been done in Example $1$ so we will not repeat them here, but only say that we know that the test is right-tailed and that the value of the test statistic is $Z = 1.789$. • Step 4. Since the test is right-tailed the $p$-value is the area under the standard normal curve cut off by the observed test statistic, $Z = 1.789$, as illustrated in Figure $4$. By Figure 7.1.5 that area and therefore the $p$-value is $1-0.9633=0.0367$. • Step 5.
Since the $p$-value is less than $\alpha =0.05$ the decision is to reject $H_0$. Example $4$ Perform the test of Example $2$ using the $p$-value approach. Solution We already know that the sample size is sufficiently large to validly perform the test. • Steps 1–3 of the five-step procedure described in Section 8.3 have already been done in Example $2$. They tell us that the test is two-tailed and that the value of the test statistic is $Z = 1.542$. • Step 4. Since the test is two-tailed the $p$-value is the double of the area under the standard normal curve cut off by the observed test statistic, $Z = 1.542$. By Figure 7.1.5 that area is $1-0.9382=0.0618$, as illustrated in Figure $5$, hence the $p$-value is $2\times 0.0618=0.1236$. • Step 5. Since the $p$-value is greater than $\alpha =0.10$ the decision is not to reject $H_0$. Key Takeaway • There is one formula for the test statistic in testing hypotheses about a population proportion. The test statistic follows the standard normal distribution. • Either five-step procedure, critical value or $p$-value approach, can be used.
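The proportion test is equally short in software. The Python sketch below (assuming scipy is available) reruns Example $1$, the soft drink taste test, including the large-sample check.

```python
# Sketch: large-sample test for a proportion, Example 1 (soft drink maker).
from scipy.stats import norm

x, n, p_0, alpha = 270, 500, 0.50, 0.05
p_hat = x / n  # 0.54

# Large-sample check: the three-standard-deviation interval around
# p_hat must lie wholly inside [0, 1].
se_hat = (p_hat * (1 - p_hat) / n) ** 0.5
assert 0 <= p_hat - 3 * se_hat and p_hat + 3 * se_hat <= 1

# The test statistic uses p_0, the value asserted by H0, in its denominator.
Z = (p_hat - p_0) / (p_0 * (1 - p_0) / n) ** 0.5  # about 1.789
p_value = 1 - norm.cdf(Z)                         # right-tailed, ~ 0.0368
print(f"Z = {Z:.3f}, p-value = {p_value:.4f}")
# Z >= z_{0.05} = 1.645 and p-value < alpha: reject H0, as in the text.
```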
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 8.1: The Elements of Hypothesis Testing Q8.1.1 State the null and alternative hypotheses for each of the following situations. (That is, identify the correct number $\mu _0$ and write $H_0:\mu =\mu _0$ and the appropriate analogous expression for $H_a$.) 1. The average July temperature in a region historically has been $74.5^{\circ}F$. Perhaps it is higher now. 2. The average weight of a female airline passenger with luggage was $145$ pounds ten years ago. The FAA believes it to be higher now. 3. The average stipend for doctoral students in a particular discipline at a state university is $\$14,756$. The department chairman believes that the national average is higher. 4. The average room rate in hotels in a certain region is $\$82.53$. A travel agent believes that the average in a particular resort area is different. 5. The average farm size in a predominantly rural state was $69.4$ acres. The secretary of agriculture of that state asserts that it is less today. Q8.1.2 State the null and alternative hypotheses for each of the following situations. (That is, identify the correct number $\mu _0$ and write $H_0:\mu =\mu _0$ and the appropriate analogous expression for $H_a$.) 1. The average time workers spent commuting to work in Verona five years ago was $38.2$ minutes. The Verona Chamber of Commerce asserts that the average is less now. 2. The mean salary for all men in a certain profession is $\$58,291$. A special interest group thinks that the mean salary for women in the same profession is different. 3. The accepted figure for the caffeine content of an $8$-ounce cup of coffee is $133$ mg. A dietitian believes that the average for coffee served in local restaurants is higher. 4. The average yield per acre for all types of corn in a recent year was $161.9$ bushels. An economist believes that the average yield per acre is different this year. 5. An industry association asserts that the average age of all self-described fly fishermen is $42.8$ years. A sociologist suspects that it is higher. Q8.1.3 Describe the two types of errors that can be made in a test of hypotheses. Q8.1.4 Under what circumstance is a test of hypotheses certain to yield a correct decision? Answers 1. $H_0:\mu =74.5\; vs\; H_a:\mu >74.5$ 2. $H_0:\mu =145\; vs\; H_a:\mu >145$ 3. $H_0:\mu =14756\; vs\; H_a:\mu >14756$ 4. $H_0:\mu =82.53\; vs\; H_a:\mu \neq 82.53$ 5. $H_0:\mu =69.4\; vs\; H_a:\mu <69.4$ 1. A Type I error is made when a true $H_0$ is rejected. A Type II error is made when a false $H_0$ is not rejected. 8.2: Large Sample Tests for a Population Mean Basic 1. Find the rejection region (for the standardized test statistic) for each hypothesis test. 1. $H_0:\mu =27\; vs\; H_a:\mu <27\; @\; \alpha =0.05$ 2. $H_0:\mu =52\; vs\; H_a:\mu \neq 52\; @\; \alpha =0.05$ 3. $H_0:\mu =-105\; vs\; H_a:\mu >-105\; @\; \alpha =0.10$ 4. $H_0:\mu =78.8\; vs\; H_a:\mu \neq 78.8\; @\; \alpha =0.10$ 2. Find the rejection region (for the standardized test statistic) for each hypothesis test. 1. $H_0:\mu =17\; vs\; H_a:\mu <17\; @\; \alpha =0.01$ 2. $H_0:\mu =880\; vs\; H_a:\mu \neq 880\; @\; \alpha =0.01$ 3. $H_0:\mu =-12\; vs\; H_a:\mu >-12\; @\; \alpha =0.05$ 4. $H_0:\mu =21.1\; vs\; H_a:\mu \neq 21.1\; @\; \alpha =0.05$ 3. Find the rejection region (for the standardized test statistic) for each hypothesis test. Identify the test as left-tailed, right-tailed, or two-tailed. 1.
$H_0:\mu =141\; vs\; H_a:\mu <141\; @\; \alpha =0.20$ 2. $H_0:\mu =-54\; vs\; H_a:\mu <-54\; @\; \alpha =0.05$ 3. $H_0:\mu =98.6\; vs\; H_a:\mu \neq 98.6\; @\; \alpha =0.05$ 4. $H_0:\mu =3.8\; vs\; H_a:\mu >3.8\; @\; \alpha =0.001$ 4. Find the rejection region (for the standardized test statistic) for each hypothesis test. Identify the test as left-tailed, right-tailed, or two-tailed. 1. $H_0:\mu =-62\; vs\; H_a:\mu \neq -62\; @\; \alpha =0.005$ 2. $H_0:\mu =73\; vs\; H_a:\mu >73\; @\; \alpha =0.001$ 3. $H_0:\mu =1124\; vs\; H_a:\mu <1124\; @\; \alpha =0.001$ 4. $H_0:\mu =0.12\; vs\; H_a:\mu \neq 0.12\; @\; \alpha =0.001$ 5. Compute the value of the test statistic for the indicated test, based on the information given. 1. Testing $H_0:\mu =72.2\; vs\; H_a:\mu >72.2,\; \sigma \; \text{unknown}\; n=55,\; \bar{x}=75.1,\; s=9.25$ 2. Testing $H_0:\mu =58\; vs\; H_a:\mu >58,\; \sigma =1.22\; n=40,\; \bar{x}=58.5,\; s=1.29$ 3. Testing $H_0:\mu =-19.5\; vs\; H_a:\mu <-19.5,\; \sigma \; \text{unknown}\; n=30,\; \bar{x}=-23.2,\; s=9.55$ 4. Testing $H_0:\mu =805\; vs\; H_a:\mu \neq 805,\; \sigma =37.5\; n=75,\; \bar{x}=818,\; s=36.2$ 6. Compute the value of the test statistic for the indicated test, based on the information given. 1. Testing $H_0:\mu =342\; vs\; H_a:\mu <342,\; \sigma =11.2\; n=40,\; \bar{x}=339,\; s=10.3$ 2. Testing $H_0:\mu =105\; vs\; H_a:\mu >105,\; \sigma =5.3\; n=80,\; \bar{x}=107,\; s=5.1$ 3. Testing $H_0:\mu =-13.5\; vs\; H_a:\mu \neq -13.5,\; \sigma \; \text{unknown}\; n=32,\; \bar{x}=-13.8,\; s=1.5$ 4. Testing $H_0:\mu =28\; vs\; H_a:\mu \neq 28,\; \sigma \; \text{unknown}\; n=68,\; \bar{x}=27.8,\; s=1.3$ 7. Perform the indicated test of hypotheses, based on the information given. 1. Test $H_0:\mu =212\; vs\; H_a:\mu <212\; @\; \alpha =0.10,\; \sigma \; \text{unknown}\; n=36,\; \bar{x}=211.2,\; s=2.2$ 2. Test $H_0:\mu =-18\; vs\; H_a:\mu >-18\; @\; \alpha =0.05,\; \sigma =3.3\; n=44,\; \bar{x}=-17.2,\; s=3.1$ 3. Test $H_0:\mu =24\; vs\; H_a:\mu \neq 24\; @\; \alpha =0.02,\; \sigma \; \text{unknown}\; n=50,\; \bar{x}=22.8,\; s=1.9$ 8. Perform the indicated test of hypotheses, based on the information given. 1. Test $H_0:\mu =105\; vs\; H_a:\mu >105\; @\; \alpha =0.05,\; \sigma \; \text{unknown}\; n=30,\; \bar{x}=108,\; s=7.2$ 2. Test $H_0:\mu =21.6\; vs\; H_a:\mu <21.6\; @\; \alpha =0.01,\; \sigma \; \text{unknown}\; n=78,\; \bar{x}=20.5,\; s=3.9$ 3. Test $H_0:\mu =-375\; vs\; H_a:\mu \neq -375\; @\; \alpha =0.01,\; \sigma =18.5\; n=31,\; \bar{x}=-388,\; s=18.0$ Applications 1. In the past the average length of an outgoing telephone call from a business office has been $143$ seconds. A manager wishes to check whether that average has decreased after the introduction of policy changes. A sample of $100$ telephone calls produced a mean of $133$ seconds, with a standard deviation of $35$ seconds. Perform the relevant test at the $1\%$ level of significance. 2. The government of an impoverished country reports the mean age at death among those who have survived to adulthood as $66.2$ years. A relief agency examines $30$ randomly selected deaths and obtains a mean of $62.3$ years with standard deviation $8.1$ years. Test whether the agency’s data support the alternative hypothesis, at the $1\%$ level of significance, that the population mean is less than $66.2$. 3. The average household size in a certain region several years ago was $3.14$ persons. A sociologist wishes to test, at the $5\%$ level of significance, whether it is different now. 
Perform the test using the information collected by the sociologist: in a random sample of $75$ households, the average size was $2.98$ persons, with sample standard deviation $0.82$ person. 4. The recommended daily calorie intake for teenage girls is $2,200$ calories/day. A nutritionist at a state university believes the average daily caloric intake of girls in that state to be lower. Test that hypothesis, at the $5\%$ level of significance, against the null hypothesis that the population average is $2,200$ calories/day using the following sample data: $n=36,\; \bar{x}=2,150,\; s=203$ 5. An automobile manufacturer recommends oil change intervals of $3,000$ miles. To compare actual intervals to the recommendation, the company randomly samples records of $50$ oil changes at service facilities and obtains sample mean $3,752$ miles with sample standard deviation $638$ miles. Determine whether the data provide sufficient evidence, at the $5\%$ level of significance, that the population mean interval between oil changes exceeds $3,000$ miles. 6. A medical laboratory claims that the mean turn-around time for performance of a battery of tests on blood samples is $1.88$ business days. The manager of a large medical practice believes that the actual mean is larger. A random sample of $45$ blood samples yielded mean $2.09$ and sample standard deviation $0.13$ day. Perform the relevant test at the $10\%$ level of significance, using these data. 7. A grocery store chain has as one standard of service that the mean time customers wait in line to begin checking out not exceed $2$ minutes. To verify the performance of a store the company measures the waiting time in $30$ instances, obtaining mean time $2.17$ minutes with standard deviation $0.46$ minute. Use these data to test the null hypothesis that the mean waiting time is $2$ minutes versus the alternative that it exceeds $2$ minutes, at the $10\%$ level of significance. 8. A magazine publisher tells potential advertisers that the mean household income of its regular readership is $\$61,500$. An advertising agency wishes to test this claim against the alternative that the mean is smaller. A sample of $40$ randomly selected regular readers yields mean income $\$59,800$ with standard deviation $\$5,850$. Perform the relevant test at the $1\%$ level of significance. 9. Authors of a computer algebra system wish to compare the speed of a new computational algorithm to the currently implemented algorithm. They apply the new algorithm to $50$ standard problems; it averages $8.16$ seconds with standard deviation $0.17$ second. The current algorithm averages $8.21$ seconds on such problems. Test, at the $1\%$ level of significance, the alternative hypothesis that the new algorithm has a lower average time than the current algorithm. 10. A random sample of the starting salaries of $35$ randomly selected graduates with bachelor’s degrees last year gave sample mean and standard deviation $\$41,202$ and $\$7,621$, respectively. Test whether the data provide sufficient evidence, at the $5\%$ level of significance, to conclude that the mean starting salary of all graduates last year is less than the mean of all graduates two years before, $\$43,589$. Additional Exercises 1. The mean household income in a region served by a chain of clothing stores is $\$48,750$. In a sample of $40$ customers taken at various stores the mean income of the customers was $\$51,505$ with standard deviation $\$6,852$. 1.
Test at the $10\%$ level of significance the null hypothesis that the mean household income of customers of the chain is $\$48,750$ against the alternative that it is different from $\$48,750$. 2. The sample mean is greater than $\$48,750$, suggesting that the actual mean of people who patronize this store is greater than $\$48,750$. Perform this test, also at the $10\%$ level of significance. (The computation of the test statistic done in part (a) still applies here.) 2. The labor charges for repairs at an automobile service center are based on a standard time specified for each type of repair. The time specified for replacement of a universal joint in a drive shaft is one hour. The manager reviews a sample of $30$ such repairs. The average of the actual repair times is $0.86$ hour with standard deviation $0.32$ hour. 1. Test, at the $1\%$ level of significance, whether the actual mean time for this repair differs from one hour. 2. The sample mean is less than one hour, suggesting that the mean actual time for this repair is less than one hour. Perform this test, also at the $1\%$ level of significance. (The computation of the test statistic done in part (a) still applies here.) Large Data Set Exercises Large Data Set missing from the original 1. Large $\text{Data Set 1}$ records the SAT scores of $1,000$ students. Regarding it as a random sample of all high school students, use it to test the hypothesis that the population mean exceeds $1,510$, at the $1\%$ level of significance. (The null hypothesis is that $\mu =1510$). 2. Large $\text{Data Set 1}$ records the GPAs of $1,000$ college students. Regarding it as a random sample of all college students, use it to test the hypothesis that the population mean is less than $2.50$, at the $10\%$ level of significance. (The null hypothesis is that $\mu =2.50$). 3. Large $\text{Data Set 1}$ lists the SAT scores of $1,000$ students. 1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population mean $\mu$. 2. Regard the first $50$ students in the data set as a random sample drawn from the population of part (a) and use it to test the hypothesis that the population mean exceeds $1,510$, at the $10\%$ level of significance. (The null hypothesis is that $\mu =1510$). 3. Is your conclusion in part (b) in agreement with the true state of nature (which by part (a) you know), or is your decision in error? If your decision is in error, is it a Type I error or a Type II error? 4. Large $\text{Data Set 1}$ lists the GPAs of $1,000$ students. 1. Regard the data as arising from a census of all freshmen at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured. Compute the population mean $\mu$. 2. Regard the first $50$ students in the data set as a random sample drawn from the population of part (a) and use it to test the hypothesis that the population mean is less than $2.50$, at the $10\%$ level of significance. (The null hypothesis is that $\mu =2.50$). 3. Is your conclusion in part (b) in agreement with the true state of nature (which by part (a) you know), or is your decision in error? If your decision is in error, is it a Type I error or a Type II error? Answers 1. $Z\leq -1.645$ 2. $Z\leq -1.96\; or\; Z\geq 1.96$ 3. $Z\geq 1.28$ 4. $Z\leq -1.645\; or\; Z\geq 1.645$ 1. $Z\leq -0.84$ 2. $Z\leq -1.645$ 3. $Z\leq -1.96\; or\; Z\geq 1.96$ 4. $Z\geq 3.1$ 1. $Z = 2.325$ 2. $Z = 2.592$ 3.
$Z = -2.122$ 4. $Z = 3.002$ 1. $Z = -2.18,\; -z_{0.10}=-1.28,\; \text{reject}\; H_0$ 2. $Z = 1.61,\; z_{0.05}=1.645,\; \text{do not reject}\; H_0$ 3. $Z = -4.47,\; -z_{0.01}=-2.33,\; \text{reject}\; H_0$ 1. $Z = -2.86,\; -z_{0.01}=-2.33,\; \text{reject}\; H_0$ 2. $Z = -1.69,\; -z_{0.025}=-1.96,\; \text{do not reject}\; H_0$ 3. $Z = 8.33,\; z_{0.05}=1.645,\; \text{reject}\; H_0$ 4. $Z = 2.02,\; z_{0.10}=1.28,\; \text{reject}\; H_0$ 5. $Z = -2.08,\; -z_{0.01}=-2.33,\; \text{do not reject}\; H_0$ 1. $Z =2.54,\; z_{0.05}=1.645,\; \text{reject}\; H_0$ 2. $Z = 2.54,\; z_{0.10}=1.28,\; \text{reject}\; H_0$ 6. $H_0:\mu =1510\; vs\; H_a:\mu >1510$. Test Statistic: $Z = 2.7882$. Rejection Region: $[2.33,\infty )$. Decision: Reject $H_0$. 1. $\mu _0=1528.74$ 2. $H_0:\mu =1510\; vs\; H_a:\mu >1510$. Test Statistic: $Z = -1.41$. Rejection Region: $[1.28,\infty )$. Decision: Fail to reject $H_0$. 3. No, it is a Type II error. 8.3: The Observed Significance of a Test Basic 1. Compute the observed significance of each test. 1. Testing $H_0:\mu =54.7\; vs\; H_a:\mu <54.7,\; \text{test statistic}\; z=-1.72$ 2. Testing $H_0:\mu =195\; vs\; H_a:\mu \neq 195,\; \text{test statistic}\; z=-2.07$ 3. Testing $H_0:\mu =-45\; vs\; H_a:\mu >-45,\; \text{test statistic}\; z=2.54$ 2. Compute the observed significance of each test. 1. Testing $H_0:\mu =0\; vs\; H_a:\mu \neq 0,\; \text{test statistic}\; z=2.82$ 2. Testing $H_0:\mu =18.4\; vs\; H_a:\mu <18.4,\; \text{test statistic}\; z=-1.74$ 3. Testing $H_0:\mu =63.85\; vs\; H_a:\mu >63.85,\; \text{test statistic}\; z=1.93$ 3. Compute the observed significance of each test. (Some of the information given might not be needed.) 1. Testing $H_0:\mu =27.5\; vs\; H_a:\mu >27.5,\; n=49,\; \bar{x}=28.9,\; s=3.14,\; \text{test statistic}\; z=3.12$ 2. Testing $H_0:\mu =581\; vs\; H_a:\mu <581,\; n=32,\; \bar{x}=560,\; s=47.8,\; \text{test statistic}\; z=-2.49$ 3. Testing $H_0:\mu =138.5\; vs\; H_a:\mu \neq 138.5,\; n=44,\; \bar{x}=137.6,\; s=2.45,\; \text{test statistic}\; z=-2.44$ 4. Compute the observed significance of each test. (Some of the information given might not be needed.) 1. Testing $H_0:\mu =-17.9\; vs\; H_a:\mu <-17.9,\; n=34,\; \bar{x}=-18.2,\; s=0.87,\; \text{test statistic}\; z=-2.01$ 2. Testing $H_0:\mu =5.5\; vs\; H_a:\mu \neq 5.5,\; n=56,\; \bar{x}=7.4,\; s=4.82,\; \text{test statistic}\; z=2.95$ 3. Testing $H_0:\mu =1255\; vs\; H_a:\mu >1255,\; n=152,\; \bar{x}=1257,\; s=7.5,\; \text{test statistic}\; z=3.29$ 5. Make the decision in each test, based on the information provided. 1. Testing $H_0:\mu =82.9\; vs\; H_a:\mu <82.9\; @\; \alpha =0.05$, observed significance $p=0.038$ 2. Testing $H_0:\mu =213.5\; vs\; H_a:\mu \neq 213.5\; @\; \alpha =0.01$, observed significance $p=0.038$ 6. Make the decision in each test, based on the information provided. 1. Testing $H_0:\mu =31.4\; vs\; H_a:\mu >31.4\; @\; \alpha =0.10$, observed significance $p=0.062$ 2. Testing $H_0:\mu =-75.5\; vs\; H_a:\mu <-75.5\; @\; \alpha =0.05$, observed significance $p=0.062$ Applications 1. A lawyer believes that a certain judge imposes prison sentences for property crimes that are longer than the state average $11.7$ months. He randomly selects $36$ of the judge’s sentences and obtains mean $13.8$ and standard deviation $3.9$ months. 1. Perform the test at the $1\%$ level of significance using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $1\%$ level of significance using the $p$-value approach. 
You need not repeat the first three steps, already done in part (a). 2. In a recent year the mean fuel economy of all passenger vehicles was $19.8$ mpg. A trade organization sampled $50$ passenger vehicles for fuel economy and obtained a sample mean of $20.1$ mpg with standard deviation $2.45$ mpg. The sample mean $20.1$ exceeds $19.8$, but perhaps the increase is only a result of sampling error. 1. Perform the relevant test of hypotheses at the $20\%$ level of significance using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $20\%$ level of significance using the $p$-value approach. You need not repeat the first three steps, already done in part (a). 3. The mean score on a $25$-point placement exam in mathematics used for the past two years at a large state university is $14.3$. The placement coordinator wishes to test whether the mean score on a revised version of the exam differs from $14.3$. She gives the revised exam to $30$ entering freshmen early in the summer; the mean score is $14.6$ with standard deviation $2.4$. 1. Perform the test at the $10\%$ level of significance using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $10\%$ level of significance using the $p$-value approach. You need not repeat the first three steps, already done in part (a). 4. The mean increase in word family vocabulary among students in a one-year foreign language course is $576$ word families. In order to estimate the effect of a new type of class scheduling, an instructor monitors the progress of $60$ students; the sample mean increase in word family vocabulary of these students is $542$ word families with sample standard deviation $18$ word families. 1. Test at the $5\%$ level of significance whether the mean increase with the new class scheduling is different from $576$ word families, using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $5\%$ level of significance using the $p$-value approach. You need not repeat the first three steps, already done in part (a). 5. The mean yield for hard red winter wheat in a certain state is $44.8$ bu/acre. In a pilot program a modified growing scheme was introduced on $35$ independent plots. The result was a sample mean yield of $45.4$ bu/acre with sample standard deviation $1.6$ bu/acre, an apparent increase in yield. 1. Test at the $5\%$ level of significance whether the mean yield under the new scheme is greater than $44.8$ bu/acre, using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $5\%$ level of significance using the $p$-value approach. You need not repeat the first three steps, already done in part (a). 6. The average amount of time that visitors spent looking at a retail company’s old home page on the world wide web was $23.6$ seconds. The company commissions a new home page. On its first day in place the mean time spent at the new page by $7,628$ visitors was $23.5$ seconds with standard deviation $5.1$ seconds. 1. Test at the $5\%$ level of significance whether the mean visit time for the new page is less than the former mean of $23.6$ seconds, using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $5\%$ level of significance using the $p$-value approach. You need not repeat the first three steps, already done in part (a). Answers 1. $p\text{-value}=0.0427$ 2.
$p\text{-value}=0.0384$ 3. $p\text{-value}=0.0055$ 1. $p\text{-value}=0.0009$ 2. $p\text{-value}=0.0064$ 3. $p\text{-value}=0.0146$ 1. reject $H_0$ 2. do not reject $H_0$ 1. $Z=3.23,\; z_{0.01}=2.33$, reject $H_0$ 2. $p\text{-value}=0.0006$ 3. reject $H_0$ 1. $Z=0.68,\; z_{0.05}=1.645$, do not reject $H_0$ 2. $p\text{-value}=0.4966$ 3. do not reject $H_0$ 1. $Z=2.22,\; z_{0.05}=1.645$, reject $H_0$ 2. $p\text{-value}=0.0132$ 3. reject $H_0$ 8.4: Small Sample Tests for a Population Mean Basic 1. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed. 1. $H_0: \mu =27\; vs\; H_a:\mu <27\; @\; \alpha =0.05,\; n=12,\; \sigma =2.2$ 2. $H_0: \mu =52\; vs\; H_a:\mu \neq 52\; @\; \alpha =0.05,\; n=6,\; \sigma \; \text{unknown}$ 3. $H_0: \mu =-105\; vs\; H_a:\mu >-105\; @\; \alpha =0.10,\; n=24,\; \sigma \; \text{unknown}$ 4. $H_0: \mu =78.8\; vs\; H_a:\mu \neq 78.8\; @\; \alpha =0.10,\; n=8,\; \sigma =1.7$ 2. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed. 1. $H_0: \mu =17\; vs\; H_a:\mu <17\; @\; \alpha =0.01,\; n=26,\; \sigma =0.94$ 2. $H_0: \mu =880\; vs\; H_a:\mu \neq 880\; @\; \alpha =0.01,\; n=4,\; \sigma \; \text{unknown}$ 3. $H_0: \mu =-12\; vs\; H_a:\mu >-12\; @\; \alpha =0.05,\; n=18,\; \sigma =1.1$ 4. $H_0: \mu =21.1\; vs\; H_a:\mu \neq 21.1\; @\; \alpha =0.05,\; n=23,\; \sigma \; \text{unknown}$ 3. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed. Identify the test as left-tailed, right-tailed, or two-tailed. 1. $H_0: \mu =141\; vs\; H_a:\mu <141\; @\; \alpha =0.20,\; n=29,\; \sigma \; \text{unknown}$ 2. $H_0: \mu =-54\; vs\; H_a:\mu <-54\; @\; \alpha =0.05,\; n=15,\; \sigma =1.9$ 3. $H_0: \mu =98.6\; vs\; H_a:\mu \neq 98.6\; @\; \alpha =0.05,\; n=12,\; \sigma \; \text{unknown}$ 4. $H_0: \mu =3.8\; vs\; H_a:\mu >3.8\; @\; \alpha =0.001,\; n=27,\; \sigma \; \text{unknown}$ 4. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed. Identify the test as left-tailed, right-tailed, or two-tailed. 1. $H_0: \mu =-62\; vs\; H_a:\mu \neq -62\; @\; \alpha =0.005,\; n=8,\; \sigma \; \text{unknown}$ 2. $H_0: \mu =73\; vs\; H_a:\mu >73\; @\; \alpha =0.001,\; n=22,\; \sigma \; \text{unknown}$ 3. $H_0: \mu =1124\; vs\; H_a:\mu <1124\; @\; \alpha =0.001,\; n=21,\; \sigma \; \text{unknown}$ 4. $H_0: \mu =0.12\; vs\; H_a:\mu \neq 0.12\; @\; \alpha =0.001,\; n=14,\; \sigma =0.026$ 5. A random sample of size 20 drawn from a normal population yielded the following results: $\bar{x}=49.2,\; s=1.33$ 1. Test $H_0: \mu =50\; vs\; H_a:\mu \neq 50\; @\; \alpha =0.01$. 2. Estimate the observed significance of the test in part (a) and state a decision based on the $p$-value approach to hypothesis testing. 6. A random sample of size 16 drawn from a normal population yielded the following results: $\bar{x}=-0.96,\; s=1.07$ 1. Test $H_0: \mu =0\; vs\; H_a:\mu <0\; @\; \alpha =0.001$. 2. Estimate the observed significance of the test in part (a) and state a decision based on the $p$-value approach to hypothesis testing. 7. A random sample of size 8 drawn from a normal population yielded the following results: $\bar{x}=289,\; s=46$ 1. 
Test $H_0: \mu =250\; vs\; H_a:\mu >250\; @\; \alpha =0.05$. 2. Estimate the observed significance of the test in part (a) and state a decision based on the $p$-value approach to hypothesis testing. 8. A random sample of size 12 drawn from a normal population yielded the following results: $\bar{x}=86.2,\; s=0.63$ 1. Test $H_0: \mu =85.5\; vs\; H_a:\mu \neq 85.5\; @\; \alpha =0.01$. 2. Estimate the observed significance of the test in part (a) and state a decision based on the $p$-value approach to hypothesis testing. Applications 1. Researchers wish to test the efficacy of a program intended to reduce the length of labor in childbirth. The accepted mean labor time in the birth of a first child is $15.3$ hours. The mean length of the labors of $13$ first-time mothers in a pilot program was $8.8$ hours with standard deviation $3.1$ hours. Assuming a normal distribution of times of labor, test at the $10\%$ level of significance whether the mean labor time for all women following this program is less than $15.3$ hours. 2. A dairy farm uses the somatic cell count (SCC) report on the milk it provides to a processor as one way to monitor the health of its herd. The mean SCC from five samples of raw milk was $250,000$ cells per milliliter with standard deviation $37,500$ cells/ml. Test whether these data provide sufficient evidence, at the $10\%$ level of significance, to conclude that the mean SCC of all milk produced at the dairy exceeds that in the previous report, $210,250$ cells/ml. Assume a normal distribution of SCC. 3. Six coins of the same type are discovered at an archaeological site. If their weights on average are significantly different from $5.25$ grams then it can be assumed that their provenance is not the site itself. The coins are weighed and have mean $4.73$ g with sample standard deviation $0.18$ g. Perform the relevant test at the $0.1\%$ ($\text{1/10th of}\; 1\%$) level of significance, assuming a normal distribution of weights of all such coins. 4. An economist wishes to determine whether people are driving less than in the past. In one region of the country the number of miles driven per household per year in the past was $18.59$ thousand miles. A sample of $15$ households produced a sample mean of $16.23$ thousand miles for the last year, with sample standard deviation $4.06$ thousand miles. Assuming a normal distribution of household driving distances per year, perform the relevant test at the $5\%$ level of significance. 5. The recommended daily allowance of iron for females aged $19-50$ is $18$ mg/day. A careful measurement of the daily iron intake of $15$ women yielded a mean daily intake of $16.2$ mg with sample standard deviation $4.7$ mg. 1. Assuming that daily iron intake in women is normally distributed, perform the test that the actual mean daily intake for all women is different from $18$ mg/day, at the $10\%$ level of significance. 2. The sample mean is less than $18$, suggesting that the actual population mean is less than $18$ mg/day. Perform this test, also at the $10\%$ level of significance. (The computation of the test statistic done in part (a) still applies here.) 6. The target temperature for a hot beverage the moment it is dispensed from a vending machine is $170^{\circ}F$. A sample of ten randomly selected servings from a new machine undergoing a pre-shipment inspection gave mean temperature $173^{\circ}F$ with sample standard deviation $6.3^{\circ}F$. 1.
Assuming that temperature is normally distributed, perform the test that the mean temperature of dispensed beverages is different from $170^{\circ}F$, at the $10\%$ level of significance. 2. The sample mean is greater than $170$, suggesting that the actual population mean is greater than $170^{\circ}F$. Perform this test, also at the $10\%$ level of significance. (The computation of the test statistic done in part (a) still applies here.) 7. The average number of days to complete recovery from a particular type of knee operation is $123.7$ days. From his experience a physician suspects that use of a topical pain medication might be lengthening the recovery time. He randomly selects the records of seven knee surgery patients who used the topical medication. The times to total recovery were: $\begin{matrix} 128 & 135 & 121 & 142 & 126 & 151 & 123 \end{matrix}$ 1. Assuming a normal distribution of recovery times, perform the relevant test of hypotheses at the $10\%$ level of significance. 2. Would the decision be the same at the $5\%$ level of significance? Answer either by constructing a new rejection region (critical value approach) or by estimating the $p$-value of the test in part (a) and comparing it to $\alpha$. 8. A 24-hour advance prediction of a day’s high temperature is “unbiased” if the long-term average of the error in prediction (true high temperature minus predicted high temperature) is zero. The errors in predictions made by one meteorological station for $20$ randomly selected days were: $\begin{matrix} 2 & 0 & -3 & 1 & -2\\ 1 & 0 & -1 & 1 & -1\\ -4 & 1 & 1 & -4 & 0\\ -4 & -3 & -4 & 2 & 2 \end{matrix}$ 1. Assuming a normal distribution of errors, test the null hypothesis that the predictions are unbiased (the mean of the population of all errors is $0$) versus the alternative that it is biased (the population mean is not $0$), at the $1\%$ level of significance. 2. Would the decision be the same at the $5\%$ level of significance? The $10\%$ level of significance? Answer either by constructing new rejection regions (critical value approach) or by estimating the $p$-value of the test in part (a) and comparing it to $\alpha$. 9. Pasteurized milk may not have a standardized plate count (SPC) above $20,000$ colony-forming bacteria per milliliter (cfu/ml). The mean SPC for five samples was $21,500$ cfu/ml with sample standard deviation $750$ cfu/ml. Test the null hypothesis that the mean SPC for this milk is $20,000$ versus the alternative that it is greater than $20,000$, at the $10\%$ level of significance. Assume that the SPC follows a normal distribution. 10. One water quality standard for water that is discharged into a particular type of stream or pond is that the average daily water temperature be at most $18^{\circ}F$. Six samples taken throughout the day gave the data: $\begin{matrix} 16.8 & 21.5 & 19.1 & 12.8 & 18.0 & 20.7 \end{matrix}$ The sample mean $\bar{x}=18.15$ exceeds $18$, but perhaps this is only sampling error. Determine whether the data provide sufficient evidence, at the $10\%$ level of significance, to conclude that the mean temperature for the entire day exceeds $18^{\circ}F$. Additional Exercises 1. A calculator has a built-in algorithm for generating a random number according to the standard normal distribution. Twenty-five numbers thus generated have mean $0.15$ and sample standard deviation $0.94$. Test the null hypothesis that the mean of all numbers so generated is $0$ versus the alternative that it is different from $0$, at the $20\%$ level of significance.
Assume that the numbers do follow a normal distribution. 2. At every setting a high-speed packing machine delivers a product in amounts that vary from container to container with a normal distribution of standard deviation $0.12$ ounce. To compare the amount delivered at the current setting to the desired amount of $64.1$ ounces, a quality inspector randomly selects five containers and measures the contents of each, obtaining sample mean $63.9$ ounces and sample standard deviation $0.10$ ounce. Test whether the data provide sufficient evidence, at the $5\%$ level of significance, to conclude that the mean of all containers at the current setting is less than $64.1$ ounces. 3. A manufacturing company receives a shipment of $1,000$ bolts of nominal shear strength $4,350$ lb. A quality control inspector selects five bolts at random and measures the shear strength of each. The data are: $\begin{matrix} 4,320 & 4,290 & 4,360 & 4,350 & 4,320 \end{matrix}$ 1. Assuming a normal distribution of shear strengths, test the null hypothesis that the mean shear strength of all bolts in the shipment is $4,350$ lb versus the alternative that it is less than $4,350$ lb, at the $10\%$ level of significance. 2. Estimate the $p$-value (observed significance) of the test of part (a). 3. Compare the $p$-value found in part (b) to $\alpha = 0.10$ and make a decision based on the $p$-value approach. Explain fully. 4. A literary historian examines a newly discovered document possibly written by Oberon Theseus. The mean average sentence length of the surviving undisputed works of Oberon Theseus is $48.72$ words. The historian counts words in sentences between five successive $101$ periods in the document in question to obtain a mean average sentence length of $39.46$ words with standard deviation $7.45$ words. (Thus the sample size is five.) 1. Determine if these data provide sufficient evidence, at the $1\%$ level of significance, to conclude that the mean average sentence length in the document is less than $48.72$. 2. Estimate the $p$-value of the test. 3. Based on the answers to parts (a) and (b), state whether or not it is likely that the document was written by Oberon Theseus. Answers 1. $Z\leq -1.645$ 2. $T\leq -2.571\; or\; T \geq 2.571$ 3. $T \geq 1.319$ 4. $Z\leq -1.645\; or\; Z \geq 1.645$ 1. $T\leq -0.855$ 2. $Z\leq -1.645$ 3. $T\leq -2.201\; or\; T \geq 2.201$ 4. $T \geq 3.435$ 1. $T=-2.690,\; df=19,\; -t_{0.005}=-2.861,\; \text{do not reject }H_0$ 2. $0.01<p-value<0.02,\; \alpha =0.01,\; \text{do not reject }H_0$ 1. $T=2.398,\; df=7,\; t_{0.05}=1.895,\; \text{reject }H_0$ 2. $0.01<p-value<0.025,\; \alpha =0.05,\; \text{reject }H_0$ 1. $T=-7.560,\; df=12,\; -t_{0.10}=-1.356,\; \text{reject }H_0$ 2. $T=-7.076,\; df=5,\; -t_{0.0005}=-6.869,\; \text{reject }H_0$ 1. $T=-1.483,\; df=14,\; -t_{0.05}=-1.761,\; \text{do not reject }H_0$ 2. $T=-1.483,\; df=14,\; -t_{0.10}=-1.345,\; \text{reject }H_0$ 1. $T=2.069,\; df=6,\; t_{0.10}=1.44,\; \text{reject }H_0$ 2. $T=2.069,\; df=6,\; t_{0.05}=1.943,\; \text{reject }H_0$ 3. $T=4.472,\; df=4,\; t_{0.10}=1.533,\; \text{reject }H_0$ 4. $T=0.798,\; df=24,\; t_{0.10}=1.318,\; \text{do not reject }H_0$ 1. $T=-1.773,\; df=4,\; -t_{0.05}=-2.132,\; \text{do not reject }H_0$ 2. $0.05<p-value<0.10$ 3. $\alpha =0.05,\; \text{do not reject }H_0$ 8.5: Large Sample Tests for a Population Proportion Basic On all exercises for this section you may assume that the sample is sufficiently large for the relevant test to be validly performed. 1.
Compute the value of the test statistic for each test using the information given. 1. Testing $H_0:p=0.50\; vs\; H_a:p>0.50,\; n=360,\; \hat{p}=0.56$. 2. Testing $H_0:p=0.50\; vs\; H_a:p\neq 0.50,\; n=360,\; \hat{p}=0.56$. 3. Testing $H_0:p=0.37\; vs\; H_a:p<0.37,\; n=1200,\; \hat{p}=0.35$. 2. Compute the value of the test statistic for each test using the information given. 1. Testing $H_0:p=0.72\; vs\; H_a:p<0.72,\; n=2100,\; \hat{p}=0.71$. 2. Testing $H_0:p=0.83\; vs\; H_a:p\neq 0.83,\; n=500,\; \hat{p}=0.86$. 3. Testing $H_0:p=0.22\; vs\; H_a:p<0.22,\; n=750,\; \hat{p}=0.18$. 3. For each part of Exercise 1 construct the rejection region for the test for $\alpha = 0.05$ and make the decision based on your answer to that part of the exercise. 4. For each part of Exercise 2 construct the rejection region for the test for $\alpha = 0.05$ and make the decision based on your answer to that part of the exercise. 5. For each part of Exercise 1 compute the observed significance ($p$-value) of the test and compare it to $\alpha = 0.05$ in order to make the decision by the $p$-value approach to hypothesis testing. 6. For each part of Exercise 2 compute the observed significance ($p$-value) of the test and compare it to $\alpha = 0.05$ in order to make the decision by the $p$-value approach to hypothesis testing. 7. Perform the indicated test of hypotheses using the critical value approach. 1. Testing $H_0:p=0.55\; vs\; H_a:p>0.55\; @\; \alpha =0.05,\; n=300,\; \hat{p}=0.60$. 2. Testing $H_0:p=0.47\; vs\; H_a:p\neq 0.47\; @\; \alpha =0.01,\; n=9750,\; \hat{p}=0.46$. 8. Perform the indicated test of hypotheses using the critical value approach. 1. Testing $H_0:p=0.15\; vs\; H_a:p\neq 0.15\; @\; \alpha =0.001,\; n=1600,\; \hat{p}=0.18$. 2. Testing $H_0:p=0.90\; vs\; H_a:p>0.90\; @\; \alpha =0.01,\; n=1100,\; \hat{p}=0.91$. 9. Perform the indicated test of hypotheses using the $p$-value approach. 1. Testing $H_0:p=0.37\; vs\; H_a:p\neq 0.37\; @\; \alpha =0.005,\; n=1300,\; \hat{p}=0.40$. 2. Testing $H_0:p=0.94\; vs\; H_a:p>0.94\; @\; \alpha =0.05,\; n=1200,\; \hat{p}=0.96$. 10. Perform the indicated test of hypotheses using the $p$-value approach. 1. Testing $H_0:p=0.25\; vs\; H_a:p<0.25\; @\; \alpha =0.10,\; n=850,\; \hat{p}=0.23$. 2. Testing $H_0:p=0.33\; vs\; H_a:p\neq 0.33\; @\; \alpha =0.05,\; n=1100,\; \hat{p}=0.30$. Applications 1. Five years ago $3.9\%$ of children in a certain region lived with someone other than a parent. A sociologist wishes to test whether the current proportion is different. Perform the relevant test at the $5\%$ level of significance using the following data: in a random sample of $2,759$ children, $119$ lived with someone other than a parent. 2. The government of a particular country reports its literacy rate as $52\%$. A nongovernmental organization believes it to be less. The organization takes a random sample of $600$ inhabitants and obtains a literacy rate of $42\%$. Perform the relevant test at the $0.5\%$ (one-half of $1\%$) level of significance. 3. Two years ago $72\%$ of households in a certain county regularly participated in recycling household waste. The county government wishes to investigate whether that proportion has increased after an intensive campaign promoting recycling. In a survey of $900$ households, $674$ regularly participate in recycling. Perform the relevant test at the $10\%$ level of significance. 4. Prior to a special advertising campaign, $23\%$ of all adults recognized a particular company’s logo.
At the close of the campaign the marketing department commissioned a survey in which $311$ of $1,200$ randomly selected adults recognized the logo. Determine, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that more than $23\%$ of all adults now recognize the company’s logo. 5. A report five years ago stated that $35.5\%$ of all state-owned bridges in a particular state were “deficient.” An advocacy group took a random sample of $100$ state-owned bridges in the state and found $33$ to be currently rated as being “deficient.” Test whether the current proportion of bridges in such condition is $35.5\%$ versus the alternative that it is different from $35.5\%$, at the $10\%$ level of significance. 6. In the previous year the proportion of deposits in checking accounts at a certain bank that were made electronically was $45\%$. The bank wishes to determine if the proportion is higher this year. It examined $20,000$ deposit records and found that $9,217$ were electronic. Determine, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that more than $45\%$ of all deposits to checking accounts are now being made electronically. 7. According to the Federal Poverty Measure $12\%$ of the U.S. population lives in poverty. The governor of a certain state believes that the proportion there is lower. In a sample of size $1,550$, $163$ were impoverished according to the federal measure. 1. Test whether the true proportion of the state’s population that is impoverished is less than $12\%$, at the $5\%$ level of significance. 2. Compute the observed significance of the test. 8. An insurance company states that it settles $85\%$ of all life insurance claims within $30$ days. A consumer group asks the state insurance commission to investigate. In a sample of $250$ life insurance claims, $203$ were settled within $30$ days. 1. Test whether the true proportion of all life insurance claims made to this company that are settled within $30$ days is less than $85\%$, at the $5\%$ level of significance. 2. Compute the observed significance of the test. 9. A special interest group asserts that $90\%$ of all smokers began smoking before age $18$. In a sample of $850$ smokers, $687$ began smoking before age $18$. 1. Test whether the true proportion of all smokers who began smoking before age $18$ is less than $90\%$, at the $1\%$ level of significance. 2. Compute the observed significance of the test. 10. In the past, $68\%$ of a garage’s business was with former patrons. The owner of the garage samples $200$ repair invoices and finds that for only $114$ of them the patron was a repeat customer. 1. Test whether the true proportion of all current business that is with repeat customers is less than $68\%$, at the $1\%$ level of significance. 2. Compute the observed significance of the test. Additional Exercises 1. A rule of thumb is that for working individuals one-quarter of household income should be spent on housing. A financial advisor believes that the average proportion of income spent on housing is more than $0.25$. In a sample of $30$ households, the mean proportion of household income spent on housing was $0.285$ with a standard deviation of $0.063$. Perform the relevant test of hypotheses at the $1\%$ level of significance. Hint: This exercise could have been presented in an earlier section. 2. Ice cream is legally required to contain at least $10\%$ milk fat by weight.
The manufacturer of an economy ice cream wishes to be close to the legal limit, hence produces its ice cream with a target proportion of $0.106$ milk fat. A sample of five containers yielded a mean proportion of $0.094$ milk fat with standard deviation $0.002$. Test the null hypothesis that the mean proportion of milk fat in all containers is $0.106$ against the alternative that it is less than $0.106$, at the $10\%$ level of significance. Assume that the proportion of milk fat in containers is normally distributed. Hint: This exercise could have been presented in an earlier section. Large Data Set Exercises (Large Data Sets missing from the original) 1. Large $\text{Data Sets 4 and 4A}$ list the results of $500$ tosses of a die. Let $p$ denote the proportion of all tosses of this die that would result in a five. Use the sample data to test the hypothesis that $p$ is different from $1/6$, at the $20\%$ level of significance. 2. Large $\text{Data Set 6}$ records results of a random survey of $200$ voters in each of two regions, in which they were asked to express whether they prefer Candidate $A$ for a U.S. Senate seat or prefer some other candidate. Use the full data set ($400$ observations) to test the hypothesis that the proportion $p$ of all voters who prefer Candidate $A$ exceeds $0.35$. Test at the $10\%$ level of significance. 3. Lines $2$ through $536$ in Large $\text{Data Set 11}$ constitute a sample of $535$ real estate sales in a certain region in 2008. Those that were foreclosure sales are identified with a $1$ in the second column. Use these data to test, at the $10\%$ level of significance, the hypothesis that the proportion $p$ of all real estate sales in this region in 2008 that were foreclosure sales was less than $25\%$. (The null hypothesis is $H_0:p=0.25$). 4. Lines $537$ through $1106$ in Large $\text{Data Set 11}$ constitute a sample of $570$ real estate sales in a certain region in 2010. Those that were foreclosure sales are identified with a $1$ in the second column. Use these data to test, at the $5\%$ level of significance, the hypothesis that the proportion $p$ of all real estate sales in this region in 2010 that were foreclosure sales was greater than $23\%$. (The null hypothesis is $H_0:p=0.23$). Answers 1. $Z = 2.277$ 2. $Z = 2.277$ 3. $Z = -1.435$ 1. $Z \geq 1.645$; reject $H_0$ 2. $Z\leq -1.96\; or\; Z \geq 1.96$; reject $H_0$ 3. $Z \leq -1.645$; do not reject $H_0$ 1. $p-value=0.0116,\; \alpha =0.05$; reject $H_0$ 2. $p-value=0.0232,\; \alpha =0.05$; reject $H_0$ 3. $p-value=0.0749,\; \alpha =0.05$; do not reject $H_0$ 1. $Z=1.74,\; z_{0.05}=1.645$; reject $H_0$ 2. $Z=-1.98,\; -z_{0.005}=-2.576$; do not reject $H_0$ 1. $Z=2.24,\; p-value=0.025,\alpha =0.005$; do not reject $H_0$ 2. $Z=2.92,\; p-value=0.0018,\alpha =0.05$; reject $H_0$ 1. $Z=1.11,\; z_{0.025}=1.96$; do not reject $H_0$ 2. $Z=1.93,\; z_{0.10}=1.28$; reject $H_0$ 3. $Z=-0.523,\; \pm z_{0.05}=\pm 1.645$; do not reject $H_0$ 1. $Z=-1.798,\; -z_{0.05}=-1.645$; reject $H_0$ 2. $p-value=0.0359$ 1. $Z=-8.92,\; -z_{0.01}=-2.33$; reject $H_0$ 2. $p-value\approx 0$ 4. $Z=3.04,\; z_{0.01}=2.33$; reject $H_0$ 5. $H_0:p=1/6\; vs\; H_a:p\neq 1/6$. Test Statistic: $Z = -0.76$. Rejection Region: $(-\infty ,-1.28]\cup [1.28,\infty )$. Decision: Fail to reject $H_0$. 6. $H_0:p=0.25\; vs\; H_a:p<0.25$. Test Statistic: $Z = -1.17$. Rejection Region: $(-\infty ,-1.28]$. Decision: Fail to reject $H_0$.
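Readers who want to check answers like those above with software can do so in a few lines. The following is a minimal sketch in Python, using only the standard library, of the large-sample test statistic for a population proportion used throughout this section; the names `phi` and `prop_z_test` are ours, not the text's.

```python
# A minimal sketch of the large-sample test for a population proportion:
# Z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n), together with its p-value.
from math import sqrt, erf

def phi(z):
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def prop_z_test(p0, p_hat, n, tail):
    """Return (Z, p-value) for H0: p = p0; tail is 'left', 'right', or 'two'."""
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    if tail == "left":
        p_value = phi(z)
    elif tail == "right":
        p_value = 1 - phi(z)
    else:
        p_value = 2 * (1 - phi(abs(z)))
    return z, p_value

# Basic Exercise 1(a): H0: p = 0.50 vs Ha: p > 0.50, n = 360, p_hat = 0.56
z, p = prop_z_test(0.50, 0.56, 360, "right")
print(round(z, 3), round(p, 4))  # 2.277 0.0114 (the table-based answer rounds to 0.0116)
```

Small discrepancies in the last digit of a $p$-value, as in the comment above, come from reading probabilities to four decimal places from a printed table rather than evaluating the normal distribution exactly.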
The previous two chapters treated the questions of estimating and making inferences about a parameter of a single population. In this chapter we consider a comparison of parameters that belong to two different populations. For example, we might wish to compare the average income of all adults in one region of the country with the average income of those in another region, or we might wish to compare the proportion of all men who are vegetarians with the proportion of all women who are vegetarians. We will study construction of confidence intervals and tests of hypotheses in four situations, depending on the parameter of interest, the sizes of the samples drawn from each of the populations, and the method of sampling. We also examine sample size considerations. • 9.1: Comparison of Two Population Means- Large, Independent Samples Suppose we wish to compare the means of two distinct populations. Our goal is to use the information in the samples to estimate the difference in the means of the two populations and to make statistically valid inferences about it. • 9.2: Comparison of Two Population Means - Small, Independent Samples When one or the other of the sample sizes is small, as is often the case in practice, the Central Limit Theorem does not apply. We must then impose conditions on the population to give statistical validity to the test procedure. We will assume that both populations from which the samples are taken have a normal probability distribution and that their standard deviations are equal. • 9.3: Comparison of Two Population Means - Paired Samples A confidence interval for the difference in two population means using paired sampling is computed using a formula in the same fashion as was done for a single population mean. The same five-step procedure used to test hypotheses concerning a single population mean is used to test hypotheses concerning the difference between two population means using paired sampling. The only difference is in the formula for the standardized test statistic. • 9.4: Comparison of Two Population Proportions A confidence interval for the difference in two population proportions is computed using a formula in the same fashion as was done for a single population mean. The same five-step procedure used to test hypotheses concerning a single population proportion is used to test hypotheses concerning the difference between two population proportions. The only difference is in the formula for the standardized test statistic. • 9.5: Sample Size Considerations The minimum equal sample sizes needed to obtain a confidence interval for the difference in two population proportions with a given maximum error of the estimate and a given level of confidence can always be estimated. If there is prior knowledge of the population proportions $p_1$ and $p_2$ then the estimate can be sharpened. • 9.E: Two-Sample Problems (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 09: Two-Sample Problems Learning Objectives • To understand the logical framework for estimating the difference between the means of two distinct populations and performing tests of hypotheses concerning those means. • To learn how to construct a confidence interval for the difference in the means of two distinct populations using large, independent samples. • To learn how to perform a test of hypotheses concerning the difference between the means of two distinct populations using large, independent samples.
Suppose we wish to compare the means of two distinct populations. Figure $1$ illustrates the conceptual framework of our investigation in this and the next section. Each population has a mean and a standard deviation. We arbitrarily label one population as Population $1$ and the other as Population $2$, and subscript the parameters with the numbers $1$ and $2$ to tell them apart. We draw a random sample from Population $1$ and label the sample statistics it yields with the subscript $1$. Without reference to the first sample we draw a sample from Population $2$ and label its sample statistics with the subscript $2$. Definition: Independence Samples from two distinct populations are independent if each one is drawn without reference to the other, and has no connection with the other. Our goal is to use the information in the samples to estimate the difference $\mu _1-\mu _2$ in the means of the two populations and to make statistically valid inferences about it. Confidence Intervals Since the mean $\bar{x_1}$ of the sample drawn from Population $1$ is a good estimator of $\mu _1$ and the mean $\bar{x_2}$ of the sample drawn from Population $2$ is a good estimator of $\mu _2$, a reasonable point estimate of the difference $\mu _1-\mu _2$ is $\bar{x_1}-\bar{x_2}$. In order to widen this point estimate into a confidence interval, we first suppose that both samples are large, that is, that both $n_1\geq 30$ and $n_2\geq 30$. If so, then the following formula for a confidence interval for $\mu _1-\mu _2$ is valid. The symbols $s_{1}^{2}$ and $s_{2}^{2}$ denote the squares of $s_1$ and $s_2$. (In the relatively rare case that both population standard deviations $\sigma _1$ and $\sigma _2$ are known they would be used instead of the sample standard deviations.) $100(1-\alpha )\%$ Confidence Interval for the Difference Between Two Population Means: Large, Independent Samples $(\bar{x_1}-\bar{x_2})\pm z_{\alpha /2}\sqrt{\frac{s_{1}^{2}}{n_1}+\frac{s_{2}^{2}}{n_2}} \nonumber$ The samples must be independent, and each sample must be large: $n_1\geq 30$ and $n_2\geq 30$. Example $1$ To compare customer satisfaction levels of two competing cable television companies, $174$ customers of Company $1$ and $355$ customers of Company $2$ were randomly selected and were asked to rate their cable companies on a five-point scale, with $1$ being least satisfied and $5$ most satisfied. The survey results are summarized in the following table:
Company 1 | Company 2
$n_1=174$ | $n_2=355$
$\bar{x_1}=3.51$ | $\bar{x_2}=3.24$
$s_1=0.51$ | $s_2=0.52$
Construct a point estimate and a 99% confidence interval for $\mu _1-\mu _2$, the difference in average satisfaction levels of customers of the two companies as measured on this five-point scale. Solution The point estimate of $\mu _1-\mu _2$ is $\bar{x_1}-\bar{x_2}=3.51-3.24=0.27 \nonumber$ In words, we estimate that the average customer satisfaction level for Company $1$ is $0.27$ points higher on this five-point scale than it is for Company $2$. To apply the formula for the confidence interval, proceed exactly as was done in Chapter 7. The $99\%$ confidence level means that $\alpha =1-0.99=0.01$ so that $z_{\alpha /2}=z_{0.005}$. From Figure 7.1.6 "Critical Values of $t$" we read directly that $z_{0.005}=2.576$. Thus $(\bar{x_1}-\bar{x_2})\pm z_{\alpha /2}\sqrt{\frac{s_{1}^{2}}{n_1}+\frac{s_{2}^{2}}{n_2}}=0.27\pm 2.576\sqrt{\frac{0.51^{2}}{174}+\frac{0.52^{2}}{355}}=0.27\pm 0.12 \nonumber$ We are $99\%$ confident that the difference in the population means lies in the interval $[0.15,0.39]$, in the sense that in repeated sampling $99\%$ of all intervals constructed from the sample data in this manner will contain $\mu _1-\mu _2$.
In the context of the problem we say we are $99\%$ confident that the average level of customer satisfaction for Company $1$ is between $0.15$ and $0.39$ points higher, on this five-point scale, than that for Company $2$. Hypothesis Testing Hypotheses concerning the relative sizes of the means of two populations are tested using the same critical value and $p$-value procedures that were used in the case of a single population. All that is needed is to know how to express the null and alternative hypotheses and to know the formula for the standardized test statistic and the distribution that it follows. The null and alternative hypotheses will always be expressed in terms of the difference of the two population means. Thus the null hypothesis will always be written $H_0: \mu _1-\mu _2=D_0 \nonumber$ where $D_0$ is a number that is deduced from the statement of the situation. As was the case with a single population the alternative hypothesis can take one of the three forms, with the same terminology:
Form of $H_a$ | Terminology
$H_a: \mu _1-\mu _2<D_0$ | Left-tailed
$H_a: \mu _1-\mu _2>D_0$ | Right-tailed
$H_a: \mu _1-\mu _2\neq D_0$ | Two-tailed
As long as the samples are independent and both are large the following formula for the standardized test statistic is valid, and it has the standard normal distribution. (In the relatively rare case that both population standard deviations $\sigma _1$ and $\sigma _2$ are known they would be used instead of the sample standard deviations.) Standardized Test Statistic for Hypothesis Tests Concerning the Difference Between Two Population Means: Large, Independent Samples $Z=\frac{(\bar{x_1}-\bar{x_2})-D_0}{\sqrt{\frac{s_{1}^{2}}{n_1}+\frac{s_{2}^{2}}{n_2}}} \nonumber$ The test statistic has the standard normal distribution. The samples must be independent, and each sample must be large: $n_1\geq 30$ and $n_2\geq 30$. Example $2$ Refer to Example $1$ concerning the mean satisfaction levels of customers of two competing cable television companies. Test at the $1\%$ level of significance whether the data provide sufficient evidence to conclude that Company $1$ has a higher mean satisfaction rating than does Company $2$. Use the critical value approach. Solution: • Step 1. If the mean satisfaction levels $\mu _1$ and $\mu _2$ are the same then $\mu _1=\mu _2$, but we always express the null hypothesis in terms of the difference between $\mu _1$ and $\mu _2$, hence $H_0$ is $\mu _1-\mu _2=0$. To say that the mean customer satisfaction for Company $1$ is higher than that for Company $2$ means that $\mu _1>\mu _2$, which in terms of their difference is $\mu _1-\mu _2>0$. The test is therefore $H_0: \mu _1-\mu _2=0 \nonumber$ vs. $H_a: \mu _1-\mu _2>0\; \; @\; \; \alpha =0.01 \nonumber$ • Step 2. Since the samples are independent and both are large the test statistic is $Z=\frac{(\bar{x_1}-\bar{x_2})-D_0}{\sqrt{\frac{s_{1}^{2}}{n_1}+\frac{s_{2}^{2}}{n_2}}} \nonumber$ • Step 3. Inserting the data into the formula for the test statistic gives $Z=\frac{(\bar{x_1}-\bar{x_2})-D_0}{\sqrt{\frac{s_{1}^{2}}{n_1}+\frac{s_{2}^{2}}{n_2}}}=\frac{(3.51-3.24)-0}{\sqrt{\frac{0.51^{2}}{174}+\frac{0.52^{2}}{355}}}=5.684 \nonumber$ • Step 4. Since the symbol in $H_a$ is “$>$” this is a right-tailed test, so there is a single critical value, $z_\alpha =z_{0.01}$, which from the last line in Figure 7.1.6 "Critical Values of $t$" we read off as $2.326$. The rejection region is $[2.326,\infty )$. Figure $2$: Rejection Region and Test Statistic for Example $2$ • Step 5.
As shown in Figure $2$ the test statistic falls in the rejection region. The decision is to reject $H_0$. In the context of the problem our conclusion is: The data provide sufficient evidence, at the $1\%$ level of significance, to conclude that the mean customer satisfaction for Company $1$ is higher than that for Company $2$. Example $3$ Perform the test of Example $2$ using the $p$-value approach. Solution: The first three steps are identical to those in Example $2$ • Step 4. The observed significance or $p$-value of the test is the area of the right tail of the standard normal distribution that is cut off by the test statistic $Z=5.684$. The number $5.684$ is too large to appear in Figure 7.1.5, which means that the area of the left tail that it cuts off is $1.0000$ to four decimal places. The area that we seek, the area of the right tail, is therefore $1-1.0000=0.0000$ to four decimal places. See Figure $3$. That is, $p$-value=$0.0000$ to four decimal places. (The actual value is approximately $0.000000007$.) • Step 5. Since $0.0000<0.01$, $p-value <\alpha$ so the decision is to reject the null hypothesis: The data provide sufficient evidence, at the $1\%$ level of significance, to conclude that the mean customer satisfaction for Company $1$ is higher than that for Company $2$. Key Takeaway • A point estimate for the difference in two population means is simply the difference in the corresponding sample means. • In the context of estimating or testing hypotheses concerning two population means, “large” samples means that both samples are large. • A confidence interval for the difference in two population means is computed using a formula in the same fashion as was done for a single population mean. • The same five-step procedure used to test hypotheses concerning a single population mean is used to test hypotheses concerning the difference between two population means. The only difference is in the formula for the standardized test statistic.
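Computations like these are easy to reproduce with software. Here is a minimal sketch in Python, using only the standard library, that recovers the interval of Example $1$ and the test statistic of Example $2$; the function name `two_sample_z` and the variable names are ours, not the text's.

```python
# A minimal sketch of the large-sample two-sample procedures of this section.
from math import sqrt

def two_sample_z(x1, s1, n1, x2, s2, n2, D0=0.0):
    """Return (Z, standard error) for H0: mu1 - mu2 = D0 with large samples."""
    se = sqrt(s1**2 / n1 + s2**2 / n2)
    return ((x1 - x2) - D0) / se, se

z, se = two_sample_z(3.51, 0.51, 174, 3.24, 0.52, 355)
print(round(z, 3))  # 5.684, the test statistic of Example 2

# 99% confidence interval of Example 1: point estimate +/- z_{0.005} * se
d, z_005 = 3.51 - 3.24, 2.576
print(round(d - z_005 * se, 2), round(d + z_005 * se, 2))  # 0.15 0.39
```

Note that the same standard error appears in both the interval and the test statistic, which is why one function suffices for both tasks.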
Learning Objectives • To learn how to construct a confidence interval for the difference in the means of two distinct populations using small, independent samples. • To learn how to perform a test of hypotheses concerning the difference between the means of two distinct populations using small, independent samples. When one or the other of the sample sizes is small, as is often the case in practice, the Central Limit Theorem does not apply. We must then impose conditions on the population to give statistical validity to the test procedure. We will assume that both populations from which the samples are taken have a normal probability distribution and that their standard deviations are equal. Confidence Intervals When the two populations are normally distributed and have equal standard deviations, the following formula for a confidence interval for $\mu _1-\mu _2$ is valid. $100(1-\alpha )\%$ Confidence Interval for the Difference Between Two Population Means: Small, Independent Samples $(\bar{x_1}-\bar{x_2})\pm t_{\alpha /2}\sqrt{s_{p}^{2}\left ( \dfrac{1}{n_1}+\dfrac{1}{n_2} \right )} \label{eq1}$ where $s_{p}^{2}=\dfrac{(n_1-1)s_{1}^{2}+(n_2-1)s_{2}^{2}}{n_1+n_2-2} \nonumber$ The number of degrees of freedom is $df=n_1+n_2-2. \nonumber$ The samples must be independent, the populations must be normal, and the population standard deviations must be equal. “Small” samples means that either $n_1<30$ or $n_2<30$. The quantity $s_{p}^{2}$ is called the pooled sample variance. It is a weighted average of the two estimates $s_{1}^{2}$ and $s_{2}^{2}$ of the common variance $\sigma _{1}^{2}=\sigma _{2}^{2}$ of the two populations. Example $1$ A software company markets a new computer game with two experimental packaging designs. Design $1$ is sent to $11$ stores; their average sales the first month is $52$ units with sample standard deviation $12$ units. Design $2$ is sent to $6$ stores; their average sales the first month is $46$ units with sample standard deviation $10$ units. Construct a point estimate and a $95\%$ confidence interval for the difference in average monthly sales between the two package designs. Solution The point estimate of $\mu _1-\mu _2$ is $\bar{x_1}-\bar{x_2}=52-46=6 \nonumber$ In words, we estimate that the average monthly sales for Design $1$ is $6$ units more per month than the average monthly sales for Design $2$. To apply the formula for the confidence interval (Equation \ref{eq1}), we must find $t_{\alpha /2}$. The $95\%$ confidence level means that $\alpha =1-0.95=0.05$ so that $t_{\alpha /2}=t_{0.025}$. From Figure 7.1.6, in the row with the heading $df=11+6-2=15$ we read that $t_{0.025}=2.131$. From the formula for the pooled sample variance we compute $s_{p}^{2}=\dfrac{(n_1-1)s_{1}^{2}+(n_2-1)s_{2}^{2}}{n_1+n_2-2}=\dfrac{(10)(12)^2+(5)(10)^2}{15}=129.\bar{3} \nonumber$ Thus $(\bar{x_1}-\bar{x_2})\pm t_{\alpha /2}\sqrt{s_{p}^{2}\left ( \dfrac{1}{n_1}+\dfrac{1}{n_2} \right )}=6\pm (2.131)\sqrt{129.\bar{3}\left ( \dfrac{1}{11}+\dfrac{1}{6} \right )}\approx 6\pm 12.3 \nonumber$ We are $95\%$ confident that the difference in the population means lies in the interval $[-6.3,18.3]$, in the sense that in repeated sampling $95\%$ of all intervals constructed from the sample data in this manner will contain $\mu _1-\mu _2$.
Because the interval contains both positive and negative values the statement in the context of the problem is that we are $95\%$ confident that the average monthly sales for Design $1$ is between $18.3$ units higher and $6.3$ units lower than the average monthly sales for Design $2$. Hypothesis Testing Testing hypotheses concerning the difference of two population means using small samples is done precisely as it is done for large samples, using the following standardized test statistic. The same conditions on the populations that were required for constructing a confidence interval for the difference of the means must also be met when hypotheses are tested. Standardized Test Statistic for Hypothesis Tests Concerning the Difference Between Two Population Means: Small, Independent Samples $T=\dfrac{(\bar{x_1}-\bar{x_2})-D_0}{\sqrt{s_{p}^{2}\left ( \dfrac{1}{n_1}+\dfrac{1}{n_2}\right )}} \nonumber$ where $s_{p}^{2}=\dfrac{(n_1-1)s_{1}^{2}+(n_2-1)s_{2}^{2}}{n_1+n_2-2} \nonumber$ The test statistic has Student’s $t$-distribution with $df=n_1+n_2-2$ degrees of freedom. The samples must be independent, the populations must be normal, and the population standard deviations must be equal. “Small” samples means that either $n_1<30$ or $n_2<30$. Example $2$ Refer to Example $1$ concerning the mean sales per month for the same computer game but sold with two package designs. Test at the $1\%$ level of significance whether the data provide sufficient evidence to conclude that the mean sales per month of the two designs are different. Use the critical value approach. Solution • Step 1. The relevant test is $H_0: \mu _1-\mu _2=0 \nonumber$ vs. $H_a: \mu _1-\mu _2\neq 0\; \; @\; \; \alpha =0.01 \nonumber$ • Step 2. Since the samples are independent and at least one is less than $30$ the test statistic is $T=\dfrac{(\bar{x_1}-\bar{x_2})-D_0}{\sqrt{s_{p}^{2}\left ( \dfrac{1}{n_1}+\dfrac{1}{n_2}\right )}} \nonumber$ which has Student’s $t$-distribution with $df=11+6-2=15$ degrees of freedom. • Step 3. Inserting the data and the value $D_0=0$ into the formula for the test statistic gives \begin{align*} T&=\dfrac{(\bar{x_1}-\bar{x_2})-D_0}{\sqrt{s_{p}^{2}\left ( \dfrac{1}{n_1}+\dfrac{1}{n_2}\right )}} \\[4pt] &=\dfrac{(52-46)-0}{\sqrt{129.\bar{3}\left ( \dfrac{1}{11}+\dfrac{1}{6} \right )}} \\[4pt] &=1.040 \end{align*} \nonumber • Step 4. Since the symbol in $H_a$ is “$\neq$” this is a two-tailed test, so there are two critical values, $\pm t_{\alpha /2}=\pm t_{0.005}$. From the row in Figure 7.1.6 with the heading $df=15$ we read off $t_{0.005}=2.947$. The rejection region is $(-\infty ,-2.947]\cup [2.947,\infty )$. • Step 5. As shown in Figure $1$ the test statistic does not fall in the rejection region. The decision is not to reject $H_0$. In the context of the problem our conclusion is: The data do not provide sufficient evidence, at the $1\%$ level of significance, to conclude that the mean sales per month of the two designs are different. Example $3$ Perform the test of Example $2$ using the $p$-value approach. Solution The first three steps are identical to those in Example $2$. • Step 4. Because the test is two-tailed the observed significance or $p$-value of the test is double the area of the right tail of Student’s $t$-distribution, with $15$ degrees of freedom, that is cut off by the test statistic $T=1.040$. We can only approximate this number.
Looking in the row of Figure 7.1.6 headed $df=15$, the number $1.040$ is between the numbers $0.866$ and $1.341$, corresponding to $t_{0.200}$ and $t_{0.100}$. The area cut off by $t=0.866$ is $0.200$ and the area cut off by $t=1.341$ is $0.100$. Since $1.040$ is between $0.866$ and $1.341$ the area it cuts off is between $0.200$ and $0.100$. Thus the $p$-value (since the area must be doubled) is between $0.400$ and $0.200$. • Step 5. Since $p>0.200>0.01,\; \; p>\alpha$, so the decision is not to reject the null hypothesis: The data do not provide sufficient evidence, at the $1\%$ level of significance, to conclude that the mean sales per month of the two designs are different. Key Takeaway • In the context of estimating or testing hypotheses concerning two population means, “small” samples means that at least one sample is small. In particular, even if one sample is of size $30$ or more, if the other is of size less than $30$ the formulas of this section must be used. • A confidence interval for the difference in two population means is computed using a formula in the same fashion as was done for a single population mean.
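As a check on the arithmetic of Examples $1$ and $2$, the following is a minimal sketch in Python, standard library only, of the pooled two-sample $t$ procedure; the function name `pooled_t` is ours, not the text's.

```python
# A minimal sketch of the pooled (equal-variance) two-sample t procedure.
from math import sqrt

def pooled_t(x1, s1, n1, x2, s2, n2, D0=0.0):
    """Return (T, df, standard error) for H0: mu1 - mu2 = D0."""
    # Pooled sample variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = sqrt(sp2 * (1 / n1 + 1 / n2))
    return ((x1 - x2) - D0) / se, n1 + n2 - 2, se

t, df, se = pooled_t(52, 12, 11, 46, 10, 6)
print(round(t, 3), df)       # 1.04 15, the test statistic of Example 2
print(round(2.131 * se, 1))  # 12.3, the 95% margin of error of Example 1
```

The margin of error in the last line is $t_{0.025}$ times the standard error, which is exactly the computation carried out by hand in Example $1$.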
Learning Objectives • To learn the distinction between independent samples and paired samples. • To learn how to construct a confidence interval for the difference in the means of two distinct populations using paired samples. • To learn how to perform a test of hypotheses concerning the difference in the means of two distinct populations using paired samples Suppose chemical engineers wish to compare the fuel economy obtained by two different formulations of gasoline. Since fuel economy varies widely from car to car, if the mean fuel economy of two independent samples of vehicles run on the two types of fuel were compared, even if one formulation were better than the other the large variability from vehicle to vehicle might make any difference arising from the difference in fuel difficult to detect. Just imagine one random sample having many more large vehicles than the other. Instead of independent random samples, it would make more sense to select pairs of cars of the same make and model and driven under similar circumstances, and compare the fuel economy of the two cars in each pair. Thus the data would look something like Table $1$, where the first car in each pair is operated on one formulation of the fuel (call it Type $1$ gasoline) and the second car is operated on the second (call it Type $2$ gasoline).
Table $1$: Fuel Economy of Pairs of Vehicles
Make and Model | Car 1 | Car 2
Buick LaCrosse | 17.0 | 17.0
Dodge Viper | 13.2 | 12.9
Honda CR-Z | 35.3 | 35.4
Hummer H3 | 13.6 | 13.2
Lexus RX | 32.7 | 32.5
Mazda CX-9 | 18.4 | 18.1
Saab 9-3 | 22.5 | 22.5
Toyota Corolla | 26.8 | 26.7
Volvo XC90 | 15.1 | 15.0
The first column of numbers forms a sample from Population $1$, the population of all cars operated on Type $1$ gasoline; the second column of numbers forms a sample from Population $2$, the population of all cars operated on Type $2$ gasoline. It would be incorrect to analyze the data using the formulas from the previous section, however, since the samples were not drawn independently. What is correct is to compute the difference in the numbers in each pair (subtracting in the same order each time) to obtain the third column of numbers as shown in Table $2$ and treat the differences as the data. At this point, the new sample of differences $d_1=0.0,\cdots ,d_9=0.1$ in the third column of Table $2$ may be considered as a random sample of size $n=9$ selected from a population with mean $\mu _d=\mu _1-\mu _2$. This approach essentially transforms the paired two-sample problem into a one-sample problem as discussed in the previous two chapters.
Table $2$: Fuel Economy of Pairs of Vehicles
Make and Model | Car 1 | Car 2 | Difference
Buick LaCrosse | 17.0 | 17.0 | 0.0
Dodge Viper | 13.2 | 12.9 | 0.3
Honda CR-Z | 35.3 | 35.4 | -0.1
Hummer H3 | 13.6 | 13.2 | 0.4
Lexus RX | 32.7 | 32.5 | 0.2
Mazda CX-9 | 18.4 | 18.1 | 0.3
Saab 9-3 | 22.5 | 22.5 | 0.0
Toyota Corolla | 26.8 | 26.7 | 0.1
Volvo XC90 | 15.1 | 15.0 | 0.1
Note carefully that although it does not matter in which order the subtraction is done, it must be done in the same order for all pairs. This is why there are both positive and negative quantities in the third column of numbers in Table $2$. Confidence Intervals When the population of differences is normally distributed the following formula for a confidence interval for $\mu _d=\mu _1-\mu _2$ is valid. $100(1-\alpha )\%$ Confidence Interval for the Difference Between Two Population Means: Paired Difference Samples $\bar{d}\pm t_{\alpha /2}\frac{s_d}{\sqrt{n}} \nonumber$ where there are $n$ pairs, $\bar{d}$ is the mean and $s_d$ is the standard deviation of their differences.
The number of degrees of freedom is $df=n-1. \nonumber$ The population of differences must be normally distributed. Example $1$ Using the data in Table $1$ construct a point estimate and a $95\%$ confidence interval for the difference in average fuel economy between cars operated on Type $1$ gasoline and cars operated on Type $2$ gasoline. Solution We have referred to the data in Table $1$ because that is the way that the data are typically presented, but we emphasize that with paired sampling one immediately computes the differences, as given in Table $2$, and uses the differences as the data. The mean and standard deviation of the differences are $\bar{d}=\frac{\sum d}{n}=\frac{1.3}{9}=0.1\bar{4} \nonumber$ $s_d=\sqrt{\frac{\sum d^2-\frac{1}{n}(\sum d)^2}{n-1}}=\sqrt{\frac{0.41-\frac{1}{9}(1.3)^2}{8}}=0.1\bar{6} \nonumber$ The point estimate of $\mu _1-\mu _2=\mu _d$ is $\bar{d}=0.14 \nonumber$ In words, we estimate that the average fuel economy of cars using Type $1$ gasoline is $0.14$ mpg greater than the average fuel economy of cars using Type $2$ gasoline. To apply the formula for the confidence interval, we must find $t_{\alpha /2}$. The $95\%$ confidence level means that $\alpha =1-0.95=0.05$ so that $t_{\alpha /2}=t_{0.025}$. From Figure 7.1.6, in the row with the heading $df=9-1=8$ we read that $t_{0.025}=2.306$. Thus $\bar{d}\pm t_{\alpha /2}\frac{s_d}{\sqrt{n}}=0.14\pm 2.306\left ( \frac{0.1\bar{6}}{\sqrt{9}} \right )\approx 0.14\pm 0.13 \nonumber$ We are $95\%$ confident that the difference in the population means lies in the interval $[0.01,0.27]$, in the sense that in repeated sampling $95\%$ of all intervals constructed from the sample data in this manner will contain $\mu _d=\mu _1-\mu _2$. Stated differently, we are $95\%$ confident that mean fuel economy is between $0.01$ and $0.27$ mpg greater with Type $1$ gasoline than with Type $2$ gasoline. Hypothesis Testing Testing hypotheses concerning the difference of two population means using paired difference samples is done precisely as it is done for independent samples, although now the null and alternative hypotheses are expressed in terms of $\mu _d$ instead of $\mu _1-\mu _2$. Thus the null hypothesis will always be written $H_0:\mu _d=D_0 \nonumber$ The three forms of the alternative hypothesis, with the terminology for each case, are: Form of $H_a$ Terminology $H_a:\mu_d<D_0$ Left-tailed $H_a:\mu_d>D_0$ Right-tailed $H_a:\mu_d\neq D_0$ Two-tailed The same conditions on the population of differences that was required for constructing a confidence interval for the difference of the means must also be met when hypotheses are tested. Here is the standardized test statistic that is used in the test. Standardized Test Statistic for Hypothesis Tests Concerning the Difference Between Two Population Means: Paired Difference Samples $T=\frac{\bar{d}-D_0}{s_d/\sqrt{n}} \nonumber$ where there are $n$ pairs, $\bar{d}$ is the mean and $s_d$ is the standard deviation of their differences. The test statistic has Student’s $t$-distribution with $df=n-1$ degrees of freedom. The population of differences must be normally distributed. Example $2$: using the critical value approach Using the data of Table $2$ test the hypothesis that mean fuel economy for Type $1$ gasoline is greater than that for Type $2$ gasoline against the null hypothesis that the two formulations of gasoline yield the same mean fuel economy. Test at the $5\%$ level of significance using the critical value approach. 
Solution
The only part of the table that we use is the third column, the differences.
• Step 1. Since the differences were computed in the order $\text{Type}\; \; 1 \; \; \text{mpg}-\text{Type}\; \; 2 \; \; \text{mpg}$, better fuel economy with Type $1$ fuel corresponds to $\mu _d=\mu _1-\mu _2>0$. Thus the test is
$H_0:\mu _d=0\ \text{vs.}\ H_a:\mu _d>0\; \; @\; \; \alpha =0.05 \nonumber$
(If the differences had been computed in the opposite order then the alternative hypothesis would have been $H_a:\mu _d<0$.)
• Step 2. Since the sampling is in pairs the test statistic is
$T=\frac{\bar{d}-D_0}{s_d/\sqrt{n}} \nonumber$
• Step 3. We have already computed $\bar{d}$ and $s_d$ in the previous example. Inserting their values and $D_0=0$ into the formula for the test statistic gives
$T=\frac{\bar{d}-D_0}{s_d/\sqrt{n}}=\frac{0.1\bar{4}}{0.1\bar{6}/\sqrt{9}}=2.600 \nonumber$
• Step 4. Since the symbol in $H_a$ is “$>$” this is a right-tailed test, so there is a single critical value, $t_\alpha =t_{0.05}$ with $8$ degrees of freedom, which from the row labeled $df=8$ in Figure 7.1.6 we read off as $1.860$. The rejection region is $[1.860,\infty )$.
• Step 5. As shown in Figure $1$ the test statistic falls in the rejection region. The decision is to reject $H_0$. In the context of the problem our conclusion is: The data provide sufficient evidence, at the $5\%$ level of significance, to conclude that the mean fuel economy provided by Type $1$ gasoline is greater than that for Type $2$ gasoline.

Example $3$: using the p-value approach
Perform the test in Example $2$ using the $p$-value approach.

Solution
The first three steps are identical to those in Example $2$.
• Step 4. Because the test is one-tailed the observed significance or $p$-value of the test is just the area of the right tail of Student’s $t$-distribution, with $8$ degrees of freedom, that is cut off by the test statistic $T=2.600$. We can only approximate this number. Looking in the row of Figure 7.1.6 headed $df=8$, the number $2.600$ is between the numbers $2.306$ and $2.896$, corresponding to $t_{0.025}$ and $t_{0.010}$. The area cut off by $t=2.306$ is $0.025$ and the area cut off by $t=2.896$ is $0.010$. Since $2.600$ is between $2.306$ and $2.896$, the area it cuts off is between $0.010$ and $0.025$. Thus the $p$-value is between $0.010$ and $0.025$. In particular it is less than $0.025$. See Figure $2$.
• Step 5. Since the $p$-value is less than $0.025$, which is in turn less than $\alpha =0.05$, we have $p<\alpha$, so the decision is to reject the null hypothesis: The data provide sufficient evidence, at the $5\%$ level of significance, to conclude that the mean fuel economy provided by Type $1$ gasoline is greater than that for Type $2$ gasoline.

The paired two-sample experiment is a very powerful study design. It bypasses many unwanted sources of “statistical noise” that might otherwise influence the outcome of the experiment, and focuses on the possible difference that might arise from the one factor of interest. If the sample is large (meaning that $n\geq 30$) then in the formula for the confidence interval we may replace $t_{\alpha /2}$ by $z_{\alpha /2}$. For hypothesis testing when the number of pairs is at least $30$, we may use the same statistic as for small samples, except that it now follows a standard normal distribution, so we use the last line of Figure 7.1.6 to compute critical values, and $p$-values can be computed exactly with Figure 7.1.5, not merely estimated using Figure 7.1.6.
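For readers working in software rather than with the printed tables, here is a minimal sketch, not part of the original text, that reproduces the computations of Examples $1$, $2$, and $3$ in Python. It assumes NumPy and SciPy are available (SciPy $1.6$ or later for the `alternative` keyword of `ttest_rel`); the variable names are our own.

```python
import numpy as np
from scipy import stats

# Fuel economy (mpg) for the nine pairs of cars in Table 1
type1 = np.array([17.0, 13.2, 35.3, 13.6, 32.7, 18.4, 22.5, 26.8, 15.1])
type2 = np.array([17.0, 12.9, 35.4, 13.2, 32.5, 18.1, 22.5, 26.7, 15.0])

d = type1 - type2                 # subtract in the same order for every pair
n = len(d)
d_bar = d.mean()                  # 0.144...
s_d = d.std(ddof=1)               # 0.166..., the sample standard deviation

# 95% confidence interval: d_bar +/- t_{alpha/2} * s_d / sqrt(n), df = n - 1
t_crit = stats.t.ppf(0.975, df=n - 1)        # 2.306
margin = t_crit * s_d / np.sqrt(n)
print(f"95% CI: ({d_bar - margin:.3f}, {d_bar + margin:.3f})")
# (0.016, 0.273); the text's [0.01, 0.27] rounds d_bar and the margin first

# Right-tailed test of H0: mu_d = 0 versus Ha: mu_d > 0
T = (d_bar - 0) / (s_d / np.sqrt(n))         # 2.600
p = stats.t.sf(T, df=n - 1)                  # exact p-value, about 0.016
print(f"T = {T:.3f}, p-value = {p:.4f}")

# The same test in a single call (SciPy >= 1.6)
print(stats.ttest_rel(type1, type2, alternative="greater"))
```

The manual computation and the one-line `ttest_rel` call agree, which gives a quick check on work done by hand with the tables.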
Key Takeaway
• When the data are collected in pairs, the differences computed for each pair are the data that are used in the formulas.
• A confidence interval for the difference in two population means using paired sampling is computed using a formula in the same fashion as was done for a single population mean.
• The same five-step procedure used to test hypotheses concerning a single population mean is used to test hypotheses concerning the difference between two population means using paired sampling. The only difference is in the formula for the standardized test statistic.
Learning Objectives
• To learn how to construct a confidence interval for the difference in the proportions of two distinct populations that have a particular characteristic of interest.
• To learn how to perform a test of hypotheses concerning the difference in the proportions of two distinct populations that have a particular characteristic of interest.

Suppose we wish to compare the proportions of two populations that have a specific characteristic, such as the proportion of men who are left-handed compared to the proportion of women who are left-handed. Figure $1$ illustrates the conceptual framework of our investigation. Each population is divided into two groups, the group of elements that have the characteristic of interest (for example, being left-handed) and the group of elements that do not. We arbitrarily label one population as Population $1$ and the other as Population $2$, and subscript the proportion of each population that possesses the characteristic with the number $1$ or $2$ to tell them apart. We draw a random sample from Population $1$ and label the sample statistic it yields with the subscript $1$. Independently of the first sample, we draw a sample from Population $2$ and label its sample statistic with the subscript $2$. Our goal is to use the information in the samples to estimate the difference $p_1-p_2$ in the two population proportions and to make statistically valid inferences about it.

Confidence Intervals
Since the sample proportion $\hat{p}_1$ computed using the sample drawn from Population $1$ is a good estimator of population proportion $p_1$ of Population $1$ and the sample proportion $\hat{p}_2$ computed using the sample drawn from Population $2$ is a good estimator of population proportion $p_2$ of Population $2$, a reasonable point estimate of the difference $p_1-p_2$ is $\hat{p}_1 -\hat{p}_2$. In order to widen this point estimate into a confidence interval we suppose that both samples are large, as described in Section 7.3 and repeated below. If so, then the following formula for a confidence interval for $p_1-p_2$ is valid.

$100(1−\alpha)\%$ Confidence Interval for the Difference Between Two Population Proportions
$(\hat{p}_1−\hat{p}_2) \pm z_{\alpha /2} \sqrt{ \dfrac{ \hat{p}_1(1−\hat{p}_1)}{n_1}+ \dfrac{\hat{p}_2(1−\hat{p}_2)}{n_2}} \nonumber$
The samples must be independent, and each sample must be large: each of the intervals
$\left[ \hat{p}_1−3 \sqrt{ \dfrac{\hat{p}_1(1−\hat{p}_1)}{n_1}}, \hat{p}_1 + 3 \sqrt{ \dfrac{ \hat{p}_1(1−\hat{p}_1)}{n_1}} \right] \nonumber$
and
$\left[ \hat{p}_2−3 \sqrt{ \dfrac{\hat{p}_2(1−\hat{p}_2)}{n_2}}, \hat{p}_2 + 3 \sqrt{ \dfrac{ \hat{p}_2(1−\hat{p}_2)}{n_2}} \right] \nonumber$
must lie wholly within the interval $[0,1]$.

Example $1$
The department of code enforcement of a county government issues permits to general contractors to work on residential projects. For each permit issued, the department inspects the result of the project and gives a “pass” or “fail” rating. A failed project must be re-inspected until it receives a pass rating. The department had been frustrated by the high cost of re-inspection and decided to publish the inspection records of all contractors on the web. It was hoped that public access to the records would lower the re-inspection rate. A year after the web access was made public, two samples of records were randomly selected. One sample was selected from the pool of records before the web publication and one after.
The proportion of projects that passed on the first inspection was noted for each sample. The results are summarized below. Construct a point estimate and a $90\%$ confidence interval for the difference in the passing rate on first inspection between the two time periods.
$\begin{array}{c|c} \text{No public web access} & n_1=500\; \; \hat{p}_1=0.67 \\ \hline \text{Public web access} & n_2=100\; \; \hat{p}_2=0.80 \end{array} \nonumber$

Solution
The point estimate of $p_1-p_2$ is
$\hat{p}_1-\hat{p}_2=0.67-0.80=-0.13 \nonumber$
Because the “No public web access” population was labeled as Population $1$ and the “Public web access” population was labeled as Population $2$, in words this means that we estimate that the proportion of projects that passed on the first inspection increased by $13$ percentage points after records were posted on the web.

The sample sizes are sufficiently large for constructing a confidence interval, since for sample $1$:
$3 \sqrt{ \dfrac{ \hat{p}_1(1-\hat{p}_1)}{n_1}} = 3 \sqrt{ \dfrac{ (0.67)(0.33)}{500}} =0.06 \nonumber$
so that
$\left [ \hat{p}_1-3\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}}, \hat{p}_1+3\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}} \right ]=[0.67-0.06,0.67+0.06]=[0.61,0.73]\subset [0,1] \nonumber$
and for sample $2$:
$3\sqrt{\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}=3\sqrt{\frac{(0.8)(0.2)}{100}}=0.12 \nonumber$
so that
$\left [ \hat{p}_2-3\sqrt{\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}, \hat{p}_2+3\sqrt{\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \right ]=[0.8-0.12,0.8+0.12]=[0.68,0.92]\subset [0,1] \nonumber$
To apply the formula for the confidence interval, we first observe that the $90\%$ confidence level means that $\alpha =1-0.90=0.10$ so that $z_{\alpha /2}=z_{0.05}$. From Figure 7.1.6 we read directly that $z_{0.05}=1.645$. Thus the desired confidence interval is
$\begin{align} (\hat{p}_1-\hat{p}_2) &\pm z_{\alpha /2} \sqrt{ \dfrac{ \hat{p}_1(1-\hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \\ &= -0.13 \pm 1.645 \sqrt{ \dfrac{(0.67)(0.33)}{500}+\dfrac{(0.8)(0.2)}{100}} \\ &= -0.13\pm 0.07 \end{align} \nonumber$
The $90\%$ confidence interval is $[-0.20,-0.06]$. We are $90\%$ confident that the difference in the population proportions lies in the interval $[-0.20,-0.06]$, in the sense that in repeated sampling $90\%$ of all intervals constructed from the sample data in this manner will contain $p_1-p_2$. Taking into account the labeling of the two populations, this means that we are $90\%$ confident that the proportion of projects that pass on the first inspection is between $6$ and $20$ percentage points higher after public access to the records than before.

Hypothesis Testing
In hypothesis tests concerning the relative sizes of the proportions $p_1$ and $p_2$ of two populations that possess a particular characteristic, the null and alternative hypotheses will always be expressed in terms of the difference of the two population proportions. Hence the null hypothesis is always written
$H_0: p_1-p_2=D_0 \nonumber$
The three forms of the alternative hypothesis, with the terminology for each case, are:

Form of $H_a$                Terminology
$H_a : p_1-p_2 < D_0$        Left-tailed
$H_a : p_1-p_2 > D_0$        Right-tailed
$H_a : p_1-p_2 \neq D_0$     Two-tailed

As long as the samples are independent and both are large the following formula for the standardized test statistic is valid, and it has the standard normal distribution.
Standardized Test Statistic for Hypothesis Tests Concerning the Difference Between Two Population Proportions
$Z=\frac{(\hat{p_1}-\hat{p_2})-D_0}{\sqrt{\frac{\hat{p_1}(1-\hat{p_1})}{n_1}+\frac{\hat{p_2}(1-\hat{p_2})}{n_2}}} \nonumber$
The test statistic has the standard normal distribution.
The samples must be independent, and each sample must be large: each of the intervals
$\left [ \hat{p_1}-3\sqrt{\frac{\hat{p_1}(1-\hat{p_1})}{n_1}}, \hat{p_1}+3\sqrt{\frac{\hat{p_1}(1-\hat{p_1})}{n_1}} \right ] \nonumber$
and
$\left [ \hat{p_2}-3\sqrt{\frac{\hat{p_2}(1-\hat{p_2})}{n_2}}, \hat{p_2}+3\sqrt{\frac{\hat{p_2}(1-\hat{p_2})}{n_2}} \right ] \nonumber$
must lie wholly within the interval $[0,1]$.

Example $2$
Using the data of Example $1$, test whether there is sufficient evidence to conclude that public web access to the inspection records has increased the proportion of projects that passed on the first inspection by more than $5$ percentage points. Use the critical value approach at the $10\%$ level of significance.

Solution
• Step 1. Taking into account the labeling of the populations, an increase in the passing rate at the first inspection by more than $5$ percentage points after public access on the web may be expressed as $p_2>p_1+0.05$, which by algebra is the same as $p_1-p_2<-0.05$. This is the alternative hypothesis. Since the null hypothesis is always expressed as an equality, with the same number on the right as is in the alternative hypothesis, the test is
$H_0: p_1-p_2=-0.05\ \text{vs.}\ H_a: p_1-p_2<-0.05\; \; @\; \; \alpha =0.10 \nonumber$
• Step 2. Since the test is with respect to a difference in population proportions the test statistic is
$Z=\frac{(\hat{p_1}-\hat{p_2})-D_0}{\sqrt{\frac{\hat{p_1}(1-\hat{p_1})}{n_1}+\frac{\hat{p_2}(1-\hat{p_2})}{n_2}}} \nonumber$
• Step 3. Inserting the values given in Example $1$ and the value $D_0=-0.05$ into the formula for the test statistic gives
$Z=\frac{(\hat{p_1}-\hat{p_2})-D_0}{\sqrt{\frac{\hat{p_1}(1-\hat{p_1})}{n_1}+\frac{\hat{p_2}(1-\hat{p_2})}{n_2}}}=\frac{(-0.13)-(-0.05)}{\sqrt{\frac{(0.67)(0.33)}{500}+\frac{(0.8)(0.2)}{100}}}=-1.770 \nonumber$
• Step 4. Since the symbol in $H_a$ is “$<$” this is a left-tailed test, so there is a single critical value, $-z_\alpha =-z_{0.10}$. From the last row in Figure 7.1.6, $z_{0.10}=1.282$, so $-z_{0.10}=-1.282$. The rejection region is $(-\infty ,-1.282]$.
• Step 5. As shown in Figure $2$ the test statistic falls in the rejection region. The decision is to reject $H_0$. In the context of the problem our conclusion is: The data provide sufficient evidence, at the $10\%$ level of significance, to conclude that the rate of passing on the first inspection has increased by more than $5$ percentage points since records were publicly posted on the web.

Example $3$
Perform the test of Example $2$ using the $p$-value approach.

Solution
The first three steps are identical to those in Example $2$.
• Step 4. Because the test is left-tailed the observed significance or $p$-value of the test is just the area of the left tail of the standard normal distribution that is cut off by the test statistic $Z=-1.770$. From Figure 7.1.5 the area of the left tail determined by $-1.77$ is $0.0384$. The $p$-value is $0.0384$.
• Step 5.
Since the $p$-value $0.0384$ is less than $\alpha =0.10$, the decision is to reject the null hypothesis: The data provide sufficient evidence, at the $10\%$ level of significance, to conclude that the rate of passing on the first inspection has increased by more than $5$ percentage points since records were publicly posted on the web.

Finally, a common misuse of the formulas given in this section must be mentioned. Suppose a large pre-election survey of potential voters is conducted. Each person surveyed is asked to express a preference between, say, Candidate $A$ and Candidate $B$. (Perhaps “no preference” or “other” are also choices, but that is not important.) In such a survey, estimators $\hat{p}_A$ and $\hat{p}_B$ of $p_A$ and $p_B$ can be calculated. It is important to realize, however, that these two estimators were not calculated from two independent samples. While $\hat{p}_A-\hat{p}_B$ may be a reasonable estimator of $p_A-p_B$, the formulas for confidence intervals and for the standardized test statistic given in this section are not valid for data obtained in this manner.

Key Takeaway
• A confidence interval for the difference in two population proportions is computed using a formula in the same fashion as was done for a single population proportion.
• The same five-step procedure used to test hypotheses concerning a single population proportion is used to test hypotheses concerning the difference between two population proportions. The only difference is in the formula for the standardized test statistic.
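As a closing illustration, here is a minimal sketch, not part of the original text, that reproduces the interval of Example $1$ and the test of Examples $2$ and $3$ with NumPy and SciPy; the variable names are our own.

```python
import numpy as np
from scipy import stats

n1, p1_hat = 500, 0.67    # no public web access
n2, p2_hat = 100, 0.80    # public web access

# Standard error of the difference in sample proportions
se = np.sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)

# 90% confidence interval for p1 - p2 (Example 1)
z_crit = stats.norm.ppf(0.95)              # z_{0.05} = 1.645
diff = p1_hat - p2_hat                     # -0.13
print(f"90% CI: ({diff - z_crit * se:.2f}, {diff + z_crit * se:.2f})")   # (-0.20, -0.06)

# Left-tailed test of H0: p1 - p2 = -0.05 vs Ha: p1 - p2 < -0.05 (Examples 2 and 3)
D0 = -0.05
Z = (diff - D0) / se                       # -1.770
p_value = stats.norm.cdf(Z)                # area of the left tail, 0.0384
print(f"Z = {Z:.3f}, p-value = {p_value:.4f}")
```

A fuller script would also verify the large-sample condition (both three-standard-deviation intervals inside $[0,1]$) before using these formulas, exactly as the solution above does by hand.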
Learning Objectives
• To learn how to apply formulas for estimating the sample sizes that will be needed in order to construct a confidence interval for the difference in two population means or proportions that meets given criteria.

As was pointed out at the beginning of Section 7.4, sampling is typically done with definite objectives in mind. For example, a physician might wish to estimate the difference between the average amount of sleep obtained by patients suffering from a certain condition and the average amount obtained by healthy adults, at $90\%$ confidence and to within half an hour. Since sampling costs time, effort, and money, it would be useful to be able to estimate the smallest sample sizes that are likely to meet these criteria.

Estimating $\mu _1-\mu _2$ with Independent Samples
Assuming that large samples will be required, the confidence interval formula for estimating the difference $\mu _1-\mu _2$ between two population means using independent samples is $(\bar{x_1}-\bar{x_2})\pm E$, where
$E=z_{\alpha /2}\sqrt{\frac{s_{1}^{2}}{n_1}+\frac{s_{2}^{2}}{n_2}} \nonumber$
To say that we wish to estimate the difference to within a certain number of units means that we want the margin of error $E$ to be no larger than that number. The number $z_{\alpha /2}$ is determined by the desired level of confidence. The numbers $s_1$ and $s_2$ are estimates of the standard deviations $\sigma _1$ and $\sigma _2$ of the two populations. In analogy with what we did in Section 7.4 we will assume that we either know or can reasonably approximate $\sigma _1$ and $\sigma _2$. We cannot solve for both $n_1$ and $n_2$, so we have to make an assumption about their relative sizes. We will specify that they be equal. With these assumptions we obtain the minimum sample sizes needed by solving the equation displayed just above for $n_1=n_2$.

Minimum Equal Sample Sizes for Estimating the Difference in the Means of Two Populations Using Independent Samples
The estimated minimum equal sample sizes $n_1=n_2$ needed to estimate the difference $\mu _1-\mu _2$ in two population means to within $E$ units at $100(1-\alpha )\%$ confidence is
$n_1=n_2=\frac{(z_{\alpha /2})^2(\sigma _{1}^{2}+\sigma _{2}^{2})}{E^2}\; \; \text{rounded up} \nonumber$
In all the examples and exercises the population standard deviations $\sigma _1$ and $\sigma _2$ will be given.

Example $1$
A law firm wishes to estimate the difference in the mean delivery time of documents sent between two of its offices by two different courier companies, to within half an hour and with $99.5\%$ confidence. From its records it will randomly sample the same number $n$ of documents delivered by each courier company. Determine how large $n$ must be if the estimated standard deviations of the delivery times are $0.75$ hour for one company and $1.15$ hours for the other.

Solution
Confidence level $99.5\%$ means that $\alpha =1-0.995=0.005$ so $\alpha /2=0.0025$. From the last line of Figure 7.1.6 we obtain $z_{0.0025}=2.807$.
To say that the estimate is to be “to within half an hour” means that $E=0.5$. Thus
$n=\frac{(z_{\alpha /2})^2(\sigma _{1}^{2}+\sigma _{2}^{2})}{E^2}=\frac{(2.807)^2(0.75^2+1.15^2)}{0.5^2}=59.40953746 \nonumber$
which we round up to $60$, since it is impossible to take a fractional observation. The law firm must sample $60$ document deliveries by each company.
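The computation in the solution is easy to script. The following minimal sketch, not from the text, wraps the sample-size formula in a small Python function (the function name and signature are our own invention) and checks it against Example $1$.

```python
import math
from scipy import stats

def min_equal_n(sigma1, sigma2, E, confidence):
    """Minimum equal sample sizes n1 = n2 for estimating mu1 - mu2 to within E."""
    z = stats.norm.ppf(1 - (1 - confidence) / 2)              # z_{alpha/2}
    return math.ceil(z**2 * (sigma1**2 + sigma2**2) / E**2)   # "rounded up"

# Example 1: couriers, sigma1 = 0.75 h, sigma2 = 1.15 h, E = 0.5 h, 99.5% confidence
print(min_equal_n(0.75, 1.15, E=0.5, confidence=0.995))       # 60
```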
Estimating $\mu _1-\mu _2$ with Paired Samples
As we mentioned at the end of Section 9.3, if the sample is large (meaning that $n\geq 30$) then in the formula for the confidence interval we may replace $t_{\alpha /2}$ by $z_{\alpha /2}$, so that the confidence interval formula becomes $\bar{d}\pm E$ for
$E=z_{\alpha /2}\frac{s_d}{\sqrt{n}} \nonumber$
The number $s_d$ is an estimate of the standard deviation $\sigma _d$ of the population of differences. We must assume that we either know or can reasonably approximate $\sigma _d$. Thus, assuming that large samples will be required to meet the criteria given, we can solve the displayed equation for $n$ to obtain an estimate of the number of pairs needed in the sample.

Minimum Sample Size for Estimating the Difference in the Means of Two Populations Using Paired Difference Samples
The estimated minimum number of pairs $n$ needed to estimate the difference $\mu_d=\mu _1-\mu _2$ in two population means to within $E$ units at $100(1-\alpha )\%$ confidence using paired difference samples is
$n=\frac{(z_{\alpha /2})^2\sigma _{d}^{2}}{E^2}\; \; \text{rounded up} \nonumber$
In all the examples and exercises the population standard deviation of the differences $\sigma _d$ will be given.

Example $2$
An automotive tire manufacturer wishes to compare the mean lifetime of two tread designs under actual driving conditions. It will mount one of each type of tire on $n$ vehicles (both on the front or both on the back) and measure the difference in remaining tread after $20,000$ miles of driving. If the standard deviation of the differences is assumed to be $0.025$ inch, find the minimum sample size needed to estimate the difference in mean tread depth (at $20,000$ miles of use) to within $0.01$ inch at $99.9\%$ confidence.

Solution
Confidence level $99.9\%$ means that $\alpha =1-0.999=0.001$ so $\alpha /2=0.0005$. From the last line of Figure 7.1.6 we obtain $z_{0.0005}=3.291$.
To say that the estimate is to be “to within $0.01$ inch” means that $E = 0.01$. Thus
$n=\frac{(z_{\alpha /2})^2\sigma _{d}^{2}}{E^2}=\frac{(3.291)^2(0.025)^2}{(0.01)^2}=67.69175625 \nonumber$
which we round up to $68$. The manufacturer must test $68$ pairs of tires.

Estimating $p_1-p_2$
The confidence interval formula for estimating the difference $p_1-p_2$ between two population proportions is $\hat{p_1}-\hat{p_2}\pm E$, where
$E=z_{\alpha /2}\sqrt{\frac{\hat{p_1}(1-\hat{p_1})}{n_1}+\frac{\hat{p_2}(1-\hat{p_2})}{n_2}} \nonumber$
To say that we wish to estimate the difference to within a certain number of units means that we want the margin of error $E$ to be no larger than that number. The number $z_{\alpha /2}$ is determined by the desired level of confidence. We cannot solve for both $n_1$ and $n_2$, so we have to make an assumption about their relative sizes. We will specify that they be equal. With these assumptions we obtain the minimum sample sizes needed by solving the displayed equation for $n_1=n_2$.
Minimum Equal Sample Sizes for Estimating the Difference in Two Population Proportions
The estimated minimum equal sample sizes $n_1=n_2$ needed to estimate the difference $p_1-p_2$ in two population proportions to within $E$ percentage points at $100(1-\alpha )\%$ confidence is
$n_1=n_2=\frac{(z_{\alpha /2})^2\left (\hat{p_1}(1-\hat{p_1})+\hat{p_2}(1-\hat{p_2}) \right )}{E^2}\; \; \text{rounded up} \nonumber$

Here we face the same dilemma that we encountered in the case of a single population proportion: the formula for estimating how large a sample to take contains the numbers $\hat{p_1}$ and $\hat{p_2}$, which we know only after we have taken the sample. There are two ways out of this dilemma. Typically the researcher will have some idea as to the values of the population proportions $p_1$ and $p_2$, hence of what the sample proportions $\hat{p_1}$ and $\hat{p_2}$ are likely to be. If so, those estimates can be used in the formula.

The second approach to resolving the dilemma is simply to replace each of $\hat{p_1}$ and $\hat{p_2}$ in the formula by $0.5$. As in the one-population case, this is the most conservative estimate, since it gives the largest possible estimate of $n$. If we have an estimate of only one of $p_1$ and $p_2$ we can use that estimate for it, and use the conservative estimate $0.5$ for the other.

Example $3$
Find the minimum equal sample sizes necessary to construct a $98\%$ confidence interval for the difference $p_1-p_2$ with a margin of error $E=0.05$,
1. assuming that no prior knowledge about $p_1$ or $p_2$ is available; and
2. assuming that prior studies suggest that $p_1\approx 0.2$ and $p_2\approx 0.3$.

Solution
Confidence level $98\%$ means that $\alpha =1-0.98=0.02$ so $\alpha /2=0.01$. From the last line of Figure 7.1.6 we obtain $z_{0.01}=2.326$.
1. Since there is no prior knowledge of $p_1$ or $p_2$ we make the most conservative estimate that $\hat{p_1}=0.5$ and $\hat{p_2}=0.5$. Then
$\begin{align*} n_1=n_2 &= \frac{(z_{\alpha /2})^2\left (\hat{p_1}(1-\hat{p_1})+\hat{p_2}(1-\hat{p_2}) \right )}{E^2}\\ &= \frac{(2.326)^2((0.5)(0.5)+(0.5)(0.5))}{0.05^2}\\ &= 1082.0552 \end{align*} \nonumber$
which we round up to $1,083$. We must take a sample of size $1,083$ from each population.
2. Since $p_1\approx 0.2$ we estimate $\hat{p_1}$ by $0.2$, and since $p_2\approx 0.3$ we estimate $\hat{p_2}$ by $0.3$. Thus we obtain
$\begin{align*} n_1=n_2 &= \frac{(z_{\alpha /2})^2\left (\hat{p_1}(1-\hat{p_1})+\hat{p_2}(1-\hat{p_2}) \right )}{E^2}\\ &= \frac{(2.326)^2((0.2)(0.8)+(0.3)(0.7))}{0.05^2}\\ &= 800.720848 \end{align*} \nonumber$
which we round up to $801$. We must take a sample of size $801$ from each population.

Key Takeaway
• If the population standard deviations $\sigma _1$ and $\sigma _2$ are known or can be estimated, then the minimum equal sizes of independent samples needed to obtain a confidence interval for the difference $\mu _1-\mu _2$ in two population means with a given maximum error of the estimate $E$ and a given level of confidence can be estimated.
• If the standard deviation $\sigma _d$ of the population of differences in pairs drawn from two populations is known or can be estimated, then the minimum number of sample pairs needed under paired difference sampling to obtain a confidence interval for the difference $\mu_d=\mu _1-\mu _2$ in two population means with a given maximum error of the estimate $E$ and a given level of confidence can be estimated.
• The minimum equal sample sizes needed to obtain a confidence interval for the difference in two population proportions with a given maximum error of the estimate and a given level of confidence can always be estimated. If there is prior knowledge of the population proportions $p_1$ and $p_2$ then the estimate can be sharpened.
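For completeness, the other two sample-size formulas of this section can be scripted the same way. The sketch below, not from the text, defines small helper functions (names and signatures are our own) and reproduces the answers of Examples $2$ and $3$.

```python
import math
from scipy import stats

def z_crit(confidence):
    """Two-sided critical value z_{alpha/2} for the given confidence level."""
    return stats.norm.ppf(1 - (1 - confidence) / 2)

def min_pairs(sigma_d, E, confidence):
    """Minimum number of pairs for estimating mu_d to within E (paired sampling)."""
    return math.ceil(z_crit(confidence)**2 * sigma_d**2 / E**2)

def min_equal_n_props(E, confidence, p1_hat=0.5, p2_hat=0.5):
    """Minimum equal sample sizes for estimating p1 - p2 to within E.
    The defaults p1_hat = p2_hat = 0.5 give the conservative estimate."""
    q = p1_hat * (1 - p1_hat) + p2_hat * (1 - p2_hat)
    return math.ceil(z_crit(confidence)**2 * q / E**2)

print(min_pairs(0.025, E=0.01, confidence=0.999))                          # 68 pairs of tires
print(min_equal_n_props(E=0.05, confidence=0.98))                          # 1083, no prior knowledge
print(min_equal_n_props(E=0.05, confidence=0.98, p1_hat=0.2, p2_hat=0.3))  # 801, using prior estimates
```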
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 9.1: Comparison of Two Population Means: Large, Independent Samples Q9.1.1 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $90\%$ confidence, $n_1=45, \bar{x_1}=27, s_1=2\ n_2=60, \bar{x_2}=22, s_2=3$ 2. $99\%$ confidence, $n_1=30, \bar{x_1}=-112, s_1=9\ n_2=40, \bar{x_2}=-98, s_2=4$ Q9.1.2 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $95\%$ confidence, $n_1=110, \bar{x_1}=77, s_1=15\ n_2=85, \bar{x_2}=79, s_2=21$ 2. $90\%$ confidence, $n_1=65, \bar{x_1}=-83, s_1=12\ n_2=65, \bar{x_2}=-74, s_2=8$ Q9.1.3 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $99.5\%$ confidence, $n_1=130, \bar{x_1}=27.2, s_1=2.5\ n_2=155, \bar{x_2}=38.8, s_2=4.6$ 2. $95\%$ confidence, $n_1=68, \bar{x_1}=215.5, s_1=12.3\ n_2=84, \bar{x_2}=287.8, s_2=14.1$ Q9.1.4 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $99.9\%$ confidence, $n_1=275, \bar{x_1}=70.2, s_1=1.5\ n_2=325, \bar{x_2}=63.4, s_2=1.1$ 2. $90\%$ confidence, $n_1=120, \bar{x_1}=35.5, s_1=0.75\ n_2=146, \bar{x_2}=29.6, s_2=0.80$ Q9.1.5 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. Compute the $p$-value of the test as well. 1. Test $H_0:\mu _1-\mu _2=3\; vs\; H_a:\mu _1-\mu _2\neq 3\; @\; \alpha =0.05$ $n_1=35, \bar{x_1}=25, s_1=1\ n_2=45, \bar{x_2}=19, s_2=2$ 2. Test $H_0:\mu _1-\mu _2=-25\; vs\; H_a:\mu _1-\mu _2<-25\; @\; \alpha =0.10$ $n_1=85, \bar{x_1}=188, s_1=15\ n_2=62, \bar{x_2}=215, s_2=19$ Q9.1.6 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. Compute the $p$-value of the test as well. 1. Test $H_0:\mu _1-\mu _2=45\; vs\; H_a:\mu _1-\mu _2>45\; @\; \alpha =0.001$ $n_1=200, \bar{x_1}=1312, s_1=35\ n_2=225, \bar{x_2}=1256, s_2=28$ 2. Test $H_0:\mu _1-\mu _2=-12\; vs\; H_a:\mu _1-\mu _2\neq -12\; @\; \alpha =0.10$ $n_1=35, \bar{x_1}=121, s_1=6\ n_2=40, \bar{x_2}=135 s_2=7$ Q9.1.7 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. Compute the $p$-value of the test as well. 1. Test $H_0:\mu _1-\mu _2=0\; vs\; H_a:\mu _1-\mu _2\neq 0\; @\; \alpha =0.01$ $n_1=125, \bar{x_1}=-46, s_1=10\ n_2=90, \bar{x_2}=-50, s_2=13$ 2. Test $H_0:\mu _1-\mu _2=20\; vs\; H_a:\mu _1-\mu _2>20\; @\; \alpha =0.05$ $n_1=40, \bar{x_1}=142, s_1=11\ n_2=40, \bar{x_2}=118 s_2=10$ Q9.1.8 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. Compute the $p$-value of the test as well. 1. Test $H_0:\mu _1-\mu _2=13\; vs\; H_a:\mu _1-\mu _2<13\; @\; \alpha =0.01$ $n_1=35, \bar{x_1}=100, s_1=2\ n_2=35, \bar{x_2}=88, s_2=2$ 2. Test $H_0:\mu _1-\mu _2=-10\; vs\; H_a:\mu _1-\mu _2\neq -10\; @\; \alpha =0.10$ $n_1=146, \bar{x_1}=62, s_1=4\ n_2=120, \bar{x_2}=73 s_2=7$ Q9.1.9 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. 1. Test $H_0:\mu _1-\mu _2=57\; vs\; H_a:\mu _1-\mu _2<57\; @\; \alpha =0.10$ $n_1=117, \bar{x_1}=1309, s_1=42\ n_2=133, \bar{x_2}=1258, s_2=37$ 2. 
Test $H_0:\mu _1-\mu _2=-1.5\; vs\; H_a:\mu _1-\mu _2\neq -1.5\; @\; \alpha =0.20$ $n_1=65, \bar{x_1}=16.9, s_1=1.3\ n_2=57, \bar{x_2}=18.6, s_2=1.1$

Q9.1.10 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach.
1. Test $H_0:\mu _1-\mu _2=-10.5\; vs\; H_a:\mu _1-\mu _2>-10.5\; @\; \alpha =0.01$ $n_1=64, \bar{x_1}=85.6, s_1=2.4\ n_2=50, \bar{x_2}=95.3, s_2=3.1$
2. Test $H_0:\mu _1-\mu _2=110\; vs\; H_a:\mu _1-\mu _2\neq 110\; @\; \alpha =0.02$ $n_1=176, \bar{x_1}=1918, s_1=68\ n_2=241, \bar{x_2}=1782, s_2=146$

Q9.1.11 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach.
1. Test $H_0:\mu _1-\mu _2=50\; vs\; H_a:\mu _1-\mu _2>50\; @\; \alpha =0.005$ $n_1=72, \bar{x_1}=272, s_1=26\ n_2=103, \bar{x_2}=213, s_2=14$
2. Test $H_0:\mu _1-\mu _2=7.5\; vs\; H_a:\mu _1-\mu _2\neq 7.5\; @\; \alpha =0.10$ $n_1=52, \bar{x_1}=94.3, s_1=2.6\ n_2=38, \bar{x_2}=88.6, s_2=8.0$

Q9.1.12 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach.
1. Test $H_0:\mu _1-\mu _2=23\; vs\; H_a:\mu _1-\mu _2<23\; @\; \alpha =0.20$ $n_1=314, \bar{x_1}=198, s_1=12.2\ n_2=220, \bar{x_2}=176, s_2=11.5$
2. Test $H_0:\mu _1-\mu _2=4.4\; vs\; H_a:\mu _1-\mu _2\neq 4.4\; @\; \alpha =0.05$ $n_1=32, \bar{x_1}=40.3, s_1=0.5\ n_2=30, \bar{x_2}=35.5, s_2=0.7$

Q9.1.13 In order to investigate the relationship between mean job tenure in years among workers who have a bachelor’s degree or higher and those who do not, random samples of each type of worker were taken, with the following results.
n $\bar{x}$ s
Bachelor’s degree or higher 155 5.2 1.3
No degree 210 5.0 1.5
1. Construct the $99\%$ confidence interval for the difference in the population means based on these data.
2. Test, at the $1\%$ level of significance, the claim that mean job tenure among those with higher education is greater than among those without, against the default that there is no difference in the means.
3. Compute the observed significance of the test.

Q9.1.14 Records of $40$ used passenger cars and $40$ used pickup trucks (none used commercially) were randomly selected to investigate whether there was any difference in the mean time in years that they were kept by the original owner before being sold. For cars the mean was $5.3$ years with standard deviation $2.2$ years. For pickup trucks the mean was $7.1$ years with standard deviation $3.0$ years.
1. Construct the $95\%$ confidence interval for the difference in the means based on these data.
2. Test the hypothesis that there is a difference in the means against the null hypothesis that there is no difference. Use the $1\%$ level of significance.
3. Compute the observed significance of the test in part (b).

Q9.1.15 In previous years the average number of patients per hour at a hospital emergency room on weekends exceeded the average on weekdays by $6.3$ visits per hour. A hospital administrator believes that the current weekend mean exceeds the weekday mean by fewer than $6.3$ patients per hour.
1. Construct the $99\%$ confidence interval for the difference in the population means based on the following data, derived from a study in which $30$ weekend and $30$ weekday one-hour periods were randomly selected and the number of new patients in each recorded.
n $\bar{x}$ s
Weekends 30 13.8 3.1
Weekdays 30 8.6 2.7
2.
Test at the $5\%$ level of significance whether the current weekend mean exceeds the weekday mean by fewer than $6.3$ patients per hour.
3. Compute the observed significance of the test.

Q9.1.16 A sociologist surveys $50$ randomly selected citizens in each of two countries to compare the mean number of hours of volunteer work done by adults in each. Among the $50$ inhabitants of Lilliput, the mean hours of volunteer work per year was $52$, with standard deviation $11.8$. Among the $50$ inhabitants of Blefuscu, the mean number of hours of volunteer work per year was $37$, with standard deviation $7.2$.
1. Construct the $99\%$ confidence interval for the difference in mean number of hours volunteered by all residents of Lilliput and the mean number of hours volunteered by all residents of Blefuscu.
2. Test, at the $1\%$ level of significance, the claim that the mean number of hours volunteered by all residents of Lilliput is more than ten hours greater than the mean number of hours volunteered by all residents of Blefuscu.
3. Compute the observed significance of the test in part (b).

Q9.1.17 A university administrator asserted that upperclassmen spend more time studying than underclassmen.
1. Test this claim against the default that the average number of hours of study per week by the two groups is the same, using the following information based on random samples from each group of students. Test at the $1\%$ level of significance.
n $\bar{x}$ s
Upperclassmen 35 15.6 2.9
Underclassmen 35 12.3 4.1
2. Compute the observed significance of the test.

Q9.1.18 A kinesiologist claims that the resting heart rate of men aged $18$ to $25$ who exercise regularly is more than five beats per minute less than that of men who do not exercise regularly. Men in each category were selected at random and their resting heart rates were measured, with the results shown.
n $\bar{x}$ s
Regular exercise 40 63 1.0
No regular exercise 30 71 1.2
1. Perform the relevant test of hypotheses at the $1\%$ level of significance.
2. Compute the observed significance of the test.

Q9.1.19 Children in two elementary school classrooms were given two versions of the same test, but with the order of questions arranged from easier to more difficult in Version $A$ and in reverse order in Version $B$. Randomly selected students from each class were given Version $A$ and the rest Version $B$. The results are shown in the table.
n $\bar{x}$ s
Version A 31 83 4.6
Version B 32 78 4.3
1. Construct the $90\%$ confidence interval for the difference in the means of the populations of all children taking Version $A$ of such a test and of all children taking Version $B$ of such a test.
2. Test at the $1\%$ level of significance the hypothesis that the $A$ version of the test is easier than the $B$ version (even though the questions are the same).
3. Compute the observed significance of the test.

Q9.1.20 The Municipal Transit Authority wants to know if, on weekdays, more passengers ride the northbound blue line train towards the city center that departs at $8:15\; a.m.$ or the one that departs at $8:30\; a.m$. The following sample statistics are assembled by the Transit Authority.
n $\bar{x}$ s
8:15 a.m. train 30 323 41
8:30 a.m. train 45 356 45
1. Construct the $90\%$ confidence interval for the difference in the mean number of daily travelers on the $8:15\; a.m.$ train and the mean number of daily travelers on the $8:30\; a.m.$ train.
2.
Test at the $5\%$ level of significance whether the data provide sufficient evidence to conclude that more passengers ride the $8:30\; a.m.$ train. 3. Compute the observed significance of the test. Q9.1.21 In comparing the academic performance of college students who are affiliated with fraternities and those male students who are unaffiliated, a random sample of students was drawn from each of the two populations on a university campus. Summary statistics on the student GPAs are given below. n $\bar{x}$ s Fraternity 645 2.90 0.47 Unaffiliated 450 2.88 0.42 Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that there is a difference in average GPA between the population of fraternity students and the population of unaffiliated male students on this university campus. Q9.1.22 In comparing the academic performance of college students who are affiliated with sororities and those female students who are unaffiliated, a random sample of students was drawn from each of the two populations on a university campus. Summary statistics on the student GPAs are given below. n $\bar{x}$ s Sorority 330 3.18 0.37 Unaffiliated 550 3.12 0.41 Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that there is a difference in average GPA between the population of sorority students and the population of unaffiliated female students on this university campus. Q9.1.23 The owner of a professional football team believes that the league has become more offense oriented since five years ago. To check his belief, $32$ randomly selected games from one year’s schedule were compared to $32$ randomly selected games from the schedule five years later. Since more offense produces more points per game, the owner analyzed the following information on points per game (ppg). n $\bar{x}$ s ppg previously 32 20.62 4.17 ppg recently 32 22.05 4.01 Test, at the $10\%$ level of significance, whether the data on points per game provide sufficient evidence to conclude that the game has become more offense oriented. Q9.1.24 The owner of a professional football team believes that the league has become more offense oriented since five years ago. To check his belief, $32$ randomly selected games from one year’s schedule were compared to $32$ randomly selected games from the schedule five years later. Since more offense produces more offensive yards per game, the owner analyzed the following information on offensive yards per game (oypg). n $\bar{x}$ s oypg previously 32 316 40 oypg recently 32 336 35 Test, at the $10\%$ level of significance, whether the data on offensive yards per game provide sufficient evidence to conclude that the game has become more offense oriented. Large Data Set Exercises Large Data Sets are absent 1. Large $\text{Data Sets 1A and 1B}$ list the SAT scores for $1,000$ randomly selected students. Denote the population of all male students as $\text{Population 1}$ and the population of all female students as $\text{Population 2}$. 1. Restricting attention to just the males, find $n_1$, $\bar{x_1}$ and $s_1$. Restricting attention to just the females, find $n_2$, $\bar{x_2}$ and $s_2$. 2. Let $\mu _1$ denote the mean SAT score for all males and $\mu _2$ the mean SAT score for all females. Use the results of part (a) to construct a $90\%$ confidence interval for the difference $\mu _1-\mu _2$. 3. Test, at the $5\%$ level of significance, the hypothesis that the mean SAT scores among males exceeds that of females. 2. 
Large $\text{Data Sets 1A and 1B}$ list the SAT scores for $1,000$ randomly selected students. Denote the population of all male students as $\text{Population 1}$ and the population of all female students as $\text{Population 2}$.
1. Restricting attention to just the males, find $n_1$, $\bar{x_1}$ and $s_1$. Restricting attention to just the females, find $n_2$, $\bar{x_2}$ and $s_2$.
2. Let $\mu _1$ denote the mean SAT score for all males and $\mu _2$ the mean SAT score for all females. Use the results of part (a) to construct a $95\%$ confidence interval for the difference $\mu _1-\mu _2$.
3. Test, at the $10\%$ level of significance, the hypothesis that the mean SAT score among males exceeds that of females.

3. Large $\text{Data Sets 7A and 7B}$ list the survival times for $65$ male and $75$ female laboratory mice with thymic leukemia. Denote the population of all such male mice as $\text{Population 1}$ and the population of all such female mice as $\text{Population 2}$.
1. Restricting attention to just the males, find $n_1$, $\bar{x_1}$ and $s_1$. Restricting attention to just the females, find $n_2$, $\bar{x_2}$ and $s_2$.
2. Let $\mu _1$ denote the mean survival time for all males and $\mu _2$ the mean survival time for all females. Use the results of part (a) to construct a $99\%$ confidence interval for the difference $\mu _1-\mu _2$.
3. Test, at the $1\%$ level of significance, the hypothesis that the mean survival time for males exceeds that for females by more than $182$ days (half a year).
4. Compute the observed significance of the test in part (c).

Answers
1. $(4.20,5.80)$ 2. $(-18.54,-9.46)$
1. $(-12.81,-10.39)$ 2. $(-76.50,-68.10)$
1. $Z = 8.753, \pm z_{0.025}=\pm 1.960$, reject $H_0$, $p$-value=$0.0000$ 2. $Z = -0.687, -z_{0.10}=-1.282$, do not reject $H_0$, $p$-value=$0.2451$
1. $Z = 2.444, \pm z_{0.005}=\pm 2.576$, do not reject $H_0$, $p$-value=$0.0146$ 2. $Z = 1.702, z_{0.05}=1.645$, reject $H_0$, $p$-value=$0.0446$
1. $Z = -1.19$, $p$-value=$0.1170$, do not reject $H_0$ 2. $Z = -0.92$, $p$-value=$0.3576$, do not reject $H_0$
1. $Z = 2.68$, $p$-value=$0.0037$, reject $H_0$ 2. $Z = -1.34$, $p$-value=$0.1802$, do not reject $H_0$
1. $0.2\pm 0.4$ 2. $Z = 1.360, z_{0.01}=2.326$, do not reject $H_0$ (no difference) 3. $p$-value=$0.0869$
1. $5.2\pm 1.9$ 2. $Z = -1.466, -z_{0.050}=-1.645$, do not reject $H_0$ (exceeds by $6.3$ or more) 3. $p$-value=$0.0708$
1. $Z = 3.888, z_{0.01}=2.326$, reject $H_0$ (upperclassmen study more) 2. $p$-value=$0.0001$
1. $5\pm 1.8$ 2. $Z = 4.454, z_{0.01}=2.326$, reject $H_0$ (Test A is easier) 3. $p$-value=$0.0000$
1. $Z = 0.738, \pm z_{0.025}=\pm 1.960$, do not reject $H_0$ (no difference)
2. $Z = -1.398, -z_{0.10}=-1.282$, reject $H_0$ (more offense oriented)
1. $n_1=419,\; \bar{x_1}=1540.33,\; s_1=205.40, \; n_2=581,\; \bar{x_2}=1520.38,\; s_2=217.34$ 2. $(-2.24,42.15)$ 3. $H_0:\mu _1-\mu _2=0\; vs\; H_a:\mu _1-\mu _2>0$. Test Statistic: $Z = 1.48$. Rejection Region: $[1.645,\infty )$. Decision: Fail to reject $H_0$.
1. $n_1=65,\; \bar{x_1}=665.97,\; s_1=41.60, \; n_2=75,\; \bar{x_2}=455.89,\; s_2=63.22$ 2. $(187.06,233.09)$ 3. $H_0:\mu _1-\mu _2=182\; vs\; H_a:\mu _1-\mu _2>182$. Test Statistic: $Z = 3.14$. Rejection Region: $[2.33,\infty )$. Decision: Reject $H_0$. 4. $p$-value=$0.0008$

9.2: Comparison of Two Population Means: Small, Independent Samples
Basic
In all exercises for this section assume that the populations are normal and have equal standard deviations.
Q9.2.1 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given.
1. $95\%$ confidence, $n_1=10,\; \bar{x_1}=120,\; s_1=2\ n_2=15,\; \bar{x_2}=101,\; s_2=4$
2. $99\%$ confidence, $n_1=6,\; \bar{x_1}=25,\; s_1=1\ n_2=12,\; \bar{x_2}=17,\; s_2=3$

Q9.2.2 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given.
1. $90\%$ confidence, $n_1=28,\; \bar{x_1}=212,\; s_1=6\ n_2=23,\; \bar{x_2}=198,\; s_2=5$
2. $99\%$ confidence, $n_1=14,\; \bar{x_1}=68,\; s_1=8\ n_2=20,\; \bar{x_2}=43,\; s_2=3$

Q9.2.3 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given.
1. $99.9\%$ confidence, $n_1=35,\; \bar{x_1}=6.5,\; s_1=0.2\ n_2=20,\; \bar{x_2}=6.2,\; s_2=0.1$
2. $99\%$ confidence, $n_1=18,\; \bar{x_1}=77.3,\; s_1=1.2\ n_2=32,\; \bar{x_2}=75.0,\; s_2=1.6$

Q9.2.4 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given.
1. $99.5\%$ confidence, $n_1=40,\; \bar{x_1}=85.6,\; s_1=2.8\ n_2=20,\; \bar{x_2}=73.1,\; s_2=2.1$
2. $99.9\%$ confidence, $n_1=25,\; \bar{x_1}=215,\; s_1=7\ n_2=35,\; \bar{x_2}=185,\; s_2=12$

Q9.2.5 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach.
1. Test $H_0:\mu _1-\mu _2=11\; vs\; H_a:\mu _1-\mu _2>11\; @\; \alpha =0.025$ $n_1=6,\; \bar{x_1}=32,\; s_1=2\ n_2=11,\; \bar{x_2}=19,\; s_2=1$
2. Test $H_0:\mu _1-\mu _2=26\; vs\; H_a:\mu _1-\mu _2\neq 26\; @\; \alpha =0.05$ $n_1=17,\; \bar{x_1}=166,\; s_1=4\ n_2=24,\; \bar{x_2}=138,\; s_2=3$

Q9.2.6 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach.
1. Test $H_0:\mu _1-\mu _2=40\; vs\; H_a:\mu _1-\mu _2<40\; @\; \alpha =0.10$ $n_1=14,\; \bar{x_1}=289,\; s_1=11\ n_2=12,\; \bar{x_2}=254,\; s_2=9$
2. Test $H_0:\mu _1-\mu _2=21\; vs\; H_a:\mu _1-\mu _2\neq 21\; @\; \alpha =0.05$ $n_1=23,\; \bar{x_1}=130,\; s_1=6\ n_2=27,\; \bar{x_2}=113,\; s_2=8$

Q9.2.7 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach.
1. Test $H_0:\mu _1-\mu _2=-15\; vs\; H_a:\mu _1-\mu _2<-15\; @\; \alpha =0.10$ $n_1=30,\; \bar{x_1}=42,\; s_1=7\ n_2=12,\; \bar{x_2}=60,\; s_2=5$
2. Test $H_0:\mu _1-\mu _2=103\; vs\; H_a:\mu _1-\mu _2\neq 103\; @\; \alpha =0.10$ $n_1=17,\; \bar{x_1}=711,\; s_1=28\ n_2=32,\; \bar{x_2}=598,\; s_2=21$

Q9.2.8 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach.
1. Test $H_0:\mu _1-\mu _2=75\; vs\; H_a:\mu _1-\mu _2>75\; @\; \alpha =0.025$ $n_1=45,\; \bar{x_1}=674,\; s_1=18\ n_2=29,\; \bar{x_2}=591,\; s_2=13$
2. Test $H_0:\mu _1-\mu _2=-20\; vs\; H_a:\mu _1-\mu _2\neq -20\; @\; \alpha =0.005$ $n_1=30,\; \bar{x_1}=137,\; s_1=8\ n_2=19,\; \bar{x_2}=166,\; s_2=11$

Q9.2.9 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. (The $p$-value can be only approximated.)
1. Test $H_0:\mu _1-\mu _2=12\; vs\; H_a:\mu _1-\mu _2>12\; @\; \alpha =0.01$ $n_1=20,\; \bar{x_1}=133,\; s_1=7\ n_2=10,\; \bar{x_2}=115,\; s_2=5$
2.
Test $H_0:\mu _1-\mu _2=46\; vs\; H_a:\mu _1-\mu _2\neq 46\; @\; \alpha =0.10$ $n_1=24,\; \bar{x_1}=586,\; s_1=11\ n_2=27,\; \bar{x_2}=535,\; s_2=13$

Q9.2.10 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. (The $p$-value can be only approximated.)
1. Test $H_0:\mu _1-\mu _2=38\; vs\; H_a:\mu _1-\mu _2<38\; @\; \alpha =0.01$ $n_1=12,\; \bar{x_1}=464,\; s_1=5\ n_2=10,\; \bar{x_2}=432,\; s_2=6$
2. Test $H_0:\mu _1-\mu _2=4\; vs\; H_a:\mu _1-\mu _2\neq 4\; @\; \alpha =0.005$ $n_1=14,\; \bar{x_1}=68,\; s_1=2\ n_2=17,\; \bar{x_2}=67,\; s_2=3$

Q9.2.11 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. (The $p$-value can be only approximated.)
1. Test $H_0:\mu _1-\mu _2=50\; vs\; H_a:\mu _1-\mu _2>50\; @\; \alpha =0.01$ $n_1=30,\; \bar{x_1}=681,\; s_1=8\ n_2=27,\; \bar{x_2}=625,\; s_2=8$
2. Test $H_0:\mu _1-\mu _2=35\; vs\; H_a:\mu _1-\mu _2\neq 35\; @\; \alpha =0.10$ $n_1=36,\; \bar{x_1}=325,\; s_1=11\ n_2=29,\; \bar{x_2}=286,\; s_2=7$

Q9.2.12 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. (The $p$-value can be only approximated.)
1. Test $H_0:\mu _1-\mu _2=-4\; vs\; H_a:\mu _1-\mu _2<-4\; @\; \alpha =0.05$ $n_1=40,\; \bar{x_1}=80,\; s_1=5\ n_2=25,\; \bar{x_2}=87,\; s_2=5$
2. Test $H_0:\mu _1-\mu _2=21\; vs\; H_a:\mu _1-\mu _2\neq 21\; @\; \alpha =0.01$ $n_1=15,\; \bar{x_1}=192,\; s_1=12\ n_2=34,\; \bar{x_2}=180,\; s_2=8$

Q9.2.13 A county environmental agency suspects that the fish in a particular polluted lake have elevated mercury levels. To confirm that suspicion, five striped bass in that lake were caught and their tissues were tested for mercury. For the purpose of comparison, four striped bass in an unpolluted lake were also caught and tested. The fish tissue mercury levels in mg/kg are given below.
Sample 1 (from polluted lake)   Sample 2 (from unpolluted lake)
0.580   0.382
0.711   0.276
0.571   0.570
0.666   0.366
0.598
1. Construct the $95\%$ confidence interval for the difference in the population means based on these data.
2. Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that fish in the polluted lake have elevated levels of mercury in their tissue.

Q9.2.14 A genetic engineering company claims that it has developed a genetically modified tomato plant that yields on average more tomatoes than other varieties. A farmer wants to test the claim on a small scale before committing to a full-scale planting. Ten genetically modified tomato plants are grown from seeds along with ten other tomato plants. At the season’s end, the resulting yields in pounds are recorded as below.
Sample 1 (genetically modified)   Sample 2 (regular)
20   21
23   21
27   22
25   18
25   20
25   20
27   18
23   25
24   23
22   20
1. Construct the $99\%$ confidence interval for the difference in the population means based on these data.
2. Test, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that the mean yield of the genetically modified variety is greater than that for the standard variety.

Q9.2.15 The coaching staff of a professional football team believes that the rushing offense has become increasingly potent in recent years. To investigate this belief, $20$ randomly selected games from one year’s schedule were compared to $11$ randomly selected games from the schedule five years later.
The sample information on rushing yards per game (rypg) is summarized below.
n $\bar{x}$ s
rypg previously 20 112 24
rypg recently 11 114 21
1. Construct the $95\%$ confidence interval for the difference in the population means based on these data.
2. Test, at the $5\%$ level of significance, whether the data on rushing yards per game provide sufficient evidence to conclude that the rushing offense has become more potent in recent years.

Q9.2.16 The coaching staff of a professional football team believes that the rushing offense has become increasingly potent in recent years. To investigate this belief, $20$ randomly selected games from one year’s schedule were compared to $11$ randomly selected games from the schedule five years later. The sample information on passing yards per game (pypg) is summarized below.
n $\bar{x}$ s
pypg previously 20 203 38
pypg recently 11 232 33
1. Construct the $95\%$ confidence interval for the difference in the population means based on these data.
2. Test, at the $5\%$ level of significance, whether the data on passing yards per game provide sufficient evidence to conclude that the passing offense has become more potent in recent years.

Q9.2.17 A university administrator wishes to know if there is a difference in average starting salary for graduates with master’s degrees in engineering and those with master’s degrees in business. Fifteen recent graduates with master’s degrees in engineering and $11$ with master’s degrees in business are surveyed and the results are summarized below.
n $\bar{x}$ s
Engineering 15 68,535 1627
Business 11 63,230 2033
1. Construct the $90\%$ confidence interval for the difference in the population means based on these data.
2. Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that the average starting salaries are different.

Q9.2.18 A gardener sets up a flower stand in a busy business district and sells bouquets of assorted fresh flowers on weekdays. To find a more profitable pricing, she sells bouquets for $15$ dollars each for ten days, then for $10$ dollars each for five days. Her average daily profit for the two different prices is given below.
Price n $\bar{x}$ s
$15 10 171 26
$10 5 198 29
1. Construct the $90\%$ confidence interval for the difference in the population means based on these data.
2. Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude the gardener’s average daily profit will be higher if the bouquets are sold at $10 each.

Answers
1. $(16.16,21.84)$ 2. $(4.28,11.72)$
1. $(0.13,0.47)$ 2. $(1.14,3.46)$
1. $T = 2.787,\; t_{0.025}=2.131$, reject $H_0$ 2. $T = 1.831,\; \pm t_{0.025}=\pm 2.023$, do not reject $H_0$
1. $T = -1.349,\; -t_{0.10}=-1.303$, reject $H_0$ 2. $T = 1.411,\; \pm t_{0.05}=\pm 1.678$, do not reject $H_0$
1. $T = 2.411,\; df=28,\; \text{p-value}>0.01$, do not reject $H_0$ 2. $T = 1.473,\; df=49,\; \text{p-value}>0.10$, do not reject $H_0$
1. $T = 2.827,\; df=55,\; \text{p-value}<0.01$, reject $H_0$ 2. $T = 1.699,\; df=63,\; \text{p-value}<0.10$, reject $H_0$
1. $0.2267\pm 0.1475$ 2. $T = 3.635,\; df=7,\; t_{0.05}=1.895$, reject $H_0$ (elevated levels)
1. $-2\pm 17.7$ 2. $T = -0.232,\; df=29,\; -t_{0.05}=-1.699$, do not reject $H_0$ (not more potent)
1. $5305\pm 1227$ 2. $T = 7.395,\; df=24,\; \pm t_{0.05}=\pm 1.711$, reject $H_0$ (different)

9.3 Comparison of Two Population Means: Paired Samples
Basic
In all exercises for this section assume that the population of differences is normal.
1.
Use the following paired sample data for this exercise.
$\begin{matrix} Population\: 1 & 35 & 32 & 35 & 35 & 36 & 35 & 35\\ Population\: 2 & 28 & 26 & 27 & 26 & 29 & 27 & 29 \end{matrix} \nonumber$
1. Compute $\bar{d}$ and $s_d$.
2. Give a point estimate for $\mu _1-\mu _2=\mu _d$.
3. Construct the $95\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data.
4. Test, at the $10\%$ level of significance, the hypothesis that $\mu _1-\mu _2>7$ as an alternative to the null hypothesis that $\mu _1-\mu _2=7$.

2. Use the following paired sample data for this exercise.
$\begin{matrix} Population\: 1 & 103 & 127 & 96 & 110\\ Population\: 2 & 81 & 106 & 73 & 88\\ Population\: 1 & 90 & 118 & 130 & 106\\ Population\: 2 & 70 & 95 & 109 & 83 \end{matrix} \nonumber$
1. Compute $\bar{d}$ and $s_d$.
2. Give a point estimate for $\mu _1-\mu _2=\mu _d$.
3. Construct the $90\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data.
4. Test, at the $1\%$ level of significance, the hypothesis that $\mu _1-\mu _2<24$ as an alternative to the null hypothesis that $\mu _1-\mu _2=24$.

3. Use the following paired sample data for this exercise.
$\begin{matrix} Population\: 1 & 40 & 27 & 55 & 34\\ Population\: 2 & 53 & 42 & 68 & 50 \end{matrix} \nonumber$
1. Compute $\bar{d}$ and $s_d$.
2. Give a point estimate for $\mu _1-\mu _2=\mu _d$.
3. Construct the $99\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data.
4. Test, at the $10\%$ level of significance, the hypothesis that $\mu _1-\mu _2 \neq -12$ as an alternative to the null hypothesis that $\mu _1-\mu _2=-12$.

4. Use the following paired sample data for this exercise.
$\begin{matrix} Population\: 1 & 196 & 165 & 181 & 201 & 190\\ Population\: 2 & 212 & 182 & 199 & 210 & 205 \end{matrix} \nonumber$
1. Compute $\bar{d}$ and $s_d$.
2. Give a point estimate for $\mu _1-\mu _2=\mu _d$.
3. Construct the $98\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data.
4. Test, at the $2\%$ level of significance, the hypothesis that $\mu _1-\mu _2 \neq -20$ as an alternative to the null hypothesis that $\mu _1-\mu _2=-20$.

Applications
1. Each of five laboratory mice was released into a maze twice. The five pairs of times to escape were:
Mouse 1 2 3 4 5
First release 129 89 136 163 118
Second release 113 97 139 85 75
1. Compute $\bar{d}$ and $s_d$.
2. Give a point estimate for $\mu _1-\mu _2=\mu _d$.
3. Construct the $90\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data.
4. Test, at the $10\%$ level of significance, the hypothesis that it takes mice less time to run the maze on the second trial, on average.

2. Eight golfers were asked to submit their latest scores on their favorite golf courses. These golfers were each given a set of newly designed clubs. After playing with the new clubs for a few months, the golfers were again asked to submit their latest scores on the same golf courses. The results are summarized below.
Golfer 1 2 3 4 5 6 7 8
Own clubs 77 80 69 73 73 72 75 77
New clubs 72 81 68 73 75 70 73 75
1. Compute $\bar{d}$ and $s_d$.
2. Give a point estimate for $\mu _1-\mu _2=\mu _d$.
3. Construct the $99\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data.
4. Test, at the $1\%$ level of significance, the hypothesis that on average golf scores are lower with the new clubs.

3. A neighborhood homeowners association suspects that the recent appraisal values of the houses in the neighborhood conducted by the county government for taxation purposes are too high.
It hired a private company to appraise the values of ten houses in the neighborhood. The results, in thousands of dollars, are

House County Government Private Company
1 217 219
2 350 338
3 296 291
4 237 237
5 237 235
6 272 269
7 257 239
8 277 275
9 312 320
10 335 335

1. Give a point estimate for the difference between the mean private appraisal of all such homes and the mean government appraisal of all such homes.
2. Construct the $99\%$ confidence interval based on these data for the difference.
3. Test, at the $1\%$ level of significance, the hypothesis that the appraised values by the county government of all such houses are greater than the appraised values by the private appraisal company.

4. In order to cut costs a wine producer is considering using duo or $1+1$ corks in place of full natural wood corks, but is concerned that it could affect buyers’ perception of the quality of the wine. The wine producer shipped eight pairs of bottles of its best young wines to eight wine experts. Each pair includes one bottle with a natural wood cork and one with a duo cork. The experts are asked to rate the wines on a one to ten scale, higher numbers corresponding to higher quality. The results are:

Wine Expert Duo Cork Wood Cork
1 8.5 8.5
2 8.0 8.5
3 6.5 8.0
4 7.5 8.5
5 8.0 7.5
6 8.0 8.0
7 9.0 9.0
8 7.0 7.5

1. Give a point estimate for the difference between the mean ratings of the wine when the bottles are sealed with the two kinds of corks.
2. Construct the $90\%$ confidence interval based on these data for the difference.
3. Test, at the $10\%$ level of significance, the hypothesis that on average duo corks decrease the rating of the wine.

5. Engineers at a tire manufacturing corporation wish to test a new tire material for increased durability. To test the tires under realistic road conditions, new front tires are mounted on each of $11$ company cars, one tire made with a production material and the other with the experimental material. After a fixed period the $11$ pairs were measured for wear. The amount of wear for each tire (in mm) is shown in the table:

Car Production Experimental
1 5.1 5.0
2 6.5 6.5
3 3.6 3.1
4 3.5 3.7
5 5.7 4.5
6 5.0 4.1
7 6.4 5.3
8 4.7 2.6
9 3.2 3.0
10 3.5 3.5
11 6.4 5.1

1. Give a point estimate for the difference in mean wear.
2. Construct the $99\%$ confidence interval for the difference based on these data.
3. Test, at the $1\%$ level of significance, the hypothesis that the mean wear with the experimental material is less than that for the production material.

6. A marriage counselor administered a test designed to measure overall contentment to $30$ randomly selected married couples. The scores for each couple are given below. A higher number corresponds to greater contentment or happiness.

Couple Husband Wife
1 47 44
2 44 46
3 49 44
4 53 44
5 42 43
6 45 45
7 48 47
8 45 44
9 52 44
10 47 42
11 40 34
12 45 42
13 40 43
14 46 41
15 47 45
16 46 45
17 46 41
18 46 41
19 44 45
20 45 43
21 48 38
22 42 46
23 50 44
24 46 51
25 43 45
26 50 40
27 46 46
28 42 41
29 51 41
30 46 47

1. Test, at the $1\%$ level of significance, the hypothesis that on average men and women are not equally happy in marriage.
2. Test, at the $1\%$ level of significance, the hypothesis that on average men are happier than women in marriage.

Large Data Set Exercises

Large Data Sets are absent

1. Large $\text{Data Set 5}$ lists the scores for $25$ randomly selected students on practice SAT reading tests before and after taking a two-week SAT preparation course.
Denote the population of all students who have taken the course as $\text{Population 1}$ and the population of all students who have not taken the course as $\text{Population 2}$.

1. Compute the $25$ differences in the order after - before, their mean $\bar{d}$, and their sample standard deviation $s_d$.
2. Give a point estimate for $\mu _d=\mu _1-\mu _2$, the difference in the mean score of all students who have taken the course and the mean score of all who have not.
3. Construct a $98\%$ confidence interval for $\mu _d$.
4. Test, at the $1\%$ level of significance, the hypothesis that the mean SAT score increases by at least ten points by taking the two-week preparation course.

2. Large $\text{Data Set 12}$ lists the scores on one round for $75$ randomly selected members at a golf course, first using their own original clubs, then two months later after using new clubs with an experimental design. Denote the population of all golfers using their own original clubs as $\text{Population 1}$ and the population of all golfers using the new style clubs as $\text{Population 2}$.

1. Compute the $75$ differences in the order original clubs - new clubs, their mean $\bar{d}$, and their sample standard deviation $s_d$.
2. Give a point estimate for $\mu _d=\mu _1-\mu _2$, the difference in the mean score of all golfers using their original clubs and the mean score of all golfers using the new clubs.
3. Construct a $90\%$ confidence interval for $\mu _d$.
4. Test, at the $1\%$ level of significance, the hypothesis that the mean golf score decreases by at least one stroke by using the new kind of clubs.

3. Consider the previous problem again. Since the data set is so large, it is reasonable to use the standard normal distribution instead of Student’s $t$-distribution with $74$ degrees of freedom.

1. Construct a $90\%$ confidence interval for $\mu _d$ using the standard normal distribution, meaning that the formula is $\bar{d}\pm z_{\alpha /2}\frac{s_d}{\sqrt{n}}$. (The computations done in part (a) of the previous problem still apply and need not be redone.) How does the result obtained here compare to the result obtained in part (c) of the previous problem?
2. Test, at the $1\%$ level of significance, the hypothesis that the mean golf score decreases by at least one stroke by using the new kind of clubs, using the standard normal distribution. (All the work done in part (d) of the previous problem applies, except the critical value is now $z_\alpha$ instead of $t_\alpha$; or the $p$-value can be computed exactly instead of only approximated, if you used the $p$-value approach.) How does the result obtained here compare to the result obtained in part (d) of the previous problem?
3. Construct the $99\%$ confidence intervals for $\mu _d$ using both the $t$- and $z$-distributions. How much difference is there in the results now?

Answers

1. $\bar{d}=7.4286,\; s_d=0.9759$
2. $\bar{d}=7.4286$
3. $(6.53,8.33)$
4. $T = 1.162,\; df=6,\; t_{0.10}=1.44$, do not reject $H_0$

1. $\bar{d}=-14.25,\; s_d=1.5$
2. $\bar{d}=-14.25$
3. $(-18.63,-9.87)$
4. $T = -3.000,\; df=3,\; \pm t_{0.05}=\pm 2.353$, reject $H_0$

1. $\bar{d}=25.2,\; s_d=35.6609$
2. $\bar{d}=25.2$
3. $25.2\pm 34.0$
4. $T = 1.580,\; df=4,\; t_{0.10}=1.533$, reject $H_0$ (takes less time)

1. $3.2$
2. $3.2\pm 7.5$
3. $T = 1.392,\; df=9,\; t_{0.01}=2.821$, do not reject $H_0$ (government appraisals not higher)

1. $0.65$
2. $0.65\pm 0.69$
3. $T = 3.014,\; df=10,\; t_{0.01}=2.764$, reject $H_0$ (experimental material wears less)

1. $\bar{d}=16.68,\; s_d=10.77$
2.
$\bar{d}=16.68$
3. $(11.31,22.05)$
4. $H_0:\mu _1-\mu _2=10\; vs\; H_a:\mu _1-\mu _2>10$. Test Statistic: $T = 3.1014,\; df=24$. Rejection Region: $[2.492,\infty )$. Decision: Reject $H_0$.

1. $(1.6266,2.6401)$. Endpoints change in the third decimal place.
2. $H_0:\mu _1-\mu _2=1\; vs\; H_a:\mu _1-\mu _2>1$. Test Statistic: $Z = 3.6791$. Rejection Region: $[2.33,\infty )$. Decision: Reject $H_0$. The decision is the same as in the previous problem.
3. Using the $t$-distribution, $(1.3188,2.9478)$. Using the $z$-distribution, $(1.3401,2.9266)$. There is a difference.

9.4: Comparison of Two Population Proportions

Basic

1. Construct the confidence interval for $p_1-p_2$ for the level of confidence and the data given. (The samples are sufficiently large.)
1. $90\%$ confidence, $n_1=1670,\; \hat{p_1}=0.42,\; n_2=900,\; \hat{p_2}=0.38$
2. $95\%$ confidence, $n_1=600,\; \hat{p_1}=0.84,\; n_2=420,\; \hat{p_2}=0.67$

2. Construct the confidence interval for $p_1-p_2$ for the level of confidence and the data given. (The samples are sufficiently large.)
1. $98\%$ confidence, $n_1=750,\; \hat{p_1}=0.64,\; n_2=800,\; \hat{p_2}=0.51$
2. $99.5\%$ confidence, $n_1=250,\; \hat{p_1}=0.78,\; n_2=250,\; \hat{p_2}=0.51$

3. Construct the confidence interval for $p_1-p_2$ for the level of confidence and the data given. (The samples are sufficiently large.)
1. $80\%$ confidence, $n_1=300,\; \hat{p_1}=0.255,\; n_2=400,\; \hat{p_2}=0.193$
2. $95\%$ confidence, $n_1=3500,\; \hat{p_1}=0.147,\; n_2=3750,\; \hat{p_2}=0.131$

4. Construct the confidence interval for $p_1-p_2$ for the level of confidence and the data given. (The samples are sufficiently large.)
1. $99\%$ confidence, $n_1=2250,\; \hat{p_1}=0.915,\; n_2=2525,\; \hat{p_2}=0.858$
2. $95\%$ confidence, $n_1=120,\; \hat{p_1}=0.650,\; n_2=200,\; \hat{p_2}=0.505$

5. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.)
1. Test $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2>0\; @\; \alpha =0.10$: $n_1=1200,\; \hat{p_1}=0.42,\; n_2=1200,\; \hat{p_2}=0.40$
2. Test $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2\neq 0\; @\; \alpha =0.05$: $n_1=550,\; \hat{p_1}=0.61,\; n_2=600,\; \hat{p_2}=0.67$

6. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.)
1. Test $H_0:p_1-p_2=0.05\; vs\; H_a:p_1-p_2>0.05\; @\; \alpha =0.05$: $n_1=1100,\; \hat{p_1}=0.57,\; n_2=1100,\; \hat{p_2}=0.48$
2. Test $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2\neq 0\; @\; \alpha =0.05$: $n_1=800,\; \hat{p_1}=0.39,\; n_2=900,\; \hat{p_2}=0.43$

7. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.)
1. Test $H_0:p_1-p_2=0.25\; vs\; H_a:p_1-p_2<0.25\; @\; \alpha =0.005$: $n_1=1400,\; \hat{p_1}=0.57,\; n_2=1200,\; \hat{p_2}=0.37$
2. Test $H_0:p_1-p_2=0.16\; vs\; H_a:p_1-p_2\neq 0.16\; @\; \alpha =0.02$: $n_1=750,\; \hat{p_1}=0.43,\; n_2=600,\; \hat{p_2}=0.22$

8. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.)
1. Test $H_0:p_1-p_2=0.08\; vs\; H_a:p_1-p_2>0.08\; @\; \alpha =0.025$: $n_1=450,\; \hat{p_1}=0.67,\; n_2=200,\; \hat{p_2}=0.52$
2. Test $H_0:p_1-p_2=0.02\; vs\; H_a:p_1-p_2\neq 0.02\; @\; \alpha =0.001$: $n_1=2700,\; \hat{p_1}=0.837,\; n_2=2900,\; \hat{p_2}=0.854$

9.
Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.)
1. Test $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2<0\; @\; \alpha =0.005$: $n_1=1100,\; \hat{p_1}=0.22,\; n_2=1300,\; \hat{p_2}=0.27$
2. Test $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2\neq 0\; @\; \alpha =0.01$: $n_1=650,\; \hat{p_1}=0.35,\; n_2=650,\; \hat{p_2}=0.41$

10. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.)
1. Test $H_0:p_1-p_2=0.15\; vs\; H_a:p_1-p_2>0.15\; @\; \alpha =0.10$: $n_1=950,\; \hat{p_1}=0.41,\; n_2=500,\; \hat{p_2}=0.23$
2. Test $H_0:p_1-p_2=0.10\; vs\; H_a:p_1-p_2\neq 0.10\; @\; \alpha =0.10$: $n_1=220,\; \hat{p_1}=0.92,\; n_2=160,\; \hat{p_2}=0.78$

11. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.)
1. Test $H_0:p_1-p_2=0.22\; vs\; H_a:p_1-p_2>0.22\; @\; \alpha =0.05$: $n_1=90,\; \hat{p_1}=0.72,\; n_2=75,\; \hat{p_2}=0.40$
2. Test $H_0:p_1-p_2=0.37\; vs\; H_a:p_1-p_2\neq 0.37\; @\; \alpha =0.02$: $n_1=425,\; \hat{p_1}=0.772,\; n_2=425,\; \hat{p_2}=0.331$

12. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.)
1. Test $H_0:p_1-p_2=0.50\; vs\; H_a:p_1-p_2<0.50\; @\; \alpha =0.10$: $n_1=40,\; \hat{p_1}=0.65,\; n_2=55,\; \hat{p_2}=0.24$
2. Test $H_0:p_1-p_2=0.30\; vs\; H_a:p_1-p_2\neq 0.30\; @\; \alpha =0.10$: $n_1=7500,\; \hat{p_1}=0.664,\; n_2=1000,\; \hat{p_2}=0.319$

Applications

In all the remaining exercises the samples are sufficiently large (so this need not be checked).

1. Voters in a particular city who identify themselves with one or the other of two political parties were randomly selected and asked if they favor a proposal to allow citizens with proper license to carry a concealed handgun in city parks. The results are:

Party A Party B
Sample size, n 150 200
Number in favor, x 90 140

1. Give a point estimate for the difference in the proportion of all members of $\text{Party A}$ and all members of $\text{Party B}$ who favor the proposal.
2. Construct the $95\%$ confidence interval for the difference, based on these data.
3. Test, at the $5\%$ level of significance, the hypothesis that the proportion of all members of $\text{Party A}$ who favor the proposal is less than the proportion of all members of $\text{Party B}$ who do.
4. Compute the $p$-value of the test.

2. To investigate a possible relation between gender and handedness, a random sample of $320$ adults was taken, with the following results:

Men Women
Sample size, n 168 152
Number of left-handed, x 24 9

1. Give a point estimate for the difference in the proportion of all men who are left-handed and the proportion of all women who are left-handed.
2. Construct the $95\%$ confidence interval for the difference, based on these data.
3. Test, at the $5\%$ level of significance, the hypothesis that the proportion of men who are left-handed is greater than the proportion of women who are.
4. Compute the $p$-value of the test.

3. A local school board member randomly sampled private and public high school teachers in his district to compare the proportions of National Board Certified (NBC) teachers in the faculty.
The results were:

Private Schools Public Schools
Sample size, n 80 520
Proportion of NBC teachers, $\hat{p}$ 0.175 0.150

1. Give a point estimate for the difference in the proportion of all teachers in area public schools and the proportion of all teachers in private schools who are National Board Certified.
2. Construct the $90\%$ confidence interval for the difference, based on these data.
3. Test, at the $10\%$ level of significance, the hypothesis that the proportion of all public school teachers who are National Board Certified is less than the proportion of private school teachers who are.
4. Compute the $p$-value of the test.

4. In professional basketball games, the fans of the home team always try to distract free throw shooters on the visiting team. To investigate whether this tactic is actually effective, the free throw statistics of a professional basketball player with a high free throw percentage were examined. During the entire last season, this player had $656$ free throws, $420$ in home games and $236$ in away games. The results are summarized below.

Home Away
Sample size, n 420 236
Free throw percent, $\hat{p}$ 81.5% 78.8%

1. Give a point estimate for the difference in the proportion of free throws made at home and away.
2. Construct the $90\%$ confidence interval for the difference, based on these data.
3. Test, at the $10\%$ level of significance, the hypothesis that there exists a home advantage in free throws.
4. Compute the $p$-value of the test.

5. Randomly selected middle-aged people in both China and the United States were asked if they believed that adults have an obligation to financially support their aged parents. The results are summarized below.

China USA
Sample size, n 1300 150
Number of yes, x 1170 110

Test, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that there exists a cultural difference in attitude regarding this question.

6. A manufacturer of walk-behind push mowers receives refurbished small engines from two new suppliers, $A$ and $B$. It is not uncommon that some of the refurbished engines need to be lightly serviced before they can be fitted into mowers. The mower manufacturer recently received $100$ engines from each supplier. In the shipment from $A$, $13$ needed further service. In the shipment from $B$, $10$ needed further service. Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that there exists a difference in the proportions of engines from the two suppliers needing service.

Large Data Set Exercises

Large Data Sets are absent

1. Large $\text{Data Sets 6A and 6B}$ record results of a random survey of $200$ voters in each of two regions, in which they were asked to express whether they prefer $\text{Candidate A}$ for a U.S. Senate seat or prefer some other candidate. Let the population of all voters in $\text{region 1}$ be denoted $\text{Population 1}$ and the population of all voters in $\text{region 2}$ be denoted $\text{Population 2}$. Let $p_1$ be the proportion of voters in $\text{Population 1}$ who prefer $\text{Candidate A}$, and $p_2$ the proportion in $\text{Population 2}$ who do.

1. Find the relevant sample proportions $\hat{p_1}$ and $\hat{p_2}$.
2. Construct a point estimate for $p_1-p_2$.
3. Construct a $95\%$ confidence interval for $p_1-p_2$.
4.
Test, at the $5\%$ level of significance, the hypothesis that the same proportion of voters in the two regions favor $\text{Candidate A}$, against the alternative that a larger proportion in $\text{Population 2}$ do.

2. Large $\text{Data Set 11}$ records the results of samples of real estate sales in a certain region in the year $2008$ (lines $2$ through $536$) and in the year $2010$ (lines $537$ through $1106$). Foreclosure sales are identified with a $1$ in the second column. Let all real estate sales in the region in $2008$ be $\text{Population 1}$ and all real estate sales in the region in $2010$ be $\text{Population 2}$.

1. Use the sample data to construct point estimates $\hat{p_1}$ and $\hat{p_2}$ of the proportions $p_1$ and $p_2$ of all real estate sales in this region in $2008$ and $2010$ that were foreclosure sales. Construct a point estimate of $p_1-p_2$.
2. Use the sample data to construct a $90\%$ confidence interval for $p_1-p_2$.
3. Test, at the $10\%$ level of significance, the hypothesis that the proportion of real estate sales in the region in $2010$ that were foreclosure sales was greater than the proportion of real estate sales in the region in $2008$ that were foreclosure sales. (The default is that the proportions were the same.)

Answers

1. $(0.0068,0.0732)$
2. $(0.1163,0.2237)$

1. $(0.0210,0.1030)$
2. $(0.0001,0.0319)$

1. $Z = 0.996,\; z_{0.10}=1.282,\; \text{p-value}=0.1587$, do not reject $H_0$
2. $Z = -2.120,\; \pm z_{0.025}=\pm 1.960,\; \text{p-value}=0.0340$, reject $H_0$

1. $Z = -2.602,\; -z_{0.005}=-2.576,\; \text{p-value}=0.0047$, reject $H_0$
2. $Z = 2.020,\; \pm z_{0.01}=\pm 2.326,\; \text{p-value}=0.0434$, do not reject $H_0$

1. $Z = -2.85,\; \text{p-value}=0.0022$, reject $H_0$
2. $Z = -2.23,\; \text{p-value}=0.0258$, do not reject $H_0$

1. $Z =1.36,\; \text{p-value}=0.0869$, do not reject $H_0$
2. $Z = 2.32,\; \text{p-value}=0.0204$, do not reject $H_0$

1. $-0.10$
2. $-0.10\pm 0.101$
3. $Z = -1.943,\; -z_{0.05}=-1.645$, reject $H_0$ (fewer in $\text{Party A}$ favor)
4. $\text{p-value}=0.0262$

1. $0.025$
2. $0.025\pm 0.0745$
3. $Z = 0.552,\; z_{0.10}=1.282$, do not reject $H_0$ (as many public school teachers are certified)
4. $\text{p-value}=0.2912$

1. $Z = 4.498,\; \pm z_{0.005}=\pm 2.576$, reject $H_0$ (different)

1. $\hat{p_1}=0.355$ and $\hat{p_2}=0.41$
2. $\hat{p_1}-\hat{p_2}=-0.055$
3. $(-0.1501,0.0401)$
4. $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2<0$. Test Statistic: $Z=-1.1335$. Rejection Region: $(-\infty ,-1.645 ]$. Decision: Fail to reject $H_0$.

9.5 Sample Size Considerations

Basic

1. Estimate the common sample size $n$ of equally sized independent samples needed to estimate $\mu _1-\mu _2$ as specified when the population standard deviations are as shown.
1. $90\%$ confidence, to within $3$ units, $\sigma _1=10$ and $\sigma _2=7$
2. $99\%$ confidence, to within $4$ units, $\sigma _1=6.8$ and $\sigma _2=9.3$
3. $95\%$ confidence, to within $5$ units, $\sigma _1=22.6$ and $\sigma _2=31.8$

2. Estimate the common sample size $n$ of equally sized independent samples needed to estimate $\mu _1-\mu _2$ as specified when the population standard deviations are as shown.
1. $80\%$ confidence, to within $2$ units, $\sigma _1=14$ and $\sigma _2=23$
2. $90\%$ confidence, to within $0.3$ units, $\sigma _1=1.3$ and $\sigma _2=0.8$
3. $99\%$ confidence, to within $11$ units, $\sigma _1=42$ and $\sigma _2=37$

3.
Estimate the number $n$ of pairs that must be sampled in order to estimate $\mu _d=\mu _1-\mu _2$ as specified when the standard deviation $\sigma _d$ of the population of differences is as shown.
1. $80\%$ confidence, to within $6$ units, $\sigma _d=26.5$
2. $95\%$ confidence, to within $4$ units, $\sigma _d=12$
3. $90\%$ confidence, to within $5.2$ units, $\sigma _d=11.3$

4. Estimate the number $n$ of pairs that must be sampled in order to estimate $\mu _d=\mu _1-\mu _2$ as specified when the standard deviation $\sigma _d$ of the population of differences is as shown.
1. $90\%$ confidence, to within $20$ units, $\sigma _d=75.5$
2. $95\%$ confidence, to within $11$ units, $\sigma _d=31.4$
3. $99\%$ confidence, to within $1.8$ units, $\sigma _d=4$

5. Estimate the minimum equal sample sizes $n_1=n_2$ necessary in order to estimate $p_1-p_2$ as specified.
1. $80\%$ confidence, to within $0.05$ (five percentage points)
   1. when no prior knowledge of $p_1$ or $p_2$ is available
   2. when prior studies indicate that $p_1\approx 0.20$ and $p_2\approx 0.65$
2. $90\%$ confidence, to within $0.02$ (two percentage points)
   1. when no prior knowledge of $p_1$ or $p_2$ is available
   2. when prior studies indicate that $p_1\approx 0.75$ and $p_2\approx 0.63$
3. $95\%$ confidence, to within $0.10$ (ten percentage points)
   1. when no prior knowledge of $p_1$ or $p_2$ is available
   2. when prior studies indicate that $p_1\approx 0.11$ and $p_2\approx 0.37$

6. Estimate the minimum equal sample sizes $n_1=n_2$ necessary in order to estimate $p_1-p_2$ as specified.
1. $80\%$ confidence, to within $0.02$ (two percentage points)
   1. when no prior knowledge of $p_1$ or $p_2$ is available
   2. when prior studies indicate that $p_1\approx 0.78$ and $p_2\approx 0.65$
2. $90\%$ confidence, to within $0.05$ (five percentage points)
   1. when no prior knowledge of $p_1$ or $p_2$ is available
   2. when prior studies indicate that $p_1\approx 0.12$ and $p_2\approx 0.24$
3. $95\%$ confidence, to within $0.10$ (ten percentage points)
   1. when no prior knowledge of $p_1$ or $p_2$ is available
   2. when prior studies indicate that $p_1\approx 0.14$ and $p_2\approx 0.21$

Applications

1. An educational researcher wishes to estimate the difference in average scores of elementary school children on two versions of a $100$-point standardized test, at $99\%$ confidence and to within two points. Estimate the minimum equal sample sizes necessary if it is known that the standard deviation of scores on different versions of such tests is $4.9$.

2. A university administrator wishes to estimate the difference in mean grade point averages among all men affiliated with fraternities and all unaffiliated men, with $95\%$ confidence and to within $0.15$. It is known from prior studies that the standard deviations of grade point averages in the two groups have common value $0.4$. Estimate the minimum equal sample sizes necessary to meet these criteria.

3. An automotive tire manufacturer wishes to estimate the difference in mean wear of tires manufactured with an experimental material and ordinary production tires, with $90\%$ confidence and to within $0.5$ mm. To eliminate extraneous factors arising from different driving conditions the tires will be tested in pairs on the same vehicles. It is known from prior studies that the standard deviation of the differences of wear of tires constructed with the two kinds of materials is $1.75$ mm. Estimate the minimum number of pairs in the sample necessary to meet these criteria.

4.
To assess the relative happiness of men and women in their marriages, a marriage counselor plans to administer a test measuring happiness in marriage to $n$ randomly selected married couples, record their test scores, find the differences, and then draw inferences on the possible difference. Let $\mu _1$ and $\mu _2$ be the true average levels of happiness in marriage for men and women respectively as measured by this test. Suppose it is desired to find a $90\%$ confidence interval for estimating $\mu _d=\mu _1-\mu _2$ to within two test points. Suppose further that, from prior studies, it is known that the standard deviation of the differences in test scores is $\sigma _d\approx 10$. What is the minimum number of married couples that must be included in this study?

5. A journalist plans to interview an equal number of members of two political parties to compare the proportions in each party who favor a proposal to allow citizens with a proper license to carry a concealed handgun in public parks. Let $p_1$ and $p_2$ be the true proportions of members of the two parties who are in favor of the proposal. Suppose it is desired to find a $95\%$ confidence interval for estimating $p_1-p_2$ to within $0.05$. Estimate the minimum equal number of members of each party that must be sampled to meet these criteria.

6. A member of the state board of education wants to compare the proportions of National Board Certified (NBC) teachers in private high schools and in public high schools in the state. His study plan calls for an equal number of private school teachers and public school teachers to be included in the study. Let $p_1$ and $p_2$ be these proportions. Suppose it is desired to find a $99\%$ confidence interval that estimates $p_1-p_2$ to within $0.05$.
1. Supposing that both proportions are known, from a prior study, to be approximately $0.15$, compute the minimum common sample size needed.
2. Compute the minimum common sample size needed on the supposition that nothing is known about the values of $p_1$ and $p_2$.

Answers

1. $n_1=n_2=45$
2. $n_1=n_2=56$
3. $n_1=n_2=234$

1. $n_1=n_2=33$
2. $n_1=n_2=35$
3. $n_1=n_2=13$

1. $n_1=n_2=329$
2. $n_1=n_2=255$

1. $n_1=n_2=3383$
2. $n_1=n_2=2846$

1. $n_1=n_2=193$
2. $n_1=n_2=128$

1. $n_1=n_2\approx 80$
2. $n_1=n_2\approx 34$
3. $n_1=n_2\approx 769$
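Although the text computes these answers with a hand calculator or the TI-83 steps shown earlier, they can also be checked in software. The following Python sketch is our own illustration, not part of the text; it assumes the SciPy library is installed, and it verifies a few of the answers above up to rounding.

```python
import math
from scipy import stats

# Pooled two-sample t statistic from summary data (Q9.2.17, engineering vs. business).
t, p = stats.ttest_ind_from_stats(68535, 1627, 15, 63230, 2033, 11, equal_var=True)
print(round(t, 3))  # about 7.394 with df = 15 + 11 - 2 = 24; the key's 7.395 is a rounding difference

# Paired t statistic (Section 9.3, Applications exercise 1: the maze-running mice).
first = [129, 89, 136, 163, 118]
second = [113, 97, 139, 85, 75]
t, p = stats.ttest_rel(first, second)
print(round(t, 3))  # 1.580 with df = 4, as in the answer key

# Two-proportion z statistic with a pooled proportion (Section 9.4, Applications
# exercise 1: the concealed-carry survey).
x1, n1, x2, n2 = 90, 150, 140, 200
p_pool = (x1 + x2) / (n1 + n2)
z = (x1 / n1 - x2 / n2) / math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
print(round(z, 3))  # about -1.95; the answer key's -1.943 differs only by rounding

# Minimum common sample size for estimating mu_1 - mu_2 (Section 9.5, Basic 1a):
# n = z_{alpha/2}^2 (sigma_1^2 + sigma_2^2) / E^2, rounded up.
z_crit = stats.norm.ppf(0.95)  # 90% confidence, so alpha/2 = 0.05
print(math.ceil(z_crit ** 2 * (10 ** 2 + 7 ** 2) / 3 ** 2))  # 45
```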
Our interest in this chapter is in situations in which we can associate to each element of a population or sample two measurements \(x\) and \(y\), particularly in the case that it is of interest to use the value of \(x\) to predict the value of \(y\). For example, the population could be the air in automobile garages, \(x\) could be the electrical current produced by an electrochemical reaction taking place in a carbon monoxide meter, and \(y\) the concentration of carbon monoxide in the air. In this chapter we will learn statistical methods for analyzing the relationship between variables \(x\) and \(y\) in this context.

• 10.1: Linear Relationships Between Variables In this chapter we will analyze situations in which variables x and y exhibit a linear relationship with some randomness. The level of randomness will vary from situation to situation.
• 10.2: The Linear Correlation Coefficient The linear correlation coefficient measures the strength and direction of the linear relationship between two variables x and y. The sign of the linear correlation coefficient indicates the direction of the linear relationship between x and y.
• 10.3: Modelling Linear Relationships with Randomness Present For any statistical procedures, given in this book or elsewhere, the associated formulas are valid only under specific assumptions. The set of assumptions in simple linear regression is a mathematical description of the relationship between x and y. Such a set of assumptions is known as a model. Statistical procedures are valid only when certain assumptions are valid.
• 10.4: The Least Squares Regression Line How well a straight line fits a data set is measured by the sum of the squared errors. The least squares regression line is the line that best fits the data. Its slope and y-intercept are computed from the data using formulas. The slope of the least squares regression line estimates the size and direction of the mean change in the dependent variable y when the independent variable x is increased by one unit.
• 10.5: Statistical Inferences About β₁ The parameter β₁, the slope of the population regression line, is of primary importance in regression analysis because it gives the true rate of change in the mean E(y) in response to a unit increase in the predictor variable x.
• 10.6: The Coefficient of Determination The coefficient of determination estimates the proportion of the variability in the variable y that is explained by the linear relationship between y and the variable x. There are several formulas for computing it; the choice of which one to use can be based on which quantities have already been computed so far.
• 10.7: Estimation and Prediction In this section we learn how to use the least squares regression line to estimate the mean value of y, and to predict a single value of y, at a given value of x.
• 10.8: A Complete Example In this section we will go through a complete example of the use of correlation and regression analysis of data from start to finish, touching on all the topics of this chapter in sequence.
• 10.9: Formula List
• 10.E: Correlation and Regression (Exercises)

10: Correlation and Regression

Learning Objectives

To learn what it means for two variables to exhibit a relationship that is close to linear, but which contains an element of randomness

The following table gives examples of the kinds of pairs of variables which could be of interest from a statistical point of view.

$x$ (Predictor or independent variable) | $y$ (Response or dependent variable)
Temperature in degrees Celsius | Temperature in degrees Fahrenheit
Area of a house (sq. ft.) | Value of the house
Age of a particular make and model car | Resale value of the car
Amount spent by a business on advertising in a year | Revenue received that year
Height of a $25$-year-old man | Weight of the man

The first line in the table is different from all the rest because in that case and no other the relationship between the variables is deterministic: once the value of $x$ is known the value of $y$ is completely determined. In fact there is a formula for $y$ in terms of $x$:

$y=\frac{9}{5}x+32 \nonumber$

Choosing several values for $x$ and computing the corresponding value for $y$ for each one using the formula gives the table

$\begin{array}{c|ccccc} x & -40 & -15 & 0 & 20 & 50 \\ \hline y & -40 & 5 & 32 & 68 & 122 \end{array} \nonumber$

We can plot these data by choosing a pair of perpendicular lines in the plane, called the coordinate axes, as shown in Figure $1$. Then to each pair of numbers in the table we associate a unique point in the plane, the point that lies $x$ units to the right of the vertical axis (to the left if $x<0$) and $y$ units above the horizontal axis (below if $y<0$). The relationship between $x$ and $y$ is called a linear relationship because the points so plotted all lie on a single straight line. The number $\frac{9}{5}$ in the equation $y=\frac{9}{5}x+32$ is the slope of the line, and measures its steepness. It describes how $y$ changes in response to a change in $x$: if $x$ increases by $1$ unit then $y$ increases (since $\frac{9}{5}$ is positive) by $\frac{9}{5}$ of a unit. If the slope had been negative then $y$ would have decreased in response to an increase in $x$. The number $32$ in the formula $y=\frac{9}{5}x+32$ is the $y$-intercept of the line; it identifies where the line crosses the $y$-axis. You may recall from an earlier course that every non-vertical line in the plane is described by an equation of the form $y=mx+b$, where $m$ is the slope of the line and $b$ is its $y$-intercept.

The relationship between $x$ and $y$ in the temperature example is deterministic because once the value of $x$ is known, the value of $y$ is completely determined. In contrast, all the other relationships listed in the table above have an element of randomness in them. Consider the relationship described in the last line of the table, the height $x$ of a man aged $25$ and his weight $y$. If we were to randomly select several $25$-year-old men and measure the height and weight of each one, we might obtain a collection of $(x,y)$ pairs something like this:

$(68,151)\; \; (72,163)\; \; (69,146)\; \; (72,180)\; \; (70,157)\; \; (73,170)\; \; (70,164)\; \; (73,175)\; \; (71,171)\; \; (74,178)\; \; (72,160)\; \; (75,188) \nonumber$

A plot of these data is shown in Figure $2$. Such a plot is called a scatter diagram or scatter plot. Looking at the plot it is evident that there exists a linear relationship between height $x$ and weight $y$, but not a perfect one. The points appear to be following a line, but not exactly. There is an element of randomness present.
In this chapter we will analyze situations in which variables $x$ and $y$ exhibit such a linear relationship with randomness. The level of randomness will vary from situation to situation. In the introductory example connecting an electric current and the level of carbon monoxide in air, the relationship is almost perfect. In other situations, such as the heights and weights of individuals, the connection between the two variables involves a high degree of randomness. In the next section we will see how to quantify the strength of the linear relationship between two variables.

Key Takeaway

• Two variables $x$ and $y$ have a deterministic linear relationship if points plotted from $(x,y)$ pairs lie exactly along a single straight line.
• In practice it is common for two variables to exhibit a relationship that is close to linear but which contains an element, possibly large, of randomness.
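To see the deterministic case in code, the short Python sketch below (our illustration; the text itself uses only a calculator) regenerates the Celsius-to-Fahrenheit table from the formula $y=\frac{9}{5}x+32$.

```python
def fahrenheit(celsius):
    # Deterministic linear relationship: slope 9/5, y-intercept 32.
    return 9 / 5 * celsius + 32

for c in [-40, -15, 0, 20, 50]:
    print(c, fahrenheit(c))  # reproduces the table: -40, 5, 32, 68, 122
```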
Learning Objectives

To learn what the linear correlation coefficient is, how to compute it, and what it tells us about the relationship between two variables $x$ and $y$

Figure $1$ illustrates linear relationships between two variables $x$ and $y$ of varying strengths. It is visually apparent that in the situation in panel (a), $x$ could serve as a useful predictor of $y$; it would be less useful in the situation illustrated in panel (b); and in the situation of panel (c) the linear relationship is so weak as to be practically nonexistent. The linear correlation coefficient is a number computed directly from the data that measures the strength of the linear relationship between the two variables $x$ and $y$.

Definition: linear correlation coefficient

The linear correlation coefficient for a collection of $n$ pairs $(x,y)$ of numbers in a sample is the number $r$ given by the formula

$r= \dfrac{SS_{xy}}{\sqrt{SS_{xx}SS_{yy}}} \nonumber$

where

$SS_{xx}=\sum x^2-\frac{1}{n}\left ( \sum x \right )^2,\; \; SS_{xy}=\sum xy-\frac{1}{n}\left ( \sum x \right )\left ( \sum y \right ),\; \; SS_{yy}=\sum y^2-\frac{1}{n}\left ( \sum y \right )^2 \nonumber$

The linear correlation coefficient has the following properties, illustrated in Figure $2$:

1. The value of $r$ lies between $−1$ and $1$, inclusive.
2. The sign of $r$ indicates the direction of the linear relationship between $x$ and $y$: if $r$ is positive then $y$ tends to increase as $x$ increases, and if $r$ is negative then $y$ tends to decrease as $x$ increases.
3. The size of $|r|$ indicates the strength of the linear relationship between $x$ and $y$:
   1. If $|r|$ is near $1$ (that is, if $r$ is near either $1$ or $−1$), then the linear relationship between $x$ and $y$ is strong.
   2. If $|r|$ is near $0$ (that is, if $r$ is near $0$ and of either sign), then the linear relationship between $x$ and $y$ is weak.

Example $1$

Compute the linear correlation coefficient for the height and weight pairs plotted in Figure $2$.

Solution:

Even for small data sets like this one computations are too long to do completely by hand. In actual practice the data are entered into a calculator or computer and a statistics program is used. In order to clarify the meaning of the formulas we will display the data and related quantities in tabular form. For each pair $(x,y)$ we compute $x^2$, $xy$, and $y^2$; the last row contains the column sums.

$x$ $y$ $x^2$ $xy$ $y^2$
68 151 4624 10268 22801
69 146 4761 10074 21316
70 157 4900 10990 24649
70 164 4900 11480 26896
71 171 5041 12141 29241
72 160 5184 11520 25600
72 163 5184 11736 26569
72 180 5184 12960 32400
73 170 5329 12410 28900
73 175 5329 12775 30625
74 178 5476 13172 31684
75 188 5625 14100 35344
$\sum$ 859 2003 61537 143626 336025

Then

$SS_{xx}=61537-\frac{1}{12}(859)^2=46.916,\; \; SS_{xy}=143626-\frac{1}{12}(859)(2003)=244.583,\; \; SS_{yy}=336025-\frac{1}{12}(2003)^2=1690.916 \nonumber$

so that

$r= \dfrac{SS_{xy}}{\sqrt{SS_{xx}SS_{yy}}}=\dfrac{244.583}{\sqrt{(46.916)(1690.916)}}=0.868 \nonumber$

The number quantifies what is visually apparent from Figure $2$: weight tends to increase linearly with height ($r$ is positive) and, although the relationship is not perfect, it is reasonably strong ($r$ is near $1$).

Key Takeaway

• The linear correlation coefficient measures the strength and direction of the linear relationship between two variables $x$ and $y$.
• The sign of the linear correlation coefficient indicates the direction of the linear relationship between $x$ and $y$.
• When $r$ is near $1$ or $−1$ the linear relationship is strong; when it is near $0$ the linear relationship is weak.

10.03: Modelling Linear Relationships with Randomness Present

Learning Objectives

• To learn the framework in which the statistical analysis of the linear relationship between two variables $x$ and $y$ will be done

In this chapter we are dealing with a population for which we can associate to each element two measurements, $x$ and $y$. We are interested in situations in which the value of $x$ can be used to draw conclusions about the value of $y$, such as predicting the resale value $y$ of a residential house based on its size $x$.
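As a brief aside before developing the model: the correlation computation in Example 1 of the preceding section is easy to verify in software. A minimal Python sketch, assuming the NumPy library (our addition, not the text's):

```python
import numpy as np

heights = [68, 72, 69, 72, 70, 73, 70, 73, 71, 74, 72, 75]
weights = [151, 163, 146, 180, 157, 170, 164, 175, 171, 178, 160, 188]

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal entry is r.
r = np.corrcoef(heights, weights)[0, 1]
print(round(r, 3))  # 0.868, matching the hand computation
```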
Since the relationship between $x$ and $y$ is not deterministic, statistical procedures must be applied. For any statistical procedures, given in this book or elsewhere, the associated formulas are valid only under specific assumptions. The set of assumptions in simple linear regression is a mathematical description of the relationship between $x$ and $y$. Such a set of assumptions is known as a model.

For each fixed value of $x$, a sub-population of the full population is determined, such as the collection of all houses with $2,100$ square feet of living space. For each element of that sub-population there is a measurement $y$, such as the value of any $2,100$-square-foot house. Let $E(y)$ denote the mean of all the $y$-values for each particular value of $x$. $E(y)$ can change from $x$-value to $x$-value, such as the mean value of all $2,100$-square-foot houses, the (different) mean value for all $2,500$-square-foot houses, and so on.

Our first assumption is that the relationship between $x$ and the mean of the $y$-values in the sub-population determined by $x$ is linear. This means that there exist numbers $\beta_1$ and $\beta_0$ such that

$E(y) = \beta_1 x+\beta_0 \nonumber$

This linear relationship is the reason for the word “linear” in “simple linear regression” below. (The word “simple” means that $y$ depends on only one other variable and not two or more.)

Our next assumption is that for each value of $x$ the $y$-values scatter about the mean $E(y)$ according to a normal distribution centered at $E(y)$ and with a standard deviation $σ$ that is the same for every value of $x$. This is the same as saying that there exists a normally distributed random variable $ε$ with mean $0$ and standard deviation $σ$ so that the relationship between $x$ and $y$ in the whole population is

$y = \beta_1 x+\beta_0 + \epsilon \nonumber$

Our last assumption is that the random deviations associated with different observations are independent.

In summary, the model is:

Simple Linear Regression Model

For each point $(x,y)$ in the data set the $y$-value is an independent observation of

$y=β_1x+β_0+ε \nonumber$

where $β_1$ and $β_0$ are fixed parameters and $ε$ is a normally distributed random variable with mean $0$ and an unknown standard deviation $σ$.

The line with equation

$y=β_1x + β_0 \nonumber$

is called the population regression line.

It is conceptually important to view the model as a sum of two parts:

$y = \underbrace{ \beta_1 x+\beta_0}_{\text{Deterministic}} + \underbrace{\epsilon}_{\text{Random}} \nonumber$

• Deterministic Part. The first part, $\beta_1 x+\beta_0$, is the equation that describes the trend in $y$ as $x$ increases. The line that we seem to see when we look at the scatter diagram is an approximation of the line $y = \beta_1 x+\beta_0$. There is nothing random in this part, and therefore it is called the deterministic part of the model.
• Random Part. The second part, $ε$, is a random variable, often called the error term or the noise. This part explains why the actual observed values of $y$ are not exactly on but fluctuate near a line. Information about this term is important since only when one knows how much noise there is in the data can one know how trustworthy the detected trend is.

There are procedures for checking the validity of the three assumptions, but for us it will be sufficient to visually verify the linear trend in the data. If the data set is large then the points in the scatter diagram will form a band about an apparent straight line.
The normality of $ε$ with a constant standard deviation corresponds graphically to the band being of roughly constant width, and with most points concentrated near the middle of the band. Fortunately, the three assumptions do not need to hold exactly in order for the procedures and analysis developed in this chapter to be useful. Key Takeaway • Statistical procedures are valid only when certain assumptions are valid. The assumptions underlying the analyses done in this chapter are graphically summarized in Figure $1$.
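The model is easy to explore by simulation. The sketch below is a toy illustration of ours, not from the text, and the parameter values are arbitrary; it generates observations $y=\beta_1 x+\beta_0+\varepsilon$ with independent normal errors, producing exactly the kind of banded scatter described above.

```python
import numpy as np

rng = np.random.default_rng(1)
beta1, beta0, sigma = 2.0, 5.0, 3.0  # arbitrary illustrative parameters

x = rng.uniform(0, 10, size=200)
eps = rng.normal(0, sigma, size=200)  # the random part: mean 0, standard deviation sigma
y = beta1 * x + beta0 + eps           # deterministic part plus noise

# Points near any fixed x scatter about the population regression line
# E(y) = beta1*x + beta0; for example, near x = 5 the mean of y should be close to 15.
near_5 = y[(4.5 < x) & (x < 5.5)]
print(round(near_5.mean(), 2))  # approximately 15, up to sampling error
```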
Learning Objectives

• To learn how to measure how well a straight line fits a collection of data.
• To learn how to construct the least squares regression line, the straight line that best fits a collection of data.
• To learn the meaning of the slope of the least squares regression line.
• To learn how to use the least squares regression line to estimate the response variable $y$ in terms of the predictor variable $x$.

Goodness of Fit of a Straight Line to Data

Once the scatter diagram of the data has been drawn and the model assumptions described in the previous sections at least visually verified (and perhaps the correlation coefficient $r$ computed to quantitatively verify the linear trend), the next step in the analysis is to find the straight line that best fits the data. We will explain how to measure how well a straight line fits a collection of points by examining how well the line $y=\frac{1}{2}x-1$ fits the data set

$\begin{array}{c|ccccc} x & 2 & 2 & 6 & 8 & 10 \\ \hline y & 0 & 1 & 2 & 3 & 3 \end{array} \nonumber$

(which will be used as a running example for the next three sections). We will write the equation of this line as $\hat{y}=\frac{1}{2}x-1$ with an accent on the $y$ to indicate that the $y$-values computed using this equation are not from the data. We will do this with all lines approximating data sets. The line $\hat{y}=\frac{1}{2}x-1$ was selected as one that seems to fit the data reasonably well.

The idea for measuring the goodness of fit of a straight line to data is illustrated in Figure $1$, in which the graph of the line $\hat{y}=\frac{1}{2}x-1$ has been superimposed on the scatter plot for the sample data set. To each point in the data set there is associated an “error,” the positive or negative vertical distance from the point to the line: positive if the point is above the line and negative if it is below the line. The error can be computed as the actual $y$-value of the point minus the $y$-value $\hat{y}$ that is “predicted” by inserting the $x$-value of the data point into the formula for the line:

$\text{error at data point } (x,y)=(\text{true } y)-(\text{predicted } y)=y-\hat{y} \nonumber$

The computation of the error for each of the five points in the data set is shown in Table $1$.

Table $1$: The Errors in Fitting Data with a Straight Line

$x$ $y$ $\hat{y}=\frac{1}{2}x-1$ $y-\hat{y}$ $(y-\hat{y})^2$
2 0 0 0 0
2 1 0 1 1
6 2 2 0 0
8 3 3 0 0
10 3 4 −1 1
$\sum$ - - - 0 2

A first thought for a measure of the goodness of fit of the line to the data would be simply to add the errors at every point, but the example shows that this cannot work well in general. The line does not fit the data perfectly (no line can), yet because of cancellation of positive and negative errors the sum of the errors (the fourth column of numbers) is zero. Instead goodness of fit is measured by the sum of the squares of the errors. Squaring eliminates the minus signs, so no cancellation can occur. For the data and line in Figure $1$ the sum of the squared errors (the last column of numbers) is $2$. This number measures the goodness of fit of the line to the data.

Definition: goodness of fit

The goodness of fit of a line $\hat{y}=mx+b$ to a set of $n$ pairs $(x,y)$ of numbers in a sample is the sum of the squared errors

$\sum (y−\hat{y})^2 \nonumber$

($n$ terms in the sum, one for each data pair).
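The arithmetic in Table 1 is easy to automate. Here is a minimal Python sketch (ours, not part of the text) that recomputes the sum of the squared errors for the line $\hat{y}=\frac{1}{2}x-1$:

```python
x = [2, 2, 6, 8, 10]
y = [0, 1, 2, 3, 3]

def sse(m, b):
    # Sum of the squared errors of the line y-hat = m*x + b over the data set.
    return sum((yi - (m * xi + b)) ** 2 for xi, yi in zip(x, y))

print(sse(0.5, -1))  # 2.0, the total in the last column of Table 1
```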
The Least Squares Regression Line

Given any collection of pairs of numbers (except when all the $x$-values are the same) and the corresponding scatter diagram, there always exists exactly one straight line that fits the data better than any other, in the sense of minimizing the sum of the squared errors. It is called the least squares regression line. Moreover there are formulas for its slope and $y$-intercept.

Definition: least squares regression line

Given a collection of pairs $(x,y)$ of numbers (in which not all the $x$-values are the same), there is a line $\hat{y}=\hat{β}_1x+\hat{β}_0$ that best fits the data in the sense of minimizing the sum of the squared errors. It is called the least squares regression line. Its slope $\hat{β}_1$ and $y$-intercept $\hat{β}_0$ are computed using the formulas

$\hat{β}_1=\dfrac{SS_{xy}}{SS_{xx}} \nonumber$

and

$\hat{β}_0=\bar{y} - \hat{β}_1 \bar{x} \nonumber$

where

$SS_{xx}=\sum x^2-\frac{1}{n}\left ( \sum x \right )^2 \nonumber$

and

$SS_{xy}=\sum xy-\frac{1}{n}\left ( \sum x \right )\left ( \sum y \right ) \nonumber$

$\bar{x}$ is the mean of all the $x$-values, $\bar{y}$ is the mean of all the $y$-values, and $n$ is the number of pairs in the data set.

The equation

$\hat{y}=\hat{β}_1x+\hat{β}_0 \nonumber$

specifying the least squares regression line is called the least squares regression equation.

Remember from Section 10.3 that the line with the equation $y=\beta _1x+\beta _0$ is called the population regression line. The numbers $\hat{\beta _1}$ and $\hat{\beta _0}$ are statistics that estimate the population parameters $\beta _1$ and $\beta _0$.

We will compute the least squares regression line for the five-point data set, then for a more practical example that will be another running example for the introduction of new concepts in this and the next three sections.

Example $2$

Find the least squares regression line for the five-point data set

$\begin{array}{c|ccccc} x & 2 & 2 & 6 & 8 & 10 \\ \hline y & 0 & 1 & 2 & 3 & 3 \end{array} \nonumber$

and verify that it fits the data better than the line $\hat{y}=\frac{1}{2}x-1$ considered in Section 10.4.1 above.

Solution

In actual practice computation of the regression line is done using a statistical computation package. In order to clarify the meaning of the formulas we display the computations in tabular form.

$x$ $y$ $x^2$ $xy$
2 0 4 0
2 1 4 2
6 2 36 12
8 3 64 24
10 3 100 30
$\sum$ 28 9 208 68

In the last line of the table we have the sum of the numbers in each column. Using them we compute:

$SS_{xx}=\sum x^2-\frac{1}{n}\left ( \sum x \right )^2=208-\frac{1}{5}(28)^2=51.2 \nonumber$

$SS_{xy}=\sum xy-\frac{1}{n}\left ( \sum x \right )\left ( \sum y \right )=68-\frac{1}{5}(28)(9)=17.6 \nonumber$

$\bar{x}=\frac{\sum x}{n}=\frac{28}{5}=5.6\; \; \; \bar{y}=\frac{\sum y}{n}=\frac{9}{5}=1.8 \nonumber$

so that

$\hat{β}_1=\dfrac{SS_{xy}}{SS_{xx}}=\dfrac{17.6}{51.2}=0.34375 \nonumber$

and

$\hat{β}_0=\bar{y}−\hat{β}_1\bar{x}=1.8−(0.34375)(5.6)=−0.125 \nonumber$

The least squares regression line for these data is

$\hat{y}=0.34375 x−0.125 \nonumber$

The computations for measuring how well it fits the sample data are given in Table $2$. The sum of the squared errors is the sum of the numbers in the last column, which is $0.75$. It is less than $2$, the sum of the squared errors for the fit of the line $\hat{y}=\frac{1}{2}x-1$ to this data set.
Table $2$: The Errors in Fitting Data with the Least Squares Regression Line

$x$ $y$ $\hat{y}=0.34375x-0.125$ $y-\hat{y}$ $(y-\hat{y})^2$
2 0 0.5625 −0.5625 0.31640625
2 1 0.5625 0.4375 0.19140625
6 2 1.9375 0.0625 0.00390625
8 3 2.6250 0.3750 0.14062500
10 3 3.3125 −0.3125 0.09765625

Example $3$

Table $3$ shows the age in years and the retail value in thousands of dollars of a random sample of ten automobiles of the same make and model.

1. Construct the scatter diagram.
2. Compute the linear correlation coefficient $r$. Interpret its value in the context of the problem.
3. Compute the least squares regression line. Plot it on the scatter diagram.
4. Interpret the meaning of the slope of the least squares regression line in the context of the problem.
5. Suppose a four-year-old automobile of this make and model is selected at random. Use the regression equation to predict its retail value.
6. Suppose a $20$-year-old automobile of this make and model is selected at random. Use the regression equation to predict its retail value. Interpret the result.
7. Comment on the validity of using the regression equation to predict the price of a brand new automobile of this make and model.

Table $3$: Data on Age and Value of Used Automobiles of a Specific Make and Model

$x$ 2 3 3 3 4 4 5 5 5 6
$y$ 28.7 24.8 26.0 30.5 23.8 24.6 23.8 20.4 21.6 22.1

Solution

1. The scatter diagram is shown in Figure $2$.

2. We must first compute $SS_{xx},\; SS_{xy},\; SS_{yy}$, which means computing $\sum x,\; \sum y,\; \sum x^2,\; \sum y^2\; \text{and}\; \sum xy$. Using a computing device we obtain

$\sum x=40\; \; \sum y=246.3\; \; \sum x^2=174\; \; \sum y^2=6154.15\; \; \sum xy=956.5 \nonumber$

Thus

$SS_{xx}=\sum x^2-\frac{1}{n}\left ( \sum x \right )^2=174-\frac{1}{10}(40)^2=14 \nonumber$

$SS_{xy}=\sum xy-\frac{1}{n}\left ( \sum x \right )\left ( \sum y \right )=956.5-\frac{1}{10}(40)(246.3)=-28.7 \nonumber$

$SS_{yy}=\sum y^2-\frac{1}{n}\left ( \sum y \right )^2=6154.15-\frac{1}{10}(246.3)^2=87.781 \nonumber$

so that

$r=\frac{SS_{xy}}{\sqrt{SS_{xx}\cdot SS_{yy}}}=\frac{-28.7}{\sqrt{(14)(87.781)}}=-0.819 \nonumber$

The age and value of this make and model automobile are moderately strongly negatively correlated. As the age increases, the value of the automobile tends to decrease.

3. Using the values of $\sum x$ and $\sum y$ computed in part 2,

$\bar{x}=\frac{\sum x}{n}=\frac{40}{10}=4\; \; \; \bar{y}=\frac{\sum y}{n}=\frac{246.3}{10}=24.63 \nonumber$

Thus using the values of $SS_{xx}$ and $SS_{xy}$ from part 2,

$\hat{\beta _1}=\frac{SS_{xy}}{SS_{xx}}=\frac{-28.7}{14}=-2.05 \nonumber$

and

$\hat{\beta _0}=\bar{y}-\hat{\beta _1}\bar{x}=24.63-(-2.05)(4)=32.83 \nonumber$

The equation $\hat{y}=\hat{\beta _1}x+\hat{\beta _0}$ of the least squares regression line for these sample data is

$\hat{y}=−2.05x+32.83 \nonumber$

Figure $3$ shows the scatter diagram with the graph of the least squares regression line superimposed.

4. The slope $-2.05$ means that for each unit increase in $x$ (additional year of age) the average value of this make and model vehicle decreases by about $2.05$ units (about $\$2,050$).

5. Since we know nothing about the automobile other than its age, we assume that it is of about average value and use the average value of all four-year-old vehicles of this make and model as our estimate. The average value is simply the value of $\hat{y}$ obtained when the number $4$ is inserted for $x$ in the least squares regression equation:

$\hat{y}=−2.05(4)+32.83=24.63 \nonumber$

which corresponds to $\$24,630$.

6.
Now we insert $x=20$ into the least squares regression equation, to obtain

$\hat{y}=−2.05(20)+32.83=−8.17 \nonumber$

which corresponds to $-\$8,170$. Something is wrong here, since a negative value makes no sense. The error arose from applying the regression equation to a value of $x$ not in the range of $x$-values in the original data, from two to six years. Applying the regression equation $\hat{y}=\hat{\beta _1}x+\hat{\beta _0}$ to a value of $x$ outside the range of $x$-values in the data set is called extrapolation. It is an invalid use of the regression equation and should be avoided.

7. The price of a brand new vehicle of this make and model is the value of the automobile at age $0$. If the value $x=0$ is inserted into the regression equation the result is always $\hat{\beta _0}$, the $y$-intercept, in this case $32.83$, which corresponds to $\$32,830$. But this is a case of extrapolation, just as part 6 was, hence this result is invalid, although not obviously so. In the context of the problem, since automobiles tend to lose value much more quickly immediately after they are purchased than they do after they are several years old, the number $\$32,830$ is probably an underestimate of the price of a new automobile of this make and model.

For emphasis we highlight the points raised by parts 6 and 7 of the example.

Definition: extrapolation

The process of using the least squares regression equation to estimate the value of $y$ at a value of $x$ that does not lie in the range of the $x$-values in the data set that was used to form the regression line is called extrapolation. It is an invalid use of the regression equation that can lead to errors, hence should be avoided.

The Sum of the Squared Errors SSE

In general, in order to measure the goodness of fit of a line to a set of data, we must compute the predicted $y$-value $\hat{y}$ at every point in the data set, compute each error, square it, and then add up all the squares. In the case of the least squares regression line, however, the line that best fits the data, the sum of the squared errors can be computed directly from the data using the following formula.

The sum of the squared errors for the least squares regression line is denoted by $SSE$. It can be computed using the formula

$SSE=SS_{yy}−\hat{β}_1SS_{xy} \nonumber$

Example $4$

Find the sum of the squared errors $SSE$ for the least squares regression line for the five-point data set

$\begin{array}{c|ccccc} x & 2 & 2 & 6 & 8 & 10 \\ \hline y & 0 & 1 & 2 & 3 & 3 \end{array} \nonumber$

Do so in two ways:

1. using the definition $\sum (y-\hat{y})^2$;
2. using the formula $SSE=SS_{yy}-\hat{\beta }_1SS_{xy}$.

Solution

1. The least squares regression line was computed in "Example $2$" and is $\hat{y}=0.34375x-0.125$. $SSE$ was found at the end of that example using the definition $\sum (y-\hat{y})^2$. The computations were tabulated in Table $2$. $SSE$ is the sum of the numbers in the last column, which is $0.75$.

2. The numbers $SS_{xy}$ and $\hat{\beta _1}$ were already computed in "Example $2$" in the process of finding the least squares regression line. So was the number $\sum y=9$. We must compute $SS_{yy}$.
To do so it is necessary to first compute $\sum y^2=0+1^2+2^2+3^2+3^2=23 \nonumber$ Then $SS_{yy}=\sum y^2-\frac{1}{n}\left ( \sum y \right )^2=23-\frac{1}{5}(9)^2=6.8 \nonumber$ so that $SSE=SS_{yy}-\hat{\beta _1}SS_{xy}=6.8-(0.34375)(17.6)=0.75 \nonumber$ Example $5$ Find the sum of the squared errors $SSE$ for the least squares regression line for the data set, presented in Table $3$, on age and values of used vehicles in "Example $3$". Solution From "Example $3$" we already know that $SS_{xy}=-28.7,\; \hat{\beta _1}=-2.05,\; \text{and}\; \sum y=246.3 \nonumber$ To compute $SS_{yy}$ we first compute $\sum y^2=28.7^2+24.8^2+26.0^2+30.5^2+23.8^2+24.6^2+23.8^2+20.4^2+21.6^2+22.1^2=6154.15 \nonumber$ Then $SS_{yy}=\sum y^2-\frac{1}{n}\left ( \sum y \right )^2=6154.15-\frac{1}{10}(246.3)^2=87.781 \nonumber$ Therefore $SSE=SS_{yy}-\hat{\beta _1}SS_{xy}=87.781-(-2.05)(-28.7)=28.946 \nonumber$ Key Takeaway • How well a straight line fits a data set is measured by the sum of the squared errors. • The least squares regression line is the line that best fits the data. Its slope and $y$-intercept are computed from the data using formulas. • The slope $\hat{\beta _1}$ of the least squares regression line estimates the size and direction of the mean change in the dependent variable $y$ when the independent variable $x$ is increased by one unit. • The sum of the squared errors $SSE$ of the least squares regression line can be computed using a formula, without having to compute all the individual errors.
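All of the quantities in this section come from the same few sums, so they are easy to script. The following Python sketch (our illustration, using only the formulas stated above) reproduces the slope, intercept, and $SSE$ for both running examples; the printed values agree with Examples 2 through 5 up to floating-point rounding.

```python
def least_squares(x, y):
    # Returns (slope, intercept, SSE) computed from the SSxx, SSxy, SSyy formulas.
    n = len(x)
    ss_xx = sum(v * v for v in x) - sum(x) ** 2 / n
    ss_yy = sum(v * v for v in y) - sum(y) ** 2 / n
    ss_xy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n
    b1 = ss_xy / ss_xx
    b0 = sum(y) / n - b1 * sum(x) / n
    return b1, b0, ss_yy - b1 * ss_xy  # SSE = SSyy - b1 * SSxy

# Five-point running example: slope 0.34375, intercept -0.125, SSE 0.75.
print(least_squares([2, 2, 6, 8, 10], [0, 1, 2, 3, 3]))

# Used-automobile data of Table 3: slope -2.05, intercept 32.83, SSE 28.946.
ages = [2, 3, 3, 3, 4, 4, 5, 5, 5, 6]
values = [28.7, 24.8, 26.0, 30.5, 23.8, 24.6, 23.8, 20.4, 21.6, 22.1]
print(least_squares(ages, values))
```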
Learning Objectives
• To learn how to construct a confidence interval for $β_1$, the slope of the population regression line.
• To learn how to test hypotheses regarding $β_1$.

The parameter $β_1$, the slope of the population regression line, is of primary importance in regression analysis because it gives the true rate of change in the mean $E(y)$ in response to a unit increase in the predictor variable $x$. For every unit increase in $x$ the mean of the response variable $y$ changes by $β_1$ units, increasing if $β_1>0$ and decreasing if $β_1 <0$. We wish to construct confidence intervals for $β_1$ and test hypotheses about it.

Confidence Intervals for $β_1$

The slope $\hat{β}_1$ of the least squares regression line is a point estimate of $β_1$. A confidence interval for $β_1$ is given by the following formula.

Definition: $100(1-\alpha )\%$ Confidence Interval for the Slope $β_1$ of the Population Regression Line
$\hat{β}_1 \pm t_{α/2} \dfrac{S_\varepsilon }{\sqrt{SS_{xx}}} \nonumber$ where $S_\varepsilon =\sqrt{\frac{SSE}{n-2}}$ and the number of degrees of freedom is $df=n-2$. The assumptions listed in Section 10.3 must hold.

Definition: sample standard deviation of errors
The statistic $S_\varepsilon$ is called the sample standard deviation of errors. It estimates the standard deviation $\sigma$ of the errors in the population of $y$-values for each fixed value of $x$ (see Figure 10.3.1).

Example $1$
Construct the $95\%$ confidence interval for the slope $β_1$ of the population regression line based on the five-point sample data set $\begin{array}{c|c c c c c} x & 2 & 2 & 6 & 8 & 10 \\ \hline y &0 &1 &2 &3 &3\\ \end{array} \nonumber$

Solution
The point estimate $\hat{β}_1$ of $β_1$ was computed in Example 10.4.2 as $\hat{β}_1=0.34375$. In the same example $SS_{xx}$ was found to be $SS_{xx}=51.2$. The sum of the squared errors $SSE$ was computed in Example 10.4.4 as $SSE=0.75$. Thus $S_\varepsilon =\sqrt{\frac{SSE}{n-2}}=\sqrt{\frac{0.75}{3}}=0.50 \nonumber$ Confidence level $95\%$ means $\alpha =1-0.95=0.05$ so $\alpha /2=0.025$. From the row labeled $df=3$ in Figure 7.1.6 we obtain $t_{0.025}=3.182$. Therefore $\hat{\beta _1}\pm t_{\alpha /2}\frac{S_\varepsilon }{\sqrt{SS_{xx}}}=0.34375\pm 3.182\left ( \frac{0.50}{\sqrt{51.2}} \right )=0.34375\pm 0.2223 \nonumber$ which gives the interval $(0.1215,0.5661)$. We are $95\%$ confident that the slope $β_1$ of the population regression line is between $0.1215$ and $0.5661$.

Example $2$
Using the sample data in Table 10.4.3 construct a $90\%$ confidence interval for the slope $β_1$ of the population regression line relating age and value of the automobiles of Example 10.4.3. Interpret the result in the context of the problem.

Solution
The point estimate $\hat{β}_1$ of $β_1$ was computed in Example 10.4.3, as was $SS_{xx}$. Their values are $\hat{β}_1=-2.05$ and $SS_{xx}=14$. The sum of the squared errors $SSE$ was computed in Example 10.4.5 as $SSE=28.946$. Thus $S_\varepsilon =\sqrt{\frac{SSE}{n-2}}=\sqrt{\frac{28.946}{8}}=1.902169814 \nonumber$ Confidence level $90\%$ means $\alpha =1-0.90=0.10$ so $\alpha /2=0.05$. From the row labeled $df=8$ in Figure 7.1.6 we obtain $t_{0.05}=1.860$. Therefore $\hat{\beta _1}\pm t_{\alpha /2}\frac{S_\varepsilon }{\sqrt{SS_{xx}}}=-2.05\pm 1.860\left ( \frac{1.902169814}{\sqrt{14}} \right )=-2.05\pm 0.95 \nonumber$ which gives the interval $(-3.00,-1.10)$. We are $90\%$ confident that the slope $β_1$ of the population regression line is between $-3.00$ and $-1.10$.
In the context of the problem this means that for vehicles of this make and model between two and six years old we are $90\%$ confident that for each additional year of age the average value of such a vehicle decreases by between $\$1,100$ and $\$3,000$.

Testing Hypotheses About $β_1$

Hypotheses regarding $β_1$ can be tested using the same five-step procedures, either the critical value approach or the $p$-value approach, that were introduced in Section 8.1 and Section 8.3. The null hypothesis always has the form $H_0: \beta _1=B_0$ where $B_0$ is a number determined from the statement of the problem. The three forms of the alternative hypothesis, with the terminology for each case, are:

Form of $H_a$ Terminology
$H_a: \beta _1<B_0$ Left-tailed
$H_a: \beta _1>B_0$ Right-tailed
$H_a: \beta _1\neq B_0$ Two-tailed

The value zero for $B_0$ is of particular importance since in that case the null hypothesis is $H_0: \beta _1=0$, which corresponds to the situation in which $x$ is not useful for predicting $y$. For if $β_1=0$ then the population regression line is horizontal, so the mean $E(y)$ is the same for every value of $x$ and we are just as well off ignoring $x$ completely and approximating $y$ by its average value. Given two variables $x$ and $y$, the burden of proof is that $x$ is useful for predicting $y$, not that it is not. Thus the phrase “test whether $x$ is useful for prediction of $y$,” or words to that effect, means to perform the test $H_0: \beta _1=0\; \; \text{vs.}\; \; H_a: \beta _1\neq 0 \nonumber$

Standardized Test Statistic for Hypothesis Tests Concerning the Slope $β_1$ of the Population Regression Line
$T=\frac{\hat{\beta _1}-B_0}{S_\varepsilon /\sqrt{SS_{xx}}} \nonumber$ The test statistic has Student’s $t$-distribution with $df=n-2$ degrees of freedom. The assumptions listed in Section 10.3 must hold.

Example $3$
Test, at the $2\%$ level of significance, whether the variable $x$ is useful for predicting $y$ based on the information in the five-point data set $\begin{array}{c|c c c c c} x & 2 & 2 & 6 & 8 & 10 \\ \hline y &0 &1 &2 &3 &3\\ \end{array} \nonumber$

Solution
We will perform the test using the critical value approach.
• Step 1. Since $x$ is useful for prediction of $y$ precisely when the slope $β_1$ of the population regression line is nonzero, the relevant test is $H_0: \beta _1=0\ \text{vs.}\ H_a: \beta _1\neq 0\; \; @\; \; \alpha =0.02 \nonumber$
• Step 2. The test statistic is $T=\frac{\hat{\beta _1}}{S_\varepsilon /\sqrt{SS_{xx}}} \nonumber$ and has Student’s $t$-distribution with $n-2=5-2=3$ degrees of freedom.
• Step 3. From Example 10.4.2, $\hat{\beta }_1=0.34375$ and $SS_{xx}=51.2$. From "Example $1$", $S_\varepsilon =0.50$. The value of the test statistic is therefore $T=\frac{\hat{\beta _1}-B_0}{S_\varepsilon /\sqrt{SS_{xx}}}=\frac{0.34375}{0.50/\sqrt{51.2}}=4.919 \nonumber$
• Step 4. Since the symbol in $H_a$ is “$\neq$” this is a two-tailed test, so there are two critical values $\pm t_{\alpha /2}=\pm t_{0.01}$. Reading from the line in Figure 7.1.6 labeled $df=3$, $t_{0.01}=4.541$. The rejection region is $(-\infty ,-4.541]\cup [4.541,\infty ) \nonumber$.
• Step 5. As shown in Figure $1$ "Rejection Region and Test Statistic" the test statistic falls in the rejection region. The decision is to reject $H_0$. In the context of the problem our conclusion is: The data provide sufficient evidence, at the $2\%$ level of significance, to conclude that the slope of the population regression line is nonzero, so that $x$ is useful as a predictor of $y$.
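For readers who want to check Examples $1$ and $3$ by machine, here is a minimal sketch in Python (assuming NumPy and SciPy are available; stats.t.ppf supplies the critical values that Figure 7.1.6 tabulates, and the variable names are ours):

import numpy as np
from scipy import stats

# Five-point data set used in Examples 1 and 3
x = np.array([2, 2, 6, 8, 10], dtype=float)
y = np.array([0, 1, 2, 3, 3], dtype=float)
n = len(x)

SSxx = np.sum(x**2) - np.sum(x)**2 / n
SSxy = np.sum(x*y) - np.sum(x) * np.sum(y) / n
SSyy = np.sum(y**2) - np.sum(y)**2 / n
beta1 = SSxy / SSxx
SSE = SSyy - beta1 * SSxy
s_eps = np.sqrt(SSE / (n - 2))                # 0.50

# 95% confidence interval for beta_1 (Example 1)
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)  # 3.182 for df = 3
margin = t_crit * s_eps / np.sqrt(SSxx)       # about 0.2223
print(beta1 - margin, beta1 + margin)         # about (0.1215, 0.5661)

# Two-tailed test of H0: beta_1 = 0 at alpha = 0.02 (Example 3)
T = (beta1 - 0) / (s_eps / np.sqrt(SSxx))             # about 4.919
print(abs(T) >= stats.t.ppf(1 - 0.02 / 2, df=n - 2))  # True: reject H0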
Example $4$
A car salesman claims that automobiles between two and six years old of the make and model discussed in Example 10.4.3 lose more than $\$1,100$ in value each year. Test this claim at the $5\%$ level of significance.

Solution
We will perform the test using the critical value approach.
• Step 1. In terms of the variables $x$ and $y$, the salesman’s claim is that if $x$ is increased by $1$ unit (one additional year in age), then $y$ decreases by more than $1.1$ units (more than $\$1,100$). Thus his assertion is that the slope of the population regression line is negative, and that it is more negative than $-1.1$. In symbols, $β_1<-1.1$. Since it contains an inequality, this has to be the alternative hypothesis. The null hypothesis has to be an equality and have the same number on the right hand side, so the relevant test is $H_0: \beta _1=-1.1\ \text{vs.}\ H_a: \beta _1<-1.1\; \; @\; \; \alpha =0.05 \nonumber$
• Step 2. The test statistic is $T=\frac{\hat{\beta _1}-B_0}{S_\varepsilon /\sqrt{SS_{xx}}} \nonumber$ and has Student’s $t$-distribution with $8$ degrees of freedom.
• Step 3. From Example 10.4.3, $\hat{\beta }_1=-2.05$ and $SS_{xx}=14$. From "Example $2$", $S_\varepsilon =1.902169814$. The value of the test statistic is therefore $T=\frac{\hat{\beta _1}-B_0}{S_\varepsilon /\sqrt{SS_{xx}}}=\frac{-2.05-(-1.1)}{1.902169814/\sqrt{14}}=-1.869 \nonumber$
• Step 4. Since the symbol in $H_a$ is “$<$” this is a left-tailed test, so there is a single critical value $-t_{\alpha }=-t_{0.05}$. Reading from the line in Figure 7.1.6 labeled $df=8$, $t_{0.05}=1.860$. The rejection region is $(-\infty ,-1.860] \nonumber$.
• Step 5. As shown in Figure $2$ "Rejection Region and Test Statistic" the test statistic falls in the rejection region. The decision is to reject $H_0$. In the context of the problem our conclusion is: The data provide sufficient evidence, at the $5\%$ level of significance, to conclude that vehicles of this make and model and in this age range lose more than $\$1,100$ per year in value, on average.

Key Takeaway
• The parameter $β_1$, the slope of the population regression line, is of primary interest because it describes the average change in $y$ with respect to a unit increase in $x$.
• The statistic $\hat{β}_1$, the slope of the least squares regression line, is a point estimate of $β_1$. Confidence intervals for $β_1$ can be computed using a formula.
• Hypotheses regarding $β_1$ are tested using the same five-step procedures introduced in Chapter 8.
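As a numerical check of Example $4$, the following short sketch (again Python with SciPy assumed; the summary values are copied from Examples $2$ and $4$) carries out the left-tailed test and also reports a $p$-value, which the critical value approach does not require:

from math import sqrt
from scipy import stats

# Summary quantities for the age/value data, from Examples 2 and 4
beta1, SSxx, s_eps, n = -2.05, 14.0, 1.902169814, 10
B0 = -1.1   # hypothesized slope in H0: beta_1 = -1.1

T = (beta1 - B0) / (s_eps / sqrt(SSxx))      # about -1.869
t_crit = -stats.t.ppf(1 - 0.05, df=n - 2)    # about -1.860
print(T <= t_crit)                           # True: reject H0
print(stats.t.cdf(T, df=n - 2))              # p-value, about 0.049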
Learning Objectives
• To learn what the coefficient of determination is, how to compute it, and what it tells us about the relationship between two variables $x$ and $y$.

If the scatter diagram of a set of $(x,y)$ pairs shows neither an upward nor a downward trend, then the horizontal line $\hat{y} =\overline{y}$ fits it well, as illustrated in Figure $1$. The lack of any upward or downward trend means that when an element of the population is selected at random, knowing the value of the measurement $x$ for that element is not helpful in predicting the value of the measurement $y$.

If the scatter diagram shows a linear trend upward or downward then it is useful to compute the least squares regression line $\hat{y} =\hat{β}_1x+\hat{β}_0 \nonumber$ and use it in predicting $y$. Figure $2$ illustrates this. In each panel we have plotted the height and weight data of Section 10.1, with the average value line $\hat{y} =\overline{y}$ superimposed on the scatter plot in the left panel and the least squares regression line superimposed on it in the right panel. The errors are indicated graphically by the vertical line segments.

The sum of the squared errors computed for the regression line, $SSE$, is smaller than the sum of the squared errors computed for any other line. In particular it is less than the sum of the squared errors computed using the line $\hat{y}=\overline{y}$, which sum is actually the number $SS_{yy}$ that we have seen several times already. A measure of how useful it is to use the regression equation for prediction of $y$ is how much smaller $SSE$ is than $SS_{yy}$. In particular, the proportion of the sum of the squared errors for the line $\hat{y} =\overline{y}$ that is eliminated by going over to the least squares regression line is $\dfrac{SS_{yy}−SSE}{SS_{yy}}=\dfrac{SS_{yy}}{SS_{yy}}−\dfrac{SSE}{SS_{yy}}=1−\dfrac{SSE}{SS_{yy}} \nonumber$

We can think of $SSE/SS_{yy}$ as the proportion of the variability in $y$ that cannot be accounted for by the linear relationship between $x$ and $y$, since it is still there even when $x$ is taken into account in the best way possible (using the least squares regression line; remember that $SSE$ is the smallest the sum of the squared errors can be for any line). Seen in this light, the coefficient of determination, the complementary proportion of the variability in $y$, is the proportion of the variability in all the $y$ measurements that is accounted for by the linear relationship between $x$ and $y$.

In the context of linear regression the coefficient of determination is always the square of the correlation coefficient $r$ discussed in Section 10.2. Thus the coefficient of determination is denoted $r^2$, and we have two additional formulas for computing it.

Definition: coefficient of determination
The coefficient of determination of a collection of $(x,y)$ pairs is the number $r^2$ computed by any of the following three expressions: $r^2=\dfrac{SS_{yy}−SSE}{SS_{yy}}=\dfrac{SS^2_{xy}}{SS_{xx}SS_{yy}}=\hat{β}_1 \dfrac{SS_{xy}}{SS_{yy}} \nonumber$ It measures the proportion of the variability in $y$ that is accounted for by the linear relationship between $x$ and $y$. If the correlation coefficient $r$ is already known then the coefficient of determination can be computed simply by squaring $r$, as the notation indicates, $r^2=(r)^2$.

Example $1$
The value of used vehicles of the make and model discussed in "Example 10.4.2" in Section 10.4 varies widely.
The most expensive automobile in the sample in Table 10.4.3 has value $\$30,500$, which is nearly half again as much as the least expensive one, which is worth $\$20,400$. Find the proportion of the variability in value that is accounted for by the linear relationship between age and value.

Solution
The proportion of the variability in value $y$ that is accounted for by the linear relationship between it and age $x$ is given by the coefficient of determination, $r^2$. Since the correlation coefficient $r$ was already computed in "Example 10.4.2" in Section 10.4 as $r=-0.819$, $r^2=(-0.819)^2=0.671 \nonumber$ About $67\%$ of the variability in the value of this vehicle can be explained by its age.

Example $2$
Use each of the three formulas for the coefficient of determination to compute its value for the example of ages and values of vehicles.

Solution
In "Example 10.4.2" in Section 10.4 we computed the exact values $SS_{xx}=14,\; SS_{xy}=-28.7,\; SS_{yy}=87.781,\; \hat{\beta _1}=-2.05 \nonumber$ In "Example 10.4.5" in Section 10.4 we computed the exact value $SSE=28.946 \nonumber$ Inserting these values into the formulas in the definition, one after the other, gives $r^2=\dfrac{SS_{yy}−SSE}{SS_{yy}}=\dfrac{87.781−28.946}{87.781}=0.6702475479 \nonumber$ $r^2= \dfrac{SS^2_{xy}}{SS_{xx}SS_{yy}}=\dfrac{(−28.7)^2}{(14)(87.781)}=0.6702475479 \nonumber$ $r^2=\hat{β}_1 \dfrac{SS_{xy}}{SS_{yy}}=−2.05\dfrac{−28.7}{87.781}=0.6702475479 \nonumber$ which rounds to $0.670$. The discrepancy between the value here and in the previous example is because a rounded value of $r$ from "Example 10.4.2" was used there. The actual value of $r$ before rounding is $-0.8186864772$, which when squared gives the value for $r^2$ obtained here.

The coefficient of determination $r^2$ can always be computed by squaring the correlation coefficient $r$ if it is known. Any one of the defining formulas can also be used. Typically one would make the choice based on which quantities have already been computed. What should be avoided is trying to compute $r$ by taking the square root of $r^2$, if it is already known, since it is easy to make a sign error this way. To see what can go wrong, suppose $r^2=0.64$. Taking the square root of a positive number with any calculating device will always return a positive result. The square root of $0.64$ is $0.8$. However, the actual value of $r$ might be the negative number $-0.8$.

Key Takeaway
• The coefficient of determination $r^2$ estimates the proportion of the variability in the variable $y$ that is explained by the linear relationship between $y$ and the variable $x$.
• There are several formulas for computing $r^2$. The choice of which one to use can be based on which quantities have already been computed so far.
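That the three formulas agree can be confirmed with a quick computation. Here is a minimal Python sketch (the summary quantities are copied from Section 10.4; the variable names are ours):

import numpy as np

# Summary quantities for the age/value data, from Section 10.4
SSxx, SSxy, SSyy = 14.0, -28.7, 87.781
beta1 = SSxy / SSxx           # -2.05
SSE = SSyy - beta1 * SSxy     # 28.946

print((SSyy - SSE) / SSyy)        # 0.6702...
print(SSxy**2 / (SSxx * SSyy))    # 0.6702...
print(beta1 * SSxy / SSyy)        # 0.6702...

# Computing r directly keeps its sign; taking the square root of r^2 would not
print(SSxy / np.sqrt(SSxx * SSyy))  # -0.8186...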
Learning Objectives
• To learn the distinction between estimation and prediction.
• To learn the distinction between a confidence interval and a prediction interval.
• To learn how to implement formulas for computing confidence intervals and prediction intervals.

Consider the following pairs of problems, in the context of Example 10.4.2, the automobile age and value example.

Problem 1
1. Estimate the average value of all four-year-old automobiles of this make and model.
2. Construct a $95\%$ confidence interval for the average value of all four-year-old automobiles of this make and model.

Problem 2
1. Shylock intends to buy a four-year-old automobile of this make and model next week. Predict the value of the first such automobile that he encounters.
2. Construct a $95\%$ confidence interval for the value of the first such automobile that he encounters.

The method of solution and answer to the first question in each pair, (1a) and (2a), are the same. When we set $x$ equal to $4$ in the least squares regression equation $\hat{y} =−2.05x+32.83 \nonumber$ that was computed in part (c) of Example 10.4.2, the number returned, $\hat{y}=−2.05(4)+32.83=24.63 \nonumber$ which corresponds to a value of $\$24,630$, is an estimate of precisely the number sought in question (1a): the mean $E(y)$ of all $y$ values when $x = 4$. Since nothing is known about the first four-year-old automobile of this make and model that Shylock will encounter, our best guess as to its value is the mean value $E(y)$ of all such automobiles, the number $24.63$ or $\$24,630$, computed in the same way.

The answers to the second part of each question differ. In question (1b) we are trying to estimate a population parameter: the mean of all the $y$-values in the sub-population picked out by the value $x=4$, that is, the average value of all four-year-old automobiles. In question (2b), however, we are not trying to capture a fixed parameter, but the value of the random variable $y$ in one trial of an experiment: examine the first four-year-old car Shylock encounters. In the first case we seek to construct a confidence interval in the same sense that we have done before. In the second case the situation is different, and the interval constructed has a different name, a prediction interval, because we are trying to “predict” the value that a random variable will take.

$100(1−α)\%$ Confidence Interval for the Mean Value of $y$ at $x=x_p$
$\hat{y}_p ± t_{α∕2} s_ε \sqrt{\dfrac{1}{n}+ \dfrac{(x_p−\overline{x})^2}{SS_{xx}}} \nonumber$ where
• $x_p$ is a particular value of $x$ that lies in the range of $x$-values in the sample data set used to construct the least squares regression line;
• $\hat{y}_p$ is the numerical value obtained when the least squares regression equation is evaluated at $x=x_p$; and
• the number of degrees of freedom for $t_{α∕2}$ is $df=n−2$.
The assumptions listed in Section 10.3 must hold.

The formula for the prediction interval is identical except for the presence of the number $1$ underneath the square root sign. This means that the prediction interval is always wider than the confidence interval at the same confidence level and value of $x$. In practice the presence of the number $1$ tends to make it much wider.
$100(1−α)\%$ Prediction Interval for an Individual New Value of $y$ at $x=x_p$
$\hat{y}_p ± t_{α∕2} s_ε \sqrt{1+ \dfrac{1}{n}+ \dfrac{(x_p−\overline{x})^2}{SS_{xx}}} \nonumber$ where
• $x_p$ is a particular value of $x$ that lies in the range of $x$-values in the data set used to construct the least squares regression line;
• $\hat{y}_p$ is the numerical value obtained when the least squares regression equation is evaluated at $x=x_p$; and
• the number of degrees of freedom for $t_{α∕2}$ is $df=n−2$.
The assumptions listed in Section 10.3 must hold.

Example $1$
Using the sample data of "Example 10.4.2" in Section 10.4, recorded in Table 10.4.3, construct a $95\%$ confidence interval for the average value of all three-and-one-half-year-old automobiles of this make and model.

Solution
Solving this problem is merely a matter of finding the values of $\hat{y}_p$, $\alpha$ and $t_{\alpha /2}$, $S_\varepsilon$, $\bar{x}$, and $SS_{xx}$ and inserting them into the confidence interval formula given just above. Most of these quantities are already known. From Example 10.4.2, $SS_{xx}=14$ and $\bar{x}=4$. From Example 10.5.2, $S_\varepsilon =1.902169814$. From the statement of the problem $x_p=3.5$, the value of $x$ of interest. The value of $\hat{y}_p$ is the number given by the regression equation, which by Example 10.4.2 is $\hat{y}=-2.05x+32.83$, when $x=x_p$, that is, when $x=3.5$. Thus here $\hat{y}=-2.05(3.5)+32.83=25.655$. Lastly, confidence level $95\%$ means that $\alpha =1-0.95=0.05$ so $\alpha /2=0.025$. Since the sample size is $n=10$, there are $n-2=8$ degrees of freedom. By Figure 7.1.6, $t_{0.025}=2.306$. Thus \begin{align*} \hat{y_p}\pm t_{\alpha /2}S_\varepsilon \sqrt{\frac{1}{n}+\frac{(x_p-\bar{x})^2}{SS_{xx}}} &= 25.655\pm (2.306)(1.902169814)\sqrt{\frac{1}{10}+\frac{(3.5-4)^2}{14}}\\ &= 25.655\pm 4.386403591\sqrt{0.1178571429}\\ &= 25.655\pm 1.506 \end{align*} \nonumber which gives the interval $(24.149,27.161)$. We are $95\%$ confident that the average value of all three-and-one-half-year-old vehicles of this make and model is between $\$24,149$ and $\$27,161$.

Example $2$
Using the sample data of Example 10.4.2, recorded in Table 10.4.3, construct a $95\%$ prediction interval for the value of a randomly selected three-and-one-half-year-old automobile of this make and model.

Solution
The computations for this example are identical to those of the previous example, except that now there is the extra number $1$ beneath the square root sign. Since we were careful to record the intermediate results of that computation, we have immediately that the $95\%$ prediction interval is \begin{align*} \hat{y_p}\pm t_{\alpha /2}S_\varepsilon \sqrt{1+\frac{1}{n}+\frac{(x_p-\bar{x})^2}{SS_{xx}}} &= 25.655\pm 4.386403591\sqrt{1.1178571429}\\ &= 25.655\pm 4.638 \end{align*} \nonumber which gives the interval $(21.017,30.293)$. We are $95\%$ confident that the value of a randomly selected three-and-one-half-year-old vehicle of this make and model is between $\$21,017$ and $\$30,293$.

Note what an enormous difference the presence of the extra number $1$ under the square root sign made. The prediction interval is about three times as wide as the confidence interval at the same level of confidence.

Key Takeaways
• A confidence interval is used to estimate the mean value of $y$ in the sub-population determined by the condition that $x$ have some specific value $x_p$.
• The prediction interval is used to predict the value that the random variable $y$ will take when $x$ has some specific value $x_p$.
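The two intervals of Examples $1$ and $2$ differ only by the extra $1$ under the square root, which the following Python sketch makes explicit (SciPy assumed; the summary values are copied from Sections 10.4 and 10.5, and the variable names are ours):

import numpy as np
from scipy import stats

# Summary quantities for the age/value data (Sections 10.4 and 10.5)
n, xbar, SSxx, s_eps = 10, 4.0, 14.0, 1.902169814
beta1, beta0 = -2.05, 32.83

xp = 3.5
yp = beta1 * xp + beta0                       # 25.655
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)  # 2.306

half_ci = t_crit * s_eps * np.sqrt(1/n + (xp - xbar)**2 / SSxx)      # about 1.506
half_pi = t_crit * s_eps * np.sqrt(1 + 1/n + (xp - xbar)**2 / SSxx)  # about 4.638

print(yp - half_ci, yp + half_ci)  # about (24.149, 27.161), the confidence interval
print(yp - half_pi, yp + half_pi)  # about (21.017, 30.293), the prediction interval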
Learning Objectives
• To see a complete linear correlation and regression analysis, in a practical setting, as a cohesive whole

In the preceding sections numerous concepts were introduced and illustrated, but the analysis was broken into disjoint pieces by sections. In this section we will go through a complete example of the use of correlation and regression analysis of data from start to finish, touching on all the topics of this chapter in sequence.

In general, educators are convinced that, all other factors being equal, class attendance has a significant bearing on course performance. To investigate the relationship between attendance and performance, an education researcher selects for study a multiple section introductory statistics course at a large university. Instructors in the course agree to keep an accurate record of attendance throughout one semester. At the end of the semester $26$ students are selected at random. For each student in the sample two measurements are taken: $x$, the number of days the student was absent, and $y$, the student’s score on the common final exam in the course. The data are summarized in Table $1$.

Table $1$: Absence and Score Data
Absences ($x$)  Score ($y$)    Absences ($x$)  Score ($y$)
 2   76    4   41
 7   29    5   63
 2   96    4   88
 7   63    0   98
 2   79    1   99
 7   71    0   89
 0   88    1   96
 0   92    3   90
 6   55    1   90
 6   70    3   68
 2   80    1   84
 2   75    3   80
 1   63    1   78

A scatter plot of the data is given in Figure $1$. There is a downward trend in the plot which indicates that on average students with more absences tend to do worse on the final examination.

The trend observed in Figure $1$ as well as the fairly constant width of the apparent band of points in the plot makes it reasonable to assume a relationship between $x$ and $y$ of the form $y=β_1x+β_0+ε \nonumber$ where $β_1$ and $β_0$ are unknown parameters and $\varepsilon$ is a normal random variable with mean zero and unknown standard deviation $\sigma$. Note carefully that this model is being proposed for the population of all students taking this course, not just those taking it this semester, and certainly not just those in the sample. The numbers $β_1$, $β_0$, and $\sigma$ are parameters relating to this large population.

First we perform preliminary computations that will be needed later. The data are processed in Table $2$.

Table $2$: Processed Absence and Score Data
$x$  $y$  $x^2$  $xy$  $y^2$     $x$  $y$  $x^2$  $xy$  $y^2$
 2   76    4   152   5776     4   41   16   164   1681
 7   29   49   203    841     5   63   25   315   3969
 2   96    4   192   9216     4   88   16   352   7744
 7   63   49   441   3969     0   98    0     0   9604
 2   79    4   158   6241     1   99    1    99   9801
 7   71   49   497   5041     0   89    0     0   7921
 0   88    0     0   7744     1   96    1    96   9216
 0   92    0     0   8464     3   90    9   270   8100
 6   55   36   330   3025     1   90    1    90   8100
 6   70   36   420   4900     3   68    9   204   4624
 2   80    4   160   6400     1   84    1    84   7056
 2   75    4   150   5625     3   80    9   240   6400
 1   63    1    63   3969     1   78    1    78   6084

Adding up the numbers in each column in Table $2$ gives $\sum x=71,\; \sum y=2001,\; \sum x^2=329,\; \sum xy=4758,\; \text{and}\; \sum y^2=161511 \nonumber$ Then $SS_{xx}=\sum x^2-\frac{1}{n}\left ( \sum x \right )^2=329-\frac{1}{26}(71)^2=135.1153846\\ SS_{xy}=\sum xy-\frac{1}{n}\left ( \sum x \right )\left ( \sum y \right )=4758-\frac{1}{26}(71)(2001)=-706.2692308\\ SS_{yy}=\sum y^2-\frac{1}{n}\left ( \sum y \right )^2=161511-\frac{1}{26}(2001)^2=7510.961538 \nonumber$ and $\bar{x}=\frac{\sum x}{n}=\frac{71}{26}=2.730769231$ and $\bar{y}=\frac{\sum y}{n}=\frac{2001}{26}=76.96153846$.

We begin the actual modelling by finding the least squares regression line, the line that best fits the data.
Its slope and $y$-intercept are $\hat{\beta _1}=\frac{SS_{xy}}{SS_{xx}}=\frac{-706.2692308}{135.1153846}=-5.227156278 \nonumber$ $\hat{\beta _0}=\bar{y}-\hat{\beta _1}\bar{x}=76.96153846-(-5.227156278)(2.730769231)=91.23569553 \nonumber$ Rounding these numbers to two decimal places, the least squares regression line for these data is $\hat{y}=-5.23 x+91.24 \nonumber$

The goodness of fit of this line to the scatter plot, the sum of its squared errors, is $SSE=SS_{yy}-\hat{\beta _1}SS_{xy}=7510.961538-(-5.227156278)(-706.2692308)=3819.181894 \nonumber$ This number is not particularly informative in itself, but we use it to compute the important statistic $S_\varepsilon =\sqrt{\frac{SSE}{n-2}}=\sqrt{\frac{3819.181894}{24}}=12.6147762 \nonumber$ The statistic $S_\varepsilon$ estimates the standard deviation $\sigma$ of the normal random variable $\varepsilon$ in the model. Its meaning is that among all students with the same number of absences, the standard deviation of their scores on the final exam is about $12.6$ points. Such a large value on a $100$-point exam means that the final exam scores of each sub-population of students, based on the number of absences, are highly variable.

The size and sign of the slope $\hat{\beta }_1=-5.23$ indicate that, for every class missed, students tend to score about $5.23$ fewer points on the final exam, on average. Similarly, for every two classes missed students tend to score on average $2\times 5.23=10.46$ fewer points on the final exam, or about a letter grade worse on average. Since $0$ is in the range of $x$-values in the data set, the $y$-intercept also has meaning in this problem. It is an estimate of the average grade on the final exam of all students who have perfect attendance. The predicted average of such students is $\hat{\beta _0}=91.24$.

Before we use the regression equation further, or perform other analyses, it would be a good idea to examine the utility of the linear regression model. We can do this in two ways: 1) by computing the correlation coefficient $r$ to see how strongly the number of absences $x$ and the score $y$ on the final exam are correlated, and 2) by testing the null hypothesis $H_0: \beta _1=0$ (the slope of the population regression line is zero, so $x$ is not a good predictor of $y$) against the natural alternative $H_a: \beta _1<0$ (the slope of the population regression line is negative, so final exam scores $y$ go down as absences $x$ go up).

The correlation coefficient $r$ is $r=\frac{SS_{xy}}{\sqrt{SS_{xx}SS_{yy}}}=\frac{-706.2692308}{\sqrt{(135.1153846)(7510.961538)}}=-0.7010840977 \nonumber$ a moderate negative correlation.

Turning to the test of hypotheses, let us test at the commonly used $5\%$ level of significance. The test is $H_0: \beta _1=0\ \text{vs.}\ H_a: \beta _1<0\; \; @\; \; \alpha =0.05 \nonumber$ From Figure 7.1.6, with $df=26-2=24$ degrees of freedom $t_{0.05}=1.711$, so the rejection region is $(-\infty ,-1.711]$. The value of the standardized test statistic is $t=\frac{\hat{\beta _1}-B_0}{S_\varepsilon /\sqrt{SS_{xx}}}=\frac{-5.227156278-0}{12.6147762/\sqrt{135.1153846}}=-4.817 \nonumber$ which falls in the rejection region. We reject $H_0$ in favor of $H_a$. The data provide sufficient evidence, at the $5\%$ level of significance, to conclude that $β_1$ is negative, meaning that as the number of absences increases average score on the final exam decreases.
As already noted, the value $\hat{\beta }_1=-5.23$ gives a point estimate of how much one additional absence is reflected in the average score on the final exam. For each additional absence the average drops by about $5.23$ points. We can widen this point estimate to a confidence interval for $β_1$. At the $95\%$ confidence level, from Figure 7.1.6 with $df=26-2=24$ degrees of freedom, $t_{\alpha /2}=t_{0.025}=2.064$. The $95\%$ confidence interval for $β_1$ based on our sample data is $\hat{\beta _1}\pm t_{\alpha /2}\tfrac{S_\varepsilon }{\sqrt{SS_{xx}}}=-5.23\pm 2.064\tfrac{12.6147762}{\sqrt{135.1153846}}=-5.23\pm 2.24 \nonumber$ or $(-7.47,-2.99)$. We are $95\%$ confident that, among all students who ever take this course, for each additional class missed the average score on the final exam goes down by between $2.99$ and $7.47$ points.

If we restrict attention to the sub-population of all students who have exactly five absences, say, then using the least squares regression equation $\hat{y}=-5.23x+91.24$ we estimate that the average score on the final exam for those students is $\hat{y}=-5.23(5)+91.24=65.09 \nonumber$ This is also our best guess as to the score on the final exam of any particular student who is absent five times. A $95\%$ confidence interval for the average score on the final exam for all students with five absences is \begin{align*} \hat{y_p}\pm t_{\alpha /2}S_\varepsilon \sqrt{\frac{1}{n}+\frac{(x_p-\bar{x})^2}{SS_{xx}}} &= 65.09\pm (2.064)(12.6147762)\sqrt{\frac{1}{26}+\frac{(5-2.730769231)^2}{135.1153846}}\\ &= 65.09\pm 26.0368981\sqrt{0.0765727299}\\ &= 65.09\pm 7.20 \end{align*} \nonumber which is the interval $(57.89,72.29)$. This confidence interval suggests that the true mean score on the final exam for all students who are absent from class exactly five times during the semester is likely to be between $57.89$ and $72.29$.

If a particular student misses exactly five classes during the semester, his score on the final exam is predicted with $95\%$ confidence to be in the interval \begin{align*} \hat{y_p}\pm t_{\alpha /2}S_\varepsilon \sqrt{1+\frac{1}{n}+\frac{(x_p-\bar{x})^2}{SS_{xx}}} &= 65.09\pm 26.0368981\sqrt{1.0765727299}\\ &= 65.09\pm 27.02 \end{align*} \nonumber which is the interval $(38.07,92.11)$. This prediction interval suggests that this individual student’s final exam score is likely to be between $38.07$ and $92.11$. Whereas the $95\%$ confidence interval for the average score of all students with five absences gave real information, this interval is so wide that it says practically nothing about what the individual student’s final exam score might be. This is an example of the dramatic effect that the presence of the extra summand $1$ under the square root sign in the prediction interval can have.

Finally, the proportion of the variability in the scores of students on the final exam that is explained by the linear relationship between that score and the number of absences is estimated by the coefficient of determination, $r^2$. Since we have already computed $r$ above, we easily find that $r^2=(-0.7010840977)^2=0.491518912 \nonumber$ or about $49\%$. Thus although there is a significant correlation between attendance and performance on the final exam, and we can estimate with fair accuracy the average score of students who miss a certain number of classes, nevertheless less than half the total variation of the exam scores in the sample is explained by the number of absences.
This should not come as a surprise, since there are many factors besides attendance that bear on student performance on exams.

Key Takeaway
• It is a good idea to attend class.

10.09: Formula List

Learning Objectives
• Listing of all formulas used throughout the chapter.

$SS_{xx}=\sum x^2-\frac{1}{n}\left ( \sum x \right )^2\; \; SS_{xy}=\sum xy-\frac{1}{n}\left ( \sum x \right )\left ( \sum y \right )\; \; SS_{yy}=\sum y^2-\frac{1}{n}\left ( \sum y \right )^2 \nonumber$

Correlation coefficient: $r=\frac{SS_{xy}}{\sqrt{SS_{xx}SS_{yy}}} \nonumber$

Least squares regression equation (equation of the least squares regression line): $\hat{y}=\hat{\beta _1}x+\hat{\beta _0}\; \; \text{where}\; \; \hat{\beta _1}=\frac{SS_{xy}}{SS_{xx}}\; \; \text{and}\; \; \hat{\beta _0}=\bar{y}-\hat{\beta _1}\bar{x} \nonumber$

Sum of the squared errors for the least squares regression line: $SSE=SS_{yy}-\hat{\beta _1}SS_{xy} \nonumber$

Sample standard deviation of errors: $S_\varepsilon =\sqrt{\frac{SSE}{n-2}} \nonumber$

$100(1-\alpha )\%$ confidence interval for $\beta _1$: $\hat{\beta _1}\pm t_{\alpha /2}\frac{S_\varepsilon }{\sqrt{SS_{xx}}}\; \; \; (df=n-2) \nonumber$

Standardized test statistic for hypothesis tests concerning $\beta _1$: $T=\frac{\hat{\beta _1}-B_0}{S_\varepsilon /\sqrt{SS_{xx}}}\; \; \; (df=n-2) \nonumber$

Coefficient of determination: $r^2=\frac{SS_{yy}-SSE}{SS_{yy}}=\frac{SS_{xy}^{2}}{SS_{xx}SS_{yy}}=\hat{\beta _1}\frac{SS_{xy}}{SS_{yy}} \nonumber$

$100(1-\alpha )\%$ confidence interval for the mean value of $y$ at $x=x_p$: $\hat{y_p}\pm t_{\alpha /2}S_\varepsilon \sqrt{\frac{1}{n}+\frac{(x_p-\bar{x})^2}{SS_{xx}}} \; \; \; (df=n-2) \nonumber$

$100(1-\alpha )\%$ prediction interval for an individual new value of $y$ at $x=x_p$: $\hat{y_p}\pm t_{\alpha /2}S_\varepsilon \sqrt{1+\frac{1}{n}+\frac{(x_p-\bar{x})^2}{SS_{xx}}} \; \; \; (df=n-2) \nonumber$
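As a numerical coda, the formulas in this list can be checked against the complete example above. The sketch below (Python with NumPy and SciPy assumed; the data are keyed in from Table $1$, left column of pairs first, then the right column, and the variable names are ours) recomputes the headline quantities:

import numpy as np
from scipy import stats

# Absences x and final exam scores y for the 26 sampled students
x = np.array([2, 7, 2, 7, 2, 7, 0, 0, 6, 6, 2, 2, 1,
              4, 5, 4, 0, 1, 0, 1, 3, 1, 3, 1, 3, 1], dtype=float)
y = np.array([76, 29, 96, 63, 79, 71, 88, 92, 55, 70, 80, 75, 63,
              41, 63, 88, 98, 99, 89, 96, 90, 90, 68, 84, 80, 78], dtype=float)
n = len(x)

SSxx = np.sum(x**2) - np.sum(x)**2 / n          # about 135.115
SSxy = np.sum(x*y) - np.sum(x) * np.sum(y) / n  # about -706.269
SSyy = np.sum(y**2) - np.sum(y)**2 / n          # about 7510.962

beta1 = SSxy / SSxx                   # about -5.23
beta0 = y.mean() - beta1 * x.mean()   # about 91.24
r = SSxy / np.sqrt(SSxx * SSyy)       # about -0.701
SSE = SSyy - beta1 * SSxy
s_eps = np.sqrt(SSE / (n - 2))        # about 12.61

# Left-tailed test of H0: beta_1 = 0 at alpha = 0.05
T = beta1 / (s_eps / np.sqrt(SSxx))             # about -4.82
print(T <= -stats.t.ppf(1 - 0.05, df=n - 2))    # True: reject H0

# 95% confidence interval for beta_1
m = stats.t.ppf(0.975, df=n - 2) * s_eps / np.sqrt(SSxx)
print(beta1 - m, beta1 + m)                     # about (-7.47, -2.99)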
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 10.1 Linear Relationships Between Variables Basic 1. A line has equation $y=0.5x+2$. 1. Pick five distinct $x$-values, use the equation to compute the corresponding $y$-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the $y$-intercept. 2. A line has equation $y=x-0.5$. 1. Pick five distinct $x$-values, use the equation to compute the corresponding $y$-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the $y$-intercept. 3. A line has equation $y=-2x+4$. 1. Pick five distinct $x$-values, use the equation to compute the corresponding $y$-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the $y$-intercept. 4. A line has equation $y=-1.5x+1$. 1. Pick five distinct $x$-values, use the equation to compute the corresponding $y$-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the $y$-intercept. 5. Based on the information given about a line, determine how $y$ will change (increase, decrease, or stay the same) when $x$ is increased, and explain. In some cases it might be impossible to tell from the information given. 1. The slope is positive. 2. The $y$-intercept is positive. 3. The slope is zero. 6. Based on the information given about a line, determine how $y$ will change (increase, decrease, or stay the same) when $x$ is increased, and explain. In some cases it might be impossible to tell from the information given. 1. The $y$-intercept is negative. 2. The $y$-intercept is zero. 3. The slope is negative. 7. A data set consists of eight $(x,y)$ pairs of numbers: $\begin{matrix} (0,12) & (4,16) & (8,22) & (15,28)\ (2,15) & (5,14) & (13,24) & (20,30) \end{matrix}$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be linear or not linear. 8. A data set consists of ten $(x,y)$ pairs of numbers: $\begin{matrix} (3,20) & (6,9) & (11,0) & (14,1) & (18,9)\ (5,13) & (8,4) & (12,0) & (17,6) & (20,16) \end{matrix}$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be linear or not linear. 9. A data set consists of nine $(x,y)$ pairs of numbers: $\begin{matrix} (8,16) & (10,4) & (12,0) & (14,4) & (16,16)\ (9,9) & (11,1) & (13,1) & (15,9) & \end{matrix}$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be linear or not linear. 10. A data set consists of five $(x,y)$ pairs of numbers: $\begin{matrix} (0,1) & (2,5) & (3,7) & (5,11) & (8,17) \end{matrix}$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be linear or not linear. Applications 1. 
At $60^{\circ}F$ a particular blend of automotive gasoline weighs $6.17$ lb/gal. The weight $y$ of gasoline on a tank truck that is loaded with $x$ gallons of gasoline is given by the linear equation $y=6.17x$ 1. Explain whether the relationship between the weight $y$ and the amount $x$ of gasoline is deterministic or contains an element of randomness. 2. Predict the weight of gasoline on a tank truck that has just been loaded with $6,750$ gallons of gasoline. 2. The rate for renting a motor scooter for one day at a beach resort area is $\$25$ plus $30$ cents for each mile the scooter is driven. The total cost $y$ in dollars for renting a scooter and driving it $x$ miles is $y=0.30x+25$ 1. Explain whether the relationship between the cost $y$ of renting the scooter for a day and the distance $x$ that the scooter is driven that day is deterministic or contains an element of randomness. 2. A person intends to rent a scooter one day for a trip to an attraction $17$ miles away. Assuming that the total distance the scooter is driven is $34$ miles, predict the cost of the rental. 3. The pricing schedule for labor on a service call by an elevator repair company is $\$150$ plus $\$50$ per hour on site. 1. Write down the linear equation that relates the labor cost $y$ to the number of hours $x$ that the repairman is on site. 2. Calculate the labor cost for a service call that lasts $2.5$ hours. 4. The cost of a telephone call made through a leased line service is $2.5$ cents per minute. 1. Write down the linear equation that relates the cost $y$ (in cents) of a call to its length $x$. 2. Calculate the cost of a call that lasts $23$ minutes. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. Plot the scatter diagram with SAT score as the independent variable ($x$) and GPA as the dependent variable ($y$). Comment on the appearance and strength of any linear trend. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Plot the scatter diagram with golf score using the original clubs as the independent variable ($x$) and golf score using the new clubs as the dependent variable ($y$). Comment on the appearance and strength of any linear trend. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. Plot the scatter diagram with the number of bidders at the auction as the independent variable ($x$) and the sales price as the dependent variable ($y$). Comment on the appearance and strength of any linear trend. Answers 1. Answers vary. 2. Slope $m=0.5$; $y$-intercept $b=2$. 1. Answers vary. 2. Slope $m=-2$; $y$-intercept $b=4$. 1. $y$ increases. 2. Impossible to tell. 3. $y$ does not change. 1. Scatter diagram needed. 2. Involves randomness. 3. Linear. 1. Scatter diagram needed. 2. Deterministic. 3. Not linear. 1. Deterministic. 2. $41,647.5$ pounds. 1. $y=50x+150$. 2. $\$275$. 1. There appears to be a hint of some positive correlation. 2. There appears to be clear positive correlation.
10.2 The Linear Correlation Coefficient Basic With the exception of the exercises at the end of Section 10.3, the first Basic exercise in each of the following sections through Section 10.7 uses the data from the first exercise here, the second Basic exercise uses the data from the second exercise here, and so on, and similarly for the Application exercises. Save your computations done on these exercises so that you do not need to repeat them later. 1. For the sample data $\begin{array}{c|c c c c c} x &0 &1 &3 &5 &8 \ \hline y &2 &4 &6 &5 &9\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 2. For the sample data $\begin{array}{c|c c c c c} x &0 &2 &3 &6 &9 \ \hline y &0 &3 &3 &4 &8\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 3. For the sample data $\begin{array}{c|c c c c c} x &1 &3 &4 &6 &8 \ \hline y &4 &1 &3 &-1 &0\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 4. For the sample data $\begin{array}{c|c c c c c} x &1 &2 &4 &7 &9 \ \hline y &5 &5 &6 &-3 &0\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 5. For the sample data $\begin{array}{c|c c c c c} x &1 &1 &3 &4 &5 \ \hline y &2 &1 &5 &3 &4\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 6. For the sample data $\begin{array}{c|c c c c c} x &1 &3 &5 &5 &8 \ \hline y &5 &-2 &2 &-1 &-3\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 7. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=5\; \; \sum x=25\; \; \sum x^2=165\ \sum y=24\; \; \sum y^2=134\; \; \sum xy=144\ 1\leq x\leq 9$ 8. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=5\; \; \sum x=31\; \; \sum x^2=253\ \sum y=18\; \; \sum y^2=90\; \; \sum xy=148\ 2\leq x\leq 12$ 9. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=10\; \; \sum x=0\; \; \sum x^2=60\ \sum y=24\; \; \sum y^2=234\; \; \sum xy=-87\ -4\leq x\leq 4$ 10. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=10\; \; \sum x=-3\; \; \sum x^2=263\ \sum y=55\; \; \sum y^2=917\; \; \sum xy=-355\ -10\leq x\leq 10$ Applications 1. The age $x$ in months and vocabulary $y$ were measured for six children, with the results shown in the table. 
$\begin{array}{c|c c c c c c c} x &13 &14 &15 &16 &16 &18 \ \hline y &8 &10 &15 &20 &27 &30\ \end{array}$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 2. The curb weight $x$ in hundreds of pounds and braking distance $y$ in feet, at $50$ miles per hour on dry pavement, were measured for five vehicles, with the results shown in the table. $\begin{array}{c|c c c c c c } x &25 &27.5 &32.5 &35 &45 \ \hline y &105 &125 &140 &140 &150 \ \end{array}$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 3. The age $x$ and resting heart rate $y$ were measured for ten men, with the results shown in the table. $\begin{array}{c|c c c c c c } x &20 &23 &30 &37 &35 \ \hline y &72 &71 &73 &74 &74 \ \end{array}\ \begin{array}{c|c c c c c c } x &45 &51 &55 &60 &63 \ \hline y &73 &72 &79 &75 &77 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 4. The wind speed $x$ in miles per hour and wave height $y$ in feet were measured under various conditions on an enclosed deep water sea, with the results shown in the table, $\begin{array}{c|c c c c c c } x &0 &0 &2 &7 &7 \ \hline y &2.0 &0.0 &0.3 &0.7 &3.3 \ \end{array}\ \begin{array}{c|c c c c c c } x &9 &13 &20 &22 &31 \ \hline y &4.9 &4.9 &3.0 &6.9 &5.9 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 5. The advertising expenditure $x$ and sales $y$ in thousands of dollars for a small retail business in its first eight years in operation are shown in the table. $\begin{array}{c|c c c c c } x &1.4 &1.6 &1.6 &2.0 \ \hline y &180 &184 &190 &220 \ \end{array}\ \begin{array}{c|c c c c c c } x &2.0 &2.2 &2.4 &2.6 \ \hline y &186 &215 &205 &240 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 6. The height $x$ at age $2$ and $y$ at age $20$, both in inches, for ten women are tabulated in the table. $\begin{array}{c|c c c c c } x &31.3 &31.7 &32.5 &33.5 &34.4\ \hline y &60.7 &61.0 &63.1 &64.2 &65.9 \ \end{array}\ \begin{array}{c|c c c c c } x &35.2 &35.8 &32.7 &33.6 &34.8 \ \hline y &68.2 &67.6 &62.3 &64.9 &66.8 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 7. The course average $x$ just before a final exam and the score $y$ on the final exam were recorded for $15$ randomly selected students in a large physics class, with the results shown in the table. $\begin{array}{c|c c c c c } x &69.3 &87.7 &50.5 &51.9 &82.7\ \hline y &56 &89 &55 &49 &61 \ \end{array}\ \begin{array}{c|c c c c c } x &70.5 &72.4 &91.7 &83.3 &86.5 \ \hline y &66 &72 &83 &73 &82 \ \end{array}\ \begin{array}{c|c c c c c } x &79.3 &78.5 &75.7 &52.3 &62.2 \ \hline y &92 &80 &64 &18 &76 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 8. The table shows the acres $x$ of corn planted and acres $y$ of corn harvested, in millions of acres, in a particular country in ten successive years. 
$\begin{array}{c|c c c c c } x &75.7 &78.9 &78.6 &80.9 &81.8\ \hline y &68.8 &69.3 &70.9 &73.6 &75.1 \ \end{array}\ \begin{array}{c|c c c c c } x &78.3 &93.5 &85.9 &86.4 &88.2 \ \hline y &70.6 &86.5 &78.6 &79.5 &81.4 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 9. Fifty male subjects drank a measured amount $x$ (in ounces) of a medication and the concentration $y$ (in percent) in their blood of the active ingredient was measured $30$ minutes later. The sample data are summarized by the following information. $n=50\; \; \sum x=112.5\; \; \sum y=4.83\ \sum xy=15.255\; \; 0\leq x\leq 4.5\ \sum x^2=356.25\; \; \sum y^2=0.667$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 10. In an effort to produce a formula for estimating the age of large free-standing oak trees non-invasively, the girth $x$ (in inches) five feet off the ground of $15$ such trees of known age $y$ (in years) was measured. The sample data are summarized by the following information. $n=15\; \; \sum x=3368\; \; \sum y=6496\ \sum xy=1,933,219\; \; 74\leq x\leq 395\ \sum x^2=917,780\; \; \sum y^2=4,260,666$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 11. Construction standards specify the strength of concrete $28$ days after it is poured. For $30$ samples of various types of concrete the strength $x$ after $3$ days and the strength $y$ after $28$ days (both in hundreds of pounds per square inch) were measured. The sample data are summarized by the following information. $n=30\; \; \sum x=501.6\; \; \sum y=1338.8\ \sum xy=23,246.55\; \; 11\leq x\leq 22\ \sum x^2=8724.74\; \; \sum y^2=61,980.14$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 12. Power-generating facilities used forecasts of temperature to forecast energy demand. The average temperature $x$ (degrees Fahrenheit) and the day’s energy demand $y$ (million watt-hours) were recorded on $40$ randomly selected winter days in the region served by a power company. The sample data are summarized by the following information. $n=40\; \; \sum x=2000\; \; \sum y=2969\ \sum xy=143,042\; \; 40\leq x\leq 60\ \sum x^2=101,340\; \; \sum y^2=243,027$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. Additional Exercises 1. In each case state whether you expect the two variables $x$ and $y$ indicated to have positive, negative, or zero correlation. 1. the number $x$ of pages in a book and the age $y$ of the author 2. the number $x$ of pages in a book and the age $y$ of the intended reader 3. the weight $x$ of an automobile and the fuel economy $y$ in miles per gallon 4. the weight $x$ of an automobile and the reading $y$ on its odometer 5. the amount $x$ of a sedative a person took an hour ago and the time $y$ it takes him to respond to a stimulus 2. In each case state whether you expect the two variables $x$ and $y$ indicated to have positive, negative, or zero correlation. 1. the length $x$ of time an emergency flare will burn and the length $y$ of time the match used to light it burned 2. the average length $x$ of time that calls to a retail call center are on hold one day and the number $y$ of calls received that day 3. 
the length $x$ of a regularly scheduled commercial flight between two cities and the headwind $y$ encountered by the aircraft 4. the value $x$ of a house and its size $y$ in square feet 5. the average temperature $x$ on a winter day and the energy consumption $y$ of the furnace 3. Changing the units of measurement on two variables $x$ and $y$ should not change the linear correlation coefficient. Moreover, most changes of units amount to simply multiplying one unit by a constant (for example, $1$ foot = $12$ inches). Multiply each $x$ value in the table in Exercise 1 by two and compute the linear correlation coefficient for the new data set. Compare the new value of $r$ to the one for the original data. 4. Refer to the previous exercise. Multiply each $x$ value in the table in Exercise 2 by two, multiply each $y$ value by three, and compute the linear correlation coefficient for the new data set. Compare the new value of $r$ to the one for the original data. 5. Reversing the roles of $x$ and $y$ in the data set of Exercise 1 produces the data set $\begin{array}{c|c c c c c} x &2 &4 &6 &5 &9 \ \hline y &0 &1 &3 &5 &8\ \end{array}$ Compute the linear correlation coefficient of the new set of data and compare it to what you got in Exercise 1. 6. In the context of the previous problem, look at the formula for $r$ and see if you can tell why what you observed there must be true for every data set. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. Compute the linear correlation coefficient $r$. Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the first large data set problem for Section 10.1. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Compute the linear correlation coefficient $r$. Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the second large data set problem for Section 10.1. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. Compute the linear correlation coefficient $r$. Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the third large data set problem for Section 10.1. Answers 1. $r=0.921$ 2. $r=-0.794$ 3. $r=0.707$ 4. $0.875$ 5. $-0.846$ 6. $0.948$ 7. $0.709$ 8. $0.832$ 9. $0.751$ 10. $0.965$ 11. $0.992$ 12. $0.921$ 1. zero 2. positive 3. negative 4. zero 5. positive 13. same value 14. same value 15. $r=0.4601$ 16. $r=0.9002$ 10.3 Modelling Linear Relationships with Randomness Present Basic 1. State the three assumptions that are the basis for the Simple Linear Regression Model. 2. The Simple Linear Regression Model is summarized by the equation $y=\beta _1x+\beta _0+\varepsilon$ Identify the deterministic part and the random part. 3. Is the number $\beta _1$ in the equation $y=\beta _1x+\beta _0$ a statistic or a population parameter? Explain. 4. Is the number $\sigma$ in the Simple Linear Regression Model a statistic or a population parameter? Explain. 5.
Describe what to look for in a scatter diagram in order to check that the assumptions of the Simple Linear Regression Model are true. 6. True or false: the assumptions of the Simple Linear Regression Model must hold exactly in order for the procedures and analysis developed in this chapter to be useful. Answers 1. The mean of $y$ is linearly related to $x$. 2. For each given $x$, $y$ is a normal random variable with mean $\beta _1x+\beta _0$ and a standard deviation $\sigma$. 3. All the observations of $y$ in the sample are independent. 1. $\beta _1$ is a population parameter. 2. A linear trend. 10.4 The Least Squares Regression Line Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2. 1. Compute the least squares regression line for the data in Exercise 1 of Section 10.2. 2. Compute the least squares regression line for the data in Exercise 2 of Section 10.2. 3. Compute the least squares regression line for the data in Exercise 3 of Section 10.2. 4. Compute the least squares regression line for the data in Exercise 4 of Section 10.2. 5. For the data in Exercise 5 of Section 10.2 1. Compute the least squares regression line. 2. Compute the sum of the squared errors $\text{SSE}$ using the definition $\sum (y-\hat{y})^2$. 3. Compute the sum of the squared errors $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. 6. For the data in Exercise 6 of Section 10.2 1. Compute the least squares regression line. 2. Compute the sum of the squared errors $\text{SSE}$ using the definition $\sum (y-\hat{y})^2$. 3. Compute the sum of the squared errors $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. 7. Compute the least squares regression line for the data in Exercise 7 of Section 10.2. 8. Compute the least squares regression line for the data in Exercise 8 of Section 10.2. 9. For the data in Exercise 9 of Section 10.2 1. Compute the least squares regression line. 2. Can you compute the sum of the squared errors $\text{SSE}$ using the definition $\sum (y-\hat{y})^2$? Explain. 3. Compute the sum of the squared errors $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. 10. For the data in Exercise 10 of Section 10.2 1. Compute the least squares regression line. 2. Can you compute the sum of the squared errors $\text{SSE}$ using the definition $\sum (y-\hat{y})^2$? Explain. 3. Compute the sum of the squared errors $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. Applications 1. For the data in Exercise 11 of Section 10.2 1. Compute the least squares regression line. 2. On average, how many new words does a child from $13$ to $18$ months old learn each month? Explain. 3. Estimate the average vocabulary of all $16$-month-old children. 2. For the data in Exercise 12 of Section 10.2 1. Compute the least squares regression line. 2. On average, how many additional feet are added to the braking distance for each additional $100$ pounds of weight? Explain. 3. Estimate the average braking distance of all cars weighing $3,000$ pounds. 3. For the data in Exercise 13 of Section 10.2 1. Compute the least squares regression line. 2. Estimate the average resting heart rate of all $40$-year-old men. 3. Estimate the average resting heart rate of all newborn baby boys. Comment on the validity of the estimate. 4. For the data in Exercise 14 of Section 10.2 1. Compute the least squares regression line. 2. 
Estimate the average wave height when the wind is blowing at $10$ miles per hour. 3. Estimate the average wave height when there is no wind blowing. Comment on the validity of the estimate. 5. For the data in Exercise 15 of Section 10.2 1. Compute the least squares regression line. 2. On average, for each additional thousand dollars spent on advertising, how does revenue change? Explain. 3. Estimate the revenue if $\$2,500$ is spent on advertising next year. 6. For the data in Exercise 16 of Section 10.2 1. Compute the least squares regression line. 2. On average, for each additional inch of height of a two-year-old girl, what is the change in her adult height? Explain. 3. Predict the adult height of a two-year-old girl who is $33$ inches tall. 7. For the data in Exercise 17 of Section 10.2 1. Compute the least squares regression line. 2. Compute $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. 3. Estimate the average final exam score of all students whose course average just before the exam is $85$. 8. For the data in Exercise 18 of Section 10.2 1. Compute the least squares regression line. 2. Compute $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. 3. Estimate the number of acres that would be harvested if $90$ million acres of corn were planted. 9. For the data in Exercise 19 of Section 10.2 1. Compute the least squares regression line. 2. Interpret the value of the slope of the least squares regression line in the context of the problem. 3. Estimate the average concentration of the active ingredient in the blood in men after consuming $1$ ounce of the medication. 10. For the data in Exercise 20 of Section 10.2 1. Compute the least squares regression line. 2. Interpret the value of the slope of the least squares regression line in the context of the problem. 3. Estimate the age of an oak tree whose girth five feet off the ground is $92$ inches. 11. For the data in Exercise 21 of Section 10.2 1. Compute the least squares regression line. 2. The $28$-day strength of concrete used on a certain job must be at least $3,200$ psi. If the $3$-day strength is $1,300$ psi, would we anticipate that the concrete will be sufficiently strong on the $28^{th}$ day? Explain fully. 12. For the data in Exercise 22 of Section 10.2 1. Compute the least squares regression line. 2. If the power facility is called upon to provide more than $95$ million watt-hours tomorrow then energy will have to be purchased from elsewhere at a premium. The forecast is for an average temperature of $42$ degrees. Should the company plan on purchasing power at a premium? Additional Exercises 1. Verify that no matter what the data are, the least squares regression line always passes through the point with coordinates $(\bar{x},\bar{y})$. Hint: Find the predicted value of $y$ when $x=\bar{x}$. 2. In Exercise 1 you computed the least squares regression line for the data in Exercise 1 of Section 10.2. 1. Reverse the roles of $x$ and $y$ and compute the least squares regression line for the new data set $\begin{array}{c|c c c c c} x &2 &4 &6 &5 &9 \\ \hline y &0 &1 &3 &5 &8 \end{array}$ 2. Interchanging $x$ and $y$ corresponds geometrically to reflecting the scatter plot in a 45-degree line. Reflecting the regression line for the original data the same way gives a line with the equation $\hat{y}=1.346x-3.600$. Is this the equation that you got in part (a)? Can you figure out why not? Hint: Think about how $x$ and $y$ are treated differently geometrically in the computation of the goodness of fit. 3.
Compute $\text{SSE}$ for each line and see if they fit the same, or if one fits the data better than the other. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. 1. Compute the least squares regression line with SAT score as the independent variable ($x$) and GPA as the dependent variable ($y$). 2. Interpret the meaning of the slope $\widehat{\beta _1}$ of the regression line in the context of the problem. 3. Compute $\text{SSE}$, the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the GPA of a student whose SAT score is $1350$. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). 1. Compute the least squares regression line with scores using the original clubs as the independent variable ($x$) and scores using the new clubs as the dependent variable ($y$). 2. Interpret the meaning of the slope $\widehat{\beta _1}$ of the regression line in the context of the problem. 3. Compute $\text{SSE}$, the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the score with the new clubs of a golfer whose score with the old clubs is $73$. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. 1. Compute the least squares regression line with the number of bidders present at the auction as the independent variable ($x$) and sales price as the dependent variable ($y$). 2. Interpret the meaning of the slope $\widehat{\beta _1}$ of the regression line in the context of the problem. 3. Compute $\text{SSE}$, the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the sales price of a clock at an auction at which the number of bidders is seven. Answers 1. $\hat{y}=0.743x+2.675$ 2. $\hat{y}=-0.610x+4.082$ 3. $\hat{y}=0.625x+1.25,\; SSE=5$ 4. $\hat{y}=0.6x+1.8$ 5. $\hat{y}=-1.45x+2.4,\; SSE=50.25$ (cannot use the definition to compute) 1. $\hat{y}=4.848x-56$ 2. $4.8$ 3. $21.6$ 1. $\hat{y}=0.114x+69.222$ 2. $73.8$ 3. $69.2$, invalid extrapolation 1. $\hat{y}=42.024x+119.502$ 2. increases by $\$42,024$ 3. $\$224,562$ 1. $\hat{y}=1.045x-8.527$ 2. $2151.93367$ 3. $80.3$ 1. $\hat{y}=0.043x+0.001$ 2. For each additional ounce of medication consumed blood concentration of the active ingredient increases by $0.043\%$ 3. $0.044\%$ 1. $\hat{y}=2.550x+1.993$ 2. Predicted $28$-day strength is $3,514$ psi; sufficiently strong 1. $\hat{y}=0.0016x+0.022$ 2. On average, every $100$ point increase in SAT score adds $0.16$ point to the GPA. 3. $SSE=432.10$ 4. $\hat{y}=2.182$ 1. $\hat{y}=116.62x+6955.1$ 2. On average, every $1$ additional bidder at an auction raises the price by $116.62$ dollars. 3. $SSE=1850314.08$ 4. $\hat{y}=7771.44$ 10.5 Statistical Inferences About β1 Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2 and Section 10.4. 1. Construct the $95\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 1 of Section 10.2. 2. Construct the $90\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 2 of Section 10.2. 3.
Construct the $90\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 3 of Section 10.2. 4. Construct the $99\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 4 of Section 10.2. 5. For the data in Exercise 5 of Section 10.2 test, at the $10\%$ level of significance, whether $x$ is useful for predicting $y$ (that is, whether $\beta _1\neq 0$). 6. For the data in Exercise 6 of Section 10.2 test, at the $5\%$ level of significance, whether $x$ is useful for predicting $y$ (that is, whether $\beta _1\neq 0$). 7. Construct the $90\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 7 of Section 10.2. 8. Construct the $95\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 8 of Section 10.2. 9. For the data in Exercise 9 of Section 10.2 test, at the $1\%$ level of significance, whether $x$ is useful for predicting $y$ (that is, whether $\beta _1\neq 0$). 10. For the data in Exercise 10 of Section 10.2 test, at the $1\%$ level of significance, whether $x$ is useful for predicting $y$ (that is, whether $\beta _1\neq 0$). Applications 1. For the data in Exercise 11 of Section 10.2 construct a $90\%$ confidence interval for the mean number of new words acquired per month by children between $13$ and $18$ months of age. 2. For the data in Exercise 12 of Section 10.2 construct a $90\%$ confidence interval for the mean increased braking distance for each additional $100$ pounds of vehicle weight. 3. For the data in Exercise 13 of Section 10.2 test, at the $10\%$ level of significance, whether age is useful for predicting resting heart rate. 4. For the data in Exercise 14 of Section 10.2 test, at the $10\%$ level of significance, whether wind speed is useful for predicting wave height. 5. For the situation described in Exercise 15 of Section 10.2 1. Construct the $95\%$ confidence interval for the mean increase in revenue per additional thousand dollars spent on advertising. 2. An advertising agency tells the business owner that for every additional thousand dollars spent on advertising, revenue will increase by over $\$25,000$. Test this claim (which is the alternative hypothesis) at the $5\%$ level of significance. 3. Perform the test of part (b) at the $10\%$ level of significance. 4. Based on the results in (b) and (c), how believable is the ad agency’s claim? (This is a subjective judgement.) 6. For the situation described in Exercise 16 of Section 10.2 1. Construct the $90\%$ confidence interval for the mean increase in height per additional inch of length at age two. 2. It is claimed that for girls each additional inch of length at age two means more than an additional inch of height at maturity. Test this claim (which is the alternative hypothesis) at the $10\%$ level of significance. 7. For the data in Exercise 17 of Section 10.2 test, at the $10\%$ level of significance, whether course average before the final exam is useful for predicting the final exam grade. 8. For the situation described in Exercise 18 of Section 10.2, an agronomist claims that each additional million acres planted results in more than $750,000$ additional acres harvested. Test this claim at the $1\%$ level of significance. 9.
For the data in Exercise 19 of Section 10.2 test, at the $1/10$th of $1\%$ level of significance, whether, ignoring all other facts such as age and body mass, the amount of the medication consumed is a useful predictor of blood concentration of the active ingredient. 10. For the data in Exercise 20 of Section 10.2 test, at the $1\%$ level of significance, whether for each additional inch of girth the age of the tree increases by at least two and one-half years. 11. For the data in Exercise 21 of Section 10.2 1. Construct the $95\%$ confidence interval for the mean increase in strength at $28$ days for each additional hundred psi increase in strength at $3$ days. 2. Test, at the $1/10$th of $1\%$ level of significance, whether the $3$-day strength is useful for predicting $28$-day strength. 12. For the situation described in Exercise 22 of Section 10.2 1. Construct the $99\%$ confidence interval for the mean decrease in energy demand for each one-degree drop in temperature. 2. An engineer with the power company believes that for each one-degree increase in temperature, daily energy demand will decrease by more than $3.6$ million watt-hours. Test this claim at the $1\%$ level of significance. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. 1. Compute the $90\%$ confidence interval for the slope $\beta _1$ of the population regression line with SAT score as the independent variable ($x$) and GPA as the dependent variable ($y$). 2. Test, at the $10\%$ level of significance, the hypothesis that the slope of the population regression line is greater than $0.001$, against the null hypothesis that it is exactly $0.001$. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). 1. Compute the $95\%$ confidence interval for the slope $\beta _1$ of the population regression line with scores using the original clubs as the independent variable ($x$) and scores using the new clubs as the dependent variable ($y$). 2. Test, at the $10\%$ level of significance, the hypothesis that the slope of the population regression line is different from $1$, against the null hypothesis that it is exactly $1$. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. 1. Compute the $95\%$ confidence interval for the slope $\beta _1$ of the population regression line with the number of bidders present at the auction as the independent variable ($x$) and sales price as the dependent variable ($y$). 2. Test, at the $10\%$ level of significance, the hypothesis that the average sales price increases by more than $\$90$ for each additional bidder at an auction, against the default that it increases by exactly $\$90$. Answers 1. $0.743\pm 0.578$ 2. $-0.610\pm 0.633$ 3. $T=1.732,\; \pm t_{0.05}=\pm 2.353$, do not reject $H_0$ 4. $0.6\pm 0.451$ 5. $T=-4.481,\; \pm t_{0.005}=\pm 3.355$, reject $H_0$ 6. $4.8\pm 1.7$ words 7. $T=2.843,\; \pm t_{0.05}=\pm 1.860$, reject $H_0$ 1. $42.024\pm 28.011$ thousand dollars 2. $T=1.487,\; \pm t_{0.05}=\pm 1.943$, do not reject $H_0$ 3. $t_{0.10}=1.440$, reject $H_0$ 8. $T=4.096,\; \pm t_{0.05}=\pm 1.771$, reject $H_0$ 9. $T=25.524,\; \pm t_{0.0005}=\pm 3.505$, reject $H_0$ 1. $2.550\pm 0.127$ hundred psi 2.
$T=41.072,\; \pm t_{0.0005}=\pm 3.674$, reject $H_0$ 1. $(0.0014,0.0018)$ 2. $H_0:\beta _1=0.001\; vs\; H_a:\beta _1>0.001$. Test Statistic: $Z=6.1625$. Rejection Region: $[1.28,+\infty )$. Decision: Reject $H_0$ 1. $(101.789,131.4435)$ 2. $H_0:\beta _1=90\; vs\; H_a:\beta _1>90$. Test Statistic: $T=3.5938,\; d.f.=58$. Rejection Region: $[1.296,+\infty )$. Decision: Reject $H_0$ 10.6 The Coefficient of Determination Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2, Section 10.4, and Section 10.5. 1. For the sample data set of Exercise 1 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 2. For the sample data set of Exercise 2 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 3. For the sample data set of Exercise 3 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 4. For the sample data set of Exercise 4 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 5. For the sample data set of Exercise 5 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 6. For the sample data set of Exercise 6 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 7. For the sample data set of Exercise 7 of Section 10.2 find the coefficient of determination using the formula $r^2=(SS_{yy}-SSE)/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 8. For the sample data set of Exercise 8 of Section 10.2 find the coefficient of determination using the formula $r^2=(SS_{yy}-SSE)/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 9. For the sample data set of Exercise 9 of Section 10.2 find the coefficient of determination using the formula $r^2=(SS_{yy}-SSE)/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 10. For the sample data set of Exercise 10 of Section 10.2 find the coefficient of determination using the formula $r^2=(SS_{yy}-SSE)/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. Applications 1. For the data in Exercise 11 of Section 10.2 compute the coefficient of determination and interpret its value in the context of age and vocabulary. 2. For the data in Exercise 12 of Section 10.2 compute the coefficient of determination and interpret its value in the context of vehicle weight and braking distance. 3. For the data in Exercise 13 of Section 10.2 compute the coefficient of determination and interpret its value in the context of age and resting heart rate. In the age range of the data, does age seem to be a very important factor with regard to heart rate? 4. For the data in Exercise 14 of Section 10.2 compute the coefficient of determination and interpret its value in the context of wind speed and wave height.
Does wind speed seem to be a very important factor with regard to wave height? 5. For the data in Exercise 15 of Section 10.2 find the proportion of the variability in revenue that is explained by level of advertising. 6. For the data in Exercise 16 of Section 10.2 find the proportion of the variability in adult height that is explained by the variation in length at age two. 7. For the data in Exercise 17 of Section 10.2 compute the coefficient of determination and interpret its value in the context of course average before the final exam and score on the final exam. 8. For the data in Exercise 18 of Section 10.2 compute the coefficient of determination and interpret its value in the context of acres planted and acres harvested. 9. For the data in Exercise 19 of Section 10.2 compute the coefficient of determination and interpret its value in the context of the amount of the medication consumed and blood concentration of the active ingredient. 10. For the data in Exercise 20 of Section 10.2 compute the coefficient of determination and interpret its value in the context of tree size and age. 11. For the data in Exercise 21 of Section 10.2 find the proportion of the variability in $28$-day strength of concrete that is accounted for by variation in $3$-day strength. 12. For the data in Exercise 22 of Section 10.2 find the proportion of the variability in energy demand that is accounted for by variation in average temperature. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. Compute the coefficient of determination and interpret its value in the context of SAT scores and GPAs. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Compute the coefficient of determination and interpret its value in the context of golf scores with the two kinds of golf clubs. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. Compute the coefficient of determination and interpret its value in the context of the number of bidders at an auction and the price of this type of antique grandfather clock. Answers 1. $0.848$ 2. $0.631$ 3. $0.5$ 4. $0.766$ 5. $0.715$ 6. $0.898$; about $90\%$ of the variability in vocabulary is explained by age 7. $0.503$; about $50\%$ of the variability in heart rate is explained by age. Age is a significant but not dominant factor in explaining heart rate. 8. The proportion is $r^2=0.692$ 9. $0.563$; about $56\%$ of the variability in final exam scores is explained by course average before the final exam 10. $0.931$; about $93\%$ of the variability in the blood concentration of the active ingredient is explained by the amount of the medication consumed 11. The proportion is $r^2=0.984$ 12. $r^2=21.17\%$ 13. $r^2=81.04\%$ 10.7 Estimation and Prediction Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in previous sections. 1. For the sample data set of Exercise 1 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 4$. 2. Construct the $90\%$ confidence interval for that mean value. 2. For the sample data set of Exercise 2 of Section 10.2 1. 
Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 4$. 2. Construct the $90\%$ confidence interval for that mean value. 3. For the sample data set of Exercise 3 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 7$. 2. Construct the $95\%$ confidence interval for that mean value. 4. For the sample data set of Exercise 4 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 2$. 2. Construct the $80\%$ confidence interval for that mean value. 5. For the sample data set of Exercise 5 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 1$. 2. Construct the $80\%$ confidence interval for that mean value. 6. For the sample data set of Exercise 6 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 5$. 2. Construct the $95\%$ confidence interval for that mean value. 7. For the sample data set of Exercise 7 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 6$. 2. Construct the $99\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for $x = 12$? Explain. 8. For the sample data set of Exercise 8 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 12$. 2. Construct the $80\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for $x = 0$? Explain. 9. For the sample data set of Exercise 9 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 0$. 2. Construct the $90\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for $x = -1$? Explain. 10. For the sample data set of Exercise 10 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 8$. 2. Construct the $95\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for $x = 0$? Explain. Applications 1. For the data in Exercise 11 of Section 10.2 1. Give a point estimate for the average number of words in the vocabulary of $18$-month-old children. 2. Construct the $95\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for two-year-old children? Explain. 2. For the data in Exercise 12 of Section 10.2 1. Give a point estimate for the average braking distance of automobiles that weigh $3,250$ pounds. 2. Construct the $80\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for $5,000$-pound automobiles? Explain. 3. For the data in Exercise 13 of Section 10.2 1. Give a point estimate for the resting heart rate of a man who is $35$ years old. 2. One of the men in the sample is $35$ years old, but his resting heart rate is not what you computed in part (a). Explain why this is not a contradiction. 3. Construct the $90\%$ confidence interval for the mean resting heart rate of all $35$-year-old men. 4. For the data in Exercise 14 of Section 10.2 1. Give a point estimate for the wave height when the wind speed is $13$ miles per hour. 2. One of the wind speeds in the sample is $13$ miles per hour, but the height of waves that day is not what you computed in part (a).
Explain why this is not a contradiction. 3. Construct the $95\%$ confidence interval for the mean wave height on days when the wind speed is $13$ miles per hour. 5. For the data in Exercise 15 of Section 10.2 1. The business owner intends to spend $\$2,500$ on advertising next year. Give an estimate of next year’s revenue based on this fact. 2. Construct the $90\%$ prediction interval for next year’s revenue, based on the intent to spend $\$2,500$ on advertising. 6. For the data in Exercise 16 of Section 10.2 1. A two-year-old girl is $32.3$ inches long. Predict her adult height. 2. Construct the $95\%$ prediction interval for the girl’s adult height. 7. For the data in Exercise 17 of Section 10.2 1. Lodovico has a $78.6$ average in his physics class just before the final. Give a point estimate of what his final exam grade will be. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for Lodovico’s final exam grade at the $90\%$ level of confidence. 8. For the data in Exercise 18 of Section 10.2 1. This year $86.2$ million acres of corn were planted. Give a point estimate of the number of acres that will be harvested this year. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for the number of acres that will be harvested this year, at the $99\%$ level of confidence. 9. For the data in Exercise 19 of Section 10.2 1. Give a point estimate for the blood concentration of the active ingredient of this medication in a man who has consumed $1.5$ ounces of the medication just recently. 2. Gratiano just consumed $1.5$ ounces of this medication $30$ minutes ago. Construct a $95\%$ prediction interval for the concentration of the active ingredient in his blood right now. 10. For the data in Exercise 20 of Section 10.2 1. You measure the girth of a free-standing oak tree five feet off the ground and obtain the value $127$ inches. How old do you estimate the tree to be? 2. Construct a $90\%$ prediction interval for the age of this tree. 11. For the data in Exercise 21 of Section 10.2 1. A test cylinder of concrete three days old fails at $1,750$ psi. Predict what the $28$-day strength of the concrete will be. 2. Construct a $99\%$ prediction interval for the $28$-day strength of this concrete. 3. Based on your answer to (b), what would be the minimum $28$-day strength you could expect this concrete to exhibit? 12. For the data in Exercise 22 of Section 10.2 1. Tomorrow’s average temperature is forecast to be $53$ degrees. Estimate the energy demand tomorrow. 2. Construct a $99\%$ prediction interval for the energy demand tomorrow. 3. Based on your answer to (b), what would be the minimum demand you could expect? Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. 1. Give a point estimate of the mean GPA of all students who score $1350$ on the SAT. 2. Construct a $90\%$ confidence interval for the mean GPA of all students who score $1350$ on the SAT. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). 1. Thurio averages $72$ strokes per round with his own clubs.
Give a point estimate for his score on one round if he switches to the new clubs. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for Thurio’s score on one round if he switches to the new clubs, at $90\%$ confidence. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. 1. There are seven likely bidders at the Verona auction today. Give a point estimate for the price of such a clock at today’s auction. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for the likely sale price of such a clock at today’s sale, at $95\%$ confidence. Answers 1. $5.647$ 2. $5.647\pm 1.253$ 1. $-0.188$ 2. $-0.188\pm 3.041$ 1. $1.875$ 2. $1.875\pm 1.423$ 1. $5.4$ 2. $5.4\pm 3.355$ 3. invalid (extrapolation) 1. $2.4$ 2. $2.4\pm 1.474$ 3. valid ($-1$ is in the range of the $x$-values in the data set) 1. $31.3$ words 2. $31.3\pm 7.1$ words 3. not valid, since two years is $24$ months, hence this is extrapolation 1. $73.2$ beats/min 2. The man’s heart rate is not the predicted average for all men his age. 3. $73.2\pm 1.2$ beats/min 1. $\$224,562$ 2. $\$224,562\pm \$28,699$ 1. $74$ 2. Prediction (one person, not an average for all who have average $78.6$ before the final exam) 3. $74\pm 24$ 1. $0.066\%$ 2. $0.066\pm 0.034\%$ 1. $4,656$ psi 2. $4,656\pm 321$ psi 3. $4,656-321=4,335$ psi 1. $2.19$ 2. $(2.1421,2.2316)$ 1. $7771.39$ 2. A prediction interval. 3. $(7410.41,8132.38)$ 10.8 A Complete Example Basic The exercises in this section are unrelated to those in previous sections. 1. The data give the amount $x$ of silicofluoride in the water (mg/L) and the amount $y$ of lead in the bloodstream (μg/dL) of ten children in various communities with and without municipal water. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find $SSE,\; s_\varepsilon$ and $r$, and so on). In the hypothesis test use as the alternative hypothesis $\beta _1>0$, and test at the $5\%$ level of significance. Use confidence level $95\%$ for the confidence interval for $\beta _1$. Construct $95\%$ confidence and prediction intervals at $x_p=2$ at the end. $\begin{array}{c|c c c c c} x &0.0 &0.0 &1.1 &1.4 &1.6 \\ \hline y &0.3 &0.1 &4.7 &3.2 &5.1 \end{array}\qquad \begin{array}{c|c c c c c} x &1.7 &2.0 &2.0 &2.2 &2.2 \\ \hline y &7.0 &5.0 &6.1 &8.6 &9.5 \end{array}$ 2. The table gives the weight $x$ (thousands of pounds) and available heat energy $y$ (million BTU) of a standard cord of various species of wood typically used for heating. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find $SSE,\; s_\varepsilon$ and $r$, and so on). In the hypothesis test use as the alternative hypothesis $\beta _1>0$, and test at the $5\%$ level of significance. Use confidence level $95\%$ for the confidence interval for $\beta _1$. Construct $95\%$ confidence and prediction intervals at $x_p=5$ at the end.
$\begin{array}{c|c c c c c} x &3.37 &3.50 &4.29 &4.00 &4.64 \\ \hline y &23.6 &17.5 &20.1 &21.6 &28.1 \end{array}\qquad \begin{array}{c|c c c c c} x &4.99 &4.94 &5.48 &3.26 &4.16 \\ \hline y &25.3 &27.0 &30.7 &18.9 &20.7 \end{array}$ Large Data Set Exercises Large Data Sets not available 1. Large Data Sets 3 and 3A list the shoe sizes and heights of $174$ customers entering a shoe store. The gender of the customer is not indicated in Large Data Set 3. However, men’s and women’s shoes are not measured on the same scale; for example, a size $8$ shoe for men is not the same size as a size $8$ shoe for women. Thus it would not be meaningful to apply regression analysis to Large Data Set 3. Nevertheless, compute the scatter diagrams, with shoe size as the independent variable ($x$) and height as the dependent variable ($y$), for (i) just the data on men, (ii) just the data on women, and (iii) the full mixed data set with both men and women. Does the third, invalid scatter diagram look markedly different from the other two? 2. Separate out from Large Data Set 3A just the data on men and do a complete analysis, with shoe size as the independent variable ($x$) and height as the dependent variable ($y$). Use $\alpha =0.05$ and $x_p=10$ whenever appropriate. 3. Separate out from Large Data Set 3A just the data on women and do a complete analysis, with shoe size as the independent variable ($x$) and height as the dependent variable ($y$). Use $\alpha =0.05$ and $x_p=10$ whenever appropriate. Answers 1. $\sum x=14.2,\; \sum y=49.6,\; \sum xy=91.73,\; \sum x^2=26.3,\; \sum y^2=333.86;\;\; SS_{xx}=6.136,\; SS_{xy}=21.298,\; SS_{yy}=87.844;\;\; \bar{x}=1.42,\; \bar{y}=4.96;\;\; \widehat{\beta _1}=3.47,\; \widehat{\beta _0}=0.03;\;\; SSE=13.92,\; s_\varepsilon =1.32;\;\; r = 0.9174,\; r^2 = 0.8416;\;\; df=8,\; T = 6.518$. The $95\%$ confidence interval for $\beta _1$ is: $(2.24,4.70)$. At $x_p=2$ the $95\%$ confidence interval for $E(y)$ is $(5.77,8.17)$. At $x_p=2$ the $95\%$ prediction interval for $y$ is $(3.73,10.21)$. 2. The positively correlated trend seems less profound than that in each of the previous plots. 3. The regression line: $\hat{y}=3.3426x+138.7692$. Coefficient of Correlation: $r = 0.9431$. Coefficient of Determination: $r^2 = 0.8894$. $SSE=283.2473$. $s_\varepsilon =1.9305$. A $95\%$ confidence interval for $\beta _1$: $(3.0733,3.6120)$. Test Statistic for $H_0: \beta _1=0: T=24.7209$. At $x_p=10$, $\hat{y}=172.1956$; a $95\%$ confidence interval for the mean value of $y$ is: $(171.5577,172.8335)$; and a $95\%$ prediction interval for an individual value of $y$ is: $(168.2974,176.0938)$.
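Readers who wish to verify the computations reported in the first answer above can do so with a few lines of code. The following is a minimal Python sketch, assuming NumPy is available; the variable names are ours and not part of the exercise. It reproduces the summary statistics, the least squares line, $SSE$, $s_\varepsilon$, $r$, and the test statistic $T$ for the data of Exercise 1:

import numpy as np

# Data from Exercise 1 of Section 10.8: silicofluoride in water (x, mg/L)
# and lead in the bloodstream (y, micrograms/dL) for ten children.
x = np.array([0.0, 0.0, 1.1, 1.4, 1.6, 1.7, 2.0, 2.0, 2.2, 2.2])
y = np.array([0.3, 0.1, 4.7, 3.2, 5.1, 7.0, 5.0, 6.1, 8.6, 9.5])
n = len(x)

# Sums of squares as defined in Sections 10.2 and 10.4.
ss_xx = (x**2).sum() - x.sum()**2 / n        # 6.136
ss_yy = (y**2).sum() - y.sum()**2 / n        # 87.844
ss_xy = (x*y).sum() - x.sum()*y.sum() / n    # 21.298

beta1 = ss_xy / ss_xx                        # slope, about 3.47
beta0 = y.mean() - beta1 * x.mean()          # intercept, about 0.03
sse = ss_yy - beta1 * ss_xy                  # about 13.92
s_eps = np.sqrt(sse / (n - 2))               # about 1.32
r = ss_xy / np.sqrt(ss_xx * ss_yy)           # about 0.9174

# Test statistic for H0: beta_1 = 0, with df = n - 2 = 8 (about 6.518).
t = beta1 / (s_eps / np.sqrt(ss_xx))
print(beta1, beta0, sse, s_eps, r, t)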
In previous chapters you saw how to test hypotheses concerning population means and population proportions. The idea of testing hypotheses can be extended to many other situations that involve different parameters and use different test statistics. Whereas the standardized test statistics that appeared in earlier chapters followed either a normal or Student t-distribution, in this chapter the tests will involve two other very common and useful distributions, the chi-square and the F-distributions. The chi-square distribution arises in tests of hypotheses concerning the independence of two random variables and concerning whether a discrete random variable follows a specified distribution. The F-distribution arises in tests of hypotheses concerning whether or not two population variances are equal and concerning whether or not three or more population means are equal. • 11.1: Chi-Square Tests for Independence All the chi-square distributions form a family, and each of its members is also specified by a parameter df, the number of degrees of freedom. • 11.2: Chi-Square One-Sample Goodness-of-Fit Tests The chi-square goodness-of-fit test can be used to evaluate the hypothesis that a sample is taken from a population with an assumed specific probability distribution. • 11.3: F-tests for Equality of Two Variances Another important and useful family of distributions in statistics is the family of F-distributions. An F random variable is a random variable that assumes only positive values and follows an F-distribution. Each member of the F-distribution family is specified by a pair of parameters called degrees of freedom. An F-test can be used to evaluate the hypothesis of two identical normal population variances. • 11.4: F-Tests in One-Way ANOVA In this section we will learn to compare three or more population means at the same time, which is often of interest in practical applications. For example, an administrator at a university may be interested in knowing whether student grade point averages are the same for different majors. In another example, an oncologist may be interested in knowing whether patients with the same type of cancer have the same average survival times under several different competing cancer treatments. • 11.E: Chi-Square Tests and F-Tests (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 11: Chi-Square Tests and F-Tests Learning Objectives • To understand what chi-square distributions are. • To understand how to use a chi-square test to judge whether two factors are independent. Chi-Square Distributions As you know, there is a whole family of $t$-distributions, each one specified by a parameter called the degrees of freedom, denoted $df$. Similarly, all the chi-square distributions form a family, and each of its members is also specified by a parameter $df$, the number of degrees of freedom. Chi is a Greek letter denoted by the symbol $\chi$ and chi-square is often denoted by $\chi^2$. Figure $1$ shows several $\chi$-square distributions for different degrees of freedom. A chi-square random variable is a random variable that assumes only positive values and follows a $\chi$-square distribution. Definition: critical value The value of the chi-square random variable $\chi^2$ with $df=k$ that cuts off a right tail of area $c$ is denoted $\chi_c^2$ and is called a critical value (Figure $2$). 
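Critical values such as $\chi _c^{2}$ can be read from a table or computed with statistical software. As a minimal sketch, assuming SciPy is available: since $\chi _c^{2}$ cuts off a right tail of area $c$, it is the $(1-c)$ quantile of the chi-square distribution.

from scipy.stats import chi2

# chi-square critical value: chi2_c with df = k cuts off a right tail
# of area c, so it is the (1 - c) quantile of the distribution.
def chi_square_critical(c, df):
    return chi2.ppf(1 - c, df)

print(chi_square_critical(0.10, 1))   # 2.706, used later in this section
print(chi_square_critical(0.01, 2))   # 9.210, used in Example 1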
Figure $3$ below gives values of $\chi_c^2$ for various values of $c$ and under several chi-square distributions with various degrees of freedom. Tests for Independence Hypothesis tests encountered earlier in the book had to do with how the numerical values of two population parameters compared. In this subsection we will investigate hypotheses that have to do with whether or not two random variables take their values independently, or whether the value of one has a relation to the value of the other. Thus the hypotheses will be expressed in words, not mathematical symbols. We build the discussion around the following example. There is a theory that the gender of a baby in the womb is related to the baby’s heart rate: baby girls tend to have higher heart rates. Suppose we wish to test this theory. We examine the heart rate records of $40$ babies taken during their mothers’ last prenatal checkups before delivery, and for each of these $40$ randomly selected records we compute the values of two random measures: 1) gender and 2) heart rate. In this context these two random measures are often called factors. Since the burden of proof is that heart rate and gender are related, not that they are unrelated, the problem of testing the theory on baby gender and heart rate can be formulated as a test of the following hypotheses: $H_0: \text{Baby gender and baby heart rate are independent}\\ vs.\\ H_a: \text{Baby gender and baby heart rate are not independent} \nonumber$ The factor gender has two natural categories or levels: boy and girl. We divide the second factor, heart rate, into two levels, low and high, by choosing some heart rate, say $145$ beats per minute, as the cutoff between them. A heart rate below $145$ beats per minute will be considered low and $145$ and above considered high. The $40$ records give rise to a $2\times 2$ contingency table. By adjoining row totals, column totals, and a grand total we obtain the table shown as Table $1$. The four entries in boldface type are counts of observations from the sample of $n = 40$. There were $11$ girls with low heart rate, $17$ boys with low heart rate, and so on. They form the core of the expanded table. Table $1$: Baby Gender and Heart Rate Heart Rate $\text{Low}$ $\text{High}$ $\text{Row Total}$ $\text{Gender}$ $\text{Girl}$ $11$ $7$ $18$ $\text{Boy}$ $17$ $5$ $22$ $\text{Column Total}$ $28$ $12$ $\text{Total}=40$ In analogy with the fact that the probability that two independent events both occur is the product of their probabilities, if heart rate and gender were independent then we would expect the number in each core cell to be close to the product of the row total $R$ and column total $C$ of the row and column containing it, divided by the sample size $n$. Denoting such an expected number of observations $E$, these four expected values are: • 1st row and 1st column: $E=(R\times C)/n = 18\times 28 /40 = 12.6$ • 1st row and 2nd column: $E=(R\times C)/n = 18\times 12 /40 = 5.4$ • 2nd row and 1st column: $E=(R\times C)/n = 22\times 28 /40 = 15.4$ • 2nd row and 2nd column: $E=(R\times C)/n = 22\times 12 /40 = 6.6$ We update Table $1$ by placing each expected value in its corresponding core cell, right under the observed value in the cell. This gives the updated table Table $2$.
Table $2$: Updated Baby Gender and Heart Rate $\text{Heart Rate}$ $\text{Low}$ $\text{High}$ $\text{Row Total}$ $\text{Gender}$ $\text{Girl}$ $O=11$ $E=12.6$ $O=7$ $E=5.4$ $R = 18$ $\text{Boy}$ $O=17$ $E=15.4$ $O=5$ $E=6.6$ $R = 22$ $\text{Column Total}$ $C = 28$ $C = 12$ $n = 40$ A measure of how much the data deviate from what we would expect to see if the factors really were independent is the sum of the squares of the differences between the observed and expected numbers in each core cell, or, standardizing by dividing each square by the expected number in the cell, the sum $\sum (O-E)^2 / E$. We would reject the null hypothesis that the factors are independent only if this number is large, so the test is right-tailed. In this example the random variable $\sum (O-E)^2 / E$ has the chi-square distribution with one degree of freedom. If we had decided at the outset to test at the $10\%$ level of significance, the critical value defining the rejection region would be, reading from Figure $3$, $\chi _{\alpha }^{2}=\chi _{0.10}^{2}=2.706$, so that the rejection region would be the interval $[2.706,\infty )$. When we compute the value of the standardized test statistic we obtain $\sum \frac{(O-E)^2}{E}=\frac{(11-12.6)^2}{12.6}+\frac{(7-5.4)^2}{5.4}+\frac{(17-15.4)^2}{15.4}+\frac{(5-6.6)^2}{6.6}=1.231 \nonumber$ Since $1.231 < 2.706$, the decision is not to reject $H_0$. See Figure $4$. The data do not provide sufficient evidence, at the $10\%$ level of significance, to conclude that heart rate and gender are related. Figure $4$: Baby Gender Prediction. With this specific example in mind, now turn to the general situation. In the general setting of testing the independence of two factors, call them Factor $1$ and Factor $2$, the hypotheses to be tested are $H_0: \text{The two factors are independent}\\ vs.\\ H_a: \text{The two factors are not independent} \nonumber$ As in the example each factor is divided into a number of categories or levels. These could arise naturally, as in the boy-girl division of gender, or somewhat arbitrarily, as in the high-low division of heart rate. Suppose Factor $1$ has $I$ levels and Factor $2$ has $J$ levels. Then the information from a random sample gives rise to a general $I\times J$ contingency table, which with row totals, column totals, and a grand total would appear as shown in Table $3$. Each cell may be labeled by a pair of indices $(i,j)$. $O_{ij}$ stands for the observed count of observations in the cell in row $i$ and column $j$, $R_i$ for the $i^{th}$ row total and $C_j$ for the $j^{th}$ column total. To simplify the notation we will drop the indices so Table $3$ becomes Table $4$. Nevertheless it is important to keep in mind that the $Os$, the $Rs$ and the $Cs$, though denoted by the same symbols, are in fact different numbers.
Table $3$: General Contingency Table $\text{Factor 2 Levels}$ $1$  ⋅ ⋅ ⋅  $j$  ⋅ ⋅ ⋅  $J$ $\text{Row Total}$ $\text{Factor 1 Levels}$ $1$ $O_{11}$  ⋅ ⋅ ⋅  $O_{1j}$  ⋅ ⋅ ⋅  $O_{1J}$ $R_1$ $i$ $O_{i1}$  ⋅ ⋅ ⋅  $O_{ij}$  ⋅ ⋅ ⋅  $O_{iJ}$ $R_i$ $I$ $O_{I1}$  ⋅ ⋅ ⋅  $O_{Ij}$  ⋅ ⋅ ⋅  $O_{IJ}$ $R_I$ $\text{Column Total}$ $C_1$  ⋅ ⋅ ⋅  $C_j$  ⋅ ⋅ ⋅  $C_J$ $n$ Table $4$: Simplified General Contingency Table $\text{Factor 2 Levels}$ $1$  ⋅ ⋅ ⋅  $j$  ⋅ ⋅ ⋅  $J$ $\text{Row Total}$ $\text{Factor 1 Levels}$ $1$ $O$  ⋅ ⋅ ⋅  $O$  ⋅ ⋅ ⋅  $O$ $R$ $i$ $O$  ⋅ ⋅ ⋅  $O$  ⋅ ⋅ ⋅  $O$ $R$ $I$ $O$  ⋅ ⋅ ⋅  $O$  ⋅ ⋅ ⋅  $O$ $R$ $\text{Column Total}$ $C$  ⋅ ⋅ ⋅  $C$  ⋅ ⋅ ⋅  $C$ $n$ As in the example, for each core cell in the table we compute what would be the expected number $E$ of observations if the two factors were independent. $E$ is computed for each core cell (each cell with an $O$ in it) of Table $4$ by the rule applied in the example: $E=\dfrac{R\times C}{n} \nonumber$ where $R$ is the row total and $C$ is the column total corresponding to the cell, and $n$ is the sample size. After the expected number is computed for every cell, Table $4$ is updated to form Table $5$ by inserting the computed value of $E$ into each core cell. Table $5$: Updated General Contingency Table $\text{Factor 2 Levels}$ $1$  ⋅ ⋅ ⋅  $j$  ⋅ ⋅ ⋅  $J$ $\text{Row Total}$ $\text{Factor 1 Levels}$ $1$ $O$ $E$ ⋅ ⋅ ⋅ $O$ $E$ ⋅ ⋅ ⋅ $O$ $E$ $R$ $i$ $O$ $E$ ⋅ ⋅ ⋅ $O$ $E$ ⋅ ⋅ ⋅ $O$ $E$ $R$ $I$ $O$ $E$ ⋅ ⋅ ⋅ $O$ $E$ ⋅ ⋅ ⋅ $O$ $E$ $R$ $\text{Column Total}$ $C$  ⋅ ⋅ ⋅  $C$  ⋅ ⋅ ⋅  $C$ $n$ Here is the test statistic for the general hypothesis based on Table $5$, together with the conditions that it follow a chi-square distribution. Test Statistic for Testing the Independence of Two Factors $\chi^2=\sum \frac{(O-E)^2}{E} \nonumber$ where the sum is over all core cells of the table. If 1. the two study factors are independent, and 2. the expected count $E$ of each cell in Table $5$ is at least $5$, then $\chi ^2$ approximately follows a chi-square distribution with $df=(I-1)\times (J-1)$ degrees of freedom. The same five-step procedures, either the critical value approach or the $p$-value approach, that were introduced in Section 8.1 and Section 8.3 are used to perform the test, which is always right-tailed. Example $1$ A researcher wishes to investigate whether students’ scores on a college entrance examination ($CEE$) have any indicative power for future college performance as measured by $GPA$. In other words, he wishes to investigate whether the factors $CEE$ and $GPA$ are independent or not. He randomly selects $n = 100$ students in a college and notes each student’s score on the entrance examination and his grade point average at the end of the sophomore year. He divides entrance exam scores into two levels and grade point averages into three levels. Sorting the data according to these divisions, he forms the contingency table shown as Table $6$, in which the row and column totals have already been computed. Table $6$: $CEE$ versus $GPA$ Contingency Table $GPA$ $<2.7$ $2.7\; \; \text{to}\; \; 3.2$ $>3.2$ $\text{Row Total}$ $CEE$ $<1800$ $35$ $12$ $5$ $52$ $\geq 1800$ $6$ $24$ $18$ $48$ $\text{Column Total}$ $41$ $36$ $23$ $\text{Total}=100$ Test, at the $1\%$ level of significance, whether these data provide sufficient evidence to conclude that $CEE$ scores indicate future performance levels of incoming college freshmen as measured by $GPA$.
Solution We perform the test using the critical value approach, following the usual five-step method outlined at the end of Section 8.1. • Step 1. The hypotheses are $H_0:\text{CEE and GPA are independent factors}\\ vs.\\ H_a:\text{CEE and GPA are not independent factors} \nonumber$ • Step 2. The distribution is chi-square. • Step 3. To compute the value of the test statistic we must first compute the expected number for each of the six core cells (the ones whose entries are boldface): • 1st row and 1st column: $E=(R\times C)/n=41\times 52/100=21.32$ • 1st row and 2nd column: $E=(R\times C)/n=36\times 52/100=18.72$ • 1st row and 3rd column: $E=(R\times C)/n=23\times 52/100=11.96$ • 2nd row and 1st column: $E=(R\times C)/n=41\times 48/100=19.68$ • 2nd row and 2nd column: $E=(R\times C)/n=36\times 48/100=17.28$ • 2nd row and 3rd column: $E=(R\times C)/n=23\times 48/100=11.04$ Table $6$ is updated to Table $7$. Table $7$: Updated CEE versus GPA Contingency Table $GPA$ $<2.7$ $2.7\; \; \text{to}\; \; 3.2$ $>3.2$ $\text{Row Total}$ $CEE$ $<1800$ $O=35$ $E=21.32$ $O=12$ $E=18.72$ $O=5$ $E=11.96$ $R = 52$ $\geq 1800$ $O=6$ $E=19.68$ $O=24$ $E=17.28$ $O=18$ $E=11.04$ $R = 48$ $\text{Column Total}$ $C = 41$ $C = 36$ $C = 23$ $n = 100$ The test statistic is \begin{align*} \chi^2 &= \sum \frac{(O-E)^2}{E}\\ &= \frac{(35-21.32)^2}{21.32}+\frac{(12-18.72)^2}{18.72}+\frac{(5-11.96)^2}{11.96}+\frac{(6-19.68)^2}{19.68}+\frac{(24-17.28)^2}{17.28}+\frac{(18-11.04)^2}{11.04}\\ &= 31.75 \end{align*} \nonumber • Step 4. Since the $CEE$ factor has two levels and the $GPA$ factor has three, $I = 2$ and $J = 3$. Thus the test statistic follows the chi-square distribution with $df=(2-1)\times (3-1)=2$ degrees of freedom. Since the test is right-tailed, the critical value is $\chi _{0.01}^{2}$. Reading from Figure 7.1.6 "Critical Values of Chi-Square Distributions", $\chi _{0.01}^{2}=9.210$, so the rejection region is $[9.210,\infty )$. • Step 5. Since $31.75 > 9.21$ the decision is to reject the null hypothesis. See Figure $5$. The data provide sufficient evidence, at the $1\%$ level of significance, to conclude that $CEE$ score and $GPA$ are not independent: the entrance exam score has predictive power. Key Takeaway • Critical values of a chi-square distribution with degrees of freedom df are found in Figure 7.1.6. • A chi-square test can be used to evaluate the hypothesis that two random variables or factors are independent.
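The computations in this section can also be checked with software. The following is a minimal sketch, assuming SciPy is available; scipy.stats.chi2_contingency forms the same expected counts $E=(R\times C)/n$ and the same statistic $\sum (O-E)^2/E$. For a $2\times 2$ table such as the baby gender example, correction=False must be passed to suppress Yates' continuity correction, which this chapter does not use.

import numpy as np
from scipy.stats import chi2_contingency

# Observed counts from Example 1: rows are CEE levels (<1800, >=1800),
# columns are GPA levels (<2.7, 2.7 to 3.2, >3.2).
cee_gpa = np.array([[35, 12, 5],
                    [6, 24, 18]])
stat, p, df, expected = chi2_contingency(cee_gpa)
print(stat, df)   # about 31.75 with df = 2; the p-value is far below 0.01

# The 2x2 baby gender and heart rate table from earlier in the section.
# correction=False makes the statistic match the hand computation
# (about 1.231 with df = 1).
babies = np.array([[11, 7],
                   [17, 5]])
stat, p, df, expected = chi2_contingency(babies, correction=False)
print(stat, df)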
Learning Objectives • To understand how to use a chi-square test to judge whether a sample fits a particular population well. Suppose we wish to determine if an ordinary-looking six-sided die is fair, or balanced, meaning that every face has probability $1/6$ of landing on top when the die is tossed. We could toss the die dozens, maybe hundreds, of times and compare the actual number of times each face landed on top to the expected number, which would be $1/6$ of the total number of tosses. We wouldn’t expect each number to be exactly $1/6$ of the total, but it should be close. To be specific, suppose the die is tossed $n=60$ times with the results summarized in Table $1$. For ease of reference we add a column of expected frequencies, which in this simple example is simply a column of $10s$. The result is shown as Table $2$. In analogy with the previous section we call this an “updated” table. A measure of how much the data deviate from what we would expect to see if the die really were fair is the sum of the squares of the differences between the observed frequency $O$ and the expected frequency $E$ in each row, or, standardizing by dividing each square by the expected number, the sum $\sum \frac{(O-E)^2}{E} \nonumber$ If we formulate the investigation as a test of hypotheses, the test is $H_0: \text{The die is fair}\\ vs.\\ H_a: \text{The die is not fair} \nonumber$ Table $1$: Die Contingency Table Die Value Assumed Distribution Observed Frequency $1$ $1/6$ $9$ $2$ $1/6$ $15$ $3$ $1/6$ $9$ $4$ $1/6$ $8$ $5$ $1/6$ $6$ $6$ $1/6$ $13$ Table $2$: Updated Die Contingency Table Die Value Assumed Distribution Observed Freq. Expected Freq. $1$ $1/6$ $9$ $10$ $2$ $1/6$ $15$ $10$ $3$ $1/6$ $9$ $10$ $4$ $1/6$ $8$ $10$ $5$ $1/6$ $6$ $10$ $6$ $1/6$ $13$ $10$ We would reject the null hypothesis that the die is fair only if the number $\sum \frac{(O-E)^2}{E}$ is large, so the test is right-tailed. In this example the random variable $\sum \frac{(O-E)^2}{E}$ has the chi-square distribution with five degrees of freedom. If we had decided at the outset to test at the $10\%$ level of significance, the critical value defining the rejection region would be, reading from Figure 7.1.6, $\chi _{\alpha }^{2}=\chi _{0.10}^{2}=9.236$, so that the rejection region would be the interval $[9.236,\infty )$. When we compute the value of the standardized test statistic using the numbers in the last two columns of Table $2$, we obtain \begin{align*} \sum \frac{(O-E)^2}{E} &= \frac{(-1)^2}{10}+\frac{(5)^2}{10}+\frac{(-1)^2}{10}+\frac{(-2)^2}{10}+\frac{(-4)^2}{10}+\frac{(3)^2}{10}\\ &= 0.1+2.5+0.1+0.4+1.6+0.9\\ &= 5.6 \end{align*} \nonumber Since $5.6<9.236$ the decision is not to reject $H_0$. See Figure $1$. The data do not provide sufficient evidence, at the $10\%$ level of significance, to conclude that the die is loaded. In the general situation we consider a discrete random variable that can take $I$ different values, $x_1,\: x_2,\cdots ,x_I$, for which the default assumption is that the probability distribution is $\begin{array}{c|c c c c} x & x_1 & x_2 & \cdots & x_I \\ \hline P(x) &p_1 &p_2 &\cdots &p_I \end{array} \nonumber$ We wish to test the hypotheses: $H_0: \text{The assumed probability distribution for X is valid}\\ vs.\\ H_a: \text{The assumed probability distribution for X is not valid} \nonumber$ We take a sample of size $n$ and obtain a list of observed frequencies. This is shown in Table $3$.
Based on the assumed probability distribution we also have a list of expected frequencies, each of which is defined and computed by the formula $E_i=n\times p_i \nonumber$ Table $3$: General Contingency Table Factor Levels Assumed Distribution Observed Frequency $1$ $p_1$ $O_1$ $2$ $p_2$ $O_2$ $\vdots$ $\vdots$ $\vdots$ $I$ $p_I$ $O_I$ Table $3$ is updated to Table $4$ by adding the expected frequency for each value of $X$. To simplify the notation we drop indices for the observed and expected frequencies and represent Table $4$ by Table $5$. Table $4$: Updated General Contingency Table Factor Levels Assumed Distribution Observed Freq. Expected Freq. $1$ $p_1$ $O_1$ $E_1$ $2$ $p_2$ $O_2$ $E_2$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $I$ $p_I$ $O_I$ $E_I$ Table $5$: Simplified Updated General Contingency Table Factor Levels Assumed Distribution Observed Freq. Expected Freq. $1$ $p_1$ $O$ $E$ $2$ $p_2$ $O$ $E$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $I$ $p_I$ $O$ $E$ Here is the test statistic for the general hypothesis based on Table $5$, together with the conditions that it follow a chi-square distribution. Test Statistic for Testing Goodness of Fit to a Discrete Probability Distribution $\chi ^2 =\sum \frac{(O-E)^2}{E} \nonumber$ where the sum is over all the rows of the table (one for each value of $X$). If 1. the true probability distribution of $X$ is as assumed, and 2. the expected count $E$ of each cell in Table $5$ is at least $5$, then $\chi ^2$ approximately follows a chi-square distribution with $df=I-1$ degrees of freedom. The test is known as a goodness-of-fit $\chi ^2$ test since it tests the null hypothesis that the sample fits the assumed probability distribution well. It is always right-tailed, since deviation from the assumed probability distribution corresponds to large values of $\chi ^2$. Testing is done using either of the usual five-step procedures. Example $1$ Table $6$ shows the distribution of various ethnic groups in the population of a particular state based on a decennial U.S. census. Five years later a random sample of $2,500$ residents of the state was taken, with the results given in Table $7$ (along with the probability distribution from the census year). Test, at the $1\%$ level of significance, whether there is sufficient evidence in the sample to conclude that the distribution of ethnic groups in this state five years after the census had changed from that in the census year. Table $6$: Ethnic Groups in the Census Year Ethnicity White Black Amer.-Indian Hispanic Asian Others Proportion $0.743$ $0.216$ $0.012$ $0.012$ $0.008$ $0.009$ Table $7$: Sample Data Five Years After the Census Year Ethnicity Assumed Distribution Observed Frequency White $0.743$ $1732$ Black $0.216$ $538$ American-Indian $0.012$ $32$ Hispanic $0.012$ $42$ Asian $0.008$ $133$ Others $0.009$ $23$ Solution We test using the critical value approach. • Step 1. The hypotheses of interest in this case can be expressed as $H_0: \text{The distribution of ethnic groups has not changed}\\ vs.\\ H_a: \text{The distribution of ethnic groups has changed} \nonumber$ • Step 2. The distribution is chi-square. • Step 3. To compute the value of the test statistic we must first compute the expected number for each row of Table $7$. Since $n=2500$, using the formula $E_i=n\times p_i$ and the values of $p_i$ from either Table $6$ or Table $7$, $E_1=2500\times 0.743=1857.5,\; E_2=2500\times 0.216=540,\; E_3=2500\times 0.012=30,\; E_4=2500\times 0.012=30,\; E_5=2500\times 0.008=20,\; E_6=2500\times 0.009=22.5 \nonumber$ Table $7$ is updated to Table $8$.
Table $8$: Observed and Expected Frequencies Five Years After the Census Year Ethnicity Assumed Dist. Observed Freq. Expected Freq. White $0.743$ $1732$ $1857.5$ Black $0.216$ $538$ $540$ American-Indian $0.012$ $32$ $30$ Hispanic $0.012$ $42$ $30$ Asian $0.008$ $133$ $20$ Others $0.009$ $23$ $22.5$ The value of the test statistic is \begin{align*} \chi ^2 &= \sum \frac{(O-E)^2}{E}\ &= \frac{(1732-1857.5)^2}{1857.5}+\frac{(538-540)^2}{540}+\frac{(32-30)^2}{30}+\frac{(42-30)^2}{30}+\frac{(133-20)^2}{20}+\frac{(23-22.5)^2}{22.5}\ &= 651.881 \end{align*} \nonumber Since the random variable takes six values, $I=6$. Thus the test statistic follows the chi-square distribution with $df=6-1=5$ degrees of freedom. Since the test is right-tailed, the critical value is $\chi _{0.01}^{2}$. Reading from Figure 7.1.6, $\chi _{0.01}^{2}=15.086$, so the rejection region is $[15.086,\infty )$. Since $651.881>15.086$ the decision is to reject the null hypothesis. See Figure $2$. The data provide sufficient evidence, at the $1\%$ level of significance, to conclude that the ethnic distribution in this state has changed in the five years since the U.S. census. Key Takeaway • The chi-square goodness-of-fit test can be used to evaluate the hypothesis that a sample is taken from a population with an assumed specific probability distribution.
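The arithmetic in both of the worked computations above is easy to check with statistical software. The following sketch assumes the SciPy library is available (our assumption, not part of the text); chisquare computes the goodness-of-fit statistic and chi2.ppf returns the chi-square critical values that the text reads from Figure 7.1.6.

# A software check of the die example and Example 1, assuming SciPy.
from scipy.stats import chisquare, chi2

# Die example: observed counts for faces 1 through 6; each expected count is n*p = 60*(1/6) = 10.
observed = [9, 15, 9, 8, 6, 13]
expected = [10] * 6
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(stat)                  # 5.6, as computed by hand
print(chi2.ppf(0.90, df=5))  # about 9.236, the critical value at the 10% level

# Example 1 (census): expected counts E_i = n * p_i with n = 2500.
p_i = [0.743, 0.216, 0.012, 0.012, 0.008, 0.009]
obs = [1732, 538, 32, 42, 133, 23]
exp = [2500 * p for p in p_i]
stat, p = chisquare(f_obs=obs, f_exp=exp)
print(stat)                  # about 651.88
print(chi2.ppf(0.99, df=5))  # about 15.086, the critical value at the 1% level

In both cases the returned p-value leads to the same decision as the critical value approach used in the text.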
Learning Objectives • To understand what $F$-distributions are. • To understand how to use an $F$-test to judge whether two population variances are equal. $F$-Distributions Another important and useful family of distributions in statistics is the family of $F$-distributions. Each member of the $F$-distribution family is specified by a pair of parameters called degrees of freedom and denoted $df_1$ and $df_2$. Figure $1$ shows several $F$-distributions for different pairs of degrees of freedom. An $F$ random variable is a random variable that assumes only positive values and follows an $F$-distribution. The parameter $df_1$ is often referred to as the numerator degrees of freedom and the parameter $df_2$ as the denominator degrees of freedom. It is important to keep in mind that they are not interchangeable. For example, the $F$-distribution with degrees of freedom $df_1=3$ and $df_2=8$ is a different distribution from the $F$-distribution with degrees of freedom $df_1=8$ and $df_2=3$. Definition: critical value The value of an $F$ random variable with degrees of freedom $df_1$ and $df_2$ that cuts off a right tail of area $c$ is denoted $F_c$ and is called a critical value (Figure $2$). Tables containing the values of $F_c$ are given in Chapter 11. Each of the tables is for a fixed collection of values of $c$, either $0.900,\; 0.950,\; 0.975,\; 0.990,\; \text{and}\; 0.995$ (yielding what are called “lower” critical values), or $0.005,\; 0.010,\; 0.025,\; 0.050,\; \text{and}\; 0.100$ (yielding what are called “upper” critical values). In each table critical values are given for various pairs $(df_1,\: df_2)$. We illustrate the use of the tables with several examples. Example $1$: An $F$ random variable Suppose $F$ is an $F$ random variable with degrees of freedom $df_1=5$ and $df_2=4$. Use the tables to find 1. $F_{0.10}$ 2. $F_{0.95}$ Solution 1. The column headings of all the tables contain $df_1=5$. Look for the table for which $0.10$ is one of the entries on the extreme left (a table of upper critical values) and that has a row heading $df_2=4$ in the left margin of the table. A portion of the relevant table is provided. The entry in the intersection of the column with heading $df_1=5$ and the row with the headings $0.10$ and $df_2=4$, which is shaded in the table provided, is the answer, $F_{0.10}=4.05$. $F$ Tail Area $\frac{df_1}{df_2}$ $1$ $2$ $\cdots$ $5$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $0.005$ $4$ $\cdots$ $\cdots$ $\cdots$ $22.5$ $\cdots$ $0.01$ $4$ $\cdots$ $\cdots$ $\cdots$ $15.5$ $\cdots$ $0.025$ $4$ $\cdots$ $\cdots$ $\cdots$ $9.36$ $\cdots$ $0.05$ $4$ $\cdots$ $\cdots$ $\cdots$ $6.26$ $\cdots$ $0.10$ $4$ $\cdots$ $\cdots$ $\cdots$ $4.05$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ 2. Look for the table for which $0.95$ is one of the entries on the extreme left (a table of lower critical values) and that has a row heading $df_2=4$ in the left margin of the table. A portion of the relevant table is provided. The entry in the intersection of the column with heading $df_1=5$ and the row with the headings $0.95$ and $df_2=4$, which is shaded in the table provided, is the answer, $F_{0.95}=0.19$.
$F$ Tail Area $\frac{df_1}{df_2}$ $1$ $2$ $\cdots$ $5$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $0.90$ $4$ $\cdots$ $\cdots$ $\cdots$ $0.28$ $\cdots$ $0.95$ $4$ $\cdots$ $\cdots$ $\cdots$ $0.19$ $\cdots$ $0.975$ $4$ $\cdots$ $\cdots$ $\cdots$ $0.14$ $\cdots$ $0.99$ $4$ $\cdots$ $\cdots$ $\cdots$ $0.09$ $\cdots$ $0.995$ $4$ $\cdots$ $\cdots$ $\cdots$ $0.06$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ Example $2$ Suppose $F$ is an $F$ random variable with degrees of freedom $df_1=2$ and $df_2=20$. Let $\alpha =0.05$. Use the tables to find 1. $F_{\alpha }$ 2. $F_{\alpha /2}$ 3. $F_{1-\alpha }$ 4. $F_{1-\alpha /2}$ Solution 1. The column headings of all the tables contain $df_1=2$. Look for the table for which $\alpha =0.05$ is one of the entries on the extreme left (a table of upper critical values) and that has a row heading $df_2=20$ in the left margin of the table. A portion of the relevant table is provided. The shaded entry, in the intersection of the column with heading $df_1=2$ and the row with the headings $0.05$ and $df_2=20$ is the answer, $F_{0.05}=3.49$. $F$ Tail Area $\frac{df_1}{df_2}$ $1$ $2$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $0.005$ $20$ $\cdots$ $6.99$ $\cdots$ $0.01$ $20$ $\cdots$ $5.85$ $\cdots$ $0.025$ $20$ $\cdots$ $4.46$ $\cdots$ $0.05$ $20$ $\cdots$ $3.49$ $\cdots$ $0.10$ $20$ $\cdots$ $2.59$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ 2. Look for the table for which $\alpha /2=0.025$ is one of the entries on the extreme left (a table of upper critical values) and that has a row heading $df_2=20$ in the left margin of the table. A portion of the relevant table is provided. The shaded entry, in the intersection of the column with heading $df_1=2$ and the row with the headings $0.025$ and $df_2=20$ is the answer, $F_{0.025}=4.46$. $F$ Tail Area $\frac{df_1}{df_2}$ $1$ $2$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $0.005$ $20$ $\cdots$ $6.99$ $\cdots$ $0.01$ $20$ $\cdots$ $5.85$ $\cdots$ $0.025$ $20$ $\cdots$ $4.46$ $\cdots$ $0.05$ $20$ $\cdots$ $3.49$ $\cdots$ $0.10$ $20$ $\cdots$ $2.59$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ 3. Look for the table for which $1-\alpha =0.95$ is one of the entries on the extreme left (a table of lower critical values) and that has a row heading $df_2=20$ in the left margin of the table. A portion of the relevant table is provided. The shaded entry, in the intersection of the column with heading $df_1=2$ and the row with the headings $0.95$ and $df_2=20$ is the answer, $F_{0.95}=0.05$. $F$ Tail Area $\frac{df_1}{df_2}$ $1$ $2$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $0.90$ $20$ $\cdots$ $0.11$ $\cdots$ $0.95$ $20$ $\cdots$ $0.05$ $\cdots$ $0.975$ $20$ $\cdots$ $0.03$ $\cdots$ $0.99$ $20$ $\cdots$ $0.01$ $\cdots$ $0.995$ $20$ $\cdots$ $0.01$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ 4. Look for the table for which $1-\alpha /2=0.975$ is one of the entries on the extreme left (a table of lower critical values) and that has a row heading $df_2=20$ in the left margin of the table. A portion of the relevant table is provided. The shaded entry, in the intersection of the column with heading $df_1=2$ and the row with the headings $0.975$ and $df_2=20$ is the answer, $F_{0.975}=0.03$.
$F$ Tail Area $\frac{df_1}{df_2}$ $1$ $2$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $0.90$ $20$ $\cdots$ $0.11$ $\cdots$ $0.95$ $20$ $\cdots$ $0.05$ $\cdots$ $0.975$ $20$ $\cdots$ $0.03$ $\cdots$ $0.99$ $20$ $\cdots$ $0.01$ $\cdots$ $0.995$ $20$ $\cdots$ $0.01$ $\cdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ A fact that sometimes allows us to find a critical value from a table that we could not read otherwise is: If $F_u(r,s)$ denotes the value of the $F$-distribution with degrees of freedom $df_1=r$ and $df_2=s$ that cuts off a right tail of area $u$, then $F_c(k,l)=\frac{1}{F_{1-c}(l,k)} \nonumber$ Example $3$ Use the tables to find 1. $F_{0.01}$ for an $F$ random variable with $df_1=13$ and $df_2=8$. 2. $F_{0.975}$ for an $F$ random variable with $df_1=40$ and $df_2=10$. Solution 1. There is no table with $df_1=13$, but there is one with $df_1=8$. Thus we use the fact that $F_{0.01}(13,8)=\frac{1}{F_{0.99}(8,13)} \nonumber$ Using the relevant table we find that $F_{0.99}(8,13)=0.18$, hence $F_{0.01}(13,8)=0.18^{-1}=5.556$. 2. There is no table with $df_1=40$, but there is one with $df_1=10$. Thus we use the fact that $F_{0.975}(40,10)=\frac{1}{F_{0.025}(10,40)} \nonumber$ Using the relevant table we find that $F_{0.025}(10,40)=3.31$, hence $F_{0.975}(40,10)=3.31^{-1}=0.302$. $F$-Tests for Equality of Two Variances In Chapter 9 we saw how to test hypotheses about the difference between two population means $μ_1$ and $μ_2$. In some practical situations the difference between the population standard deviations $σ_1$ and $σ_2$ is also of interest. Standard deviation measures the variability of a random variable. For example, if the random variable measures the size of a machined part in a manufacturing process, the size of the standard deviation is one indicator of product quality. A smaller standard deviation among items produced in the manufacturing process is desirable since it indicates consistency in product quality. For theoretical reasons it is easier to compare the squares of the population standard deviations, the population variances $\sigma _{1}^{2}$ and $\sigma _{2}^{2}$. This is not a problem, since $σ_1=σ_2$ precisely when $\sigma _{1}^{2}=\sigma _{2}^{2}$, $σ_1<σ_2$ precisely when $\sigma _{1}^{2}<\sigma _{2}^{2}$, and $σ_1>σ_2$ precisely when $\sigma _{1}^{2}>\sigma _{2}^{2}$. The null hypothesis always has the form $H_0: \sigma _{1}^{2}=\sigma _{2}^{2}$. The three forms of the alternative hypothesis, with the terminology for each case, are: Form of $H_a$ Terminology $H_a: \sigma _{1}^{2}>\sigma _{2}^{2}$ Right-tailed $H_a: \sigma _{1}^{2}<\sigma _{2}^{2}$ Left-tailed $H_a: \sigma _{1}^{2}\neq \sigma _{2}^{2}$ Two-tailed Just as when we test hypotheses concerning two population means, we take a random sample from each population, of sizes $n_1$ and $n_2$, and compute the sample standard deviations $s_1$ and $s_2$. In this context the samples are always independent. The populations themselves must be normally distributed. Test Statistic for Hypothesis Tests Concerning the Difference Between Two Population Variances $F=\frac{s_{1}^{2}}{s_{2}^{2}} \nonumber$ If the two populations are normally distributed and if $H_0: \sigma _{1}^{2}=\sigma _{2}^{2}$ is true then under independent sampling $F$ approximately follows an $F$-distribution with degrees of freedom $df_1=n_1-1$ and $df_2=n_2-1$. A test based on the test statistic $F$ is called an $F$-test.
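Readers with software can reproduce the table lookups in Examples $1$ through $3$ directly. The sketch below assumes SciPy (an assumption on our part, not something the text requires); since $F_c$ cuts off a right tail of area $c$, it equals the $(1-c)$ quantile returned by f.ppf.

# Critical values of the F-distribution, assuming SciPy.
from scipy.stats import f

def F_crit(c, df1, df2):
    # The value cutting off a right tail of area c.
    return f.ppf(1 - c, df1, df2)

print(F_crit(0.10, 5, 4))    # about 4.05 (Example 1, part 1)
print(F_crit(0.95, 5, 4))    # about 0.19 (Example 1, part 2)
print(F_crit(0.05, 2, 20))   # about 3.49 (Example 2, part 1)

# The reciprocal fact F_c(k,l) = 1/F_{1-c}(l,k) used in Example 3:
print(F_crit(0.01, 13, 8))       # about 5.56
print(1 / F_crit(0.99, 8, 13))   # the same value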
A most important point is that while the rejection region for a right-tailed test is exactly as in every other situation that we have encountered, because of the asymmetry in the $F$-distribution the critical value for a left-tailed test and the lower critical value for a two-tailed test have the special forms shown in the following table: Terminology Alternative Hypothesis Rejection Region Right-tailed $H_a: \sigma _{1}^{2}>\sigma _{2}^{2}$ $F\geq F_\alpha$ Left-tailed $H_a: \sigma _{1}^{2}<\sigma _{2}^{2}$ $F\leq F_{1-\alpha }$ Two-tailed $H_a: \sigma _{1}^{2}\neq \sigma _{2}^{2}$ $F\leq F_{1-\alpha /2}\; \text{or}\; F\geq F_{\alpha /2}$ Figure $3$ illustrates these rejection regions. The test is performed using the usual five-step procedure described at the end of Section 8.1. Example $4$ One of the quality measures of blood glucose meter strips is the consistency of the test results on the same sample of blood. The consistency is measured by the variance of the readings in repeated testing. Suppose two types of strips, $A$ and $B$, are compared for their respective consistencies. We arbitrarily label the population of Type $A$ strips Population $1$ and the population of Type $B$ strips Population $2$. Suppose $16$ Type $A$ strips were tested with blood drops from a well-shaken vial and $21$ Type $B$ strips were tested with the blood from the same vial. The results are summarized in Table $3$. Assume the glucose readings using Type $A$ strips follow a normal distribution with variance $\sigma _{1}^{2}$ and those using Type $B$ strips follow a normal distribution with variance $\sigma _{2}^{2}$. Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that the consistencies of the two types of strips are different. Table $3$: Two Types of Test Strips Strip Type Sample Size Sample Variance $A$ $n_1=16$ $s_{1}^{2}=2.09$ $B$ $n_2=21$ $s_{2}^{2}=1.10$ Solution • Step 1. The test of hypotheses is $H_0: \sigma _{1}^{2}=\sigma _{2}^{2}\ vs.\ H_a: \sigma _{1}^{2}\neq \sigma _{2}^{2}\; @\; \alpha =0.10 \nonumber$ • Step 2. The distribution is the $F$-distribution with degrees of freedom $df_1=16-1=15$ and $df_2=21-1=20$. • Step 3. The test is two-tailed. The left or lower critical value is $F_{1-\alpha /2}=F_{0.95}=0.43$. The right or upper critical value is $F_{\alpha /2}=F_{0.05}=2.20$. Thus the rejection region is $[0,0.43]\cup [2.20,\infty )$, as illustrated in Figure $4$. • Step 4. The value of the test statistic is $F=\frac{s_{1}^{2}}{s_{2}^{2}}=\frac{2.09}{1.10}=1.90 \nonumber$ • Step 5. As shown in Figure $4$, the test statistic $1.90$ does not lie in the rejection region, so the decision is not to reject $H_0$. The data do not provide sufficient evidence, at the $10\%$ level of significance, to conclude that there is a difference in the consistency, as measured by the variance, of the two types of test strips. Example $5$ In the context of "Example $4$", suppose Type $A$ test strips are the current market leader and Type $B$ test strips are a newly improved version of Type $A$. Test, at the $10\%$ level of significance, whether the data given in Table $3$ provide sufficient evidence to conclude that Type $B$ test strips have better consistency (lower variance) than Type $A$ test strips. Solution • Step 1. The test of hypotheses is now $H_0: \sigma _{1}^{2}=\sigma _{2}^{2}\ vs.\ H_a: \sigma _{1}^{2}>\sigma _{2}^{2}\; @\; \alpha =0.10 \nonumber$ • Step 2.
The distribution is the $F$-distribution with degrees of freedom $df_1=16-1=15$ and $df_2=21-1=20$. • Step 3. The value of the test statistic is $F=\frac{s_{1}^{2}}{s_{2}^{2}}=\frac{2.09}{1.10}=1.90 \nonumber$ • Step 4. The test is right-tailed. The single critical value is $F_\alpha =F_{0.10}=1.84$. Thus the rejection region is $[1.84,\infty )$, as illustrated in Figure $5$. Figure $5$: Rejection Region and Test Statistic for "Example $5$" • Step 5. As shown in Figure $5$, the test statistic $1.90$ lies in the rejection region, so the decision is to reject $H_0$. The data provide sufficient evidence, at the $10\%$ level of significance, to conclude that Type $B$ test strips have better consistency (lower variance) than Type $A$ test strips do. Key Takeaway • Critical values of an $F$-distribution with degrees of freedom $df_1$ and $df_2$ are found in tables above. • An $F$-test can be used to evaluate the hypothesis of two identical normal population variances.
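The two-variance $F$-tests of Examples $4$ and $5$ can also be carried out in software. Here is a minimal sketch, assuming SciPy, built from the summary statistics in Table $3$:

# F-test for equality of two variances, assuming SciPy.
from scipy.stats import f

s1_sq, n1 = 2.09, 16   # Type A strips
s2_sq, n2 = 1.10, 21   # Type B strips
F = s1_sq / s2_sq             # test statistic, about 1.90
df1, df2 = n1 - 1, n2 - 1     # 15 and 20
alpha = 0.10

# Example 4: two-tailed test.
lower = f.ppf(alpha / 2, df1, df2)       # about 0.43
upper = f.ppf(1 - alpha / 2, df1, df2)   # about 2.20
print(F <= lower or F >= upper)          # False: do not reject H0

# Example 5: right-tailed test.
print(F >= f.ppf(1 - alpha, df1, df2))   # True: reject H0 (critical value about 1.84)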
Learning Objectives • To understand how to use an $F$-test to judge whether several population means are all equal In Chapter 9, we saw how to compare two population means $\mu _1$ and $\mu _2$. In this section we will learn to compare three or more population means at the same time, which is often of interest in practical applications. For example, an administrator at a university may be interested in knowing whether student grade point averages are the same for different majors. In another example, an oncologist may be interested in knowing whether patients with the same type of cancer have the same average survival times under several different competing cancer treatments. In general, suppose there are $K$ normal populations with possibly different means, $μ_1 , μ_2 , \ldots, μ_K$, but all with the same variance $σ^2$. The study question is whether all the $K$ population means are the same. We formulate this question as the test of hypotheses $H_0: \mu _1=\mu _2=\cdots =\mu _K\ vs.\ H_a: \text{not all K population means are equal} \nonumber$ To perform the test $K$ independent random samples are taken from the $K$ normal populations. The $K$ sample means, the $K$ sample variances, and the $K$ sample sizes are summarized in the table: Population Sample Size Sample Mean Sample Variance $1$ $n_1$ $\bar{x_1}$ $s_{1}^{2}$ $2$ $n_2$ $\bar{x_2}$ $s_{2}^{2}$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $K$ $n_K$ $\bar{x_K}$ $s_{K}^{2}$ Define the following quantities: Definitions The combined sample size: $n=n_1+n_2+ \ldots + n_K \nonumber$ The mean of the combined sample of all $n$ observations: $\overline{x}= \dfrac{\displaystyle \sum x}{n} = \dfrac{n_1 \overline{x}_1 + n_2 \overline{x}_2 + \ldots + n_K \overline{x}_K}{n} \nonumber$ The mean square for treatment: $MST=\dfrac{n_1(\overline{x}_1-\overline{x})^2 + n_2(\overline{x}_2-\overline{x})^2 + \ldots + n_K (\overline{x}_K-\overline{x})^2}{K-1} \nonumber$ The mean square for error: $MSE= \dfrac{(n_1-1)s^2_1 + (n_2-1)s^2_2 + \ldots + (n_K-1)s^2_K}{n-K} \nonumber$ $MST$ can be thought of as the variance between the $K$ individual independent random samples and $MSE$ as the variance within the samples. This is the reason for the name “analysis of variance,” universally abbreviated ANOVA. The adjective “one-way” has to do with the fact that the sampling scheme is the simplest possible, that of taking one random sample from each population under consideration. If the means of the $K$ populations are all the same then the two quantities $MST$ and $MSE$ should be close to the same, so the null hypothesis will be rejected if the ratio of these two quantities is significantly greater than $1$. This yields the following test statistic and methods and conditions for its use. Test Statistic for Testing the Null Hypothesis that $K$ Population Means Are Equal $F =\dfrac{MST}{MSE} \nonumber$ If the $K$ populations are normally distributed with a common variance and if $H_0: \mu _1=\mu _2=\cdots =\mu _K$ is true then under independent random sampling $F$ approximately follows an $F$-distribution with degrees of freedom $df_1=K-1$ and $df_2=n-K$. The test is right-tailed: $H_0$ is rejected at level of significance $\alpha$ if $F\geq F_\alpha$. As always the test is performed using the usual five-step procedure. Example $1$ The average of grade point averages (GPAs) of college courses in a specific major is a measure of difficulty of the major. An educator wishes to conduct a study to find out whether the difficulty levels of different majors are the same.
For such a study, a random sample of major grade point averages (GPA) of $11$ graduating seniors at a large university is selected for each of the four majors: mathematics, English, education, and biology. The data are given in Table $1$. Test, at the $5\%$ level of significance, whether the data contain sufficient evidence to conclude that there are differences among the average major GPAs of these four majors. Table $1$: Difficulty Levels of College Majors Mathematics English Education Biology 2.59 3.64 4.00 2.78 3.13 3.19 3.59 3.51 2.97 3.15 2.80 2.65 2.50 3.78 2.39 3.16 2.53 3.03 3.47 2.94 3.29 2.61 3.59 2.32 2.53 3.20 3.74 2.58 3.17 3.30 3.77 3.21 2.70 3.54 3.13 3.23 3.88 3.25 3.00 3.57 2.64 4.00 3.47 3.22 Solution • Step 1. The test of hypotheses is $H_0: \mu _1=\mu _2=\mu _3=\mu _4\ vs.\ H_a: \text{not all four population means are equal}\; @\; \alpha =0.05 \nonumber$ • Step 2. The test statistic is $F=MST/MSE$ with (since $n=44$ and $K=4$) degrees of freedom $df_1=K-1=4-1=3$ and $df_2=n-K=44-4=40$. • Step 3. If we index the population of mathematics majors by $1$, English majors by $2$, education majors by $3$, and biology majors by $4$, then the sample sizes, sample means, and sample variances of the four samples in Table $1$ are summarized (after rounding for simplicity) by: Major Sample Size Sample Mean Sample Variance Mathematics $n_1=11$ $\bar{x_1}=2.90$ $s_{1}^{2}=0.188$ English $n_2=11$ $\bar{x_2}=3.34$ $s_{2}^{2}=0.148$ Education $n_3=11$ $\bar{x_3}=3.36$ $s_{3}^{2}=0.229$ Biology $n_4=11$ $\bar{x_4}=3.02$ $s_{4}^{2}=0.157$ The average of all $44$ observations is (after rounding for simplicity) $\overline{x}=3.15$. We compute (rounding for simplicity) \begin{align} MST &= \dfrac{11(2.90-3.15)^2+11(3.34-3.15)^2+11(3.36-3.15)^2+11(3.02-3.15)^2}{4-1} \nonumber \\[6pt] &=\dfrac{1.7556}{3} \nonumber \\[6pt] &=0.585 \nonumber \end{align} \nonumber and \begin{align} MSE &= \dfrac{(11-1)(0.188)+(11-1)(0.148)+(11-1)(0.229)+(11-1)(0.157)}{44-4} \nonumber \\[6pt] &=\dfrac{7.22}{40} \nonumber \\[6pt] &=0.181 \nonumber \end{align} \nonumber so that $F=\dfrac{MST}{MSE}=\dfrac{0.585}{0.181}=3.232 \nonumber$ • Step 4. The test is right-tailed. The single critical value is (since $df_1=3$ and $df_2=40$) $F_\alpha =F_{0.05}=2.84$. Thus the rejection region is $[2.84,\infty )$, as illustrated in Figure $1$. • Step 5. Since $F=3.232>2.84$, we reject $H_0$. The data provide sufficient evidence, at the $5\%$ level of significance, to conclude that the averages of major GPAs for the four majors considered are not all equal. Example $2$: Mice Survival Times A research laboratory developed two treatments which are believed to have the potential of prolonging the survival times of patients with an acute form of thymic leukemia. To evaluate the potential treatment effects, $33$ laboratory mice with thymic leukemia were randomly divided into three groups. One group received Treatment $1$, one received Treatment $2$, and the third was observed as a control group. The survival times of these mice are given in Table $2$. Test, at the $1\%$ level of significance, whether these data provide sufficient evidence to confirm the belief that at least one of the two treatments affects the average survival time of mice with thymic leukemia. Table $2$: Mice Survival Times in Days Treatment $1$ Treatment $2$ Control 71 75 77 81 72 73 67 79 75 72 79 73 80 65 78 71 60 63 81 75 65 69 72 84 63 64 71 77 78 71 84 67 91 Solution • Step 1.
The test of hypotheses is $H_0: \mu _1=\mu _2=\mu _3\ vs.\ H_a: \text{not all three population means are equal}\; @\; \alpha =0.01 \nonumber$ • Step 2. The test statistic is $F=\dfrac{MST}{MSE}$ with (since $n=33$ and $K=3$) degrees of freedom $df_1=K-1=3-1=2$ and $df_2=n-K=33-3=30$. • Step 3. If we index the population of mice receiving Treatment $1$ by $1$, Treatment $2$ by $2$, and no treatment by $3$, then the sample sizes, sample means, and sample variances of the three samples in Table $2$ are summarized (after rounding for simplicity) by: Group Sample Size Sample Mean Sample Variance Treatment $1$ $n_1=16$ $\bar{x_1}=69.75$ $s_{1}^{2}=34.47$ Treatment $2$ $n_2=9$ $\bar{x_2}=77.78$ $s_{2}^{2}=52.69$ Control $n_3=8$ $\bar{x_3}=75.88$ $s_{3}^{2}=30.69$ The average of all $33$ observations is (after rounding for simplicity) $\overline{x}=73.42$. We compute (rounding for simplicity) \begin{align*} MST &= \frac{16(69.75-73.42)^2+9(77.78-73.42)^2+8(75.88-73.42)^2}{3-1}\ &= \frac{434.63}{2}\ &= 217.50 \end{align*} \nonumber and \begin{align*} MSE &= \frac{(16-1)(34.47)+(9-1)(52.69)+(8-1)(30.69)}{33-3}\ &= \frac{1153.4}{30}\ &= 38.45\end{align*} \nonumber so that $F=\dfrac{MST}{MSE}=\dfrac{217.50}{38.45}=5.65 \nonumber$ • Step 4. The test is right-tailed. The single critical value is $F_\alpha =F_{0.01}=5.39$. Thus the rejection region is $[5.39,\infty )$, as illustrated in Figure $2$. • Step 5. Since $F=5.65>5.39$, we reject $H_0$. The data provide sufficient evidence, at the $1\%$ level of significance, to conclude that a treatment effect exists at least for one of the two treatments in increasing the mean survival time of mice with thymic leukemia. It is important to note that, if the null hypothesis of equal population means is rejected, the statistical implication is that not all population means are equal. It does not, however, tell which population mean is different from which. The inference about where the suggested difference lies is most frequently made by a follow-up study. Key Takeaway • An $F$-test can be used to evaluate the hypothesis that the means of several normal populations, all with the same standard deviation, are identical.
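The $MST$ and $MSE$ computations in Example $2$ can be verified with a short script. This is a sketch only, using the rounded summary statistics quoted above; SciPy is assumed for the critical value.

# One-way ANOVA from summary statistics (Example 2), assuming SciPy.
from math import fsum
from scipy.stats import f

n  = [16, 9, 8]             # group sample sizes
xb = [69.75, 77.78, 75.88]  # group sample means
s2 = [34.47, 52.69, 30.69]  # group sample variances

N, K = sum(n), len(n)
grand = fsum(ni * xi for ni, xi in zip(n, xb)) / N  # about 73.43

MST = fsum(ni * (xi - grand) ** 2 for ni, xi in zip(n, xb)) / (K - 1)
MSE = fsum((ni - 1) * vi for ni, vi in zip(n, s2)) / (N - K)
print(MST, MSE, MST / MSE)        # about 217.5, 38.45, and 5.66 (the text's rounding gives 5.65)
print(f.ppf(0.99, K - 1, N - K))  # about 5.39, the critical value at the 1% level

For raw data, such as Table $1$ in Example $1$, scipy.stats.f_oneway computes the same $F$ statistic directly from the individual samples.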
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 11.1: Chi-Square Tests for Independence Q11.1.1 Find $\chi _{0.01}^{2}$ for each of the following number of degrees of freedom. 1. $df=5$ 2. $df=11$ 3. $df=25$ Q11.1.2 Find $\chi _{0.05}^{2}$ for each of the following number of degrees of freedom. 1. $df=6$ 2. $df=12$ 3. $df=30$ Q11.1.3 Find $\chi _{0.10}^{2}$ for each of the following number of degrees of freedom. 1. $df=6$ 2. $df=12$ 3. $df=30$ Q11.1.4 Find $\chi _{0.01}^{2}$ for each of the following number of degrees of freedom. 1. $df=7$ 2. $df=10$ 3. $df=20$ Q11.1.5 For $df=7$ and $\alpha =0.05$ 1. $\chi _{\alpha }^{2}$ 2. $\chi _{\frac{\alpha }{2}}^{2}$ Q11.1.6 For $df=17$ and $\alpha =0.01$ 1. $\chi _{\alpha }^{2}$ 2. $\chi _{\frac{\alpha }{2}}^{2}$ Q11.1.7 A data sample is sorted into a $2 \times 2$ contingency table based on two factors, each of which has two levels. Factor 1 Level 1 Level 2 Row Total Factor 2 Level 1 $20$ $10$ R Level 2 $15$ 5 R Column Total C C n 1. Find the column totals, the row totals, and the grand total, $n$, of the table. 2. Find the expected number $E$ of observations for each cell based on the assumption that the two factors are independent (that is, just use the formula $E=(R\times C)/n$). 3. Find the value of the chi-square test statistic $\chi ^2$. 4. Find the number of degrees of freedom of the chi-square test statistic. Q11.1.8 A data sample is sorted into a $3 \times 2$ contingency table based on two factors, one of which has three levels and the other of which has two levels. Factor 1 Level 1 Level 2 Row Total Factor 2 Level 1 $20$ $10$ R Level 2 $15$ 5 R Level 3 $10$ $20$ R Column Total C C n 1. Find the column totals, the row totals, and the grand total, $n$, of the table. 2. Find the expected number $E$ of observations for each cell based on the assumption that the two factors are independent (that is, just use the formula $E=(R\times C)/n$). 3. Find the value of the chi-square test statistic $\chi ^2$. 4. Find the number of degrees of freedom of the chi-square test statistic. Q11.1.9 A child psychologist believes that children perform better on tests when they are given perceived freedom of choice. To test this belief, the psychologist carried out an experiment in which $200$ third graders were randomly assigned to two groups, $A$ and $B$. Each child was given the same simple logic test. However in group $B$, each child was given the freedom to choose a text booklet from many with various drawings on the covers. The performance of each child was rated as Very Good, Good, and Fair. The results are summarized in the table provided. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to support the psychologist’s belief. Group A B Performance Very Good 32 29 Good 55 61 Fair 10 13 Q11.1.10 In regard to wine tasting competitions, many experts claim that the first glass of wine served sets a reference taste and that a different reference wine may alter the relative ranking of the other wines in competition. To test this claim, three wines, $A$, $B$ and $C$, were served at a wine tasting event. Each person was served a single glass of each wine, but in different orders for different guests. At the close, each person was asked to name the best of the three. One hundred seventy-two people were at the event and their top picks are given in the table provided. 
Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to support the claim that wine experts’ preference is dependent on the first served wine. Top Pick A B C First Glass A 12 31 27 B 15 40 21 C 10 9 7 1. Is being left-handed hereditary? To answer this question, $250$ adults are randomly selected and their handedness and their parents’ handedness are noted. The results are summarized in the table provided. Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to conclude that there is a hereditary element in handedness. Number of Parents Left-Handed 0 1 2 Handedness Left 8 10 12 Right 178 21 21 2. Some geneticists claim that the genes that determine left-handedness also govern development of the language centers of the brain. If this claim is true, then it would be reasonable to expect that left-handed people tend to have stronger language abilities. A study designed to test this claim randomly selected $807$ students who took the Graduate Record Examination (GRE). Their scores on the language portion of the examination were classified into three categories: low, average, and high, and their handedness was also noted. The results are given in the table provided. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that left-handed people tend to have stronger language abilities. GRE English Scores Low Average High Handedness Left 18 40 22 Right 201 360 166 3. It is generally believed that children brought up in stable families tend to do well in school. To verify such a belief, a social scientist examined $290$ randomly selected students’ records in a public high school and noted each student’s family structure and academic status four years after entering high school. The data were then sorted into a $2 \times 3$ contingency table with two factors. $\text{Factor 1}$ has two levels: graduated and did not graduate. $\text{Factor 2}$ has three levels: no parent, one parent, and two parents. The results are given in the table provided. Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to conclude that family structure matters in school performance of the students. Academic Status Graduated Did Not Graduate Family No parent 18 31 One parent 101 44 Two parents 70 26 4. A large middle school administrator wishes to use celebrity influence to encourage students to make healthier choices in the school cafeteria. The cafeteria is situated at the center of an open space. Every day at lunch time students get their lunch and a drink in three separate lines leading to three separate serving stations. As an experiment, the school administrator displayed a poster of a popular teen pop star drinking milk at each of the three areas where drinks are provided, except the milk in the poster is different at each location: one shows white milk, one shows strawberry-flavored pink milk, and one shows chocolate milk. After the first day of the experiment the administrator noted the students’ milk choices separately for the three lines. The data are given in the table provided. Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to conclude that the posters had some impact on the students’ drink choices. Student Choice Regular Strawberry Chocolate Poster Choice Regular 38 28 40 Strawberry 18 51 24 Chocolate 32 32 53 Large Data Set Exercise Large Data Sets not available 1.
Large $\text{Data Set 8}$ records the result of a survey of $300$ randomly selected adults who go to movie theaters regularly. For each person the gender and preferred type of movie were recorded. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that the factors “gender” and “preferred type of movie” are dependent. Answers 1. $15.09$ 2. $24.72$ 3. $44.31$ 1. $10.64$ 2. $18.55$ 3. $40.26$ 1. $14.07$ 2. $16.01$ 1. $C_1=35,\; C_2=15,\; R_1=30,\; R_2=20,\; n=50$ 2. $E_{11}=21,\; E_{12}=9,\; E_{21}=14,\; E_{22}=6$ 3. $\chi ^2=0.3968$ 4. $df=1$ 1. $\chi ^2=0.6698,\; \chi _{0.05}^{2}=5.99$, do not reject $H_0$ 2. $\chi ^2=72.35,\; \chi _{0.01}^{2}=9.21$, reject $H_0$ 3. $\chi ^2=21.2784,\; \chi _{0.01}^{2}=9.21$, reject $H_0$ 4. $\chi ^2=28.4539$, $df=3$, Rejection Region: $[7.815,\infty )$, Decision: reject $H_0$ of independence 11.2: Chi-Square One-Sample Goodness-of-Fit Tests Basic 1. A data sample is sorted into four categories with an assumed probability distribution. Factor Levels Assumed Distribution Observed Frequency 1 $p_1=0.1$ 10 2 $p_2=0.4$ 35 3 $p_3=0.4$ 45 4 $p_4=0.1$ 10 1. Find the size $n$ of the sample. 2. Find the expected number $E$ of observations for each level, if the sampled population has a probability distribution as assumed (that is, just use the formula $E_i=n\times p_i$). 3. Find the chi-square test statistic $\chi ^2$. 4. Find the number of degrees of freedom of the chi-square test statistic. 2. A data sample is sorted into five categories with an assumed probability distribution. Factor Levels Assumed Distribution Observed Frequency 1 $p_1=0.3$ 23 2 $p_2=0.3$ 30 3 $p_3=0.2$ 19 4 $p_4=0.1$ 8 5 $p_5=0.1$ 10 1. Find the size $n$ of the sample. 2. Find the expected number $E$ of observations for each level, if the sampled population has a probability distribution as assumed (that is, just use the formula $E_i=n\times p_i$). 3. Find the chi-square test statistic $\chi ^2$. 4. Find the number of degrees of freedom of the chi-square test statistic. Applications 1. Retailers of collectible postage stamps often buy their stamps in large quantities by weight at auctions. The prices the retailers are willing to pay depend on how old the postage stamps are. Many collectible postage stamps at auctions are described by the proportions of stamps issued at various periods in the past. Generally the older the stamps the higher the value. At one particular auction, a lot of collectible stamps is advertised to have the age distribution given in the table provided. A retail buyer took a sample of $73$ stamps from the lot and sorted them by age. The results are given in the table provided. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that the age distribution of the lot is different from what was claimed by the seller. Year Claimed Distribution Observed Frequency Before 1940 0.10 6 1940 to 1959 0.25 15 1960 to 1979 0.45 30 After 1979 0.20 22 2. The litter size of Bengal tigers is typically two or three cubs, but it can vary between one and four. Based on long-term observations, the litter size of Bengal tigers in the wild has the distribution given in the table provided. A zoologist believes that Bengal tigers in captivity tend to have different (possibly smaller) litter sizes from those in the wild. To verify this belief, the zoologist searched all data sources and found $316$ litter size records of Bengal tigers in captivity. The results are given in the table provided.
Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that the distribution of litter sizes in captivity differs from that in the wild. Litter Size Wild Litter Distribution Observed Frequency 1 0.11 41 2 0.69 243 3 0.18 27 4 0.02 5 3. An online shoe retailer sells men’s shoes in sizes $8$ to $13$. In the past orders for the different shoe sizes have followed the distribution given in the table provided. The management believes that recent marketing efforts may have expanded their customer base and, as a result, there may be a shift in the size distribution for future orders. To have a better understanding of its future sales, the shoe seller examined $1,040$ sales records of recent orders and noted the sizes of the shoes ordered. The results are given in the table provided. Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to conclude that the shoe size distribution of future sales will differ from the historic one. Shoe Size Past Size Distribution Recent Size Frequency 8.0 0.03 25 8.5 0.06 43 9.0 0.09 88 9.5 0.19 221 10.0 0.23 272 10.5 0.14 150 11.0 0.10 107 11.5 0.06 51 12.0 0.05 37 12.5 0.03 35 13.0 0.02 11 4. An online shoe retailer sells women’s shoes in sizes $5$ to $10$. In the past orders for the different shoe sizes have followed the distribution given in the table provided. The management believes that recent marketing efforts may have expanded their customer base and, as a result, there may be a shift in the size distribution for future orders. To have a better understanding of its future sales, the shoe seller examined $1,174$ sales records of recent orders and noted the sizes of the shoes ordered. The results are given in the table provided. Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to conclude that the shoe size distribution of future sales will differ from the historic one. Shoe Size Past Size Distribution Recent Size Frequency 5.0 0.02 20 5.5 0.03 23 6.0 0.07 88 6.5 0.08 90 7.0 0.20 222 7.5 0.20 258 8.0 0.15 177 8.5 0.11 121 9.0 0.08 91 9.5 0.04 53 10.0 0.02 31 5. A chess opening is a sequence of moves at the beginning of a chess game. There are many well-studied named openings in chess literature. French Defense is one of the most popular openings for black, although it is considered a relatively weak opening since it gives black probability $0.344$ of winning, probability $0.405$ of losing, and probability $0.251$ of drawing. A chess master believes that he has discovered a new variation of French Defense that may alter the probability distribution of the outcome of the game. In his many Internet chess games in the last two years, he was able to apply the new variation in $77$ games. The wins, losses, and draws in the $77$ games are given in the table provided. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that the newly discovered variation of French Defense alters the probability distribution of the result of the game. Result for Black Probability Distribution New Variation Wins Win 0.344 31 Loss 0.405 25 Draw 0.251 21 6. The Department of Parks and Wildlife stocks a large lake with fish every six years. It is determined that a healthy diversity of fish in the lake should consist of $10\%$ largemouth bass, $15\%$ smallmouth bass, $10\%$ striped bass, $10\%$ trout, and $20\%$ catfish. 
Therefore each time the lake is stocked, the fish population in the lake is restored to maintain that particular distribution. Every three years, the department conducts a study to see whether the distribution of the fish in the lake has shifted away from the target proportions. In one particular year, a research group from the department observed a sample of $292$ fish from the lake with the results given in the table provided. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that the fish population distribution has shifted since the last stocking. Fish Target Distribution Fish in Sample Largemouth Bass 0.10 14 Smallmouth Bass 0.15 49 Striped Bass 0.10 21 Trout 0.10 22 Catfish 0.20 75 Other 0.35 111 Large Data Set Exercise Large Data Sets not available 1. Large $\text{Data Set 4}$ records the result of $500$ tosses of a six-sided die. Test, at the $10\%$ level of significance, whether there is sufficient evidence in the data to conclude that the die is not “fair” (or “balanced”), that is, that the probability distribution differs from probability $1/6$ for each of the six faces on the die. S11.2.1 1. $n=100$ 2. $E=10,E=40,E=40,E=10$ 3. $\chi^2=1.25$ 4. $df=3$ S11.2.3 $\chi ^2=4.8082,\; \chi _{0.05}^{2}=7.81,\; \text{do not reject } H_0$ S11.2.5 $\chi ^2=26.5765,\; \chi _{0.01}^{2}=23.21,\; \text{reject } H_0$ S11.2.7 $\chi ^2=2.1401,\; \chi _{0.05}^{2}=5.99,\; \text{do not reject } H_0$ S11.2.9 $\chi ^2=2.944,\; df=5,\; \text{Rejection Region: }[9.236,\infty ),\; \text{Decision: Fail to reject }H_0\; \text{of balance}$ 11.3 F-tests for Equality of Two Variances Basic 1. Find $F_{0.01}$ for each of the following degrees of freedom. 1. $df_1=5$ and $df_2=5$ 2. $df_1=5$ and $df_2=12$ 3. $df_1=12$ and $df_2=20$ 2. Find $F_{0.05}$ for each of the following degrees of freedom. 1. $df_1=6$ and $df_2=6$ 2. $df_1=6$ and $df_2=12$ 3. $df_1=12$ and $df_2=30$ 3. Find $F_{0.95}$ for each of the following degrees of freedom. 1. $df_1=6$ and $df_2=6$ 2. $df_1=6$ and $df_2=12$ 3. $df_1=12$ and $df_2=30$ 4. Find $F_{0.90}$ for each of the following degrees of freedom. 1. $df_1=5$ and $df_2=5$ 2. $df_1=5$ and $df_2=12$ 3. $df_1=12$ and $df_2=20$ 5. For $df_1=7$, $df_2=10$ and $\alpha =0.05$, find 1. $F_{\alpha }$ 2. $F_{1-\alpha }$ 3. $F_{\alpha /2}$ 4. $F_{1-\alpha /2}$ 6. For $df_1=15$, $df_2=8$ and $\alpha =0.01$, find 1. $F_{\alpha }$ 2. $F_{1-\alpha }$ 3. $F_{\alpha /2}$ 4. $F_{1-\alpha /2}$ 7. For each of the two samples $\text{Sample 1}:\{8,2,11,0,-2\}\ \text{Sample 2}:\{-2,0,0,0,2,4,-1\}$ find 1. the sample size 2. the sample mean 3. the sample variance 8. For each of the two samples $\text{Sample 1}:\{0.8,1.2,1.1,0.8,-2.0\}\ \text{Sample 2}:\{-2.0,0.0,0.7,0.8,2.2,4.1,-1.9\}$ find 1. the sample size 2. the sample mean 3. the sample variance 9. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=16$ $s_{1}^{2}=53$ 2 $n_2=21$ $s_{2}^{2}=32$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$ 2. Find the degrees of freedom $df_1$ and $df_2$. 3. Find $F_{0.05}$ using $df_1$ and $df_2$ computed above. 4. Perform the test of the hypotheses $H_0:\sigma _{1}^{2}=\sigma _{2}^{2}\; vs\; H_a:\sigma _{1}^{2}>\sigma _{2}^{2}$ at the $5\%$ level of significance. 10. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=11$ $s_{1}^{2}=61$ 2 $n_2=8$ $s_{2}^{2}=44$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$. 2.
Find the degrees of freedom $df_1$ and $df_2$. 3. Find $F_{0.05}$ using $df_1$ and $df_2$ computed above. 4. Perform the test of the hypotheses $H_0:\sigma _{1}^{2}=\sigma _{2}^{2}\; vs\; H_a:\sigma _{1}^{2}>\sigma _{2}^{2}$ at the $5\%$ level of significance. 11. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=10$ $s_{1}^{2}=12$ 2 $n_2=13$ $s_{2}^{2}=23$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$. 2. Find the degrees of freedom $df_1$ and $df_2$. 3. For $\alpha =0.05$ find $F_{1-\alpha }$ using $df_1$ and $df_2$ computed above. 4. Perform the test of the hypotheses $H_0:\sigma _{1}^{2}=\sigma _{2}^{2}\; vs\; H_a:\sigma _{1}^{2}<\sigma _{2}^{2}$ at the $5\%$ level of significance. 13. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=8$ $s_{1}^{2}=102$ 2 $n_2=8$ $s_{2}^{2}=603$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$ 2. Find the degrees of freedom $df_1$ and $df_2$. 3. For $\alpha =0.05$ find $F_{1-\alpha }$ using $df_1$ and $df_2$ computed above. 4. Perform the test of the hypotheses $H_0:\sigma _{1}^{2}=\sigma _{2}^{2}\; vs\; H_a:\sigma _{1}^{2}<\sigma _{2}^{2}$ at the $5\%$ level of significance. 14. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=9$ $s_{1}^{2}=123$ 2 $n_2=31$ $s_{2}^{2}=543$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$ 2. Find the degrees of freedom $df_1$ and $df_2$. 3. For $\alpha =0.05$ find $F_{1-\alpha /2}$ and $F_{\alpha /2}$ using $df_1$ and $df_2$ computed above. 4. Perform the test of the hypotheses $H_0:\sigma _{1}^{2}=\sigma _{2}^{2}\; vs\; H_a:\sigma _{1}^{2}\neq \sigma _{2}^{2}$ at the $5\%$ level of significance. 15. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=21$ $s_{1}^{2}=199$ 2 $n_2=21$ $s_{2}^{2}=66$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$ 2. Find the degrees of freedom $df_1$ and $df_2$. 3. For $\alpha =0.05$ find $F_{1-\alpha /2}$ and $F_{\alpha /2}$ using $df_1$ and $df_2$ computed above. 4. Perform the test of the hypotheses $H_0:\sigma _{1}^{2}=\sigma _{2}^{2}\; vs\; H_a:\sigma _{1}^{2}\neq \sigma _{2}^{2}$ at the $5\%$ level of significance. Applications 1. Japanese sturgeon is a subspecies of the sturgeon family indigenous to Japan and the Northwest Pacific. In a particular fish hatchery newly hatched baby Japanese sturgeon are kept in tanks for several weeks before being transferred to larger ponds. Dissolved oxygen in tank water is very tightly monitored by an electronic system and rigorously maintained at a target level of $6.5$ milligrams per liter (mg/l). The fish hatchery looks to upgrade their water monitoring systems for tighter control of dissolved oxygen. A new system is evaluated against the old one currently being used in terms of the variance in measured dissolved oxygen. Thirty-one water samples from a tank operated with the new system were collected and $16$ water samples from a tank operated with the old system were collected, all during the course of a day. The samples yield the following information: $\text{New Sample 1: }n_1=31\; s_{1}^{2}=0.0121\ \text{Old Sample 2: }n_2=16\; s_{2}^{2}=0.0319$ Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that the new system will provide a tighter control of dissolved oxygen in the tanks. 2.
The risk of investing in a stock is measured by the volatility, or the variance, in changes in the price of that stock. Mutual funds are baskets of stocks and offer generally lower risk to investors. Different mutual funds have different focuses and offer different levels of risk. Hippolyta is deciding between two mutual funds, $A$ and $B$, with similar expected returns. To make a final decision, she examined the annual returns of the two funds during the last ten years and obtained the following information: $\text{Mutual Fund A Sample 1: }n_1=10\; s_{1}^{2}=0.012\ \text{Mutual Fund B Sample 2: }n_2=10\; s_{2}^{2}=0.005$ Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that the two mutual funds offer different levels of risk. 3. It is commonly acknowledged that grading of the writing part of a college entrance examination is subject to inconsistency. Every year a large number of potential graders are put through a rigorous training program before being given grading assignments. In order to gauge whether such a training program really enhances consistency in grading, a statistician conducted an experiment in which a reference essay was given to $61$ trained graders and $31$ untrained graders. Information on the scores given by these graders is summarized below: $\text{Trained Sample 1: }n_1=61\; s_{1}^{2}=2.15\ \text{Untrained Sample 2: }n_2=31\; s_{2}^{2}=3.91$ Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that the training program enhances the consistency in essay grading. 4. A common problem encountered by many classical music radio stations is that their listeners belong to an increasingly narrow band of ages in the population. The new general manager of a classical music radio station believed that a new playlist offered by a professional programming agency would attract listeners from a wider range of ages. The new list was used for a year. Two random samples were taken before and after the new playlist was adopted. Information on the ages of the listeners in the samples is summarized below: $\text{Before Sample 1: }n_1=21\; s_{1}^{2}=56.25\ \text{After Sample 2: }n_2=16\; s_{2}^{2}=76.56$ Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that the new playlist has expanded the range of listener ages. 5. A laptop computer maker uses battery packs supplied by two companies, $A$ and $B$. While both brands have the same average battery life between charges (LBC), the computer maker seems to receive more complaints about shorter LBC than expected for battery packs supplied by company $B$. The computer maker suspects that this could be caused by higher variance in LBC for Brand $B$. To check that, ten new battery packs from each brand are selected, installed on the same models of laptops, and the laptops are allowed to run until the battery packs are completely discharged. The following are the observed LBCs in hours. Brand $A$ Brand $B$ 3.2 3.0 3.4 3.5 2.8 2.9 3.0 3.1 3.0 2.3 3.0 2.0 2.8 3.0 2.9 2.9 3.0 3.0 3.0 4.1 Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that the LBCs of Brand $B$ have a larger variance than those of Brand $A$. 6. A manufacturer of a blood-pressure measuring device for home use claims that its device is more consistent than that produced by a leading competitor.
During a visit to a medical store a potential buyer tried both devices on himself repeatedly during a short period of time. The following are readings of systolic pressure. Manufacturer Competitor 132 129 134 132 129 129 129 138 130 132 1. Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that the manufacturer’s claim is true. 2. Repeat the test at the $10\%$ level of significance. Quote as many computations from part (a) as possible. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Sets 1A and 1B }$record SAT scores for $419$ male and $581$ female students. Test, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that the variances of scores of male and female students differ. 2. Large $\text{Data Sets 7, 7A, and 7B }$record the survival times of $140$ laboratory mice with thymic leukemia. Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that the variances of survival times of male mice and female mice differ. Answers 1. $11.0$ 2. $5.06$ 3. $3.23$ 1. $0.23$ 2. $0.25$ 3. $0.40$ 1. $3.14$ 2. $0.27$ 3. $3.95$ 4. $0.21$ 1. $\text{Sample 1}$ 1. $n_1=5$ 2. $\bar{x_1}=3.8$ 3. $s_{1}^{2}=30.2$ 2. $\text{Sample 2}$ 1. $n_2=7$ 2. $\bar{x_2}=0.4286$ 3. $s_{2}^{2}=3.95$ 1. $1.6563$ 2. $df_1=15,\; df_2=20$ 3. $F_{0.05}=2.2$ 4. do not reject $H_0$ 1. $0.5217$ 2. $df_1=9,\; df_2=12$ 3. $F_{0.95}=0.3254$ 4. do not reject $H_0$ 1. $0.1692$ 2. $df_1=8,\; df_2=30$ 3. $F_{0.975}=0.26,\; F_{0.025}=2.65$ 4. reject $H_0$ 1. $F = 0.3793,\; F_{0.90}=0.58$, reject $H_0$ 2. $F = 0.5499,\; F_{0.95}=0.61$, reject $H_0$ 3. $F = 0.0971,\; F_{0.95}=0.31$, reject $H_0$ 4. $F = 0.893131, df_1=418,\; df_2=580$. Rejection Region: $(0,0.7897]\cup [1.2614,\infty )$. Decision: Fail to reject $H_0$ of equal variances. 11.4 F-Tests in One-Way ANOVA Basic 1. The following three random samples are taken from three normal populations with respective means $\mu _1$, $\mu _2$ and $\mu _3$, and the same variance $\sigma ^2$. Sample 1 Sample 2 Sample 3 2 3 0 2 5 1 3 7 2 5 1 3 1. Find the combined sample size $n$. 2. Find the combined sample mean $\bar{x}$. 3. Find the sample mean for each of the three samples. 4. Find the sample variance for each of the three samples. 5. Find $MST$. 6. Find $MSE$. 7. Find $F=MST/MSE$. 2. The following three random samples are taken from three normal populations with respective means $\mu _1$, $\mu _2$ and $\mu _3$, and the same variance $\sigma ^2$. Sample 1 Sample 2 Sample 3 0.0 1.3 0.2 0.1 1.5 0.2 0.2 1.7 0.3 0.1 0.5 0.0 1. Find the combined sample size $n$. 2. Find the combined sample mean $\bar{x}$. 3. Find the sample mean for each of the three samples. 4. Find the sample variance for each of the three samples. 5. Find $MST$. 6. Find $MSE$. 7. Find $F=MST/MSE$. 3. Refer to Exercise 1. 1. Find the number of populations under consideration $K$. 2. Find the degrees of freedom $df_1=K-1$ and $df_2=n-K$ 3. For $\alpha =0.05$, find $F_{\alpha }$ with the degrees of freedom computed above. 4. At $\alpha =0.05$, test hypotheses $H_0: \mu _1=\mu _2=\mu _3\ vs\; H_a: \text{at least one pair of the population means is not equal}$ 4. Refer to Exercise 2. 1. Find the number of populations under consideration $K$. 2. Find the degrees of freedom $df_1=K-1$ and $df_2=n-K$ 3. For $\alpha =0.01$, find $F_{\alpha }$ with the degrees of freedom computed above. 4.
At $\alpha =0.01$, test hypotheses $H_0: \mu _1=\mu _2=\mu _3\ vs\; H_a: \text{at least one pair of the population means is not equal}$ Applications 1. The Mozart effect refers to a boost of average performance on tests for elementary school students if the students listen to Mozart’s chamber music for a period of time immediately before the test. In order to attempt to test whether the Mozart effect actually exists, an elementary school teacher conducted an experiment by dividing her third-grade class of $15$ students into three groups of $5$. The first group was given an end-of-grade test without music; the second group listened to Mozart’s chamber music for $10$ minutes; and the third group listened to Mozart’s chamber music for $20$ minutes before the test. The scores of the $15$ students are given below: Group 1 Group 2 Group 3 80 79 73 63 73 82 74 74 79 71 77 82 70 81 84 Using the ANOVA $F$-test at $\alpha =0.10$, is there sufficient evidence in the data to suggest that the Mozart effect exists? 2. The Mozart effect refers to a boost of average performance on tests for elementary school students if the students listen to Mozart’s chamber music for a period of time immediately before the test. Many educators believe that such an effect is not necessarily due to Mozart’s music per se but rather a relaxation period before the test. To support this belief, an elementary school teacher conducted an experiment by dividing her third-grade class of $15$ students into three groups of $5$. Students in the first group were asked to give themselves a self-administered facial massage; students in the second group listened to Mozart’s chamber music for $15$ minutes; students in the third group listened to Schubert’s chamber music for $15$ minutes before the test. The scores of the $15$ students are given below: Group 1 Group 2 Group 3 79 82 80 81 84 81 80 86 71 89 91 90 86 82 86 Test, using the ANOVA $F$-test at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that any of the three relaxation methods does better than the others. 3. Precision weighing devices are sensitive to environmental conditions. Temperature and humidity in a laboratory room where such a device is installed are tightly controlled to ensure high precision in weighing. A newly designed weighing device is claimed to be more robust against small variations of temperature and humidity. To verify such a claim, a laboratory tests the new device under four settings of temperature-humidity conditions. First, two levels of high and low temperature and two levels of high and low humidity are identified. Let $T$ stand for temperature and $H$ for humidity. The four experimental settings are defined and noted as $\text{(T, H): (high, high), (high, low), (low, high), and (low, low)}$. A pre-calibrated standard weight of $1$ kg was weighed by the new device four times in each setting. The results in terms of error (in micrograms mcg) are given below: (high, high) (high, low) (low, high) (low, low) −1.50 11.47 −14.29 5.54 −6.73 9.28 −18.11 10.34 11.69 5.58 −11.16 15.23 −5.72 10.80 −10.41 −5.69 Test, using the ANOVA $F$-test at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that the mean weight readings by the newly designed device vary among the four settings. 4.
To investigate the real cost of owning different makes and models of new automobiles, a consumer protection agency followed $16$ owners of new vehicles of four popular makes and models, call them TC, HA, NA, and FT, and kept a record of each of the owner’s real cost in dollars for the first five years. The five-year costs of the $16$ car owners are given below:

TC    HA    NA    FT
8423  7776  8907  10333
7889  7211  9077  9217
8665  6870  8732  10540
7129  9747  7359  8677

Test, using the ANOVA $F$-test at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that there are differences among the mean real costs of ownership for these four models.

5. Helping people to lose weight has become a huge industry in the United States, with annual revenue in the hundreds of billions of dollars. Recently each of the three market-leading weight reducing programs claimed to be the most effective. A consumer research company recruited $33$ people who wished to lose weight and sent them to the three leading programs. After six months their weight losses were recorded. The results are summarized below:

Statistic        Prog. 1              Prog. 2             Prog. 3
Sample Mean      $\bar{x}_1=10.65$    $\bar{x}_2=8.90$    $\bar{x}_3=9.33$
Sample Variance  $s_{1}^{2}=27.20$    $s_{2}^{2}=16.86$   $s_{3}^{2}=32.40$
Sample Size      $n_1=11$             $n_2=11$            $n_3=11$

The mean weight loss of the combined sample of all $33$ people was $\bar{x}=9.63$. Test, using the ANOVA $F$-test at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that some program is more effective than the others.

6. A leading pharmaceutical company in the disposable contact lenses market has always taken for granted that the sales of certain peripheral products such as contact lens solutions would automatically go with the established brands. The long-standing culture in the company has been that lens solutions would not make a significant difference in user experience. Recent market research surveys, however, suggest otherwise. To gain a better understanding of the effects of contact lens solutions on user experience, the company conducted a comparative study in which $63$ contact lens users were randomly divided into three groups, each of which received one of three top-selling lens solutions on the market, including one of the company’s own. After using the assigned solution for two weeks, each participant was asked to rate the solution on the scale of $1$ to $5$ for satisfaction, with $5$ being the highest level of satisfaction. The results of the study are summarized below:

Statistic        Sol. 1              Sol. 2              Sol. 3
Sample Mean      $\bar{x}_1=3.28$    $\bar{x}_2=3.96$    $\bar{x}_3=4.10$
Sample Variance  $s_{1}^{2}=0.15$    $s_{2}^{2}=0.32$    $s_{3}^{2}=0.36$
Sample Size      $n_1=18$            $n_2=23$            $n_3=22$

The mean satisfaction level of the combined sample of all $63$ participants was $\bar{x}=3.81$. Test, using the ANOVA $F$-test at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that not all three average satisfaction levels are the same.

Large Data Set Exercise

Large Data Set not available

1. Large Data Set 9 records the costs of materials (textbook, solution manual, laboratory fees, and so on) in each of ten different courses in each of three different subjects: chemistry, computer science, and mathematics. Test, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that the mean costs in the three disciplines are not all the same.

Answers

1. $n=12$
2. $\bar{x}=2.8333$
3. $\bar{x}_1=3,\; \bar{x}_2=5,\; \bar{x}_3=1$
4. $s_{1}^{2}=1.5,\; s_{2}^{2}=4,\; s_{3}^{2}=0.6667$
5. $MST=13.83$
6. $MSE=1.78$
7. $F = 7.7812$

1. $K=3$
2. $df_1=2,\; df_2=9$
3. $F_{0.05}=4.26$
4. $F = 7.7812 > F_{0.05}$, reject $H_0$

1. $F = 3.9647,\; F_{0.10}=2.81$, reject $H_0$
2. $F = 9.6018,\; F_{0.01}=5.95$, reject $H_0$
3. $F = 0.3589,\; F_{0.05}=3.32$, do not reject $H_0$
4. $F = 1.418,\; df_1=2,\; df_2=27$. Rejection Region: $[5.4881,\infty )$. Decision: Fail to reject $H_0$ of equal means.
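For readers who want to check an answer such as the one to Basic Exercise 1 with software, the same $F=MST/MSE$ statistic can be computed in a few lines. The sketch below is one possible check, assuming a Python environment with SciPy installed; it is not part of the original exercise set.

```python
# Verifying the one-way ANOVA answer to Basic Exercise 1 above.
from scipy.stats import f_oneway

sample1 = [2, 2, 3, 5, 3]   # n1 = 5, mean 3, variance 1.5
sample2 = [3, 5, 7]         # n2 = 3, mean 5, variance 4
sample3 = [0, 1, 2, 1]      # n3 = 4, mean 1, variance 0.6667

result = f_oneway(sample1, sample2, sample3)
print(f"F = {result.statistic:.4f}")     # 7.7812, matching MST/MSE = 13.83/1.78
print(f"p-value = {result.pvalue:.4f}")  # compare with alpha to decide on H0
```

Since the p-value is below $\alpha =0.05$, the software agrees with the answer key: reject $H_0$.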
Scientists seek to answer questions using rigorous methods and careful observations. These observations, collected from the likes of field notes, surveys, and experiments, form the backbone of a statistical investigation and are called data. Statistics is the study of how best to collect, analyze, and draw conclusions from data. It is helpful to put statistics in the context of a general process of investigation:

1. Identify a question or problem.
2. Collect relevant data on the topic.
3. Analyze the data.
4. Form a conclusion.

Statistics as a subject focuses on making stages 2-4 objective, rigorous, and efficient. That is, statistics has three primary components: How best can we collect data? How should the data be analyzed? What can we infer from the analysis?

The topics scientists investigate are as diverse as the questions they ask. However, many of these investigations can be addressed with a small number of data collection techniques, analytic tools, and fundamental concepts in statistical inference.

You are exposed to statistics regularly. If you are a sports fan, then you have the statistics for your favorite player. If you are interested in politics, then you look at the polls to see how people feel about certain issues or candidates. If you are an environmentalist, then you research arsenic levels in the water of a town or analyze the global temperatures. If you are in the business profession, then you may track the monthly sales of a store or use quality control processes to monitor the number of defective parts manufactured. If you are in the health profession, then you may look at how successful a procedure is or the percentage of people infected with a disease. There are many other examples from other areas.

“There are of course many problems connected with life, of which some of the most popular are: Why are people born? Why do they die? Why do they want to spend so much time wearing digital watches?” (Adams, 2002)

To understand how to collect and analyze data, you need to understand what the field of statistics is. Many of the words defined throughout this course have common definitions that are also used in non-statistical terminology. In statistics, some of these terms have slightly different definitions. It is important that you notice the difference and utilize the statistical definitions.

Statistics is the study of how to collect, organize, analyze, and interpret data collected from a group.

There are two main branches of statistics. One is called descriptive statistics, which is where you collect, organize and describe data. The other branch is called inferential statistics, which is where you interpret data. First, you need to look at descriptive statistics, since you will use descriptive statistics when making inferences. In order to use inferential statistics, we will briefly touch on a completely new topic called Probability. Once we get some background in probability combined with your knowledge of descriptive statistics, we will move into Inferential Statistics.

1.02: Samples vs. Populations

The first thing to decide in a statistical study is whom you want to measure and what you want to measure. You always want to make sure that you can answer the question of whom you measured and what you measured. The “who” is known as the individual and the “what” is known as the variable.

Individual – a person, case or object that you are interested in finding out information about.
Variable (also known as a random variable) – the measurement or observation of the individual.

Population – the total set of all the observations that are the subject of a study.

Notice, the population answers “who” you want to measure and the variable answers “what” you want to measure. Make sure that you always answer both of these questions or you have not given the audience reading your study the entire picture. As an example, if you just say that you are going to collect data from the senators in the United States Congress, you have not told your reader what you are going to collect. Do you want to know their income, their highest degree earned, their voting record, their age, their political party, their gender, their marital status, or how they feel about a particular issue? Without telling “what” you want to measure, your reader has no idea what your study is actually about.

Sometimes the population is very easy to collect. If you are interested in finding the average age of all of the current senators in the United States Congress, there are only 100 senators. This would not be hard to find. However, if instead you were interested in knowing the average age that a senator in the United States Congress first took office for all senators that ever served in the United States Congress, then this would be a bit more work. It is still doable, but it would take a bit of time to collect. However, what if you are interested in finding the average diameter at breast height of all Ponderosa Pine trees in the Coconino National Forest? This data would be impossible to collect. What do you do in these cases? Instead of collecting the entire population, you take a smaller group of the population, a snapshot of the population. This smaller group, called a sample, is a subset of the population; see Figure 1-1.

Sample – a subset from the population.

Consider the following three research questions:

1. What is the average mercury content in albacore tuna in the Pacific Ocean?
2. Over the last 5 years, what is the average time to complete a degree for Portland State University undergraduate students?
3. Does a new drug reduce the number of deaths in patients with severe heart disease?

Each research question refers to a target population. In the first question, the target population is all albacore tuna in the Pacific Ocean, and each fish represents a case. A sample represents a subset of the cases and is often a small fraction of the population. For instance, 60 albacore tuna in the population might be selected and the mercury level measured in each fish. The sample average of the 60 fish may then be used to provide an estimate of the population average of all the fish and answer the research question. We use the lower-case n to represent the number of cases in the sample and the upper-case N to represent the number of cases in the population.

n = sample size. N = population size.

How the sample is collected can determine the accuracy of the results of your study. There are many ways to collect samples. No sampling method is perfect, but some methods are better than other methods. Sampling techniques will be discussed in more detail later. For now, realize that every time you take a sample you will find different data values. The sample is a snapshot of the population, and there is more information than is in this small picture. The idea is to try to collect a sample that gives you an accurate picture, but you will never know for sure if your picture is the correct picture.
Unlike previous mathematics classes, where there was always one right answer, in statistics there can be many answers, and you do not know which are right. The sample average in this case is the statistic, and the population average is the parameter. We use sample statistics to make inferences, educated guesses made by observation, about the population parameter. Once you have your data, either from a population or from a sample, you need to know how you want to summarize the data. As an example, suppose you are interested in finding the proportion of people who like a candidate, the average height a plant grows to using a new fertilizer, or the variability of the test scores. Understanding how you want to summarize the data helps to determine the type of data you want to collect. Since the population is what we are interested in, then you want to calculate a number from the population. This is known as a parameter. Parameter – An unknown quantity from the population. Usually denoted with a Greek letter (for example μ “mu”). This number is a fixed, unknown number that we want to estimate. As mentioned already, it is hard to collect the entire population. Even though this is the number you are interested in, you cannot really calculate it. Instead, you use the number calculated from the sample, called a statistic, to estimate the parameter. Statistic – a number calculated from the sample. Usually denoted with a ^ (called a hat, for example $\hat{p}$ “p-hat”) or a – (called a bar, for example $\bar{x}$ “x-bar”) above the letter. Since most samples are not exactly the same, the statistic values are going to be different from sample to sample. Statistics estimate the value of the parameter, but again, you do not know for sure if your statistic is correctly estimating the parameter.
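The distinction between a fixed parameter and a varying statistic is easy to see in a small simulation. The sketch below assumes a Python environment with NumPy; the population, sample size, and seed are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# A made-up population of N = 100,000 measurements.
population = rng.normal(loc=50, scale=10, size=100_000)
mu = population.mean()  # the parameter: one fixed number

# Each sample of n = 60 produces a different statistic (x-bar).
for i in range(3):
    sample = rng.choice(population, size=60, replace=False)
    print(f"Sample {i + 1}: x-bar = {sample.mean():.2f}  (mu = {mu:.2f})")
```

Every run of the loop yields a slightly different sample mean, yet each one is an estimate of the same underlying parameter.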
Qualitative vs. Quantitative

Variables can be either quantitative or qualitative. Quantitative variables are numeric values that count or measure an individual. Qualitative variables are words or categories used to describe a quality of an individual. Qualitative variables are also called categorical variables and can sometimes have numeric responses that represent a category or word.

Qualitative or categorical variable – the answer is a word or name that describes a quality of the individual.

Quantitative or numerical variable – the answer is a number (quantity), something that can be counted or measured from the individual.

Each type of variable has different graphs, parameters and statistics that you find. Quantitative variables usually have a number line associated with graphical displays. Qualitative variables usually have a category name associated with graphical displays. Examples of quantitative variables are number of people per household, age, height, weight, and time (usually things we can count or measure). Examples of qualitative variables are eye color, gender, sports team, and yes/no (usually things that we can name).

When setting up survey questions it is important to know what statistical questions you would like the data to answer. For example, a company is trying to target the best age group to market a new game. They put out a survey with the ordinal age groupings: baby, toddler, adolescent, teenager, adult, and elderly. We could narrow down a range of ages for, say, teenagers to 13-19, although many 19-year-olds may record their response as an adult. The company wants to run an ad for the new game on television and they realize that 13-year-olds do not watch the same shows nor in the same time slots as 19-year-olds. To narrow down the age range the survey question could have just asked the person’s age. Then the company could look at a graph or average to decide more specifically that 17-year-olds would be the best target audience.

Types of Measurement Scales

There are four types of data measurement scales: nominal, ordinal, interval and ratio. Nominal data is categorical data that has no order or rank, for example the color of your car, ethnicity, race, or gender. Ordinal data is categorical data that has a natural order to it, for example, year in school (freshman, sophomore, junior, senior), a letter grade (A, B, C, D, F), the size of a soft drink (small, medium, large), or Likert scales. A Likert scale is a numeric scale that indicates the extent to which a respondent agrees or disagrees with a series of statements.

Interval data is numeric data where there is a known difference between values, but zero does not mean “nothing.” Interval data is ordinal, but you can now subtract one value from another and that subtraction makes sense. You can do arithmetic on this data. For example, consider Fahrenheit temperature: 0° is cold, but it does not mean that no temperature exists. Time, dates and IQ scores are other examples. Ratio data is numeric data that has a true zero, meaning when the variable is zero nothing is there. Most measurement data are ratio data. Some examples are height, weight, age, distance, or time running a race.

Here are some ways to help you decide if the data are nominal, ordinal, interval, or ratio. First, if the variable is words instead of numbers then it is either nominal or ordinal data. Now ask yourself if you can put the data in a particular order. If you can order the names then this is ordinal data. Otherwise, it is nominal data.
If the variable is numbers (not including words coded as numbers like Yes = 1 and No = 0), then it is either interval or ratio data. For ratio data, a value of zero means there is no measurement. This is known as the absolute zero. If there is an absolute zero in the data, then it is ratio. If there is no absolute zero, then the data are interval. An example of an absolute zero is if you have \$0 in your bank account, then you are without money. The amount of money in your bank account is ratio data.

Word of caution: sometimes ordinal data is displayed using numbers, such as 5 being strongly agree and 1 being strongly disagree. These numbers are not really numbers. Instead, they are used to assign numerical values to ordinal data. In reality, you should not perform any computations on this data, though many people do. If there are numbers, make sure the numbers are inherent numbers, and not numbers that were arbitrarily assigned.

Likert scales are frequently misused as quantitative data. Recall that a Likert scale indicates the extent to which a respondent agrees or disagrees with a series of statements. For example, consider the following 5-point Likert scale:

1. Strongly disagree
2. Disagree
3. Neither agree nor disagree
4. Agree
5. Strongly agree

Nominal and ordinal data are qualitative, while interval and ratio data are quantitative.

Likert scales are ordinal in that one can easily see that the larger number corresponds to a higher level of agreeableness. Some people argue that since there is a one-unit difference between the numeric values, Likert scales should be interval data. However, the number 1 is just a placeholder for someone who strongly disagrees. There is no way to quantify a one-unit difference between two different subjects that answered 1 or 2 on the scale. For example, one person’s response of strongly disagree could stem from the exact same reasoning behind another person’s response of disagree. People hold their views with different intensities, and those intensities are not quantifiable.

Discrete vs. Continuous

Quantitative variables are discrete or continuous. This difference will be important later on when we are working with probability. Discrete variables have gaps between points that are countable, usually integers, like the number of cars in a parking garage or how many people per household. A continuous variable can take on any value and is measurable, like height, time running a race, or the distance between two buildings. Usually, if you can count the variable then it is discrete, and if you can measure the variable then it is continuous. If you can actually count the number of outcomes (even if you are counting to infinity), then the variable is discrete.

Discrete variables can only take on particular values like integers. Discrete variables have outcomes you can count.

Continuous variables can take on any value. Continuous variables have outcomes you can measure.

For example, think of someone’s age. They may report in a survey an integer value like 28 years old. The person is not exactly 28 years old, though. From the time of their birth to the point in time that the survey respondent recorded, their age is a measurable number in some unit of time. A person’s true age has a decimal place that can keep going as far as the best clock can measure time. It is more convenient to round our age to an integer rather than 28 years, 5 months, 8 days, 14 hours, 12 minutes, 27 seconds, 5 milliseconds, or as a decimal 28.440206335775. Therefore, age is continuous.
However, a continuous variable like age could be broken into discrete bins. For example, instead of the question asking for a numeric response for a person’s age, it could have discrete age ranges where the survey respondent just checks a box.

1. Under 18
2. 18-24
3. 25-35
4. 36-45
5. 46-62
6. Over 62

When a survey question takes a continuous variable and chunks it into discrete categories, especially categories with different widths, you limit what type of statistics you can do on that data. Figure 1-2 is a breakdown of the different variable and data types.

Figure 1-2

Types of Sampling

If you want to know something about a population, it is often impossible or impractical to examine the entire population. It might be too expensive in terms of time or money to survey the population. It might be impractical: you cannot test all batteries for their length of lifetime because there would not be any batteries left to sell.

When you choose a sample, you want it to be as similar to the population as possible. If you want to test a new painkiller for adults, you would want the sample to include people of different weights, ages, etc. so that the sample would represent all the demographics of the population that would potentially take the painkiller. The more similar the sample is to the population, the better our statistical estimates will be in predicting the population parameters.

There are many ways to collect a sample. No sampling technique is perfect, and there is no guarantee that you will collect a representative sample. That is unfortunately the limitation of sampling. However, several techniques can result in samples that give you a semi-accurate picture of the population. Just remember to be aware that the sample may not be representative of the whole population. As an example, you can take a random sample of a group of people that are equally distributed across all income groups, yet by chance, everyone you choose is only in the high-income group. If this happens, it may be a good idea to collect a new sample if you have the time and money.

When setting up a study there are different ways to sample the population of interest. The five main sampling techniques are:

1. Simple Random Sample
2. Systematic Sample
3. Stratified Sample
4. Cluster Sample
5. Convenience Sample

A simple random sample (SRS) means selecting a sample of size n from the population so that every sample of the same size n has an equal probability of being selected as every other possible sample of the same size from that population. For example, we have a database of all PSU student data and we use a random number generator to randomly select students to receive a questionnaire on the type of transportation they use to get to school. See Figure 1-3.

Figure 1-3: Simple random sampling was used to randomly select the 18 cases. Retrieved from OpenIntroStatistics.

A stratified sample is where the population is split into groups called strata, then a random sample is taken from each stratum. For instance, we divide Portland by ZIP code and then randomly select n registered voters out of each ZIP code. See Figure 1-4.

Figure 1-4: Cases were grouped into strata, then simple random sampling was employed within each stratum. Retrieved from OpenIntroStatistics.

A cluster sample is where the population is split up into groups called clusters, then one or more clusters are randomly selected and all individuals in the chosen clusters are sampled.
Similar to the previous example, we split Portland up by ZIP code, randomly pick 5 ZIP codes and then sample every registered voter in those 5 ZIP codes. See Figure 1-5.

Figure 1-5: Data were binned into nine clusters, three of these clusters were sampled, and all observations within these three clusters were included in the sample. Retrieved from OpenIntroStatistics.

A systematic sample is where we list the entire population, then randomly pick a starting point at the nth object, and then take every nth value until the sample size is reached. For example, we alphabetize every PSU student and randomly choose the number 7. We would sample the 7th, 14th, 21st, 28th, 35th, etc. student.

A convenience sample is picking a sample that is conveniently at hand, for example, asking other students in your statistics course or using social media to take your survey. Most convenience samples will give biased views and are not encouraged.

There are many more types of sampling: snowball, multistage, voluntary, purposive, and quota sampling, to name some of the ways to sample from a population. We can also combine the different sampling methods. For example, we could stratify by rural, suburban and urban school districts, then take 3rd grade classrooms as clusters.

Guidelines for planning a statistical study

1. Identify the individuals that you are interested in studying. Realize that you can only make conclusions for these individuals. As an example, if you use a fertilizer on a certain genus of plant, you cannot say how the fertilizer will work on any other types of plants. However, if you diversify too much, then you may not be able to tell if there really is an improvement since you have too many factors to consider.
2. Specify the variable. You want to make sure the variable is something that you can measure, and make sure that you control for all other factors too. For example, if you are trying to determine if a fertilizer works by measuring the height of the plants on a particular day, you need to make sure you can control how much fertilizer you put on the plants (which is what we call a treatment), and make sure that all the plants receive the same amount of sunlight, water, and temperature.
3. Specify the population. This is important in order for you to know for whom and what conclusions you can make.
4. Specify the method for taking measurements or making observations.
5. Determine if you are taking a census or sample. If taking a sample, decide on the sampling method.
6. Collect the data.
7. Use appropriate descriptive statistics methods and make decisions using appropriate inferential statistics methods.
8. Note any concerns you might have about your data collection methods and list any recommendations for the future.

Observational vs. Experimental

This section is an introduction to experimental design: a brief introduction on how to design an experiment or a survey so that they are statistically sound. Experimental design is a very involved process, so this is just a small overview.

There are two types of studies:

1. An observational study is when the investigator collects data by observing, measuring, counting, watching or asking questions. The investigator does not change anything.
2. An experiment is when the investigator changes a variable or imposes a treatment to determine its effect.

For instance, if you were to poll students to see if they favor increasing tuition, this would be an observational study since you are asking a question and getting data.
Give a patient a medication that lowers their blood pressure. This is an experiment since you are giving the treatment and then getting the data.

Many observational studies involve surveys. A survey uses questions to collect the data and needs to be written so that there is no bias. Bias is the tendency of a statistic to incorrectly estimate a parameter. There are many ways bias can seep into statistics: sometimes we do not ask the correct question, do not give enough options for answers, survey the wrong people, or misinterpret data; bias can also come from sampling or measurement errors, or from unrepresentative samples.

In an experiment, there are different options to assign treatments.

1. Completely Randomized Experiment: In this experiment, the individuals are randomly placed into two or more groups. One group gets either no treatment or a placebo (a fake treatment); this group is called the control group. The groups getting the treatment are called the treatment groups. The idea of the placebo is that a person thinks they are receiving a treatment, but in reality, they are receiving a sugar pill or fake treatment. Doing this helps to account for the placebo effect, which is where a person’s mind makes their body respond to a treatment because they think they are taking the treatment when they are not really taking the treatment. Note, not every experiment needs a placebo, such as when using animals or plants. In addition, you cannot always use a placebo or no treatment. For example, if you are testing a new Ebola vaccination you cannot give a person with the disease a placebo or no treatment for ethical reasons.

2. Matched Pairs Design: This is a subset of the randomized block design where the treatments are given to two groups that can be matched up with each other in some way. One example would be to measure the effectiveness of a muscle relaxer cream on the right arm and the left arm of individuals, and then for each individual you can match up their right arm measurement with their left arm. Another example of this would be before and after experiments, such as the weight of a person before and the weight after a diet.

3. Randomized Block Design: A block is a group of subjects that are considered similar, or the same subject measured multiple times, but the blocks differ from each other. Then randomly assign treatments to subjects inside each block. For instance, a company has several new stitching methods for a soccer ball and would like to pick the ball that travels the fastest. We would expect variation in different soccer players’ abilities, which we do not want to affect our results. We randomly choose players to kick each of the new types of balls, where the order of the ball design is also randomized. Figure 1-6 shows blocking using a variable depicting patient risk. Patients are first divided into low-risk and high-risk blocks, then each block is evenly separated into the treatment groups using randomization. This strategy ensures an equal representation of patients in each treatment group from both the low-risk and high-risk categories.

4. Factorial Design: This design has two or more independent categorical variables called factors. Each factor has two or more different treatment levels. The factorial design allows the researcher to test the effect of the different factors simultaneously on the dependent variable.
For example, an educator believes that both the time of day (morning, afternoon, evening) and the way an exam is delivered (multiple-choice paper, short answer paper, multiple-choice electronic, short answer electronic) affect a student’s grade on their exam.

Figure 1-6: Retrieved from OpenIntroStatistics.

No matter which experiment type you conduct, you should also consider the following:

Replication: repetition of an experiment on more than one subject so you can make sure that the sample is large enough to distinguish true effects from random effects. It is also the ability for someone else to duplicate the results of the experiment.

A blind study is where the individual does not know which treatment they are getting or whether they are getting the treatment or a placebo.

A double-blind study is where neither the individual nor the researcher knows who is getting the treatment and who is getting the placebo. This is important so that there can be no bias in the results created by either the individual or the researcher.

One last consideration is the time period over which you are collecting the data. There are different time periods that you can consider.

Cross-sectional study: observational data collected at a single point in time.

Retrospective study: observational data collected from the past using records, interviews, and other similar artifacts.

Prospective (or longitudinal or cohort) study: subjects are measured from a starting point over time for the occurrence of the condition of interest.
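To make the sampling techniques from this section concrete, here is a minimal sketch using only Python's standard library; the population of 100 hypothetical students and the two strata are made up for the example.

```python
import random

random.seed(7)
students = [f"student_{i}" for i in range(1, 101)]  # hypothetical population

# Simple random sample (SRS): every possible group of 10 is equally likely.
srs = random.sample(students, k=10)

# Systematic sample: random starting point, then every 10th person.
start = random.randrange(10)
systematic = students[start::10]

# Stratified sample: split into strata, then take an SRS within each stratum.
strata = {"freshman": students[:50], "sophomore": students[50:]}
stratified = [person for stratum in strata.values()
              for person in random.sample(stratum, k=5)]

print(len(srs), len(systematic), len(stratified))  # 10 10 10
```

A cluster sample would instead split the list into groups, pick a few groups at random, and keep every member of the chosen groups.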
Chapter 1 Exercises

1. The dotplot shows the height of some 5-year-old children measured in inches. Use the distribution of heights to find the approximate answer to the question, “How many inches tall are 5-year-olds?”

“‘Alright,’ he said, ‘but where do we start? How should I know? They say the Ultimate Answer or whatever is Forty‐two, how am I supposed to know what the question is? It could be anything. I mean, what's six times seven?’ Zaphod looked at him hard for a moment. Then his eyes blazed with excitement. ‘Forty-two!’ he cried.” (Adams, 2002)

2. The dotplot shows the height of some 3-year-old children measured in cm. You are asked, “How many cm tall are 3-year-olds?”
   1. Is this a statistical question?
   2. Use the distribution of heights to approximate the answer for the question, “How many cm tall are 3-year-olds?”

3. What are statistics?
   1. A question with a variety of answers.
   2. A way to measure the entire population.
   3. The science of collecting, organizing, analyzing and interpreting data.
   4. A question from a survey.

4. What is a statistical question?
   1. A question where you expect to get a variety of answers and you are interested in the distribution and tendency of those answers.
   2. A question using reported statistics.
   3. A question on a survey.
   4. A question on a census.

5. Which of the following are statistical questions? Select all that apply.
   1. How old are you?
   2. What is the weight of a mouse?
   3. How tall are all 3-year-olds?
   4. How tall are you?
   5. What is the average blood pressure of adult men?

6. In 2010, the Pew Research Center questioned 1,500 adults in the United States to estimate the proportion of the population favoring marijuana use for medical purposes. It was found that 73% are in favor of using marijuana for medical purposes. Identify the individual, variable, population, sample, parameter and statistic.
   1. Percentage who favor marijuana for medical purposes calculated from sample.
   2. Set of 1,500 responses of United States adults who are questioned.
   3. All adults in the United States.
   4. Percentage who favor marijuana for medical purposes calculated from population.
   5. The response to the question “should marijuana be used for medical purposes?”
   6. An adult in the United States.

7. Suppose you want to estimate the percentage of videos on YouTube that are cat videos. It is impossible for you to watch all videos on YouTube so you use a random video picker to select 1,000 videos for you. You find that 2% of these videos are cat videos. Determine which of the following is an observation, a variable, a sample statistic, or a population parameter.
   1. Percentage of all videos on YouTube that are cat videos.
   2. A video in your sample.
   3. 2%
   4. Whether a video is a cat video.

8. A doctor wants to see if a new treatment for cancer extends the life expectancy of a patient versus the old treatment. She gives one group of 25 cancer patients the new treatment and another group of 25 the old treatment. She then measures the life expectancy of each of the patients. Identify the individual, variable, population, sample, parameter and statistic.
   1. Cancer patient given the new treatment and cancer patient given the old treatment.
   2. The two groups of 25 cancer patients given the old and new treatments.
   3. Average life expectancy of 25 cancer patients given the old treatment and average life expectancy of 25 cancer patients given the new treatment.
   4. Average life expectancy of all cancer patients given the old and new treatment.
   5. All cancer patients.
   6.
Life expectancy of the cancer patients.

9. The 2010 General Social Survey asked the question, “After an average workday, about how many hours do you have to relax or pursue activities that you enjoy?” to a random sample of 1,155 Americans. The average relaxing time was found to be 1.65 hours. Determine which of the following is an individual, a variable, a sample statistic, or a population parameter.
   1. Average number of hours all Americans spend relaxing after an average workday.
   2. 1.65
   3. An American in the sample.
   4. Number of hours spent relaxing after an average workday.

10. In a study, the sample is chosen by dividing the population by gender, and choosing 30 people of each gender. Which sampling method is used?

11. In a study, the sample is chosen by separating all cars by size, and selecting 10 of each size grouping. What is the sampling method?

12. In a study, the sample is chosen by writing everyone’s name on a playing card, shuffling the deck, then choosing the top 20 cards. What is the sampling method?

13. In a study, the sample is chosen by asking people on the street. What is the sampling method?

14. In a study, the sample is chosen by selecting a room of the house, and appraising all items in that room. What is the sampling method?

15. In a study, the sample is chosen by surveying every 3rd driver coming through a tollbooth. What is the sampling method?

16. Researchers collected data to examine the relationship between air pollutants and preterm births in Southern California. During the study air pollution levels were measured by air quality monitoring stations. Specifically, levels of carbon monoxide (CO) were recorded in parts per million, nitrogen dioxide (NO2) and ozone (O3) in parts per hundred million, and coarse particulate matter (PM10) in μg/m3. Length of gestation data were collected on 143,196 births between the years 1989 and 1993, and air pollution exposure during gestation was calculated for each birth. The analysis suggested that increased ambient PM10 and, to a lesser degree, CO concentrations may be associated with the occurrence of preterm births. [B. Ritz et al. “Effect of air pollution on preterm birth among children born in Southern California between 1989 and 1993.” In: Epidemiology 11.5 (2000), pp. 502–511.] In this study, identify the variables. Select all that apply.
   1. Ozone
   2. Carbon Monoxide
   3. PM10
   4. Preterm Births in California
   5. Length of Gestation
   6. 143,196 Births
   7. 1989-1993
   8. Nitrogen Dioxide

17. State whether each study is observational or experimental.
   1. You want to determine if cinnamon reduces a person’s insulin sensitivity. You give patients who are insulin sensitive a certain amount of cinnamon and then measure their glucose levels.
   2. A researcher wants to evaluate whether countries with lower fertility rates have a higher life expectancy. They collect the fertility rates and the life expectancies of countries around the world.
   3. A researcher wants to determine if diet and exercise together helps people lose weight over just exercising. The researcher solicits volunteers to be part of the study, and then randomly assigns the volunteers to be in the diet and exercise group or the exercise only group.
   4. You collect the weights of tagged fish in a tank. You then put an extra protein fish food in the water for the fish and then measure their weight a month later.

18. The Buteyko method is a shallow breathing technique developed by Konstantin Buteyko, a Russian doctor, in 1952.
Anecdotal evidence suggests that the Buteyko method can reduce asthma symptoms and improve quality of life. In a scientific study to determine the effectiveness of this method, researchers recruited 600 asthma patients aged 18-69 who relied on medication for asthma treatment. These patients were split into two research groups: one practiced the Buteyko method and the other did not. Patients were scored on quality of life, activity, asthma symptoms, and medication reduction on a scale from 0 to 10. On average, the participants in the Buteyko group experienced a significant reduction in asthma symptoms and an improvement in quality of life. [McGowan. “Health Education: Does the Buteyko Institute Method make a difference?” In: Thorax 58 (2003).] Which of the following is the main research question?
   1. The Buteyko method causes shallow breathing.
   2. The Buteyko method can reduce asthma symptoms and improve quality of life.
   3. Effectiveness of the Buteyko method.
   4. The patients’ scores on quality of life, activity, asthma symptoms and medication reduction.

19. Researchers studying the relationship between honesty, age and self-control conducted an experiment on 160 children between the ages of 5 and 15. Participants reported their age, sex, and whether they were an only child or not. The researchers asked each child to toss a fair coin in private and to record the outcome (white or black) on a paper sheet, and said they would only reward children who report white. Half the students were explicitly told not to cheat and the others were not given any explicit instructions. In the no instruction group, the probability of cheating was found to be uniform across groups based on the child’s characteristics. In the group that was explicitly told to not cheat, girls were less likely to cheat, and while the rate of cheating did not vary by age for boys, it decreased with age for girls. [Alessandro Bucciol and Marco Piovesan. “Luck or cheating? A field experiment on honesty with children.” In: Journal of Economic Psychology 32.1 (2011), pp. 73–78.] In this study, identify the variables. Select all that apply.
   1. Age
   2. Sex
   3. Paper Sheet
   4. Cheated or Not
   5. Reward for White Side of Coin
   6. White or Black Side of Coin
   7. Only Child or Not

20. Select the measurement scale Nominal, Ordinal, Interval or Ratio for each scenario.
   1. A person’s age.
   2. A person’s race.
   3. Age groupings (baby, toddler, adolescent, teenager, adult, elderly).
   4. Clothing brand.
   5. A person’s IQ score.
   6. Temperature in degrees Celsius.
   7. The amount of mercury in a tuna fish.

21. Select the measurement scale Nominal, Ordinal, Interval or Ratio for each scenario.
   1. Temperature in degrees Kelvin.
   2. Eye color.
   3. Year in school (freshman, sophomore, junior, senior).
   4. The weight of a hummingbird.
   5. The height of a building.
   6. The amount of iron in a person’s blood.
   7. A person’s gender.
   8. A person’s race.

22. State which type of variable each is, qualitative or quantitative?
   1. A person’s age.
   2. A person’s gender.
   3. The amount of mercury in a tuna fish.
   4. The weight of an elephant.
   5. Temperature in degrees Fahrenheit.

23. State which type of variable each is, qualitative or quantitative?
   1. The height of a giraffe.
   2. A person’s race.
   3. Hair color.
   4. A person’s ethnicity.
   5. Year in school (freshman, sophomore, junior, senior).

24. State whether the variable is discrete or continuous.
   1. A person’s weight.
   2. The height of a building.
   3. A person’s age.
   4. The number of floors of a skyscraper.
   5.
The number of clothing items available for purchase.

25. State whether the variable is discrete or continuous.
   1. Temperature in degrees Celsius.
   2. The number of cars for sale at a car dealership.
   3. The time it takes to run a marathon.
   4. The amount of mercury in a tuna fish.
   5. The weight of a hummingbird.

26. State whether each study is cross-sectional, retrospective or prospective.
   1. To see if there is a link between smoking and bladder cancer, patients with bladder cancer are asked if they currently smoke or if they smoked in the past.
   2. The Nurses Health Survey was a survey where nurses were asked to record their eating habits over a period of time, and their general health was recorded.
   3. A new study is underway to track the eating and exercise patterns of people at different time periods in the future, and see who is afflicted with cancer later in life.
   4. The prices of generic items are compared to the prices of the equivalent named brand items.

27. Which type of sampling method is used for each scenario, Simple Random Sampling (SRS), Systematic, Stratified, Cluster or Convenience?
   1. The quality control officer at a manufacturing plant needs to determine what percentage of items in a batch are defective. The officer chooses every 15th batch off the line and counts the number of defective items in each chosen batch.
   2. The local grocery store lets you survey customers during lunch hour on their preference for a new bottle design for laundry detergent.
   3. Put all names in a hat and draw a certain number of names out.
   4. The researcher randomly selects 5 hospitals in the United States then measures the cholesterol level of all the heart attack patients in each of those hospitals.

28. Which type of sampling method is used for each scenario, Simple Random Sampling (SRS), Systematic, Stratified, Cluster or Convenience?
   1. If you want to calculate the average price of textbooks, you could divide the individuals into groups by major and then conduct simple random samples inside each group.
   2. Obtain a list of patients who had surgery at a hospital. Divide the patients according to type of surgery. Draw simple random samples from each group.
   3. You want to measure whether a tree in the forest is infected with bark beetles. Instead of having to walk all over the forest, you divide the forest up into sectors, and then randomly pick the sectors that you will travel to. Then record whether a tree is infected or not for every tree in that sector.
   4. You select every 3rd customer that orders from your website.

Answers to Odd-Numbered Exercises

1) 42
3) 3
5) 2, 3, 5
7) 1. Population Parameter 2. Observation 3. Sample Statistic 4. Variable
9) 1. Population Parameter 2. Sample Statistic 3. Individual 4. Variable
11) Stratified
13) Convenience
15) Systematic
17) 1. Experimental 2. Observational 3. Experimental 4. Experimental
19) 1, 2, 4, 6, 7
21) 1. Ratio 2. Nominal 3. Ordinal 4. Ratio 5. Ratio 6. Ratio 7. Nominal 8. Nominal
23) 1. Quantitative 2. Qualitative 3. Qualitative 4. Qualitative 5. Qualitative
25) 1. Continuous 2. Discrete 3. Continuous 4. Continuous 5. Continuous
27) 1. Systematic 2. Convenience 3. Simple Random (SRS) 4. Cluster
Once a sample is collected, we can organize and present the data in tables and graphs. These tables and graphs help summarize, interpret and recognize characteristics within the data more easily than raw data. There are many types of graphical summaries. We will concentrate mostly on the ones that we can use technology to create.

A population is a collection of all the measurements from the individuals of interest. Remember, in most cases you cannot collect data on the entire population, so you have to take a sample. Now you have a large number of data values. What can you do with them? Just looking at a large set of numbers does not answer our questions. If we organize the data into a table or graph, we can see patterns in the data. Ultimately, though, you want to be able to use that table or graph to interpret the data, to describe the distribution of the data set, explore different characteristics of the data and make inferences about the original population.

Some characteristics to look for in tables and graphs:

1. Center: middle of the data set, also known as the average.
2. Variation: how spread out the data is.
3. Distribution: shape of the data.
4. Outliers: data values that are far from the majority of the data.
5. Time: changing characteristics of the data over time.

There is technology that will create most of the graphs you need, though it is important for you to understand the basics of how they are created.

Qualitative data are words describing a characteristic of the individual. Qualitative data are graphed using several different types of graphs: bar graphs, Pareto charts, and pie charts. Quantitative data are numbers that we count or measure. Quantitative data are graphed using stem-and-leaf plots, dotplots, histograms, ogives, and time series.

The bar-style graph for quantitative data, called a histogram, looks similar to a bar graph for qualitative data, except there are some major differences. First, in a bar graph the categories can be put in any order on the horizontal axis. There is no set order for these data values. You cannot say how the data is distributed based on the shape, since the shape can change just by putting the categories in different orders. With quantitative data, the data are in a specific order since you are dealing with numbers. With quantitative data, you can talk about a distribution; the shape changes depending on how many categories you set up. The shape shown by this kind of graph is called a frequency distribution.

This leads to the second difference from bar graphs. In a bar graph, the categories are determined by the name of the label. In quantitative data, the categories are numerical categories, and the frequencies are determined by how many categories (or what are called classes) you choose. Two authors graphing the same data may produce different-looking graphs simply by choosing different classes. The third difference is that with quantitative data the bars touch, and there will be no gaps in the graph. The reason that bar graphs have gaps is to show that the categories do not continue on, as they do in quantitative data.
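The contrast described above, gaps between bars for qualitative categories versus touching bars for ordered numeric classes, can be drawn with plotting software. The sketch below assumes a Python environment with Matplotlib; the car counts and the 35 ages preview the worked examples in the next section.

```python
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

# Bar graph for qualitative data: category order is arbitrary, bars have gaps.
ax1.bar(["Chevy", "Ford", "Honda", "Nissan", "Toyota"], [12, 5, 6, 10, 12])
ax1.set_title("Bar graph (qualitative data)")

# Histogram for quantitative data: numeric classes in order, bars touch.
ages = [46, 47, 49, 25, 46, 22, 42, 32, 39, 24, 46, 40, 39, 27, 25,
        30, 31, 29, 33, 27, 46, 21, 29, 20, 26, 39, 26, 25, 25, 26,
        35, 49, 33, 26, 30]
ax2.hist(ages, bins=range(20, 55, 5), edgecolor="black")
ax2.set_title("Histogram (quantitative data)")

plt.tight_layout()
plt.show()
```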
Frequency Tables for Quantitative Data

To create many of these graphs, you must first create the frequency distribution. The idea of a frequency distribution is to take the interval that the data spans and divide it into equal-sized subintervals called classes. The grouped frequency distribution gives either the frequency (count) or the relative frequency (usually expressed as a percent) of individuals who fall into each class.

When creating frequency distributions, it is important to note that the number of classes that are used and the value of the first class boundary will change the shape of, and hence the impression given by, the distribution. There is no one correct width for the class boundaries. It usually takes several tries to create a frequency distribution that looks just the way you want. As a reader and interpreter of such tables, you should be aware that such features are selected so that the table looks the way it does to show a particular point of view. For small samples, we usually have between four and seven classes, and as you get more data you will need more classes.

We will start with an example of a random sample of 35 ages from credit card applications given below in no particular order. Organize the data in a frequency distribution table.

46 47 49 25 46 22 42
32 39 24 46 40 39 27
25 30 31 29 33 27 46
21 29 20 26 39 26 25
25 26 35 49 33 26 30

Solution

As you can see, the data is hard to interpret in this format. If we were to peruse through the data we could find that the minimum age is 20 and the maximum age is 49. If we cut the age groups up into 10-year intervals, we get only three classes: 20-29, 30-39 and 40-49. Although this would work, with more classes we can sometimes see trends within the data at a more granular level. If we split the ages up into 5-year intervals (20-24, 25-29, 30-34, 35-39, 40-44, and 45-49), we would get six classes. Note that the class limits should never overlap; we call these mutually exclusive classes. For example, if your classes went from 20-25, 25-30, etc., the 25-year-olds would fall within both classes. Also, make sure each of the classes has the same width.

A more formal way to pick your classes uses the following process. Steps involved in making a frequency distribution table:

1. Find the range = largest value – smallest value.
2. Pick the number of classes to use. Usually the number of classes is between five and twenty. Five classes are used if there are a small number of data points and twenty classes if there are a large number of data points (over 1,000 data points).
3. Class width = $\frac{\text{range}}{\text{number of classes}}$. Always round up to the next integer (if the answer is already a whole number, go to the next integer). If you do not round up, your last class will not contain your largest data value, and you would have to add another class just for it. If you round up, then your largest data value will fall in the last class, and there are no issues.
4. Create the classes. Each class has limits that determine which values fall in each class. To find the class limits, set the smallest value in the data set as the lower class limit for the first class. Then add the class width to the lower class limit to get the next lower class limit. Repeat until you get all the classes. The upper class limit for a class is one less than the lower limit for the next class.
5. If your data values have decimal places, then round up the class width to the nearest value with the same number of decimal places as the original data.
As an example, if your data were out to two decimal places and you divided your range by the number of classes to get 4.8333, then the class width would round the second decimal place up and end on 4.84.

6. The frequency for a class is the number of data values that fall in the class.

For the age data let us use 6 classes. Find the range by taking 49 – 20 = 29 and divide this by the number of classes: 29/6 = 4.8333. Round this number up to 5 and use 5 for the class width. Once you determine your class width and class limits, place each of the classes in a table and then tally up the ages that fall within each class. Count the tally marks and record the number in the frequency table. The total of the frequency column should be the number of observations in the data. You may want to total the frequencies to make sure you did not leave any of the numbers out of the table.

Class  Frequency
20-24  4
25-29  12
30-34  6
35-39  4
40-44  2
45-49  7
Total  35

Figure 2-1

Using the frequency table, we can now see that there are more people in the 25-29 year-old class, followed by the 45-49 year-old class. We call this most frequent category the modal class. There may be no mode at all or more than one mode.

Frequency Tables for Qualitative Data

A frequency distribution can also be made for qualitative data. Suppose you have the following data for the type of car students at a college drive. Make a frequency table to summarize the data.

Ford Honda Nissan Chevy Chevy Chevy Chevy Toyota Chevy Saturn
Honda Toyota Nissan Honda Toyota Toyota Nissan Ford Toyota Chevy
Toyota Ford Chevy Chevy Chevy Nissan Toyota Toyota Ford Nissan
Kia Nissan Nissan Nissan Honda Nissan Mercedes Honda Toyota Toyota
Chevy Chevy Porsche Chevy Toyota Toyota Ford Hyundai Honda Nissan

Solution

The list of data is hard to analyze, so you need to summarize it. The classes in this case are the car brands. However, several car brands only have one car in the list. In that case, it is easier to make a category called “other” for the categories with low frequencies. Count how many of each type of car there are: there are 12 Chevys, 5 Fords, 6 Hondas, 10 Nissans, 12 Toyotas, and 5 other brands (Hyundai, Kia, Mercedes, Porsche, and Saturn). Place the brands into a frequency distribution table alphabetically:

Category  Frequency
Chevy     12
Ford      5
Honda     6
Nissan    10
Toyota    12
Other     5
Total     50

For nominal data, either alphabetize the classes or arrange the classes from most frequent to least frequent, with the “other” category always at the end. For ordinal data, put the classes in their order with the “other” category at the end.

Relative Frequency Tables

Frequencies by themselves are not as useful for telling other people what is going on in the data. If you want to know what percentage the category is of the total sample, then we can use the relative frequency of each category. The relative frequency is just the frequency divided by the total. The relative frequency is the proportion in each category and may be given as a decimal, percentage, or fraction. Using the car data’s frequency distribution, we will create a third column labeled relative frequency. Take each frequency and divide by the sample size; see Figure 2-2. The relative frequencies should add up to one (ignoring rounding error).
Type of Car  Frequency  Relative Frequency
Chevy        12         12/50 = 0.24
Ford         5          5/50 = 0.1
Honda        6          6/50 = 0.12
Nissan       10         10/50 = 0.2
Toyota       12         12/50 = 0.24
Other        5          5/50 = 0.1
Total        50         1

Figure 2-2

Many people understand percentages better than proportions, so we may want to multiply each of these decimals by 100% to get the following relative frequency percent table.

Type of Car  Percent
Chevy        24%
Ford         10%
Honda        12%
Nissan       20%
Toyota       24%
Other        10%
Total        100%

Figure 2-3

We can summarize the car data and see that for college students Chevy and Toyota make up 48% of the car models.

Excel

Recall the frequency table for the credit card applicants. The relative frequency table for the random sample of 35 ages from credit card applications follows.

Class  Frequency  Relative Frequency
20-24  4          4/35 = 0.1143
25-29  12         12/35 = 0.3429
30-34  6          6/35 = 0.1714
35-39  4          4/35 = 0.1143
40-44  2          2/35 = 0.0571
45-49  7          7/35 = 0.2
Total  35         1

Making a relative frequency table using Excel

Solution

In Excel, type your frequencies into a column and then in the next column type =(cell reference number)/35. Then copy and paste the formula in cell B2 down the page. If you used the cell reference number, Excel will automatically change the copied cells to the next row down. You get the following relative frequency table. The sum of the relative frequencies will be one (the sum may add to 0.9999 or 1.0001 if you are doing the calculations by hand due to rounding). To get Excel to show the percentage instead of the proportion, highlight the relative frequencies and select the percent % button on the Home tab.

Class  Frequency  Relative Frequency  Relative Frequency Percent
20-24  4          0.1143              11%
25-29  12         0.3429              34%
30-34  6          0.1714              17%
35-39  4          0.1143              11%
40-44  2          0.0571              6%
45-49  7          0.2                 20%
Total  35         1                   100%

The relative frequency table lets us quickly see that a little more than half (34% + 20% = 54%) of the ages of the credit card holders are between 25-29 and 45-49 years old.

Cumulative & Cumulative Relative Frequency Tables

Another useful piece of information is how many data points fall below a particular class. As an example, a teacher may want to know how many students received below a 70%, a doctor may want to know how many adults have cholesterol above 160, or a manager may want to know how many stores gross less than \$2,000 per day. This calculation is known as a cumulative frequency and is used for ordinal or quantitative data. If you want to know what percent of the data falls below a certain class, then this fact would be a cumulative relative frequency.

To create a cumulative frequency distribution, count the number of data points that are below the upper class limit, starting with the first class and working up to the top class. The last upper class should have all of the data points below it.

Recall the credit card applicants. Make a cumulative frequency table.

Class  Frequency  Relative Frequency Percent
20-24  4          11%
25-29  12         34%
30-34  6          17%
35-39  4          11%
40-44  2          6%
45-49  7          20%
Total  35         100%

Solution

To find the cumulative frequency, carry over the first frequency of 4 to the first row of the cumulative frequency column. Then take this 4 and add it to the next frequency of 12 to get 16 for the second cumulative frequency value. For the third cumulative frequency, take 16 and add the third frequency of 6 to get 22. Keep doing this additive process until you finish the column. The cumulative frequency in the last class should equal the sample size.
You can also express the cumulative relative frequencies as a percentage instead of a proportion.

Class   Cumulative Frequency   Cumulative Relative Frequency (%)
20-24   4                      11
25-29   16                     46
30-34   22                     63
35-39   26                     74
40-44   28                     80
45-49   35                     100

If a manager wanted to know how many applicants were under the age of 40, we could look across the 35-39 year-old class to see that there were 26 applicants under 40 (39 years old or younger); or, using the cumulative relative frequency, 74% of the applicants were under 40 years old. If the manager wants to know what percent of applicants were over 44, we could subtract: 100% – 80% = 20%.

Contingency Tables

A contingency table provides a way of portraying data that can facilitate calculating probabilities. A contingency table summarizes the frequency of two qualitative variables. The table displays sample values in relation to two different variables that may be dependent or contingent on one another. There are other names for contingency tables: Excel calls the contingency table a pivot table, and other common names are two-way table, cross-tabulation, or cross-tab for short.

A fitness center coach kept track of members over the last year. They recorded whether the person stretched before they exercised, and whether they sustained an injury. The following contingency table shows their results. Find the relative frequency for each value in the table.

                  Injury   No Injury
Stretched         52       270
Did Not Stretch   21       57

Solution
Each value in the table represents the number of times a particular combination of variable outcomes occurred; for example, there were 57 members who did not stretch and did not sustain an injury. It is helpful to total up the categories. The row totals provide the total counts across each row (i.e. 52 + 270 = 322), and column totals are total counts down each column (i.e. 52 + 21 = 73). See Figure 2-4.

                  Injury   No Injury   Total
Stretched         52       270         322
Did Not Stretch   21       57          78
Total             73       327         400

Figure 2-4

We can quickly summarize the number of members in each category. The bottom right-hand number in Figure 2-4 is the grand total and represents the total number of 400 people. There were 322 people who stretched before exercising, there were 73 people who sustained an injury while exercising, etc. If we find the relative frequency for each value in the table, we can find the proportion of the 400 people in each category. To find a relative frequency, divide each value in the table by the grand total. See Figure 2-5.

                  Injury                               No Injury                              Total
Stretched         52/400 = 0.13                        270/400 = 0.675                        322/400 = 0.805 (or 0.13 + 0.675)
Did Not Stretch   21/400 = 0.0525                      57/400 = 0.1425                        78/400 = 0.195 (or 0.0525 + 0.1425)
Total             73/400 = 0.1825 (or 0.13 + 0.0525)   327/400 = 0.8175 (or 0.675 + 0.1425)   400/400 = 1 (the sum of either the row or column totals)

Figure 2-5

When data is collected, it is usually presented in a spreadsheet where each row represents the responses from an individual or case.
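Dividing every cell by the grand total is mechanical, so it scripts easily. A minimal Python sketch (our own illustration) reproducing the proportions in Figure 2-5 from the counts in Figure 2-4:

```python
# Rows: stretched, did not stretch. Columns: injury, no injury.
table = [[52, 270],
         [21, 57]]

grand_total = sum(sum(row) for row in table)   # 400

# Relative frequency of each cell: count / grand total.
rel = [[count / grand_total for count in row] for row in table]
print(rel)   # [[0.13, 0.675], [0.0525, 0.1425]]

row_totals = [sum(row) for row in rel]          # about [0.805, 0.195]
col_totals = [sum(col) for col in zip(*rel)]    # about [0.1825, 0.8175]
print(row_totals, col_totals)  # the margins match Figure 2-5
```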
“Of course, one never has the slightest notion what size or shape different species are going to turn out to be, but if you were to take the findings of the latest Mid‐Galactic Census report as any kind of accurate Guide to statistical averages you would probably guess that the craft would hold about six people, and you would be right. You'd probably guessed that anyway. The Census report, like most such surveys, had cost an awful lot of money and didn't tell anybody anything they didn't already know - except that every single person in the Galaxy had 2.4 legs and owned a hyena. Since this was clearly not true the whole thing had eventually to be scrapped.” (Adams, 2002)

Make a pivot table using Excel. A random sample of 500 records from the 2010 United States Census was downloaded to Excel. Below is an image of just the first 20 people. There are seven variables:
• State
• Total family income (in United States dollars)
• Age
• Biological sex (with reported categories female and male)
• Race (with reported categories American Indian or Alaska Native, Black, Chinese, Japanese, Other Asian or Pacific Islander, Two major races, White, Other)
• Marital status (with reported categories Divorced, Married/spouse absent, Married/spouse present, Never married/single, Separated, Widowed)
• Total personal income (in United States dollars)

Solution
In Excel, select the Insert tab, then select Pivot Table. Excel should automatically select all 500 rows in the Table/Range cell; if not, use your mouse and highlight all the data including the labels. Then select OK. Each version of Excel may look different at this point. One common area, though, is the bottom right-hand drag-and-drop area of the Pivot Table dialogue box. Drag the sex variable to the COLUMNS box and the marital status variable to the ROWS box. You will see the contingency table column and row headers appear as you drop the variables in the boxes. To get the counts to appear, drag and drop marital status into the Values box; the default will usually say "Count of maritalStatus," and if not, change it to count in the drop-down menu. A contingency table should appear on your spreadsheet as you fill in the pivot table dialogue box.

Count of Marital Status   Female   Male   Grand Total
Divorced                  21       17     38
Married/spouse absent     5        9      14
Married/spouse present    92       100    192
Never married/single      93       129    222
Separated                 1        2      3
Widowed                   20       11     31
Grand Total               232      268    500
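Excel's pivot table has a direct analogue in code. A minimal pandas sketch (our own illustration; the file name is a hypothetical stand-in for the census download, and the "sex" column name is assumed alongside the spreadsheet's "maritalStatus"):

```python
import pandas as pd

# Hypothetical file name standing in for the 500-record census sample.
df = pd.read_csv("census_sample_500.csv")

# Cross-tabulate marital status (rows) by sex (columns), with totals.
counts = pd.crosstab(df["maritalStatus"], df["sex"],
                     margins=True, margins_name="Grand Total")
print(counts)

# normalize="all" divides every cell by the grand total, giving the
# relative frequencies (a table like Figure 2-5) instead of counts.
props = pd.crosstab(df["maritalStatus"], df["sex"], normalize="all")
print(props.round(3))
```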
Statistical graphs are useful in getting the audience's attention in a publication or presentation. Data presented graphically is easier to summarize at a glance than frequency distributions or numerical summaries. Graphs are useful to reinforce a critical point, summarize a data set, or discover patterns or trends over a period of time.

Florence Nightingale (1820-1910) was one of the first people to use graphical representations to present data. Nightingale was a nurse in the Crimean War and used a type of graph that she called a polar area diagram, or coxcomb, to display mortality figures for contagious diseases such as cholera and typhus. Nightingale-mortality.jpg. (2021, May 18). Wikimedia Commons, the free media repository. Retrieved July 2021 from https://commons.wikimedia.org/w/index.php?title=File:Nightingale-mortality.jpg&oldid=561529217.

With the onset of technology, it is hard to provide a complete overview of the most recent developments in data visualization. The development of a variety of highly interactive software has accelerated the pace and variety of graphical displays across a wide range of disciplines.

2.3.1 Stem-and-Leaf Plot

Stem-and-leaf plots (or stemplots) are a useful way of getting a quick picture of the shape of a distribution by hand. Turn the graph sideways and you can see the shape of your data and easily identify outliers. Each observation is divided into two pieces: the stem and the leaf. If the number has just two digits, then the stem is the tens digit and the leaf is the ones digit. When a number has more than two digits, the cut point should split the data into enough classes that it is useful for seeing the shape of the data.

To create a stem-and-leaf plot:
1. Separate each observation into a stem and a leaf.
2. Write the stems in a vertical column in ascending order (from smallest to largest). Fill in missing numbers even if there are gaps in the data. Draw a vertical line to the right of this column.
3. Write each leaf in the row to the right of its stem, in increasing order.

Create a stem-and-leaf plot for the sample of 35 ages.

46 47 49 25 46 22 42 24 46 40 39 27 25 30 33 27 46 21 29 20 26 25 25 26 35 49 33 26 32 31 39 30 39 29 26

Solution
Divide each number so that the tens digit is the stem and the ones digit is the leaf. The smallest observation is 20: the stem = 2 and the leaf = 0. The next value is 21, with stem = 2 and leaf = 1, and so on up to the last value of 49, which has stem = 4 and leaf = 9. If we use the tens categories, we have the stems 2, 3 and 4. Line up the stems without skipping a number, even if there are no values in that stem; in other words, the stems should have equal spacing (for example, count by ones, tens, hundreds, thousands, etc.). Then place a vertical line to the right of the stems. In each row, put the leaves with a space between each leaf, sorted from smallest to largest. In Figure 2-6, 2 | 0 = 20.

\begin{array}{l|llllllllllllllll} 2 & 0 & 1 & 2 & 4 & 5 & 5 & 5 & 5 & 6 & 6 & 6 & 6 & 7 & 7 & 9 & 9 \\ 3 & 0 & 0 & 1 & 2 & 3 & 3 & 5 & 9 & 9 & 9 \\ 4 & 0 & 2 & 6 & 6 & 6 & 6 & 7 & 9 & 9 \end{array}

Figure 2-6

It is hard to see the shape with so few classes and so many leaves in each class. We can break each stem in half, putting leaves 0-4 in the first row and 5-9 in the second row, as in Figure 2-7.
\begin{array}{l|llllllllllll} 2 & 0 & 1 & 2 & 4 \\ 2 & 5 & 5 & 5 & 5 & 6 & 6 & 6 & 6 & 7 & 7 & 9 & 9 \\ 3 & 0 & 0 & 1 & 2 & 3 & 3 \\ 3 & 5 & 9 & 9 & 9 \\ 4 & 0 & 2 \\ 4 & 6 & 6 & 6 & 6 & 7 & 9 & 9 \end{array}

Figure 2-7

Now, add labels and make sure the leaves are in ascending order. Be careful to line the leaves up in columns; you need to be able to compare the lengths of the rows when you interpret the graph. Imagine lines around the leaves and turn the graph 90 degrees to the left. You can now see in Figure 2-8 the shape of the distribution. Note that Excel uses the upper class limit for the axis label. Figure 2-8

If a leaf represents more than the ones place, supply a footnote at the bottom of the plot giving the units.

A small sample of house prices in thousands of dollars was collected: 375, 189, 432, 225, 305, 275. Make a stem-and-leaf plot.

Solution
If we were to split the stem and leaf between the ones and tens place, then we would need stems going from 18 up to 43. Twenty-six stems for only six data points is too many. The next break for a stem would be between the tens and hundreds places. This gives stems from 1 to 4, and each leaf is then the ones and tens digits. For example, the number 375 would have stem = 3 and leaf = 75.

\begin{array}{l|ll} 1 & 89 \\ 2 & 25 & 75 \\ 3 & 05 & 75 \\ 4 & 32 \end{array}

Leaf = $1,000

A small sample of coffee prices was collected: 3.75, 1.89, 4.32, 2.25, 3.05, 2.75. Make a stem-and-leaf plot.

Solution
\begin{array}{l|ll} 1 & 89 \\ 2 & 25 & 75 \\ 3 & 05 & 75 \\ 4 & 32 \end{array}

Leaf = $0.01

Note that the last two stem-and-leaf plots look identical except for the footnote. It is important to include units in a legend so that people know what the stems and leaves mean.

Back-to-back stem-and-leaf plots let us compare two data sets on the same number line. The two samples share the same set of stems. The sample on the left is written backward, from largest leaf to smallest leaf, and the sample on the right has leaves from smallest to largest.

Use the following back-to-back stem-and-leaf plot to compare pulse rates before and after exercise.

Solution
The group on the left has leaves going in descending order and represents the pulse rates before exercise. The stems are in the middle column. The group on the right has leaves going in ascending order and represents the pulse rates after exercise. The first row has pulse rates of 62, 65, 66, 67, 68, 68 and 69. The last row has pulse rates of 124, 125, and 128.
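Building a stemplot amounts to grouping each value by its leading digit and sorting the trailing digit within each group. A minimal Python sketch of that idea (our own illustration, using the 35 ages):

```python
from collections import defaultdict

ages = [46, 47, 49, 25, 46, 22, 42, 24, 46, 40, 39, 27, 25, 30, 33, 27, 46,
        21, 29, 20, 26, 25, 25, 26, 35, 49, 33, 26, 32, 31, 39, 30, 39, 29, 26]

# Stem = tens digit, leaf = ones digit.
stems = defaultdict(list)
for age in sorted(ages):
    stems[age // 10].append(age % 10)

# Print every stem, without skipping, so the spacing stays equal.
for stem in range(min(stems), max(stems) + 1):
    leaves = " ".join(str(leaf) for leaf in stems.get(stem, []))
    print(f"{stem} | {leaves}")
# 2 | 0 1 2 4 5 5 5 5 6 6 6 6 7 7 9 9
# 3 | 0 0 1 2 3 3 5 9 9 9
# 4 | 0 2 6 6 6 6 7 9 9
```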
2.3.2 Histogram

A histogram is a graph for quantitative data (the corresponding graph for qualitative data is the bar graph). The data is divided into a number of classes. The class limits become the horizontal axis, demarcated with a number line, and the vertical axis is either the frequency or the relative frequency of each class. Figure 2-9 is an example of a histogram.

The histogram for quantitative data looks similar to a bar graph, except there are some major differences. First, in a bar graph the categories can be put in any order on the horizontal axis; there is no set order for nominal data. You cannot say how the data is distributed based on the shape, since the shape can change just by putting the categories in different orders. With quantitative data, the data are in a specific order, since you are dealing with numbers, and you can talk about a distribution shape. This leads to the second difference. In a bar graph, the categories in the frequency table are the words used for the category names, whereas in quantitative data the categories are numerical intervals, and the number of categories is determined by how many classes you choose. If two people use the same number of classes, they will have the same frequency distribution; with qualitative data, there can be many different categories depending on the point of view of the author. The third difference is that the bars touch with quantitative data, so there are no gaps in the graph. The reason that bar graphs have gaps is to show that the categories do not continue on as they do in quantitative data. Since the graph for quantitative data is different from the one for qualitative data, it is given the different name of histogram.

Some key features of a histogram:
• Equal spacing on each axis
• Bars are the same width
• Label each axis and title the graph
• Show the scale on the frequency axis
• Label the categories on the category axis
• The bars should touch at the class boundaries

To create a histogram, you must first create a frequency distribution. Software and calculators can create histograms easily when a large amount of sample data is being analyzed.

Excel
To create a histogram in Excel you will need to first install the Data Analysis tool. If Data Analysis is not showing in the Data tab, follow the directions for installing the free add-in here: https://support.office.com/en-us/article/Load-the-Analysis-ToolPak-in-Excel-6a63e598-cd6d-42e3-9317-6b40ba1a66b4. Type the data into one blank column in any order. If you want class widths other than Excel's default setting, type in a new column the right endpoints of each class found in your frequency distribution; these are called the bins in Excel.

Using the sample of 35 ages, make a histogram using Excel.

46 47 49 25 46 22 42 24 46 40 39 27 25 30 33 27 46 21 29 20 26 25 25 26 35 49 33 26 32 31 39 30 39 29 26

Solution
Type the data in any order into column A and the bins in order into column B as shown below. Then select the Data tab, select Data Analysis, select Histogram, then select OK. In the dialogue box, click into the Input Range box, then use your mouse and highlight the ages including the label. Then click into the Bin Range box and use your mouse to highlight the bins including the label. Select the box for Labels only if you included the labels in your ranges. You can have your output default to a new worksheet, or select the circle to the left of Output Range, click into the box to the right of Output Range, and then select one blank cell on your spreadsheet where you want the top left-hand corner of your table and graph to start. Then check the boxes next to Cumulative Percentage and Chart Output. Then select OK, and see below. A histogram needs to have bars that touch, which is not the default in Excel. To get the bars to touch, right-click on one of the blue bars, select Format Data Series, and slide the Gap Width to 0%.

Excel produces both a frequency table and a histogram. The table has the frequencies and the cumulative relative frequencies.

Bin    Frequency   Cumulative %
24     4           11.43%
29     12          45.71%
34     6           62.86%
39     4           74.29%
44     2           80.00%
49     7           100.00%
More   0           100.00%

The histogram has bars for the height of each frequency and then a line graph of the cumulative relative frequencies over the bars. This red line is a line graph of the cumulative relative frequencies, also called an ogive, and is discussed in a later section.
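Outside of Excel, a histogram is a single plotting call once you specify the class edges. A minimal matplotlib sketch (our own illustration), with edges chosen to match the 5-year classes of the frequency table:

```python
import matplotlib.pyplot as plt

ages = [46, 47, 49, 25, 46, 22, 42, 24, 46, 40, 39, 27, 25, 30, 33, 27, 46,
        21, 29, 20, 26, 25, 25, 26, 35, 49, 33, 26, 32, 31, 39, 30, 39, 29, 26]

# Edges give the classes 20-24, 25-29, ..., 45-49 (the last bin includes 50).
edges = [20, 25, 30, 35, 40, 45, 50]

plt.hist(ages, bins=edges, edgecolor="black")  # bars touch by default
plt.xlabel("Age (years)")
plt.ylabel("Frequency")
plt.title("Ages of 35 Credit Card Applicants")
plt.show()
```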
It is important to note that the number of classes used and the value of the first class boundary will change the shape of the histogram. A relative frequency histogram uses the relative frequencies for the vertical axis instead of the frequencies, so the y-axis represents a proportion (percent) instead of the number of people. In Excel, after you create your histogram, you can manually change the frequency column to the relative frequency values by dividing each number by the sample size. Here is a screenshot just as the last number was changed; note that as soon as you hit Enter, the bars will shrink and adjust. After the last value =7/35 was entered and the label changed to Relative Frequency, you get the following graph. The shape of the histogram is the same for the relative frequency distribution as for the frequency distribution; the height, though, is the proportion instead of the frequency.

TI-84: To make a histogram, enter the data by pressing [STAT]. The first option is already highlighted (1:Edit), so you can either press [ENTER] or [1]. Make sure the cursor is in the list, not on the list name, and type the desired values, pressing [ENTER] after each one. Press [2nd] [QUIT] to return to the home screen. To clear a previously stored list of data values, arrow up to the list name you want to clear, press [CLEAR], and then press [ENTER]. An alternative way: press [STAT], press [4] for 4:ClrList, press [2nd], then press the number key corresponding to the data list you wish to clear (for example, [2nd] [1] will clear L1), then press [ENTER]. After you enter the data, press [2nd] [STAT PLOT]. Select the first plot by pressing [ENTER] or the number [1:Plot 1]. Turn the plot [On] by moving the cursor to On and pressing [ENTER]. Select the Histogram option using the right arrow keys. Select [ZOOM], then [ZoomStat]. You can see and change the class width by selecting [WINDOW], then changing the minimum x value Xmin=20, the maximum x value Xmax=50, the x-scale Xscl=5, the minimum y value Ymin=-6.5, and the maximum y value Ymax=14. Select the [GRAPH] button. We get a similar-looking histogram compared to the stem-and-leaf plot and the Excel histogram. Select the [TRACE] button to see the height of each bar and the classes.

TI-89: First, enter the data into the Stat/List Editor under list1. Press [APPS], then scroll down to Stat/List Editor; on the older style TI-89 calculators, go into the Flash/App menu and then scroll down the list. Make sure the cursor is in the list, not on the list name, and type the desired values, pressing [ENTER] after each one. To clear a previously stored list of data values, arrow up to the list name you want to clear, press [CLEAR], and then press [ENTER]. After you enter the data, press [F2] Plots, scroll down to [1: Plot Setup], and press [ENTER]. Select [F1] Define. Use your arrow keys to select Histogram for Type, and then scroll down to the x-variable box. Press [2nd] [Var-Link] (this key is above the [+] sign). Then arrow down until you find your list1 name under the Main file folder. Then press [ENTER] and this will bring the name list1 back to the menu. You will now see that Plot1 has a small picture of a histogram. To view the histogram, select [F5] [Zoom Data]. The histogram looks a little different from Excel's; you can change the settings for the bucket width to match your table. Press [♦] [F2:Window]. Change the minimum x value xmin=20, the maximum x value xmax=50, the x-scale xscl=5, the minimum y value ymin=-6.5, and the maximum y value ymax=14.
Then press the [♦] [F3:GRAPH] button. Select [F3:Trace] to see the frequency for each bar. Then use your left and right arrow keys to move to the other bars.

Make a histogram for the following random sample of student rent prices using Excel.

1500 1350 350 1200 850 900 1500 1150 1500 900 1400 1100
1250 600 610 960 890 1325 900 800 2550 495 1200 690

Solution
Start by making a relative frequency distribution table with 7 classes.
1. Find the range: largest value – smallest value = 2550 – 350 = 2200, so the range is $2,200.
2. Find the class width: width = $\frac{\text{range}}{7} = \frac{2200}{7} \approx 314.286$. Round up to 315. Always round up to the next integer, even if the width is already an integer.
3. Find the class limits: Start at the smallest observation. This is the lower class limit for the first class. Add the class width to get the lower limit of the next class. Keep adding the class width to get all the lower limits: 350 + 315 = 665, 665 + 315 = 980, 980 + 315 = 1295, etc. The upper limit is one unit less than the next lower limit, so for the first class the upper class limit would be 665 – 1 = 664. When you have all 7 classes, make sure the last upper class limit, in this case 2554, is at least as large as the largest value in the data (here 2550). If not, you made a mistake somewhere.

Using Excel: Type the raw data into column A and the right-hand class endpoints for the bins into column B. Select Data, Data Analysis, Histogram. Select the Input Range, Bin Range, Labels (if you selected them), an output option, Chart Output, then OK. See the finished histogram in Figure 2-13. Figure 2-10

By hand: Tally and find the frequency of the data.

Frequency Distribution for Monthly Rent
Class Limits   Frequency   Relative Frequency
350-664        4           0.1667
665-979        8           0.3333
980-1294       5           0.2083
1295-1609      6           0.25
1610-1924      0           0
1925-2239      0           0
2240-2554      1           0.0417
Total          24          1

Figure 2-11

Make sure the total of the frequencies is the same as the number of data points and the total of the relative frequencies is one. Since we want the bars on the histogram to touch, the number line needs to use the class boundaries, which are halfway between the endpoints of adjacent class limits. Start by finding the distance between the class endpoints and divide by two: (665 – 664)/2 = 0.5. Then subtract 0.5 from the lower limit of each class (and add 0.5 to the last upper limit); this gives the points to use on the x-axis: 349.5, 664.5, 979.5, 1294.5, 1609.5, 1924.5, 2239.5, and 2554.5. Then draw your graph as in Figure 2-12. You can use frequencies or relative frequencies for the y-axis. Figure 2-12 Figure 2-13

Reviewing the graph in Figure 2-13, you can see that most of the students pay around $750 per month for rent, with about $1,500 being the other common value. Most students pay between $600 and $1,600 per month for rent. Of course, these values are just estimates pulled from the graph. There is a large gap between the $1,500 class and the highest data value. This seems to say that one student is paying a great deal more than everyone else. This value may be an outlier. An outlier is a data value that is far from the rest of the values. It may be an unusual value or a mistake; either way, it is a data value that should be investigated. In this case, the student lives in a very expensive part of town, so the value is not a mistake and is just very unusual. There are other aspects that can be discussed, but first some other concepts need to be introduced.
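The class width, limits, and frequencies follow directly from the range, so the by-hand steps above translate to a few lines of code. A minimal Python sketch (our own illustration) reproducing the rent classes:

```python
import math

rents = [1500, 1350, 350, 1200, 850, 900, 1500, 1150, 1500, 900, 1400, 1100,
         1250, 600, 610, 960, 890, 1325, 900, 800, 2550, 495, 1200, 690]

k = 7                                                # chosen number of classes
width = math.ceil((max(rents) - min(rents)) / k)     # 2200/7 -> 315
# Caution: the text says to round UP to the NEXT integer even when
# range/k is already an integer; math.ceil alone would not do that.

lowers = [min(rents) + i * width for i in range(k)]  # 350, 665, 980, ...
limits = [(lo, lo + width - 1) for lo in lowers]     # (350, 664), (665, 979), ...

freqs = [sum(lo <= r <= hi for r in rents) for lo, hi in limits]
print(limits)
print(freqs)   # [4, 8, 5, 6, 0, 0, 1]
```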
2.3.3 Ogive

The line graph for the cumulative or cumulative relative frequency is called an ogive (oh-jyve). To create an ogive, first create a scale on both the horizontal and vertical axes that will fit the data. Then plot the points of the upper class boundary versus the cumulative (or cumulative relative) frequency. Make sure you include the point pairing the lowest class boundary with a cumulative frequency of zero. Then just connect the dots. The steeper the line, the more accumulation occurs across the corresponding class; if the line is flat, the frequency for that class is zero. The ogive graph will always run uphill from left to right and should never dip below a previous point. Figure 2-14 is an example of an ogive. The name ogive comes from an uphill arch shape used in architecture; here is an example of an ogive in the East Hall staircase at PSU. Figure 2-14

Make an ogive for the following random sample of rent prices students pay, with the corresponding cumulative frequency distribution table.

1500 1350 350 1200 850 900 1250 600 610 960 890 1325
1500 1150 1500 900 1400 1100 900 800 2550 495 1200 690

Class Limits   Frequency   Cumulative Frequency
350 - 664      4           4
665 - 979      8           12
980 - 1294     5           17
1295 - 1609    6           23
1610 - 1924    0           23
1925 - 2239    0           23
2240 - 2554    1           24

Solution
Find the class boundaries, 349.5, 664.5, …, and use these for the tick mark labels on the horizontal x-axis, the same as what was used for the histogram. The y-axis uses the cumulative frequencies. The largest cumulative frequency is 24, and every third number is marked on the y-axis. See Figure 2-15 and Figure 2-16. By hand: Figure 2-15. Using software: Figure 2-16.

The usefulness of an ogive is that it allows the reader to find out how many students pay less than a certain value, and what amount of monthly rent a certain number of students pay less than. For instance, if you want to know how many students pay less than $1,500 a month in rent, you can go up from $1,500 until you hit the line and then go left to the cumulative frequency axis to see what cumulative frequency corresponds to $1,500. It appears that around 21 students pay less than $1,500. See Figure 2-17. If you want to know the cost of rent that 15 students pay less than, start at 15 on the vertical axis, go right to the line, and then go down to the horizontal axis to the monthly rent of about $1,200. You can see that about 15 students pay less than about $1,200 a month. See Figure 2-18. Figure 2-17 Figure 2-18

If you graph the cumulative relative frequency, then you can find out what percentage is below a certain number instead of just the number of people below a certain value.
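An ogive is just a line plot of the class boundaries against the running totals, including the zero point at the lowest boundary. A minimal matplotlib sketch (our own illustration) for the rent table above:

```python
import matplotlib.pyplot as plt

boundaries = [349.5, 664.5, 979.5, 1294.5, 1609.5, 1924.5, 2239.5, 2554.5]
cumulative = [0, 4, 12, 17, 23, 23, 23, 24]  # starts at 0 for the lowest boundary

plt.plot(boundaries, cumulative, marker="o")
plt.xlabel("Monthly Rent ($)")
plt.ylabel("Cumulative Frequency")
plt.title("Ogive of Student Rent Prices")
plt.show()
```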
Using the sample of 35 ages, make an ogive.

46 47 49 25 46 22 42 24 46 40 39 27 25 30 33 27 46 21 29 20 26 25 25 26 35 49 33 26 32 31 39 30 39 29 26

Solution
Excel will plot an ogive over a histogram as one of its options, but the scale is harder to read. Type the data in any order into column A and the bins in order into column B. Then select the Data tab, select Data Analysis, select Histogram, then select OK. In the dialogue box, click into the Input Range box, then use your mouse and highlight the ages including the label. Then click into the Bin Range box and use your mouse to highlight the bins including the label. Select the box for Labels only if you included the labels in your ranges. You can have your output default to a new worksheet, or select the circle to the left of Output Range, click into the box to the right of Output Range, and then select one blank cell on your spreadsheet where you want the top left-hand corner of your table and graph to start. Then check the boxes next to Cumulative Percentage and Chart Output. Then select OK. A histogram needs to have bars that touch, which is not the default in Excel. To get the bars to touch, right-click on one of the blue bars, select Format Data Series, and slide the Gap Width to 0%. Excel produces both a frequency table and a histogram. The table has the frequencies and the cumulative relative frequencies.

Bin    Frequency   Cumulative %
24     4           11.43%
29     12          45.71%
34     6           62.86%
39     4           74.29%
44     2           80.00%
49     7           100.00%
More   0           100.00%

The orange line is the ogive, and its vertical axis is on the right side.

2.3.4 Pie Chart

You cannot make stem-and-leaf plots, histograms, ogives or time series graphs for qualitative data. Instead, we use bar or pie charts for a qualitative variable; these list the categories and give either the frequency (count) or the relative frequency (percent) of individual items that fall into each category. A pie chart or pie graph is a very common and easy-to-construct graph for qualitative data. A pie chart takes a circle and divides the circle into pie-shaped wedges that are proportional in size to the relative frequency. There are 360 degrees in a full circle, and the relative frequency is just the percentage as a decimal, so to find the angle for each pie wedge, multiply the relative frequency for each category by 360 degrees. Figure 2-19 is an example of a pie chart. Figure 2-19

Use Excel to make a pie chart for the following frequency distribution of marital status.

Marital Status   Frequency
Divorced (D)     16
Married (M)      44
Single (S)       23
Widowed (W)      9

Solution
In Excel, type in the table as it appears, then use your mouse and highlight the entire table. Select the Insert tab, then select the pie graph icon, then select the first option under the 2-D Pie. Once you have the pie chart, you can select the Design window to adjust the graph to your liking. It is good practice to include the class label and the percent. The percents should add up to 100%, although with rounding the sum can sometimes be off by 1%. You can also click on the green plus sign to the right of the graph to add different formatting options, or the paintbrush to change colors. Here is the finished pie graph.
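The wedge angles come straight from the relative frequencies. A minimal Python sketch (our own illustration) for the marital status table:

```python
freqs = {"Divorced": 16, "Married": 44, "Single": 23, "Widowed": 9}
n = sum(freqs.values())   # 92 people in the sample

for status, f in freqs.items():
    rel = f / n           # relative frequency
    angle = rel * 360     # degrees in that pie wedge
    print(f"{status:<9} {rel:6.1%} {angle:7.1f} degrees")
# Married is 44/92, about 47.8% of the pie: a wedge of roughly 172 degrees.
```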
2.3.5 Bar Graph

A bar graph (column graph or bar chart) is another graph of a distribution for qualitative data. Bar graphs consist of frequencies on one axis and categories on the other axis. You then draw rectangles for each category with a height (if frequency is on the vertical axis) or length (if frequency is on the horizontal axis) equal to the frequency. All of the rectangles should be the same width, and there should be equally wide gaps between the bars. Figure 2-20 is an example of a bar chart. Figure 2-20

Some key features of a bar graph:
• Equal spacing on each axis
• Bars are the same width
• Label each axis and title the graph
• Show the scale on the frequency axis
• Label the categories on the category axis
• The bars do not touch

You can draw a bar graph with frequency or relative frequency on the vertical axis. The relative frequency is useful when you want to compare two samples with different sample sizes; the relative frequency graph and the frequency graph should look the same except for the scaling on the frequency axis.

Use Excel to make a bar chart for the following frequency distribution of marital status.

Marital Status   Frequency
Divorced (D)     16
Married (M)      44
Single (S)       23
Widowed (W)      9

Solution
In Excel, type in the table as it appears, then use your mouse and highlight the entire table. Following similar steps as for the pie chart, but this time choosing the column graph option, we get the following bar graph for marital status. Then format the graph as needed. The completed bar graph is below.

Pie charts are useful for comparing the sizes of categories, and bar charts show similar information. It really is a personal preference and depends on what information you are trying to address. However, pie charts are best when you have only a few categories and the data can be expressed as a percentage. The data do not have to be percentages to draw the pie chart, but if a data value can fit into multiple categories, you cannot use a pie chart to display the data. As an example, if you ask people for their favorite national park and have them pick their top three choices, then the total number of answers can add up to more than 100% of the people surveyed. Therefore, you cannot use a pie chart to display the favorite national park, but a bar chart would be appropriate.

2.3.6 Pareto Chart

A Pareto (pronounced pə-RAY-toh) chart is a bar graph whose bars are arranged from the most frequent class to the least frequent class. The advantage of Pareto charts is that you can visually compare the most popular answers with the least popular. This is especially useful in business applications, where you want to know what services your customers like the most, what processes result in more injuries, which issues employees find most important, and other types of questions where you are interested in comparing frequencies. Figure 2-21 is an example of a Pareto chart. Figure 2-21

Use Excel to make a Pareto chart for the following frequency distribution of marital status.

Marital Status   Frequency
Divorced (D)     16
Married (M)      44
Single (S)       23
Widowed (W)      9

Solution
In Excel, type in the table as it appears, then use your mouse and highlight the entire table. Select the Home tab, then select Sort & Filter, then select Custom Sort. Change the Sort by to Frequency and the Order to Largest to Smallest and click OK. This will automatically arrange the bars in your bar chart from largest to smallest. Many Pareto charts have the bars touching; you can right-click on the bars, choose Format Data Series, and change the Gap Width to zero. Here is the completed Pareto chart.

There are many other types of graphs used for qualitative data, and there are software packages that will create most of them. Which graph is best depends on your data and what you want to display.

2.3.7 Stacked Column Chart

The next example illustrates one of these types, known as a stacked column chart. Stacked column (bar) charts are used when we need to show the ratio between a total and its parts. Each color shows a different series as a part of the same single bar, where the entire bar is used as the total.

In the Wii Fit game, you can do four different types of exercises: yoga, strength, aerobic, and balance. The Wii system keeps track of how many minutes you spend on each of the exercises every day. The following graph is the data for Niko over a one-week period. Discuss any interpretations you can infer from the graph.
Figure 2-22

Solution
It appears that Niko spends more time on yoga than on any other exercise on any given day, and seems to spend the least time on aerobic exercise. There are several days when the amounts of time spent on the different exercises are almost equal. The usefulness of a stacked column chart is the ability to compare several different categories over another variable, in this case time. This allows a person to interpret the data with a little more ease.

Data scientists write programs that use statistics to filter spam from incoming email messages. By noting specific characteristics of an email, a data scientist may be able to classify some emails as spam or not spam with high accuracy. One of those characteristics is whether the email contains no numbers, small numbers, or big numbers. Make a stacked column chart with the data in the table. Which type of email is more likely to be spam?

           None   Small   Big    Total
Spam       149    168     50     367
Not Spam   400    2659    495    3554
Total      549    2827    545    3921

Example from OpenIntro Statistics.

Solution
Type the summarized table into Excel. Highlight just the inside of the table: the row labels, column labels, and data (do not include the totals or the Number label). Select the Insert tab, and then select the 2nd option under the column chart. Add a legend and labels, and change colors for clarity. The completed stacked bar graph is shown in Figure 2-23. Figure 2-23

Emails with no numbers have a relatively high rate of spam (149/549 = 0.271), about 27%. On the other hand, less than 10% of emails with small numbers (168/2827 = 0.059) or big numbers (50/545 = 0.092) are spam.

2.3.8 Multiple or Side-by-Side Bar Graph

A multiple bar graph, also called a side-by-side bar graph, allows comparisons of several different categories over another variable.

The percentages of people who use certain contraceptives in Central American countries are displayed in the graph below. Use the graph to find the type of contraceptive that is most used in Costa Rica and in El Salvador. Figure 2-24

Solution
This side-by-side bar graph allows you to quickly see the differences between the countries. For instance, the birth control pill is used most often in Costa Rica, while condoms are most used in El Salvador.

Make a side-by-side bar graph for the following medal count for the 2018 Olympics.

                Gold   Silver   Bronze
Norway          14     14       11
Germany         14     10       7
Canada          11     8        10
United States   9      8        6

Solution
Copy the table over to Excel. Highlight the entire table, then use similar steps as for the regular bar graph. Add labels and change the colors. The completed graph is shown below.

2.3.9 Time-Series Plot

A time-series plot is a graph showing quantitative data measurements in chronological order. For example, a time-series plot can be used to show profits over the last 5 years. To create a time-series plot, time always goes on the horizontal axis, and the frequency or relative frequency goes on the vertical axis. Then plot the ordered pairs and connect the dots. A time series allows you to see trends over time. Caution: the trend may not continue. Just because you see an increase does not mean the increase will continue forever. As an example, prior to 2007, many people noticed that housing prices were increasing, and the belief at the time was that housing prices would continue to increase. However, the housing bubble burst in 2007, and many houses lost value during the recession.
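Plotting a time series in code is the same connect-the-dots idea. A minimal matplotlib sketch (our own illustration; the profit figures are made-up values used purely to show the mechanics):

```python
import matplotlib.pyplot as plt

# Hypothetical annual profits (in $1,000s), for illustration only.
years = [2017, 2018, 2019, 2020, 2021]
profits = [120, 135, 150, 110, 160]

plt.plot(years, profits, marker="o")  # time always goes on the horizontal axis
plt.xticks(years)
plt.xlabel("Year")
plt.ylabel("Profit ($1,000s)")
plt.title("Annual Profit Over Time")
plt.show()
```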
2.3.10 Scatter Plot

Sometimes you have two quantitative variables and you want to see if they are related in any way. A scatter plot helps you see what the relationship may look like. A scatter plot is just a plot of the ordered pairs.

• When the dots increase from left to right, there is a positive relationship between the two quantitative variables.
• If the dots decrease from left to right, there is a negative relationship.
• If there is no apparent pattern going up or down, then we say there is no relationship between the two variables.

Is there any relationship between elevation and high temperature on a given day? The following data are the high temperatures at various cities on a single day and the elevation of each city. Make a scatter plot to see what type of relationship exists.

Elevation (in feet)   7000   4000   6000   3000   7000   4500   5000
Temperature (°F)      50     60     48     70     55     55     60

Solution
Excel: Type the data into two columns next to each other. It is important not to have a blank column between the points or Excel may give you an error message. Once you type your data into columns A and B, use your mouse and highlight all the data including the labels. Select the Insert tab, and then select the first box under Scatter. Add appropriate labels. The completed scatter plot is shown below.

TI-84: First, enter the data into lists 1 and 2. Press [STAT]; the first option is already highlighted (1:Edit), so you can either press [ENTER] or [1]. Type in the data, pressing [ENTER] after each value. For x-y data pairs, enter all x-values in one list and all corresponding y-values in a second list. Press [2nd] [QUIT] to return to the home screen. Make sure you turn off other stat plots or graphs in the y= menu. Press [2nd] then the [y=] button. Select the first plot. Highlight On and press [ENTER] so that On is highlighted. Arrow down to Type and highlight the first option, which looks like a scatter plot. Make sure your x and y lists are using L1 and L2. Select [ZOOM], arrow down to ZoomStat, and press [ENTER]. You will get the following scatter plot. Select [TRACE] and use your arrow keys to see the values at different points.

TI-89: Press [♦] then [F1] (to get Y=) and clear any equations that are in the y-editor. Open the Stats/List Editor: press [APPS], select FlashApps, then press [ENTER]. Highlight Stats/List Editor, then press [ENTER]. Press [ENTER] again to select the main folder. Type in the data, pressing [ENTER] after each value; enter all x-values in one list and all corresponding y-values in a second list. In the Stats/List Editor, select [F2] for the Plots menu. Use the cursor keys to highlight 1:Plot Setup. Make sure that the other graphs are turned off by pressing the [F4] button to remove the check marks. Under "Plot 1" press [F1] Define. In the "Plot Type" menu, select "Scatter." Move the cursor to the "x" space, press [2nd] [Var-Link], scroll down to list1, and then press [ENTER]. This will put the list name in the dialogue box. Do the same for the y values, but this time choose list2. Press [ENTER] twice and you will be returned to the Plot Setup menu. Press [F5] ZoomData to display the graph. Press [F3] Trace and use the arrow keys to scroll along the different points.

Interpreting the scatter plot: The graph indicates a linear relationship between temperature and elevation. If you were to hold a pencil up to cover the dots, you would see that the dots roughly follow a thick line sloping downhill. It also appears to be a negative relationship; thus, as elevation increases, the temperature decreases. Figure 2-25
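For comparison, here is a minimal matplotlib sketch (our own illustration) of the same elevation and temperature data:

```python
import matplotlib.pyplot as plt

elevation = [7000, 4000, 6000, 3000, 7000, 4500, 5000]   # feet
temperature = [50, 60, 48, 70, 55, 55, 60]               # degrees F

plt.scatter(elevation, temperature)
plt.xlabel("Elevation (feet)")
plt.ylabel("High Temperature (°F)")
plt.title("High Temperature vs. Elevation")
plt.show()
# The points drift downward from left to right: a negative relationship.
```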
Be careful with the vertical axis of both time-series and scatter plots. If the axis does not start at zero, the slope of the line can be exaggerated to show more or less of an increase than there really is. This is done in politics and advertising to manipulate the data. For example, if we change the vertical axis of temperature to go between 45°F and 75°F, we get the scatter plot in Figure 2-26. We have the same arrangement of dots, but the slope looks much steeper over the 30° range. Figure 2-26

2.3.11 Misleading Graphs

One thing to be aware of as a consumer is that data in the media may be represented in misleading graphs. Misleading graphs not only misrepresent the data, they can lead the reader to false conclusions. There are many ways that graphs can be misleading. One way is to use picture graphs or 3D graphs that exaggerate differences; these should be used with caution. Leaving off units and labels can result in a misleading graph. Another, more common, example is to rescale or reverse the vertical axis to try to show a large difference between categories; not starting the vertical axis at zero shows a more dramatic rate of change. Other ways that graphs can be misleading include putting the horizontal axis labels out of time sequence, using inappropriate graph types, and not showing the base population.

What is misleading about the following graph? An ad for a new diet pill shows the following time-series plot for someone who has lost weight over a 5-month period.

Solution
If you do not start the vertical axis at zero, then a change can look much more dramatic than it really is. Notice the decrease in weight looks much larger in Figure 2-27. The graph in Figure 2-28 has the vertical axis starting at zero; notice that over the 5 months the weight appears to be decreasing, but it does not look like there is a large decrease. Figure 2-27 Figure 2-28

What is misleading about the graph in Figure 2-29? https://www.mediamatters.org/blog/2014/03/31/dishonest-fox-charts-obamacare-enrollment-editi/198679. Figure 2-29

Solution
The y-axis scale is different for each bar and there are no units on the axis. On the first bar each tick mark represents 2 billion, while on the second bar each tick mark represents less than 1 billion. This exaggerates the difference. If the bars used consistent scaling, as in Figure 2-30, there would not be such an extreme difference between the heights of the bars. https://www.mediamatters.org/blog/2014/03/31/dishonest-fox-charts-obamacare-enrollment-editi/198679. Figure 2-30

What is misleading about the graph in Figure 2-31? https://www.livescience.com/45083-misleading-gun-death-chart.html Figure 2-31

Solution
The graph has the y-axis reversed. What looks like an increasing trend line is really decreasing when you correct the y-axis. The red background is also an effect to raise alarm, almost like a curtain of blood.

What is misleading about the graph shown in a Lanacane commercial in May 2012, shown in Figure 2-32? Retrieved 7/2/2021 from https://youtu.be/I0DapkQ-c1I?t=17 Figure 2-32

Solution
It appears that Lanacane is better than regular hydrocortisone cream at relieving itching. However, note that there are no units or labels on the axes.

What is misleading about the graph published on Georgia's Department of Public Health website in May 2020, shown in Figure 2-33?
Retrieved 7/3/2021 from https://www.vox.com/covid-19-coronav...ning-reopening Figure 2-33

Solution
There are two misleading items in this graph. The horizontal axis is time, yet the dates are out of sequence, starting with April 28, April 27, April 29, May 1, April 30, May 4, May 6, May 5, May 2, May 7, April 26, May 3, May 8, May 9. The first date, April 26, is presented almost at the end of the axis. At first glance, the graph would deceive viewers into thinking that cases were going down over time. A Pareto-style chart should never be used for time series data. The second misleading item is the graph's title and the lack of a label on the y-axis. What does the height of each bar represent? Is the height the number of cases for each county, or is the height the number of deaths and hospitalizations? The website later corrected the graphic, as shown in Figure 2-34. Retrieved 7/3/2021 from https://www.vox.com/covid-19-coronav...ning-reopening Figure 2-34

Large data sets need to be summarized in order to make sense of all the information. The distribution of data can be represented with a table or a graph. It is the role of the researcher or data scientist to make accurate graphical representations that help make sense of the data in context. Tables and graphs can summarize data, but they alone are insufficient. In the next chapter we will look at describing data numerically.
Chapter 2 Exercises

1. Which types of graphs are used for quantitative data? Select all that apply.
a) Ogive b) Pie Chart c) Histogram d) Stem-and-Leaf Plot e) Bar Graph

2. Which types of graphs are used for qualitative data? Select all that apply.
a) Pareto Chart b) Pie Chart c) Dotplot d) Stem-and-Leaf Plot e) Bar Graph f) Time Series Plot

3. The bars for a histogram should always touch, true or false?

4. A sample of rents found the smallest rent to be $600 and the largest rent to be $2,500. What is the recommended class width for a frequency table with 7 classes?

5. An instructor had the following grades recorded for an exam.

96 66 65 82 85 82 87 76 80 85 83 69 79
70 83 63 81 94 71 83 99 75 73 83 86

a) Create a stem-and-leaf plot.
b) Complete the following table.

Class     Frequency   Cumulative Frequency   Relative Frequency   Cumulative Relative Frequency
60 – 69
70 – 79
80 – 89
90 – 99
Total     25

c) What should the relative frequencies always add up to?
d) What should the last value always be in the cumulative frequency column?
e) What is the frequency for students that were in the C range of 70-79?
f) What is the relative frequency for students that were in the C range of 70-79?
g) Which is the modal class?
h) Which class has a relative frequency of 12%?
i) What is the cumulative frequency for students that were in the B range of 80-89?
j) Which class has a cumulative relative frequency of 40%?

6. Eyeglassomatic manufactures eyeglasses for different retailers. The number of lenses for different activities is in the table below.

Activity           Grind    Multi-coat   Assemble   Make Frames   Receive Finished   Unknown
Number of lenses   18,872   12,105       4,333      25,880        26,991             1,508

Grind means that they ground the lenses and put them in frames, multi-coat means that they put tinting or scratch-resistant coatings on lenses and then put them in frames, assemble means that they receive frames and lenses from other sources and put them together, make frames means that they make the frames and put lenses in from other sources, receive finished means that they received glasses from another source, and unknown means they do not know where the lenses came from.
a) Make a relative frequency table for the data.
b) How many of the eyeglasses did Eyeglassomatic assemble?
c) How many of the eyeglasses did Eyeglassomatic manufacture all together?
d) What is the relative frequency for the assemble category?
e) What percent of eyeglasses did Eyeglassomatic grind?

7. The following table is from a sample of five hundred homes in Oregon that were asked the primary source of heating in their residential homes.

Type of Heat   Percent
Electricity    33
Heating Oil    4
Natural Gas    50
Firewood       8
Other          5

a) How many of the households heat their home with firewood?
b) What percent of households heat their home with natural gas?

8. The following table is from a sample of 50 undergraduate PSU students.

Class       Relative Frequency Percent
Freshman    18
Sophomore   13
Junior      23
Senior      46

a) What percent of students are below the senior class?
b) What is the cumulative frequency of the junior class?

9. A sample of heights (in cm) of 24 people is recorded below. Make a stem-and-leaf plot.

167 201 170 185 175 162 182 186 172 173 188 154
185 178 177 184 178 165 169 171 185 178 175 176

10. The stem-and-leaf plot below is for pulse rates before and after exercise.
a) Was the pulse rate higher on average before or after exercise?
b) What was the fastest pulse rate of the before-exercise group?
c) What was the slowest pulse rate of the after-exercise group?

11.
The following data represent the percent change in tuition levels at public, four-year colleges (inflation adjusted) from 2008 to 2013 (Weissmann, 2013). Below is the frequency distribution and histogram.

Class Limits   Class Midpoint   Frequency   Relative Frequency
2.2 – 11.7     6.95             6           0.12
11.8 – 21.3    16.55            20          0.40
21.4 – 30.9    26.15            11          0.22
31.0 – 40.5    35.75            4           0.08
40.6 – 50.1    45.35            2           0.04
50.2 – 59.7    54.95            2           0.04
59.8 – 69.3    64.55            3           0.06
69.4 – 78.9    74.15            2           0.04

a) How many colleges were sampled?
b) What was the approximate value of the highest change in tuition?
c) What was the approximate value of the most frequent change in tuition?

12. The following data and graph represent the grades in a statistics course.

Class Limits   Class Midpoint   Frequency   Relative Frequency
40 – 49.9      45               2           0.08
50 – 59.9      55               1           0.04
60 – 69.9      65               7           0.28
70 – 79.9      75               6           0.24
80 – 89.9      85               7           0.28
90 – 99.9      95               2           0.08

a) How many students were in the class?
b) What were the approximate lowest and highest grades in the class?
c) What percent of students had a passing grade of 70% or higher?

13. The following graph represents a random sample of car models driven by college students. What percent of college students drove a Nissan?

14. The following graph and data represent the percent change in tuition levels at public, four-year colleges (inflation adjusted) from 2008 to 2013 (Weissmann, 2013).

Class Limits   Cumulative Frequency
2.2 – 11.7     6
11.8 – 21.3    26
21.4 – 30.9    37
31.0 – 40.5    41
40.6 – 50.1    43
50.2 – 59.7    45
59.8 – 69.3    48
69.4 – 78.9    50

a) How many colleges were sampled?
b) What class of percent changes had the most colleges in that range?
c) How many colleges had a percent change below a 50.2% change in tuition?
d) What is the cumulative relative frequency for the 50.2% – 59.7% change in tuition class?

15. Eyeglassomatic manufactures eyeglasses for different retailers. The number of lenses for different activities is in the table below.

Activity           Grind    Multi-coat   Assemble   Make Frames   Receive Finished   Unknown
Number of lenses   18,872   12,105       4,333      25,880        26,991             1,508

a) Make a pie chart.
b) Make a bar chart.
c) Make a Pareto chart.

16. The daily sales using different sales strategies are shown in the graph below.
a) Which strategy generated the most sales?
b) Was there a particular strategy that worked well for one product, but not for another product?

17. The following graph represents a random sample of car models driven by college students. What was the most common car model?

18. The Australian Institute of Criminology gathered data on the number of deaths (per 100,000 people) due to firearms during the period 1983 to 1997. The data are in the table below. Create a time-series plot of the data. What is the overall trend over time? (2013, September 26). Retrieved from http://www.statsci.org/data/oz/firearms.html.

Year   Rate
1983   4.31
1984   4.42
1985   4.52
1986   4.35
1987   4.39
1988   4.21
1989   3.4
1990   3.61
1991   3.67
1992   3.61
1993   2.98
1994   2.95
1995   2.72
1996   2.95
1997   2.3

19. A scatter plot for a random sample of 24 countries shows the average life expectancy and the average number of births per woman (fertility rate). What is the approximate fertility rate for a country that has a life expectancy of 76 years? (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SP.DYN.TFRT.IN.

20. The Australian Institute of Criminology gathered data on the number of deaths (per 100,000 people) due to firearms during the period 1983 to 1997. The time-series plot is below. What year had the highest rate of deaths? (2013, September 26).
Retrieved from http://www.statsci.org/data/oz/firearms.html.

21. A survey by the Pew Research Center was conducted in 16 countries among 20,132 respondents from April 4 to May 29, 2016, before the United Kingdom's Brexit referendum to exit the EU. The following is a time series graph of the proportion of survey respondents, by country, who responded that the current economic situation in their country was good. http://www.pewglobal.org/2016/08/09/views-on-national-economies-mixed-as-many-countries-continue-to-struggle/
a) Which country had the most favorable outlook on their country's economic situation in 2010?
b) Which country had the least favorable outlook on their country's economic situation in 2016?

22. Why is this a misleading or poor graph?

23. Why is this a misleading or poor graph?

24. Why is this a misleading or poor graph? United States unemployment. (2013, October 14). Retrieved from http://www.tradingeconomics.com/united-states/unemployment-rate

25. The Australian Institute of Criminology gathered data on the number of deaths (per 100,000 people) due to firearms during the period 1983 to 1997. Why is this a misleading or poor graph? (2013, September 26). Retrieved from http://www.statsci.org/data/oz/firearms.html.

26. Why is this a misleading or poor graph?

Answers to Odd-Numbered Exercises

1) a, c, d
3) True
5) a) $\begin{array}{l|llllllllllll} 6 & 3 & 5 & 6 & 9 \\ 7 & 0 & 1 & 3 & 5 & 6 & 9 \\ 8 & 0 & 1 & 2 & 2 & 3 & 3 & 3 & 3 & 5 & 5 & 6 & 7 \\ 9 & 4 & 6 & 9 \end{array}$
b)
c) 1
d) The sample size n.
e) 6
f) 0.24
g) 80-89
h) 90-99
i) 0.88
j) 70-79
7) a) 40 b) 50%
9) $\begin{array}{l|lllllllllll} 15 & 4 \\ 16 & 2 & 5 & 7 & 9 \\ 17 & 0 & 1 & 2 & 3 & 5 & 5 & 6 & 7 & 8 & 8 & 8 \\ 18 & 2 & 4 & 5 & 5 & 5 & 6 & 8 \\ 19 & \\ 20 & 1 \end{array}$
11) a) 50 b) 78 c) 16.55
13) 20%
15) a) b) c)
17) Chevy & Toyota
19) 1.5
21) a) Poland b) Greece
23) There are no labels on either axis and no category labels.
25) The vertical axis is reversed, making the graph appear to increase when it is actually decreasing. There are no labels on either axis.
• 3.1: Measures of Center. Both graphical and numerical methods of summarizing data make up the branch of statistics known as descriptive statistics. Later, descriptive statistics will be used to estimate and make inferences about population parameters using methods that are part of the branch called inferential statistics. This section introduces numerical measurements to describe sample data.
• 3.2: Measures of Spread. Variability describes how the data are spread out. If the data are very close to each other, then there is low variability; if the data are very spread out, then there is high variability. How do you measure variability? It would be good to have a number that measures it. This section describes some of the different measures of variability, also known as variation.
• 3.3: Measures of Placement
• 3.4: Chapter 3 Formulas
• 3.5: Chapter 3 Exercises

03: Descriptive Statistics

Both graphical and numerical methods of summarizing data make up the branch of statistics known as descriptive statistics. Later, descriptive statistics will be used to estimate and make inferences about population parameters using methods that are part of the branch called inferential statistics. This section introduces numerical measurements to describe sample data, focusing on measures of central tendency. Many times, you are asking what to expect "on average." When you pick a career, you would probably ask how much you can expect to earn in that field. If you are trying to buy a home, you might ask how much homes are selling for in your area. If you are planting vegetables in your garden, you might want to know how long it will be until you can harvest. These questions, and many more, can be answered by knowing the center of the data set. The three most common measures of the "center" of the data are called the mode, mean, and median.

3.1.1 Mode

To find the mode, you count how often each data value occurs and then determine which data value occurs most often. The mode is the data value that occurs most frequently in the data. There may not be a mode at all, or you may have more than one mode. If two values tie for the greatest number of occurrences, then both values are modes and the data is called bimodal (two modes). If every data point occurs the same number of times, there is no mode. If more than two numbers appear the most times, then we usually say there is no mode. When looking at grouped data in a frequency distribution or a histogram, the class with the largest frequency is called the modal class.

Below is a dotplot showing the heights of some 3-year-old children in cm, where we would like to answer the question, "How tall are 3-year-olds?" Figure 3-1

From the graph, we can see that the most frequent value is 95 cm. This is not exactly the middle of the distribution, but it is the most common height and is close to the middle in this case. We call this most frequent value the mode. For larger data sets, use software to find the mode, or at least sort the data so that you can see groupings of numbers. Excel reports a mode at the first repeated value, so be careful in Excel with bimodal data or data with many repeated values that would really have no mode at all. Note that zero may be the most frequent value in a data set; a mode of 0 is not the same as "no mode" in the data set. The mode is the observation that occurs most often.

• Example 3-1: –5 4 8 3 4 2 0; mode = 4
• Example 3-2: 3 –6 0 1 –2 1 0 5 0; mode = 0
• Example 3-3: 18 25 15 32 10 27; no mode (Excel reports N/A)
• Example 3-4: 15 23 18 15 24 23 17; modes = 15, 23 (bimodal)
• Example 3-5: 100 125 100 125 130 140 130 140; no mode (Excel reports 100)
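In code, the mode falls out of a frequency count. A minimal Python sketch (our own illustration) that reproduces the examples above with the standard library:

```python
from statistics import multimode

samples = [
    [-5, 4, 8, 3, 4, 2, 0],                    # Example 3-1
    [3, -6, 0, 1, -2, 1, 0, 5, 0],             # Example 3-2
    [18, 25, 15, 32, 10, 27],                  # Example 3-3
    [15, 23, 18, 15, 24, 23, 17],              # Example 3-4
    [100, 125, 100, 125, 130, 140, 130, 140],  # Example 3-5
]

for data in samples:
    modes = multimode(data)            # every value tied for most frequent
    if len(modes) == len(set(data)):   # all values equally frequent: no mode
        print("no mode")
    else:
        print("mode(s):", modes)
# mode(s): [4] / mode(s): [0] / no mode / mode(s): [15, 23] / no mode
```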
• Example 3-1: -5 4 8 3 4 2 0, mode = 4
• Example 3-2: 3 -6 0 1 -2 1 0 5 0, mode = 0
• Example 3-3: 18 25 15 32 10 27, no mode (Excel writes N/A)
• Example 3-4: 15 23 18 15 24 23 17, modes = 15, 23 (bimodal)
• Example 3-5: 100 125 100 125 130 140 130 140, no mode (Excel gives 100)

Summation Notation

Throughout this course, we will be using summation notation, also called sigma notation. The capital Greek letter Σ ("sigma") means to add. For example, Σx means to sum up all of the x values, where x is the variable name.

A random sample of households had the following number of children living at home: 4, –3, 2, 1, and 3. Calculate Σx.

Solution: Let $x_1 = 4$, $x_2 = -3$, $x_3 = 2$, $x_4 = 1$, $x_5 = 3$. Start with the first value i = 1 up to the nth value i = 5 to get $\sum_{i=1}^{n} x_{i} = 4 + (-3) + 2 + 1 + 3 = 7$. To make things simpler we will drop the subscripts and write $\sum_{i=1}^{n} x_{i}$ as Σx.

The order of operations is important in summation notation. For example, $\sum x^{2} = 4^{2} + (-3)^{2} + 2^{2} + 1^{2} + 3^{2} = 39$. When we insert parentheses, $\left(\sum x\right)^{2} = (4 + (-3) + 2 + 1 + 3)^{2} = 7^{2} = 49$. Note that $\sum x^{2} \neq \left(\sum x\right)^{2}$.

"'One of the interesting things about space,' Arthur heard Slartibartfast saying to a large and voluminous creature who looked like someone losing a fight with a pink duvet and was gazing raptly at the old man's deep eyes and silver beard, 'is how dull it is.' 'Dull?' said the creature, and blinked her rather wrinkled and bloodshot eyes. 'Yes,' said Slartibartfast, 'staggeringly dull. Bewilderingly so. You see, there's so much of it and so little in it. Would you like me to quote some statistics?' 'Er, well…' 'Please, I would like to. They, too, are quite sensationally dull.'" (Adams, 2002)

3.1.2 Mean

The mean is the arithmetic average of the numbers. This is the center that most people call the average.

Distinguishing between a population and a sample is very important in statistics. We frequently use a representative sample to generalize about a population. A statistic is any characteristic or measure from a sample. A parameter is any characteristic or measure from a population. We use sample statistics to make inferences about population parameters.

The sample mean $\bar{x}$ (pronounced "x bar") of a sample of n observations $x_1, x_2, x_3, \ldots, x_n$ taken from a population is given by the formula: $\bar{x} = \frac{\sum x}{n} = \frac{x_1 + x_2 + x_3 + \cdots + x_n}{n}$.

The population mean μ (pronounced "mu"), the average of the entire population, is given by the formula: $\mu = \frac{\sum x}{N} = \frac{x_1 + x_2 + x_3 + \cdots + x_N}{N}$.

In most cases, you cannot find the population parameter, so you use the sample statistic to estimate the population parameter. Since μ cannot be calculated in most situations, the value of $\bar{x}$ is used to estimate μ. You should memorize the symbol μ and what it represents for future reference.

Find the mean for the following sample of house prices (in $1,000s): 325, 375, 385, 395, 420, and 825.

Solution: Before starting any mathematics problem, it is always a good idea to define the unknown in the problem. In this case, you want to define the variable. The symbol for the variable is x. The variable is x = price of a house in $1,000s.

$\bar{x} = \frac{\sum x}{n} = \frac{325+375+385+395+420+825}{6} = 454.1\overline{6}$

The sample mean house price is about $454,167.
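The order-of-operations distinction between $\sum x^{2}$ and $\left(\sum x\right)^{2}$, and the sample mean formula, can be checked with a few lines of Python. This is a quick sketch, not part of the original text, using the household data and house prices above:

# Household data: number of children living at home
x = [4, -3, 2, 1, 3]
print(sum(x))                # Σx = 7
print(sum(v**2 for v in x))  # Σx² = 39
print(sum(x)**2)             # (Σx)² = 49

# Sample mean of the house prices (in $1,000s)
prices = [325, 375, 385, 395, 420, 825]
print(sum(prices) / len(prices))  # x̄ = 454.1666...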
3.1.4 Median

The median is the value in the middle of the ordered data: half of the observations are below it and half are above. For an odd sample size, the median is the middle value; for an even sample size, it is the average of the two middle values. Find the median for the sample of house prices (in $1,000s): 325, 375, 385, 395, 420, and 825.

Solution: The data is already ordered from smallest to largest. The sample size is even, so take the average of the two middle values: $\frac{385+395}{2} = 390$. The median house price is $390,000.

We can use technology to find the median. Directions for the TI calculators are in the next section. In Excel the median is found using the cell function MEDIAN(array). For this example, we can type the data into column A and then in a blank cell type =MEDIAN(A1:A6).

Recall that the sample mean house price is $454,167. Note that the median is much lower than the mean for this example. The observation of 825 is an outlier and is very large compared to the rest of the data. The sample mean is sensitive to unusual observations, i.e., outliers. The median is resistant to outliers.

3.1.5 Outliers

An outlier is a data value that is very different from the rest of the data and is far away from the center. If there are extreme values in the data, the median is a better measure of the center than the mean. The mean is not a resistant measure because it is pulled in the direction of the outlier. The median and the mode are resistant measures because they are not affected by extreme values.

As a consumer, you need to be aware that people choose the measure of center that best supports their claim. When you read an article in the newspaper and it talks about the "average," it usually means the mean, but sometimes it refers to the median. Some articles will use the word "median" instead of "average" to be more specific. If you need to make an important decision and the information says "average," it would be wise to ask if the "average" is the mean or the median before you decide.

As an example, suppose that a company administration wants to use the mean salary as the average salary for the company. This is because the high salaries of the administration will pull the mean higher. The company can say that the employees are paid well because the average is high. However, the employees' union wants to use the median, since it discounts the extreme values of the administration and will give a lower value of the average. This will make the salaries seem lower and suggest that a raise is in order.

Why use the mean instead of the median? When multiple samples are taken from the same population, the sample means tend to be more consistent than other measures of the center. The sample mean is the more reliable measure of center.

3.1.6 Distribution Shapes

Remember that there are varying levels of skewness and symmetry. Sample data is rarely exactly symmetric, but is often approximately symmetric. Outliers will pull the mean in the direction of the outlier. If the distribution has a skewed tail to the left, the mean will be smaller than the median. If the distribution has a skewed tail to the right, the mean will be larger than the median. The mode, or modal class, is the tallest point(s), the highest frequency, of the distribution. Figures 3-2 to 3-5 show example distribution shapes.

Figure 3-2  Figure 3-3  Figure 3-4  Figure 3-5

Comparing the mean and the median provides useful information about the distribution shape.
• If the mean is equal to the median, the data is symmetric, see Figure 3-6.
• If the mean is larger than (to the right of) the median, the data is right skewed or positively skewed, see Figure 3-7.
• If the mean is smaller than (to the left of) the median, the data is left skewed, or negatively skewed, see Figure 3-8.

Figure 3-6  Figure 3-7  Figure 3-8

The following is a histogram for a random sample of student rent prices.
Comment on the distribution shape.

Figure 3-9  Figure 3-10

Solution: If we were to use Excel to find the mean and median, we would get that the mean house rental price is $1,082.08 and the median house rental price is $1,030. The mean is larger than the median and is being pulled to the right by the outlier of $2,550. If you were to draw a curve around the bars, as in Figure 3-10, you would get a tail for the one data point on the right. The outlier on the right is the direction of the skewness. This distribution is skewed to the right, or positively skewed.

Which measure of center is used on which type of data?
• Mode can be found on nominal, ordinal, interval, and ratio data, since the mode is just the data value that occurs most often. You are just counting the data values.
• Median can be found on ordinal, interval, and ratio data, since you need to put the data in order. As long as there is order to the data, you can find the median.
• Mean can be found on interval and ratio data, since you must have numbers to add together.
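The outlier sensitivity described above is easy to see numerically. Here is a short Python check (a sketch, not part of the original text) on the house-price sample:

from statistics import mean, median

prices = [325, 375, 385, 395, 420, 825]  # in $1,000s
print(mean(prices))    # 454.1666..., pulled up toward the 825 outlier
print(median(prices))  # 390.0, resistant to the outlier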
Variability is an important idea in statistics. If you were to measure the height of everyone in your classroom, every student would give you a different value. That means not every student has the same height. Thus, there is variability in people's heights. If you were to take a sample of the income level of people in a town, every sample would give you different information. There is variability between samples too. Variability describes how the data are spread out. If the data are very close to each other, then there is low variability. If the data are very spread out, then there is high variability. How do you measure variability? It would be good to have a number that measures it. This section will describe some of the different measures of variability, also known as variation.

Numerical statistics for variation can show how spread out data is. The variation of data is relative, and is usually used when comparing two sets of similar data. When we are making inferences about an average, we can make better estimates when there is less variation in the data. The four most common measures of the "spread" of data are called the range, variance, standard deviation, and coefficient of variation.

A sample of house prices (in $1,000s): 325, 375, 385, 395, 420, and 825, had a mean house price of $454,167. How much does this tell you about the price of all houses? Can you tell if most of the prices were close to the mean, or were the prices really spread out? What is the highest price and the lowest price? All you know is that the center of the prices is $454,167. What if you were approved for only $400,000 for a home loan, could you buy a home in this area? You need more information.

3.2.1 Range

The range of a set of data is the difference between the highest and the lowest data values (or maximum and minimum values). Note that in statistics we report only a single number, which represents the spread from the lowest to the highest value. Range = Max – Min.

Look at the following three sets of data. Find the mean, median, and range of each data set.
1. 10, 20, 30, 40, 50
2. 10, 29, 30, 31, 50
3. 28, 29, 30, 31, 32

Solution
1. mean = 30, median = 30, range = 50 – 10 = 40
2. mean = 30, median = 30, range = 50 – 10 = 40
3. mean = 30, median = 30, range = 32 – 28 = 4

Based on the mean, median, and range, the first two distributions appear the same, but you can see from the graphs that they are distributed differently. In part 1, the data are spread out equally. In part 2, the data have a clump in the middle and a single value at each end. The mean and median are the same for part 3, but the range is much smaller. All the data are clumped together in the middle.

3.2.2 Variance & Standard Deviation

The range does not really provide a very detailed picture of the variability. A better way to describe how the data are spread out is needed. Instead of looking at the distance from the highest value to the lowest, how about looking at the distance each value is from the mean? This spread is called the deviation.

Suppose a vet wants to analyze the weights of cats. The weights (in pounds) of five cats are 6.8, 8.2, 7.5, 9.4, and 8.2. Compute the deviation for each of the data values. The deviation is how far each data point is from the mean. To be consistent, always subtract the data point minus the mean.

Solution: Variable: X = weight of a cat. First, find the mean for the data set. The mean is $\bar{x} = \frac{\sum x}{n} = \frac{6.8+8.2+7.5+9.4+8.2}{5} = 8.02$ pounds.
Subtract the mean from each data point to get the deviations. Figure 3-11

Now try averaging the deviations by adding them together. Figure 3-12

The deviations add to 0 because the positive and negative values cancel each other out; the sum of the deviations from the mean will always be zero, so the plain average of the deviations cannot measure spread. To get rid of the negative signs, square each deviation. Figure 3-13

Then average the total of the squared deviations. The only thing is that in statistics there is a strange average here. Instead of dividing by the number of data values, you divide by the number of data values minus one. This n – 1 is called the degrees of freedom and will be discussed more later in the text. When we divide by the degrees of freedom, this gives an unbiased statistic. In this case, you would have the following:

$s^{2} = \frac{\sum(x-\bar{x})^{2}}{n-1} = \frac{3.728}{5-1} = \frac{3.728}{4} = 0.932$ pounds²

Notice that this statistic is denoted as s². This statistic is called the sample variance and it is a measure of the average squared distance from the mean. If you now take the square root, you will get the average distance from the mean. The square root of the variance is called the sample standard deviation, and is denoted with the letter s.

$s = \sqrt{0.932} = 0.9654$ pounds

The standard deviation is the average (mean) distance from a data point to the mean. It can be thought of as how much a typical data point differs from the mean.

The sample variance formula: $s^{2} = \frac{\sum(x-\bar{x})^{2}}{n-1}$, where $\bar{x}$ is the sample mean, n is the sample size, and Σ means to find the sum.

The sample standard deviation formula: $s = \sqrt{s^{2}} = \sqrt{\frac{\sum(x-\bar{x})^{2}}{n-1}}$.

The n – 1 in the denominator has to do with a concept called degrees of freedom (df). Dividing by the df makes the sample standard deviation a better approximation of the population standard deviation than dividing by n.

We rarely will find a population variance or standard deviation, but you will need to know the symbols. The population variance formula: $\sigma^{2}=\frac{\sum(x-\mu)^{2}}{N}$. The population standard deviation formula: $\sigma=\sqrt{\frac{\sum(x-\mu)^{2}}{N}}$. The lower-case Greek letter σ is pronounced "sigma"; σ² represents the population variance, μ is the population mean, and N is the size of the population.

Note: the sum of the deviations should always be zero. Try not to round too much in the calculations for standard deviation, since each rounding causes a slight error.

Suppose that a manager wants to test two new training programs. The manager randomly selects five people for each training type and measures the time it takes to complete a task after the training. The times for both trainings are in the table below. Which training method is more consistent?

Solution: It is important that you define what each variable is, since there are two of them. Variable 1: X1 = time after training 1. Variable 2: X2 = time after training 2. The units and scale are the same for both groups. To answer which training method is more consistent, first you need some descriptive statistics. Start with the mean for each sample.

$\bar{x}_1 = \frac{56 + 75 + 48 + 63 + 59}{5} = 60.2$ minutes  $\bar{x}_2 = \frac{60 + 58 + 66 + 59 + 58}{5} = 60.2$ minutes

Since both means are the same value, you cannot answer the question from the means alone. Now calculate the standard deviation for each sample.
Figure 3-14  Figure 3-15

The variance for each sample is:

$s_{1}^{2}=\frac{394.8}{4}=98.7 \text{ minutes}^{2}$
$s_{2}^{2}=\frac{44.8}{4}=11.2 \text{ minutes}^{2}$

The standard deviations are:

$s_1 = \sqrt{98.7} = 9.9348$ minutes
$s_2 = \sqrt{11.2} = 3.3466$ minutes

Comparing the standard deviations, the second training method seems to be the better training, since the data are less spread out. This means it is more consistent. It would be better for the managers in this case to have a training program that produces more consistent results, so they know what to expect for the time it takes to complete the task.

Descriptive statistics can be time-consuming to calculate by hand, so use technology.

One Variable Statistics on the TI Calculator

The procedure for calculating the sample mean ($\bar{x}$) and the sample standard deviation (sx) on the TI calculator is shown below. Note, the TI calculator also gives you the population standard deviation (σx) because it does not know whether the data you input are a population or a sample. You need to decide which value to use, based on whether you have a population or a sample. In almost all cases you have a sample and will be using sx. In addition, the calculator uses the notation sx instead of just s. It is just a way for it to denote the information.

TI-84: Enter the data in a list and then press [STAT]. Use the cursor keys to highlight CALC. Press 1 or [ENTER] to select 1:1-Var Stats. Press [2nd], then press the number key corresponding to your data list. Press [ENTER] to calculate the statistics. Note: the calculator always defaults to L1 if you do not specify a data list. sx is the sample standard deviation. You can arrow down and find more statistics. Use the min and max to calculate the range by hand. To find the variance, simply square the standard deviation.

TI-89: Press [APPS], select FlashApps, then press [ENTER]. Highlight Stats/List Editor, then press [ENTER]. Press [ENTER] again to select the main folder. To clear a previously stored list of data values, arrow up to the list name you want to clear, press [CLEAR], then press enter. Press [F4], select 1: 1-Var Stats. To get the list name to the List box, press [2nd] [VarLink], arrow down to list1 and press [ENTER]. This will bring list1 to the List box. Press [ENTER] to enter the list name and then enter again to calculate. Use the down arrow key to see all the statistics. Sx is the sample standard deviation. You can arrow down and find more statistics. Use the min and max to calculate the range by hand. To find the variance, simply square the standard deviation or take the last sum of squares divided by n – 1.

Excel: Type the data into one column, select the Data tab, and choose Data Analysis. Select Descriptive Statistics, and then select OK. Highlight the data for the Input Range; if you highlighted a label, check the Labels in First Row box. Select the circle to the left of Output Range, then click into the box to the right of Output Range and select one cell where you want the top left-hand corner of your summary table to start. Select the box next to Summary statistics, then select OK, see below. We get the following summary statistics:

In general, a "small" standard deviation means the data are close together (more consistent) and a "large" standard deviation means the data are spread out (less consistent). Sometimes you want consistent data and sometimes you do not.
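If you use Python instead of a TI calculator or Excel, the standard library's statistics module computes the same sample (n – 1) versions shown above. This is a minimal sketch, not part of the original text, applied to the two training samples:

import statistics as st

training1 = [56, 75, 48, 63, 59]
training2 = [60, 58, 66, 59, 58]
print(st.mean(training1), st.mean(training2))          # 60.2 and 60.2
print(st.variance(training1), st.variance(training2))  # 98.7 and 11.2 minutes²
print(st.stdev(training1), st.stdev(training2))        # about 9.9348 and 3.3466 minutes

Like sx on the TI, stdev and variance divide by n – 1; the population versions are pstdev and pvariance.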
As an example, if you are making bolts, you want the lengths to be very consistent, so you want a small standard deviation. If you are administering a test to see who can be a pilot, you want a large standard deviation so you can tell who the good and bad pilots are. What do "small" and "large" mean? To a bicyclist whose average speed is 20 mph, s = 20 mph is huge. To an airplane whose average speed is 500 mph, s = 20 mph is nothing. The "size" of the variation depends on the size of the numbers in the problem and the mean. Another situation where you can determine whether a standard deviation is small or large is when you are comparing two different samples. A sample with a smaller standard deviation is more consistent than a sample with a larger standard deviation.

We can also compare variability between histograms: the standard deviation and variance measure the average spread from left to right. Take a moment and see if you can order the following histograms from the smallest to the largest standard deviation.

Figure 3-16  Figure 3-17  Figure 3-18  Figure 3-19

The histogram that has more of the data close to the mean will have the smallest standard deviation. The histogram that has more of the data towards the endpoints will have a larger standard deviation. Figure 3-16 will have the largest standard deviation, since more of the data is grouped in the first and last class. Figure 3-17 will have the smallest standard deviation, since more of the data is grouped in the center class, which will be close to the mean in a symmetric distribution. Figures 3-18 and 3-19 are harder to compare without also having access to the mean and median to indicate skewness. However, Figure 3-19 does have smaller frequencies in the first and last three classes compared to Figure 3-18. The correct order from smallest to largest standard deviation would be Figure 3-17, Figure 3-19, Figure 3-18, and then Figure 3-16.

One should not compare the range, standard deviation or variance of different data sets that have different units or scale.

3.2.3 Coefficient of Variation

The coefficient of variation, denoted by CVar or CV, is the standard deviation divided by the mean. The units in the numerator and denominator cancel one another, and the result is usually expressed as a percentage. The coefficient of variation allows you to compare variability among data sets when the units or scale is different.

Coefficient of Variation = CVar = $\left(\frac{s}{\bar{x}} \cdot 100\right)\%$

The following is a sample of the alcohol content and calories for 12 oz. beers. Is the alcohol content (alcohol by volume, ABV) or calories more variable?

Name | Brewery | ABV | Calories in 12 oz.
Big Sky Scape Goat Pale Ale | Big Sky Brewing | 4.70% | 163
Sierra Nevada Harvest Ale | Sierra Nevada | 6.70% | 215
Steel Reserve | Miller Coors | 8.10% | 222
O'Doul's | Anheuser Busch | 0.40% | 70
Coors Light | Miller Coors | 4.15% | 104
Genesee Cream Ale | High Falls Brewing | 5.10% | 162
Breakside Pilsner | Breakside | 5.00% | 158
Dark Ale | Alberta Brewing Company | 5.00% | 155
Flying Dog Doggie Style | Flying Dog Brewery | 4.70% | 158
Big Sky I.P.A. | Big Sky Brewing | 6.20% | 195

Solution: Type the data into Excel and run descriptive statistics on both data sets to get the following. Next, compute the coefficient of variation using the mean and standard deviation for both data sets.
Alcohol content: CVar = $\left(\frac{0.020003}{0.05005} \cdot 100\right)\%$ = 39.97%

Calories: CVar = $\left(\frac{46.39875}{160.2} \cdot 100\right)\%$ = 28.96%

The alcohol content varies more than the number of calories. There is no shortcut on the calculator or in Excel for CVar, but you can find s and $\bar{x}$ and then simply divide.
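Because there is no built-in CVar function, a one-line helper is enough. This sketch (not part of the original text; the function name is ours) reproduces the beer comparison:

import statistics as st

def cvar(data):
    # Coefficient of variation as a percent: (s / x̄) · 100
    return st.stdev(data) / st.mean(data) * 100

abv = [0.047, 0.067, 0.081, 0.004, 0.0415, 0.051, 0.05, 0.05, 0.047, 0.062]
calories = [163, 215, 222, 70, 104, 162, 158, 155, 158, 195]
print(cvar(abv))       # about 39.97
print(cvar(calories))  # about 28.96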
3.3.1 Z-Scores

A z-score is the number of standard deviations an observation x is above or below the mean. Z-scores are used to compare the placement of a value relative to the mean.

If x is an observation from a sample, then the standardized value of x is the z-score where $z = \frac{x-\bar{x}}{s}$.

If x is an observation from a population, then the standardized value of x is the z-score where $z = \frac{x-\mu}{\sigma}$.

If the z-score is negative, x is less than the mean. If the z-score is positive, x is greater than the mean. There are no shortcuts on the calculator or in Excel for the z-score, but you can find s and $\bar{x}$ and then simply subtract and divide.

The number of standard deviations that a data value is from the mean is frequently used when comparing the position of values. If a z-score is zero, then the data value is the same as the mean. If the z-score is one, then the data value x is one standard deviation above the mean. If the z-score is –3.5, then the data value is three and a half standard deviations below the mean. The shaded area in Figure 3-20 represents one standard deviation from the mean. Figure 3-20

For a random sample, the mean time to make a cappuccino is 2.8 minutes with a standard deviation of 0.86 minutes. Find the z-score for someone who makes their cappuccino in 4.95 minutes.

Solution: $z = \frac{x-\bar{x}}{s} = \frac{4.95-2.8}{0.86} = 2.5$. Their time is 2.5 standard deviations above average.

On a math test, a student scored 45. The class average was 50 with a standard deviation of 3. The same student scored an 80 on a history test, and the class average was 85 with a standard deviation of 2.5. On which exam did the student perform better compared with the rest of the class?

Solution: $z_{m} = \frac{45-50}{3} = -1.67$ $\quad$ $z_{h} = \frac{80-85}{2.5} = -2$

Test scores are "better" when they are larger, so whichever has the largest z-score did better. The student did better on the math test than the history test, compared to the rest of the class. Be careful with the word "better"; depending on the context, better may be smaller rather than larger. For example, golf scores, times running a race, and cholesterol levels would be better if they were smaller values.

The length of a human pregnancy has a mean of 272 days. A pregnancy lasting 281 days or more has a z-score of one. How many standard deviations above the mean is a pregnancy lasting 281 days or more?

Solution: One, since by definition the z-score is the number of standard deviations from the mean.

The length of a human pregnancy has a mean of 272 days. A pregnancy lasting 281 days or more has a z-score of one. What is the standard deviation of human pregnancy length?

Solution: We know the z-score = 1 and mean = 272. Replace these two numbers in the z-score formula, then solve for the standard deviation.

$1 = \frac{281-272}{\sigma}$ $\quad \Rightarrow \quad$ $1 = \frac{9}{\sigma}$ $\quad \Rightarrow \quad$ $\sigma = 9$
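A z-score helper takes one line. This sketch (not part of the original text; the function name is ours) checks the cappuccino and test-score examples above:

def z_score(x, center, spread):
    # Number of standard deviations x lies from the mean
    return (x - center) / spread

print(z_score(4.95, 2.8, 0.86))  # 2.5 (cappuccino)
print(z_score(45, 50, 3))        # about -1.67 (math test)
print(z_score(80, 85, 2.5))      # -2.0 (history test)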
3.3.2 Percentiles

Along with the center and variability, another useful numerical measure is the ranking of a number. A percentile is a measure of ranking. It represents a location measurement of a data value relative to the rest of the values. Many standardized tests give the results as a percentile. Doctors use percentile graphs to show height and weight standards.

Interpreting Percentiles

The pth percentile is the value that separates the bottom p% from the upper (100 – p)% of the ordered (smallest to largest) data. For example, the 75th percentile is the value that separates the bottom 75% from the upper 25% of the data.

There are several methods used to find percentiles, and you may get different percentile values depending on which software or calculator you use. For example, Excel has two methods, neither of which is the same method the TI calculators use.

What does a score at the 90th percentile represent?

Solution: This means that 90% of the scores were at or below this score. (A person did the same as or better than 90% of the test takers.)

What does a score at the 70th percentile represent?

Solution: This means that 70% of the scores were at or below this score.

Percentile versus Score: If the test was out of 100 points and you scored at the 80th percentile, what was your score on the test? You do not know! All you know is that you scored the same as or better than 80% of the people who took the test. If all the scores were low, you could have still failed the test. On the other hand, if many of the scores were high you could have gotten a 95% or so.

Note there is more than one method to find percentiles; the rounding rule used in Excel is not the same as the one used on the TI calculators.

Finding a Percentile:

Step 1: Arrange the data in order from lowest to highest.
Step 2: Substitute into the formula $i = \frac{(n+1) \cdot p}{100}$, where n = sample size and p = percentile.
Step 3A: If i is a whole number, count out i places from the lowest number to find the percentile. For example, if you get i = 3, then the 3rd value is the percentile.
Step 3B: If i is not a whole number, then take the weighted average between the ith and (i+1)th data values as the percentile. For example, if i = 3.25, the percentile is 25% of the distance between the 3rd and the 4th data values: percentile = (ith data value) + ((i+1)th data value – ith data value)·(0.##), where 0.## is the decimal remainder of i.

Compute the 10th percentile of the random sample of 13 ages: 15, 18, 22, 25, 26, 31, 33, 35, 38, 46, 51, 53, and 95.

Solution: The data is already ordered, so next find $i = \frac{(n+1) \cdot p}{100} = \frac{14 \cdot 10}{100} = 1.4$. Since i is not a whole number, use Step 3B and take the weighted average 40% of the way between the 1st and 2nd values. This would be 15 + (18 – 15)·0.4 = 16.2, and this is your 10th percentile, P10 = 16.2.

In Excel use =PERCENTILE.EXC(array, k), where array is the cell reference to where the data is located and k is the percentile as a decimal between 0 and 1. Note you do not have to sort the data prior to typing it into Excel. For this example, if you type the data into column A, then use the formula =PERCENTILE.EXC(A1:A13, 0.1) = 16.2.

3.3.3 Quartiles

There are special percentiles called quartiles. Quartiles are numbers that divide the data into fourths. One fourth (or a quarter) of the data falls between consecutive quartiles. There are three quartiles, Q1, Q2, and Q3, that divide the ordered data into 4 pieces of approximately equal size, or 25% each. Thus, 25% of the data values are less than Q1, 25% of the data values are between Q1 and Q2, 25% of the data values are between Q2 and Q3, and 25% of the data values are greater than Q3. Use the dollar as an example: if we make change for a dollar, we would get four quarters to make one dollar. Hence, quarters for quartiles.

To find the quartiles, use the same rules as for percentiles:
1. Arrange the observations from smallest to largest and use the previous percentile rule.
2. Then find all three quartiles.
• Q1 = first quartile = 25th percentile
• Q2 = second quartile = median = 50th percentile
• Q3 = third quartile = 75th percentile

Compute all three quartiles for the random sample of 13 ages: 15, 18, 22, 25, 26, 31, 33, 35, 38, 46, 51, 53, and 95.

Solution: For the first quartile, $i = \frac{(n+1) \cdot p}{100} = \frac{14 \cdot 25}{100} = 3.5$. Since i is not a whole number, take the weighted average halfway between the 3rd and 4th data values: 22 + (25 – 22)·0.5 = 23.5, so Q1 = 23.5. In Excel you could use the percentile formula, but there is also a quartile formula: =QUARTILE.EXC(array, quartile), where array is the cell reference to the data and quartile is either 1, 2 or 3 for the 3 possible quartiles. In this example we would have =QUARTILE.EXC(A1:A13, 1) = 23.5.

To find the second quartile: $i = \frac{(n+1) \cdot p}{100}=\frac{14 \cdot 50}{100} = 7$. Since i is a whole number, use the 7th value, so Q2 = 33. Or use the Excel formula =QUARTILE.EXC(A1:A13, 2) = 33.

For the third quartile, $i = \frac{(n+1) \cdot p}{100}=\frac{14 \cdot 75}{100} = 10.5$. Since i is not a whole number, take the weighted average halfway between the 10th and 11th values: 46 + (51 – 46)·0.5 = 48.5. Or use the Excel formula =QUARTILE.EXC(A1:A13, 3) = 48.5, so Q3 = 48.5.

The high school graduating class of 2016 in Oregon had the following ACT quartile scores. Interpret what the number 26 under the Composite column represents. https://www.act.org/content/dam/act/unsecured/documents/P_38_389999_S_S_N00_ACT-GCPR_Oregon.pdf

Solution: From the report we can see that the third quartile for the composite score is 26; this means that 75% of Oregon students who took the ACT exam scored 26 or below.

Other Types of Percentiles

Quintiles break a data set up into five equal pieces. We will not be using these, but be aware that percentiles come in different forms. Deciles break a data set up into ten equal pieces and are found using the percentile rule. For example, the 6th decile = D6 = 60th percentile. Use the dollar as an example: if we make change for a dollar, we would get ten dimes to make one dollar. Hence, a dime might help you remember deciles.

Earlier, in Example 2-14, we made an ogive in Excel for the following sample of 35 ages. Use the ogive to find the age at the 8th decile.

46 47 49 25 46 22 42 24 46 40 39 27 25 30 33 27 46 21 29 20 26 25 25 26 35 49 33 26 32 31 39 30 39 29 26

Figure 3-21  Figure 3-22

Solution: The cumulative % column represents the cumulative relative frequencies, which are equivalent to the percentiles for each class. The red line is the ogive, and the percentiles correspond to the vertical axis on the right side. If we wanted to know what age the 80th percentile is in the sample, we could use the ogive to get an approximate value. Starting on the right at 80%, make a horizontal line until you hit the red cumulative % line, and then make a vertical line from there down to the axis to get the approximate age. See Figure 3-22. In this case, the 80th percentile = 8th decile would be approximately 44.

3.3.4 Five Number Summary & Outliers

If you record the quartiles together with the minimum and maximum values from a data set, you have five numbers. These five numbers are known as the five-number summary, consisting of the minimum, the first quartile (Q1), the median (Q2), the third quartile (Q3), and the maximum (in that order). The interquartile range, IQR, is the difference between the first and third quartiles, Q1 and Q3. Half of the data (50%) falls in the interquartile range.
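The weighted-average percentile rule above (the same rule Excel's PERCENTILE.EXC uses) is straightforward to code. The sketch below is not part of the original text (the function name is ours), and it also anticipates the 1.5·IQR outlier fences defined in the next subsection:

def percentile(data, p):
    # Weighted-average percentile with index i = (n + 1) * p / 100.
    # Assumes 0 < p < 100 and that i lands between 1 and n.
    data = sorted(data)
    i = (len(data) + 1) * p / 100
    k = int(i)    # whole part of i
    frac = i - k  # decimal remainder of i
    if frac == 0:
        return data[k - 1]
    return data[k - 1] + (data[k] - data[k - 1]) * frac

ages = [15, 18, 22, 25, 26, 31, 33, 35, 38, 46, 51, 53, 95]
print(percentile(ages, 10))                          # P10 = 16.2
q1, q2, q3 = (percentile(ages, p) for p in (25, 50, 75))
print(q1, q2, q3)                                    # 23.5, 33, 48.5
iqr = q3 - q1                                        # 25
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr        # -14 and 86
print([x for x in ages if x < lower or x > upper])   # [95] is an outlier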
If the IQR is "large," the data are spread out, and if the IQR is "small," the data are closer together. The interquartile range: IQR = Q3 – Q1.

Not only does the IQR give the range of the middle 50% of the data, it is also used to determine outliers in a sample. To find these outliers, we first find what are called the lower and upper limits, sometimes called fences. The lower limit, or inner fence, is Q1 – (1.5·IQR). Any values that are less than the lower limit are considered outliers. Similarly, the upper limit, or outer fence, is Q3 + (1.5·IQR). Any values that are more than the upper limit are considered outliers. If all the numbers in the sample fall between the lower and upper limits, including the endpoints, then there are no outliers in the sample. Any values outside these limits would be considered outliers.

3.3.5 Modified Box-and-Whisker Plot

A boxplot (or box-and-whisker plot) is a graphical display of the five-number summary. A boxplot can be drawn vertically or horizontally. The modified boxplot shows outliers, whereas a regular boxplot does not show outliers. The basic format of the plot is a box drawn from Q1 to Q3, a vertical line drawn inside the box for the median, and horizontal lines (called whiskers) extending out of the middle of each end of the box to the minimum and maximum. The box should not touch the number line. The modified boxplot extends the left line to the smallest value greater than the lower fence, and extends the right line to the largest value less than the upper fence. Dots, circles, or asterisks represent any outliers. We will make modified boxplots for this course. As always, label the tick marks on the number line and give the graph a title. A boxplot is a graph of the 5-number summary, see Figure 3-23. Figure 3-23

It is important to note that when you are making the boxplot, the limits for finding outliers are not graphed in the plot; they are only used to find the outliers. The whiskers go to the next largest (or smallest) value in the data set after you remove the outlier(s).

If the sample has a symmetrical distribution, then the boxplot will be visibly symmetrical. If the data distribution has a left skew or a right skew, the line on that side of the boxplot will be visibly long in the direction of skewness. If the four quartiles are all about the same distance apart, then the data are likely a near uniform distribution. If a boxplot is symmetrical, and both outside lines are noticeably longer than the Q1-to-median and median-to-Q3 distances, the distribution is probably bell-shaped.

Make a modified box-and-whisker plot for the random sample of 13 ages: 15, 18, 22, 25, 26, 31, 33, 35, 38, 46, 51, 53, and 95.

Solution: Use Excel to compute the three quartiles:
Q1 =QUARTILE.EXC(A1:A13, 1) = 23.5
Q2 =QUARTILE.EXC(A1:A13, 2) = 33
Q3 =QUARTILE.EXC(A1:A13, 3) = 48.5

The 5-number summary values are 15, 23.5, 33, 48.5, and 95. Each of these numbers will need to be incorporated into the box-and-whisker plot, along with any outliers, to graph the modified box-and-whisker plot. To find the outliers, first find the IQR, and then find the lower and upper limits. The IQR = Q3 – Q1 = 48.5 – 23.5 = 25. The lower limit is Q1 – (1.5·IQR) = 23.5 – 1.5(25) = –14. The upper limit is Q3 + (1.5·IQR) = 48.5 + 1.5(25) = 86. Any value in our data set that is not between the lower and upper limits [–14, 86] is an outlier. By observation, we have one number that is outside this range, so the outlier is 95.
The whiskers are drawn to the next largest (or smallest) value in the data set after you remove the outlier(s). For this example, the next largest value in the data set is 53. Now put that all together to get the graph in Figure 3-24. Figure 3-24

The TI calculator and newer versions of Excel will make a modified boxplot. Note that the quartile rules used in the TI calculators are slightly different than those in Excel and than what is presented here: they do not use a weighted mean between values, just the halfway point between values.

TI-84: First, enter your data into list 1. Next, press 2nd > STAT PLOT, then choose the first plot. Note that your calculator may say Plot1…Off or show a different type of graph than the screenshot. Using your arrow keys, turn the plot on. Choose the modified boxplot, which is the first of the two boxplot options, with the small dots to the right of the whiskers representing outliers. Make sure your Xlist: is on L1, keep the frequency as one, and any mark will work, but the square shows up best. Here is a screenshot from the calculator for the last example. You can use Trace on the boxplot from the TI-84 calculator below to see where each quartile, whisker, and outlier are.

TI-89: Enter the data into the Stat/List editor under list 1. Press [APPS], then scroll down to Stat/List Editor; on the older style TI-89 calculators, go into the Flash/App menu, and then scroll down the list. Make sure the cursor is in the list, not on the list name, and type the desired values, pressing [ENTER] after each one. To clear a previously stored list of data values, arrow up to the list name you want to clear, press [CLEAR], and then press enter. After you enter the data, press [F2] Plots, scroll down to [1: Plot Setup] and press [ENTER]. Then select [F1] Define. Use your arrow keys to select Mod Box Plot for Type, and then scroll down to the x-variable box. Press [2nd] [VarLink] (this key is above the + sign). Then arrow down until you find your List1 name under the Main file folder. Then press [ENTER] and this will bring the name List1 back to the menu. You will now see that Plot1 has a small picture of a boxplot. To view the boxplot, press [F5] Zoom Data. Select [F3: Trace] to see the five-number summary and any outliers. Use the left and right arrow keys to move between the values.

Excel: Note this example is on a PC running Excel 2019; older versions of Excel may not have a boxplot option. First, type your sample data into column A in any order. Highlight the data, and then select the Insert tab. Under the graphing options, in the picture shaped like a histogram called statistical charts, select Box and Whisker. You can change the formatting options and add the chart title as needed. Below is the finished Excel boxplot. Note that Excel does a vertical boxplot rather than the traditional horizontal number line. Excel marks an × just above the median where the mean falls. Usually one would not include the mean on a boxplot. Remember that when the mean is greater than the median, the distribution is usually skewed to the right. When a boxplot has outliers only on one side, we can also say the distribution is skewed in the direction of the outliers, which also indicates that these ages are skewed to the right.

Side-by-side boxplots are great for comparing quartiles and distribution shapes for several samples using the same units and scale. There are four franchises in different parts of town. Compare the weekly sales over a year for each of the four franchises.
Compare the boxplots shown in Figure 3-25. Figure 3-25

Solution: We can see that Store 2 in Figure 3-25 has the highest sales, since the median for this store is higher than the third quartile of all the other stores. Store 2 also has sales that are more consistent from week to week, with the smallest range, and has a symmetric distribution. The lowest performing store, Store 1, has the lowest median sales and is skewed to the right. Both Stores 3 and 4 have moderate sales and are skewed left.

3.3.6 Empirical Rule

Before looking at the process for computing probabilities, it is useful to look at the Empirical Rule, which gives the approximate proportion of data points under a bell-shaped curve between two points. The Empirical Rule is just an approximation; more precise methods for finding these proportions will be demonstrated in later sections. The Empirical Rule should only be used with bell-shaped data.

The Empirical Rule (also called the 68-95-99.7 Rule): In a bell-shaped distribution with mean μ and standard deviation σ,
• Approximately 68% of the observations fall within 1 standard deviation (σ) of the mean μ.
• Approximately 95% of the observations fall within 2 standard deviations (2σ) of the mean μ.
• Approximately 99.7% of the observations fall within 3 standard deviations (3σ) of the mean μ.

Note that we are using the notation for the population mean μ and the population standard deviation σ, but the rule would also work using the sample mean and sample standard deviation. Figure 3-26

In 2009 the average SAT mathematics score was 501, with a standard deviation of 116. Assume that SAT scores are bell-shaped.
a) Approximately what proportion of students scored between 269 and 733 on the 2009 SAT mathematics test?
b) Approximately what proportion of students scored between 385 and 617 on the 2009 SAT mathematics test?
c) Approximately what proportion of students scored at least 617 on the 2009 SAT mathematics test?

Solution:
a) The key phrase is "bell-shaped," so we can use the Empirical Rule. Start by finding the z-scores $z = \frac{x-\mu}{\sigma}$ for both endpoints given in the question. A z-score by definition is the number of standard deviations a data value is from the mean. Figure 3-27. The two z-scores, $z=\frac{269-501}{116}=-2$ and $z=\frac{733-501}{116}=2$, show that the test scores of 269 and 733 are two standard deviations from the mean. Using the second bulleted item in the Empirical Rule, approximately 95% of the math SAT scores will fall between 269 and 733.

b) Take the z-scores of the endpoints to get: $z=\frac{385-501}{116}=-1$, $z=\frac{617-501}{116}=1$. Figure 3-28. The two z-scores show that the test scores of 385 and 617 are one standard deviation from the mean. Using the first bulleted item in the Empirical Rule, approximately 68% of the math SAT scores will fall between 385 and 617.

c) Start by taking the z-score of 617: $z = \frac{617-501}{116} = 1$. Since a bell-shaped curve is symmetric and we can assume that 100% of the population is represented, we can subtract the middle from the whole to get 100% – 68% = 32%. If we divide this outside area by two, $\frac{32\%}{2} = 16\%$, we would expect 16% in each tail area. See Figure 3-29. Approximately 16% of students scored at least 617 on the 2009 SAT mathematics test.

If you were to get a z-score that is not –3, –2, –1, 1, 2 or 3, then you would not be able to apply the Empirical Rule. We also need to ensure that our population has a bell-shaped curve before using the Empirical Rule.
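Since the Empirical Rule is just a z-score lookup, a few lines of Python (a sketch, not part of the original text) verify the SAT example:

mu, sigma = 501, 116
for score in (269, 385, 617, 733):
    print(score, (score - mu) / sigma)
# z = ±1 -> about 68% of scores between 385 and 617
# z = ±2 -> about 95% of scores between 269 and 733
# tail above z = 1: (100 - 68) / 2 = 16% scored at least 617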
3.3.7 Chebyshev's Theorem

One way of estimating the proportion of values from any data set within a certain number of standard deviations is Chebyshev's Theorem. Pafnuty Chebyshev (Чебышёв) was a Russian mathematician who proved several important theorems. The one we will use for this chapter is called Chebyshev's Inequality. The Empirical Rule only works for bell-shaped distributions; however, we can use Chebyshev's Inequality for any distribution shape.

Chebyshev's Inequality: The proportion (percent or fraction) of values from a data set that will fall within z standard deviations of the mean will be at least $\left(\left(1-\frac{1}{z^{2}}\right) \cdot 100\right)\%$, where z is a real number with an absolute value greater than 1 (z is not necessarily an integer).

The average number of acres for farms in a certain country is 443, with a standard deviation of 42 acres. At least what percent of farms will have between 338 and 548 acres?

Solution: The question gives no indication of the distribution shape for the number of farm acres, so we will use Chebyshev's Inequality instead of the Empirical Rule. The easiest way to start is to find the z-scores of the lower and upper bounds given in the question, where μ = 443 and σ = 42.

$z = \frac{338-443}{42} = -2.5$ and $z = \frac{548-443}{42} = 2.5$

Use either z-score (it is easier to use the positive value) and substitute into the formula: $\left(\left(1-\frac{1}{(2.5)^{2}}\right) \cdot 100\right)\% = 84\%$. At least 84% of the farms will have between 338 and 548 acres.

The average quiz score for a statistics course is 15.2 with a standard deviation of 3.15. Between what two quiz scores would at least 75% of the scores fall?

Solution: This question gives a percent, so we need to work backward. We will use Chebyshev's Inequality since there is no mention of a bell-shaped distribution. Use algebra to solve for z in the formula $\left(\left(1-\frac{1}{z^{2}}\right) \cdot 100\right)\% = 75\%$. Start by dividing both sides by 100% to get rid of the %. We then have $1 - \frac{1}{z^{2}} = 0.75$. Next, add $\frac{1}{z^{2}}$ to both sides of the equation and subtract 0.75 from both sides to get $0.25 = \frac{1}{z^{2}}$. Multiply both sides by z², divide both sides by 0.25, and simplify to get $z^{2} = \frac{1}{0.25} \Rightarrow z^{2} = 4$. Take the square root of both sides: $\sqrt{z^{2}} = \sqrt{4} \Rightarrow z = \pm 2$.

This means that, according to Chebyshev's Theorem, at least 75% of the data will fall within two standard deviations of the mean. Next, we need to find which quiz scores are two standard deviations from the mean by computing the mean ± 2 standard deviations:

$\mu - 2 \cdot \sigma = 15.2 - 2 \cdot 3.15 = 8.9$
$\mu + 2 \cdot \sigma = 15.2 + 2 \cdot 3.15 = 21.5$

At least 75% of the students scored between 8.9 and 21.5 on the quiz. The general formulas for finding the endpoints are: lower endpoint a = μ – z·σ and upper endpoint b = μ + z·σ.

If the distribution of quiz scores were bell-shaped, we would have a larger percent (95%) between 8.9 and 21.5. Chebyshev's Inequality makes no assumption about the shape of the distribution and says "at least," which is still correct if more students fall within the range. Since Chebyshev's Inequality works for any distribution shape, we should only use it between two values, not strictly below or above a point.
If we were interested in the area below a certain point, we would not know whether we had the fat or skinny tail of a skewed distribution. The following picture shows a positively skewed distribution with 1.5 standard deviations from the mean shaded in red. Chebyshev's Inequality would guarantee that at least $\left(\left(1-\frac{1}{(1.5)^{2}}\right) \cdot 100\right)\% \approx 55.6\%$ of the data would fall within 1.5 standard deviations of the mean, the shaded area. Figure 3-30

Summary: Use the mode for nominal, ordinal, interval, and ratio data, since the mode is just the data value that occurs most often. You are just counting the data values. The median can be found on ordinal, interval, and ratio data, since you need to put the data in order. As long as there is order to the data, you can find the median. The mean can be found on interval and ratio data, since you must have numbers to add together. The mean is pulled in the direction of outliers. By comparing the mean to the median, you can decide if a distribution is symmetric or skewed.

The range, variance, standard deviation, and coefficient of variation are used to measure how spread out a data set is from the middle. When comparing two data sets with different units or scales, use the coefficient of variation. Z-scores tell you how many standard deviations a data point is away from the mean. Quartiles are special percentiles that are used to find the interquartile range, identify outliers, and make a box-and-whisker plot.

Use the Empirical Rule when finding the proportion of a sample or population that falls within 1, 2, or 3 standard deviations on a bell-shaped curve: approximately 68% of the data will fall within one standard deviation, 95% within two standard deviations, and 99.7% within three standard deviations. If you do not know the distribution shape, then use Chebyshev's Inequality to find the minimum proportion within z standard deviations of the mean, for |z| > 1.
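Chebyshev's bound is a one-line function. This sketch (not part of the original text; the function name is ours) reproduces the three percentages used in this subsection:

def chebyshev_pct(z):
    # Minimum percent within z standard deviations of the mean; needs |z| > 1
    return (1 - 1 / z**2) * 100

print(chebyshev_pct(2.5))  # 84.0 (farm example)
print(chebyshev_pct(2))    # 75.0 (quiz example)
print(chebyshev_pct(1.5))  # 55.55... (shaded area in Figure 3-30)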
Sample Mean: $\bar{x} = \frac{\sum x}{n}$
Population Mean: $\mu = \frac{\sum x}{N}$
Weighted Mean: $\bar{x} = \frac{\sum(x w)}{\sum w}$
Range = Max – Min
Sample Standard Deviation: $s = \sqrt{\frac{\sum(x-\bar{x})^{2}}{n-1}}$
Population Standard Deviation: $\sigma = \sqrt{\frac{\sum(x-\mu)^{2}}{N}}$
Sample Variance: $s^{2} = \frac{\sum(x-\bar{x})^{2}}{n-1}$
Population Variance: $\sigma^{2} = \frac{\sum(x-\mu)^{2}}{N}$
Coefficient of Variation: CVar = $\left(\frac{s}{\bar{x}} \cdot 100\right)\%$
Z-Score: $z = \frac{x-\bar{x}}{s}$
Percentile Index: $i = \frac{(n + 1) \cdot p}{100}$
Interquartile Range: IQR = Q3 – Q1
Empirical Rule: z = 1, 2, 3 ⇒ 68%, 95%, 99.7%
Chebyshev's Inequality: $\left(\left(1-\frac{1}{z^{2}}\right) \cdot 100\right)\%$
Outlier Lower Limit: Q1 – (1.5·IQR)
Outlier Upper Limit: Q3 + (1.5·IQR)

3.05: Chapter 3 Exercises

1. A sample of eight cats found the following weights in kg. a) Compute the mode. b) Compute the median. c) Compute the mean.

2. Cholesterol levels, in milligrams (mg) of cholesterol per deciliter (dL) of blood, were collected from patients two days after they had a heart attack (Ryan, Joiner & Ryan, Jr, 1985). Retrieved from http://www.statsci.org/data/general/cholest.html. a) Compute the mode. b) Compute the median. c) Compute the mean.

3. The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Tasman Sea are listed below. a) Compute the mode. b) Compute the median. c) Compute the mean.

4. A university assigns letter grades with the following 4-point scale: A = 4.00, A– = 3.67, B+ = 3.33, B = 3.00, B– = 2.67, C+ = 2.33, C = 2.00, C– = 1.67, D+ = 1.33, D = 1.00, D– = 0.67, F = 0.00. Calculate the grade point average (GPA) for a student who took in one term a 4 credit math course and received a B+, a 1 credit seminar course and received an A, a 3 credit history course and received an A–, and a 5 credit writing course and received a D.

5. A university assigns letter grades with the following 4-point scale: A = 4.00, A– = 3.67, B+ = 3.33, B = 3.00, B– = 2.67, C+ = 2.33, C = 2.00, C– = 1.67, D+ = 1.33, D = 1.00, D– = 0.67, F = 0.00. Calculate the grade point average (GPA) for a student who took in one term a 3 credit biology course and received a C+, a 1 credit lab course and received a B, a 4 credit engineering course and received an A–, and a 4 credit chemistry course and received a C+.

6. An employee at Clackamas Community College (CCC) is evaluated based on goal setting and accomplishments toward the goals, job effectiveness, competencies, and CCC core values. Suppose for a specific employee, goal 1 has a weight of 30%, goal 2 has a weight of 20%, job effectiveness has a weight of 25%, competency 1 has a weight of 4%, competency 2 has a weight of 3%, competency 3 has a weight of 3%, competency 4 has a weight of 3%, competency 5 has a weight of 2%, and core values has a weight of 10%. Suppose the employee has scores of 3.0 for goal 1, 3.0 for goal 2, 2.0 for job effectiveness, 3.0 for competency 1, 2.0 for competency 2, 2.0 for competency 3, 3.0 for competency 4, 4.0 for competency 5, and 3.0 for core values. Compute the weighted mean score for this employee. If an employee has a score less than 2.5, they must have a Performance Enhancement Plan written. Does this employee need a plan?

7.
A statistics class has the following activities and weights for determining a grade in the course: test 1 worth 15% of the grade, test 2 worth 15% of the grade, test 3 worth 15% of the grade, homework worth 10% of the grade, semester project worth 20% of the grade, and the final exam worth 25% of the grade. If a student receives an 85 on test 1, a 76 on test 2, an 83 on test 3, a 74 on the homework, a 65 on the project, and a 61 on the final, what grade did the student earn in the course? All the assignments were out of 100 points.

8. A statistics class has the following activities and weights for determining a grade in the course: test 1 worth 15% of the grade, test 2 worth 15% of the grade, test 3 worth 15% of the grade, homework worth 10% of the grade, semester project worth 20% of the grade, and the final exam worth 25% of the grade. If a student receives a 25 out of 30 on test 1, a 20 out of 30 on test 2, a 28 out of 30 on test 3, a 120 out of 140 points on the homework, a 65 out of 100 on the project, and a 31 out of 35 on the final, what grade did the student earn in the course?

9. A sample of eight cats found the following weights in kg. a) Compute the range. b) Compute the variance. c) Compute the standard deviation.

10. The following data represent the percent change in tuition levels at public, four-year colleges (inflation adjusted) from 2008 to 2013 (Weissmann, 2013). A histogram of the data is shown. What is the shape of the distribution?

11. The following is a histogram of quiz grades. a) What is the shape of the distribution? b) Which is higher, the mean or the median?

12. Cholesterol levels were collected from patients two days after they had a heart attack (Ryan, Joiner & Ryan, Jr, 1985). Retrieved from http://www.statsci.org/data/general/cholest.html. a) Compute the range. b) Compute the variance. c) Compute the standard deviation.

13. Suppose that a manager wants to test two new training programs. The manager randomly selects five people for each training type and measures the time it takes to complete a task after the training. The times for both trainings are in the table below. Which training method is more variable?

14. The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Tasman Sea are listed below. a) Compute the range. b) Compute the variance. c) Compute the standard deviation.

15. Here are pulse rates before and after exercise. Which group has the larger range?

16. A midterm in a statistics course had a mean score of 70 with a standard deviation of 5. A quiz in a biology course had an average of 20 with a standard deviation of 5. a) Compute the coefficient of variation for the statistics midterm exam. b) Compute the coefficient of variation for the biology quiz. c) Aaliyah scored a 75 on the statistics midterm exam. Compute Aaliyah's z-score. d) Viannie scored a 25 on the biology quiz. Compute Viannie's z-score. e) Which student did better on their respective test? Why? f) Was there more variability in the midterm or the quiz scores? Justify your answer with statistics.

17. The following is a sample of quiz scores. a) Compute $\bar{x}$. b) Compute s². c) Compute the median. d) Compute the coefficient of variation. e) Compute the range.

18. The time it takes to fill an online order was recorded, and the following descriptive statistics were found using Excel. What is the coefficient of variation?

19. The following are the heights and weights of a random sample of baseball players.
a) Compute the coefficient of variation for both height and weight. b) Is there more variation in height or weight?

20. A midterm in a statistics course had a mean score of 70 with a standard deviation of 5. A quiz had an average of 20 with a standard deviation of 5. A student scored a 73 on their midterm and a 22 on their quiz. On which test did the student do better compared to the rest of the class? Justify your answer with statistics.

21. The length of a human pregnancy is normally distributed with a mean of 272 days and a standard deviation of 9.1 days. William Hunnicut was born in Portland, Oregon, at just 181 days into his gestation. What is the z-score for William Hunnicut's gestation? Retrieved from: http://digitalcommons.georgefox.edu/cgi/viewcontent.cgi?article=1149&context=gfc_life.

22. Arm span (sometimes referred to as wingspan) is the physical measurement of the length of an individual's arms from fingertip to fingertip. The average arm span of a man is 70 inches with a standard deviation of 4.5 inches. The Olympic gold medalist Michael Phelps has an arm span of 6 feet 7 inches, which is three inches more than his height. What is the z-score for Michael Phelps' arm span?

23. The average time to run the Pikes Peak Marathon in 2017 was 7.44 hours with a standard deviation of 1.34 hours. Rémi Bonnet won the Pikes Peak Marathon with a run time of 3.62 hours. Retrieved from: http://pikespeakmarathon.org/results/ppm/2017/. The Tevis Cup 100-mile one-day horse race for 2017 had an average finish time of 20.38 hours with a standard deviation of 1.77 hours. Tennessee Lane won the 2017 Tevis Cup in a ride time of 14.75 hours. Retrieved from: https://aerc.org/rpts/RideResults.aspx. a) Compute the z-score for Rémi Bonnet's time. b) Compute the z-score for Tennessee Lane's time. c) Which competitor did better compared to their respective event?

24. Cholesterol levels were collected from patients two days after they had a heart attack (Ryan, Joiner & Ryan, Jr, 1985). Retrieved from http://www.statsci.org/data/general/cholest.html. a) Compute the 25th percentile. b) Compute the 90th percentile. c) Compute the 5th percentile. d) Compute Q3.

25. A sample of eight cats found the following weights in kg. Compute the 5-number summary.

26. The following data represent the grade point averages for a sample of 15 PSU students. a) Compute the lower and upper limits. b) Identify whether there are any outliers. c) Draw a modified box-and-whisker plot.

27. The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Tasman Sea are listed below. a) Compute the 5-number summary. b) Compute the lower and upper limits and any outlier(s), if they exist. c) Make a modified box-and-whisker plot.

28. The following are box-and-whisker plots of life expectancy for European countries and Southeast Asian countries from 2011. What is the distribution shape of the European countries' life expectancy?

29. To determine if Reiki is an effective method for treating pain, a pilot study was carried out where a certified second-degree Reiki therapist provided treatment on volunteers. Pain was measured using a visual analogue scale (VAS) immediately before and after the Reiki treatment (Olson & Hanson, 1997). Higher numbers mean the patients had more pain. a) Use the box-and-whisker plots to determine the IQR for the before-treatment measurements.
b) Use the box-and-whisker plots of the before and after VAS ratings to determine if the Reiki method was effective in reducing pain.

30. The median household income (in \$1,000s) from a random sample of 100 counties that gained population over 2000-2010 are shown on the left. Median incomes from a random sample of 50 counties that had no population gain are shown on the right. (OpenIntro Statistics, 2016) What is the distribution shape for the counties with no population gain?

31. Match the correct descriptive statistics to the letter of the corresponding histogram and boxplot. Choose the correct letter (a, b, or c) for the corresponding histogram and Roman numeral (i, ii, or iii) for the corresponding boxplot. You should only use the visual representation, the definition of standard deviation, and measures of central tendency to match the graphs with their respective descriptive statistics.

32. A sample has the following statistics: Minimum = 10, Maximum = 50, Range = 40, $\overline{x}$ = 20, mode = 32, Q1 = 30, Q2 = 35, Q3 = 38, standard deviation = 5, and IQR = 8. a) According to Chebyshev's Inequality, what percentage of data would fall within the values 12.5 and 27.5? b) If this sample were bell-shaped, what percentage of the data would fall within the values 10 and 30?

33. The length of a human pregnancy is bell-shaped with a mean of 272 days with a standard deviation of 9 days (Bhat & Kushtagi, 2006). Compute the percentage of pregnancies that last between 245 and 299 days.

34. Arm span is the physical measurement of the length of an individual's arms from fingertip to fingertip. A man's arm span is approximately bell-shaped with a mean of 70 inches and a standard deviation of 4.5 inches. What percent of men have an arm span between 61 and 79 inches?

35. The size of fish is very important to commercial fishing. A study conducted in 2012 found the length of Atlantic cod caught in nets in Karlskrona to have a mean of 49.9 cm and a standard deviation of 3.74 cm (Ovegard, Berndt & Lunneryd, 2012). a) According to Chebyshev's Inequality, at least what percent of Atlantic cod should be between 44.29 and 55.51 cm? b) Assume the length of Atlantic cod is bell-shaped. Approximately what percent of Atlantic cod are between 46.16 and 53.64 cm? c) Assume the length of Atlantic cod is bell-shaped. Approximately what percent of Atlantic cod are between 42.42 and 57.38 cm?

36. Scores on the SAT for a certain year were bell-shaped with a mean of 1511 and a standard deviation of 194. a) What two SAT scores separated the middle 68% of SAT scores for that year? b) What two SAT scores separated the middle 95% of SAT scores for that year? c) How high did a student need to score that year to be in the top 2.5%?

37. In a mid-size company, the distribution of the number of phone calls answered each day by each of the 12 employees is bell-shaped and has a mean of 59 and a standard deviation of 10. Using the empirical rule, what is the approximate percentage of daily phone calls numbering between 29 and 89?

38. The number of potholes in any given 1-mile stretch of pavement in Portland has a bell-shaped distribution. This distribution has a mean of 54 and a standard deviation of 5. Using the empirical rule, what is the approximate percentage of 1-mile long roadways with potholes numbering between 44 and 59?

39. A company has a policy of retiring company cars; this policy looks at number of miles driven, purpose of trips, style of car, and other features.
The distribution of the number of months in service for the fleet of cars is bell-shaped and has a mean of 42 months and a standard deviation of 3 months. Using the Empirical Rule, what is the approximate percentage of cars that remain in service between 48 and 51 months?

40. The following is an infant growth percentile chart. What is the 50th percentile height in cm for a 10-month-old boy? Retrieved from: https://www.cdc.gov/growthcharts/data/set1clinical/cj41l017.pdf.

Answer to Odd Numbered Exercises
1) a) mode = 4 b) median = 3.75 c) $\overline{x}$ = 3.725
3) a) 56 & 64 b) 64 c) 67.6818
5) 2.833
7) 72.25
9) a) Range = 0.9 b) $s^2$ = 0.0870 c) s = 0.2949
11) a) Negatively skewed b) The median is higher.
13) $s_1$ = 9.9348, $s_2$ = 3.3466; Training 1 is more variable.
15) Before range = 42, after range = 42; both groups have the same range.
17) a) 30.35 b) 136.7 c) 32.7 d) 38.52% e) 28.4
19) a) $CV_{height}$ = 3.04%; $CV_{weight}$ = 9.84% b) Weight, because it has a higher coefficient of variation.
21) -10
23) a) -2.8507 b) -3.1808 c) Tennessee Lane
25) Min = 3.2, Q1 = 3.45 (TI: 3.55), Q2 = 3.7, Q3 = 3.95 (TI: 3.85), Max = 4.1
27) a) Min = 32, Q1 = 46 (TI: 48), Q2 = 64, Q3 = 77 (TI: 76), Max = 177 b) lower limit = -0.5 (TI: 6), upper limit = 123.5 (TI: 118), outliers = 177 (TI: 121 & 177) c) (modified box-and-whisker plot not shown)
29) a) 4 b) Yes, the treatment was effective.
31) 1. c. ii; 2. a. iii; 3. b. i
33) 99.7%
35) a) 55.56% b) 68% c) 95%
37) 99.7%
39) 2.35%
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/03%3A_Descriptive_Statistics/3.04%3A_Chapter_3_Formulas.txt
One story about how probability theory was developed is that a gambler wanted to know when to bet more and when to bet less. He talked to a couple of friends of his who happened to be mathematicians: Pierre de Fermat and Blaise Pascal. Since then, many other mathematicians have worked to develop probability theory.

Understanding probability is important in life. Examples of mundane questions that probability can answer for you: Do I need to carry an umbrella? Should I wear a heavy coat today? More important questions that probability can help with are the chances that the car you are buying will need more maintenance, your chances of passing a class, your chances of winning the lottery, or your chances of catching a deadly virus. The chance of winning the lottery is very small, yet many people will spend money on lottery tickets. In general, events that have a low probability (under 5%) are unlikely to occur, whereas if an event has a high probability of happening (over 80%), then there is a good chance that the event will happen. This chapter will present some of the theory that you need to help decide if an event is likely to happen or not. First, some definitions:

Definition: Experiment
An activity or process with a set of well-defined outcomes that can be repeated indefinitely.

Definition: Outcomes
The results of an experiment.

Definition: Sample Space
The collection of all possible outcomes of the experiment, usually denoted as S.

Definition: Event
A set of outcomes that is a subset of the sample space. The symbol used for events is usually a capital letter, often at the beginning of the alphabet, like A, B, or C.

Here are some examples of sample spaces and events.

Figure 4-1
Experiment | Sample Space | Example of Event
Toss a coin twice | {HH, HT, TH, TT} | A = Getting exactly two heads = {HH}
Toss a coin twice | {HH, HT, TH, TT} | B = Getting at least one head = {HH, HT, TH}
Roll a die | {1, 2, 3, 4, 5, 6} | C = Roll an odd number = {1, 3, 5}
Roll a die | {1, 2, 3, 4, 5, 6} | D = Roll a prime number = {2, 3, 5}
Roll a die | {1, 2, 3, 4, 5, 6} | E = Roll an even number = {2, 4, 6}

A tree diagram is a graphical way of representing a random experiment with multiple steps.

A bag contains 10 colored marbles: 7 red and 3 blue. A random experiment consists of drawing a marble from the bag, then drawing another marble without replacement (without putting the first marble back in the bag). Create the tree diagram for this experiment and write out the sample space.

Solution
The first marble that is drawn can be either red or blue and is represented with the first sideways V split. The next marble that is drawn is represented by the two sideways V splits on the right. The top sideways V assumes that a red marble was drawn on the first draw, and then the second marble drawn can be either red or blue. The bottom sideways V assumes that a blue marble was drawn on the first draw, and then the second marble drawn can be either red or blue. Then combine the colors as you trace the four pathways from left to right. Figure 4-2

The sample space is S = {RR, RB, BR, BB}. Note that RB and BR are considered different outcomes, since the marbles are picked in a different order, and so they are distinct events.
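For a sample space this small, you can also enumerate it by brute force. Below is a minimal Python sketch (not part of the original text) that lists every ordered pair of two marbles drawn without replacement and collapses the pairs to their colors, reproducing the sample space from the tree diagram.

```python
from itertools import permutations

# Label the 10 marbles by color: 7 red (R) and 3 blue (B).
bag = ["R"] * 7 + ["B"] * 3

# permutations(bag, 2) generates every ordered pair of two different
# marbles, which matches drawing twice without replacement.
outcomes = {first + second for first, second in permutations(bag, 2)}

print(sorted(outcomes))  # ['BB', 'BR', 'RB', 'RR']
```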
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/04%3A_Probability/4.01%3A_Introduction.txt
Classical Approach to Probability (Theoretical Probability)

$P(A) = \dfrac{\text{Number of ways A can occur}}{\text{Number of different outcomes in S}}$

The classical approach can only be used if each outcome has equal probability.

If an experiment consists of flipping a coin twice, compute the probability of getting exactly two heads.

Solution
The event of getting exactly two heads is A = {HH}. The number of ways A can occur is 1. The number of different outcomes in S = {HH, HT, TH, TT} is 4. Thus P(A) = ¼.

If a random experiment consists of rolling a six-sided die, compute the probability of rolling a 4.

Solution
The sample space is S = {1, 2, 3, 4, 5, 6}. The event of interest is rolling a 4, and the event space is A = {4}. Thus, in theory, the probability of rolling a 4 would be P(A) = 1/6 = 0.1667.

Suppose you have an iPhone with the following songs on it: 5 Rolling Stones songs, 7 Beatles songs, 9 Bob Dylan songs, 4 Johnny Cash songs, 2 Carrie Underwood songs, 7 U2 songs, 4 Mariah Carey songs, 7 Bob Marley songs, 6 Bunny Wailer songs, 7 Elton John songs, 5 Led Zeppelin songs, and 4 Dave Matthews Band songs. The different genres that you have are: rock from the '60s, which includes the Rolling Stones, the Beatles, and Bob Dylan; country, which includes Johnny Cash and Carrie Underwood; rock of the '90s, which includes U2 and Mariah Carey; reggae, which includes Bob Marley and Bunny Wailer; rock of the '70s, which includes Elton John and Led Zeppelin; and bluegrass/rock, which includes the Dave Matthews Band.

1. What is the probability that you will hear a Johnny Cash song?
2. What is the probability that you will hear a Bunny Wailer song?
3. What is the probability that you will hear a song from the '60s?
4. What is the probability that you will hear a reggae song?
5. What is the probability that you will hear a song from the '90s or a bluegrass/rock song?
6. What is the probability that you will hear an Elton John or a Carrie Underwood song?
7. What is the probability that you will hear a country song or a U2 song?

Solution
a) The way an iPhone works, it randomly picks the next song, so you have no idea what the next song will be. Now you would like to calculate the probability that you will hear the type of music or the artist that you are interested in. The sample space is too difficult to write out, but you can work from the number of songs in each set and the total number. The total number of songs you have is 67. There are 4 Johnny Cash songs out of the 67 songs. P(Johnny Cash song) = 4/67 = 0.0597.
b) There are 6 Bunny Wailer songs. P(Bunny Wailer) = 6/67 = 0.0896.
c) There are 5, 7, and 9 songs that are classified as rock from the '60s, which is a total of 21. P(rock from the '60s) = 21/67 = 0.3134.
d) There are a total of 13 songs that are classified as reggae. P(reggae) = 13/67 = 0.1940.
e) There are 7 U2 and 4 Mariah Carey songs from the '90s, and 4 bluegrass/rock songs, for a total of 15. P(rock from the '90s or bluegrass/rock) = 15/67 = 0.2239.
f) There are 7 Elton John songs and 2 Carrie Underwood songs, for a total of 9. P(Elton John or Carrie Underwood song) = 9/67 = 0.1343.
g) There are 6 country songs and 7 U2 songs, for a total of 13. P(country or U2 song) = 13/67 = 0.1940.

Empirical Probability (Experimental or Relative Frequency Probability)

The experiment is performed many times and the number of times that event A occurs is recorded. Then the probability is approximated by finding the relative frequency.
$P(A) = \dfrac{\text{Number of times A occurred}}{\text{Number of times the experiment was repeated}}$

Important: The probability of any event A satisfies 0 ≤ P(A) ≤ 1. Keep this in mind if the question is asking for a probability, and make sure your answer is a number between 0 and 1. A probability, relative frequency, percentage, and proportion are all different words for the same concept. Probability answers can be given as percentages, decimals, or reduced fractions.

Suppose that the experiment is rolling a die. Compute the probability of rolling a 4.

Solution
The sample space is S = {1, 2, 3, 4, 5, 6}. The event of interest is rolling a 4, and the event space is A = {4}. To do this, roll a die 10 times. When you do that, you get a 4 two times. Based on this experiment, the probability of getting a 4 is 2 out of 10 or 1/5 = 0.2. To get more accuracy, repeat the experiment more times. It is easiest to put this information in a table, where n represents the number of times the experiment is repeated. When you divide the number of 4s found by the number of times you repeat the experiment, you get the relative frequency. See the last column in Figure 4-3.

Figure 4-3: Trials for Die Experiment
n | Number of 4s | Relative Frequency
10 | 2 | 0.2
50 | 6 | 0.12
100 | 18 | 0.18
500 | 81 | 0.162
1,000 | 163 | 0.163

Notice that as n increased, the relative frequency seems to approach a number; it looks like it is approaching 0.163. You can say that the probability of getting a 4 is approximately 0.163. If you want more accuracy, then increase n even more by rolling the die more times. These probabilities are called experimental probabilities since they are found by actually doing the experiment or simulation. They come about from the relative frequencies and give an approximation of the true probability. The approximate probability of an event $A$, notated as $P(A)$, is

$P(A) = \frac{\text{Number of times A occurred}}{\text{Number of times the experiment was repeated}}$

For the event of getting a 4, the probability would be P(Roll a 4) = 163/1,000 = 0.163.

“‘What was that voice?’ shouted Arthur. ‘I don't know,’ yelled Ford, ‘I don't know. It sounded like a measurement of probability.’ ‘Probability? What do you mean?’ ‘Probability. You know, like two to one, three to one, five to four against. It said two to the power of one hundred thousand to one against. That's pretty improbable you know.’” (Adams, 2002)

Law of Large Numbers: as n increases, the relative frequency tends towards the theoretical probability.

Figure 4-4 shows a graph of experimental probabilities as n gets larger and larger. The dashed yellow line is the theoretical probability of rolling a four, 1/6 ≈ 0.1667. Note the x-axis is on a log scale. Note that the more times you roll the die, the closer the experimental probability gets to the theoretical probability. Figure 4-4

You can compute experimental probabilities whenever it is not possible to calculate probabilities using other means. An example is if you want to find the probability that a family has 5 children; you would have to actually look at many families, and count how many have 5 children. Then you could calculate the probability. Another example is if you want to figure out if a die is fair. You would have to roll the die many times and count how often each side comes up.
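The law of large numbers is easy to see in a short simulation. Here is a minimal Python sketch (not from the original text; the seed and sample sizes are arbitrary choices) that mirrors the die-rolling table in Figure 4-3.

```python
import random

random.seed(42)  # fix the seed so the run is reproducible; any seed works

# Roll a fair die n times and record the relative frequency of a 4,
# mirroring the table in Figure 4-3.
for n in (10, 50, 100, 500, 1000, 100000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(f"n = {n:>6}: relative frequency of a 4 = {rolls.count(4) / n:.4f}")

# As n grows, the printed values settle near the theoretical 1/6 ≈ 0.1667.
```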
Make sure you repeat an experiment many times, because otherwise you will not be able to estimate the true probability (such as the probability that a family has 5 children). This is due to the law of large numbers: the more times we repeat the experiment, the closer the experimental probabilities will get to the theoretical probabilities. For difficult theoretical probabilities, we can run computer simulations that repeat an experiment many times very quickly and come up with accurate estimates of the theoretical probability.

A fitness center coach kept track of members over the last year. They recorded if the person stretched before they exercised, and whether they sustained an injury. The following contingency table shows their results. Select one member at random and find the following probabilities.

 | Injury | No Injury
Stretched | 52 | 270
Did Not Stretch | 21 | 57

1. Compute the probability that a member sustained an injury.
2. Compute the probability that a member did not stretch.
3. Compute the probability that a member sustained an injury and did not stretch.

Solution
a) Find the totals for each row, column, and grand total.

 | Injury | No Injury | Total
Stretched | 52 | 270 | 322
Did Not Stretch | 21 | 57 | 78
Total | 73 | 327 | 400

Next, find the relative frequencies by dividing each number by the total of 400.

 | Injury | No Injury | Total
Stretched | 0.13 | 0.675 | 0.805
Did Not Stretch | 0.0525 | 0.1425 | 0.195
Total | 0.1825 | 0.8175 | 1

Using the definition of a probability, we get P(Injury) = $\frac{\text{Number of injuries}}{\text{Total number of people}}$ = $\frac{73}{400}$ = 0.1825. Using the relative frequency table, we can get the same answer very quickly by just taking the column total under Injury to get 0.1825. As we get to more complicated probability questions, these contingency tables will help organize your data.

b) Using the relative frequency contingency table, take the total of the row for all the members that did not stretch to get P(Did Not Stretch) = 0.195.

c) Using the relative frequency contingency table, take the intersection of the Injury column with the Did Not Stretch row to get P(Injury and Did Not Stretch) = 0.0525.

Subjective Probability
The probability of event A is estimated using previous knowledge and is someone's opinion.

Compute the probability of meeting Dolly Parton.

Solution
I estimate the probability of meeting Dolly Parton to be 1.2E-9 = 0.0000000012 (i.e., very, very small).

What is the probability it will rain tomorrow?

Solution
A weather reporter looks at several forecasts, uses their expert knowledge of the region, and reports the probability that it will rain in Portland, OR, is 80%.
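Returning to the fitness center example, the relative frequency table is easy to build programmatically. Below is a short Python sketch (not part of the original text) that divides each cell by the grand total and recovers the three answers from the worked example.

```python
# Observed counts from the fitness center example.
counts = {
    ("Stretched", "Injury"): 52,
    ("Stretched", "No Injury"): 270,
    ("Did Not Stretch", "Injury"): 21,
    ("Did Not Stretch", "No Injury"): 57,
}
total = sum(counts.values())  # grand total = 400

# Relative frequency of each cell = cell count / grand total.
rel = {cell: count / total for cell, count in counts.items()}

p_injury = rel[("Stretched", "Injury")] + rel[("Did Not Stretch", "Injury")]
p_no_stretch = rel[("Did Not Stretch", "Injury")] + rel[("Did Not Stretch", "No Injury")]
p_both = rel[("Did Not Stretch", "Injury")]

print(f"{p_injury:.4f} {p_no_stretch:.4f} {p_both:.4f}")  # 0.1825 0.1950 0.0525
```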
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/04%3A_Probability/4.02%3A_Three_Types_of_Probability.txt
A random sample of 500 records from the 2010 United States Census was downloaded to Excel and the following contingency table was found for biological sex and marital status. Select one person at random and find the following probabilities.

Marital Status | Female | Male | Grand Total
Divorced | 21 | 17 | 38
Married/spouse absent | 5 | 9 | 14
Married/spouse present | 92 | 100 | 192
Never married/single | 93 | 129 | 222
Separated | 1 | 2 | 3
Widowed | 20 | 11 | 31
Grand Total | 232 | 268 | 500

a) Compute the probability that a person is divorced.
b) Compute the probability that a person is not divorced.

Solution
a) Take the row total of all divorced, which is 38, and then divide by the grand total of 500 to get P(Divorced) = 38/500 = 0.076.
b) We can add up all the other category totals besides divorced, 14 + 192 + 222 + 3 + 31 = 462, then divide by the grand total to get P(Not Divorced) = 462/500 = 0.924.

There is a faster way to compute these probabilities, called the complement rule, which will be important for more complicated probabilities. The table contains 100% of our data (100% = 1 as a proportion), so the event of being divorced is the opposite (complement) of the event of not being divorced. Notice that P(Divorced) + P(Not Divorced) = 1. This is because these two events have no outcomes in common, and together they make up the entire sample space. Events that have this property are called complementary events. Notice P(Not Divorced) = 1 – P(Divorced) = 1 – 0.076 = 0.924. If two events are complementary events, then to find the probability of one event, just subtract the other's probability from 1.

The notation used for the complement of A, also called "not A," is $A^C$.
$P(A) + P(A^C) = 1$ or $P(A) = 1 - P(A^C)$ or $P(A^C) = 1 - P(A)$
Some texts will use the notation A' or $\overline{A}$ for a complement, instead of $A^C$.

Suppose you know that the probability of it raining today is 80%. What is the probability of it not raining today?

Solution
Since not raining is the complement of raining, P(not raining) = 1 – P(raining) = 1 – 0.8 = 0.2.

“On a small obscure world somewhere in the middle of nowhere in particular - nowhere, that is, that could ever be found, since it is protected by a vast field of unprobability to which only six men in this galaxy have a key - it was raining.” (Adams, 2002)

Venn Diagrams
Figure 4-5 is an example of a Venn diagram, a visual way to represent sets and probability. The rectangle represents all the possible outcomes in the entire sample space (the population). The shapes inside the rectangle represent each event in the sample space. Usually these are ovals, but they can be any shape you want. If there are any shared elements between the events, then the circles should overlap one another. Figure 4-5

The field of statistics includes machine learning, data analysis, and data science. The field of computer science includes machine learning, data science, and web development. The field of business and domain expertise includes data analysis, data science, and web development. If you know machine learning, then you will need a background in both statistics and computer science. If you are a data scientist, then you will need a background in statistics, computer science, and business and domain expertise.

Suppose you know the probability of not getting the flu is 0.24. Draw a Venn diagram and find the probability of getting the flu.
Solution
Since getting the flu is the complement of not getting the flu, P(getting the flu) = 1 – P(not getting the flu) = 1 – 0.24 = 0.76. Label each space as in Figure 4-6. Figure 4-6

The complement is useful when you are trying to find the probability of an event that involves the words "at least" or "at most." An example of an "at least" event: suppose you want to find the probability of making at least \$50,000 when you graduate from college. That means you want the probability that your salary is greater than or equal to \$50,000. An example of an "at most" event: suppose you want to find the probability of rolling a die and getting at most a 4. That means that you want to get less than or equal to a 4 on the die: a 1, 2, 3, or 4. The reason to use the complement is that sometimes it is easier to find the probability of the complement and then subtract from 1.
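To make the "at least" idea concrete, here is a minimal Python sketch (not part of the original text) that enumerates three coin flips and computes P(at least one head) by the complement rule, then confirms it against direct counting.

```python
from itertools import product

# All 8 equally likely outcomes of flipping a fair coin three times.
outcomes = ["".join(o) for o in product("HT", repeat=3)]

# "At least one head" is the complement of "no heads" (only TTT qualifies).
p_no_heads = sum(1 for o in outcomes if "H" not in o) / len(outcomes)
p_at_least_one_head = 1 - p_no_heads

print(p_no_heads, p_at_least_one_head)  # 0.125 0.875
```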
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/04%3A_Probability/4.03%3A_Complement_Rule.txt
When two events cannot happen at the same time, they are called mutually exclusive or disjoint events. Figures 4-7 through 4-10 show Venn diagrams of mutually exclusive and overlapping events.

For example, a student cannot be a freshman and a sophomore at the same time; see Figure 4-9. These are mutually exclusive events. A student could be a freshman and a business major at the same time, so the event freshman and the event business major are not mutually exclusive; see Figure 4-10.

Intersection
When we are finding the probability of both A and B happening at the same time, we denote this as P(A ∩ B). This overlap is called the intersection. When two events, say A and B, occur at the same time, this is denoted as the intersection of A and B, written (A ∩ B). Think of the symbol ∩ as the A in "and." If two events are mutually exclusive, then A ∩ B = { }, the empty set (also denoted as $\varnothing$), and P(A ∩ B) = 0.

Union
When either event A, event B, or both occur, then we call this the union of A or B, which is denoted as (A U B). When finding the probability of A or B, we denote this as P(A U B). When we write "or" in statistics, we mean "and/or" unless we explicitly state otherwise. Thus, A or B occurs means A, B, or both A and B occur. Figure 4-11 is a Venn diagram for the union rule. Figure 4-11

The Union Rule: P(A U B) = P(A) + P(B) – P(A ∩ B).

If two events are mutually exclusive, then the probability of them occurring at the same time is P(A ∩ B) = 0. So, if A and B are mutually exclusive, then P(A U B) = P(A) + P(B), as shown in Figure 4-7. It is best to write out the rule with the intersection so that you do not forget to subtract any overlapping intersection.

The family college data set contains a sample of 792 cases with two variables, teen and parents. The teen variable is either college or not, where the college label means the teen went to college immediately after high school. The parents variable takes the value degree if at least one parent of the teenager completed a college degree. Make a Venn diagram for the data. Example from OpenIntro Statistics.

Solution
Find the relative frequencies. See Figure 4-12 for the completed Venn diagram. Note that you do not need to use circles to represent the sets. Figure 4-12

A random sample of 500 people was taken from the 2010 United States Census. Their marital status and race were recorded in the following contingency table using the census labels. A person is randomly chosen from the census data. Find the following.

Marital Status | American Indian | Black | Asian | White | Two Major Races | Total
Divorced | 0 | 6 | 1 | 30 | 1 | 38
Married | 1 | 25 | 23 | 156 | 4 | 209
Single | 2 | 33 | 21 | 155 | 11 | 222
Widowed | 0 | 7 | 2 | 22 | 0 | 31
Total | 3 | 71 | 47 | 363 | 16 | 500

a) P(Single ∩ American Indian)
b) P(Single U American Indian)
c) Probability that the person is Asian or Married.
d) P(Single ∩ Married)
e) P(Single U Married)

Solution
a) The intersection for a contingency table is found by simply finding where the row and column meet. There are 2 single American Indians, therefore P(Single ∩ American Indian) = P(Single and American Indian) = 2/500 = 0.004.
b) There are 222 single people and there are 3 American Indians, but we do not want to count the 2 single American Indians twice, therefore P(Single U American Indian) = P(Single or American Indian) = 222/500 + 3/500 – 2/500 = 223/500 = 0.446.
c) The union for a contingency table is found by either using the union formula or adding up all the numbers in the corresponding row and column.
There are 47 Asian people and 209 married people, but we do not want to count the 23 married Asian people twice, therefore P(Asian U Married) = P(Asian or Married) = 47/500 + 209/500 – 23/500 = 233/500 = 0.466.
d) The events Single and Married are mutually exclusive, so P(Single ∩ Married) = 0. Alternatively, there is no place in the table where the Single row and Married row meet.
e) The events Single and Married are mutually exclusive, so P(Single U Married) = P(Single) + P(Married) – P(Single ∩ Married) = 222/500 + 209/500 – 0 = 431/500 = 0.862.

Use a random experiment consisting of rolling two dice and adding the numbers on the faces.
a) Compute the probability of rolling a sum of 8.
b) Compute the probability of rolling a sum of 8 or a sum of 5.
c) Compute the probability of rolling a sum of 8 or a double (each die has the same number).

Solution
a) There are 36 possible outcomes for rolling the two dice, as shown in the following sum table. There are 5 pairs where the sum of the two dice is an 8: (2,6), (3,5), (4,4), (5,3), and (6,2). Note that (2,6) and (6,2) are different outcomes, since the numbers come from different dice. Thus, P(8) = 5/36 = 0.1389.
b) Highlight all the places where a sum of 5 or a sum of 8 occurs. There are 9 pairs where the sum of the two dice is a 5 or an 8. Thus, P(5 U 8) = P(5) + P(8) – P(5 ∩ 8) = 4/36 + 5/36 – 0 = 9/36 = 0.25. Note that rolling a sum of 5 is mutually exclusive from rolling a sum of 8, so the probability of the intersection of the two events is zero.
c) The events rolling a sum of 8 and rolling doubles are not mutually exclusive, since the pair of fours (4,4) falls into both events. An easy way is to highlight all the places a sum of 8 or doubles occurs, count the highlighted values, and divide by the total: 10/36 = 0.2778. When using the union rule, we subtract this overlap out one time to account for this. Using the union formula: P(8 U Doubles) = P(8) + P(Doubles) – P(8 ∩ Doubles) = 5/36 + 6/36 – 1/36 = 10/36 = 0.2778.

“‘It's... well, it's a long story,’ he said, ‘but the Question I would like to know is the Ultimate Question of Life, the Universe and Everything. All we know is that the Answer is Forty-two, which is a little aggravating.’ Prak nodded again. ‘Forty-two,’ he said. ‘Yes, that's right.’ He paused. Shadows of thought and memory crossed his face like the shadows of clouds crossing the land. ‘I'm afraid,’ he said at last, ‘that the Question and the Answer are mutually exclusive. Knowledge of one logically precludes knowledge of the other. It is impossible that both can ever be known about the same universe.’” (Adams, 2002)

Randomly pick a card from a standard deck. A standard deck of cards, not including jokers, consists of 4 suits called clubs = ♣, spades = ♠, hearts = ♥, and diamonds = ♦. The clubs and spades are called the black cards. The hearts and diamonds are called the red cards. Each suit has 13 cards. The numbered cards, shown in Figure 4-13, are Ace = 1 or A, 2, 3, 4, 5, 6, 7, 8, 9, 10. The face cards are the Jack = J, Queen = Q, and King = K. Figure 4-13

a) Compute the probability of selecting a card that shows a club.
b) Compute the probability of selecting a heart or a spade card.
c) Compute the probability of selecting a spade or a face card.

Solution
a) There are 52 cards in a standard deck. There are 13 cards of each suit. P(♣) = 13/52 = 0.25.
b) There are 52 cards in a standard deck. There are 13 cards of each suit. P(♥ U ♠) = 26/52 = 0.5.
c) There are 13 spades and 12 face cards.
However, there are 3 cards that are both spades and face cards. P(♠ U FC) = P(♠) + P(FC) – P(♠ ∩ FC) = 13/52 + 12/52 – 3/52 = 22/52 = 0.4231. Since the sample space is small, you could also just count how many cards are spades or face cards.

Words Are Important!
When working with probability, words such as "more than" or "less than" can drastically change the answer. Figure 4-14 shows some of the common phrases you may run into while reading a problem. It will be essential later in the course that you can correctly match these phrases with their correct symbol. Figure 4-14

Use a random experiment consisting of rolling two dice and adding the numbers on the faces.
a) Compute the probability of rolling a sum of less than 5.
b) Compute the probability of rolling a sum of 5 or less.

Solution
a) Let X be the event rolling a sum less than 5. A sum less than 5 does not include the 5. For notation, we use P(X < 5), which is read as "the probability that X is less than five." Shade in all the sums that are less than 5. Then P(X < 5) = 6/36 = 0.1667.
b) Let X be the event rolling a sum of 5 or less. A sum of 5 or less includes the 5. For notation, we use P(X ≤ 5), which is read as "the probability that X is less than or equal to five." Shade in all the sums that are 5 or less. Then P(X ≤ 5) = 10/36 = 0.2778.
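Since the two-dice sample space has only 36 equally likely outcomes, all of the dice probabilities in this section can be checked by enumeration. Here is a minimal Python sketch (not from the original text) that does so; the `prob` helper and the lambda events are illustrative choices.

```python
from itertools import product

# All 36 equally likely ordered rolls of two dice.
rolls = list(product(range(1, 7), repeat=2))

def prob(event):
    """Classical probability: favorable outcomes divided by 36."""
    return sum(1 for r in rolls if event(r)) / len(rolls)

print(round(prob(lambda r: sum(r) == 8), 4))                  # 5/36  = 0.1389
print(round(prob(lambda r: sum(r) == 8 or r[0] == r[1]), 4))  # 10/36 = 0.2778 (union)
print(round(prob(lambda r: sum(r) < 5), 4))                   # 6/36  = 0.1667
print(round(prob(lambda r: sum(r) <= 5), 4))                  # 10/36 = 0.2778
```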
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/04%3A_Probability/4.04%3A_Union_and_Intersection.txt
Two trials (or events or results of a random experiment) are independent trials if the outcome of one trial does not influence the outcome of the second trial. If two events are not independent, they are dependent events. For instance, if two coins are flipped, they are independent, since flipping one coin does not affect the outcome of the second coin.

Independent Events: If A and B are independent events, then P(A ∩ B) = P(A) ‧ P(B).

Be careful with this rule. You cannot just multiply probabilities to find an intersection unless you know they are independent. Also, do not confuse independent events with mutually exclusive events. Two events are mutually exclusive when P(A ∩ B) = 0.

If a random experiment consists of flipping a coin twice, find the probability of getting heads twice in a row.

Solution
The event of getting a head on the first flip is independent of getting a head on the second flip, since the probability does not change with each flip of the coin. Thus, using the multiplication rule for independent events, P(Both coins are heads) = P(1st coin is a head)·P(2nd coin is a head) = $\frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}$ = 0.25.

The probability of Apple stock rising is 0.3; the probability of Boeing stock rising is 0.4. Assume Apple and Boeing stocks are independent. What is the probability that neither stock rises?

Solution
Let A = Apple stock rising and B = Boeing stock rising. Since A and B are independent, the probability of both stocks rising at the same time is P(A ∩ B) = 0.3 ‧ 0.4 = 0.12. Neither rising is the complement of either rising. P(Neither) = 1 – P(A U B) = 1 – (P(A) + P(B) – P(A ∩ B)) = 1 – (0.3 + 0.4 – 0.12) = 1 – 0.58 = 0.42.

The probability that a student has their own laptop is 0.78. If three students are randomly selected, what is the probability that at least one owns a laptop?

Solution
There is an assumption that the three students are not related and that the probability of one owning a laptop is independent of the other people owning a laptop. The probability of none owning a laptop is $(1 - 0.78)^3$ = 0.0106. The probability of at least one is the same as 1 – P(None) = 1 – 0.0106 = 0.9894.

When two events are dependent, you cannot simply multiply their corresponding probabilities to find their intersection. You will need to use the General Multiplication Rule discussed in the next section.

4.06: Conditional Probability

The probability of event B happening, given that event A already happened, is called a conditional probability. The conditional probability of B given A is written as P(B | A) and is read as "the probability of B given A happened first." We can use the General Multiplication Rule when two events are dependent.

Definition: General Multiplication Rule
$P(A \cap B) = P(A) \cdot P(B \mid A)$

A bag contains 10 colored marbles: 7 red and 3 blue. A random experiment consists of drawing a marble from the bag, then drawing another marble without replacement (without putting the first marble back in the bag). Find the probability of drawing a red marble on the first draw (event R1), and drawing another red marble on the second draw (event R2).

Solution
Drawing a red marble on the first draw and drawing a red marble on the second draw are dependent events, because we do not place the marble back in the bag.
The probability of drawing a red marble on the first draw is P(R1) = $\frac{7}{10}$, but on the second draw, the probability of drawing a red marble given that a red marble was drawn on the first draw is P(R2 | R1) = $\frac{6}{9}$. Thus, by the general multiplication rule, P(R1 and R2) = P(R1)·P(R2 | R1) = ($\frac{7}{10}$)($\frac{6}{9}$) = 0.4667.

A bag contains 10 colored marbles: 7 red and 3 blue. A random experiment consists of drawing a marble from the bag, then drawing another marble without replacement. Create the tree diagram for this experiment and compute the probabilities of each outcome.

Solution
Figure 4-15
If we multiply the probabilities as we move from left to right along each set of tree branches, as shown in Figure 4-15, we get the intersections. For example, by the general multiplication rule, P(R1 and R2) = P(R1)·P(R2 | R1) = ($\frac{7}{10}$)($\frac{6}{9}$) = 0.4667. Put the four intersection values into a contingency table and total the rows and columns. The table will help solve probability questions about other events.

 | R2 | B2 | Total
R1 | 0.4667 | 0.2333 | 0.7
B1 | 0.2333 | 0.0667 | 0.3
Total | 0.7 | 0.3 | 1

The grand total should add up to 1, since we have 100% of the sample space.

Conditional Probability Rule: P(A | B) = $\frac{P(A \cap B)}{P(B)}$ or P(B | A) = $\frac{P(A \cap B)}{P(A)}$

The following table shows the utility contracts granted for a specific year. One contractor is randomly chosen.

 | Corporation | Government | Individual | Total
United States | 0.45 | 0.007 | 0.08 | 0.537
Foreign | 0.41 | 0.003 | 0.05 | 0.463
Total | 0.86 | 0.01 | 0.13 | 1

1. Compute the probability that the contractor is from the United States and is a corporation.
2. Compute the probability that the contractor is from the United States given that they are a corporation.
3. If the contractor is from a foreign country, what is the probability that it is a government contractor?
4. Are the events "contractor is an individual" and "contractor is from the United States" independent?

Solution
a) For the intersection in a contingency table, use the cell where the row and column meet. P(U.S. ∩ Corp) = 0.45.
b) P(U.S. | Corp) = $\frac{P(\text{U.S.} \cap \text{Corp})}{P(\text{Corp})}$ = $\frac{0.45}{0.86}$ = 0.5233.
c) P(Gov | Foreign) = $\frac{P(\text{Gov} \cap \text{Foreign})}{P(\text{Foreign})}$ = $\frac{0.003}{0.463}$ = 0.0065.
d) Do not assume independence between two variables in a contingency table, since the data may show relationships that you did not know were there. Use the definition of independent events: if the two events are independent, then P(Individual ∩ U.S.) = P(Individual)·P(U.S.). First find the intersection using where the row and column meet to get P(Individual ∩ U.S.) = 0.08. Then use the row and column totals to find P(Individual)·P(U.S.) = 0.13·0.537 = 0.0698. Since P(Individual ∩ U.S.) ≠ P(Individual)·P(U.S.), these two events are dependent.

A random sample of 500 people was taken from the 2010 United States Census. Their marital status and race were recorded in the following contingency table. A person is randomly chosen; find the following.

Marital Status | American Indian | Black | Asian | White | Two Major Races | Total
Divorced | 0 | 6 | 1 | 30 | 1 | 38
Married | 1 | 25 | 23 | 156 | 4 | 209
Single | 2 | 33 | 21 | 155 | 11 | 222
Widowed | 0 | 7 | 2 | 22 | 0 | 31
Total | 3 | 71 | 47 | 363 | 16 | 500

a) P(Single and Asian)
b) P(Single | Asian)
c) Given that a person is single, what is the probability their race is Asian?

Solution
a) The intersection for a contingency table is found by simply finding where the row and column meet.
There are 21 single Asians, therefore P(Single ∩ Asian) = P(Single and Asian) = 21/500 = 0.042. Do not multiply the row total times the column total, since there is no indication that these are independent events.
b) In words, we are trying to find the probability that the person is single given that we already know that their race is Asian. Using the conditional probability formula, we get P(Single | Asian) = $\frac{P(\text{Single} \cap \text{Asian})}{P(\text{Asian})}$ = $\frac{21}{47}$ = 0.4468.
c) This seems similar to the last question; however, the part we know is that the person is single, but we do not know their race. In symbols, we want to find P(Asian | Single) = $\frac{P(\text{Asian} \cap \text{Single})}{P(\text{Single})}$ = $\frac{21}{222}$ = 0.0946. Keep in mind that P(A | B) ≠ P(B | A), since we would divide by a different total in the equation.

A blood test correctly detects a certain disease 95% of the time (positive result), and correctly detects no disease present 90% of the time (negative result). It is estimated that 25% of the population have the disease. A person takes the blood test and gets a positive result. What is the probability that they have the disease?

Solution
Let D = having the disease, $D^C$ = not having the disease, + = a positive result, and – = a negative result. We are given in the problem the following: P(+ | D) = 0.95, P(– | $D^C$) = 0.90, P(D) = 0.25. We want to find P(D | +) = $\frac{P(D \cap +)}{P(+)}$. Figure 4-16

When you multiply up each pair of tree branches from left to right, as shown in Figure 4-16, you are finding the intersection of the events. Place the multiplied values into a table. Note that the 0.2375 is not our answer. This is the proportion of people who have the disease and tested positive, but it does not take into consideration the false positives. Since we know that the result was positive, we divide only by the proportion of positive results.

 | $D^C$ | D | Total
+ | 0.075 | 0.2375 | 0.3125
– | 0.675 | 0.0125 | 0.6875
Total | 0.75 | 0.25 | 1

$P(D \mid +) = \frac{P(D \cap +)}{P(+)} = \frac{0.2375}{0.3125} = 0.76$

There is a 76% chance that they have the disease given that they tested positive. Many of the more difficult probability problems can be set up in a table, which makes the probabilities easier to find.
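The tree-and-table computation for the blood test example translates directly into arithmetic. Below is a minimal Python sketch (not from the original text; the variable names are illustrative) that multiplies along each branch and then applies the conditional probability rule.

```python
# Given values from the blood test example.
p_d = 0.25             # P(D): proportion with the disease
p_pos_given_d = 0.95   # P(+ | D)
p_neg_given_dc = 0.90  # P(- | D^C)

# Multiply along each branch of the tree to get the intersections.
p_d_and_pos = p_d * p_pos_given_d                # P(D ∩ +) = 0.2375
p_dc_and_pos = (1 - p_d) * (1 - p_neg_given_dc)  # P(D^C ∩ +) = 0.075

# Conditional probability rule: divide by the total probability of a +.
p_pos = p_d_and_pos + p_dc_and_pos               # P(+) = 0.3125
print(f"P(D | +) = {p_d_and_pos / p_pos:.2f}")   # 0.76
```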
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/04%3A_Probability/4.05%3A_Independent_Events.txt
There are times when the sample space is very large and it is not feasible to write out. In that case, it helps to have mathematical tools for counting the size of the sample space. These tools are known as counting techniques or counting rules.

Fundamental Counting Rule: If task 1 can be done $m_1$ ways, task 2 can be done $m_2$ ways, and so forth, up to task n being done $m_n$ ways, then the number of ways to do tasks 1, 2, …, n together is the product $m_1 \cdot m_2 \cdots m_n$.

A menu offers a choice of 3 salads, 8 main dishes, and 5 desserts. How many different meals consisting of one salad, one main dish, and one dessert are possible?

Solution
There are three tasks: picking a salad, a main dish, and a dessert. The salad task can be done 3 ways, the main dish task can be done 8 ways, and the dessert task can be done 5 ways. The number of ways to pick a salad, main dish, and dessert is $\frac{3}{\text{salad}} \cdot \frac{8}{\text{main}} \cdot \frac{5}{\text{dessert}}$ = 120 different meals.

How many 4-digit debit card personal identification numbers (PIN) can be made?

Solution
Four tasks must be done in this example. The tasks are to pick the first number, then the second number, then the third number, and then the fourth number. The first task can be done 10 ways, since there are the digits 0 through 9. We can use the same numbers over again (repeats are allowed), so the second task can also be done 10 ways, and the same is true of the third and fourth tasks. There are $\frac{10}{\text{first number}} \cdot \frac{10}{\text{second number}} \cdot \frac{10}{\text{third number}} \cdot \frac{10}{\text{fourth number}}$ = 10,000 possible PINs.

How many ways can the three letters a, b, and c be arranged with no letters repeating?

Solution
Three tasks must be done in this case. The tasks are to pick the first letter, then the second letter, and then the third letter. The first task can be done 3 ways, since there are 3 letters. The second task can be done 2 ways, since the first task took one of the letters (repeats are not allowed). The third task can be done 1 way, since the first and second tasks took two of the letters. There are $\frac{3}{1^{\text{st}} \text{ letter}} \cdot \frac{2}{2^{\text{nd}} \text{ letter}} \cdot \frac{1}{3^{\text{rd}} \text{ letter}}$ = 6 arrangements.

You can also look at this example in a tree diagram; see Figure 4-17. There are 6 different arrangements of the letters. The solution was found by multiplying 3·2·1 = 6. Figure 4-17

If we had 10 different letters for, say, a password, the tree diagram would be very time-consuming to make because of the number of options and tasks, so we have some shortcut formulas that help count these arrangements. Many counting problems involve multiplying a list of decreasing numbers, which is called a factorial. The factorial is represented mathematically by the starting number followed by an exclamation point, in this case 3! = 3·2·1 = 6. There is a special symbol for this and a special button on your calculator or computer.

Factorial Rule: The number of different ways to arrange n objects is n! = n·(n – 1)·(n – 2)···3·2·1, where repetitions are not allowed. Zero factorial is defined to be 0! = 1, and 1 factorial is defined to be 1! = 1.

TI-84: On the home screen, enter the number of which you would like to find the factorial. Press [MATH]. Use the cursor keys to move to the PRB menu. Press 4 (4:!). Press [ENTER] to calculate.
TI-89: On the home screen, enter the number of which you would like to find the factorial. Press [2nd] [Math] > 7:Probability > 1:!. Press [ENTER] to calculate. Excel: In an empty cell type =FACT(n), where n is the number; for example, 4! would be =FACT(4).

How many ways can you arrange five people standing in line?

Solution
No repeats are allowed, since you cannot reuse a person twice. Order is important, since the first person is first in line and will be selected first. This meets the requirements for the factorial rule: 5! = 5·4·3·2·1 = 120 ways.

Sometimes we do not want to select the entire group, but only select r objects from n total objects. The number of ways to do this depends on whether the order in which you choose the r objects matters. As an example, if you are trying to call a person on the phone, you have to have the digits of their number in the correct order; in this case, the order of the numbers matters. If you were picking random numbers for the lottery, it does not matter which number you pick first, since they always arrange the numbers from smallest to largest once the numbers are drawn. As long as you have the same numbers that the lottery officials pick, you win; in this case, the order does not matter.

A permutation is an arrangement of items with a specific order. You use permutations to count items when the order matters.

Permutation Rule: The number of different ways of picking r objects from n total objects when repeats are not allowed and order matters is ${}_{\mathrm{n}}\mathrm{P}_{\mathrm{r}} = \frac{n!}{(n-r)!}$.

When the order does not matter, you use combinations. A combination is an arrangement of items when order is not important. When you do a counting problem, the first thing you should ask yourself is "are repeats allowed," then ask yourself "does order matter?"

Combination Rule: The number of ways to select r objects from n total objects when repeats are not allowed and order does not matter is ${}_{\mathrm{n}}\mathrm{C}_{\mathrm{r}} = \frac{n!}{(r!(n-r)!)}$.

TI-84: Enter the number of "trials" (n) on the home screen. Press [MATH]. Use the cursor keys to move to the PRB menu. Press 2 for permutation (2: nPr) or 3 for combination (3: nCr). Enter the number of "successes" (r). Press [ENTER] to calculate.

TI-89: Press [2nd] Math > 7:Probability > 2 for permutation (2: nPr) or 3 for combination (3: nCr). Enter the sample size on the home screen, then a comma, then the number of "successes," then close the parenthesis. Press [ENTER] to calculate.

Excel: In a blank cell type the formula =COMBIN(n, r) or =PERMUT(n, r), where n is the total number of objects and r is the smaller number of objects that you are selecting out of n. For example, =COMBIN(8, 3).

The following flow chart in Figure 4-18 may help with deciding which counting rule to use. Start on the left; ask yourself if the same item can be repeated. For instance, a person on a committee cannot be counted as two distinct people, but a number on a car license plate may be used twice. If repeats are not allowed, then ask whether the order in which the items are chosen matters. If it does not, use combinations; if it does, then ask whether you are ordering the entire group (use the factorial rule) or just some of the group (use the permutation rule). Figure 4-18

Critical Miss, PSU's Tabletop Gaming Club, has 15 members this term. How many ways can a slate of 3 officers consisting of a president, vice-president, and treasurer be chosen?
Solution
In this case, repeats are not allowed, since we don't want the same member to hold more than one position. The order matters, since if you pick person 1 for president, person 2 for vice-president, and person 3 for treasurer, you would have different members in those positions than if you picked person 2 for president, person 1 for vice-president, and person 3 for treasurer. This is a permutation problem with n = 15 and r = 3.

${}_{15}\mathrm{P}_{3} = \frac{15!}{(15-3)!} = \frac{15!}{12!} = 2730$

There are 2,730 ways to elect these three positions. In general, if you are selecting items that involve rank, a position title, 1st, 2nd, or 3rd place or prize, etc., then the order in which the items are arranged is important and you would use a permutation.

Critical Miss, PSU's Tabletop Gaming Club, has 15 members this term. They need to select 3 members to have keys to the game office. How many ways can the 3 members be chosen?

Solution
In this case, repeats are not allowed, because we don't want one person to have more than one key. The order in which the keys are handed out does not matter. This is a combination problem with n = 15 and r = 3.

${}_{15}\mathrm{C}_{3} = \frac{15!}{(3!(15-3)!)} = \frac{15!}{(3! \cdot 12!)} = 455$

There are 455 ways to hand out the three keys.

We can use these counting rules in finding probabilities. For instance, the probability of winning the lottery can be found using these counting rules.

What is the probability of getting a full house if 5 cards are randomly dealt from a standard deck of cards?

Solution
A full house is a combined three of a kind and a pair, for example, QQQ22. There are ${}_{13}\mathrm{C}_{1}$ ways to choose a rank from Ace, 2, 3, …, King. Once a rank is chosen, there are 4 cards with that rank, and there are ${}_{4}\mathrm{C}_{3}$ ways to choose the three of a kind from that rank. Once we use up one of the ranks, such as the three queens, there are ${}_{12}\mathrm{C}_{1}$ ways to choose the rank for the pair. Once that rank is chosen, there are ${}_{4}\mathrm{C}_{2}$ ways to choose a pair from that rank. All together, there are ${}_{52}\mathrm{C}_{5}$ ways to randomly deal out 5 cards. The probability of getting a full house is then $\frac{{}_{13}C_{1} \cdot {}_{4}C_{3} \cdot {}_{12}C_{1} \cdot {}_{4}C_{2}}{{}_{52}C_{5}}$ = $\frac{3744}{2598960}$ = 0.00144.
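If you prefer software to calculator keystrokes, the factorial, permutation, and combination rules are all in Python's standard `math` module. Here is a minimal sketch (not from the original text) that reproduces the examples in this section.

```python
import math  # math.perm and math.comb require Python 3.8 or later

print(math.factorial(5))  # 5! = 120, arranging five people in line
print(math.perm(15, 3))   # 15P3 = 2730, slate of three distinct officers
print(math.comb(15, 3))   # 15C3 = 455, handing out three identical keys

# Full house: rank for the triple, the triple, rank for the pair, the pair.
full_houses = math.comb(13, 1) * math.comb(4, 3) * math.comb(12, 1) * math.comb(4, 2)
print(full_houses, round(full_houses / math.comb(52, 5), 5))  # 3744 0.00144
```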
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/04%3A_Probability/4.07%3A_Counting_Rules.txt
Complement Rules: P(A) + P(AC) = 1, P(A) = 1 – P(AC), P(AC) = 1 – P(A)
Mutually Exclusive Events: P(A ∩ B) = 0
Union Rule: P(A U B) = P(A) + P(B) – P(A ∩ B)
Independent Events: P(A ∩ B) = P(A) ‧ P(B)
Intersection Rule: P(A ∩ B) = P(A) ‧ P(B|A)
Conditional Probability Rule: $P(A \mid B)=\frac{P(A \cap B)}{P(B)}$
Fundamental Counting Rule: $m_1 \cdot m_2 \cdots m_n$
Factorial Rule: n! = n·(n – 1)·(n – 2)···3·2·1
Combination Rule: ${}_{\mathrm{n}}\mathrm{C}_{\mathrm{r}}=\frac{n!}{(r!(n-r)!)}$
Permutation Rule: ${}_{\mathrm{n}}\mathrm{P}_{\mathrm{r}}=\frac{n!}{(n-r)!}$

4.09: Chapter 4 Exercises

Chapter 4 Exercises

1. The number of M&M candies for each color found in a case were recorded in the table below. What is the probability of selecting a red M&M?

Blue | Brown | Green | Orange | Red | Yellow | Total
481 | 371 | 483 | 544 | 372 | 369 | 2,620

2. An experiment is to flip a fair coin three times. Write out the sample space for this experiment.

3. An experiment is to flip a fair coin three times. What is the probability of getting exactly two heads?

4. In the game of roulette, there is a wheel with spaces marked 0 through 36 and a space marked 00. Compute the probability of winning if you pick the number 30 and it comes up on the wheel.

5. A raffle sells 1,000 tickets for \$35 each to win a new car. What is the probability of winning the car?

6. Compute the probability of rolling a sum of two dice that is more than 7.

7. Compute the probability of rolling a sum of two dice that is a 7 or a 12.

8. A random sample of 500 people's marital status and biological sex from the 2010 United States Census are recorded in the following contingency table. a) Compute the probability that a randomly selected person is single. b) Compute the probability that a randomly selected person is not single. c) Compute the probability that a randomly selected person is single or male. d) Compute the probability that a randomly selected person is divorced or widowed. e) Given that a randomly selected person is male, what is the probability they are single? f) Are the events divorced and male mutually exclusive? g) Are the events divorced and male independent? Verify using statistics.

9. The probability that a consumer entering a retail outlet for microcomputers and software packages will buy a computer of a certain type is 0.15. The probability that the consumer will buy a particular software package is 0.10. There is a 0.05 probability that the consumer will buy both the computer and the software package. What is the probability that the consumer will buy the computer or the software package?

10. A fitness center owner kept track of members over the last year. They recorded if the person stretched before they exercised, and whether they sustained an injury. The following contingency table shows their results. Select one member at random and find the following.

 | Injury | No Injury | Total
Stretched | 52 | 270 | 322
Did Not Stretch | 21 | 57 | 78
Total | 73 | 327 | 400

a) P(No Injury) b) P(Injury ∩ Stretch) c) Compute the probability that a randomly selected member stretched or sustained an injury. d) Compute the probability that a randomly selected member stretched given that they sustained an injury. e) P(Injury | Did Not Stretch)

11. Giving a test to a group of students, the grades and whether they were business majors are summarized below. One student is chosen at random. Give your answer as a decimal out to at least 4 places.
 | A | B | C | Total
Business Majors | 4 | 5 | 13 | 22
Non-business Majors | 18 | 10 | 19 | 47
Total | 22 | 15 | 32 | 69

a) Compute the probability that the student was a non-business major or got a grade of C. b) Compute the probability that the student was a non-business major and got a grade of C. c) Compute the probability that the student was a non-business major given they got a C grade. d) Compute the probability that the student did not get a B grade. e) Compute P(B ∪ Business Major). f) Compute P(C | Business Major).

12. A poll showed that 48.7% of Americans say they believe that Marilyn Monroe had an affair with JFK. What is the probability of randomly selecting someone who does not believe that Marilyn Monroe had an affair with JFK?

13. Your favorite basketball player is an 81% free throw shooter. Find the probability that they do not make their next free throw shot.

14. A report for a school's computer web visits for the past month obtained the following information. Find the percentage that visited none of these three sites last month. Hint: Draw a Venn diagram. 37% visited Facebook. 42% visited LinkedIn. 29% visited Google. 27% visited Facebook and LinkedIn. 19% visited Facebook and Google. 19% visited LinkedIn and Google. 14% visited all three sites.

15. The smallpox data set provides a sample of 6,224 individuals from the year 1721 who were exposed to smallpox in Boston. (Fenner F. 1988. Smallpox and Its Eradication (History of International Public Health, No. 6). Geneva: World Health Organization. ISBN 92-4-156110-6.)

 | Inoculated | Not Inoculated | Total
Lived | 238 | 5136 | 5374
Died | 6 | 844 | 850
Total | 244 | 5980 | 6224

a) Compute the relative frequencies (fill in a table of the same shape whose grand total is 1). b) Compute the probability that a person was inoculated. c) Compute the probability that a person lived. d) Compute the probability that a person died or was inoculated. e) Compute the probability that a person died if they were inoculated. f) Given that a person was not inoculated, what is the probability that they died?

16. A certain virus infects one in every 400 people. A test used to detect the virus in a person is positive 90% of the time if the person has the virus and 8% of the time if the person does not have the virus. (This 8% result is called a false positive.) Let A be the event "the person is infected" and B be the event "the person tests positive." a) Find the probability that a person has the virus given that they have tested positive, i.e., find P(A|B). b) Find the probability that a person does not have the virus given that they test negative, i.e., find $P(A^C|B^C)$.

17. A store purchases baseball hats from three different manufacturers. In manufacturer A's box there are 12 blue hats, 6 red hats, and 6 green hats. In manufacturer B's box there are 10 blue hats, 10 red hats, and 4 green hats. In manufacturer C's box, there are 8 blue hats, 8 red hats, and 8 green hats. A hat is randomly selected. Given that the hat selected is green, what is the probability that it came from manufacturer B's box? Hint: Make a table with the colors as the columns and the manufacturers as the rows.

18. The following table represents food purchase amounts and whether the customer used cash or a credit/debit card. One customer is chosen at random. Give your answer as a decimal out to at least 4 places.
 | Less than \$10 | \$10-\$49 | \$50 or More | Total
Cash Purchase | 11 | 10 | 18 | 39
Card Purchase | 17 | 6 | 19 | 42
Total | 28 | 16 | 37 | 81

a) Compute the probability that the customer's purchasing method was a cash purchase or the customer spent \$10-\$49. b) Compute the probability that the customer's purchasing method was a cash purchase and the customer spent \$10-\$49. c) Compute the probability that the customer's purchasing method was a cash purchase given they spent \$10-\$49. d) Compute the probability that the customer spent less than \$50. e) What percent of cash purchases were for \$50 or more?

19. The probability of stock A rising is 0.3 and of stock B rising is 0.4. What is the probability that neither of the stocks rises, assuming that these two stocks are independent?

20. You are going to a Humane Society benefit dinner, and need to decide before the dinner what you want for salad, main dish, and dessert. You have 2 salads to choose from, 3 main dishes, and 5 desserts. How many different meals are available?

21. How many different phone numbers are possible in the area code 503, if the first number cannot start with a 0 or 1?

22. You are opening a screen-printing business. You can have long sleeves or short sleeves, three different colors, five different designs, and four different sizes. How many different shirts can you make?

23. The California license plate has one number followed by three letters followed by three numbers. How many different license plates are possible?

24. Calculate the following. a) 9P4 b) 10P6 c) 10C5 d) 20C4 e) 8! f) 5!

25. PSU's Mixed Me club has 30 members. You need to pick a president, treasurer, and secretary from the 30. How many different ways can you do this?

26. How many different 4-digit personal identification numbers (PIN) are there if repeats are not allowed?

27. A baseball team has a 20-person roster. A batting order has nine people. How many different batting orders are there?

28. How many ways can you choose 4 cookies from a cookie jar containing 25 cookies, all of the same type?

29. A computer generates a random password for your account (the password is not case sensitive). The password must consist of 8 characters, each of which can be any letter or number. How many different passwords could be generated?

30. How many unique tests can be made from a test bank of 20 questions if the test consists of 8 questions and order does not matter?

31. A typical PSU locker is opened with the correct sequence of three numbers between 0 and 49 inclusive. A number can be used more than once; for example, 8-8-8 is valid. How many possible locker combinations are there?

32. In the game of Megabucks, you get six numbers from 48 possible numbers without replacement. Megabucks jackpots start at \$1 million and grow until someone wins. What is the probability of matching all 6 numbers in any order?

Answer to Odd Numbered Exercises
1) 0.1420
3) 0.375
5) 0.001
7) 0.1944
9) 0.2
11) a) 0.8696 b) 0.2754 c) 0.5938 d) 0.7826 e) 0.4638 f) 0.5909
13) 0.19
15) a)
 | Inoculated | Not Inoculated | Total
Lived | 0.0382 | 0.8252 | 0.8634
Died | 0.0010 | 0.1356 | 0.1366
Total | 0.0392 | 0.9608 | 1
b) 0.0392 c) 0.8634 d) 0.1748 e) 0.026 f) 0.141
17) 0.2222
19) 0.42
21) 8,000,000
23) 138,240,000
25) 24,360
27) 60,949,324,800
29) 2,821,109,907,456
31) 125,000
A random variable, usually denoted by X, is a quantitative (numerical) variable whose value is measured or observed in an experiment. In other words, a random variable is a numeric description of an event. Recall that a sample space is all the possible outcomes and an event is a subset of the sample space. Usually we use capital letters from the beginning of the alphabet to represent events, A, B, C, etc. We use capital letters from the end of the alphabet to represent random variables, X, Y, and Z. The possible outcomes of X are labeled with a corresponding lower-case letter x and subscripts, like $x_1$, $x_2$, etc.

For instance, if we roll two 6-sided dice, the sample space is S = {(1,1), (1,2), (1,3), (1,4), (1,5), (1,6), (2,1), (2,2), …, (6,6)}. If the event E is "the sum of the two rolls is five," then E = {(1,4), (2,3), (3,2), (4,1)}. Now, we could define the random variable X to denote the sum of the two rolls; then X can take the values {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}, and the event E corresponds to x = 5.

There are different types of quantitative random variables, called discrete or continuous. Discrete random variables can only take on particular values in a range. Continuous random variables can take on any value in a range. Discrete random variables usually arise from counting, while continuous random variables usually arise from measuring.

A discrete random variable is a variable whose possible values are finite or countably infinite. A continuous random variable is a variable that has an infinite number of possible values in an interval of numbers.

5.02: Discrete Probability Distributions

In many cases, the random variable is what you are measuring, but when it comes to discrete random variables, it is usually what you are counting. For the example of height, the random variable is the height of the child. For the example of how many fleas are on prairie dogs in a colony, the random variable is the number of fleas on a prairie dog in a colony.

Now suppose you put all the possible values of the random variable together with the probability that the random variable would occur. You could then have a distribution like before, but now it is called a probability distribution since it involves probabilities. A probability distribution is an assignment of probabilities to all the possible values of the random variable. The abbreviation pdf is used for a probability density (distribution) function in your calculators. The probability distribution of X lists all the possible values of x and their corresponding probabilities.

A valid discrete probability distribution has to satisfy two criteria:
1. The probability of each x is between 0 and 1: 0 ≤ P($x_i$) ≤ 1.
2. The probabilities of all x values add up to 1: ∑ P($x_i$) = 1.

Two books are assigned for a statistics class: a textbook and its corresponding study guide. No students buy just the study guide. The university bookstore determined 20% of enrolled students do not buy either book, 55% buy the textbook only, 25% buy both books, and these percentages are relatively constant from one term to another. Is this a valid discrete probability distribution?

Solution
Each probability is a number between 0 and 1: 0 ≤ 0.2 ≤ 1, 0 ≤ 0.55 ≤ 1, and 0 ≤ 0.25 ≤ 1. The sum of the probabilities adds up to 1: ∑ P($x_i$) = 0.2 + 0.55 + 0.25 = 1. Yes, this is a valid discrete probability distribution.

A random experiment consists of flipping a fair coin three times. Let X = the number of heads that show up. Create the probability distribution of X.
Solution
When you flip a coin three times, you can get 0 heads, 1 head, 2 heads or 3 heads, so the corresponding values of x are 0, 1, 2, 3. We can find the probabilities for each of these values by finding the sample space and using the classical method of computing probability. The sample space is S = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}.

The event x = 0 can happen one way, namely {TTT}. Thus P(X = 0) = $\frac{1}{8}$.
The event x = 1 can happen three ways, namely {HTT, THT, TTH}. Thus P(X = 1) = $\frac{3}{8}$.
The event x = 2 can happen three ways, namely {HHT, HTH, THH}. Thus P(X = 2) = $\frac{3}{8}$.
The event x = 3 can happen one way, namely {HHH}. Thus P(X = 3) = $\frac{1}{8}$.

Therefore, the probability distribution of X is:

x        0               1               2               3
P(X=x)   $\frac{1}{8}$   $\frac{3}{8}$   $\frac{3}{8}$   $\frac{1}{8}$

Figure 5-1

Note this is a valid probability distribution because the probability of each x, P(x), is between 0 and 1, and the sum of the probabilities over all x values from 0 to 3 is ∑ P(x) = $\frac{1}{8}+\frac{3}{8}+\frac{3}{8}+\frac{1}{8}=1$.

Figure 5-2 is a graph of this probability distribution. The height of each line corresponds to the probability of each x value. Sometimes you will see a bar graph instead.

Figure 5-2

The following table shows the probability of winning an online video game, where X = the net dollar winnings for the video game. Is the following table a valid discrete probability distribution?

x        -5     -2.5            0      2.5   5
P(X=x)   0.55   $\frac{1}{4}$   0.15   0     5%

Solution
It is easier to have all the probabilities as proportions: ¼ = 0.25 and 5% = 0.05. Each probability is a number between 0 and 1: 0 ≤ 0.55 ≤ 1, 0 ≤ 0.25 ≤ 1, 0 ≤ 0.15 ≤ 1, 0 ≤ 0 ≤ 1, 0 ≤ 0.05 ≤ 1. The sum of the probabilities is equal to one: Σ P(x) = 0.55 + 0.25 + 0.15 + 0 + 0.05 = 1. Yes, this is a valid discrete probability distribution since the table has the two properties that each probability is between 0 and 1, and the sum of the probabilities is one.

Is the following table a valid discrete probability distribution?

x        0               1               2               3
P(X=x)   $\frac{1}{2}$   $\frac{1}{4}$   $\frac{1}{4}$   $\frac{1}{4}$

Solution
It is easier to have all the probabilities as proportions: ½ = 0.5 and ¼ = 0.25. Each probability is a number between 0 and 1: 0 ≤ 0.5 ≤ 1, 0 ≤ 0.25 ≤ 1, 0 ≤ 0.25 ≤ 1, 0 ≤ 0.25 ≤ 1. The sum of the probabilities does not equal one: Σ P(x) = 0.5 + 0.25 + 0.25 + 0.25 = 1.25. No, this is not a valid discrete probability distribution since the sum of the probabilities is not equal to one.

Words Are Important! When finding probabilities, pay close attention to the phrases in Figure 5-3.

Figure 5-3

The following is a valid discrete probability distribution of X = the net dollar winnings for an online video game. Find the probability of winning at most \$2.50.

x        -5     -2.5            0      2.5   5
P(X=x)   0.55   $\frac{1}{4}$   0.15   0     5%

Solution
It is easier to have all the probabilities as proportions: ¼ = 0.25 and 5% = 0.05. "At most" means the same as "less than or equal to." The probability can be found by adding all the probabilities for values that are less than or equal to 2.5: P(X ≤ 2.5) = 0.55 + 0.25 + 0.15 + 0 = 0.95. You can also use the complement rule since all the probabilities add to one: P(X ≤ 2.5) = 1 – P(X > 2.5) = 1 – 0.05 = 0.95.

The 2010 United States Census found the chance of a household being a certain size.

Size of Household   1       2       3       4       5      6      7 or more
Probability         26.7%   33.6%   15.8%   13.7%   6.3%   2.4%   1.5%

a) Is this a valid probability distribution?
b) Compute the probability that a household has exactly 3 people.
c) Compute the probability that a household has at most 3 people.
d) Compute the probability that a household has more than 3 people.
e) Compute the probability that a household has at least 3 people.
f) Compute the probability that a household has 3 or more people.

Solution
a) In this case, the random variable is X = number of people in a household. This is a discrete random variable, since you are counting the number of people in a household. This is a probability distribution since you have the x values and the probabilities that go with them, all of the probabilities are between zero and one, and the sum of all of the probabilities is one. We can use these discrete probability distribution tables to find probabilities.

b) Let X = the number of people per household. Trace along the Size of Household row until you get to 3, then take the corresponding probability in that column: P(X = 3) = 0.158.

c) "At most" 3 would be 3 or less. P(X ≤ 3) = P(X = 1) + P(X = 2) + P(X = 3) = 0.267 + 0.336 + 0.158 = 0.761.

d) "More than" 3 does not include the 3, and would be all the probabilities for 4 or more added together. P(X > 3) = P(X ≥ 4) = 0.137 + 0.063 + 0.024 + 0.015 = 0.239.

e) "At least" 3 means 3 or more. P(X ≥ 3) = 0.158 + 0.137 + 0.063 + 0.024 + 0.015 = 0.397.

f) This is the same as the previous question, just worded differently: P(X ≥ 3) = 0.158 + 0.137 + 0.063 + 0.024 + 0.015 = 0.397.

Rare Events
The reason probability is studied in statistics is to help in making decisions in inferential statistics. To understand how making decisions is done, the concept of a rare event is needed.

Rare Event Rule for Inferential Statistics: If, under a given assumption, the probability of a particular observed event is extremely small, then you can conclude that the assumption is probably not correct. For example, suppose you roll an assumed fair die 1,000 times and get a six 600 times, when you should have only rolled a six around 160 times; then you should believe that your assumption about it being a fair die is untrue.

Determining if an event is unusual: If you are looking at a value of x for a discrete variable, and P(the variable has a value of x or more) ≤ 0.05, then you can consider x an unusually high value. Another way to think of this is that if the probability of getting such a high value is less than or equal to 0.05, then the event of getting the value x is unusual. Similarly, if P(the variable has a value of x or less) ≤ 0.05, then you can consider this an unusually low value. Another way to think of this is that if the probability of getting a value as small as x is less than or equal to 0.05, then the event x is considered unusual.

Why is it "x or more" or "x or less" instead of just "x" when you are determining if an event is unusual? Consider this example: you and your friend go out to lunch every day. Instead of each paying for their own lunch, you decide to flip a coin, and the loser pays for both lunches. Your friend seems to be winning more often than you would expect, so you want to determine if this is unusual before you decide to change how you pay for lunch (or accuse your friend of cheating). The process for how to calculate these probabilities will be presented in a later section on the binomial distribution. If your friend won 6 out of 10 lunches, the probability of that happening turns out to be about 20.5%, not unusual. The probability of winning 6 or more is about 37.7%, still not unusual. However, what happens if your friend won 501 out of 1,000 lunches? That does not seem so unlikely!
The probability of winning 501 or more lunches is about 47.8%, and that is consistent with your hunch that this probability is not so unusual. Nevertheless, the probability of winning exactly 501 lunches is much less, only about 2.5%. That is why the probability of getting exactly that value is not the right question to ask: you should ask the probability of getting that value or more (or that value or less on the other side). The value 0.05 will be explained later, and it is not the only value you can use for deciding what is unusual.

The 2010 United States Census found the chance of a household being a certain size.

Size of Household   1       2       3       4       5      6      7 or more
Probability         26.7%   33.6%   15.8%   13.7%   6.3%   2.4%   1.5%

a) Is it unusual for a household to have six people in the family?
b) Is it unusual for a household to have four people in the family?

Solution
a) To determine this, you need to look at probabilities. However, you cannot just look at the probability of six people. You need to look at the probability of x being six or more people: P(X ≥ 6) = 0.024 + 0.015 = 0.039. Since this probability is less than 5%, six is an unusually high value. It is unusual for a household to have six people in the family.

b) Four is toward the middle, so we need to look at both the probability of x being four or more and the probability of x being four or less. P(X ≥ 4) = 0.137 + 0.063 + 0.024 + 0.015 = 0.239; since this probability is more than 5%, four is not an unusually high value. P(X ≤ 4) = 0.267 + 0.336 + 0.158 + 0.137 = 0.898; since this probability is more than 5%, four is not an unusually low value. Thus, four is not an unusual size of a family.

"What was that voice?" shouted Arthur. "I don't know," yelled Ford, "I don't know. It sounded like a measurement of probability." "Probability? What do you mean?" "Probability. You know, like two to one, three to one, five to four against. It said two to the power of one hundred thousand to one against. That's pretty improbable you know." A million-gallon vat of custard upended itself over them without warning. "But what does it mean?" cried Arthur. "What, the custard?" "No, the measurement of probability!" "I don't know. I don't know at all. I think we're on some kind of spaceship." (Adams, 2002)

5.2.1 Mean of a Discrete Probability Distribution

The mean of a discrete random variable X is an average of the possible values of x, which takes into account the fact that not all outcomes are equally likely. The mean of a random variable is the value of x that one would expect to see after averaging a large number of trials. The mean of a random variable does not need to be a possible value of x.

Two books are assigned for a statistics class: a textbook and its corresponding study guide. No students buy just the study guide. The university bookstore determined 20% of enrolled students do not buy either book, 55% buy the textbook only, 25% buy both books, and these percentages are relatively constant from one term to another. If there are 100 students enrolled, how many books should the bookstore expect to sell to this class?

Solution
It is expected that 0.20 × 100 = 20 students will not buy either book (0 books total), that 0.55 × 100 = 55 will buy just the textbook (55 books total), and 0.25 × 100 = 25 will buy both books (totaling 50 books for these 25 students). The bookstore should expect to sell about 105 total books for this class.

The textbook costs \$137 and the study guide \$33. How much revenue should the bookstore expect from this class of 100 students?
Use the results from the previous example.

Solution
It is expected that 55 students will buy just the textbook, providing revenue of \$137 × 55 = \$7,535. The roughly 25 students who buy both the textbook and the study guide would pay a total of (\$137 + \$33) × 25 = \$170 × 25 = \$4,250. Thus, the bookstore should expect to generate about \$7,535 + \$4,250 = \$11,785 from these 100 students for this one class. However, there might be some sampling variability, so the actual amount may differ by a little bit.

Figure 5-4 shows the probability distribution for the bookstore’s revenue from a single student. The distribution balances on a triangle representing the average revenue per student.

Figure 5-4

What is the average revenue per student for this course? The expected total revenue is \$11,785, and there are 100 students. Therefore, the expected revenue per student is \$11,785/100 = \$117.85.

Mean of a Discrete Random Variable: Suppose that X is a discrete random variable with values $x_1, x_2, \ldots, x_k$. Then the mean of X is
$\mu = \sum \left(x_i \cdot P(x_i)\right) = x_1 \cdot P(x_1) + x_2 \cdot P(x_2) + \cdots + x_k \cdot P(x_k)$.
The mean is also referred to as the expected value of X, denoted E(X).

Two books are assigned for a statistics class: a textbook and its corresponding study guide. The university bookstore determined 20% of enrolled students do not buy either book, 55% buy the textbook only, 25% buy both books, and these percentages are relatively constant from one term to another. Use the formula for the mean to find the average textbook cost.

Solution
Let X = the revenue from statistics students for the bookstore; then $x_1$ = \$0, $x_2$ = \$137, and $x_3$ = \$170, which occur with probabilities 0.20, 0.55, and 0.25. The distribution of X is summarized in the table below.

x        \$0    \$137   \$170
P(X=x)   0.2    0.55    0.25

Compute the average outcome of X as μ = Σ($x_i$ ∙ P($x_i$)) = $x_1$ ∙ P($x_1$) + $x_2$ ∙ P($x_2$) + $x_3$ ∙ P($x_3$) = 0 ∙ 0.2 + 137 ∙ 0.55 + 170 ∙ 0.25 = 117.85. We call this average the expected value of X, denoted by E(X) = \$117.85. Note that using this method we do not divide the answer by the total number of students, since the probabilities were already found by dividing each frequency by the total, so we would not divide again.

It may have been tempting to set up the table with the x values 0, 1 and 2 books. This would be fine if the question was asking for the average number of books sold. Since the books are different prices, we would not be able to get an average cost that way. Make sure X represents the variable that you are using to calculate the mean.

The following is a valid discrete probability distribution of X = the net dollar winnings for an online video game. Find the mean net earnings for playing the game.

x          -5     -2.5   0      2.5   5
P(X = x)   0.55   ¼      0.15   0     5%

Solution
It is easier to have all the probabilities as proportions: ¼ = 0.25 and 5% = 0.05. Be careful with the negative x values. μ = Σ($x_i$ ∙ P($x_i$)) = (-5) ∙ 0.55 + (-2.5) ∙ 0.25 + 0 ∙ 0.15 + 2.5 ∙ 0 + 5 ∙ 0.05 = (-2.75) + (-0.625) + 0 + 0 + 0.25 = -3.125. If you were to play the game many times, in the long run, you can expect to lose, on average, \$3.125 per game.

The 2010 United States Census found the chance of a household being a certain number of people (household size). Compute the mean household size.

Size of Household   1       2       3       4       5      6      7 or more
Probability         26.7%   33.6%   15.8%   13.7%   6.3%   2.4%   1.5%

Solution
To find the mean it is easier to use a table as shown below.
We will need to treat the category "7 or more" as just one number; we will use the lower bound of 7, since the mean cannot be calculated exactly with the information provided. The formula for the mean says to multiply each x value by its P(x) value, so add a row to the table for this calculation. Also, convert all P(x) to decimal form.

x        1       2       3       4       5       6       7
P(x)     0.267   0.336   0.158   0.137   0.063   0.024   0.015
x∙P(x)   0.267   0.672   0.474   0.548   0.315   0.144   0.105

Add up the new row and you get the answer 2.525. This is the mean, or the expected value: μ = 2.525 people. This means that you expect a household in the United States to have 2.525 people in it. Keep your answer as a decimal. Now of course you cannot have part of a person, but what this tells you is that you expect a household to have either 2 or 3 people, with the average falling slightly closer to 3 than to 2.

Just as with any data set, you can calculate the mean and standard deviation. In problems involving a table of probabilities, the probability distribution represents a population. The probability distribution in most cases comes from repeating an experiment many times. This is because you are using the data from repeated experiments to estimate the true probability. Since a probability distribution represents a population, the mean and standard deviation that are calculated are actually the population parameters, not the sample statistics. The notation used is the population mean μ "mu" and the population standard deviation σ "sigma." Note, the mean can be thought of as the expected value. It is the value you expect to get on average, if the trials were repeated an infinite number of times. The mean or expected value does not need to be a whole number, even if the possible values of x are whole numbers.

In the Oregon lottery game called Pick 4, a player pays \$1 and then picks a four-digit number. If those four numbers are picked in that specific order, the person wins \$2,000. What is the expected value of this game?

Solution
To find the expected value, you need to first create the probability distribution. In this case, the random variable X = net dollar winnings. If you pick the right numbers in the right order, then you win \$2,000, but you paid \$1 to play, so you actually win a net amount of \$1,999. If you did not pick the right numbers, you lose the \$1, so the net value of x is –\$1.

You also need the probability of winning and losing. Since you are picking a four-digit number, and for each digit there are 10 possible numbers to pick from, with each digit independent of the others, you can use the fundamental counting rule. To win, you have to pick the right numbers in the right order. For each of the four digits, you pick 1 number out of 10, so the probability of picking the right number in the right order is $\frac{1}{10} \cdot \frac{1}{10} \cdot \frac{1}{10} \cdot \frac{1}{10}=\frac{1}{10000}=0.0001$.

The probability of losing (not winning) would be, by the complement rule, 1 – 0.0001 = 0.9999. Putting this information into a table will help to calculate the expected value.

Game Outcome   Win       Lose
x              \$1,999    –\$1
P(X = x)       0.0001    0.9999

Find the mean, which is the same thing as the expected value.

Game Outcome   Win         Lose        Total
x              \$1,999      –\$1
P(x)           0.0001      0.9999
x∙P(x)         \$0.1999     –\$0.9999    –\$0.80

Now sum the last row and you have the expected value of \$0.1999 + (–\$0.9999) = –\$0.80. If you kept playing this game, in the long run, you will expect to lose \$0.80 per game.
Since the expected value is not 0, this game is not fair. Most lottery and casino games such as craps, 21, roulette, etc. and insurance policies are built with negative expectations for the consumer. This is how casinos and insurance companies stay in business.

5.2.2 Variance & Standard Deviation of Discrete Probability Distributions

Suppose you ran the university bookstore. Besides how much revenue you expect to generate, you might also want to know the volatility (variability) in your revenue. The variance and standard deviation can be used to describe the variability of a random variable. When we first introduced a method for finding the variance and standard deviation for a data set, we had sample data and found sample statistics. We first computed deviations from the mean $(x_i - \mu)$, squared those deviations $(x_i - \mu)^2$, and took an average to get the variance. In the case of a random variable, we again compute squared deviations. However, we take their sum weighted by their corresponding probabilities, just as we did for the expectation. This weighted sum of squared deviations equals the variance, and we calculate the standard deviation by taking the square root of the variance, just as we did for a sample variance. We also use notation for the population parameters σ and σ², instead of the sample statistics s and s².

Variance of a Discrete Random Variable X: Suppose that X is a discrete random variable with values $x_1, x_2, \ldots, x_k$. Then the variance of X is
$\sigma^2 = \sum (x_i - \mu)^2 \cdot P(x_i)$
or, using an easier-to-compute, algebraically equivalent formula,
$\sigma^2 = \sum \left(x_i^2 \cdot P(x_i)\right) - \mu^2 = \left(x_1^2 \cdot P(x_1) + x_2^2 \cdot P(x_2) + \cdots + x_k^2 \cdot P(x_k)\right) - \mu^2$.

Standard Deviation of a Discrete Random Variable X: The standard deviation of X is the positive square root of the variance, $\sigma=\sqrt{\sigma^{2}}$.

There are many situations where we can model a discrete distribution with a formula. This will make finding probabilities with large sample sizes much easier than making a table. There are many types of discrete distributions; we will just be covering a few of them.

The following is a valid discrete probability distribution of X = the net dollar winnings for an online video game. Find the standard deviation of the net earnings for playing the game.

x      -5     -2.5   0      2.5   5
P(x)   0.55   ¼      0.15   0     5%

Solution
It is easier to have all the probabilities as proportions: ¼ = 0.25 and 5% = 0.05. Be careful with the negative x values. First, find the mean: μ = Σ($x_i$ ∙ P($x_i$)) = (-5) ∙ 0.55 + (-2.5) ∙ 0.25 + 0 ∙ 0.15 + 2.5 ∙ 0 + 5 ∙ 0.05 = (-2.75) + (-0.625) + 0 + 0 + 0.25 = -3.125.

Next find the variance: $\sigma^2 = \sum \left(x_i^2 \cdot P(x_i)\right) - \mu^2$ = ((-5)² ∙ 0.55 + (-2.5)² ∙ 0.25 + 0² ∙ 0.15 + 2.5² ∙ 0 + 5² ∙ 0.05) – (-3.125)² = (13.75 + 1.5625 + 0 + 0 + 1.25) – 9.765625 = 16.5625 – 9.765625 = 6.796875.

Now take the square root of the variance to get the standard deviation: σ = $\sqrt{6.796875}$ = 2.607, or σ = \$2.607.

The 2010 United States Census found the chance of a household being a certain size. Compute the variance and standard deviation.

Size of Household   1       2       3       4       5      6      7 or more
Probability         26.7%   33.6%   15.8%   13.7%   6.3%   2.4%   1.5%

Solution
Make a table similar to how we started the mean, changing the probabilities to decimals and using 7 for the category "7 or more," but this time square each x value before multiplying by its corresponding probability.
x         1       2       3       4       5       6       7
P(x)      0.267   0.336   0.158   0.137   0.063   0.024   0.015
x²∙P(x)   0.267   1.344   1.422   2.192   1.575   0.864   0.735

Add this new row up to get the first piece of the variance formula: ∑($x_i^2$ ∙ P($x_i$)) = 8.399. In a previous example we found the mean household size to be μ = 2.525 people. To finish finding the variance we need to square the mean and subtract it from the sum: σ² = ∑($x_i^2$ ∙ P($x_i$)) – μ² = 8.399 – 2.525² = 2.023375 people².

Having a measurement in squared units is not very helpful when trying to interpret, so we take the square root to find the standard deviation, which is back in the original units. The standard deviation of the number of people in a household is σ = $\sqrt{2.023375}$ = 1.4225 people. This means that you can expect an average United States household to have 2.525 people in it, with an average spread or standard deviation of 1.42 people.

TI-84: Press [STAT], choose 1:Edit. For x and P(x) data pairs, enter all x-values in one list. Enter all corresponding P(x) values in a second list. Press [STAT]. Use cursor keys to highlight CALC. Select 1:1-Var Stats. Enter list 1 for List and list 2 for frequency list. Press Enter to calculate the statistics. For TI-83, you will just see 1-Var Stats on the screen. Enter each list separated with a comma by pressing [2nd], then press the number 1 key corresponding to your x list, then a comma, then [2nd] and the number 2 key corresponding to your P(x) values. The home screen should look like this: 1-Var Stats L1,L2. Where the calculator says $\overline{ x }$, this is µ, the population mean, and σx is the population standard deviation (square this number to get the population variance).

TI-89: Go to the [Apps] Stat/List Editor, and type the x values into List 1 and P(x) values into List 2. Select F4 for the Calc menu. Use cursor keys to highlight 1:1-Var Stats. Under List, press [2nd] Var-Link, then select list1. Under Freq, press [2nd] Var-Link, then select list2. Press enter twice and the statistics will appear in a new window. Use the cursor keys to arrow up and down to see all of the values. Note: $\overline{ x }$ is µ, the population mean, and σx is the population standard deviation; square this value to get the variance.

Bernoulli Trial
The focus of the previous section was on discrete probability distribution tables. To find the table for those situations, we usually need to actually conduct the experiment and collect data. Then one can calculate the experimental probabilities. If certain conditions are met, we can instead use theoretical probabilities. One of these theoretical probabilities can be used when we have a Bernoulli trial, named after Jacob Bernoulli.

Properties of a Bernoulli trial, or binomial trial:
1. Trials are independent, which means that what happens on one trial does not influence the outcomes of other trials.
2. There are only two outcomes, which are called a success and a failure.
3. The probability of a success does not change from trial to trial, where p = probability of success and q = probability of failure, the complement of p: q = 1 – p.

If you know you have a Bernoulli trial, then you can calculate probabilities using some theoretical probability formulas. This is important because Bernoulli trials come up often in real life. Examples of Bernoulli experiments are:
• Toss a fair coin twice, and find the probability of getting two heads.
• Question twenty people in class, and look for the probability that more than half are business majors.
• Test whether a patient has a virus.
• What is the probability of passing a multiple-choice test if you have not studied? Bernoulli trials are used in both the geometric and binomial distributions.
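The two general formulas from this section are easy to verify numerically. The following short Python sketch is our addition, not part of the original text; the helper name describe_pmf is made up for illustration. It checks the two validity criteria and then computes μ, σ², and σ for the census household-size table used above.

```python
# Sketch: validate a discrete probability distribution and compute its
# mean, variance, and standard deviation using the formulas in this section.
from math import isclose, sqrt

def describe_pmf(xs, ps):
    # Validity criteria: every P(x) is in [0, 1] and the probabilities sum to 1.
    assert all(0 <= p <= 1 for p in ps), "each P(x) must be between 0 and 1"
    assert isclose(sum(ps), 1.0), "the probabilities must sum to 1"
    mu = sum(x * p for x, p in zip(xs, ps))               # mu = sum(x * P(x))
    var = sum(x**2 * p for x, p in zip(xs, ps)) - mu**2   # sigma^2 = sum(x^2 * P(x)) - mu^2
    return mu, var, sqrt(var)

# 2010 census household-size table, treating "7 or more" as 7.
xs = [1, 2, 3, 4, 5, 6, 7]
ps = [0.267, 0.336, 0.158, 0.137, 0.063, 0.024, 0.015]
print(describe_pmf(xs, ps))  # (2.525, 2.023375, 1.4224...)
```

The printed values match the hand calculations: μ = 2.525 people, σ² = 2.023375 people², and σ ≈ 1.4225 people.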
The geometric distribution is a discrete probability distribution used to find the probability of success when there are two outcomes to each trial, and the trials are independent with the same probability of success on each trial. A geometric probability distribution results from a random experiment that meets all of the following requirements.
1. Repeat the trials until you get a success.
2. The trials must be independent.
3. Each trial must have exactly two categories that can be labeled “success” and “failure.”
4. The probability of a “success,” denoted by p, remains the same in all trials. The probability of “failure” is often denoted by q; thus q = 1 – p.
5. The random variable, X, counts the number of trials until your first success.

If a random experiment satisfies all of the above, the distribution of the random variable X, where X counts the number of trials until the first success, is called a geometric distribution. If a discrete random variable X has a geometric distribution with probability of success p, we write X ~ G(p).

The geometric distribution is P(X = x) = $p \cdot q^{(x-1)}$, x = 1, 2, 3, …, where x is the number of trials up to and including the first success that you are trying to find the probability for, p is the probability of a success for one trial, and q = 1 – p is the probability of a failure for one trial. Be careful: a “success” is not always a “good” thing. Sometimes a success is something that is “bad,” like finding a defect or getting in a car crash. The success will be the event from the probability question.

A game of chance has only two possible outcomes, win or lose, for each time you play. The probability of losing the game is 65%. You play this game until your first loss. What is the probability of playing the game exactly 6 times in a row?

Solution
We have independent trials with two outcomes, no set sample size, and the same probability of success each time we play, so we can use the geometric distribution. Let p = 0.65, which means q = 1 – 0.65 = 0.35, and x = 6. P(X = 6) = 0.65 ∙ $0.35^{5}$ = 0.0034. This is the probability of winning the first 5 games and then losing on the 6th game, at which point you would stop playing.

TI-84: Press [2nd] [DISTR]. This will get you a menu of probability distributions. Press 0 or arrow down to geometpdf( and press [ENTER]. This puts geometpdf( on the home screen. Enter the values for p and x with a comma between each. Press [ENTER]. This is the probability density function and will return the probability that the first success occurs on trial x. For the previous example, we would use geometpdf(0.65,6).

TI-89: Go to the [Apps] Stat/List Editor, then select F5 [DISTR]. This will get you a menu of probability distributions. Arrow down to Geometric Pdf and press [ENTER]. Enter the values for p and x into each cell. Press [ENTER]. This gives the probability density function and will return the probability that the first success occurs on trial x.

Flip a fair coin until you get a tail. What is the probability of flipping the coin exactly 8 times until you get a tail?

Solution
In this case the probability of a success is getting a tail on any one toss, which is p = ½ = 0.5. We are interested in a success on the eighth flip, so x = 8. P(X = 8) = 0.5 ∙ $0.5^{7}$ = 0.00391. Figure 5-5 shows a graph of this discrete probability distribution.

Figure 5-5

When looking at a person’s eye color, it turns out that only 2% of people in the world have green eyes (not to be confused with hazel colored eyes). Randomly select people and look at their eye color.
What is the probability that you get someone with green eyes on the 5th person?

Solution
The probability of a success is 0.02. We are looking for P(X = 5) = 0.02 ∙ $0.98^{4}$ = 0.0184.

Mean, Variance & Standard Deviation of a Geometric Distribution
For a geometric distribution, μ, the expected number of trials until the first success, σ², the variance, and σ, the standard deviation for the number of trials are given by the formulas below, where p is the probability of success and q = 1 – p.

$\mu=\frac{1}{p} \quad \sigma^{2}=\frac{1-p}{p^{2}} \quad \sigma=\sqrt{\frac{1-p}{p^{2}}}$
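As a quick numerical check on these formulas, here is a short Python sketch (our addition, assuming SciPy is available); scipy.stats.geom uses the same "number of trials until the first success" parameterization as this section.

```python
# Sketch (assumes SciPy): geometric probabilities, mean, and variance.
from scipy.stats import geom

p = 0.65                 # probability of a "success" (a loss) on each play
print(geom.pmf(6, p))    # P(X = 6) -> ~0.0034
print(p * (1 - p)**5)    # same value from P(X = x) = p * q^(x-1)
print(geom.mean(p))      # mu = 1/p -> ~1.54 plays until the first loss
print(geom.var(p))       # sigma^2 = (1 - p)/p^2 -> ~0.83
```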
The binomial distribution is a discrete probability distribution used to find the probability of success when there are two outcomes to each trial, and there is a set number of independent trials with the same probability of success. To develop the process for calculating the probabilities in a binomial experiment, consider the following example.

Suppose you are given a three-question multiple-choice test. Each question has four responses and only one is correct. Suppose you want to find the probability that you can just guess at the answers and get two questions correct. To help with the idea that you are going to guess, suppose the test is on string theory.
a) Is this a binomial experiment?
b) What is the probability of getting two questions right?
c) What is the probability of getting zero right, one right, two right and all three right?

Solution
a) The random variable is x = number of correct answers.
1. There are three questions, and each question is a trial, so there are a fixed number of trials. In this case, n = 3.
2. Getting the first question right has no effect on getting the second or third question correct, thus the trials are independent.
3. Either you get the question right or you get it wrong, so there are only two outcomes. In this case, the success is getting the question right.
4. The probability of getting a question right is one out of four. This is the same for every trial since each question has 4 responses. In this case, p = ¼ and q = 1 – ¼ = ¾.
Since all of the properties are met, this is a binomial experiment.

b) To answer this question, start with the sample space. SS = {RRR, RRW, RWR, WRR, WWR, WRW, RWW, WWW}, where RRW means you get the first question right, the second question right, and the third question wrong. Now the event space for getting 2 right is {RRW, RWR, WRR}. What you did in chapter four was just divide three by eight to compute the probability. However, this would not be correct in this case, because the probability of getting a question right is different from getting a question wrong. What else can you do?

Look at just P(RRW) for the moment. That means P(RRW) = P(R on 1st, R on 2nd, and W on 3rd). Since the trials are independent, P(RRW) = P(R on 1st) · P(R on 2nd) · P(W on 3rd) = p · p · q = ¼ ∙ ¼ ∙ ¾ = $\left(\frac{1}{4}\right)^{2} \cdot \left(\frac{3}{4}\right)^{1}$. Similarly, you can compute P(RWR) and P(WRR). To find the probability of 2 correct answers, just add these three probabilities together: P(2 correct answers) = P(RRW) + P(RWR) + P(WRR) = $3 \cdot \left(\frac{1}{4}\right)^{2} \cdot \left(\frac{3}{4}\right)^{1}$.

c) You could go through the same argument for each number of correct answers and come up with the following: P(X = 0) = $1 \cdot \left(\frac{1}{4}\right)^{0} \cdot \left(\frac{3}{4}\right)^{3}$, P(X = 1) = $3 \cdot \left(\frac{1}{4}\right)^{1} \cdot \left(\frac{3}{4}\right)^{2}$, P(X = 2) = $3 \cdot \left(\frac{1}{4}\right)^{2} \cdot \left(\frac{3}{4}\right)^{1}$, and P(X = 3) = $1 \cdot \left(\frac{1}{4}\right)^{3} \cdot \left(\frac{3}{4}\right)^{0}$. Do you see the resulting pattern? You can now write the general formula for the probabilities for a binomial experiment.

First, the random variable in a binomial experiment is x = number of successes. A binomial probability distribution results from a random experiment that meets all of the following requirements.
1. The procedure has a fixed number of trials (or steps), which is denoted by n.
2. The trials must be independent.
3. Each trial must have exactly two categories that can be labeled “success” and “failure.”
4. The probability of a “success,” denoted by p, remains the same in all trials. The probability of “failure” is often denoted by q, thus q = 1 – p.
5.
The random variable, X, counts the number of “successes.”

If a random experiment satisfies all of the above, the distribution of the random variable X, where X counts the number of successes, is called a binomial distribution. A binomial distribution is described by the population proportion p and the sample size n. If a discrete random variable X has a binomial distribution with population proportion p and sample size n, we write X ~ B(n, p). Following the pattern from the example above, the binomial distribution is P(X = x) = ${}_{n}C_{x} \cdot p^{x} \cdot q^{(n-x)}$, x = 0, 1, 2, …, n.

Be careful: a “success” is not always a “good” thing. Sometimes a success is something that is “bad,” like finding a defect or getting in a car crash. The success will be the event from the probability question. The geometric and binomial distributions are easy to mix up. Keep in mind that the binomial distribution has a given sample size, whereas the geometric is sampling until you get a success.

Excel Formula for Binomial Distribution: For exactly P(X = x) use =BINOM.DIST(x,n,p,FALSE). For P(X ≤ x) use =BINOM.DIST(x,n,p,TRUE).

TI-84: Press [2nd] [DISTR]. This will get you a menu of probability distributions. Press 0 or arrow down to 0:binompdf( and press [ENTER]. This puts binompdf( on the home screen. Enter the values for n, p and x with a comma between each. Press [ENTER]. This is the probability density function and will return the probability of exactly x successes. If you leave off the x value and just enter n and p, you will get all the probabilities for each x from 0 to n. Press [ALPHA] A or arrow down to A:binomcdf( and press [ENTER]. This puts binomcdf( on the home screen. Enter the values for n, p and x with a comma between each. If you have the newer operating system on the TI-84, the screen will prompt you for each value. Press [ENTER]. This is the cumulative distribution function and will return the probability of at most (≤) x successes. If you want at least x successes (≥), use the complement rule. If you have < or >, adjust x to get ≤ or ≥.

TI-89: Go to the [Apps] Stat/List Editor, then select F5 [DISTR]. This will get you a menu of probability distributions. Arrow down to binomial Pdf and press [ENTER]. Enter the values for n, p and x into each cell. Press [ENTER]. This is the probability density function and will return the probability of exactly x successes. If you leave off the x value and just enter n and p, you will get all the probabilities for each x from 0 to n. Arrow down to binomial Cdf and press [ENTER]. Enter the values for n, p and the lower and upper values of x into each cell. Press [ENTER]. This is the cumulative distribution function and will return the probability between the lower and upper x-values, inclusive.

When looking at a person’s eye color, it turns out that only 2% of people in the world have green eyes (not to be confused with hazel colored eyes). Consider a randomly selected group of 20 people.
a) Is this a binomial experiment?
b) Compute the probability that none have green eyes.
c) Compute the probability that nine have green eyes.

Solution
a) Yes, since all the requirements are met:
1. There are 20 people, and each person is a trial, so there are a fixed number of trials.
2. If you assume that each person in the group is chosen at random, the eye color of one person does not affect the eye color of the next person; thus, the trials are independent.
3. Either a person has green eyes or they do not have green eyes, so there are only two outcomes. In this case, the success is a person having green eyes.
4. The probability of a person having green eyes is 0.02.
This is the same for every trial since each person has the same chance of having green eyes.

b) You are looking for P(X = 0), since this problem is asking for none, x = 0. There are 20 people, so n = 20. The success is selecting someone with green eyes, so the probability of a success is p = 0.02. Then the probability of not selecting someone with green eyes is q = 1 – p = 1 – 0.02 = 0.98. Using the formula: P(X = 0) = ${}_{20}C_{0} \cdot 0.02^{0} \cdot 0.98^{(20-0)}$ = 0.6676. Thus, there is a 66.76% chance that in a group of 20 people none of them will have green eyes.

c) P(X = 9) = BINOM.DIST(9,20,0.02,FALSE) = 6.8859E-11 = 0.000000000068859, which is zero rounded to four decimal places. The probability that out of 20 people, nine of them have green eyes is a very small chance and would be considered a rare event.

As you read through a problem, look for some of the key phrases in Figure 5-6. Once you find the phrase, match it up to what sign you would use, and then use the table to walk you through the computer or calculator formula. The same idea about signs applies to all the discrete probabilities that follow.

Figure 5-6

As of 2018, the Centers for Disease Control and Prevention (CDC) reported that about 1 in 88 children in the United States have been diagnosed with autism spectrum disorder (ASD). A researcher randomly selects 10 children. Compute the probability that 2 children have been diagnosed with ASD.

Solution
The random variable is x = number of children with ASD. There are 10 children, and each child is a trial, so there are a fixed number of trials. In this case, n = 10. If you assume that each child in the group is chosen at random, then whether a child has ASD does not affect the chance that the next child has ASD. Thus, the trials are independent. Either a child has been diagnosed with ASD or they have not, so there are two outcomes. In this case, the success is a child having ASD. The probability of a child having ASD is 1/88. This is the same for every trial since each child has the same chance of having ASD: $p=\frac{1}{88} \text { and } q=1-\frac{1}{88}=\frac{87}{88}$.

Using the formula: $\mathrm{P}(X=2)={ }_{10} C_{2} \cdot\left(\frac{1}{88}\right)^{2} \cdot\left(\frac{87}{88}\right)^{(10-2)}=0.0053$

Using the TI-83/84 calculator: P(X = 2) = binompdf(10,1/88,2) = 0.0053. Using Excel: =BINOM.DIST(2,10,1/88,FALSE) = 0.0053.

Flip a fair coin exactly 10 times.
a) What is the probability of getting exactly 8 tails?
b) What is the probability of getting 8 or more tails?

Solution
a) There are only two outcomes to each trial, heads or tails. The coin flips are independent, and the probability of a success, flipping a tail, p = ½ = 0.5, is the same for each trial. This is a binomial experiment with a sample size n = 10. Using the formula: P(X = 8) = ${}_{10}C_{8} \cdot 0.5^{8} \cdot 0.5^{(10-8)}$ = 0.0439.

b) We still have p = 0.5 and n = 10; however, we need to look at the probability of 8 or more. P(X ≥ 8) = P(X = 8) + P(X = 9) + P(X = 10). We can stop at x = 10 since the coin was only flipped 10 times. P(X ≥ 8) = ${}_{10}C_{8} \cdot 0.5^{8} \cdot 0.5^{2}$ + ${}_{10}C_{9} \cdot 0.5^{9} \cdot 0.5^{1}$ + ${}_{10}C_{10} \cdot 0.5^{10} \cdot 0.5^{0}$ = 0.0439 + 0.0098 + 0.0010 = 0.0547.

So far, most of the examples for the binomial distribution were for exactly x successes. If we want to find the probability of an accumulation of x values, then we would use the cumulative distribution function (cdf) instead of the pdf. Phrases such as “at least,” “more than,” or “below” can drastically change the probability answers.
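These exact binomial calculations can also be checked in software. The Python sketch below is our addition (assuming SciPy is available) and mirrors the ASD and coin-flip examples.

```python
# Sketch (assumes SciPy): exact binomial probabilities.
from scipy.stats import binom

print(binom.pmf(2, 10, 1/88))   # P(X = 2) with n = 10, p = 1/88 -> ~0.0053
print(binom.pmf(8, 10, 0.5))    # P(exactly 8 tails in 10 flips) -> ~0.0439
print(binom.sf(7, 10, 0.5))     # P(X >= 8) = 1 - P(X <= 7)      -> ~0.0547
```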
Approximately 10.3% of American high school students drop out of school before graduation. Choose 10 students entering high school at random. Find the following probabilities.
a) No more than two drop out.
b) At least 6 students graduate.
c) Exactly 10 stay in school and graduate.

Solution
a) There is a set sample size of independent trials. The person either has or has not dropped out of school before graduation. A “success” is what we are finding the probability for, so in this case a success is to drop out: p = 0.103 and q = 1 – 0.103 = 0.897.
P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) = ${}_{10}C_{0} \cdot 0.103^{0} \cdot 0.897^{10}$ + ${}_{10}C_{1} \cdot 0.103^{1} \cdot 0.897^{9}$ + ${}_{10}C_{2} \cdot 0.103^{2} \cdot 0.897^{8}$ = 0.3372 + 0.3872 + 0.2001 = 0.9245.
Calculator shortcut: use binompdf(10,0.103,0) + binompdf(10,0.103,1) + binompdf(10,0.103,2) = 0.9245, or binomcdf(10,0.103,2) = 0.9245. On the TI-89 cdf, enter the lower value of x as 0 and the upper value of x as 2. In Excel use the function =BINOM.DIST(2,10,0.103,TRUE). Note that if you choose FALSE under cumulative, this would return just P(X = 2), not P(X ≤ 2).

b) A success is to graduate, so p = 0.897 and q = 0.103.
P(X ≥ 6) = P(X = 6) + P(X = 7) + P(X = 8) + P(X = 9) + P(X = 10) = ${}_{10}C_{6} \cdot 0.897^{6} \cdot 0.103^{4}$ + ${}_{10}C_{7} \cdot 0.897^{7} \cdot 0.103^{3}$ + ${}_{10}C_{8} \cdot 0.897^{8} \cdot 0.103^{2}$ + ${}_{10}C_{9} \cdot 0.897^{9} \cdot 0.103^{1}$ + ${}_{10}C_{10} \cdot 0.897^{10} \cdot 0.103^{0}$ = 0.0123 + 0.0613 + 0.2001 + 0.3872 + 0.3372 = 0.9981.
This is a lot of work to do by hand, so we can use technology to find the answer. Note that Excel and the older TI-84 programs only find the probability at or below x, so you have to use the complement rule. For the TI-84 calculator shortcut use binompdf(10,0.897,6) + binompdf(10,0.897,7) + binompdf(10,0.897,8) + binompdf(10,0.897,9) + binompdf(10,0.897,10) = 0.9981, or use the complement rule P(X ≥ 6) = 1 – P(X ≤ 5) = 1 – binomcdf(10,0.897,5) = 0.9981. On the TI-89, just use the binomial Cdf with the lower x value as 6 and the upper x value as 10. In Excel use =1-BINOM.DIST(5,10,0.897,TRUE).

c) A success is to graduate, so p = 0.897 and q = 0.103. Find P(X = 10) = ${}_{10}C_{10} \cdot 0.897^{10} \cdot 0.103^{0}$ = 0.3372. On the TI-84 use binompdf(10,0.897,10) = 0.3372. In Excel use =BINOM.DIST(10,10,0.897,FALSE) = 0.3372.

When looking at a person’s eye color, it turns out that only 2% of people in the world have green eyes (not to be confused with hazel colored eyes). Consider a randomly selected group of 20 people.
a) Compute the probability that at most 3 have green eyes.
b) Compute the probability that less than 3 have green eyes.
c) Compute the probability that more than 3 have green eyes.
d) Compute the probability that 3 or more have green eyes.

Solution
a) This fits a binomial experiment. P(X ≤ 3) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) = ${}_{20}C_{0} \cdot 0.02^{0} \cdot 0.98^{20}$ + ${}_{20}C_{1} \cdot 0.02^{1} \cdot 0.98^{19}$ + ${}_{20}C_{2} \cdot 0.02^{2} \cdot 0.98^{18}$ + ${}_{20}C_{3} \cdot 0.02^{3} \cdot 0.98^{17}$ = 0.667608 + 0.272493 + 0.05283 + 0.006469 = 0.9994. On the TI-84 use binomcdf(20,0.02,3) = 0.9994. In Excel use =BINOM.DIST(3,20,0.02,TRUE) = 0.9994.

b) P(X < 3) = P(X = 0) + P(X = 1) + P(X = 2) = 0.667608 + 0.272493 + 0.05283 = 0.9929. On the TI-84 use binomcdf(20,0.02,2) = 0.9929. In Excel use =BINOM.DIST(2,20,0.02,TRUE) = 0.9929.

c) P(X > 3) = 1 – P(X ≤ 3) = 1 – 0.9994 = 0.0006. On the TI-84 use 1-binomcdf(20,0.02,3) = 0.0006. In Excel use =1-BINOM.DIST(3,20,0.02,TRUE) = 0.0006.

d) P(X ≥ 3) = 1 – P(X ≤ 2) = 1 – 0.9929 = 0.0071. On the TI-84 use 1-binomcdf(20,0.02,2) = 0.0071. In Excel use =1-BINOM.DIST(2,20,0.02,TRUE) = 0.0071.
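The cumulative cases in parts a through d all follow one pattern: use the cdf directly for "at most," and the complement rule for "more than" or "at least." A short Python sketch (our addition, assuming SciPy) reproducing the green-eyes parts:

```python
# Sketch (assumes SciPy): cumulative binomial probabilities, n = 20, p = 0.02.
from scipy.stats import binom

n, p = 20, 0.02
print(binom.cdf(3, n, p))       # a) P(X <= 3)            -> ~0.9994
print(binom.cdf(2, n, p))       # b) P(X < 3) = P(X <= 2) -> ~0.9929
print(1 - binom.cdf(3, n, p))   # c) P(X > 3)             -> ~0.0006
print(1 - binom.cdf(2, n, p))   # d) P(X >= 3)            -> ~0.0071
```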
As of 2018, the Centers for Disease Control and Prevention (CDC) reported that about 1 in 88 children in the United States have been diagnosed with autism spectrum disorder (ASD). A researcher randomly selects 10 children. Compute the probability that at least 2 children have been diagnosed with ASD.

Solution
$\mathrm{P}(X \geq 2)=1-\mathrm{P}(X \leq 1)=1-\left({ }_{10} C_{0} \cdot\left(\frac{1}{88}\right)^{0} \cdot\left(\frac{87}{88}\right)^{(10-0)}+{ }_{10} C_{1} \cdot\left(\frac{1}{88}\right)^{1} \cdot\left(\frac{87}{88}\right)^{(10-1)}\right)=1-(0.89202+0.102528)=0.0055$

On the TI-84 use 1-binomcdf(10,1/88,1) = 0.0055. In Excel use =1-BINOM.DIST(1,10,1/88,TRUE) = 0.0055.

Mean, Variance & Standard Deviation of a Binomial Distribution
If you list all possible values of x in a binomial distribution, you get the binomial probability distribution (pdf). You can then find the mean, the variance, and the standard deviation using the general formulas μ = Σ($x_i$ ∙ P($x_i$)) and σ² = ∑($x_i^2$ ∙ P($x_i$)) – μ². This, however, would take a lot of work if you had a large value for n. If you know the type of distribution, like binomial, then you can find the mean, variance and standard deviation using easier formulas. They are derived from the general formulas.

For a binomial distribution, μ, the expected number of successes, σ², the variance, and σ, the standard deviation for the number of successes are given by the formulas below, where p is the probability of success and q = 1 – p.

$\mu=n \cdot \mathrm{p} \quad \sigma^{2}=n \cdot \mathrm{p} \cdot \mathrm{q} \quad \sigma=\sqrt{n \cdot p \cdot q}$

A random experiment consists of flipping a coin three times. Let X = the number of heads that show up. Compute the mean and standard deviation of X, that is, the mean and standard deviation for the number of heads that show up when a coin is flipped three times.

Solution
This experiment follows a binomial distribution; hence, we can use the mean and standard deviation formulas for a binomial. The mean number of heads is µ = 3 · 0.5 = 1.5. The standard deviation of X is $\sigma=\sqrt{n \cdot p \cdot q}=\sqrt{(3 \cdot 0.5 \cdot(1-0.5))}=0.8660$.

When looking at a person’s eye color, it turns out that only 2% of people in the world have green eyes (not to be confused with hazel colored eyes). Consider a randomly selected group of 20 people. Compute the mean, variance and standard deviation.

Solution
Since this is a binomial experiment, you can use the formula μ = n ∙ p = 20 · 0.02 = 0.4 people. You would expect, on average, that out of 20 people fewer than 1 would have green eyes. The variance would be σ² = n ∙ p ∙ q = 20(0.02)(0.98) = 0.392 people². Once you have the variance, you just take the square root of the variance to find the standard deviation: σ = $\sqrt{0.392}$ = 0.6261 people. We would expect the distribution to be spread around the mean by about one standard deviation, 0.4 $\pm$ 0.6261, or roughly 0 to 1 person out of 20 having green eyes.
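The shortcut formulas can be checked against the general definitions from section 5.2.1. The Python sketch below (our addition) does both for the green-eyes example.

```python
# Sketch: binomial mean and variance two ways, for n = 20, p = 0.02.
from math import comb, sqrt

n, p = 20, 0.02
q = 1 - p

# Shortcut formulas for a binomial distribution.
print(n * p, n * p * q, sqrt(n * p * q))   # 0.4, 0.392, ~0.6261

# General discrete-distribution formulas applied to the full pmf table.
pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]
mu = sum(x * px for x, px in enumerate(pmf))
var = sum(x**2 * px for x, px in enumerate(pmf)) - mu**2
print(mu, var)                             # matches 0.4 and 0.392
```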
The hypergeometric distribution is a discrete probability distribution used to find the probability of success when there are two outcomes to each trial, and there is a set number of dependent trials. A hypergeometric probability distribution results from a random experiment that meets all of the following requirements.
1. The procedure has a fixed sample size n taken from a population N.
2. The trials are taken without replacement (dependent).
3. Each trial has exactly two outcomes, the number of successes = a and the number of failures = b. Note a + b = N.

If a random experiment satisfies all of the above, the distribution of the random variable X, where X counts the number of successes, is called a hypergeometric distribution; we write X ~ H(n, a, N).

The hypergeometric distribution is P(X = x) = $\frac{ {}_{a} C_{x} \cdot {}_{b} C_{n-x}}{ {}_{N} C_{n}}$, x = 0, 1, 2, … , n or a, whichever is smaller, where n is the sample size taken without replacement from the population size N, x is the number of successes out of n that you are trying to find the probability for, a is the number of successes and b = N – a is the number of failures in the population N.

A committee of 5 people is to be formed from 15 volunteers and 6 appointed positions. Compute the probability that the committee will consist of 4 randomly selected volunteers.

Solution
There are a total of N = 15 + 6 = 21 people. A sample of n = 5 people is selected without replacement. The success is choosing a volunteer, so a = 15. Of the 15 volunteers, we are asked to find the probability of selecting x = 4. Substitute in each value: P(X = 4) = $\frac{ {}_{15} C_{4} \cdot { }_{6} C_{1}}{ {}_{21} C_{5}} = 0.4025$.

There is a pattern to the numbers in the formula. The top two numbers on the left of the C’s add up to the bottom number on the left of C, and the top two numbers on the right of the C’s add up to the bottom number on the right of C.

The waiting room for jury selection has 40 people with a college degree and 30 people without. A jury of 12 is selected from this waiting room. What is the probability that exactly 5 of the jury members will have a college degree?

Solution
There are a total of N = 40 + 30 = 70 people. A sample of n = 12 people is selected without replacement. The success is choosing someone with a college degree, so a = 40. Of the 12 jurors, we are asked to find the probability of selecting x = 5. Substitute in each value: P(X = 5) = $\frac{ {}_{40} C_{5} \cdot { }_{30} C_{7}}{ {}_{70} C_{12}} = 0.1259$.

A wallet contains three \$100 bills and five \$1 bills. You randomly choose four bills. What is the probability that you will choose exactly two \$100 bills?

Solution
There are a total of N = 3 + 5 = 8 bills. A sample of n = 4 bills is selected without replacement. The success is choosing a \$100 bill, so a = 3. Of the three \$100 bills, we are asked to find the probability of selecting x = 2. Substitute in each value: P(X = 2) = $\frac{ {}_{3} C_{2} \cdot { }_{5} C_{2}}{ {}_{8} C_{4}} = 0.4286$.

Note: the calculator does not have a hypergeometric distribution shortcut. The following online calculator will compute the probability: https://homepage.divms.uiowa.edu/~mbognar/applets/hg.html. In that applet, M corresponds to a, the number of successes, from our example. Alternatively, use Excel; Figure 5-7 may help you decide when to use "TRUE" for cumulative in Excel.

Figure 5-7
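Although the TI calculators lack a hypergeometric shortcut, SciPy has one. The Python sketch below is our addition (assuming SciPy is available) and reproduces the committee example. Note the symbol clash: scipy.stats.hypergeom takes (M, n, N) = (population size, successes in the population, sample size), which correspond to this section's (N, a, n).

```python
# Sketch (assumes SciPy): hypergeometric probability for the committee example.
from math import comb
from scipy.stats import hypergeom

# Our notation: N = 21 people, a = 15 volunteers, n = 5 chosen, x = 4.
print(hypergeom.pmf(4, 21, 15, 5))             # ~0.4025
print(comb(15, 4) * comb(6, 1) / comb(21, 5))  # same value from the formula
```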
The Poisson distribution was named after the French mathematician Siméon Poisson (pronounced pwɑːsɒn; "poisson" means fish in French). The Poisson discrete probability distribution finds the probability of an event occurring over some unit of time or space. A Poisson probability distribution may be used when a random experiment meets all of the following requirements.
1. Events occur independently.
2. The discrete random variable X is the number of occurrences over an interval of time, volume, space, area, etc.
3. The mean number of successes μ over that interval of time, volume, space, area, etc. is given. (Note: some textbooks and calculators use lambda, λ, instead of mu, μ, for the mean.)

The formula for the Poisson distribution is P(X = x) = $\frac{e^{-\mu} \mu^{x}}{x !}$, where e is a mathematical constant approximately equal to 2.71828, x = 0, 1, 2, … is the number of successes that you are trying to find the probability for, and μ is the mean number of successes over one interval of time, space, volume, etc. The value of x has no stopping point since there is no set sample size like the binomial distribution. Note that e is not a variable; it is a constant number. Use the $e^x$ button on your calculator.

Mean, Variance & Standard Deviation of a Poisson Distribution
For a Poisson distribution, μ, the expected number of successes, and the variance σ² are equal to one another.
$\mu=\sigma^{2} \quad \sigma=\sqrt{\sigma^{2}}$

Sometimes the question will ask for a probability over a different unit of time, space or area than originally given in the problem. Always change the mean to fit the new units in the question. This formula will help in correctly rescaling the mean to fit the question's new units: New μ = old μ($\frac {\text {new units}}{\text {old units}}$). The "old" is the originally stated mean and units, and the "new" is the units from the question.

TI-84: Press [2nd] [DISTR]. This will get you a menu of probability distributions. Press [ALPHA] B or arrow down to B:poissonpdf( and press [ENTER]. This puts poissonpdf( on the home screen. Enter the values for μ and x with a comma between each. Press [ENTER]. This is the probability density function and will return the probability of exactly x successes. Press [ALPHA] C or arrow down to C:poissoncdf( and press [ENTER]. This puts poissoncdf( on the home screen. Enter the values for μ and x with a comma between each. Press [ENTER]. This is the cumulative distribution function and will return the probability of at most x successes.

TI-89: Go to the [Apps] Stat/List Editor, then select F5 [DISTR]. This will get you a menu of probability distributions. Arrow down to Poisson Pdf and press [ENTER]. Enter the values for μ and x into each cell. Press [ENTER]. This is the probability density function and will return the probability of exactly x successes. Arrow down to Poisson Cdf and press [ENTER]. Enter the values for μ and the lower and upper values of x into each cell. Press [ENTER]. This is the cumulative distribution function and will return the probability between the lower and upper x-values, inclusive.

Excel: Use the formula =POISSON.DIST(x,mean,FALSE) for P(X = x). Use the formula =POISSON.DIST(x,mean,TRUE) for P(X ≤ x).

There are about 8 million individuals in New York City. Using historical records, the average number of individuals hospitalized for acute myocardial infarction (AMI), i.e. a heart attack, each day is 4.4 individuals. What is the probability that exactly 6 people are hospitalized for AMI in NY City tomorrow?
Solution
First, note that the probability question is over the same unit of time, one day, for which the average is given. It is important that these units always match. The random variable is X = the number of individuals hospitalized each day in NY City for an AMI. We are trying to find P(X = 6), and the mean is μ = 4.4 people. Using the formula: P(X = 6) = $\frac{e^{-4.4} 4.4^{6}}{6 !}$ = 0.1237. TI-84: poissonpdf(4.4,6) = 0.1237. Excel: =POISSON.DIST(6,4.4,FALSE) = 0.1237.

A bank drive-through has an average of 10 customers every hour. Find the following probabilities.
a) Compute the probability that no customers arrive in an hour.
b) Compute the probability that exactly 2 customers use the drive-through in an hour.
c) Compute the probability that exactly 2 customers use the drive-through in a 30-minute period.
d) Compute the probability that fewer than 2 customers use the drive-through in a half hour.

Solution
a) In this case, X is the number of customers that use the drive-through. We are trying to find the probability that x = 0. The average is given in the problem as μ = 10. This gives P(X = 0) = $\frac{e^{-10} 10^{0}}{0 !}$ = 4.53999E-5 = 0.0000454. TI-84: poissonpdf(10,0) = 0.0000454. Excel: =POISSON.DIST(0,10,FALSE) = 0.0000454.

b) P(X = 2) = $\frac{e^{-10} 10^{2}}{2 !}$ = 0.0023. TI-84: P(X = 2) = poissonpdf(10,2) = 0.0023. Excel: =POISSON.DIST(2,10,FALSE) = 0.0023.

c) The unit of time has changed from 1 hour to 30 minutes, so we need to rescale the mean to fit the new unit of time. Use the following formula to convert the units: New μ = old μ($\frac {\text {new units}}{\text {old units}}$) = 10($\frac {\text {30 minutes}}{\text {60 minutes}}$) = 5. P(X = 2) = $\frac{e^{-5} 5^{2}}{2 !}$ = 0.0842. TI-84: P(X = 2) = poissonpdf(5,2) = 0.0842. Excel: P(X = 2) =POISSON.DIST(2,5,FALSE) = 0.0842. Note: always rescale the mean to fit the units of the question, not the other way around.

d) The time period is half an hour = 30 minutes, so the mean from part c, μ = 5, applies. To find "fewer than" 2 we would have zero or one customer. P(X < 2) = P(X = 0) + P(X = 1) = $\frac{e^{-5} 5^{0}}{0 !}$ + $\frac{e^{-5} 5^{1}}{1 !}$ = 0.0067 + 0.0337 = 0.0404. TI-84: P(X < 2) = poissoncdf(5,1) = 0.0404. Excel: =POISSON.DIST(1,5,TRUE) = 0.0404.

So far, most of the examples for the Poisson distribution were for exactly x successes. If we want to find the probability of an accumulation of x values, then we would use the cumulative distribution function (cdf) instead of the pdf. As you read through a problem, look for some of the key phrases in Figure 5-9. Once you find the phrase, match it up to what sign you would use, and then use the table to walk you through the computer or calculator formula.

Figure 5-9

A bank drive-through has an average of 10 customers every hour. Find the following probabilities.
a) Compute the probability of at least four customers arriving in an hour.
b) Compute the probability of at most four customers arriving in an hour.
c) Compute the probability of fewer than four customers arriving in an hour.
d) Compute the probability of more than four customers arriving in an hour.
e) Compute the probability that fewer than 2 customers arrive in a 15-minute period.

Solution
a) Because we do not have a set sample size to stop at like the binomial distribution, we would have to find the probability of 4, 5, 6, and so on, until the answers are small enough that they stop changing the sum out to at least four decimal places.
This may take a lot of work; instead, we will use the complement rule. The complement of "at least 4" is "3 or fewer." Find P(X ≥ 4) = 1 – P(X ≤ 3) = 1 – [P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)] = $1-\left(\frac{e^{-10} 10^{0}}{0 !}+\frac{e^{-10} 10^{1}}{1 !}+\frac{e^{-10} 10^{2}}{2 !}+\frac{e^{-10} 10^{3}}{3 !}\right)$ = 1 – (0.000045 + 0.000454 + 0.00227 + 0.007567) = 1 – 0.0103 = 0.9897. Note that the boundary value of x went from 4 to 3: the complement of "at least 4" is "at most 3." TI-84: 1 – poissoncdf(10,3) = 0.9897. Excel: =1-POISSON.DIST(3,10,TRUE) = 0.9897.

b) P(X ≤ 4) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4) = $\frac{e^{-10} 10^{0}}{0 !}+\frac{e^{-10} 10^{1}}{1 !}+\frac{e^{-10} 10^{2}}{2 !}+\frac{e^{-10} 10^{3}}{3 !}+\frac{e^{-10} 10^{4}}{4 !}$ = 0.000045 + 0.000454 + 0.00227 + 0.007567 + 0.018917 = 0.0293. TI-84: poissoncdf(10,4) = 0.0293. Excel: =POISSON.DIST(4,10,TRUE) = 0.0293.

c) P(X < 4) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) = $\frac{e^{-10} 10^{0}}{0 !}+\frac{e^{-10} 10^{1}}{1 !}+\frac{e^{-10} 10^{2}}{2 !}+\frac{e^{-10} 10^{3}}{3 !}$ = 0.000045 + 0.000454 + 0.00227 + 0.007567 = 0.0103. TI-84: poissoncdf(10,3) = 0.0103. Excel: =POISSON.DIST(3,10,TRUE) = 0.0103.

d) Using part b, P(X > 4) = 1 – P(X ≤ 4) = 1 – 0.0293 = 0.9707. TI-84: 1 – poissoncdf(10,4) = 0.9707. Excel: =1-POISSON.DIST(4,10,TRUE) = 0.9707.

e) The unit of time has changed from 1 hour to 15 minutes, so we need to rescale the mean to fit the new unit of time: New μ = old μ($\frac {\text {new units}}{\text {old units}}$) = 10($\frac {\text {15 minutes}}{\text {60 minutes}}$) = 2.5. P(X < 2) = P(X = 0) + P(X = 1) = $\frac{e^{-2.5} 2.5^{0}}{0 !}$ + $\frac{e^{-2.5} 2.5^{1}}{1 !}$ = 0.0821 + 0.2052 = 0.2873. TI-84: poissoncdf(2.5,1) = 0.2873. Excel: =POISSON.DIST(1,2.5,TRUE) = 0.2873.

The last ever dolphin message was misinterpreted as a surprisingly sophisticated attempt to do a double-backwards-somersault through a hoop whilst whistling the "Star Spangled Banner," but in fact the message was this: So long and thanks for all the fish. (Adams, 2002)
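If you prefer a programming environment to a calculator, the drive-through probabilities above can be reproduced with Python's scipy library; a minimal sketch, assuming scipy is installed (this is a check on the TI-84/Excel work, not part of the text's required tools):

```python
# Checking the Poisson drive-through probabilities (assumes scipy is installed).
from scipy.stats import poisson

mu = 10                            # mean customers per hour

print(poisson.pmf(2, mu))          # P(X = 2), "exactly 2" ≈ 0.0023
print(poisson.cdf(4, mu))          # P(X ≤ 4), "at most 4" ≈ 0.0293
print(1 - poisson.cdf(3, mu))      # P(X ≥ 4), "at least 4" ≈ 0.9897
print(poisson.sf(4, mu))           # P(X > 4), "more than 4" ≈ 0.9707

# Rescale the mean for a 15-minute window: new mu = 10*(15/60) = 2.5.
mu_15 = mu * (15 / 60)
print(poisson.cdf(1, mu_15))       # P(X < 2), "fewer than 2" ≈ 0.2873
```

Note that poisson.cdf(3, mu) plays the same role as poissoncdf(10,3) on the TI-84, and sf is the survival function, 1 – cdf.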
Chapter 5 Exercises

1. Determine if the following tables are valid discrete probability distributions. If they are not, state why.
a)
x: –5, –2.5, 0, 2.5, 5
P(X = x): 0.15, 0.25, 0.32, 0.18, 0.1
b)
x: 0, 1, 2, 3, 4
P(X = x): 0.111, 0.214, 0.312, 0.163, 0.159
c)
x: 0, 1, 2, 3, 4
P(X = x): 0.2, –0.3, 0.5, 0.4, 0.2

2. The random variable X = the number of vehicles owned.
x: 0, 1, 2, 3, 4
P(X = x): 0.1, 0.35, 0.25, 0.2, 0.1
a) Compute the probability that a person owns at least 2 vehicles.
b) Compute P(X > 2).
c) Compute the probability that a person owns fewer than 2 vehicles.
d) Compute the expected number of vehicles owned.
e) Compute the standard deviation of the number of vehicles owned.
f) Compute σ².

3. The following discrete probability distribution represents the amount of money won for a raffle game.
x: –5, –2.5, 0, 2.5, 5
P(X = x): 0.15, 0.25, 0.32, 0.18, 0.1
a) Compute μ.
b) Compute σ.

4. Keke's Kookies sells mini cookies in packs of 5 and has determined a probability distribution for the number of cookies that they sell in a given day.
x = # sold: 0, 5, 10, 15, 20
P(X = x): 0.22, 0.38, 0.14, ?, 0.07
a) What is the probability of selling 15 mini cookies in a given day?
b) Find the expected number of mini cookies sold in a day using the discrete probability distribution.
c) Find the variance of the number of mini cookies sold in a day using the discrete probability distribution.

5. The bookstore also offers a chemistry textbook for \$159 and a book supplement for \$41. From experience, they know about 25% of chemistry students just buy the textbook while 60% buy both the textbook and supplement. Compute the standard deviation of the bookstore revenue.

6. A \$100,000 life insurance policy for a 50-year-old woman has an annual cost of \$335. The probability that a 50-year-old woman will die is 0.003118. What is the expected value of the policy for the woman's estate?

7. An LG Dishwasher, which costs \$1000, has a 24% chance of needing to be replaced in the first 2 years of purchase. If the company has to replace the dishwasher within the two-year extended warranty, it will cost the company \$112.10 to replace the dishwasher.
a) Fill out the probability distribution for the value of the extended warranty from the perspective of the company.
x:
P(X = x):
b) What is the expected value of the extended warranty?
c) Write a sentence interpreting the expected value of the warranty.

8. The Oregon lottery has a game called Pick 4 where a player pays \$1 and picks a four-digit number. If the four numbers come up in the order you picked, then you win \$2000.
a) Fill out the probability distribution for a player's winnings.
x:
P(X = x):
b) What are your expected winnings?
c) Write a sentence interpreting the expected winnings.

9. The following table represents the probability of the number of pets owned by a college student.
x: 0, 1, 2, 3
P(X = x): 0.46, 0.35, 0.12, 0.07
a) Is this a valid discrete probability distribution? Explain your answer.
b) Find the mean number of pets owned.
c) Find the standard deviation of the number of pets owned.
d) Find σ².

10. Suppose a random variable, X, arises from a geometric distribution. If p = 0.13, compute P(X = 4).

11. Approximately 10% of all people are left-handed. You randomly sample people until you get someone who is left-handed. What is the probability that the 4th person selected will be the first left-handed person?

12. An actress has a probability of getting offered a job after a tryout of 0.12. She plans to keep trying out for new jobs until she is offered a job. Assume outcomes of tryouts are independent.
Compute the probability she will need to attend more than 7 tryouts.

13. A fair coin is flipped until a head is shown. What is the probability that a head shows on the 6th flip?

14. When you post a picture on social media, it seems like your friends randomly "like" the picture. Independent of the quality or the humor of the photo, there seems to be an 18% chance of the picture being "liked" for any given picture. Let X represent the number of pictures you post until one is "liked." (X represents the photo number that is actually liked.) Compute P(X = 15).

15. Suppose a random variable, X, arises from a binomial experiment. If n = 14 and p = 0.13, find the following probabilities.
a) P(X = 3)
b) P(X ≤ 3)
c) P(X < 3)
d) P(X > 3)
e) P(X ≥ 3)

16. Suppose a random variable, X, arises from a binomial experiment. If n = 25 and p = 0.85, find the following probabilities.
a) P(X = 15)
b) P(X ≤ 15)
c) P(X < 15)
d) P(X > 15)
e) P(X ≥ 15)

17. Suppose a random variable, X, arises from a binomial experiment. If n = 14 and p = 0.13, compute the standard deviation.

18. Suppose a random variable, X, arises from a binomial experiment. If n = 25 and p = 0.85, compute the variance.

19. A fair coin is flipped 30 times.
a) What is the probability of getting exactly 15 heads?
b) What is the probability of getting 15 or more heads?
c) What is the probability of getting at most 15 heads?
d) How many times would you expect to get heads?
e) What is the standard deviation of the number of heads?

20. Approximately 10% of all people are left-handed. Out of a random sample of 15 people, find the following.
a) What is the probability that 4 of them are left-handed?
b) What is the probability that fewer than 4 of them are left-handed?
c) What is the probability that at most 4 of them are left-handed?
d) What is the probability that at least 4 of them are left-handed?
e) What is the probability that more than 4 of them are left-handed?
f) Compute μ.
g) Compute σ.
h) Compute σ².

21. Approximately 8% of all people have blue eyes. Out of a random sample of 20 people, find the following.
a) What is the probability that 2 of them have blue eyes?
b) What is the probability that at most 2 of them have blue eyes?
c) What is the probability that fewer than 2 of them have blue eyes?
d) What is the probability that at least 2 of them have blue eyes?
e) What is the probability that more than 2 of them have blue eyes?
f) Compute μ.
g) Compute σ.
h) Compute σ².

22. An unprepared student takes a 10-question TRUE/FALSE quiz and ends up guessing each answer.
a) What is the probability that the student got 7 questions correct?
b) What is the probability that the student got 7 or more questions correct?

23. A local county has an unemployment rate of 7.3%. A random sample of 20 employable people is picked at random from the county, and each is asked if they are employed. The distribution is binomial. Round answers to 4 decimal places.
a) Find the probability that exactly 3 in the sample are unemployed.
b) Find the probability that fewer than 4 in the sample are unemployed.
c) Find the probability that more than 2 in the sample are unemployed.
d) Find the probability that at most 4 in the sample are unemployed.

24. About 1% of the population has a particular genetic mutation. Find the standard deviation for the number of people with the genetic mutation in a group of 100 randomly selected people from the population.

25. You really struggle remembering to bring your lunch to work.
Each day seems to be independent as to whether you remember to bring your lunch or not. The chance that you forget your lunch each day is 25.6%. Consider the next 48 days. Let X be the number of days that you forget your lunch out of the 48 days. Compute P(10 ≤ X ≤ 14).

26. A flu vaccine has a 90% effective rate. If a random sample of 200 people is given the vaccine, what is the probability that at most 180 people do not get the flu?

27. The Lee family had 6 children. Assuming that the probability of a child being a girl is 0.5, find the probability that the Lee family had at least 4 girls.

28. If a seed is planted, it has a 70% chance of growing into a healthy plant. If 142 randomly selected seeds are planted, answer the following.
a) What is the probability that exactly 100 of them grow into a healthy plant?
b) What is the probability that fewer than 100 of them grow into a healthy plant?
c) What is the probability that more than 100 of them grow into a healthy plant?
d) What is the probability that exactly 103 of them grow into a healthy plant?
e) What is the probability that at least 103 of them grow into a healthy plant?
f) What is the probability that at most 103 of them grow into a healthy plant?

29. A manufacturing machine has a 6% defect rate. An inspector chooses 4 items at random.
a) What is the probability that at least one will have a defect?
b) What is the probability that exactly two will have a defect?
c) What is the probability that fewer than two will have a defect?
d) What is the probability that more than one will have a defect?

30. A large fast-food restaurant is having a promotional game where game pieces can be found on various products. Customers can win food or cash prizes. According to the company, the probability of winning a prize (large or small) with any eligible purchase is 0.162. Consider your next 33 purchases that produce a game piece. Calculate the following:
a) What is the probability that you win 5 prizes?
b) What is the probability that you win more than 8 prizes?
c) What is the probability that you win between 3 and 7 (inclusive) prizes?
d) What is the probability that you win 3 prizes or fewer?

31. A small regional carrier accepted 20 reservations for a particular flight with 17 seats. 15 reservations went to regular customers who will arrive for the flight. Each of the remaining passengers will arrive for the flight with a 60% chance, independently of each other.
a) Find the probability that overbooking occurs.
b) Find the probability that the flight has empty seats.

32. A poll is given, showing 72% are in favor of a new building project. Let X be the number of people who favor the new building project when 37 people are chosen at random. What is the probability that between 10 and 16 (including 10 and 16) people out of 37 favor the new building project?

33. A committee of 5 people is to be formed from 10 students and 7 parents. Compute the probability that the committee will consist of exactly 3 students and 2 parents.

34. In endurance horse racing, people over the age of 15 are called seniors and people 15 years or younger are called juniors. An endurance horse race consists of 25 seniors and 5 juniors. What is the probability that the top three finishers were:
a) All seniors.
b) All juniors.
c) 2 seniors and one junior.
d) 1 senior and two juniors.

35. A bag contains 9 strawberry Starbursts and 21 other flavored Starbursts. 5 Starbursts are chosen randomly without replacement. Find the probability that 3 of the Starbursts drawn are strawberry.
36. A jury selection room has 12 people that are married and 20 people that are not married to choose from. What is the probability that in a jury of 12 randomly selected people, exactly 3 of them would be married?

37. A pharmaceutical company receives large shipments of ibuprofen tablets and uses this acceptance sampling plan: randomly select and test 25 tablets, then accept the whole batch if there is at most one that doesn't meet the required specifications. If a particular shipment of 100 ibuprofen tablets actually has 5 tablets that have defects, what is the probability that this whole shipment will be accepted?

38. In a shipment of 24 keyboards, there are 3 that are defective (a success is a defective keyboard). A random sample of 4 keyboards is selected. The shipment will be returned if one or more of the 4 keyboards in the sample is defective. What is the probability that the shipment will be returned?

39. A writer makes on average one typographical error every page. The writer has landed a 3-page article in an important magazine. If the magazine editor finds any typographical errors, they probably will not ask the writer for any more material. What is the probability that the writer made no typographical errors in the 3-page article?

40. A coffee shop serves an average of 75 customers per hour during the morning rush, which follows a Poisson distribution. Find the following.
a) Compute the probability that 80 customers arrive in an hour during tomorrow's morning rush.
b) Compute the probability that fewer than 60 customers arrive in an hour during tomorrow's morning rush.
c) Compute the probability that more than 60 customers arrive in an hour during tomorrow's morning rush.

41. Suppose a random variable, X, follows a Poisson distribution. Let μ = 2.5 every minute; find the following probabilities.
a) P(X = 5) over a minute.
b) P(X < 5) over a minute.
c) P(X ≤ 5) over a minute.
d) P(X > 5) over a minute.
e) P(X ≥ 5) over a minute.
f) P(X = 125) over an hour.
g) P(X ≥ 125) over an hour.

42. The PSU computer help line receives, on average, 14 calls per hour asking for assistance. What is the probability that the company will receive more than 20 calls per hour?

43. There are on average 5 old-growth Sitka Spruce trees per 1/8 of an acre in a local forest.
a) Compute the probability that there are exactly 30 Sitka Spruce trees in 1 acre.
b) Compute the probability that there are more than 8 Sitka Spruce trees in a 1/4 acre.

44. The number of rescue calls received by Pacific Northwest Search & Rescue follows a Poisson distribution with an average of 2.83 rescues every eight hours.
a) What is the probability that the squad will have exactly 4 calls in two hours?
b) What is the probability that the company will receive 2 calls in a 12-minute period?
c) What is the probability that the squad will have at most 2 calls in an hour?

45. Suppose a random variable, X, follows a Poisson distribution. Let μ = 3 every day; compute P(X ≤ 12) over a week.

Answers to Odd-Numbered Exercises

1) a) Yes b) No c) No
3) a) –\$0.425 b) \$2.9592
5) \$69.283
7) a)
x: –112.1, 887.9
P(X = x): 0.76, 0.24
b) \$127.90
c) For many of these extended warranties bought by customers, the company can expect to gain \$127.90 per warranty on average.
9) a) Yes, Σ P(x) = 1 and 0 ≤ P(x) ≤ 1 b) 0.8 c) 0.9055 d) 0.82
11) 0.0729
13) 0.0156
15) a) 0.1728 b) 0.9021 c) 0.7292 d) 0.0979 e) 0.2708
17) 1.2583
19) a) 0.1445 b) 0.5722 c) 0.5722 d) 15 e) 2.7386
21) a) 0.2711 b) 0.7879 c) 0.5169 d) 0.4831 e) 0.2121 f) 1.6 g) 1.2133 h) 1.472
23) a) 0.1222 b) 0.9464 c) 0.1759 d) 0.9873
25) 0.5923
27) 0.3438
29) a) 0.2193 b) 0.0191 c) 0.9801 d) 0.0199
31) a) 0.6826 b) 0.087
33) 0.4072
35) 0.1238
37) 0.6328
39) 0.0498
41) a) 0.0668 b) 0.8912 c) 0.958 d) 0.042 e) 0.1088 f) 0.0039 g) 0.9835
43) a) 0.0185 b) 0.6672
45) 0.0245

5.08: Chapter 5 Formulas

Discrete Distribution Table: 0 ≤ P(xᵢ) ≤ 1 and $\sum P(x_i) = 1$
Discrete Distribution Mean: $\mu = \sum (x_i \cdot P(x_i))$
Discrete Distribution Variance: $\sigma^{2} = \sum (x_i^{2} \cdot P(x_i)) - \mu^{2}$
Discrete Distribution Standard Deviation: σ = $\sqrt{\sigma^{2}}$
Geometric Distribution: P(X = x) = $p \cdot q^{(x-1)}$, x = 1, 2, 3, …
Geometric Distribution Mean: μ = $\frac{1}{p}$; Variance: $\sigma^{2} = \frac{1-p}{p^{2}}$; Standard Deviation: σ = $\sqrt{\frac{1-p}{p^{2}}}$
Binomial Distribution: P(X = x) = ${}_n C_x \cdot p^{x} \cdot q^{(n-x)}$, x = 0, 1, 2, …, n
Binomial Distribution Mean: μ = n ∙ p; Variance: σ² = n ∙ p ∙ q; Standard Deviation: σ = $\sqrt{n \cdot p \cdot q}$
Hypergeometric Distribution: P(X = x) = $\frac{{}_a C_{x} \cdot {}_b C_{n-x}}{{}_N C_{n}}$
p = P(success), q = P(failure) = 1 – p, n = sample size, N = population size
Unit Change for Poisson Distribution: New μ = old μ($\frac{\text { new units }}{\text { old units }}$)
Poisson Distribution: P(X = x) = $\frac{e^{-\mu} \mu^{x}}{x !}$
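As a quick check of the discrete distribution formulas above, here is a minimal Python sketch using only the standard library; the table values are the raffle game from Exercise 3, so the output matches the printed answers:

```python
# Discrete distribution mean, variance, and standard deviation
# (plain Python; table values are the raffle game from Exercise 3).
from math import sqrt

x = [-5, -2.5, 0, 2.5, 5]
p = [0.15, 0.25, 0.32, 0.18, 0.10]

mu = sum(xi * pi for xi, pi in zip(x, p))              # mean: sum of x*P(x)
var = sum(xi**2 * pi for xi, pi in zip(x, p)) - mu**2  # variance: sum of x^2*P(x) minus mu^2
sigma = sqrt(var)                                      # standard deviation

print(mu, sigma)   # -0.425 and 2.9592..., matching answer 3
```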
A continuous random variable (usually denoted as X) is a variable that has an infinite number of possible values in an interval of numbers. There are many different types of continuous distributions. To be a valid continuous distribution, the total area under the curve has to be equal to one and the function's y-values must be non-negative. For example, we may have a random variable that is uniformly distributed, so we could use the uniform distribution, which looks like a rectangle. See Figure 6-1.

Figure 6-1

We may want to model the time it takes customer service to complete a call with the exponential distribution. See Figure 6-2.

Figure 6-2

We may have standardized test scores that follow a bell-shaped curve like the Gaussian (normal) distribution. See Figure 6-3.

Figure 6-3

We may want to model the average time it takes for a component to be manufactured and use the bell-shaped Student t-distribution. See Figure 6-4.

Figure 6-4

This is just an introductory course, so we are only going to cover a few distributions. If you want to explore more distributions, check out the chart by Larry Leemis at: http://www.math.wm.edu/~leemis/chart/UDR/UDR.html.

Very Important

The probability of an interval between two X values is equal to the area under the density curve between those two X values. For a discrete random variable, we can assign probabilities to each outcome. We cannot do this for a continuous random variable. The probability of a single X value for a continuous random variable is 0. Thus "<" and ">" are equivalent to "≤" and "≥." In other words,

$P(a ≤ X ≤ b) = P(a < X < b) = P(a ≤ X < b) = P(a < X ≤ b) \nonumber$

since a line has no area. We now will look at some specific models that have been found useful in practice. Consider an experiment that consists of observing events in a certain time frame, such as buses arriving at a bus stop or telephone calls coming into a switchboard during a specified period. It may then be of interest to place a probability distribution on the actual time of occurrence. In this section, we will tell you which distribution to use in the question.

6.02: Uniform Distribution

The continuous uniform distribution models a probability that is the same everywhere on an interval from a to b. We use the following probability density function (PDF), which graphs as a flat, horizontal line:

f(x) = $\begin{cases}\frac{1}{b-a}, & \text{for } a \leq x \leq b \\ 0, & \text{elsewhere}\end{cases}$

The probability is found by taking the area of a rectangle: the base runs along the x-axis between the two x values of interest (within the endpoints a and b), and the height is f(x) = 1/(b – a). When working with continuous distributions, it is helpful to draw a picture of the distribution, then shade in the area of the probability that you are trying to find. See Figure 6-5.

Figure 6-5

If a continuous random variable X has a uniform distribution with starting point a and ending point b, then the distribution is denoted as X ~ U(a, b).

Area of a Rectangle = length × height

To find the probability (area) under the uniform distribution, use the following formulas.
• $\mathrm{P}(X \geq x)=\mathrm{P}(X>x)=\left(\frac{1}{b-a}\right) \cdot(b-x)$
• $\mathrm{P}(X \leq x)=\mathrm{P}(X<x)=\left(\frac{1}{b-a}\right) \cdot(x-a)$
• $\mathrm{P}\left(x_{1} \leq X \leq x_{2}\right)=\mathrm{P}\left(x_{1}<X<x_{2}\right)=\left(\frac{1}{b-a}\right) \cdot\left(x_{2}-x_{1}\right)$

The arrival time between trains at a train stop is uniformly distributed between 0 and 15 minutes.
A student does not check the schedule and has arrived at the train stop.

a) Compute the probability they wait more than 10 minutes.
b) Compute the probability of waiting between 2 and 8 minutes.

Solution

a) First, plug the endpoints a = 0 and b = 15 into the PDF to get the height of the rectangle. The height is f(x) = $\frac{1}{15-0}=\frac{1}{15}$. Draw and label the distribution with a, b and the height as in Figure 6-6. The probability is the area of the shaded rectangle P(X > 10). Draw a vertical line at x = 10. We want x values that are greater than 10, so shade the area to the right of 10, stopping at b = 15. To find the area of the shaded rectangle in Figure 6-6, we can take the length times the height. The length would be b – 10 = 15 – 10 = 5 and the height is f(x) = 1/15.

Figure 6-6

The area of the shaded rectangle is 5($\frac{1}{15}$) = $\frac{1}{3}$ = 0.3333, or P(X > 10) = 0.3333, which is the probability of waiting more than 10 minutes. Note that this would be the same if we asked for P(X ≥ 10) = 0.3333, since there is no area at the line X = 10.

b) The area will again be length times height. Draw the picture and shade the rectangle between 2 and 8; see Figure 6-7. The length is 8 – 2 = 6 and the height is still f(x) = 1/15. P(2 ≤ X ≤ 8) = 6($\frac{1}{15}$) = 0.4.

Figure 6-7

6.03: Exponential Distribution

An exponential distribution models a continuous random variable over time, area or space where larger values become less and less likely; the probability density decreases as x gets larger. The probability density function (PDF) for an exponential curve is

f(x) = $\begin{cases}\lambda e^{-\lambda x}, & \text{for } x \geq 0 \\ 0, & \text{elsewhere}\end{cases}$

The value lambda, λ, is the fixed rate of occurrence and is equal to one divided by the mean, $\frac{1}{\mu}$. If the mean is given in the problem, then you can write the PDF as f(x) = $\frac{1}{\mu} e^{\left(-\frac{x}{\mu}\right)}$, where e is a mathematical constant approximately equal to 2.71828, x ≥ 0 is the value you are trying to find the probability for, and μ is the mean of the distribution over an interval of time, space, volume, etc. The distribution is denoted as X ~ Exp(λ). Figure 6-8 gives example graphs for a mean of 5, 10 and 20. Note the curve hits the y-axis at 1/μ and keeps going forever to the right with an asymptote at y = 0.

Figure 6-8

You would need integral calculus skills to find the area under this curve. To get around the calculus requirement, we have three formulas that we can use to find probability for an exponential distribution without using the PDF directly. To find the probability (area) under the exponential curve, use the following formulas.
• $\mathrm{P}(X \geq x)=\mathrm{P}(X>x)=\mathrm{e}^{-x / \mu}$
• $\mathrm{P}(X \leq x)=\mathrm{P}(X<x)=1-\mathrm{e}^{-x / \mu}$
• $\mathrm{P}\left(x_{1} \leq X \leq x_{2}\right)=\mathrm{P}\left(x_{1}<X<x_{2}\right)=e^{\left(-x_{1} / \mu\right)}-e^{\left(-x_{2} / \mu\right)}$

The time it takes to help a customer at the customer service desk is exponentially distributed with an average help time of 45 seconds. Find the probability that a customer waits less than two minutes.

Solution

We need to use the same units as the mean in the question, so instead of finding P(X < 2 minutes) we will use P(X < 120 seconds). Also note that < and ≤ give the same probabilities, so use the equation P(X < x) = 1 – e^(–x/μ):

$\mathrm{P}(X<120)=1-\mathrm{e}^{-120 / 45}=0.9305 \nonumber$

In Excel use =EXPON.DIST(x,λ,TRUE): =EXPON.DIST(120,1/45,TRUE) = 0.9305.
Alternatively, as shown in Figure 6-9, the following website will calculate the exponential probability: https://homepage.divms.uiowa.edu/~mbognar/applets/exp.html.

Figure 6-9
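Both continuous examples above can also be checked in Python; a minimal sketch, assuming scipy is installed. (Note that scipy's uniform takes a starting point loc and a width scale, and its expon is parameterized by the mean as the scale.)

```python
# Checking the uniform and exponential examples (assumes scipy is installed).
from scipy.stats import uniform, expon

# Train wait time: X ~ U(0, 15); loc = a, scale = b - a.
wait = uniform(loc=0, scale=15)
print(wait.sf(10))                  # P(X > 10) ≈ 0.3333
print(wait.cdf(8) - wait.cdf(2))    # P(2 ≤ X ≤ 8) = 0.4

# Customer service time: exponential with mean 45 seconds.
call = expon(scale=45)
print(call.cdf(120))                # P(X < 120 seconds) ≈ 0.9305
```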
Empirical Rule

Before looking at the process for finding probabilities under a normal curve, recall the Empirical Rule, which gives approximate values for areas under a bell-shaped distribution. The Empirical Rule, shown in Figure 6-10, is just an approximation for the probability under any bell-shaped distribution and will only be used in this section to give you an idea of the size of the probability for different shaded areas. A more precise method for finding probabilities will be demonstrated using technology. Please do not use the Empirical Rule in the homework questions except for rough estimates.

The Empirical Rule (or 68-95-99.7 Rule): In a bell-shaped distribution with mean μ and standard deviation σ,
• Approximately 68% of the observations fall within one standard deviation (σ) of the mean μ.
• Approximately 95% of the observations fall within two standard deviations (2σ) of the mean μ.
• Approximately 99.7% of the observations fall within three standard deviations (3σ) of the mean μ.

Figure 6-10

For now, we will be working with the most common bell-shaped probability distribution, known as the normal distribution, also called the Gaussian distribution, named after the German mathematician Johann Carl Friedrich Gauss. See Figure 6-11.

Figure 6-11

A normal distribution is a special type of distribution for a continuous random variable. Normal distributions are important in statistics because many situations in the real world have normal distributions.

Properties of the normal density curve:
1. Symmetric bell shape.
2. Unimodal (one mode).
3. Centered at the mean μ = median = mode.
4. The total area under the curve is equal to 1 or 100%.
5. The spread of a normal distribution is determined by the standard deviation σ. The larger σ is, the more spread out the normal curve is from the mean.
6. Follows the Empirical Rule.

If a continuous random variable X has a normal distribution with mean μ and standard deviation σ, then the distribution is denoted as X ~ N(μ, σ). Any x value from a normal distribution can be transformed, or standardized, into a standard normal distribution by taking the z-score of x.

The formula for the normal probability density function is $f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right)}$. We will not be using this formula. The probability is found by using integral calculus to find the area under the PDF curve. Prior to handheld calculators and personal computers, there were probability tables made to look up these areas. This text does not use probability tables and will instead rely on technology to compute the area under the curve.

Every time the mean or standard deviation changes, the shape of the normal distribution changes. The center of the normal curve is the mean, and the normal curve gets wider as the standard deviation gets larger. Figure 6-12 compares two normal distributions: N(0, 1) in green on the left and N(7, 6) in blue on the right.

Figure 6-12

"'So, what's odd about it?' 'Nothing, it's Perfectly Normal.'" (Adams, 2002)

6.4.1 Standard Normal Distribution

A normal distribution with mean μ = 0 and standard deviation σ = 1 is called the standard normal distribution. The letter Z is used exclusively to denote a variable that has a standard normal distribution and is written Z ~ N(0, 1). A particular value of Z is denoted z (lower-case) and is referred to as a z-score. Recall that a z-score is the number of standard deviations x is from the mean.
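If you happen to be working in Python, the exact normal-curve areas behind the 68-95-99.7 approximations are easy to verify; a minimal sketch, assuming the scipy library is installed:

```python
# Exact areas under a normal curve within k standard deviations of the mean
# (assumes scipy is installed; a check, not part of the text's TI/Excel workflow).
from scipy.stats import norm

for k in (1, 2, 3):
    area = norm.cdf(k) - norm.cdf(-k)   # P(-k < Z < k) for Z ~ N(0, 1)
    print(k, round(area, 4))            # prints 0.6827, 0.9545, 0.9973
```

The exact values 0.6827, 0.9545, and 0.9973 are where the rounded 68-95-99.7 figures come from.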
Anytime you are asked to find a probability for Z, use the standard normal distribution.

Standardizing and z-scores: A z-score is the number of standard deviations an observation x is above or below the mean μ. If the z-score is negative, x is below the mean. If the z-score is positive, x is above the mean. If x is an observation from a distribution that has mean μ and standard deviation σ, the standardized value of x (or z-score) is $\mathrm{z}=\frac{x-\mu}{\sigma}$.

Finding the area under the probability density curve involves calculus, so we will rely on technology to find the area.

Compute the area under the standard normal distribution to the left of z = 1.39.

Solution

First, draw a bell-shaped distribution with 0 in the middle as shown in Figure 6-13. Mark 1.39 on the number line and shade to the left of z = 1.39.

Figure 6-13

Note that the lower value of the shaded region is –∞, which the TI-84 does not have. Instead we use a really small number in scientific notation: -1E99, that is, -1×10^99 (make sure you use the negative sign (-), not the minus – sign). The normalcdf function on the calculator needs the lower and upper values of the shaded area followed by the mean and standard deviation. (The TI-89 uses -∞ for the lower boundary instead of -1E99.)

TI-84: Press the [2nd] [DISTR] menu and select normalcdf. Then type in the lower value, upper value, mean = 0, standard deviation = 1 to get normalcdf(-1E99,1.39,0,1) = 0.9177, which is your answer. The area under the curve is equivalent to the probability of getting a z-score less than 1.39, or P(Z < 1.39) = 0.9177.

TI-89: Go to the [Apps] Stat/List Editor, then select F5 [DISTR]. This will get you a menu of probability distributions. Arrow down to Normal Cdf and press [ENTER]. Enter the values for the lower z value (z1), upper z value (z2), μ = 0, and σ = 1 into each cell. Press [ENTER]. This is the cumulative distribution function and will return P(z1 < Z < z2). For a left-tail area use a lower bound of negative infinity (-∞), and for a right-tail area use an upper bound of infinity (∞).

Excel: Excel will only find the area to the left of a point. Therefore, if we want to find the area to the right of a point or between two points, there will be one extra step. Use the formula =NORM.S.DIST(1.39,TRUE).

Compute the probability of getting a z-score between –1.37 and 1.68.

Solution

P(-1.37 ≤ Z ≤ 1.68) is the same as the area under the curve between –1.37 and 1.68. First, draw a bell-shaped distribution and identify the two points on the number line. Shade the area between the two points as shown in Figure 6-14.

Figure 6-14

TI Calculator: P(-1.37 ≤ Z ≤ 1.68) = normalcdf(-1.37,1.68,0,1) = 0.8682.
Excel: P(-1.37 ≤ Z ≤ 1.68) =NORM.S.DIST(1.68,TRUE)-NORM.S.DIST(-1.37,TRUE) = 0.8682.

Using Excel or TI-Calculator to Find Standard Normal Distribution Probabilities

As you read through a problem, look for some of the key phrases in Figure 6-15. Once you find the phrase, match it up to the sign you would use and then use the table to walk you through using Excel or the calculator. Note that we could also use the NORM.DIST function with µ = 0 and σ = 1.

Figure 6-15

Compute the area under the standard normal distribution within 2 standard deviations of the mean.

Solution

A rough estimate using the Empirical Rule would be 0.95; however, since this is not just any bell-shaped distribution, we will find P(-2 ≤ Z ≤ 2). Draw and shade the curve as in Figure 6-16.

TI Calculator: P(-2 ≤ Z ≤ 2) = normalcdf(-2,2,0,1) = 0.9545.
Excel: P(-2 ≤ Z ≤ 2) =NORM.S.DIST(2,TRUE)-NORM.S.DIST(-2,TRUE) = 0.9545.

Figure 6-16

Compute the area to the right of z = 1.

Solution

Draw and shade the curve to find P(Z > 1). See Figure 6-17.

TI Calculator: P(Z > 1) = normalcdf(1,1E99,0,1) = 0.1587.
Excel: P(Z > 1) =1-NORM.S.DIST(1,TRUE) = 0.1587.

Figure 6-17

6.4.2 Applications of the Normal Distribution

Many variables are nearly normal, but none are exactly normal. Thus, the normal distribution, while not perfect for any single problem, is very useful for a variety of problems. Variables such as SAT scores and heights of United States adults closely follow the normal distribution. Note that the Excel function NORM.S.DIST is for a standard normal, when µ = 0 and σ = 1.

Using Excel or TI-Calculator to Find Normal Distribution Probabilities

Figure 6-18

TI-84: Press [2nd] [DISTR]. This will show a menu of probability distributions. Arrow down to 2:normalcdf( and press [ENTER]. This puts normalcdf( on the home screen. Enter the values for the lower x value (x1), upper x value (x2), μ, and σ with a comma between each. Press [ENTER]. This is the cumulative distribution function and will return P(x1 < X < x2). For example, to find P(80 < X < 110) when the mean is 100 and the standard deviation is 20, you should have normalcdf(80,110,100,20). If you leave out the μ and σ, then the default is the standard normal distribution. For a left-tail area use a lower bound of -1E99 (negative infinity; press [2nd] [EE] to get E), and for a right-tail area use an upper bound of 1E99 (infinity). For example, to find P(Z < -1.37) you should have normalcdf(-1E99,-1.37).

TI-89: Go to the [Apps] Stat/List Editor, select F5 [DISTR]. This will show a menu of probability distributions. Arrow down to Normal Cdf and press [ENTER]. Enter the values for the lower x value (x1), upper x value (x2), μ, and σ into each cell. Press [ENTER]. This is the cumulative distribution function and will return P(x1 < X < x2). For example, to find P(80 < X < 110) when the mean is 100 and the standard deviation is 20, you should enter, in order, 80, 110, 100, 20. If you have a z-score, use μ = 0 and σ = 1 to get a standard normal distribution. For a left-tail area use a lower bound of negative infinity (-∞), and for a right-tail area use an upper bound of infinity (∞).

"The Hitchhiker's Guide to the Galaxy offers this definition of the word 'Infinite.' Infinite: Bigger than the biggest thing ever and then some. Much bigger than that in fact, really amazingly immense, a totally stunning size, 'wow, that's big,' time. Infinity is just so big that by comparison, bigness itself looks really titchy. Gigantic multiplied by colossal multiplied by staggeringly huge is the sort of concept we're trying to get across here." (Adams, 2002)

Let X be the height of 15-year-old boys in the United States. Studies show that the heights of 15-year-old boys in the United States are normally distributed with average height 67 inches and a standard deviation of 2.5 inches. Compute the probability of randomly selecting one 15-year-old boy who is 69.5 inches or taller.

Solution

Find P(X ≥ 69.5) where X ~ N(67, 2.5). Draw the curve and label the mean, then shade the area to the right of 69.5.

Figure 6-19

First, standardize the value of x = 69.5 using the z-score formula $z=\frac{x-\mu}{\sigma}$, where μ = 67 and σ = 2.5. The standardized value of x = 69.5 is $z=\frac{69.5-67}{2.5}$ = 1.
Now using the standard normal distribution and shading the area to the right of z = 1 gives:

TI calculator: P(Z ≥ 1) = normalcdf(1,1E99,0,1) = 0.158655.
Excel: P(Z ≥ 1) =1-NORM.S.DIST(1,TRUE) = 0.158655.

Figure 6-20

We could also use the Empirical Rule to approximate P(X ≥ 69.5), because this is the same as being more than one standard deviation above the mean. Do you see why this makes sense? If we were to add a standard deviation to 67, we would get 69.5. Thus P(X ≥ 69.5) $\approx$ 0.16, which is close to the 15.87% found using the standard normal distribution. The Empirical Rule only gives an approximate value, though.

The practice of standardizing the x value arose so that probabilities could be looked up in a standard normal distribution table instead of computed with calculus. With technology, you no longer have to standardize first; we can just find P(X ≥ 69.5) directly.

TI calculator: P(X ≥ 69.5) = normalcdf(69.5,1E99,67,2.5) $\approx$ 0.158655. (The TI-89 uses ∞ for the upper boundary instead of 1E99.)
Excel: P(X ≥ 69.5) =1-NORM.DIST(69.5,67,2.5,TRUE) = 0.158655.

Note that in Excel you can do this in two cells. First find the area to the left of 69.5 by using =NORM.DIST(69.5,67,2.5,TRUE), which returns the value 0.8413. In a new cell, subtract this value from 1 to get the answer 0.1587.

In 2009, the average SAT mathematics score was 501, with a standard deviation of 116. If we randomly selected a student from that year who took the SAT, what is the probability of getting an SAT mathematics score between 400 and 600?

Solution

Find P(400 ≤ X ≤ 600). On the TI calculator, normalcdf(400,600,501,116) = 0.6113. In Excel we can use the formula =NORM.DIST(600,501,116,TRUE)-NORM.DIST(400,501,116,TRUE). Note that the right-hand endpoint goes in first. If you put the 400 in first, you would get a negative answer, and probabilities are never negative. You can also do each piece separately in two cells, then in a third cell subtract the smaller area from the larger area.

A nice feature of this section is that the problems will say that the distribution is normally distributed, unlike the discrete distributions where you have to look for certain characteristics. However, when handling real data, you may have to know how to detect whether the data are normally distributed. One way to see if your variable is approximately normally distributed is by looking at a histogram, or we can use a normal probability plot.

6.4.3 Normal Probability Plot

A normal quantile plot, also called a normal probability plot, is a graph that is useful in assessing normality. A normal quantile plot plots each value of the variable x against its corresponding z-score. It is not practical to make a normal quantile plot by hand.

Interpreting a normal quantile plot to see if a distribution is approximately normally distributed:
1. All of the points should lie roughly on the straight line y = x.
2. There should be no S pattern present.
3. Outliers appear as points that are far away from the overall pattern of the plot.

Here are two examples of histograms with their corresponding quantile plots. Note that as the distribution becomes closer to a normal distribution, the dots on the quantile plot will fall in a straighter line. Figure 6-21 is a histogram and Figure 6-22 is the corresponding normal probability plot. Note the histogram is skewed to the left and the dots do not line up on the y = x line.
Figures 6-23 and 6-24 represent a sample that is approximately normally distributed. Note that the dots still do not line up perfectly on the line y = x, but they are close to the line.

Figure 6-21 Figure 6-22 Figure 6-23 Figure 6-24

6.4.4 Finding Percentiles for a Normal Distribution

Sometimes you will be given an area or probability and have to find the associated random variable x or z-score. For example, the probability below a point on the normal distribution is a percentile. If we find that P(Z < 1.645) =NORM.S.DIST(1.645,TRUE) = 0.950015, that tells us that about 95% of z-scores are below 1.645. In other words, the z-score of 1.645 is the 95th percentile. We can use technology to find the z-score given a percentile. Most technology has built-in commands that will find the probability below a point. If you want to find the area above a point, or between two points, then find the area below a point and use the complement rule, keeping in mind that the total area under the curve is 1 and the total area below the mean is 0.5.

If x is an observation from a distribution that has mean μ and standard deviation σ, the standardized value of x (or z-score) is $z=\frac{x-\mu}{\sigma}$. If you have a z-score and want to convert back to x, you can do so by solving the above equation for x, which yields x = zσ + μ.

TI-84: Press [2nd] [DISTR]. This will get you a menu of probability distributions. Press 3 or arrow down to 3:invNorm( and press [ENTER]. This puts invNorm( on the home screen. Enter the area to the left of the x value, μ, and σ with a comma between each. Press [ENTER]. This will return the x value corresponding to that percentile. For example, to find the 95th percentile when the mean is 100 and the standard deviation is 20, you should have invNorm(0.95,100,20). If you leave out the μ and σ, then the default is the z-score for the standard normal distribution.

TI-89: Go to the [Apps] Stat/List Editor, then select F5 [DISTR]. This will get you a menu of probability distributions. Arrow down to Inverse Normal and press [ENTER]. Enter the area to the left of the x value, μ, and σ into each cell. Press [ENTER]. This will return the x value corresponding to that percentile. For example, to find the 95th percentile when the mean is 100 and the standard deviation is 20, you should enter 0.95, 100, 20. If you use μ = 0 and σ = 1, then the default is the z-score for the standard normal distribution.

Compute the z-score that corresponds to the 25th percentile.

Solution

First, draw the standard normal curve with zero in the middle as in Figure 6-25. The 25th percentile would have to be below the mean, since the mean = median = 50th percentile for a bell-shaped distribution.

Figure 6-25

It is okay if you do not have this drawing to scale, but drawing a picture similar to Figure 6-25 will give you a good idea of whether your answer is correct. For instance, just by looking at this graph we can see that the answer for the z-score will need to be a negative number.

TI Calculator: z = invNorm(0.25,0,1) = -0.6745.
Excel: z =NORM.S.INV(0.25) = -0.6745.

The z = -0.6745 represents the 25th percentile.

Compute the z-score that corresponds to the area of 0.4066 between zero and z shown in Figure 6-26.

Solution

First notice that this picture does not quite match the calculator or Excel, which will only find a left-tail area. The value of zero is the median of a standard normal distribution, so 50% of the area lies to the left of z = 0. This means that the total area to the left of the unknown z-score would be 0.5 + 0.4066 = 0.9066.
Figure 6-26

TI Calculator: z = invNorm(0.9066) = 1.32.
Excel: z =NORM.S.INV(0.9066) = 1.32.

Using Excel or TI-Calculator for the Percentile of a Normal Distribution

Note that the NORM.S.INV function is for a standard normal, when µ = 0 and σ = 1.

Figure 6-27

In 2009, the average SAT mathematics score was 501, with a standard deviation of 116. Find the SAT score that marks the top 10% of students taking the exam that year.

Solution

Draw the distribution curve and shade the top 10% as shown in Figure 6-28. The area below the unknown x value in Figure 6-28 is 1 – 0.10 = 0.90.

Figure 6-28

TI Calculator: invNorm(0.9,501,116) = 649.66.
Excel: =NORM.INV(0.9,501,116) = 649.66.

A student who scored above 649.66 would be in the top 10%, also known as the 90th percentile. We could have also found the z-score that corresponds to the top 10%, then used the z-score formula to find the x value. Using Excel, =NORM.S.INV(0.9) = 1.2816. Then use the formula x = zσ + μ = 1.2816 ∙ 116 + 501 = 649.67. This uses a rounded z-score, so the answer is slightly off, but close.

Note: It is common practice to round z-scores to two decimal places. This is left over from using probability tables that only went out to two decimal places. If you use a rounded z-score in other calculations, then keep in mind that you will get a larger rounding error.

If the average price of a new one-family home is \$246,300 with a standard deviation of \$15,000, find the minimum and maximum prices of the houses that a contractor will build to satisfy the middle 98% of the market. Assume that the variable is normally distributed.

Solution

First, draw the curve and shade the middle area of 0.98; see Figure 6-29. We need the area to the left of the x1 value. Take the complement, 1 – 0.98 = 0.02, then split this area between both tails. The lower tail area for x1 would be 0.02/2 = 0.01. The upper value x2 will have a left-tail area of 0.99.

Figure 6-29

On the calculator, use invNorm(0.01,246300,15000) to get a minimum price of \$211,404.78 and invNorm(0.99,246300,15000) to get a maximum price of \$281,195.22. In Excel you would do this in two separate cells: =NORM.INV(0.01,246300,15000) = \$211,404.78 and =NORM.INV(0.99,246300,15000) = \$281,195.22.
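For completeness, the percentile (inverse normal) calculations in this section can be reproduced in Python with scipy's percent point function, ppf, which is the inverse of the cdf; a minimal sketch, assuming scipy is installed:

```python
# Inverse-normal (percentile) calculations from this section (assumes scipy).
from scipy.stats import norm

print(norm.ppf(0.25))                        # 25th percentile z-score ≈ -0.6745
print(norm.ppf(0.90, loc=501, scale=116))    # SAT top-10% cutoff ≈ 649.66

# Middle 98% of home prices: split the leftover 2% between the two tails.
low = norm.ppf(0.01, loc=246300, scale=15000)    # ≈ 211404.78
high = norm.ppf(0.99, loc=246300, scale=15000)   # ≈ 281195.22
print(low, high)
```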