Take a moment to visualize humanity in its entirety, from the earliest humans to the present. How would you characterize the well-being of humanity? Think beyond the latest stories in the news. To help clarify, think about medical treatment, housing, transportation, education, and our knowledge. While there is no denying that we have some problems that did not exist in earlier generations, we also have considerably more knowledge.
The progress humanity has made in learning about ourselves, our world, and our universe has been fueled by the desire of people to solve problems or gain an understanding. It has been financed through both public and private monies. It has been achieved through a continual process of people proposing theories and others attempting to refute those theories using evidence. Theories that are not refuted become part of our collective knowledge. No single person has accomplished this; it has been a collective effort of humankind.
As much as we know and have accomplished, there is a lot that we don’t know and have not yet accomplished. Many different organizations and institutions contribute to humanity’s gains in knowledge; however, one organization stands out for challenging humanity to achieve even more. This organization is XPrize. On their webpage they explain that they are “an innovation engine. A facilitator of exponential change. A catalyst for the benefit of humanity.” This organization challenges humanity to solve bold problems by hosting competitions and providing a monetary prize to the winning team. Examples of some of their competitions include:
• 2004: Ansari XPrize ($10 million) – Private Space Travel – build a reliable, reusable, privately financed, manned spaceship capable of carrying three people to 100 kilometers above the Earth’s surface twice within two weeks.
• Current: The Barbara Bush Foundation Adult Literacy XPrize ($7 million) – “challenging teams to develop mobile applications for existing smart devices that result in the greatest increase in literacy skills among participating adult learners in just 12 months.”
There are an estimated 36 million American adults who read below the third-grade level. They have difficulty reading bedtime stories, reading prescriptions, and completing job applications, among other things. Developing a good app could have huge benefits for a lot of people, which would also provide benefits for the country.
The following fictional story will introduce you to the way data and statistics are used to test theories and make decisions. The goal is for you to see that the thought processes are not algebraic and that it is necessary to develop new ways of thinking so we can validate our theories or make evidence-based decisions.
Adult Literacy Prize Story
Imagine being part of a team competing for the Adult Literacy XPrize. During the early stages of development, a goal of your team is to create an app that is engaging for the user so that they will use it frequently. You tested your first version (Version 1) of the app on some adults who lacked basic literacy and found it was used an average of 6 hours during the first month. Your team decided this was not very impressive and that you could do better, so you developed a completely new version of the software, designated Version 2. When it was time to test the software, the 10 members of your team each gave it to 8 different people with low literacy skills. This group of 80 individuals that received the software is a small subset, or sample, of all those who have low literacy skills. The objective was to determine if Version 2 is used more than an average of 6 hours per month.
While the data will ultimately be pooled together, your teammates decide to compete against each other to determine whose group of 8 does better. The results are shown in the table below. The column on the right is the mean (average) of the data in the row. The mean is found by adding the numbers in the row and dividing that sum by 8.
Team Member Version 2 Data (hours of use in 1 month) Mean
You, The reader 4.4 3.8 4.4 6.7 1.1 5.7 0.8 2.5 3.675
Betty 11 8.4 8.4 2.7 4.4 8.4 5.7 4.4 6.675
Joy 1.6 2.2 12.5 5.7 2.2 6.6 0.8 0.3 3.9875
Kerissa 16.1 11.1 8.7 9.1 1.4 9.1 1.2 14.4 8.8875
Crystal 0 2.1 0 3.2 0.2 1.8 9.1 3.3 2.4625
Marcin 2.2 6.3 1.3 8.8 0.8 2.7 0.9 0.8 2.975
Tisa 8.8 5.8 9.7 2.8 3.2 0.9 0.1 16.1 5.925
Tyler 11 0.9 11.3 6.6 0.3 5.9 1.7 1.9 4.95
Patrick 0.9 1.8 6.3 3.1 6.1 6.3 3.2 6.7 4.3
One way to make sense of the data is to graph it. The graph to the right is called a histogram. It shows the distribution of the amount of time the software was used by each participant. To interpret this graph, notice the scale on the horizontal (x) axis counts by 2. These numbers represent hours of use. The height of each bar shows how many usage times fall between the x values. For example, 26 people used the app between 0 and 2 hours while 2 people used the app between 16 and 18 hours.
The second graph is a histogram of the mean (average) for each of the 10 groups. This is a graph of the column in the table that is shaded. A histogram of means is called a sampling distribution. The distribution to the right shows that 4 of the means are between 2 and 4 hours while only one mean was between 8 and 10 hours. Notice how the means are grouped closer together than the original data.
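To see how such a histogram is tallied, here is a minimal Python sketch (an illustration, not part of the original text) that bins the group means listed in the table into intervals of width 2 and prints a text version of the histogram; the counts match the description above.

```python
import numpy as np

# Group means from the shaded column of the Version 2 table
means = [3.675, 6.675, 3.9875, 8.8875, 2.4625, 2.975, 5.925, 4.95, 4.3]

# Bin the means into intervals of width 2, as in the histogram of sample means
counts, edges = np.histogram(means, bins=np.arange(0, 12, 2))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{int(lo):2d}-{int(hi):2d} hours: {'#' * c}")
```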
The overall mean for the 80 data values is 4.88 hours. Our task is to use the graphs and the overall mean to decide if Version 2 is used more than Version 1 was (6 hours per month). What is your conclusion? Answer this question before continuing your reading.
Yes, Version 2 is better than Version 1 / No, Version 2 is not better than Version 1
Which of the following had the biggest influence on your decision?
______ 54 of the 80 data values were below 6
______ The mean of the data is 4.88, which is below 6
______ 8 of the 10 sample means are below 6.
Version 3
Version 3 was a total redesign of the software. A similar testing strategy was employed as with the prior version. When you received the data from the 8 users you gave the software to, you found that the average length of usage was 10.25 hours. Based on your results, do you feel that this version is better than Version 1?
Team Member Version 3 Data (hours of use in 1 month) Mean
You, The reader 14 13 8 4 8 21 3 11 10.25
Yes, Version 3 is better than Version 1 / No, Version 3 is not better than Version 1
Your colleague Keer looked at her data, which is shown in the table below. What conclusion would Keer arrive at, based on her data?
Team Member Version 3 Data (hours of use in 1 month) Mean
Keer 0 3 2 3 5 4 8 11 4.5
Yes, Version 3 is better than Version 1 / No, Version 3 is not better than Version 1
If your interpretation of your data and Keer’s data are typical, then you would have concluded that Version 3 was better than Version 1 based on your data and Version 3 was not better based on Keer’s data. This illustrates how different samples can lead to different conclusions. Clearly, the conclusion based on your data and the conclusion based on Keer’s data cannot both be correct. To help appreciate who might be in error, let’s look at all the data for the 80 people who tested Version 3 of the software.
Team Member Version 3 Data (hours of use in 1 month) Mean
You, The reader 14 13 8 4 8 21 3 11 10.25
Keer 0 3 2 3 5 4 8 11 4.5
Betty 8 5 5 4 5 0 1 16 5.5
Joy 7 5 8 4 7 13 7 6 7.125
Kerissa 8 6 14 3 11 2 5 8 7.125
Crystal 6 7 4 7 6 3 7 5 5.625
Marcin 7 7 6 1 2 7 5 5 5
Tisa 3 3 5 4 14 13 3 2 5.875
Tyler 0 7 2 7 4 2 5 2 3.625
Patrick 8 3 1 14 2 6 7 2 5.375
The histogram on the right is of the data from individual users. This shows that about half the data (42 out of 80) are below 6 and the rest are above 6.
The histogram on the right is of the mean of the 8 users for each member of the team. This sampling distribution shows that 7 of the 10 sample means are below 6.
The mean of all the individual data values is 6.0. Consequently, if you concluded that Version 3 was better than Version 1 because the mean of your 8 users was 10.25 hours, you would have come to the wrong conclusion. You would have been misled by data that was selected by pure chance.
None of the first 3 versions was particularly successful but your team is not discouraged. They already have new ideas and are putting together another version of their literacy program.
Version 4
When Version 4 is complete, each member of the team randomly selects 8 people with low literacy levels, just as was done for the prior versions. The data that is recorded is the amount of time the app is used during the month. Your data is shown below.
Team Member Version 4 Data (hours of use in 1 month) Mean
You, The reader 60 44 37 32 62 32 88 32 48.375
Based on your results, do you feel that this version is better than Version 1?
Yes, Version 4 is better than Version 1 / No, Version 4 is not better than Version 1
The results for all 80 participants are shown in the table below.
Team Member Version 4 Data (hours of use in 1 month) Mean
You, The reader 60 44 37 32 62 32 88 32 48.375
Keer 48 37 24 20 82 76 67 67 52.625
Betty 88 39 67 24 71 85 81 24 59.875
Joy 23 58 21 88 81 75 84 81 63.875
Kerissa 88 24 58 53 81 57 88 24 59.125
Crystal 47 85 76 24 39 67 40 77 56.875
Marcin 61 45 75 58 87 51 37 73 60.875
Tisa 76 77 58 84 20 55 81 82 66.625
Tyler 82 47 48 60 88 21 50 24 52.5
Patrick 20 40 52 24 55 33 33 84 42.625
The histogram on the right is of the data from individual users. Notice that all these values are higher than 20.
The histogram on the right is of the mean of the 8 users for each member of the team. Notice that all the sample means are significantly higher than 6.
Based on the results of Version 4, all the data is much higher than 6 hours per month. The average is 56.3 hours per month, which is almost 2 hours per day. This is significantly more usage than the early versions received, so this will be the version used in the XPrize competition.
Making decisions using statistics
There were several objectives of the story you just read.
1. To give you an appreciation of the variation that can exist in sample data.
2. To introduce you to a type of data graph called a histogram, which is a good way to look at the distribution of data.
3. To introduce you to the concept of a sampling distribution, which is a distribution of sample means rather than of the original data.
4. To illustrate the various results that can occur when we try to answer questions using data. These results are summarized below in answer to the question of whether the new version is better than the first version.
a. Version 2: This was not better. In fact, it appeared to be worse.
b. Version 3: At first it looked better, but ultimately it was the same.
c. Version 4: This was much better.
Because data sometimes provide clarity about a decision that should be made (Versions 2 and 4), but at other times do not (Version 3), a more formal, statistical reasoning process will be explained in this chapter, with the details being developed throughout the rest of the book.
Before beginning with this process, it is necessary to be clear about the role of statistics in helping us understand our world. There are two primary ways in which we establish confidence in our knowledge of the world: by providing analytical evidence or empirical evidence.
Analytical evidence makes use of definitions or mathematical rules. A mathematical proof is an analytical method for using established facts to prove something new. Analytical evidence is useful for proving things that are deterministic. Deterministic means that the same outcome will be achieved each time (if errors aren’t made). Algebra and Calculus are examples of deterministic math and they can be used to provide analytical evidence.
In contrast, empirical evidence is based on observations. More specifically, someone will propose a theory and then research can be conducted to determine the validity of that theory. Most of the ideas we believe with confidence have resulted from the rejection of theories we previously held, and our current knowledge consists of those ideas we have not been able to reject with empirical evidence. Empirical evidence is gained through rigorous research. This contrasts with anecdotal evidence, which is also gained through observation, but not in a rigorous manner. Anecdotal evidence can be misleading.
The role of statistics is to objectively evaluate the evidence so a decision can be made about whether to reject, or not reject, a theory. It is particularly useful for those situations in which the evidence is the result of a sample taken from a much larger population. In contrast to deterministic relationships, stochastic populations are ones in which there is randomness; when evidence is gained through random sampling, the evidence we see is the result of chance.
The scientific method that is used throughout the research community to increase our understanding of the world is based on proposing and then testing theories using empirical methods. Statistics plays a vital role in helping researchers understand the data they produce. The scientific method contains the following components.
1. Ask a question
2. Propose a hypothesis about the answer to the question
3. Design research (Chapter 2)
4. Collect data (Chapter 2)
5. Develop an understanding of the data using graphs and statistics (Chapter 3)
6. Use the data to determine if it supports or contradicts the hypothesis (Chapters 5, 7, 8)
7. Draw a conclusion.
Before exploring the statistical tools used in the scientific method, it is helpful to understand the challenges we face with stochastic populations and the statistical reasoning process we use to draw conclusions.
1. When a theory is proposed about a population, it is based on every person or element of the population. A population is the entire set of people or things of interest.
2. Because the population contains too many people or elements from which to get information, we make a hypothesis about what the information would be, if we could get all of it.
3. Evidence is collected by taking a sample from the population.
4. The evidence is used to determine if the hypothesis should be rejected or not rejected.
These four components of the statistical reasoning process will now be developed more fully. The challenge is to determine if there is sufficient support for the hypothesis, based on partial evidence, when it is known that partial evidence varies depending upon the sample that was selected. By analogy, it is like trying to find the right person to marry based on the partial evidence gained from dating, or the right person to hire based on the partial evidence gained from interviews.
1. Theories about populations.
When someone has a theory, that theory applies to a population that should be clearly defined. For example, a population might be everyone in the country, or all senior citizens, or everyone in a political party, or everyone who is athletic, or everyone who is bilingual, etc. Populations can also be any part of the natural world including animals, plants, chemicals, water, etc. Theories that might be valid for one population are not necessarily valid for another. Examples of theories being applied to a population include the following.
• The team working on the literacy app theorizes that one version of their app will be used regularly by the entire population of adults with low literacy skills who have access to it.
• A teacher theorizes that her teaching pedagogy will lead to the greatest level of success for the entire population of all the students she will teach.
• A pharmaceutical company theorizes that a new medicine will be effective in treating the entire population of people suffering from a disease who use the medicine.
• A water resource scientist theorizes that the level of contamination in an entire body of water is at an unsafe level.
1.5 Data, Parameters, and Statistics
Before discussing hypotheses, it is necessary to talk about data, parameters and statistics.
On the largest level, there are two types of data: categorical and quantitative. Categorical data is data that can be put into categories. Examples include yes/no responses, or categories such as color, religion, nationality, pass/fail, win/lose, etc. Quantitative data is data that consists of numbers resulting from counts or measurements. Examples include height, weight, time, amount of money, number of crimes, heart rate, etc.
The ways in which we understand the data, namely graphs and statistics, depend upon the type of data. Statistics are numbers used to summarize sample data. For the moment, there are two statistics that will be important: proportions and means. Later in the book, other statistics will be introduced.
A proportion is the part divided by the whole. It is similar to percent, but it is not multiplied by 100. The part is the number of data values in a category. The whole is the number of data values that were collected. Thus, if 800 people were asked if they had ever visited a foreign country and 200 said they had, then the proportion of people who had visited a foreign country would be:
$\dfrac{\text{part}}{\text{whole}} = \dfrac{x}{n} = \dfrac{200}{800} = 0.25$
The part is represented by the variable x and the whole by the variable n.
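As a small computational illustration (not part of the original text), the same proportion can be found in Python:

```python
x = 200        # the part: people who said they had visited a foreign country
n = 800        # the whole: number of people asked
p_hat = x / n
print(p_hat)   # 0.25
```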
A mean, often known as an average, is the sum of the quantitative data divided by the number of data values. If we refer back to the literacy app, version 3, the data for Marcin was:
Marcin 7 7 6 1 2 7 5 5 5
The mean is $\dfrac{7 + 7 + 6 + 1 + 2 + 7 + 5 + 5}{8} = \dfrac{40}{8} = 5$
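The same arithmetic in Python, using Marcin's eight data values from the table (a small illustration):

```python
marcin = [7, 7, 6, 1, 2, 7, 5, 5]
mean = sum(marcin) / len(marcin)   # sum of the data divided by the number of values
print(mean)                        # 5.0
```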
While statistics are numbers that are used to summarize sample data, parameters are numbers used to summarize all the data in the population. To find a parameter, however, requires getting data from every person or element in the population. This is called a census. Generally, it is too expensive, takes too much time, or is simply impossible to conduct a census. However, because our theory is about the population, we have to distinguish between parameters and statistics. To do this, we use different variables.
Data Type Summary Population Sample
Categorical Proportion p $\hat{p}$ (p-hat)
Quantitative Mean $\mu$ $\bar{x}$ (x-bar)
To elaborate, when the data is categorical, the proportion of the entire population is represented with the variable p, while the proportion of the sample is represented with the variable $\hat{p}$. When the data is quantitative, the mean of the entire population is represented with the Greek letter $\mu$, while the mean of the sample is represented with the variable $\bar{x}$.
In a typical situation, we will not know either p or $\mu$, and so we would make a hypothesis about them. From the data we collect we will find $\hat{p}$ or $\bar{x}$ and use that to determine if we should reject our hypothesis.
2. Hypotheses
Hypotheses are written about parameters before data is collected (a priori). Hypotheses are written in pairs that contain a null hypothesis ($H_0$) and an alternative hypothesis ($H_1$).
Suppose someone had a theory that the proportion of people who have attended a live sporting event in the last year was greater than 0.2. In such a case, they would write their hypotheses as:
$H_0$ : $p = 0.2$
$H_1$ : $p > 0.2$
If someone had a theory that the mean hours of watching sporting events on the TV was less than 15 hours per week, then they would write their hypotheses as:
$H_0$ : $\mu$ = 15
$H_1$ : $\mu$ < 15
The rules that are used to write hypotheses are:
1. There are always two hypotheses, the null and the alternative.
2. Both hypotheses are about the same parameter.
3. The null hypothesis always contains the equal sign (=).
4. The alternative contains an inequality sign (<, >, ≠).
5. The number will be the same for both hypotheses.
When hypotheses are used for decision making, they should be selected in such a way that if the evidence supports the null hypothesis, one decision should be made, while evidence supporting the alternative hypothesis should lead to a different decision.
The hypothesis that researchers desire is often the alternative hypothesis. The hypothesis that will be tested is the null hypothesis. If the null hypothesis is rejected because of the evidence, then the alternative hypothesis is accepted. If the evidence does not lead to a rejection of the null hypothesis, we cannot conclude the null is true, only that it was not rejected. We will use the term “supported” in this text. Thus either the null hypothesis is supported by the data or the alternative hypothesis is supported. Being supported by the data does not mean the hypothesis is true, but rather that the decision we make should be based on the hypothesis that is supported.
Two of the situations you will encounter in this text are when there is a theory about the proportion or mean for one population or when there is a theory about how the proportion or mean compares between two populations. These are summarized in the table below.
Hypothesis about one population
Notation
Hypothesis about 2 populations
Notation
The proportion is greater than 0.2
$H_0$ : $p = 0.2$
$H_1$ : $p > 0.2$
The proportion of population A is greater than the proportion of population B
$H_0$ : $p_A = p_B$
$H_1$ : $p_A > p_B$
The proportion is less than 0.2
$H_0$ : $p = 0.2$
$H_1$ : $p < 0.2$
The proportion of population A is less than the proportion of population B
$H_0$ : $p_A = p_B$
$H_1$ : $p_A < p_B$
The proportion is not equal to 0.2
$H_0$ : $p = 0.2$
$H_1$ : $p \ne 0.2$
The proportion of population A is different than the proportion of population B
$H_0$ : $p_A = p_B$
$H_1$ : $p_A \ne p_B$
The mean is greater than 15
$H_0$ : $\mu = 15$
$H_1$ : $\mu > 15$
The mean of population A is greater than the mean of population B
$H_0$ : $\mu_A = \mu_B$
$H_1$ : $\mu_A > \mu_B$
The mean is less than 15
$H_0$ : $\mu = 15$
$H_1$ : $\mu < 15$
The mean of population A is less than the mean of population B
$H_0$ : $\mu_A = \mu_B$
$H_1$ : $\mu_A < \mu_B$
The mean does not equal 15
$H_0$ : $\mu = 15$
$H_1$ : $\mu \ne 15$
The mean of population A is different than the mean of population B
$H_0$ : $\mu_A = \mu_B$
$H_1$ : $\mu_A \ne \mu_B$
3. Using evidence to determine which hypothesis is more likely correct.
From the Literacy App story, you should have seen that sometimes the evidence clearly supports one conclusion (e.g. version 2 is worse than version 1), sometimes it clearly supports the other conclusion (version 4 is better than version 1), and sometimes it is too difficult to tell (version 3). Before discussing a more formal way of testing hypotheses, let’s develop some intuition about the hypotheses and the evidence.
Suppose the hypotheses are
$H_0$: p = 0.4
$H_1$: p < 0.4
If the evidence from the sample is $\hat{p} = 0.45$, would this evidence support the null or alternative? Decide before continuing.
The hypotheses contain an equal sign and a less than sign, but not a greater than sign, so what conclusion should be drawn when the evidence is greater than 0.4? Since the sample proportion of 0.45 is not less than 0.4, it does not support the alternative hypothesis, so we conclude that 0.45 supports the null hypothesis.
If the evidence from the sample is $\hat{p}$ = 0.12, would this evidence support the null or alternative? Decide before continuing.
In this case, 0.12 is considerably less than 0.4, therefore it supports the alternative.
If the evidence from the sample is $\hat{p}$ = 0.38, would this evidence support the null or alternative? Decide before continuing.
This is a situation that is more difficult to determine. While you might have decided that 0.38 is less than 0.4 and therefore supports the alternative, it is more likely that it supports the null hypothesis.
How can that be?
In arithmetic, 0.38 is always less than 0.4. However, in statistics, this is not necessarily the case. The reason is that the hypothesis is about a parameter; it is about the entire population. On the other hand, the evidence is from the sample. Different samples yield different results. A direct comparison of the statistic (0.38) to the hypothesized parameter (0.4) is not appropriate. Rather, we need a different way of making that determination. Before elaborating on the different way, let’s try another one.
Suppose the hypotheses are
$H_0$ : $\mu = 30$
$H_1$ : $\mu > 30$
If the evidence from the sample is $\bar{x}$ = 80, which hypothesis is supported? Null Alternative
If the evidence from the sample is $\bar{x}$ = 26, which hypothesis is supported? Null Alternative
If the evidence from the sample is $\bar{x}$ = 32, which hypothesis is supported? Null Alternative
If the evidence is $\bar{x}$ = 80, the alternative would be supported. If the evidence is $\bar{x}$ = 26, the null would be supported. If the evidence is $\bar{x}$ = 32, at first glance, it appears to support the alternative, but it is close to the hypothesis, so we will conclude that we are not sure which it supports.
It might be disconcerting to you to be unable to draw a clear conclusion from the evidence. After all, how can people make a decision? What follows is an explanation of the statistical reasoning strategy that is used.
Statistical Reasoning Process
The reasoning process for deciding which hypothesis the data supports is the same for any parameter (p or μ).
1. Assume the null hypothesis is true.
2. Gather data and calculate the statistic.
3. Determine the likelihood of selecting the data that produced the statistic or could produce a more extreme statistic, assuming the null hypothesis is true.
4. If the data are likely, they support the null hypothesis. However, if they are unlikely, they support the alternative hypothesis.
To illustrate this, we will use a different research question: “What proportion of American adults believe we should transition to a society that no longer uses fossil fuels (coal, oil, natural gas)?” Let’s assume a researcher has a theory that the proportion of American adults who believe we should make this transition is greater than 0.6. The hypotheses that would be used for this are:
$H_0$ : p = 0.6
$H_1$ : p > 0.6
We could visualize this situation if we used a bag of marbles. Since the first step in the statistical reasoning process is to assume the null hypothesis is true, then our bag of marbles might contain 6 green marbles that represent the adults who want to stop using fossil fuels, and 4 white marbles to represent those who want to keep using fossil fuels. Sampling will be done with replacement, which means that after a marble is picked, the color is recorded and the marble is placed back in the bag.
If 100 marbles are selected from the bag (with replacement), do you expect exactly 60 of them (60%) to be green? Would this happen every time?
The results of a computer simulation of this sampling process are shown below. The simulation is of 100 marbles being selected, with the process being repeated 20 times.
0.62 0.57 0.58 0.64 0.64 0.53 0.73 0.55 0.58 0.55
0.61 0.66 0.6 0.54 0.54 0.5 0.62 0.55 0.61 0.61
Notice that sometimes the sample proportion is greater than 0.6, sometimes it is less than 0.6, and there is only one time in which it actually equaled 0.6. From this we can infer that although the null hypothesis really was true, there are sample proportions that might make us think the alternative is true (which could lead us to making an error).
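A simulation like this one is easy to reproduce. The Python sketch below is one way to run it (the text does not say what software was used); it draws 100 marbles with replacement from a bag that is 60% green and repeats the process 20 times. Because the draws are random, your 20 sample proportions will differ from the ones printed above.

```python
import numpy as np

rng = np.random.default_rng(seed=1)   # fixed seed so the run is repeatable

p_true = 0.6       # proportion of green marbles (the null hypothesis value)
n = 100            # marbles drawn per sample, with replacement
repetitions = 20

# The number of green marbles in each sample follows a binomial distribution
greens = rng.binomial(n, p_true, size=repetitions)
sample_proportions = greens / n
print(np.round(sample_proportions, 2))
```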
There are three items in the statistical reasoning process that need to be clarified. The first is to determine what values are likely or unlikely to occur while the second is to determine the division point between likely and unlikely. The third point of clarification is the direction of the extreme.
Likely and Unlikely values
When the evidence is gathered by taking a random sample from the population, the random sample that is actually selected is only one of many, many, many possible samples that could have been taken instead. Each random sample would produce different statistics. If you could see all the statistics, you would be able to determine if the sample you took was likely or unlikely. A graph of statistics, such as sample proportions or sample means, is called a sampling distribution.
While it does not make sense to take lots of different samples to find all possible statistics, a few demonstrations of what happens when someone does that can give you some confidence that similar results would occur in other situations as well. The graphs below were produced using computer simulations.
The histogram at the right is a sampling distribution of sample proportions. 100 different samples, each containing 200 data values, were selected from a population in which 40% favored replacing fossil fuels (green marbles). The proportion in favor of replacing fossil fuels (green marbles) was found for each sample and graphed. There are two things you should notice in the graph. The first is that most of the sample proportions are grouped together in the middle, and the second is that the middle is approximately 0.40, which is equivalent to the proportion of green marbles in the container.
That may, of course, have been a coincidence. So let’s look at a different sample. In this one, the original population was 60% green marbles representing those in favor of replacing fossil fuels. The sample size was 500 and the process was repeated 100 times.
Once again we see most of the sample proportions grouped in the middle and the middle is around the value of 0.60, which is the proportion of green marbles in the original population.
We will look at one more example. In this example, the proportion in favor of replacing fossil fuels is 0.80 while the proportion of those opposed is 0.20. The sample size will be 1000 and there will be 100 samples of that size. Where do you expect the center of this distribution to fall?
As you can see, the center of this distribution is near 0.80 with more values near the middle than at the edges.
One issue that has not been addressed is the effect of the sample size. Sample sizes are represented with the variable n. These three graphs all had different sample sizes. The first sample had n=200, the second had n=500 and the third had n=1000. To see the effect of these different sample sizes, all three sets of sample proportions have been graphed on the same histogram.
What this graph illustrates is that the smaller the sample size, the more variation that exists in the sample proportions. This is evident because they are spread out more. Conversely, the larger the sample size, the less variation that exists. What this means is the larger the sample size, the closer the sample result will be to the parameter. Does this seem reasonable? If there were 10,000 people in a population and you got the opinion of 9,999 of them, do you think all your possible sample proportions would be closer to the parameter (population proportion) than if you only asked 20 people?
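This effect of sample size can also be checked by simulation. The sketch below (an illustration with an arbitrary seed) draws 100 samples of each size from a population with p = 0.6 and prints the standard deviation of the resulting sample proportions; the spread shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
p_true = 0.6

for n in (200, 500, 1000):
    sample_proportions = rng.binomial(n, p_true, size=100) / n
    print(f"n = {n:4d}: spread (standard deviation) = {sample_proportions.std():.4f}")
```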
We will return to sampling distributions in a short time, but first we need to learn about directions of extremes and probability.
Direction of Extreme
The direction of extreme is the direction (left or right) on a number line that would make you think the alternative hypothesis is true. Greater than symbols have a direction of extreme to the right, less than symbols indicate the direction is to the left and not-equal signs indicate a two-sided direction of extreme.
Notation (one population) Notation (two populations) Direction of Extreme
$H_0$ : $p = 0.2$
$H_1$ : $p > 0.2$
$H_0$ : $p_A = p_B$
$H_1$ : $p_A > p_B$
Right
$H_0$ : $p = 0.2$
$H_1$ : $p < 0.2$
$H_0$ : $p_A = p_B$
$H_1$ : $p_A < p_B$
Left
$H_0$ : $p = 0.2$
$H_1$ : $p \ne 0.2$
$H_0$ : $p_A = p_B$
$H_1$ : $p_A \ne p_B$
Two-sided
$H_0$ : $\mu = 15$
$H_1$ : $\mu > 15$
$H_0$ : $\mu_A = \mu_B$
$H_1$ : $\mu_A > \mu_B$
Right
$H_0$ : $\mu = 15$
$H_1$ : $\mu < 15$
$H_0$ : $\mu_A = \mu_B$
$H_1$ : $\mu_A < \mu_B$
Left
$H_0$ : $\mu = 15$
$H_1$ : $\mu \ne 15$
$H_0$ : $\mu_A = \mu_B$
$H_1$ : $\mu_A \ne \mu_B$
Two-sided
Probability
At this time it is necessary to have a brief discussion about probability. A more detailed discussion will occur in Chapter 4. When theories are tested empirically by sampling from a stochastic population, then the sample that is obtained is based on chance. When a sample is selected through a random process and the statistic is calculated, it is possible to determine the probability of obtaining that statistic or more extreme statistics if we know the sampling distribution.
By definition, probability is the number of favorable outcomes divided by the number of possible outcomes.
$P(A) = \dfrac{Number\ of\ Favorable\ Outcomes}{Number\ of\ Possible\ Outcomes}$
This formula assumes that all outcomes are equally likely, as is theoretically the case in a random selection process. It reflects the proportion of times that a result would be obtained if an experiment were done a very large number of times. Because you cannot have a negative number of outcomes or more successful outcomes than are possible, probability is always a fraction or a decimal between 0 and 1. This is shown generically as $0 \le P(A) \le 1$, where P(A) represents the probability of event A.
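For example, with the bag of marbles used earlier (6 green and 4 white), the probability of selecting a green marble on a single draw is

$P(\text{green}) = \dfrac{6}{10} = 0.6$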
Using Sampling Distributions to Test Hypotheses
Remember our research question: “What proportion of American adults believe we should transition to a society that no longer uses fossil fuels (coal, oil, natural gas)?” The researcher’s theory is that the proportion of American adults who believe we should make this transition is greater than 0.6. The hypotheses that would be used for this are:
$H_0 : p = 0.6$
$H_1 : p > 0.6$
To test this hypothesis, we need two things. First, we need the sampling distribution for the null hypothesis, since step 1 of the reasoning process for testing a hypothesis is to assume the null hypothesis is true. The second thing we need is data. Because this is instructional, at this point several sample proportions will be provided so you can compare and contrast the results.
A small change has been made to the sampling distribution that was shown previously. At the top of each bar is a proportion. On the x-axis there are also proportions. The difference between these proportions is that the ones on the x-axis indicate the sample proportions while the proportions at the top of the bars indicate the proportion of sample proportions that were between the two boundary values. Thus, out of 100 sample proportions, 0.38 (or 38%) of them were between 0.60 and 0.62. The proportions at the top of the bars can also be interpreted as probabilities.
It is with this sampling distribution from the null hypothesis that we can find the likelihood, or probability, of getting our data or more extreme data. We will call this probability a p-value.
As a reminder, for the hypothesis we are testing, the direction of extreme is to the right.
Suppose the sample proportion we got for our data was $\hat{p}$ = 0.64. What is the probability we would have gotten that sample proportion or more extreme from this distribution? That probability is 0.01, consequently the p-value is 0.01. This number is found at the top of the right-most bar.
Suppose the sample proportion we got from our data was $\hat{p}$ = 0.62. What is the probability we would have gotten that sample proportion from this distribution? That probability is 0.11. This was calculated by adding the proportions on the top of the two right-most bars. The p-value is 0.11.
You try it. Suppose the sample proportion we got from our data was $\hat{p}$ = 0.60. What is the probability we would have gotten that sample proportion from this distribution?
Now, suppose the sample proportion we got from our data was $\hat{p}$ = 0.68. What is the probability we would have gotten that sample proportion from this distribution? In this case, there is no evidence of any sample proportions equal to 0.68 or higher, so consequently the probability, or p-value would be 0.
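A p-value read from the bars of a simulated sampling distribution can also be computed directly from the simulated sample proportions. The sketch below is an illustration under assumed settings (a sample size of 500, matching the earlier p = 0.6 distribution, and 10,000 repetitions); because the text's histogram came from only 100 simulated samples, the p-values printed here will differ somewhat from the bar-based values above.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

p_null = 0.6    # assume the null hypothesis is true
n = 500         # assumed sample size (the text does not state it here)
reps = 10_000   # number of simulated samples

null_proportions = rng.binomial(n, p_null, size=reps) / n

# The direction of extreme is to the right, so each p-value is the fraction
# of simulated sample proportions at or above the observed statistic.
for p_hat in (0.64, 0.62, 0.60, 0.68):
    p_value = np.mean(null_proportions >= p_hat)
    print(f"p-hat = {p_hat}: p-value is about {p_value:.3f}")
```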
Testing the hypothesis
We will now try to determine which hypothesis is supported by the data. We will use the p=0.8 distribution to represent the alternative hypothesis. Both the null and alternative distributions are shown on the same graph.
If the data that is selected had a statistic of $\hat{p}$ = 0.58, what is the p-value? Which of the two distributions do you think the data came from? Which hypothesis is supported?
The p-value is 0.81 (0.32+0.38+0.10+0.01). This data came from the null distribution (p=0.6). This evidence supports the null hypothesis.
If the data that is selected was $\hat{p}$ = 0.78, what is the p-value? Which of the two distributions do you think the data came from? Which hypothesis is supported?
The p-value is 0 because there are no values in the p=0.6 distribution that are 0.78 or higher. The data came from the alternative (p=0.8) distribution. The alternative hypothesis is supported.
In the prior examples, there was a clear distinction between the null and alternative distributions. In the next example, the distinction is not as clear. The alternative distribution will be represented with a proportion of 0.65.
If the data that is selected was $\hat{p}$ = 0.62, from which of the two distributions do you think the data came? Which hypothesis is supported?
Notice that in this case, because the distributions overlap, a sample proportion of 0.62 or more extreme could have come from either distribution. It isn’t clear which one it came from. Because of this lack of clarity, we could possibly make an error. We might think it came from the null distribution whereas it really came from the alternative distribution. Or perhaps we thought it came from the alternative distribution, but it really came from the null distribution. How do we decide?
Before explaining the way we decide, we need to discuss errors, as they are part of the decision-making process.
There are two types of errors we can make as a result of the sampling process. They are known as sampling errors. These errors are named Type I and Type II errors. A Type I error occurs when we think the data supports the alternative hypothesis but in reality the null hypothesis is correct. A Type II error occurs when we think the data supports the null hypothesis, but in reality the alternative hypothesis is correct. In all cases of testing hypotheses, there is the possibility of making either a Type I or a Type II error.
The probability of making either a Type I or Type II error is important in the decision-making process. We represent the probability of making a Type I error with the Greek letter alpha, $\alpha$. It is also called the level of significance. The probability of making a Type II error is represented with the Greek letter beta, $\beta$. The probability of the data supporting the alternative hypothesis when the alternative is true is called power. Power is not an error. The errors are summarized in the table below.
The True Hypothesis
The Evidence upon which the decision is based | $H_0$ Is True | $H_1$ Is True
The Data Supports $H_0$ | No Error | Type II Error (Probability: $\beta$)
The Data Supports $H_1$ | Type I Error (Probability: $\alpha$) | No Error (Probability: Power)
The reasoning process for deciding which hypothesis the data supports is reprinted here.
1. Assume the null hypothesis is true.
2. Gather data and calculate the statistic.
3. Determine the likelihood of selecting the data that produced the statistic or could produce a more extreme statistic, assuming the null hypothesis is true. This is called the p-value.
4. If the data are likely, they support the null hypothesis. However, if they are unlikely, they support the alternative hypothesis.
The determination of whether data are likely or not is based on a comparison between the p-value and α. Both alpha and p-values are probabilities. They must always be values between 0 and 1, inclusive. If the p-value is less than or equal to α, the data supports the alternative hypothesis. If the p-value is greater than α, the data supports the null hypothesis. When the data supports the alternative hypothesis, the data are said to be significant. When the data supports the null hypothesis, the data are not significant. Reread this paragraph at least 3 times as it defines the decision-making rule used throughout statistics and it is critical to understand.
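The decision rule in the paragraph above can be written out in a few lines of code. This is just a restatement of the rule, not a complete hypothesis test:

```python
def decide(p_value, alpha):
    """Compare a p-value to the level of significance, alpha."""
    if not (0 <= p_value <= 1 and 0 <= alpha <= 1):
        raise ValueError("p-values and alpha must be between 0 and 1")
    if p_value <= alpha:
        return "the data support H1 (significant)"
    return "the data support H0 (not significant)"

print(decide(0.04, 0.05))   # the data support H1 (significant)
print(decide(0.11, 0.01))   # the data support H0 (not significant)
```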
Because some values clearly support the null hypothesis, others clearly support the alternative hypothesis, and some do not clearly support either, a decision has to be made, before data is ever collected (a priori), as to the probability of making a Type I error that is acceptable to the researcher. The most common values for α are 0.05, 0.01, and 0.10. There is not a specific reason for these choices, but there is considerable historical precedent for them and they will be used routinely in this book. The choice for a level of significance should be based on several factors.
1. If the power of the test is low because of small sample sizes or weak experimental design, a larger level of significance should be used.
2. Keep in mind the ultimate objective of research – “to understand which hypotheses about the universe are correct. Ultimately these are yes and no decisions.” (Scheiner, Samuel M., and Jessica Gurevitch. Design and Analysis of Ecological Experiments. Oxford: Oxford UP, 2001.) Statistical tests should lead to one of three results. One result is that the hypothesis is almost certainly correct. The second result is that the hypothesis is almost certainly incorrect. The third result is that further research is justified. P-values within the interval (0.01, 0.10) may warrant continued research, although these values are as arbitrary as the commonly used levels of significance.
3. If we are attempting to build a theory, we should use more liberal (higher) values of α, whereas if we are attempting to validate a theory, we should use more conservative (lower) values of $\alpha$.
Demonstration of an elementary hypothesis test
Now, you have all the parts for deciding which hypothesis is supported by the evidence (the data). The problem will be restated here.
$H_0 : p = 0.6$
$H_1 : p > 0.6$
$\alpha = 0.01$
A vertical line was drawn on the graph so that a proportion of only 0.01 was to the right of the line in the null distribution. This is called a decision line because it is the line that determines how we will decide if the statistic supports the null or alternative hypothesis. The number at the bottom of the decision line is called the critical value.
If the data that is selected was $\hat{p}$ = 0.62, from which of the two distributions do you think the data came? Which hypothesis is supported?
To answer these questions, first find the p-value. The p-value is 0.11 (0.10 + 0.01).
Next, compare the p-value to $\alpha$. Since 0.11 > 0.01, this evidence supports the null hypothesis.
Because showing both distributions on the same graph can make the graph a little difficult to read, this graph will be split into two graphs. The decision line is shown at the same critical value on both graphs (0.64). The level of significance, α, is shown on the null distribution. It points in the direction of the extreme. β and power are shown on the alternative distribution. Power is on the same side of the distribution as the direction of extreme while β is on the opposite side. The p-value is also shown on the null distribution, pointing in the direction of the extreme.
Another example will be demonstrated next.
Question: What is the proportion of people who have visited a different country?
Theory: The proportion is less than 0.40
Hypotheses: $H_0: p = 0.40$
$H_1: p < 0.40$
$\alpha = 0.04$
The distribution on the left is the null distribution, that is, it is the distribution that was obtained by sampling from a population in which the proportion of people who have visited a different country is really 0.40. The distribution on the right is representing the alternative hypothesis.
The objective is to identify the portion of each graph associated with α, β, and power. Once the data has been provided, you will also be able to show the part of the graph that indicates the p-value.
The reasoning process for labeling the distributions is as follows.
1. Determine the direction of the extreme. This is done by looking at the inequality sign in the alternative hypothesis. If the sign is <, then the direction of the extreme is to the left. If the sign is >, then the direction of the extreme is to the right. If the sign is $\ne$, then the direction of extreme is to the left and right, which is called two-sided. Notice that the inequality sign points towards the direction of extreme. To keep these concepts a little easier as you are learning them, we will not do two-sided alternative hypotheses until later in the text.
In this problem the direction of extreme is to the left because smaller sample proportions support the alternative hypothesis.
2. Draw the Decision line. The direction of extreme along with α are used to determine the placement of the decision line. Alpha is the probability of making a Type I error. A Type I error can only occur if the null hypothesis is true, therefore, we always place alpha on the null distribution. Starting on the side of the direction of extreme, add the proportions at the top of the bars until they equal alpha. Draw the decision line between bars separating those that could lead to a Type I error from the rest of the distribution.
Notice the x-axis value at the bottom of the decision line. This value is called the critical value. Identify the critical value on the alternative distribution and place another decision line there.
In this problem, the direction of extreme is to the left and $\alpha$ = 4% (0.04) so the decision line is placed so that the proportion of sample proportions to the left is 0.04. The critical value is 0.36 so the other decision line is placed at 0.36 on the alternative distribution.
3. Labeling $\alpha$, $\beta$, and power. $\alpha$ is always placed on the null distribution on the side of the decision line that is in the direction of extreme. $\beta$ is always placed on the alternative distribution on the side of the decision line that is opposite of the direction of extreme. Power is always placed on the alternative distribution on the side of the decision line that is in the direction of extreme.
4. Identify the probabilities for $\alpha$, $\beta$, and power. This is done by adding the proportions at the top of the bars.
In this example, the probability for $\alpha$ is 0.04. The probability for $\beta$ is 0.30 (0.02 + 0.06 + 0.22). The probability for power is 0.70 (0.02 + 0.03 + 0.29 + 0.36).
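The same quantities can be approximated by simulation. The sketch below is an illustration only: the alternative proportion of 0.30 is an assumption (the text does not state the value used for its alternative distribution), and because this simulation uses 10,000 binomial samples rather than the coarse 100-sample histograms in the text, the printed probabilities will not match the values above.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
n, reps = 200, 10_000
p_null = 0.40      # value from the null hypothesis
p_alt = 0.30       # assumed value for the alternative distribution
critical = 0.36    # critical value from the example; direction of extreme is left

null_sims = rng.binomial(n, p_null, size=reps) / n
alt_sims = rng.binomial(n, p_alt, size=reps) / n

alpha = np.mean(null_sims <= critical)   # Type I error rate under the null
power = np.mean(alt_sims <= critical)    # chance of supporting H1 when H1 is true
beta = 1.0 - power                       # Type II error rate
print(f"alpha is about {alpha:.3f}, beta about {beta:.3f}, power about {power:.3f}")
```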
5. Find the p-value. Data is needed to test the hypothesis, so here is the data: In a sample of 200 people, 72 have visited another country. The sample proportion is $\hat{p} = \dfrac{72}{200} = 0.36$. The p-value, which is the probability of getting the data, or more extreme values, assuming the null hypothesis is true, is always placed on the null distribution and always points in the direction of the extreme.
In this example, the p-value has been indicated on the null distribution.
6. Make a decision. The probability for the p-value is 0.04. To determine which hypothesis is supported by the data, we compare the p-value to alpha. If the p-value is less than or equal to alpha, the evidence supports the alternative hypothesis. In this case, the p-value of 0.04 equals alpha which is also 0.04, so this evidence supports the alternative hypothesis leading to the conclusion that the proportion of people who have visited another country is less than 40%.
7. Errors and their consequence. While this problem is not serious enough to have consequences that matter, we will, nevertheless, explore the consequences of the various errors that could be made.
Because the evidence supported the alternative hypothesis, we have the possibility of making a type I error. If we did make a type I error it would mean that we think fewer than 40% of Americans have visited another country, when in fact 40% have done so.
In contrast to this, if our data had been 0.38 so that our p-value was 0.20, then our results would have supported the null hypothesis and we could be making a Type II error. This error means that we would think 40% of Americans had visited another country when, in fact, the true proportion would be less than that.
8. Reporting results. Statistical results are reported in a sentence that indicates whether the data are significant, the alternative hypothesis, and the supporting evidence, in parentheses, which at this point include the p-value and the sample size (n).
For the example in which $\hat{p}$ = 0.36, we would write: the proportion of Americans who have visited other countries is significantly less than 0.40 (p = 0.04, n = 200).
For the example in which $\hat{p}$ = 0.38, we would write: the proportion of Americans who have visited other countries is not significantly less than 0.40 (p = 0.20, n = 200).
At this point, a brief explanation is needed about the letter p. In the study of statistics there are several words that start with the letter p and use p as a variable. The list of words includes parameters, population, proportion, sample proportion, probability, and p-value. The words parameter and population are never represented with a p. Probability is represented with notation that is similar to function notation you learned in algebra, f(x), which is read f of x. For probability, we write P(A) which is read the probability of event A. To distinguish between the use of p for proportion and p for p-value, pay attention to the location of the p. When p is used in hypotheses, such as $H_0: p = 0.6$, $H_1: p > 0.6$, it means the proportion of the population. When p is used in the conclusion, such as the proportion is significantly greater than 0.6 (p = 0.01, n = 200), then the p in p = 0.01 is interpreted as a p-value. If the sample proportion is given, it is represented as $\hat{p}$ = 0.64.
We will conclude this chapter with a final thought about why we are formal in the testing of hypotheses. According to Colquhoun (1971), “Most people need all the help they can get to prevent them from making fools of themselves by claiming that their favorite theory is substantiated by observations that do nothing of the sort. And the main function of that section of statistics that deals with tests of significance is to prevent people making fools of themselves” (as cited in Green, 1979).
Chapter 1 Homework
1. Identify each of the following as a parameter or statistic.
A. p is a
B. $\bar{x}$ is a
C. $\hat{p}$ is a
D. $\mu$ is a
2. Are hypotheses written about parameters or statistics? _________________
3. A sampling distribution is a histogram of which of the following?
______original data
______possible statistics that could be obtained when sampling from a population
4. Write the hypotheses using the appropriate notation for each of the following. Use meaningful subscripts when comparing two population parameters. For example, when comparing men to women, you might use subscripts of m and w, as in $p_m = p_w$.
4a. The mean is greater than 20. $H_0$: $H_1$:
4b. The proportion is less than 0.75. $H_0$: $H_1$:
4c. The mean for Americans is different than the mean for Canadians. $H_0$: $H_1$:
4d. The proportion for Mexicans is greater than the proportion for Americans. $H_0$: $H_1$:
4e. The proportion is different than 0.45. $H_0$: $H_1$:
4f. The mean is less than 3000. $H_0$: $H_1$:
5. If the p-value is less than $\alpha$,
5a. which hypothesis is supported?
5b. are the data significant?
5c. what type error could be made?
6. For each row of the table you are given a p-value and a level of significance (α). Determine which hypothesis is supported, if the data are significant, and which type of error could be made. If a given p-value is not a valid p-value (because it is greater than 1), put an x in each box in the row.
p-value | $\alpha$ | Hypothesis ($H_0$ or $H_1$) | Significant or Not Significant | Error (Type I or Type II)
0.043 0.05
0.32 0.05
0.043 0.01
0.0035 0.01
0.043 0.10
0.15 0.10
5.6 $\times 10^{-6}$ 0.05
7.3256 0.01
7. For each set of information that is provided, write the concluding sentence in the form used by researchers.
7a. $H_1: p > 0.5, n = 350$, p - value = 0.022, $\alpha = 0.05$
7b. $H_1: p < 0.25, n = 1400$, p - value = 0.048, $\alpha = 0.01$
7c. $H_1: \mu > 20, n = 32$, p - value = $5.6 \times 10^{-5}$, $\alpha = 0.05$
7d. $H_1: \mu \ne 20, n = 32$, p - value = $5.6 \times 10^{-5}$, $\alpha = 0.05$
8. Test the hypotheses:
$H_0: p = 0.5$
$H_1: p < 0.5$
Use a 2% level of significance.
8a. What is the direction of the extreme?
8b. Label each distribution with a decision rule line. Identify $\alpha$, $\beta$, and power on the appropriate distribution.
8c. What is the critical value?
8d. What is the value of $\alpha$?
8e. What is the value of $\beta$?
8f. What is the value of Power?
The Data: The sample size is 80. The sample proportion is 0.45.
8g. Show the p-value on the appropriate distribution.
8h. What is the value of the p-value?
8i. Which hypothesis is supported by the data?
8j. Are the data significant?
8k. What type error could have been made?
8l. Write the concluding sentence.
9. Test the hypotheses:
$H_0: \mu = 300$
$H_1: \mu > 300$
Use a 3.5% level of significance.
9a. What is the direction of the extreme?
9b. Label each distribution with a decision rule line. Identify $\alpha$, $\beta$, and power on the appropriate distribution.
9c. What is the critical value?
9d. What is the value of $\alpha$?
9e. What is the value of $\beta$?
9f. What is the value of Power?
The Data: The sample size is 10. The sample mean is 360.
9g. Show the p-value on the appropriate distribution.
9h. What is the value of the p-value?
9i. Which hypothesis is supported by the data?
9j. Are the data significant?
9k. What type error could have been made?
9l. Write the concluding sentence.
10. Question: Is the five-year cancer survival rate for all races improving?
5-Year Cancer Survival Rate. According to the American Cancer Society, in 1974-1976 the five-year survival rate for all races was 50%. This means that 50% of the people who were diagnosed with cancer were still alive 5 years later. These people could still be undergoing treatment, could be in remission, or could be disease-free. (www.cancer.org/acs/groups/con...securedpdf.pdf Viewed 5-29-13)
Study Design: To determine if the survival rates are improving, data will be gathered from people who were diagnosed with cancer at least 5 years before the start of this study. The data that will be collected is whether the people are still alive 5 years after their diagnosis. The data will be categorical, that is, the people will be put into one of two categories: survive or did not survive. Suppose the medical records of 100 people diagnosed with cancer are examined. Use a level of significance of 0.02.
10a. Write the hypotheses that would be used to show that the proportion of people who survive cancer for at least five years after diagnosis is greater than 0.5. Use the appropriate parameter.
$H_0:$
$H_1:$
10b. What is the direction of the extreme?
10c. Label the null and alternative sampling distributions below with the decision rule line, $\alpha$, $\beta$, power.
10d. What is the critical value?
10e. What is the value of $\alpha$?
10f. What is the value of $\beta$?
10g. What is the value of Power?
The data: The 5-year survival rate is 65%.
10h. What is the p-value for the data?
10i. Write your conclusion in the appropriate format.
10j. What Type Error is possible?
10k. In English, explain the conclusion that can be drawn about the question.
11. Why Statistical Reasoning Is Important for a Business Student and Professional
Developed in collaboration with Tom Phelps, Professor of Economics, Mathematics, and Statistics. This topic is discussed in ECON 201, Microeconomics.
Briefing 1.2
Generally speaking, as the price of an item increases, there are fewer units of the item purchased. In economics terms, there is less “quantity demanded”. The ratio of the percent change in quantity demanded to the percent change in price is called price elasticity of demand. The formula is $e_d = \dfrac{\%\Delta Q_d}{\%\Delta P}$. For example, if a 1% price increase resulted in a 1.5% decrease in the quantity demanded, the price elasticity is $e_d = \dfrac{-1.5\%}{1\%}$ = −1.5. It is common for economists to use the absolute value of $e_d$ since almost all $e_d$ values are negative. Elasticity is a unit-less number called an elasticity coefficient.
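As a small illustration (not part of the original problem), the elasticity calculation can be expressed in Python:

```python
def price_elasticity(pct_change_quantity, pct_change_price):
    """Price elasticity of demand: percent change in quantity demanded
    divided by percent change in price."""
    return pct_change_quantity / pct_change_price

e_d = price_elasticity(-1.5, 1.0)
print(e_d)        # -1.5, as in the example above
print(abs(e_d))   # economists commonly report the absolute value, 1.5
```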
Food is an item that is essential, so demand will always exist; however, eating out, which is more expensive than eating in, is not as essential. The average price elasticity of demand for food for the home is 0.51. This means that a 1% price increase results in a 0.51% decrease in quantity demanded. Because eating at home is less expensive than eating in restaurants, it would not be unreasonable to assume that as prices increase, people would eat out less often. If this is the case, we would expect that the price elasticity of demand for eating out would be greater than for eating at home. Test the hypothesis that the mean elasticity for food away from home is higher than for food at home, meaning that changing prices have a greater impact on eating out. (www.ncbi.nlm.nih.gov/pmc/articles/PMC2804646/) (www.ncbi.nlm.nih.gov/pmc/arti...46/table/tbl1/)
11a. Write the hypotheses that would be used to show that the mean elasticity for food away from home is greater than 0.51. Use a level of significance of 7%.
$H_0:$
$H_1:$
11b. Label each distribution with the decision rule line. Identify $\alpha$, $\beta$, and power on the appropriate distribution.
11c. What is the direction of the extreme?
11d. What is the value of $\alpha$?
11e. What is the value of $\beta$?
11f. What is the value of Power?
The Data: A sample of 13 restaurants had a mean elasticity of 0.80.
11g. Show the p-value on the appropriate distribution.
11h. What is the value of the p-value?
11i. Which hypothesis is supported by the data?
11j. Are the data significant?
11k. What type error could have been made?
11l. Write the concluding sentence.
A primary role of statistics is to use evidence from stochastic populations to improve our understanding of the world. Deciding what evidence will be collected is an essential part of the process. Research design is that portion of the statistical process in which planning is done so that the conclusions are drawn with confidence and can be supported under scrutiny.
There are three research designs we will explore in this chapter: observational studies, observational experiments, and manipulative experiments. The type of research that is conducted depends upon the objective of the research. In cases where the objective is to understand a population or compare populations, an observational study is appropriate. In cases in which we want to determine if a causal relationship exists between two variables, we conduct an experiment. A causal relationship (a cause-and-effect relationship) implies the existence of two variables, and the variable that is the cause must happen first. This first variable is called the explanatory variable; the variable that is affected is the response variable.
In experiments, the explanatory variable is a treatment or intervention that is imposed upon people or elements of a population. Of the two types of experiments, observational and manipulative, the latter is better for showing a causal relationship. In manipulative experiments the researcher can randomly assign the treatment or intervention, whereas in observational experiments the treatment or intervention is imposed by someone other than the researcher.
Before clarifying each of these research designs, a few examples might be useful.
Examples of Observational Studies
• A researcher might conduct a survey of Americans to compare the proportion of Democrats who support efforts at reducing carbon emissions to the proportion of Republicans who want to reduce carbon emissions.
• Water samples could be taken in the Puget Sound to determine the level of PCB contamination.
• Students could be given an unannounced exam on a math skill they had learned earlier in the school year to see how much they retained.
Examples of Observational Experiments
• Since some states have legalized the recreational use of marijuana, it is possible to determine if it really is a gateway drug by seeing if there is a change in the usage of harder drugs.
• When some states increase the minimum wage, it is possible to determine if raising the minimum wage has an effect on the number of people who are employed in the state by comparing them with states that don’t raise the minimum wage.
• When a natural disaster strikes an area, it is possible to determine the effect on donations to organizations such as the Red Cross.
Examples of Manipulative Experiments
• If a coach randomly assigns some runners to a weight training program and does not allow other runners to lift weights, but otherwise gives all runners the same training program, then the coach can determine the effect of weight training on running improvement.
• Loaves of homemade bread can be baked at different temperatures to determine the effect of temperature on the bread.
• A company can try different internet ads to see if there is an effect on sales of their product.
Randomness
Each research design incorporates one application of randomness. In the case of observational studies and observational experiments, a random selection is made from the population. This may be difficult for some observational experiments. For example, there are not enough states that have legalized marijuana to randomly select from. In manipulative experiments, the researcher randomly assigns participants to different groups, for example, to groups receiving a treatment and those not receiving it. The methods used for random selection and random assignment are discussed later in this chapter.
Distinguishing between research designs
It can be challenging to determine which research design is being used. The following questions can guide your decision.
1. Is the researcher looking for the effect of a treatment or intervention?
2. If the answer to the first question is yes, then can the researcher randomly assign participants to different groups?
The flow chart that follows uses these questions to determine the type of research design.
Types-of-Research Flow Chart
Experiments
Policy makers, business managers, physicians, educators, scientists, and coaches typically have an outcome they would like to achieve, but they want to make an evidence-based decision in order to achieve the outcome. That is, they want to know what variable they can change so that the change has an effect on a different variable. For example,
A policy maker may wonder what variable should be changed to reduce poverty.
A business manager may wonder what advertising strategy will lead to the greatest increase in sales.
A physician may wonder which medicine will cure a person.
An educator may wonder which teaching strategy will lead to the greatest amount of learning for the students.
For causal relationships, it has already been stated that a cause must precede an effect, but there is another criterion of importance. In a causal relationship, providing the treatment produces a particular outcome, while withholding the treatment means that outcome is not produced. Thus, simply showing that a certain response occurred when a treatment was provided does not prove the treatment caused the response. There could be another factor that caused that particular response. To prove causation, the research should be designed to show that one response occurs with the treatment and does not occur without the treatment and that there is unlikely to be another variable that is causing the response. This requires having at least two groups, one (or more) which receives the treatment and one that does not receive the treatment. The group that does not receive the treatment is called a control group.
When experiments are conducted on non-humans, it is possible to have a control group that does not receive any treatment. An example would be agriculture researchers who might fertilize some crops but not others. However, when an experiment is conducted on humans, there can be complications. In typical experiments involving the testing of new medicines on a person with an illness, it is not sufficient to simply give some people the medicine being tested and not give it to others. Humans can have psychosomatic effects – physical changes that are a result of the expectations of a certain effect of the medicine, attributed to the mind-body interactions. To address this problem, it is customary to give an inert medicine, called a placebo, to some of the participants. It is important that the subjects do not know if they are receiving the real medicine or the placebo. It is also important that the researcher examining the subjects doesn’t know either. This is achieved by doing a double blind experiment. In this type of experiment, subjects are randomly assigned to either the treatment group or the placebo group, but are not told which group they are in. The doctor is not told either.
A problem has been observed with this type of double blind experiment, however. That problem, called breaking blind, arises because subjects have to be warned about possible side effects from the medication. Consequently, those experiencing the side effects can guess they are taking the actual medicine and those that don’t experience them conclude they are taking the placebo. In some experiments, more than 80% of the doctors and subjects correctly identified whether a subject was in the treatment group or placebo group. Since a correct guess about the group should occur about 50% of the time, it is likely that side effects, or possibly other clues, led to the higher rate of correct identification. To help minimize this problem, some researchers use an active placebo instead of the more typical sugar pill. An active placebo produces side effects similar to the real medicine, but does not provide a cure for the medical condition.("Listening to Prozac, But Hearing Placebo." The Emperor's New Drugs: Exploding The Antidepressant Myth. Philadelphia: Basic Books, 2010. 7-20. Print.)
In medical studies, besides having a treatment group and a placebo group, it is appropriate to have a control group that receives no treatment at all. This is often accomplished because some people who apply to be in the experiment are not accepted. Because illnesses can go through cycles (good days, bad days) and people usually wait until they feel very bad to get treatment, comparing the results of treatment to people who don’t receive any treatment can help show whether something other than the normal cycle of symptoms is occurring as a result of the treatment.
Response variables, explanatory variables, levels, and confounding
Response and explanatory variables will be explained using teachers as an example. An ideal outcome for a teacher would be for the entire class of students to be successful in the class. The teacher would like to know which teaching strategies (pedagogy) will lead to the greatest success for the students. Notice in this example, there are two variables, teaching strategy and student success. Since teaching must come before assessment of student success, then teaching strategy is the explanatory variable and student success is the response variable.
The response variable is rather vague however. What does student success mean? There are many aspects of learning, such as memorization of facts, ability to calculate, skills in the laboratory, writing skills, ability to think critically, ability to think creatively, etc. A researcher needs to be clear about the response variable. For example, since this book is used for a statistics class, then one outcome of particular interest is whether students can correctly test a hypothesis. A different outcome might be whether the students can create appropriate graphs for the data.
There are many possibilities for the explanatory variable of teaching strategies. These possibilities are called levels. Examples of levels include lecturing, active learning, discovery learning, computer teaching software, etc. Levels are specific examples of the explanatory variable.
While teaching pedagogy is an explanatory variable a teacher can modify, it is not the only variable that can affect the response variable of student success. Other variables include student interest and motivation, the text, study time, distractions (lack of food or shelter, deployment of a spouse, divorce, illness, etc). These other variables, which could be used as explanatory variables in different research, are called confounding variables. Potential confounding variables should be identified during the research design stage so they can be controlled in the experiment by making sure that they are equally distributed in the different experimental groups.
To get practice identifying the different elements of research, you will be given stories that explain a research project. From the story, your objective is to identify the key elements, including the research question, the variables, the parameter, and the type of research. These will be organized in a research design table. When completing this table, think of the potential confounding variables yourself, as they are not usually included in the story.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/Explanatory Variables (if present)
Levels:
Some of the examples below contain underlined words, others do not. The purpose of underlining is to help you identify the key words in the story. Ultimately, you need to identify these parts without them being underlined.
Example 2.1 Is there a difference in the number of electronic items in the homes of people who were born and raised in the US compared to people who immigrated to the US and have lived in the US for at least 5 years?
To answer this question, the residency status of people will be classified as native or 5-year immigrant. Random samples of native residents and immigrants who have been in the US at least 5 years will be taken. All electronic items will be counted individually (e.g. cell phones, computers, TVs, radios). The objective is to determine if the mean number of electronic items is different for the two groups.
Research Design Table
Research Question: Is there a difference in the number of electronic items in the homes of people who were born and raised in the US compared to people who immigrated to the US and have lived in the US for at least 5 years?
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable? number of electronic items
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables. Income, wealth, age, size of family
Grouping/explanatory Variables (if present)
Residency status
Levels:
native
5-year immigrant
Briefing 2.1 Barefoot Running
In 2011, Vintage Books published the book “Born to Run: A Hidden Tribe, Superathletes, and the Greatest Race the World Has Never Seen” by Christopher McDougall. One of the topics discussed was the concept of barefoot running. The author argued that running barefoot (or with minimal protection between the sole of the foot and the ground) leads to a forefoot running style that results in fewer injuries than running with padded shoes and a heel strike.
A high school running coach would like to know if new runners using minimalist shoes that lead to a forefoot running style will have fewer injuries than new runners using padded shoes that lead to heel strikes. The coach uses a coin flip to randomly assign the type of shoe a new runner should wear. The coach will record this shoe choice and maintain an injury record for each athlete. Ultimately, the coach will determine if there is a difference in the proportion of runners from each group who are injured. Only new runners will be included because it would be difficult and perhaps inappropriate to change the running style of experienced runners.
Research Design Table
Research Question: Does running style make a difference in injuries?
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable? injury
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables. Prior running experience, prior injuries, general fitness
Grouping/explanatory Variables (if present)
Type of shoe
Levels:
minimalist shoes
Padded shoes
Example 2.3 Will raising the tax rate for the wealthy solve the national debt problem?
Every time a law is changed the country conducts an experiment. One would assume that lawmakers reflect carefully about the possible consequences of any change in law they approve. The country is now faced with a large national debt that has some lawmakers concerned and which occasionally attracts the interest of investors. There is also an ideological debate that persists about the benefits or consequences of raising taxes or cutting spending. A popular recommendation of some is to raise taxes on the wealthy.
Briefing 2.2 Marginal Tax Rate
Tax brackets are used to show the amount of tax paid for each dollar earned. The marginal tax rates for 2013 are shown in the table on the next page for people who are married filing jointly. (taxfoundation.org/article/us-...usted-brackets viewed 7/08/13)
A person earning $80,000 would pay 10% tax on the first $17,488, 15% tax on the money between $17,488 and $71,030, and 25% on the amount over $71,030.
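As a check on this bracket arithmetic, the sketch below computes the tax in Python using only the three bracket boundaries quoted above; the full 2013 schedule has additional higher brackets, but they do not matter for an $80,000 income.

```python
# A minimal sketch of the marginal tax computation described above.
# Only the brackets quoted in the text are included (an assumption).
def tax_owed(income):
    brackets = [(0, 17488, 0.10),
                (17488, 71030, 0.15),
                (71030, float("inf"), 0.25)]
    tax = 0.0
    for low, high, rate in brackets:
        if income > low:
            tax += rate * (min(income, high) - low)  # tax the slice in this bracket
    return tax

print(tax_owed(80000))  # 12022.60 = 1748.80 + 8031.30 + 2242.50
```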
The graph below shows the change in the marginal tax rate on the wealthiest Americans and the National Debt. From this graph we see that the national debt started rising a lot in the late 1970s and 1980s. We also notice this rise was preceded by big drops in the top marginal tax rates during the Reagan administration.
The information from this graph cannot be used to test the theory that lowering tax rates leads to increased national debt (or vice versa) because theories cannot be proved with the evidence that was used to create the theory in the first place. Therefore, if an economist wanted to test the theory about the effect of marginal tax rates on national debt, they would need to get different data. It would be unrealistic to expect that any country would agree to participate in an experiment in which a research economist would make them change their tax rates. However, countries do change tax rates on their own, so a researcher could observe what happens after each such change. The national debt before the rate change and 5 years after could be determined. If the goal is to establish a cause-and-effect relationship, it will also be necessary to identify changes in national debt for countries that do not change their tax rates. A comparison of the changes in national debt could be made for both groups. Other important aspects of national debt include the amount of spending that is done as well as the amount of concern legislators have with keeping the budget balanced.
Research Design Table
Research Question: Does lowering tax rates lead to an increase in national debt?
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable? Changes in national debt 5 years after rate change
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables. State of the economy, number of people at each economic level, government budget priorities
Grouping/explanatory Variables (if present)
Marginal tax rate change
Levels:
Control (did not change tax rates)
Impact (reduced tax rates)
Sampling
Observational studies and some observational experiments require random sampling from a population. The next step in the research design process is to determine how a sample will be taken from the population so that it is representative of the population. The objective is to avoid bias. Bias is systematic prejudice in one direction. Recall the sampling distributions that were discussed in Chapter 1. Half the statistics in the sampling distribution were less than the parameter and half were more; thus the probability of getting a statistic higher or lower than the parameter was the same. If sampling is not done correctly, it is easily possible to end up with biased results. That means sample statistics are more likely to fall below the parameter, or more likely to fall above it. For example, if you want to determine which sport people think is the most exciting, pro football or pro soccer, and you only sample people in a city with an NFL team, you are likely to get biased results in favor of pro football. On the other hand, if you conduct your survey in a city such as London, you are likely to get biased results in favor of soccer. Either way, you are getting biased results, which means any conclusion you draw is not valid.
Biased results are obtained when doing voluntary sampling and convenience sampling. Voluntary sampling occurs when people voluntarily agree to participate in a survey, such as an online survey or a TV survey where people can text their response. Convenience sampling occurs when responses are gathered from people who are convenient to reach. It is possible that these people share an opinion and consequently group together, resulting in biased results.
The best sampling is achieved using probability sampling methods. The four methods that will be discussed are:
1. Simple Random Sampling
2. Stratified Sampling
3. Systematic Sampling
4. Cluster Sampling
Simple Random Sampling
Simple random sampling meets two desirable criteria. First, every individual or unit in the population has an equal chance of being selected and second, every collection of selected units has an equal chance of being selected. The sampling distributions that underlie testing of hypotheses are based on simple random sampling with replacement. That means that once selected, a unit is put back into the pool and can be selected again. Consequently, information from the same unit can be used more than once.
The simplest example of a simple random sample is pulling names out of a hat. That is, everyone in a group can have their name written on a piece of paper and then put into a hat or other container. Someone mixes the pieces of paper and then pulls out a name. This is much like raffles that are done by organizations.
Putting names on a piece of paper quickly becomes unmanageable with larger populations, so a different strategy is needed. Instead, each person or unit is given a number and then numbers are selected. Data is then gathered from the person or unit with the selected number. Three different methods will be provided for doing a simple random sample. These methods make use of a table of random digits, the TI83 or TI84 calculator, and the website called Random.Org. The first two methods are known as pseudo-random, meaning that while a random process is used to generate the numbers, it is a repeatable process. They will be explained below. The random numbers generated at Random.Org are truly random as they are based on atmospheric noise. Visit the website and select integer generator to try their selection process.
Table of Random Digits
A table of random digits consists of the digits 0 – 9 that have been randomly selected, with replacement. They are grouped with 5 digits together for visual convenience. Rows and columns are numbered.
To use the table, determine the size of the population from which a sample will be drawn. Assign a number to each person or unit in the population. The easiest way to do this is to assign a 1 to the first person (unit), a 2 to the second person (unit), etc. However, this is not the only strategy. People or units may already have a number (e.g. student ID number, production number), which can be used. The number of digits that will be selected at the same time corresponds to the number of digits in the largest assigned number. If the selection is to be done from a population of 89 units, then since 89 is a 2-digit number, the assigned numbers will be 01, 02, ... 89 and all selections will be 2 digits. If the size of the population is 745, since this is a 3-digit number the assigned numbers will be 001, 002, ... 745.
Table of Random Digits
Row Col 1-5 Col 6-10 Col 11-15 Col 16-20 Col 21-25 Col 26-30 Col 31-35 Col 36-40
1 05902 75968 00100 12330 92481 64625 83012 90763
2 53365 25560 86425 45946 67093 36638 71740 16878
3 69363 06820 49676 25363 96300 94376 65819 19636
4 37520 54955 31507 70745 41817 86606 97766 44989
5 10390 12738 54072 03238 08294 89479 03156 24217
6 98735 90798 96609 18368 74876 17403 33783 85101
7 79609 87687 77178 39784 76983 05689 84023 24804
8 00348 58777 90570 09114 99677 08126 76132 19334
9 98367 93351 08246 81492 57876 04366 21851 28620
10 34588 88493 61188 29234 32565 82010 07425 37173
11 74198 34943 64557 20118 25540 50014 29338 87231
12 00621 86824 81204 71923 03600 69080 31712 36599
13 44684 53902 86099 98640 86347 88061 60420 54118
14 43526 09310 21922 40743 64742 12780 88432 41496
15 37335 98934 61403 85336 76356 22349 31498 34136
16 25488 41567 32833 56973 04039 57733 88677 44817
17 45327 69347 85698 03248 60079 64469 71406 19478
18 47458 08093 94256 14305 42728 676159 35991 13527
19 91622 23621 91124 08233 54571 73527 29012 31534
20 77630 37356 85498 21296 14880 24981 70976 64922
For example, what will be the numbers of the first 3 people that would be selected from a population with 6890 people? People are assigned numbers such as 0001, 0002, ... 6890. The selection will begin in row 16, which is reproduced below. Four digits will be selected in a row. If they are less than or equal to 6890 they will be selected (shown with underlining). If they are larger than 6890, they will be ignored.
16 25488 41567 32833 56973 04039 57733 88677 44817
The first three numbers that are selected are 2548, 6732 and 0403.
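The same scanning procedure can be written in a few lines of code. The sketch below is an assumed illustration, not the textbook's method; it concatenates the digits of row 16 and reads non-overlapping 4-digit groups.

```python
# A minimal sketch: scan a row of random digits in 4-digit groups and
# keep values between 0001 and the population size (6890).
row16 = "2548841567328335697304039577338867744817"  # row 16 of the table
population_size = 6890

selected = []
for i in range(0, len(row16) - 3, 4):      # non-overlapping 4-digit groups
    value = int(row16[i:i + 4])
    if 1 <= value <= population_size:
        selected.append(value)
    if len(selected) == 3:                 # stop after three selections
        break

print(selected)  # [2548, 6732, 403], i.e. persons 2548, 6732 and 0403
```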
The Texas Instruments TI84 calculator is able to generate random integers. A process that is analogous to picking a row in a table of random digits is to seed the calculator. The calculator is seeded and then random integers are selected. For example, if the seed number is 38, then the key strokes on the calculator would be:
38 sto math prb 1 rand enter. 38 should appear on the screen.
To generate the random number, the key strokes are:
math prb 5 randint, enter. The function randint expects the input of three numbers, the low, the high and the number of values you think will fit within the screen window. If we continue with the example of 6890 people, then since this is a 4 digit number we might expect 3 such numbers to fit on the screen, so we would enter: randint(1,6890,3). If we need more than 3 numbers, then we can just push enter again as often as is necessary.
The numbers that are selected in this example are: 2283, 3612, 3884.
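For readers working in software rather than on a calculator, a seeded pseudo-random generator behaves analogously. The Python sketch below is an assumed illustration; because Python's generator differs from the TI84's, the numbers it produces will not match the calculator's output above.

```python
# A minimal Python analogue of seeding and generating random integers,
# similar in spirit to 38 -> rand followed by randint(1, 6890, 3).
import random

random.seed(38)                        # seeding makes the results repeatable
sample = [random.randint(1, 6890) for _ in range(3)]
print(sample)  # three pseudo-random integers between 1 and 6890
```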
Stratified Sampling
There are times when parts of the population might be expected to produce different data than other parts. For example, it might be expected that the concentration of a toxic chemical in the Puget Sound would be higher near industrial areas than in locations that are far from those industrial areas. Since random sampling may result in areas being missed, stratified sampling can be done. In this approach, areas are defined, with each area being a stratum. A simple random sampling process is then used within each stratum.
As a separate example, a group seeking to expand public transportation in a state may wonder how much support there would be for an initiative. They might expect the support for public transportation will be substantially different for people who use public transportation than for people who never use it. Consequently, they may do a simple random sampling from each of these groups.
It should be noticed that stratifying is based on the assumption that there will be differences between the strata, although this may not be something that has been proved. This is different from actually having a hypothesis about the difference between the strata, in which case each stratum is considered to be a different population, rather than a different part of the same population.
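A minimal sketch of stratified sampling follows; the strata names, sizes, and per-stratum sample sizes are hypothetical, chosen only to show the structure of sampling within each stratum and then combining the results.

```python
# Stratified sampling: simple random sampling within each stratum,
# then the samples are combined before being analyzed.
import random

random.seed(10)                                   # illustrative seed
strata = {
    "uses_transit": list(range(1, 201)),          # 200 hypothetical riders
    "never_uses_transit": list(range(1, 601)),    # 600 hypothetical non-riders
}

combined_sample = []
for name, units in strata.items():
    combined_sample += random.sample(units, 5)    # 5 units from each stratum
print(combined_sample)                            # combined sample of 10 units
```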
Systematic Sampling
A sampling strategy that is particularly useful for sampling time series data is systematic sampling. This is a 1 in k sampling method in which every k$^{\text{th}}$ unit is selected. Since the value of data in one year may be influenced by the previous year (or more), the data are not independent. For example, this year’s cost of tuition is closely related to last year’s cost. Suppose that a sample is to be taken from time series data that is serially dependent when the data are 1, 2 and 3 years separated but not when they are separated by 4 years. In this case, sampling every 4$^{\text{th}}$ year would be appropriate. Suppose that data is available from 1961 to the present and a 1 in 4 systematic sampling method is used. What will be the first year in which data are selected? Since every year has to have a chance of being selected, it will be necessary to randomize the initial value. This will be done by randomly selecting one number between one and k. To find successive numbers, add k to the number selected. For example, if a TI84 calculator is seeded with the number 42, then randint(1961,1964,1) will produce the number 1962. To this will be added 4 repeatedly until a sample of the desired size has been selected. The table below shows the years that will be selected.
1961 1962 1963 1964 1965 1966 1967 1968 1969 1970
1971 1972 1973 1974 1975 1976 1977 1978 1979 1980
(Starting at 1962 and adding 4 repeatedly selects the years 1962, 1966, 1970, 1974, and 1978.)
The value of k depends upon the size of the population (N) and the size of the sample (n) and is found by dividing the former by the latter: $k \approx N/n$.
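The entire 1-in-k procedure fits in a few lines. The sketch below is an assumed illustration; Python's generator will not reproduce the calculator's starting year, so the starting value in the final comment is simply the one from the text.

```python
# A minimal sketch of 1-in-k systematic sampling of years, with k = 4.
import random

random.seed(42)                          # illustrative seed
k = 4
start = random.randint(1961, 1960 + k)   # random start among the first k years
years = list(range(start, 1981, k))      # add k repeatedly through 1980
print(years)  # e.g. a start of 1962 gives [1962, 1966, 1970, 1974, 1978]
```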
Cluster Sampling
Collecting data can be time consuming and expensive, neither of which is a trivial factor for any organization that needs the data. When data must be collected from different locations and there is not an assumption that the locations will cause the variation in the data, then cluster sampling can be used. For example, a community college may want to sample the student body about charging students a technology fee so that a new student computer lab can be built. Because students take many different classes and these classes are not likely to have a major impact on their preference about the fees, then different classes can be selected and all the students in those classes can be asked their preference on the fees. If a college has 450 classes, they can be numbered from 1 to 450 and a simple random sampling process can be used to select the desired number of classes. If the goal is to select 8 classes and a seed value of 16 is used, then on the TI84, the function randint(1,450,4) will give the class numbers 419, 313, 273, 229, 445, 162, 127, 428.
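The class-selection step can be sketched as follows; the seed and class count mirror the example above, but Python's generator will not reproduce the TI84's class numbers.

```python
# A minimal sketch of cluster sampling: randomly choose whole classes,
# then every student in the chosen classes becomes part of the sample.
import random

random.seed(16)                                   # illustrative seed
chosen_classes = random.sample(range(1, 451), 8)  # 8 of the 450 classes
print(sorted(chosen_classes))
```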
These methods are often confused. The following guidelines may help clarify the differences.
Simple Random Sample – Random sampling is done from the entire population.
Stratified Sampling – The entire population is divided into strata, then simple random sampling is done within each stratum. The samples from each stratum are combined before being analyzed.
Systematic Sampling – One number is randomly selected from the first k numbers. The numbers of the other data are found by adding k to the last number that was selected.
Cluster Sampling – The entire population is divided into groups or clusters, which are given numbers. The groups are randomly selected and every unit within the group becomes part of the sample.
Chapter 2 Homework
Complete the design-layout tables. Use underlined words when available.
1. A student would like to know which of two possible routes is faster for the daily trips to school. Route 1 is shorter but has many traffic lights. Route 2 is a little longer but doesn’t have traffic lights. Each morning, a coin flip will be used to determine the route taken to school. The time it takes for the commute will be measured with a stopwatch. After approximately 15 trials on each route, the average time for each will be compared.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variable (if present) Levels:
2. Suppose researchers wanted to know if the opinion people had about the future was influenced by the amount of news they consume (watched, listened to, or read). The researchers categorized news consumption into three categories: 5-7 days/week, 1-4 days/week, less than 1 day/week. They then asked the people their opinion of the future (if they expected the future to be better or worse than the present). They will compare the proportion of optimistic people in each group.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variable (if present) Levels:
3. Because so many species are becoming extinct, scientists would like to know how to increase biodiversity. There are two approaches to improve biodiversity in the world. The hands-off approach is one in which no one makes any deliberate changes to the environment with the intent of improving biodiversity. The deliberate approach is to deliberately introduce species that will reshape the environment, using surrogate species when necessary (e.g. use elephants instead of woolly mammoths, which are extinct). Examples of the first approach include the DMZ between North and South Korea. An example of the second includes the creation of a Pleistocene park in northeast Siberia by ecologist Sergei Zimov. Whether they occur by accident or design, there is no central planning organization that will randomly determine the approach that will be taken, so researchers can only look at the evidence after ecosystems have been engineered. A comparison will also be made with similar areas (control groups) that do not receive either the hands-off or deliberate approach. The researchers might record data on the increase in the number of species and determine if the average increase in number of species is different for the two approaches and the control groups. (Brand, Stewart. Whole Earth Discipline: An Ecopragmatist Manifesto. New York: Viking, 2009. Print.)
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variable (if present) Levels:
4. a. It has been hypothesized that a lack of flexibility of the hamstring muscles can contribute to poor posture. To determine if that is the case, a group of adults was randomly selected. The group was divided into two: those with good posture and those with poor posture. The flexibility of their hamstrings was measured using a sit and reach test. (http://silbergen564s15.weebly.com/. Viewed 4/8/2017) The further a person can reach, the greater their hamstring flexibility.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variable (if present) Levels:
b. Two types of stretching can be done to improve flexibility, static stretching and dynamic stretching. Static stretching involves stretching a muscle and holding it in a stretched position for about 30 sec. Dynamic stretching involves stretching while moving through a range of motion. To determine which type of stretching resulted in improvement, the group of people with poor hamstring flexibility were randomly assigned to one of three groups. One group did static stretching daily for one month. One group did dynamic stretching daily for one month. The third group was the control group, which did not do any stretching. Afterwards, the subjects were retested and categorized as improving or not improving since their first test.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variable (if present) Levels:
5. Researchers want to know the proportion of acres of forest in the state that show evidence of the brown beetle infestation.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variable (if present) Levels:
6. A teacher wants to know the mean amount of time community college students spend doing homework each night.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variable (if present) Levels:
7. A fisheries biologist wants to know the average weight of Coho Salmon returning to spawn.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variable (if present) Levels:
8. In Chapter 1, you were introduced to sampling distributions. Understanding these distributions proves challenging for many, but since they form the basis upon which p-values are determined and therefore conclusions are drawn, knowing how the distributions are created and what they mean is helpful for your understanding of statistics. Sampling distributions are really theoretical in nature because they would be extremely difficult to make in reality, but having the experience of partially making one should give you greater insight into what one would really be like. In this problem, you are given a set of data, which is considered the entire population. Each data value has been numbered. You will then practice the various sampling methods multiple times, using different seed values. In each case, you will determine the statistic of the sample, which in this case will be a sample proportion. You will then fill in one box in the distribution that is provided. The first box you put in should be considered the one and only sample that you would have taken. Use a different color for shading the box. The remaining samples you take will represent other possible samples that you would have gotten with a different seed number.
The population consists of all the berths in a harbor. Each dock has room for 20 boats. In this problem, each cluster is a different dock. The two strata are the west side of the harbor and the east side. Yes means there is a boat at the berth, no means that it is vacant.
West Side of Harbor East Side of Harbor
Cluster 1 Cluster 2 Cluster 3 Cluster 4 Cluster 5 Cluster 6 Cluster 7
1 Yes 21 No 41 Yes 61 No 81 101 121
2 Yes 22 No 42 No 62 No 82 Yes 102 122 Yes
3 No 23 No 43 No 63 No 83 103 Yes 123
4 Yes 24 No 44 Yes 64 No 84 104 Yes 124
5 No 25 No 45 No 65 No 85 Yes 105 Yes 125 Yes
6 Yes 26 Yes 46 Yes 66 No 86 Yes 106 No 126 No
7 No 27 No 47 No 67 Yes 87 No 107 No 127 Yes
8 Yes 28 No 48 No 68 No 88 No 108 Yes 128 No
9 No 29 No 49 No 69 No 89 Yes 109 No 129 Yes
10 No 30 No 50 No 70 Yes 90 No 110 No 130 No
11 No 31 No 51 Yes 71 No 91 No 111 No 131 No
12 Yes 32 No 52 Yes 72 Yes 92 Yes 112 Yes 132 Yes
13 Yes 33 No 53 Yes 73 Yes 93 No 113 No 133 No
14 Yes 34 No 54 Yes 74 Yes 94 Yes 114 No 134 Yes
15 35 Yes 55 No 75 Yes 95 No 115 No 135 No
16 36 Yes 56 No 76 Yes 96 No 116 No 136 No
17 Yes 37 Yes 57 No 77 No 97 Yes 117 Yes 137 Yes
18 Yes 38 Yes 58 No 78 Yes 98 No 118 Yes 138 No
19 No 39 No 59 Yes 79 No 99 No 119 Yes 139 Yes
20 No 40 No 60 No 80 No 100 No 120 Yes 140 Yes
For each sampling method, 20 samples will be taken. Sample with replacement, which means the same number can be selected more than once. Determine the proportion of samples that are Yes. On each line, write the number selected and a Y for yes or N for no (e.g. 8Y)
8. a. Use a simple random sample. The seed number for what will be considered the official sample is 5.
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
Sample Proportion ______
The following are alternate sample results you could get if you had used different sampling methods and seed numbers.
b. Use a stratified sample with a seed number of 10 for the West and 11 for the East.
West ______, ______, ______, ______, ______,______, ______, ______, ______, ______,______,
East _______, _______, _______, _______, _______,_______, _______, _______, _______
Sample Proportion ______
c. Use systematic sampling with a seed number of 15. Let k = 7.
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
Sample Proportion ______
d. Use a cluster sampling method with a seed number of 20.
Which cluster is selected? ___________ Sample Proportion _________
e. Use a simple random sample with a seed number of 25.
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
Sample Proportion ______
f. Use a stratified sample with a seed number of 30 for the West and 31 for the East.
West ______, ______, ______, ______, ______,______, ______, ______, ______, ______,______,
East _______, _______, _______, _______, _______,_______, _______, _______, _______
Sample Proportion ______
g. Use systematic sampling with a seed number of 35. Let k = 7.
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
Sample Proportion ______
h. Use a cluster sampling method with a seed number of 40.
Which cluster is selected? ___________ Sample Proportion _________
i. Use a simple random sample with a seed number of 45.
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
Sample Proportion ______
j. Use a stratified sample with a seed number of 50 for the West and 51 for the East.
West ______, ______, ______, ______, ______,______, ______, ______, ______, ______,______,
East _______, _______, _______, _______, _______,_______, _______, _______, _______
Sample Proportion ______
k. Use a cluster sampling method with a seed number of 55.
Which cluster is selected? ___________ Sample Proportion _________
l. Use systematic sampling with a seed number of 60. Let k = 7.
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
_______, _______, _______, _______, _______, _______, _______, _______, _______, _______,
Sample Proportion ______
m. Fill in a square in the appropriate column, starting at the bottom row (that does not contain the numbers). The first sample proportion you get (from problem 8a) should be shaded differently than the rest of the sample proportions.
[Grid for the sampling distribution chart: columns labeled 0.00, 0.05, 0.10, ..., 0.95, 1.00 along the bottom row, with empty squares above each value for shading one square per sample proportion.]
n. Find the parameter by finding the proportion of all the 140 responses that are yes. Show this on the chart in 8m. How do the sample proportions compare to the population proportion?
9. The first graph shows the change in employment when the Federal minimum wage has been increased. This graph shows a comparison of the number of people employed 6 months after the increase with the number employed six months before the increase. The numbers on the x-axis represent millions of people (i.e. 1000 x 1000), with positive numbers reflecting an increase in employment. Notice that most of the time when the minimum wage went up, so did employment. However, this graph does not provide solid evidence that raising the minimum wage leads to an increase in employment. This is because there is no comparison. It could be that jobs were increasing or decreasing anyway, because of bigger economic changes, and that the minimum wage had only a minor effect.
A better way to determine the effect of raising the minimum wage is to compare states that raise it with states that don’t, since states have the ability to raise the minimum wage above the Federal level. The average after-minus-before change in annual unemployment can be compared between these groups of states. For example, if the minimum wage in a state is increased in 2003, then the unemployment rate in 2002 can be subtracted from the unemployment rate in 2003. If the 2003 rate is lower than the 2002 rate, it means the unemployment rate went down and the difference would be a negative number. (Note: while the graph above was about the number of employed people, the graphs that follow are about the number of unemployed people).
a. Complete the Research Design table.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variable (if present) Levels:
Hypotheses and the level of significance are to be established before data is collected. The hypotheses for this question are that the average after-before difference in annual unemployment rates is different in the states that raise their minimum wage compared to states that don’t.
$H_0: \mu_{\text{Raise}} = \mu_{\text{Not Raise}}$
$H_1: \mu_{\text{Raise}} \ne \mu_{\text{Not Raise}}$
$\alpha = 0.05$
From the table at http://www.dol.gov/whd/state/stateMinWageHis.htm, minimum wage data is available for consecutive years from 2000 to 2013, with an indication of rate changes beginning in the year 2001. Sampling from this set of data will be done by selecting 3 different years and using all the data from those years.
b. What is the name of the sampling method that is being used? ________________________
Which three years will be selected if your TI84 calculator is seeded with the number 42 and the years 2001 thru 2012 can be selected? These years were chosen because minimum wage increase data are available for them, and unemployment records for the year of, and the year before, each increase are available. Unemployment rates are found at http://www.bls.gov/lau/tables.htm.
c. Which years are selected? _______, ________, _________
The two graphs below are of the actual After-Minus-Before change in unemployment rates for the various states in the years that were randomly selected.
d. Do the graphs appear to support the null hypothesis or the alternate hypothesis better?
e. Both graphs are based on the same data. Which graph do you think shows the data better? Why?
One additional graph is shown to the right. It includes concepts that will be discussed near the end of the book, but because this topic is of interest to the many people working at minimum wage, the graph is being included here. Each line is for a different year. The mean for each year is in the center of the vertical bar. The vertical bars on the left show the change in unemployment for the states that raised their minimum wage and the vertical bars on the right are for the states that did not. The bars represent the confidence interval. Since decreasing unemployment is viewed as desirable, then this graph shows that in two of the years (2004 and 2012), the states that raised their minimum wage reduced their unemployment rate more than the states that didn’t raise their rates. In 2006, the states that didn’t raise their minimum wage reduced their unemployment rate more than the states that did raise their rates.
f. The table below shows the average change in unemployment rates for all the data combined. Which hypothesis do these statistics support?
States Raised Minimum Wage States Did Not Raise Minimum Wage
Mean -0.615 -0.519
n 26 105
g. The p-value for a comparison of the two means is 0.286. Write a concluding sentence in the style used in scholarly journals (like you were taught in Chapter 1).
h. Suppose you were in a class in which this topic was being discussed. What would you say to a classmate who argued that the minimum wage should not be raised because it will lead to more unemployment?
What would you say to the classmate who argued that the minimum wage should be raised because it means the poorer people will have more money to spend which means businesses will do better and have to hire more people thereby causing unemployment to drop even more?
10. Why Statistical Reasoning Is Important for a Nursing Student and Professional
Developed in collaboration with Becky Piper, Pierce College Puyallup Nursing Program Director. This topic is discussed in NURS 112.
This problem is based on An Analysis of Falls in the Hospital: Can We Do Without Bedrails?
by H.C. Hanger, M.C. Ball and L.A. Wood. Journal of the American Geriatrics Society, 47:529-531.
There was a time when women who helped the sick and injured were poorly regarded. However, in 1844, Florence Nightingale, daughter of a British banker, started visiting hospitals and learning about the care of patients. She eventually provided leadership to the British field hospitals during the Crimean War of 1853-56. (http://en.Wikipedia.org/wiki/Crimean_War) While her efforts helped improve the quality of the hospitals, it was after the war that she reflected on results she considered disappointing. She sought the assistance of William Farr, who had recently invented the field of medical statistics. To help Florence understand the reasons for all the deaths in the hospital, he suggested that “We do not want impressions, we want facts.” One of her theories had been that many of the deaths were the result of inadequate food and supplies. The statistics led to a rejection of this theory and instead pointed to lack of sanitation as a cause. (https://www.sciencenews.org/article/...e-statistician) Nightingale was also known for her use of graphs as a way of showing her analysis. Because of Florence Nightingale, the profession of Nursing is inextricably linked with statistics. In the modern context, it is called “evidence-based practices”.
Because hospital patients, particularly the elderly, have physical, and possibly cognitive, problems that require placement in a hospital or nursing home, there is a need for nurses to keep the patients safe. One problem for these patients is falls, including falling out of bed. A standard practice for facilities has been to use bedrails so that a patient doesn’t accidentally roll out of bed.
The researchers who wrote the article could find no evidence that bedrails prevented falls, so they conducted their own experiment. They instituted a policy at their hospital (in Christchurch, NZ) to discontinue the use of bedrails unless there was a justifiable reason for their use that was documented and approved. Their experiment was to compare the average number of falls per 10,000 bed days after the implementation of the policy to before its implementation. If the bedrails helped reduce falls, the number of falls should increase after they are removed.
$H_0: \mu_{\text{after}} = \mu_{\text{before}}$
$H_1: \mu_{\text{after}} > \mu_{\text{before}}$
$\alpha = 0.05$
a. Complete the experiment design table
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variable (if present) Levels:
b. Before implementing the policy, the average number of falls per 10,000 bed days was 164.8 (S.D. = 20.6). After the new policy was implemented, the average number of falls per 10,000 bed days was 191.7 (S.D. = 40.7). The p-value was 0.18. Write a complete concluding sentence.
c. An additional part of the experiment was to compare the severity of the falls. Falls were classified as serious injury, minor injury or no injury. The table below shows the distribution of the injuries.
Pre-policy Post-policy
Serious injury 33 18
Minor injury 43 60
No injury 110 154
There is a significant difference in the injuries (p = 0.008). Explain what the difference is and give a possible reason for the difference.
d. If you were a nurse, would you suggest that bedrails be required or be removed? Why? | textbooks/stats/Introductory_Statistics/Foundations_in_Statistical_Reasoning_(Kaslik)/02%3A_Obtaining_Useful_Evidence.txt |
We live in a world in which decisions must be made without complete information. Knowing this, we intuitively seek to gather as much information as possible before making the actual decision. Consider marriage, which is a rather important decision. We can never know everything possible about a person we’d want to marry but we do seek as much information as possible by dating first. Employment is another example of an important decision, for both the employer and the potential employee. In each case, information is gained through interviews, resumes, references and research before a job offer is given or accepted.
When faced with a decision that will be based on data, it is the production of graphs and statistics that will be analogous to dating and interviews. The data that is collected must be useful to answer the questions that were asked. Chapter 2 focused on both the planning of the experiment and the random selection process that is important for producing good sample data. Chapter 3 will now focus on what to do with the data once you have it.
Types of Data
We have already classified data into two categories. Numeric data is considered quantitative while data consisting of words is called categorical or qualitative data. Quantitative data can be subdivided into discrete and continuous data.
• Discrete data contains a finite number of possible values because they are often based on counts. Often these values are whole numbers, but that is not a requirement. Examples of discrete data include the number of salmon migrating up a stream to spawn, the number of vehicles crossing a bridge each day, or number of homeless people in a community.
• Continuous data contains an infinite number of possible values because they are often based on measurements, which in theory could be measured to many decimal places if the technology existed to do so. Examples of continuous data include the weight of the salmon that are spawning, the time it takes to cross the bridge, or the number of calories a homeless person consumes in a day.
Discrete quantitative data and categorical data are often confused. Look at the actual data that would be written for each unit in the sample to determine the type of data. As an example, consider the brown beetle, which is infecting trees in the western US and Canada. If the purpose of the research was to determine the proportion of trees that are infected, then the data that would be collected for each tree is “infected” or “not infected”. Ultimately, the researcher would count the number of trees marked infected or not infected, but the data itself would be those words. If the purpose of the research was to determine the average number of brown beetles on each tree, then the data that would be collected is “the number of brown beetles on a tree”, which is a count. Thus, counts are involved for both categorical and discrete quantitative data. A single count summarizes categorical data, whereas if the counting is repeated in multiple places or times, the resulting counts become discrete quantitative data. For example, in class today, students on the class roster can be marked as present or absent and this would be categorical. However, if we consider the number of students who have been present each class during the past week, then the data in which we are interested is quantitative discrete.
Examining the evidence from sample data
Since sample data are our window into the stochastic data in the population, we need ways to make the data meaningful and understandable. This is accomplished by using a combination of graphs and statistics. There are one or more graphs and statistics that are appropriate for each type of data. In the following sections you will learn how to make the graphs by hand and how to find the statistics. Many other graphs exist besides this collection, but these are the basic ones.
Examining the evidence provided by sample categorical data
There are two graphs and two statistics that are appropriate for categorical data. The graphs most commonly used are bar graphs and pie charts. The statistics are counts and proportions. If the hypothesis being tested is about counts, then a bar graph and sample counts should be used. If the hypothesis being tested is about proportions, then a pie chart and sample proportions should be used. For categorical data, the statistics are found first and then used in the production of a graph.
Counts and Bar Graphs
Political leadership in the US is typically divided between two political parties, the Democrats and the Republicans. Only a few politicians have been elected as independents, meaning they do not belong to either of these parties. The highest elected positions other than the President are congressmen, senators, and state governors. If we want to understand the distribution of political parties in 2013, then the political party of our leaders is categorical data that can be put into a contingency table in which each cell represents a count of the number of people who fit both the leadership position category and the political party category. A bar graph can be made from these counts.
2013 Leadership Position

Political Party    Congress   Senate   Governor
Democrats             200        52        20
Independents            0         2         0
Republicans           233        46        30
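For readers who want to reproduce this graph with software rather than by hand, here is a minimal sketch in Python. It assumes the third-party matplotlib library is installed; the counts are taken from the table above.

```python
# Sketch of a grouped bar graph for the 2013 leadership counts.
# Assumes matplotlib is installed (pip install matplotlib).
import matplotlib.pyplot as plt
import numpy as np

positions = ["Congress", "Senate", "Governor"]
counts = {
    "Democrats":    [200, 52, 20],
    "Independents": [0, 2, 0],
    "Republicans":  [233, 46, 30],
}

x = np.arange(len(positions))   # one group of bars per leadership position
width = 0.25                    # width of each bar within a group

for i, (party, n) in enumerate(counts.items()):
    plt.bar(x + i * width, n, width, label=party)

plt.xticks(x + width, positions)
plt.ylabel("Count of officeholders")
plt.title("2013 Political Party by Leadership Position")
plt.legend()
plt.show()
```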
Proportions and Pie Charts
Opinion polls frequently use proportions or percentages to show support for candidates or initiatives. The difference between proportions and percentages is that percentages are obtained by multiplying the proportion by 100. Thus, a proportion of 0.25 would be equivalent to 25%. Formulas use proportions while we often communicate verbally using percentages. You should be able to move from one to the other effortlessly.
There are almost always two proportions of interest to us. The population proportion, represented with the symbol p, is the proportion we would really like to know, but which is usually unknowable. We make hypotheses about p. The sample proportion, represented with $\hat{p}$, is what we can find from sample data and is used to test the hypothesis. The formulas for proportions are:
$p = \dfrac{x}{N}$
and
$\hat{p} = \dfrac{x}{n}$
where $x$ is a count of the number of values in a category, $N$ is the size of the population, and $n$ is the size of the sample.
The results of two surveys discussed on a washingtonstatewire.com blog will be used for an example. Given that much of the transportation gridlock is caused by cars, and that Washington State’s bridges need maintenance (there was a bridge collapse on Interstate 5 near Mount Vernon, WA in 2013) it would be natural to wonder about voter support for state funding of transportation projects. Two polls were conducted at about the same time in 2013. (washingtonstatewire.com/blog/...portation-tax- package-offer-a-measure-of-voter-mood-after-bridge-collapse/ viewed 7-25-13.)
Poll 1 used human interviewers who began a scripted interview by observing that “of course transportation projects are expensive and take a long time to complete,” and concluded with, “as I said, transportation projects are expensive. The other part of the package will be how to pay for those improvements. No one likes to raise taxes, but as I read some funding options, tell me whether you would favor the proposal, be inclined to accept it, be inclined to oppose, or find it unacceptable.”
Poll 2 used robo-polling which asked voters whether it is important for “the legislature to pass a statewide package this year to address congestion and safety issues, fund road and bridge maintenance and improvement, and provide additional transit funding.”
As best as can be estimated from the article, the results of Poll 1 were that 160 out of 400 people who were surveyed supported raising taxes for improving the transportation system. The results of Poll 2 were that 414 out of 600 think it is important for the Legislature to pass the funding package.
From data such as this we can make a pie chart. This will be demonstrated with Poll 1 and then you should make a pie chart for Poll 2.
The first step in making a pie chart is to calculate the proportion of values in each group. In Poll 1, we will consider there are two groups. The first group is for those who supported raising taxes and the second group is for those who did not support raising taxes. Since 160 out of 400 people supported raising taxes, then the proportion is found by dividing 160 by 400. Therefore, $\hat{p} = \dfrac{160}{400} = 0.40$. As a
reminder, $\hat{p}$ is the proportion of the sample that supports raising taxes. It is a statistic, which provides insight into the population proportion, represented with the variable p. The legislators would like to know the value of p, but that would require doing a census, so they must settle for the sample proportion, $\hat{p}$. It is likely that p does not exactly equal $\hat{p}$, but that it is close to that value. When making a pie chart, draw the line separating the slices so that 40% of the circle is in one slice, which means that 60% of the circle is in the other.
There are a few things to notice about the pie chart. First, it contains a title that describes the content of the graph. Next, each slice contains a label that briefly explains the meaning of the slice, the number of data values that contributed to the slice and the percent of all the values that are put in the slice. Why should all of this information be included?
If you are going to use any graph to show the results of your research, it is important to communicate those results clearly. The goal is to produce reader-friendly graphs. A reader looking at an unlabeled graph will not be able to gain any understanding from it, and thus you will have failed to communicate something important. The percentage is included to make it easy for the reader to know the percent of values in each slice. Without the percentages, a person would need to guess at the percentage, and it is likely their guess would not be precise. Including the number of people in each slice is important because it gives the reader an indication of how seriously to treat the results. A survey of 40 people, of which 16 supported taxes, would have a pie chart identical to the one above. Likewise, a survey of 40,000 people, of which 16,000 supported taxes, would also be identical to the above graph. The larger the sample, the stronger the evidence it provides. This should be obvious from the graph, and therefore it is important to include the counts.
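To connect these labeling guidelines to software, here is a minimal sketch in Python (matplotlib assumed installed) that produces a completely labeled pie chart for Poll 1: a title, a label and count for each slice, and the percent of the whole in each slice.

```python
# Sketch of a completely labeled pie chart for Poll 1.
import matplotlib.pyplot as plt

counts = [160, 240]   # support raising taxes, do not support
labels = [f"Support taxes\nn = {counts[0]}",
          f"Do not support\nn = {counts[1]}"]

plt.pie(counts, labels=labels, autopct="%1.0f%%")  # autopct prints each slice's percent
plt.title("Poll 1: Support for Transportation Taxes (n = 400)")
plt.show()
```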
A mention must be made about computer graphics since most pie charts are produced on a computer. While computers can make very fancy and colorful graphs, the colors can be indistinguishable if printed on a black and white printer or photocopied in black and white. Keep this in mind when you make graphs and pick colors that will be distinguishable when copied in black and white.
Use the results of Poll 2 to produce a completely labeled pie chart. Find the sample proportion first.
Do these two polls produce similar results or opposite results? Were the questions well worded?
Why or why not?
A final word about pie charts needs to be made. In some circles, pie charts are not considered useful graphs. There is some evidence that people do not do a good job of interpreting them. Pie charts very seldom appear in scholarly journals. However, pie charts do appear in print media and can give an indication of how the whole is divided. They may be of benefit to those who like the visual representation, rather than just the statistics.
Examining the evidence provided by sample quantitative data
The three most common types of graphs used for quantitative data are histograms, box plots and scatter plots. Histograms and box plots are used for univariate data whereas scatter plots are used for bivariate data. A variate is a single measurement or observation of a random variable obtained for each subject or unit in a sample.(Sokal, Robert R., and F. James Rohlf. Introduction to Biostatistics. New York: Freeman, 1987, Print.) When there is only one random variable that is being considered, the data that are collected are univariate. When two random variables are being considered simultaneously for the same unit, then the data for the two variables are considered bivariate. Examples of univariate data include the number of vehicles on a stretch of highway, the amount it costs for a student to earn their degree, or the amount of water used by a household each month. Examples of bivariate data include the pairing of number of cars on the highway and the commute time, the amount of tuition and the amount of financial aid a student uses, or the number of people in a household and the amount of water used.
The statistics used for univariate data fit one of two objectives. The first objective is to define the center of the data and the second objective is to define the variation that exists in the data. The most common ways of defining the center are with the arithmetic mean and the median, although these are not the only two measures of center. In cases where the arithmetic mean is used, variation is quantified using standard deviation. The statistic most commonly used for bivariate data is correlation, which indicates the strength of the linear relationship between the two variables.
Histograms
Chapters one and two contained numerous examples of histograms. They are used to show the distribution of the data by showing the frequency or count of data in each class. The process for creating histograms by hand includes the following steps.
1. Identify the lowest and highest data values.
2. Create reader-friendly boundaries that will be used to sort the data into 4 to 10 classes. The lowest boundary should be a number that either equals, or is a nice number below, the lowest data value. The class width, which is the difference between consecutive boundaries, should be a factor of the boundary values.
3. Make a frequency distribution to provide an organized structure to count the number of data values in each class.
4. Create the histogram by labeling the x-axis with the lower boundaries and the y-axis with the frequencies. The height of the bars reflects the number of values in each class. Adjacent bars should touch.
5. Put a title on the graph and on each axis.
There isn’t a precise mathematical way to pick the starting value and the class width for a histogram. Rather, some thought is necessary to use numbers that are easy for a reader to understand. For example, if the lowest number in a set of data is 9 and the highest number is 62, then using a starting value of 0 and a class width of 10 would result in the creation of 7 classes with reader-friendly boundaries of 0, 10, 20, 30, 40, 50, 60, and 70. On the other hand, starting at 9 and using a class width of 10 would not produce reader-friendly boundaries (9, 19, 29, ...). Numbers such as 2, 4, 6, 8, ... or 5, 10, 15, 20, ..., or these same numbers multiplied by a power of 10, make good class boundaries.
Once the class boundaries have been determined, a frequency distribution is created. A frequency distribution is a table that shows the classes and provides a place to tally the number of data values in each class. The frequency distribution should also help clarify which class will be given the boundary values. For example, would a value of 20 be put into a 10 – 20 class or a 20 – 30 class? While there is no universal agreement on this issue, it seems a little more logical to have all the values that begin with the same number be grouped together. Thus, 20 would be put into the 20 – 30 class which contains all the values from 20.000 up to 29.999. This can be shown in a few ways as are demonstrated in the table below.
0 up to, but not including 10 $0 \le x < 10$ [0, 10)
10 up to, but not including 20 $10 \le x < 20$ [10, 20)
20 up to, but not including 30 $20 \le x < 30$ [20, 30)
30 up to, but not including 40 $30 \le x < 40$ [30, 40)
All three columns indicate the same classes. The third column uses interval notation; because it is explicit and requires the least amount of writing, it will be the method used in this text. As a reminder about interval notation, the symbol “[” indicates that the low number is included, whereas the symbol “)” indicates the high number is not included.
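The convention that a boundary value belongs to the higher class is easy to automate. The sketch below, in Python, sorts values into half-open classes [low, high); the function name and inputs are illustrative, not part of any standard library.

```python
# Sketch: count data values in half-open classes [low, high).
# Assumes every value satisfies start <= x < start + n_classes * width.
def frequency_distribution(data, start, width, n_classes):
    counts = [0] * n_classes
    for x in data:
        k = int((x - start) // width)   # index of the class containing x
        counts[k] += 1
    return [(start + i * width, start + (i + 1) * width, counts[i])
            for i in range(n_classes)]

# A boundary value such as 20 lands in [20, 30), as described above.
for low, high, freq in frequency_distribution([9, 20, 25, 31, 39.9], 0, 10, 4):
    print(f"[{low}, {high}): {freq}")
```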
To demonstrate the construction of a histogram, data from the US Department of Transportation, Federal Highway Administration will be used. (explore.data.gov/Transportat...3-mssz, 7-28-13) The data is the estimated number of miles driven in each state in December 2010. A stratified sample will be used since the data are already divided into regions of the country. The data in the table have units of millions of miles.
4778 768 859 3816
6305 4425 789 1517
9389 3681 21264 8394
583 2958 2034 2362
712 5858 738 7861
5664 352 16256 2594
665 28695 4435
1. The low value is 352, the high value is 28,695.
2. The lowest class boundary will be 0, the class width will be 5000. This will produce 6 classes.
3. This is the frequency distribution that includes a count of the number of values in each class.
Classes Frequency
[0, 5000) 18
[5000, 10000) 6
[10000, 15000) 0
[15000, 20000) 1
[20000, 25000) 1
[25000, 30000) 1
4. This is the completely labeled histogram. Notice how the height of the bars corresponds with the frequencies in the frequency distribution.
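As a check on the hand construction, here is a minimal sketch in Python (matplotlib assumed installed) that recomputes the frequency distribution and draws the histogram with the same reader-friendly boundaries.

```python
# Sketch of the histogram for the December 2010 driving data
# (units: millions of miles), with class width 5000 starting at 0.
import matplotlib.pyplot as plt

miles = [4778, 768, 859, 3816, 6305, 4425, 789, 1517, 9389, 3681, 21264,
         8394, 583, 2958, 2034, 2362, 712, 5858, 738, 7861, 5664, 352,
         16256, 2594, 665, 28695, 4435]

boundaries = list(range(0, 35000, 5000))   # 0, 5000, ..., 30000
freq, edges, _ = plt.hist(miles, bins=boundaries, edgecolor="black")
print(freq)                                # [18. 6. 0. 1. 1. 1.]

plt.xlabel("Miles driven (millions)")
plt.ylabel("Frequency")
plt.title("Estimated Miles Driven per State, December 2010")
plt.show()
```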
Suppose we want to compare the amount of driving in states with a large area to those with a smaller area. This could be done using a multiple bar histogram in which one set of bars will be for larger states and the other for smaller states.
Frequency Distribution and Multiple Bar Histogram:
Interpretation: While it might be reasonable to assume there would be more driving in bigger states because the distance between cities is larger, it is difficult to discern from this graph if that is the case. Therefore, in addition to the use of a graph, this data can be compared using the arithmetic mean and the standard deviation.
Arithmetic Mean, Variance and Standard Deviation
The arithmetic mean and standard deviation are common statistics used in conjunction with histograms. The mean is probably the most commonly used way to identify the center of data, but it is not the only method. The mean can be thought of as the balance point for the data, much like the fulcrum on a teeter-totter. Values far from the mean have a greater impact on it than do values closer to the mean in the same way a small child sitting at the end of a teeter-totter can balance with a larger person sitting near the fulcrum.
There are almost always two arithmetic means of interest to us. The population mean, represented with the symbol $\mu$ (mu), is the mean we would really like to know, but which is usually unknowable. We make hypotheses about $\mu$. The sample mean, represented with $\bar{x}$ (x-bar), is what we can find from a sample and is what is used to test the hypothesis. The formula for the means, as shown in Chapter 1 are:
$\mu = \dfrac{\sum x_i}{N}$ and $\bar{x} = \dfrac{\sum x_i}{n}$
where $\sum$ is an upper case sigma used in summation notation that means add everything that follows, $x_i$ represents the data values, $N$ is the number of values in the population, and $n$ is the number of values in the sample. These formulas say to add all the values and divide by the number of values.
There are several reasons why the arithmetic mean is commonly used and some situations in which it shouldn’t be used. A primary reason it is commonly used is that the sample mean is an unbiased estimator of the population mean: the mean of all the sample means that could be obtained from a population equals the population mean, so sample means do not systematically overestimate or underestimate it. An arithmetic mean is not the best measure of center when there are a few extremely high values in the data, as they will have more of an impact on the mean than the remaining values.
In addition to the mean, it is also useful to know how much variation exists in the data. Notice in the double bar histogram how the data in the states with the largest area is spread out more than the data in the states with the smallest area. The more spread out data is, the more difficult it is to obtain a significant result when testing hypotheses.
Standard deviation is the primary way in which the spread of data is quantified. It can be thought of as the approximate average distance between each data value and the mean. As with the mean, there are two values of standard deviation that interest us. The population standard deviation, represented with the symbol σ (lower case sigma), is the standard deviation we would really like to know, but which is usually unknowable. The sample standard deviation, represented with s, is what we can find from a sample. The formulas for standard deviation are:
$\sigma = \sqrt{\dfrac{\sum (x - \mu)^2}{N}}$
and
$s = \sqrt{\dfrac{\sum (x - \bar{x})^2}{n - 1}}$
North Atlantic hurricane data will be used to demonstrate the process of finding the mean and standard deviation. (Data from: www.wunderground.com/hurrican...asp?region=ep.)
Year 2005 2006 2007 2008 2009 2010 2011
Number of Hurricanes 15 5 6 8 3 12 7
Since this is a sample, the appropriate formula for finding the sample mean is $\bar{x} = \dfrac{\sum x_i}{n}$. The calculation is $\dfrac{15 + 5 + 6 + 8 + 3 + 12 + 7}{7} = \dfrac{56}{7} = 8$. There were an average of 8 North Atlantic hurricanes per year between 2005 and 2011. Notice that there weren’t 8 hurricanes every year. This is because there is natural variation in the number of hurricanes. We can use standard deviation as one way for determining the amount of variation. To do so, we will build a 3-column table to help with the calculations.
x $(x - \bar{x})$ $(x - \bar{x})^2$
15 15 - 8 = 7 $(7)^2 = 49$
5 5 - 8 = -3 $(-3)^2 = 9$
6 6 - 8 = -2 $(-2)^2 = 4$
8 8 - 8 = 0 $(0)^2 = 0$
3 3 - 8 = -5 $(-5)^2 = 25$
12 12 - 8 = 4 $(4)^2 = 16$
7 7 - 8 = -1 $(-1)^2 = 1$
$\sum (x - \bar{x}) = 0$ $\sum (x - \bar{x})^2 = 104$
Since this is a sample, the appropriate formula for finding the sample standard deviation is $s = \sqrt{\dfrac{\sum (x - \bar{x})^2}{n - 1}}$ which, after substitution, is $\sqrt{\dfrac{104}{7 - 1}} = 4.16$. This number indicates that the typical variation from the mean is about 4.16 hurricanes per year.
Variance is another measure of variation that is related to the standard deviation. Variance is the square of the standard deviation or, conversely, the standard deviation is the square root of the variance. The formulas for variance are:
$\sigma ^2 = \dfrac{\sum (x - \mu)^2}{N}$
and
$s^2 = \dfrac{\sum (x - \bar{x})^2}{n - 1}$
In the example about hurricanes, the variance is $s^2 = \dfrac{104}{7 - 1} = 17.33$.
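The mean, standard deviation, and variance above can be verified with Python’s standard-library statistics module, as in the sketch below.

```python
# Sketch verifying the hurricane statistics.
import statistics

hurricanes = [15, 5, 6, 8, 3, 12, 7]

print(statistics.mean(hurricanes))       # 8
print(statistics.stdev(hurricanes))      # sample formula (n - 1): about 4.16
print(statistics.variance(hurricanes))   # about 17.33
```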
Medians and Box Plots
Another combination of statistics and graphs is medians and box plots. A median must be found before a box plot can be created. A median is the value of a variable in an ordered array that has an equal number of items on either side of it. (Sokal, Robert R., and F. James Rohlf. Introduction to Biostatistics. New York: Freeman, 1987. Print.) To find the median, put the data in order from small to large. Assign a rank to the numbers. The smallest number has a rank of 1, the second smallest has a rank of 2, etc. The rank of the median is found using the following formula.
$Rank\ of\ Median = \dfrac{n + 1}{2}$
If n is odd, that is, if there are an odd number of data values, then the median will be one of the data values. If n is an even number, then the median will be the average of the two middle values.
The same hurricane data will be used in the first of two demonstrations for finding the median.
Year 2005 2006 2007 2008 2009 2010 2011
Number of Hurricanes 15 5 6 8 3 12 7
The first step is to create an ordered array.
Number of Hurricanes 3 5 6 7 8 12 15
Rank 1 2 3 4 5 6 7
The second step is to find the rank of the median using the formula $Rank\ of\ Median = \dfrac{n + 1}{2}$, $\dfrac{7 + 1}{2} = 4$.
The third step is to find the data value that corresponds with the rank of the median.
Since the rank of the median is 4 and the corresponding number is 7 hurricanes then the median number is 7 hurricanes.
The second demonstration will be with the number of East Pacific Hurricanes. Since there is no data for 2011, only the years 2005-2010 will be used.
Year 2005 2006 2007 2008 2009 2010
Number of Hurricanes 5 10 2 4 7 3
The first step is to create an ordered array.
Number of Hurricanes 2 3 4 5 7 10
Rank 1 2 3 4 5 6
The second step is to find the rank of the median using the formula $Rank\ of\ Median = \dfrac{n + 1}{2}$
$\dfrac{6 + 1}{2} = 3.5$. This means the median is halfway between the third and fourth values.
The third step is to find the data value that corresponds with the rank of the median.
The average of the third and fourth values is $\dfrac{4 + 5}{2} = 4.5$. Therefore the median number of East Pacific
hurricanes between 2005 and 2010 is 4.5. Notice that in this case, 4.5 is not one of the data values and it is not even possible to have half of a hurricane, but it is still the median.
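Both demonstrations can be checked with Python’s standard-library statistics module, as in the sketch below; it handles the odd and even cases automatically.

```python
# Sketch of the median for both hurricane data sets.
import statistics

north_atlantic = [15, 5, 6, 8, 3, 12, 7]   # odd n: median is a data value
east_pacific = [5, 10, 2, 4, 7, 3]         # even n: average of the two middle values

print(statistics.median(north_atlantic))   # 7
print(statistics.median(east_pacific))     # 4.5
```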
A box plot is a graph that shows the median along with the highest and lowest values and two other values called the first quartile and the third quartile. The first quartile can be thought of as the median of the lower half of the data and the third quartile can be thought of as the median of the upper half of the data.
The North Atlantic Hurricane Data will be used to produce a box plot.
The first step is to create an ordered array.
Number of Hurricanes 3 5 6 7 8 12 15
Rank 1 2 3 4 5 6 7
The second step is to identify the lowest value, the median, and the highest value.
Number of Hurricanes 3 5 6 7 8 12 15
Rank 1 2 3 4 5 6 7
The lowest value is 3 (rank 1), the median is 7 (rank 4), and the highest value is 15 (rank 7).
The third step is to identify the first quartile and the third quartile. This is done by finding the median of all the values below the median and the median of all the values above the median. For this data, the values below the median are 3, 5, 6, so the first quartile is 5; the values above the median are 8, 12, 15, so the third quartile is 12.
The box plot divides the data into 4 groups. It shows how the data within each group is spread out.
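A minimal Python sketch of the quartiles and the plot itself follows (matplotlib assumed installed). Be aware that software packages compute quartiles in slightly different ways, so their Q1 and Q3 can differ a little from the median-of-each-half method described above; for this particular data set, the standard library happens to agree.

```python
# Sketch of the quartiles and box plot for the North Atlantic data.
import statistics
import matplotlib.pyplot as plt

hurricanes = [3, 5, 6, 7, 8, 12, 15]

print(statistics.quantiles(hurricanes, n=4))   # [5.0, 7.0, 12.0] -> Q1, median, Q3
print(min(hurricanes), max(hurricanes))        # 3 15

plt.boxplot(hurricanes)
plt.ylabel("Number of hurricanes")
plt.title("North Atlantic Hurricanes, 2005-2011")
plt.show()
```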
When graphing quantitative data, is it better to use a histogram or a box plot? Compare the following graphs, which show a comparison of the number of hurricanes in four areas: North Atlantic, East Pacific, West Pacific, and Indian Ocean. The data are from the years 1970 – 2010.
While the histogram gives a more detailed breakdown of the data, it is very cluttered and difficult to interpret. Therefore, in spite of the additional information it provides, the reader has to study the graph intently to understand what it shows. On the other hand, the box plot provides less information, but it makes it much easier to draw a comparison between the different hurricane areas. In general, if there is only one set of data being graphed, a histogram is the better choice. If there are three or more sets of data being graphed, a box plot is the better choice. If there are two sets of data being graphed, make both a histogram and a box plot and decide which is more effective for helping the reader understand the data.
Scatter Plots and Correlation
Some research questions result from the desire to find an association between two quantitative variables. Examples include wealth gap (Gini Coefficient)/poverty rates, driving speed/distance to stop, height/jumping ability. The goal is to determine the relationship between these two random variables and in many cases to see if that relationship is linear.
For demonstration purposes, we will explore the relationship between the wealth gap as measured by the Gini Coefficient and the poverty rate. The units will be randomly selected US states from the year 2010. A scatter plot will give a quick understanding of the relationship.
From this scatter plot it appears that the greater the wealth gap, the lower the poverty rate, although the relationship is not a strong one since the points do not appear to be grouped close together to form a straight line. To determine the strength of the linear relationship between these variables we use the Pearson Product-Moment Correlation Coefficient.
There are almost always two correlation coefficients of interest to us. The population correlation, represented with the symbol ρ (rho), is the correlation coefficient we would really like to know, but which is usually unknowable. We make hypotheses about ρ. The sample correlation, represented with r, is what we can find from a sample and is what is used to test the hypothesis. The formula for the sample correlation coefficient is:
$r = \dfrac{\text{cov}(x, y)}{s_x s_y}$
The numerator is the covariance between the two variables, the denominator is the product of the standard deviation of each variable.
Correlation will always be a value between -1 and 1. A correlation of 0 means no correlation. A correlation of 1 means a direct linear relationship in which y gets larger as x gets larger. A correlation of -1 means an inverse linear relationship in which y gets smaller as x gets larger.
A brief explanation of the correlation formula follows. Think of bivariate data as an (x,y) ordered pair. The ordered pair $(\bar{x}, \bar{y})$ is the centroid of the data. For this data, the centroid is at (0.4467, 11.7857). This is shown in the graph below.
The covariance is given by the formula $\text{cov}(x, y) = \dfrac{\sum (x - \bar{x})(y - \bar{y})}{n - 1}$. It is based on the product of each point’s distance from the mean of the x values and its distance from the mean of the y values. Since multiplying both the x values and y values by 10 would result in a covariance that is 100 times larger, even though the scatter plot would look the same, the covariance is standardized by dividing by the product of the standard deviations of x and y.
Calculate the Covariance
(x, y) or (gini, pov)   $(x - \bar{x})$, i.e. $(x - 0.4467)$   $(y - \bar{y})$, i.e. $(y - 11.7857)$   $(x - \bar{x})(y - \bar{y})$
(0.486, 10.1) 0.0393 -1.6857 -0.0662
(0.443, 9.9) -0.0037 -1.8857 0.0070
(0.44, 11.6) -0.0067 -0.1857 0.0012
(0.433, 13) -0.0137 1.2143 -0.0167
(0.419, 13.2) -0.0277 1.4143 -0.0392
(0.442, 14.4) -0.0047 2.6143 -0.0123
(0.464, 10.3) 0.0173 -1.4857 -0.0257
Sum 0.0000 0.0000 -0.1518
$\text{cov}(x, y) = \dfrac{\sum (x - \bar{x})(y - \bar{y})}{n - 1}$
$\text{cov}(x, y) = \dfrac{-0.1518}{7 - 1}$
$\text{cov}(x, y) = -0.0253$
Calculate the standard deviation of $x$ and $y$
($x$, $y$) or (gini, pov)   $(x - \bar{x})$, i.e. $(x - 0.4467)$   $(x - \bar{x})^2$   $(y - \bar{y})$, i.e. $(y - 11.7857)$   $(y - \bar{y})^2$
(0.486, 10.1) 0.0393 0.00154 -1.6857 2.84163
(0.443, 9.9) -0.0037 0.00001 -1.8857 3.55592
(0.44, 11.6) -0.0067 0.00005 -0.1857 0.03449
(0.433, 13) -0.0137 0.00019 1.2143 1.47449
(0.419, 13.2) -0.0277 0.00077 1.4143 2.00020
(0.442, 14.4) -0.0047 0.00002 2.6143 6.83449
(0.464, 10.3) 0.0173 0.00030 -1.4857 2.20735
Sum 0.0000 0.0029 0.0000 18.9486
$S_x = \sqrt{\dfrac{\sum (x - \bar{x})^2}{n - 1}} = \sqrt{\dfrac{0.0029}{7 - 1}} = 0.0219$

$S_y = \sqrt{\dfrac{\sum (y - \bar{y})^2}{n - 1}} = \sqrt{\dfrac{18.9486}{7 - 1}} = 1.777$
Use these results to calculate the correlation.
\begin{align*} r &= \dfrac{\text{cov}(x, y)}{S_x S_y} \\[4pt] &= \dfrac{-0.0253}{(0.0219)(1.777)} \\[4pt] &= -0.650 \end{align*}
This correlation indicates that higher Gini Coefficients correspond with lower poverty levels in this sample. Whether a correlation of –0.650 indicates a significant relationship or is simply a random result from a population without correlation is a matter for a later chapter. (www.census.gov/prod/2012pubs/acsbr11-02.pdf)
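The entire covariance-and-correlation calculation can be reproduced in a few lines of Python, as in the sketch below (standard library only; Python 3.10 and later also offer statistics.correlation).

```python
# Sketch reproducing the Gini/poverty covariance and correlation.
import statistics

gini = [0.486, 0.443, 0.440, 0.433, 0.419, 0.442, 0.464]
pov = [10.1, 9.9, 11.6, 13.0, 13.2, 14.4, 10.3]
n = len(gini)

xbar, ybar = statistics.mean(gini), statistics.mean(pov)
cov = sum((x - xbar) * (y - ybar) for x, y in zip(gini, pov)) / (n - 1)
r = cov / (statistics.stdev(gini) * statistics.stdev(pov))

print(round(cov, 4))   # about -0.0253
print(round(r, 3))     # about -0.650
```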
While it is important to understand that a correlation between variables does not imply causation, a scatter plot is drawn with one of the variables being the independent x value, also known as the explanatory variable, and the other being the dependent y value, also known as the response variable. The names explanatory and response are used because if a linear relationship between the two variables exists, the explanatory variable can be used to predict the response variable. For example, one would expect that driving speed would influence stopping distance, rather than stopping distance influencing driving speed, so that driving speed would be the explanatory variable and stopping distance would be the response variable. However, a person may choose to drive slower under certain conditions because of how long it could take them to stop (e.g. a school zone), so the choice of explanatory and response variables must be consistent with the intent of the research. The accuracy of the prediction is based on the strength of the linear relationship. (www.census.gov/prod/2012pubs/acsbr11-01.pdf; Sheskin, David J. Handbook of Parametric and Nonparametric Statistical Procedures. Boca Raton: Chapman & Hall/CRC, 2000. Print.)
If a correlation between the explanatory variable and the response variable can be established, one of seven possibilities exists.
1. Changing the x variable will cause a change in the y variable
2. Changing the y variable will cause a change in the x variable
3. A feedback loop may exist in which a change in the x variable leads to a change in the y variable which leads to another change in the x variable, etc.
4. The changes in both variables are determined by a third variable
5. The changes in both variables are coincidental.
6. The correlation is the result of outliers, without which there would not be significant correlation.
7. The correlation is the result of confounding variables.
The best guideline is to assume that correlation is not causation; if you think it is in a certain circumstance, additional proof will need to be provided. A causal relationship can be established more easily with a manipulative experiment than with an observational experiment, since the latter may contain hidden confounding variables that are not accounted for.
TI-84 Calculator
The TI-84 calculator has the ability to quickly find all the statistics presented in this chapter. To find the arithmetic mean, standard deviation and all 5 box plot numbers, use the Stat key on your calculator. You will be presented with three options: EDIT, CALC, TESTS. Edit is already highlighted, so press the enter key and you will find three lists labeled L1, L2 and L3. There are also three other lists labeled L4, L5, L6 that can be found by scrolling to the right. Enter your data into one of the lists. After that, press the Stat key again, use your cursor arrows to scroll to the right until CALC is highlighted, then press enter. The first option is 1-Var Stats. It is already highlighted, so press enter, then press the 2nd key and the number corresponding to the list that your data is in (1-6). You will be presented with the following information.
$\bar{x}$ - Sample Arithmetic Mean
$\sum x$
$\sum x^2$
$S_x$ - Sample Standard Deviation
$\sigma_x$ - Population Standard Deviation
$n$ - sample size
min $X$ - lowest value
$Q$1 - first quartile
Med - median
$Q$3 - third quartile
max $X$ - highest value
For bivariate data, enter the x values into one list and the y values into a different list, making certain they are properly paired. Use the stat key, select Calc, then select 4:LinReg(ax + b). Use the second key to enter the list number for the x variable followed by a comma and then enter the list number for the y variable. This will provide more information than we are ready for at the moment, but the one value you will look for is labeled r. If the r is not visible, you will need to turn the calculator diagnostics on. This is done by using the 2$^{\text{nd}}$ key followed by 0 (which will get the catalog). Scroll down to diagnosticOn, then press enter twice.
Exercises: Examining the Evidence using Graphs and Statistics
1. Fans of professional sports teams expect the owners of the team to spend the necessary money to get the players who will help them win a championship. The payrolls of various professional sports teams in the US were divided into thirds, and the number of championships won by teams in each third was compared. Make a complete bar graph of this data. (data from unpublished student statistics class project)
Payroll Ranking Number of Championships
Lowest Third 2
Middle Third 7
Highest Third 11
2. According to National Geographic, in 1903 there were 307 varieties of corn seed sold by seed houses. In 1983, there were only 12 of these varieties still sold by seed houses, the rest no longer being used. Find the sample proportion of the 1903 corn seed varieties that were still available in 1983. Make a complete pie chart. (ngm.nationalgeographic.com/20...ariety-graphic viewed 9/9/13)
Do you think it is good or bad that there are fewer varieties? Why?
3. Between 2010 and 2014, two dams on the Elwha River in the Olympic National Park near Port Angeles, WA were removed, allowing salmon to spawn for the first time in that river in 100 years. Assume the weights of 10 Chinook salmon that returned were recorded. These weights are shown in the table below.
41 48 40 43 45
39 35 47 41 51
The purpose of this problem is to find all the statistics using the formulas by hand. Calculators should only be used to find the square root. Show all work.
Find the mean, variance and standard deviation, and the 5 box plot numbers for this data.
4. Students in developmental math classes such as intermediate algebra are expected to know their math facts quickly. Automaticity, or math fact fluency, is the ability to recall math facts without having to make the calculations. The benefit of quickly knowing the math facts is that the working memory of the brain is not filled with the effort to make the calculations, so it can focus on the higher level thinking required for the algebra. Intermediate algebra students were given an automaticity test in which they had to solve as many one-step linear equations as possible in one minute. All addition, subtraction and multiplication equations used numbers between –10 and 10, while division equations had answers in that range. All answers were integers. The table below gives the number of problems completed successfully in one minute.
11 30 28 31 23
27 29 9 38 18
19 17 26 12 10
19 20 17 15 23
22 34 23 36 10
Make a frequency distribution and histogram, using a starting value of 5 and a class width of 5. Label the graph completely.
Make a box plot.
5. The objective of the automaticity experiments is to determine if there is a relationship between a student’s math fact fluency and their final grade for the quarter. The table below contains the bivariate data for 6 of the students.
Automaticity Score Final Grade
Student 1 19 4
Student 2 31 2.9
Student 3 16 1.4
Student 4 19 4
Student 5 20 2.3
Student 6 16 1.3
Make a scatter plot for this data.
Find the correlation using the formulas. You can use your calculator for the basic functions but not for simply finding correlation, other than to check your answer. Show all your work.
6. Automaticity is one area to investigate when a college attempts to improve success rates for developmental math classes. If it is a factor in success, then the college will develop a method for helping students improve their automaticity. An experiment was conducted in which intermediate algebra students were given a computerized automaticity test that required them to solve a mixture of one-step linear equations. The addition, subtraction and multiplication equations used numbers between -10 and 10, and the division equations had answers in that same range. Examples include -3x = 12 and x + 5 = -3. A student’s score was the maximum number of problems they could answer correctly in one minute. Students had to get an answer correct before moving on to a new problem. One goal was to see if the average number of problems answered correctly in one minute was greater for students who passed the class than for those who didn’t pass.
The hypotheses that will be tested are:
$H_0: \mu_{\text{pass}} = \mu_{\text{fail}}$
$H_1: \mu_{\text{pass}} > \mu_{\text{fail}}$
$\alpha = 0.05$
a. Complete the design layout table.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion Correlation
List potential latent variables.
Grouping/explanatory Variable 1 (if present) Levels:
b. If two intermediate algebra classes were to be randomly selected from 12 classes being offered, with the classes being numbered 1 to 12, which two classes would be selected if the calculator was seeded with 27 or row 10 was used in the table of random digits?
c. What type of sampling method is used when a class is selected and everyone in the class participates in the research?
d. The data that will be gathered is the number of problems answered correctly in one minute. Are these data quantitative discrete, quantitative continuous or categorical?
The automaticity scores for the students who failed the class are shown in the table below.
16 15 14 15
13 16 22 30
20 8 13 14
16 16 16 16
6 27 9
The automaticity scores for the students who passed the class are shown in the table below.
20 19 33 15 20
14 9 11 12 17
8 20 31 38 29
9 22 31 31 30
9 22 31 31 30
15 10 10 23 22
7 11 23 17 19
20 20 18 9 27
25 15 18 9 27
25 15 23 28 11
20 13 36 34
e. Make a frequency distribution, double bar histogram and side-by-side box plots to show a graphical comparison of these two sets of scores.
f. Which graph is more effective in helping you see the difference between the data sets?
g. Find the mean, variance and standard deviation for both sets of data separately. You may use the statistical functions of your calculator.
h. The p-value of the statistical test that compares the two means is 0.0395. Write a concluding sentence in the style used in scholarly journals (like you were taught in Chapter 1).
i. Based on the results of this analysis and the decision rule in the story, will the college develop a program to help improve automaticity?
7. Why Statistical Reasoning Is Important for a Biology Student and Professional. Developed in collaboration with Elysia Mbuja and Robert Thissen, Biology Department. This topic is discussed in BIOL 160, General Biology.
To explore the scientific method, students will study the effect of alcohol on a Daphnia. Daphnia, living water fleas, are used because they are almost transparent and the beating heart can be seen. The theory to be tested is whether alcohol slows the heart rate of Daphnia. To conduct this test, a Daphnia will be placed in a drop of water on a microscope slide. The number of heartbeats in 15 seconds will be counted. The water will be removed from the slide and a drop of 8% alcohol will be placed on the Daphnia. After 1 minute, the heartbeats will be counted again. If the heartbeats are lower, it cannot be concluded that the alcohol is the reason. It could simply be the reaction to a drop of fluid being placed on the Daphnia, or the effect of being on a slide under a light. Therefore, after the Daphnia is allowed to recover, it is returned to the slide following exactly the same procedure, except a drop of water is used instead of alcohol.
$H_0: \mu_{\text{alcohol}} = \mu_{\text{water}}$
$H_1: \mu_{\text{alcohol}} < \mu_{\text{water}}$
$\alpha = 0.05$
a. Complete the experiment design table.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion Correlation
List potential latent variables.
Grouping/explanatory Variable 1 (if present) Levels:
b. Make an appropriate graph to compare the two sets of data. The data in the shaded cells is authentic. It comes from a BIOL 160 class.
c. Show the relevant statistics for the two sets of data.
Heart Rate after Alcohol Heart Rate after Water
Mean
Standard Deviation
Median
d. The p-value from the t-test for 2 independent populations is 1.28E-5. Write a concluding sentence.
e. What is the effect of alcohol on the heart rate of a Daphnia? Do you think it will have the same effect on a human?
Prior to elections, pollsters will survey approximately 1000 people and on the basis of their results, try to predict the outcome of the election. On the surface, it should seem like an absurdity that the opinions of 1000 can give any indication about the opinion of 100,000,000 voters in a national presidential election. Likewise, taking 100 or 1000 water samples from the Puget Sound, when that is a miniscule amount of water compared to the total amount of constantly changing water in the Sound, should seem insufficient for making a decision.
The objective of this chapter is to develop the theory that helps us understand why a relatively small sample size can actually lead to conclusions about a much larger population. The explanation is different for categorical and quantitative data. We will begin with categorical data.
The journey that you will take through this section has a final destination at the formula that will ultimately be used to test hypotheses. While you might be willing to accept the formula without taking the journey, it will be the journey that gives meaning to the formula. Because data are stochastic, that is, they are subject to randomness, probability plays a critical role in this journey.
Our journey begins with the concept of inference. Inference means that a small amount of observed information is used to draw general conclusions. For example, you may visit a business and receive outstanding customer service from which you infer that this business cares about its customers. Inference is used when testing a hypothesis. A small amount of information, the sample, is used to draw a conclusion, or infer something about the entire population.
The theory begins with finding the probability of getting a particular sample and ultimately ends with creating distributions of all the sample results that are possible if the null hypothesis is true. For each step of the 7-step journey, digressions will be made to learn about the specific rules of probability that contribute to the inferential theory.
Before starting, it is necessary to clarify some of the terminology that will be used. Regardless of the question being asked, if it produces categorical data, that data will be identified generically as a success or a failure, without using those terms in their customary manner. For example, a researcher making a hypothesis about the proportion of people who are unemployed would consider being unemployed a success from the statistical point of view, and being employed a failure, even though that contradicts the way these words are used in the real world. Thus, a success is a data value about which the hypotheses are written, and a failure is the alternate data value.
Briefing 4.1 Self-driving Cars
Google, as well as most car companies, is developing self-driving cars. These autonomous cars will not need to have a driver and are considered less likely to be in an accident than cars driven by humans. Cars such as these are expected to be available to the public around the years 2020 – 2025. There are many questions that must be answered before these cars are made available. One such question is to determine who is responsible in the event of an accident. Is the owner of the car responsible, even though they were not steering the car, or is the manufacturer responsible since their technology did not avoid the accident? (mashable.com/2014/07/07/drive...-cars-tipping-point/, viewed July 2014)
Step 1 – How likely is it that a particular data value is success?
Suppose a researcher wanted to determine the proportion of the public that believe the owner is responsible for the accident. The researcher has a hypothesis that the proportion is over 60%. In this case, the hypotheses will be:
• $H_0: p = 0.60$
• $H_1: p > 0.60$
When collecting data, the order in which the units or people are selected determines the order in which the data is collected. In this case, assigning responsibility to the owner will be considered a success and assigning responsibility to the manufacturer will be considered a failure. If the first person believes the owner is responsible, the second person believes the manufacturer is responsible, the third person selects the manufacturer, and the fourth, fifth and sixth people all select the owner as the responsible party, then we can convert this information to successes and failures by listing the order in which the data were obtained as SFFSSS.
The strategy that is employed to determine which of two competing hypotheses is better supported is always to assume that the null hypothesis is true. This is a critical point, because if we can assume a condition is true, then we can determine the probability of getting our particular sample result, or more extreme results. This probability is called a p-value.
However, before we can determine the probability of obtaining a sequence such as SFFSSS, we must first find the probability of obtaining a success. For this, we need to explore the concept of probability.
Digression 1 – Probability
Probability is the chance that a particular outcome will happen if a process is repeated a very large number of times. It is quantified by dividing the number of favorable outcomes by the number of possible outcomes. This is shown as a formula:
$P(A) = \dfrac{\text{Number of Favorable Outcomes}}{\text{Number of Possible Outcomes}}$
where $P(A)$ means the probability of some event called $A$. This formula assumes that all outcomes are equally likely, which is what happens with a good random sampling process. The entire set of possible outcomes is called the sample space. The number of possible outcomes is the same as the number of elements in the sample space.
While the intent of this chapter is to focus on developing the theory to test hypotheses, a few concepts will be explained initially with easier examples.
If we wanted to know the probability of getting a tail when we flip a fair coin, then we must first consider the sample space, which would be {H, T}. Since there is one element in that sample space that is favorable and the sample space contains two elements, the probability is $P(\text{tails}) = \dfrac{1}{2}$.
To find the probability of getting a 4 when rolling a fair die, we create the sample space with six elements {1, 2, 3, 4, 5, 6}, since these are the possible results that can be obtained when rolling the die. To find the probability of rolling a 4, we can substitute into the formula to get $P(4) = \dfrac{1}{6}$.
A more challenging question is to determine the probability of getting two heads when flipping two coins or flipping one coin twice. The sample space for this experiment is {HH, HT, TH, TT}. The probability is $P(HH) = \dfrac{1}{4}$. The probability of getting one head and one tail, in any order, is $P(\text{1 head and 1 tail}) = \dfrac{2}{4} = \dfrac{1}{2}$.
Probability will always be a number between 0 and 1, thus $0 \le P(A) \le 1$. A probability of 0 means something cannot happen. A probability of 1 is a certainty.
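The definition of probability as favorable outcomes over possible outcomes can be made concrete by listing the sample space with software. Here is a minimal Python sketch for the two-coin example.

```python
# Sketch: enumerate the sample space for two coin flips.
from itertools import product

sample_space = list(product("HT", repeat=2))   # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

p_two_heads = sum(1 for s in sample_space if s == ("H", "H")) / len(sample_space)
p_one_each = sum(1 for s in sample_space if set(s) == {"H", "T"}) / len(sample_space)

print(p_two_heads)   # 0.25
print(p_one_each)    # 0.5
```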
Apply this concept of probability to the hypothesis about the responsibility for a self-driving car in an accident.
If we assume that the null hypothesis is true, then the proportion of people who believe the owner is responsible is 0.60. What does that mean? It means that if there are exactly 100 people, then exactly 60 of them hold the owner responsible and 40 of them do not.
If our goal is to find the probability of SFFSSS, then we must first find the probability of getting a success (owner). If there are 100 people in the population and 60 of these select the owner, then the probability of selecting a person who chooses the owner is $P(owner) = \dfrac{60}{100} = 0.60$. Notice that this probability exactly equals the proportion defined in the null hypothesis. This is not a coincidence and it will happen every time, because the proportion in the null hypothesis is used to generate a theoretical population, which was used to find the probability. The first important step in the process of testing a hypothesis is to realize that the probability of any data value being a success is equal to the proportion defined in the null hypothesis, assuming that sampling is done with replacement, or that the population size is extremely large so that removing a unit from the population does not change the probability by a significant amount.
Example 1
• If $H_0: p = 0.35$, then the probability the $5^{\text{th}}$ value is a success is 0.35.
• If $H_0: p = 0.82$, then the probability the $20^{\text{th}}$ value is a success is 0.82.
Step 2 - How likely is it that a particular data value is a failure?
Now that we know how to find the probability of success, we must find the probability of failure. For this, we will again digress to the rules of probability.
Digression 2 - Probability of A or B
When one unit is selected from a population, there can be several possible measures that can be taken on it. For example, a new piece of technology could be put into several brands of cars and then tested for reliability. The contingency table below shows the number of cars in each of the categories. These numbers are fictitious and we will pretend it is the entire population of cars under development.
             Hyundai   Nissan   Google   Total
Reliable        80        75       90     245
Not Reliable    25        10       20      55
Total          105        85      110     300
From this we can ask a variety of probability questions.
If one car is randomly selected, what is the probability that it is a Nissan?
$P(Nissan) = \dfrac{85}{300} = 0.283$
If one car is randomly selected, what is the probability that the piece of technology was not reliable?
$P(\text{not reliable}) = \dfrac{55}{300} = 0.183$
If one car is randomly selected, what is the probability that it is a Hyundai or the piece of technology was reliable?
This question introduces the word “or” which means that the car has one characteristic or the other characteristic or both. The word “or” is used when only one selection is made. The table below should help you understand how the formula will be derived.
Notice that the 2 values in the Hyundai column are circled and the 3 values in the Reliable row are circled, but that the cell containing the number 80, which represents Hyundai and Reliable, is circled twice. We don’t want to count those particular cars twice, so after adding the column for Hyundai to the row for Reliable, it is necessary to subtract the cell for Hyundai and Reliable because it was counted twice but should only be counted once. Thus, the equation becomes:
P(Hyundai or Reliable) = P(Hyundai) + P(Reliable) – P(Hyundai and Reliable)
= $\dfrac{105}{300} + \dfrac{245}{300} - \dfrac{80}{300} = \dfrac{270}{300} = 0.90$
From this we will generalize the formula to be
$P(A\ or\ B) = P(A) + P(B) - P(A\ and\ B).$
What happens if we use the formula to determine the probability of randomly selecting a Nissan or a Google car?
Because these two criteria cannot both happen at the same time, they are said to be mutually exclusive. Consequently, their intersection is 0.
P(Nissan or Google) = P(Nissan) + P(Google) – P(Nissan and Google)
= $\dfrac{85}{300} + \dfrac{110}{300} - \dfrac{0}{300} = \dfrac{195}{300} = 0.65$
Because the intersection is 0, this leads to a modified formula for categorical data values that are mutually exclusive.
$P(A\ or\ B) = P(A) + P(B)$
This is the formula that is of primary interest to us for determining how to find the probability of failure.
If success and failure are the only two possible results, and it is not possible to simultaneously have success and failure, then they are mutually exclusive. Furthermore, if a random selection is made, then it is certain that it will be a success or a failure. Things that are certain have a probability of 1. Therefore, we can write the formula using S and F as:
$P(S\ or\ F) = P(S) + P(F)$
or
$1 = P(S) + P(F)$
with a little algebra this becomes
$1- P(S) = P(F)$
What this means is that subtracting the probability of success from 1 gives the probability of failure. The probability of failure is called the complement of the probability of success.
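The “or” rule and the complement rule can both be checked against the car reliability table with a short Python sketch; the dictionary below simply re-enters the counts from that table.

```python
# Sketch of P(A or B) and the complement rule for the car population.
counts = {("Hyundai", "Reliable"): 80, ("Nissan", "Reliable"): 75,
          ("Google", "Reliable"): 90, ("Hyundai", "Not Reliable"): 25,
          ("Nissan", "Not Reliable"): 10, ("Google", "Not Reliable"): 20}
total = sum(counts.values())   # 300

p_hyundai = sum(v for (make, _), v in counts.items() if make == "Hyundai") / total
p_reliable = sum(v for (_, rel), v in counts.items() if rel == "Reliable") / total
p_both = counts[("Hyundai", "Reliable")] / total

print(p_hyundai + p_reliable - p_both)   # P(Hyundai or Reliable) = 0.90
print(1 - p_reliable)                    # P(Not Reliable) = 55/300, about 0.183
```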
Apply this concept of probability to the hypothesis about the responsibility for a self-driving car in an accident.
Recall that the original hypothesis for the responsibility in an accident is $H_0: p = 0.60$. We have established that the probability of success is 0.60. The probability of failure is 0.40, since it is the complement of the probability of success and 1 – 0.60 = 0.40.
Example 2
• If $H_0: p = 0.35$, then the probability the $5^{\text{th}}$ value is a success is 0.35. The probability the $5^{\text{th}}$ value is a failure is 0.65.
• If $H_0: p = 0.82$, then the probability the $20^{\text{th}}$ value is a success is 0.82. The probability the $20^{\text{th}}$ value is a failure is 0.18.
Step 3 - How likely is it that a sample consists of a specific sequence of successes and failures?
We now know that the probability of success is identical to the proportion defined by the null hypothesis and the probability of failure is the complement. But these probabilities apply to only one selection. What happens when more than one is selected? To find that probability, we must learn the last of the probability rules:
$P(A\ and\ B) = P(A)P(B)$
Digression 3 - P(A and B) = P(A)P(B)
If two or more selections are made, the word “and” becomes important because it indicates we are seeking one result for the first selection and one result for the second selection. This probability is found by multiplying the individual probabilities. Part of the key to choosing this formula is to identify the problem as an “and” problem. For instance, early in this chapter we found the probability of getting two heads when flipping two coins is 0.25. This problem can be viewed as an “and” problem if we ask “what is the probability of getting a head on the first flip and a head on the second flip”? Using subscripts of 1 and 2 to represent the first and second flips respectively, we can rewrite the formula to show:
$P(H_1\ and\ H_2) = P(H_1)P(H_2) = (\dfrac{1}{2})(\dfrac{1}{2}) = \dfrac{1}{4} = 0.25.$
Suppose the researcher randomly selects three cars. What is the probability that there will be one car of each of the makes (Hyundai, Nissan, Google)?
Hyundai Nissan Google Total
Reliable 80 75 90 245
Not Reliable 25 10 20 55
Total 105 85 110 Grand Total 300
First, since there are three cars being selected, this should be recognized as an “and” problem and can be phrased P(Hyundai and Nissan and Google). Before we can determine the probability however, there is one important question that must be asked. That question is whether the researcher will select with replacement.
If the researcher is sampling with replacement, then the probability can be determined as follows.
P(Hyundai and Nissan and Google) = P(Hyundai)P(Nissan)P(Google) =
$(\dfrac{105}{300})(\dfrac{85}{300})(\dfrac{110}{300}) = 0.03636.$
If the researcher is sampling without replacement, then the number of cars remaining decreases with each selection, and the probability is P(Hyundai and Nissan and Google) = $(\dfrac{105}{300})(\dfrac{85}{299})(\dfrac{110}{298}) = 0.03673$. Notice the slight change in the probability as a result of not using replacement.
Apply this concept of probability to the hypothesis about the responsibility for a self-driving car in an accident.
We are now ready to answer the question of what is the probability that we would get the exact sequence of people if the first person believes the owner is responsible, the second person believes the manufacturer is responsible, the third person selects the manufacturer, the fourth, fifth and sixth people all select the owner as the responsible party. Because there are six people selected, then this is an “and” problem and can be written as P(S and F and F and S and S and S) or more concisely, leaving out the word “and” but retaining it by implication, we write P(SFFSSS). Remember that P(S) = 0.6 and P(F) = 0.4
P(SFFSSS) = P(S)P(F)P(F)P(S)P(S)P(S) = (0.6)(0.4)(0.4)(0.6)(0.6)(0.6) = 0.0207.
To summarize, if the null hypothesis is true, then 60% of the people believe the owner is responsible for accidents. Under these conditions, if a sample of six people is taken, with replacement, then the probability of getting this exact sequence of successes and failures is 0.0207.
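This sequence probability is easy to compute for any sequence of successes and failures, as the following Python sketch shows.

```python
# Sketch: probability of an exact sequence under H0: p = 0.60.
import math

p_success = 0.60
sequence = "SFFSSS"

prob = math.prod(p_success if c == "S" else 1 - p_success for c in sequence)
print(round(prob, 4))   # 0.0207
```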
Step 4 - How likely is it that a sample would contain a specific number of successes?
Knowing the probability of an exact sequence of successes and failures is not particularly important by itself. It is a stepping-stone to a question of greater importance – what is the probability that four out of six randomly selected people (with replacement) will believe the owner is responsible? This is an important transition in thinking that is being made. It is the transition from thinking about a specific sequence to thinking about the number of successes in a sample.
When data are collected, researchers don’t care about the order of the data, only how many successes there were in the sample. We need a way to transition from the probability of getting a particular sequence of successes and failures such as SFFSSS to finding the probability of getting four successes in a sample of size 6. This transition will make use of the commutative property of multiplication, the P(A or B) rule, and combinatorics (counting methods).
At the end of Step 3 we found that P(SFFSSS) = 0.0207.
• What do you think will be the probability of P(SSSFSF)?
• What do you think will be the probability of P(SSSSFF)?
The answer to both questions is 0.0207 because each of these sequences contains 4 successes and 2 failures. The probability of a sequence is found by multiplying the probabilities of success and failure in order, and since multiplication is commutative (order doesn’t matter),
(0.6)(0.4)(0.4)(0.6)(0.6)(0.6)= (0.6)(0.6)(0.6)(0.4)(0.6)(0.4)= (0.6)(0.6)(0.6)(0.6)(0.4)(0.4) = 0.020736.
If the question now changes from the probability of a specific sequence to the probability of 4 successes in a sample of size 6, then we have to consider all the different ways in which four successes can be arranged. We could get 4 successes if the sequence of our selection is SFFSSS or SSSFSF or SSSSFF or numerous other possibilities. Because only one of these mutually exclusive sequences can occur, this is an “or” problem that uses an expanded version of the formula P(A or B) = P(A) + P(B). This can be written as:
P(4 out of 6) = P(SFFSSS or SSSFSF or SSSSFF or ...) = P(SFFSSS) + P(SSSFSF) + P(SSSSFF) + ...
In other words, we can add the probability of each of these orders. However, since the probability of each of these orders is the same (0.0207), this process would be much quicker if we simply multiplied 0.0207 by the number of possible orders. The question we must answer, then, is how many ways are there to arrange 4 successes and 2 failures? To answer this, we must explore the field of combinatorics, which provides various counting methods.
Digression 4 – Combinatorics
Researchers designing the cars will compare different technologies to see which works better. Suppose two brands of video camera are available and there are three makes of car (Hyundai, Nissan, Google). How many different camera-and-car pairs are possible?
A tree-diagram can help explain this.
Making a tree-diagram to answer questions such as this can be tedious, so an easier approach is to use the fundamental counting rule which states that if there are M options for one choice that must be made and N options for a second choice that must be made, then there are MN unique combinations. One way to show this is to draw and label a line for each choice that must be made and on the line write the number of options that are available. Multiply the numbers.
_______2_________ ________3_________ = 6
Videos Cars
This tells you there are six unique combinations for one camera and one make of car.
If researchers have 4 test vehicles that will be driving on the freeway as a convoy and the colors of the vehicles are blue, red, green, and yellow, how many ways can these cars be ordered in the convoy?
To answer this question, think of it as having to make four choices, which color of car is first, second, third, fourth. Draw a line for each choice and on the line write the number of options that are available and then multiply these numbers. There are four options for the first car. Once that choice is made there are three options remaining for the second car. When that choice is made, there are two options remaining for the third car. After that choice is made, there is only one option available for the final car.
4 3 2 1 = 24 unique orders
First Car Second Car Third Car Fourth Car
Examples of some of these unique orders include: blue, red, green, yellow;  red, blue, green, yellow;  green, red, blue, yellow.
Each unique sequence is called a permutation. Thus in this situation, there are 24 permutations.
When all available elements are used, the number of permutations is found using a factorial. In this problem, all four cars are used, so the number of permutations is 4 factorial, which is shown symbolically as 4!. 4! means (4)(3)(2)(1). To be more general,
$n! = n(n - 1)(n - 2)... 1$
Permutations can also be found when fewer elements are used than are available. For example, suppose the researchers will only use two of the four cars. How many different orders are possible? For example, the blue car followed by the green car is a different order than the green car followed by the blue car. We can answer this question two ways (and hopefully get the same answer both ways!). The first way is to use the fundamental counting rule and draw two lines for the choices we make, putting the number of options available for each choice on the line and then multiplying.
4 3 = 12 permutations
First Car Second Car
Examples of possible permutations include:
Blue, Green;  Green, Blue;  Blue, Red;  Yellow, Green
The second approach is to use the formula for permutations when the number selected is less than or equal to the number available. In this formula, r represents the number of items selected and n represents the number of items available.
$_nP_r = \dfrac{n!}{(n - r)!}$
For this example, n is 4 and r is 2 so with the formula we get:
$_4P_2 = \dfrac{4!}{(4 - 2)!} = \dfrac{4!}{2!} = \dfrac{4 \cdot 3 \cdot 2 \cdot 1}{2 \cdot 1} = 4 \cdot 3 = 12$ permutations.
Notice the final product of $4 \cdot 3$ is the same as we have when using the fundamental counting rule. The denominator term of (n-r)! is used to cancel the unneeded numerator terms.
For permutations, order is important, but what if order is not important? For example, what if we wanted to know how many pairs of cars of different colors could be combined if we didn’t care about the order in which they drove. In such a case, we are interested in combinations. While Blue, Green, and Green, Blue represent two permutations, they represent only one combination. There will always be more permutations than combinations. The number of permutations for each combination is r!. That is, when 2 cars are selected there are 2! permutations for each combination.
To determine the number of combinations there are if two of the four cars are selected we can divide the total number of permutations by the number of permutations per combination.
$Number\ of\ combinations = Number\ of\ permutations (\dfrac{1\ Combination}{Number\ of\ Permutations})$
Using similar notation as was used for permutations (nPr), combinations can be represented with nCr, so the equation can be rewritten as
$\begin{aligned} {_nC_r} &= {_nP_r} \left(\dfrac{1}{r!}\right) \\ {_nC_r} &= \dfrac{n!}{(n - r)!} \left(\dfrac{1}{r!}\right) \\ {_nC_r} &= \dfrac{n!}{(n - r)!\,r!} \end{aligned}$
An alternate way to develop this formula, useful for larger sample sizes that contain successes and failures, is to consider that the number of permutations is n! while the number of permutations for each combination is r!(n - r)!. For example, in a sample of size 20 with 12 successes and 8 failures, there are 20! permutations of the successes and failures combined, with 12! permutations of successes and 8! permutations of failures for each combination. Thus,
$Number\ of\ combinations = 20!(\dfrac{1\ Combination}{12!8!}) = \dfrac{20!}{12!8!}\ or\ as\ a\ formula:$
$Number\ of\ combinations = n!(\dfrac{1\ Combination}{r!(n - r)!}) = \dfrac{n!}{r!(n - r)!}\ or\ \dfrac{n!}{(n - r)!r!}$.
For our example about car colors we have:
$_4C_2 = \dfrac{4!}{(4 - 2)! 2!} = \dfrac{4 \cdot 3 \cdot 2 \cdot 1}{2 \cdot 1 \cdot 2 \cdot 1} = 6$ combinations.
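If you want to check these counts with software, Python’s standard library includes both counting functions (assuming Python 3.8 or later). This is a sketch for verification only:

```python
from math import comb, perm  # available in Python 3.8+

print(perm(4, 2))  # 12 permutations of 2 cars chosen from 4
print(comb(4, 2))  # 6 combinations of 2 cars chosen from 4

# Each combination of 2 cars corresponds to 2! = 2 permutations.
print(perm(4, 2) // comb(4, 2))  # 2
```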
This sequence of combinatorics concepts has reached the intended objective: the interest is in the number of combinations there are for a given number of successes in a sample. We will now return to the problem of the responsibility for accidents.
Apply this concept of probability to the hypothesis about the responsibility for a self-driving car in an accident.
Recall that 6 people were selected and 4 thought the owner should be responsible. We saw that the probability of any sequence of 4 successes and 2 failures, such as SFFSSS or SSSFSF or SSSSFF is 0.0207 if the null hypothesis is p = 0.60. If we knew the number of combinations of these 4 successes and 2 failures, we could multiply that number times the probability of any specific sequence to get the probability of 4 successes in a sample of size 6.
Using the formula for nCr, we get: $_6C_4 = \dfrac{6!}{(6 - 4)!4!} = 15$ combinations.
Therefore, the probability of 4 successes in a sample of size 6 is 15*0.020736 = 0.31104. This means that if the null hypothesis of p=0.60 is true and six people are asked, there is a probability of 0.311 that four of those people will believe the owner is responsible.
We are now ready to make the transition to distributions. The following table summarizes our journey to this point.
Step 1 Use the null hypothesis to determine P(S) for any selection, assuming replacement.
Step 2 Use the P(A or B) rule to find the complement, which is P(F) = 1 - P(S).
Step 3 Use the P(A and B) rule to find the probability of a specific sequence of successes and failures, such as SFFSSS, by multiplying the individual probabilities.
Step 4 Recognize that all combinations of r successes out of a sample of size n have the same probability of occurring. Find the number of combinations nCr and multiply this times the probability of any of the combinations to determine the probability of getting r successes out of a sample of size n.
Step 5 – How can the exact p-value be found using the binomial distribution?
Recall that in chapter 2, we determined which hypothesis was supported by finding the p-value. If the p-value was small, less than the level of significance, we concluded that the data supported the alternative hypothesis. If the p-value was larger than the level of significance, we concluded that the data supported the null hypothesis. The p-value is the probability of getting the data, or more extreme data, assuming the null hypothesis is true.
In Step 4 we found the probability of getting the data (for example, four successes out of 6) but we haven’t found the probability of getting more extreme data yet. To do so, we must now create distributions. A distribution shows the probability of all the possible outcomes from an experiment. For categorical data, we make a discrete distribution.
Before looking at the distribution that is relevant to the problem of responsibility for accidents, a general discussion of distributions will be provided.
In chapter 4 you learned to make histograms. Histograms show the distribution of the data. In chapter 4 you also learned about means and standard deviations. Distributions have means and standard deviations too.
To demonstrate the concepts, we will start with a simple example. Suppose that someone has two routes used for running. One route is 2 miles long and the other route is 5 miles long. Below is the running schedule for last week.
Sunday Monday Tuesday Wednesday Thursday Friday Saturday
5 2 2 5 2 2 5
A distribution of the amount run each day is shown below.
The mean can be found by adding all the daily runs and dividing by 7. The mean is 3.286 miles per day. Because the same distances are repeated on different days, a weighted mean can also be used. In this case, the weight is the number of times a particular distance was run. A weighted mean takes advantage of multiplication instead of addition. Thus, instead of calculating $\dfrac{2 + 2 + 2 + 2 + 5 + 5 + 5}{7} = 3.286$, we can multiply each number by the number of times it occurs and then divide by the total number of occurrences: $\dfrac{4 \cdot 2 + 3 \cdot 5}{4 + 3} = 3.286$. The formula for a weighted mean is:
$\dfrac{\sum w \cdot x}{\sum w}$
The same graph is presented below, but this time there are percentages above the bars.
Instead of using counts as the weight, the percentages (actually the proportions) can be used as the weight. Thus 57.143% can be written as 0.57143. Likewise, 42.857% can be written 0.42857.
Substituting into the formula gives: $\dfrac{0.57143 \cdot 2 + 0.42857 \cdot 5}{0.57143 + 0.42857} = 3.286$. Notice that the denominator adds to 1. Therefore, if the weight is the proportion of times that a value occurs, then the mean of a distribution that uses percentages can be found using the formula:
$\mu = \sum P(x) x$
This mean, which is also known as the expected value, is the sum of the probability of a value times the value. There is no need for dividing, as is customary when finding means, because we would always just divide by 1.
Recall from chapter 4 that the standard deviation is the square root of the variance. The variance is $\sigma^2 = \sum[(x - \mu)^2 \cdot P(x)]$. The standard deviation is $\sigma = \sqrt{\sum[(x - \mu)^2 \cdot P(x)]}$.
$\sigma = \sqrt{(2 - 3.286)^2 \cdot 0.57143 + (5 - 3.286)^2 \cdot 0.42857} = 1.485$
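These weighted-mean and standard deviation calculations can be replicated with a short script. A minimal sketch in Python (ours, not from the text), using the values from the running example:

```python
# Weighted mean and standard deviation of the running-distance
# distribution, using the proportions 4/7 and 3/7 from the text.
distances = [2, 5]
proportions = [4 / 7, 3 / 7]

mu = sum(x * p for x, p in zip(distances, proportions))
sigma = sum((x - mu) ** 2 * p for x, p in zip(distances, proportions)) ** 0.5

print(round(mu, 3))     # 3.286
print(round(sigma, 3))  # 1.485
```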
The self-driving car problem will show us one way in which we encounter discrete distributions. In fact, it will result in the creation of a special kind of discrete distribution called the Binomial distribution, which will be defined after exploring the concepts that lead to its creation.
When testing a hypothesis about proportions of successes, there are two random variables that are of interest to us. The first random variable is specific to the data that we will collect. For example, in research about who is responsible for accidents caused by autonomous cars, the random variable would be “responsible party”. There would be two possible values for this random variable, owner or car manufacturer. We have been considering the owner to be a success and the manufacturer to be a failure. The data the researchers collect is about this random variable. However, creating distributions and finding probabilities and p-values requires a shift of our focus to a different random variable. This second random variable is about the number of successes in a sample of size n. In other words, if six people are asked, how many of them think the owner is responsible? It is possible that none of them think the owner is responsible, or one thinks the owner is responsible, or two, or three, or four, or five, or all six think the owner is responsible. Therefore, in a sample of size 6, the random variable for the number of successes can have the values of 0,1,2,3,4,5,6. We have already found that the probability of getting four successes is 0.3110. We will now find the probability of getting 0,1,2,3,5,6 successes, assuming the hypotheses are still $H_0: p = 0.60$, $H_1: p > 0.60$. This will allow us to create the discrete binomial distribution of all possible outcomes.
Find the probability of 0 successes
The only way to have 0 successes is to have all failures, thus we are seeking P(FFFFFF).
P(FFFFFF) = P(F)P(F)P(F)P(F)P(F)P(F) = (0.4)(0.4)(0.4)(0.4)(0.4)(0.4) = 0.004096.
Since there is only one combination for 0 successes, then the probability of 0 successes is 0.0041.
Find the probability of 1 success
We know that all combinations have the same probability, so we may as well create the simplest combination for 1 success. This would be SFFFFF.
P(SFFFFF) = P(S)P(F)P(F)P(F)P(F)P(F) = (0.6)(0.4)(0.4)(0.4)(0.4)(0.4) = $(0.6)^{1}(0.4)^{5}$= 0.006144.
How many combinations are there for 1 success? This can be found using $_6C_1$.
$_6C_1 = \dfrac{6!}{(6 - 1)!1!} = 6$ combinations. Does this answer seem reasonable? Consider there are only 6 places in which the success could happen.
The probability of 1 success is then 6*0.006144 = 0.036864 or if we round to four decimal places, 0.0369.
Find the probability of 2 successes.
Instead of doing this problem in steps as was done for the prior examples, it will be demonstrated by combining steps.
$_6C_2 P$(SSFFFF)
$\dfrac{6!}{(6 - 2)! 2!} (0.6)(0.6)(0.4)(0.4)(0.4)(0.4) = \dfrac{6!}{(6 - 2)! 2!} (0.6)^2(0.4)^4 =$
15(0.009216) = 0.13824 or with rounding to four decimal places 0.1382.
Find the probability of 3 successes using the combined steps. (Now it’s your turn.)
Find the probability of 5 successes.
Find the probability of 6 successes.
When all the probabilities have been found, we can create a table that shows the values the random variable can take and their probabilities. We will define the random variables for the number of successes as X with the possible values defined as x.
X = x 0 1 2 3 4 5 6
P(X = x) 0.0041 0.0369 0.1382 0.2765 0.3110 0.1866 0.0467
Do your values for 3,5, and 6 successes agree with the values found in the table?
A graph of this distribution can lead to a better understanding of it. This graph is called a probability mass function, which is shown using a stick graph. It is a way to graph discrete distributions, since there cannot be any values in between the numbers on the x-axis. The heights of the bar correspond to the probability of getting the number of successes.
There are three things you should notice about this distribution. First, it is a complete distribution. That is, in a sample of size six, it is only possible to have 0,1,2,3,4,5, or 6 successes and all of those have been included in the graph. The second thing to notice is that all the probabilities have values between 0 and 1. This should be expected because probabilities must always be between 0 and 1. The final thing to notice, which may not be obvious at first, is that if you add all the probabilities, the sum will be 1. The sum of all complete probability distributions should equal 1, $\sum P(x) =1$. If you add all the probabilities and they don’t equal one, but are very close, it could be because of rounding, not because you did anything wrong.
Digression 5 - Binomial Distributions
The entire journey that has been taken since the beginning of this chapter has led to the creation of a very important discrete distribution called the binomial distribution, which has the following components.
1. A Bernoulli Trial is a single selection, or trial, that can have only two possible results, success and failure.
2. An experiment can consist of n independent Bernoulli Trials.
3. A Binomial Random Variable, X is the number of successes in an experiment
4. A Binomial Distribution shows all the values for X and the probability of each of those values occurring.
The assumptions are that:
1. All trials are independent.
2. The number of trials in an experiment is the same and defined by the variable n.
3. The probability of success remains constant for each sample. The probability of failure is the complement of the probability of success. The variable $p = P(S)$ and the variable $q = P(F)$, where $q = 1 - p$.
4. The random variable X can have values of 0, 1, 2,...n.
The probability can be found for each possible number of successes that the random variable X can have using the binomial distribution formula
$P(X = x) = {_nC_x} p^x q^{n-x}$.
If this formula looks confusing, review the work you did when finding the probability that 3, 5, or 6 people believe the owner is responsible, because you were actually using this formula. $_nC_x$, which is shown in your calculator as $_nC_r$, is the number of combinations for x successes. The x and the r represent the same thing and are used interchangeably.
p is the probability of success. It comes from the null hypothesis.
q is the probability of failure. It is the complement of p.
n is the sample size
x is the number of successes
If we use this formula for all possible values of the random variable, X, we can create the binomial distribution and graph.
$P(X = 0) = _6C_0(0.60)^0(0.40)^{6 - 0} = 0.0041$
$P(X = 1) = _6C_1(0.60)^1(0.40)^{6 - 1} = 0.0369$
$P(X = 2) = _6C_2(0.60)^2(0.40)^{6 - 2} = 0.1382$ You can finish the rest of them.
The TI84 calculator has an easier way to create this distribution. Find and press your Y= key. The cursor should appear in the space next to Y1 = . Next, push the $2^{\text{nd}}$ key, then the key with VARS on it and DISTR above it. This will take you to the collection of distributions. Scroll up until you find Binompdf. This is the binomial probability distribution function. Select it and then enter the three values n, p, x. For example, if you enter Y1=Binompdf(6,0.60,x) and then select $2^{\text{nd}}$ TABLE, you should see a table that looks like the following:
X Y1
0 0.0041
1 0.03686
2 0.13824
3 0.27648
4 0.31104
5 0.18662
6 0.04666
If the table doesn't look like this, press $2^{\text{nd}}$ TBLSET and make sure your settings are:
TblStart = 0
$\Delta$ TBL = 1
Indpnt: Auto
Depend: Auto.
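If a TI84 is not available, the same table can be generated with a short script. Below is a minimal sketch in Python using only the standard library; the helper name binom_pmf is ours, not a built-in:

```python
from math import comb

def binom_pmf(n: int, p: float, x: int) -> float:
    """P(X = x) = nCx * p^x * q^(n - x), the formula from the text."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Reproduce the Binompdf(6, 0.60, x) table.
for x in range(7):
    print(x, round(binom_pmf(6, 0.60, x), 5))
```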
Binomial distributions have a mean and standard deviation. The approach for finding the mean and standard deviation of a discrete distribution can be applied to a binomial distribution.
$X = x$ 0 1 2 3 4 5 6
$P(X = x)$ 0.0041 0.0369 0.1382 0.2765 0.3110 0.1866 0.0467
$x(P(x))$ 0 0.0369 0.2764 0.8295 1.244 0.933 0.2802
$(x - \mu)^2$ $(0 - 3.6)^2 = 12.96$ 6.76 2.56 0.36 0.16 1.96 5.76
$(x - \mu)^2 \cdot P(x)$ 0.0531 0.2494 0.3538 0.0995 0.0498 0.3657 0.2690
$\mu = \sum P(x) x$
$\mu = 0 + 0.0369+0.2764+0.8295+1.244+0.933+0.2802 = 3.6$
$\sigma = \sqrt{\sum[(x - \mu)^2 \cdot P(x)]}$
$\sigma = \sqrt{0.0531 + 0.2494 + 0.3538 + 0.0995 + 0.0498 + 0.3657 + 0.2690} = \sqrt{1.4403} = 1.20$
The mean is also called the expected value of the distribution. Finding the expected value and standard deviation using these formulas can be very tedious. Fortunately, for the binomial distribution, there is an easier way. The expected value can be found with the formula:
$E(x) = \mu = np$
The standard deviation is found with the formula
$\sigma = \sqrt{npq} = \sqrt{np(1 - p)}$
To determine the mean number of people who think the owner is responsible for accidents, use the formula
$E(x) = \mu = np = 6\ (0.6) = 3.6$.
This indicates that if lots of samples of 6 people were taken the average number of people who believe the owner is responsible would be 3.6. It is acceptable for this average to not be a whole number.
The standard deviation of this distribution is: $\sigma = \sqrt{np(1 - p)} = \sqrt{6 (0.6) (0.4)} = 1.2$
Notice the same results were obtained with an easier process. The formulas $\mu = np$ and $\sigma = \sqrt{np(1 - p)}$ should be used to find the mean and standard deviation for all binomial distributions.
Apply this concept of probability to the hypothesis about the responsibility for a self-driving car in an accident.
Now that you have the ability to create a complete binomial distribution, you are ready to test a hypothesis. This will be demonstrated with the autonomous car example that has been used throughout this chapter.
Suppose a researcher wanted to determine the proportion of people who believe the owner is responsible. The researcher may have had a hypothesis that the proportion of people who believe the owner is responsible for accidents is over 60%. In this case, the hypotheses will be: H0: p = 0.60 and H1: p > 0.60. The level of significance will be 0.10 because only a small sample size will be used. In this case, the sample size will be 6.
With this sample size we have already seen what the binomial distribution will be like. We also know that the direction of the extreme is to the right because the alternative hypothesis uses a greater than symbol.
The researcher randomly selects 6 people. Of these, four say the owner is responsible. Which hypothesis is supported by this data?
The p-value is the probability the researcher would get four or more people claiming the owner is responsible. Recall the distribution we created earlier:
$X = x$ 0 1 2 3 4 5 6
$P(X = x)$ 0.0041 0.0369 0.1382 0.2765 0.3110 0.1866 0.0467
From this table, we see the probability of getting 4 people who think the owner is responsible is 0.3110, the probability of getting 5 is 0.1866, and the probability of getting 6 is 0.0467. If we add these together, we find the probability of getting 4 or more is 0.5443. Since this probability is larger than our level of significance, we conclude the data supports the null hypothesis and is therefore not significant. The conclusion that would be written is: At the 0.10 level of significance, the proportion of people who think the owner is responsible is not significantly greater than 0.60 ($x = 4$, $p = 0.5443$, $n = 6$). Remember that in statistical conclusions, the p is the p-value, not the sample proportion.
The TI84 has a quick way to add up the probabilities. It uses the function binomcdf, for binomial cumulative distribution function. It is found in the $2^{\text{nd}}$ DISTR list of distributions. Binomcdf will add up all the probabilities beginning on the left, thus binomcdf(6,.6,1) will add the probabilities for 0 and 1. There are two conditions that are encountered when testing hypotheses using the binomial distribution. The way to find the p-value using binomcdf is based on the alternative hypothesis.
Condition 1. The alternative hypothesis has a less than sign (<).
Since the direction of the extreme is to the left, then using binomcdf(n,p,x) will produce the p value. The variable n represents the sample size, the variable p represents the probability of success (see null hypothesis), and x represents the specific number of successes from the data.
Condition 2. The alternative hypothesis has a greater than sign (>).
Since the direction of the extreme is to the right, it is necessary to use the complement rule and also reduce the value of $x$ by 1, so enter 1 – binomcdf($n$, $p$, $x - 1$) in your calculator. For example, if the data is 4, then enter 1 – binomcdf(6,0.6,3). Can you figure out why x – 1 is used and why binomcdf(n,p,x-1) is subtracted from 1? If not, ask in class.
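For readers working outside the TI84, here is a sketch of the same cumulative calculation in Python; binom_cdf is our own helper, not a library function:

```python
from math import comb

def binom_cdf(n: int, p: float, x: int) -> float:
    """P(X <= x): the running total that binomcdf reports."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(x + 1))

# Condition 1 (H1 uses <): p-value = binom_cdf(n, p, x)
# Condition 2 (H1 uses >): p-value = 1 - binom_cdf(n, p, x - 1)
print(round(1 - binom_cdf(6, 0.6, 4 - 1), 4))  # 0.5443, as found above
```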
In this example, the data were not significant and so the researcher could not claim the proportion of people who think the owner is responsible is greater than 0.60. A sample size of 6 is very small for categorical data and therefore it is difficult to arrive at any significant results. If the data are changed so that instead of getting 4 out of 6 people, the researcher gets 400 out of 600, does the conclusion change? Use 1 – binomcdf(600,0.6,399) to find the p-value for this situation.
1 – binomcdf(600,0.6,399) = ____________________
Write the concluding sentence:
Step 6 - How can the approximate p-value be found using the normal approximation to the binomial distribution?
When a hypothesis is tested using the binomial distribution, an exact p-value is found. It is exact because the binomial distribution is created from every combination of successes and failures that is possible for a sample of size n. There are other methods for determining the p-value that will give an approximate p-value. In fact, the typical method that is used to test hypotheses about proportions will give an approximate p-value. You may wonder why a method that gives an approximate p-value is used instead of the method that gives an exact p-value. This will be explained after the next two methods have been demonstrated. Before these can be demonstrated, we need to learn about a different distribution called the normal distribution.
Digression 6 – The Normal Distribution
Behind Pierce College is Waughop Lake, which is used by many students for learning scientific concepts outside of a classroom. The approximate shape of the lake is shown below. If one of the science labs required students to estimate the surface area of the water, what strategy could they use for this irregularly shaped lake?
A possible strategy is to think that this lake is almost a rectangle, and so they could draw a rectangle over it. Since a formula is known for the area of a rectangle, and if we know that each arrow below is 200 meters, can the area of the lake be estimated?
There are two important questions to consider. If this approach is taken, will the area of the lake exactly equal the area of the rectangle? Will it be close?
The answer to the first question is no, unless we happened to be extremely lucky with our drawing of the rectangle. The answer to the second question is yes it should be close.
The concept of approximating an irregular shape with a shape whose properties are known is the strategy we will use to find new ways of determining a p-value. To the right is the irregular shape of a binomial distribution if n = 60, p = 0.60. The smooth curve that is drawn over the top of the bars is called the normal distribution. It also goes by the names bell curve and Gaussian distribution.
The formula for the normal distribution is $f(x, \mu, \sigma) = \dfrac{1}{\sigma \sqrt{2\pi}} e^{-\frac{1}{2} \left(\frac{x - \mu}{\sigma}\right)^2}$. It is not important that you know this formula. What is important is to notice the variables in it. Both $\pi$ and $e$ are constants with the values of 3.14159 and 2.71828 respectively. The x is the independent variable, which is found along the x-axis. The important variables to notice are $\mu$ and $\sigma$, the mean and standard deviation. The implication of these two variables is that they play an important role in defining this curve. The function can be shown as $N(\mu,\sigma)$.
The binomial distribution is a discrete distribution whereas the normal distribution is a continuous distribution. It is known as a density function.
A normal distribution is contrasted with skewed distributions below.
A negatively skewed distribution, such as is shown on the left, has some values that are very low causing the curve to be stretched to the left. These low values would cause the mean to be less than the median for the distribution. The positively skewed distribution, such as is shown on the right, has some values that are very high, causing the curve to be stretched to the right. These high values would cause the mean to be greater than the median for the distribution. The normal curve in the middle is symmetrical. The mean, median and mode are all in the middle. The mode is the high point of the curve.
The normal curve is called a density function, in contrast to the binomial distribution, which is a probability mass function. The space under the curve is called the area under the curve. The area is synonymous with the probability. The area under the entire curve, corresponding to the probability of selecting a value from anywhere in the distribution is 1. This curve never touches the x-axis, in spite of the fact that it looks like it does. Our ultimate objective with the normal curve is to find the area in the tail, which corresponds with finding the p-value.
We will start to think about the area (probability) under the curve by looking at the standard normal curve. The standard normal curve has a mean of 0 and a standard deviation of 1 and is shown as a function N(0,1). Notice that the x-axis of the curve is numbered with –3, -2, -1, 0, 1, 2, 3. These numbers are called z scores. They represent the number of standard deviations x is from the mean, which is in the middle of the curve.
Does it seem reasonable that half of the curve is to the left of the mean and half the curve is to the right? We can label each side with this value, which is interpreted as both an area and a probability that a value would exist in that area.
Thinking about the area under the normal distribution is not as easy as thinking about the area under a uniform distribution. For example, we could create a uniform distribution for the outcome of an experiment in which one die is rolled. The probability of rolling any number is 1/6. Therefore the uniform distribution would look like this.
The area on this distribution can be found by multiplying the length by the width (height). Thus, to find the probability of getting a 5 or higher, we consider the length to be 2 and the width to be 1/6, so that $2 \times \dfrac{1}{6} = \dfrac{1}{3}$. That is, there is a probability of 1/3 that a 5 or 6 would be rolled on the die.
But a normal distribution is not as familiar as a rectangle, for which the area is easier to find. The Empirical Rule is an approximation of the areas for different sections of the normal curve; 68% of the curve is within one standard deviation of the mean, 95% of the curve is within two standard deviations of the mean, and 99.7% of the curve is within three standard deviations of the mean.
Finding the area under a normal distribution was originally done using integration, a technique taught in calculus. However, these areas have already been found for the standard normal distribution N(0,1) and are provided in a table on the next page. The tables will always provide the area to the left. The area to the right is the complement of the area to the left, so to find the area to the right, subtract the area to the left from 1. A few examples should help clarify this.
Example 3. Find the areas to the left and right of z = -1.96.
Since the z value is less than 0, use the first of the two tables. Find the row with -1.9 in the left column and find the column with 0.06 in the top row. The intersection of that row and column gives the area to the left, designated $A_L$, as 0.0250. The area to the right, designated $A_R$, is $1 - 0.0250 = 0.9750$.
Z 0.09 0.08 0.07 0.06 0.05 0.04 0.03 0.02 0.01 0.00
-1.9 0.0233 0.0239 0.0244 0.0250 0.0256 0.0262 0.0268 0.0274 0.0281 0.0287
Example 4. Find the areas to the left and right of z = 0.57.
Since the z value is greater than 0, use the second of the two tables. Find the row with 0.5 in the left column and find the column with 0.07 in the top row. The intersection of those rows and columns gives $A_L = 0.7157$, therefore $A_R = 1 – 0.7157 = 0.2843$.
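The areas in these tables can also be computed directly from the error function, which is one common way software evaluates the normal curve. A minimal sketch in Python (the function name normal_area_left is ours):

```python
from math import erf, sqrt

def normal_area_left(z: float) -> float:
    """Area to the left of z under the standard normal curve N(0,1)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

print(round(normal_area_left(-1.96), 4))     # 0.0250 (Example 3)
print(round(1 - normal_area_left(0.57), 4))  # 0.2843 (Example 4)
```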
Standard Normal Distribution – N(0,1)
Area to the left when $z \le 0$
Z 0.09 0.08 0.07 0.06 0.05 0.04 0.03 0.02 0.01 0.00
-3.5 0.0002 0.0002 0.0002 0.0002 0.0002 0.0002 0.0002 0.0002 0.0002 0.0002
-3.4 0.0002 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003
-3.3 0.0003 0.0004 0.0004 0.0004 0.0004 0.0004 0.0004 0.0005 0.0005 0.0005
-3.2 0.0005 0.0005 0.0005 0.0006 0.0006 0.0006 0.0006 0.0006 0.0007 0.0007
-3.1 0.0007 0.0007 0.0008 0.0008 0.0008 0.0008 0.0009 0.0009 0.0009 0.0010
-3.0 0.0010 0.0010 0.0011 0.0011 0.0011 0.0012 0.0012 0.0013 0.0013 0.0013
-2.9 0.0014 0.0014 0.0015 0.0015 0.0016 0.0016 0.0017 0.0018 0.0018 0.0019
-2.8 0.0019 0.0020 0.0021 0.0021 0.0022 0.0023 0.0023 0.0024 0.0025 0.0026
-2.7 0.0026 0.0027 0.0028 0.0029 0.0030 0.0031 0.0032 0.0033 0.0034 0.0035
-2.6 0.0036 0.0037 0.0038 0.0039 0.0040 0.0041 0.0043 0.0044 0.0045 0.0047
-2.5 0.0048 0.0049 0.0051 0.0052 0.0054 0.0055 0.0057 0.0059 0.0060 0.0062
-2.4 0.0064 0.0066 0.0068 0.0069 0.0071 0.0073 0.0075 0.0078 0.0080 0.0082
-2.3 0.0084 0.0087 0.0089 0.0091 0.0094 0.0096 0.0099 0.0102 0.0104 0.0107
-2.2 0.0110 0.0113 0.0116 0.0119 0.0122 0.0125 0.0129 0.0132 0.0136 0.0139
-2.1 0.0143 0.0146 0.0150 0.0154 0.0158 0.0162 0.0166 0.0170 0.0174 0.0179
-2.0 0.0183 0.0188 0.0192 0.0197 0.0202 0.0207 0.0212 0.0217 0.0222 0.0228
-1.9 0.0233 0.0239 0.0244 0.0250 0.0256 0.0262 0.0268 0.0274 0.0281 0.0287
-1.8 0.0294 0.0301 0.0307 0.0314 0.0322 0.0329 0.0336 0.0344 0.0351 0.0359
-1.7 0.0367 0.0375 0.0384 0.0392 0.0401 0.0409 0.0418 0.0427 0.0436 0.0446
-1.6 0.0455 0.0465 0.0475 0.0485 0.0495 0.0505 0.0516 0.0526 0.0537 0.0548
-1.5 0.0559 0.0571 0.0582 0.0594 0.0606 0.0618 0.0630 0.0643 0.0655 0.0668
-1.4 0.0681 0.0694 0.0708 0.0721 0.0735 0.0749 0.0764 0.0778 0.0793 0.0808
-1.3 0.0823 0.0838 0.0853 0.0869 0.0885 0.0901 0.0918 0.0934 0.0951 0.0968
-1.2 0.0985 0.1003 0.1020 0.1038 0.1056 0.1075 0.1093 0.1112 0.1131 0.1151
-1.1 0.1170 0.1190 0.1210 0.1230 0.1251 0.1271 0.1292 0.1314 0.1334 0.1357
-1.0 0.1379 0.1401 0.1423 0.1446 0.1469 0.1492 0.1515 0.1539 0.1562 0.1587
-0.9 0.1611 0.1635 0.1660 0.1685 0.1711 0.1736 0.1762 0.1788 0.1814 0.1841
-0.8 0.1867 0.1894 0.1922 0.1949 0.1977 0.2005 0.2033 0.2061 0.2090 0.2119
-0.7 0.2148 0.2177 0.2206 0.2236 0.2266 0.2296 0.2327 0.2358 0.2389 0.2420
-0.6 0.2451 0.2483 0.2514 0.2546 0.2578 0.2611 0.2643 0.2676 0.2709 0.2743
-0.5 0.2776 0.2810 0.2843 0.2877 0.2912 0.2946 0.2981 0.3015 0.3050 0.3085
-0.4 0.3121 0.3156 0.3192 0.3228 0.3264 0.3300 0.3336 0.3372 0.3409 0.3446
-0.3 0.3483 0.3520 0.3557 0.3594 0.3632 0.3669 0.3707 0.3745 0.3783 0.3821
-0.2 0.3859 0.3897 0.3936 0.3974 0.4013 0.4052 0.4090 0.4129 0.4168 0.4207
-0.1 0.4247 0.4286 0.4325 0.4364 0.4404 0.4443 0.4483 0.4522 0.4562 0.4602
-0.0 0.4641 0.4681 0.4721 0.4761 0.4801 0.4840 0.4880 0.4920 0.4960 0.5000
Standard Normal Distribution – N(0,1)
Area to the left when $z \ge 0$
Z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
3.1 0.9990 0.9991 0.9991 0.9991 0.9992 0.9992 0.9992 0.9992 0.9993 0.9993
3.2 0.9993 0.9993 0.9994 0.9994 0.9994 0.9994 0.9994 0.9995 0.9995 0.9995
3.3 0.9995 0.9995 0.9995 0.9996 0.9996 0.9996 0.9996 0.9996 0.9996 0.9997
3.4 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9998
Since it is very unlikely that we will encounter authentic populations that are normally distributed with a mean of zero and a standard deviation of one, of what use is this? The answer has two parts. The first part is to identify which useful distributions are approximately normal. The second part is to determine how these tables can be used for distributions with different means and standard deviations.
You have already seen that the normal curve fits very nicely over the binomial distribution. In chapters one and two you also saw distributions of sample proportions and sample means that look normally distributed. Therefore, the primary use of the normal distribution is to find probabilities when it is used to model other distributions such as the binomial distribution or the sampling distributions of $\hat{p}$ or $\bar{x}$. The following illustrate the elements of the distributions being modeled by the curve.
Now that some of the distributions that can be modeled with a normal curve have been established, we can address the second question, which is how to make use of the tables for the standard normal curve. Probabilities and more specifically p-values, can only be found after we have our sample results. Those sample results are part of a distribution of possible results that are approximately normally distributed. By determining the number of standard deviations our sample results are from the mean of the population, we can use the standard normal distribution tables to find the p-value. The transformation of sample results into standard deviations from the mean makes use of the z formula.
The z score is the number of standard deviations a value is from the mean. By subtracting the value from the mean and dividing by the standard deviation, we calculate the number of standard deviations. The formula is
$z = \dfrac{x - \mu}{\sigma}$
This is the basic formula upon which many others will be built.
Example 5
Suppose the mean number of successes in a sample of 100 is 20 and the standard deviation is 4. Sketch and label a normal curve and find the area in the left tail for the number 13.
First find the z score: $z = \dfrac{x - \mu}{\sigma}$
$z = \dfrac{13 - 20}{4} = -1.75$
Find the area to the left in the table
$A_L = 0.0401$
Example 6
If the mean is 30 and the standard deviation is 5, then sketch and label a normal curve and find the area in the right tail for the number 44.1.
First find the z score: $z = \dfrac{x - \mu}{\sigma}$
$z = \dfrac{44.1 - 30}{5} = 2.82$
Find the area to the left in the table
$A_L = 0.9976$
Use this to find the area to the right by subtracting from 1.
$A_R = 0.0024$
Return to Step 6: Apply this concept of probability to the hypothesis about the responsibility for a self-driving car in an accident.
Remember that the hypotheses for the autonomous car problem are: $H_0: p = 0.60$, $H_1: p > 0.60$. In the original problem, the researcher found that 4 out of 6 people thought the owner was responsible. Which hypothesis does this data support if the level of significance is 0.10?
This hypothesis test will be done using a method called the Normal Approximation to the Binomial Distribution.
The first step is to find the mean and standard deviation of the binomial distribution (which was done earlier but is now repeated):
$\mu = np = 6(0.6) = 3.6$
$\sigma = \sqrt{npq} = \sqrt{6(0.6)(0.4)} = 1.2$
Draw and label a normal curve with a mean of 3.6 and a standard deviation of 1.2.
Find the z score if the data is 4.
$z = \dfrac{x - \mu}{\sigma}$ $z = \dfrac{4 - 3.6}{1.2} = 0.33$
From the table, the area to the left is $A_L = 0.6293$. Since the direction of the extreme is to the right, subtract the area to the left from 1 to get $A_R = 0.3707$. This is the p-value.
This p-value can also be found with the calculator ($2^{\text{nd}}$ Distr #2: normalcdf(low, high, $\mu$, $\sigma$)) shown as normalcdf(4, 1E99, 3.6,1.2)=0.3694.
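The calculator result can also be reproduced by standardizing and using the error function sketched earlier (again ours, not the calculator’s actual code):

```python
from math import erf, sqrt

# Area to the right of x = 4 under N(3.6, 1.2), the same quantity
# the calculator reports for normalcdf(4, 1E99, 3.6, 1.2).
z = (4 - 3.6) / 1.2
area_right = 1 - 0.5 * (1 + erf(z / sqrt(2)))
print(round(area_right, 4))  # 0.3694
```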
Since this value is greater than the level of significance, if the calculator generated p-value is used, the conclusion will be written as: At the 0.10 level of significance, the proportion of people who think the owner is responsible is not significantly more than 0.60 (z = 0.33, p = 0.3694, n = 6).
Let us now take a moment to compare the p-value from the Normal Approximation to the Binomial Distribution (0.3694) to the exact p-value found using the Binomial Distribution (0.5443). While these p- values are not very close to each other, the conclusion that is drawn is the same. The reason they are not very close is because a sample size of 6 is very small and the normal approximation is not very good with a small sample size.
Test the hypothesis again if the researcher finds that 400 out of 600 of the people believe the owner is responsible for accidents.
$\mu = np = 600(0.6) = 360$ This indicates that if lots of samples of 600 people were taken, the average number of people who think the owner is responsible would be 360.
$\sigma = \sqrt{npq} = \sqrt{600(0.6)(0.4)} = 12$
Draw and label a curve with a mean of 360 and a standard deviation of 12.
Find the z score if the data is 400.
$z = \dfrac{x - \mu}{\sigma}$ $z = \dfrac{400 - 360}{12} = 3.33$
Using the table, the area to the left is $A_L = 0.9996$. Since the direction of the extreme is to the right, subtract the area to the left from 1 to get $A_R = 0.0004$. More precisely, it is 0.000430.
This time when the results of the Normal Approximation to the Binomial Distribution (0.000430) are compared to the results of the binomial distribution (0.000443), they are very close. This is because the sample size is larger.
In general, if $np \ge 5$ and $nq \ge 5$, then the normal approximation makes a good, but not perfect estimate for the binomial distribution. When a sample of size 6 was used, $np = 3.6$ which is less than 5. Also, $nq = 6(0.4) = 2.4$, which is less than 5, too. Therefore, using the normal approximation for samples that small is not a good strategy.
Step 7 – Find the approximate p-value using the Sampling Distribution of Sample Proportions
Up to this point the discussion has been about the number of people. When sampling that produces categorical data is done, these numbers or counts can also be represented as proportions by dividing the number of successes by the sample size. Thus, instead of the researcher saying that 4 out of 6 people believe the owner is responsible, the researcher could say that 66.7% of the people believe the owner is responsible. This leads to the concept of looking at proportions rather than counts which means that instead of the distribution being made up of the number of successes, represented by x, it is made up of the sample proportion of successes represented by $\hat{p}$.
Digression 7 – Sampling Distribution of Sample Proportions
Since the binomial distribution contains all possible counts of the number of successes and it is approximately normally distributed and since all counts can be converted to proportions by dividing by the sample size, then the distribution of $\hat{p}$ is also approximately normally distributed. This distribution has a mean and standard deviation that can be found by dividing the mean and standard deviation of the binomial distribution by the sample size n.
The mean of all the sample proportions is the mean number of successes divided by n.
$\mu_{\hat{p}} = \dfrac{\mu}{n} = \dfrac{np}{n} = p$ This indicates that the mean of all possible sample proportions equals the true proportion for the population.
$\mu_{\hat{p}} = p$
The standard deviation of all the sample proportions is the standard deviation of the number of successes divided by n.
$\sigma_{\hat{p}} = \dfrac{\sigma}{n} = \sqrt{\dfrac{npq}{n^2}} = \sqrt{\dfrac{pq}{n}}\ or\ \sqrt{\dfrac{p(1- p)}{n}}$
$\sigma_{\hat{p}} = \sqrt{\dfrac{pq}{n}}\ \ \ \ or \ \ \ \ \sigma_{\hat{p}} = \sqrt{\dfrac{p(1- p)}{n}}$
The basic z formula $z = \dfrac{x − \mu}{\sigma}$ can now be rewritten knowing that in a distribution of sample proportions, the results of the sample that have formerly been represented with $X$ can now be represented with $\hat{p}$. The mean, $\mu$ can now be represented with $p$, since $\mu_{\hat{p}} = p$ and the standard deviation $\sigma$ can now be represented with $\sqrt{\dfrac{p(1- p)}{n}}$ since $\sigma_{\hat{p}} = \sqrt{\dfrac{p(1- p)}{n}}$. Therefore, for the sampling distribution of sample proportions, the z formula $z = \dfrac{x − \mu}{\sigma}$ becomes
$z = \dfrac{\hat{p} - p}{\sqrt{\dfrac{p(1-p)}{n}}}.$
Apply this concept of probability to the hypothesis about the responsibility for a self-driving car in an accident.
Remember that the hypotheses for the people who think the owner is responsible are: $H_0: p = 0.60$, $H_1: p > 0.60$. In the original problem, the researcher found that 4 out of 6 people think the owner is responsible. Which hypothesis does this data support if the level of significance is 0.10?
Since $\mu_{\hat{p}} = p$ then the mean is 0.60 (from the null hypothesis).
Since $\sigma_{\hat{p}} = \sqrt{\dfrac{p(1-p)}{n}} = \sqrt{\dfrac{0.6(0.4)}{6}} = 0.2$ then the standard deviation is 0.2.
Draw and label a normal curve with a mean of 0.6 and a standard deviation of 0.2.
If the data is 4, then the sample proportion,
$\hat{p} = \dfrac{x}{n} = \dfrac{4}{6} = 0.6667$
Find the z score if the data is 4.
$z = \dfrac{\hat{p} - p}{\sqrt{\dfrac{p(1-p)}{n}}}$ $z = \dfrac{0.6667 - 0.6}{0.2} = 0.33$
The area to the left is $A_L = 0.6293$. Since the direction of the extreme is to the right, subtract the area to the left from 1 to get $A_R = 0.3707$.
Compare this result to the result found when using the Normal Approximation to the Binomial Distribution. Notice that both results are exactly the same. This should happen every time, provided there isn’t any rounding of numbers. The reason is that the number of successes can be represented as either counts or proportions. The distributions are the same, although the x-axis is labeled differently: divide the counts, the mean, and the standard deviation of the normal approximation by the sample size 6 and you will get the corresponding values for the sampling distribution, so the z scores are identical.
Test the hypothesis again if the researcher finds that 400 out of 600 of the people believe the owner is responsible.
Since $\mu_{\hat{p}} = p$ then the mean is 0.60 (from the null hypothesis).
Since $\sigma_{\hat{p}} = \sqrt{\dfrac{p(1-p)}{n}} = \sqrt{\dfrac{0.6(0.4)}{600}} = 0.02$ then the standard deviation is 0.02.
If the data is 400, then the sample proportion, $\hat{p} = \dfrac{x}{n} = \dfrac{400}{600} = 0.66667$
$z = \dfrac{\hat{p} - p}{\sqrt{\dfrac{p(1-p)}{n}}}$ $z = \dfrac{0.66667 - 0.6}{0.02} = 3.33$
The area to the left is $A_L = 0.9996$. Since the direction of the extreme is to the right, subtract the area to the left from 1 to get $A_R = 0.0004$. More precisely, it is 0.000430.
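Both worked examples can be collapsed into one small routine. The following sketch of a one-proportion z test is our own, not a standard library function:

```python
from math import erf, sqrt

def one_proportion_z(x: int, n: int, p0: float):
    """z score and right-tail p-value for H1: p > p0 (a sketch)."""
    p_hat = x / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return round(z, 2), round(p_value, 4)

print(one_proportion_z(4, 6, 0.60))      # (0.33, 0.3694)
print(one_proportion_z(400, 600, 0.60))  # (3.33, 0.0004)
```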
Conclusion for testing hypotheses about categorical data.
By this time, many students are wondering why there are three methods and why the binomial distribution method isn’t the only one that is used since it produces an exact p-value. One justification of using the last method is comparing the results of surveys or other data. Imagine if one news organization reported their results of a survey as 670 out of 1020 were in favor while another organization reported they found 630 out of 980 were in favor. A comparison between these would be difficult without converting them to proportions, therefore, the third method, which uses proportions, is the method of choice. When the sample size is sufficiently large, there is not much difference between the methods. For smaller samples, it may be more appropriate to use the binomial distribution.
Making Inferences Using Quantitative Data
The strategy for making inferences with quantitative data uses sampling distributions in the same way that they were used for making inferences about proportions. In that case, the normal distribution was used to model the distribution of sample proportions, $\hat{p}$. With quantitative data, we find the mean, therefore the normal distribution will be used to model the distribution of sample means, $\bar{x}$.
To demonstrate this, a small population will be used. This population consists of the 50 states of the United States plus the District of Columbia and the 5 US territories of American Samoa, Guam, Northern Mariana Islands, Puerto Rico and the Virgin Islands, each of which has one representative in Congress with limited voting authority. A histogram showing the distribution of the number of representatives in a state or territory is provided. On the graph is a normal distribution based on the mean of this population being 7.875 representatives and a standard deviation of 9.487. The distribution is positively skewed and cannot be modeled by the normal curve that is on the graph.
Be aware that in reality, the mean and standard deviation, which are parameters, are not known, so we would normally write hypotheses about them. However, for this demonstration, a small population with a known mean and standard deviation is necessary. With it, we can illustrate what happens when repeated samples of the same size are drawn from this population, with replacement, and the mean of each sample is found and becomes part of the distribution of sample means.
A sampling distribution of sample means (a distribution of $\bar{x}$) contains all possible sample means that in theory could be obtained if a random selection process was used, with replacement. The number of possible sample means can be found using the fundamental counting rule. Draw a line to represent each state/territory that would be selected. On the line write the number of options, so that it would look like this:
Options: 56 56 56 56 56
State: 1 2 3 4... n
If our sample size is 40, then there are $56^{40}$ possible samples that could be selected which is equal to $8.46 \times 10^{69}$. That is a lot of possible samples. For this demonstration, only 10,000 samples of size 40 will be taken. The distribution of these sample means when this was done is shown in the histogram below.
The mean of all these sample means is 7.8748 and the standard deviation is 1.502. Notice that the mean of all these sample means is almost exactly the same as the mean of the original population. Also notice that the standard deviation of all these sample means is much smaller than the standard deviation of the population. This is summarized in the table below.
Population Sampling Distribution
Mean 7.875 7.8748
Standard Deviation 9.487 1.502
The following graph has both the original data and the sample means on it. Notice how the two normal curves are centered at approximately the same place but the curve for the sample means is narrower. This shows that when samples of sufficient size are taken from any population, the means of those samples will be close to the means of the population.
We are now ready to discuss the Central Limit Theorem. This theorem states that for any set of quantitative data with a mean μ and a standard deviation σ, the mean of all possible sample means will equal the mean of the population. The standard deviation of all the sample means, which is also called the standard error, will equal the standard deviation of the population divided by the square root of n. These are shown as:
$\mu_{\bar{x}} = \mu$
and
$\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}.$
It also says the distribution of sample means will be normal if the sample size is sufficiently large (generally considered to be 30 or more). If the original population is normally distributed, then the distribution of sample means will be normally distributed for any sample size.
Before doing an example, it will be important to see the effect of sample sizes. Compare the following 4 graphs that show the distribution of sample means for samples of size 40, 30, 20, and 10.
Notice how the distributions become more skewed as the sample size decreases. Notice also that the mean of the sample means is still approximately equal to the mean of the population, but the standard deviations get larger as the sample size gets smaller. This implies there is more variation in sample means with small sample sizes than with large sample sizes.
Population $n = 40$ $n = 30$ $n = 20$ $n = 10$
Mean 7.875 7.8748 7.8906 7.8845 7.9083
Standard Deviation 9.487 1.5021 1.6971 2.0936 2.995
Calculated Standard Deviation using $\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}$ (not applicable) 1.500 1.732 2.121 3.000
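A simulation like the one behind these tables takes only a few lines. Because the state-representative data are not reproduced here, this sketch draws from a hypothetical positively skewed population (an exponential distribution with mean 8) to illustrate the same behavior:

```python
import random

random.seed(1)  # reproducible demonstration

# Hypothetical skewed population: exponential with mean 8 (so sigma = 8).
sample_means = []
for _ in range(10_000):
    sample = [random.expovariate(1 / 8) for _ in range(40)]
    sample_means.append(sum(sample) / 40)

mean_of_means = sum(sample_means) / len(sample_means)
variance = sum((m - mean_of_means) ** 2 for m in sample_means) / len(sample_means)

print(round(mean_of_means, 2))    # close to the population mean of 8
print(round(variance ** 0.5, 2))  # close to sigma/sqrt(n) = 8/sqrt(40), about 1.26
```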
When making inferences about quantitative data, the basic $z$ formula $z = \dfrac{x − μ}{\sigma}$ can now be rewritten knowing that in a distribution of sample means, the results of the sample that have formerly been represented with $x$ can now be represented with $\bar{x}$. The mean, $\mu$ will still be represented with $\mu$, since $\mu_{\bar{x}} = \mu$ and the standard deviation $\sigma$ can now be represented with $\dfrac{\sigma}{\sqrt{n}}$ since $\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}$
Therefore, for the sampling distribution of sample means, the $z$ formula, $z = \dfrac{x − μ}{\sigma}$ becomes
$z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}}.$
It is now time to use the central limit theorem to test a hypothesis.
Example 7
Mercury in fish is not healthy, and restrictions are placed on the amount of fish that should be eaten. Suppose a researcher wanted to know if the average concentration of methylmercury per kilogram of fish tissue was greater than the maximum recommended limit of 300 μg/kg. If the average concentration is greater than 300, the fisheries will be closed; otherwise they will remain open. Suppose also that the standard deviation for the population is $\sigma = 50\ \mu g/kg$. The researcher catches 36 fish. The sample mean concentration is 310.
The hypotheses to be tested are:
$H_0: \mu = 300$
$H_1: \mu > 300$
$\alpha = 0.1$
Since all the information that is needed is provided in the problem, the first step is to find the $z$ score.
$z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}} = \dfrac{310 - 300}{\dfrac{50}{\sqrt{36}}} = 1.2.$
The next step is to look up 1.20 in the standard normal distribution tables. This gives an area to the left of $A_L = 0.8849$, so the area to the right is $A_R = 0.1151$. This is the p-value.
Since the p-value is greater than the level of significance, the conclusion is that the average concentration of methylmercury in the fish tissue is not significantly greater than 300 $\mu$g/kg ($z$ = 1.20, $p$ = 0.1151, $n$ = 36). Therefore, the fisheries will not be closed to fishing.
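Example 7 can be checked with a short script; z_test_mean is our own sketch of the calculation, not a library routine:

```python
from math import erf, sqrt

def z_test_mean(x_bar: float, mu0: float, sigma: float, n: int):
    """z score and right-tail p-value for H1: mu > mu0 (a sketch)."""
    z = (x_bar - mu0) / (sigma / sqrt(n))
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return round(z, 2), round(p_value, 4)

print(z_test_mean(310, 300, 50, 36))  # (1.2, 0.1151)
```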
Example 8
According to one estimate, the average wait time for subsidized housing for homeless people is 35 months.(www.stcloudstate.edu/reslife/...Statistics.pdf viewed 9/13/13) Assume the distribution of times is normal and the standard deviation is 10 months. One city evaluates its current program to see if it is effective and justifies continued funding. If the average wait time is less than 35 months, the program will continue. Otherwise, the program will be replaced with a different program.
The hypotheses to be tested are:
$H_0: \mu = 35$
$H_1: \mu < 35$
$\alpha = 0.01$
The wait time (in months) of twenty people who recently received subsidized housing is recorded below.
44 23 26 27 22 33 20 28 8 22
23 19 12 23 12 7 17 4 18 33
Since we are given the data, we must find the sample mean before finding the z score.
$\bar{x} = 21.05$
$z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}} = \dfrac{21.05 - 35}{\dfrac{10}{\sqrt{20}}} = -6.24.$
Because the direction of the extreme is to the left, we find the area to the left on the standard normal distribution table. The lowest z score on that table is -3.49, and the area to the left of -3.49 is 0.0002. Going even further to the left, the area will be less than that. Therefore, the p-value for this data is less than 0.0002. The amount of wait time before people receive subsidized housing with the current program is significantly less than 35 months (z = -6.24, p < 0.0002, n = 20). Based on the decision rule, the program is effective and will continue to be funded.
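As a check, a minimal sketch (assuming Python with SciPy; variable names are our own) that computes the sample mean, z score, and p-value from the raw data:

```python
from math import sqrt
from scipy.stats import norm

waits = [44, 23, 26, 27, 22, 33, 20, 28, 8, 22,
         23, 19, 12, 23, 12, 7, 17, 4, 18, 33]
mu, sigma, n = 35, 10, len(waits)
x_bar = sum(waits) / n                 # 21.05
z = (x_bar - mu) / (sigma / sqrt(n))   # about -6.24
p_value = norm.cdf(z)                  # area to the left: about 2e-10
print(x_bar, z, p_value)
```

The exact area is far smaller than the 0.0002 bound the table provides, consistent with the conclusion above.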
Chapter 4 Homework
For all parts of this problem, $H_0: p = 0.70$, $H_1: p < 0.70$. Show all supporting work including formulas, substitutions, and solutions as appropriate.
1. What is the probability the first unit selected is a success?
2. What is the probability the first unit selected is a failure?
3. What is the probability the first five units selected will be in the order of FSSSF?
4. If 5 values are selected, how many combinations are there for 3 successes?
5. What is the probability three of five units will be a success?
6. Create the entire binomial probability distribution if 5 units are selected. Record probabilities to 4 decimal places and then draw a stick graph in the provided space.
[Graph space for the stick graph; vertical axis labeled from 0 to 0.40 in steps of 0.05]
$X = x$ 0 1 2 3 4 5
$P(X = x)$
7. What is the mean and standard deviation of this binomial distribution?
8. Test the hypotheses: if there were 3 successes in a sample of 5, what is the probability that three or fewer successes would be obtained if the null hypothesis is true? This is a p-value. At the 0.20 level of significance, which hypothesis is supported?
For all parts of this problem, $H_0: p = 0.40$, $H_1: p > 0.40$. Show all supporting work including formulas, substitutions, and solutions as appropriate.
a. What is the probability the first unit selected is a success?
b. What is the probability the first unit selected is a failure?
c. What is the probability the first seven units selected will be in the order of SFSFSSS?
d. If 7 values are selected, how many combinations are there for 5 successes?
e. What is the probability five of seven units will be a success?
f. Create the entire binomial probability distribution if 7 units are selected. Record probabilities to 4 decimal places and then draw a stick graph in the provided space.
[Graph space for the stick graph; vertical axis labeled from 0 to 0.40 in steps of 0.05]
$X = x$ 0 1 2 3 4 5 6 7
$P(X = x)$
g. What is the mean and standard deviation of this binomial distribution?
h. Test the hypotheses: if there were 5 successes in a sample of 7, what is the probability that five or more successes would be obtained if the null hypothesis is true? This is a p-value. At the 0.20 level of significance, which hypothesis is supported?
Briefing 5.1 Coal Export Terminals in the Pacific Northwest
Coal is used to produce electricity. It is also a major contributor of greenhouse gases and other pollutants to the atmosphere. A substantial amount of coal is mined in Montana and Wyoming. One goal is to export this coal to Asia. To do so means building coal terminals in Washington or Oregon. Some people want that to happen because it will bring money to the coal producers and jobs to those who work for the railroad or coal terminals. Others are opposed because of the impact to the community because of frequent long trains that will go through the towns, the pollution from the coal dust that is lost by the trains, the impact on the fishing industry from water pollution and the effect coal has on the climate.
1. Assume that in a hypothetical Pacific Northwest coast community that has been suggested as a potential coal terminal location, the mayor of the town has mixed thoughts about whether to support the project or oppose it. While it will bring more jobs to the community that needs them, the consequences are troubling. The mayor decides to have a survey conducted. 300 people will be surveyed. If a majority of the residents in the community oppose the coal terminal (success), the mayor will also oppose it; otherwise the mayor will support it. The hypotheses used to test for a majority are $H_0: p = 0.50$ and $H_1: p > 0.50$. The level of significance is 0.05.
a. What is the probability that the $30^{\text{th}}$ person selected by the pollster opposes the coal terminal?
b. What is the probability that the $287^{\text{th}}$ person selected by the pollster doesn’t oppose the coal terminal?
c. What is the probability that the first ten people selected will be in this order, where S represents opposition to the terminal and F represents not being opposed: SFFFFFSFSS?
DATA: 165 out of 300 surveyed people oppose the terminal.
d. What is the probability that the pollster obtained any specific sequence of 165 successes and 135 failures?
e. How many combinations are there for 165 successes in a sample of 300?
f. What is the probability of 165 successes in a sample of 300?
g. What is the mean of the binomial distribution if n is 300?
h. What is the standard deviation of the binomial distribution if n is 300?
i. In a sample of 300, there could be between 0 and 300 successes. In this problem, you will only focus on 145 to 155 successes. Complete the partial distribution below and make a stick graph of this section of the distribution.
[Graph space for the stick graph; vertical axis labeled from 0.039 to 0.046 in steps of 0.001]
$X = x$ 145 146 147 148 149 150 151 152 153 154 155
$P(X = x)$
j. Use the binomial distribution to determine which hypothesis is supported if 165 out of 300 people opposed the terminal. Show the calculator function that will be used, and your substitutions. Write a complete concluding sentence in the style of a scholarly journal that includes the p-value and sample size.
Calculator Input p-value
k. Use the normal approximation to the binomial distribution. Draw and label the normal curve. Find the z-score, and p-value then write a complete concluding sentence in the style of a scholarly journal that includes the z-score, p-value and sample size. Show the formula and substitution for the z score.
Formula Substitution z value p-value
l. What is the sample proportion of people opposed to the terminal?
m. What is the mean and standard deviation of the distribution of $\hat{p}$? Show formulas, substitutions and solutions.
n. Use the sampling distribution of sample proportions method for testing the hypothesis. Draw and label a normal curve. Find the z-score and p-value, then write a complete concluding sentence in the style of a scholarly journal that includes the z-score, p-value and sample size. Show the formula and substitution for the z score.
Formula Substitution z value p-value
o. Based on the results of all of these hypothesis tests, will the mayor support or oppose the project?
2. In 2001, the Seattle Mariners won 116 games, which tied a record for the most number of games won by a baseball team in a season. During that year, the average attendance at home games in Safeco Field was 43,362. (http://www.baseball-almanac.com/teams/mariatte.shtml, viewed 9/13/13). Assume the standard deviation is 7,900 and that attendance is normally distributed. A sample of attendance at 10 games is taken from the 2013 season. Let α = 0.10.
10493 13000 30089 16294 13823
24701 18000 28198 15995 11656
Test the hypothesis that the average attendance in 2013 is less than it was in 2001.
a. What hypotheses would be used to test if the average attendance in 2013 is less than 43,362?
b. What is the mean of the distribution of sample means that is appropriate for testing this hypothesis?
c. What is the standard deviation of the distribution of sample means that is appropriate for testing this hypothesis?
d. Draw and label a normal curve for the sampling distribution.
e. What is the sample mean from 2013?
f. Test the hypothesis. Show all appropriate formulas, substitutions and solutions and write your complete concluding sentence.
Formula Substitution z value p-value
3. Ocean fishermen and boaters are familiar with tides and usually consult a tide table when planning a trip, but infrequent visitors to marine waters are less familiar with tides. As a first step in learning about tides, a curious person wants to determine if the time between consecutive high tides is greater than 12 hours. The hypotheses are $H_0: \mu = 12$ and $H_1: \mu > 12$. Assume the standard deviation is 1.4 hours. Let $\alpha = 0.05$.
A histogram of 36 times between consecutive high tides from September 2013 is shown below.(tides.mobilegeographics.com/c...onth/2152.html viewed 9/13/13 for Gig Harbor.)
a. Assuming the null hypothesis is true, then what is the mean of the sampling distribution of sample means for 36 differences between consecutive high tides?
b. Assuming the null hypothesis is true, then what is the standard deviation of the sampling distribution of sample means for 36 differences between consecutive high tides?
c. Draw and label a normal curve for the distribution of $\bar{x}$, if n=36.
d. Test the hypothesis if the sample mean time between consecutive high tides is 12.38 hours. Show the formula, substitution and solution. Write a complete concluding sentence that includes the z score, p-value and sample size.
Formula Substitution z value p-value
4. According to the website walkscore.com, a walk score is a number between 0 and 100 that measures the walkability of any address. Scores over 90 indicate a Walker’s Paradise while scores under 50 are car-dependent. Advantages of walkable neighborhoods include lower weight for residents, fewer carbon emissions and lower car expenses. The objective of this experiment is to determine if smaller communities, defined for this problem as having less than 100,000 residents, have a higher average walk score than the largest cities. For the purposes of this problem, the average walk score of the largest 31 cities is 54.1. Assume the standard deviation for walk scores is 16.1. Let $\alpha = 0.05$.
a. Complete the design-layout table.
Research Design Table
Research Question
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion Correlation
List potential latent variables
Grouping/explanatory Variables 1 (if present) Levels:
Grouping/explanatory Variables 2 (if present) Levels:
b. What are the hypotheses for this problem?
The walk scores of the 30 cities in the sample are provided below.
33 59 37 33 31 29
36 47 42 43 69 38
22 57 36 48 51 34
66 92 65 65 40 25
58 29 45 63 40 69
c. Make a frequency distribution and histogram for this data. Use reader-friendly class boundaries.
d. Find the sample mean and standard deviation
e. What is the mean and standard deviation of the sampling distribution of sample means that is based on the null hypothesis?
f. Draw and label the normal distribution for the sample means.
g. Test the hypothesis. Show formulas, substitutions and solutions. Write a complete conclusion including z score, p-value and sample size.
Formula Substitution z value p-value
h. Can we conclude smaller towns have a higher walk score than the largest cities?
5. Magazines about sports regularly contain predictions about who will win games or championships. One would expect that the writers who make the predictions would have considerable expertise and insight and have a high rate of success. At a minimum, one would hope that the writers are better than a coin flip for determining winners. A coin flip has a 50% chance of picking the winning team.
To test if the writers are better than a coin flip, two major sports magazines were selected and predictions from regular NFL games were compared with results. A success was if the prediction was correct, and a failure was if the prediction was wrong. Use a 5% level of significance.
b. Write the hypotheses.
The reporters picked the winning team 181 out of 322 times.
c. What is the sample proportion?
d. Make a completely labeled pie chart.
e. Test the hypothesis using the binomial distribution. Write a complete concluding sentence.
Calculator Input p-value
f. What is the mean and standard deviation of the binomial distribution for this problem?
g. Test the hypothesis using the normal approximation to the binomial distribution. Include a completely labeled drawing of the normal curve, the appropriate formulas, substitutions and solutions and then write a complete concluding sentence.
Formula Substitution z value p-value
h. Test the hypothesis using the sampling distribution method. Include a completely labeled drawing of the normal curve, the appropriate formulas, substitutions and solutions and then write a complete concluding sentence.
What is the sample proportion?
Formula Substitution z value p-value
i. Based on the statistical results, do these sports writers appear to be better than a coin flip? Are you impressed by their ability to predict NFL winners?
6. Developed in collaboration with Alan Kemp, Professor of Sociology and author of the book Death, Dying and Bereavement in a Changing World, published by Pearson, 2013.
This topic is discussed in SOC 212, Sociology of Death.
Briefing 5.2
Terror Management Theory (TMT) was developed by Ernest Becker in the 1960s and 1970s. Extensive experimentation has been done to test these theories. This problem is based on the article “Evidence for terror management theory: I. The effects of mortality salience on reactions to those who violate or uphold cultural values,” by Rosenblatt, Greenberg, Solomon, Pyszczynski and Lyon, published in the Journal of Personality and Social Psychology, Vol 57(4), Oct 1989, pp. 681-690.
A basic premise behind the theory is that humans are the only species who recognize their own mortality (they know they will die in the future). Consequently, humans need a way to manage the emotions related to this knowledge. The two predominant ways that humans cope are with culture (e.g., religion and other beliefs) and self-efficacy, which means that we want to know that what we do matters within our cultural worldview. One consequence of this is that following a terrorist attack or deadly natural disaster, patriotism increases (culture), as does heroism (self-efficacy).
Cultures are an artificial construction and therefore the worldview they portray can be exposed to potential threats. Since a culture can provide the standards by which a person can feel that life is fair, any person or idea that threatens the cultural norms must be removed or punished. Consequently, an expected outcome of this theory is that people will respond positively toward those who support cultural values and negatively toward those who violate these values. To test the theory, the authors of the article designed an experiment to determine if a reminder of one’s own mortality would lead to more negative responses for something that violates cultural values.
Municipal court judges were selected for this experiment. The purpose of the experiment was disguised. The judges were given a questionnaire. The questionnaires given to half the judges included questions about their thoughts and feelings about the prospect of their own death; the remaining judges did not have these questions. The questionnaires were randomly assigned. After completing the questions, the judges were given a scenario about a case of alleged prostitution and asked to set the amount of bail for the prostitute. Prostitution was used because it emphasized the moral nature of the alleged crime. No effort was made to determine each judge’s opinion about prostitution, which could affect the bail amount. Judges were selected for this experiment because they have been trained to set such punishments. The objective was to determine if the reminder of their own mortality would lead to harsher penalties when someone violated cultural norms. The average bail amounts of the two groups will be compared.
The hypotheses that will be tested are meant to show that judges who have been reminded about their own mortality (impact) will set higher bail amounts than judges who have not been reminded (control). $H_0: \mu_{\text{impact}} = \mu_{\text{control}}$, $H_1: \mu_{\text{impact}} > \mu_{\text{control}}$, $\alpha = 0.05$
The following data is not authentic, but it closely approximates the results obtained by the researchers. The impact group of judges was reminded of their own mortality. The control group was not.
Amount of Bail
Impact 50 50 150 200 1500 1500 1200 205 50 50 50
Control 25 25 25 50 150 50 50 25 25 25 25
a. Make an appropriate graph so that the two groups can be compared. You need to decide what is appropriate.
b. Complete the table below.
Impact Control
Mean
Standard Deviation
Median
c. The p-value for the comparison of the mean bail amount is 0.041. The sample size is 22. Write a complete concluding sentence to show if there is a significant difference between the bail amounts set by judges reminded of their own mortality and judges who were not reminded.
d. Do the results of this experiment support the contention that contemplation of one’s own death leads to increased punishment of those who violate cultural norms?
Since the beginning of the text it has been emphasized that a primary reason for doing statistics is to make a decision. Better decisions can be made if they are based on the best available evidence. While the ideal situation would be to get data from the entire population, the reality is that data will almost always come from a sample. Because sample data varies based on the random process that was used to select it, the researcher is forced to use sample data to draw a conclusion about the entire population. This is inference. It is using specific partial evidence to make a more general conclusion.
In the previous chapter, formulas were developed for testing hypotheses about proportions and means. In the former case the formula was
$z = \dfrac{\hat{p} - p}{\sqrt{\dfrac{p(1 - p)}{n}}}$
and in the latter case it was
$z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}}.$
In general, these formulas generate a test statistic, z, which is used to determine the number of standard errors a statistic is from a parameter. The normal distribution is then used to determine the probability of getting that statistic, or a more extreme statistic. That probability is called a p-value.
Every number that is needed to make use of the formula
$z = \dfrac{\hat{p} - p}{\sqrt{\dfrac{p(1 - p)}{n}}}$
can be found in the null hypothesis (p) or from the data ($\hat{p}$, $n$). The same cannot be said for the formula
$z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}}.$
While the value for $\mu$ comes from the null hypothesis and the values of $\bar{x}$ and $n$ come from the sample data, there is no way to obtain the value of $\sigma$ without doing a census. In the last chapter you were always told the value of $\sigma$, but this does not happen in the real world, because finding $\sigma$ requires first finding $\mu$, and if you knew $\mu$, there would be no reason to test a hypothesis about it.
The resolution of this problem requires two changes to the process that was used in the previous chapter. The first change is that we will have to estimate $\sigma$. The best estimate is s, the standard deviation of the sample. Replacing $\sigma$ with s means we can no longer use the standard normal distribution (z distribution). The second change therefore is to find a more appropriate distribution that can be used to model the distribution of sample means.
A set of distributions called the t distributions is used when the standard error of the mean, $\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}$ is replaced with the estimated standard error of the mean $s_{\bar{x}} = \dfrac{s}{\sqrt{n}}$. The z formula for means,
$z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}}$
is then modified to become the t formula
$t = \dfrac{\bar{x} - \mu}{\dfrac{s}{\sqrt{n}}}.$
Notice the only difference is the use of s instead of $\sigma$. The t distributions are used because they provide a better approximation of the distribution of sample means when the population standard deviation must be estimated using the sample standard deviation.
Unlike the normal distribution, there are many t distributions with each being defined by the number of degrees of freedom. Degrees of Freedom are a new concept that requires a little explanation.
The concept of degrees of freedom has to do with the number of independent values that can identify a position. This may be easier to think about if you picture a Cartesian coordinate system. With any two independently chosen values, normally called x and y, a point’s position can be located somewhere on the graph. Consequently, the point that is picked has two degrees of freedom. However, if a constraint is placed on the points, such as x + y = 3, then only one of the values can be independent and the other value will depend on the independent value. Because of the constraint, one degree of freedom has been lost so now the point has only one degree of freedom. If a second constraint is placed on the system, such as x – y = 1, then another degree of freedom is lost. Degrees of freedom are lost every time a constraint is applied.
For sample data, each value represents a new piece of evidence, provided the data are independent. Dependent data would artificially inflate the sample size without providing any more information. A larger sample size produces a smaller standard error, which leads to a larger t value and therefore increases the chance of a statistically significant conclusion, so it is important to count only the number of independent data values, which are known as degrees of freedom. One degree of freedom is lost every time a parameter is replaced by a statistic. Therefore, when the standard error $\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}$ becomes the estimated standard error $s_{\bar{x}} = \dfrac{s}{\sqrt{n}}$, one degree of freedom has been lost. In this case, $df = n - 1$, where df is an abbreviation for degrees of freedom.
The formula for generating the test statistic, t, that is used to determine the number of standard errors a sample mean is from a hypothesized mean is
$t = \dfrac{\bar{x} - \mu}{\dfrac{s}{\sqrt{n}}}$
It has n-1 degrees of freedom.
In the same way that z = 1 represents 1 standard deviation above the mean for a normal distribution, $t = 1$ represents 1 standard deviation above the mean in a t distribution. Once the value of $t$ has been determined, the p-value can be found by looking in a t table.
Student t distributions
One Tail Probability 0.4 0.25 0.1 0.05 0.025 0.01 0.005 0.0005
Two Tail Probability 0.8 0.5 0.2 0.1 0.05 0.02 0.01 0.001
Confidence Level 20% 50% 80% 90% 95% 98% 99% 99.9%
df
1 0.325 1.000 3.078 6.314 12.706 31.821 63.656 636.578
2 0.289 0.816 1.886 2.920 4.303 6.965 9.925 31.600
3 0.277 0.765 1.638 2.353 3.182 4.541 5.841 12.924
4 0.271 0.741 1.533 2.132 2.776 3.747 4.604 8.610
5 0.267 0.727 1.476 2.015 2.571 3.365 4.032 6.869
6 0.265 0.718 1.440 1.943 2.447 3.143 3.707 5.959
7 0.263 0.711 1.415 1.895 2.365 2.998 3.499 5.408
8 0.262 0.706 1.397 1.860 2.306 2.896 3.355 5.041
9 0.261 0.703 1.383 1.833 2.262 2.821 3.250 4.781
10 0.260 0.700 1.372 1.812 2.228 2.764 3.169 4.587
11 0.260 0.697 1.363 1.796 2.201 2.718 3.106 4.437
12 0.259 0.695 1.356 1.782 2.179 2.681 3.055 4.318
13 0.259 0.694 1.350 1.771 2.160 2.650 3.012 4.221
14 0.258 0.692 1.345 1.761 2.145 2.624 2.977 4.140
15 0.258 0.691 1.341 1.753 2.131 2.602 2.947 4.073
16 0.258 0.690 1.337 1.746 2.120 2.583 2.921 4.015
17 0.257 0.689 1.333 1.740 2.110 2.567 2.898 3.965
18 0.257 0.689 1.330 1.734 2.101 2.552 2.878 3.922
19 0.257 0.688 1.328 1.729 2.093 2.539 2.861 3.883
20 0.257 0.687 1.325 1.725 2.086 2.528 2.845 3.850
21 0.257 0.686 1.323 1.721 2.080 2.518 2.831 3.819
22 0.256 0.686 1.321 1.717 2.074 2.508 2.819 3.792
23 0.256 0.685 1.319 1.714 2.069 2.500 2.807 3.768
24 0.256 0.685 1.318 1.711 2.064 2.492 2.797 3.745
25 0.256 0.684 1.316 1.708 2.060 2.485 2.787 3.725
26 0.256 0.684 1.315 1.706 2.056 2.479 2.779 3.707
27 0.256 0.684 1.314 1.703 2.052 2.473 2.771 3.689
28 0.256 0.683 1.313 1.701 2.048 2.467 2.763 3.674
29 0.256 0.683 1.311 1.699 2.045 2.462 2.756 3.660
30 0.256 0.683 1.310 1.697 2.042 2.457 2.750 3.646
40 0.255 0.681 1.303 1.684 2.021 2.423 2.704 3.551
60 0.254 0.679 1.296 1.671 2.000 2.390 2.660 3.460
120 0.254 0.677 1.289 1.658 1.980 2.358 2.617 3.373
$z^{\ast}$ 0.253 0.674 1.282 1.645 1.960 2.326 2.576 3.290
An assumption when using t distributions with a small sample size is that the sample is drawn from a normally distributed population. While some researchers believe this test statistic is robust enough to tolerate some violation of this assumption, at a minimum, a histogram of the data should be viewed to see if the assumption appears realistic. If it does not, other methods of analysis not discussed in this text must be pursued.
The way this t table is used to determine a p-value is to first find the row with the appropriate number of degrees of freedom. In that row, locate the two table values that bracket the test statistic. Move up to the first row of probabilities for a one-tail test, or the second row for a two-tail test, to find the range that contains the p-value. Then compare that range to alpha and report the p-value with an inequality symbol, providing as much detail as the table allows. Since the t distributions are symmetric, negative t values can be found in this table by ignoring the negative sign and assuming the areas in the first row are to the left. Two examples follow. The sign in the alternative hypothesis, the level of significance, the degrees of freedom, and the t value are provided in each example.
1. $H_1: >$ $\alpha = 0.05$ df = 6, t = 2.3
For 6 degrees of freedom, 2.3 falls between 1.943 and 2.447, which means it has an area in the tail that is between 0.05 and 0.025. The p-value would be reported as p < 0.05.
2. $H_1: \ne$ $\alpha = 0.01$ df = 18, t = -1.26
For 18 degrees of freedom, -1.26 falls between 0.688 and 1.328 if the negative sign is ignored, so the area in two tails falls between 0.5 and 0.2. Since any value in this range would not be significant at the 0.01 level, then the p-value is greater than 0.01. However, greater detail can be provided by indicating the p-value is greater than 0.2. It would be incorrect to say the p-value is less than 0.5 because that does not tell us whether it is greater or less than 0.01.
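Software can return the exact tail areas that the table only brackets. A minimal sketch (assuming Python with SciPy; variable names are our own) for the two examples above:

```python
from scipy.stats import t

# Example 1: right-tailed test, df = 6, t = 2.3
p1 = t.sf(2.3, df=6)               # about 0.031, so p < 0.05

# Example 2: two-tailed test, df = 18, t = -1.26
p2 = 2 * t.sf(abs(-1.26), df=18)   # about 0.22, between 0.2 and 0.5
print(p1, p2)
```

Both exact values fall inside the ranges read from the table.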
There are two different inferential approaches that can be taken. Throughout most of this text the focus has been on the concept of testing hypotheses. That means there is actually a hypothesis of what would be found from a census. The alternative inferential approach occurs when there is not a hypothesis. In such cases the goal is to estimate the parameter rather than determine if the hypothesis about it is correct. Because the entire focus of the book has been on testing hypotheses, we will begin there and then address the idea of estimating the parameter in the next chapter. There are a considerable number of hypothesis-test situations and formulas, but we will focus on only four of them in this chapter and then add a few more in later chapters. The explanations will be provided in the context of a discussion about exercise.
Briefing 5.1 Exercise
The US government recommends that people get 2.5 hours of moderately-intense aerobic exercise each week or 1.25 hours of vigorous-intense exercise each week along with some strength training such as weights or push-ups. Exercise helps reduce the risk of diabetes, heart disease, some types of cancer and improves mental health. (www.cbsnews.com/8301-204_162-...nded-exercise/)
The four hypothesis-test formulas that will be shown in this chapter will be illustrated with these five questions. As you read the questions, try to determine any similarities or differences between them, as that will ultimately guide you into which formula should be used.
1. Is the proportion of people who exercise enough to meet the government’s recommendation less than 0.25?
2. Is the proportion of people with a health problem such as diabetes, heart disease or cancer lower for those who meet the government’s exercise recommendation than it is for those who don’t?
3. Is the average amount of exercise a college student does in a week greater than 2.5 hours?
4. Is the average weight of a person less after a month of a new regular aerobic fitness program?
5. For those who exercise regularly, is the average amount of exercise a college graduate does in a week different than someone who does not graduate from college?
There are two different things to look for when determining similarities and differences. The first is whether the parameter that is mentioned is a mean or proportion. The second is the number of populations. The following table restates the questions, provides the parameter of interest, the number of populations and an example of the hypotheses.
Question 1: Is the proportion of people who exercise enough to meet the government’s recommendation less than 0.25?
Parameter: proportion. Populations: 1.
Hypotheses: $H_0: p = 0.25$, $H_1: p < 0.25$

Question 2: Is the proportion of people with a health problem such as diabetes, heart disease or cancer lower for those who meet the government’s exercise recommendation than it is for those who don’t?
Parameter: proportion. Populations: 2.
Hypotheses: $H_0: P_{\text{exercise}} = P_{\text{don't}}$, $H_1: P_{\text{exercise}} < P_{\text{don't}}$

Question 3: Is the average amount of exercise a college student does in a week greater than 2.5 hours?
Parameter: mean. Populations: 1.
Hypotheses: $H_0: \mu = 2.5$, $H_1: \mu > 2.5$

Question 4: Is the average weight of a person less after a month of a new regular aerobic fitness program?
Parameter: mean. Populations: 1.
Hypotheses: $H_0: \mu = 0$, $H_1: \mu < 0$

Question 5: For those who exercise regularly, is the average amount of exercise a college graduate does in a week different than someone who does not graduate from college?
Parameter: mean. Populations: 2.
Hypotheses: $H_0: \mu_{\text{college grad}} = \mu_{\text{not college grad}}$, $H_1: \mu_{\text{college grad}} \ne \mu_{\text{not college grad}}$
Categorical data will be needed for questions about a proportion; quantitative data will be needed for questions about a mean. A brief explanation is needed for the fourth question. To determine the amount of change in a person after starting a fitness program, it is necessary to collect two sets of data. The person will need to be weighed prior to the fitness program and then again after one month. These data are dependent, which means they have to apply to the same person. Ultimately, the data that will be analyzed is the difference between a person’s before and after weight. Therefore two data values are compressed into one value by subtraction. If the after-minus-before difference in weight is 0, then there has been no change. If it is less than 0, weight has been lost.
The evidence used to decide which hypothesis is supported comes from a sample, and that sample is just one of the many possible sample results that form a normally distributed sampling distribution. Therefore, we can use what is known about the sampling distribution to determine the probability of selecting the data we got, or more extreme data (the p-value).
In spite of the theoretical nature of a sampling distribution, it is the source for determining probabilities. Therefore, we will first define what each distribution contains and the important formulas related to it.
The first time we encountered the normal distribution was when it was used as an approximation for the binomial distribution. In this case the data consisted of counts.
The mean of this distribution is found from $\mu = np$. The standard deviation is $\sigma = \sqrt{npq}$. The formula for determining the number of standard deviations a value is from the mean is $z = \dfrac{x - \mu}{\sigma}$.
Counts were eventually turned into proportions by dividing the counts by the sample size. The distribution consisted of all the possible sample proportions.
The distribution of sample proportions has a mean of $\mu_{\hat{p}} = p$ and a standard deviation of $\sigma_{\hat{p}} = \sqrt{\dfrac{p(1 - p)}{n}}$. The formula for determining the number of standard deviations a sample proportion is from the mean is $z = \dfrac{\hat{p} - p}{\sqrt{\dfrac{p(1 - p)}{n}}}$.
The next time we encountered the normal distribution was when we had quantitative data in which case the distribution was made up of sample means.
The mean of all possible sample means is $\mu_{\bar{x}} = \mu$ and the standard error is $\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}$. The formula for determining the number of standard deviations a sample mean is from the hypothesized population mean is $z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}}$. Because $\sigma$ is not known, it is estimated with s, so that the estimated standard error is $s_{\bar{x}} = \dfrac{s}{\sqrt{n}}$ and the $z$ formula is replaced by the t formula where $t = \dfrac{\bar{x} - \mu}{\dfrac{s}{\sqrt{n}}}$.
The distributions and formulas that were just shown are the same as, or similar to, the ones that you saw in Chapter 5 and that are appropriate for questions 1, 3, and 4. On the other hand, questions 2 and 5 have hypotheses unlike those encountered before and so some effort is needed to define the relevant distributions and their means and standard deviations. These will be based on three statistical results that will not be proven here:
1. The mean of the difference of two random variables is the difference of the means.
2. The variance of the difference of two independent random variables is the sum of the variances.
3. The difference of two independent normally distributed random variables is also normally distributed. (Aliaga, Martha, and Brenda Gunderson. Interactive Statistics. Upper Saddle River, NJ: Pearson Prentice Hall, 2006. Print.)
We will start with the question of whether the proportion of people with a health problem such as diabetes, heart disease or cancer is lower for those who meet the government’s exercise recommendation than it is for those who don’t. This means that there are two populations, the population that exercises above government recommended levels and the population that doesn’t. Within each population, the proportion of people with a health problem will be found. A hypothesis test will be used to determine if people who exercise at the recommended levels have fewer health problems than people who don’t. The hypotheses are:
$H_0: P_{\text{exercise}} = P_{\text{don't}}$
$H_1: P_{\text{exercise}} < P_{\text{don't}}$
Writing hypotheses in this manner is easy to interpret, but an algebraic manipulation of these will give us some insight into the distribution that would be used to represent the null hypothesis. $P_{\text{don’t}}$ will be subtracted from both sides.
$H_0: P_{\text{exercise}} - P_{\text{don't}} = 0$
$H_1: P_{\text{exercise}} - P_{\text{don't}} < 0$
Since neither $P_{\text{exercise}}$ nor $P_{\text{don't}}$ is known, because these are parameters, the best that can be done is to estimate them using sample proportions. Therefore $\hat{p}_{exercise}$ will be used as an estimate of $P_{\text{exercise}}$ and $\hat{p}_{don't}$ will be used as an estimate of $P_{\text{don't}}$. Then $\hat{p}_{exercise} - \hat{p}_{don't}$ will be used as an estimate of $P_{\text{exercise}} - P_{\text{don't}}$.
The distribution of interest to us is the one consisting of the difference between sample proportions, generically shown as $\hat{p}_{A} - \hat{p}_{B}$.
The mean of this distribution is $p_A - p_B$ and the standard deviation is $\sqrt{\dfrac{p_{A} (1 - p_{A})}{n_{A}} + \dfrac{p_{B} (1 - p_{B})}{n_{B}}}$. Since the only thing that is known about $p_A$ and $p_B$ is that they are equal, it is necessary to estimate their value so that the standard deviation can actually be computed. To do this, the sample proportions will be combined. The combined proportion is defined as
$\hat{p}_c = \dfrac{x_A + x_B}{n_A + n_B}.$
Replacing $p_A$ and $p_B$ with $\hat{p}_c$ results in the formula for estimated standard error of
$\sqrt{\dfrac{\hat{p}_{c} (1 - \hat{p}_{c})}{n_{A}} + \dfrac{\hat{p}_{c} (1 - \hat{p}_{c})}{n_{B}}} \text{ or } \sqrt{\hat{p}_{c} (1 - \hat{p}_{c}) (\dfrac{1}{n_{A}} + \dfrac{1}{n_{B}})}$
We can now substitute into the $z$ formula, $z = \dfrac{x − \mu}{\sigma}$ to get the test statistic used when testing the difference between two population proportions,
$z = \dfrac{(\hat{p}_{A} - \hat{p}_{B}) - (p_{A} - p_{B})}{\sqrt{\hat{p}_{c} (1 - \hat{p}_{c}) (\dfrac{1}{n_{A}} + \dfrac{1}{n_{B}})}}$
This can be written a little more simply in cases when the null hypothesis is $P_A = P_B$, which means that $p_A - p_B = 0$, so that term can be eliminated to give the test statistic
$z = \dfrac{(\hat{p}_{A} - \hat{p}_{B})}{\sqrt{\hat{p}_{c} (1 - \hat{p}_{c}) (\dfrac{1}{n_{A}} + \dfrac{1}{n_{B}})}}$
For this test statistic, both sample sizes should be sufficiently large (n > 20) with a minimum of 5 successes and 5 failures.
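As a sketch of how this test statistic might be computed in software (assuming Python; the function name and variable names are our own, not the text's notation):

```python
from math import sqrt

def two_proportion_z(x_a, n_a, x_b, n_b):
    """Pooled z statistic for H0: pA = pB, following the formula above."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_c = (x_a + x_b) / (n_a + n_b)        # combined (pooled) proportion
    se = sqrt(p_c * (1 - p_c) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```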
A similar approach will be taken with question 5, which asks whether the average amount of exercise a college graduate does in a week is different from that of someone who did not graduate from college. There are two populations being compared, the population of college graduates and the population of non-college graduates. The average amount of exercise in each of these populations will be compared.
When the means of two populations are compared, the hypotheses are written as:
$H_0: \mu_{\text{college grad}} = \mu_{\text{not college grad}}$
$H_1: \mu_{\text{college grad}} \ne \mu_{\text{not college grad}}$
Writing hypotheses in this manner is easy to interpret, but an algebraic manipulation of these will give us some insight into the distribution that would be used to represent the null hypothesis.
$\mu_{\text{not college grad}}$ will be subtracted from both sides.
$H_0: \mu_{\text{college grad}} - \mu_{\text{not college grad}} = 0$
$H_1: \mu_{\text{college grad}} - \mu_{\text{not college grad}} \ne 0$
Since neither $\mu_{\text{college grad}}$ nor $\mu_{\text{not college grad}}$ is known, because these are parameters, the best that can be done is to estimate them using sample means. Therefore $\bar{x}_{college\ grad}$ will be used as an estimate of $\mu_{\text{college grad}}$ and $\bar{x}_{not\ college\ grad}$ will be used as an estimate of $\mu_{\text{not college grad}}$. Then $\bar{x}_{college\ grad} - \bar{x}_{not\ college\ grad}$ will be used as an estimate of $\mu_{\text{college grad}} - \mu_{\text{not college grad}}$.
The distribution of interest to us is the one consisting of the difference between sample means, generically shown as $\bar{x}_{A} - \bar{x}_{B}$.
The mean of this distribution is $\mu_A - \mu_B$ and the standard deviation is $\sqrt{\dfrac{\sigma_{A}^{2}}{n_{A}} + \dfrac{\sigma_{B}^{2}}{n_{B}}}$. Once again we run into the problem that the standard deviations of the populations, $\sigma_A$ and $\sigma_B$, are not known, so they must be estimated with the sample standard deviations $s_A$ and $s_B$. An additional problem is that it is not known if the variances for the two populations are equal (homogeneous). Unequal variances (heterogeneous) increase the Type I error rate. (Sheskin, David J. Handbook of Parametric and Nonparametric Statistical Procedures. Boca Raton: Chapman & Hall/CRC, 2000. Print.)
The $t$ Test for Two Independent Samples is used to test the hypothesis. This test is dependent upon the following assumptions.
1. Each sample is randomly selected from the population it represents.
2. The distribution of data in the population from which the sample was drawn is normal
3. The variances of the two populations are equal. This is the homogeneity of variance assumption. (Sheskin, David J. Handbook of Parametric and Nonparametric Statistical Procedures. Boca Raton: Chapman & Hall/CRC, 2000. Print.)
The test statistic follows the same basic pattern as the other tests, which involves finding the number of standard errors a statistic is away from the hypothesized parameter.
$t = \dfrac{(\bar{x}_{A} - \bar{x}_{B}) - (\mu_A - \mu_B)}{\sqrt{\dfrac{s_{A}^{2}}{n_A} + \dfrac{s_{B}^{2}}{n_B}}}$
The assumption with this formula is that the two sample sizes are equal. If this formula is used when the sample sizes are not equal, there is an increased chance of making a Type I error. In such cases, an alternative formula is used which includes the weighted average of the estimated population variances of the two groups. The weighted average is based on the number of degrees of freedom in each sample. This formula can be used for both equal and non-equal sample sizes.
$t = \dfrac{(\bar{x}_{A} - \bar{x}_{B}) - (\mu_A - \mu_B)}{\sqrt{[\dfrac{(n_A - 1)s_{A}^{2} + (n_B - 1)s_{B}^{2}}{n_A + n_B - 2}][\dfrac{1}{n_A} + \dfrac{1}{n_B}]}}$
Because two parameters ($\sigma_A$ and $\sigma_B$) are replaced by $s_A$ and $s_B$, two degrees of freedom are lost. Thus, the number of degrees of freedom for this test statistic is $n_A + n_B - 2$.
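A sketch of the pooled-variance two-sample t statistic in software (assuming Python; the function name is our own) might look like:

```python
from math import sqrt

def two_sample_t(xbar_a, s_a, n_a, xbar_b, s_b, n_b):
    """Pooled-variance t statistic for H0: muA = muB; df = nA + nB - 2."""
    df = n_a + n_b - 2
    pooled_var = ((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / df
    se = sqrt(pooled_var * (1 / n_a + 1 / n_b))
    return (xbar_a - xbar_b) / se, df
```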
There are four different hypothesis tests presented in this chapter. The hypotheses and test statistics are summarized in the following table.
1-sample test for a proportion (categorical data):
$H_0: p = p_0$; $H_1: p < p_0$ or $p > p_0$ or $p \ne p_0$
Test statistic: $z = \dfrac{\hat{p} - p}{\sqrt{\dfrac{p(1 - p)}{n}}}$
Assumptions: $np \ge 5$, $n(1 - p) \ge 5$

1-sample test for a mean (quantitative data):
$H_0: \mu = \mu_0$; $H_1: \mu < \mu_0$ or $\mu > \mu_0$ or $\mu \ne \mu_0$
Test statistic: $t = \dfrac{\bar{x} - \mu}{\dfrac{s}{\sqrt{n}}}$, df = $n - 1$
Assumptions: if $n < 30$, the population is approximately normally distributed.

2-sample test for proportions (categorical data):
$H_0: p_A = p_B$; $H_1: p_A < p_B$ or $p_A > p_B$ or $p_A \ne p_B$
Test statistic: $z = \dfrac{(\hat{p}_{A} - \hat{p}_{B}) - (p_{A} - p_{B})}{\sqrt{\hat{p}_{c} (1 - \hat{p}_{c}) (\dfrac{1}{n_{A}} + \dfrac{1}{n_{B}})}}$, where $\hat{p}_c = \dfrac{x_A + x_B}{n_A + n_B}$
Assumptions: $np \ge 5$, $n(1 - p) \ge 5$ for both populations

2-sample test for means (quantitative data):
$H_0: \mu_A = \mu_B$; $H_1: \mu_A < \mu_B$ or $\mu_A > \mu_B$ or $\mu_A \ne \mu_B$
Test statistic: $t = \dfrac{(\bar{x}_{A} - \bar{x}_{B}) - (\mu_A - \mu_B)}{\sqrt{[\dfrac{(n_A - 1)s_{A}^{2} + (n_B - 1)s_{B}^{2}}{n_A + n_B - 2}][\dfrac{1}{n_A} + \dfrac{1}{n_B}]}}$, df = $n_A + n_B - 2$
Assumptions: if $n < 30$, both populations are approximately normally distributed.
For each hypothesis-testing situation, you will have to decide which formula and which table to use. Notice that when the hypotheses are about proportions, the standard normal $z$ distribution is used. When the hypotheses are about means, the t distributions are used.
We will now return to our original five questions. The statistics given in these problems are fictitious.
1. Is the proportion of people who exercise enough to meet the government’s recommendation less than 0.25?
Assume that a random sample of 800 adults was taken. Of these, 184 claimed they met the government’s recommendation for exercise. Can we conclude that the proportion that meets this recommendation is less than 25%? Use a level of significance of 0.05.
The hypotheses are:
$H_0: p = 0.25$
$H_1: p < 0.25$
The sample proportion is $\hat{p} = \dfrac{x}{n} = \dfrac{184}{800} = 0.23$
The test statistic is $z = \dfrac{\hat{p} - p}{\sqrt{\dfrac{p(1 - p)}{n}}}$. With substitution $z = \dfrac{0.23 - 0.25}{\sqrt{\dfrac{0.25(1 - 0.25)}{800}}} = -1.31$
Check the standard normal distribution table to find the area to the left is 0.0951. This is the p-value because the direction of the extreme is to the left. Since the p-value is greater than the level of significance, the data are consistent with the null hypothesis. We conclude that at the 0.05 level of significance, the proportion of adults who meet government recommendations for exercise is not significantly less than 25% ($z$ = -1.31, $p$ = 0.0951, $n$ = 800).
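A minimal sketch of this test in software (assuming Python with SciPy; variable names are our own):

```python
from math import sqrt
from scipy.stats import norm

x, n, p0 = 184, 800, 0.25
p_hat = x / n                                # 0.23
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)   # about -1.31
p_value = norm.cdf(z)                        # about 0.0951
print(z, p_value)
```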
2. Is the proportion of people with a health problem such as diabetes, heart disease or cancer lower for those who meet the government’s exercise recommendation than it is for those who don’t?
Assume a random sample is taken from both populations. For the people who meet the recommended amount of exercise, 84 out of 560 had a health problem. For the people who did not exercise enough, 204 out of 850 had a health problem.
The hypotheses are:
$H_0: P_{\text{exercise}} = P_{\text{don't}}$
$H_1: P_{\text{exercise}} < P_{\text{don't}}$
The sample proportions are $\hat{p}_{exercise} = \dfrac{x}{n} = \dfrac{84}{560} = 0.15$ and $\hat{p}_{don't} = \dfrac{x}{n} = \dfrac{204}{850} = 0.24$
The pooled proportion is $\hat{p}_{c} = \dfrac{x_{A} + x_{B}}{n_{A} + n_{B}} = \dfrac{84 + 204}{560 + 850} = 0.204$
The test statistic is $z = \dfrac{(\hat{p}_{A} - \hat{p}_{B}) - (p_{A} - p_{B})}{\sqrt{\hat{p}_{c} (1 - \hat{p}_{c}) (\dfrac{1}{n_{A}} + \dfrac{1}{n_{B}})}}$
with substitution $z = \dfrac{(0.15 - 0.24)}{\sqrt{0.204(1 - 0.204) (\dfrac{1}{560} + \dfrac{1}{850})}} = -4.10$
Checking the standard normal distribution table, the $z$ value of -4.10 is below the lowest value in the table (-3.49), therefore the area in the left tail is less than 0.0002. We conclude that at the 0.05 level of significance, the proportion of health problems for people meeting the government’s recommendation for exercise is significantly less than for people who don’t exercise this much ($z$ = -4.10, p < 0.0002, $n_{\text{exercise}} = 560$, $n_{\text{don’t}} = 850$).
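The same calculation in software (a sketch assuming Python with SciPy; variable names are our own):

```python
from math import sqrt
from scipy.stats import norm

x_a, n_a = 84, 560     # exercisers with a health problem
x_b, n_b = 204, 850    # non-exercisers with a health problem
p_a, p_b = x_a / n_a, x_b / n_b              # 0.15 and 0.24
p_c = (x_a + x_b) / (n_a + n_b)              # about 0.204
se = sqrt(p_c * (1 - p_c) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se                         # about -4.10
p_value = norm.cdf(z)                        # about 2e-5, below 0.0002
print(z, p_value)
```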
3. Is the average amount of exercise a college student does in a week greater than 2.5 hours?
For this question, the evidence that needs to be gathered is hours of exercise in a week. That is quantitative data. To use the t-test, we need to make sure the data in the sample are approximately normally distributed. The hypotheses that will be tested are:
$H_0: \mu = 2.5$
$H_1: \mu > 2.5$
The level of significance is 0.10.
The number of hours of exercise by 20 randomly selected students is shown in the table below.
3.7 2 7.1 1.7 0 0 2.1 2.9 4 3.2
3.4 1.3 1 4.2 0 1.3 2.9 5.3 4.4 2.3
A histogram for this data shows that it is approximately normally distributed. The biggest deviation from normality is in the left tail since it isn’t possible to exercise less than 0 hours per week.
The sample mean and standard deviation are 2.64 hours and 1.855 hours, respectively. The test statistic is $t = \dfrac{\bar{x} - \mu}{\dfrac{s}{\sqrt{n}}}$; with substitution, $t = \dfrac{2.64 - 2.5}{\dfrac{1.855}{\sqrt{20}}}$. After simplification, $t = 0.338$. There are 19 degrees of freedom (20 - 1). Using the t table, in the row with 19 degrees of freedom, find the location of 0.338. Notice that 0.338 falls between 0.257 and 0.688, so the table shows that the area in the right tail is between 0.25 and 0.40. Since the level of significance is 0.1, and the area in the tail is greater than 0.1 and more specifically greater than 0.25, we would report that the p-value is greater than 0.25.
Conclusion: at the 0.10 level of significance, the average time that college students exercise is not significantly greater than 2.5 hours ($t$ = 0.338, $p$ > 0.25, $n$ = 20).
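As a check on the table lookup, a minimal sketch (assuming Python with NumPy and SciPy; variable names are our own):

```python
import numpy as np
from scipy.stats import t

hours = [3.7, 2, 7.1, 1.7, 0, 0, 2.1, 2.9, 4, 3.2,
         3.4, 1.3, 1, 4.2, 0, 1.3, 2.9, 5.3, 4.4, 2.3]
n = len(hours)
x_bar = np.mean(hours)                       # 2.64
s = np.std(hours, ddof=1)                    # about 1.855
t_stat = (x_bar - 2.5) / (s / np.sqrt(n))    # about 0.338
p_value = t.sf(t_stat, df=n - 1)             # about 0.37, so p > 0.25
print(t_stat, p_value)
```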
4. Is the average weight of a person less after a month of a new regular aerobic fitness program?
For this question, two sets of data must be collected, the before weight and the after weight. The before weight will be subtracted from the after weight to determine the change in weight. Because ultimately there will be only one set of data, the t test for one population mean will be used.
$H_0: \mu = 0$
$H_1: \mu < 0$
The level of significance is 0.10.
Subject 1 2 3 4 5 6 7 8 9
Before weight 158 213 142 275 184 136 172 263 205
After weight 154 213 135 278 180 134 171 258 199
After - Before -4 0 -7 3 -4 -2 -1 -5 -6
This distribution is approximately normal, so it is appropriate to use the t-test for one population mean. The sample mean is -2.89 lbs with a standard deviation of 3.18 lbs. The test statistic is: $t = \dfrac{\bar{x} - \mu}{\dfrac{s}{\sqrt{n}}}$, with substitution, $t = \dfrac{-2.89 - 0}{\dfrac{3.18}{\sqrt{9}}}$. After simplification, t = -2.726. There are 8 degrees of freedom (9-1). Since -2.726 falls between 2.306 and 2.896 in the row for 8 degrees of freedom and since the level of significance is 0.1 but the area in the tail to the left of -2.726 is less than 0.025, then the conclusion is that the new weight is significantly less than the original weight ($t$ = -2.726, $p$ < 0.025, $n$ = 9). We conclude people lost weight.
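The paired calculation in software (a sketch assuming Python with NumPy and SciPy; variable names are our own):

```python
import numpy as np
from scipy.stats import t

before = np.array([158, 213, 142, 275, 184, 136, 172, 263, 205])
after  = np.array([154, 213, 135, 278, 180, 134, 171, 258, 199])
diff = after - before                        # the data actually analyzed
n = len(diff)
d_bar = diff.mean()                          # about -2.89
s = diff.std(ddof=1)                         # about 3.18
t_stat = d_bar / (s / np.sqrt(n))            # about -2.73
p_value = t.cdf(t_stat, df=n - 1)            # about 0.013, so p < 0.025
print(t_stat, p_value)
```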
5. For those who exercise regularly, is the average amount of exercise a college graduate does in a week different than someone who does not graduate from college?
Assume a random sample is taken for the population of college graduates who exercise regularly and a different random sample is taken from the population of non-graduates who exercise regularly. Also assume that the amount of exercise is normally distributed for both groups and that the variance is homogeneous. The hypotheses are shown below. Use a level of significance of 0.05.
$H_0: \mu_{\text{college grad}} = \mu_{\text{not college grad}}$
$H_1: \mu_{\text{college grad}} \ne \mu_{\text{not college grad}}$
The table below shows the mean, standard deviation and sample size for the two samples.
Units: hours/week College Graduates Not College Graduates
Mean 4.2 3.8
Standard Deviation 1.3 1.2
Sample size, $n$ 12 16
The difference in sample size means that we need the test statistic formula:
$t = \dfrac{(\bar{x}_{A} - \bar{x}_{B}) - (\mu_A - \mu_B)}{\sqrt{[\dfrac{(n_A - 1)s_{A}^{2} + (n_B - 1)s_{B}^{2}}{n_A + n_B - 2}][\dfrac{1}{n_A} + \dfrac{1}{n_B}]}}$
which is used for independent populations. Substituting into the formula gives:
$t = \dfrac{(4.2 - 3.8) - (0)}{\sqrt{[\dfrac{(12 - 1) 1.3^{2} + (16 - 1) 1.2^{2}}{12 + 16 - 2}][\dfrac{1}{12} + \dfrac{1}{16}]}} = 0.842$
Because of the inequality sign in the alternative hypothesis, this is a two-tailed test. The test statistic of 0.842 produces a p-value between 0.2 and 0.5. Since this is clearly higher than the level of significance, the conclusion is that at the 0.05 level of significance, the amount of exercise for college graduates is not significantly different than the amount for non-graduates ($t$ = 0.842, $p$ > 0.2, $n_{\text{college grads}} = 12$, $n_{\text{not college grads}} = 16$).
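The same test from summary statistics in software (a sketch assuming Python with SciPy; variable names are our own):

```python
from math import sqrt
from scipy.stats import t

xbar_a, s_a, n_a = 4.2, 1.3, 12   # college graduates
xbar_b, s_b, n_b = 3.8, 1.2, 16   # non-graduates
df = n_a + n_b - 2
pooled_var = ((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / df
se = sqrt(pooled_var * (1 / n_a + 1 / n_b))
t_stat = (xbar_a - xbar_b) / se              # about 0.842
p_value = 2 * t.sf(abs(t_stat), df)          # about 0.41 (two-tailed)
print(t_stat, p_value)
```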
Note: The t-test for two independent samples has been based on the assumption of homogeneity of variance. There are tests to determine if the variance is homogeneous, and modifications that can be made to the degrees of freedom if it is not. These are not included in this text.
All of these tests can be done using the TI84 calculator. The tests are found by selecting the STAT key and then using the cursor arrows to move to the right to TESTS.
Proportions (for categorical data):
1 sample ($H_0: p = p_0$; $H_1: p < p_0$ or $p > p_0$ or $p \ne p_0$): Test 5: 1-PropZTest
2 samples ($H_0: p_A = p_B$; $H_1: p_A < p_B$ or $p_A > p_B$ or $p_A \ne p_B$): Test 6: 2-PropZTest

Means (for quantitative data):
1 sample ($H_0: \mu = \mu_0$; $H_1: \mu < \mu_0$ or $\mu > \mu_0$ or $\mu \ne \mu_0$): Test 2: T-Test
2 samples ($H_0: \mu_A = \mu_B$; $H_1: \mu_A < \mu_B$ or $\mu_A > \mu_B$ or $\mu_A \ne \mu_B$): Test 4: 2-SampTTest
Chapter 5 Homework
1. For each of the following questions, write the hypotheses that would be tested and then determine which hypothesis test should be used. Select from the following four choices.
$\bullet$ 1 proportion Z test
$\bullet$ 2 proportion Z test
$\bullet$ 1 sample t test
$\bullet$ 2 independent samples t test
a. Is the average commute time different when people use transit compared to when they drive?
$H_0$:___________ $H_1$:___________ Test: ________________________________
b. Do a majority of people eat raw cookie dough?
$H_0$:___________ $H_1$:___________ Test: ________________________________
c. In a statistics class, is the proportion of STEM students different than the proportion of social science students?
$H_0$:___________ $H_1$:___________ Test: ________________________________
d. Is the average income of a self-employed person greater than a person working for a large company?
$H_0$:___________ $H_1$:___________ Test: ________________________________
e. Do you average more than 7 hours of sleep a night?
$H_0$:___________ $H_1$:___________ Test: ________________________________
f. Is the proportion of students with student loans less than 0.60?
$H_0$:___________ $H_1$:___________ Test: ________________________________
g. In a long race, such as a marathon, is the second half split time faster than the first half split time? This can be phrased as: if the first split is subtracted from the second split, will the difference be less than 0? (For example, first half split: 1:06.44, second half split: 1:06.01, so 1:06.01 - 1:06.44 = -0.43. The second half was faster than the first half. This is called negative splitting.)
$H_0$:___________ $H_1$:___________ Test: ________________________________
Test the following hypotheses. Assume all assumptions for the tests have been met. Show the formulas, show the substitution and simplification, and write an appropriate concluding sentence.
2. When a young adult leaves home and lives on their own for the first time, they become responsible for feeding themselves. In general, there are two options: eat out or cook for themselves. Suppose someone hypothesized that the average amount they spend on eating out, including tax and tip, is less than $15 per day. For 30 days, they keep all their receipts and find the mean amount spent per day is $14.28 with a standard deviation of $4.60. Test the hypothesis that the mean they spend per day is less than $15. $\alpha = 0.05$
Write the hypotheses: $H_0$: ___________, $H_1$: ___________,
___________ _________________ ________________ ________________
Formula Substitution Test Statistic p-value
Fill in the blanks for the concluding sentence. At the _____ level of significance, the mean money spent per day _____ significantly less than $15 (t = ______________, p = _____________, n = ___________).
3. When a child has been adopted, they may grow up wondering why their birth parents did not keep them. Technology and some changes in the laws now allow these children to find their birth parents. However, there is no guarantee the birth parent will be happy to be found. The child runs an enormous risk of being rejected. On the other hand, some birth parents are delighted to be found and fully appreciate the reunification. It has been hypothesized that a majority of the reunions have a favorable outcome. Test this hypothesis if a survey of children who found their birth parents resulted in 118 out of 179 having a favorable result. Let $\alpha = 0.05$
Write the hypotheses: $H_0$: ___________, $H_1$: ___________,
___________ _________________ ________________ ________________
Formula Substitution Test Statistic p-value
Fill in the blanks for the concluding sentence. At the ________ level of significance, the proportion of reunions that are favorable _________ significantly greater than 0.50 ($z$ = __________, $p$ = __________, $n$ =_______)
4. US News and World Report has stated what many think: the cheapest time to buy airline tickets is on Tuesdays after 3 pm Eastern time. (money.usnews.com/money/personal-finance/articles/2012/04/18/8-insider-secrets-to-booking-cheap-airfare, viewed 4-30-17) Use the data in the table below to determine if the mean difference between the Tuesday price and the price the rest of the week is less than 0.
Data was collected from Travelocity.com in May 2017 for round-trip flights on Sept 10-17 2017.
Destination   Airline   Day   Day Price   Tuesday Price   Tuesday - Day
Seattle to Boston Alaska Sunday 420.40 420.40 0
Chicago to Dallas United Thursday 446.40 470.40 24
San Francisco to Orlando Delta Thursday 399.60 362.60 -37
Memphis to Phoenix American Saturday 277.90 277.90 0
Denver to Las Vegas Frontier Thursday 205.95 195.98 -9.97
New York to LA Jet Blue Saturday 411.40 411.40 0
Albuquerque to Philadelphia American Sunday 513.20 513.20 0
$H_0$: ___________, $H_1$: ___________, $\alpha = 0.05$
___________ _________________ ________________ ________________
Formula Substitution Test Statistic p-value
Write the concluding sentence:
5. The concept of anchoring is one that makes us question our own rationality. An example of anchoring is when you see something you want to buy, but can’t afford and are then told the item is on sale. Suddenly, it looks like a good price. On the other hand, if someone told you they had bought the same item for half the price you paid, you would feel cheated. Daniel Kahneman and Amos Tversky were researchers who investigated anchoring. This concept was tested in an experiment with the Spring 2017 statistics class. The class was asked to write down the last two numbers of their phone number. Then they were asked to write down their guess for the population of Morocco. The objective is to determine if the people who wrote down numbers that were greater than or equal to 60 (last 2 digits of phone number) had a higher estimate of Morocco’s population than those with numbers less than or equal to 40. Use a level of significance of 0.10.
Write the hypotheses:
$H_0$: ___________, $H_1$: ___________,
Formula Substitution
Test Statistic p-value significant?
Write the concluding sentence:
6. Since more people are aware of the problem of waste and are attempting to do their part by bringing their own reusable bags to the grocery store or not using a bag at all, is it possible that fewer than half of shoppers now use bags provided by the grocery store? Data: Out of 750 shoppers, 282 used a paper bag provided by the store (plastic bags are illegal in Seattle, where this study by statistics students was conducted). $\alpha = 0.05$
$H_0$: ___________, $H_1$: ___________, What is the sample proportion?___________
___________ _________________ ________________ ________________
Formula Substitution Test Statistic p-value
7. The Tacoma-Pierce County Health Department conducts a Healthy Youth Survey to assess the health-related behaviors of Pierce County youth.(www.tpchd.org/resources/publi...-health-risks/) The survey is given to students in grades 6, 8, 10, and 12. The data from the 2002 and 2012 reports will be used to determine if there has been an increase in the use of marijuana or hashish among $12^{\text{th}}$ grade students. Use a 5% level of significance.
a. Write the hypotheses that will be tested.
The Data: $x_{2012} = 165$, $n_{2012} = 630$; $x_{2002} = 537$, $n_{2002} = 2184$
b. Find the sample proportions for each year. 2002_________ 2012 __________
c. Test the hypotheses.
___________ _________________ ________________ ________________
Formula Substitution Test Statistic p-value
d. Write a concluding sentence. At the 5% level of significance ___________________________________________________________________________________ __________________________
e. Washington just legalized recreational use of marijuana for adults. Do you expect the use of marijuana by $12^{\text{th}}$ graders to increase, remain the same, or decrease because of this new law? Why? Circle one: increase remain the same decrease Why? ______________________________________________________________________________________________________________
8. Briefing 5.2: Trailblazing Women
In 1966, Bobbi Gibb applied to participate in the Boston Marathon. The director wrote her back saying "women are not physiologically able to run marathon distances, and we wouldn't want to take the medical liability."6 That was during a time when opportunities were limited for women because of the assumption that they were physically incapable of doing many of the things that men could do. The brief story is that Bobbi crashed the race wearing her brother's shorts and a sweatshirt, but within 30 seconds, some of the men realized she was a woman and were glad she was in the race.
Spectators realized it too. "As Gibb ran by the crowds, she saw their reactions. Men were cheering and clapping, and women were jumping wildly up and down and weeping." She finished ahead of two-thirds of the other runners. Six years later, the rules were changed to allow women to run in the Boston Marathon. (http://www.californiareport.org/arch...201304150850/b)
Now that women are allowed to compete on an equal basis with men, we can explore the differences in performance in longer athletic events. One of the most grueling competitions is the Ironman Triathlon in which participants swim 2.4 miles, bicycle 112 miles and then run a full 26.2-mile marathon. Results from the 2013 Canadian Ironman Triathlon in Whistler, BC will be used to compare the times of the non-professional participants. The question is whether there is a significant difference in the mean times of the men and women who finished the course. Use a 5% level of significance.
a. Complete the design layout table.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion Correlation
List potential confounding variables.
Grouping/explanatory Variables 1 (if present) Levels:
b. Write the hypotheses. $H_0$: ___________, $H_1$: ___________, $\alpha = 0.05$
Data
Enter the data into your calculator to find the statistics needed for parts e and f.
c. Make a side-by-side box plot.
d. Find the mean and standard deviation for the men and women.
n mean standard deviation
Men
Women
e. Test the hypotheses (you may use your calculator, you don’t need to write the formula).
________________ __________
Test Statistic p-value
f. Write a concluding sentence.
g. Based on the results of this study, what can be concluded about the physical capabilities of women compared to men in endurance activities? _________ _______________________________________
9. Why Statistical Reasoning is Important for Psychology Students and Professionals In collaboration with Tom Link, Professor of Psychology Based on the research article "How Does Stigma "Get Under the Skin"?"(Hatzenbuehler, M. L., Nolen-Hoeksema, S., & Dovidio, J. (2009). How does stigma "get under the skin"? The mediating role of emotion regulation. Psychological Science, 20(10), 1282-1289.) by Mark Hatzenbuehler, Susan Nolen-Hoeksema, and John Dovidio, published in Psychological Science, a journal of the Association for Psychological Science, in 2009.
This topic is taught in Psyc 100 (General Psychology) and Psyc 210 (Social Psychology)
What strategies are most effective for avoiding adverse mental health issues for victims of discrimination?
Briefing 6.3
Various groups of people feel stigma-related stress. The concept of social stigma was originally discussed by psychologist Erving Goffman in the 1963 publication Stigma: Notes on the Management of Spoiled Identity. Stigma is defined as “a set of negative and often unfair beliefs that a society or group of people have about something”. (www.merriam-webster.com 11-14-13) These groups include African Americans, LGB (lesbian, gay, bisexual), women, criminals, and obese individuals. There is some research that links stigma-related stressors to adverse mental and behavioral health. Stigma can be concealed for LGB individuals or criminals but not for the other groups. This means that people cannot tell if another person is gay unless they have been told, so the stigma is concealed from them (also called discreditable stigma). In contrast, skin color or gender is evident to another person instantly (also called discredited stigma).
Individuals subjected to discrimination have a variety of ways to respond. Two of the ways include rumination and distraction. Rumination is the process of focusing on yourself and reflecting on why you received such treatment. Distraction is to have your thoughts focused on something other than how you feel and the discrimination situation. Psychological distress, the dependent variable, will be used as a measure of mental health. It will be measured with a commonly used test.
There are three questions that will be asked in this problem. To answer these three questions, the researchers used undergraduate students and community members. The data to answer the first question will come from a daily diary the subjects maintain. The data for the second question will come from surveys given to the subjects after they wrote about a time when they were victims of discrimination. The data for the third question will be based on another survey the subjects take after they are directed towards one of the two coping mechanisms, rumination or distraction, using a random process (e.g. coin flip).
You may use your calculator to test these hypotheses. You do not need to write the formulas or show substitution.
a. Is the proportion of days with a discriminatory incident different for African Americans than it is for LGB individuals? $\alpha = 0.05$
African American and LGB subjects maintained a journal for 20 days.
Group | n | Days with at least one stigma-related stressor | Days of data
African Americans | 19 | 139 | 190
LGB | 31 | 226 | 310
$H_0$: ___________, $H_1$: ___________,
Test Statistic p-value
Conclusion:
b. Is there a significant difference between the mean psychological distress score of African Americans and LGB individuals following an experiment in which the individual had to recall a discrimination issue they faced? $\alpha = 0.05$
n Average Psychological Distress Standard Deviation
African Americans 19 4.71 3.96
LGB 31 4.51 4.52
$H_0$: ___________, $H_1$: ___________,
Test Statistic p-value
Conclusion:
c. Is there a significant difference between the mean psychological distress score of those who used rumination to cope with the discrimination and those who used distraction?
n Mean Standard Deviation
Rumination 26 13.24 6.14
Distraction 26 10.07 4.10
$H_0$: ___________, $H_1$: ___________,
Test Statistic p-value
Conclusion:
d. The subjects in this research were undergraduate students and community members living near Yale University in Connecticut. Does this affect the conclusions that have been drawn? Why or why not? | textbooks/stats/Introductory_Statistics/Foundations_in_Statistical_Reasoning_(Kaslik)/05%3A_Testing_Hypotheses/5.E%3A_Testing_Hypotheses_%28Exercises%29.txt |
The inferences that were discussed in chapters 5 and 6 were based on the assumption of an a priori hypothesis that the researcher had about a population. However, there are times when the researchers do not have a hypothesis. In such cases they would simply like a good estimate of the parameter.
By now you should realize that the statistic (which comes from the sample) will most likely not equal the parameter of the population, but it will be relatively close, since it is part of the normally distributed collection of possible statistics. Consequently, the best that can be claimed is that the statistic is a point estimate of the parameter. Because half of the statistics that could be selected are higher than the parameter and half are lower, and because the variation that can be expected among statistics depends, in part, upon sample size, knowledge of the statistic alone is insufficient for determining how good an estimate it is of the parameter. For this reason, estimates are provided with confidence intervals instead of point estimates.
You are probably most familiar with the concept of confidence intervals from polling results preceding elections. A reporter might say that 48% of the people in a survey plan to vote for candidate A, with a margin of error of plus or minus 3%. The interpretation is that between 45% and 51% of the population of voters will vote for candidate A. The size of the margin of error provides information about the potential gap between the point estimate (statistic) and the parameter. The interval gives the range of values that is most likely to contain the true parameter. For a confidence interval of (0.45,0.51) the possibility exists that the candidate could have a majority of the support. The margin of error, and consequently the interval, is dependent upon the degree of confidence that is desired, the sample size, and the standard error of the sampling distribution.
The logic behind the creation of confidence intervals can be demonstrated using the empirical rule, otherwise known as the 68-95-99.7 rule that you learned in Chapter 5. We know that of all the possible statistics that comprise a sampling distribution, 95% of them are within approximately 2 standard errors of the mean of the distribution. From this we can deduce that the mean of the distribution is within 2 standard errors of 95% of the possible statistics. By analogy, this is equivalent to saying that if you are less than two meters from the student who is seated next to you, then that student is less than two meters from you. Consequently, by taking the statistic and adding and subtracting two standard errors, an interval is created that should contain the parameter for 95% of the statistics we could get using a good random sampling process.
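This coverage idea can be checked by simulation. Below is a minimal sketch in Python, assuming the numpy library is available (all names and the population values are ours, chosen purely for illustration): it repeatedly samples from a known population, builds the interval statistic $\pm$ 2 standard errors, and counts how often the interval captures the true mean.

```python
# A simulation sketch of the coverage idea: build many intervals of the
# form statistic +/- 2 standard errors and count how often they capture
# the true population mean.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 50, 10, 40, 10_000
hits = 0
for _ in range(trials):
    sample = rng.normal(mu, sigma, n)
    se = sample.std(ddof=1) / np.sqrt(n)     # estimated standard error
    if abs(sample.mean() - mu) <= 2 * se:    # does the interval contain mu?
        hits += 1
print(hits / trials)   # close to 0.95, as the empirical rule predicts
```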
When using the empirical rule, the number 2, in the phrase "2 standard errors", is called a critical value. However, a good confidence interval requires a critical value with more precision than is provided by the empirical rule. Furthermore, there may be a desire to have the degree of confidence be something besides 95%. Common alternatives include 90% and 99% confidence intervals. If the degree of confidence is 95%, then the critical values separate the middle 95% of the possible statistics from the rest of the distribution. If the degree of confidence is 99%, then the critical values separate the middle 99% of the possible statistics from the rest of the distribution. Whether the critical value is found in the standard normal distribution (a $z$ value) or in the t distributions (a t value) is based on whether the confidence interval is for a proportion or a mean.
The critical value and the standard error of the sampling distribution must be determined in order to calculate the margin of error.
The critical value is found by first determining the area in one tail. The area in the left tail (AL) is found by subtracting the degree of confidence from 1 and then dividing this by 2.
$A_L = \dfrac{1 - \text{degree of confidence}}{2}.$
For example, substituting into the formula for a 95% confidence interval produces
$A_L = \dfrac{1 - 0.95}{2} = 0.025$
The critical $z$ value for an area to the left of 0.025 is -1.96. Because of symmetry, the critical value for an area to the right of 0.025 is +1.96. This means that by finding the critical values corresponding to an area in the left tail of 0.025, we find the lines that separate the group of statistics with a 95% chance of being selected from the group that has a 5% chance of being selected.
An area in the left tail of 0.025, which is found in the body of the z distribution table, corresponds with a $z^{\ast}$ value of -1.96. This is shown in the section of the Z table shown below.
The critical $z$ value of -1.96 is also called the 2.5th percentile. That means that 2.5% of all possible statistics are below that value.
Critical values can also be found using a TI 84 calculator. Use $2^{\text{nd}}$ Distr, #3 invnorm (percentile, $\mu$, $\sigma$). For example invnorm(0.025,0,1) gives -1.95996 which rounds to -1.96.
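For readers using software rather than a table or a TI-84, the same lookup can be done programmatically. A minimal sketch in Python, assuming the scipy library is available:

```python
# Finding critical z* values, mirroring invNorm on the TI-84:
# stats.norm.ppf returns the z value with the given area to its left.
from scipy import stats

for confidence in (0.90, 0.95, 0.99):
    area_left = (1 - confidence) / 2          # area in the left tail
    z_star = stats.norm.ppf(1 - area_left)    # positive critical value
    print(f"{confidence:.0%}: z* = {z_star:.3f}")
# prints 90%: z* = 1.645, 95%: z* = 1.960, 99%: z* = 2.576
```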
Confidence intervals for proportions always have a critical value found on the standard normal distribution. The $z$ value that is found is given the notation $z^{\ast}$. These critical values vary based on the degree of confidence. The other most common confidence intervals are 90% and 99%. Complete the table below to find these commonly used critical values.
Degree of Confidence | Area in Left Tail | $z^{\ast}$
0.90 | |
0.95 | 0.025 | 1.96
0.99 | |
Confidence intervals for means require a critical value, $t^{\ast}$, which is found on the t tables. These critical values are dependent upon both the degree of confidence and the sample size, or more precisely, the degrees of freedom. The top of the t-table provides a variety of confidence levels along with the area in one or both tails. The easiest approach to finding the critical $t^{\ast}$ value is to find the column with the appropriate confidence level then find where that column intersects with the row containing the appropriate degrees of freedom. For example, the $t^{\ast}$ value for a 95% confidence interval with 7 degrees of freedom is 2.365.
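The same lookup is available in software. A sketch in Python (assuming scipy), reproducing the table value just quoted:

```python
# Finding a critical t* value from the confidence level and the
# degrees of freedom, instead of reading a printed t-table.
from scipy import stats

confidence, df = 0.95, 7
area_left = (1 - confidence) / 2
t_star = stats.t.ppf(1 - area_left, df)
print(round(t_star, 3))   # 2.365, matching the table
```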
The second component of the margin of error, which is the standard error for the sampling distribution, assumes knowledge of the mean of the distribution (e.g. $\mu_{\hat{p}} = p$ and $\mu_{\bar{x}} = \mu$). When testing hypotheses about the mean of the distribution, we assume these values because we assume the null hypothesis is true. However, when creating confidence intervals, we admit to not knowing these values and consequently we cannot use the standard error. For example, the standard error for the distribution of sample proportions is $\sigma_{\hat{p}} = \sqrt{\dfrac{p(1 - p)}{n}}$. Since we don’t know $p$, we can’t use this formula. Likewise, the standard error for the distribution of sample means is $\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}$. To find $\sigma$ we need to know the population mean, $\mu$, but once again we don’t know it, and we don’t even have a hypothesis about it, so consequently we can’t find $\sigma$. The strategy in both these cases is to find an estimate of the standard error by using a statistic to estimate the missing parameter. Thus, $\hat{p}$ is used to estimate $p$ and $s$ is used to estimate $\sigma$. The estimated standard errors then become: $s_{\hat{p}} = \sqrt{\dfrac{\hat{p}(1 - \hat{p})}{n}}$ and $s_{\bar{x}} = \dfrac{s}{\sqrt{n}}$.
The groundwork has now been laid to develop the confidence interval formulas for the situations for which we tested hypotheses in the preceding chapter, namely $p$, $p_A - p_B$, $\mu$, and $\mu_A - \mu_B$. The table below summarizes these four parameters and their estimated standard errors.
Parameter | Estimated Standard Error
Proportion for one population, $p$ | $s_{\hat{p}} = \sqrt{\dfrac{\hat{p}(1 - \hat{p})}{n}}$
Difference between proportions for two populations, $p_A - p_B$ | $s_{\hat{p}_A - \hat{p}_B} = \sqrt{\dfrac{\hat{p}_A(1 - \hat{p}_A)}{n_A} + \dfrac{\hat{p}_B(1 - \hat{p}_B)}{n_B}}$
Mean for one population or mean difference for dependent data, $\mu$ | $s_{\bar{x}} = \dfrac{s}{\sqrt{n}}$
Difference between means of two independent populations, $\mu_A - \mu_B$ | $s_{\bar{x}_A - \bar{x}_B} = \sqrt{\left[\dfrac{(n_A - 1) s_{A}^{2} + (n_B - 1) s_{B}^{2}}{n_A + n_B - 2}\right]\left[\dfrac{1}{n_A} + \dfrac{1}{n_B}\right]}$
The reasoning process for determining the formulas for the confidence intervals is the same in all cases.
1. Determine the degree of confidence. The most common are 95%, 99% and 90%.
2. Use the degree of confidence along with the appropriate table (z* or t*) to find the critical value.
3. Multiply the critical value times the standard error to find the margin of error.
4. The confidence interval is the statistic plus or minus the margin of error.
Notice that all the confidence intervals have the same format, even though some look more difficult than others.
statistic $\pm$ margin of error
statistic $\pm$ critical value $\times$ estimated standard error
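Each of the four intervals below is an instance of this one template. A minimal sketch of the template as a Python function (the function and parameter names are ours, for illustration):

```python
# The common structure of every confidence interval in this chapter:
# statistic +/- critical value * estimated standard error.
def confidence_interval(statistic, critical_value, standard_error):
    margin_of_error = critical_value * standard_error
    return statistic - margin_of_error, statistic + margin_of_error
```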
Confidence intervals about the proportion for one population:
$\hat{p} \pm z^{\ast} \sqrt{\dfrac{\hat{p}(1 - \hat{p})}{n}}$
Confidence intervals for the difference in proportions between two populations:
$(\hat{p}_A - \hat{p}_B) \pm z^{\ast} \sqrt{\dfrac{\hat{p}_A\hat{q}_A}{n_A} + \dfrac{\hat{p}_B\hat{q}_B}{n_B}}$
Remember that $q = 1 - p$.
Confidence intervals for the mean for one population:
$\bar{x} \pm t^{\ast} \dfrac{s}{\sqrt{n}}$
Confidence interval for the difference between two independent means:
$(\bar{x}_A - \bar{x}_B) \pm t^{\ast} \sqrt{\left[\dfrac{(n_A - 1) s_{A}^{2} + (n_B - 1) s_{B}^{2}}{n_A + n_B - 2}\right]\left[\dfrac{1}{n_A} + \dfrac{1}{n_B}\right]}$
where $t^{\ast}$ is the appropriate percentile from the t($n_A$ + $n_B$ - 2) distribution.
The confidence interval formulas are organized below in the same way the hypothesis test formulas were organized in Chapter 6. You should see a similarity between corresponding formulas.
Proportions (for categorical data) | Means (for quantitative data)
1 - sample
$\hat{p} \pm z^{\ast} \sqrt{\dfrac{\hat{p}(1 - \hat{p})}{n}}$
Assumptions: $np \ge 5$, $n(1-p) \ge 5$
$\bar{x} \pm t^{\ast} \dfrac{s}{\sqrt{n}}$, df = $n - 1$
Assumptions: If $n < 30$, population is approximately normally distributed.
2 - samples
$(\hat{p}_A - \hat{p}_B) \pm z^{\ast} \sqrt{\dfrac{\hat{p}_A\hat{q}_A}{n_A} + \dfrac{\hat{p}_B\hat{q}_B}{n_B}}$
Assumptions: $np \ge 5$, $n(1-p) \ge 5$ for both populations
$(\bar{x}_A - \bar{x}_B) \pm t^{\ast} \sqrt{\left[\dfrac{(n_A - 1) s_{A}^{2} + (n_B - 1) s_{B}^{2}}{n_A + n_B - 2}\right]\left[\dfrac{1}{n_A} + \dfrac{1}{n_B}\right]}$, df = $n_A + n_B - 2$
Assumptions: If $n < 30$, populations are approximately normally distributed.
What does a confidence interval mean? For a 95% confidence interval, 95% of all possible statistics are within $z^{\ast}$ (or $t^{\ast}$) standard errors of the mean of the distribution. Therefore, there is a 95% probability that randomly selected data will produce one of those statistics and that the confidence interval created from it will contain the parameter. Whether the interval ultimately does include the parameter or not is unknown. We only know that if the sampling process were repeated a large number of times, producing many confidence intervals, about 95% of them would contain the parameter.
Example 1
In an automaticity experiment community college students were given two opportunities to go to the computer lab to test their automaticity skills (math fact fluency). Students were randomly assigned to use one of two practice programs to determine if one program leads to greater improvement than the other. These programs will be called program A and program B.
a. What is the 95% confidence interval for the proportion of students who improve from their first attempt to their second attempt if 99 out of 113 students improved?
To help pick the correct confidence interval formula, notice this problem is about proportions and there is only one group of students.
The formula that meets these criteria is: $\hat{p} \pm z^{\ast} \sqrt{\dfrac{\hat{p}(1 - \hat{p})}{n}}$. Before substituting, it is necessary to calculate $\hat{p}$. Since $\hat{p} = \dfrac{x}{n} = \dfrac{99}{113} = 0.876$ (rounded to 3 decimal places), the confidence interval formula becomes $0.876 \pm 1.96 \sqrt{\dfrac{0.876(1 - 0.876)}{113}}$. This simplifies to $0.876 \pm 0.061$. The margin of error is 6.1%. The confidence interval is (0.815, 0.937). The conclusion is that we are 95% confident that the true proportion of students who would improve from one test to the next is between 0.815 (81.5%) and 0.937 (93.7%).
To find this interval on the TI 84 calculator, select Stat, Tests, A 1-PropZInt.
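The hand calculation can also be reproduced in software. A sketch in Python, assuming scipy is available:

```python
# 95% confidence interval for one proportion: 99 out of 113 improved.
from math import sqrt
from scipy import stats

x, n, confidence = 99, 113, 0.95
p_hat = x / n
z_star = stats.norm.ppf(1 - (1 - confidence) / 2)       # 1.96
margin = z_star * sqrt(p_hat * (1 - p_hat) / n)
print(f"({p_hat - margin:.3f}, {p_hat + margin:.3f})")  # about (0.815, 0.937)
```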
b. What is the 90% confidence interval for the difference in the proportion of students who improved from their first attempt to their second attempt using Program A (37/45) and Program B (61/67)?
To help pick the correct confidence interval formula, notice this problem is about proportions and there are two different populations – one using Program A and the other using Program B.
The formula that meets these criteria is: $(\hat{p}_A - \hat{p}_B) \pm z^{\ast} \sqrt{\dfrac{\hat{p}_A\hat{q}_A}{n_A} + \dfrac{\hat{p}_B\hat{q}_B}{n_B}}$. Since $\hat{p}_A = \dfrac{x}{n} = \dfrac{37}{45} = 0.822$ and $\hat{p}_B = \dfrac{x}{n} = \dfrac{61}{67} = 0.910$, the confidence interval formula becomes
$(0.822 - 0.910) \pm 1.645 \sqrt{\dfrac{0.822(1 - 0.822)}{45} + \dfrac{0.910(1 - 0.910)}{67}}$
After simplification the confidence interval can be written as a statistic ± margin of error: -0.088 $\pm$ 0.110. The margin of error is 11.0%. The confidence interval is (-0.198,0.022). The conclusion is that we are 90% confident that the true difference in proportions between those using Program A and those using Program B is between -0.198 and 0.022. Notice that 0 falls within that range, which indicates there is potentially no difference between these two proportions.
To find this interval on the TI 84 calculator, select Stat, Tests, B 2-PropZInt.
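A sketch of the same two-proportion interval in Python (assuming scipy):

```python
# 90% confidence interval for the difference of two proportions:
# Program A (37/45) versus Program B (61/67).
from math import sqrt
from scipy import stats

p_A, n_A, p_B, n_B = 37/45, 45, 61/67, 67
z_star = stats.norm.ppf(0.95)          # 1.645 for 90% confidence
diff = p_A - p_B
se = sqrt(p_A*(1 - p_A)/n_A + p_B*(1 - p_B)/n_B)
print(f"({diff - z_star*se:.3f}, {diff + z_star*se:.3f})")  # about (-0.198, 0.022)
```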
c. What is the 99% confidence interval for the average improvement for Introductory Algebra students using Program B (mean = 5.0, SD = 3.18, n = 19)?
To help pick the correct confidence interval formula, notice this problem is about means and there is only one population (Introductory Algebra students using program B).
The formula that meets these criteria is: $\bar{x} \pm t^{\ast} \dfrac{s}{\sqrt{n}}$. There are 18 degrees of freedom (df = n-1) so the $t^{\ast}$ value is 2.878. After substituting for all the variables the formula becomes $5.0 \pm 2.878 \dfrac{3.18}{\sqrt{19}}$. This simplifies to 5.0 $\pm$ 2.1. The confidence interval is (2.9, 7.1).
To find this interval on the TI 84 calculator, select Stat, Tests, #8 Tinterval.
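A sketch of the same t-interval from summary statistics in Python (assuming scipy):

```python
# 99% confidence interval for one mean: mean 5.0, SD 3.18, n = 19.
from math import sqrt
from scipy import stats

mean, sd, n = 5.0, 3.18, 19
t_star = stats.t.ppf(0.995, df=n - 1)     # 2.878
margin = t_star * sd / sqrt(n)
print(f"({mean - margin:.1f}, {mean + margin:.1f})")   # (2.9, 7.1)
```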
d. What is the 95% confidence interval for the difference in improvement between introductory algebra and intermediate algebra students using Program A? The statistics for introductory algebra are mean = 2.4, SD = 3.53, n = 16. The statistics for intermediate algebra are mean = 4, SD = 4.89, n = 21.
To help pick the correct confidence interval formula, notice this problem is about means but there are two populations (introductory algebra students and intermediate algebra students). The formula that meets these criteria is:
$(\bar{x}_A - \bar{x}_B) \pm t^{\ast} \sqrt{\left[\dfrac{(n_A - 1) s_{A}^{2} + (n_B - 1) s_{B}^{2}}{n_A + n_B - 2}\right]\left[\dfrac{1}{n_A} + \dfrac{1}{n_B}\right]}$
There are 35 degrees of freedom ($n_A$ + $n_B$ - 2). Unfortunately, this value does not exist on the $t$ table in Chapter 6, so it will be necessary to estimate it. One approach is to use the critical value for 30 degrees of freedom (2.042), which is larger than the critical value for 40 degrees of freedom (2.021), as this will ensure that the confidence interval is at least as large as necessary. After substituting for all the variables, the formula becomes $(2.4 - 4) \pm 2.042 (4.359 \sqrt{\dfrac{1}{16} + \dfrac{1}{21}})$ and, with simplification, $-1.6 \pm 2.95$. The interval is (-4.55, 1.35). Because the critical $t^{\ast}$ value is slightly larger than it should be, the interval is slightly wider than the one computed with the functions on a TI 84 calculator (-4.537, 1.3368).
The second approach is to find this interval on the TI 84 calculator. Select Stat, Tests, #0 2-SampTInt.
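With software, the exact 35-degrees-of-freedom critical value can be used directly; a sketch in Python (assuming scipy):

```python
# 95% confidence interval for the difference of two independent means,
# using the pooled standard deviation and the exact df = 35.
from math import sqrt
from scipy import stats

m_A, s_A, n_A = 2.4, 3.53, 16
m_B, s_B, n_B = 4.0, 4.89, 21
df = n_A + n_B - 2
s_p = sqrt(((n_A - 1)*s_A**2 + (n_B - 1)*s_B**2) / df)  # pooled SD, about 4.359
t_star = stats.t.ppf(0.975, df)
margin = t_star * s_p * sqrt(1/n_A + 1/n_B)
diff = m_A - m_B
print(f"({diff - margin:.3f}, {diff + margin:.3f})")    # about (-4.537, 1.337)
```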
Sample Size Estimation
The margin of error portion of a confidence interval formula can also be used to estimate the sample size that is needed. Let E represent the desired margin of error. If sampling of categorical data for one population is done, then $E = z^{\ast} \sqrt{\dfrac{\hat{p}(1 - \hat{p})}{n}}$. Solve this for $n$ using algebra. Since the goal is to make sure the sample size is large enough, and since $\hat{p}$ is not known in advance, it is necessary to make sure that $\hat{p}(1 - \hat{p})$ is the largest possible value. That will happen when $\hat{p} = 0.5$.
$E = z^{\ast} \sqrt{\dfrac{\hat{p}(1 - \hat{p})}{n}}$
$\dfrac{E}{z^{\ast}} = \sqrt{\dfrac{\hat{p}(1 - \hat{p})}{n}}$
$\dfrac{E^2}{z^{\ast^{2}}} = \dfrac{\hat{p}(1 - \hat{p})}{n}$
$n = \dfrac{z^{\ast^{2}} \hat{p}(1 - \hat{p})}{E^2}$
$n = \dfrac{z^{\ast^{2}} 0.5(0.5)}{E^2}$
$n = \dfrac{0.25 z^{\ast^{2}}}{E^2}$ or $n = \dfrac{z^{\ast^{2}}}{4E^2}$
Example 2
Estimate the sample size needed for a national presidential poll if the desired margin of error is 3%. Assume 95% degree of confidence.
$n = \dfrac{1.96^2}{4(0.03)^2} = 1067.1$ or 1068 (round up to get enough in the sample).
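A sketch of this calculation as a reusable Python function (the function name is ours; scipy assumed):

```python
# Sample size needed for a desired margin of error E, using the
# conservative choice p-hat = 0.5 and rounding up.
from math import ceil
from scipy import stats

def sample_size(E, confidence=0.95):
    z_star = stats.norm.ppf(1 - (1 - confidence) / 2)
    return ceil(z_star**2 / (4 * E**2))

print(sample_size(0.03))   # 1068, matching the example
```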
Chapter 6 Homework
Briefing 6.1 Gender gap in Science
A variety of explanations have been provided for why males are more likely to study science and have a profession in the field of science than are females. One explanation is that teachers are more likely to encourage boys to ask questions and integrate concepts. Kevin Crowley and other researchers sought to answer questions about the role of parents in contributing to the gender gap in science.(Crowley, K., Callanan, M. A., Tenenbaum, H. R., & Allen, E. (2001). Parents explain more often to boys than to girls during shared scientific thinking. Psychological Science, 12(3), 258-261.) Their research was published in Psychological Science, May 2001.
The research was conducted at a children’s museum using video cameras and wireless microphones. It forms the basis for the first four questions.
1. Find the 95% confidence interval for the proportion of times a boy chose to interact with an exhibit at the museum if 144 out of 185 boys initiated this interaction. This means the child chose to interact without parental encouragement.
Formula Substitution Margin of Error Confidence Interval
Calculator confidence interval______________________
2. Find the 99% confidence interval for the difference in the proportion of times a boy initiated interaction with the exhibit and a girl initiated interaction with the exhibit. Out of 185 boys, 144 initiated interaction. Out of 113 girls, 84 initiated interaction.
Formula Substitution Margin of Error Confidence Interval
Calculator confidence interval______________________
3. Find the 90% confidence interval for the mean length of time girls remained engaged with the exhibit if the sample mean time is 88 seconds, the standard deviation is 93 seconds and there were 113 girls.
Formula Substitution Margin of Error Confidence Interval
Calculator confidence interval______________________
4. Find the 95% confidence interval for the difference in the mean length of time boys remained engaged with an exhibit (mean = 107 sec, SD = 117 sec, n = 185) and girls remained engaged (mean = 88 sec, SD = 93 sec, n = 113) with the exhibit.
Formula Substitution Margin of Error Confidence Interval
Calculator confidence interval______________________
5. What is the 90% confidence interval for the difference in the mean weight of hatchery and wild Coho salmon that have returned to spawn? What is the point estimate for the difference? (Student project, Summer 2002)
Hatchery Wild
Mean 2434 grams 2278 grams
Median 2234 grams 2048 grams
Standard Deviation 1066 grams 1000 grams
Sample Size 602 745
Point estimate ________________________
Formula Substitution Margin of Error Confidence Interval
Calculator confidence interval______________________
6. If a person cannot afford to pay for heat, how much warmer will their home be than the outside temperature? Outside and inside temperatures were recorded for a vacant log cabin. Find the point estimate for the difference between outside and inside air temperature. Find the 95% confidence interval for the difference between outside air temperature and inside air temperature (inside – outside). Temperatures are recorded in degrees Celsius. (student project Winter 2002)
Outside Inside Inside - Outside
2.2 10.5
6.1 10.5
8.3 12.2
6.7 13.3
13.3 11.7
15.5 12.8
3.9 11.1
2.2 10.0
7.8 9.4
0.5 8.9
-3.3 10
Point estimate ________________________
Formula Substitution Margin of Error Confidence Interval
Calculator confidence interval______________________
Can it be concluded that the inside temperature is warmer than the outside temperature?
7. An experiment was conducted at a photo copy store in which coupons were given to customers. Half of the coupons were black and white while the other half were printed on bright yellow paper. The printing on both was identical as was the amount of discount the customers received (10%). What is the point estimate for the difference in the proportion of color and of black and white coupons that were returned? What is the 95% confidence interval for the difference in the proportion of color and of black and white coupons that were returned? (student project)
Color Black and White
Number returned (used) 129 87
Number distributed 250 250
Point estimate ________________________
Formula Substitution Margin of Error Confidence Interval
Calculator confidence interval______________________
Can it be concluded that color coupons have a better return (use) rate than black and white coupons?
8. In the early 1900s, males accounted for approximately 10% of all nurses. By 1960, this percentage had fallen to about 2% but since that time it has increased to over 12%. Data was collected from colleges that offered a BS degree in nursing to determine the proportion of the students who are male, as this might give some insight into potential changes within the profession. Out of 2352 nursing students, 273 are male. What is the point estimate? What is the 99% confidence interval for the proportion of male nursing students in a nursing degree program? (based on student project, Brian Walsh Fall 2013)
Point estimate ________________________
Formula Substitution Margin of Error Confidence Interval
Calculator confidence interval______________________
9. Determine the effect of the desired margin of error on the size of the samples that must be taken for 1 population categorical data. Complete the chart. Show formula and substitutions. Use a 95% degree of confidence.
Margin of Error 1% 5% 10% 20%
Sample Size
What do you conclude?
10. Determine the effect of the degree of confidence on the size of the samples that must be taken for 1 population categorical data. Use a margin of error of 3%.
Degree of Confidence 99% 95% 90% 80%
Sample Size
What do you conclude?
11. Why Statistical Reasoning Is Important for a Diagnostic Health and Fitness Technician (DHFT) Student and Professional
Developed in collaboration with
Lisa Murray, Professor of HSCI, Nutrition and Physical Education.
This topic is discussed in Nutrition 101.
The FDA recommends that daily sodium intake should not exceed 2300 mg per day. High sodium consumption has been shown to have a negative effect on blood pressure and other health problems.
One of the more popular treats for moviegoers is popcorn. Popcorn by itself is considered a healthy snack, but adding oil, butter, and salt to it can decrease its nutritional value. To estimate the salt content of movie theater popcorn, popcorn of various sizes will be purchased from randomly selected theaters and then sent to a lab for analysis. The final results will be presented as mg of sodium per cup of popcorn. In this case, we don’t have a hypothesis about the amount, so the objective will be to create a confidence interval.
Because different theater chains may use different amounts of salt, a random sample will be taken from each of three large theater companies.
a. What sampling method is being used?
b. If one of the chains has 389 theaters, which 3 theaters would be selected if the calculator is seeded with the number 21?
______, _______, _______
The data (mg sodium per cup of popcorn):
50 49 49 35 37
36 86 103 88 53
48 54 38 33 33
80 98 95 55 70
(These numbers are based on data from a study in the Nutrition Action Healthletter).
c. Make a frequency distribution and histogram for this data.
d. Find the mean and standard deviation for this sample.
e. Show the 95% confidence interval for the amount of sodium per cup. Include formula, substitution and the interval.
f. If the size of bags of popcorn range from 6 cups to 20 cups, what is the range of sodium that could be consumed by buying popcorn at a theater?
g. How will knowledge of this influence your next purchase of movie theater popcorn? | textbooks/stats/Introductory_Statistics/Foundations_in_Statistical_Reasoning_(Kaslik)/06%3A_Confidence_Intervals_and_Sample_Size/6.E%3A_Confidence_Intervals_and_Sample_Size_%28Exercises%29.txt |
For the past three chapters you have been learning about making inferences for univariate data. For each research question that could be asked, only one random variable was needed for the answer. That random variable could be either categorical or quantitative. In some cases, the same random variable could be sampled and compared for two different populations, but that still makes it univariate data. In this chapter, we will explore bivariate quantitative data. This means that for each unit in our sample, two quantitative variables will be determined. The purpose of collecting two quantitative variables is to determine if there is a relationship between them.
The last time the analysis of two quantitative variables was discussed was in Chapter 4 when you learned to make a scatter plot and find the correlation. At the time, it was emphasized that even if a correlation exists, that fact alone is insufficient to prove causation. There are a variety of possible explanations that could be provided for an observed correlation. These were listed in Chapter 4 and provided again here.
1. Changing the x variable will cause a change in the y variable
2. Changing the y variable will cause a change in the x variable
3. A feedback loop may exist in which a change in the x variable leads to a change in the y variable which leads to another change in the x variable, etc.
4. The changes in both variables are determined by a third variable
5. The changes in both variables are coincidental.
6. The correlation is the result of outliers, without which there would not be significant correlation.
7. The correlation is the result of confounding variables.
Causation is easier to prove with a manipulative experiment than an observational experiment. In a manipulative experiment, the researcher will randomly assign subjects to different groups, thereby diminishing any possible effect from confounding variables. In observational experiments, confounding variables cannot be distributed equitably throughout the population being studied. Manipulative experiments cannot always be done because of ethical reasons. For example, the earth is currently undergoing an observational experiment in which the explanatory variable is the amount of fossil fuels being converted to carbon dioxide and the response variable is the mean global temperature. It would have been considered unethical if a scientist had proposed in the 1800s that we should burn as many fossil fuels as possible to see how it affects the global temperature. Likewise, experiments that would force someone to smoke, text while driving, or do other hazardous actions would not be considered ethical and so correlations must be sought using observational experiments.
There are several reasons why it is appropriate to collect and analyze bivariate data. One such reason is that the dependent or response variable is of greater interest but the independent or explanatory variable is easier to measure. Therefore, if there is a strong relationship between the explanatory and response variable, that relationship can be used to calculate the response variable using data from the explanatory variable. For example, a physician would really like to know the degree to which a patient’s coronary arteries are blocked, but blood pressure is easier data to obtain. Therefore, since there is a strong relationship between blood pressure and the degree to which arteries are blocked, then blood pressure can be used as a predictive tool.
Another reason for collecting and analyzing bivariate data is to establish norms for a population. As an example, infants are both weighed and measured at birth, and there should be a correlation between their weight and length. A baby that is substantially underweight compared to babies of the same length would raise concerns for the doctor.
In order to use the methods described in this chapter, the data must be independent, quantitative, continuous, and have a bivariate normal distribution. The use of discrete quantitative data exceeds the scope of this chapter. Independence means that the magnitude of one data value does not affect the magnitude of another data value. This is often violated when time series data are used. For example, annual GDP (gross domestic product) data should not be used as one of the random variables for bivariate data analysis because the size of the economy in one year has a tremendous influence on the size of it the next year. This is shown in the two graphs below. The graph on the left is a time series graph of the actual GDP for the US. The graph on the right is a scatter plot that uses the GDP for the US as the x variable and the GDP for the US one year later (lag 1) for the y value. The fact that these points are in such a straight line indicates that the data are not independent. Consequently, this data should not be used in the type of the analyses that will be discussed in this chapter.
A bivariate normal distribution is one in which y values are normally distributed for each x value and x values are normally distributed for each y value. If this could be graphed in three dimensions, the surface would look like a mountain with a rounded peak.
We will now return to the example in chapter 4 in which the relationship between the wealth gap, as measured by the Gini Coefficient, and poverty were explored. Life can be more difficult for those in poverty and certainly the influence they can have in the country is far more limited than those who are affluent. Since people in poverty must channel their energies into survival, they have less time and energy to put towards things that would benefit humanity as a whole. Therefore, it is in the interest of all people to find a way to reduce poverty and thereby increase the number of people who can help the world improve.
There are a lot of possible variables that could contribute to poverty. A partial list is shown below. Not all of these are quantitative variables and some can be difficult to measure, but they can still have an impact on poverty levels.
1. Education
2. Parent’s income level
3. Community’s income level
4. Job availability
5. Mental Health
6. Knowledge
7. Motivation and determination
8. Physically disabilities or illness
9. Wealth gap
10. Race/ethnicity/immigration status/gender
11. Percent of population that is employed
In Chapter 4, only the relationship between wealth gap and poverty level was explored. Data was gathered from seven states to determine if there is a correlation between these two variables. The scatter plot is reproduced below. The correlation is -0.65.
As a reminder, correlation is a number between -1 and 1. The population correlation is represented with the Greek letter $\rho$, while the sample correlation coefficient is represented with the letter $r$. A correlation of 0 indicates no correlation, whereas a correlation of 1 or -1 indicates a perfect correlation. The question is whether the underlying population has a significant linear relationship. The evidence for this comes from the sample. The hypotheses that are typically tested are:
$H_0: \rho = 0$
$H_1: \rho \ne 0$
This is a two-tailed test for a non-directional alternative hypothesis. A significant result indicates only that the correlation is not 0, it does not indicate the direction of the correlation.
The logic behind this hypothesis test is based on the assumption the null hypothesis is true which means there is no correlation in the population. An example is shown in the scatter plot on the left. From this distribution, the probability of getting the sample data (shown in solid circles in the graph at the right), or more extreme data (forming a straighter line), is calculated.
The test used to determine if the correlation is significant is a t test. The formula is:
$t = \dfrac{r\sqrt{n - 2}}{\sqrt{1 - r^2}}.$
There are n - 2 degrees of freedom.
This can be demonstrated with the example of Gini coefficients and poverty rates as provided in Chapter 4 and using a level of significance of 0.05. The correlation is -0.650. The sample size is 7, so there are 5 degrees of freedom. After substituting into the test statistic, $t = \dfrac{-0.650 \sqrt{7 - 2}}{\sqrt{1 - (-0.650)^2}}$, the value of the test statistic is -1.91. Based on the t-table with 5 degrees of freedom, the two-sided p-value is greater than 0.10 (actual 0.1140). Consequently, there is not a significant correlation between Gini coefficient and poverty rates.
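A sketch of this test in Python (assuming scipy), reproducing the test statistic and p-value above:

```python
# Significance test for a correlation: r = -0.650 from a sample of n = 7.
from math import sqrt
from scipy import stats

r, n = -0.650, 7
df = n - 2
t = r * sqrt(df) / sqrt(1 - r**2)
p = 2 * stats.t.cdf(t, df)        # two-tailed p-value (t is negative here)
print(round(t, 2), round(p, 4))   # -1.91 and about 0.114
```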
Another explanatory variable that can be investigated for its correlation with poverty rates is the employment-population ratio (percent). This is the percent of the population that is employed at least one hour during the month.
The correlation for this data is -0.6445, $t$ = -2.80 and $p$ = 0.0174. Notice that at the 0.05 level of significance, this correlation is significant. Before exploring the meaning of a significant correlation, compare the results for the correlation between Gini coefficient and poverty rate, which was -0.650, and the correlation between Employment-Population Ratio and poverty rate, which is -0.6445. The former correlation was not significant while the latter was significant, even though its magnitude is smaller. This is a good example of why knowledge of a correlation coefficient alone is not sufficient to determine if the correlation is significant. The other factor that influences the determination of significance is the sample size. The Employment-Population Ratio/poverty rate correlation was determined from a larger sample (13 compared with 7). Sample size plays an important role in determining if the alternative is supported. With very large samples, very small sample correlations can be shown to be significant. The question is whether significant corresponds with important.
The effect of sample size on possible correlations is shown in the four distributions below. These distributions were created by starting with a population that had a correlation of $\rho = 0.000$. Then 10,000 samples of size 5, 15, 35, and 300 were drawn from this population, with replacement.
Look carefully at the x-axis scales and the heights of the bars. Values near the middle of the graphs are likely values while values on the far left and right of the graph are unlikely values which, when testing a hypothesis, would possibly lead to a significant conclusion. With small sample sizes, the magnitude of the correlation must be very large to conclude there is significant correlation. As the sample size increases, the magnitude of the correlation can be much smaller to conclude there is significant correlation. The critical values for each of these are shown in the table below and are based on a two-tailed test with a level of significance of 5%.
n 5 15 35 300
t 2.776 2.145 2.032 1.968
|r| 0.848 0.511 0.334 0.113
In the histogram in the bottom right in which the sample size was 300, a correlation that exceeds 0.113 would lead to a conclusion of significant correlation, yet there is the question of whether a correlation that small is very meaningful, even if it is significant. It might be meaningful or it might not. The researcher must determine that for each situation.
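This pattern can be simulated directly; a sketch in Python, assuming numpy is available (the seed and the use of independent normal samples are our illustrative choices):

```python
# Simulating the sampling distribution of r under rho = 0: draw many
# pairs of independent samples and record the sample correlations.
import numpy as np

rng = np.random.default_rng(21)
for n in (5, 300):
    rs = [np.corrcoef(rng.normal(size=n), rng.normal(size=n))[0, 1]
          for _ in range(10_000)]
    print(n, round(np.percentile(rs, 97.5), 3))
# the 97.5th percentile shrinks from roughly 0.88 at n = 5
# toward roughly 0.11 at n = 300
```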
Returning to the analysis of Gini coefficients and poverty rates, since there was not a significant correlation between these two variables, then there is no point in trying to use Gini Coefficients to estimate poverty rates or focusing on changes to the wealth gap as a way of improving the poverty rate. There might be other reasons for wanting to change the wealth gap, but its impact on poverty rates does not appear to be one of the reasons. On the other hand, because there is a significant correlation between Employment-Population Ratio and poverty rates, then it is reasonable to use the relationship between them as a model for estimating poverty rates for specific Employment-Population Ratios. If this relationship can be determined to be causal, then it justifies improving the employment-population ratio to help reduce poverty rates. In other words, people need jobs to get out of poverty.
Since the Pearson Product Moment Correlation Coefficient measures the strength of the linear relationship between the two variables, then it is reasonable to find the equation of the line that best fits the data. This line is called the least squares regression line or the line of best fit. A regression line has been added to the graph for Employment-Population Ratio and Poverty Rates. Notice that there is a negative slope to the line. This corresponds to the sign of the correlation coefficient.
The equation of the line, as it appears in the subtitle of the graph, is $y = 35.8284 - 0.3567x$, where $x$ is the Employment-Population Ratio and $y$ is the poverty rate. As an algebra student, you were taught that a linear equation can be written in the form of $y = mx + b$. In statistics, linear regression equations are written in the form $y = b + mx$ except that they traditionally are shown as $y' = a + bx$ where $y'$ represents the y value predicted by the line, $a$ represents the $y$ intercept and $b$ represents the slope.
To calculate the values of $a$ and $b$, 5 other values are needed first. These are the correlation (r), the mean and standard deviation for $x$ ($\bar{x}$ and $s_x$) and the mean and standard deviation for $y$ ($\bar{y}$ and $s_y$). First find $b$ using the formula: $b = r(\dfrac{s_y}{s_x})$. Next, substitute $\bar{y}$, $\bar{x}$, and $b$ into the basic linear equation $\bar{y} = a + b\bar{x}$ and solve for $a$.
For this example, $r = -0.6445$, $\bar{x} = 61.76$, $s_x = 4.67$, $\bar{y} = 13.80$, and $s_y = 2.58$.
$b = r(\dfrac{s_y}{s_x})$
$b = -0.6445(\dfrac{2.58}{4.67}) = -0.3561$
$\bar{y} = a + b\bar{x}$
$13.80 = a + (-0.3561)(61.76)$
$a = 35.79$
Therefore, the final regression equation is $y' = 35.79 - 0.3561x$. The difference between this equation and the one in the graph is the result of rounding errors used for these calculations.
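A sketch of the same calculation in Python, using only the five summary values:

```python
# Regression coefficients from the correlation, means, and standard
# deviations, following b = r(s_y / s_x) and a = y_bar - b * x_bar.
r, x_bar, s_x, y_bar, s_y = -0.6445, 61.76, 4.67, 13.80, 2.58

b = r * (s_y / s_x)        # slope, about -0.3561
a = y_bar - b * x_bar      # intercept, about 35.79
print(f"y' = {a:.2f} + {b:.4f}x")
```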
The regression equation allows us to estimate the y value, but does not provide an indication of the accuracy of the estimate. In other words, what is the effect of the relationship between $x$ and $y$ on the $y$ value?
Determining the influence of the relationship between $x$ and $y$ begins with the idea that there is variation between each $y$ value and the mean of all the $y$ values ($\bar{y}$). This is something that you have seen with univariate quantitative data. There are two reasons why the $y$ values are not equivalent to the mean. These are called explained variation and error variation. Explained variation is the variation that is a consequence of the relationship $y$ has with $x$. In other words, $y$ does not equal the mean of all the $y$ values because the relationship shown by the regression line influences it. The error variation is the variation between an actual point and the $y$ value predicted by the regression line that is a consequence of all the other factors that impact the response random variable. This vertical distance between each actual data point and the predicted $y$ value ($y'$) is called the residual. The explained variation and error variation are shown in the graph below. The horizontal line at 13.8 is the mean of all the $y$ values.
The total variation is given by the sum of the squared distances each $y$ value is from the average $y$ value. This is shown as $\sum_{i = 1}^{n} (y_i - \bar{y})^2$.
The explained variation is given by the sum of the squared distances the $y$ value predicted by the regression equation ($y'$) is from the average $y$ value, $\bar{y}$. This is shown as
$\sum_{i = 1}^{n} (y_i' - \bar{y})^2.$
The error variation is given by the sum of the squared distances the actual $y$ data value is from the predicted $y$ value ($y'$). This is shown as $\sum_{i = 1}^{n} (y_i - y_i ')^2$.
The relationship between these can be shown with a word equation and an algebraic equation.
Total Variation = Explained Variation + Error Variation
$\sum_{i = 1}^{n} (y_{i} - \bar{y})^{2} = \sum_{i = 1}^{n} (y_{i}' - \bar{y})^{2} + \sum_{i = 1}^{n} (y_{i} - y_{i} ')^2$
The primary reason for this discussion is to lead us to an understanding of the mathematical (though not necessarily causal) influence of the $x$ variable on the $y$ variable. Since this influence is the explained variation, then we can find the ratio of the explained variation to the total variation. We define this ratio as the coefficient of determination. The ratio is represented by $r^2$.
$r^2 = \dfrac{\sum_{i = 1}^{n} (y_i' - \bar{y})^2}{\sum_{i = 1}^{n} (y_i - \bar{y})^2}$
The coefficient of determination is the square of the correlation coefficient. What it represents is the proportion of the variance of one variable that results from the mathematical influence of the variance of the other variable. The coefficient of determination will always be a value between 0 and 1, that is $0 \le r^2 \le 1$. While $r^2$ is presented in this way, it is often spoken of in terms of percent, which results by multiplying the $r^2$ value by 100.
In the scatter plot of poverty rate against employment-population ratio, the correlation is $r = - 0.6445$, so $r^2 = 0.4153$. Therefore, we conclude that 41.53% of the influence on the variance in poverty rate is from the variance in the employment-population ratio. The remaining influence that is considered error variation comes from some of the other items in the list of possible variables that could affect poverty.
There is no definitive scale for determining desirable levels for $r^2$. While values close to 1 show a strong mathematical relationship and values close to 0 show a weak relationship, the researcher must contemplate the actual meaning of the $r^2$ value in the context of their research.
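The identity between total, explained, and error variation, and the fact that $r^2$ equals the ratio of explained to total variation, can be checked numerically. A sketch in Python with numpy; the data values here are made up purely for illustration:

```python
# Verifying: total variation = explained variation + error variation,
# and explained/total equals the squared correlation.
import numpy as np

x = np.array([58.0, 60.0, 62.0, 64.0, 66.0])   # hypothetical data
y = np.array([16.0, 14.5, 13.9, 12.8, 11.5])   # hypothetical data

b, a = np.polyfit(x, y, 1)                     # least-squares slope, intercept
y_pred = a + b * x
total = np.sum((y - y.mean())**2)
explained = np.sum((y_pred - y.mean())**2)
error = np.sum((y - y_pred)**2)
print(np.isclose(total, explained + error))            # True
print(explained / total, np.corrcoef(x, y)[0, 1]**2)   # the two values match
```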
Technology
Calculating correlation and regression equations by hand can be very tedious and subject to rounding errors. Consequently, technology is routinely employed in regression analysis. The data that was used when comparing the Gini coefficients to poverty rates will be used here.
Gini Coefficient Poverty Rate
0.486 10.1
0.443 9.9
0.44 11.6
0.433 13
0.419 13.2
0.442 14.4
0.464 10.3
TI 84 Calculator
To enter the data, use Stat – Edit – Enter to get to the lists that were used in Chapter 4. Clear lists one and two by moving the cursor up to L1, pushing the clear button and then moving the cursor down. Do the same for L2.
Enter the Gini Coefficients into L1, the Poverty Rate into L2. They must remain paired in the same way they are in the table.
To determine the value of t, the p-value, the r and r2 values and the numeric values in the regression equation, use Stat – Tests – E: LinRegTTest. Enter the Xlist as L1 and the Ylist as L2. The alternate hypothesis is shown as $\beta$ & $\rho$: $\ne$ 0. Put cursor over Calculate and press enter.
The output is:
LinRegTTest
$y = a + bx$
$\beta \ne 0$ and $\rho \ne 0$
t = -1.912582657
p = 0.1140079665
df = 5
b = -52.72871602
$s = 1.479381344$ (standard error)
$r^2 = 0.4224975727$
$r = -0.6499981406$
Microsoft Excel contains an add-in that must be installed in order to complete the regression analysis. In more recent versions of Excel (2010), this add-in can be installed by
• Select the file tab
• Select Options
• On the left side, select Add-Ins
• At the bottom, next to where it says Excel Add-ins, click on Go. Check the first box, which says Analysis ToolPak, then click OK. You may need your Excel disk at this point.
To do the actual Analysis:
• Select the data tab
• Select the data analysis option (near the top right side of the screen)
• Select Regression
• Fill in the spaces for the y and x data ranges.
• Click ok.
A new worksheet will be created that contains a summary output. Some of the numbers are shown in gray to help you know which numbers to look for. Notice how they correspond to the output from the TI 84 and the calculations done earlier in this chapter.
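For readers without a TI-84 or Excel, the same analysis can be sketched in Python using scipy's linregress, which returns the slope, correlation, and p-value reported above:

```python
# Regression of poverty rate on Gini coefficient for the seven states.
from scipy import stats

gini    = [0.486, 0.443, 0.440, 0.433, 0.419, 0.442, 0.464]
poverty = [10.1, 9.9, 11.6, 13.0, 13.2, 14.4, 10.3]

result = stats.linregress(gini, poverty)
print(result.slope)    # about -52.73, matching b on the calculator output
print(result.rvalue)   # about -0.650
print(result.pvalue)   # about 0.114
```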
Chapter 7 Homework
In the first problem, all calculations, except finding the correlation, should be done using the formulas and tables. For the remaining problems you may use either the calculator or Excel.
1. In the game of baseball the objective is to win games by scoring more runs than the opposing team. Runs can only be scored if someone gets on base. Traditionally, batting average (which is actually a proportion of hits to at bats) has been used as one of the primary measures of player success. An alternative is slugging percent which is the ratio of total number of bases reached during an at bat to the number of at bats. A walk or single counts as one base, a double counts as two bases, etc. The table below contains the batting average, slugging percentage, and runs scored from 10 Major League Baseball teams randomly selected from the 2012 and 2013 seasons.(http://www.fangraphs.com, 12-12-13)
Team Batting Average Team Slugging Percentage Team Runs Scored
0.242 0.380 614
0.231 0.335 513
0.283 0.434 796
0.240 0.375 610
0.252 0.398 640
0.268 0.422 726
0.245 0.407 716
0.260 0.390 701
0.240 0.394 697
0.255 0.422 748
a. Make a scatter plot of team batting average and team runs scored. Label the graph completely.
b. Use your calculator to find the mean and standard deviation for batting average and runs scored. The correlation between these is 0.805.
Mean batting average _________ Standard deviation for batting average ___________
Mean runs scored ______________ Standard deviation for runs scored____________
c. Use the appropriate t test statistic to determine if the correlation is significant at the 0.05 level of significance. Show the formula, substitution and the results in a complete concluding sentence.
Formula Substitution
Concluding sentence:
d. Find the equation of the regression line.
$b = r(\dfrac{s_y}{s_x})$
$\bar{y} = a + b\bar{x}$
Regression equation:
Draw this line on your scatter plot. (Hint: pick two different x values, one near each end of the x-axis, and substitute them into the regression equation to find y. Then plot the two (x, y) ordered pairs that you produced. This is how you learned to graph in Algebra using a table of values.)
e. What is the $r^2$ value and what does it mean?
f. Predict the number of runs scored for a team with a batting average of 0.250.
g. Repeat this entire problem for slugging percent and runs scored, only this time use the LinRegTTest function on your calculator.
Correlation _____________
Hypothesis test concluding sentence
Regression equation _________________________
Coefficient of determination ($R^2$) ____________
Predict the number of runs scored for a team with a slugging percentage of 0.400. __________________
Compare and contrast the results from the analysis of batting average and slugging percentage and their relationship to runs scored.
2. In an ideal society, crime would seldom happen and consequently the population’s financial resources could be spent on other things that benefit society. The primary categories for state spending are k-12 education, higher education, public assistance, Medicaid, transportation, and corrections. Many of us in the field of education believe that education is critical for the country and holds the possibility of reducing both crime and public assistance. Is there a significant correlation between the percent of state budgets spent on k-12 and higher education and the percent spent on public assistance? Is there a significant correlation between the percent of state budgets spent on education and corrections? Data is from 2011.(www.nasbo.org/sites/default/f...20Report_1.pdf 12-12-13.)
a. Make a scatter plot, use your calculator to test the hypothesis that there is a correlation between education spending and public assistance spending. Show calculator outputs including the correlation, $r^2$ value and equation of the regression line. Write a statistical conclusion then interpret the results. Use a level of significance of 0.05.
Correlation ____________
Coefficient of determination ($r^2$ value) _______________
Regression equation _____________________
What does $x$ represent in this equation? ______________
What does $y$ represent in this equation? ______________
Hypothesis test concluding sentence:
b. Make a scatter plot, use your calculator to test the hypothesis that there is a correlation between education spending and corrections spending. Show calculator outputs including the correlation, $r^2$ value and equation of the regression line. Write a statistical conclusion then interpret the results. Use a level of significance of 0.05.
Correlation ____________
Coefficient of determination ($r^2$ value) _______________
Regression equation _____________________
What does $x$ represent in this equation? ______________
What does $y$ represent in this equation? ______________
Hypothesis test concluding sentence:
3. Is there a correlation between the population of a state and the median income in the state? (Data from http://www.city-data.com/ 12-12-13.)
State Population (millions) Median income (\$)
2.7 55764
2.8 49444
2 43569
7.9 61090
9.6 48448
1.3 46405
11.5 46563
2.7 54065
4.5 43362
Make a scatter plot, use your calculator to test the hypothesis that there is a correlation between population and median income. Show calculator outputs including the correlation, $r^2$ value and equation of the regression line. Write a statistical conclusion then interpret the results. Use a level of significance of 0.05.
Correlation ____________
Coefficient of determination ($r^2$ value) _______________
Regression equation _____________________
What does $x$ represent in this equation? ______________
What does $y$ represent in this equation? ______________
Hypothesis test concluding sentence:
4. One theory about the benefit of large cities is that they serve as a hub for creativity due to the frequent interactions between people. One measure of creative problem solving is the number of patents that are granted. Is there a correlation between the size of a metropolitan or micropolitan area and the number of patents that were granted to someone in that area?(www.uspto.gov/web/offices/ac/...allcbsa_gd.htm andwww.census.gov/popfinder/ (12-12-13))
a. Make a scatter plot, use your calculator to test the hypothesis that there is a correlation between population and total patents 2000-2011. Show calculator outputs including the correlation, $r^2$ value and equation of the regression line. Write a statistical conclusion then interpret the results. Use a level of significance of 0.05.
Correlation ____________
Coefficient of determination ($r^2$ value) _______________
Regression equation _____________________
What does $x$ represent in this equation? ______________
What does $y$ represent in this equation? ______________
Hypothesis test concluding sentence:
b. There are two outliers in this data. Do you think they have too great an influence on the correlation and therefore should be removed or do you think they are relevant and should be kept with the data?
c. Use the regression line to predict the number of patents for a city with 60,000 people.
5. Why Statistical Reasoning is Important for Anatomy and Physiology Students and Professionals
In collaboration with Barry Putman, Professor of Biology, Natural Science Coordinator, JBLM
This topic is discussed in the following Pierce College Course: Biol 241
Briefing 8.1
Near Point of Accommodation (NPA) is the nearest point at which the eyes can comfortably focus. In the lab conducted in the anatomy and physiology class, students will hold a meter stick against their forehead, close their left eye and with their right eye they will focus on a small ruler held against the meter stick. With the ruler starting at arm’s length they will slowly move it toward their eye. When they reach the point where the ruler has the greatest focus (NPA), a partner will record the distance, in centimeters, from their eye.
Since people often need glasses later in life, it would be reasonable to determine if there is a correlation between a person’s age and their NPA. Consequently, students in the study record both their age and NPA.
a. Of the two variables, NPA and age, which should be the explanatory variable? Why?
b. Of the two variables, NPA and age, which should be the response variable? Why?
c. There were 103 data values made available for this problem. This number will be reduced using a random process to save you time. If a systematic sampling method is used with every $10^{\text{th}}$ value selected, what are the 10 or 11 numbers that would be selected if the calculator is seeded with a 31?
, , , , , , , , , ,
The table below contains the data.
Age 26 28 30 26 36 19 20 20 27 25 24
NPA 31 13 36 22 34 8 8 10 24 14 11
d. Make a scatter plot. Write a complete sentence explaining your interpretation of the graph.
e. Use your calculator to find the sample correlation.
f. Write and test the hypotheses to determine if there is a significant correlation in the population. Use a 0.05 level of significance. Write a concluding sentence.
g. What type error could have been made?
h. What do you conclude about the relationship between age and NPA?
In Chapter 5, the inferential theory for categorical data was developed based upon the binomial distribution. Recall that the binomial distribution shows the probability of the possible numbers of successes in a sample of size n when each independent trial has only two possible outcomes, success and failure. But what happens if there are more than two possible outcomes?
Consider the following three questions.
1. Does the TI 84 calculator generate equal numbers of 0-9 when using the random integer generator?
2. Doing something about climate change has been a challenge for humanity. The website Edge.org had one proposal put forth by Lee Smolin, a physicist with the Perimeter Institute and author of the book Time Reborn.(www.edge.org/conversation/del...ooperation/#rc Nov 30, 2013.) The essence of the proposal is that a carbon tax should be placed on all carbon that is used, but instead of the money going to the government, it goes to individual climate retirement accounts. Each person would have such an account. Each account would have two categories of possible investments that an individual could choose. Category A investments would be in things that will mitigate climate change (e.g. solar, wind, etc.). Category B investments would be in things that might do well if climate change does not happen (e.g. utilities that burn coal, coastal real estate developments, and car companies that do not produce fuel efficient or electric cars). Is there a correlation between a person’s opinion about climate change and their choice of investment?
3. Hurricanes are classified as category 1, 2, 3, 4, or 5. Is the distribution of hurricanes in the years 1951-2000 different than it was in 1901-1950?
Before an analysis can be done, it is necessary to understand the type of data that is gathered for each of these questions.
In question 1, the data that will be gathered are the numbers 0 through 9. While numbers are typically considered quantitative, in this case we simply want to know whether the calculator produces each specific number. Therefore, this is actually about the frequency with which these numbers are produced. If the process used by the calculator is sufficiently random, then the frequencies for all the numbers should be equal if a large enough sample is taken. So, in spite of appearing to be quantitative data, this is actually categorical data, with 10 different categories; the data value recorded is which number was selected.
In question 2, imagine a two-question survey in which people are asked:
1. Do you believe climate change is happening because humans have been using carbon sources that lead to an increase in greenhouse gases? Yes No
2. Which of the following most closely represents the choice you would make for your individual climate retirement account investments? Category A Category B
For this question, there is one population. Each person who takes the survey would provide two answers. The objective is to determine if there is a correlation between their climate change opinion and their investment choice. An alternate way of saying this is that the two variables are either independent of each other, which means that one response does not affect the other, or they are not independent, which means that climate change opinion and investment strategy are related.
In question 3, there are two populations. The first population is hurricanes in 1901-50 and the second population is hurricanes in 1951-2000. There are 5 categories of hurricanes and the goal is to see if the distribution of hurricanes in these categories is the same or different.
These three problems fit, in order, the following classes of problems: goodness of fit, test for independence, and test for homogeneity. The use of each test and its hypotheses are shown below.
1. Goodness of Fit
The goodness of fit test is used when a categorical random variable with more than two levels has an expected distribution.
$H_0$: The distribution is the same as expected
$H_1$: The distribution is different than expected
2. Test for Independence
The test for independence is used when there are two categorical random variables for the same unit (or person) and the objective is to determine whether there is a correlation between them.
$H_0$: The two random variables are independent (no correlation)
$H_1$: The two random variables are not independent (correlation)
If the data are significant, then knowledge of the value of one of the random variables increases the probability of knowing the value of the other random variable compared to chance.
3. Test for Homogeneity
The test for homogeneity is used when there are samples taken from two (or more) populations with the objective of determining if the distribution of one random variable is similar or different in the two populations.
$H_0$: The two populations are homogeneous
$H_1$: The two populations are not homogeneous
Since all of the problems have data that can be counted exactly one time, the strategy is to determine how the distribution of counts differs from the expected distribution. The analysis of all these problems uses the same test statistic formula, called $\chi^2$ (Chi Square).
$\chi^2 = \sum \dfrac{(O - E)^2}{E}$
The distribution that is used for testing the hypotheses is the set of $\chi^2$ distributions. These distributions are positively skewed. They cannot be negative. Each distribution is based on the number of degrees of freedom. Unlike the t distributions, in which degrees of freedom were based on the sample size, in the case of $\chi^2$, the degrees of freedom are based on the number of levels of the random variable(s).
The following distributions show 10,000 samples of size n = 100 for which the $\chi^2$ test statistics were calculated and graphed. The numbers of degrees of freedom in these four graphs are 1, 2, 5, and 9.
Notice how the Chi Square distribution becomes less skewed and is approaching a normal distribution as the number of degrees of freedom increase. An increase in the number of degrees of freedom corresponds to an increase in the number of levels of the explanatory factor. The way in which degrees of freedom are found is different for the goodness of fit test compared to the test for independence and test for homogeneity. Each method will be explained in turn.
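Distributions like those just described can be reproduced with a short simulation. The following is a minimal Python sketch using NumPy (the seed and names are arbitrary choices of ours): for each number of equally likely categories k, it draws 10,000 multinomial samples of size n = 100 and computes the $\chi^2$ statistic for each sample, producing an empirical $\chi^2$ distribution with k - 1 degrees of freedom.

import numpy as np

rng = np.random.default_rng(31)      # any seed will do
n, reps = 100, 10_000
for k in (2, 3, 6, 10):              # gives df = 1, 2, 5, 9
    expected = n / k
    counts = rng.multinomial(n, [1 / k] * k, size=reps)
    chi2 = ((counts - expected) ** 2 / expected).sum(axis=1)
    # a histogram of chi2 approximates the chi-square distribution
    # with k - 1 degrees of freedom; its mean is close to the df
    print(k - 1, chi2.mean())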
Goodness of Fit Test
1. Does the TI 84 calculator generate equal numbers of 0-9 when using the random integer generator?
In this experiment, 12 numbers between 1 and 100 were randomly generated by the TI 84 calculator. These 12 numbers were used as seed values. After seeding the calculator with each number, 10 new numbers between 0 and 9 were randomly generated using the randint function on the calculator. Thus, a total of 120 numbers between 0 and 9 were produced. The frequency of these numbers is shown in the table below.
0 1 2 3 4 5 6 7 8 9
15 11 12 14 10 14 10 11 14 9
The hypotheses to be tested are:
$H_0$: The observed cell frequency equals the expected cell frequency for all cells
$H_1$: The observed cell frequency does not equal the expected cell frequency for at least one cell. Use a 0.05 level of significance
This can be represented symbolically as
$H_0$: $o_i = \epsilon_i$ for all cells
$H_1$: $o_i \ne \epsilon_i$ for at least one cell
where ο is the lower case Greek letter omicron that represents the observed cell frequency in the underlying population and ε is the lower case Greek letter epsilon that represents the expected cell frequency. The expected cell frequency should always be 5 or higher. If it isn’t, cells should be regrouped.
The table above shows the observed frequencies, but what are the expected frequencies? In theory, if the process is truly random, then each number would occur with the same frequency if the sampling were to be done a very large number of times. If this is the case, then in a sample of size 120, with 10 possible alternatives, the expected frequency for each alternative should be 12. From the table, we see that most frequencies are not 12, but what is needed is a way to determine if the amount of variation that exists is enough to suggest that the observed frequencies do not equal the expected frequencies. Such a conclusion would imply the calculator does not produce a truly random set of numbers. The strategy is to find $\chi^2$ and then use the appropriate $\chi^2$ distribution to find the p-value. One way to find $\chi^2 = \sum \dfrac{(O - E)^2}{E}$ is with a table.
Observed Expected O - E $(O - E)^2$ $\dfrac{(O - E)^2}{E}$
15 12 3 9 $\dfrac{9}{12}$
11 12 -1 1 $\dfrac{1}{12}$
12 12 0 0 $\dfrac{0}{12}$
14 12 2 4 $\dfrac{4}{12}$
10 12 -2 4 $\dfrac{4}{12}$
14 12 2 4 $\dfrac{4}{12}$
10 12 -2 4 $\dfrac{4}{12}$
11 12 -1 1 $\dfrac{1}{12}$
14 12 2 4 $\dfrac{4}{12}$
9 12 -3 9 $\dfrac{9}{12}$
$\chi^2 = \dfrac{40}{12} = 3.33$
If r represents the number of rows, then the number of degrees of freedom in a goodness of fit test is:
df = r – 1.
For this Goodness of fit test, there are 10 rows of data. Consequently there are 9 degrees of freedom.
The p-value for $\chi^2$ can be found using the table of the Chi Square Distributions at the end of this chapter or your calculator.
The Chi-Square Distributions table can also be used to find the p-value. Using the table below, find the degrees of freedom in the left column, locate the $\chi^2$ value in that row, then move to the row that shows the area to the right and use an inequality sign to show the p-value. If the p-value is greater than $\alpha$, then use the greater than symbol. If it is less than $\alpha$, use the less than symbol, but in either case, use as much precision as possible. For example, if $\alpha$ is 0.05 but the area to the right is less than 0.025, then p < 0.025 is preferred over p < 0.05.
In this example, $\chi^2$ = 3.33, there are 9 degrees of freedom, so the p-value > 0.9.
$\chi^2 = 3.33$
Using $\chi^2$ cdf (low, high, df) in the TI 84 calculator results in $\chi^2$ cdf (3.33, 1E99,9) = 0.9496.
Since this p-value is clearly higher than 0.05, the conclusion can be written:
At the 5% level of significance, the observed cell values are not significantly different than the expected cell values ($\chi^2$ = 3.33, p = 0.9496, df=9). The TI84 calculator appears to produce a good set of random integers.
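If software is available, the same goodness of fit test takes only a couple of lines. This is a hedged Python sketch using SciPy's chisquare function; when no expected frequencies are supplied, it assumes all categories are equally likely, which is exactly the null hypothesis here.

from scipy.stats import chisquare

observed = [15, 11, 12, 14, 10, 14, 10, 11, 14, 9]
stat, p = chisquare(observed)   # expected defaults to 12 in every cell
print(stat, p)                  # about 3.33 and 0.9496, matching the work above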
In the case of the calculator, if it is random in generating numbers, we would expect the same number of values in each category. That is, we would expect to get the same number of 0s, 1s, 2s, etc. Since the sample consisted of 120 trials with 10 possibilities for each outcome, the expected value is 12 because 120 divided by 10 is 12. But what happens if the expected outcome is not the same in all cases?
In the fall of 2013, our college was made up of 54% Caucasian, 14% Hispanic/Latino, 11% African American, 10% Asian/Pacific Islander, 1% Native American, 3% international, and 7% other. If we wanted to determine if the racial/ethnic distribution of statistics students is different from that of the entire school, we could take a survey of statistics students to obtain the observed data. The table below contains hypothetical observed data. Since there are 300 students in the sample and, based on college enrollment, 54% of the student body is white, the expected number of students in the class who are white is found by multiplying 300 times 0.54. The same approach is taken for each race. This is shown in the table. Notice the total in the expected column is the same as in the observed column.
Race/Ethnicity Observed Expected
Caucasian/white (54%) 154 0.54(300) = 162
Hispanic/Latino (14%) 48 0.14(300) = 42
African American/Black (11%) 36 0.11(300) = 33
Asian/Pacific Islander (10%) 35 0.10(300) = 30
Native American (1%) 6 0.01(300) = 3
International (3%) 9 0.03(300) = 9
Other (7%) 12 0.07(300) = 21
Total 300 Total 300
The remainder of the goodness of fit test is done the same as with the calculator example and will not be demonstrated here.
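For completeness, here is how the unequal expected counts from the table would be passed to the same SciPy function used above (a sketch only, not the full write-up). Note that, by the rule stated earlier, the Native American cell has an expected count of 3, which is below 5, so in practice it would be combined with another category before the test is run.

from scipy.stats import chisquare

observed = [154, 48, 36, 35, 6, 9, 12]
expected = [162, 42, 33, 30, 3, 9, 21]   # 300 times each school-wide percent
stat, p = chisquare(observed, f_exp=expected)
print(stat, p)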
Chi Square Test for Independence
The Chi Square Test for Independence is used when a researcher wants to determine whether there is a relationship between two categorical random variables collected on the same unit (or person). Sample questions include:
1. Is there a relationship between a person’s religious affiliation and their political party preference?
2. Is there a relationship between a person’s willingness to eat genetically engineered food and their willingness to use genetically engineered medicine?
3. Is there a relationship between the field of study for a college graduate and their ability to think critically?
4. Is there a relationship between the quality of sleep a person gets and their attitude during the next day?
As an example, we will learn the mechanics of the test for independence using the hypothetical example of responses to the two questions about climate change and investments.
1. Do you believe climate change is happening because humans have been using carbon sources that lead to an increase in greenhouse gases? Yes No
2. Which of the following most closely represents the choice you would make for your individual climate retirement account investments? Category A Category B
Category A – solar, wind Category B – Coal, ocean side development
$H_0$: The two random variables are independent (no correlation)
$H_1$: The two random variables are not independent (correlation)
This can also be represented symbolically as
$H_0: o_i = \epsilon_i$ for all cells
$H_1: o_i \ne \epsilon_i$ for at least one cell
where $o$ is the lower case Greek letter omicron that represents the observed cell frequency in the underlying population and $\epsilon$ is the lower case Greek letter epsilon that represents the expected cell frequency. The expected cell frequency should always be 5 or higher. If it isn’t, cells should be regrouped.
Use a level of significance of 0.05.
Because this will be done with hypothetical data, it will be useful to do it twice, producing opposite conclusions each time.
The data will be presented in a 2 x 2 contingency table.
Version 1
Observed
Yes - humans contribute to climate change No - humans do not contribute to climate change Totals
Category A Investments (wind, solar) 56 54
Category B Investments (coal, ocean shore developments) 47 43
Total
The test for independence uses the same formula as the goodness of fit test: $\chi^2 = \sum \dfrac{(O - E)^2}{E}$. Unlike that test, however, there is no clear indication of what the expected values are. Instead, they must be calculated, which is a four-step process.
Step 1, Find the row and column totals and the grand total.
Version 1
Observed
Yes - humans contribute to climate change No - humans do not contribute to climate change Totals
Category A Investments (wind, solar) 56 54 110
Category B Investments (coal, ocean shore developments) 47 43 90
Total 103 97 200
Step 2. Create a new table for the expected values. The reasoning process for calculating the expected values is to first consider the proportion of all the values that fall in each column. In the first column there are 103 values out of 200, which is $\dfrac{103}{200} = 0.515$. In the second column there are 97 out of 200 values (0.485). Since 51.5% of the values are in the first column, it would be expected that 51.5% of the first row’s values would also be in the first column. Thus, 0.515(110) gives an expected value of 56.65. Likewise, 0.485(90) will produce the expected value of 43.65 for the last cell. As a formula, this can be expressed as
$\dfrac{Column\ Total}{Grand\ Total} \cdot Row\ Total$
Version 1
Expected
Yes - humans contribute to climate change No - humans do not contribute to climate change Totals
Category A Investments (wind, solar) $\dfrac{103}{200} \cdot 110 = 56.65$ $\dfrac{97}{200} \cdot 110 = 53.35$ 110
Category B Investments (coal, ocean shore developments) $\dfrac{103}{200} \cdot 90 = 46.35$ $\dfrac{97}{200} \cdot 90 = 43.65$ 90
Total 103 97 200
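As a quick check, the whole expected table can be built from the totals at once. This is a sketch using NumPy; the outer product applies the formula above to every cell simultaneously.

import numpy as np

row_totals = np.array([110, 90])
col_totals = np.array([103, 97])
expected = np.outer(row_totals, col_totals) / 200
print(expected)   # [[56.65, 53.35], [46.35, 43.65]]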
Step 3. Use a table similar to the one used in the Goodness of Fit test to calculate Chi Square.
Observed Expected $O - E$ $(O - E)^2$ $\dfrac{(O - E)^2}{E}$
56 56.65 -0.65 0.4225 0.0075
54 53.35 0.65 0.4225 0.0079
47 46.35 0.65 0.4225 0.0091
43 43.65 -0.65 0.4225 0.0097
$\chi^2 = 0.0342$
Step 4. Determine the Degrees of Freedom and find the p-value
If R is the number of Rows in the contingency Table and C is the number of columns in the contingency table, then the number of degrees of freedom for the test for independence is found as
df = (R - 1)(C - 1).
For a 2 x 2 contingency table such as in this problem, there is only 1 degree of freedom because (2-1)(2-1) = 1.
The p-value for $\chi^2$ can be found using the table or your calculator.
In the table we locate 0.034 in the row with 1 degree of freedom, then move up to the row for the area to the right. Since the area to the right is greater than 0.05, but more specifically it is greater than 0.1, the p-value is written as p > 0.1.
On your calculator, use $\chi^2$ cdf (low, high, df) . In this case, $\chi^2$ cdf (0.0342, 1E99, 1) = 0.853.
Since the data are not significant, we conclude that people’s investment strategy is independent of their opinion about human contributions to climate change.
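The entire test for independence can also be run in one call with SciPy's chi2_contingency function, which computes the expected table, $\chi^2$, degrees of freedom, and p-value at once. A minimal sketch for Version 1 follows; correction=False turns off the Yates continuity correction that SciPy would otherwise apply to a 2 x 2 table, so the result matches the hand calculation above.

from scipy.stats import chi2_contingency

observed = [[56, 54],
            [47, 43]]
chi2, p, df, expected = chi2_contingency(observed, correction=False)
print(chi2, p, df)   # about 0.0342, 0.853, and 1
print(expected)      # the same expected table found in Step 2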
Version 2 of this problem uses the following contingency table.
Version 2
Observed
Yes - humans contribute to climate change No - humans do not contribute to climate change Totals
Category A Investments (wind, solar) 80 30
Category B Investments (coal, ocean shore developments) 30 60
Total
This time, the entire problem will be calculated using the TI 84 calculator instead of building the tables that were used in Version 1.
Step 1. Matrix
Step 2. Make 1:[A] into a 2 x 2 matrix by selecting Edit, Enter, then modify the R x C as necessary.
Step 3. Enter the frequencies as they are shown in the table.
Step 4. STAT TESTS $\chi^2$ − Test
Observed:[A]
Expected:[B] (you do not need to create the Expected matrix; the calculator will create it for you.)
Select Calculate to see the results:
$\chi^2$ = 31.03764922
p=2.5307155E-8
df=1
In this case, the data are significant. This means that there is a correlation between each person’s opinion about human contributions to climate change and their choice of investments. Remember that correlation is not causation.
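The same one-call approach reproduces the Version 2 result. Reusing the import from the earlier sketch:

chi2, p, df, expected = chi2_contingency([[80, 30], [30, 60]], correction=False)
print(chi2, p, df)   # about 31.04, 2.5E-8, and 1, matching the calculator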
Chi Square Test for Homogeneity
The third and final problem is about the classification of hurricanes in two different 50-year periods, 1901-50 and 1951-2000. One theory about climate change is that hurricanes could get worse.
Hurricanes are classified by the Saffir-Simpson Hurricane Wind Scale.2
Category 1 Sustained Winds 74-95 mph
Category 2 Sustained Winds 96-110 mph
Category 3 Sustained Winds 111-129 mph
Category 4 Sustained Winds 130-156 mph
Category 5 Sustained Winds 157 mph or higher
Category 3, 4, and 5 hurricanes are considered major.
This problem will be worked using tables.
The population of interest is the distribution of hurricanes for the prevailing climate conditions at the time. The hypotheses being tested are
$H_0$: The distributions are homogeneous
$H_1$: The distributions are not homogeneous
This can also be represented symbolically as
$H_0: o_i = \epsilon_i$ for all cells
$H_1: o_i \ne \epsilon_i$ for at least one cell
where $o$ is the lower case Greek letter omicron that represents the observed cell frequency in the underlying population and $\epsilon$ is the lower case Greek letter epsilon that represents the expected cell frequency. The expected cell frequency should always be 5 or higher. If it isn’t, cells should be regrouped.
A 5 x 2 contingency table will be used to show the frequencies that were observed. The expected frequencies were calculated in the same way as in the test of independence. (http://www.nhc.noaa.gov/pastdec.shtml viewed 12/7/13)
Observed 1901 - 1950 1951 - 2000 Totals
Category 1 37 29 66
Category 2 24 15 39
Category 3 26 21 47
Category 4 7 5 12
Category 5 1 2 3
Totals 95 72 167
Expected 1901 - 1950 1951 - 2000 Totals
Category 1 37.54 28.46 66
Category 2 22.19 16.81 39
Category 3 26.74 20.26 47
Category 4 6.83 5.17 12
Category 5 1.71 1.29 3
Totals 95 72 167
Notice that the expected cell frequencies for category 5 hurricanes are less than 5, therefore it will be necessary for us to redo this problem by combining groups. Group 5 will be combined with group 4 and the modified tables will be provided.
Observed 1901 - 1950 1951 - 2000 Total
Category 1 37 29 66
Category 2 24 15 39
Category 3 26 21 47
Category 4 & 5 8 7 15
Total 95 72 167
Expected 1901 - 1950 1951 - 2000 Total
Category 1 37.54 28.46 66
Category 2 22.19 16.81 39
Category 3 26.74 20.26 47
Category 4 & 5 8.53 6.47 15
Total 95 72 167
Observed Expected $O - E$ $(O - E)^2$ $\dfrac{(O - E)^2}{E}$
1901 - 50
Category 1 37 37.54 -0.54 0.30 0.008
Category 2 24 22.19 1.81 3.29 0.148
Category 3 26 26.74 -0.74 0.54 0.020
Category 4 & 5 8 8.53 -0.53 0.28 0.033
1951 - 2000
Category 1 29 28.46 0.54 0.30 0.010
Category 2 15 16.81 -1.81 3.29 0.196
Category 3 21 20.26 0.74 0.54 0.027
Category 4 & 5 7 6.47 0.53 0.28 0.044
$\chi^2 = 0.487$
If R is the number of Rows in the contingency Table and C is the number of columns in the contingency table, then the number of degrees of freedom for the test for homogeneity is found as
df = (R-1)(C-1).
For a 4 $\times$ 2 contingency table such as in this problem, there are 3 degrees of freedom because (4-1)(2-1) = 3 degrees of freedom.
The table shows the p-value is greater than 0.9. The calculator confirms this because $\chi^2$ cdf (0.486, 1E99, 3) = 0.9218. Consequently, the conclusion is that there is not a significant difference between the distribution of hurricanes in 1951-2000 and 1901-50.
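Since the test for homogeneity uses the same arithmetic as the test for independence, the same SciPy function applies to the regrouped 4 x 2 table. This sketch assumes the combined category 4 & 5 counts shown above; no continuity correction is involved because the table is larger than 2 x 2.

from scipy.stats import chi2_contingency

observed = [[37, 29],
            [24, 15],
            [26, 21],
            [8, 7]]
chi2, p, df, expected = chi2_contingency(observed)
print(chi2, p, df)   # about 0.486, 0.92, and 3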
Distinguishing between the use of the test of independence and homogeneity
While the mathematics behind both the test of independence and the test of homogeneity are the same, the intent behind their usage and interpretation of the results is different.
The test for independence is used when two random variables, both of which are considered to be response variables, are determined for each unit. The test for homogeneity is used when one of the random variables is the explanatory variable and subjects are selected based on their level of this variable. The other random variable is the response variable.
The determination of which test to use is established by the sampling approach. If two populations are clearly defined beforehand and a random selection is made from each population, then the populations will be compared using the test of homogeneity. If no effort is made to distinguish populations beforehand, and a random selection is made from a single population and then the values of the two random variables are determined, the test of independence is appropriate.
An example may clarify the subtle difference between the two tests. Consider one random variable to be a person’s preference between running and swimming for exercise and the other random variable to be a person’s preference between watching TV or reading a book. If the researcher randomly selects some runners and some swimmers and asks each group about their preference for TV or reading a book, the test for homogeneity would be appropriate. On the other hand, if the researcher surveys randomly selected people and asks if they prefer running or swimming and if they prefer TV or reading, then the objective will be to determine if there is a correlation between these two random variables by using the test of independence.
Chi - Square Distributions
Area Left 0.005 0.01 0.025 0.05 0.1 0.9 0.95 0.975 0.99 0.995
Area Right 0.995 0.99 0.975 0.95 0.9 0.1 0.05 0.025 0.01 0.005
df
1 0.000 0.000 0.001 0.004 0.016 2.706 3.841 5.024 6.635 7.879
2 0.010 0.020 0.051 0.103 0.211 4.605 5.991 7.378 9.210 10.597
3 0.072 0.115 0.216 0.352 0.584 6.251 7.815 9.348 11.345 12.838
4 0.207 0.287 0.484 0.711 1.064 7.779 9.488 11.143 13.277 14.860
5 0.412 0.554 0.831 1.145 1.610 9.236 11.070 12.832 15.086 16.750
6 0.676 0.872 1.237 1.635 2.204 10.645 12.592 14.449 16.812 18.548
7 0.989 1.239 1.690 2.167 2.833 12.017 14.067 16.013 18.475 20.278
8 1.344 1.647 2.180 2.733 3.490 13.362 15.507 17.535 20.090 21.955
9 1.735 2.088 2.700 3.325 4.168 14.684 16.919 19.023 21.666 23.589
10 2.156 2.558 3.247 3.940 4.865 15.987 18.307 20.483 23.209 25.188
11 2.603 3.053 3.816 4.575 5.578 17.275 19.675 21.920 24.725 26.757
12 3.074 3.571 4.404 5.226 6.304 18.549 21.026 23.337 26.217 28.300
13 3.565 4.107 5.009 5.892 7.041 19.812 22.362 24.736 27.688 29.819
14 4.075 4.660 5.629 6.571 7.790 21.064 23.685 26.119 29.141 31.319
15 4.601 5.229 6.262 7.261 8.547 22.307 24.996 27.488 30.578 32.801
16 5.142 5.812 6.908 7.962 9.312 23.542 26.296 28.845 32.000 34.267
17 5.697 6.408 7.564 8.672 10.085 24.769 27.587 30.191 33.409 35.718
18 6.265 7.015 8.231 9.390 10.865 25.989 28.869 31.526 34.805 37.156
19 6.844 7.633 8.907 10.117 11.651 27.204 30.144 32.852 36.191 38.582
20 7.434 8.260 9.591 10.851 12.443 28.412 31.410 34.170 37.566 39.997
21 8.034 8.897 10.283 11.591 13.240 29.615 32.671 35.479 38.932 41.401
22 8.643 9.542 10.982 12.338 14.041 30.813 33.924 36.781 40.289 42.796
23 9.260 10.196 11.689 13.091 14.848 32.007 35.172 38.076 41.638 44.181
24 9.886 10.856 12.401 13.848 15.659 33.196 36.415 39.365 42.980 45.558
25 10.520 11.524 13.120 14.611 16.473 34.382 37.652 40.646 44.314 46.928
26 11.160 12.198 13.844 15.379 17.292 35.563 38.885 41.923 45.642 48.290
27 11.808 12.878 14.573 16.151 18.114 36.741 40.113 43.195 46.963 49.645
28 12.461 13.565 15.398 16.928 18.939 37.916 41.337 44.461 48.278 50.994
29 13.121 14.256 16.047 17.708 19.768 39.087 42.557 45.722 49.588 52.335
30 13.787 14.953 16.791 18.493 20.599 40.256 43.773 46.979 50.892 53.672
40 20.707 22.164 24.433 26.509 29.051 51.805 55.758 59.342 63.691 66.766
50 27.991 29.707 32.357 34.764 37.689 63.167 67.505 71.420 76.154 79.490
60 35.534 37.485 40.482 43.188 46.459 74.397 79.082 83.298 88.379 91.952
70 43.275 45.442 48.758 51.739 55.329 85.527 90.531 95.023 100.425 104.215
80 51.172 53.540 57.153 60.391 64.278 96.578 101.879 106.629 112.329 116.321
90 59.196 61.754 65.647 69.126 73.291 107.565 113.145 118.136 124.116 128.299
100 67.328 70.065 74.222 77.929 82.358 118.498 124.342 129.561 135.807 140.170
110 75.550 78.458 82.867 86.792 91.471 129.385 135.480 140.916 147.414 151.948
Chapter 8 Homework
Q8.1
For each of the following questions, determine the appropriate test that should be used. Pick from the following three tests.
A. Goodness of Fit
B. Test for Independence
C. Test for Homogeneity
1. The tutor center maintains a list of students who use their services. These students are classified as drop-in students or appointment students. At the end of the term, the director of the tutor center randomly selects students from each of the groups then looks up the grade they received in the class for which they were being tutored. The objective is to determine if there is a difference in the distribution of grades for the two groups. Grades are classified as A, B, C, F.
2. Historically, a teacher found that 33% of the students in a class earned an A, 47% a B, 15% a C, and 5% a D or F. After modifying the way she teaches, she wants to know if her most recent class of students was consistent with past students.
3. Students are given a math assessment and a musical assessment with the objective of determining if there is a correlation between mathematical ability and musical ability.
4. Quantitative data are grouped by the number of standard deviations they are from the mean (e.g. z intervals of [-3, -2), [-2, -1),... [2, 3)). The objective is to determine if the distribution is normal and is based on the probability that a value would fall within each of those ranges.
5. A researcher with the Department of Social and Health Services reviewed records of families who were receiving government assistance two years earlier. The researcher recorded whether it was a one-parent household or a two-parent household. The researcher also recorded whether the family was currently receiving government assistance. The objective was to determine if there is a correlation between the number of parents in the household and whether the household was still receiving government assistance.
6. A researcher with the Department of Social and Health Services wants to know if the number of parents in a household affects the length of time a family receives government assistance. The researcher identifies one-parent families and two-parent families then randomly selects from each of these two groups to determine the number of years in which they receive government assistance. A comparison will be made between the distribution of one-parent families and two-parent families.
Q8.2
For each of the following problems, identify the test that should be done, then write the hypotheses, conduct the test to find chi square and the p-value, and then write a concluding sentence.
1. Bunko is a dice game that serves as the motivation for a group of people to get together for an evening of socializing and eating. One regular Bunko player called it a mindless dice game, because it doesn’t require much thinking and players can talk (or eat!) while playing. A normal game of Bunko involves 12 players, but other multiples of 4 can work nicely. If there are three tables, with the head table being number 1, then after each round of play, winning players move up one table with the goal of being at the head table (the one closest to the food!). The losers from the head table go to table 3. Three dice are used at each table. On each turn a player will roll all three dice. The first round the objective is to get ones, the second round the objective is to get twos, etc. If one of the desired numbers is obtained, the player gets a point. If two of the desired numbers are obtained on the same roll, the player gets 2 points. If all three of the dice are the desired number, the player yells bunko and gets 21 points. If none of the dice show the desired number, the dice are passed to the next person. When the head table reaches 21 points, the round is over for everyone.
As with any game of chance, there is an expected probability distribution. The expected distribution for the probability of having 0,1,2 or 3 successes can be found using the binomial distribution. Complete the probability distribution table. Refer to Chapter 4 if you can’t remember the process.
$X = x$ 0 1 2 3
$P(X = x)$
Three dice were rolled 158 times and the number of ones was recorded for each turn. If the dice are fair, the sample distribution should be a good fit with the expected distribution. The sample data is shown in the table below.
$X = x$ 0 1 2 3
Sample Results 100 44 13 1
Which test is appropriate for this problem?
Conduct the test then write a concluding sentence.
2. In Major League Soccer, is there a correlation between the number of shots a forward attempts and the number of goals he scores? A systematic random sample was taken from the 2013 MLS season results for all players classified as forward. The number of shots the player took was categorized as high (20 or more) and low (less than 20). The number of goals he scored was categorized as high (5 or more) and low (less than 5). The contingency table shows the results of the sample.
High Shots Low Shots
High Goals 13 0
Low Goals 15 21
Which test is appropriate?
Complete the test using tables. Test the hypothesis at the 0.1 level of significance. Write a complete concluding sentence.
3. Lower back pain can be treated with a variety of approaches including using drugs and non-drug therapies. Data from a clinic that specializes in pain management was used to determine if there was a difference in the change in pain level for the patients being treated with a combination of drugs (local anesthetic, anti-inflammatory and a muscle relaxer) and those receiving physical therapy (lumbar traction, heat and ultrasound therapy and transcutaneous electrical nerve stimulation). Pain levels, on a scale of 1 – 5, were determined during the initial visit. The change in pain level was assessed at the 4-week period. If pain improved by 4 or 5 levels it was classified as substantial improvement. If pain improved by 1, 2, or 3 levels it was classified as moderate improvement. If pain was unchanged or got worse, it was classified as no improvement. The table below shows the changes. Use this data to determine if there is a difference in pain reduction using drugs vs non-drug therapy. (data from unpublished student statistics class project)
Observed Drug Non-Drug
Substantial improvement 9 6
moderate improvement 22 23
no improvement 9 11
Which test is appropriate?
Complete the test using tables. Test the hypothesis at the 0.1 level of significance. Write a complete concluding sentence.
Expected Drug Non-Drug
Substantial improvement
moderate improvement
no improvement
Observed Expected $O - E$ $(O - E)^2$ $\dfrac{(O - E)^2}{E}$
$\chi ^2 =$
4. Nationwide, for Native American tribal members with college degrees, 37% are associate degrees, 48% are bachelor degrees and 15% are Masters or PhDs. The distribution of degrees in one of the Puget Sound area tribes is 36 associate degrees, 22 bachelor degrees and 7 masters or PhDs. Is the distribution of degrees in the Puget Sound area tribe different than the national distribution?(data from unpublished student statistics class project)
Which test is appropriate?
Complete the test using tables or calculator. Test the hypothesis at the 0.1 level of significance. Write a complete concluding sentence. Show work (either tables or calculator inputs).
5. Why Statistical Reasoning Is Important for a Criminal Justice Student and Professional
Developed in collaboration with Teresa Carlo, Professor of Criminal Justice
This topic is discussed in CJ 200 and others (Conflict view of Injustice).
The table below shows the racial distribution for Washington State. The data is from the WA State Government, Office of Financial Management. These percentages include those of Hispanic origin. (http://www.ofm.wa.gov/pop/census2010/data.asp)
White Asian Black Native Other
77.3% 7.2% 3.6% 1.5% 10.4%
In theory, the racial distribution of prisoners in WA state prisons should be consistent with this distribution. To determine if this is the case, a sample of prisoners can be taken. The random variable that will be measured is race. The hypotheses to be tested are:
$H_0$: The racial distribution in WA prisons is the same as the racial distribution of the WA population
$H_1$: The racial distribution in WA prisons is not the same as the racial distribution of the WA population.
Use a 5% level of significance. If the data are not significant then we will consider that society and justice are blind to race. If the data are significant, then we will seek a solution to this injustice.
There are 12 prison facilities in WA, of which eight are major prisons and four are minimum-security. There is the possibility that the racial distribution varies based on location and security level, and because of this, random samples will be taken from each prison.
a. What type of sampling method is being used? ___________________________
b. One prison has 2156 prisoners. If thirty prisoners will be selected from this prison, what are the first three random numbers that would be selected if the calculator were seeded with the number 12?
______, ______, ______
Suppose the entire sample included prisoners from all the prisons. In total, 300 prisoners were selected. The number of prisoners of each race in this sample is shown in the table below. (This distribution is based on the actual distribution in WA prisons.) (www.doc.wa.gov/facilities/prison/)
White Asian Black Native Other
216 11 56 12 5
c. Which test is appropriate for this problem?
d. Make a double bar graph that shows a comparison between the observed and expected number of prisoners for each race.
e. Make a table to find the $\chi^2$ value. Use the $\chi^2$ table to find the p-value.
f. Write a concluding sentence.
g. Explain this conclusion in English. What do you think is the reason for this result?
h. If a solution is needed, what solution would you suggest?
Chapter 1 Data and Statistics
1. A survey question asked whether you were looking forward to the time when most of the cars on the road were self-driving (autonomous) cars, and the choice of answers was yes or no.
a. Is the data from the responses to this question categorical or quantitative?
b. Is the appropriate statistic $\hat{p}$ or $\bar{x}$?
c. The table below gives the responses to 20 questions. Calculate the value of the appropriate statistic used for the answer yes.
no yes yes yes no no yes no no yes
yes no yes no yes no yes yes yes no
2. In the school’s cafeteria, an employee counted the number of people sitting at each table.
a. Is the data from the responses to this question categorical or quantitative?
b. Is the appropriate statistic $\hat{p}$ or $\bar{x}$?
c. The table below gives the number at 10 different tables. Calculate the value of the appropriate statistic.
5 6 8 7 4
1 7 8 3 1
Chapter 1 Writing Hypotheses
Name___________________________ Effort _____/4 Attendance ____/1 Total ____/5
1. The equal sign must always go in the null hypothesis ($H_0$)
2. The equal sign may never appear in the alternate hypothesis ($H_1$)
3. The alternate hypothesis uses one of the following: <, >, $\ne$
4. Both hypotheses must be about the same parameter (mean (μ) or proportion (p)). If the hypothesis is about a proportion, then use $H_0: p =$ a number between 0 and 1. If the hypothesis is about a mean, use $H_0: \mu =$ a number.
5. The number in the null and alternate hypothesis must be the same.
Example: What proportion of students ate breakfast today?
$H_0: p = 0.60$
$H_1: p < 0.60$
Example: What is the average number of calories consumed for breakfast today by students?
$H_0: \mu = 200$
$H_1: \mu > 200$
Write your hypotheses for each question. Use each of the three inequalities at least once.
1. What is the average heart rate of college students?
$H_0:$
$H_1:$
2. Given the choice between humanity creating a fantastic future with technology or suffering a collapse of society due to resource depletion and other environmental problems, what proportion of college students do you hypothesize believes the future will be fantastic?
$H_0:$
$H_1:$
3. What is the average time, in minutes, that it takes students to get to school in the morning?
$H_0:$
$H_1:$
4. What proportion of students eat raw cookie dough?
$H_0:$
$H_1:$
Chapter 1 Sampling Distributions
1. In the distribution to the right:
What proportion of sample means will be between 150 and 170?
What proportion of sample means will be between 200 and 230?
What proportion of sample means will be between 150 and 230?
2. In the distribution to the right:
What proportion of sample proportions will be between 0.70 and 0.74?
What proportion of sample proportions will be between 0.84 and 0.90?
What proportion of sample proportions will be less than 0.70?
Chapter 2 p-values and levels of significance
1. For each row of the table you are given a p-value and a level of significance ($\alpha$). Determine which hypothesis is supported, if the data are significant and which type error could be made. If a given p-value is not a valid p-value, put an x in each box in the row.
p - value $\alpha$ Hypothesis $H_0$ or $H_1$ Significant or Not Significant Error
Type I or Type II
0.48 0.05
0.023 0.10
6.7E-6 0.01
Identify each as true or false if data are not significant
_____ The null hypothesis is definitely true
_____ The alternative hypothesis is definitely true
_____ The alternative hypothesis is rejected
_____ The null hypothesis was not rejected
_____The p-value is larger than $\alpha$
2. For each row of the table you are given a p-value and a level of significance ($\alpha$). Determine which hypothesis is supported, if the data are significant and which type error could be made. If a given p-value is not a valid p-value, put an x in each box in the row.
p - value $\alpha$ Hypothesis $H_0$ or $H_1$ Significant or Not Significant Error
Type I or Type II
0.048 0.05
0.0023 0.10
6.70 0.01
Identify each as true or false if data are not significant
_____ The null hypothesis is definitely true
_____ The alternative hypothesis is definitely true
_____ The alternative hypothesis is rejected
_____ The null hypothesis was not rejected
_____The p-value is larger than $\alpha$
Elementary Hypothesis Test, Example 1 Arsenic
Briefing: Arsenic is a naturally occurring element and also a human-produced element (e.g. fracking, combustion of coal) that can be found in ground water. It causes a variety of health problems and can lead to death. The EPA limit is 10 ppb, meaning 10 ppb or higher is unsafe.
Problem: Fracking was started in your community. A year later, sickness in the community leads health department officials to test your water to determine if it is contaminated with arsenic. The official will take 5 samples of water over the next 2 months and decide whether you have safe water or unsafe water based on the average of these samples. The hypotheses to be tested are: $H_0: \mu = 10$ (Not safe) $H_1: \mu < 10$ (Safe). The level of significance is: $\alpha = 0.12$.
Assume these are the two possible distributions that exist.
What is the direction of the extreme?
Show the decision line on both distributions.
What is the critical value?
Label $\alpha$, $\beta$, and power
What is the probability of $\alpha$?
What is the probability of $\beta$?
What is the power?
What is the consequence of a Type I error?
What is the consequence of a Type II error?
Data: What you select from the container that was passed around the classroom
Write a concluding sentence:
What decision do you make about your house and water supply?
Elementary Hypothesis Test, Example 2: Do a majority of people in the US believe it is time for a new voting system?
Briefing: The plurality voting system has been used in this, and other countries, since the democracies were formed. However, this system has led to the domination of two parties which don’t necessarily reflect the opinions of the citizens. Some countries, such as New Zealand, and some states and communities in the US have adopted other voting systems which allow for better representation. Imagine a survey in which people were asked if they think it is time to change the voting system as a solution to the divisive partisanship that currently exists in the US. The objective is to determine if a majority of voters are ready to explore alternative voting systems. The hypotheses are: $H_0: p = 0.50$, $H_1: p > 0.50$, $\alpha = 0.07$.
What is the direction of the extreme?
Show the decision line on both distributions.
What is the critical value?
Label $\alpha$, $\beta$, and power
What is the probability of $\alpha$?
What is the probability of $\beta$?
What is the power?
What is the consequence of a Type I error?
What is the consequence of a Type II error?
Data: 54 out of 100 voters wanted to explore alternative voting systems.
What is the sample proportion?
Write a concluding sentence:
Chapter 2 Design Tables
1. In an effort to determine which strategy is most effective for losing weight, a researcher randomly assigns subjects to one of four groups. One group (exercise) will become involved in a regular exercise program, a second group will be fed a balanced diet (food) but with appropriate size portions, a third group (exercise and food) will use both the exercise program and the balanced diet, while the fourth group (no change) will not change their diet or exercise.
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variables 1 (if present) Levels:
2. People get excited when a young athlete achieves great success but there is always the question of whether the best college athletes were actually among the best young athletes. If interviews of starting varsity athletes from Division 1 schools were done and they were asked if they were considered a superior athlete as a 10 year old in their sport, would the proportion that were successful as a young child be different for males and females?
Research Design Table
Research Question:
Type of Research Observational Study
Observational Experiment
Manipulative Experiment
What is the response variable?
What is the parameter that will be calculated? Mean Proportion
List potential confounding variables.
Grouping/explanatory Variables 1 (if present) Levels:
Chapter 2 Random Numbers
1. A survey at our college will be done. The administration expects different responses from running start students, traditional students, returning students and veterans. Sampling will be done from each of these groups.
What sampling method is being used?
If there are 1320 veterans (1-1320), what are the numbers of the first 3 randomly selected veterans if a seed value of 3 is used?
2. Time series data will be selected 5 years apart so that the data are independent. What are the numbers of the first 3 randomly selected years of data if the first year of data is 1960? Use a seed value of 4.
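These worksheets rely on the TI 84's seeded random number generator so that every student gets the same selections. The idea of reproducible, seeded selection can be illustrated in Python as well; this is only an illustration, since Python's generator will not produce the same numbers as a TI 84 given the same seed.

import random

random.seed(3)   # analogous to seeding the calculator with 3
veterans = [random.randint(1, 1320) for _ in range(3)]
print(veterans)  # the same seed always reproduces the same selections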
Chapter 2 Compare and Contrast Sampling Methods
Name___________________________ Effort_____/5 Attendance ____/1 Total ___/6
A current debate in Washington is whether to build coal export terminals so that coal mined in Montana and Wyoming can be sent by train to the Washington, Oregon or British Columbia coast and then exported to Asia. Some concerns include long trains that will be a constant disruption to traffic, coal dust from the trains will pollute the air near the rail lines, water pollution that will destroy the fisheries and fishing industry, and the concern that coal will contribute to climate change. Suppose a task force of 100 people from Idaho, Washington, Oregon and British Columbia gather to determine a regional policy for this situation. The task force is made up of government officials (G) and public citizens (C). They have all been assigned a number from 1 to 100. All sampling will be done with replacement. That means you can use the same number twice within one sampling method. This activity is meant to allow you to compare and contrast the 4 sampling methods.
Group 1
Idaho
Group 2
Washington
Group 3
Oregon
Group 4
British Columbia
1 -C No Coal 23 -G No Coal 49 -G Terminals 71 -C Terminals
2 -C Terminals 24 -C Terminals 50 -G No Coal 72 -G Terminals
3 -C Terminals 25 -G No Coal 51 -G No Coal 73 -C No Coal
4 -C Terminals 26 -G No Coal 52 -G No Coal 74 -G Terminals
5 -C No Coal 27 -C Terminals 53 -C No Coal 75 -C Terminals
6 -C Terminals 28 -G No Coal 54 -C Terminals 76 -C Terminals
7 -C Terminals 29 -G No Coal 55 -G No Coal 77 -C Terminals
8 -G No Coal 30 -G No Coal 56 -C No Coal 78 -G Terminals
9 -G Terminals 31 -C No Coal 57 -G No Coal 79 -G Terminals
10 -G No Coal 32 -C Terminals 58 -G No Coal 80 -C Terminals
11 -C Terminals 33 -G Terminals 59 -G No Coal 81 -C No Coal
12 -G No Coal 34 -G Terminals 60 -C No Coal 82 -G Terminals
13 -G Terminals 35 -G Terminals 61 -C Terminals 83 -G Terminals
14 -G No Coal 36 -G Terminals 62 -G No Coal 84 -C No Coal
15 -G Terminals 37 -C Terminals 63 -C No Coal 85 -C No Coal
16 -G No Coal 38 -G Terminals 64 -C Terminals 86 -G Terminals
17 -C Terminals 39 -C Terminals 65 -C Terminals 87 -C No Coal
18 -G No Coal 40 -G No Coal 66 -G No Coal 88 -C Terminals
19 -G Terminals 41 -G No Coal 67 -G Terminals 89 -G No Coal
20 -G Terminals 42 -G Terminals 68 -G No Coal 90 -G No Coal
21 -C Terminals
22 -G No Coal 44 -C No Coal 70 -C Terminals 92 -C Terminals
45 -C No Coal 93 -G No Coal
46 -C No Coal 94 -C No Coal
47 -G No Coal 95 -G No Coal
48 -G No Coal 96 -G Terminals
97 -G Terminals
98 -C Terminals
99 -C Terminals
100 -G Terminals
1. Simple Random Sample
Use your calculator with a seed of 23 to randomly select a sample of size 10. The lowest number is 1 and the highest is 100. List the selected numbers then determine the proportion of the sample that is against the coal terminals (No Coal).
Number: _____, _____, _____, _____, _____, _____, _____, _____, _____, _____,
N or T _____, _____, _____, _____, _____, _____, _____, _____, _____, _____,
Proportion that is against the coal terminals: $\hat{p} =$ _____
2. Stratified Random Sample
Use your calculator with a seed of 13. The low is 1 and the high is 100. Put the random numbers in the appropriate strata. When a stratum is filled, ignore other numbers that belong in it.
Citizens: Number _____, _____, _____, _____, _____,
N or T _____, _____, _____, _____, _____,
Government: Number _____, _____, _____, _____, _____,
N or T _____, _____, _____, _____, _____,
Proportion (use citizens and government officials combined) that is against coal terminals: $\hat{p} =$ _____
3. Systematic Random Sample
Use a 1 in k sampling method, with k = 10 to randomly select a sample of size 10. To determine the first number selected, use your calculator with a seed of 18, a low of 1 and a high of 10. Determine the proportion of the sample that is against coal terminals.
Number: _____, _____, _____, _____, _____, _____, _____, _____, _____, _____,
N or T _____, _____, _____, _____, _____, _____, _____, _____, _____, _____,
Proportion that is against coal terminals: $\hat{p} =$ _____
4. Cluster Sampling
Use your calculator with a seed value of 33 to randomly select one of the groups (1-4). Which group is selected? _____________. What is the sample proportion of the selected group that is against coal terminals? $\hat{p} =$ _____
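If you prefer software to a calculator, all four sampling methods can be sketched in a few lines of Python. Note that Python's random number generator is not the same as the TI calculator's, so seeding with 23 here will not reproduce the calculator's numbers; the `role` and `opinion` dictionaries below are placeholders that would be filled in from the table above.

```python
import random

random.seed(23)   # note: Python's seeding will NOT reproduce TI-84 seeded output
N = 100           # task force members are numbered 1 through 100

# Placeholder lookups -- in practice, type these in from the table above.
role = {i: 'C' if i <= 50 else 'G' for i in range(1, N + 1)}                 # hypothetical
opinion = {i: 'No Coal' if i % 2 else 'Terminals' for i in range(1, N + 1)}  # hypothetical

# 1. Simple random sample of size 10, with replacement
srs = [random.randint(1, N) for _ in range(10)]
p_hat = sum(opinion[i] == 'No Coal' for i in srs) / len(srs)

# 2. Stratified sample: keep drawing until 5 citizens and 5 officials are found
citizens, officials = [], []
while len(citizens) < 5 or len(officials) < 5:
    i = random.randint(1, N)
    if role[i] == 'C' and len(citizens) < 5:
        citizens.append(i)
    elif role[i] == 'G' and len(officials) < 5:
        officials.append(i)

# 3. Systematic 1-in-k sample, k = 10: random start in 1..10, then every 10th
start = random.randint(1, 10)
systematic = list(range(start, N + 1, 10))

# 4. Cluster sample: randomly select one of the four regional groups
group = random.randint(1, 4)
print(srs, citizens, officials, systematic, group, p_hat)
```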
Chapter 3 Histograms and Box Plots
Name___________________________ Effort_____/5 Attendance ____/1 Total ___/6
The results of an exam on Chapters 2 and 3 from one statistics class are shown in the table below. The numbers represent the percent of possible points the student earned.
76.8 91.5 98.8 97.6 76.8 93.9 57.3 86.6 90.2
93.9 93.9 82.9 92.7 89.0 72.0 57.3 93.9 92.7
93.9 81.7 63.4 68.3 85.4 50.0 84.1 90.2 86.6
97.6 84.1 81.7 95.1 87.8 75.6 92.7 73.2 91.5
Low value ___________ High value _____________
Make a frequency distribution. Use interval notation for the boundaries [lower,upper).
Classes
Make a histogram. Label completely.
Use your calculator to complete the table below by entering the original data into the lists.
Mean
Standard Deviation Sx
Minimum
Q1
Median
Q3
Maximum
Make a box plot. Label completely.
Chapter 4 Inferential Theory
Question 2: Do more than 70% of Americans drink tea (either hot or iced)?
a. Write your null and alternate hypothesis:
b. Find P(S): c. Find P(F):
d. If you took a sample of 7 people, what is the probability the exact order would be SFSSFSS? That is, find P(SFSSFSS).
e. How many combinations are there for 5 successes in a sample of 7 people?
f. What is the probability you would get 5 successes in a sample of 7 people?
g. Make a binomial distribution for the number of successes in a sample of 7 people.
h. What is the mean and standard deviation for this distribution?
i. Finish the concluding sentence if there were 5 successes in a sample of 7 people. At the 5% level of significance, the proportion of Americans who drink tea __________________________________________________________________________________________________________________________
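To check your answers with software, here is a minimal Python sketch of the binomial calculations, assuming the null hypothesis value P(S) = 0.70 taken from the question (it requires the SciPy library):

```python
from math import comb
from scipy.stats import binom

n, p = 7, 0.70                   # H0: p = 0.70, so P(S) = 0.70 and P(F) = 0.30

print(p**5 * (1 - p)**2)         # d. P(SFSSFSS): one specific order of 5 S's and 2 F's
print(comb(7, 5))                # e. number of orderings with exactly 5 successes
print(binom.pmf(5, n, p))        # f. P(exactly 5 successes in 7 trials)
print([binom.pmf(k, n, p) for k in range(8)])   # g. the full binomial distribution
print(binom.mean(n, p), binom.std(n, p))        # h. mu = np and sigma = sqrt(npq)
print(binom.sf(4, n, p))         # i. P(X >= 5), the p-value for 5 observed successes
```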
Chapter 4 Inferential Theory – Testing Hypotheses
Pacific Northwest residents are often concerned with the issue of sustainability. If a survey of 400 Pacific Northwest individuals resulted in 296 who said they make choices based on being sustainable, then test the hypothesis that over 67% of individuals in this region make choices based on being sustainable.
Test the hypotheses ($H_0: p = 0.67$ $H_1: p > 0.67$) using three different methods and a level of significance of 0.05. For each method, you will be asked which hypothesis is supported.
1a. Binomial Distribution: Use the binomial distribution to calculate the exact p-value based on the data (296 out of 400).
__________________________ ___________________
Calculator input p-value
Which hypothesis is supported by the data? Choose 1: $H_0$ $H_1$
1b. Normal Approximation: Use the normal approximation to the binomial distribution to calculate the approximate p-value based on the data (296 out of 400). Provide the requested information.
$\mu = np =$ , $\sigma = \sqrt{npq} =$
Formula Substitution z value p-value
Which hypothesis is supported by the data? Choose 1: $H_0$ $H_1$
1c. Sampling Distribution for Sample Proportions: Find the p-value using sample proportions for the data (296 out of 400). Provide the requested information.
Sample proportion
Formula Substitution z value p-value
Which hypothesis is supported by the data? Choose 1: $H_0$ $H_1$
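All three methods can be checked with a short SciPy sketch; the exact binomial p-value comes from the survival function, and methods 1b and 1c produce the same z value.

```python
import math
from scipy.stats import binom, norm

n, x, p0 = 400, 296, 0.67

# 1a. Exact binomial p-value: P(X >= 296) when H0 is true
print(binom.sf(x - 1, n, p0))

# 1b. Normal approximation to the binomial
mu, sigma = n * p0, math.sqrt(n * p0 * (1 - p0))   # mu = 268, sigma about 9.40
z = (x - mu) / sigma
print(z, norm.sf(z))

# 1c. Sampling distribution of sample proportions (algebraically the same z)
p_hat = x / n                                      # 0.74
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
print(z, norm.sf(z))
```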
A student at UC Santa Barbara (http://www.culturechange.org/cms/content/view/704/62/) did some research on the plastic red cups that people use for drinks at parties. These cups are made of polystyrene, which cannot be recycled in Santa Barbara. Many of the cups end up in the landfill, but some end up in the ocean. In the nearby college town of Isla Vista, the researcher estimated that the average number of cups used per person per year was 58. Assume the standard deviation is 8.
In an effort to change the culture, suppose an education campaign was used to reduce the number of red cups by encouraging the purchase of beverages in cans (since they can be recycled). To determine if this is effective, a random sample of 16 students will keep track of the number of red cups they use throughout the year. The hypotheses that will be tested are: $H_0: \mu = 58$ $H_1: \mu < 58$, $\alpha = 0.05$
2a. What is the mean of the sampling distribution of sample means? $\mu_{\bar{x}}$ ________
2b. What is the standard deviation of the sampling distribution of sample means? $\sigma_{\bar{x}}$ _________
2c. Draw and label a normal distribution showing the mean and first three standard deviations (standard errors) on each side of the mean for the distribution of sample means of 16 students.
2d. Test the hypothesis if the sample mean of the 16 students is 55, using a level of significance of $\alpha = 0.05$.
Formula Substitution z value p-value
2e. Based on the results in this experiment, has there been a reduction in the use of red cups? Choose 1: Yes No
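Here is a minimal sketch of the same test in Python, using the standard error of the mean from parts 2a and 2b:

```python
import math
from scipy.stats import norm

mu0, sigma, n, xbar = 58, 8, 16, 55

se = sigma / math.sqrt(n)     # standard error of the mean: 8/4 = 2
z = (xbar - mu0) / se         # z = -1.5
p_value = norm.cdf(z)         # left-tailed test; roughly 0.067
print(z, p_value)             # p > 0.05, so the evidence is not significant
```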
Chapters 5 and 6 Mixed Practice with Hypothesis Testing and Confidence Intervals
For each problem, provide the hypotheses and test the hypotheses by calculating the test statistic and p-value. Fill in all the blanks in the following sentence. Also, give the calculator answer in parentheses for the test statistic and p-value. This will not be corrected or graded but will help prepare you for the exam.
1. A student read that in the bay area of California, the average person produces 2 pounds of garbage per day. The student believed that she produced less than that but wanted to test her hypothesis statistically. She collected data on 10 randomly selected days. Use $\alpha = 0.05$.
2.0 2.3 1.9 1.9 2.3
1.2 2.3 2.1 1.7 1.8
$H_0:$
$H_1:$
What is the sample mean? Sample Mean ______________
What is the sample standard deviation? Sample Standard Deviation______________
Formula Substitution Test Statistic value p-value
Calculator:
Test Statistic value p-value
The average amount of garbage produced daily by the student ___________ significantly less than 2 pounds (t = __________, p = _____________, n=_______________).
What is the 95% confidence interval for the amount of garbage she produces?
Formula Substitution Margin of Error Confidence Interval
Calculator confidence Interval: __________________
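For checking problem 1, the following Python sketch runs the one-sample t-test and confidence interval; the `alternative` keyword requires SciPy 1.6 or later.

```python
import numpy as np
from scipy import stats

garbage = [2.0, 2.3, 1.9, 1.9, 2.3, 1.2, 2.3, 2.1, 1.7, 1.8]
n = len(garbage)

xbar = np.mean(garbage)                     # sample mean
s = np.std(garbage, ddof=1)                 # sample standard deviation
t_stat, p_val = stats.ttest_1samp(garbage, popmean=2.0, alternative='less')
print(xbar, s, t_stat, p_val)

# 95% confidence interval for the mean daily garbage
ci = stats.t.interval(0.95, df=n - 1, loc=xbar, scale=s / np.sqrt(n))
print(ci)
```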
2. A living wage is the hourly rate that an individual must earn to support their family, if they are the sole provider and are working full-time. In 2005, it was estimated that 33% of the job openings had wages that were inadequate (below the living wage). A researcher wishes to determine if that is still the case. In a sample of 460 jobs, 207 had wages that were inadequate. Test the claim that the proportion of jobs with inadequate wages is greater than 0.33. Let $\alpha =$ 0.01.
$H_0$ $H_1$
Formula Substitution Test Statistic value p-value
Calculator:
Test Statistic value p-value
What is the 90% confidence interval for the proportion of jobs with inadequate wages?
Formula Substitution Margin of Error Confidence Interval
Calculator confidence Interval: __________________
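A hedged sketch of problem 2 in Python; the test uses the hypothesized proportion in the standard error, while the confidence interval uses the sample proportion.

```python
import math
from scipy.stats import norm

n, x, p0 = 460, 207, 0.33
p_hat = x / n                           # 0.45

# Test H0: p = 0.33 against H1: p > 0.33
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
print(z, norm.sf(z))                    # right-tailed p-value

# 90% confidence interval for the proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)
moe = norm.ppf(0.95) * se               # z* is about 1.645 for 90% confidence
print(p_hat - moe, p_hat + moe)
```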
3. Suppose you had two different ways to get to school. One way was on main roads with a lot of traffic lights, the other way was on back roads with few traffic lights. You would like to know which way is faster. You randomly select 6 days to use the main road and 6 days to use the back roads. Your objective is to determine if the mean time it takes on the back road μb is different than the mean time on the main road μm. The data is presented in the table below. The units are minutes. Assume population variances are equal. Because the sample size is small, you decide to use a significance level of $\alpha = 0.1$.
Back Road 14.5 15.0 16.2 18.9 21.3 17.4
Main Road 19.5 17.3 21.2 20.9 21.1 17.7
Write the appropriate null and alternate hypotheses: H0: _____________ H1:______________
What is the sample mean for each route? Back Road__________ Main Road ______
What is the sample standard deviation for each route? Back Road__________ Main Road ______
Test this using your calculator
Test Statistic value p-value
There _____________ a significant difference between taking the back road and the main road (t = ______, p = ___________, n=_______).
What is the 99% confidence interval for the difference in the mean times?
Calculator confidence Interval: __________________
Use your calculator generated confidence interval to calculate the margin of error ____________
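Problem 3 can be checked with SciPy's pooled-variance t-test; the confidence interval is built by hand from the pooled standard error.

```python
import numpy as np
from scipy import stats

back = np.array([14.5, 15.0, 16.2, 18.9, 21.3, 17.4])
main = np.array([19.5, 17.3, 21.2, 20.9, 21.1, 17.7])

# Two-sided t-test assuming equal population variances
t_stat, p_val = stats.ttest_ind(back, main, equal_var=True)
print(back.mean(), main.mean(), t_stat, p_val)

# 99% confidence interval for the difference in mean times
n1, n2 = len(back), len(main)
sp2 = ((n1 - 1) * back.var(ddof=1) + (n2 - 1) * main.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_star = stats.t.ppf(0.995, df=n1 + n2 - 2)
diff = back.mean() - main.mean()
print(diff - t_star * se, diff + t_star * se)
```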
4. Some parents of age group athletes believe their child will be better if they pay them a financial reward for being successful. For example, they may pay \$5 for scoring a goal in soccer or \$1 for a best time at a swim meet. The argument against paying is that it is counterproductive and destroys the child’s self-motivation. Is the dropout rate of children that have been paid different than of children who have not been paid? Let $\alpha = 0.05$.
Dropout rate of children who have been paid: 450 out of 510
Dropout rate of children who have not been paid: 780 out of 930
$H_0$ $H_1$
Test this using your calculator
Test Statistic value p-value
What is the 95% confidence interval for the difference between the dropout rate of children that have been paid and children who have not been paid? Let α = 0.05.
Calculator confidence interval: __________________
Use your calculator generated confidence interval to calculate the margin of error ____________
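A sketch of the two-proportion z-test for problem 4; the pooled proportion is used for the test and the unpooled standard error for the interval.

```python
import math
from scipy.stats import norm

x1, n1 = 450, 510     # dropouts among children who were paid
x2, n2 = 780, 930     # dropouts among children who were not paid
p1, p2 = x1 / n1, x2 / n2

# Two-tailed test of H0: p1 = p2 using the pooled proportion
pp = (x1 + x2) / (n1 + n2)
z = (p1 - p2) / math.sqrt(pp * (1 - pp) * (1 / n1 + 1 / n2))
print(z, 2 * norm.sf(abs(z)))

# 95% confidence interval for the difference in dropout rates
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
moe = norm.ppf(0.975) * se
print((p1 - p2) - moe, (p1 - p2) + moe)
```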
Chapter 7 – Linear Regression Analysis
Homework problem 4 looks at the relationship between the population of a metropolitan area and the number of patents produced in that area. Below is an expanded sample. It includes more of the large metropolitan areas. Make a new scatter plot. Use a different color marker to indicate Las Vegas and Fresno on this scatter plot. In the homework, these two communities looked like outliers. Do they still?
Use a 5% level of significance.
Show calculator outputs including the correlation, $r^2$ value and equation of the regression line (which has been conveniently placed on the graph for you). Write a statistical conclusion, then interpret the results.
Correlation ____________
Coefficient of determination ($r^2$ value) _______________
Regression equation _____________________
Hypothesis test concluding sentence:
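Since the expanded data table is not reproduced here, the sketch below uses hypothetical placeholder numbers for checking your calculator work; swap in the actual population and patent values before interpreting anything.

```python
from scipy import stats

# Hypothetical placeholder data -- replace with the expanded sample of
# metropolitan populations (in millions) and patent counts from the table.
population = [0.5, 1.2, 2.3, 4.0, 5.6, 9.5]
patents = [120, 340, 510, 980, 1300, 2400]

result = stats.linregress(population, patents)
print(result.rvalue)                    # correlation r
print(result.rvalue ** 2)               # coefficient of determination r^2
print(result.intercept, result.slope)   # regression equation y' = a + bx
print(result.pvalue)                    # p-value for testing a significant correlation
```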
Chapter 7 – $\chi ^2$
If a teacher changes the way a course is taught or uses a new book, how does the teacher know if the changes resulted in better success for the students? One way is to compare the distribution of grades (A, B, C, below C) to what has happened in past classes, assuming that assessments and grading were similar.
The distribution of grades for past classes that used the first edition of Foundations in Statistical Reasoning is shown in the middle column of the table below. The number of students who received each grade when using the second edition is shown below.
Grade Proportion Count from the second edition
A 0.349 16
B 0.287 11
C 0.204 7
Below C 0.160 6
Test the hypothesis that the distribution of grades from the second edition is different than the distribution from the first edition.
Write the hypotheses:
$H_0$:
$H_1$:
Which test is appropriate for this problem?
A. _______ Goodness of Fit B. _______ Test for Independence C. _______ Test for Homogeneity
Test the hypothesis using the table below.
Observed Expected $O - E$ $(O - E)^2$ $\dfrac{(O - E)^2}{E}$
$\chi^2 =$
Write a concluding sentence:
Which of the following conclusions does the evidence support?
_____The second edition resulted in a significantly improved distribution of grades
_____The second edition resulted in a significantly worsening of the distribution of grades
_____The second edition did not appear to affect the distribution of grades
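The goodness of fit calculation can be verified in SciPy; the expected counts are the first-edition proportions multiplied by the second-edition sample size of 40.

```python
import numpy as np
from scipy.stats import chisquare

observed = np.array([16, 11, 7, 6])                   # second-edition counts
proportions = np.array([0.349, 0.287, 0.204, 0.160])  # first-edition distribution
expected = proportions * observed.sum()               # E = n * p for each grade

chi2, p_val = chisquare(f_obs=observed, f_exp=expected)
print(expected)        # 13.96, 11.48, 8.16, 6.40
print(chi2, p_val)     # compare chi2 to the critical value with df = 3
```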
This problem could be done a different way if you were told the number of people who got each grade using the first edition.
Grade Count from the first edition Count from the second edition
A 174 16
B 143 11
C 102 7
Below C 80 6
Test the hypothesis that the distribution of grades from the second edition is different than the distribution from the first edition.
Write the hypotheses:
$H_0:$
$H_1:$
Which test is appropriate for this problem?
A. _______ Goodness of Fit B. _______ Test for Independence C. _______ Test for Homogeneity
Use the matrix and $\chi^2$ test on your calculator to test the hypothesis.
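The same matrix-based test is a one-liner in SciPy; `chi2_contingency` returns the test statistic, p-value, degrees of freedom, and the matrix of expected counts.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: first edition, second edition; columns: A, B, C, below C
counts = np.array([[174, 143, 102, 80],
                   [ 16,  11,   7,  6]])

chi2, p_val, df, expected = chi2_contingency(counts)
print(chi2, p_val, df)
print(expected)
```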
In Algebra or other deterministic math, if you substitute numbers into a formula and calculate the answer, then the results can be reported without too much additional thought. However, with statistics, there is not necessarily one simple answer. Rather, it is necessary to consider all the evidence that can be understood from the sample. This means a careful interpretation of the graphs, an evaluation of the statistics, and a consideration of the test of significance. Simply relying on a p-value, or conversely ignoring it altogether, are both problematic. A p-value is important, but it is not sacred.
In this chapter you will be given graphs, statistics, and p-values. Your task will be to give a written explanation that is justified by the results. You should provide context as well as reference to the evidence. Before writing, you should answer the following questions in your mind.
1. What is the context? What is the story about and what is the purpose?
2. What does the graph show? Think about the distribution. Do you see any patterns or outliers?
3. Look at the statistics. Do they do a good job of representing the distribution shown in the graph?
4. Identify the hypothesis test. Is it appropriate?
5. Does the p-value indicate the data are significant? Keep in mind that significant and important are not synonymous. Is the evidence strong or weak? Use a 5% level of significance for all problems.
There should be a flow to your writing.
• It should begin with background information to provide context. This should include a statement of the objective or the question to be answered.
• Once the context has been provided, write about the evidence that you can gather from the graphs and the statistics.
• Since you have only been given sample data, it is necessary to make an inference. This is where you will write a concluding sentence such as you have been practicing throughout this text.
• Write a conclusion that directly answers the question and is consistent with the evidence and inference. If some of the evidence is contradictory, address the contradictions.
There are three communication activities to do. More direction is given with the first than the others. They increase in point values as your writing should improve each time. The first should be submitted after the exam on Chapter 1. The second should be submitted after the exam on Chapter 3, and the third should be submitted after the exam on Chapter 6.
Effective Communication 1 (due the day after the exam on Chapter 1)
Name _____________________________________ Points ______/4 (-1 per day for late)
The information presented below is about a comparison of the proportion of trips made by bicycle between The Netherlands and Denmark, the two top countries in the world for bicycling. A sample was taken of people in the Netherlands and Denmark. They were asked about the mode of transportation used during the last trip they made from their home to another destination. The data is whether the trips were made by bike or other mode of transportation. The objective is to determine if the proportion of trips by bicycle is higher in the Netherlands than Denmark.(http://top10hell.com/top-10-countrie...es-per-capita/ Viewed 6/21/17) On the back of this page, write your analysis legibly, or type it. There are guides about what should be written.
Results of hypothesis test: p-value = 0.000016.
Section 1. Write about the context. What is this information about? Why might it be of interest? What is the question that is being asked? Try to engage the reader.
Section 2. Give evidence. This is where you explain the distribution of the data and give the statistics. Make use of all relevant statistics and evidence from the graph.
Section 3. Make an inference. This is where you write the concluding sentence to extend the results from the sample to the population. Make sure it is phrased in a way that is consistent with the question being asked in Section 1. Include p-value and sample sizes.
Section 4. Conclusion. Provide a direct answer to the question, being consistent with the evidence and inference.
Effective Communication 2 (due the day after the exam on Chapter 3)
Name _____________________________________ Points ______/8 (-2 per day for late)
The current national debt of the United States is about 20 trillion dollars. While this number sounds high (okay, it is high), there is a difference if a country of the size of the US has a 20 trillion dollar national debt compared with a country the size of Bermuda (for example). Therefore, one thing economists will do is to find the ratio of the national debt of a country to the Gross Domestic Product (GDP) for the country. The current ratio for the US is 106. This means that the debt is 106% of the GDP. The information below will allow you to determine if the average debt to GDP ratio for other randomly selected countries is less than the US.2 The US is not included in the distribution below.
Mean: 60.8, Median: 45.2, Minimum: 3.1, Max: 250.4 (Japan), Standard Deviation: 54.1
Hypothesis Test Results: P-value = 0.0012, n = 18.
Write an analysis that compares the debt to GDP ratio of other countries to the US. Include context, statistics and a statement of significance. Use good grammar and spelling. Write legibly or type.
Organize your writing in the order that was done in the first effective communication activity, give background and context followed by evidence, inference, and then conclusion.
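If you want to verify the reported p-value from the summary statistics alone, the left-tailed one-sample t-test can be reproduced in a few lines (this is a sketch for checking, not part of the required write-up):

```python
import math
from scipy.stats import t

xbar, s, n, mu0 = 60.8, 54.1, 18, 106   # sample statistics and the US ratio

t_stat = (xbar - mu0) / (s / math.sqrt(n))
p_val = t.cdf(t_stat, df=n - 1)          # left tail: is the mean ratio below 106?
print(t_stat, p_val)                     # p is about 0.0012, matching the handout
```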
Effective Communication 3 (due the day after the exam on Chapter 6)
Name _____________________________________ Points ______/12 (-3 per day for late)
The CEO of an investment company is trying to fill a position of Senior Investment Manager. Two investment advisors are applying for the same position. You have been given the task of analyzing the success of the investments they managed for their clients. The data that you will compare is the change in the value of the investment for each \$10,000 that is invested. For example, if the investment grew to \$12,400, then the change would be \$2,400. However, if the investment shrank to \$6500, the change would be -\$3,500. Below you will find graphs, statistics and the results of a hypothesis test that compares the two investment managers, Andy and Bobbie (note, these names can be used for males and females, so you can use whichever pronouns you want, e.g. him or her). Your objective is to write a report that compares the two and make a recommendation. You must justify your recommendation.
Andy Bobbie
Mean 2537.74 2550.76
Median 3220.50 -635.50
Standard Deviation 3736.56 7112.00
n 50 50
The results of a t-test for 2 independent means to test for a difference between means are t = -0.011, p = 0.991.
Write an analysis to help the CEO make a choice between Andy and Bobbie. Include context, statistics and a statement of significance. Use good grammar and spelling. Write legibly or type. Organize your writing in a similar way to the other effective communication activities.
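For reference, the reported t-test can be reproduced directly from the summary table using SciPy's `ttest_ind_from_stats`:

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics for Andy and Bobbie from the table above
t_stat, p_val = ttest_ind_from_stats(mean1=2537.74, std1=3736.56, nobs1=50,
                                     mean2=2550.76, std2=7112.00, nobs2=50,
                                     equal_var=False)
print(t_stat, p_val)   # approximately t = -0.011, p = 0.991, as reported
```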
Several years ago, I was teaching an introductory Statistics course at De Anza College where I had several high‐achieving students who were dedicated to learning the material and who frequently asked me questions during class and office hours. Like many students, they were able to understand the material on descriptive statistics and interpreting graphs. Unlike many introductory Statistics students, they had excellent math and computer skills and went on to master probability, random variables and the Central Limit Theorem.
However, when the course turned to inference and hypothesis testing, I watched these students’ performance deteriorate. One student asked me after class to again explain the difference between the Null and Alternative Hypotheses. I tried several methods, but it was clear these students never really understood the logic or the reasoning behind the procedure. These students could easily perform the calculations, but they had difficulty choosing the correct model, setting up the test, and stating the conclusion.
These students, (to their credit) continued to work hard; they wanted to understand the material, not simply pass the class. Since these students had excellent math skills, I went deeper into the explanation of Type II error and the statistical power function. Although they could compute power and sample size for different criteria, they still didn’t conceptually understand hypothesis testing.
On my long drive home, I was listening to National Public Radio’s Talk of the Nation1 and heard a discussion of the difference between the reductionist and holistic approaches to the sciences. The commentator described this as the Western tradition vs. the Eastern tradition. The reductionist or Western method of analyzing a problem, mechanism or phenomenon is to look at the component pieces of the system being studied. For example, a nutritionist breaks a potato down into vitamins, minerals, carbohydrates, fats, calories, fiber and proteins. Reductionist analysis is prevalent in all the sciences, including Inferential Statistics and Hypothesis Testing.
Holistic or Eastern tradition analysis is less concerned with the component parts of a problem, mechanism or phenomenon but rather with how this system operates as a whole, including its surrounding environment. For example, a holistic nutritionist would look at the potato in its environment: when it was eaten, with what other foods it was eaten, how it was grown, or how it was prepared. In holism, the potato is much more than the sum of its parts.
Consider these two renderings of fish:
The first image is a drawing of fish anatomy by John Cimbaro used by the La Crosse Fish Health Center.2 This drawing tells us a lot about how a fish is constructed, and where its vital organs are located. There is much detail given to the scales, fins, mouth and eyes.
The second image is a watercolor by the Chinese artist Chen Zheng‐Long3. In this artwork, we learn very little about fish anatomy since we can only see minimalistic eyes, scales and fins. However, the artist shows how fish are social creatures, how their fins move to swim and the type of plants they like. Unlike the first image, this watercolor teaches us much more about the interaction of the fish in its surrounding environment and much less about how a fish is built.
This illustrative example shows the difference between reductionist and holistic analyses. Each rendering teaches something important about the fish: the reductionist drawing of the fish anatomy helps explain how a fish is built and the holistic watercolor helps explain how a fish relates to its environment. Both the reductionist and holistic methods add to knowledge and understanding, and both philosophies are important. Unfortunately, much of Western science has been dominated by the reductionist philosophy, including the backbone of the scientific method, Inferential Statistics.
Although science has traditionally been reluctant, often hostile, to embrace or include holistic philosophy in the scientific method, there have been many who now support a multicultural or multi‐philosophical approach. In his book Holism and Reductionism in Biology and Ecology4, Looijen claims that “holism and reductionism should be seen as mutually dependent, and hence co‐operating research programs, rather than as conflicting views of nature or of relations between sciences.” Holism develops the “macro‐laws” that reductionism needs to “delve deeper” into understanding or explaining a concept or phenomena. I believe this claim applies to the study of Statistics as well.
I realize that the problem of my high‐achieving students being unable to comprehend hypothesis testing could be cultural – these were international students who may have been schooled under a more holistic philosophy. The Introductory Statistics curriculum and most texts give an incomplete explanation of the logic of Hypothesis Testing, eliminating or barely explaining such topics as Power, the consequence of Type II error or Bayesian alternatives. The problem is how to supplement an Introductory Statistics course with a holistic philosophy without depriving the students of the required reductionist course curriculum – all in one quarter or semester!
I believe it is possible to teach the concept of Inferential Statistics holistically. This course material is a result of that inspiration, and it was designed to supplement, not replace, a traditional course textbook or workbook. This supplemental material includes:
• Examples of deriving research hypotheses from general questions and explanatory conclusions consistent with the general question and test results.
• An in‐depth explanation of statistical power and type II error.
• Techniques for checking the validity of model assumptions and identifying potential outliers using graphs and summary statistics.
• Replacement of the traditional step‐by‐step “cookbook” for hypothesis testing with interrelated procedures.
• De‐emphasis of algebraic calculations in favor of a conceptual understanding using computer software to perform tedious calculations.
• Interactive Flash animations to explain the Central Limit Theorem, inference, confidence intervals, and the general hypothesis testing model, which includes Type II error and power.
• PowerPoint Slides of the material for classroom demonstration.
• Excel Data sets for use with computer projects and labs.
This material is limited to one population hypothesis testing but could easily be extended to other models. My experience has been that once students understand the logic of hypothesis testing, the introduction of new models is a minor change in the procedure.
This old story from China or India was made into the poem The Blind Man and the Elephant by John Godfrey Saxe5. Six blind men find excellent empirical evidence from different parts of the elephant and all come to reasoned inferences that match their observations. Their research is flawless and their conclusions are completely wrong, showing the necessity of including holistic analysis in the scientific process.
Here is the poem in its entirety:
It was six men of Indostan, to learning much inclined,
who went to see the elephant (Though all of them were blind),
that each by observation, might satisfy his mind.
The first approached the elephant, and, happening to fall,
against his broad and sturdy side,
at once began to bawl: "God bless me! but the elephant, is nothing but a wall!"
The second feeling of the tusk, cried: "Ho! what have we here,
so very round and smooth and sharp? To me tis mighty clear,
this wonder of an elephant, is very like a spear!"
The third approached the animal, and, happening to take,
the squirming trunk within his hands, "I see," quoth he,
"the elephant is very like a snake!"
The fourth reached out his eager hand, and felt about the knee:
"What most this wondrous beast is like, is mighty plain," quoth he;
"Tis clear enough the elephant is very like a tree."
The fifth, who chanced to touch the ear, Said; "E'en the blindest man
can tell what this resembles most; Deny the fact who can,
This marvel of an elephant, is very like a fan!"
The sixth no sooner had begun, about the beast to grope,
than, seizing on the swinging tail, that fell within his scope,
"I see," quothe he, "the elephant is very like a rope!"
And so these men of Indostan, disputed loud and long,
each in his own opinion, exceeding stiff and strong,
Though each was partly in the right, and all were in the wrong!
So, oft in theologic wars, the disputants, I ween,
tread on in utter ignorance, of what each other mean,
and prate about the elephant, not one of them has seen!
-John Godfrey Saxe
1.03: What can go Wrong in Research - Two Stories
The first story is about a drug that was thought to be effective in research, but was pulled from the market when it was found to be ineffective in practice.
FDA Orders Trimethobenzamide Suppositories Off the market6
FDA today ordered makers of unapproved suppositories containing trimethobenzamide hydrochloride to stop manufacturing and distributing those products.
Companies that market the suppositories, according to FDA, are Bio Pharm, Dispensing Solutions, G&W Laboratories, Paddock Laboratories, and Perrigo New York. Bio Pharm also distributes the products, along with Major Pharmaceuticals, PDRX Pharmaceuticals, Physicians Total Care, Qualitest Pharmaceuticals, RedPharm, and Shire U.S. Manufacturing.
FDA had determined in January 1979 that trimethobenzamide suppositories lacked "substantial evidence of effectiveness" and proposed withdrawing approval of any NDA for the products.
"There's a variety of reasons" why it has taken FDA nearly 30 years to finally get the suppositories off the market, Levy said.
At least 21 infant deaths have been associated with unapproved carbinoxamine-containing products, Levy noted.
Many products with unapproved labeling may be included in widely used pharmaceutical reference materials, such as the Physicians' Desk Reference, and are sometimes advertised in medical journals, he said.
Regulators urged consumers using suppositories containing trimethobenzamide to contact their health care providers about the products.
The second story is about promising research that was abandoned because the test data showed no significant improvement for patients taking the drug.
Drug Found Ineffective Against Lung Disease7
Treatment with interferon gamma-1b (Ifn-g1b) does not improve survival in people with a fatal lung disease called idiopathic pulmonary fibrosis, according to a study that was halted early after no benefit to participants was found.
Previous research had suggested that Ifn-g1b might benefit people with idiopathic pulmonary fibrosis, particularly those with mild to moderate disease.
The new study included 826 people, ages 40 to 79, who lived in Europe and North America. They were given injections of either 200 micrograms of Ifn-g1b (551 people) or a placebo (275) three times a week.
After a median of 64 weeks, 15 percent of those in the Ifn-g1b group and 13 percent in the placebo group had died. Symptoms such as flu-like illness, fatigue, fever and chills were more common among those in the Ifn-g1b group than in the placebo group. The two groups had similar rates of serious side effects, the researchers found.
"We cannot recommend treatment with interferon gamma-1b since the drug did not improve survival for patients with idiopathic pulmonary fibrosis, which refutes previous findings from subgroup analyses of survival in studies of patients with mild-to-moderate physiological impairment of pulmonary function," Dr. Talmadge E. King Jr., of the University of California, San Francisco, and colleagues wrote in the study published online and in an upcoming print issue of The Lancet.
The negative findings of this study "should be regarded as definite, [but] they should not discourage patients to participate in one of the several clinical trials currently underway to find effective treatments for this devastating disease," Dr. Demosthenes Bouros, of the Democritus University of Thrace in Greece, wrote in an accompanying editorial.
Bouros added that people deemed suitable "should be enrolled early in the transplantation list, which is today the only mode of treatment that prolongs survival."
Although these are both stories of failures in using drugs to treat diseases, they represent two different aspects of hypothesis testing. In the first story, the suppositories were thought to be effective in treatment from the initial trials, but were later shown to be ineffective in the general population. This is an example of what statisticians call Type I Error: supporting a hypothesis (the suppositories are effective) that later turns out to be false.
In the second story, researchers chose to abandon research when the interferon was found to be ineffective in treating lung disease during clinical trials. Now this may have been the correct decision, but what if this treatment was truly effective and the researchers just had an unusual group of test subjects? This would be an example of what statisticians call Type II Error: failing to support a hypothesis (the interferon is effective) that later turns out to be true. Unlike the first story, the second story will never result in an answer to this question since the treatment will not be released to the general public.
In a traditional Introductory Statistics course, very little time is spent analyzing the potential error shown in the second story. However, both types of error are important and will be explored in this course material.
Preliminary Results – bringing the holistic approach to the entire statistics curriculum.
After writing what are now chapters 8, 9 and 10, I decided to use this holistic approach in several of my courses. I found students were more engaged in the course, were able to understand the logic of hypothesis testing, and would state the appropriate conclusion. I wanted to bring this approach to the entire statistics course and this book is the result.
In statistics, we organize data into graphs, which (when properly created) are powerful tools to help us understand, interpret and analyze the phenomena we study.
Here is an example of raw data, the monthly closing stock price (adjusted for splits) of Apple Inc. from December 1999 to December 2016⁹:
Most people would look at this data and be unable to analyze or interpret what has happened at Apple. However, a simple line graph over time is much easier to understand:
The line graph tells the story of Apple, from the dot.com crash in 2000, to the introduction of the first iPod in 2005, the first smart phone in 2007, the economic collapse of 2008, and competition from other operating systems, such as Android:
Graphs can help separate perception from reality. The polling organization Gallup has annually asked the question “Is there more crime in the U.S. then there was a year ago, or less?” In virtually every poll done, a large majority has said that crime has gone up.10
However, actual data from the U.S. Justice Department shows that violent crime rates have actually decreased in almost every year since 1990.11
Perhaps people are influenced by stories in the news, which may sensationalize crime, but here is an example of where we can use statistics to challenge these false perceptions.
Here are two other examples of graphs of data. Make your own interpretation:
• Pew Research conducted a study in 2013 on how First Generation immigrant crime rates compare with second generation and native born Americans.12
• The Next Big Future conducted a study comparing deaths caused by creating energy from different sources: coal, oil and nuclear.13
2.02: Types of Data
In Statistics, two important concepts are the population and the sample. If we are collecting data, the population refers to all data for the phenomenon being studied, while the sample refers to a subset of that data. In statistics, we are almost always analyzing sample data. These concepts will be explored in greater detail in Chapter 3. Until then, we will work with only sample data.
Sample data is a collection of information taken from a population for the purpose of analysis.
Quantitative data are measurements and numeric quantities that can be determined from the data. When describing quantitative data, we can look at the center, spread, shape and unusual features.
Qualitative data are non‐numeric values that describe the data. Note that all quantitative data is numeric but some numbers without quantity (such as Zip Code or Social Security Number) are qualitative. When describing categorical data, we are limited to observing counts in each group and comparing the differences in percentages.
Categorical data are non‐numeric values. Some examples of categorical data include eye color, gender, model of computer, and city.
Discrete data are quantitative data based on the natural numbers (0, 1, 2, 3, ...). Some examples of discrete data include number of siblings, friends on Facebook, bedrooms in a house. Discrete data are values that are counted, or answers to the question "How many?"
Continuous data are quantitative data based on the real numbers. Some examples of continuous data include time to complete an exam, height, and weight. Continuous data are values that are measured, or answers to the question "How much?"
2.03: Levels of Data
Data can also be organized into four levels of data, Nominal, Ordinal, Interval and Ratio
Nominal Data are qualitative data that only define attributes, not hierarchical ranking. Examples of nominal data include hair color, ethnicity, gender and any yes/no question.
Ordinal Data are qualitative data that define attributes with a hierarchical ranking. Examples of ordinal data include movie rating (G, PG, PG13, R, NC17), T‐shirt size (S, M, L, XL), or your letter grade on a term paper.
The difference between Nominal and Ordinal data is that Ordinal data can be ranked, while Nominal data are just labels.
Interval Data are quantitative data that have meaningful distance between values, but do not have a "true" zero. Interval data are numeric, but zero is just a placeholder. Examples of interval data include temperature in degrees Celsius, and year of birth.
Ratio Data are quantitative data that have meaningful distance between values, and have a "true" zero. Examples of ratio data include time it takes to drive to work, weight, height, and number of children in a family. Most numeric data will be ratio.
One way to tell the difference between Interval and Ratio data is to check whether zero has the same value under all possible units. For example, zero degrees Celsius is not the same as zero degrees Fahrenheit, so temperature has no true zero. But zero minutes, zero days, zero months all mean the same thing, since for time zero means "no time."
When describing categorical data with graphs, we want to be able to visualize the difference in proportions or percentages within each group. These values are also known as relative frequencies.
Definition: Relative Frequency
n = sample size ‐ The number of observations in your sample.
Frequency ‐ the number of times a particular value is observed.
Relative frequency ‐ The proportion or percentage of times a particular value is observed.
Relative Frequency = Frequency / n
Example: One categorical variable ‐ marital status
A sample of 500 adults (aged 18 and over) from Santa Clara County, California was taken from the year 2000 United States Census.14 The results are displayed in the table:
Marital Status Frequency Relative Frequency
Married 270 270/500 = 0.540 or 54.0%
Widowed 22 22/500 = 0.044 or 4.4%
Divorced ‐ not remarried 42 42/500 = 0.084 or 8.4%
Separated 10 10/500 = 0.020 or 2.0%
Single ‐ never married 156 156/500 = 0.312 or 31.2%
Total 500 500/500 = 1.000 or 100.0%
Solution
Analysis ‐ over half of the sampled adults were reported as married. The smallest group was separated, which represented only 2% of the sample.
Example: Comparing two categorical variables ‐ presidential approval and gender
Reuters/Ipsos conducts a daily tracking poll of American adults to assess support of the president of the United States. Here are the results of a tracking poll ending August 17, 2017, which includes data from the five days on which Donald Trump made several highly controversial statements regarding violence following a gathering of neo‐Nazis and white supremacists in Charlottesville, Virginia. The question is "Overall, do you approve or disapprove of the way Donald Trump is handling his job as president?"15
Female Frequency Male Frequency Female Relative Frequency Male Relative Frequency
Approve 392 404 0.295 or 29.5% 0.400 or 40.0%
Disapprove 846 545 0.634 or 63.4% 0.541 or 54.1%
Unsure/No Opinion 96 59 0.079 or 7.9% 0.059 or 5.9%
Total 1334 1008 1.000 or 100% 1.000 or 100%
Solution
Analysis – Both men and women disapproved of the way Donald Trump was handling his job as president on the date of the poll. Women had a higher disapproval rate than men. In political science, this is called a gender gap.
Bar Graphs
One way to represent categorical data is on a bar graph, where the height of the bar can represent the frequency or relative frequency of each choice.
The graphs below represent the marital status information from the one categorical example. The vertical axis on the first graph shows frequencies for each group, while the second graph shows the relative frequencies (shown here as percentages).
There is no difference in the shape of each graph as the percentage or frequency in each group is directly proportional to the area of each bar.
In either case, we can make the same analysis, that married and single are the most frequently occurring marital statuses.
A clustered bar graph can be used to compare categorical variables, such as the presidential approval poll cross‐tabulated by gender. You can see in this graph that women have a much stronger disapproval of Trump than men do. In this graph, the vertical axis is frequency, but you could also make the vertical axis relative frequency or percentage.
Another way of representing the same data is a stacked bar graph, shown here with percentage (relative frequency) as the vertical axis. It is harder to see the difference between men and women, but the total approval/disapproval percentages are easier to read.
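Clustered and stacked bar graphs like these are straightforward to produce in software. Here is a minimal matplotlib sketch of the clustered version, using the poll counts from the table above:

```python
import numpy as np
import matplotlib.pyplot as plt

labels = ['Approve', 'Disapprove', 'Unsure/No Opinion']
female = [392, 846, 96]
male = [404, 545, 59]

x = np.arange(len(labels))   # one position per response category
width = 0.35                 # width of each bar

fig, ax = plt.subplots()
ax.bar(x - width / 2, female, width, label='Female')
ax.bar(x + width / 2, male, width, label='Male')
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.set_ylabel('Frequency')
ax.legend()
plt.show()
```

A stacked version would instead draw both series at the same x positions, passing `bottom=female` to the second `ax.bar` call.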
Example: Historic gender gaps
Here is another clustered bar graph reported by ABC News, August 21, showing that Trump had a larger gender gap than the two prior presidents, Barack Obama and George W. Bush.16
In conclusion, bar graphs are an excellent way to display, analyze and compare categorical data. However, care must be taken to not create misleading graphs.
Example: Misreported Affordable Care Act enrollment
Here is an example of a bar graph reported on the Fox News Channel that distorted the truth about people signing up for the Affordable Care Act (ACA) in 2014, as reported by mediamatters.org17
On March 27 health insurance enrollment through the ACA's exchanges surpassed 6 million, exceeding the revised estimate of enrollees for the program's first year before the March 31 open enrollment deadline. Enrollment appears on track to hit the Congressional Budget Office's initial estimate of 7 million sign‐ups, and taking Medicaid enrollees into account, the ACA will have reportedly extended health care coverage to at least 9.5 million previously uninsured individuals.
Fox celebrated the final day of open enrollment by attempting to somehow twist the recent enrollment surge into bad news for the law.
America's Newsroom aired an extremely skewed bar chart which made it appear that the 6 million enrollees comprised roughly one‐third of the 7 million enrollee goal:
At first look, the graph seemingly shows that the ACA enrollment was well below the projected goal. The graph is misleading for three reasons:
1. The vertical axis doesn’t start at zero enrollees, greatly overstating the difference between the two numbers.
2. The graph of the “6,000,000” enrolled failed to include new enrollees in Medicaid, which was part of the “March 31 Goal.”
3. The reported enrollment was 4 days before the deadline. Like students doing their homework, many people waited until the last day to enroll.
The actual enrollment numbers far exceeded the goal, the exact opposite of this poorly constructed bar graph.
Pie Charts
Another way to represent categorical data is a pie chart, in which each slice of the pie represents the relative frequency or percentage of data in each category.
The pie chart shown here represents the marital status of 500 adults in Santa Clara County taken from the 2000 census, the same data that was represented by a bar graph in a previous example.
The analysis again shows that most people are married, followed by single.
A multiple pie chart can be used to compare the effect of one categorical variable on another.
In the presidential approval poll example, a higher percentage of female adults disapprove of Donald Trump's performance as U.S. President compared to male adults. This is comparable to stacked or clustered bar graphs shown in the prior example.
Numeric data is treated differently from categorical data as there exists quantifiable differences in the data values. In analyzing quantitative data, we can describe quantifiable features such as the center, the spread, the shape or skewness18, and any unusual features (such as outliers).
Interpreting and Describing Numeric Data
Center – Where is the middle of the data, what value would represent the average or typical value?
Spread – How much variability is there in the data? What is the range of the data? (Range is the highest value minus the lowest value.)
Shape – Are the data values symmetric or are they skewed positive or negative? Are the values clustered toward the center, evenly spread, or clustered towards the extreme values?
Unusual Features – Are there outliers (values that are far removed from the bulk of the data?)
Example: Students browsing the web
This data represents how much time 30 students spent on a web browser (on the Internet) in a 24 hour period.19
Data is rounded to the nearest minute.
This data set is continuous, ratio, quantitative data, even though times are rounded to the nearest integer. Sample data presented unsorted in this format are sometimes called raw data.
Not much can be understood by looking simply at raw data, so we want to make appropriate graphs to help us conduct preliminary analysis.
2.05: Graphs of Numeric Data
A stem and leaf plot is a method of tabulating the data to make it easy to interpret. Each data value is split into a "stem" (the first digit or digits) and a "leaf" (the last digit, usually). For example, the stem for 102 minutes would be 10 and the leaf would be 2.
The stems are then written on the left side of the graph and all corresponding leaves are written to the right of each matching stem.
The stem and leaf plot allows us to do some preliminary analysis of the data. The center is around 100 minutes. The spread between the highest and lowest numbers is 58 minutes. The shape is not symmetric since the data is more spread out towards the lower numbers. In statistics, this is called skewness and we would call this data negatively skewed.
Stem and leaf plots can also be used to compare similar data from two groups in a back‐to‐back format.
In a back‐to‐back stem and leaf plot, each group would share a common stem and leaves would be written for each group to the left and right of the stem.
Example: Comparing two airlines’ passenger loading times
The data shown represents the passenger boarding time (in minutes) for a sample of 16 airplanes each for two different airlines.
Airline A will be represented on the left side of the stem, while Airline B will be represented on the right. Instead of using the last digit as the leaf (each row representing 10 minutes), we will let each row represent 5 minutes. This will allow us to better see the shape of the data.
The center for Airline B is about 5 minutes lower than Airline A. The spread for each airline is about the same. Airline A's shape seems slightly skewed towards positive values (skewed positive) while Airline B's times are somewhat symmetric.
2.5.02: Dot Plots
A dot plot represents each value of a data set as a dot on a simple numeric scale. Multiple values are stacked to create a shape for the data. If the data set is large, each dot can represent multiple values of the data.
Example: Weights of apples
A Chilean agricultural researcher collected a sample of 100 Royal Gala apples.20 The weight of each apple (reported in grams) is shown in the table below:
Here is the data organized into a dot plot, in which each dot represents one apple. The scaling of the horizontal axis rounds each apple’s weight to the nearest 10 grams.
The center of the data is about 250, meaning that a typical apple would weight about 250 grams. The range of weights is between 110 and 440 grams, although the 440 gram apple is an outlier, an unusually large apple. The next highest weight is only 370 grams. Not counting the outlier, the data is symmetric and clustered towards the center.
Dot plots can also be used to compare multiple populations.
Example: Comparing weights of apples and oranges
The Chilean agricultural researcher collected a sample of 100 navel oranges21 and recorded the weight of each orange in grams.
We can now add the weights of the oranges to the dot plot of the apple weights made in the prior example. The first chart keeps apples and oranges in separate graphs while the second chart combines data with a different marker for apples and oranges. This second chart is called a stacked dot plot.
From the graphs, we can see that the typical orange weighs about 30 grams more than the typical apple. The spread of weights for apples and oranges is about the same. The shapes of both graphs are symmetric and clustered towards the center. There is a high outlier for apples at 440 grams and a low outlier for oranges at 120 grams.
Another way to organize raw data is to group them into class intervals, and to then create a frequency distribution of these class intervals.
There are many methods of creating class intervals, so we will simply focus on creating intervals of equal width.
How to create class intervals of equal width and a frequency distribution
1. Choose how many intervals you want; between 5 and 15 intervals usually works best.
2. Determine the interval width using the formula and rounding UP to a convenient value:
$\text{IW} = \text{Interval Width} = \dfrac{\text{Maximum Value} - \text{Minimum Value} + 1}{\text{Number of Intervals}} \nonumber$
3. Create the class intervals starting with the minimum value:
Min to under Min + IW,
Min + IW to under Min + 2(IW), ...
4. Calculate the frequency of each class interval by counting the values in each class interval. Values that fall on an endpoint should be put in the higher class interval. This result is called a frequency distribution.
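Here is a minimal Python sketch of this procedure, assuming integer-valued data and using `math.ceil` in place of rounding up to a "convenient" value; the sample list is hypothetical.

```python
import math

def frequency_distribution(data, k):
    """Group data into k equal-width class intervals [lower, upper)."""
    lo, hi = min(data), max(data)
    width = math.ceil((hi - lo + 1) / k)           # round the width UP
    edges = [lo + i * width for i in range(k + 1)]
    freqs = [0] * k
    for value in data:
        index = min(int((value - lo) // width), k - 1)  # endpoint values go up
        freqs[index] += 1
    return list(zip(edges[:-1], edges[1:], freqs))

# Hypothetical data for illustration only
sample = [67, 72, 85, 90, 91, 95, 100, 103, 104, 110, 115, 120, 125]
for lower, upper, f in frequency_distribution(sample, 5):
    print(f"[{lower}, {upper}): {f} ({f / len(sample):.1%})")
```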
Example: Students browsing the web
Let's return to the data that represents how much time 30 students spent on a web browser in a 24 hour period. Data is rounded to the nearest minute.
First we choose how many class intervals to use. In this example, we will create 5 class intervals.
Next, determine the class interval width and round up to a convenient value.
$\mathrm{IW}=\frac{125-67+1}{5}=11.8 \rightarrow 12 \nonumber$
Now create class intervals of width 12, starting with the lowest value, 67.
$\begin{array}{lllll} (67 \text { to } 79) & (79 \text { to } 91) & (91 \text { to } 103) & (103 \text { to } 115) & (115 \text { to } 127) \end{array} \nonumber$
Now, create a frequency distribution, by counting how many are in each interval. Values that are on an endpoint should be put in the higher class interval. For example, 103 should be counted in the interval (103 to 115):
As we did with categorical data, we can define Relative Frequency as the proportion or percentage of values in any Class Interval.
n = sample size ‐ The number of observations in your sample.
Frequency ‐ the number of times a particular value is observed in a class interval.
Relative frequency ‐ The proportion or percentage of times a particular value is observed in a class interval.
Relative Frequency = Frequency / n
Note that the value for the (91 to 103) class interval was deliberately rounded down to make the totals add up to exactly 100%.
From the frequency distribution, we can see that 30% of the students are on the internet between 103 and 115 minutes per day, while only 10% of students are on the internet between 67 and 79 minutes.
Example: Comparing weights of apples and oranges
A Chilean agricultural researcher collected a sample of 100 Royal Gala apples and 100 navel oranges and measured their weights in grams (see previous example on dot plots).
We will start with a value of 100 and make the interval width equal to 30. Using the tally feature of Minitab, we can create a frequency distribution for the two fruits. Minitab uses “Count” for “Frequency” and reports “Percent” for “Relative Frequency”.
The most frequently occurring interval for apples is 220 to 250 grams while the most frequently occurring interval for oranges is 280 to 310 grams. Notice that there are some intervals with 0 observations, showing a potential high outlier for apples and a low outlier for oranges.
A histogram is a graph of grouped rectangles where the vertical axis is frequency or relative frequency and the horizontal axis shows the endpoints of the class intervals. The area of each rectangle is proportional to the frequency or relative frequency of the class interval represented.
Example: Students browsing the web
In the earlier example of 30 students browsing the web, we made 5 class intervals of the data. The first histogram below shows frequency on the vertical axis, while the second shows relative frequency. Note that the shape of each graph is identical; all that is different is the scaling of the vertical axis.
Like the stem and leaf diagram, the histogram allows us to interpret and analyze the data. The center is around 100 minutes. The spread between the highest and lowest numbers is about 60 minutes. The shape is slightly skewed negative. The data clusters towards the center and there doesn’t seem to be any unusual features like outliers.
Example: Comparing weights of apples and oranges
First, let’s make histograms of the apples and oranges separately.
For the apples, the center is around 250 grams and for the oranges the center is around 280 grams, meaning the oranges appear slightly heavier. For both apples and oranges, the range is about 360 grams from the minimum to the maximum values. Both graphs seem approximately symmetric. The apples have one value that is unusually high, and the oranges have one value that is unusually low.
Another way of comparing apples and oranges is to combine them into a single graph, also called a grouped histogram.
Here, the histograms are laid on top of each other: the light blue and purple bars match the histogram of the apples, and the light red and purple bars match the histogram of the oranges. Here it is easier to see that oranges, in general, weigh more than apples.
2.5.05: Cumulative Frequency and Relative Frequency
The cumulative frequency of a class interval is the count of all data values less than the right endpoint. The cumulative relative frequency of a class interval is the cumulative frequency divided by the sample size.
Definition: Cumulative Relative Frequency
n = sample size ‐ The number of observations in your sample.
Cumulative Frequency ‐ the number of times a particular value is observed in a class interval or in any lower class interval.
Cumulative Relative Frequency ‐ The proportion or percentage of times a particular value is observed in a class interval or in any lower class interval.
Cumulative Relative Frequency = Cumulative Frequency / n
Example: Students browsing the web
Let's again return to the data that represents how much time 30 students spent on a web browser in a 24 hour period. Data is rounded to the nearest minute. Earlier we had made a frequency distribution and so we will now add columns for cumulative frequency and cumulative relative frequency.
Note that the last class interval will always have a cumulative relative frequency of 100% of the data.
Some possible ways to interpret cumulative relative frequency: 83.3% of the students are on the internet less than 115 minutes.
The middle value (median) of the data occurs in the interval 91 to 103 minutes since 53.3% of the students are on the internet less than 103 minutes.
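Cumulative columns are just running totals, so they are easy to add in code. A minimal sketch, assuming the frequencies from the earlier table:

```python
import numpy as np

freq = np.array([3, 5, 8, 9, 5])      # frequencies of the five class intervals
right_endpoints = [79, 91, 103, 115, 127]
n = freq.sum()                         # sample size, 30

cum_freq = np.cumsum(freq)             # running total of frequencies
cum_rel_freq = cum_freq / n            # cumulative relative frequency

for x, cf, crf in zip(right_endpoints, cum_freq, cum_rel_freq):
    print(f"less than {x} minutes: {cf} students ({crf:.1%})")
```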
Example: Comparing weights of apples and oranges
The tally feature of Minitab can also be used to find cumulative relative frequencies (called cumulative counts and percentages here):
Cumulative relative frequency can also be used to find percentiles of quantitative data. A percentile is the value of the data below which a given percentage of the data fall.
In our example 280 grams would represent the 69th percentile for apples since 69% of apples have weights lower than 280 grams. The 68th percentile for oranges would be 310 grams since 68% of oranges weigh less than 310 grams.
2.5.06: Using Ogives to find Percentiles
The table of cumulative relative frequencies can be used to find percentiles for the endpoints. One method of estimating other percentiles of the data is by creating a special graph of cumulative relative frequencies, called an Ogive.
An Ogive is a line graph where the vertical axis is cumulative relative frequency and the horizontal axis is the value of the data, specifically the endpoints of the class intervals. The left endpoint of the first class interval is plotted with a cumulative relative frequency of zero. Every other point is plotted at the right endpoint of its corresponding class interval. The points are then connected by line segments.
The graph can then be read to find any percentile desired. For example, the 25th, 50th and 75th percentiles break the data into equal fourths and are called quartiles.
Definition: Percentile
Percentile ‐ the value of the data below which a given percentage of the data fall.
The 25th percentile is also known as the 1st Quartile.
The 50th percentile is also known as the 2nd Quartile or median.
The 75th percentile is also known as the 3rd Quartile
Example: Students browsing the web
We can refer to the cumulative relative frequency table shown in the prior example to make the Ogive shown here.
Using the graph, we can estimate the quartiles of the distributions by where the line graph crosses cumulative relative frequency values of 0.25, 0.50 and 0.75.
The 1st Quartile is about 87 minutes.
The median is about 100 minutes.
The 3rd Quartile is about 108 minutes.
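Reading a percentile off an ogive amounts to linear interpolation between the plotted points, which can be mimicked in code. Here is a sketch, assuming the cumulative relative frequencies computed earlier; note that quartiles estimated this way, from the interval endpoints alone, can differ by a few minutes from both the values read off the printed graph and the exact quartiles computed from the raw data.

```python
import numpy as np

# Ogive points: the left endpoint of the first interval gets 0,
# every other point uses the right endpoint of its class interval
endpoints    = [67, 79, 91, 103, 115, 127]
cum_rel_freq = [0.0, 0.100, 0.267, 0.533, 0.833, 1.000]

for p in (0.25, 0.50, 0.75):
    # Invert the ogive: find the x-value where the line graph crosses height p
    estimate = np.interp(p, cum_rel_freq, endpoints)
    print(f"estimated {p:.0%} percentile: about {estimate:.0f} minutes")
```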
Example: Comparing weights of apples and oranges
For the cumulative relative frequencies of the weights of apples and oranges, we can put both ogives on a single graph and estimate the quartiles.
Line Graphs with time
The ogive is an example of a line graph. A very useful line graph is one in which time is the horizontal axis. An early example of this type of line graph, from Section 1.1, is the graph of historical crime rates; it shows that violent crime has decreased over time.
Example: Major Hurricanes in the Atlantic Ocean
In a one month period in 2017, four major hurricanes (category 3 or higher) formed in the Atlantic Ocean. Three of these hurricanes did devastating and costly damage to regions of the United States: Hurricane Harvey in Texas, Hurricane Irma in Florida and Hurricane Maria in Puerto Rico and the Virgin Islands. There was also catastrophic damage from these storms in Cuba, Dominica and other Caribbean countries, islands, and territories.
A Google Analytics graph shows that much more attention was paid to Hurricane Irma throughout the days it was threatening Florida.22
However, Google Analytics excludes Puerto Rico which took a direct hit from Hurricane Maria. It could also be that after Harvey caused massive flooding in and near Houston, more people became interested in all hurricane activity.
In the prior section, methods of organizing data into tables and graphs were shown as a way of analyzing the data. By observing graphs, we can describe the central tendency (center), the variability (spread), shape (skewness) and unusual features (outliers) of the data. In this section, we will explore statistics that can be calculated from the data and that can help describe and analyze the data.
03: Descriptive Statistics
Let’s start this section with an example and a multiple choice question:
Example: Pizza delivery
Anthony’s Pizza, a Detroit based company, offers pizza delivery to its customers. A driver for Anthony’s Pizza will often make several deliveries on a single delivery run. A sample of 5 delivery runs by a driver showed the total number of pizzas delivered on each run:23
2 2 5 9 12
What is the “average” number of pizzas sent out on a delivery run?
1. 2 pizzas
2. 5 pizzas
3. 6 pizzas
Pick what you think is the answer and we will return to this example and discuss the answer at the end of this section.
Sample Mean
The sample mean is the arithmetic average of the data values. You simply add up all the numbers and divide by the sample size. The symbol $\bar{X}$ (pronounced X‐bar) refers to the sample mean.
Definition: Sample Mean
If $X_{1}, X_{2}, \cdots, X_{n}$ represents a sample of size $n$, then the sample mean is:
$\bar{X}=\dfrac{X_{1}+X_{2}+\cdots+X_{n}}{n}=\dfrac{\sum X_{i}}{n} \nonumber$
For the Example - Pizza delivery data, the sample mean is $\bar{X}=\dfrac{2+2+5+9+12}{5}=6$ pizzas.
Sample Median
The sample median is the value that represents the exact middle of data, when the values are sorted from lowest to highest.
Procedure for finding the sample median
1. Sort the data values from lowest to highest.
2. If there is an odd number of values, the sample median is the middle value. $\text { The median of }\{1,3,8,13,14\} \text { is } 8 \nonumber$
3. If there is an even number of values, the sample median is the mean of the 2 middle values $\text { The median of }\{1,3,8,10,13,14\} \text { is } \dfrac{8+10}{2}=9 \nonumber$
Example: Pizza delivery
For the pizza delivery data {2, 2, 5, 9, 12}, the sample median is 5 pizzas (the middle value).
Example: Home prices in a single neighborhood
Here are the selling prices of 6 homes in the same neighborhood in Antioch, California24:
$500,000   $550,000   $600,000   $700,000   $700,000   $1,950,000
The sample mean is $833,333 (add up the six values and divide by 6). The sample median is $650,000 (the mean of the two middle values, $600,000 and $700,000).
Which of the two values is a better measure of the “average” home in this neighborhood?
Here the sample median is a better measure of center, because $650,000 better represents a typical home in this neighborhood. The mean is not a good measure of center here because it is inflated by the outlier home, which costs $1,950,000. The median is not affected by outliers because only the position of the values matters when calculating the median.
Unlike the mean, the median (which is based on ranking instead of values) can be calculated for ordinal categorical data, but not for nominal data.
Example: Grades in a math class
In a community college algebra class, an instructor gave out the following grades to 40 students. Determine the median grade for the course.
The first step is to sort the grades from lowest to highest:
The middle values are both B’s, so the median grade is B.
Sample mode
The sample mode is the most frequently occurring value in the data. If there are multiple values that occur most frequently, then there are multiple modes in the data.
Example: Pizza delivery
For the pizza delivery data {2, 2, 5, 9, 12} , the sample mode is 2 pizzas because 2 occurs most frequently in the data.
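All three "averages" are one-liners in Python's built-in statistics module. A minimal sketch using the pizza delivery data:

```python
import statistics

pizzas = [2, 2, 5, 9, 12]

print("mean  :", statistics.mean(pizzas))    # 6
print("median:", statistics.median(pizzas))  # 5
print("mode  :", statistics.mode(pizzas))    # 2
```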
Let's now return to the original question at the beginning of this section.
What is the “average” number of pizzas sent out on a delivery run?
1. 2 pizzas
2. 5 pizzas
3. 6 pizzas
Since 2 is the mode, 5 is the median and 6 is the mean, practically speaking all 3 answers are examples of "averages". Lightbulb Books humorously calls these statistics "The Average Bears."25
Many (including some Statistics texts) will automatically assume that average is the same as mean. In general life, people will use the terms mean and average interchangeably. But in Statistics, when we use the word "average", we mean a value that represents the center of the data. There are many statistics that represent the center of the data, including the mean, median and mode.
The mode can also be used for both nominal and ordinal categorical data.
Example: Nominal data ‐ Marital status
Let's return to the sample of 500 adults (aged 18 and over) from Santa Clara County taken from the year 2000 United States Census.
The mode for this data is value with the highest frequency, "Married."
Using the mean and median to determine skewness
Skewness is a measure of how asymmetric the data values are. Data can be positively skewed (stretched to the right), negatively skewed (stretched to the left) or symmetric (no skewness). Let’s now explore what effect skewness has on measures of center with several examples.
Example: Symmetric data – Heights of men
Here is a dot plot and summary statistics of the heights in inches of 1000 men, aged 30 years
Sample mean = 68.98 inches
Sample median = 69 inches
Sample mode = 69 inches
The data values are evenly spread on the right and left of the peak. When data are symmetric, the mean, median and mode are about the same.
Example: Positively skewed data – Redwood trees
Here is a dot plot and summary statistics of the age of 1000 redwood trees sampled in California parks.
Sample mean = 237.48 years
Sample median = 180 years
Sample mode = 100 years
The data values are stretched to the right of center, causing the mean to be greater than the median. Also, the median will usually be greater than the mode for positively skewed data.
Example: Negatively skewed data – Exam grades
Here is a dot plot and summary statistics of the percentage grade of 1000 midterm exams given by a math instructor to algebra students.
Sample mean = 76.21
Sample median = 80
Sample mode = 91
The data values are stretched to the left of center, causing the mean to be less than the median. Also, the median will usually be less than the mode for negatively skewed data.
Using the mean and median to find skewness in data26
Example: Students browsing the web
From a prior example, this stem and leaf graph represents how much time 30 students spent on a web browser (on the Internet) in a 24 hour period. Data is rounded to the nearest minute.
$\begin{array}{ll} 6 & 7 \\ 7 & 18 \\ 8 & 25677 \\ 9 & 25799 \\ 10 & 01233455789 \\ 11 & 268 \\ 12 & 245 \end{array} \nonumber$
The sample median is 101.5 minutes, since the 15th observation is 101 and the 16th observation is 102.
Since the data is skewed negative, we would expect the sample mean to be less than the sample median.
Adding up the values and dividing by 30, we calculate that the sample mean is 99.6 minutes, which is less than the sample median, consistent with data values that are negatively skewed.
Note that the mode is not helpful in this example since the sample size is small.
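One way to double-check these values is to expand the stem-and-leaf plot back into raw data in code. A minimal sketch (the stems are tens of minutes, the leaves single digits):

```python
import statistics

# Stem-and-leaf plot: each stem pairs with a string of single-digit leaves
stem_leaf = {6: "7", 7: "18", 8: "25677", 9: "25799",
             10: "01233455789", 11: "268", 12: "245"}

minutes = [10 * stem + int(leaf)
           for stem, leaves in stem_leaf.items()
           for leaf in leaves]

print("n      =", len(minutes))                        # 30
print("mean   =", round(statistics.mean(minutes), 1))  # 99.6
print("median =", statistics.median(minutes))          # 101.5, so mean < median
```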
When analyzing data, it is also important to describe the spread or variability of the data.
Example: Comparing high temperatures between San Francisco and St. Louis
Here are the daily high temperatures for every day in 2016 for the cities of San Francisco and St. Louis.272829
Even though both cities seem to have approximately the same center, it’s obvious that the spread of daily high temperatures in San Francisco is much lower than it is in St. Louis. San Francisco temperatures are mostly mild all year long, while St. Louis has some very hot and very cold days. This section will explore statistics that are used to measure variability in data.
Range
Definition: Range
The easiest measure of variability to calculate is the range of the data.
Range = maximum value ‐ minimum value
Here are the extreme high temperatures in 2016 for San Francisco and St. Louis.
The range for San Francisco high temperatures is about half of the range for St. Louis.
Example: Students browsing the web
Let’s return to the example of daily minutes spent on the internet by 30 students and find the difference of the two most extreme values.
Range = 125 ‐ 67 = 58 minutes
The advantage of the range is that it is easy to calculate. The main disadvantage is that the range uses only two data points and is extremely affected by outliers. For example, on September 1, 2017 San Francisco set an all‐time high temperature record of 106˚F! If this had occurred in 2016, an outlier of 106˚F would have changed the range for San Francisco from 42˚F to 56˚F. Therefore, statisticians prefer measures of variability that use all the data, not simply the two extreme values.
Variance and Standard Deviation
Statisticians wanted to develop a measure of spread that showed variability with respect to the center of the data, call it an "average deviation from center". This section will explore deviations from the sample mean and a later section will explore variability with respect to the sample median.
Example: Pizza delivery
Let's return to the Anthony's Pizza example, in which a sample of 5 delivery runs by a driver showed that the numbers of pizzas delivered on each run were {2, 2, 5, 9, 12}. Recall that the sample mean $\bar{X}$ for this data was 6, so we can calculate the deviation from the sample mean for each point:
| Record number $i$ | Pizzas delivered $X_{i}$ | Deviation from mean $X_{i}-\bar{X}$ |
|---|---|---|
| 1 | 2 | 2 - 6 = -4 |
| 2 | 2 | 2 - 6 = -4 |
| 3 | 5 | 5 - 6 = -1 |
| 4 | 9 | 9 - 6 = +3 |
| 5 | 12 | 12 - 6 = +6 |
| Total $\sum$ |  | 0 |
The sum of deviations from the mean will always equal zero, so we need a way to calculate an "average" deviation from the mean. Statisticians realized the sign of the deviation doesn't really matter so they explored statistics such as the absolute value of the deviation from the mean:
| Record number $i$ | Pizzas delivered $X_{i}$ | Deviation from mean $X_{i}-\bar{X}$ | Absolute value of deviation $\lvert X_{i}-\bar{X} \rvert$ |
|---|---|---|---|
| 1 | 2 | 2 - 6 = -4 | 4 |
| 2 | 2 | 2 - 6 = -4 | 4 |
| 3 | 5 | 5 - 6 = -1 | 1 |
| 4 | 9 | 9 - 6 = +3 | 3 |
| 5 | 12 | 12 - 6 = +6 | 6 |
| Total $\sum$ |  | 0 | 18 |
Dividing by the sample size, we can find the "average absolute deviation from the mean" to be 18/5 = 3.6 pizzas. For reasons that will be explained in a later section, this measure was not found to be ideal.
Another method of eliminating negative signs from data is to square the numbers, since any negative numbers raised to an even power will become positive.
| Record number $i$ | Pizzas delivered $X_{i}$ | Deviation from mean $X_{i}-\bar{X}$ | Squared deviation from the mean $\left(X_{i}-\bar{X}\right)^{2}$ |
|---|---|---|---|
| 1 | 2 | 2 - 6 = -4 | 16 |
| 2 | 2 | 2 - 6 = -4 | 16 |
| 3 | 5 | 5 - 6 = -1 | 1 |
| 4 | 9 | 9 - 6 = +3 | 9 |
| 5 | 12 | 12 - 6 = +6 | 36 |
| Total $\sum$ |  | 0 | 78 |
The quantity $\sum\left(X_{i}-\bar{X}\right)^{2}=78$ is called the sum of squared deviations from the mean. To calculate an "average" square deviation, it is best for the sum of squared deviations to be divided by $n‐1$ instead of by $n$ ($n$ is the sample size). This statistic is called the sample variance and referred to by the symbol $s^2$ .
$\text { Sample Variance: } \quad s^{2}=\dfrac{\sum\left(X_{i}-\bar{X}\right)^{2}}{n-1} \nonumber$
You might be asking “Since this is an average of squared deviations, why are we dividing by $n‐1$ instead of by $n$?” The reason is that $\bar{X}$, the sample mean, uses the same data $X_{1}, X_{2}, \cdots, X_{n}$ so you can show mathematically that you only need to know $n‐1$ points plus the sample mean to determine the sample variance. In statistics this is called $n‐1$ degrees of freedom, and they will be explored in a later section.
For the pizza data, the sample variance is: $s^{2}=\dfrac{78}{5-1}=19.5$
Although the sample variance uses all the data and measures variability from the mean, squaring the deviations also squares the units of the statistic. In our example, the sample variance is 19.5 pizzas‐squared. To solve this problem, we can simply take the square root of the variance to return to the original units. This statistic is called the sample standard deviation and is represented by the symbol $s$.
$\text { Sample Standard Deviation: } \quad s=\sqrt{\dfrac{\sum\left(X_{i}-\bar{X}\right)^{2}}{n-1}} \nonumber$
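Python's statistics module uses the $n-1$ (sample) formulas by default, so the hand computation above can be verified directly. A minimal sketch:

```python
import statistics

pizzas = [2, 2, 5, 9, 12]

s2 = statistics.variance(pizzas)   # sum of squared deviations / (n-1) = 78/4
s  = statistics.stdev(pizzas)      # square root of the variance

print(f"sample variance = {s2} pizzas-squared")        # 19.5
print(f"sample standard deviation = {s:.2f} pizzas")   # about 4.42
```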
Example: Comparing high temperatures between San Francisco and St. Louis
Calculating the variance and standard deviation manually is tedious, so we will use technology to calculate summary statistics for 2016 daily high temperatures in San Francisco and St. Louis.
The means and medians show that on average St. Louis is somewhat warmer than San Francisco. The variances and standard deviations show that there is much more variability in high temperatures for St. Louis, consistent with the dot plot shown at the beginning of this section.
Interpreting the Standard Deviation
A student once asked me about the distribution of scores from a statistics midterm after she saw her score of 82 out of 100. I told her the distribution of test scores had a mean score of 70 and a standard deviation of 10. Most people would have an intuitive grasp of the mean score as being the “average student’s score” and would say this student did better than average. However, having an intuitive grasp of standard deviation is more challenging. Fortunately, there is a tool to help us.
The Empirical Rule (68 – 95 – 99.7 Rule)
The Empirical Rule is a helpful tool in explaining standard deviation if you have data that is clustered towards the mean and not heavily skewed.
The standard deviation is a measure of variability or spread from the center of the data as defined by the mean.
Note
The Empirical Rule states that for bell‐shaped data:
• 68% of the data is within 1 standard deviation of the mean.
• 95% of the data is within 2 standard deviations of the mean.
• 99.7% of the data is within 3 standard deviations of the mean.
Here is an interpretation of the exam grades for the class in which the sample mean was 70 and the standard deviation was 10 using the Empirical Rule.
The student who scored an 82 would be in the upper 16% of the class, more than one standard deviation above the mean score.
Example: Students browsing the web
Let’s return to the example of daily minutes spent on the internet by 30 students and use the empirical rule to find values between which 68%, 95% and 99.7% of the data lie. Compare these results to the actual results from the data.
Recall that the shape of this data is slightly skewed, but the data values cluster to the center. Let’s see how close the Empirical Rule is to actual results.
To use the Empirical Rule, we need to first calculate the sample mean and standard deviation.
$\bar{X}=99.6 \quad s=14.7 \nonumber$
The Empirical Rule says that about 68% of the data is within 1 standard deviation of the mean, between 84.9 and 114.3 minutes. The actual result for the data is 21/30 or 70% of the data.
The Empirical Rule says that about 95% of the data is within 2 standard deviations of the mean, between 70.2 and 129.1 minutes. The actual result for the data is 29/30 or 96.7% of the data.
The Empirical Rule says that about 99.7% of the data is within 3 standard deviations of the mean, between 55.5 and 143.8 minutes. The actual result for the data is 30/30 or 100% of the data.
So even though the time on internet data has some negative skewness, the actual percentages of data within 1, 2 and 3 standard deviations of the mean are close to the percentages from the Empirical Rule.
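The Empirical Rule percentages can be checked against any data set by simply counting. A sketch, assuming the 30 web-browsing values from earlier:

```python
import statistics

minutes = [67, 71, 78, 82, 85, 86, 87, 87, 92, 95, 97, 99, 99, 100,
           101, 102, 103, 103, 104, 105, 105, 107, 108, 109, 112,
           116, 118, 122, 124, 125]

xbar = statistics.mean(minutes)
s = statistics.stdev(minutes)

for k in (1, 2, 3):
    lo, hi = xbar - k * s, xbar + k * s
    inside = sum(lo <= x <= hi for x in minutes)
    print(f"within {k} sd ({lo:.1f} to {hi:.1f} minutes): "
          f"{inside}/{len(minutes)} = {inside/len(minutes):.1%}")
```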
Using the range to estimate sample standard deviation.
The Empirical Rule also gives a very quick rule for making a rough estimate of the standard deviation.
Rough estimate of Sample Standard Deviation using Range
For small sample sizes (between 15 and 70): s ≈ Range/4
For intermediate sample sizes (between 70 and 500): s ≈ Range/5
For large sample sizes (over 500): s ≈ Range/6
Example: Students browsing the web
In the prior example of time spent on the Internet by 30 students, we determined the Range to be 58. Using this rule, we would estimate the sample standard deviation to be 58/4 = 14.5 minutes. This rough estimate is actually quite close to the calculated sample standard deviation of 14.7 minutes.
This rule should not be used to determine the actual standard deviation, but can be used to check the reasonableness of a calculated or presented sample standard deviation.
A student receives a score of 82 on a Midterm Exam and asks the instructor, “How well did I do on the test?” To answer this question, we need statistics that measure the ranking of this grade relative to the class. These statistics are called measures of relative standing.
The z‐score
Related to the Empirical Rule is the z‐score, which measures how many standard deviations a particular data point is above or below the mean. Unusual observations would have a z‐score over 2 or under ‐2. Extreme observations would have z‐scores over 3 or under ‐3 and should be investigated as potential outliers. For a particular value from the data ($X_{i}$), we can easily calculate the z‐score for that value.
$\text { Formula for z-score: } \quad z-\text { score }=\dfrac{X_{i}-\bar{X}}{s} \nonumber$
For the student who received an 82 on the exam we can calculate the Z‐score if we know the sample mean and standard deviation for the class. Suppose for this class, the sample mean was 70 and the sample standard deviation was 10. Then for this student:
$z-\text { score }=\dfrac{82-70}{10}=+1.2 \nonumber$
The z‐score of +1.2 tells us the student's score was well above average, but not highly unusual.
Interpreting z‐scores for several students:

| Exam Score | z‐score | Interpretation |
|---|---|---|
| 82 | +1.2 | well above average |
| 66 | -0.4 | slightly below average |
| 94 | +2.4 | unusually above average |
| 34 | -3.6 | extremely below average |
Example: Comparing apples to oranges
The sample mean for 100 Fuji apples was 252 grams and the standard deviation was 55 grams. The sample mean for 100 Navel oranges was 286 grams and the standard deviation was 67 grams. What would be more unusual: a small apple that weighed 130 grams or a large orange that weighed 430 grams?
Solution
Some people might say “The small apple is 122 grams below the mean and the large orange is 144 grams above the mean so the orange is more unusual”, but this does not take into account the spread of weights for apples and oranges. Instead, we should determine which z‐score is further from zero.
z‐score for apple = (130 – 252)/55 = ‐2.22
z‐score for orange = (430 – 286)/67 = +2.15
The small apple is slightly more unusual than the large orange because ‐2.22 is further from zero.
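Because the z‐score is a single formula, the comparison takes only a few lines of code. A minimal sketch:

```python
def z_score(x, mean, sd):
    """How many standard deviations x lies above (+) or below (-) the mean."""
    return (x - mean) / sd

print(f"small apple : z = {z_score(130, 252, 55):+.2f}")   # about -2.22
print(f"large orange: z = {z_score(430, 286, 67):+.2f}")   # about +2.15
```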
Percentile, Quartiles and the Interquartile Range
In an earlier section, we explored how we can use the ogive graph to calculate percentiles and quartiles for data. This section will introduce the percentile as a measure of relative standing.
Definition: pth Percentile
$p^{th}$ Percentile ‐ the value of the data below which p percent of the data fall.
To calculate the location of the $p^{th}$ percentile in a sample of size $n$, use the formula:
$p^{\text {th }} \text { percentile location }=p(n+1) \nonumber$
The $25^{th}$ percentile is also known as the 1st Quartile or Q1
The $50^{th}$ percentile is also known as the 2nd Quartile or median
The $75^{th}$ percentile is also known as the 3rd Quartile or Q3
Example: Students browsing the web
Let’s again return to the example of daily minutes spent on the internet by 30 students and use the percentile location formula to find the 70th percentile.
Solution
Location of $70^{th}$ percentile = 0.70(30+1) = 21.7 ≈ 22nd location
$70^{th}$ percentile ≈ 107 minutes.
For a more accurate calculation, you can use linear interpolation on the fractional part of 21.7 by adding 30% of the 21st value to 70% of the 22nd value.
$70^{th}$ percentile = (0.3)(105) + (0.7)(107) = 106.4 minutes
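The location formula with linear interpolation can be written as a short function. A sketch, assuming the 30 web-browsing values (p is given as a proportion, so the 70th percentile is p = 0.70):

```python
def percentile(data, p):
    """p-th percentile using the location formula p(n+1) with linear interpolation."""
    values = sorted(data)
    loc = p * (len(values) + 1)    # e.g. 0.70 * 31 = 21.7
    k = int(loc)                   # whole part: the 21st value
    frac = loc - k                 # fractional part: 0.7
    if k < 1:
        return values[0]
    if k >= len(values):
        return values[-1]
    return (1 - frac) * values[k - 1] + frac * values[k]

minutes = [67, 71, 78, 82, 85, 86, 87, 87, 92, 95, 97, 99, 99, 100,
           101, 102, 103, 103, 104, 105, 105, 107, 108, 109, 112,
           116, 118, 122, 124, 125]

print(percentile(minutes, 0.70))   # about 106.4, matching the hand calculation
```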
There is an alternative method to find the quartiles of data.
1. Find the median (2nd quartile). The median divides the data in half.
2. Q1 (1st quartile) will be the median of the first half of the data
3. Q3 (3rd quartile) will be the median of the second half of the data.
Example: Students browsing the web
Find the three quartiles for this data.
Solution
Median = (101 +102)/2 = 101.5
Q1 = 1st quartile = 87
Q3 = 3rd quartile = 108
Interquartile Range
Definition: Interquartile Range (IQR)
A measure of variability based on the ranking of the data is called the Interquartile Range (IQR), which is the difference between the third quartile and the first quartile. The IQR represents the range of the middle 50% of the data and represents variability of the data with respect to the median.
Example: Students browsing the web
Find and explain the interquartile range for this data
Solution
IQR = 108 - 87 = 21 minutes
The middle 50% of the observations are between 87 and 108 minutes.
3.04: Box Plots (Box and Whisker Plot)
The box plot was created to represent the 3 quartiles (Q1, median and Q3) along with the minimum and maximum values of the data. These values are also called the Five Point Summary of the data. Let's start with a box plot of data with no outliers.
Steps for making a box plot (no outliers)
1. Draw the box between Q1 and Q3
2. Accurately plot the median
3. Draw whiskers to minimum and maximum values
Each section of the box plot represents 25% of the data. Box plots can be drawn horizontally or vertically.
Example: Students browsing the web
Let’s again return to the example of daily minutes spent on the internet by 30 students. Find the five point summary, create a box plot and interpret the graph.
Solution
Five point Summary:
Minimum = 67
Q1=87
Median = 101.5
Q3=108
Maximum = 125
Here are box plots representing these data values horizontally and vertically.
You can choose either method to make a box plot.
The center as represented by the median is 101.5 minutes.
The spread as measured by the range is 58 minutes.
The spread as measured by the IQR is 21 minutes (the middle 50% of the data).
The data values are negatively skewed from the median.
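Plotting libraries can draw box plots directly from the raw data. A hedged sketch with matplotlib: `whis=(0, 100)` extends the whiskers to the minimum and maximum (matplotlib's default instead uses the 1.5 IQR fences covered in the next section), and its quartile calculation may differ slightly from the median-of-halves method used above.

```python
import matplotlib.pyplot as plt

minutes = [67, 71, 78, 82, 85, 86, 87, 87, 92, 95, 97, 99, 99, 100,
           101, 102, 103, 103, 104, 105, 105, 107, 108, 109, 112,
           116, 118, 122, 124, 125]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.boxplot(minutes, vert=False, whis=(0, 100))   # horizontal box plot
ax1.set(title="Horizontal", xlabel="Minutes on web browser")
ax2.boxplot(minutes, vert=True, whis=(0, 100))    # vertical box plot
ax2.set(title="Vertical", ylabel="Minutes on web browser")
plt.show()
```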
An outlier is a data point that is far removed from the other entries in the data set. Outliers could be caused by:
• Mistakes made in recording data
• Data that don’t belong in the population
• True rare events
The first two cases are simple to deal with as we can correct errors or remove data that does not belong in the population. The third case is more problematic as extreme outliers will increase the standard deviation dramatically and heavily skew the data.
In The Black Swan, Nicholas Taleb argues that some populations with extreme outliers should not be analyzed with traditional confidence intervals and hypothesis testing.30 He defines a Black Swan as an unpredictable extreme outlier that causes dramatic effects on the population. A recent example of a Black Swan was the catastrophic drop in the value of unregulated Credit Default Swap (CDS) real estate insurance investments, which caused the near collapse of the international banking system in 2008. The traditional statistical analysis that measured the risk of the CDS investments did not take into account the consequence of a rapid increase in the number of foreclosures of homes. In this case, statistics that measure investment performance and risk were useless and created a false sense of security for large banks and insurance companies.
Example: Realtor home sales
Here are the quarterly home sales for 10 realtors: 2 2 3 4 5 5 6 6 7 50
| Statistic | With outlier | Without outlier |
|---|---|---|
| Mean | 9.00 | 4.44 |
| Median | 5.00 | 5.00 |
| Standard Deviation | 14.51 | 1.81 |
| Interquartile Range | 3.00 | 3.50 |
In this example, the number 50 is an outlier. When calculating summary statistics, we can see that the mean and standard deviation are dramatically affected by the outlier, while the median and the interquartile range (which are based on the ranking of the data) are hardly changed. One solution when dealing with a population with extreme outliers is to use inferential statistics that use the ranks of the data, also called non‐parametric statistics.
Using Box Plots to find outliers
• The “box” is the region between the 1st and 3rd quartiles.
• Possible outliers are more than 1.5 IQR’s from the box (inner fence)
• Probable outliers are more than 3 IQR’s from the box (outer fence)
• In the box plot below of the realtor example, the dotted lines represent the inner and outer “fences” that are 1.5 and 3 IQR’s respectively from the box. See how the data point 50 is well outside the outer fence and therefore an almost certain outlier.
• The whiskers now end at the most extreme value that is NOT a possible outlier.
Lower Inner Fence = Q1 – (1.5)IQR = 3 – (1.5)(3) = ‐1.5
Lower Outer Fence = Q1 – (3)IQR = 3 – (3)(3) = ‐6
Upper Inner Fence = Q3 + (1.5)IQR = 6 + (1.5)(3) = 10.5
Upper Outer Fence = Q3 + (3)IQR = 6 + (3)(3) = 15
Since the value 50 is far beyond the outer fence of 15, 50 is an extreme outlier.
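The fences are simple arithmetic once the quartiles are known, so the outlier check is easy to script. A minimal sketch using the realtor data and the quartiles found by the median-of-halves method:

```python
data = [2, 2, 3, 4, 5, 5, 6, 6, 7, 50]
q1, q3 = 3, 6                  # quartiles of the realtor data
iqr = q3 - q1

inner = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)   # (-1.5, 10.5)
outer = (q1 - 3.0 * iqr, q3 + 3.0 * iqr)   # (-6.0, 15.0)

possible = [x for x in data if not inner[0] <= x <= inner[1]]
probable = [x for x in data if not outer[0] <= x <= outer[1]]

print("possible outliers (beyond inner fences):", possible)   # [50]
print("probable outliers (beyond outer fences):", probable)   # [50]
```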
Steps for making a box plot (with outliers)
1. Draw the box between Q1 and Q3
2. Accurately plot the median
3. Determine possible outliers that are more than 1.5 interquartile ranges from the box.
Lower Inner Fence = Q1 – (1.5)IQR
Upper Inner Fence = Q3 + (1.5)IQR
4. Mark outliers with a special character like a * or •.
5. Draw whiskers to the minimum and maximum values that are not possible outliers.
(note: boxplot below not drawn to scale)
Example: Comparing apples to oranges
Using the summary statistics, make side‐by side box plots of the weights of 100 Fuji apples and 100 Navel oranges. Analyze and interpret the graphs, including outliers.
Summary Statistics:
| Variable | Fruit | N | Minimum | Q1 | Median | Q3 | Maximum | IQR |
|---|---|---|---|---|---|---|---|---|
| weights | apples | 100 | 118.00 | 210.00 | 248.00 | 291.50 | 435.00 | 81.50 |
| weights | oranges | 100 | 122.00 | 237.25 | 283.50 | 333.50 | 458.00 | 96.25 |
Solution
Oranges have a higher median weight compared to apples.
The IQR is slightly larger for oranges.
Both fruits have graphs that are mostly symmetric.
The apple that weighs 435 grams is a possible outlier since its weight exceeds the Upper Inner Fence = 291.50 + 1.5(81.5) ≈ 414.
The next highest apple weight is 365 grams.
Using the z‐score to find outliers
The z‐score can also be used to find outliers, but care must be taken since the mean and standard deviation are affected by outliers. One strategy is to remove the outlier before calculating these statistics.
Procedure for using z‐score to find outliers
1. Calculate the sample mean and standard deviation without the suspected outlier.
2. Calculate the Z‐score of the suspected outlier: $z-\text { score }=\dfrac{X_{i}-\bar{X}}{s}$
3. If the Z‐score is more than 3 or less than ‐3, that data point is a probable outlier.
Example: Realtor home sales
Determine if 50 is an outlier.
Solution
Determine the sample mean and standard deviation excluding the value 50. $\bar{X}=4.44 \quad s=1.81 \nonumber$
Determine the z‐score for 50. $z-\text { score }=\dfrac{50-4.44}{1.81}=25.2 \nonumber$
Since 25.2 is much greater than 3, the value 50 is an extreme outlier.
Outliers, what to do?
There is no clear answer what to do about legitimate outliers. Do we remove them or leave them in?
For some populations, outliers don’t dramatically change the overall statistical analysis. Example: the tallest person in the world will not dramatically change the mean height of 10000 people.
However, for some populations, a single outlier will have a dramatic effect on statistical analysis (called “Black Swan” by Nicholas Taleb31), and inferential statistics may be invalid in analyzing these populations. Example: the richest person in the world will dramatically change the mean wealth of 10000 people.
In statistics, bivariate data means two variables or measurements per observation. For purposes of this section, we will assume both measurements are numeric data. These variables are usually represented by the letters X and Y.
Example: Sunglasses sales and rainfall
A company selling sunglasses determined the units sold per 1000 people and the annual rainfall in 5 cities.
X = rainfall in inches
Y = sales of sunglasses per 1000 people.
| X | Y |
|---|---|
| 10 | 40 |
| 15 | 35 |
| 20 | 25 |
| 30 | 25 |
| 40 | 15 |
In this example there are two numeric measurements for each of the five cities.
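As a quick illustration, here is a minimal matplotlib sketch that plots the sunglasses data as a scatterplot, the type of graph defined in the next section:

```python
import matplotlib.pyplot as plt

rainfall = [10, 15, 20, 30, 40]   # X: annual rainfall in inches
sales    = [40, 35, 25, 25, 15]   # Y: sunglasses sold per 1000 people

plt.scatter(rainfall, sales)
plt.xlabel("Annual rainfall (inches)")
plt.ylabel("Sunglasses sold per 1000 people")
plt.title("Sunglasses sales vs. rainfall in 5 cities")
plt.show()
```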
3.06: Bivariate Data
A scatterplot is a useful graph for looking for relationships between two numeric variables. This relationship is called correlation. When performing correlation analysis, ask these questions:
1. What is the direction of the correlation?
2. What is the strength of the correlation?
3. What is the shape of the correlation?
Example: Cucumber yield and rainfall
This scatterplot represents randomly collected data on growing season precipitation and cucumber yield. It is reasonable to suggest that the amount of water received on a field during the growing season will influence the yield of cucumbers growing on it.32
Solution
Direction: Correlation is positive, yield increases as precipitation increases.
Strength: There is a moderate to strong correlation.
Shape: Mostly linear, but there may be a slight downward curve in yield as precipitation increases.
Example: GPA and missing class
A group of students at Georgia College conducted a survey asking random students various questions about their academic profile. One part of their study was to see if there is any correlation between various students’ GPA and classes missed.33
Solution
Direction: Correlation, if any, is negative. GPA trends lower for students who miss more classes.
Strength: There is a very weak correlation present.
Shape: Hard to tell, but a linear fit is not unreasonable.
Example: Commute times and temperature
A mathematics instructor commutes by car from his home in San Francisco to De Anza College in Cupertino, California. For 100 randomly selected days during the year, the instructor recorded the commute time and the temperature in Cupertino at time of arrival.
Solution
Direction: There is no obvious direction present.
Strength: There is no apparent correlation between commute time and temperature.
Shape: Since there is no apparent correlation, looking for a shape is meaningless.
Other: There are two outliers representing very long commute times.
Example: Age of sugar maple trees
Is it possible to estimate the age of trees by measuring the diameters of the trunks? Data was reconstructed from a comprehensive study by the US Department of Agriculture. The researchers collected data for old growth sugar maple trees in northern US forests.34
Solution
Direction: There is a positive correlation present. Age increases as trunk size increases.
Strength: The correlation is strong.
Shape: The shape of the graph is curved downward meaning the correlation is not linear.
Example: Gun ownership and gun suicides
This scatterplot represents gun ownership and gun suicides for 73 different countries. The data is adjusted to rates per population for comparison purposes.35
Solution
Direction: There is a positive correlation present. More gun ownership means more gun suicides.
Strength: The correlation is moderate for most data.
Shape: The shape of the graph is linear for most of the data.
Other: There are a few outliers in which gun ownership is much higher. There is also an outlier with an extremely high suicide rate.
This final example demonstrates that outliers can make it difficult to read graphs. For example, the United States has the highest gun ownership rate and the highest suicide by gun rate among these countries, making the United States stand far away from the bulk of the data in the scatterplot. Montenegro had the second highest suicide by gun rate, but with a much lower gun ownership rate.
The correlation coefficient (represented by the letter $r$) measures both the direction and strength of a linear relationship or association between two variables. The value $r$ will always take on a value between ‐1 and 1. Values close to zero indicate a very weak correlation. Values close to 1 or ‐1 indicate a very strong correlation. The correlation coefficient should not be used for non‐linear correlation.
It is important to ignore the sign when determining strength of correlation. For example, $r = ‐0.75$ would indicate a stronger correlation than $r = 0.62$, since ‐0.75 is farther from zero.
We will use technology to calculate the correlation coefficient, but formulas for manually calculating $r$ are presented at the end of this section.
Interpreting the correlation coefficient ($r$)
$-1 \leq r \leq 1 \nonumber$
$r = 1$ means perfect positive correlation
$r = ‐1$ means perfect negative correlation
$r = 0$ means no correlation
The farther $r$ is from zero, the stronger the correlation
$r > 0$ means positive correlation
$r < 0$ means negative correlation
Some Examples
Example: Cucumber yield and rainfall
This scatterplot represents randomly collected data on growing season precipitation and cucumber yield.
$r= 0.871$ indicating strong positive correlation.
Example: GPA and missing class
A group of students at Georgia College conducted a survey asking random students various questions about their academic profile. One part of their study was to see if there is any correlation between various students’ GPA and classes missed.
$r= ‐0.236$ indicating weak negative correlation.
Example: Commute times and temperature
A mathematics instructor commutes by car from his home in San Francisco to De Anza College in Cupertino, California. For 100 randomly selected days during the year, the instructor recorded the commuting time and the temperature in Cupertino at time of arrival.
$r = ‐0.02$ indicating no correlation.
Calculating the correlation coefficient
Manually calculating the correlation coefficient is a tedious process, but the needed formulas and one simple example are presented here:
Formulas for calculating the correlation coefficient ($r$)
$r=\dfrac{S S X Y}{\sqrt{S S X \cdot S S Y}} \nonumber$
$S S X=\Sigma X^{2}-\dfrac{1}{n}(\Sigma X)^{2} \nonumber$
$S S Y=\Sigma Y^{2}-\dfrac{1}{n}(\Sigma Y)^{2} \nonumber$
$S S X Y=\Sigma X Y-\dfrac{1}{n}(\Sigma X \cdot \Sigma Y) \nonumber$
Example: Sunglasses sales and rainfall
A company selling sunglasses determined the units sold per 1000 people and the annual rainfall in 5 cities.
X = rainfall in inches
Y = sales of sunglasses per 1000 people.
| X | Y |
|---|---|
| 10 | 40 |
| 15 | 35 |
| 20 | 25 |
| 30 | 25 |
| 40 | 15 |
Solution
First, find the following sums:
$\sum X, \sum Y, \sum X^{2}, \sum Y^{2}, \sum X Y \nonumber$
|  | $X$ | $Y$ | $X^{2}$ | $Y^{2}$ | $XY$ |
|---|---|---|---|---|---|
|  | 10 | 40 | 100 | 1600 | 400 |
|  | 15 | 35 | 225 | 1225 | 525 |
|  | 20 | 25 | 400 | 625 | 500 |
|  | 30 | 25 | 900 | 625 | 750 |
|  | 40 | 15 | 1600 | 225 | 600 |
| $\mathbf{\Sigma}$ | 115 | 140 | 3225 | 4300 | 2775 |
Then, find $SSX$, $SSY$, $SSXY$
$\begin{array}{ll} SSX=3225-115^{2} / 5 & =580 \\ SSY=4300-140^{2} / 5 & =380 \\ SSXY=2775-(115)(140) / 5 & =-445 \end{array}$
Finally, calculate $r$
$r=\dfrac{S S X Y}{\sqrt{S S X \cdot S S Y}}=\dfrac{-445}{\sqrt{580 \cdot 380}}=-0.9479$
The correlation coefficient is ‐0.95, indicating a strong, negative correlation between rainfall and sales of sunglasses.
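In practice the correlation coefficient is computed with technology. A minimal sketch with numpy, which agrees with the hand calculation above:

```python
import numpy as np

rainfall = np.array([10, 15, 20, 30, 40])
sales    = np.array([40, 35, 25, 25, 15])

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal entry is r
r = np.corrcoef(rainfall, sales)[0, 1]
print(f"r = {r:.4f}")   # about -0.9479
```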
3.6.03: Correlation vs. Causation
One of the greatest mistakes people make in Statistics is in confusing correlation with causation.
Example: Nicolas Cage movies and drownings
A study done by law student Tyler Vigen showed a moderate to strong correlation between the number of movies Nicolas Cage releases in a year and the number of drownings in swimming pools in the same year.36
The scatterplot shows moderate positive correlation, supported by a correlation coefficient of 0.66.
What does this mean? When Nicolas Cage releases a movie, people get excited and go jump in the pool? Or maybe in a year when there are many drownings, Nicolas Cage gets inspired to release a new movie?
This is an example of a spurious correlation, a correlation that just happens by chance.
Example: Crime and police expenditures
The scatterplot shows data from all 50 states adjusted for population differences. The horizontal axis is annual police expenditures per person. The vertical axis represents reported violent crimes per 100,000 people per year.
There is a moderate positive correlation present, with a correlation coefficient of 0.547.
What does this mean? Here are possible explanations.
1. Police cost causes crime. The more money spent on police, the more crime there is. Eliminate the police to reduce crime.
2. Crime causes police cost. The more crime there is, more police get hired. High crime states need to spend more money on the police.
3. More police means more reported crimes. The data shows reported crimes, but many crimes go unreported. Having more police means more reported crimes.
4. Crime and police costs are higher in cities. States like California, Texas and Florida have major cities where all expenses are higher and there is more crime. So in this example, urbanization is the cause of both variables increasing. (This is an example of a confounding variable).
The truth is we can’t say why there is a correlation between police expenditures and violent crime. As statisticians, we can only say the variables are correlated, and we cannot support a cause and effect relationship.
In observational studies such as this, correlation does not equal causation.
Confounding (lurking) variables
A confounding or lurking variable is a variable that is not known to the researcher, but affects the results of the study.
Research has shown there is a strong, positive correlation between shark attacks and ice cream sales. There is actually a store in New York called Shark’s Ice Cream, possibly inspired by this correlation.37
A possible confounding variable might be temperature. On hot days people are more likely to swim in the ocean and are also more likely to buy ice cream.
This graph from the BBC seems to support this claim.38 Both shark attacks and ice cream sales are highest in the summer months.
In the next section, we will discuss how to design experiments that control for confounding variables.
Hopefully taking this Statistics class will help you avoid making the mistake of confusing correlation and causation. Or, maybe you already knew that, as inspired by this XKCD comic “Correlation.”39
The prior sections dealt with analyzing data. We now want to explore how data is obtained and introduce the concept of finding a representative sample, a critical component of statistical inference.
04: Populations and Sampling
A population is the entire group of individuals or objects of interest to us. In practice, it is difficult or impossible to study every individual or object in the population.
A sample is a subset of the population that we can study by collecting or gathering data.
Quantities that describe populations are called parameters. We will explore some of these values in the future chapters on random variables.
Quantities that describe samples are called statistics and were investigated in the previous chapter.
Example: Math anxiety and community college students
A large community college has about 25,000 students. In a study of 85 students from the college, it was determined that about 60 of the students have moderate or high math anxiety.
In this study, the population is all the students at this college. The sample is the 85 students whose math anxiety was measured.
A census is a sample of every individual or object in the population. It is rarely possible to effectively conduct a complete census due to unavailability of data or prohibitive costs. For example, the cost of the 2010 United States census was $13 billion to simply count people and collect basic data.40 Keep in mind that even the US census is not perfect since there is both over‐counting of some groups and under‐counting of other groups.
The major goal in Statistics is to be able to make estimates or support claims about populations based on the sample measurements, a process called statistical inference. To be able to make a valid inference, care must be taken in collecting sample data.
4.02: The Statistical Process
Statistical Inference can be thought of as a process that can be used for testing claims and making estimates.
Steps of a Statistical Process
Step 1 (Problem): Ask a question that can be answered with sample data.
Step 2 (Plan): Determine what information is needed.
Step 3 (Data): Collect sample data that is representative of the population.
Step 4 (Analysis): Summarize, interpret and analyze the sample data.
Step 5 (Conclusion): State the results and conclusion of the study.
In Step 3, we introduce the concept of a representative sample. Let’s define it here.
Definition: Representative sample
A representative sample has characteristics, behaviors and attitudes similar to the population from which the sample is selected.
Definition: Biased sample
A sample that is not representative is a biased sample.
Representative samples are necessary to make valid claims about the population. We will explore methods of obtaining representative samples in a later section.
Example: Online dating trends
In 2015, the Pew Research Center was investigating trends in online dating; this culminated in a study published in February 2016.41 Pew Research wanted to investigate a belief that Americans’ use of online dating websites and mobile applications had increased from an earlier study done in 2013, especially among younger adults.
A survey was conducted among a national sample of 2,001 adults, 18 years of age or older, living in all 50 U.S. states and the District of Columbia. Fully 701 respondents were interviewed on a landline telephone, and 1,300 were interviewed on a cell phone, including 749 who had no landline telephone. Calls were made using random digit dialing. In addition to questions about online dating, researchers collected demographic data as well (age, gender, ethnicity, etc).
The survey found that in 2015, 15% of American adults have used online dating sites and mobile apps, compared to 11% in 2013. However, for young adults aged 18‐24, the increase was dramatic: from 10% in 2013 to 27% in 2015. All age groups are summarized in the graph.
Let’s first identify the population and the sample in this study.
The population is all American adults living in all 50 states and the District of Columbia. The sample is the 2,001 adults surveyed.
In this example we can investigate how Pew Research Center followed the Steps of a Statistical Process in performing this analysis.
1: Ask a question that can be answered with sample data. Has there been an increase in American’s use of online dating in the last two years? Are these rates affected by age?
2: Determine what information is needed. The percentage of adults who are using online dating service. The age of each individual.
3: Collect sample data that is representative of the population. Since the researchers surveyed both land lines and cell phones using a random dialer, the sample should be representative of the population.
4: Summarize, interpret and analyze the sample data. 15% of American Adults have used online dating sites and mobile apps, compared to 11% in 2013. For young adults aged 18‐24, the increase was dramatic: from 10% in 2013 to 27% in 2015. Other age groups are displayed in the graph.
5: State the results and conclusion of the study. Adults are using online dating sites and mobile dating apps at increasing rates, especially younger adults.
Most studies can be categorized as an observational study or as an experiment.
Observational Studies
An observational study starts with selecting a representative sample from a population. The researcher then takes measurements from the sample, but does not manipulate any of the variables with treatments. The goal of an observational study is to interpret and analyze the measured variables, but it is not possible to show a cause and effect relationship.
Example: GPA and missing class
A group of students at Georgia College conducted a survey asking random students various questions about their academic profile. One part of their study was to see if there is any correlation between various students’ GPA and classes missed.
In this observational study, there is no attempt by the researchers to manipulate any variables. The conclusion was that there is a weak correlation between GPA and classes missed, but there is no basis for concluding that missing class lowers GPA.
Experiments
An experiment starts with a representative sample from a population. The researcher will then randomly break this sample into groups and then apply treatments in order to manipulate a variable of interest. The goal of an experiment is to find a cause and effect relationship between a random variable in the population and the variable manipulated by the researcher. If an experiment is conducted properly, the researcher can control for confounding or lurking variables and test for a placebo effect.
Example: Electronic gaming machines42
The following study was published in the Journal of Addictive Behaviors in 2012:
Electronic gaming machines (EGM) may be a particularly addictive form of gambling, and gambling speed is believed to contribute to the addictive potential of such machines. The aim of this study was to generate more knowledge concerning speed as a structural characteristic in gambling, by comparing the effects of three different bet‐to‐outcome intervals (BOI) on gamblers bet‐sizes, game evaluations and illusion of control during gambling on a computer simulated slot machine. Furthermore, the researchers investigated whether problem gambling moderates effects of BOI on gambling behavior and cognitions.
62 participants played a computerized slot machine with either fast (400 ms), medium (1700 ms) or slow (3000 ms) BOI. SOGS‐R was used to measure pre‐existing gambling problems. Mean bet size, game evaluations and illusion of control comprised the dependent variables.
Gambling speed had no overall effect on either mean bet size, game evaluations or illusion of control, but in the fast machines, at‐risk gamblers employed higher bet sizes compared to no‐risk gamblers.
The findings corroborate and elaborate on previous studies and indicate that restrictions on gambling speed may serve as a harm reducing effort for at‐risk gamblers. 43
In this experiment, the researchers controlled one variable, the speed of the electronic gaming machine. They then measured the variable they did not control, the bet size made by the problem gambler. Because the researchers controlled the experiment, they established a cause and effect relationship and concluded that the speed of these machines will increase the bet size.
Explanatory and response variables
When conducting an experiment, the goal is to show a cause and effect relationship between an explanatory variable the researcher controls and a response variable that is observed or measured.
Variables in an Experiment
Definition: Explanatory Variable
The variable that is controlled or manipulated by the researcher.
Definition: Response Variable
The variable which is being measured and is the focus of the study.
The researcher tries to answer the question: "Does the explanatory variable (cause) affect the response variable (effect)?"
Example: Blue jean tensile strength
"Denim trousers, commonly known as “blue jeans”44, have maintained their popularity for many years. For the purpose of supporting customers’ purchasing behavior and to address their aesthetic taste, companies have been trying in recent years to develop various techniques to improve the visual aspects of denim fabrics. These techniques mainly include printing on fabrics, embroidery and washing the final product. Especially, fraying certain areas of the fabric by sanding and stone washing to create designs is a popular technique. However, due to certain inconveniences caused by these procedures and in response to growing demands, research is underway to obtain a similar appearance by creating better quality and more advantageous manufacturing conditions."45
Traditionally, this extra process was done by manual cutting and stitching. A new process using a laser beam to transfer these images is being tested to see if there is a difference in tensile strength as measured in pounds per square inch (psi).
The researchers use random assignment on 40 pairs of jeans, with each group receiving 20 pairs of jeans. Each pair of jeans was then tested in 3 different places, so a total of 60 measurements were taken for the manual method and 60 measurements for the laser method.
The dot plot shows the values of each of these methods.
Based on these results, the researchers concluded that blue jeans made using the laser method were stronger than blue jeans manufactured under the manual method.
The explanatory variable is the production method (manual or laser), which is the variable that is controlled by the researcher, randomly assigning jeans into the two groups.
The response variable, which is the variable the researcher wanted to compare for each method of production, is the tensile strength of each measurement taken from the jeans.
Let's now organize this study into the steps of the statistical process.
1: Ask a question that can be answered with sample data. Is there a difference in tensile strength of denim blue jeans between the manual method and the laser method of modification?
2: Determine what information is needed. The method of production (manual or laser), The tensile strength of each sample
3: Collect sample data that is representative of the population. The researchers used random assignment to control for confounding variables, such as defects in the fabric. 60 measurements were taken for each method.
4: Summarize, interpret and analyze the sample data. Reviewing the dot plots of tensile strength under each method, both graphs have similar spread and shape, but the center for the laser method is substantially higher than the center for the manual method.
5: State the results and conclusion of the study. The laser method produces blue jeans with higher tensile strength compared to the manual method.
Placebos and Blinding
Sometimes in an experiment, a participant will respond in a positive way to a treatment with no active ingredients. This is called the placebo effect, and a treatment with no active ingredients is called a placebo.
Example: Headache Pill
A researcher for a pharmaceutical company is conducting research on an experimental drug to reduce the pain from migraine headaches. Participants with migraine headaches are randomly split into 3 groups. The first group gets the experimental drug (Treatment Group). The second group gets a placebo, a fake drug (Placebo Group). The third group gets nothing (Control Group).
The researcher found that pain was reduced for both the treatment group and the placebo group, establishing a placebo effect. The researcher must then compare the amount of pain reduction in the treatment group to the placebo group in order to determine if the treatment was effective.
The best method of conducting an experiment is to implement blinding. A single blind study is where the participant does not know whether the treatment is real or a placebo. A double blind study is where neither the administrator of the treatment nor the participant knows whether the treatment is real or a placebo.
In the headache pill example, the researcher implemented a double blind study to minimize the chance that the participant knows what type of drug is being administered.
Some experiments cannot be blinded. For example, if you wanted to study the difference in health benefits between daily 30‐minute walks and daily 30‐minute runs, it would be impossible to blind the participants, since they know the difference between a walk and a run.
4.04: Sampling Techniques
When doing research, it is critical to obtain a sample that is representative of the population. Non‐representative or biased samples will produce invalid inferences, regardless of the sample size. For example, it is far better to have a representative sample of 500 observations than a biased sample of 50,000 observations. In this section we will explore methods of sampling that have the highest chance of producing a representative sample.
A word of caution: even if you carefully attempt to create a representative sample, there is always a chance you will select a non‐representative, outlier sample. However, if you use one of these appropriate methods of sampling, there is only a small probability of selecting an outlier sample.
The best methods of sampling are those in which the probability of getting a representative sample can be calculated. These methods are called probability sampling methods. Other non‐probability sampling methods have immeasurable bias and need to be avoided when conducting research.
Probability Sampling Methods
These methods will usually produce a sample that is representative of the population. These methods are also called scientific sampling.
Simple Random Sampling46
A simple random sample is a subset of a population in which all members of the population have the same chance of being chosen and are mutually independent of each other. Think of random sampling as a raffle or lottery in which all names are put in a bowl and then some names are randomly selected.
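When the whole population can be listed, drawing a simple random sample takes only a line of code. Here is a minimal sketch in Python, using a hypothetical list of member ID numbers:

```python
import random

# Hypothetical population: ID numbers for 5,000 members
population = list(range(1, 5001))

# Draw a simple random sample of 25 members without replacement;
# every member has the same chance of being chosen.
sample = random.sample(population, k=25)
print(sorted(sample))
```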
Random samples in practice are almost impossible to obtain as it is difficult to list every member of the population.
Advantages of Simple Random Sampling:
• no possibility of bias in the sampling method
• no knowledge of population demographics needed
• easy to measure precision
Disadvantages of Simple Random Sampling:
• often impossible to conduct due to difficulty of cataloguing population
• high expense
• often less precise than a stratified sample
Example: Custom control searching
Before leaving customs at several international airports, all passengers must push a button. If the button is red, you will be required to go through an intensive search. If the button is green, you will not be searched.47 The button is totally random and has a 20% chance of being red. Passengers who are subject to the intensive search are a true simple random sample of the entire population of arriving passengers.
Systematic Sampling48
A systematic sample is a subset of the population in which the first member of the sample is selected at random and all subsequent members are chosen by a fixed periodic interval. An example would be having a list of the entire population and then taking every 3rd person on the list.
Advantages of Systematic Sampling:
• easy to design and explain
• more economical than random sampling
• avoids random clustering (several adjacent values)
Disadvantages of Systematic Sampling:
• may be biased if population is patterned or has a periodic trait
• easier for researcher to wrongly influence data
• population size needs to be known in advance
Example: Random drug testing of employees
A shipping company has approximately 20,000 employees. The company decided to administer a random drug test to 5% of the employees, a sample size of 1000. The company has a list of all employees sorted by social security number. A random number is selected between 1 and 20. Starting with that person, every subsequent 20th person is also sampled. For example, if the selected number is 16, then the company would select persons 16, 36, 56, 76, ... , 19996 for drug testing.
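The drug‐testing scheme above takes only a few lines of code. A minimal sketch in Python, with a hypothetical list of employee IDs standing in for the sorted roster:

```python
import random

def systematic_sample(population, k):
    """Pick a random start among the first k members, then every k-th after."""
    start = random.randint(0, k - 1)   # random starting position
    return population[start::k]        # fixed periodic interval

employees = list(range(1, 20001))      # hypothetical list of 20,000 IDs
sample = systematic_sample(employees, k=20)
print(len(sample))                     # 1000 employees, a 5% sample
```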
Stratified Sampling49
A stratified sample is designed by breaking the population into subgroups called strata, and then sampling so the proportion of each subgroup in the sample matches the proportion of each subgroup in the population. For example, if a population is known to be 60% female and 40% male, then a sample of 1000 people would have 600 women and 400 men.
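In code, stratified sampling amounts to proportional allocation followed by a simple random sample within each stratum. A minimal sketch in Python, with a hypothetical population matching the 60/40 example above:

```python
import random

def stratified_sample(strata, n):
    """strata: dict mapping stratum name -> list of members."""
    total = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        size = round(n * len(members) / total)   # proportional allocation
        sample.extend(random.sample(members, size))
    return sample

population = {"female": ["F%d" % i for i in range(6000)],
              "male":   ["M%d" % i for i in range(4000)]}
sample = stratified_sample(population, n=1000)   # 600 women, 400 men
print(len(sample))
```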
Advantages of Stratified Sampling:
• minimizes selection bias as all strata are fairly represented
• each subgroup receives proper representation
• high precision (low standard deviation) compared to other methods
Disadvantages of Stratified Sampling:
• high knowledge of population demographics needed
• not all populations are easily stratified
• time consuming and expensive
Example: Social media conversations about race
In 2016, Pew Research Center conducted a study to examine how people use social media such as Twitter or Facebook.50 The study focused on the content and hash tags used on people's comments about events involving racially motivated attacks by the police and differences in opinions about groups such as Black Lives Matter.
Since the study involved people's opinions about race, it was important that Pew used stratified sampling by race. Particular care was taken to make sure that there was appropriate representation in the sample from traditionally undersampled African American and Latino groups.
Cluster Sampling51
A cluster sample is created by first breaking the population into groups called clusters, and then taking a sample of clusters. An example of cluster sampling is randomly selecting several classes at a college and then sampling all the students in those selected classes.
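The class example can be sketched directly. A minimal illustration in Python (class and student names are hypothetical):

```python
import random

def cluster_sample(clusters, num_clusters):
    """Randomly choose whole clusters, then include every member of each."""
    chosen = random.sample(list(clusters), num_clusters)
    return [student for name in chosen for student in clusters[name]]

# 40 hypothetical classes of 30 students each
classes = {"class_%d" % i: ["student_%d_%d" % (i, j) for j in range(30)]
           for i in range(40)}
sample = cluster_sample(classes, num_clusters=5)
print(len(sample))   # 150 students, all from the 5 chosen classes
```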
Advantages of Cluster Sampling:
• most economical form of sampling because only clusters need to be randomized
• study can be completed in less time
• suitable for surveying populations that are broken into natural clusters
Disadvantages of Cluster Sampling:
• sample may not be as diverse as population
• clusters may have a similar bias, causing sample to be biased
• less precision (higher standard deviation)
Example: Police attitudes
In 2017, Pew Research Center conducted a survey of 8000 police officers called Behind the Badge. 52 The goal was to draw on the attitudes and experiences of police officers especially in light of highly publicized and controversial killings of Black Americans by the police.
To conduct this survey, the researchers had to select police departments throughout the country that they felt were representative of the population of departments. Then they surveyed police officers in those departments. One potential problem reported by the researchers was that only police departments with at least 100 officers were sampled. This is an example of the potential similarity bias that sometimes arises in cluster sampling.
Example: Student homelessness53
The Bill Wilson Center of Santa Clara County provides social services for children, teens and adults. In 2017, the center conducted a study documenting homeless youth populations, surveying both high school students and community college students.54
For community college students, the researchers chose two community colleges from the eight in Santa Clara County and surveyed students from Winter 2017 to Spring 2017. One finding was that a staggering 44% of community college students surveyed at these two colleges reported that they were homeless. (Homeless in this study means living on the street, living in cars, or couch surfing).
This study is an example of cluster sampling. Out of the eight Santa Clara County community colleges, the researchers chose two. Although not reported in the study, it would be important that the demographics of the two chosen colleges match those of all community college students in the county.
Non‐probability Sampling Methods
There are non‐scientific methods of sampling being conducted today that have immeasurable biases and should not be used in scientific research. The only advantage of these methods is that they are inexpensive and can generate very large samples. However, these samples will often fail to create a representative sample and therefore have no value in research. Worse yet, these biased samples may be presented as more accurate or better than scientific studies because of the large sample size. However, a biased sample of any size has little or no value ‐‐ a big pile of garbage is still garbage.
Convenience Sampling
A convenience sample is simply a sample of people who are easy to reach.
Example: Marijuana usage
A 21 year old student wants to conduct a survey on marijuana usage. He asks his friends on Facebook to fill out a survey. The results of his survey show that 65% of respondents frequently use marijuana.
The student's Facebook friends were easy to sample but are not representative of the population. For example, if the student frequently uses marijuana, it is more likely that his Facebook friends would also use marijuana.
Self‐selected Sampling
A self‐selected sample is one in which the participants volunteer to be sampled. This would include Internet polls and studies that advertise for volunteers.
Do not confuse self‐selected sampling with scientific studies that ask for volunteers from an initial representative sample. Researchers take care to avoid bias by making sure the demographics of the volunteers match the demographics of the representative sample.
Example: Boaty McBoatface
The Natural Environment Research Council (NERC), an agency of the British government, decided to let the Internet suggest a name for a \$287 million polar research ship. A public relations professional and former BBC employee started a social media frenzy by suggesting people vote for the name "Boaty McBoatface."55
The final result of this self‐selected poll showed that Boaty McBoatface was the overwhelming winner. You can see that the top 20 entries included many other humorous choices, along with some more traditional names.56
The NERC eventually chose a more serious name, the RSS Sir David Attenborough, but as a consolation to the voters, the agency named a remotely operated underwater research vessel Boaty McBoatface.57
The results of the poll do not reflect what the public wanted. What happened instead was that many people, through social media, were inspired to vote for Boaty McBoatface as a joke.
Example: Online Movie Ratings
Many people use online rating services, such as Google, Yelp, Rotten Tomatoes, IMDb and Rate My Professor to make decisions about restaurants, products, services, movies or what college class to take.
All of these ratings systems are examples of self‐selected sampling as users volunteer to write reviews. This can lead to ratings that may be extremely inaccurate.
The Internet Movie Database (IMDb) maintains movie reviews and ratings by users. Movies are rated on a scale of 1 (the worst) to 10 (the best). On July 28, 2017, Al Gore's "An Inconvenient Sequel: Truth to Power" was released as a follow‐up to his original documentary about climate change, "An Inconvenient Truth". The IMDb overall rating for the movie was 5.2, which is the average of all ratings by users.
The website fivethirtyeight.com conducted an analysis of this overall rating by comparing "An Inconvenient Sequel" to other movies with similar ratings. 58
It is clear from the graph that "An Inconvenient Sequel" was far different from the other five movies that also had an average rating of 5.2; in this case, most people voted either 1 or 10. The fivethirtyeight.com study also found that many of the reviews were written before the movie's release date. Also, traditional critics rated the movie much higher. The IMDb rating in this case was not a true movie rating but an attempt to discredit or to support climate change.
The conclusion by fivethirtyeight.com was a warning about these popular online rating systems: "Say what you will, but in addition to being controversial, “An Inconvenient Sequel” was ambitious: Few films involve Arctic expeditions, inside access to the Paris Climate Conference, interviews with the sitting secretary of state and a globe‐trotting look at catastrophic weather conditions. If ambitious‐yet‐controversial films are boiled down to a single number that makes them look identical to mediocre films, what incentive does Hollywood have to continue investing in movies that challenge the audience? "The democratization of film reviews has been one of the most substantial structural changes in the movie business in some time, but there are dangerous side effects. The people who make movies are terrified. IMDb scores represent a few thousand mostly male reviewers who might have seen the film but maybe didn’t, and they’re influencing the scoring system of one of the most popular entertainment sites on the planet."
We will all continue to use online rating services, but we must keep in mind that the reviews could be fake, manipulated or extremely biased.
4.05: Bias in Statistical Studies
In the last section we discussed how non‐probability sampling methods will often fail to create the representative sample that is needed to draw any meaningful conclusions. These methods usually create two types of bias.
Selection Bias
Selection bias occurs when the sampling method does not create a representative sample for the study. Selection bias frequently occurs when using convenience sampling.
Example: Library fee
A community college proposes increasing the student fee by \$5.00 in order to create more open hours for the library. A survey was conducted by several student researchers to see if there was support for this fee. The researchers stood in the central part of the campus near the library and selected students for their sample as they were walking by. The students were only sampled during the morning hours.
This is a convenience sample and probably not representative for this study. The researchers sampled only day students, excluding night students, who are less likely to use the library. Some excluded students only take classes online and don't use the library. Finally, the survey was conducted near the library, so it is more likely that the sample contained library users, who would probably be more likely to support added services. This is a clear example of selection bias.
Self‐selection Bias
Self‐selection bias occurs when individuals can volunteer to be part of the study, the non‐probability self‐selected sampling method discussed above. Volunteers will often have a stronger opinion about the research question and will usually not be representative of the population.
Example: Twitter poll
Many members of congress will try to use online surveys to generate support for their position. Here is an example during the 2017 attempt to repeal the Affordable Care Act (ObamaCare).
Rep. Marsha Blackburn (R‐Tenn.) on Tuesday posted a poll on Twitter to get feedback on Republicans' proposed ObamaCare repeal. As it turns out, though, a majority of Twitter users who voted recommended keeping the healthcare law in place.
While Blackburn might have expected to hear only from her Tennessee district — which handily reelected her in November — she soon found the poll swamped with votes opposed to an ObamaCare repeal.
The poll from Blackburn, a member of President‐elect Trump's transition team, received 7,968 votes, with 84 percent opposing a repeal of ObamaCare. The repeal opponents' side was likely helped by a retweet from White House spokesman Eric Schultz.59
84% of the respondents did not support the repeal of ObamaCare, a much higher percentage than is shown in properly conducted surveys. Supporters of the Affordable Care Act could encourage others to vote in the poll. Plus, a Twitter poll is never going to be representative, since the sampled population is only Twitter users. The wording of the question is also biased, a phenomenon that will be explored later in this section.
Bias also occurs when a poll or survey produces results that do not reflect the true opinions or beliefs of the general population. This is often a result of the methods used to conduct the survey or the wording of the questions asked.60
Non‐response Bias
Non‐response bias occurs when people are intentionally or non‐intentionally excluded from participation or choose not to participate in a survey or poll. Sometimes people will lie to pollsters as well.
A recent example of probable non‐response bias occurred during the 2016 presidential election, in which every poll showed Hillary Clinton winning the election over Donald Trump. Although Clinton won the popular vote, Trump won the electoral vote and the presidency.61
The Pew Research Center conducted a post‐mortem of the election polling and pointed to probable non‐response bias:
One likely culprit is what pollsters refer to as non‐response bias. This occurs when certain kinds of people systematically do not respond to surveys despite equal opportunity outreach to all parts of the electorate. We know that some groups – including the less educated voters who were a key demographic for Trump on Election Day – are consistently hard for pollsters to reach. It is possible that the frustration and anti‐institutional feelings that drove the Trump campaign may also have aligned with an unwillingness to respond to polls. The result would be a strongly pro‐Trump segment of the population that simply did not show up in the polls in proportion to their actual share of the population.
Some have also suggested that many of those who were polled simply were not honest about whom they intended to vote for. The idea of so‐called “shy Trumpers” suggests that support for Trump was socially undesirable, and that his supporters were unwilling to admit their support to pollsters. This hypothesis is reminiscent of the supposed “Bradley effect,” when Democrat Tom Bradley, the black mayor of Los Angeles, lost the 1982 California gubernatorial election to Republican George Deukmejian despite having been ahead in the polls, supposedly because voters were reluctant to tell interviewers that they were not going to vote for a black candidate.
A third possibility involves the way pollsters identify likely voters. Because we can’t know in advance who is actually going to vote, pollsters develop models predicting who is going to vote and what the electorate will look like on Election Day. This is a notoriously difficult task, and small differences in assumptions can produce sizable differences in election predictions. We may find that the voters that pollsters were expecting, particularly in the Midwestern and Rust Belt states that so defied expectations, were not the ones that showed up. Because many traditional likely‐voter models incorporate measures of enthusiasm into their calculus, 2016’s distinctly unenthused electorate – at least on the Democratic side – may have also wreaked some havoc with this aspect of measurement.62
Pew’s analysis showed three possible sources of non‐response bias. First, it may have been more difficult to reach Trump supporters. Second, Trump supporters may have been less honest with pollsters. Finally, the pollsters may have incorrectly identified likely voters, meaning Trump voters were undersampled.
Response Bias
Response bias occurs when the responses to a survey are influenced by the way the question is asked, or when responses do not reflect the true opinion of the respondent. When conducting a survey or poll, the type, order and wording of questions are important considerations. Poorly worded questions can invalidate the results of a survey.
Questions should be asked in a manner that is balanced.
Example: High speed rail
Consider the questions:
“Do you feel that the increasing cost of the high speed rail project is too expensive for California?”
“Do you feel that high speed rail will be important to the future economy of California?”
“Do you approve or disapprove of building a high speed rail system in California?”
The first question encourages people to oppose high speed rail because of the expense. The second question encourages people to support high speed rail to support the economy. The third question simply asks people’s opinion without the leading bias.
Example: Twitter poll
Let’s return to the Twitter poll example in which Marsha Blackburn, an opponent of the Affordable Care Act, asked followers to vote on the question: “Do you support the repeal of Obamacare? [Retweet] if you do, and share what you want to see as the replacement.”
There are many sources of bias in this question. First, the question is framed around supporting a repeal, and "support" sounds like the more positive stance. Second, many polls have shown that using the words "Obamacare" instead of "Affordable Care Act" will encourage support for repeal. Finally, the last part of the question encourages people to take action only if they support repeal.
Questions should not be vague.
For example, the question “What’s wrong with the economy?” is vague. It is unclear what the question is trying to determine.
Here are some questions from recent polls and surveys regarding same sex marriage. Discuss the issues of bias and fairness in these questions:
Should states continue to discriminate against couples who want to marry and who are of the same gender?
Do you support marriage equality?
Should states be forced to legalize homosexual marriage over the wishes of a majority of the people?
Do you think marriages between same‐sex couples should or should not be recognized by the law as valid, with the same rights as traditional marriages?
Giving people explanatory information can change their opinions
Care must be taken in providing explanatory information about an issue; however, providing no information may also lead to misleading results. For example, you might want to ask people if they support the CHIP program. Most people have no idea what the CHIP program is, so some explanation is needed. You then add the language: “The Children's Health Insurance Program (CHIP) is a program administered by the federal government whose aim is to help states provide health insurance to families with children who were just above the financial threshold for Medicaid.”
Example: Aid to Puerto Rico after Hurricane Maria
On September 20, 2017, Hurricane Maria caused catastrophic damage to the U.S. territory of Puerto Rico. This came shortly after two other major hurricanes hit the United States, causing major damage in Texas and Florida.
However, the initial public support for Puerto Rico seemed less than that for Florida or Texas. A poll of 2200 American adults conducted by Morning Consult showed that only 54% of Americans knew that Puerto Rico was part of the United States.63
The survey then split the sample into two groups to answer the question “Should Puerto Rico receive additional government aid to help rebuild the territory?” The first group was given no information about Puerto Rican citizenship and 64% supported giving aid. The second group was first told that Puerto Ricans were American citizens, and support for aid increased to 68%.
In conclusion, the wording of polls or providing additional information can lead to biased results, and care should be taken so that the wording of the questions is both clear and balanced.
In the prior three sections we covered how to obtain and analyze sample data. In the next three sections, we will explore the modeling of populations.
05: Probability
Rather than defining probability, here are some real life examples:
The Golden State Warriors are trailing the Cleveland Cavaliers by one point late in an important NBA game. Cleveland forward LeBron James fouls Golden State guard Stephen Curry with 1.4 seconds left in the game, meaning Curry will get to shoot 2 free throws. What is the probability the Warriors will win the game?
Thuy is an actress and auditions for a starring role in a Broadway musical. The audition goes extremely well and the director says she did a great job, sings beautifully, and is perfect for the role. He promises to call her back the next day after auditions are completed. What is the probability Thuy will get the role in the musical?
Robert is a student taking a Statistics class for the second time, after dropping the class in the prior quarter. He has a lot of math anxiety, but needs to pass the class to be able to transfer to San Jose State University to continue his dream of becoming a psychologist. What is the probability he will successfully pass the class?
Lupe goes to the doctor after having some pain in her lower back. Her family has a history of kidney problems, so the doctor decides to run some additional tests. What is the probability that Lupe has a kidney disorder that requires treatment?
In all of these examples, it is uncertain or unknown what the actual outcomes will be; however, we can make a guess as to whether each outcome is either more likely or less likely. We can quantify this by a value between 0 and 1, or between 0% and 100%. For example, maybe we say the Warriors have a good chance of winning the game, since Curry is one of the best free throw shooters in the NBA, say 0.7 or 70%. Maybe Thuy (from her experience in auditioning) is less likely to get the starring role, say 0.2 or 20%. These quantities are called probabilities.
Definition: Probability
Probability is the measure of the likelihood that an event A will occur.
This measure is a quantity between 0 (never) and 1 (always) and will be expressed as P(A) (read as "the probability event A occurs").
5.02: Types of Probability
Classical probability (also called Mathematical Probability) is determined by counting or by using a mathematical formula or model.
Example
The probability of getting a "Heads" when tossing a fair coin is 0.5 or 50%. The probability of rolling a 5 on a fair six‐sided die is 1/6, since all numbers are equally likely.
Empirical probability is based on the relative frequencies of historical data, studies or experiments.
Example
The probability that Stephen Curry makes a free throw is 90.8%, based on the frequency of successes from all prior free throws.
The probability of a random student getting an A in a Statistics class taught by Professor Nguyen is 22.8%, because grade records show that of the 1000 students who took her class in the past, 228 received an A.
In a study of 832 adults with colon cancer, an experimental drug reduced tumors in 131 patients. The probability that the experimental drug reduces colon cancer tumors is 131/832, or 15.7%.
Subjective probability is a “one‐shot” educated guess based on anecdotal stories, intuition or a feeling as to whether an event is likely, unlikely or “50‐50”. Subjective probability is often inaccurate.
Example
Although Robert is nervous about retaking the Statistics course after dropping the prior quarter, he is 90% sure he will pass the class because the website ratemyprofessor.com gave the instructor very positive reviews.
Jasmine believes that she will probably not like a new movie that is coming out soon because she is not a fan of the actor who is starring in the film. She is about 20% sure she will like the new movie.
No matter how probability is initially derived, the laws and rules of probability will be treated the same.
5.03: How to Calculate Classical Probability
We can use counting methods to determine classical probability. However, we need to be careful in our methods to be sure to get the correct answer.
An Event is a result of an experiment, usually referred to with a capital letter A, B, C, etc. Consider the experiment of flipping two coins. Then use the letter A to refer to the event of getting exactly one head.
An Outcome is a result of the experiment that cannot be broken down into smaller events. Consider event A, getting exactly one head. Note that there are two ways or outcomes to get one head in two tosses, by first getting a head then a tail, or by first getting a tail, then a head. Let’s write these distinct outcomes as HT and TH.
The Sample Space is the set of all possible outcomes of an experiment. In the experiment of flipping two coins, there are 4 possible outcomes, which can be expressed in set notation.
$\text { Sample Space }=\{\mathrm{HH}, \mathrm{HT}, \mathrm{TH}, \mathrm{TT}\} \nonumber$
We can now redefine an Event of an experiment to be a subset of the Sample Space. If event A is getting exactly one head in two coin tosses, then
$\mathrm{A}=\{\mathrm{HT}, \mathrm{TH}\}\nonumber$
After carefully listing the outcomes of the Sample Space and the outcomes of the event, we can then calculate the probability the event occurs.
Probability Event Occurs = number of outcomes in Event / number of outcomes in Sample Space
We will use the notation P(A) to mean the probability event A occurs.
In the example, the probability of getting exactly 1 head in two coin tosses is 2 out of 4 or 50%. $P(A)=2 / 4=0.5=50 \% \nonumber$
Example: Field Bet
In the casino game of craps, two dice are rolled at the same time and then the resulting two numbers are totaled. There are many bets in craps, so let us consider the Field bet. In this bet, the player will win even money if a total of 3, 4, 9, 10 or 11 is rolled. If a total of 2 is rolled, the player will win double the original bet, and if a total of 12 is rolled, the player will win triple the original bet. If a total of 5, 6, 7 or 8 is rolled, the player loses the original bet.
At first glance, this looks like a winning bet for the player since the player wins on 7 different numbers and the casino only wins on 4 different numbers. However, we know that a casino always designs games to give the casino the advantage. Let us carefully use counting methods to calculate the probability of a player winning the Field bet.
Let’s first consider the task of listing the sample space of possible outcomes. Since there are two dice rolled, we can consider each outcome to be an ordered pair. There are 6 possible values for the first die and 6 possible values for the second die, meaning that there are 36 ordered pairs or outcomes. In the diagram, the red die is the first roll and the green die is the second roll.
$\text{Sample Space}=\left\{\begin{array}{l} (1,1),(1,2),(1,3),(1,4),(1,5),(1,6), \\ (2,1),(2,2),(2,3),(2,4),(2,5),(2,6), \\ (3,1),(3,2),(3,3),(3,4),(3,5),(3,6), \\ (4,1),(4,2),(4,3),(4,4),(4,5),(4,6), \\ (5,1),(5,2),(5,3),(5,4),(5,5),(5,6), \\ (6,1),(6,2),(6,3),(6,4),(6,5),(6,6) \end{array}\right\} \nonumber$
Now define the event W to be the winning pairs of numbers in the Field bet, the pairs that add up to 2, 3, 4, 9, 10, 11 or 12. The winning pairs of numbers are shown in blue and the losing pairs are shown in red.
$\text{Sample Space}=\left\{\begin{array}{l} (1,1),(1,2),(1,3),(1,4),(1,5),(1,6), \\ (2,1),(2,2),(2,3),(2,4),(2,5),(2,6), \\ (3,1),(3,2),(3,3),(3,4),(3,5),(3,6), \\ (4,1),(4,2),(4,3),(4,4),(4,5),(4,6), \\ (5,1),(5,2),(5,3),(5,4),(5,5),(5,6), \\ (6,1),(6,2),(6,3),(6,4),(6,5),(6,6) \end{array}\right\} \quad W=\left\{\begin{array}{l} (1,1),(1,2),(1,3), \\ (2,1),(2,2), \\ (3,1),(3,6), \\ (4,5),(4,6), \\ (5,4),(5,5),(5,6), \\ (6,3),(6,4),(6,5),(6,6) \end{array}\right\} \nonumber$
This means that there are 16 outcomes out of 36 in which the player wins. It's now easy to see that the probability of winning is less than 50%, as the casino took the numbers that occur most frequently.
$P(W)=\frac{16}{36}=\frac{4}{9} \approx 44.4 \% \nonumber$
As a final note on this example, you might recall that the casino pays double if the player rolls (1,1) or triple if the player rolls (6,6). Even taking this extra bonus into account, if a player makes 36 $100 bets, the casino will expect to win $2000 (20 numbers x $100), and the player will expect to win $1900 (16 numbers x $100, plus $100 extra for the 2 and $200 extra for the 12), meaning the player loses $100 for every $3600 bet, a house (casino) advantage of 2.78%.

Field Bet – Summary of 36 possible rolls | Amount won on $100 bets
(1,1) (pays double) | +$200
(6,6) (pays triple) | +$300
14 other winners: (1,2),(1,3),(2,1),(2,2),(3,1),(3,6),(4,5),(4,6),(5,4),(5,5),(5,6),(6,3),(6,4),(6,5) | +$1400
20 losers: (1,4),(1,5),(1,6),(2,3),(2,4),(2,5),(2,6),(3,2),(3,3),(3,4),(3,5),(4,1),(4,2),(4,3),(4,4),(5,1),(5,2),(5,3),(6,1),(6,2) | -$2000
Overall expected result of 36 rolls ($3600 bet) | -$100
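These payouts can be verified by brute force. The following sketch (illustrative Python) enumerates all 36 equally likely rolls and totals the result of betting $100 on each:

```python
from itertools import product

payout = 0
for d1, d2 in product(range(1, 7), repeat=2):   # all 36 ordered rolls
    total = d1 + d2
    if total == 2:
        payout += 200                 # (1,1) pays double
    elif total == 12:
        payout += 300                 # (6,6) pays triple
    elif total in (3, 4, 9, 10, 11):
        payout += 100                 # even-money winners
    else:
        payout -= 100                 # totals 5, 6, 7, 8 lose

print(payout)             # -100, the expected loss per $3600 wagered
print(-payout / 3600)     # house advantage of about 0.0278
```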
Just remember, in the long run, the casino always wins.
5.04: Rule of Complement
It is sometimes difficult to calculate the probability that an event will occur, but it is much easier to calculate the probability that an event will not occur.
For example you may want to determine the probability that a student at California State University – East Bay majors in something other than Business. Instead of adding up all the non‐Business major probabilities, it would be much easier to find the chance that a student at CSUEB majors in Business, say 21%. Then you would determine that the probability that a student does not major in Business (all other students) is the remaining 79%.
Rule of Complement
A' (read as "A‐complement") is the event that event A does not occur. In that case, the Rule of Complement is:
$P(A)+P\left(A^{\prime}\right)=1 \quad P(A)=1-P\left(A^{\prime}\right) \quad P\left(A^{\prime}\right)=1-P(A) \nonumber$
Example: Die rolling
In a game, you must keep rolling a six‐sided die until you get a six. What is the probability that you would need 2 or more rolls to get a six?
Solution
The event A is “2 or more rolls to get a six” which would be a very difficult probability to calculate ‐‐ it’s actually an infinite sum!
The event A’ is “do not take 2 or more rolls to get a six” which is the same as saying “get a six on the first roll.” That’s a much easier probability to calculate, $P\left(A^{\prime}\right)=1 / 6$.
So $P(A)=1-P\left(A^{\prime}\right)=1-1 / 6=5 / 6$
Therefore, the probability of needing two or more rolls to get a six is 5/6 or about 83.3%
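A short simulation agrees. The key observation carries directly into code: needing two or more rolls is exactly the event that the first roll is not a six (illustrative Python):

```python
import random

trials = 100_000
# "2 or more rolls needed" is the same as "first roll is not a six"
count = sum(random.randint(1, 6) != 6 for _ in range(trials))
print(count / trials)   # close to 5/6, about 0.833
```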
5.05: Joint Probability and Additive Rule
Two or more events can be combined into joint events by using “or” statements or “and” statements.
The Union of two events A and B is that either event A or B occurs, or both; (the blue, red and purple parts of the Venn diagram shown to the right).
The Intersection of two events A and B is that both events A and B occur; (the purple overlap of the Venn diagram shown to the right).
Marginal Probability means the probability of a single event occurring.
Joint Probability means the probability of the union or intersection of multiple events occurring.
Example: Student courses
In a group of 100 students, a total of 40 students take Math, a total of 20 students take History, and 10 students take both Math and History. (Note that these 10 students were counted twice, once as Math students and once as History students). Find the marginal and joint probabilities.
Solution
Marginal Probabilities:
P(Math) = 40/100 = 0.4
P(History) = 20/100 = 0.2
Joint Probabilities:
P(Math and History) = 10/100 = 0.1 (this is the intersection of the two events)
P(Math or History) = 50/100 = 0.5 (this is the union of the two events)
We can make a rule relating joint and marginal probabilities by noticing that when we combine the marginal probabilities of two events, we double count the outcomes in the intersection of the two events. This is called the Additive Rule.
The Additive Rule for Probability
$P(A \text { or } B)=P(A)+P(B)-P(A \text { and } B) \nonumber$
Example: Student courses
Calculate the probability that a student is taking Math or History using the additive rule. Compare to the direct calculation in the prior example.
Solution
P(Math or History) = P(Math) + P(History) – P(Math and History)
P(Math or History) = 0.4 + 0.2 – 0.1 = 0.5
Mutually Exclusive means that two events A, B cannot both occur. In this case, the intersection of two events has no possible outcomes.
The Additive Rule for Mutually Exclusive Events
$P(A \text { or } B)=P(A)+P(B) \nonumber$
Example: Spanish class
500 students at a community college are taking Spanish 1A in the Fall Quarter this year. 32 students are in Section 11 and 30 students are in Section 12. Find the probability that a Spanish 1A student is in Section 11 or Section 12.
Solution
Since students cannot be in two sections of the same class, the events Section 11 and Section 12 are mutually exclusive.
P(Sec 11 or 12) = P(Sec 11) + P(Sec 12) = 32/500 + 30/500 = 62/500 = 0.124
5.06: Conditional Probability
Conditional Probability means the probability of an event A occurring given that another event B has already occurred. This probability is written as $P(A|B)$ which is read as $P(A \text{ given } B)$.
Example: 2016 presidential election
In the 2016 United States presidential election, Donald Trump received 46% of the total vote, Hillary Clinton received 48%, and other candidates received 6%. (Note: although Clinton received about 3 million more votes than Trump, the Electoral College determined the actual winner to be Trump).
CNN conducted exit polls to determine how people voted based on demographic statistics, such as gender.64 These exit polls showed that 53% of the voters were female and 47% of the voters were male. These two values are examples of marginal probabilities.
The polls also showed that Donald Trump received 41% of the female vote and 52% of the male vote. These two values are examples of conditional probability, in which the condition is knowing the gender of the voter.
Solution
Events | Marginal Probabilities | Conditional Probabilities
T = Voter chooses Trump | $P(T)=0.46$ | $P(T|F) = 0.41$
F = Voter is Female | $P(F) = 0.53$ | $P(T|M) = 0.52$
M = Voter is Male | $P(M) = 0.47$ |
In calculating the probability of A given B, we only need to consider the elements of Event B instead of the entire sample space.
Example: Student courses
Let us revisit the example of students taking Math and History. Suppose we wanted to calculate the probability that a student who is taking math is also taking history.
Solution
In this case we only need to consider the 40 students taking math as the sample space and the 10 students taking both math and history as the conditional event occurring.
$P(\text { History })=20 / 100=0.20$
$P(\text { History } \mid \text { Math })=10 / 40=0.25$
In this example, we used classical counting probability rules, but conditional probability can be calculated directly using known marginal and conditional probabilities.
Rules for Conditional Probability
$P(A \mid B)=\dfrac{P(A \text { and } B)}{P(B)} \nonumber$
$P(B \mid A)=\dfrac{P(A \text { and } B)}{P(A)} \nonumber$
Example: Cell phone carrier
Of all cell phone users in the US, 15% have a smart phone with AT&T. 25% of all cell phone users use AT&T. Given a selected cell phone user has AT&T, find the probability the user also has a smart phone.
Solution
Let A = AT&T subscriber. Let B = Smart Phone User
$P(A)=0.25 \quad P(A \text { and } B)=0.15$
$P(B \mid A)=\dfrac{0.15}{0.25}=0.60$
This means 60% of all AT&T subscribers have smart phones.
5.07: Contingency (Two‐way) Tables
Contingency tables, also known as cross tabulations, crosstabs or two‐way tables, are a method of displaying the counts of the responses of two categorical variables from data.
Example: Accidents and DUI
1000 drivers were asked if they were involved in an accident in the last year. They were also asked if, during this time, they were DUI, driving under the influence of alcohol or drugs. The totals are summarized in a contingency table:
        | Accident | No Accident | Total
DUI     |    70    |     130     |  200
Non-DUI |    30    |     770     |  800
Total   |   100    |     900     | 1000
Solution
In the table, each column represents a choice for the accident question and each row represents a choice for the DUI question.
Marginal Probabilities can be determined from the contingency table by using the outside total values for each event divided by the total sample size.
• Probability a driver had an accident = $P(A)$ = 100/1000 = 0.10
• Probability a driver was not DUI = $P(D') = 1 ‐ P(D)$ = 1 ‐ 200/1000 = 0.80
Joint Probabilities can be determined from the contingency table by using the inside values of the table divided by the total sample size.
• Probability a driver had an accident and was DUI= $P(A \text{ and } D)$ = 70/1000 = 0.07
• Probability a driver had an accident or was DUI= $P(A \text{ or } D)$ = (100+200‐70)/1000 = 0.23
Conditional Probabilities can be determined from the contingency table by using the inside values of the table divided by the outside total value of the conditional event.
• Probability a driver was DUI given the driver had an accident = $P(D|A)$ = 70/100 = 0.70
• Probability a DUI driver had an accident = $P(A|D)$ = 70/200 = 0.35
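All of these probabilities are simple divisions of table counts, so they are easy to script. A minimal sketch in Python using the counts from the table above:

```python
# Counts from the accident/DUI contingency table
acc_dui, acc_no_dui = 70, 30             # accident, split by DUI status
no_acc_dui, no_acc_no_dui = 130, 770     # no accident, split by DUI status
n = acc_dui + acc_no_dui + no_acc_dui + no_acc_no_dui   # 1000 drivers

p_a = (acc_dui + acc_no_dui) / n          # marginal: P(accident) = 0.10
p_d = (acc_dui + no_acc_dui) / n          # marginal: P(DUI) = 0.20
p_a_and_d = acc_dui / n                   # joint: P(A and D) = 0.07
p_a_or_d = p_a + p_d - p_a_and_d          # additive rule: 0.23
p_d_given_a = acc_dui / (acc_dui + acc_no_dui)   # P(D|A) = 0.70
p_a_given_d = acc_dui / (acc_dui + no_acc_dui)   # P(A|D) = 0.35
print(p_a, p_d, p_a_and_d, p_a_or_d, p_d_given_a, p_a_given_d)
```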
Creating a two‐way table from reported probabilities
We can create a hypothetical two‐way table from reported cross tabulated probabilities, such as the CNN exit poll for the 2016 presidential election:
Step 1: Choose a convenient total number. (This is called the radix of the table).
Radix chosen = 10000 random voters
Step 2: Determine the outside values of the table by multiplying the radix times the marginal probabilities for gender.
Total Female = (0.53)(10000) = 5300
Total Male = (0.47)(10000) = 4700
Step 3: Determine the inside values of the table by multiplying the appropriate gender total times the conditional probabilities from the exit polls.
Trump Female = (0.41)(5300) = 2173
Clinton Female = (0.54)(5300) = 2862
Other Female = (0.05)(5300) = 265
Trump Male = (0.52)(4700) = 2444
Clinton Male = (0.41)(4700) = 1927
Other Male = (0.07)(4700) = 329
Step 4: Add each row to get the row totals.
Trump = 2173 + 2444 = 4617
Clinton = 2862 + 1927 = 4789
Other = 265 + 329 = 594
From the last column, we can now get the marginal probabilities (which are slightly off from the actual vote due to rounding in the exit polls): Donald Trump received 46%, Hillary Clinton received 48% and other candidates received 6% of the total vote.
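The same four‐step construction can be scripted. A minimal sketch in Python; the exit‐poll probabilities are the inputs, with 0.07 used for P(Other | Male) so that each gender's conditional probabilities total 1:

```python
radix = 10_000                        # convenient total number of voters
gender_totals = {"female": round(0.53 * radix),   # 5300
                 "male":   round(0.47 * radix)}   # 4700

# Conditional probabilities P(candidate | gender) from the exit polls
conditionals = {"Trump":   {"female": 0.41, "male": 0.52},
                "Clinton": {"female": 0.54, "male": 0.41},
                "Other":   {"female": 0.05, "male": 0.07}}

for candidate, probs in conditionals.items():
    # Inside cells: conditional probability times the gender total
    cells = {g: round(probs[g] * gender_totals[g]) for g in gender_totals}
    print(candidate, cells, "row total:", sum(cells.values()))
```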
5.08: Multiplicative Rule and Tree Diagrams
Earlier, we learned about the additive rule for finding the joint probability of the Union of two events. There is a corresponding multiplicative rule to find the probability of the Intersection of two events. Using algebra, this rule can be calculated directly from the Rule for Conditional Probability.
Multiplicative Rule of Probability
$P(A \text { and } B)=P(A) \times P(B \mid A) \nonumber$
$P(A \text { and } B)=P(B) \times P(A \mid B) \nonumber$
One useful way to express the Multiplicative Rule is by creating a tree diagram, a simple way to express all possible outcomes in a sequence of events.
The first level of branches connecting to the start are marginal probabilities, and all lower levels of branches are conditional probabilities. To find the probability of getting to the end of any last branch, multiply the probabilities of all branches that connect back to Start.
Example: Red and green balls
A box contains 4 green balls and 3 red balls. Two balls are drawn, one at a time, without replacement. Make a tree diagram and find the probability of choosing two red balls.
Solution
Let A be the event red on the first draw and B be the event red on the second draw. Then in this example A' would be the event green (not red) on the first draw, and B' would be the event green on the second draw.
First, make a tree of the first draw and assign probabilities based on the number of balls in the box; 3 out of 7 are red and 4 out of 7 are green.
Next, conduct the second draw, assuming the ball chosen on the first draw is gone. For example, if the first draw was red, the chance of getting another red is 2 out of 6, since there are 2 remaining reds and 4 remaining greens. However, if the first draw was green, the chance of getting red is 3 out of 6.
Finally, use the multiplicative rule and multiply down the branch to get all joint probabilities. If you have constructed the tree diagram correctly, all of these probabilities must add to 1.
The probability of getting 2 red balls is 1/7 or approximately 0.143
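A simulation of the two draws (illustrative Python) agrees with the tree diagram:

```python
import random

box = ["red"] * 3 + ["green"] * 4
trials = 100_000
# random.sample draws two balls without replacement, in order
both_red = sum(random.sample(box, 2) == ["red", "red"]
               for _ in range(trials))
print(both_red / trials)   # close to 1/7, about 0.143
```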
Example: Circuit switches
A circuit has three linear switches. If at least two of the switches function, the circuit will succeed. Each switch has a 10% failure rate if all are operating, and a 20% failure rate if one switch has already failed. Construct a tree diagram and find the probability that the circuit will succeed.
Solution
Event A = first switch succeeds Event A' = first switch fails
Event B = second switch succeeds Event B' = second switch fails
Event C = third switch succeeds Event C' = third switch fails
P(2 or more successes) = 0.81 + 0.072 + 0.064 = 0.946
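A simulation that follows the same branching logic as the tree diagram gives a close answer (illustrative Python):

```python
import random

def circuit_succeeds():
    s1 = random.random() > 0.10              # 10% failure, all operating
    s2 = random.random() > (0.10 if s1 else 0.20)   # 20% after a failure
    s3 = random.random() > (0.10 if (s1 and s2) else 0.20)
    return s1 + s2 + s3 >= 2                 # at least two switches work

trials = 100_000
print(sum(circuit_succeeds() for _ in range(trials)) / trials)  # ~0.946
```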
The circuit has a 94.6% chance of succeeding. Notice that we did not need a tie‐breaking third branch for the cases in which the first two switches both succeed, or both fail.
5.09: Independence
Two events are considered independent if the probability of one event occurring is not changed by knowing if the other event occurred or not. Events that are not independent are called dependent.
Here are examples of independent (unrelated) events:
• A fair coin flip comes up heads; the coin is flipped again and comes up heads.
• A student is unable to attend a math class at De Anza College; it rains today in New York City.
• A house in San Francisco starts on fire; on the same day, a house in Dallas starts on fire.
• A patient is diagnosed with cancer; on the same day, another patient is diagnosed with pneumonia.
In these independent events, the probability of the second event occurring is not affected by whether the first event occurs.
Examples of dependent (related) events
• A student gets an A on the first exam; the same student gets an A on the second exam.
• A person has never smoked; the same person gets lung cancer.
• An earthquake destroys a home in San Francisco; on the same day, an earthquake destroys a home in Oakland.
• A student majors in Computer Science; the same student wants to work for Google.
In these dependent events, the probability of the second event occurring is affected by whether the first event occurs:
• A student who gets an A on an exam is more likely to get an A on another exam.
• A non‐smoker is less likely to get lung cancer than is a smoker.
• A single strong earthquake will affect homes all over the Bay Area.
• A Computer Science major is more likely to work for a tech company, such as Google.
The mathematical definition of independent events means that the marginal probability of the first event occurring is the same as the conditional probability of the first occurring given the second event occurred. We can then adjust the Multiplicative Rule to get three formulas, any of which can be used to test for independence:
Independent events
If events A and B are independent, then the following statements are all true:
$P(A)=P(A \mid B) \nonumber$
$P(B)=P(B \mid A) \nonumber$
$P(A \text { and } B)=P(A) \times P(B) \nonumber$
The last formula is particularly useful, and it can easily be generalized to find the joint probability of many independent events from their marginal probabilities alone; this is what makes random sampling so critical in statistical research.
Example: Flip a coin ten times
A fair coin is flipped ten times. Find the probability of getting heads on all 10 tosses.
Solution
Because the coin tosses are independent, the multiplicative rule requires only marginal probabilities:
$P(\text { all Heads })=P(H)^{10}=0.5^{10}=0.0009766$
Example: Surprise quiz
On Monday, there is a 10% chance your History instructor will have a surprise quiz. On the same day, there is a 20% chance that your Math instructor will also have a surprise quiz. No other class you are taking has surprise quizzes. What is the probability that you will have at least one surprise quiz on Monday? Assume that all events are independent.
Solution
Let H be the event "Surprise quiz in History" and M be the event "Surprise quiz in Math." Then use both the Additive Rule and the Multiplicative Rule for independent events.
$P(H \text { or } M)=P(H)+P(M)-P(M \text { and } H)$
$P(H)=0.10 \qquad P(M)=0.20$
$P(H \text { and } M)=P(H) \times P(M)=0.10 \times 0.20=0.02$
$P(H \text { or } M)=0.10+0.20-0.02=0.28$
There is a 28% chance that there will be at least one surprise quiz on Monday.
Example: Accidents and DUI
1000 drivers were asked if they were involved in an accident in the last year. They were also asked if, during this time, they were DUI, driving under the influence of alcohol or drugs. Are the events "Driver was DUI" and "Driver was involved in an accident" independent or dependent events?
        | Accident | No Accident | Total
DUI     |    70    |     130     |  200
Non-DUI |    30    |     770     |  800
Total   |   100    |     900     | 1000
Solution
Let A be the event “the driver had an accident” and D be the event “the driver was DUI”. We can use any of the rules for independence to answer this question. Let's show all three possible methods here, but in practice choose the most convenient formula given the provided data.
Use Formula 1:
$P(A) = 100/1000 = 0.10$
$P(A|D) = 70/200 =0.35$
$P(A) \neq P(A|D)$
Use Formula 2:
$P(D) = 200/1000 = 0.20$
$P(D|A) = 70/100 =0.70$
$P(D) \neq P(D|A)$
Use Formula 3:
$P(A) = 100/1000 = 0.10$
$P(D) = 200/1000 = 0.20$
$P(A\text{ and }D) = 70/1000 = 0.07$
$P(A) \times P(D) = (0.10)(0.20) = 0.02$
$P(A\text{ and }D) \neq P(A) \times P(D)$
"Driver was DUI" and "Driver was involved in an accident" are dependent events.
Example: Accidents and origin of car
1000 drivers were asked if they were involved in an accident during the last year. They were also asked whether, during this time, they were driving a domestic car or an imported car. Are the events "Driver drives a domestic car" and "Driver was involved in an accident" independent or dependent events?
             | Accident | No Accident | Total
Domestic Car |    60    |     540     |  600
Imported Car |    40    |     360     |  400
Total        |   100    |     900     | 1000
Solution
Let A be the event “the driver had an accident” and D be the event “the driver drives a domestic car”. Let's again show all three possible methods here, but in practice choose the most convenient formula given the provided data.
Use Formula 1
$P(A)$ = 100/1000 = 0.10
$P(A|D)$ = 60/600 =0.10
$P(A) = P(A|D)$
Use Formula 2
$P(D)$ = 600/1000 = 0.60
$P(D|A)$ = 60/100 =0.60
$P(D) = P(D|A)$
Use Formula 3
$P(A)$ = 100/1000 = 0.10
$P(D)$ = 600/1000 = 0.60
$P(A \text{ and } D)$ = 60/1000 = 0.06
$P(A) \times P(D)$ = (0.10)(0.60) = 0.06
$P(A \text{ and } D) = P(A) \times P(D)$
"Driver has an accident" and "Driver drives a domestic car" are independent events. | textbooks/stats/Introductory_Statistics/Inferential_Statistics_and_Probability_-_A_Holistic_Approach_(Geraghty)/05%3A_Probability/5.09%3A_Independence.txt |
A trucking company is concerned that some of their drivers may be using amphetamine drugs to stay awake, exposing the company to lawsuits. They hire a testing agency to randomly test drivers. The marketing material for this testing agency claims that 99% of drivers who are using amphetamines will have a positive test result, so the company can be assured that any driver who tests positive will almost certainly be using the amphetamines.
This marketing material presented by the testing agency represents faulty reasoning. The 99% represents the probability that a driver tests positive given the driver is using amphetamines, while the claim was that the probability would be near‐certain that a driver was using amphetamines given the test was positive. The conditionality has been incorrectly switched because in general: $P(A \mid B) \neq P(B \mid A)$.
To switch the conditionality requires several pieces of information and is often explained in statistics books by using Bayes' Theorem:
If the sample space is the union of mutually exclusive events $\mathrm{A}_{1}, \mathrm{~A}_{2}, \ldots, \mathrm{A}_{n}$, then
$P\left(A_{i} \mid B\right)=\frac{P\left(A_{i}\right) \times P\left(B \mid A_{i}\right)}{P\left(A_{1}\right) \times P\left(B \mid A_{1}\right)+P\left(A_{2}\right) \times P\left(B \mid A_{2}\right)+\cdots+P\left(A_{n}\right) \times P\left(B \mid A_{n}\right)} \nonumber$
A more straightforward approach to solving this type of problem is to use techniques that have already been covered in this section:
• First construct a tree diagram.
• Second, create a Contingency Table using a convenient radix (sample size).
• From the Contingency table it is easy to calculate all conditional probabilities.
Example: Diagnostic testing
10% of prisoners in a Canadian prison are HIV positive. (This is also known in medical research as the incidence rate or prevalence). A test will correctly detect HIV 95% of the time, but will incorrectly “detect” HIV in non‐infected prisoners 15% of the time (false positive). If a randomly selected prisoner tests positive, find the probability the prisoner is HIV+.
Solution
Let A be the event that a prisoner is HIV positive and B the event that a prisoner tests positive. Then A' would be the event that a prisoner is HIV negative and B' would be the event that the prisoner tests negative.
There are four possible outcomes in this probability model:
• True Positive (also known in medical research as sensitivity) ‐ The prisoner correctly tests positive and is actually HIV positive.
• False Negative ‐ The prisoner incorrectly tests negative and is actually HIV positive.
• False Positive ‐ The prisoner incorrectly tests positive and is actually HIV negative.
• True Negative (also known in medical research as specificity) ‐ The prisoner correctly tests negative and is actually HIV negative.
From the information given, first construct a tree diagram.
$P(A) = 0.10 \quad P(A') = 1 - 0.10 = 0.90$
$P(B|A) = 0.95 \quad P(B|A') = 0.15 \quad P(B'|A) = 1 - 0.95 = 0.05 \quad P(B'|A') = 1 - 0.15 = 0.85$
Next, construct a contingency table. It is helpful to choose a convenient radix (sample size) such as 10000 and multiply by each joint probability from the tree diagram:
• Samples in A and B = (0.095)(10000) = 950
• Samples in A and B' = (0.005)(10000) = 50
• Samples in A' and B = (0.135)(10000) = 1350
• Samples in A' and B' = (0.765)(10000) = 7650
           | HIV+ (A) | HIV- (A') | Total
Test+ (B)  |   950    |   1350    |  2300
Test- (B') |    50    |   7650    |  7700
Total      |   1000   |   9000    | 10000
To find the probability that a prisoner who tests positive really is HIV positive, find $P(A|B)$:
$P(A \mid B)=\dfrac{950}{2300}=0.413 \nonumber$
So the probability that a prisoner who tests positive really is HIV positive is only 41.3%. This result may seem unusual, but when the incidence rate is lower than the false positive rate, it is more likely that a positive result on a test will be incorrect.
This problem could also have been answered directly, though much less intuitively, by using Bayes' Theorem:
\begin{aligned} P(A \mid B) &=\dfrac{P(A) \times P(B \mid A)}{P(A) \times P(B \mid A)+P\left(A^{\prime}\right) \times P\left(B \mid A^{\prime}\right)} \\ &=\dfrac{(0.10)(0.95)}{(0.10)(0.95)+(0.90)(0.85)} \\ &=0.413 \end{aligned} \nonumber
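Either route reduces to a few lines of arithmetic. A minimal sketch in Python (the function name is ours) that switches the conditionality given the prevalence, sensitivity, and false positive rate:

```python
def p_condition_given_positive(prevalence, sensitivity, false_pos_rate):
    """P(has condition | tests positive), the switched conditional."""
    true_positives = prevalence * sensitivity            # P(A and B)
    false_positives = (1 - prevalence) * false_pos_rate  # P(A' and B)
    return true_positives / (true_positives + false_positives)

# HIV example: 10% prevalence, 95% sensitivity, 15% false positive rate
print(p_condition_given_positive(0.10, 0.95, 0.15))   # about 0.413
```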
The next two chapters will explore random variables. This chapter covers random variables for data that is discrete, while the next chapter explores random variables for continuous data.
06: Discrete Random Variables
A random variable is a variable whose value depends upon an experiment, observation or measurement. This differs from Math classes, where one can assign values to variables directly. Here, the value is assigned by a random process and is not known in advance. For the purposes of this class, the variable will be numeric.
Like in Mathematics, we will use letters as symbols to represent random variables. Upper case letters refer to the random variable as a function of some random activity. Lower case letters refer to values of the random variable, which are numbers.
Example: Roll a die
A fair six‐sided die is rolled. Let the random variable X represent the numeric value of the die roll. A five is rolled.
Upper Case X = the function = the number seen when a fair six‐sided die is rolled.
Lower Case x = the value of the roll = 5
6.02: What is a Discrete Random Variable
A discrete random variable is a random variable that has only discrete values. Discrete values are related to counting numbers.
Examples of discrete random variables
• The number when a die is rolled. Possible values = {1, 2, 3, 4, 5, 6}
• The number of heads when flipping two coins. Possible values = {0, 1, 2}
• Number of Siblings you have. Possible Values = {0, 1, 2, …} Here we don’t know the maximum, but the possible values are still whole numbers.
6.03: Probability Distribution Function (PDF) for Discrete Random Variables
All random variables have their values assigned in accordance with a probability model. For discrete variables, this assigning of probabilities to each possible value of the random variable is called a probability distribution function, or PDF for short.
This probability distribution function is written as $P(X=x)$ or $P(x)$ for short. This PDF can be read as “The probability the random variable $X$ equals the value $x$.”
Additionally, probability statements can be written as inequalities.
$P(X < x)$ means the probability the value of the random variable is less than $x$.
$P(X \leq x)$ means the probability the value of the random variable is at most $x$.
$P(X > x)$ means the probability the value of the random variable is more than $x$.
$P(X \geq x)$ means the probability the value of the random variable is at least $x$.
Like any function in Mathematics, a probability distribution function can be defined by a description, a table, a graph or a formula. The general method of assigning probabilities to values follows this procedure.
Procedure for creating a discrete probability distribution function
1. Define the random Variable $X$
2. List out all possible values
3. Assign probabilities to each value. You can use counting methods or relative frequencies.
4. This assignment must follow these two rules: $P(x) \geq 0$ and $\sum P(x)=1$
Example: Flip two coins
Two coins are flipped and the number of heads is counted.
$X$ = the number of heads when two coins are flipped
Possible Values = {0, 1, 2}
Here are 5 possible probability distribution functions:
Model A:
$x$   $P(x)$
0     1/3
1     1/3
2     1/3

Model B:
$x$   $P(x)$
0     0.25
1     0.50
2     0.25

Model C:
$x$   $P(x)$
0     0
1     0
2     1

Model D:
$x$   $P(x)$
0     0.3
1     0.3
2     0.3

Model E:
$x$   $P(x)$
0     0.6
1     -0.1
2     0.5
Models A, B and C are valid because each probability assignment is non‐negative and all probabilities total to 1.
Model B is the correct model for flipping fair coins as there are two ways to get one head.
Model C (a coin that only comes up head) is valid since zero probability is allowed.
Model D is invalid since the probabilities do not total to 1.
Model E is invalid because negative probabilities are not allowed.
Example: Multiple choice test
Students are given a multiple choice exam with 4 questions.
The random variable $X$ = the number of correct answers. Possible values = {0, 1, 2, 3, 4}
From past data, 10% of students get zero correct answers, 10% get exactly one correct answer, 20% get two correct, and 40% get three correct. Since the probabilities must add to 1, it can be determined that 20% of students got all correct, and the PDF can be finished.
$x$ $P(x)$
0 0.1
1 0.1
2 0.2
3 0.4
4 0.2
Solution
We can use the table to answer any type of probability question:
The probability of exactly 2 questions correct: $P(X =2) = P(2) = 0.2$
The probability of fewer than 2 questions correct: $P(X < 2) = P(0) + P(1) = 0.1 + 0.1 = 0.2$
The probability of more than 2 questions correct: $P(X > 2) = P(3) + P(4) = 0.4 + 0.2 = 0.6$
The probability of at least 2 questions correct: $P(X \geq 2) = P(2) + P(3) + P(4) = 0.2 + 0.4 + 0.2 = 0.8$
The probability of at most 2 questions correct: $P(X \leq 2) = P(0) + P(1) + P(2) = 0.1 + 0.1 + 0.2 = 0.4$
The probability of at least 1 question correct: $P(X \geq 1) = 1 - P(0) = 1 - 0.1 = 0.9$
The last example was done using the Rule of Complement. The complement of “at least one correct answer” is “zero correct answers”.
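These table-based probability questions are easy to check numerically. Here is a minimal Python sketch (our own illustration, not part of the original text) that stores the PDF as a dictionary and sums the probabilities of the qualifying values.

```python
pdf = {0: 0.1, 1: 0.1, 2: 0.2, 3: 0.4, 4: 0.2}   # PDF from the example above

p_exactly_2  = pdf[2]                                    # P(X = 2)  -> 0.2
p_fewer_2    = sum(p for x, p in pdf.items() if x < 2)   # P(X < 2)  -> 0.2
p_more_2     = sum(p for x, p in pdf.items() if x > 2)   # P(X > 2)  -> 0.6
p_at_least_1 = 1 - pdf[0]                                # Rule of Complement -> 0.9

print(p_exactly_2, p_fewer_2, p_more_2, p_at_least_1)
```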
Earlier, we described how to calculate the statistics sample mean and sample variance as measures of center and spread for sample data. For probability models of populations, we can calculate the expected value as a parameter describing the center of the data and the population variance as a parameter describing spread.
Definition: Parameter and Statistic
A parameter is a quantity that describes a population.
A statistic is a quantity that describes a sample.
The expected value of a random variable is also known as the population mean and is expressed by the symbol $\mu$ (pronounced mu). The expected value is a parameter, meaning a fixed quantity.
The population variance of a random variable is the expected value of the squared deviations from the population mean, that is, the expected value of $(x-\mu)^{2}$. The population variance is also a fixed parameter and is expressed by the symbol $\sigma^{2}$ (pronounced sigma‐squared). The population standard deviation is the square root of the population variance and is expressed by the symbol $\sigma$. For discrete random variables, Expected Value is calculated by probability weighting.
Expected Value ($\mu$) and Variance ($\sigma^{2}$) of Discrete Random Variable $X$
Expected Value (Population Mean): $\mu=E(x)=\sum x \cdot P(x)$
Population Variance: $\sigma^{2}=\operatorname{Var}(x)=E\left[(x-\mu)^{2}\right]=\sum(x-\mu)^{2} \cdot P(x)$
Population Standard Deviation: $\sigma=\sqrt{\operatorname{Var}(x)}$
Example: Multiple choice test
Students are given a multiple choice exam with 4 questions. Find the expected value and population variance of the random variable with the given probability distribution:
$x$ $P(x)$
0 0.1
1 0.1
2 0.2
3 0.4
4 0.2
Solution
To find the expected value of $X$, weigh each value of $X$ by the probability, then add them up.
$x$ $P(x)$ $x \cdot P(x)$
0 0.1 0.0
1 0.1 0.1
2 0.2 0.4
3 0.4 1.2
4 0.2 0.8
Total 1.0 $\mu$ = 2.5
The expected number of correct answers is 2.5.
Note that the Expected Value of a random variable does not have to be a possible value of that variable. For example, in 2015 the expected number of children born per American woman was 1.84, a quantity also known as the fertility rate.
To find the population variance, determine the quantity $(x-\mu)^{2}$ for each value of the random variable, weight by probability, and then add them up.
$x$ $P(x)$ $x \cdot P(x)$ $x-\mu$ $(x-\mu)^{2}$ $(x-\mu)^{2} \cdot P(x)$
0 0.1 0.0 ‐2.5 6.25 0.625
1 0.1 0.1 ‐1.5 2.25 0.225
2 0.2 0.4 ‐0.5 0.25 0.050
3 0.4 1.2 0.5 0.25 0.100
4 0.2 0.8 1.5 2.25 0.450
Total 1.0 $\mu$ = 2.5 $\sigma^{2}$ = 1.45
The population variance is 1.45 and the population standard deviation is $\sqrt{1.45}=1.20$ correct answers.
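Here is a minimal Python sketch of the probability-weighting procedure, using the quiz PDF above; the variable names are ours.

```python
from math import sqrt

pdf = {0: 0.1, 1: 0.1, 2: 0.2, 3: 0.4, 4: 0.2}

mu = sum(x * p for x, p in pdf.items())                 # expected value -> 2.5
var = sum((x - mu) ** 2 * p for x, p in pdf.items())    # population variance -> 1.45
print(mu, var, sqrt(var))                               # 2.5  1.45  1.204...
```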
Example: Major Atlantic Hurricanes
Hurricanes are tropical cyclones that have wind speeds of at least 74 MPH. Hurricanes are classified by wind speed from Category 1 to Category 5 by the Saffir‐Simpson Scale. Major Hurricanes are storms that have sustained winds of at least 111 MPH (Category 3 or higher).
Historically, there have been anywhere from zero to eight major hurricanes in the Atlantic Ocean during a year. Based on this data, we can create a discrete probability distribution function for $X$, the number of major Atlantic hurricanes in a year65:
$x$ 0 1 2 3 4 5 6 7 8
$P(x)$ 0.187 0.290 0.271 0.090 0.054 0.054 0.036 0.012 0.006
Find the expected value and population variance of this random variable.
Solution
Here is a table following the procedure of the prior example:
$x$ $P(x)$ $x \cdot P(x)$ $x-\mu$ $(x-\mu)^{2}$ $(x-\mu)^{2} \cdot P(x)$
0 0.187 0.000 ‐1.936 3.748 0.701
1 0.290 0.290 ‐0.936 0.876 0.254
2 0.271 0.542 0.064 0.004 0.001
3 0.090 0.270 1.064 1.132 0.102
4 0.054 0.216 2.064 4.260 0.230
5 0.054 0.270 3.064 9.388 0.507
6 0.036 0.216 4.064 16.516 0.595
7 0.012 0.084 5.064 25.644 0.308
8 0.006 0.048 6.064 36.772 0.221
Total 1.0 1.936 = $\mu$ 2.919 = $\sigma^{2}$
The expected number of major Atlantic hurricanes in any year is 1.936. The population variance is 2.919 and the population standard deviation is 1.709 major hurricanes per year.
We will now explore specific random variables that are frequently used in practice. These random variables will be generalized by parameters. We will start with the simplest of all random variables, the Bernoulli Distribution, also known as the indicator variable. This random variable, $X$, is designed for a yes/no or success/failure question. If the answer is Yes/Success, then $X = 1$. If the answer is No/Failure, then $X = 0$. The probability of success is $p$, and the probability of failure is $q = 1‐p$.
$X$ $P(X)$
0 $q = 1‐p$
1 $p$
Example: Free throw shooting
Draymond Green66, an NBA basketball player for the Golden State Warriors, is a 70% free throw shooter. This means when he shoots a free throw, there is a 70% probability that he will make the shot. The random variable $X$ = the number of successes when Draymond Green takes a free throw follows a Bernoulli Distribution with $p =0.7$ (success) and $q = 0.3$ (failure). Determine the pdf, mean and variance of the random variable.
$x$ $P(x)$ $x \cdot P(x)$ $x-\mu$ $(x-\mu)^{2}$ $(x-\mu)^{2} \cdot P(x)$
0 0.30 0.00 ‐0.70 0.49 0.147
1 0.70 0.70 0.30 0.09 0.063
Total 1 $0.7=\mu$ $0.21=\sigma^{2}$
Solution
The mean and variance can be calculated directly for the Bernoulli Random Variable.
$x$ $P(x)$ $x \cdot P(x)$ $x-\mu$ $(x-\mu)^{2}$ $(x-\mu)^{2} \cdot P(x)$
0 $1‐p$ 0 $‐p$ $p^{2}$ $(1-p) p^{2}$
1 $p$ $p$ $1‐p$ $(1-p)^{2}$ $P(1-p)^{2}$
Total 1 $\mu = p$ $\sigma^{2}=p(1-p)=p q$
For the Draymond Green example, $\mu=p=0.7$ and $\sigma^{2}=p q=(0.7)(0.3)=0.21$, which matches the answer calculated manually.
Bernoulli Probability Distribution (parameter = $p$)
One trial, two possible outcomes (Success/Failure) or (Yes/No)
$p = P$(yes/success)
$q=1-p=P$(no/failure)
X = Number of Yes/Successes {0, 1}
$\mu=p$
$\sigma^{2}=p(1-p)=p q$
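As a quick check of these formulas, here is a minimal simulation sketch (our own, assuming NumPy is available): drawing many Bernoulli trials with $p = 0.7$ should give a sample mean near $p$ and a sample variance near $pq = 0.21$.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
trials = (rng.random(100_000) < 0.7).astype(float)   # 1 = success, 0 = failure

print(trials.mean())   # should be close to p = 0.7
print(trials.var())    # should be close to pq = 0.21
```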
6.06: Binomial Distribution
The Bernoulli Random variable can now be extended to the Binomial Random Variable by repeating the experiment a fixed number of times. It is important that each of these trials are mutually independent, meaning that success or failure on one trial doesn’t change the probability of success or failure on subsequent trials.
For example, if you flip a fair coin in which heads is equal to success, then the probability of success would be 50% on every trial, regardless of what prior tosses were. This is an example of mutual independence, and the Binomial Distribution would be the appropriate model.
However, if you ask the question “Did it rain today?”, the probability of it raining the next day would probably be higher after a rainy day. This would be an example of not mutually independent, and the Binomial Distribution would not be the appropriate model.
Example: Free throw shooting
Let’s return to the example of Draymond Green, a 70% free throw shooter. Now he takes three free throws and we will assume free throw successes are independent. Let $X$ = number of successes, which could be 0, 1, 2 or 3. Find the mean, variance, and the probability that Draymond makes exactly 2 free throws. In this example, $n=3$ trials and $p=0.7$.
Solution
Because the Binomial Distribution is a sum of independent Bernoulli trials, we can simply multiply the Bernoulli formulas by $n$ to get mean and variance.
$\mu=n p=(3)(0.7)=2.1$
$\sigma^{2}=n p(1-p)=(3)(.7)(.3)=0.63$
To find the probability that Draymond Green makes exactly two free throws, we can make a tree diagram of all possible outcomes of Successes and Failures (S or F). There are three ways to make exactly 2 free throws: SSF, SFS or FSS.
$P(X=2)=P(S S F)+P(S F S)+P(F S S)=(3)(0.7)^{2}(0.3)^{1}=0.441$
There is about a 44% chance that Draymond Green will make exactly two free throws in three trials.
To find the probability Draymond makes at least 2 free throws, we would have to also consider when he makes all three shots (SSS).
$P(X \geq 2)=P(X=2)+P(X=3)=(3)(0.7)^{2}(0.3)^{1}+(1)(0.7)^{3}(0.3)^{0}=0.441+0.343=0.784$
There is about a 78.4% chance that Draymond Green will make at least two free throws in three trials.
For larger sample sizes, tree diagrams are too tedious to use. There is a formula to find the probability of exactly $x$ successes in $n$ trials:
$P(x)={ }_{n} C_{x} p^{x}(1-p)^{n-x} \nonumber$
The combination formula ${ }_{n} C_{x}=\dfrac{n !}{x !(n-x) !}$ means the number of ways x successes can occur out of n trials. This formula is also tedious to use, so we will rely on tables or technology to calculate binomial probabilities.
Here is a summary of the Binomial Distribution
Binomial Probability Distribution (parameters= $n, p$)
$n$= number of independent trials (sample size)
Two possible outcomes (Success/Failure) or (Yes/No)
$\mathbf{p} = P$(yes/success) on one trial
$q = 1‐p = P$(no/failure) on one trial
$X$ = Number of Yes/Successes {0, 1, 2, ..., n}
$\mu=n p$
$\sigma^{2}=n p(1-p)$
$\sigma=\sqrt{n p(1-p)}$
$P(x)={ }_{n} C_{x} p^{x}(1-p)^{n-x}$
Example: Quality control
90% of super duplex globe valves67 manufactured are good (not defective). A sample of 10 valves is selected. Define the random variable and determine the parameters.
Solution
$X$ = number of good valves in the sample of 10.
$n = 10, p = 0.9$
Find the mean and variance
$\mu=n p=(10)(0.9)=9$
$\sigma^{2}=n p(1-p)=(10)(0.9)(0.1)=0.9$
For the following probability questions, we can use technology or a binomial probability table; the values used below were computed with Minitab.
Find the probability of exactly 8 good valves being chosen.
$P(X=8)=0.194 \nonumber$
Find the probability of 9 or more good valves being chosen.
$P(X \geq 9)=P(9)+P(10)=0.387+0.349=0.736\nonumber$
Find the probability of 8 or fewer good valves being chosen.
$P(X \leq 8)=P(0)+P(1)+\ldots+P(8) \nonumber$ or instead use the Rule of Complement and the prior example
$P(X \leq 8)=1-P(X \geq 9)=1-0.736=0.264 \nonumber$
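If software is preferred over a printed table, here is a minimal sketch using SciPy's `binom` distribution (assuming SciPy is available; Minitab itself is not required) to reproduce the three probabilities above.

```python
from scipy.stats import binom

n, p = 10, 0.9
print(binom.pmf(8, n, p))                        # P(X = 8)  -> 0.194
print(binom.pmf(9, n, p) + binom.pmf(10, n, p))  # P(X >= 9) -> 0.736
print(binom.cdf(8, n, p))                        # P(X <= 8) -> 0.264
```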
Consider these two random variables, which both start with repeated Bernoulli trials:
1. Flip a fair coin 10 times. Let $X$ = the number of heads.
2. Flip a fair coin repeatedly until you get a head. Let $X$ = the number of total flips.
The first random variable is a binomial random variable where $n =10$ and $p=0.5$. The possible values of $X$ are {0,1,2,3,4,5,6,7,8,9,10}
The second random variable is unusual in that there are an infinite number of possibilities for $X$. The possible number of flips until you get a head are {1, 2, 3, ...}. This is called the geometric distribution and its features are shown in the box.
Geometric Probability Distribution (parameter= $p$)
Two possible outcomes (Success/Failure) or (Yes/No)
$p = P$(yes/success) on one trial
$q = 1‐p = P$(no/failure) on one trial
$X$ = Number of independent trials until the first success. (1, 2, 3, ...)
$\mu=\dfrac{1}{p}$
$\sigma^{2}=\dfrac{1-p}{p^{2}}$
$\sigma=\sqrt{\dfrac{1-p}{p^{2}}}$
$P(x)=p(1-p)^{x-1}$
Example: Free throw shooting
Let’s again return to the example of Draymond Green, a 70% free throw shooter. Now let $X$ = the number of free throws Draymond takes until he makes a shot. $X$ follows a geometric distribution.
Solution
The expected number of shots: $\mu=\dfrac{1}{p}=1.43$ shots
The variance: $\sigma^{2}=\dfrac{1-0.7}{0.7^{2}}=0.612$
The probability that Draymond Green takes exactly 3 shots to make a free throw:
$P(X=3)=0.7(0.3)^{2}=0.063$
The probability that Draymond Green takes 3 or more shots to make a free throw:
Since $P(X \geq 3)=P(3)+P(4)+\ldots$ is an infinite sum, it is better to use the Rule of Complement.
$P(X \geq 3)=1-[P(1)+P(2)]=1-\left[(0.7)(0.3)^{0}+(0.7)(0.3)^{1}\right]=1-0.91=0.09$
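SciPy's `geom` distribution counts the number of trials until the first success, matching the definition used here. A minimal sketch reproducing the values above, assuming SciPy is available:

```python
from scipy.stats import geom

p = 0.7
print(geom.mean(p))        # expected trials until success: 1/p -> 1.43
print(geom.pmf(3, p))      # P(X = 3) = 0.7(0.3)^2 -> 0.063
print(1 - geom.cdf(2, p))  # P(X >= 3) -> 0.09
```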
6.08: Poisson Distribution
Random variables that can be thought of as “How many occurrences per time period”, or “How many occurrences per region” have many practical applications. For example:
• The number of strong earthquakes per year in California.
• The number of customers per hour at a restaurant.
• The number of accidents per week at a manufacturing plant.
• The number of errors per page in a manuscript.
If the rate is constant, these random variables will follow a Poisson distribution.
The Poisson Distribution is actually derived from a Binomial Distribution in which the sample size $n$ gets very large and the probability of success $p$ is very small. A good example of this is the Powerball Lottery.
Example: Powerball Lottery
The odds of winning the Powerball Lottery jackpot with a single ticket are 292,000,000 to 1. Suppose the jackpot gets large and 292,000,000 tickets are sold.
Solution
Let $X$ = Number of jackpot winning tickets sold.
Under the Binomial distribution, $n=292,000,000$ and $p = 1/292,000,000$. Note that $p$ is very close to zero, so $1‐p$ is very close to 1.
$\mu=n p=1$
$\sigma^{2}=n p(1-p) \approx n p=\mu=1$
The number of winners can be modeled by the Poisson Distribution, in which the single parameter $\mu$ is the expected number of winners; in this case $\mu=1$. There could theoretically be millions of winners, so the Poisson Distribution is designed so that there is no theoretical upper limit for the value of $X$ (although there are practical limits in real‐life problems).
The important features of the Poisson Distribution are shown here:
Poisson Probability Distribution (parameter= $\mu$)
$\mu$ = expected occurrences per given time period or region. This rate must be constant.
$X$ = number of occurrences per given time period or region. Possible values of $X$: {0, 1, 2, …} (no upper limit)
$\sigma^{2}=\mu$
$\sigma=\sqrt{\mu}$
$P(x)=\dfrac{e^{-\mu} \mu^{x}}{x !}$
Example: Continuation of Powerball Lottery
Find the probability of no jackpot winners.
$P(0)=\dfrac{e^{-1} 1^{0}}{0 !}=0.368$
Find the probability of at least one jackpot winner. The answer calculated directly is an infinite sum, so instead use the Rule of Complement
$P(X \geq 1)=P(1)+P(2)+\cdots$
$P(X \geq 1)=1-P(0)=1-\dfrac{e^{-1} 1^{0}}{0 !}=0.632$
There is a 63.2% chance that at least one winning ticket is sold.
Example: Earthquakes
Earthquakes of Richter magnitude 3 or greater occur on a certain fault at a rate of two times per year. Assume this rate is constant.
Solution
Find the probability of at least one earthquake of RM 3 or greater in the next year.
$\mu=2$ per year.
$P(X \geq 1)=1-P(0)=1-\dfrac{e^{-2} 2^{0}}{0 !}=0.865$
Find the probability of exactly 6 earthquakes of RM 3 or greater in the next 2 years.
When determining the parameter $\mu$ for the Poisson Distribution, make sure that the expected value is over the time period or region given in the problem. Since these earthquakes occur at a rate of 2 per year, we would expect 4 earthquakes in 2 years.
$\mu$= (2 per year)( 2 years) = 4
$P(X=6)=\dfrac{e^{-4} 4^{6}}{6 !}=0.104$
Counting methods that are modeled by random variables that follow a Poisson Distribution are also called a Poisson Process.
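For readers using software instead of the formula, here is a minimal sketch with SciPy's `poisson` distribution (our own illustration) reproducing the lottery and earthquake answers above.

```python
from scipy.stats import poisson

print(1 - poisson.pmf(0, 1))   # lottery: P(at least one winner) -> 0.632
print(1 - poisson.pmf(0, 2))   # earthquakes: P(X >= 1) in one year -> 0.865
print(poisson.pmf(6, 4))       # earthquakes: P(X = 6) in two years -> 0.104
```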
The prior section covered discrete random variables, in which the possible values are discrete whole numbers. We now want to move to random variables that have continuous data.
07: Continuous Random Variables
A continuous random variable is a random variable that has only continuous values. Continuous values are uncountable and are related to real numbers.
Examples of continuous random variables
• The time it takes to complete an exam for a 60 minute test. Possible values = all real numbers on the interval [0, 60]
• Age of a fossil. Possible values = all real numbers on the interval [minimum age, maximum age]
• Miles per gallon for a Toyota Prius. Possible values = all real numbers on the interval [minimum MPG, maximum MPG]
The main difference between continuous and discrete random variables is that continuous probability is measured over intervals, while discrete probability is calculated on exact points.
For example, it would make no sense to find the probability it took exactly 32 minutes to finish an exam. It might take you 32.012342472… minutes. Probability of points no longer makes sense when we move from discrete to continuous random variables.
Instead, you could find the probability of taking at least 32 minutes for the exam, or the probability of taking between 31 and 33 minutes to complete the exam. Instead of assigning probability to points, we instead define a probability density function (pdf) that will help us find probabilities. This function must always have a non‐negative range (output). Probability can then be determined by finding the area under the function. To be a valid probability density function, the total area under the curve must equal 1.
If the drawing represents a valid probability density function for a random variable $X$, then
$P(a<X<b)=\text { shaded area } \nonumber$
This table shows the similarities and differences between Discrete and Continuous Distributions
Discrete Distributions:
• Countable
• Discrete Points
• Points have probability
• $p(x)$ is the probability distribution function
• $p(x) \geq 0$
• $\Sigma p(x)=1$

Continuous Distributions:
• Uncountable
• Continuous Intervals
• Points have no probability
• $f(x)$ is the probability density function
• $f(x) \geq 0$
• Total Area under curve = 1
Example: Driving to school
The time to drive to school for a community college student is an example of a continuous random variable. The probability density function and areas of regions created by the points 15 and 25 minutes are shown in the graph.
1. Find the probability that a student takes less than 15 minutes to drive to school.
2. Find the probability that a student takes no more than 15 minutes to drive to school. This answer is the same as the prior question, because points have no probability with continuous random variables.
3. Find the probability that a student takes more than 15 minutes to drive to school.
4. Find the probability that a student takes between 15 and 25 minutes to drive to school.
Solution
1. $P(X<15)=0.20$
2. $P(X \leq 15)=0.20$
3. $P(X>15)=0.45+0.35=0.80$
4. $P(15 \leq X \leq 25)=0.45$
We can also use a continuous distribution model to determine percentiles.
The $p^{th}$ percentile is the value $x_p$ such that $P\left(X<x_{p}\right)=p$
Find the $20^{th}$ and $65^{th}$ percentiles of times driving to school.
From the drawing, $x_{20} = 15$ minutes and $x_{65} = 25$ minutes.
Expected Value and Variance of Continuous Random Variables
The mean and variance can be calculated for most continuous random variables. The actual calculations require calculus and are beyond the scope of this course. We will use the same symbols to define the expected value and variance that were used for discrete random variables.
Expected Value ($\mu$) and Variance ($\sigma^{2}$) of Continuous Random Variable $X$
Expected Value (Population Mean): $\mu=E(x)$
Population Variance: $\sigma^{2}=\operatorname{Var}(x)=E\left[(x-\mu)^{2}\right]$
Population Standard Deviation: $\sigma=\sqrt{\operatorname{Var}(x)}$
These next sections explore three special continuous random variables that have practical applications.
The exponential distribution is often used to model the waiting time until an event occurs. For example, the waiting time until you receive a text message or the waiting time until an accident at a manufacturing plant will follow an exponential distribution.
This model has one parameter, the expected waiting time, $\mu$.
An important assumption for the Exponential is that the expected future waiting time is independent of the past waiting time. For example, if you expect to wait 5 minutes for a text message and you wait 3 minutes, the expected waiting time at that point is still 5 minutes.
This can be written as a probability statement: $P(X>a)=P(X>a+b \mid X>b)$
The Exponential Distribution is useful to model the waiting time until something “breaks”, but would not be the appropriate model for something that “wears out.”
Exponential Probability Distribution (parameter=$\mu$)
$\mu$ = expected waiting time until event occurs.
$X$ = waiting time until event occurs
Assumption: Waiting time in the future is independent of waiting time in the past:
$P(X>a)=P(X>a+b \mid X>b)$
$\sigma^{2}=\mu^{2}$
$\sigma=\mu$
Example: Cracked screen on smart phone.
The time until a screen is cracked on a smart phone has an Exponential distribution with $\mu=500$ hours of use.
1. Find the probability that the screen will not crack for at least 600 hours.
2. What is the median time until the smart phone's screen is cracked?
Solution
1. Here we use the formula for a probability problem, $P(X>a)=e^{-a / \mu}$
$P(x>600)=e^{-600 / 500}=e^{-1.2}=.3012 \nonumber$
Assuming that the screen has already lasted 500 hours without cracking, find the chance that the display will last an additional 600 hours.
Because of the memoryless feature of the Exponential distribution, the answer will be the same as if the smart phone was never used.
$P(x>1100 \mid x>500)=P(x>600)=.3012 \nonumber$
2. Because the Exponential distribution is always positively skewed, the median will be lower than the mean of 500 hours. The median is the $50^{th}$ percentile, so this is a percentile problem. We can derive the formula for the $p^{th}$ percentile ($x_p$) using algebra:
\begin{aligned} P\left(X>x_{p}\right)=e^{-x_{p} / \mu}&=1-p \\ -x_{p} / \mu&=\ln (1-p) \\ x_{p}&=-\mu \ln (1-p) \end{aligned} \nonumber
median $=x_{50}=-500 \ln (1-0.5)=347$ hours
This means that half of smart phones will have cracked screens after 347 hours of usage.
Relationship between Exponential Distribution and Poisson Distribution
There is a relationship between the Poisson Distribution, (covered in Chapter 6 on discrete distributions) and the Exponential Distribution. Recall that the Poisson distributions models the number of occurrences in a fixed time period if the rate that events occur follows a constant rate. A random variable that follows a Poisson Distribution is called a Poisson Process.
If occurrences follow a Poisson Process with mean = $\mu$, then the waiting time for the next occurrence has Exponential distribution with mean = $1 / \mu$.
Example: Accidents at an oil refinery68
Accidents occur at an oil refinery at a constant rate of 3 per month. This is an example of a Poisson Process.
The random variable $Y$ = the number of accidents in the next month would follow a Poisson Distribution with $\mu=3$ occurrences per month
The Random Variable $X$ = the waiting time until the next refinery accident would follow an Exponential distribution with $\mu=1 / 3$ months.
1. Find the probability of waiting less than 2 months for the next oil refinery accident.
2. Find the $90^{th}$ percentile of waiting times for a refinery accident
Solution
1. $P(X<2)=1-e^{-2 /(1 / 3)}=1-e^{-6}=0.9975$
2. $x_{90}=-\dfrac{1}{3} \ln (1-0.90)=0.768$ months
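A minimal sketch with SciPy's `expon` distribution, which takes the mean through its `scale` argument; this is our own illustration of the screen and refinery calculations above.

```python
from scipy.stats import expon

print(1 - expon.cdf(600, scale=500))   # screen: P(X > 600) -> 0.3012
print(expon.ppf(0.50, scale=500))      # screen: median -> about 347 hours
print(expon.cdf(2, scale=1/3))         # refinery: P(X < 2 months) -> 0.9975
print(expon.ppf(0.90, scale=1/3))      # refinery: 90th percentile -> 0.768 months
```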
A uniform distribution is a continuous random variable in which all values between a minimum value and a maximum value have the same probability.
The two parameters that define the Uniform Distribution are:
$a$= minimum $b$ = maximum
The probability density function is the constant function $f(x) = 1/(b‐a)$, which creates a rectangular shape.
Example: Loose leaf tea
A tea lover enjoys Tie Guan Yin loose leaf tea and drinks it frequently. To save money, when the supply gets to 50 grams he will purchase this popular Chinese tea in a 1000 gram package.
The amount of tea currently in stock follows a uniform random variable.
Solution
$X$ = the amount of tea currently in stock
$a$ = minimum = 50 grams
$b$ = maximum = 1050 grams
$f(x) = 1/(1050 ‐ 50) = 0.001$
The expected value, population variance and standard deviation are calculated using the formulas:
$\mu=\dfrac{a+b}{2} \qquad \sigma^{2}=\dfrac{(b-a)^{2}}{12} \qquad \sigma=\sqrt{\dfrac{(b-a)^{2}}{12}} \nonumber$
For the loose leaf tea problem:
$\mu=\dfrac{50+1050}{2}=550$g
$\sigma^{2}=\dfrac{(1050-50)^{2}}{12}=83,333$
$\sigma=\sqrt{83333}=289$g
Probability problems can be easily solved by finding the area of rectangles.
Find the probability that there are at least 700 grams of Tie Guan Yin tea in stock.
$P(X \geq 700)=\text { width } \times \text { height }=(1050-700)(0.001)=0.35$
The $p^{th}$ percentile of the Uniform Distribution is calculated by using linear interpolation: $x_{p}=a+p(b-a)$
Find the $80^{th}$ percentile of Tie Guan Yin in stock:
$x_{80}=50+0.80(1050-50)=850$ grams
The important features of the Uniform Distribution are summarized here:
Uniform Probability Distribution (parameters: $a, b$)
$a$ = minimum value
$b$ = maximum value
$a \leq X \leq b$: All values of $X$ between $a$ and $b$ are equally likely
$f(x)=\dfrac{1}{b-a}$
$\mu=\dfrac{a+b}{2}$
$\sigma^{2}=\dfrac{(b-a)^{2}}{12}$
$\sigma=\sqrt{\dfrac{(b-a)^{2}}{12}}$
Example: Waiting for a train
The Sounder commuter train69 from Lakeview to Seattle, Washington arrives at Tacoma station every 20 minutes during the morning rush hour. Assume that this train is running on time.
1. Find the expected waiting time, standard deviation, and probability density function for $X$.
2. Find the Interquartile Range for this Random Variable. First find the $1^{st}$ and $3^{rd}$ quartiles.
3. Find the probability of waiting at least 15 minutes for the next commuter train after arriving at Tacoma Station.
4. Find conditional probabilities for the Uniform Distribution.
Solution
Let $X$ = the waiting time for the next train to arrive. X will follow a Uniform Distribution with the minimum waiting time of 0 minutes (you just catch the train) and a maximum waiting time of 20 minutes (you just miss the train).
1. The expected waiting time is 10 minutes: $\mu=\dfrac{0+20}{2}=10$
The standard deviation is 5.77 minutes: $\sigma^{2}=\dfrac{(20-0)^{2}}{12}=33.33 \quad \sigma=\sqrt{33.33}=5.77$
The probability density function for X is: $f(x)=\dfrac{1}{20-0}=0.05$
2. \begin{aligned} Q 1&=x_{25}=0+0.25(20-0)=5 \\ Q 3&=x_{75}=0+0.75(20-0)=15 \end{aligned}
Interquartile Range = $Q 3-Q 1=15-5=10$ minutes
3. $P(X \geq 15)=\dfrac{20-15}{20-0}=0.25$
4. To find conditional probabilities for the Uniform Distribution, it is easiest to just create a new Uniform Distribution from the information given.
After arriving at Tacoma Station, a commuter waits 5 minutes. Find the probability that the commuter will wait at least an additional 10 minutes (a total of 15 minutes) before the next train arrives.
The conditional probability statement can be written as $P(X \geq 15 \mid X \geq 5)$.
Instead, simply define a new Random Variable $Y$ = the expected total waiting time, assuming the commuter waits at least 5 minutes.
$a$ = minimum wait = 5 minutes
$b$ = maximum wait = 20 minutes
$P(Y \geq 15)=\dfrac{20-15}{20-5}=0.333$
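SciPy's `uniform` distribution is parameterized by `loc` $= a$ and `scale` $= b-a$. Here is a minimal sketch (our own) of the tea and train calculations above.

```python
from scipy.stats import uniform

tea = uniform(loc=50, scale=1000)         # a = 50, b = 1050 grams
print(1 - tea.cdf(700))                   # P(X >= 700) -> 0.35
print(tea.ppf(0.80))                      # 80th percentile -> 850 grams

train = uniform(loc=0, scale=20)          # waiting time from 0 to 20 minutes
print(train.ppf(0.75) - train.ppf(0.25))  # interquartile range -> 10 minutes
```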
The most important probability distribution in Statistics is the Normal Distribution, the iconic bell‐shaped curve. The Normal Distribution is symmetric and defined by two parameters: the expected value (mean) $\mu$, which describes the center of the distribution, and the standard deviation $\sigma$, which describes the spread.
The rather complicated probability density function for the Normal Distribution is:
$f(x)=\dfrac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}},-\infty<X<\infty \nonumber$
Examples of the Normal Distribution are shown here.
There are many examples of data that are both symmetric and clustered towards the mean. For example, we created dot plots of the weights of apples and oranges in Chapter 2. You can see both graphs are clustered towards the center and are symmetric. A Normal Distribution would be an appropriate model for the weight of apples and oranges.
Standard Normal Distribution
A special case of the Normal Distribution is when $\mu=0$ and $\sigma=1$.
This random variable is known as the Standard Normal Distribution and is always represented by the letter $Z$.
For calculating probabilities and percentiles of the Normal Distribution, tables, graphing calculators or computers are needed. For illustration purposes, we will fill in some of these probabilities for the Standard Normal Distribution by showing areas under the curve:
$P(-1<Z<1)=0.3413+0.3413=0.6826$
$P(-2<Z<2)=0.3413+0.3413+0.1359+0.1359=0.9544$
$P(-3<Z<3)=0.3413+0.3413+0.1359+0.1359+0.0214+0.0214=0.9972$
This means that for the Standard Normal Distribution, 68% of the probability is between ‐1 and 1, 95% of the probability is between ‐2 and 2, and 99.7% of the probability is between ‐3 and 3.
These percentages may seem familiar from the Empirical Rule in Chapter 3.
68% of the data is within 1 standard deviation of the mean.
95% of the data is within 2 standard deviations of the mean.
99.7% of the data is within 3 standard deviations of the mean.
The Empirical Rule comes directly from the Standard Normal Distribution, $Z$. In fact, any Normal Random Variable, $X$ with Expected value $\mu$ and Standard Deviation $\sigma$ can be converted to a Standard Normal Distribution by using the formula: $Z=\dfrac{X-\mu}{\sigma}$
Example: Water usage
The daily water usage per person in a town is normally distributed with a mean (expected value) of 20 gallons and a standard deviation of 5 gallons.
1. Determine the proportion of people who use between 15 and 25 gallons of water.
2. Determine the proportion of people who use between 10 and 30 gallons of water
3. Between what two values would you expect to find about 95% of the water users?
Solution
1. \begin{aligned} P(15<X<25) &=P\left(\dfrac{15-20}{5}<Z<\dfrac{25-20}{5}\right) \\ &=P(-1<Z<1) \\ &=0.6826 \end{aligned}
2. \begin{aligned} P(10<X<30) &=P\left(\dfrac{10-20}{5}<Z<\dfrac{30-20}{5}\right) \\ &=P(-2<Z<2) \\ &=0.9544 \end{aligned}
3. Since $P(-2<Z<2)=0.9544$, we can say about 95% of the water users are within two standard deviation of the mean, or that they use between 10 and 30 gallons per day.
Normal Probability Distribution (parameters: $\mu, \sigma$)
$\mu$ = Expected Value of X, population mean
$\sigma$ = population standard deviation
$f(x)=\dfrac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}},-\infty<X<\infty$
Calculating Probabilities: $Z=\dfrac{X-\mu}{\sigma}$
$P(a<X<b)=P\left(\dfrac{a-\mu}{\sigma}<Z<\dfrac{b-\mu}{\sigma}\right)$
Calculating Percentiles: $X=\mu+Z \sigma$
$P\left(Z<z_{p}\right)=P\left(X<\mu+z_{p} \sigma\right)=p$
In general, probability and percentile questions using the Normal Distribution will require tables or technology that can calculate Normal Distribution probabilities or percentiles for non‐integer $Z$ values.
Example: Water usage
The daily water usage per person in a town is normally distributed with a mean of 20 gallons and a standard deviation of 5 gallons.
1. What is the probability that a person from the town selected at random will use fewer than 18 gallons per person per day?
2. What proportion of the people use between 18 and 24 gallons per person per day?
3. What percentage of the population uses more than 26.2 gallons per person per day?
4. A special tax is going to be charged on the top 5% of water users. Find the value of daily water usage that generates the special tax.
Solution
1. \begin{aligned} P(X<18) &=P\left(Z<\dfrac{18-20}{5}\right) \\ &=P(Z<-0.40) \\ &=0.3446 \end{aligned}
2. \begin{aligned} P(18<X<24) &=P\left(\dfrac{18-20}{5}<Z<\dfrac{24-20}{5}\right) \\ &=P(-0.40<Z<0.80) \\ &=0.4435 \end{aligned}
3. \begin{aligned} P(X>26.2) &=P\left(Z>\dfrac{26.2-20}{5}\right) \\ &=P(Z>1.24) \\ &=0.1075=10.75\% \end{aligned}
4. This problem is really finding the $95^{th}$ percentile.
The $Z$ value associated with $95^{th}$ percentile =1.645
$X_{95}=20 + 5(1.645) = 28.2$ gallons per day
Example: Grading on the curve
Professor Kurv has determined that the final averages in his statistics course are normally distributed with a mean of 77.1 and a standard deviation of 11.2. He decides to assign his grades for his current course such that the top 15% of the students receive an A.
What is the lowest average a student can receive to earn an A?
Solution
The top 15% would be the finding the $85^{th}$ percentile. The corresponding $Z$ value is 1.04.
The minimum grade for an A: $X=77.1+(1.04)(11.2)$, or $X=88.75$ points.
Example: Server tip
The amount of tip the servers in an exclusive restaurant receive per shift is normally distributed with a mean of \$80 and a standard deviation of \$10. Shelli feels she has provided poor service if her total tip for the shift is less than \$65. (This doesn't mean she gave poor service, but rather that she just feels like she did).
What percentage of the time will she feel like she provided poor service?
Solution
Let $X$ be the amount of the tip.
The $Z$ value associated with $X=65$ is $Z= (65‐80)/10= ‐1.5$.
Thus $P(X<65)=P(Z<-1.5)=0.0668$
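A minimal sketch using SciPy's `norm` distribution to reproduce the water usage and grading answers above; the variable names are ours, and the small differences from the text come from rounding the critical values.

```python
from scipy.stats import norm

water = norm(loc=20, scale=5)
print(water.cdf(18))                   # P(X < 18) -> 0.3446
print(water.cdf(24) - water.cdf(18))   # P(18 < X < 24) -> 0.4435
print(water.ppf(0.95))                 # 95th percentile -> about 28.2 gallons

grades = norm(loc=77.1, scale=11.2)
print(grades.ppf(0.85))                # minimum grade for an A -> about 88.7
```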
In Chapter 3, we explored the sample mean $\overline{X}$ as a statistic that represents the average of quantitative data. When sampling from a population, the sample mean could be many different values. Therefore, we now want to explore the sample mean as a random variable.
08: The Central Limit Theorem
First, think of a random variable $X$ from a population that is defined by some probability distribution or density function. This random variable could be continuous or discrete data. Sampling is repeatedly obtaining values of this random variable.
We will define a Random Sample $X_{1}, X_{2}, \ldots, X_{n}$ in which each of the random variables $X_{i}$ has the same probability distribution and are mutually independent of each other. The sample mean is a function of these random variables (add them up and divide by the sample size), so $\overline{X}$ is a random variable. So what is the Probability Distribution Function (pdf) of $\overline{X}$?
To answer this question, conduct the following experiment. We will roll samples of $n$ dice, determine the mean roll, and create a pdf for different values of $n$. For the case $n=1$, the distribution of the sample mean is the same as the distribution of the random variable. Since each die has the same chance of being chosen, the distribution is rectangular shaped centered at 3.5:
For the case $n=2$, the distribution of the sample mean starts to take on a triangular shape since some values are more likely to be rolled than others. For example, there are six ways to roll a total of 7 and get a sample mean of 3.5, but only one way to roll a total of 2 and get a sample mean of 1. Notice the pdf is still centered at 3.5.
For the case $n=10$, the pdf of the sample mean now takes on a familiar bell shape that looks like a Normal Distribution. The center is still at 3.5 and the values are now more tightly clustered around the mean, implying that the standard deviation has decreased.
Finally, for the case $n=30$, the pdf continues to look like the Normal Distribution centered around the same mean of 3.5, but more tightly clustered than the prior example:
This die‐rolling example demonstrates the Central Limit Theorem’s three important observations about the PDF of $\overline{X}$ compared to the pdf of the original random variable.
1. The mean stays the same.
2. The standard deviation gets smaller.
3. As the sample size increase, the pdf of $\overline{X}$ is approximately a Normal Distribution.
Central Limit Theorem for the Sample Mean
If $X_{1}, X_{2}, \ldots, X_{n}$ is a random sample from a population that has a mean $\mu$ and a standard deviation $\sigma$, and $n$ is sufficiently large ($n \geq 30$) then:
1. $\mu_{\bar{X}}=\mu$
2. $\sigma_{\bar{X}}=\dfrac{\sigma}{\sqrt{n}}$
3. The Distribution of $\overline{X}$ is approximately Normal.
Combining all of the above into a single formula: $Z=\dfrac{\overline{X}-\mu}{\sigma / \sqrt{n}}$, where $Z$ represents the Standard Normal Distribution.
This powerful result allows us to use the sample mean $\overline{X}$ as an estimator of the population mean $\mu$. In fact, most inferential statistics practiced today would not be possible without the Central Limit Theorem.
Example: Mean height of men
The mean height of American men (ages 20‐29) is $\mu = 69.2$ inches. If a random sample of 60 men in this age group is selected, what is the probability the mean height for the sample is greater than 70 inches? Assume $\sigma=2.9^{\prime \prime}$
Solution
Due to the Central Limit Theorem, we know the distribution of the sample mean will be approximately Normal:
$P(\overline{X}>70)=P\left(Z>\dfrac{(70-69.2)}{2.9 / \sqrt{60}}\right)=P(Z>2.14)=0.0162 \nonumber$
Compare this to the much larger probability that one male chosen will be over 70 inches tall:
$P(X>70)=P\left(Z>\dfrac{(70-69.2)}{2.9}\right)=P(Z>0.28)=0.3897 \nonumber$
This example demonstrates how the sample mean will cluster towards the population mean as the sample size increases.
Example: Text messages
The waiting time until receiving a text message follows an exponential distribution with an expected waiting time of 1.5 minutes. Find the probability that the mean waiting time for the 50 text messages exceeds 1.6 minutes.
Solution
For the exponential distribution, the mean equals the standard deviation. Since the sample size is over 30, the distribution of $\overline{X}$ will be normal, even though the distribution of $X$ is heavily skewed.
$\mu=1.5 \qquad \sigma=1.5 \qquad n=50$
\begin{aligned} P(\bar{X}>1.6) &=P\left(Z>\dfrac{1.6-1.5}{1.5 / \sqrt{50}}\right) \\ &=P(Z>0.47) \\ &=0.3192 \end{aligned}
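Here is a minimal sketch of the Central Limit Theorem calculation for the height example, assuming SciPy is available. Note how dividing $\sigma$ by $\sqrt{n}$ makes a sample mean above 70 inches far less likely than a single man above 70 inches.

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 69.2, 2.9, 60
z = (70 - mu) / (sigma / sqrt(n))
print(1 - norm.cdf(z))                  # P(sample mean > 70) -> about 0.016
print(1 - norm.cdf((70 - mu) / sigma))  # P(one man > 70)     -> about 0.39
```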
The Central Limit Theorem will also work for sample proportions if certain conditions are met.
The Binomial Distribution
In Chapter 6, we explored the Binomial Random Variable, in which $X$ measures the number of successes in a fixed number of independent trials. The Binomial distribution had two parameters: the sample size $n$, and the probability of success on a single trial $p$.
Example: Free throw shooting
Recall the example of Draymond Green, an NBA basketball player for the Golden State Warriors who is a 70% free throw shooter.
The random variable $X$ = the number of successes when Draymond Green takes $n$ free throws follows a Binomial Distribution with $p =0.7$ (success) and $q = 0.3$ (failure). Let's graph the probability distribution function for $n$ = 1, 5, 25 and 100:
Notice that as the sample size gets larger, the shape of the random variable becomes Normal.
A good rule to use is that if $np>10$ and $n(1‐p) > 10$, the shape of the Binomial Distribution is approximately Normal.
The Sample Proportion random variable
Instead of looking at the number of successes in a fixed number, consider the proportion of successes in these trials. We will use the symbol $\hat{p}$ (read as p‐hat) to represent the proportion of successes in $n$ trials. If $X$ is the number of successes in $n$ trials, $\hat{p}=\dfrac{X}{n}$ is the sample proportion of successes in $n$ trials.
Here is a comparison of these two random variables:
Random Variable $X$ $\hat{p}$
Expected value $\mu=n p$ $\mu_{\hat{p}}=p$
Variance $\sigma^{2}=n p(1-p)$ $\sigma_{\hat{p}}^{2}=\dfrac{p(1-p)}{n}$
Standard Deviation $\sigma=\sqrt{n p(1-p)}$ $\sigma_{\hat{p}}=\sqrt{\dfrac{p(1-p)}{n}}$
Example: Free throw shooting
Draymond Green, a 70% free‐throw shooter, takes 4 free throws.
$X$ = The number of successes in 4 free throws.
$\hat{p}=\dfrac{X}{n}$ = The proportion of successes in 4 free throws.
Determine the probability distribution function, the expected value and the standard deviation for the random variable $\hat{p}$.
Solution
$x$ $\hat{p}$ $P(\hat{p})$
0 0.00 0.0081
1 0.25 0.0756
2 0.50 0.2646
3 0.75 0.4116
4 1.00 0.2401
$\mu_{\hat{p}}=p=0.7$
$\sigma_{\hat{p}}=\sqrt{\dfrac{p(1-p)}{n}}=\sqrt{\dfrac{0.7(1-0.7)}{4}}=0.2291$
The Central Limit Theorem for Sample Proportions
If $X$ is a Random Variable from a Binomial Distribution with parameters $n$ and $p$, and $np > 10$ and $n(1‐p) > 10$
Then the following is true for the Sample Proportion $\hat{p}=\dfrac{X}{n}$
1. $\mu_{\hat{p}}=p$
2. $\sigma_{\hat{p}}=\sqrt{\dfrac{p(1-p)}{n}}$
3. The Distribution of $\hat{p}$ is approximately Normal.
Combining all of the above into a single formula: $Z=\dfrac{\hat{p}-p}{\sqrt{\frac{p(1-p)}{n}}}$ where $Z$ represents the Standard Normal Distribution.
Example: California Community College Fee Waivers
The graph below shows enrollment at California Community Colleges and the percentage of students who are receiving Board of Governors Fee Waivers (BOGFW) to help financially.70
This graph shows that 45% of all community college students in California receive fee waivers. Suppose you randomly sample 1000 community college students to determine the proportion of students with fee waivers in the sample.
$p$ = 0.45 (the proportion of all community college students with fee waivers)
$n$ = 1000 ( the sample size)
$np = (1000)(0.45) = 450$ and $n(1-p) = (1000)(1-0.45) = 550$.
Since both these values are over 10, the conditions for normality are met.
$\hat{p}$ = the proportion of sampled community college students with fee waivers, a random variable
$\mu_{\hat{p}}=0.45$
$\sigma_{\hat{p}}=\sqrt{\dfrac{0.45(1-0.45)}{1000}}=0.0157$
483 of the sampled students are receiving fee waivers.
Determine $\hat{p}$. Is the result unusual?
Solution
$\hat{p}=\frac{483}{1000}=0.483$
$Z=\frac{0.483-0.45}{0.0157}=2.10$
$P(Z>2.10)=0.0179$
The sample proportion of 0.483 is unusually high, since the $Z$ value is more than 2. The probability of getting a sample proportion of 0.483 or larger is only 0.0179.
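A minimal sketch of the fee waiver calculation, assuming SciPy is available; the variable names are ours.

```python
from math import sqrt
from scipy.stats import norm

p, n = 0.45, 1000
se = sqrt(p * (1 - p) / n)     # standard deviation of p-hat -> 0.0157
z = (0.483 - p) / se           # -> about 2.10
print(se, z, 1 - norm.cdf(z))  # tail probability -> about 0.018
```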
The reason we conduct statistical research is to obtain an understanding about phenomena in a population. For example, we may want to know if a potential drug is effective in treating a disease. Since it is not feasible or ethical to distribute an experimental drug to the entire population, we instead must study a small subset of the population called a sample. We then analyze the sample and make an inference about the population based on the sample. Using probability theory and the Central Limit Theorem, we can then measure the reliability of the inference.
Example: Home value appraisal
Lupe is trying to sell her house and needs to determine the market value of the home. The population in this example would be all the homes that are similar to hers in the neighborhood.
Lupe’s realtor chooses for the sample nine recent homes in this neighborhood that sold in the last six months. The realtor then adjusts some of the sales prices to account for differences among Lupe’s home and the sold homes.
Next the realtor takes the mean of the adjusted sample and recommends to Lupe a market value for Lupe’s home of \$450,000. The realtor has made an inference about the mean value of the population.
To measure the reliability of the inference, the realtor should look at factors such as: the small sample size, changes in values of homes over the last six months, or the fact that Lupe’s home is not exactly like the sampled homes.
9.02: Point Estimation
The example in 9.1 is an example of Estimation, a branch of Inferential Statistics in which sample statistics are used to estimate the values of a population parameter. Lupe’s realtor was trying to estimate the population mean ($\mu$) based on the sample mean ($\overline{X}$).
9.03: Confidence Intervals
Using probability and the Central Limit Theorem, we can design an Interval Estimate called a Confidence Interval which has a known probability (Level of Confidence) of capturing the true population parameter.
Confidence Interval for Population Mean
To find a confidence interval for the population mean ($\mu$) when the population standard deviation ($\sigma$) is known, and $n$ is sufficiently large, we can use the Standard Normal Distribution to calculate the critical value $Z_c$ for the Level of Confidence:
$\overline{X} \pm Z_{c} \dfrac{\sigma}{\sqrt{n}} \nonumber$
Example: Students working
The Dean wants to estimate the mean number of hours that students worked per week. A sample of 49 students showed a mean of 24 hours with a standard deviation of 4 hours. The point estimate is 24 hours (sample mean). What is the 95% confidence interval for the average number of hours worked per week by the students?
Solution
$24 \pm \dfrac{1.96 \cdot 4}{\sqrt{49}}=24 \pm 1.12=(22.88,25.12) \text{ hours per week} \nonumber$
The margin of error for the confidence interval is 1.12 hours. We can say with 95% confidence that the mean number of hours worked by students is between 22.88 and 25.12 hours per week.
If the level of confidence is increased, then the margin of error will also increase. For example, if we increase the level of confidence to 99% for the above example, then:
$24 \pm \dfrac{2.578 \cdot 4}{\sqrt{49}}=24 \pm 1.47=(22.53,25.47) \text{ hours per week} \nonumber$
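A minimal sketch of this confidence interval calculation, assuming SciPy is available. SciPy's 99% critical value is 2.5758, so the resulting intervals match the ones above up to rounding.

```python
from math import sqrt
from scipy.stats import norm

xbar, sigma, n = 24, 4, 49
for level in (0.95, 0.99):
    z = norm.ppf(1 - (1 - level) / 2)  # critical values: 1.960, 2.576
    moe = z * sigma / sqrt(n)          # margin of error
    print(level, xbar - moe, xbar + moe)
```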
Some important points about Confidence Intervals
• The confidence interval is constructed from random variables calculated from sample data and attempts to predict an unknown but fixed population parameter with a certain level of confidence.
• Increasing the level of confidence will always increase the margin of error.
• It is impossible to construct a 100% Confidence Interval without taking a census of the entire population.
• Think of the population mean as a dart that always goes to the same spot, and the confidence interval as a moving target that tries to “catch the dart.” A 95% confidence interval would be like a target that has a 95% chance of catching the dart.
Confidence Interval for Population Mean using Sample Standard Deviation – Student’s t Distribution
The formula for the confidence interval for the mean requires the knowledge of the population standard deviation ($\sigma$). In most real‐life problems, we do not know this value for the same reasons that we do not know the population mean. This problem was solved by the Irish statistician William Sealy Gosset, an employee at Guinness Brewing. Gosset, however, was prohibited by Guinness from using his own name in publishing scientific papers. He published under the name “A Student”, and therefore the distribution he discovered was named "Student's $t$‐distribution"71.
Characteristics of Student’s t Distribution
• It is continuous, bell‐shaped, and symmetrical about zero like the $z$ distribution.
• There is a family of $t$‐distributions sharing a mean of zero but having different standard deviations based on degrees of freedom.
• The $t$‐distribution is more spread out and flatter at the center than the $Z$‐distribution, but approaches the $Z$‐distribution as the sample size gets larger.
Confidence Interval for $\mu$
$\overline{X} \pm t_{c} \dfrac{s}{\sqrt{n}} \text{ with degrees of freedom} = n - 1 \nonumber$
Example: Rating health care plans
Last year Sally belonged to a Health Maintenance Organization (HMO) health care plan that had a population average rating of 62 (on a scale from 0‐100, with ‘100’ being best); this was based on records accumulated about the HMO over a long period of time. This year Sally switched to a new HMO. To assess the population mean rating of the new HMO, 20 members of this HMO are polled and they give the HMO an average rating of 65 with a standard deviation of 10. Find and interpret a 95% confidence interval for the population average rating of the new HMO.
Solution
The $t$ distribution will have 20‐1 =19 degrees of freedom. Using a table or technology, the critical value for the 95% confidence interval will be $t_c=2.093$
$65 \pm \dfrac{2.093 \cdot 10}{\sqrt{20}}=65 \pm 4.68=(60.32,69.68) \text{ HMO rating} \nonumber$
With 95% confidence we can say that the rating of Sally’s new HMO is between 60.32 and 69.68. Since the quantity 62 is in the confidence interval, we cannot say with 95% certainty that the new HMO is either better or worse than the previous HMO.
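A minimal sketch of the t-based interval, assuming SciPy is available; `t.ppf` supplies the critical value that would otherwise be read from a table.

```python
from math import sqrt
from scipy.stats import t

xbar, s, n = 65, 10, 20
tc = t.ppf(0.975, df=n - 1)     # critical value -> 2.093
moe = tc * s / sqrt(n)          # margin of error -> about 4.68
print(xbar - moe, xbar + moe)   # -> about (60.32, 69.68)
```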
Confidence Interval for Population Proportion
Recall from the section on random variables the binomial distribution where $p$ represented the proportion of successes in the population. The binomial model was analogous to coin‐flipping, or yes/no question polling. In practice, we want to use sample statistics to estimate the population proportion ($p$).
The sample proportion ($\hat{p}$) is the proportion of successes in the sample of size $n$ and is the point estimator for $p$. Under the Central Limit Theorem, if $n p>10$ and $n(1-p)>10$, the distribution of the sample proportion $\hat{p}$ will have an approximately Normal Distribution.
Normal Distribution for $\hat{p}$ if Central Limit Theorem conditions are met.
$\mu_{\hat{p}}=p \qquad \qquad \sigma_{\hat{p}}=\sqrt{\dfrac{p(1-p)}{n}} \nonumber$
Using this information we can construct a confidence interval for $p$, the population proportion:
Confidence interval for $p$
$\hat{p} \pm Z \sqrt{\dfrac{p(1-p)}{n}} \approx \hat{p} \pm Z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \nonumber$
Example: Talking and driving
200 California drivers were randomly sampled and it was discovered that 25 of these drivers were illegally talking on their cell phones without the use of a hands‐free device. Find the point estimator for the proportion of drivers who are using their cell phones illegally and construct a 99% confidence interval.
Solution
The point estimator for $p$ is $\hat{p}=\dfrac{25}{200}=.125$ or 12.5%.
A 99% confidence interval for $p$ is: $0.125 \pm 2.576 \sqrt{\dfrac{.125(1-.125)}{200}}=.125 \pm .060$
The margin of error for this poll is 6% and we can say with 99% confidence that the true percentage of drivers who are using their cell phones illegally is between 6.5% and 18.5%
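A minimal sketch of this proportion interval, assuming SciPy is available; the variable names are ours.

```python
from math import sqrt
from scipy.stats import norm

phat, n = 25 / 200, 200
z = norm.ppf(0.995)                     # 99% confidence -> about 2.576
moe = z * sqrt(phat * (1 - phat) / n)   # margin of error -> about 0.060
print(phat - moe, phat + moe)           # -> about (0.065, 0.185)
```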
Point Estimator for Population Standard Deviation
We often want to study the variability, volatility or consistency of a population. For example, two investments both have expected earnings of 6% per year, but one investment is much riskier, with higher ups and downs. To estimate variation or volatility of a data set, we will use the sample standard deviation $s$ as a point estimator of the population standard deviation ($\sigma$).
Example
Investments A and B are both known to have a rate of return of 6% per year. Over the last 24 months, Investment A has a sample standard deviation of 3% per month, while Investment B has a sample standard deviation of 5% per month. We would say that Investment B is more volatile and riskier than Investment A due to the higher estimate of the standard deviation.
To create a confidence interval for an estimate of standard deviation, we need to introduce a new distribution, called the Chi‐square ($\chi^{2}$) distribution.
The Chi‐square $\chi^{2}$ Distribution
The Chi‐square distribution is a family of distributions related to the Normal Distribution, since it represents a sum of independent squared standard Normal Random Variables. Like the Student’s t distribution, the degrees of freedom will be $n - 1$ and will determine the shape of the distribution. Also, since the Chi‐square represents squared data, the inference will be about the variance rather than about the standard deviation.
Characteristics of Chi‐square $\chi^{2}$ Distribution
• It is positively skewed
• It is non‐negative
• It is based on degrees of freedom ($n - 1$)
• When the degrees of freedom change, a new distribution is created
• $\dfrac{(n-1) s^{2}}{\sigma^{2}}$ will have a Chi‐square distribution
Confidence Interval for Population Variance and Standard Deviation
Since the Chi‐square represents squared data, we can construct confidence intervals for the population variance ($\sigma^{2}$), and take the square root of the endpoints to get a confidence interval for the population standard deviation. Due to the skewness of the Chi‐square distribution the resulting confidence interval will not be centered at the point estimator, so the margin of error form used in the prior confidence intervals doesn’t make sense here.
Confidence Interval for population variance ($\sigma^{2}$)
• Confidence is NOT symmetric since chi‐square distribution is not symmetric.
• Take square root of both endpoints to get confidence interval for the population standard deviation ($\sigma$).
$\left(\dfrac{(n-1) s^{2}}{\chi_{R}^{2}}, \dfrac{(n-1) s^{2}}{\chi_{L}^{2}}\right) \nonumber$
Example: Performance risk in finance
In performance measurement of investments, standard deviation is a measure of volatility or risk. Twenty monthly returns from a mutual fund show an average monthly return of 1 percent and a sample standard deviation of 5 percent. Find a 95% confidence interval for the monthly standard deviation of the mutual fund.
Solution
The Chi‐square distribution will have 20‐1 =19 degrees of freedom. Using technology, we find that the two critical values are $\chi_{L}^{2}=8.90655$ and $\chi_{R}^{2}=32.8523$
Formula for confidence interval for $\sigma$ is: $\left(\sqrt{\dfrac{(19) 5^{2}}{32.8523}}, \sqrt{\dfrac{(19) 5^{2}}{8.90655}}\right)=(3.8,7.3)$
One can say with 95% confidence that the standard deviation for this mutual fund is between 3.8 and 7.3 percent per month.
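This interval can also be reproduced with a short sketch; we assume scipy is available, and the variable names are ours:

```python
# Minimal sketch: 95% confidence interval for a standard deviation (chi-square based)
from math import sqrt
from scipy import stats

n, s = 20, 5                          # sample size, sample standard deviation (percent)
df = n - 1
chi_L = stats.chi2.ppf(0.025, df)     # left critical value, about 8.9065
chi_R = stats.chi2.ppf(0.975, df)     # right critical value, about 32.852
lower = sqrt(df * s**2 / chi_R)       # about 3.8 percent per month
upper = sqrt(df * s**2 / chi_L)       # about 7.3 percent per month
print(lower, upper)
```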
In the prior section we used statistical inference to make an estimate of a population parameter and to measure the reliability of the estimate through a confidence interval. In this section, we will explore in detail the use of statistical inference in testing a claim about a population parameter; this inference is the heart of the scientific method used in research.
10: One Population Hypothesis Testing
The actual conducting of a hypothesis test is only a small part of the scientific method. After a general question is formulated, the scientific method consists of: designing an experiment, collecting data through observation and experimentation, testing hypotheses, and reporting overall conclusions. The conclusions themselves lead to other research ideas, making this process a continuous flow of adding to the body of knowledge about the phenomena being studied.
Others may choose a more formalized and detailed set of procedures, but the general concepts of inspiration, design, experimentation, and conclusion allow one to see the whole process.
10.02: Formulate General Research Questions
Most general questions start with an inspiration or an idea about a topic or phenomenon of interest. Some examples of general questions:
• (Health Care) Would a public single payer health care system be more effective than the current private insurance system?
• (Labor) What is the effect of undocumented immigration and outsourcing of jobs on the current unemployment rate?
• (Economy) Is the federal economic stimulus package effective in lessening the impact of the recession?
• (Education) Are colleges too expensive for students today?
It is important to not be so specific in choosing these general questions. On the basis of available or potentially available data, we can decide later what specific research hypotheses will be formulated and tested to address the general question. During the data collection and testing process other ideas may come up and we may choose to redefine the general question. However, we always want to have an overriding purpose for our research.
10.03: Design Research Hypotheses and Experiment
After developing a general question and having some sense of the data that is available or that is collected, we then design an experiment and a set of hypotheses.
Hypotheses and Hypothesis Testing
For purposes of testing, we need to design hypotheses that are statements about population parameters. Some examples of hypotheses:
• At least 20% of juvenile offenders are caught and sentenced to prison.
• The mean monthly income for college graduates is over \$5000.
• The mean standardized test score for schools in Cupertino is the same as the mean scores for schools in Los Altos.
• The lung cancer rates in California are lower than the rates in Texas.
• The standard deviation of the New York Stock Exchange today is greater than 10 percentage points per year.
These same hypotheses could be written in symbolic notation:
• $p \geq 0.20$
• $\mu>5000$
• $\mu_{1}=\mu_{2}$
• $p_{1}<p_{2}$
• $\sigma>10$
Hypothesis Testing is a procedure, based on sample evidence and probability theory, used to determine whether the hypothesis is a reasonable statement and should not be rejected, or is unreasonable and should be rejected. The hypothesis that is tested is called the Null Hypothesis and is designated by the symbol $H_o$. If the Null Hypothesis is unreasonable and needs to be rejected, then the research supports an Alternative Hypothesis, designated by the symbol $H_a$.
Definition: Null Hypothesis ($H_o$)
A statement about the value of a population parameter that is assumed to be true for the purpose of testing.
Definition: Alternative Hypothesis ($H_a$)
A statement about the value of a population parameter that is assumed to be true if the Null Hypothesis is rejected during testing.
From these definitions it is clear that the Alternative Hypothesis will necessarily contradict the Null Hypothesis; both cannot be true at the same time. Some other important points about hypotheses:
• Hypotheses must be statements about population parameters, never about sample statistics.
• In most hypotheses tests, equality ($=, \leq, \geq$) will be associated with the Null Hypothesis while non‐equality ($\neq,<,>$) will be associated with the Alternative Hypothesis.
• It is the Null Hypothesis that is always tested in an attempt to “disprove” it and support the Alternative Hypothesis. This process is analogous in concept to a “proof by contradiction” in Mathematics or Logic, but supporting a hypothesis with a level of confidence is not the same as an absolute mathematical proof.
Examples of Null and Alternative Hypotheses:
• $H_{o}: p \leq 0.20 \qquad H_{a}: p>0.20$
• $H_{o}: \mu \leq 5000 \qquad H_{a}: \mu>5000$
• $H_{o}: \mu_{1}=\mu_{2} \qquad H_{a}: \mu_{1} \neq \mu_{2}$
• $H_{o}: p_{1} \geq p_{2} \qquad H_{a}: p_{1}<p_{2}$
• $H_{o}: \sigma \leq 10 \qquad H_{a}: \sigma>10$
Statistical Model and Test Statistic
To test a hypothesis we need to use a statistical model that describes the behavior for data and the type of population parameter being tested. Because of the Central Limit Theorem, many statistical models are from the Normal Family, most importantly the $Z, t, \chi^{2}$, and $F$ distributions. Other models that are used when the Central Limit Theorem is not appropriate are called non‐parametric Models and will not be discussed here.
Each chosen model has requirements of the data called model assumptions that should be checked for appropriateness. For example, many models require that the sample mean have approximately a Normal Distribution, something that may not be true for some smaller or heavily skewed data sets.
Once the model is chosen, we can then determine a test statistic, a value derived from the data that is used to decide whether to reject or fail to reject the Null Hypothesis.
Examples of Statistical Models and Test Statistics
• Mean vs. Hypothesized Value: $t=\dfrac{\overline{X}-\mu_{o}}{s / \sqrt{n}}$
• Proportion vs. Hypothesized Value: $Z=\dfrac{\hat{p}-p_{o}}{\sqrt{\frac{p_{o}\left(1-p_{o}\right)}{n}}}$
• Variance vs. Hypothesized Value: $\chi^{2}=\dfrac{(n-1) s^{2}}{\sigma^{2}}$
Errors in Decision Making
Whenever we make a decision or support a position, there is always a chance we make the wrong choice. The hypothesis testing process requires us to either to reject the Null Hypothesis and support the Alternative Hypothesis or fail to reject the Null Hypothesis. This creates the possibility of two types of error:
• Type I Error Rejecting the null hypothesis when it is actually true.
• Type II Error Failing to reject the null hypothesis when it is actually false.
In designing hypothesis tests, we need to carefully consider the probability of making either one of these errors.
Example: Pharmaceutical research
Recall the two news stories discussed earlier. In the first story, a drug company marketed a suppository that was later found to be ineffective (and often dangerous) in treatment. Before marketing the drug, the company determined that the drug was effective in treatment, meaning that the company rejected a Null Hypothesis that the suppository had no effect on the disease. This is an example of Type I error.
In the second story, research was abandoned when the testing showed Interferon was ineffective in treating a lung disease. The company in this case failed to reject a Null Hypothesis that the drug was ineffective. What if the drug really was effective? Did the company make Type II error? Possibly, but since the drug was never marketed, we have no way of knowing the truth.
These stories highlight the problem of statistical research: errors can be analyzed using probability models, but there is often no way of identifying specific errors. For example, there are unknown innocent people in prison right now because a jury made Type I error in wrongfully convicting defendants. We must be open to the possibility of modification or rejection of currently accepted theories when new data is discovered.
In designing an experiment, we set a maximum probability of making Type I error. This probability is called the level of significance or significance level of the test and is designated by the Greek letter $\alpha$, read as alpha. The analysis of Type II error is more problematic since there are many possible values that would satisfy the Alternative Hypothesis. For a specific value of the Alternative Hypothesis, the design probability of making Type II error is called Beta ($\beta$) which will be analyzed in detail later in this section.
Critical Value and Rejection Region
Once the significance level of the test is chosen, it is then possible to find the region(s) of the probability distribution function of the test statistic that would allow the Null Hypothesis to be rejected. This is called the Rejection Region, and the boundary between the Rejection Region and the “Fail to Reject” is called the Critical Value.
There can be more than one critical value and rejection region. What matters is that the total area of the rejection region equals the significance level $\alpha$.
One and Two tailed Tests
A test is one‐tailed when the Alternative Hypothesis, $H_{a}$, states a direction, such as:
$H_{o}$: The mean income of females is less than or equal to the mean income of males.
$H_{a}$: The mean income of females is greater than that of males.
Since equality is usually part of the Null Hypothesis, it is the Alternative Hypothesis which determines which tail to test.
A test is two‐tailed when no direction is specified in the Alternative Hypothesis $H_a$, such as:
$H_{o}$: The mean income of females is equal to the mean income of males.
$H_{a}$: The mean income of females is not equal to the mean income of the males.
In a two‐tailed test, the significance level is split into two parts since there are two rejection regions. In hypothesis testing in which the statistical model is symmetric (e.g., the Standard Normal $Z$ or Student’s $t$ distribution), these two regions would be equal. There is a relationship between a confidence interval and a two‐tailed test: if the level of confidence for a confidence interval is equal to $1-\alpha$, where $\alpha$ is the significance level of the two‐tailed test, the critical values would be the same.
Here are some examples for testing the mean $\mu$ against a hypothesized value $\mu_{0}$:
Note
$H_{a}: \mu>\mu_{0}$ means test the upper tail and is also called a right‐tailed test.
$H_{a}: \mu<\mu_{0}$ means test the lower tail and is also called a left‐tailed test.
$H_{a}: \mu \neq \mu_{0}$ means test both tails.
Deciding when to conduct a one or two‐tailed test is often controversial and many authorities even go so far as to say that only two‐tailed tests should be conducted. Ultimately, the decision depends on the wording of the problem. If we want to show that a new diet reduces weight, we would conduct a lower tailed test, since we don’t care if the diet causes weight gain. If instead, we wanted to determine if mean crime rate in California was different from the mean crime rate in the United States, we would run a two‐tailed test, since different implies greater than or less than.
After designing the experiment, we would then collect and verify the data. For the purposes of statistical analysis, we will assume that all sampling is either random or uses an alternative technique that adequately simulates a random sample.
Data Verification
After collecting the data but before running the test, we need to verify the data. First, get a picture of the data by making a graph (histogram, dot plot, box plot, etc). Check for skewness, shape and any potential outliers in the data.
Working with Outliers
An outlier is a data point that is far removed from the other entries in the data set. Outliers could be caused by:
• Mistakes made in recording data
• Data that don’t belong in population
• True rare events
The first two cases are simple to deal with since we can correct errors or remove data that does not belong in the population. The third case is more problematic as extreme outliers will increase the standard deviation dramatically and heavily skew the data.
In The Black Swan, Nicholas Taleb argues that some populations with extreme outliers should not be analyzed with traditional confidence intervals and hypothesis testing.72 He defines a Black Swan to be an unpredictable extreme outlier that causes dramatic effects on the population. A recent example of a Black Swan was the catastrophic drop in the value of unregulated Credit Default Swap (CDS) real estate insurance investments causing the near collapse of the international banking system in 2008. The traditional statistical analysis that measured the risk of the CDS investments did not take into account the consequence of a rapid increase in the number of foreclosures of homes. In this case, statistics that measure investment performance and risk were useless and created a false sense of security for large banks and insurance companies.
Example: Realtor home sales
Here are the quarterly home sales for 10 realtors
2 2 3 4 5 5 6 6 7 50
$\begin{array}{|l|c|c|} \hline & \text { With outlier } & \text { Without outlier } \\ \hline \text { Mean } & 9.00 & 4.44 \\ \hline \text { Median } & 5.00 & 5.00 \\ \hline \text { Standard Deviation } & 14.51 & 1.81 \\ \hline \text { Interquartile Range } & 3.00 & 3.50 \\ \hline \end{array}$
In this example, the number 50 is an outlier. When calculating summary statistics, we can see that the mean and standard deviation are dramatically affected by the outlier, while the median and the interquartile range (which are based on the ranking of the data) are hardly changed. One solution when dealing with a population with extreme outliers is to use inferential statistics using the ranks of the data, also called non‐parametric statistics.
Using Box Plot to find outliers
• The “box” is the region between the 1st and 3rd quartiles.
• Possible outliers are more than 1.5 IQR’s from the box (inner fence)
• Probable outliers are more than 3 IQR’s from the box (outer fence)
• In the box plot below, which illustrates the realtor example, the dotted lines represent the “fences” that are 1.5 and 3 IQR’s from the box. See how the data point 50 is well outside the outer fence and therefore an almost certain outlier.
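The fence calculation can be sketched in a few lines of Python (numpy assumed; note that quartile conventions differ between software packages, so the fence values may vary slightly from those reported above):

```python
# Minimal sketch: flagging outliers with 1.5 IQR (inner) and 3 IQR (outer) fences
import numpy as np

sales = np.array([2, 2, 3, 4, 5, 5, 6, 6, 7, 50])
q1, q3 = np.percentile(sales, [25, 75])
iqr = q3 - q1
inner = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)   # possible outliers lie outside this range
outer = (q1 - 3.0 * iqr, q3 + 3.0 * iqr)   # probable outliers lie outside this range
print(sales[(sales < outer[0]) | (sales > outer[1])])   # flags 50 as a probable outlier
```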
The Logic of Hypothesis Testing
After the data is verified, we want to conduct the hypothesis test and come up with a decision: whether or not to reject the Null Hypothesis. The decision process is similar to a “proof by contradiction” used in mathematics:
• We assume $H_o$ is true before observing data and design $H_a$ to be the complement of $H_o$.
• Observe the data (evidence). How unusual are these data under $H_{o}$?
• If the data are too unusual, we have “proven” $H_{o}$ is false: reject $H_{o}$ and support $H_{a}$ (strong statement).
• If the data are not too unusual, we fail to reject $H_o$. This “proves” nothing and we say the data are inconclusive. (weak statement)
• We can never “prove” $H_{o}$, only “disprove” it.
• “Prove” in statistics means support with ($1-\alpha$)100% certainty. (Example: if $\alpha =.05$, then we are at least 95% confident in our decision to reject $H_o$.)
Decision Rule – Two methods, Same Decision
Earlier we introduced the idea of a test statistic, which is a value calculated from the data under the appropriate Statistical Model that can be compared to the critical value of the hypothesis test. If the test statistic falls in the rejection region of the statistical model, we reject the Null Hypothesis.
Recall that the critical value was determined by design on the basis of the chosen level of significance $\alpha$. The more preferred method of making decisions is to calculate the probability of getting a result as extreme as the value of the test statistic. This probability is called the $p$‐value, and can be compared directly to the significance level.
Definition: $p$-value
$p$‐value: the probability, assuming that the null hypothesis is true, of getting a value of the test statistic at least as extreme as the computed value for the test.
• If the $p$‐value is smaller than the significance level $\alpha$, $H_o$ is rejected.
• If the $p$‐value is larger than the significance level $\alpha$, $H_o$ is not rejected.
Comparing $p$‐value to $\alpha$
Both the $p$‐value and $\alpha$ are probabilities of getting results as extreme as the data assuming $H_o$ is true.
The $p$‐value is determined by the data and is related to the actual probability of making Type I error (rejecting a true Null Hypothesis). The smaller the $p$‐value, the smaller the chance of making Type I error and therefore, the more likely we are to reject the Null Hypothesis.
The significance level $\alpha$ is determined by the design and is the maximum probability we are willing to accept of rejecting a true $H_o$.
Two Decision Rules lead to the same decision.
1. If the test statistic lies in the rejection region, reject $H_o$. (critical value method)
2. If the $p$‐value < $\alpha$, reject $H_o$. ($p$‐value method)
This $p$‐value method of comparison is preferred to the critical value method because the rule is the same for all statistical models: Reject $H_o$ if $p$‐value < $\alpha$.
Let’s see why these two rules are equivalent by analyzing a test of mean vs. hypothesized value.
Decision is Reject $H_o$
• $H_o: \mu=10 \qquad H_a: \mu > 10$
• Design: Critical value is determined by significance level $\alpha$.
• Data Analysis: p‐value is determined by test statistic
• Test statistic falls in rejection region.
• $p$‐value (blue) < $\alpha$ (purple)
• Reject $H_o$.
• Strong statement: Data supports the Alternative Hypothesis.
In this example, the test statistic lies in the rejection region (the area to the right of the critical value). The $p$‐value (the area to the right of the test statistic) is less than the significance level (the area to the right of the critical value). The decision is to Reject $H_o$.
Decision is Fail to Reject $H_o$
• $H_o: \mu=10 \qquad H_a: \mu > 10$
• Design: critical value is determined by significance level $\alpha$.
• Data Analysis: $p$‐value is determined by test statistic
• Test statistic does not fall in the rejection region.
• $p$‐value (blue) > $\alpha$(purple)
• Fail to Reject $H_o$.
• Weak statement: Data is inconclusive and does not support the Alternative Hypothesis.
In this example, the Test Statistic does not lie in the Rejection Region. The $p$‐value (the area to the right of the test statistic) is greater than the significance level (the area to the right of the critical value). The decision is Fail to Reject $H_o$.
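A small numeric check, using hypothetical test statistic values of our choosing, illustrates that the two decision rules always agree for a right‐tailed $Z$ test:

```python
# Minimal sketch: critical value method and p-value method give the same decision
from scipy import stats

alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha)            # critical value, about 1.645
for z_stat in (2.10, 1.20):                   # one inside, one outside the rejection region
    p_value = stats.norm.sf(z_stat)           # right-tail area beyond the test statistic
    print(z_stat > z_crit, p_value < alpha)   # the two booleans always match
```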
The hypothesis test has been conducted and we have reached a decision. We must now communicate these conclusions so they are complete, accurate, and understood by the targeted audience. How a conclusion is written is open to subjective analysis, but here are a few suggestions.
Be consistent with the results of the Hypothesis Test
Rejecting $H_o$ requires a strong statement in support of $H_a$, while failing to reject $H_o$ does NOT support $H_o$, but requires a weak statement of insufficient evidence to support $H_a$.
Example
A researcher wants to support the claim that, on average, students send more than 1000 text messages per month, and the research hypotheses are $H_o: \mu=1000$ vs. $H_a: \mu>1000$
Conclusion if $H_o$ is rejected: The mean number of text messages sent by students exceeds 1000.
Conclusion if $H_o$ is not rejected: There is insufficient evidence to support the claim that the mean number of text messages sent by students exceeds 1000.
Use language that is clearly understood in the context of the problem
Do not use technical language or jargon, but instead refer back to the language of the original general question or research hypotheses. Saying less is better than saying more.
Example
A test supported the Alternative Hypothesis that housing prices and size of homes in square feet were positively correlated. Compare these two conclusions and decide which is clearer:
Solution
Conclusion 1: By rejecting the Null Hypothesis we are inferring that the Alterative Hypothesis is supported and that there exists a significant correlation between the independent and dependent variables in the original problem comparing home prices to square footage.
Conclusion 2: Homes with more square footage generally have higher prices.
Limit the inference to the population that was sampled
Care must be taken to describe the population being sampled and understand that the any claim is limited to this sampled population. If a survey was taken of a subgroup of a population, then the inference applies only to the subgroup.
For example, studies by pharmaceutical companies will only test adult patients, making it difficult to determine effective dosage and side effects for children. “In the absence of data, doctors use their medical judgment to decide on a particular drug and dose for children. ‘Some doctors stay away from drugs, which could deny needed treatment,’ Blumer says. ‘Generally, we take our best guess based on what's been done before.’ The antibiotic chloramphenicol was widely used in adults to treat infections resistant to penicillin. But many newborn babies died after receiving the drug because their immature livers couldn't break down the antibiotic.”73 We can see in this example that applying inference of the drug testing results on adults to the un‐sampled children led to tragic results.
Report sampling methods that could question the integrity of the random sample assumption
In practice it is nearly impossible to choose a random sample, and scientific sampling techniques that attempt to simulate a random sample need to be checked for bias caused by under‐sampling.
Telephone polling was found to under‐sample young people during the 2008 presidential campaign because of the increase in cell phone only households. Since young people were more likely to favor Obama, this caused bias in the polling numbers. Additionally, caller ID has dramatically reduced the percentage of successful connections to people being surveyed. The pollster Jay Leve of SurveyUSA said telephone polling was “doomed” and said his company was already developing new methods for polling.74
Sampling that didn’t occur over the weekend may exclude many full time workers while self‐selected and unverified polls (such as ratemyprofessors.com) could contain immeasurable bias.
Conclusions should address the potential or necessity of further research, sending the process back to the first procedure
Answers often lead to new questions. If changes are recommended in a researcher’s conclusion, then further research is usually needed to analyze the impact and effectiveness of the implemented changes. There may have been limitations in the original research project (such as funding resources, sampling techniques, unavailability of data) that warrant more comprehensive studies.
For example, a math department modifies its curriculum based on the improved student success rates of an experimental course. The department would want to do further study of student outcomes to assess the effectiveness of the new program.
Example – Soy sauce production
A food company has a policy that the stated contents of a product match the actual results. A General Question might be “Does the stated net weight of a food product match the actual weight?” The quality control statistician decides to test the 16 ounce bottle of Soy Sauce and must now design the experiment.
The quality‐control statistician has been given the authority to sample 36 bottles of soy sauce and knows from past testing that the population standard deviation is 0.5 ounces. The model will be a test of population mean vs. hypothesized value of 16 oz. A two‐tailed test is selected since the company is concerned about both overfilling and underfilling the bottles as the stated policy is that the stated weight should match the actual weight of the product.
Research Hypotheses:
$H_o: \mu =16$ (The filling machine is operating properly)
$H_a: \mu \neq 16$ (The filling machine is not operating properly)
Since the population standard deviation is known the test statistic will be $Z=\dfrac{\overline{X}-\mu}{\sigma / \sqrt{n}}$. This model is appropriate since the sample size assures that the distribution of the sample mean is approximately Normal due to the Central Limit Theorem.
Type I error would be to reject the Null Hypothesis and say that the machine is not running properly when in fact it was operating properly. Since the company does not want to needlessly stop production and recalibrate the machine, the statistician chooses to limit the probability of Type I error by setting the level of significance ($\alpha$) to 5%.
The statistician now conducts the experiment and samples 36 bottles over one hour and determines from a box plot of the data that there is one unusual observation of 17.56 ounces. The value is rechecked and kept in the data set.
Next, the sample mean and the test statistic are calculated.
$\overline{X}=16.12 \text { ounces } \qquad \qquad Z=\dfrac{16.12-16}{0.5 / \sqrt{36}}=1.44 \nonumber$
The decision rule under the critical value method would be to reject the Null Hypothesis when the value of the test statistic is in the rejection region. In other words, reject $H_o$ when $Z >1.96$ or $Z<‐1.96$.
Based on this result, the decision is fail to reject $H_o$, since the test statistic does not fall in the rejection region.
Alternatively (and preferably) the statistician could use the p‐value method of decision rule. The $p$‐value for a two‐tailed test must include all values (positive and negative) more extreme than the Test Statistic, so in this example we find the probability that $Z < ‐1.44$ or $Z > 1.44$ (the area shaded blue).
Using a calculator, computer software or a Standard Normal table, the $p$‐value=0.1498. Since the $p$‐value is greater than $\alpha$ the decision again is fail to reject $H_o$.
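The test statistic and two‐tailed $p$‐value can be reproduced with a short sketch (scipy assumed; variable names are ours):

```python
# Minimal sketch: two-tailed Z test of a mean with known sigma (soy sauce example)
from math import sqrt
from scipy import stats

x_bar, mu_0, sigma, n = 16.12, 16, 0.5, 36
z = (x_bar - mu_0) / (sigma / sqrt(n))    # test statistic, 1.44
p_value = 2 * stats.norm.sf(abs(z))       # both tails, about 0.1498
print(z, p_value, p_value < 0.05)         # fail to reject Ho
```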
Finally the statistician must report the conclusions and make a recommendation to the company’s management:
“There is insufficient evidence to conclude that the machine that fills 16 ounce soy sauce bottles is operating improperly. This conclusion is based on 36 measurements taken during a single hour’s production run. I recommend continued monitoring of the machine during different employee shifts to account for the possibility of potential human error”.
The statistician makes the weak statement and is not stating that the machine is running properly, only that there is not enough evidence to state that the machine is running improperly. The statistician also reports concerns about the sampling of only one shift of employees (restricting the inference to the sampled population) and recommends repeating the experiment over several shifts.
In the prior example, the statistician failed to reject the Null Hypothesis because the probability of making Type I error (rejecting a true Null Hypothesis) exceeded the significance level of 5%. However, the statistician could have made Type II error if the machine is really operating improperly. One of the important and often overlooked tasks is to analyze the probability of making Type II error ($\beta$). Usually statisticians look at statistical power which is the complement of $\beta$.
Beta ($\beta$): The probability of failing to reject the null hypothesis when it is actually false.
Power (or Statistical Power): The probability of rejecting the null hypothesis when it is actually false.
Both beta and power are calculated for specific possible values of the Alternative Hypothesis.
$\begin{array}{|l|c|c|} \hline & \text { Fail to Reject } H_o & \text { Reject } H_o \\ \hline H_o \text { is true } & 1-\alpha & \alpha \text { (Type I error) } \\ \hline H_o \text { is false } & \beta \text { (Type II error) } & 1-\beta \text { (Power) } \\ \hline \end{array}$
If a hypothesis test has low power, then it would be difficult to reject $H_o$, even if $H_o$ were false; the research would be a waste of time and money. However, analyzing power is difficult in that there are many values of the population parameter that support $H_a$. For example, in the soy sauce bottling example, the Alternative Hypothesis was that the mean was not 16 ounces. This means the machine could be filling the bottles with a mean of 16.0001 ounces, making $H_a$ technically true. So when analyzing power and Type II error, we need to choose a value for the population mean under the Alternative Hypothesis ($\mu_a$) that is “practically different” from the mean under the Null Hypothesis ($\mu_o$). This practical difference is called the effect size.
Definition: Effect size
Effect Size: The “practical difference” between $\mu_{o}$ and $\mu_{a}$, defined as $\left|\mu_{o}-\mu_{a}\right|$
where
$\mu_{o}$: The value of the population mean under the Null Hypothesis
$\mu_{a}$: The value of the population mean under the Alternative Hypothesis
Suppose we are conducting a one‐tailed test of the population mean:
$H_o: \mu=\mu_{0} \qquad H_a: \mu>\mu_{0} \nonumber$
Consider the two graphs shown below. The top graph is the distribution of the sample mean under the Null Hypothesis, which was covered in an earlier section. The area to the right of the critical value is the rejection region.
We now add the bottom graph, which represents the distribution of the sample mean under the Alternative Hypothesis for the specific value $\mu_{a}$.
We can now measure the Power of the test (the area in green) and beta (the area in purple) on the lower graph.
There are several methods of increasing Power, but they all have trade‐offs:
Ways to increase Power (each has a trade‐off):
• Increase the sample size. Trade‐off: increased cost or unavailability of data.
• Increase the significance level ($\alpha$). Trade‐off: more likely to Reject a true $H_o$ (Type I error).
• Choose a value of $\mu_{a}$ further from $\mu_{o}$. Trade‐off: the result may be less meaningful.
• Redefine the population to lower the standard deviation. Trade‐off: the result may be too limited to have value.
• Conduct a one‐tailed rather than a two‐tailed test. Trade‐off: may produce a biased result.
Example: Bus brake pads
Bus brake pads are claimed to last on average at least 60,000 miles and the company wants to test this claim. The bus company considers a “practical” value for purposes of bus safety to be that the pads last at least 58,000 miles. If the standard deviation is 5,000 and the sample size is 50, find the power of the test when the mean is really 58,000 miles. (Assume $\alpha = .05$)
Solution
First, find the critical value of the test.
Reject $H_o$ when $Z < ‐1.645$
Next, find the value of $\overline{X}$ that corresponds to the critical value.
$\overline{X}=\mu_{o}+\dfrac{Z \sigma}{\sqrt{n}}=60000-(1.645)(5000) / \sqrt{50}=58837 \nonumber$
$H_o$ is rejected when $\overline{X}<58837$
Finally, find the probability of rejecting $H_o$ if $H_a$ is true.
\begin{aligned} P(\overline{X}<58837) &=P\left(Z<\dfrac{\left(58837-\mu_{a}\right)}{\sigma / \sqrt{n}}\right) \\ &=P\left(Z<\dfrac{(58837-58000)}{5000 / \sqrt{50}}\right) \\ &=P(Z<1.18) \\ &=.8810 \end{aligned} \nonumber
Therefore, this test has 88% power and $\beta$ would be 12%
Power Calculation Values
Input Values
$\mu_{o}$ = 60,000 miles
$\mu_{a}$ = 58,000 miles
$\alpha$ = 0.05
$n$ = 50
$\sigma$ = 5000 miles
Calculated Values
Effect Size = 2000 miles
Critical Value = 58,837 miles
$\beta$ = 0.1190 or about 12%
Power = 0.8810 or about 88%
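The same power calculation can be sketched in Python (scipy assumed; small differences from the table above are rounding):

```python
# Minimal sketch: power of a lower-tailed Z test (brake pad example)
from math import sqrt
from scipy import stats

mu_0, mu_a, sigma, n, alpha = 60000, 58000, 5000, 50, 0.05
se = sigma / sqrt(n)
x_crit = mu_0 + stats.norm.ppf(alpha) * se    # about 58837; reject Ho below this
power = stats.norm.cdf((x_crit - mu_a) / se)  # P(reject Ho | mu = 58000), about 0.88
print(x_crit, power, 1 - power)               # beta is about 0.12
```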
The procedures outlined for the test of population mean vs. hypothesized value with known population standard deviation will apply to other models as well. All that really changes is the test statistic.
Examples of some other one population models:
• Test of population mean vs. hypothesized value, population standard deviation unknown.
• Test of population proportion vs. hypothesized value.
• Test of population standard deviation (or variance) vs. hypothesized value.
Test of population mean with unknown population standard deviation
The test statistic for the one sample case changes to a Student’s t distribution with degrees of freedom equal to $n-1: t=\dfrac{\overline{X}-\mu_{o}}{s / \sqrt{n}}$
The shape of the $t$ distribution is similar to the $Z$, except for the fact that the tails are fatter, so the logic of the decision rule is the same as for the $Z$ test statistic.
Example: Archaeology
Humerus bones from the same species have approximately the same length‐to‐width ratios. When fossils of humerus bones are discovered, archaeologists can determine the species by examining this ratio. It is known that Species A has a mean ratio of 9.6. A similar Species B has a mean ratio of 9.1 and is often confused with Species A. 21 humerus bones were unearthed in an area that was originally thought to be inhabited by Species A. (Assume all unearthed bones are from the same species.)
1. Design a test in which the alternative hypothesis would be the humerus bones were not from Species A.
2. Determine the power of this test if the bones actually came from Species B (assume a standard deviation of 0.7)
3. Conduct the test using at a 5% significance level and state overall conclusions.
Solution
1. Research Hypotheses
$H_o: \mu=9.6$ (The humerus bones are from Species A)
$H_a: \mu\neq9.6$ (The humerus bones are not from Species A)
Significance level: $\alpha$ =.05
Test Statistic (Model): $t$‐test of mean vs. hypothesized value, unknown standard deviation
Model Assumptions: we may need to check the data for extreme skewness as the distribution of the sample mean is assumed to be approximately the Normal Distribution.
2.
Information needed for Power Calculation:
• $\mu_{o}$ = 9.6 (Species A)
• $\mu_{a}$ = 9.1 (Species B)
• Effect Size = $\left|\mu_{o}-\mu_{a}\right|$ = 0.5
• $s$ = 0.7 (given)
• $\alpha$ = .05
• $n$ = 21 (sample size)
• Two‐tailed test
Results using the Online Power Calculator75:
• Power = .8755
• $\beta$ = 1 ‐ Power = .1245
• If the humerus bones are from Species B, the test has an 87.55% chance of correctly rejecting $H_o$ and a Type II error probability of 12.45%
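The online calculator’s result can be approximated with the noncentral $t$ distribution; this sketch assumes scipy, and small rounding differences are expected:

```python
# Minimal sketch: power of a two-tailed t test via the noncentral t distribution
from math import sqrt
from scipy import stats

mu_0, mu_a, s, n, alpha = 9.6, 9.1, 0.7, 21, 0.05
df = n - 1
t_crit = stats.t.ppf(1 - alpha / 2, df)            # two-tailed critical value
ncp = (mu_0 - mu_a) / (s / sqrt(n))                # noncentrality parameter, about 3.27
power = stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)
print(power, 1 - power)                            # about 0.875 power, beta about 0.125
```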
3.
From MegaStat76, $p$‐value = .0308 and $\alpha$ =.05.
Since $p$‐value < $\alpha$, $H_o$ is rejected and we support $H_a$.
Conclusion: The evidence supports the claim ($p$‐value < .05) that the humerus bones are not from Species A. The small sample size limited the power of the test, which prevented us from making a more definitive conclusion. Further testing is recommended to determine if bones are from Species B or other unknown species.
We are also assuming that since the bones were unearthed in the same location, they came from the same species.
Test of population proportion vs. hypothesized value
When our data is categorical and there are only two possible choices (for example a yes/no question on a poll), we may want to make a claim about a proportion or a percentage of the population ($p$) being compared to a particular value ($p_o$). We will then use the sample proportion ($\hat{p}$) to test the claim.
Test of proportion vs. hypothesized value
$p$ = population proportion
$p_o$ = population proportion under $H_o$
$\hat{p}$ = sample proportion
$p_a$ = population proportion under $H_a$
Test Statistic: $Z=\dfrac{\hat{p}-p_{o}}{\sqrt{\frac{p_{o}\left(1-p_{o}\right)}{n}}}$
Requirement for Normality Assumption: $n p(1-p)>5$
Example: Charity solicitation
In the past, 15% of the mail order solicitations for a certain charity resulted in a financial contribution. A new solicitation letter has been drafted and will be sent to a random sample of potential donors. A hypothesis test will be run to determine if the new letter is more effective. Determine the sample size so that (1) the test will be run at the 5% significance level and (2) if the letter has an 18% success rate (an effect size of 3%), the power of the test will be 95%. After determining the sample size, conduct the test.
Solution
$H_o: p \leq 0.15$ (The new letter is not more effective.)
$H_a: p > 0.15$ (The new letter is more effective.)
Test Statistic – $Z$‐test of proportion vs. hypothesized value
Information needed for Power Calculation:
• $p_{o}$ = 0.15 (current letter)
• $p_{a}$ = 0.18 (potential new letter)
• Effect Size = $\left|p_{a}-p_{o}\right|$ = 0.03
• Desired Power = 0.95
• $\alpha$ = .05
• One‐tailed test
Results using the Online Power Calculator75:
• Sample size = 1652
• The charity sent out 1652 new solicitation letters to potential donors and ran the test, receiving 286 positive responses.
• $p$‐value for test = 0.0042
Since $p$‐value < $\alpha$, reject $H_o$ and support $H_a$. Since the $p$‐value is actually less than 0.01, we would go further and say that the data supports rejecting $H_o$ for $\alpha = .01$.
Conclusion: The evidence supports the claim that the new letter is more effective. The 1652 test letters were selected as a random sample from the charity’s mailing list. All letters were sent at the same time period. The letters needed to be sent in a specific time period, so we were not able to control for seasonal or economic factors. We recommend testing both solicitation methods over the entire year to eliminate seasonal effects and to create a control group.
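The reported $p$‐value can be verified from the counts with a short sketch (scipy assumed):

```python
# Minimal sketch: one-tailed Z test of a proportion (solicitation example)
from math import sqrt
from scipy import stats

x, n, p0 = 286, 1652, 0.15
p_hat = x / n                                 # about 0.173
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)    # about 2.63
p_value = stats.norm.sf(z)                    # upper tail, about 0.0042
print(p_hat, z, p_value)
```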
Test of population standard deviation (or variance) vs. hypothesized value
We often want to make a claim about the variability, volatility or consistency of a population random variable. Hypothesized values for the population variance ($\sigma^{2}$) or the standard deviation ($\sigma$) are tested with the Chi‐square ($\chi^{2}$) distribution.
Examples of Hypotheses:
• $H_o: \sigma = 10$ $H_a: \sigma \neq 10$
• $H_o: \sigma^{2} = 100$ $H_a: \sigma^{2} > 100$
The sample variance $s^2$ is used in calculating the Chi‐square Test Statistic.
Test of variance vs. hypothesized value
$\sigma^{2}$ = population variance
$\sigma_{o}^{2}$ = population variance under $H_o$
$s^2$ = sample variance
Test Statistic: $\chi^{2}=\dfrac{(n-1) s^{2}}{\sigma_{o}^{2}}$
$n-1$ = degrees of freedom
Example: Standardized testing
A state school administrator claims that the standard deviation of test scores for 8th grade students who took a life‐science assessment test is less than 30, meaning the results for the class show consistency. An auditor wants to support that claim by analyzing 41 students’ recent test scores. The test will be run at 1% significance level.
$\begin{array}{|l|l|l|l|l|l|l|l|l|} \hline 57 & 75 & 86 & 92 & 101 & 108 & 110 & 120 & 155 \\ \hline 63 & 77 & 88 & 96 & 102 & 108 & 111 & 122 & \\ \hline 66 & 78 & 88 & 96 & 107 & 109 & 115 & 135 & \\ \hline 68 & 81 & 92 & 98 & 107 & 109 & 115 & 137 & \\ \hline 72 & 82 & 92 & 99 & 107 & 110 & 118 & 139 & \\ \hline \end{array}$
Solution
Design:
Research Hypotheses:
$H_o$: Standard deviation for test scores equals 30.
$H_a$: Standard deviation for test scores is less than 30.
Hypotheses in terms of the population variance:
$H_o: \sigma^{2} = 900$
$H_a: \sigma^{2} < 900$
Results:
Decision: Reject $H_o$
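The test statistic and $p$‐value behind this decision can be computed directly from the listed scores; this sketch assumes numpy and scipy:

```python
# Minimal sketch: left-tailed chi-square test of a variance (test score example)
import numpy as np
from scipy import stats

scores = np.array([57, 75, 86, 92, 101, 108, 110, 120, 155,
                   63, 77, 88, 96, 102, 108, 111, 122,
                   66, 78, 88, 96, 107, 109, 115, 135,
                   68, 81, 92, 98, 107, 109, 115, 137,
                   72, 82, 92, 99, 107, 110, 118, 139])
df, var_0 = scores.size - 1, 900               # Ho: variance = 30 squared
chi_sq = df * scores.var(ddof=1) / var_0       # test statistic
p_value = stats.chi2.cdf(chi_sq, df)           # left tail; below .01, as reported
print(chi_sq, p_value)
```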
Conclusion: The evidence supports the claim ($p$‐value < .01) that the standard deviation for 8th grade test scores is less than 30. The 41 test scores were the results of the recently administered exam to the 8th grade students. Since the exams were for the current class only, there is no assurance that future classes will achieve similar results. Further research would be to compare results to other schools that administered the same exam and to continue to analyze future class exams to see if the claim is holding true.
One of the most misinterpreted concepts in Statistics is the $p$‐value. In government studies and scientific research, there have been invalid conclusions based on misinterpreting the $p$‐value. On March 7, 2016, in an unprecedented statement, the American Statistical Association released a paper, "Statement on Statistical Significance and P‐Values", which offered principles to improve the conduct and interpretation of quantitative science.77
The paper introduced 6 standards, which we will review individually.
1. $P$‐values can indicate how incompatible the data are with a specified statistical model.
The $p$‐value is the probability of getting data this extreme given $H_o$ is true. This is a conditional probability and can be written as:
$p$‐value=$P$(getting this data or more extreme data | $H_o$ is true)
Example: Financial aid
A researcher wanted to show that the percentage of students at community colleges who receive financial aid exceeds 40%.
Solution
$H_o: p = 0.40$ (The proportion of community college students receiving financial aid is 0.40.).
$H_a: p > 0.40$ (The proportion of community college students receiving financial aid is over 0.40.).
The research sampled 874 students and found that 376 of them received financial aid. This works out to a sample proportion of $\hat{p}=0.430$, which leads to a $Z$ value of 1.822 if $p = 0.40$.
\begin{aligned} p \text {-value } &= P(\hat{p}>0.430 \mid H_o \text { is true }) \\ &=P(Z>1.822) \\ &=0.034 \end{aligned}
The probability of getting this sample proportion, or something larger given the actual proportion is 0.40 is equal 0.034.
2. $P$‐values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
After conducting an experiment, researchers would love to be able know the probability that their claim is true. Unfortunately, this probability cannot be calculated from the $p$‐value alone.
Example: Financial aid
Let's return to the researcher who wanted to show that the percentage of students at community colleges who receive financial aid exceeds 40%. After conducting the research, the p‐value was 0.034. Suppose the researcher wrote this conclusion:
"With 96.6% confidence, we conclude that the percentage of community college students who receive financial aid exceeds 40%”.
This conclusion is invalid, and conclusions written with a similar misinterpretation have shown up in many published works. Let's explore the problem here.
The researcher is claiming that the probability that the alternative hypothesis is true is the complement of the $p$‐value. In other words, the researcher is claiming the $p$‐value is the probability $H_o$ is true given this data. This researcher has flipped the conditionality in the $p$‐value definition!
Researcher's misinterpretation: $p$‐value = $P$( $H_o$ is true | Data this Extreme)
Correct interpretation of $p$‐value = $P$( Getting Data this Extreme | $H_o$ is true)
In Chapter 5 on probability, we explored why $P(A|B)$ is not the same as $P(B|A)$.
Recall the testing for HIV example from Chapter 5
$P$(Tests + | HIV‐) = 1350/9000 = 85%
$P$(HIV+ | Tests+) = 950/2300 = 41.3%
Even though the test has a false‐positive rate of only 15%, there is only a 41.3% chance that someone who tests positive actually has HIV.
$\begin{array}{|l|c|c|c|} \hline & \text { HIV+ } (A) & \text { HIV- } (A^{\prime}) & \text { Total } \\ \hline \text { Test+ } (B) & 950 & 1350 & 2300 \\ \hline \text { Test- } (B^{\prime}) & 50 & 7650 & 7700 \\ \hline \text { Total } & 1000 & 9000 & 10000 \\ \hline \end{array}$
3. Scientific conclusions and business or policy decisions should not be based only on whether a $p$‐value passes a specific threshold.
In any statistics course, we learn that having a $p$‐value less than the significance level is evidence supporting the Alternative Hypothesis. This does not necessarily mean $H_a$ is true or even probably true. There needs to be other reasoning as to why $H_a$ might be true.
Some research journals, like Basic and Applied Social Psychology, now require that research show “strong descriptive statistics, including effect sizes.”78
Example: Financial aid
We will again return to the financial aid example. After conducting the research, the $p$‐value was 0.034. If we started with a significance level of 5%, the decision would be to Reject $H_o$ and support the claim that the percentage of students at community colleges who receive financial aid exceeds 40%. However, if we started with a significance level of 1%, the decision would be to Fail to Reject $H_o$, and there would not be enough evidence to support the claim that the percentage of students at community colleges who receive financial aid exceeds 40%. Even if $H_o$ is rejected, this evidence is not conclusive.
A significant result is only a piece of evidence, and there should always be additional criteria in decision making and research.
4. Proper inference requires full reporting and transparency.
Before conducting research and before collecting data, the experiment needs to be designed and hypotheses need to be stated. Often, especially with a dramatic increase in access to “Big Data”, some have used data dredging as a way to look at many possibilities and identify phenomena that are significant. Researchers, in a desire to get published, will cheat the science by using techniques called $p$‐hacking.
Methods of $p$‐hacking
• Collecting data until the $p$‐value < $\alpha$, then stop collecting data.
• Analyzing many options or conditions, but only publishing ones that are significant.
• Cherry picking the data to only include values that support the claim.
• Only looking at subgroups that are significant.
Use of these $p$‐hacking methods are troubling and is one of the main reasons scientific journals are now skeptical of $p$‐value based hypothesis testing.
The XKCD comic “Significant”79, pictured on the right, shows an example of $p$‐hacking, including how the media misinterprets research.
5. A $p$‐value, or statistical significance, does not measure the size of an effect or the importance of a result.
A result may be statistically significant, but have no practical value.
Suppose someone claims that the mean flying time between New York and San Francisco is 6 hours 20 minutes. After conducting a large sample size study, you find significant evidence ($p$‐value < .01) that the mean flying time is really longer, with a sample mean of 6 hours and 23 minutes.
Even though your evidence is strong, there is no practical difference between the times. The $p$‐value does not address effect sizes.
6. By itself, a $p$‐value does not provide a good measure of evidence regarding a model or hypothesis.
The $p$‐value is a useful tool, but by itself is not enough to support research.80
In this section we consider expanding the concepts from the prior section to design and conduct hypothesis testing with two samples. Although the logic of hypothesis testing will remain the same, care must be taken to choose the correct model. We will first consider comparing two population means.
11: Two Populations Inference
In designing a two population test of means, first determine whether the experiment involves data that is collected by independent or dependent sampling.
Independent sampling
The data is collected by two simple random samples from separate and unrelated populations. This data will then be used to compare the two population means. This is typical of an experimental or treatment population versus a control population.
• $n_1$ is the sample size from Population 1.
• $n_2$ is the sample size from Population 2.
• $\overline{X}_1$ is the sample mean from Population 1.
• $\overline{X}_2$ is the sample mean from Population 2.
• $s_1$ is the sample standard deviation from Population 1.
• $s_2$ is the sample standard deviation from Population 2.
Example: Comparing algebra courses
A community college mathematics department wants to know if an experimental algebra course has higher success rates when compared to a traditional course. The mean grade points for 80 students in the experimental course (treatment) is compared to the mean grade points for 100 students in the traditional course (control).
Dependent sampling
The data consists of a single population and two measurements. A simple random sample is taken from the population and pairs of measurements are collected. This is also called related sampling or matched pair design. Dependent sampling actually reduces to a one population model of differences.
• $n$ is the sample size from the population, the number of pairs
• $\overline{X}_d$ is the sample mean of the differences of each pair.
• $s_d$ is the sample standard deviation of the differences of each pair.
Example: Comparing midterm grades
An instructor of a statistics course wants to know if student scores are different on the second midterm compared to the first exam. The first and second midterm scores for 35 students is taken and the mean difference in scores is determined.
11.02: Independent Sampling Models
We will first consider the case when we want to compare the population means of two populations using independent sampling.
Distribution of the difference of two sample means
Suppose we wanted to test the hypothesis $H_o: \mu_{1}=\mu_{2}$. We have point estimators for both $\mu_{1}$ and $\mu_{2}$, namely $\overline{X}_1$ and $\overline{X}_2$, which have approximately Normal Distributions under the Central Limit Theorem, but it would be useful to combine them both into a single estimator. Fortunately it is known that if two random variables have a Normal Distribution, then so does the sum and difference. Therefore we can restate the hypothesis as $H_o: \mu_{1}-\mu_{2}=0$ and use the difference of sample means $\overline{X}_1 - \overline{X}_2$ as a point estimator for the difference in population means $\mu_{1}-\mu_{2}$.
Distribution of $\overline{X}_1 - \overline{X}_2$ under the Central Limit Theorem
$\mu_{\overline{X}_{1}-\overline{X}_{2}}=\mu_{1}-\mu_{2}$
$\sigma_{\overline{X}_{1}-\overline{X}_{2}}=\sqrt{\dfrac{\sigma_{1}^{2}}{n_{1}}+\dfrac{\sigma_{2}^{2}}{n_{2}}}$
$Z=\dfrac{\left(\overline{X}_{1}-\overline{X}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}$ if $n_1$ and $n_2$ are sufficiently large.
Comparing two means, independent sampling: Model when population variances known
When the population variances are known, the test statistic for the hypothesis $H_o: \mu_{1}=\mu_{2}$ can be tested with the Normal distribution $Z$ test statistic shown above. Also, if both sample sizes $n_1$ and $n_2$ exceed 30, this model can also be used.
Example: Homes and pools
Are larger homes more likely to have pools? The square footage (size) data for single family homes in California was separated into two populations: Homes with pools and homes without pools. We have data from 130 homes with pools and 95 homes without pools.
Solution
Design
Research Hypotheses:
$H_o: \mu_{1} \leq \mu_{2}$ (Homes with pools do not have more mean square footage)
$H_a: \mu_{1} > \mu_{2}$ (Homes with pools do have more mean square footage)
Since both sample sizes are over 30, the model will be a Large sample $Z$ test comparing two population means with independent sampling.
This model is appropriate since the sample sizes assure the distribution of the sample mean is approximately Normal from the Central Limit Theorem. We opt for a one‐tailed test since we want to support the claim that homes with pools are larger. The test statistic will be $Z=\dfrac{\left(\overline{X}_{1}-\overline{X}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}$
Type I error would be to reject the Null Hypothesis and claim home with pools are larger, when they are not larger. It was decided to limit this error by setting the level of significance ($\alpha$) to 1%.
The decision rule under the critical value method would be to reject the Null Hypothesis when the value of the test statistic is in the rejection region. In other words, reject $H_o$ when $Z > 2.326$. The decision under the $p$‐value method is to reject $H_o$ if the $p$‐value is < $\alpha$.
Data/Results
Since the test statistic ($Z = 4.19$) is greater than the critical value (2.326), $H_o$ is rejected. Also, since the $p$‐value (0.000013) is less than $\alpha$ (0.01), the decision is to Reject $H_o$.
Conclusion
The researcher makes the strong statement that homes with pools have a significantly higher mean square footage than home without pools.
Model when population variances are unknown, but are assumed to be equal
In the case that the population standard deviations are unknown, it seems logical to simply replace the population standard deviations for each population with the sample standard deviations and use a $t$‐distribution as we did for the one population case. However, this is not so simple when the sample size for either group is under 30.
We will consider two models. This first model (which we prefer to use since it has more power) assumes the population variances are equal and is called the pooled variance $t$‐test. In this model we combine or “pool” the two sample standard deviations into a single estimate called the pooled standard deviation, $s_p$. If the central limit theorem is working, we can then substitute $s_p$ for $s_1$ and $s_2$ to get a $t$‐distribution with $n_{1}+n_{2}-2$ degrees of freedom:
Pooled variance $t$‐test to compare the means for two independent populations
Model Assumptions
• Independent Sampling
• $\overline{X}_{1}-\overline{X}_{2}$ approximately Normal
• $\sigma_{1}^{2}=\sigma_{2}^{2}$
Test Statistic
• $t=\dfrac{\left(\overline{X}_{1}-\overline{X}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{s_{p} \sqrt{\frac{1}{n_{1}}+\frac{1}{n_{2}}}}$
• $s_{p}=\sqrt{\dfrac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{n_{1}+n_{2}-2}}$
• Degrees of freedom $=n_{1}+n_{2}-2$
Example: Fuel economy
A recent EPA study compared the highway fuel economy of domestic and imported passenger cars. A sample of 15 domestic cars revealed a mean of 33.7 MPG (miles per gallon) with a standard deviation of 2.4 mpg. A sample of 12 imported cars revealed a mean of 35.7 mpg with a standard deviation of 3.9 mpg. At the .05 significance level can the EPA conclude that the MPG is higher for the imported cars?
Solution
Design
It is best to associate the subscript 2 with the control group; in this case we will let domestic cars be population 2.
Research Hypotheses:
$H_o: \mu_{1} \leq \mu_{2}$ (Imported compact cars do not have a higher mean MPG)
$H_a: \mu_{1} > \mu_{2}$ (Imported compact cars have a higher mean MPG)
We will assume the population variances are equal $\sigma_{1}^{2}=\sigma_{2}^{2}$, so the model will be a Pooled variance $t$‐test. This model is appropriate if the distribution of the differences of sample means is approximately Normal from the Central Limit Theorem. A one‐tailed test is selected based on $H_a$.
Type I error would be to reject the Null Hypothesis and claim that imports have a higher mean MPG, when they do not have higher MPG. The test will be run at a level of significance ($\alpha$) of 5%.
The degrees of freedom for this test is 25, so the decision rule under the critical value method would be to reject $H_o$ when $t > 1.708$. The decision under the $p$‐value method is to reject $H_o$ if the $p$‐value is < $\alpha$.
Data/Results
$s_{p}=\sqrt{\dfrac{(12-1)(3.86)^{2}+(15-1)(2.16)^{2}}{15+12-2}}=3.03$
$t=\dfrac{(35.76-33.59)-0}{3.03 \sqrt{\frac{1}{12}+\frac{1}{15}}}=1.85$
Since 1.85 > 1.708, the decision would be to Reject $H_o$. Also the p‐value is calculated to be .0381 which again shows that the result is significant at the 5% level.
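As a check on the hand calculations, here is a minimal Python sketch of the pooled variance $t$‐test, using the summary statistics from the Data/Results above:

```python
import math
from scipy import stats

n1, x1bar, s1 = 12, 35.76, 3.86   # imported cars (population 1)
n2, x2bar, s2 = 15, 33.59, 2.16   # domestic cars (population 2)

# Pooled standard deviation, t statistic, and one-tailed p-value
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
t = (x1bar - x2bar) / (sp * math.sqrt(1 / n1 + 1 / n2))
p = stats.t.sf(t, n1 + n2 - 2)
print(sp, t, p)                   # sp ≈ 3.03, t ≈ 1.85, p ≈ 0.038
```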
Conclusion
Imported compact cars have a significantly higher mean MPG rating when compared to domestic cars.
Model when population variances unknown, but assumed to be unequal
In the prior example, we assumed the population variances were equal. However, when looking at the box plot of the data or the sample standard deviations, it appears that the import cars have more variability in MPG than domestic cars, which would violate the assumption of equal variances required for the Pooled Variance $t$‐test.
Fortunately, there is an alternative model that has been developed for when population variances are unequal, called the Behrens‐Fisher model81, or the unequal variances $t$‐test.
Unequal variance $t$‐test to compare the means for two independent populations
Model Assumptions
• Independent Sampling
• $\overline{X}_{1}-\overline{X}_{2}$ approximately Normal
• $\sigma_{1}^{2} \neq \sigma_{2}^{2}$
Test Statistic
• $t^{\prime}=\dfrac{\left(\bar{X}_{1}-\bar{X}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}$
• $d f=\dfrac{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)^{2}}{\left[\frac{\left(s_{1}^{2} / n_{1}\right)^{2}}{\left(n_{1}-1\right)}+\frac{\left(s_{2}^{2} / n_{2}\right)^{2}}{\left(n_{2}-1\right)}\right]}$
The degrees of freedom will be less than or equal to $n_{1}+n_{2}-2$, so this test will usually have less power than the pooled variance $t$‐test.
Example: Fuel economy
We will repeat the prior example to see if we can support the claim that imported compact cars have higher mean MPG when compared to domestic compact cars. This time we will assume that the population variances are not equal.
Solution
Design
Again we will let domestic cars be population 2.
Research Hypotheses:
$H_o: \mu_{1} \leq \mu_{2}$ (Imported compact cars do not have a higher mean MPG)
$H_a: \mu_{1} > \mu_{2}$ (Imported compact cars have a higher mean MPG)
We will assume the population variances are unequal $\sigma_{1}^{2} \neq \sigma_{2}^{2}$, so the model will be an unequal variance $t$‐test. This model is appropriate if the distribution of the differences of sample means is approximately Normal from the Central Limit Theorem. A one‐tailed test is selected based on $H_a$.
Type I error would be to reject the Null Hypothesis and claim imports have a higher mean MPG, when they do not have higher MPG. The test will be run at a level of significance ($\alpha$) of 5%. The degrees of freedom for this test is 16 (see calculation below), so the decision rule under the critical value method would be to reject $H_o$ when $t > 1.746$. The decision under the $p$‐value method is to reject $H_o$ if the $p$‐value is < $\alpha$
Data/Results
$d f=\dfrac{\left(\frac{2.16^{2}}{15}+\frac{3.86^{2}}{12}\right)^{2}}{\left[\frac{\left(2.16^{2} / 15\right)^{2}}{(15-1)}+\frac{\left(3.86^{2} / 12\right)^{2}}{(12-1)}\right]}=16$
$t=\dfrac{(35.76-33.59)-0}{\sqrt{\frac{2.16^{2}}{15}+\frac{3.86^{2}}{12}}}=1.74$
Since 1.74 < 1.746, the decision would be to Fail to Reject $H_o$. Also the $p$‐value is calculated to be .0504, which again shows that the result is not significant (barely) at the 5% level.
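SciPy can run both versions of the test directly from summary statistics, which makes the contrast between the two models easy to see. This sketch reproduces both fuel economy results (it assumes a SciPy version recent enough to support the `alternative` argument):

```python
from scipy import stats

imports  = dict(mean1=35.76, std1=3.86, nobs1=12)
domestic = dict(mean2=33.59, std2=2.16, nobs2=15)

pooled = stats.ttest_ind_from_stats(**imports, **domestic,
                                    equal_var=True, alternative='greater')
welch = stats.ttest_ind_from_stats(**imports, **domestic,
                                   equal_var=False, alternative='greater')
print(pooled.pvalue)   # ≈ 0.038 -> Reject Ho at the 5% level
print(welch.pvalue)    # ≈ 0.050 -> Fail to Reject Ho at the 5% level
```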
Conclusion
Insufficient evidence to claim imported compact cars have a significantly higher mean MPG rating when compared to domestic cars.
You can see the lower power of this test when compared to the pooled variance $t$‐test example where $H_o$ was rejected. We always prefer to run the test with higher power when appropriate.
11.03: Dependent Sampling ‐ Matched Pairs t‐test
The independent models shown above compared samples that were not related. However, it is often advantageous to have related samples that are paired up – two measurements from a single population. The model we will consider here is called the matched pairs $t$‐test, also known as the paired difference $t$‐test. The advantage of this design is that we can eliminate variability due to factors other than the one being studied, increasing the power of the design.
In this model we take the difference of each pair and create a new population of differences, so in effect, the hypothesis test is a one population test of mean that we already covered in the prior section.
Matched pairs $t$‐test to compare the means for two dependent populations
Model Assumptions
• Dependent Sampling
• $X_{d}=X_{1}-X_{2}$
• $\overline{X}_{d}=\overline{X}_{1}-\overline{X}_{2}$ approximately Normal
Test Statistic
• $t=\dfrac{\overline{X}_{d}-\mu_{d}}{s_{d} / \sqrt{n}}$
• $d f=n-1$
Example: Rental cars
An independent testing agency is comparing the daily rental cost for renting a compact car from Hertz and Avis.
A random sample of 15 cities is obtained and the following rental information is recorded.
At the .05 significance level, can the testing agency conclude that there is a difference in the mean rental charge?
Notice in this example that cities are the single population being sampled and that two measurements (Hertz and Avis) are being taken from each city. Using the matched pair design, we can eliminate the variability due to cities being differently priced (Honolulu is cheap because you can’t drive very far on Oahu!)
Solution
Design
Research Hypotheses:
$H_o: \mu_{1}=\mu_{2}$ (Hertz and Avis have the same mean price for compact cars.)
$H_a: \mu_{1} \neq \mu_{2}$ (Hertz and Avis do not have the same mean price for compact cars.)
Model will be matched pairs $t$‐test and these hypotheses can be restated as: $H_o: \mu_{d}=0 \quad H_a: \mu_{d} \neq 0$
The test will be run at a level of significance ($\alpha$) of 5%.
Model is two‐tailed matched pairs $t$‐test with 14 degrees of freedom. Reject $H_o$ if $t < ‐2.145$ or $t >2.145$
Data/Results
We take the difference for each pair and find the sample mean and standard deviation.
\begin{aligned} \overline{X}_{d}&=1.80 \\ s_{d}&=2.513 \\ n&=15 \\ t&=\dfrac{1.80-0}{2.513 / \sqrt{15}}=2.77 \end{aligned}
Reject $H_o$ under either the critical value or p‐value method.
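This result can be verified from the summary statistics of the differences with a few lines of Python (with the raw paired data, `scipy.stats.ttest_rel` performs the same test):

```python
import math
from scipy import stats

dbar, sd, n = 1.80, 2.513, 15          # summary of the 15 paired differences
t = (dbar - 0) / (sd / math.sqrt(n))
p = 2 * stats.t.sf(abs(t), n - 1)      # two-tailed p-value
print(t, p)                            # t ≈ 2.77, p ≈ 0.015 < 0.05
```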
Conclusion
There is a difference in mean price for compact cars between Hertz and Avis. Avis has lower mean prices.
The advantage of the matched pair design is clear in this example. The sample standard deviation for the Hertz prices is \$5.23 and for Avis it is \$5.62. Much of this variability is due to the cities, and the matched pairs design dramatically reduces the standard deviation to \$2.51, meaning the matched pairs $t$‐test has significantly more power in this example.
11.04: Independent Sampling Comparing Two Population Variances or Standard Deviations
Sometimes we want to test if two populations have the same spread or variation, as measured by variance or standard deviation. This may be a test on its own or a way of checking assumptions when deciding between two different models (e.g.: pooled variance $t$‐test vs. unequal variance $t$‐test). We will now explore testing for a difference in variance between two independent samples.
The $\mathbf{F}$ distribution is a family of distributions related to the Normal Distribution. There are two different degrees of freedom, usually represented as numerator ($\mathrm{df}_{\text {num}}$) and denominator ($\mathrm{df}_{\text {den}}$). Also, since $\mathbf{F}$ represents squared data, the inference will be about the variance rather than about the standard deviation.
Characteristics of $\mathbf{F}$ Distribution
• It is positively skewed
• It is non‐negative
• There are 2 different degrees of freedom ($\mathrm{df}_{\text {num}}$, $\mathrm{df}_{\text {den}}$)
• When the degrees of freedom change, a new distribution is created
• The expected value is 1.
$\mathbf{F}$ test for equality of variances
Suppose we wanted to test the Null Hypothesis that two population standard deviations are equal, $H_o: \sigma_{1}=\sigma_{2}$. This is equivalent to testing that the population variances are equal: $\sigma_{1}^{2}=\sigma_{2}^{2}$. We will now instead write these as an equivalent ratio: $H_o: \dfrac{\sigma_{1}^{2}}{\sigma_{2}^{2}}=1$ or $H_o: \dfrac{\sigma_{2}^{2}}{\sigma_{1}^{2}}=1$.
This is the logic behind the $\mathbf{F}$ test; if two population variances are equal, then the ratio of sample variances from each population will have an $\mathbf{F}$ distribution. In practice, $\mathbf{F}$ will always be run as an upper‐tailed test, so the larger variance goes in the numerator. The test statistics are summarized in the table.
| Hypotheses | Test Statistic |
|---|---|
| $H_{o}: \sigma_{1} \geq \sigma_{2} \quad H_{a}: \sigma_{1}<\sigma_{2}$ | $\mathbf{F}=\dfrac{s_{2}^{2}}{s_{1}^{2}}$, use $\alpha$ table |
| $H_{o}: \sigma_{1} \leq \sigma_{2} \quad H_{a}: \sigma_{1}>\sigma_{2}$ | $\mathbf{F}=\dfrac{s_{1}^{2}}{s_{2}^{2}}$, use $\alpha$ table |
| $H_{o}: \sigma_{1}=\sigma_{2} \quad H_{a}: \sigma_{1} \neq \sigma_{2}$ | $\mathbf{F}=\dfrac{\max \left(s_{1}^{2}, s_{2}^{2}\right)}{\min \left(s_{1}^{2}, s_{2}^{2}\right)}$, use $\alpha / 2$ table |
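A minimal Python sketch of the two‐tailed version of this test; note how it follows the table's convention of placing the larger sample variance in the numerator and doubling the upper‐tail area (the function is an illustration, not a library routine):

```python
from scipy import stats

def f_test_two_tailed(s1, n1, s2, n2):
    """Two-tailed F test of Ho: sigma1 = sigma2."""
    v1, v2 = s1**2, s2**2
    if v1 >= v2:
        F, dfn, dfd = v1 / v2, n1 - 1, n2 - 1
    else:
        F, dfn, dfd = v2 / v1, n2 - 1, n1 - 1
    p = 2 * stats.f.sf(F, dfn, dfd)   # double the upper-tail area
    return F, p
```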
Example: Variation in stocks
A stockbroker at a brokerage firm reported that the mean rate of return on a sample of 10 software stocks (population 1) was 12.6 percent with a standard deviation of 4.9 percent. The mean rate of return on a sample of 8 utility stocks (population 2) was 10.9 percent with a standard deviation of 3.5 percent. At the .05 significance level, can the broker conclude that there is more variation in the software stocks?
Solution
Design
Research Hypotheses:
$H_o: \sigma_{1} \leq \sigma_{2}$ (Software stocks do not have more variation)
$H_a: \sigma_{1} > \sigma_{2}$ (Software stocks do have more variation)
Model will be $\mathbf{F}$ test for variances and the test statistic from the table will be $\mathbf{F}=\dfrac{s_{1}^{2}}{s_{2}^{2}}$. The degrees of freedom for numerator will be $n_{1}-1=9$, and the degrees of freedom for denominator will be $n_{2}-1=7$. The test will be run at a level of significance ($\alpha$) of 5%. Critical Value for $\mathbf{F}$ with $\mathrm{df}_{\text {num}}=9$ and $\mathrm{df}_{\text {den}}=7$ is 3.68. Reject $H_o$ if $\mathbf{F}$ >3.68.
Data/Results
$\mathbf{F}=4.9^{2} / 3.5^{2}=1.96$, which is less than the critical value, so Fail to Reject $H_o$.
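A quick verification of this result in Python, finding the critical value and the $p$‐value from SciPy's $\mathbf{F}$ distribution:

```python
from scipy import stats

F = 4.9**2 / 3.5**2                  # ≈ 1.96
critical = stats.f.ppf(0.95, 9, 7)   # ≈ 3.68
p = stats.f.sf(F, 9, 7)              # well above 0.05 -> Fail to Reject Ho
print(F, critical, p)
```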
Conclusion
There is insufficient evidence to claim more variation in the software stocks.
Example: Testing model assumptions
When comparing two means from independent samples, you have a choice between the more powerful pooled variance $t$‐test (assumption is $\sigma_{1}^{2}=\sigma_{2}^{2}$) or the weaker unequal variance $t$‐test (assumption is $\sigma_{1}^{2} \neq \sigma_{2}^{2}$). We can now design a hypothesis test to help us choose the appropriate model. Let us revisit the example of comparing the MPG for import and domestic compact cars. Consider this example a "test before the main test" to help choose the correct model for comparing means.
Solution
Design
Research Hypotheses:
$H_o: \sigma_{1} = \sigma_{2}$ (choose the pooled variance $t$‐test to compare means)
$H_a: \sigma_{1} \neq \sigma_{2}$ (choose the unequal variance $t$‐test to compare means)
Model will be $\mathbf{F}$ test for variances, and the test statistic from the table will be $\mathbf{F}=\dfrac{s_{1}^{2}}{s_{2}^{2}}$ ($s_1$ is larger). The degrees of freedom for numerator will be $n_{1}-1=11$ and the degrees of freedom for denominator will be $n_{2}-1=14$. The test will be run at a level of significance ($\alpha$) of 10%, but use the $\alpha$=.05 table for a two‐tailed test. Critical Value for $\mathbf{F}$ with $\mathrm{df}_{\text {num}}=11$ and $\mathrm{df}_{\text {den}}=14$ is 2.57. Reject $H_o$ if $\mathbf{F} > 2.57$.
We will also run this test the $p$‐value way in Megastat.
Data/Results
$\mathbf{F}=14.894 / 4.654=3.20$, which is more than critical value; Reject $H_o$.
Also $p$‐value = 0.0438 < 0.10, which also makes the result significant.
Conclusion
Do not assume equal variances and run the unequal variance $t$‐test to compare population means.
In Summary
This flowchart summarizes which of the four models to choose when comparing two population means. In addition, you can use the $\mathbf{F}$‐test for equality of variances to make the decision between the pooled variance $t$‐test and the unequal variance $t$‐test.
11.05: Comparing Two Proportions
In Chapter 10, we covered the test for comparing a proportion to a hypothesized value. In this section we want to explore a test to compare two population proportions.
Like testing means, the usual null hypothesis will be that proportions are the same. We will usually denote each of the two proportions with a subscript, say 1 and 2. Here are some possible two‐tailed and one‐tailed Hypotheses:
$\begin{array}{lll} H_o: p_{1}=p_{2} & H_o: p_{1} \geq p_{2} & H_o: p_{1} \leq p_{2} \\ H_a: p_{1} \neq p_{2} & H_a: p_{1}<p_{2} & H_a: p_{1}>p_{2} \end{array}$
Notice that the Null Hypothesis can be written as $H_o: p_{1}-p_{2}=0$, meaning we want to look at the distribution of the difference of sample proportions as a random variable.
Distribution of difference of sample proportions
Suppose we take a sample of $n_1$ from population 1 and $n_2$ from population 2. Let $X_1$ be the number of successes in sample 1 and $X_2$ be the number of successes in sample 2.
$\hat{p}_{1}=\dfrac{X_{1}}{n_{1}}$ represents the proportion of successes in sample 1
$\hat{p}_{2}=\dfrac{X_{2}}{n_{2}}$ represents the proportion of successes in sample 2
As long as there are at least 10 successes and 10 failures in each sample, then the difference of sample proportions $\hat{p}_{1}-\hat{p}_{2}$ will have a Normal Distribution.
Central Limit Theorem for the difference of proportions $\hat{p}_{1}-\hat{p}_{2}$
1. $\mu_{\hat{p}_{1}-\hat{p}_{2}}=p_{1}-p_{2}$
2. $\sigma_{\hat{p}_{1}-\hat{p}_{2}}=\sqrt{\dfrac{p_{1}\left(1-p_{1}\right)}{n_{1}}+\dfrac{p_{2}\left(1-p_{2}\right)}{n_{2}}}$
3. If $n_{1} p_{1}, n_{1}(1-p_{1}), n_{2} p_{2}, n_{2}(1-p_{2})$ are all at least 10, then the Probability Distribution of $\hat{p}_{1}-\hat{p}_{2}$ is approximately Normal.
Combining all of the above into a single formula:
$Z=\dfrac{\left(\hat{p}_{1}-\hat{p}_{2}\right)-\left(p_{1}-p_{2}\right)}{\sqrt{\frac{p_{1}\left(1-p_{1}\right)}{n_{1}}+\frac{p_{2}\left(1-p_{2}\right)}{n_{2}}}} \nonumber$
Example: Left handedness by gender
12% of North Americans claim left‐handedness. With regard to gender, men are slightly more likely than women to be left‐handed, with most studies indicating that about 13% of men and about 11% of women are left‐handed82.
$p_m$ = 0.13 = proportion of men who are left‐handed
$p_w$ = 0.11 = proportion of women who are left‐handed
$p_m ‐ p_w$ = difference in proportion of men and women who are left‐handed
Solution
Suppose we take a sample of 100 men and 150 women. Let's investigate the random variable $\hat{p}_{m}-\hat{p}_{w}$
$100(0.13)=13 \qquad 100(1-0.13)=87$
$150(0.11)=16.5 \qquad 150(1-0.11)=133.5$
Since all values are greater than 10, $\hat{p}_{m}-\hat{p}_{w}$ has approximately a Normal distribution.
$\mu_{\hat{p}_{m}-\hat{p}_{w}}=0.13-0.11=0.02$
$\sigma_{\hat{p}_{m}-\hat{p}_{w}}=\sqrt{\dfrac{0.13(1-0.13)}{100}+\dfrac{0.11(1-0.11)}{150}}=0.0422$
Hypothesis test for difference of proportions
In conducting a Hypothesis test where the Null hypothesis assumes equal proportions, it is best practice to pool or combine the sample proportions into a single estimated proportion $\bar{p}$, and to use an estimated standard error, $s_{\hat{p}_{1}-\hat{p}_{2}}$:
$\bar{p}=\dfrac{X_{1}+X_{2}}{n_{1}+n_{2}}$
$s_{\hat{p}_{1}-\hat{p}_{2}}=\sqrt{\dfrac{\bar{p}(1-\bar{p})}{n_{1}}+\dfrac{\bar{p}(1-\bar{p})}{n_{2}}}$
The test statistic will have a Normal Distribution as long as there are at least 10 successes and 10 failures in both samples.
$Z=\dfrac{\left(\hat{p}_{1}-\hat{p}_{2}\right)-\left(p_{1}-p_{2}\right)}{\sqrt{\frac{\bar{p}(1-\bar{p})}{n_{1}}+\frac{\bar{p}(1-\bar{p})}{n_{2}}}}$
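Here is a minimal Python sketch of this pooled two‐proportion $Z$ statistic; the function and the counts passed to it are illustrative assumptions, not a library routine or textbook data:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Z test statistic for Ho: p1 = p2 using the pooled proportion."""
    pbar = (x1 + x2) / (n1 + n2)                          # pooled proportion
    se = math.sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))  # pooled std error
    return (x1 / n1 - x2 / n2) / se

# Hypothetical counts, for illustration only
print(two_prop_z(120, 400, 90, 400))
```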
Example: Background checks at gun shows
Under current United States law, private sales between owners are exempt from background check requirements. This is sometimes called the "Gun Show Loophole" as it may allow criminals, terrorists and the mentally ill to purchase assault weapons, such as those used in mass shootings.83
In an August 2016 study, Pew Research analyzed Americans' opinions about gun laws and rights.84 Pew took a representative sample of 990 men and 1020 women and asked them several questions. In particular, they asked the sampled Americans if background checks required at gun stores should be made universal and extended to all sales of guns between private owners or at gun shows. 772 out of 990 men said yes, while 857 out of 1020 women said yes.
Is there a difference in the proportion of men and women who support universal background checks for purchasing guns? Design and conduct the test with a significance level of 1%.
Solution
Design
$H_{o}: p_{m}=p_{w}$ (There is no difference in the proportion of support for background checks by gender)
$H_{a}: p_{m} \neq p_{w}$ (There is a difference in the proportion of support for background checks by gender)
Model: Two proportion $Z$ test. This is a two‐tailed test with $\alpha$ = 0.01.
Model Assumptions: for men there are 772 yes and 218 no. For women there are 857 yes and 163 no. Since all these numbers exceed 10, the model is appropriate.
Decision Rules:
Critical Value Method ‐ Reject $H_o$ if $Z$ > 2.58 or $Z$ < ‐2.58.
$P$‐value method ‐ Reject $H_o$ if $p$‐value <0.01
Data/Results
$\hat{p}_{m}=\dfrac{772}{990}=0.780 \qquad \hat{p}_{w}=\dfrac{857}{1020}=0.840 \qquad \bar{p}=\dfrac{772+857}{990+1020}=0.810$
$Z=\dfrac{(0.780-0.840)-0}{\sqrt{\frac{0.810(1-0.810)}{990}+\frac{0.810(1-0.810)}{1020}}}=-3.45 \qquad p\text{-value}=0.0005<\alpha$
Reject $H_o$ under both methods
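As a check, these results can be reproduced in a few lines of Python using the survey counts:

```python
import math
from scipy import stats

x1, n1, x2, n2 = 772, 990, 857, 1020
pbar = (x1 + x2) / (n1 + n2)                           # pooled proportion
se = math.sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))  # pooled standard error
z = (x1 / n1 - x2 / n2) / se
p_value = 2 * stats.norm.sf(abs(z))                    # two-tailed
print(z, p_value)                                      # z ≈ -3.45, p ≈ 0.0005
```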
Conclusion
There is a difference in the proportion of support for background checks by gender. Women are more likely to support background checks.
Often we want to conduct tests of claims about the characteristics of qualitative or categorical non‐numeric data. In Chapter 10, we covered a test of one population proportion. In reality, this was a test of a categorical variable with 2 choices (success, failure). Now in this section, we will expand our study of hypothesis tests involving categorical data to include categorical random variables with more than two choices using a goodness‐of‐fit test. In addition, we will compare two categorical variables for independence. Both of these models will use a Chi‐square test statistic, which looks at deviations between the observed values and expected values of the data.
12: Chi‐square Tests for Categorical Data
12.01: Chi‐square Goodness‐of‐Fit Test
A financial services company had anecdotal evidence that people were calling in sick on Monday and Friday more frequently than on Tuesday, Wednesday or Thursday. The speculation was that some employees were using sick days to extend their weekends. A researcher for the company was asked to determine if the data supported a significant difference in absenteeism due to the day of the week.
The categorical variable of interest here is the “Day of Week” on which an employee called in sick (Monday through Friday). This is an example of a multinomial random variable, in which we will observe a fixed number of trials (the total number of sick days sampled) and at least 2 possible outcomes. (A binomial random variable is a special case of the multinomial random variable where there are exactly 2 possible outcomes; it was studied in Chapter 10 as a $Z$ Test of Proportion.)
The Chi‐square goodness‐of‐fit test is used to test if observed data from a categorical variable is consistent with an expected assumption about the distribution of that variable.
Chi‐square Goodness of Fit Test
Model Assumptions
• $O_{i}$ = Observed in category $i$
• $p_{i}$ = Expected proportion in category $i$
• $E_{i}=n p_{i}$ = Expected in category $i$
• $E_{i} \geq 5$ for each $i$
Test Statistic
$\chi^{2}=\sum_{i=1}^{k} \dfrac{\left(O_{i}-E_{i}\right)^{2}}{E_{i}} \quad \mathrm{df}=k-1$ where
$k$ = number of categories, $n$ = sample size
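This statistic is implemented directly in SciPy as `scipy.stats.chisquare`. A short sketch with hypothetical observed counts (illustrative only, not the textbook's sick-day data) shows the call:

```python
from scipy import stats

# Hypothetical sick-day counts for Mon-Fri (n = 400), for illustration only
observed = [95, 65, 60, 70, 110]
expected = [400 * 0.20] * 5                           # 80 per day under Ho
chi2, p = stats.chisquare(observed, f_exp=expected)   # df = k - 1 = 4
print(chi2, p)
```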
Chi‐Square Goodness‐of‐Fit test ‐ equal expected frequencies
Example: Sick days
A researcher for the financial services company collected 400 records of which day of the week employees called in sick to work. Can the researcher conclude that proportion of employees who call in sick is not the same for each day of the week? Design and conduct a hypothesis test at the 1% significance level.
Solution
Research Hypotheses:
$H_o$: There is no difference in the proportion of employees who call in sick due to the day of the week.
$H_a$: There is a difference in the proportion of employees who call in sick due to the day of the week.
We can also state the hypotheses in terms of population parameters, $p_i$ for each category. Under the Null Hypothesis, we would expect 20% sick days would occur on each week day.
Research Hypotheses:
$H_o: p_{1}=p_{2}=p_{3}=p_{4}=p_{5}=0.20$
$H_a$: At least one $p_i$ is different than what was stated in $H_o$
Statistical Model: Chi‐square goodness‐of‐fit test.
Important Assumption: The Expected Value of Each Category needs to be greater than or equal to 5. In this example, $E_{i}=n p_{i}=(400)(.20)=80 \geq 5$ for each category, so the model is appropriate.
Test Statistic: $\chi^{2}=\sum_{i=1}^{k} \dfrac{\left(O_{i}-E_{i}\right)^{2}}{E_{i}} \qquad \mathrm{df}=5-1=4$
Decision Rule (Critical Value Method): Reject $H_o$ if $\chi^{2}>13.277 (\alpha=.01, 4 \mathrm{df})$
Results:
Since the Test Statistic ($\chi^{2}=15.625$) is in the Rejection Region, the decision is to Reject $H_o$. Under the $p$‐value method, $H_o$ is also rejected since the $p \text {-value }=p\left(\chi^{2}>15.625\right)=0.004$, which is less than the Significance Level $\alpha$ of 1%.
Conclusion:
There is a difference in the proportion of employees who call in sick due to the day of the week. Employees are more likely to call in sick on days close to the weekend.
Chi‐Square Goodness‐of‐Fit test ‐ different expected frequencies
In the prior example, the Null Hypothesis was that all categories had the same proportion; in other words, there was no difference in counts due to the choices of a categorical variable. Another set of hypotheses using this same Chi‐square goodness‐of‐fit test can be used to compare the results of a current experiment to prior results. In these tests, it is quite likely that the prior proportions were not the same.
Example: Method of commuting
In the 2010 United States census, data was collected on how people get to work ‐‐ their method of commuting. The results are shown in the graph to the right. Suppose you wanted to know if people who live in the San Jose metropolitan area (Santa Clara County) commute with similar proportions as the United States. We will sample 1000 workers from Santa Clara County and conduct a Chi‐square goodness‐of‐fit test. Design and conduct a hypothesis test at the 5% significance level.
Solution
Research Hypotheses:
$H_o$: Workers in Santa Clara county choose methods of commuting that match the United States averages.
$H_a$: Workers in Santa Clara county choose methods of commuting that do not match the United States averages.
We can also state the hypotheses in terms of population parameters, $p_i$ for each category. Under the Null Hypothesis, we would expect the Santa Clara proportions to be the same as the US 2010 Census data.
Research Hypotheses:
$H_o: p_{1}=.763, \; p_{2}=.098, \; p_{3}=.050, \; p_{4}=.028, \; p_{5}=.018, \; p_{6}=.043$
$H_a$: At least one $p_i$ is different than what was stated in $H_o$
Statistical Model: Chi‐square goodness‐of‐fit test.
Important Assumption: The Expected Value of Each Category needs to be greater than or equal to 5. In this example check the lowest $p_{i}: E_{5}=n p_{5}=(1000)(.018)=18 \geq 5$, so the model is appropriate.
Test Statistic: $\chi^{2}=\sum_{i=1}^{k} \dfrac{\left(O_{i}-E_{i}\right)^{2}}{E_{i}} \qquad \mathrm{df}=6-1=5$
Decision Rule (Critical Value Method): Reject $H_o$ if $\chi^{2}>11.071 (\alpha=.05, 5 \mathrm{df})$
After designing the experiment, we conducted the sample of Santa Clara County, shown in the Observed Frequency Column of the table below. The Expected Proportion and Expected Frequency Columns are calculated using the U.S. 2010 Census.
Results:
Since the Test Statistic of 16.2791 exceeds the critical value of 11.071, the decision is to Reject $H_o$. Under the $p$‐value method, $H_o$ is also rejected since the $p \text {-value }=P\left(\chi^{2}>16.2791\right)=0.006$ which is less than the Significance Level $\alpha$ of 5%.
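As a check, the critical value and $p$‐value reported here can be reproduced from SciPy's chi‐square distribution:

```python
from scipy import stats

print(stats.chi2.ppf(0.95, 5))     # critical value ≈ 11.071
print(stats.chi2.sf(16.2791, 5))   # p-value ≈ 0.006
```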
Conclusion:
Workers in Santa Clara County do not have the same frequencies of method of commuting as workers in the entire United States.
12.02: Chi‐square Test of Independence
In 2014, Colorado became the first state to legalize the recreational use of marijuana. Other states have joined Colorado, while some have decriminalized or authorized the medical use of marijuana. The question is should marijuana be legalized in all states. Suppose we took a poll of 1000 American adults and asked "Should marijuana be legal or not legal for recreational use" and got the following results:
The interpretation of this poll is that 50% of adults polled favored the legalization of marijuana for recreational use, while 45% opposed it. The remaining 5% were undecided.
At this time, you might have questions and want to explore this poll in more depth. For example, are younger people more likely to support legalization of marijuana? Do other demographic characteristics such as gender, ethnicity, sexual orientation, or religion affect people's opinions about legalization?
Let us explore the possibility of difference of opinion due to gender. Are men more likely (or less likely) to oppose legalization of marijuana compared to women?
In the example above, suppose we have exactly 500 men and 500 women in the survey. What would we expect to see in the data if there were no difference in opinion between men and women?
Two‐way tables
Two‐way or contingency tables are used to summarize two categorical variables, also known as bivariate categorical data. In order to create a two‐way table, the researcher must cross‐tabulate the two responses for each categorical question.
In the example above, the two categorical variables are gender and opinion on marijuana legalization. Gender has two choices (male or female) while opinion on marijuana legalization has three choices (legal, not legal and unsure).
If there were no difference in opinion between the 500 men and 500 women in the survey, we could simply apply the total percentages to each group.
Let’s review from probability what independence means. If two events A and B are independent, then the following statements are true:
\begin{aligned} P(\text {A given B})&=P(A) \\ P(\text {B given A})&=P(B) \\ P(\text {A and B})&=P(A) P(B) \end{aligned} \nonumber
You can pick any two events in the table above to verify that Gender and Opinion of Legalization of Marijuana are independent events. For example, compare the events Not Legal and Men.
$P$(Not Legal given Men) = 225/500 = 45% same as $P$(Not Legal) = 45%
$P$(Men given Not Legal) = 225/450 = 50% same as $P$(Men) = 50%
$P$(Not Legal and Men) = 225/1000 = 22.5% same as $P$(Not Legal)P(Men) = (45%)(50%) = 22.5%
Based on these probability rules we can calculate the expected value of any pair of independent events by using the following formula:
Expected Value = (Row Total)(Column Total)/(Grand Total)
For example, looking at the events Not Legal and Men:
Expected Value = (450)(500)/(1000) = 225
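The entire expected table can be built in one step as an outer product of the row and column totals; here is a sketch using the totals from this survey:

```python
import numpy as np

row_totals = np.array([500, 500])       # men, women
col_totals = np.array([500, 450, 50])   # legal, not legal, unsure
grand_total = 1000

expected = np.outer(row_totals, col_totals) / grand_total
print(expected)   # each gender: 250 legal, 225 not legal, 25 unsure
```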
What if the events are not independent? Let's review the same survey. What would we expect to see in the data if there was a difference in opinion between men and women? Let's say women were more likely to support legalization. In that case, we would expect the 450 people who supported legalization of marijuana to have a higher number of women (and a smaller number of men) compared to the first table. Note we only change the first six boxes (shaded below); the totals must remain the same.
Now let's see the actual results of this survey and see what is happening:
In this poll, a higher percentage of men support legalization of marijuana for recreational use compared to women. Question: Is this evidence strong enough to support the claim that gender and opinion about marijuana legalization are not independent events? This question can be addressed by conducting a hypothesis test using the Chi‐square Test for Independence model.
Chi‐square test of Independence
A Chi‐square test of independence can be used to determine if there is a relationship between two randomized categorical variables. If the categorical variables are labeled A and B, the hypotheses are always written in this form:
$H_o$: A and B are independent events
$H_a$: A and B are dependent events.
If only one variable is randomized, then the test is called a Chi‐square Test of Homogeneity, but the execution of the test is exactly the same. If A represents the randomized response variable and B represents the manipulated explanatory variable, then the hypotheses are written as:
$H_o$: There is no difference in the distribution of A due to B.
$H_a$: There is a difference in the distribution of A due to B.
Chi‐square Test for Independence
Model Assumptions
• $O_{i j}$ = Observed in category $ij$
• $E_{i j}=n p_{i j}=\dfrac{(\text{Column Total})(\text{Row Total})}{\text{Grand Total}}$; $E_{i j} \geq 5$ for each $ij$
Test Statistic
• $\chi^{2}=\sum_{i=1}^{r} \sum_{j=1}^{c} \dfrac{\left(O_{i j}-E_{i j}\right)^{2}}{E_{i j}} \quad \mathrm{df}=(r-1)(c-1)$ where
$r$ = number of row categories, $c$ = number of column categories, $n$ = sample size
Example: Legalization of marijuana
Are Gender and Opinion about legalization of marijuana for recreational use independent events? Conduct a hypothesis test with a significance level of 5%.
Solution
Research Hypotheses:
$H_o$: Gender and Opinion about legalization of marijuana for recreational use are independent events.
$H_a$: Gender and Opinion about legalization of marijuana for recreational use are dependent events.
Statistical Model: Chi‐square Test of Independence. The two categorical variables in this example are Gender and Opinion.
Results:
Important Assumption: The Expected Value of Each Category needs to be greater than or equal to 5. In this example, the lowest expected value is 225 (Men, not legal) so the assumption is met.
Test Statistic: $\chi^{2}=\sum_{i=1}^{r} \sum_{j=1}^{c} \dfrac{\left(O_{i j}-E_{i j}\right)^{2}}{E_{i j}} \qquad \mathrm{df}=(3-1)(2-1)=2$
Decision Rule (Critical Value Method): Reject $H_o$ if $\chi^{2}>5.991(\alpha=.05,2 \mathrm{df})$
$\chi^{2}=1.600+1.600+1.778+1.778=6.756$
Since the Test Statistic exceeds the critical value, the decision is to Reject $H_o$. Under the $p$‐value method, $H_o$ is also rejected since the $p \text {-value }=p\left(\chi^{2}>6.756\right)=0.034$, which is less than the Significance Level $\alpha$ of 5%.
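`scipy.stats.chi2_contingency` runs this entire test from the observed table. The counts below are an assumption, reconstructed to be consistent with the chi‐square contributions shown above (270/205/25 for men and 230/245/25 for women reproduce the four non‐zero terms):

```python
import numpy as np
from scipy import stats

# Rows: men, women; columns: legal, not legal, unsure (reconstructed counts)
observed = np.array([[270, 205, 25],
                     [230, 245, 25]])
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(chi2, p, dof)   # chi2 ≈ 6.756, p ≈ 0.034, dof = 2
```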
Conclusion:
Gender and Opinion about legalization of marijuana for recreational use are dependent events. Men are more likely to support legalization of marijuana for recreational use.
In Chapter 11, we used statistical inference to compare two population means under a variety of models. These models can be expanded to compare more than two populations using a technique called Analysis of Variance, or ANOVA for short. There are many ANOVA models, but we limit our study to one of them, the One Factor ANOVA model, also known as One Way ANOVA.
13: One Factor Analysis of Variance (ANOVA)
13.01: Comparing Means from More Than Two Independent Populations
Suppose we wanted to compare the means of more than two ($k$) independent populations and wanted to test the null hypothesis $H_o: \mu_{1}=\mu_{2}=\cdots=\mu_{k}$.
If we can assume all population variances are equal, we can expand the pooled variance $t$‐test for two populations to one factor ANOVA for $k$ populations.
13.02: The Logic of ANOVA ‐ How Comparing Variances Tests for a Difference in Means
It may seem strange to use a test of “variances” to compare means, but this graph demonstrates the logic of the test.
If the null hypothesis $H_o: \mu_{1}=\mu_{2}=\mu_{3}$ is true, then each population would have the same distribution and the variance of the combined data would be approximately the same. However, if the Null Hypothesis is false, then the difference between centers would cause the combined data to have an increased variance.
13.03: The One Factor ANOVA Model
In ANOVA, we calculate the variance two different ways: The mean square factor ($\mathrm{MS}_{F}$), also known as mean square between, measures the variability of the means between groups, while the mean square error ($\mathrm{MS}_{E}$), also known as mean square within, measures the variability within the population. Under the null hypothesis, the ratio $\mathrm{MS}_{F} / \mathrm{MS}_{E}$ should be close to 1 and has an $\mathrm{F}$ distribution.
One Factor ANOVA model to compare the means of $k$ independent populations
Model Assumptions
• The populations being sampled are normally distributed.
• The populations have equal standard deviations.
• The samples are randomly selected and are independent.
Test Statistic
• $\mathrm{F}=\dfrac{\mathrm{MS}_{\text {Factor }}}{\mathrm{MS}_{\text {Error }}}$
• $\mathrm{df}_{\text {num }}=k-1$
• $\mathrm{df}_{\text{den}}=n-k$
13.04: Factorial Design ‐ an Insight into Other ANOVA Procedures
A different way of looking at this model is considering a single population with one numerical and one categorical variable being sampled.
The numeric variable is called the response and the categorical variable is the factor.
The possible responses to the factor are called the levels.
The numbers of observations per level are called the replicates.
If the replicates are equal, the design is balanced.
The Hypotheses can then be stated in context using the format:
\(H_o\): There is no difference in mean response due to factor.
\(H_a\): There is a difference in mean response due to factor.
By thinking of the model in this way, it is easy to extend the concept to the multi‐factor ANOVA models that are prevalent in the research you will encounter in future studies.
13.05: Understanding the ANOVA Table
When running Analysis of Variance, the data is usually organized into a special ANOVA table, especially when using computer software.
Sum of Squares: The total variability of the numeric data being compared is broken into the variability between groups ($\mathrm{SS}_{\text {Factor }}$) and the variability within groups ($\mathrm{SS}_{\text {Error }}$). These formulas are the most tedious part of the calculation. $T_c$ represents the sum of the data in each population and $n_c$ represents the sample size of each population. These formulas represent the numerator of the variance formula.
$\mathrm{SS}_{\text {Total }}=\Sigma\left(X^{2}\right)-\dfrac{(\Sigma X)^{2}}{n} \nonumber$
$\mathrm{SS}_{\text {Factor }}=\Sigma\left(\dfrac{T_{c}^{2}}{n_{c}}\right)-\dfrac{(\Sigma X)^{2}}{n} \nonumber$
$\mathrm{SS}_{\text {Error }}=\mathrm{SS}_{\text {Total }}-\mathrm{SS}_{\text {Factor }} \nonumber$
Degrees of freedom: The total degrees of freedom is also partitioned into the Factor and Error components.
Mean Square: This represents the calculation of the variance by dividing the Sum of Squares by the appropriate degrees of freedom.
$\mathrm{F}$: This is the test statistic for ANOVA: the ratio of two sample variances (mean squares) that are both estimating the same population value has an $\mathrm{F}$ distribution. Computer software will then calculate the $p$‐value to be used in testing the Null Hypothesis that all populations have the same mean.
Example: Party Pizza
Party Pizza specializes in meals for students. Hsieh Li, President, recently developed a new tofu pizza.
Before making it a part of the regular menu she decides to test it in several of her restaurants. She would like to know if there is a difference in the mean number of tofu pizzas sold per day at the Cupertino, San Jose, and Santa Clara pizzerias. Data will be collected for five days at each location.
At the .05 significance level can Hsieh Li conclude that there is a difference in the mean number of tofu pizzas sold per day at the three pizzerias?
Solution
Design
Response: tofu pizzas sold
Factor: location of restaurant
Levels: $k = 3$ (Cupertino, San Jose, Santa Clara)
Research Hypotheses:
$H_o$: There is no difference in mean tofu pizzas sold due to location of restaurant.
$H_a$: There is a difference in mean tofu pizzas sold due to location of restaurant
$H_o$: $\mu_{1}=\mu_{2}=\mu_{3}$ (Mean sales same at all restaurants)
$H_a$: At least $\mu_{i}$ is different (Means sales not the same at all restaurants)
We will assume the population variances are equal $\sigma_{1}^{2}=\sigma_{2}^{2}=\sigma_{3}^{2}$, so the model will be One Factor ANOVA. This model is appropriate if the distribution of the sample means is approximately Normal from the Central Limit Theorem.
Type I error would be to reject the Null Hypothesis and claim mean sales are different, when they actually are the same. The test will be run at a level of significance ($\alpha$) of 5%.
The test statistic from the table will be $\mathrm{F}=\dfrac{\mathrm{MS}_{\text{Factor}}}{\mathrm{MS}_{\text{Error}}}$. The degrees of freedom for numerator will be 3‐1=2, and the degrees of freedom for denominator will be 13‐3=10. (The total sample size turned out to be only 13, not 15 as planned).
Critical Value for $\mathrm{F}$ at $\alpha$ of 5% with $\mathrm{df}_{\text {num }}=2$ and $\mathrm{df}_{\text {den }}=10$ is 4.10. Reject $H_o$ if $\mathrm{F}$ >4.10. We will also run this test using the p‐value method with statistical software, such as Minitab.
Data/Results
$\mathrm{F}=38.125 / 0.975=39.10$, which is more than the critical value of 4.10, so reject $H_o$. Also from the Minitab output, $p$‐value = 0.000 < 0.05 which also supports rejecting $H_o$.
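`scipy.stats.f_oneway` performs the same test from the raw data. The observations below are an assumption, chosen to be consistent with the reported group means, $\mathrm{MS}_{\text{Factor}}=38.125$, and $\mathrm{MS}_{\text{Error}}=0.975$:

```python
from scipy import stats

cupertino   = [13, 12, 14, 12]        # mean 12.75 (assumed observations)
san_jose    = [10, 12, 13, 11]        # mean 11.50
santa_clara = [18, 16, 17, 17, 17]    # mean 17.00

F, p = stats.f_oneway(cupertino, san_jose, santa_clara)
print(F, p)   # F ≈ 39.10, p well below 0.05 -> Reject Ho
```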
Conclusion
There is a difference in the mean number of tofu pizzas sold at the three locations.
13.06: Post‐hoc Analysis ‐ Tukey's Honestly Significant Difference (HSD) Test
When the Null Hypothesis is rejected in one factor ANOVA, the conclusion is that not all means are the same. This however leads to an obvious question: which particular means are different? Seeking further information after the results of a test is called post‐hoc analysis.
The problem of multiple tests
One attempt to answer this question is to conduct multiple pairwise independent sample $t$‐tests and determine which ones are significant. We would compare $\mu_{1}$ to $\mu_{2}$, $\mu_{1}$ to $\mu_{3}$, $\mu_{2}$ to $\mu_{3}$, $\mu_{1}$ to $\mu_{4}$, etc. There is a major flaw in this methodology in that each test would have a significance level of $\alpha$, so the overall probability of making a Type I error would be significantly more than the desired $\alpha$. Furthermore, these pairwise tests would NOT be mutually independent. There were several statisticians who designed tests that effectively dealt with this problem of determining an "honest" significance level of a set of tests; we will cover the one developed by John Tukey, the Honestly Significant Difference (HSD) test.86 To use this test, we need the critical value from the Studentized Range Distribution ($q$), which is used to find when differences of pairs of sample means are significant.
The Tukey HSD test
Tests: $H_{o}: \mu_{i}=\mu_{j} \quad H_{a}: \mu_{i} \neq \mu_{j}$ where the subscripts $i$ and $j$ represent two different populations
Overall significance level of $\alpha$: This means that all pairwise tests can be run at the same time with an overall significance level of $\alpha$
Test Statistic: $\mathrm{HSD}=q \sqrt{\dfrac{\mathrm{MSE}}{n_{c}}}$
$q$ = critical value from Studentized Range table
$\mathrm{MSE}$ = Mean Square Error from ANOVA table
$n_c$ = number of replicates per treatment. An adjustment is made for unbalanced designs.
Decision: Reject $H_o$ if $\left|\overline{X}_{i}-\overline{X}_{j}\right|>\mathrm{HSD}_\text{critical value}$
Computer software, such as Minitab, will calculate the critical values and test statistics for these series of tests. We will not perform the manual calculations in this text.
Example: Party Pizza
Let us return to the Tofu pizza example where we rejected the Null Hypothesis and supported the claim that there was a difference in means among the three restaurants.
In reviewing the graph of the sample means, it appears that Santa Clara has a much higher number of sales than Cupertino and San Jose. There will be three pairwise post‐hoc tests to run.
Solution
Design
$H_{o}: \mu_{1}=\mu_{2} \qquad H_{a}: \mu_{1} \neq \mu_{2} \qquad H_{o}: \mu_{1}=\mu_{3} \qquad H_{a}: \mu_{1} \neq \mu_{3} \qquad H_{o}: \mu_{2}=\mu_{3} \qquad H_{a}: \mu_{2} \neq \mu_{3}$
These three tests will be conducted with an overall significance level of $\alpha$ = 5%.
The model will be the Tukey $\mathrm{HSD}$ test.
Here are the differences of the sample means for each pair ranked from lowest to highest:
Test 1: Cupertino to San Jose: $\left|\overline{X}_{1}-\overline{X}_{2}\right|=|12.75-11.50|=1.25$
Test 2: Cupertino to Santa Clara: $\left|\overline{X}_{1}-\overline{X}_{3}\right|=|12.75-17.00|=4.25$
Test 3: San Jose to Santa Clara: $\left|\overline{X}_{2}-\overline{X}_{3}\right|=|11.50-17.00|=5.50$
The $\mathrm{HSD}$ critical values (using statistical software) for this particular test:
$\mathrm{HSD}_\text{crit}$ at 5% significance level = 1.85
$\mathrm{HSD}_\text{crit}$ at 1% significance level = 2.51
For each test, reject $H_o$ if the difference of means is greater than $\mathrm{HSD}_\text{crit}$
Test 2 and Test 3 show significantly different means at both the 1% and 5% level.
The Minitab approach for the decision rule will be to reject $H_o$ for each pair that does not share a common group. Here are the results for the test conducted at the 5% level of significance:
Data/Results
Refer to the Minitab output. Santa Clara is in group A while Cupertino and San Jose are in Group B.
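For readers using Python instead of Minitab, statsmodels provides this procedure as `pairwise_tukeyhsd`; running it on the same assumed pizza observations used in the ANOVA sketch above reproduces the grouping:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sales = np.array([13, 12, 14, 12,          # Cupertino (assumed observations)
                  10, 12, 13, 11,          # San Jose
                  18, 16, 17, 17, 17])     # Santa Clara
groups = ["Cupertino"] * 4 + ["San Jose"] * 4 + ["Santa Clara"] * 5

print(pairwise_tukeyhsd(sales, groups, alpha=0.05))
# Only the pairs involving Santa Clara are significant.
```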
Conclusion
Santa Clara has a significantly higher mean number of tofu pizzas sold compared to both San Jose and Cupertino. There is no significant difference in mean sales between San Jose and Cupertino.
Often in statistical research, we want to discover if there is a relationship between two variables. The explanatory variable is the “cause” and the response variable is the “effect”, although a true cause and effect relationship can only be established in a scientific study that controls for all confounding (lurking) variables.
In Chapter 12, we were interested in determining if a person’s gender was a valid explanatory variable of the person’s opinion about legalization of marijuana for recreational use. In this case, both the explanatory and response variables are categorical and the appropriate model was the Chi‐square Test of Independence.
In Chapter 13, we explored if tofu pizza sales (the response variable) were affected by location of the restaurant (the explanatory variable). In this case, the explanatory variable was categorical but the response was numeric. The appropriate model for this example is One Factor Analysis of Variance (ANOVA).
What if we want to determine if a relationship exists when both the explanatory and response variables are both numeric? For example, does annual rainfall in a city help explain sales of sunglasses? This chapter explores and defines the appropriate model for this type of problem.
14: Correlation and Linear Regression
14.01: Bivariate Data and Scatterplots Review
In Chapter 3, we defined bivariate data as data that have two different numeric variables. In an algebra class, these are also known as ordered pairs. We will let X represent the independent (or explanatory) variable and Y represent the dependent (or response) variable in this definition.
The best way to graph bivariate data is by using a scatterplot, in which X, the independent variable, is the horizontal axis and Y, the dependent variable, is the vertical axis.
Example: Rainfall and sunglasses sales
Here is an example and scatterplot of five total pairs where X represents the annual rainfall in inches in a city and Y represents annual sales of sunglasses per 1000 population.
Solution
In the scatterplot for this data, it appears that cities with more rainfall have lower sales. It also appears that this relationship is linear, a pattern which can then be exemplified in a statistical model.
14.02: The Simple Linear Regression Model
In the scatterplot example shown above, we saw linear correlation between the two variables. We are now going to create a statistical model relating these two variables, but let’s start by reviewing a mathematical linear model from algebra:
$Y=\beta_{0}+\beta_{1} X$
$Y$: Dependent Variable
$X$: Independent Variable
$\beta_{0}$: Y - intercept
$\beta_{1}$:Slope
Example
You have a small business producing custom t‐shirts. Without marketing, your business has revenue (sales) of \$1000 per week. Every dollar you spend on marketing will increase revenue by 2 dollars. Let variable $X$ represent the amount spent on marketing and let variable $Y$ represent revenue per week. Write a mathematical model that relates $X$ to $Y$.

Solution

In this example, we are saying that weekly revenue ($Y$) depends on marketing expense ($X$). \$1000 of weekly revenue represents the vertical intercept, and \$2 of weekly revenue per \$1 of marketing represents the slope, or rate of change, of the model. We can choose some values of $X$, determine the corresponding values of $Y$, and then plot the points on a scatterplot to see this linear relationship.
We can then write out the mathematical linear model as an equation: $Y=1000+2X$
We all learned about these linear models in Algebra classes, but the real world doesn’t generally give such perfect results. In particular, we can choose what to spend on marketing, but the actual revenue will have more uncertainty. For example, the true revenue may look more like this:
The difference between the actual revenue and the expected revenue is called the residual error, $\varepsilon$. If we assume that the residual error (represented by $\varepsilon$) is a random variable that follows a Normal distribution with $\mu=0$ and $\sigma$ constant for all values of $X$, we have now created a statistical model called the simple linear regression model: $Y=\beta_{0}+\beta_{1} X+\varepsilon$
14.03: Estimating the Regression Model with the LeastSquare Line
We now return to the case where we know the data and can see the linear correlation in a scatterplot, but we do not know the values of the parameters of the underlying model. The three parameters that are unknown to us are the $y$‐intercept ($\beta_{0}$), the slope ($\beta_{1}$) and the standard deviation of the residual error ($\sigma$):
Slope parameter: $b_1$ will be an estimator for $\beta_{1}$
Y‐intercept parameter: $b_0$ will be an estimator for $\beta_{0}$
Standard deviation: $s_e$ will be an estimator for $\sigma$
Regression line: $\hat{Y}=b_{0}+b_{1} X$
Example
Take the example comparing rainfall to sales of sunglasses in which the scatterplot shows a negative correlation. However, there are many lines we could draw. How do we find the line of best fit?
Solution
Minimizing Sum of Squared Residual Errors (SSE)
We are going to define the “best line” as the line that minimizes the Sum of Squared Residual Errors (SSE).
Suppose we try to fit this data with a line that goes through the first and last point. We can then calculate the equation of this line using algebra:
$\hat{Y}=\dfrac{145}{3}-\dfrac{5}{6} X \approx 48.3-0.833 X \nonumber$
The SSE for this line is 47.917:
Although this line is a good fit, it is not the best line. The slope ($b_1$) and intercept ($b_0$) for the line that minimizes SSE can be calculated using the least squares principle formulas:
Least squares principle formulas
$S S X=\Sigma X^{2}-\dfrac{1}{n}(\Sigma X)^{2}$
$S S Y=\Sigma Y^{2}-\dfrac{1}{n}(\Sigma Y)^{2}$
$S S X Y=\Sigma X Y-\dfrac{1}{n}(\Sigma X \cdot \Sigma Y)$
$b_{1}=\dfrac{S S X Y}{S S X}$
$b_{0}=\bar{Y}-b_{1} \bar{X}$
In the Rainfall example where $X$=Rainfall and $Y$=Sales of Sunglasses, these formulas give $SSX=580$, $SSY=380$ and $SSXY=-445$. The slope is then $b_{1}=-445/580=-0.767$ and the intercept is $b_{0}=28-(-0.767)(23)=45.647$, so the least squares line is $\hat{Y}=45.647-0.767X$.
The Sum of Squared Residual Errors (SSE) for this line is 38.578, making it the “best line”. (Compare to the value above, in which we picked the line that perfectly fit the two most extreme points).
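Here is a short Python sketch of these least squares formulas. The five (X, Y) pairs are an assumption, chosen to be consistent with the sums of squares used throughout this chapter ($SSX=580$, $SSY=380$, $SSXY=-445$):

```python
import numpy as np

x = np.array([10, 15, 20, 30, 40])   # rainfall (assumed data)
y = np.array([40, 35, 25, 25, 15])   # sunglasses sales (assumed data)
n = len(x)

ssx = np.sum(x**2) - np.sum(x)**2 / n             # 580
ssxy = np.sum(x * y) - np.sum(x) * np.sum(y) / n  # -445
b1 = ssxy / ssx                                   # ≈ -0.767
b0 = y.mean() - b1 * x.mean()                     # ≈ 45.647
print(b1, b0)
```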
In practice, we will use technology such as Minitab to calculate this line. Here is the example using the Regression Fitted Line Plot option in Minitab, which determines and graphs the regression equation. The point (20,25) has the highest residual error, but the overall Sum of Squared Residual Errors (SSE) is minimized.
14.04: Hypothesis Test for Simple Linear Regression
We will now describe a hypothesis test to determine if the regression model is meaningful; in other words, does the value of $X$ in any way help predict the expected value of $Y$?
Simple Linear Regression ANOVA Hypothesis Test
Model Assumptions
• The residual errors are random and are normally distributed.
• The standard deviation of the residual error does not depend on $X$
• A linear relationship exists between $X$ and $Y$
• The samples are randomly selected
Test Hypotheses
$H_o$: $X$ and $Y$ are not correlated
$H_a$: $X$ and $Y$ are correlated
$H_o$: $\beta_1$ (slope) = 0
$H_a$: $\beta_1$ (slope) ≠ 0
Test Statistic
$F=\dfrac{M S_{\text {Regression }}}{M S_{\text {Error }}}$
$d f_{\text {num }}=1$
$d f_{\text {den }}=n-2$
Sum of Squares
$S S_{\text {Total }}=\sum(Y-\bar{Y})^{2}$
$S S_{\text {Error }}=\sum(Y-\hat{Y})^{2}$
$S S_{\text {Regression }}=S S_{\text {Total }}-S S_{\text {Error }}$
In simple linear regression, this is equivalent to asking “Are $X$ and $Y$ correlated?”
In reviewing the model, $Y=\beta_{0}+\beta_{1} X+\varepsilon$, as long as the slope ($\beta_{1}$) has any non‐zero value, $X$ will add value in helping predict the expected value of $Y$. However, if there is no correlation between X and Y, the value of the slope ($\beta_{1}$) will be zero. The model we can use is very similar to One Factor ANOVA.
The Results of the test can be summarized in a special ANOVA table:
| Source of Variation | Sum of Squares (SS) | Degrees of freedom (df) | Mean Square (MS) | $F$ |
|---|---|---|---|---|
| Factor (due to $X$) | $\mathrm{SS}_{\text{Regression}}$ | $1$ | $\mathrm{MS}_{\text{Regression}}=\mathrm{SS}_{\text{Regression}} / 1$ | $F=\mathrm{MS}_{\text{Regression}} / \mathrm{MS}_{\text{Error}}$ |
| Error (Residual) | $\mathrm{SS}_{\text{Error}}$ | $n-2$ | $\mathrm{MS}_{\text{Error}}=\mathrm{SS}_{\text{Error}} /(n-2)$ | |
| Total | $\mathrm{SS}_{\text{Total}}$ | $n-1$ | | |
Example: Rainfall and sales of sunglasses
Design: Is there a significant correlation between rainfall and sales of sunglasses?
Research Hypotheses:
$H_o$: Sales and Rainfall are not correlated; $\beta_1$ (slope) = 0
$H_a$: Sales and Rainfall are correlated; $\beta_1$ (slope) ≠ 0
Type I error would be to reject the Null Hypothesis and claim that rainfall is correlated with sales of sunglasses, when they are not correlated. The test will be run at a level of significance ($\alpha$) of 5%.
The test statistic from the table will be $\mathrm{F}=\dfrac{\mathrm{MS}_{\text{Regression}}}{\mathrm{MS}_{\text{Error}}}$. The degrees of freedom for the numerator will be 1, and the degrees of freedom for denominator will be 5‐2=3.
Critical Value for $F$ at $\alpha$ of 5% with $\mathrm{df}_{\text{num}}=1$ and $\mathrm{df}_{\text{den}}=3$ is 10.13. Reject $H_o$ if $F>10.13$. We will also run this test using the $p$‐value method with statistical software, such as Minitab.
Data/Results
$F=341.422 / 12.859=26.551$, which is more than the critical value of 10.13, so Reject $H_o$. Also, the $p$‐value = 0.0142 < 0.05 which also supports rejecting $H_o$.
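`scipy.stats.linregress` bundles the slope, intercept, correlation, and this significance test into one call; on the same assumed rainfall data used earlier, it reproduces these results:

```python
from scipy import stats

x = [10, 15, 20, 30, 40]   # rainfall (assumed data, as above)
y = [40, 35, 25, 25, 15]   # sunglasses sales

res = stats.linregress(x, y)
print(res.slope, res.intercept)   # ≈ -0.767, 45.647
print(res.rvalue**2)              # r-squared ≈ 0.8985
print(res.pvalue)                 # ≈ 0.0142, two-sided test of slope = 0
```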
Conclusion
Sales of Sunglasses and Rainfall are negatively correlated.
14.05: Estimating $\sigma$, the Standard Error of the Residuals
The simple linear regression model ($Y=\beta_{0}+\beta_{1} X+\varepsilon$) includes a random variable $\varepsilon$ representing the residual which follows a Normal Distribution with an expected value of 0 and a standard deviation $\sigma$ which is independent of the value of $X$. The estimate of $\sigma$ is called the sample standard error of the residuals and is represented by the symbol $s_e$. We can use the fact that the Mean Square Error (MSE) from the ANOVA table represents the estimated variance of the residual errors:
$S_{e}=\sqrt{\mathrm{MSE}}=\sqrt{\dfrac{\mathrm{SSE}}{n-2}} \nonumber$
Example: Rainfall and sales of sunglasses
For the rainfall data, the standard error of the residuals is determined as:
$s_{e}=\sqrt{12.859}=3.586 \nonumber$
Keep in mind that this is the standard deviation of the residual errors and should not be confused with the standard deviation of $Y$.
14.06: $r^2$, the Coefficient of Determination
The Regression ANOVA hypothesis test can be used to determine if there is a significant correlation between the independent variable ($X$) and the dependent variable ($Y$). We now want to investigate the strength of correlation.
In the earlier chapter on descriptive statistics, we introduced the correlation coefficient ($r$), a value between ‐1 and 1. Values of $r$ close to 0 meant there was little correlation between the variables, while values closer to 1 or ‐1 represented stronger correlations.
In practice, most statisticians and researchers prefer to use $r^{2}$, the coefficient of determination as a measure of strength as it represents the proportion or percentage of the variability of $Y$ that is explained by the variability of $X$. 87
$r^2$
$r^{2}=\dfrac{SS_{\text{Regression}}}{SS_{\text{Total}}} \qquad 0\% \leq r^{2} \leq 100\%$
$r^{2}$ represents the percentage of the variability of $Y$ that is explained by the variability of $X$.
We can also calculate the correlation coefficient ($r$) by taking the appropriate square root of $r^{2}$, depending on whether the estimate of the slope ($b_1$) is positive or negative:
If $b_{1}>0, r=\sqrt{r^{2}}$
If $b_{1}<0, r=-\sqrt{r^{2}}$
Example: Rainfall and sales of sunglasses
For the rainfall data, the coefficient of determination is:
$r^{2}=\dfrac{341.422}{380}=89.85 \%$
89.85% of the variability of sales of sunglasses is explained by rainfall.
We can calculate the correlation coefficient ($r$) by taking the appropriate square root of $r^{2}$:
$r=-\sqrt{.8985}=-0.9479$
Here we take the negative square root since the slope of the regression line is negative. This shows that there is a strong, negative correlation between sales of sunglasses and rainfall.
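These calculations are easy to verify in Python; a minimal sketch, assuming the SS values from the ANOVA table and the slope estimate $b_1=-0.767$ quoted in the next section:

```python
import math

# SS values from the ANOVA table; b1 is the slope estimate quoted in the text
ss_regression, ss_total = 341.422, 380.0
b1 = -0.767

r_squared = ss_regression / ss_total           # about 0.8985, i.e. 89.85%
r = math.copysign(math.sqrt(r_squared), b1)    # r takes the sign of the slope
print(r_squared, r)                            # 0.8985..., about -0.9479
```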
14.07: Prediction
One valuable application of the regression model is to make predictions about the value of the dependent variable if the independent variable is known.
Consider the example about rainfall and sunglasses sales. Suppose we know that a city has 22 inches of rainfall. We can use the regression equation to predict the sales of sunglasses:
$\hat{Y}=45.647-.767 X$
$\hat{Y}_{22}=45.647-.767(22)=28.7$
For a city with 22 inches of annual rainfall, the model predicts sales of 28.7 per 1000 population.
To measure the reliability of this prediction, we can construct confidence intervals. However, we first have to decide what we are estimating. We could (1) be estimating the expected sales for a city with 22 inches of rainfall, or we could (2) be predicting the actual sales for a city with 22 inches of rainfall.
In the graph shown, the green line represents the actual regression line $Y=\beta_{0}+\beta_{1} X$, which is unknown. The red line represents the least squares equation, $\hat{Y}=45.647-.767 X$, which is derived from the data. The black dot represents our prediction $\hat{Y}_{22}=28.7$. The green dot represents the true population expected value of $Y_{22}$, while the yellow dot represents a possible actual value of $Y_{22}$. There is more uncertainty in predicting an actual value of $Y_X$ than in estimating its expected value.
Confidence interval and Prediction interval
The confidence interval for the expected value of $Y$ for a given value of $X$ is given by:
$\hat{Y}_{X} \pm t \cdot s_{e} \sqrt{\dfrac{1}{n}+\dfrac{(X-\bar{X})^{2}}{\mathrm{SS}_{X}}} \nonumber$

Degrees of freedom for $t$: $n-2$
The prediction interval for the actual value of $Y$ for a given value of $X$ is given by:
$\hat{Y}_{X} \pm t \cdot s_{e} \sqrt{1+\dfrac{1}{n}+\dfrac{(X-\bar{X})^{2}}{\mathrm{SS}_{X}}} \nonumber$

Degrees of freedom for $t$: $n-2$
Example: Rainfall and sales of sunglasses
1. Find a 95% confidence interval for the expected value of sales for a city with 22 inches of rainfall.
2. Find a 95% prediction interval for the value of sales for a city with 22 inches of rainfall.
Solution
1. Confidence interval
$28.7 \pm 3.182 \cdot 3.586 \sqrt{\dfrac{1}{5}+\dfrac{(22-23)^{2}}{580}}=28.7 \pm 5.1 \rightarrow(23.6,33.8)$
We are 95% confident that the expected annual sales of sunglasses for a city with 22 inches of annual rainfall is between 23.6 and 33.8 sales per 1000 population.
2. Prediction interval
$28.7 \pm 3.182 \cdot 3.586 \sqrt{1+\dfrac{1}{5}+\dfrac{(22-23)^{2}}{580}}=28.7 \pm 12.5 \rightarrow(16.2,41.2)$
We are 95% confident that the actual annual sales of sunglasses for a city with 22 inches of annual rainfall is between 16.2 and 41.2 sales per 1000 population.
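Both intervals can be reproduced with a short Python sketch using scipy. It assumes the summary values from the worked calculations above ($n=5$, $\bar{X}=23$, $SS_X=580$, $s_e=3.586$); the code carries $\hat{Y}_{22}$ at full precision, so the endpoints may differ from the text in the last digit:

```python
from scipy import stats
import math

n, x_bar, ss_x = 5, 23, 580        # sample size, mean rainfall, sum of squares of X
s_e = 3.586                        # standard error of the residuals
x_new = 22
y_hat = 45.647 - 0.767 * x_new     # about 28.77 (the text rounds to 28.7)
t = stats.t.ppf(0.975, n - 2)      # about 3.182 for df = 3

leverage = 1 / n + (x_new - x_bar) ** 2 / ss_x
me_ci = t * s_e * math.sqrt(leverage)          # margin of error for the CI, about 5.1
me_pi = t * s_e * math.sqrt(1 + leverage)      # margin of error for the PI, about 12.5

print(y_hat - me_ci, y_hat + me_ci)            # confidence interval, about (23.6, 33.9)
print(y_hat - me_pi, y_hat + me_pi)            # prediction interval, about (16.3, 41.3)
```

14.08: Extrapolation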
When using the model to make predictions, care must be taken to only choose values of $X$ that are in the range of $X$ values of the data. In the rainfall/sales example, the values of $X$ range from 10 to 40 inches of rainfall. Choosing a value of $X$ outside this range is called extrapolation and could lead to invalid results. For example, if we use the model to predict sales for a city with 80 inches of rainfall, we get an impossible negative result for sales:
$\hat{Y}=45.647-.767 X$
$\hat{Y}_{80}=45.647-.767(80)=-15.7$
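A simple safeguard is to check the requested value of $X$ against the range of the data before predicting. The helper below is a hypothetical sketch (the function name and the bounds of 10 and 40 inches come from the stated range of the data, not from any library):

```python
def predict_sales(rainfall, x_min=10.0, x_max=40.0):
    """Predict sunglasses sales per 1000 population, refusing to extrapolate."""
    if not (x_min <= rainfall <= x_max):
        raise ValueError(f"rainfall={rainfall} is outside [{x_min}, {x_max}]; extrapolation")
    return 45.647 - 0.767 * rainfall

print(predict_sales(22))     # about 28.8
# predict_sales(80) raises ValueError instead of returning the impossible -15.7
```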
14.09: Residual Analysis
In regression, we assume that the model is linear and that the residual errors ($Y-\hat{Y}$ for each pair) are random and normally distributed. We can analyze the residuals to see if these assumptions are valid and if there are any potential outliers. In particular:
• The residuals should represent a linear model.
• The standard error (standard deviation of the residuals) should not change when the value of $X$ changes.
• The residuals should follow a normal distribution.
• Look for any potential extreme values of $X$.
• Look for any extreme residual errors.
Example: Model A
Model A is an example of an appropriate linear regression model. We will make three graphs to test the residuals: a scatterplot with the regression line, a plot of the residuals, and a histogram of the residuals.
Here we can see that the residuals appear to be random, the fit is linear, and the histogram is approximately bell shaped. In addition, there are no extreme outlier values of $X$ or outlier residuals.
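The three diagnostic graphs are straightforward to produce in Python; here is a minimal sketch with simulated data standing in for Model A (the data are hypothetical, generated to satisfy the regression assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 40)
y = 3 + 2 * x + rng.normal(0, 1.5, size=x.size)   # linear signal plus normal noise

b1, b0 = np.polyfit(x, y, 1)                      # least squares slope and intercept
residuals = y - (b0 + b1 * x)

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].scatter(x, y)
axes[0].plot(x, b0 + b1 * x)     # scatterplot with the regression line
axes[1].scatter(x, residuals)
axes[1].axhline(0)               # residual plot: look for patterns or changing spread
axes[2].hist(residuals, bins=8)  # histogram: look for an approximate bell shape
plt.show()
```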
Example: Model B
Model B looks like a strong fit, but the residuals are showing a pattern of being positive for low and high values of $X$ and negative for middle values of $X$. This indicates that the model is not linear and should be fit with a non‐linear regression model (for example, the third graph shows a quadratic model).
Example: Model C
Model C has a linear fit, but the residuals are showing a pattern of being smaller for low values of $X$ and higher for large values of $X$. This violates the assumption that the standard error should not change when the value of $X$ changes. This phenomenon is called heteroscedasticity and requires a data transformation to find a more appropriate model.
Example: Model D
Model D seems to have a linear fit, but the residuals are showing a pattern of being larger when they are positive and smaller when they are negative. This violates the assumption that residuals should follow a normal distribution, as can be seen in the histogram.
Example: Model E
Model E seems to have a linear fit, and the residuals look random and normal. However, the value (16,51) is an extreme outlier value of $X$ and may have an undue influence on the choice of the regression line.
Example: Model F
Model F seems to have a linear fit, and the residuals look random and normal, except for one outlier at the value (7,40). This outlier is different from the extreme outlier in Model E, but will still have an undue influence on the choice of the regression line.

15.01: Glossary of Statistical Terms used in Inference
Additive Rule
In probability, for events A and B, P(A or B) = P(A) + P(B) ‐ P(A and B).
Alpha ($\alpha$) – see Level of Significance
Alternative Hypothesis ($H_a$) A statement about the value of a population parameter that is assumed to be true if the Null Hypothesis is rejected during testing.
Analysis of Variance (ANOVA)
A group of statistical tests used to determine if the mean of a numeric variable (the Response) is affected by one or more categorical variables (Factors).
Bar Graphs
A graph of categorical data in which the height of the bar represents the frequency of each choice. Bar graphs can be clustered or stacked for multiple categorical variables.
Bernoulli Distribution
A probability distribution function (parameter p) for a discrete random variable that is the number of successes in a single trial when there are only two possible outcomes (success or failure).
Beta ($\beta$)
The probability, set by design, of failing to reject the Null Hypothesis when it is actually false. Beta is calculated for specific possible values of the Alternative Hypothesis.
Biased Sample
A sample that has characteristics, behaviors and attitudes different from the population from which the sample is selected ‐‐ in other words, a non‐representative sample.
Binomial Distribution
A probability distribution function (parameters n, p) for a discrete random variable that is the number of successes in a fixed number of independent trials when there are only two possible outcomes (success or failure).
Bivariate Data
Pairs of numeric data; there are two variables or measurements per observation.
Box Plot
A graph that represents the 3 quartiles (Q1, median and Q3), along with the minimum and maximum values of the data.
Blinding
In an experiment, blinding is keeping the participant and/or the administrator unaware as to what treatment is being given. A single blind study is when the participant does not know whether the treatment is real or a placebo. A double blind study is when neither the administrator of the treatment nor the participant knows whether the treatment is real or a placebo.
Categorical data
Non‐numeric values. Some examples of categorical data include eye color, gender, model of computer, and city.
Central Limit Theorem
A powerful theorem that allows us to understand the distribution of the sample mean, $\bar{X}$. If $X_{1}, X_{2}, \ldots, X_{n}$ is a random sample from a probability distribution with mean = $\mu$ and standard deviation = $\sigma$ and the sample size is “sufficiently large”, then $\bar{X}$ will have a Normal Distribution with the same mean and a standard deviation of $\sigma / \sqrt{n}$ (also known as the Standard Error). Because of this theorem, most statistical inference is conducted using a sampling distribution from the Normal Family.
Class Intervals
For grouped numeric data, one category, usually of equal width, in which values are counted.
Chi‐square Distribution ($\chi^{2}$)
A family of continuous random variables (based on degrees of freedom) with a probability density function that is from the Normal Family of probability distributions. The Chi‐square distribution is non‐negative and skewed to the right and has many uses in statistical inference, such as inference about a population variance, goodness‐of‐fit tests and tests of independence for categorical data.
Chi‐square Goodness‐of‐fit Test
A test that is used to test if observed data from a categorical variable is consistent with an expected assumption about the distribution of that variable.
Chi‐square Test of Independence
A test to determine if there is a relationship between two randomized categorical variables.
Chi‐square Test of Homogeneity
A test that is run the same way as a Chi‐square Test of Independence, but in which only one of the categorical variables is randomized.
Classical probability (also called Mathematical Probability)
Determined by counting or by using a mathematical formula or model.
Cluster Sample
A sample that is created by first breaking the population into groups called clusters, and then by taking a sample of clusters.
Complement of an Event
The complement of an event means that the event does not occur. If the event is labeled A, then the complement of A is labeled A' and read as "not A".
Conditional Probability
The probability of an event A occurring given that another event B has already occurred. This probability is written as P(A|B) which is read as P(A given B).
Confidence Interval
An interval estimate that estimates a population parameter from a random sample using a predetermined probability called the level of confidence.
Confidence Level
see Level of Confidence
Confounding Variable
A lurking variable that is not known to the researcher, but that affects the results of the study.
Contingency Tables
A method of displaying the counts of the responses of two categorical variables from data, also known as cross tabulations, or two‐way tables.
Control Group
In an experiment, the group that receives no treatment, giving the researcher a baseline to be able to compare the treatment and placebo groups.
Continuous data
Quantitative data based on the real numbers. Some examples of continuous data include time to complete an exam, height, and weight. Continuous data are values that are measured, or that answer the question "How much?"
Continuous Random Variable
A random variable that has only continuous values. Continuous values are uncountable and are related to real numbers.
Correlation Coefficient
A measure of correlation (represented by the letter $r$) that measures both the direction and strength of a linear relationship or association between two variables. The value $r$ will always take on a value between ‐1 and 1. Values close to zero imply a very weak correlation. Values close to 1 or ‐1 imply a very strong correlation. The correlation coefficient should not be used for non‐linear correlation.
Critical value(s)
The dividing point(s) between the region where the Null Hypothesis is rejected and the region where it is not rejected. The critical value determines the decision rule.
Cross Tabulations
see Contingency Tables
Cumulative Frequency
In grouped data, the number of times a particular value is observed in a class interval or in any lower class interval.
Cumulative Relative Frequency
In grouped data, the proportion or percentage of times a particular value is observed in a class interval or in any lower class interval.
Data Dredging
see $p$‐hacking
Decision Rule
The procedure that determines what values of the result of an experiment will cause the Null Hypothesis to be rejected. There are two methods that are equivalent decision rules:
1. If the test statistic lies in the Rejection Region, Reject $H_o$ (Critical Value method).
2. If the $p$‐value < $\alpha$, Reject $H_o$ ($p$‐value method).
Dependent Events
Two events are dependent if the probability of one event occurring is changed by knowing if the other event occurred or not. Events that are not dependent are called independent.
Dependent Sampling
A method of sampling in which 2 or more variables are related to each other (paired or matched). Examples would be the “Before and After” type models using the Matched Pairs $t$‐test.
Discrete data
Quantitative data based on the natural numbers (0, 1, 2, 3, ...). Some examples of discrete data include number of siblings, friends on Facebook, or bedrooms in a house. Discrete data are values that are counted, or where you might ask the question "How many?"
Discrete Random Variable
A random variable that has only discrete values. Discrete values are related to counting numbers.
Dot Plot
A graph of numeric data in which each value is represented as a dot on a simple numeric scale. Multiple values are stacked to create a shape for the data. If the data set is large, each dot can represent multiple values of the data.
Effect Size
The “practical difference” between a population parameter under the Null Hypothesis and a selected value of the population parameter under the Alternative Hypothesis.
Empirical probability
Probability that is based on the relative frequencies of historical data, studies or experiments.
Empirical Rule
(Also known as the 68‐95‐99.7 Rule). A rule used to interpret standard deviation for data that is approximately bell‐shaped. The rule says about 68% of the data is within one standard deviation of the mean, 95% of the data is within two standard deviations of the mean, and about 99.7% of the data is within three standard deviations of the mean.
Estimation
An inference process that attempts to predict the values of population parameters based on sample statistics.
Event
A result of an experiment, usually referred to with a capital letter A, B, C, etc.
Expected Value
A value that describes the central tendency of a random variable, also known as the population mean, expressed by the symbol $\mu$ (pronounced mu). The expected value is a parameter, meaning a fixed quantity.
Experiment
A study in which the researcher will randomly break a representative sample into groups and then apply treatments in order to manipulate a variable of interest. The goal of an experiment is to find a cause and effect relationship between a random variable in the population and the variable manipulated by the researcher.
Exponential Distribution
A probability distribution function (parameter $\mu$) for a continuous random variable that models the waiting time until the first occurrence of an event defined by a Poisson Process.
Explanatory Variable
The variable that the researcher controls or manipulates.
$F$ Distribution
A family of continuous random variables (based on 2 different degrees of freedom for numerator and denominator) with a probability density function that is from the Normal Family of probability distributions. The F distribution is non‐negative and skewed to the right and has many uses in statistical inference such as inference about comparing population variances, ANOVA, and regression.
Factor
In ANOVA, the categorical variable(s) that break the numeric response variable into multiple populations or treatments.
Frequency
In grouped data, the number of times a particular value is observed.
Frequency distribution
An organization of numeric data into class intervals.
Geometric Distribution
A probability distribution function (parameter p) for a discrete random variable that is the number of independent trials until the first success in which there are only two possible outcomes (success or failure).
Hypothesis
A statement about the value of a population parameter developed for the purpose of testing.
Hypothesis Testing
A procedure, based on sample evidence and probability theory, used to determine whether the hypothesis is a reasonable statement and should not be rejected, or is unreasonable and should be rejected.
Independent Events
Two events are independent if the probability of one event occurring is not changed by knowing if the other event occurred or not. Events that are not independent are called dependent.
Independent Sampling
A method of sampling in which 2 or more variables are not related to each other. Examples would be the “Treatment and Control” type models using the independent samples $t$‐test.
Inference
see Statistical Inference
Interquartile Range (IQR)
A measure of variability that is calculated by subtracting the 1st quartile from the 3rd quartile (IQR = Q3 − Q1).
Interval Estimate
A range of values based on sample data that is used to estimate a population parameter.
Interval Level of Data
Quantitative data that have meaningful distance between values, but that do not have a "true" zero. Interval data are numeric, but zero is just a place holder. Examples of interval data include temperature in degrees Celsius and year of birth.
Joint Probability
The probability of the union or intersection of multiple events occurring. If A and B are multiple events, then P(A or B) and P(A and B) are examples of joint probability.
Level
In ANOVA, a possible value of a categorical variable (factor). For example, if the factor was shirt color, levels would be blue, red, yellow, etc.
Level of Confidence
The probability, usually expressed as a percentage, that a Confidence Interval will contain the true population parameter that is being estimated.
Level of Significance ($\alpha$)
The maximum probability, set by design, of rejecting the Null Hypothesis when it is actually true (maximum probability of making Type I error).
Levels of Data
The four levels of data are Nominal, Ordinal, Interval and Ratio.
Lurking Variable
see Confounding Variable
Margin of Error
The distance in a symmetric Confidence Interval between the Point Estimator and an endpoint of the interval. For example, a confidence interval for $\mu$ may be expressed as $\bar{X} \pm$ Margin of Error.
Marginal Probability
The probability a single event A occurs, written as P(A).
Mean
see Population Mean or Sample Mean
Median
see Population Median or Sample Median
Mode
see Population Mode or Sample Mode
Model Assumptions
Criteria that must be satisfied to appropriately use a chosen statistical model. For example, a Student’s t statistic used for testing a population mean vs. a hypothesized value requires random sampling and that the sample mean has an approximately Normal Distribution.
Multiplicative Rule
In probability, for events A and B, P(A and B) = P(A)P(B|A) = P(B)P(A|B).
Mutually Exclusive Events
Events that cannot both occur; the intersection of two events has no possible outcomes.
Nominal Level of Data
Qualitative data that only define attributes, with no hierarchal ranking. Examples of nominal data include hair color, ethnicity, gender and any yes/no question.
Non‐probability Sampling Methods
Non‐scientific methods of sampling that have immeasurable biases and should not be used in scientific research. These methods include Convenience Sampling and Self‐selected sampling.
Non‐response Bias
A type of sampling bias that occurs when people are intentionally or non‐intentionally excluded from participation or choose not to participate in a survey or poll. Sometimes people will lie to pollsters as well.
Normal Distribution
Often called the “bell‐shaped” curve, the Normal Distribution is a continuous random variable with probability density function $f(x)=\dfrac{1}{\sigma \sqrt{2 \pi}} \exp \left[-(x-\mu)^{2} / 2 \sigma^{2}\right]$. The special case where $\mu=0$ and $\sigma=1$ is called the Standard Normal Distribution and is designated by $Z$.
Normal Family of Probability Distributions
The Standard Normal Distribution ($Z$) plus other Probability Distributions that are functions of independent random variables with Standard Normal Distribution. Examples include the $t$, the $F$ and the Chi‐square distributions.
Null Hypothesis ($H_o$)
A statement about the value of a population parameter that is assumed to be true for the purpose of testing.
Observational Study
A study in which the researcher takes measurements from a representative sample, but does not manipulate any of the variables with treatments. The goal of an observational study is to interpret and analyze the measured variables, but it is not possible to show a cause and effect relationship.
Ogive
A line graph in which the vertical axis is cumulative relative frequency and the horizontal axis is the value of the data, specifically the endpoints of the class intervals. The left endpoint of the first class interval will have a cumulative relative frequency of zero. All other points are plotted at the right endpoint of the corresponding class interval. The points are then connected by line segments. The ogive can be used to estimate percentiles.
Outcome
A result of the experiment which cannot be broken down into smaller events.
Ordinal Level of Data
Qualitative data that define attributes with a hierarchal ranking. Examples of ordinal data include movie ratings (G, PG, PG13, R, NC17), t‐shirt size (S, M, L, XL), or your letter grade on a term paper.
Outlier
A data point that is far removed from the other entries in the data set.
$p$‐value
The probability, assuming that the Null Hypothesis is true, of getting a value of the test statistic at least as extreme as the computed value for the test.
$p$‐hacking
An improper research method that uses repeated experiments or multiple measures analysis until the researcher obtains a significant p‐value. Also known as Data Dredging.
Parameter
A fixed numerical value that describes a characteristic of a population.
Percentile
The value of the data below which a given percentage of the data fall.
Pie Chart
A circular graph of categorical data where each slice of the pie represents the relative frequency or percentage of data in each category.
Placebo
A treatment with no active ingredients.
Placebo Effect
In an experiment, when a participant responds in a positive way to a placebo, a treatment with no active ingredients.
Placebo Group
In an experiment, the group that receives the treatment with no active ingredients.
Point Estimate
A single sample statistic that is used to estimate a population parameter. For example, $\bar{X}$ is a point estimator for $\mu$.
Poisson Distribution
A probability distribution function (parameter $\mu$) for a discrete random variable that is the number of occurrences in a fixed time period or region, over which the rate of occurrences is constant.
Poisson Process
Counting methods that are modeled by random variables that follow a Poisson Distribution.
Population
The set of all possible members, objects or measurements of the phenomena being studied.
Population Mean
see Expected Value
Population Median
A value that describes the central tendency of a random variable that represents the 50th percentile. The population median is a parameter, meaning a fixed quantity.
Population Mode
The maximum value or values of a probability density function.
Population Variance
The expected value of the squared deviation from the mean, a value that describes the variability of a random variable expressed by the symbol $\sigma^{2}$ (pronounced sigma‐squared). The population variance is a parameter, meaning a fixed quantity.
Population Standard Deviation
The square root of the population variance, a value that describes the variability of a random variable expressed by the symbol $\sigma$ (pronounced sigma).
Power (or Statistical Power)
The probability, set by design, of rejecting the Null Hypothesis when it is actually false. Power is calculated for specific possible values of the Alternative Hypothesis and is the complement of Beta ($\beta$).
Probability
The measure of the likelihood that an event A will occur. This measure is a quantity between 0 (never) and 1 (always) and will be expressed as P(A) (read as “The probability event A occurs.”)
Probability Density Function (pdf)
A non‐negative function that defines probability for a Continuous Random Variable. Probability is calculated by measuring the area under a probability density function.
Probability Distribution Function (PDF)
A function that assigns a probability to all possible values of a discrete random variable. In the case of a continuous random variable (like the Normal Distribution), the PDF refers to the area to the left of a designated value under a Probability Density Function.
Probability Sampling Methods
Sampling methods that will usually produce a sample that is representative of the population. These methods are also called scientific sampling. Examples include Simple Random Sampling, Systematic Sampling, Stratified Sampling and Cluster Sampling.
Qualitative Data
Non‐numeric values that describe the data. Note that all quantitative data is numeric, but some numbers without quantity (such as Zip Code or Social Security Number) are qualitative. When describing categorical data, we are limited to observing counts in each group and comparing the differences in percentages.
Quantitative Data
Measurements and numeric quantities that can be determined from the data. When describing quantitative data, we can look at the center, spread, shape and unusual features.
Quartile
The 25th, 50th and 75th percentiles, which are usually called, respectively, the 1st quartile, the median, and the 3rd quartile.
Radix
A convenient total used in creating a hypothetical two‐way table.
Random Sample
see Simple Random Sample
Range
For numeric data, the maximum value minus the minimum value.
Random Variable
A variable in which the value depends upon an experiment, observation or measurement.
Ratio Level of Data
Quantitative data that have meaningful distance between values, and have a "true" zero. Examples of ratio data include time to drive to work, weight, height, or number of children in a family. Most numeric data will be ratio.
Raw Data
Sample data presented unsorted.
Regression Analysis
A method of modeling correlated bivariate data.
Relative frequency
In grouped data, the proportion or percentage of times a particular value is observed.
Replicate
In ANOVA, the sample size for a specific level of factor. If the replicates are the same for each level, the design is balanced.
Rejection Region
Statistical Model region(s) which contain the values of the Test Statistic in which the Null Hypothesis will be rejected. The total area of the Rejection Region $=\alpha$.
Representative Sample
A sample that has characteristics, behaviors and attitudes similar to the population from which the sample is selected.
Response Variable
The numeric variable that is being tested under different treatments or populations.
Response bias
A type of sampling bias that occurs when the responses to a survey are influenced by the way the question is asked, or when responses do not reflect the true opinion of the respondent. When conducting a survey or poll, the type, order, and wording of questions are important considerations. Poorly worded questions can invalidate the results of a survey.
Rule of Complement
If the events A and A' are complements, then P(A) + P(A') = 1.
Sample
A subset of the population that is studied to collect or gather data.
Sample Size
The number of observations in your sample, usually represented by $n$.
Sample Mean
1. The arithmetic average of a numeric data set.
2. A random variable that has an approximately Normal Distribution if the sample size is sufficiently large.
3. An unbiased estimator for the population mean.
Sample Median
The value that represents the exact middle of data, when the values are sorted from lowest to highest.
Sample Mode
The most frequently occurring value in the data. If there are multiple values that occur most frequently, then there are multiple modes in the data.
Significance Level
see Level of Significance
Sample Space
In probability, the set of all possible outcomes of an experiment.
Sample Standard Deviation
The square root of the sample variance, which measures the spread of data and distance from the mean. The units of the standard deviation are the same units as the data.
Sample Variance
A measure of the mean squared deviation of the data values from the mean. The units of the variance are the square of the units of the data.
Scatterplot
A graph of bivariate data used to visualize correlation between the two numeric variables.
Selection Bias
A type of sampling bias that occurs when the sampling method does not create a representative sample for the study. Selection bias frequently occurs when using convenience sampling.
Self‐selection Bias
A type of sampling bias that occurs when individuals can volunteer to be part of the study. Volunteers will often have a stronger opinion about the research question and will usually not be representative of the population.
Simple Random Sample
A subset of a population in which each member of the population has the same chance of being chosen and is mutually independent from all other members.
Skewness
A measure of how asymmetric the data values are.
Standard Deviation
see Sample Standard Deviation or Population Standard Deviation
Standard Normal Distribution
A special case of the Normal Distribution where $\mu = 0$ and $\sigma = 1$. The symbol $Z$ is usually reserved for the Standard Normal Distribution.
Statistic
A value that is calculated from only the sample data, and that is used to describe the data. Examples of statistics are the sample mean, the sample standard deviation, the range, the sample median and the interquartile range. Since statistics depend on the sample, they are also random variables.
Statistical Inference
The process of estimating or testing hypotheses of population parameters using statistics from a random sample.
Statistical Model
A mathematical model that describes the behavior of the data being tested.
Stem and Leaf Plot
A method of tabulating data by splitting it into the "stem" (the first digit or digits) and the "leaf" (the last digit, usually). For example, the stem for 102 minutes would be 10 and the leaf would be 2.
Stratified Sample
A sample that is designed by breaking the population into subgroups called strata, which are then sampled so that the proportion of each subgroup in the sample matches the proportion of each subgroup in the population.
Student’s $t$ Distribution (or $t$ Distribution)
A family of continuous random variables (based on degrees of freedom) with a probability density function that is from the Normal Family of Probability Distributions. The $t$ distribution is used for statistical inference of the population mean when the population standard deviation is unknown.
Subjective probability
Probability that is a “one‐shot” educated guess based on anecdotal stories, intuition or a feeling as to whether an event is likely, unlikely or “50‐50”. Subjective probability is often inaccurate.
Systematic Sample
A subset of the population in which the first member of the sample is selected at random and all subsequent members are chosen by a fixed periodic interval.
$t$ Distribution
see Student’s $t$ Distribution
Test Statistic
A value, determined from sample information, used to determine whether or not to reject the Null Hypothesis.
Treatment Group(s)
In an experiment, the group(s) that receive the treatment that the researcher controls.
Tukey HSD Test
In ANOVA, a post‐hoc collection of tests that report honest significant differences in pairs of means.
Tree Diagram
A simple way to display all possible outcomes in a sequence of events. Each branch will represent a possible outcome. Using the Multiplicative Rule, the probability of each possible outcome can be calculated.
Two‐way Tables
see Contingency Tables
Type I Error
Rejecting the Null Hypothesis when it is actually true.
Type II Error
Failing to reject the Null Hypothesis when it is actually false.
Uniform Distribution
A probability distribution function (parameters a, b) for a continuous random variable in which all values between a minimum value and a maximum value have the same probability.
Variance
see Sample Variance or Population Variance
$Z$‐score
A measure of relative standing that shows the distance in standard deviations that a particular data point is above or below the mean: $z=(x-\bar{x})/s$.

15.2.01: Chapter 2 Homework
1. Identify the following data by type (categorical, discrete, continuous)
1. Number of tickets sold at a rock concert.
2. Make of automobile.
3. Age of a fossil.
4. Temperature of a nuclear power plant core reactor.
5. Number of students who transfer to private colleges.
6. Cost per unit at a state University.
7. Letter grade on an English essay.
2. Identify the following level (nominal, ordinal, interval, ratio)
1. Number of tickets sold at a rock concert.
2. Make of automobile.
3. Age of a fossil.
4. Temperature of a nuclear power plant core reactor.
5. Number of students who transfer to private colleges.
6. Cost per unit at a state University.
7. Letter grade on an English essay.
3. 1038 Americans were asked, “What is your favorite sport to watch?” The results were summarized into a pie graph.
1. Interpret the pie graph.
2. Do you think a different graph would have a clearer way to show this data? Explain.
3. Using the same data create a bar graph. Instead of labeling each bar with counts, use percentages.
4. Compare the bar graph to the pie graph. In your opinion, which of these two graphs better explains the data?
4. The following average daily commute times (minutes) for residents of two cities are shown in the table.
1. Construct a back‐to back stem and leaf diagram.
2. Describe the center, shape and spread of each city.
3. What is similar about each city, and what is different?
5. The February 10, 2017 Nielsen ratings of 20 TV programs shown on commercial television, all starting between 8 PM and 10 PM, are given below:
1. Graph a stem and leaf plot with the tens and ones units making up the stem and the tenths unit being the leaf.
2. Group the data into intervals of width 2, starting with the 1st interval at 2, and obtain the frequency of each of the intervals.
3. Graphically depict the grouped frequency distribution in part b by a histogram.
4. Obtain the relative frequency, cumulative frequency and cumulative relative frequency for the intervals in part b.
5. Construct an ogive of the data. Estimate the median and quartiles.
6. The following data represent the median monthly rent from 2005 to 2015 for a studio apartment in the US, California and Santa Clara County. Create line graphs of US, California and Santa Clara County rents on the same graph. Make three interpretations from the graphs.
7. The two frequency histograms represent the ages of 78 Male US Senators and 22 Female US Senators. Ages were evaluated on October 20, 2017.
1. Estimate the center of each graph. Does there seem to be a difference in average age due to gender in the US Senate?
2. Estimate the range of each graph. Does there seem to be a difference in age spread due to gender in the US Senate?
3. Is there a difference in shape between the two graphs?
4. Senator Diane Feinstein of California, who is 84 years old, represents an outlier among the females. Would your answers to parts a, b or c change if Senator Feinstein were removed from the data? Explain.
8. An experiment was conducted on string bean plants. The plants were broken into three groups. The first group was given Fertilizer 1, the second group was given Fertilizer 2, and the third group was given no fertilizer. After 2 months, the heights in inches were measured with results shown in the dot plot. From the dot plots, describe the center, spread, shape and unusual features of each group, and then make an overall statement about the fertilizers.

15.2.02: Chapter 3 Homework
1. A poll was taken of 150 students at De Anza College. Students were asked how many hours they work outside of college. The students were interviewed in the morning between 8 AM and 11 AM on a Thursday. The sample mean for these 150 students was 9.2 hours.
1. What is the Population?
2. What is the Sample?
3. Does the 9.2 hours represent a statistic or parameter? Explain.
4. Is the sample mean of 9.2 a reasonable estimate of the mean number of hours worked for all students at De Anza? Explain any possible bias.
2. The box plots represent the results of three exams for 40 students in a Math course.
1. Which exam has the highest median?
2. Which exam has the highest standard deviation?
3. For Exam 2, how does the median compare to the mean?
4. In your own words, compare the exams.
3. Examine the following average daily commute time (minutes) for residents of two cities.
1. Compute and interpret the z‐score for a 75‐minute commute for City A.
2. Compute and interpret the z‐score for a 75‐minute commute for City B.
3. For which group would a 75‐minute commute be more unusual? Explain.
4. The February 10, 2017 Nielsen ratings of 20 TV programs shown on commercial television, all starting between 8 PM and 10 PM, are given below:
1. Obtain the sample mean and median. Do you believe that the data is symmetric, right‐skewed or left skewed?
2. Determine the sample variance and standard deviation.
3. Assuming the data are bell shaped, between which two numbers would you expect to find 68% of the data?
5. The following data represents recovery time for 16 patients (arranged in a table to help you out).
1. Calculate the sample mean and median
2. Use the table to calculate the variance and standard deviation.
3. Use the range of the data to see if the standard deviation makes sense. (Range should be between 3 and 6 standard deviations).
4. Using the empirical rule between which two numbers should you expect to see 68% of the data? 95% of the data? 99.7% of the data?
5. Calculate the Z‐score for each observation. Do you think any of these data are outliers?
6. The following data represents the heights (in feet) of 20 almond trees in an orchard.
1. Construct a box plot of the data.
2. Do you think the tree with the height of 45 feet is an outlier? Use the box plot method to justify your answer.
7. The following average daily commute times (in minutes) for residents of 2 cities are shown in the table.
1. Find the quartiles and interquartile range for each group.
2. Calculate the 80th percentile for each group.
3. Construct side‐by‐side box plots, and compare the two groups.
8. Rank the following correlation coefficients from weakest to strongest.
.343, ‐.318, .214, ‐.765, 0, .998, ‐.932, .445
9. If you were trying to think of factors that affect health care costs:
1. Choose a variable you believe would be positively correlated with health care costs.
2. Choose a variable you believe would be negatively correlated with health care costs.
3. Choose a variable you believe would be uncorrelated with health care costs.

15.2.03: Chapter 4 Homework
1. A researcher wanted to know if students who use the library at a college have higher GPAs than students who do not use the library. The researcher decided to use a random number generator to choose 20 random classes at the college. Students in each of these classes were given surveys that could be filled out anonymously. Students that completed the surveys were given a \$5 gift card for the bookstore. 82% of students in the sampled classes returned the surveys.
Here are the two questions of interest:
1. How often do you use the library?
1. Never
2. Less than once a week
3. More than once a week, but not every day
4. Every day
2. What is your current GPA? __________
1. What method of sampling was used by the researcher?
2. Discuss the wording of the questions for possible bias.
3. Is this an observational study or an experiment? Explain.
4. The researcher concluded that students who use the library more frequently have higher GPAs. Is this a valid conclusion for this type of study? Explain.
2. A community college is considering using multiple measures to place students into math courses. The existing measure is that each student takes a standardized placement exam. On the basis of the score, the student will be placed in one of three math courses: Elementary Level, Intermediate Level and Transfer Level. A second measure is to use high school GPA to modify the needed placement exam score for each of the three courses.
200 incoming students who have high school GPAs were randomly split into two groups. The first group of 100 students was given the existing placement exam only. The second group of 100 students was placed by the new second measure, utilizing both placement exams and high school GPAs.
1. Is this an observational study or an experiment? Explain.
2. What is the explanatory variable and what is the response variable?
After three quarters, it was found that 17 of the first group completed the transfer level course, while 31 of the second group completed the transfer level course. Based on this result, the researcher decided that the new multiple measures method of placing students improved the percentage of students who pass the transfer level math course in three quarters.
3. A researcher for an electric car company was testing a new battery system. The goal of the battery system was to extend the life of the battery before recharging is necessary.
48 identical model electric cars were selected. 24 cars were given the new battery system (treatment group), while the remaining 24 cars kept the old system (control group). All cars were then fully charged. 24 drivers were then assigned a car. They were not told whether they were driving a car with the new batteries or a car with the regular batteries. The drivers were all given the same route to drive. The drivers drove the cars until the battery ran dead. The mileage driven was then recorded.
The 24 drivers then returned the next day to repeat the experiment with the remaining cars.
Each driver was assigned a new battery car and a regular battery car, but neither the driver nor the person assigning the car knew the order in which they drove the cars. The results are shown in the box plot. The researchers concluded that the new battery system did extend the life of the battery by about 7%.
1. In this experiment, what is the explanatory variable and what is the response variable?
2. Was there blinding done in this experiment? Explain.
3. Suppose the researcher instead chose 48 drivers and each driver drove a single car. Would this create any lurking variables for the experiment?
4. Identify the Steps of a Statistical Process for the library use/GPA example in problem 1. The steps are listed below:
1. Ask a question that can be answered with sample data.
2. Determine the information needed
3. Collect sample data that is representative of the population.
4. Summarize, interpret and analyze the sample data.
5. State the results and conclusion of the study.
5. Identify the Steps of a Statistical Process for the multiple measures example in problem 2. The steps are listed below:
1. Ask a question that can be answered with sample data.
2. Determine the information needed
3. Collect sample data that is representative of the population.
4. Summarize, interpret and analyze the sample data.
5. State the results and conclusion of the study.
6. Identify the Steps of a Statistical Process for the electric car example in problem 3. The steps are listed below:
1. Ask a question that can be answered with sample data.
2. Determine the information needed
3. Collect sample data that is representative of the population.
4. Summarize, interpret and analyze the sample data.
5. State the results and conclusion of the study.
7. A researcher wants to determine the average student loan debt for California students. The researcher understands that the cost of college could be dramatically different for students who attend community college, the California State System (CSU), the University of California system (UC), or private colleges. To account for this, the researcher decides to employ stratified sampling.
1. Why did the researcher choose stratified sampling?
2. Identify the 4 strata (groups) for this method.
3. Based on recent estimates, 2.1 million students attend community college, 478,000 attend the CSU system, 238,000 attend the UC system and 184,000 attend private colleges. If the researcher wants to sample a total of 2000 students, determine the sample size for each group.
8. The 2015 US Supreme Court decision Obergefell v. Hodges established a constitutional right for same‐sex couples to marry. Before this decision, many polls were conducted. Read the wording of the following actual polling questions and decide if the questions are unbiased or biased. Explain your reasoning and why you think some questions are biased.
1. Do you think it should be legal or illegal for gay and lesbian couples to get married?
2. Do you favor or oppose allowing gay and lesbian couples to enter into same‐sex marriages?
3. Should state governments give legal recognition to marriages between couples of the same sex?
4. Do you think gays and lesbians have a constitutional right to get married and have their marriage recognized by law as valid?
5. Do you think marriages between same‐sex couples should or should not be recognized by the law as valid, with the same rights as traditional marriages?
6. Do you want homosexual marriage in your community even if it means schools will be required to teach sodomy to your children?
7. Would you support or oppose a law in your state that would allow same‐sex couples to get married?
8. Do you support marriage equality?
9. Should states continue to discriminate against couples of the same gender who want to marry?
10. Should states be forced to legalize homosexual marriage over the wishes of a majority of the people?

15.2.04: Chapter 5 Homework
1. In the game of Craps, two dice are rolled and the sum totaled. One set of 4 bets is called hard ways, in which the player has to roll the number in doubles before a 7 or a non‐hard way version of the number is rolled. For example, suppose you want to bet on hard way 6. To win, you must roll a pair of threes before you roll a seven or any other combination that adds to 6. All other rolls are ignored.
1. For the hard way 6, list the sample space of rolls that have an effect on the game. Then find the probability of winning.
2. For the hard way 4, list the sample space of rolls that have an effect on the game. Then find the probability of winning.
3. For the hard way 4, the casino will pay 7 to 1 if you win. For the hard way 6, the casino will pay 9 to 1 if you win. Compare the payoff to the actual odds. Does the casino have an advantage in this game?
2. 40% of students at a community college are on financial aid. 30% of students at the same college live with at least one parent. 15% of students are on financial aid and live with at least one parent.
1. Find the probability that a community college student does not live with at least one parent. Is this marginal, joint or conditional probability?
2. Find the probability that a community college student is on financial aid or lives with at least one parent. Is this marginal, joint or conditional probability?
3. Find the probability that a community college student who lives with at least one parent is also on financial aid. Is this marginal, joint or conditional probability?
3. A poll of American registered voters was taken by Politico/Morning Consult in November, 2017 after the Las Vegas mass shooting, in which 58 concertgoers were murdered by a single gunman. The poll asked the question, "Do you support or oppose stricter gun laws in the United States?" The results of the poll, cross‐tabulated by gender, are shown in the contingency table.
1. What percentage of all registered voters support (strong or somewhat) stricter gun laws?
2. What percentage of males support (strong or somewhat) stricter gun laws?
3. What percentage of females support (strong or somewhat) stricter gun laws?
4. Are gender and support of stricter gun laws independent events? Explain
4. A student has a 90% chance of getting to class on time on Monday and a 70% chance of getting to class on time on Tuesday. Assuming that these are independent events, determine the following probabilities:
1. The student is on time both Monday and Tuesday.
2. The student is on time at least once (Monday or Tuesday).
3. The student is late both days.
5. A class has 10 students, 6 females and 4 males. 3 students will be sampled without replacement for a group presentation.
1. Construct a tree diagram of all possibilities (there will be 8 total branches at the end)
2. Find the following probabilities:
1. All male students in the group presentation.
2. Exactly 2 female students in the group presentation.
3. At least 2 female students in the group presentation.
6. 20% of professional cyclists are using a performance enhancing drug. A test for the drug has been developed; this test has a 60% chance of correctly detecting the drug (true positive). However, the test will come out positive in 2% of cyclists who do not use the drug (false positive).
1. Construct a tree diagram in which the first set of branches are cyclists with and without the drug, and the second set is whether they test positive.
2. From the tree diagram, create a contingency table.
3. What percentage of cyclists will test positive for the drug?
4. If a cyclist tests positive, what is the probability that the cyclist really used the drug?
7. 1% of the population of a country has disease X. A test for the disease has been developed; this test has a 95% probability of correctly detecting the disease (true positive). However, the test will come out positive in 2% of people who do not have disease X (false positive).
1. Construct a tree diagram in which the first set of branches are people with and without the disease, and the second set is whether they test positive. Assign probabilities to each option.
2. From the tree diagram, create a contingency table with a radix of 10000.
3. What percentage of the population will test positive for disease X?
4. If a person tests positive, what is the probability that the person really has disease X?
8. We wish to determine the morale of a certain company. We give each of the workers a questionnaire, and from their answers we can determine the level of their morale, whether it is ‘Low’, ‘Medium’ or ‘High’; also noted below is the ‘worker type’ for each of the workers. For each worker type, the frequencies corresponding to the different levels of morale are given below.
1. We randomly select 1 worker from this population. What is the probability that the worker selected
1. is an executive?
2. is an executive with medium morale?
3. is an executive or has medium morale?
4. is an executive, given the information that the worker has medium morale.
2. Given the information that the selected worker is an executive, what is the probability that the worker
1. has medium morale?
2. has high morale?
3. Are the following events independent or dependent? Explain your answer:
1. ‘is an executive’, ‘has medium morale’: are these independent?
2. ‘is an executive’, ‘has high morale’: are these independent?

15.2.05: Chapter 6 Homework
1. Explain the difference between population parameters and sample statistics. What symbols do we use for the mean and standard deviation for each of these?
2. Consider the following probability distribution function of the random variable X, which represents the number of people in a group(party) at a restaurant:
1. Find the population mean of X.
2. Find the population variance and standard deviation of X.
3. Find the probability that the next party will be over 4 people.
4. Find the probability that the next three parties (assuming independence) will each be over 4 people.
3. 10% of all children at a large urban elementary school district have been diagnosed with learning disabilities. 10 children are randomly and independently selected from this school district.
1. Let X = the number of children with learning disabilities in the sample. What type of random variable is this?
2. Find the mean and standard deviation of X.
3. Find the probability that exactly 2 of these selected children have a learning disability.
4. Find the probability that at least 1 of these children has a learning disability.
5. Find the probability that fewer than 3 of these children have a learning disability.
4. A general statement is made that an error occurs in 10% of all retail transactions. We wish to evaluate the truthfulness of this figure for a particular retail store, say store A. Twenty transactions of this store are randomly obtained. Assume that the 10% figure also applies to store A, and let X be the number of retail transactions with errors in the sample.
1. The probability distribution function (pdf) of X is binomial. Identify the parameters n and p.
2. Calculate the expected value of X.
3. Calculate the variance of X.
4. Find the probability exactly 2 transactions sampled are in error.
5. Find the probability at least 2 transactions sampled are in error.
6. Find the probability that no more than one transaction is in error.
7. Would it be unusual if 5 or more transactions were in error?
5. A newspaper finds a mean of 4 typographical errors per page. Assume the errors follow a Poisson distribution.
1. Let X equal the number of errors on one page. Find the mean and standard deviation of this random variable.
2. Find the probability that exactly three errors are found on one page.
3. Find the probability that no more than 2 errors are found on one page.
4. Find the probability that no more than 2 errors are found on two pages.
6. Major accidents at a regional refinery occur on the average once every five years. Assume the accidents follow a Poisson distribution.
1. How many accidents would you expect over 10 years?
2. Find the probability of no accidents in the next 10 years.
3. Find the probability of no accidents in the next 20 years.
7. 20% of the people in a California town consider themselves vegetarians. If 20 people are randomly sampled, find the probability that:
1. Exactly 3 are vegetarians.
2. At least 3 are vegetarians.
3. At most 3 are vegetarians.
8. 20% of the people in a California town consider themselves vegetarians. People are sampled until the first vegetarian is found. Use the geometric distribution to find the following probabilities:
1. A vegetarian is picked on the first trial.
2. A vegetarian is picked somewhere within the first three trials.
3. A vegetarian is not picked until sometime after the third trial.
9. Cargo ships arrive at a loading dock at a rate of 2 per day. The dock has the capability of handling 3 arrivals per day. How many days per month (assume 30 days in a month) would you expect the dock to be unable to handle all arriving ships? (Hint: first find the probability that more than 3 ships arrive, and then use that probability to find the expected number of days in a month that too many ships arrive).
10. Major hurricanes strike the U.S. coast at a rate of 0.7 per year.
1. What is the probability that 4 major hurricanes strike the U.S. coast in one year?
2. What is the probability that more than 2 major hurricanes strike the U.S. coast in 2 years?
3. What is the probability that no major hurricane will strike the U.S. coast in the next 5 years?
4. In 2017, 3 major hurricanes made landfall in the United States, causing catastrophic damage to Texas, Florida, Puerto Rico and the Virgin Islands. Find the probability of three major hurricanes making landfall in one year.

15.2.06: Chapter 7 Homework
1. The completion time (in minutes) for a student to complete a short quiz follows the probability density function shown here, with some areas calculated.
1. Find the probability that a student completes the exam in 4 minutes or less.
2. Find the probability that a student needs between 8 and 10 minutes to finish the quiz.
3. If the instructor allows 10 minutes for the quiz and the class has 40 students, how many students will run out of time before the quiz is finished?
4. Find the 64th percentile of the distribution.
2. A ferry boat leaves the dock once per hour. Your waiting time for the next ferry boat will follow a uniform distribution from 0 to 60 minutes. (A code sketch follows the parts below.)
1. Find the mean and variance of this random variable.
2. Find the probability of waiting more than 20 minutes for the next ferry.
3. Find the probability of waiting exactly 20 minutes for the next ferry.
4. Find the probability of waiting between 15 and 35 minutes for the next ferry.
5. Find the conditional probability of waiting at least 10 more minutes after you have already waited 15 minutes.
6. Find the probability of waiting more than 45 minutes for the ferry on 3 consecutive independent days.
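The ferry probabilities can be checked with SciPy's uniform distribution; a minimal sketch (scipy assumed):

```python
# Uniform(0, 60) model of the waiting time in minutes.
from scipy.stats import uniform

wait = uniform(loc=0, scale=60)
print(wait.mean(), wait.var())          # 30.0 and 300.0
print(wait.sf(20))                      # P(wait > 20) = 2/3
print(wait.cdf(35) - wait.cdf(15))      # P(15 < wait < 35) = 1/3
print(wait.sf(25) / wait.sf(15))        # P(wait > 25 | wait > 15) ~ 0.778
print(wait.sf(45) ** 3)                 # more than 45 minutes on 3 independent days
```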
3. The cycle times for a truck hauling concrete to a highway construction site are uniformly distributed over the interval 50 to 70 minutes.
1. Find the mean and variance for cycle times.
2. Find the 5th and 95th percentile of cycle times.
3. Find the interquartile range.
4. Find the probability that the cycle time for a randomly selected truck exceeds 62 minutes.
5. If you are given that the cycle time exceeds 55 minutes, find the probability that the cycle time is between 60 and 65 minutes.
4. The amount of gas in a car’s tank (X) follows a Uniform Distribution, in which the minimum is zero and the maximum is 12 gallons.
1. Find the mean and median amount of gas in the tank.
2. Find the variance and standard deviation of gas in the tank.
3. Find the probability that there is more than 3 gallons in the tank.
4. Find the probability that there is between 4 and 6 gallons in the tank.
5. Find the probability that there is exactly 3 gallons in the tank.
6. Find the 80th percentile of gas in the tank.
5. A normally distributed population of package weights has a mean of 63.5 g and a standard deviation of 12.2 g.
1. What percentage of this population weighs 66 g or more?
2. What percentage of this population weighs 41 g or less?
3. What percentage of this population weighs between 41 g and 66 g?
4. Find the 60th percentile for distribution of weights.
5. Find the three quartiles and the interquartile range.
6. Assume the expected waiting time until the next RM (Richter Magnitude) 7.0 or greater earthquake somewhere in California follows an exponential distribution with $\mu=10$ years. (A code sketch follows the parts below.)
1. Find the probability of waiting 10 or more years for the next RM 7.0 or greater earthquake.
2. Determine the median waiting time until the next RM 7.0 or greater earthquake.
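A short SciPy check for this exponential model (scipy assumed):

```python
# Exponential waiting time with mean 10 years.
from scipy.stats import expon

quake = expon(scale=10)
print(quake.sf(10))     # P(waiting 10 or more years) = e**(-1) ~ 0.368
print(quake.median())   # median wait = 10 * ln(2) ~ 6.93 years
```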
7. High Fructose Corn Syrup (HFCS) is a sweetener in food products that is linked to obesity and Type 2 Diabetes. The mean annual consumption in the United States in 2008 of HFCS was 60 lbs with a standard deviation of 20 lbs. Assume the population follows a Normal Distribution.
1. Find the probability that a randomly selected American consumes more than 50 lbs of HFCS per year.
2. Find the probability that a randomly selected American consumes between 30 and 90 lbs of HFCS per year.
3. Find the 80th percentile of annual consumption of HFCS.
4. Between what two numbers would you expect to contain 95% of Americans’ HFCS annual consumption?
5. Find the quartiles and Interquartile range for this population.
6. A teenager who loves soda consumes 105 lbs of HFCS per year. Is this result unusual? Use probability to justify your answer.
8. A nuclear power plant experiences serious accidents once every 8 years. Let X = the waiting time until the next serious accident.
1. What is the mean and standard deviation of the random variable X?
2. Determine the probability of waiting more than 10 years before the next serious accident.
3. Suppose a plant went 5 years without a serious accident. Find the probability of waiting more than 10 years before the next serious accident.
4. Determine the probability of waiting less than 5 years before the next serious accident.
5. What is the median waiting time until the next serious accident?
6. Find the Interquartile range for this distribution.

15.2.07: Chapter 8 Homework
1. State in your own words the 3 important parts of the Central Limit Theorem.
2. For women aged 18‐24, systolic blood pressures (in mmHg) are normally distributed with $\mu=114.8$ and $\sigma=13.1$. (A code sketch follows the parts below.)
1. Find the probability that a woman aged 18‐24 has systolic blood pressure exceeding 120.
2. If 4 women are randomly selected, find the probability that their mean blood pressure exceeds 120.
3. If 40 women are randomly selected, find the probability that their mean blood pressure exceeds 120.
4. If the pdf for systolic blood pressure did NOT follow a normal distribution, would your answer to part c change? Explain.
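The key idea in parts b and c is that the standard error of the sample mean is $\sigma / \sqrt{n}$, so the probability of a mean over 120 shrinks as $n$ grows. A minimal SciPy sketch (scipy assumed):

```python
# P(sample mean > 120) for samples of size 1, 4, and 40.
from scipy.stats import norm

mu, sigma = 114.8, 13.1
for n in (1, 4, 40):
    se = sigma / n ** 0.5                  # standard error of the mean
    print(n, norm.sf(120, loc=mu, scale=se))
```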
3. A normally distributed population of package weights has a mean of 63.5 g and a standard deviation of 12.2 g.
1. If you sample 1 package, find the probability that the sample mean is over 66 g.
2. If you sample 16 packages, find the probability that the sample mean is over 66 g. Compare this answer to part a.
3. If you sample 49 packages, find the probability that the sample mean is over 66 g. Compare this answer to parts a and b.
4. High Fructose Corn Syrup (HFCS) is a sweetener in food products that is linked to obesity and Type 2 Diabetes. The mean annual consumption in the United States in 2008 of HFCS was 60 lbs with a standard deviation of 20 lbs. Assume the population follows a Normal Distribution.
1. In a sample of 16 Americans, what is the probability that the sample mean will exceed 57 pounds of HFCS per year?
2. In a sample of 16 Americans, what is the probability that the sample mean will be between 50 and 70 pounds of HFCS per year.
3. In a sample of 16 Americans, between what two values would you expect to see 95% of the sample means?
5. The completion time (in minutes) for a student to complete a short quiz follows the continuous probability density function shown here, with some areas calculated. It is known that $\mu=5.3$ minutes and $\sigma=2.4$ minutes. 40 students take the quiz.
1. Find the probability that the mean completion time for the students is under 5 minutes.
2. Find the probability that the mean time for the class to finish the quiz is between 6 and 8 minutes.
3. The mean completion time for the class was 7.1 minutes. Is this result unusual? Explain.
6. A pollster sampled 100 adults in California and asked a series of questions. The Central Limit Theorem for Proportions requires that $np > 10$ and $n(1‐p) > 10$. Determine if these conditions are met for the following statements.
1. 61% of Californians live in Southern California.
2. 92% of Californians support Deferred Action for Childhood Arrivals (DACA).
3. 24% of Californians have visited Yosemite National Park.
4. 8% of Californians have a felony conviction.
7. The cycle times for a truck hauling concrete to a highway construction site are uniformly distributed over the interval 50 to 70 minutes. For the Uniform Distribution $\mu=\dfrac{a+b}{2}$ and $\sigma=\sqrt{\dfrac{(b-a)^{2}}{12}}$, in which $a$ is the minimum value and $b$ is the maximum value.
1. Find the mean and standard deviation for cycle times.
2. There have been 46 times that concrete has been hauled to the construction site. Find the probability that the mean cycle time for these 46 samples exceeds 58 minutes.
8. Nuclear power plants experience serious accidents once every 8 years. Let $X$ = the waiting time until the next serious accident. X follows an Exponential Distribution in which $\mu$ = the expected waiting time and $\sigma=\mu$.
1. What is the mean and standard deviation of the random variable X?
2. For 35 accidents at nuclear power plants, the mean waiting time was 6.1 years. Is this value unusually low? To answer, find the probability that the mean waiting time is 6.1 years or less.

15.2.08: Chapter 9 Homework
1. The average number of years of post secondary education for employees within a certain industry is 1.5. A company claims that this average is higher for its employees. A random sample of 16 of its employees has a mean of 2.1 years of post secondary education with a standard deviation of 0.6 years.
1. Find a 95% confidence interval for the mean number of years of post secondary education for the company’s employees. How does this compare with the industry value?
2. Find a 95% confidence interval for the standard deviation of the number of years of post secondary education for the company’s employees.
2. When polling companies report a margin of error, they are referring to a 95% confidence interval. Go to the website www.pollingreport.com and verify the stated margins of error for 2 polls.
3. In a random sample of five microwave ovens, the mean repair cost was $75.00, and the sample standard deviation was $12.50. Construct and interpret a 95% confidence interval for the mean.
4. In a random sample of seven computers, the mean repair cost was $100.00 and the standard deviation was $42.50. Construct and interpret a 99% confidence interval for the mean.
5. You did some research on repair costs of microwave ovens and found that the population standard deviation is $\sigma = \$15$. Repeat Exercise 3, using a normal distribution with the appropriate calculations for a standard deviation that is known. Compare the results.
6. A soccer ball manufacturer wants to estimate the mean circumference of soccer balls within 0.15 inch. Assume that the population of circumferences is normally distributed.
1. Determine the minimum sample size required to construct a 99% confidence interval for the population mean. Assume the population standard deviation is 0.20 inch.
2. Repeat part (a) using a standard deviation of 0.10 inch. Which standard deviation requires a larger sample size? Explain.
3. Repeat part (a) using a confidence level of 95%. Which level of confidence requires a larger sample size? Explain.
7. If all other quantities remain the same, how does the indicated change affect the minimum sample size requirement (Increase, Decrease or No Change)?
1. Increase in the level of confidence
2. Increase in the error tolerance
3. Increase in the standard deviation
8. In a survey of 3,224 U.S. adults, 1515 said flying is the most stressful form of travel. Construct a 95% confidence interval for the proportion of all adults who say that flying is the most stressful form of travel.
9. A study of 2,008 traffic fatalities found that 800 of the fatalities were alcohol related. Find a 99% confidence interval for the population proportion, and explain what it means.
10. In a survey of 1,003 U.S. adults, 662 would be happy spending the rest of their career with their current employer. Construct a 90% confidence interval for the proportion that would be happy staying with their current employer. Does this result surprise you?
11. You wish to estimate, with 95% confidence and within 3.5% of the true population proportion, the proportion of computers that need repairs or have problems by the time the product is three years old.
1. No preliminary estimate is available. Find the minimum sample size needed.
2. Find the minimum sample size needed, using a prior study that found that 19% of computers needed repairs or had problems by the time the product was three years old.
3. Compare the results from parts (a) and (b).
12. A lawn mower manufacturer is trying to determine the standard deviation of the life of one of its lawn mower models. To do this, it randomly selects 12 lawn mowers that were sold several years ago and finds that the sample standard deviation is 3.25 years. Use a 99% level of confidence to find a confidence interval for the standard deviation. (A code sketch follows below.)
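A minimal SciPy sketch of the chi-square interval for a standard deviation (scipy and numpy assumed):

```python
# 99% confidence interval for sigma: n = 12 mowers, s = 3.25 years.
import numpy as np
from scipy.stats import chi2

n, s, conf = 12, 3.25, 0.99
df = n - 1
lower = np.sqrt(df * s**2 / chi2.ppf(1 - (1 - conf) / 2, df))
upper = np.sqrt(df * s**2 / chi2.ppf((1 - conf) / 2, df))
print(lower, upper)
```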
13. The monthly incomes of 20 randomly selected individuals who have recently graduated with a bachelor's degree in social science have a sample standard deviation of $107. Use a 95% level of confidence to find a confidence interval for the standard deviation.

15.2.09: Chapter 10 Homework
(Exercises 1‐6) Determine whether the statement is true or false. If it is false, rewrite it as a true statement.
1. In a hypothesis test, you assume that the alternative hypothesis is true.
2. A statistical hypothesis is a statement about a sample.
3. If you decide to reject the null hypothesis, you can support the alternative hypothesis.
4. The level of significance is the maximum probability that you allow for rejecting a null hypothesis when it is actually true.
5. A large p‐value in a test will favor a rejection of the null hypothesis.
6. If you want to support a claim, write it as your null hypothesis.
(Exercises 7‐12) Think about the context of the claim. Determine whether you want to support or reject the claim.
1. State the null and alternative hypotheses in words.
2. Write the null and alternative hypotheses in appropriate symbols
3. Describe in words Type I error (the consequence of rejecting a true null hypothesis).
4. Describe in words Type II error (the consequence of failing to reject a false null hypothesis).
7. You represent a chemical company that is being sued for paint damage to automobiles. You want to support the claim that the mean repair cost per automobile is not $650. How would you write the null and alternative hypotheses?
8. You are on a research team that is investigating the mean temperature of adult humans. The commonly accepted claim is that the mean temperature is about 98.6°F. You want to show that this claim is false. How would you write the null and alternative hypotheses?
9. A light bulb manufacturer claims that the mean life of a certain type of light bulb is at least 750 hours. You are skeptical of this claim and want to refute it.
10. As stated by a company's shipping department, the number of shipping errors per million shipments has a standard deviation that is less than 3. Can you support this claim?
11. A research organization reports that 33% of the residents in Ann Arbor, Michigan are college students. You want to reject this claim.
12. The results of a recent study show that the proportion of people in the western United States who use seat belts when riding in a car or truck is under 84%. You want to support this claim.
13. In your work for a national health organization, you are asked to monitor the amount of sodium in a certain brand of cereal. You find that a random sample of 82 cereal servings has a mean sodium content of 232 milligrams. The population standard deviation is known to be 10 milligrams. At $\alpha = 0.01$, can you conclude that the mean sodium content per serving of cereal is over 230 milligrams?
14. A tourist agency in Florida claims that the mean daily cost of meals and lodging for a family of four traveling in Florida is $284. You work for a consumer advocate and want to test this claim. In a random sample of 50 families of four traveling in Florida, the mean daily cost of meals and lodging is $292 and the standard deviation is $25. At $\alpha = 0.05$, do you have enough evidence to reject the agency's claim?
15. An environmentalist estimates that the mean waste recycled by adults in the United States is more than 1 pound per person per day. You want to test this claim. You find that the mean waste recycled per person per day for a random sample of 12 adults in the United States is 1.2 pounds and the standard deviation is 0.3 pound. At $\alpha = 0.05$, can you support the claim?
16. A government association claims that 44% of adults in the United States do volunteer work. You work for a volunteer organization and are asked to test this claim. You find that in a random sample of 1165 adults, 556 do volunteer work. At $\alpha = 0.05$, do you have enough evidence to reject the association's claim?
17. The geyser Old Faithful in Yellowstone National Park is claimed to erupt on average for about three minutes. Thirty‐six observations of eruptions of the Old Faithful were recorded (time in minutes).
Sample mean = 3.394 minutes. Sample standard deviation = 1.168 minutes. Test the hypothesis that the mean length of time for an eruption is 3 minutes and answer ALL the following questions:
1. General Question: Why do you think this test is being conducted?
2. Design
1. State the null and alternative hypotheses
2. What is the appropriate test statistic/model?
3. What is the significance level of the test?
4. What is the decision rule?
3. Conduct the test
1. Are there any unusual observations that question the integrity of the data or the assumptions of the model? (additional problem only)
2. Is the decision to reject or fail to reject $H_0$?
4. Conclusions: State a one paragraph conclusion that is consistent with the decision using language that is clearly understood in the context of the problem. Address any potential problems with the sampling methods and address any further research you would conduct.
18. 15 i‐phone users were asked how many songs were on their i‐phone. Here are the summary statistics of that study: $\bar{X}=650 \quad s=200$
1. Can you support the claim that the number of songs on a user’s i‐phone is different than 500? Conduct the test with $\alpha=5 \%$.
2. Can you support the claim that the population standard deviation is under 300? Conduct the test with $\alpha=5 \%$.
19. Consider the design procedure in the test you conducted in Question 18a. Suppose you wanted to conduct a Power analysis if the population mean under $H_a$ was actually 550. Use the online Power calculator to answer the following questions. (A code sketch follows the parts below.)
1. Determine the Power of the test.
2. Determine Beta.
3. Determine the sample size needed if you wanted to conduct the test in Question 18a with 95% power.
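For readers without the online calculator, statsmodels offers a comparable power calculation. A sketch under the assumption that the sample standard deviation of 200 stands in for the population value:

```python
# Approximate power analysis for the test in Question 18a:
# H0 mean 500 vs. Ha mean 550, n = 15, alpha = 0.05, two-sided.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
d = (550 - 500) / 200    # standardized effect size
power = analysis.power(effect_size=d, nobs=15, alpha=0.05, alternative='two-sided')
print(power, 1 - power)  # power and beta
print(analysis.solve_power(effect_size=d, power=0.95, alpha=0.05))  # n for 95% power
```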
20. The drawing shown diagrams a hypothesis test for population mean design under the Null Hypothesis (top drawing) and a specific Alternative Hypothesis (bottom drawing). The sample size for the test is 200.
1. State the Null and Alternative Hypotheses
2. What are the values of $\mu_{0}$ and $\mu_{a}$ in this problem?
3. What is the significance level of the test?
4. What is the Power of the test when the population mean = 4?
5. Determine the probability associated with Type I error.
6. Determine the probability associated with Type II error.
7. Under the Null Hypothesis, what is the probability the sample mean will be over 6?
8. If the significance level were set at 5%, would the power increase, decrease or stay the same?
9. If the test were conducted, and the $p$‐value were 0.085, would the decision be Reject or Fail to Reject the Null Hypothesis?
10. If the sample size was changed to 100, would the shaded area on the bottom ($H_a$) graph increase, decrease or stay the same?

15.2.10: Chapter 11 Homework
1. What is the difference between two samples that are dependent and two samples that are independent? Give an example of two dependent samples and two independent samples.
2. What conditions are necessary in order to use the dependent samples t‐test for the mean of the difference of two populations?
In Problems 3‐10, classify the two given samples as independent or dependent. Explain your reasoning.
3. Sample 1: The SAT scores for 35 high school students who did not take an SAT preparation course; Sample 2: The SAT scores for 40 high school students who did take an SAT preparation course
4. Sample 1: The SAT scores for 44 high school students; Sample 2: The SAT scores for the same 44 high school students after taking an SAT preparation course
5. Sample 1: The weights of 51 adults; Sample 2: The weights of the same 51 adults after participating in a diet and exercise program for one month
6. Sample 1: The weights of 40 females; Sample 2: The weights of 40 males
7. Sample 1: The average speed of 23 powerboats using an old hull design; Sample 2: The average speed of 14 powerboats using a new hull design
8. Sample 1: The fuel mileage of 10 cars; Sample 2: The fuel mileage of the same 10 cars using a fuel additive
9. The table shows the braking distances (in feet) for each of the four different sets of tires with the car's anti‐lock braking system (ABS) on and with ABS off. The tests were done on ice with cars traveling at 15 miles per hour.
Tire Set 1 2 3 4
Braking distance with ABS 42 55 43 61
Braking distance without ABS 58 67 59 75
10. The table shows the heart rates (in beats per minute) of five people before and after exercising. (A code sketch follows the table.)
Person 1 2 3 4 5
Heart Rate before Exercising 42 55 43 61 65
Heart Rate after Exercising 58 67 59 75 90
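These two samples are dependent, since the same five people are measured twice. Although the problem only asks for classification, a paired t-test on such data would look like this minimal SciPy sketch:

```python
# Paired t-test on the heart-rate data above.
from scipy.stats import ttest_rel

before = [42, 55, 43, 61, 65]
after = [58, 67, 59, 75, 90]
t_stat, p_value = ttest_rel(after, before)
print(t_stat, p_value)
```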
11. In a study testing the effects of an herbal supplement on blood pressure in men, 11 randomly selected men were given an herbal supplement for 15 weeks. The following measurements are for each subject's diastolic blood pressure taken before and after the 15‐week treatment period. At $\alpha=.10$, can you support the claim that diastolic blood pressure was lowered?
12. A random sample of 25 waiting times (in minutes) before patients saw a medical professional in a hospital's minor emergency department had a standard deviation of 0.7 minute. After a new admissions procedure was implemented, a random sample of 21 waiting times had a standard deviation of 0.5 minute. At $\alpha=.10$, can you support the hospital's claim that the standard deviation of the waiting times has decreased?
13. An engineer wants to compare the tensile strengths of steel bars that are produced using a conventional method and an experimental method. (The tensile strength of a metal is a measure of its ability to resist tearing when pulled lengthwise). To do so, the engineer randomly selects steel bars that are manufactured using each method and records the following tensile strengths (in Newtons per square millimeter). At $\alpha=.10$, can the engineer claim that the experimental method produces steel with greater mean tensile strength? Should the engineer recommend using the experimental method? First use the $F$ test to determine whether or not to use equal variances in choosing the model.
14. A community college is considering using multiple measures for student placement into math courses. The existing measure is that each student takes a standardized placement exam. Based on the score, the student will be placed in one of three math courses: Elementary Level, Intermediate Level and Transfer Level. A second measure will be to use high school GPA to modify the needed placement exam score for each of the three courses.
200 incoming students who have high school GPAs were randomly split into two groups. The first group of 100 students was given the existing placement exam only. The second group of 100 students was placed using the new second measure that utilizes both placement exams and high school GPAs.
After three quarters, it was found that 17 of the first group completed the Transfer Level course while 31 of the second group completed the Transfer Level course. Based on this result, the researcher decided that the new multiple measures method of placing students improved the percentage of students who pass the Transfer Level math course in three quarters.

15.2.11: Chapter 12 Homework
1. A bicycle safety organization claims that fatal bicycle accidents are uniformly distributed throughout the week. The table shows the day of the week for which 911 randomly selected fatal bicycle accidents occurred. At $\alpha = 0.10$, can you reject the claim that the distribution is uniform?
2. Results from a five‐year‐old survey that asked where coffee drinkers typically drink their first cup of coffee are shown in the graph. To determine whether this distribution has changed, you randomly select 581 coffee drinkers and ask them where they typically drink their first cup of coffee. The results are shown in the table. Can you conclude that there has been a change in the claimed or expected distribution? Use $\alpha = 0.05$.
3. In a SurveyUSA poll, 500 American adults were asked if marijuana should be legalized. The results of the poll were cross tabulated as shown in the contingency table below. Conduct a hypothesis test for independence to determine if opinion about legalization of marijuana is dependent on gender.
Male Female
Should be Legal 123 90
Should Not be Legal 127 160
4. In a SurveyUSA poll, 500 American adults were asked if marijuana should be legalized. The results of the poll were cross tabulated as shown in the contingency table below. Conduct a hypothesis test for independence to determine if opinion about legalization of marijuana is dependent on age.
18‐34 35‐54 55+
Should be Legal 95 83 48
Should Not be Legal 65 126 83
5. 1000 American adults were recently polled on their opinion about the effect of the recent stimulus bill on the economy. (A code sketch follows this problem.) The results are shown in the following contingency table, broken down by gender:
Stimulus will hurt economy Stimulus will help the economy Stimulus will have no effect TOTAL
Male 150 150 200 500
Female 100 200 200 500
TOTAL 250 350 400 1000
Are gender and opinion on the stimulus dependent variables? Test using $\alpha =1\%$.
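A minimal SciPy sketch of this test of independence (scipy assumed):

```python
# Chi-square test of independence on the gender-by-opinion table.
from scipy.stats import chi2_contingency

table = [[150, 150, 200],   # Male: hurt, help, no effect
         [100, 200, 200]]   # Female
stat, p_value, df, expected = chi2_contingency(table)
print(stat, p_value, df)    # compare the p-value to alpha = 0.01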
For the studies in questions 6 to 8, answer the following questions. (You will not have to actually conduct tests).
1. State the Null and Alternative Hypotheses in words
2. State the Null and Alternative Hypotheses in population parameters
3. Choose the appropriate model from among these three:
1. One population test of proportion
2. Chi‐square goodness of fit
3. Chi‐square test of independence
6. Starting in 2018, the California State University System (CSU) changed its prerequisite requirements for a Statistics course needed for community college students to transfer. The original provision was that students needed to take Intermediate Algebra before Statistics. The new requirement is that students can take Intermediate Algebra or an alternative path to Statistics course as a prerequisite for Statistics. There is some concern that students who choose the alternative path may be less successful after transferring to CSU. A study is proposed to determine the graduation rates in 3 years for transfer students who passed Intermediate Algebra and those who passed the alternative course. Data will be collected and cross‐tabulated into two questions: "What path did the student choose?" and "Did the student graduate within 3 years of transfer?"
7. The Achilles tendon connects the calf muscle to the heel bone. Of the patients who rupture (tear) the Achilles tendon and have it surgically repaired, 11% will re‐rupture the Achilles tendon within three years of treatment. A proposed non‐surgical method of treatment would treat the rupture with a series of casts, ultrasound and passive motion. The researcher wanted to show that the percentage of patients who choose the non‐surgical method of treatment had a reduced percentage of re‐ruptures.
8. A sports shoe company has designed a women's running shoe and is considering producing the shoe in 4 different colors: pink, blue, teal and gray. The company wants to know if there is a preference among women for a specific color of the shoe. 154 women who are runners will participate in the study.

15.2.12: Chapter 13 Homework
1. A clinical psychologist completed a study on hyperactivity in children using one‐way ANOVA. The model was balanced with 5 replicates per treatment. The factor was 3 types of school district (urban, rural and suburban). Unfortunately, hackers broke into the psychologist’s computer and wiped out all the data. All that remained was a fragment of the ANOVA table:
Source of Variation Sum of Squares Degrees of Freedom Mean Square $F$ statistic Critical Value of $F$ for $\alpha = .05$ Decision
Factor 7000
Error
Total 9000
Fill in the table and conduct the hypothesis test that compares mean level of hyperactivity in the 3 types of districts. Explain your results.
2. A sociologist was interested in the commute time for workers in the Bay Area. She categorized commuters by 4 regions (North Bay, South Bay, East Bay and Peninsula) and designed a balanced model with 8 replicates per region. The data are round‐trip commute times in minutes. The results and ANOVA output are shown below:
1. Test the Null Hypothesis that all regions have the same mean commute time at a significance level of 5%. State your decision in non‐statistical language.
2. Conduct all pairwise comparisons at an overall significance level of 5%.
3. One of the underlying assumptions of One Factor ANOVA is that all group variances are equal. Review the data and decide whether you think this assumption may be violated.
4. Explain the results of this experiment as if you were addressing a transportation committee. What would you recommend?
MINITAB results for question 2.
3. People who are concerned about their health may prefer hot dogs that are low in salt and calories. The data set contains the calories and sodium content for each of 54 major hot dog brands. The hot dogs are classified by type: beef, poultry, and meat (mostly pork and beef, but up to 15% poultry meat). Minitab output is attached for two different hypothesis tests. A test for a difference in calories due to hot dog type will be performed. (A code sketch follows the parts below.)
1. Design the test.
2. Fill in the missing information in the ANOVA table on the next page.
3. Conduct the test with an overall significance level of 5%, including pairwise comparisons.
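A minimal SciPy sketch of a one-factor ANOVA; the calorie values below are placeholders rather than the actual 54-brand data set, which would be loaded from the file:

```python
# One-factor ANOVA for calories by hot dog type.
from scipy.stats import f_oneway

beef = [186, 181, 176, 149, 184]      # placeholder values
meat = [173, 191, 182, 190, 172]      # placeholder values
poultry = [129, 132, 102, 106, 94]    # placeholder values
f_stat, p_value = f_oneway(beef, meat, poultry)
print(f_stat, p_value)
```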
(Questions 4‐8) Does mindfulness reduce anxiety for students who are taking Mathematics courses? Several designs for studies are shown. For each design, answer the following:
1. State the Null and Alternative Hypotheses in words
2. State the Null and Alternative Hypotheses in population parameters
3. Choose the appropriate model from among these five:
1. One population test of proportion
2. Matched pairs t‐test
3. Independent Samples (Pooled Variance or Unequal Variance) $t$‐test
4. Chi‐square test of independence
5. One factor ANOVA
4. The anxiety level of 117 Math students will be measured. After one month of mindfulness within the course, the anxiety level of these students will be measured again.
5. Do most students (more than 50%) want to see mindfulness taught in a math course? 324 math students will be asked if they would like to see a 20 minute weekly mindfulness unit in the class.
6. 400 students will participate in a study where they will be classified into 3 categories: low math anxiety, moderate math anxiety and high math anxiety. They will then be asked if they would like to add a 20 minute per week mindfulness unit in the math class to determine if the level of anxiety and opinion about mindfulness in the class are dependent events.
7. An instructor with two sections of the same course will offer 20 minutes of mindfulness per week in one class. The other class will be taught without the 20 minutes of mindfulness. After the course is completed, the students in both sections will have their anxiety level measured. A test will be run to see if the mean anxiety score is lower for the class with mindfulness.
8. An instructor with three sections of the same course will offer 20 minutes of mindfulness per week in the first class. The second class will have 10 minutes of mindfulness per week. The third class will be taught without mindfulness. After the course is completed, the students in all three sections will have their anxiety levels measured. A test will be run to see if there is a difference in mean anxiety score due to the section the students were in.

15.2.13: Chapter 14 Homework
1. A real estate agent uses a simple regression model to estimate the value of a home based on its size, in which $Y$ is the value of the home in dollars and $X$ is the size in total square feet. The regression equation is $\hat{Y}=253000+438 X$.
1. Interpret the slope using the units of the problem.
2. Estimate the value of a home with 1347 square feet.
3. Will the correlation coefficient be positive or negative in this problem? Explain.
2. A car dealer uses a simple regression model to estimate the value of a used 2013 Toyota Prius basing the value on the car’s mileage. $Y$ is the value of the car in dollars and $X$ is the total miles on the odometer. The regression equation is $\hat{Y}=28000-0.048 X$.
1. Interpret the slope by using the units of the problem.
2. Estimate the value of a car with an odometer reading of 143,282 miles.
3. Why would this model not work for a Prius that was driven 600,000 miles?
3. A manager is concerned that overtime (measured in hours) is contributing to more sickness (measured in sick days) among the employees. Data records for 20 employees were sampled with the MINITAB results shown at the end of the questions.
1. Identify the explanatory (Independent) Variable – include units.
2. Identify the response (dependent) variable – include units.
3. Find the least square line where Sick Days is dependent on Overtime. Interpret the slope using the appropriate units.
4. Test the hypothesis that the regression model is significant ($\alpha = .10$). Show all steps. Fill in the missing values on the ANOVA table.
5. Find and interpret the $r^2$, coefficient of determination.
6. Find the estimate of the standard deviation of the residual error.
7. Identify any residual that is more than two standard deviations from the regression line.
4. 16 student volunteers drank a randomly assigned number of cans of beer. Thirty minutes later a police officer measured their blood alcohol content (BAC) in grams of alcohol per deciliter of blood. Data and computer output are given below the questions. (A code sketch follows the data.)
1. Find the least square line where BAC is dependent on Beers consumed. Interpret the slope.
2. Find and interpret the r‐squared statistic.
3. Test the hypothesis that the beers consumed and BAC are correlated ($\alpha = .05$)
4. Find a 95% Confidence Interval for the mean BAC for a student who consumes 5 beers.
5. Would this model be appropriate for a student who consumed 20 beers? Explain.
6. Joe claims that he can still legally drive after consuming 5 beers: the legal BAC limit is 0.08. Find a 95% Prediction interval for Joe’s BAC. Do you think Joe can legally drive?
7. Residual Analysis
1. We would expect the residuals to be random: about half would be positive and half would be negative. Check the actual residuals and compare the actual percentages to the expected percentages.
2. The assumption for regression is that the residuals have a Normal Distribution. This means about 68% of the residuals would have a $Z$‐score between ‐1 and 1, 95% of the residuals would have a $Z$‐score between ‐2 and 2 and all the residuals would have a Z‐score between ‐3 and 3. The Column labeled “Standardized Residual” is the $Z$‐score for each residual. Check to see what percentage of the data has $Z$‐scores in each of these three intervals, and compare the actual percentages to the expected percentages (68%, 95%, 100%).
Data for Exercise 4 Regression Analysis: BAC versus Beers
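A minimal SciPy sketch of the regression; the 16 pairs below are illustrative stand-ins, so substitute the values from the actual data set:

```python
# Simple linear regression of BAC on beers consumed.
from scipy.stats import linregress

beers = [5, 2, 9, 8, 3, 7, 3, 5, 3, 5, 4, 6, 5, 7, 1, 4]   # illustrative
bac = [0.10, 0.03, 0.19, 0.12, 0.04, 0.095, 0.07, 0.06,
       0.02, 0.05, 0.07, 0.10, 0.085, 0.09, 0.01, 0.05]    # illustrative
fit = linregress(beers, bac)
print(fit.slope, fit.intercept, fit.rvalue ** 2)   # slope, intercept, r-squared
print(fit.intercept + fit.slope * 5)               # predicted BAC after 5 beers
```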
5. The following regression analysis was used to test Poverty (percentage of population living below the poverty line) as a predictor for Dropout (High School Dropout Percentage).
1. Five items have been blanked out; find these missing values, which can be calculated based on other information in the output.
1. $r^2$
2. $r$
3. Standard Error of the Residuals
4. $F$ Test Statistic
5. Predicted Value for Poverty = 15
2. Write out the regression equation.
3. Conduct the Hypothesis Test that Poverty and HSDropout are correlated with $\alpha =.01$ (Critical Value for $F$ is 7.19 ($\alpha =.01$, DF‐num=1, DF‐den=48)).
4. What percentage of the variability of High School Dropout Rates can be explained by Poverty?
5. North Dakota has a Poverty Rate of 11.9 percent and a HS Dropout Rate of 4.6 percent.
1. Calculate the predicted HS Dropout Rate for North Dakota from the regression equation.
2. The Standard Error (from part a‐iii) is the standard deviation with respect to the regression line. Calculate the $Z$‐score for the actual North Dakota HS Dropout Rate of 4.6 (Subtract the predicted value and divide by the Standard Error). Do you think that the North Dakota HS Dropout Rate is unusual? Explain.
For the studies in questions 6 to 8:
1. Identify the explanatory variable.
2. Identify the response variable
3. Choose the appropriate model from among these three:
1. Chi‐square test of independence
2. One factor ANOVA
3. Simple Linear Regression
6. A golf course designer was studying types of grass to be used in a region that was susceptible to droughts. The designer studied 5 types of grass: Bent Grass, Fescue, Rye Grass, Bermuda Grass and Paspalum. Ten samples were taken of each grass and watered to keep the grass in prime condition for a month. For each sample, the daily water usage was calculated in liters per square meter. The designer wanted to know if there was a significant difference in mean water usage due to grass type.
7. A school psychologist believes students who have more homework will sleep less. 200 students participated in a study. For each of 14 consecutive days, students were asked to count how many minutes they spent doing their homework and how many minutes they slept that night.
8. Does smoking change the way someone tastes salt? A researcher sampled 200 smokers and 200 non‐smokers. They were then given a bowl of soup and asked to classify the salt level into one of 3 categories: low salt, average salt and high salt. The researcher wanted to know if there was a significant difference in the saltiness classification due to whether the participant was a smoker.

15.3.01: Chapter 2 Lab
Creating Graphs from Data (Chapter 2 required)
Open MINITAB file lab01.mpj from the website. This data represents information about 700 instructors from the popular website ratemyprofessors.com. All instructors are sampled from the Foothill‐De Anza Community College District. Here is a description of the data:
College Foothill or De Anza
Smiley Positive, Neutral, or Negative
Photo Instructor has a photo
Hot Instructor has a chili pepper
Gender Male or Female
Dept Academic Department (example ‐ Mathematics)
Division Academic Division (example ‐ PSME)
Num Number of Ratings for that faculty member
Overall Average Overall Quality Rating (1‐5 scale, lowest to highest)
Easiness Average Easiness Rating (1‐5 scale, hardest to easiest)
We are going to use Minitab to make some dot plots for this data. Specifically we are going to look at Average Overall Quality Rating and try to make some comparisons of groups. First, let's ask some questions about this data.
1. Identify the quantitative variables.
2. Identify the categorical variables.
3. Is this an observational study or an experiment? Explain.
4. What is the population?
5. What is the sample?
6. Do you think this is a representative sample of all instructors at Foothill‐De Anza? Explain.
Now we are going to make some dot plots of the Average Overall Quality Rating. These can be found under the GRAPHS menu command in MINITAB.
1. Make a dot plot of all instructors' Average Overall Quality Rating. (Simple Dot Plot). Paste the graph here and analyze the dot plot. (Describe the data's shape, center, spread and unusual features)
2. Make a dot plot of all instructors' Average Overall Quality Rating by gender. (With Groups Dot Plot). Paste the graph here. Do you see any difference in overall quality between males and females?
3. Make a dot plot of all instructors' Average Overall Quality Rating by college. (With Groups Dot Plot). Paste the graph here. Do you see any difference in overall quality between Foothill and De Anza instructors?
4. Make a dot plot of all instructors' Average Overall Quality Rating by hotness. (With Groups Dot Plot). Paste the graph here. Do you see any difference in overall quality between "Hot" and "Not Hot" instructors?
5. Write a paragraph summarizing your results. Do you see any problems or bias with this study?
To graph categorical data, you can use pie charts or bar charts, both of which can be found on the GRAPHS menu command in MINITAB.
1. Make a pie chart of the categorical variable college and interpret the graph.
2. Make a simple bar chart of the categorical variable gender and interpret the graph.
3. Make a clustered bar chart of the variables college and gender. What does this graph mean?
4. Make a clustered bar chart of the variables hot and smiley. Compare the smiley rating by hotness rating.
15.3.02: Chapter 3 Lab
Descriptive Statistics (Chapter 1, 2 required)
Open MINITAB file lab02.mpj from the website. This data represents information about 700 instructors from the popular website ratemyprofessors.com. All instructors are sampled from the Foothill‐De Anza Community College District. Here is a description of the data:
College Foothill or De Anza
Smiley Positive, Neutral, or Negative
Photo Instructor has a photo
Hot Instructor has a chili pepper
Gender Male or Female
Dept Academic Department (example ‐ Mathematics)
Division Academic Division (example ‐ PSME)
Num Number of Ratings for that faculty member
Overall Average Overall Quality Rating (1‐5 scale, lowest to highest)
Easiness Average Easiness Rating (1‐5 scale, hardest to easiest)
In Lab 1, we constructed some dot plots and made some interpretations of Average Overall Quality Rating. In Lab 2, we will look at other graphs and statistics that measure center, spread and relative standing.
1. Above is a dot plot you made of Average Overall Quality Rating in Lab 1. Make a histogram of the Average Overall Quality Rating. Paste the graph here. Are both graphs showing the same center, spread and shape? Explain your answer.
Descriptive Statistics can be found in Minitab under STAT>BASIC STATISTICS.
1. Use this command to determine the sample mean and sample median for the Average Overall Quality Rating. Paste the results here and answer these questions:
1. Which statistic is a better measure of center for this data? Explain your answer.
2. Are the values of the sample mean and median consistent with the shape of the histogram? Explain your answer.
Here are dot plots comparing Average Overall Quality Ratings of instructors who are rated "hot" vs. those who are rated "not hot"; these plots were made in Lab 1.
1. Under the GRAPHS menu bar in Minitab, create box plots of Average Overall Quality Ratings comparing "hot" and "not hot" instructors. Paste the results here and from the box plots, answer these questions.
1. Which group has a higher sample median?
2. For the "hot" instructors, between what values would you find the middle 50% of ratings?
3. For the "not hot" instructors, between what values would you find the middle 50% of ratings?
4. Are there any possible outliers for the "hot" instructors? Explain.
2. Under the STAT>BASIC STATISTICS menu, find descriptive statistics of overall ratings for both "hot" and "not hot" instructors. Paste the results here. Then answer the following questions:
1. Which group has a higher sample mean? Is this result consistent with your box plot?
2. Which group has a higher sample standard deviation? Is this result consistent with your box plot?
3. What is more unusual: a "hot" instructor with an Overall Rating of 3.5 or a "not hot" instructor with an Overall Rating of 3.5? Calculate and compare the Z‐scores for each instructor to answer this question.
4. Using the Empirical Rule, between what two Average Overall Quality Ratings would you find 68% of the "not hot" instructors?
3. Under the STAT>BASIC STATISTICS menu, find descriptive statistics of overall ratings split by college for both "Foothill" and "De Anza" instructors. Paste the results here. Then answer the following questions:
1. Which group has a higher sample mean? Is this result consistent with your box plot?
2. Which group has a higher sample standard deviation? Is this result consistent with your box plot?
3. What is more unusual: a "Foothill" instructor with an Overall Rating of 2.3 or a "De Anza" instructor with an Overall Rating of 2.3? Calculate and compare the Z‐scores for each instructor to answer this question.
4. Using the Empirical Rule, between what two Average Overall Quality Ratings would you find 68% of the "De Anza" instructors?
4. To make a scatterplot, use MINITAB>GRAPHS>SCATTERPLOT. To find correlation coefficients, use MINITAB>BASIC STATISTICS>CORRELATION.
1. Create a scatterplot in which the dependent variable is Overall and the independent variable is Num. Paste the graph here. Describe the strength, direction and linearity of the correlation.
2. Determine the correlation coefficient of Overall and Num. Is the result consistent with part a?
3. Create a scatterplot in which the dependent variable is Overall and the independent variable is Easiness. Paste the graph here. Describe the strength, direction and linearity of the correlation.
4. Determine the correlation coefficient of Overall and Easiness. Is the result consistent with part c? Why are these two variables correlated? Give at least two possible explanations.

15.3.03: Chapter 4 Lab
Experimental Design
1. Design a survey. You are going to ask other students in the class four questions, 2 of which you will create:
What is your gender (Male, Female, Other Answer)?
How many units are you currently taking?
Question 3 ________________________________________________________________________
Responses to Question 3 _____________________________________________________________
Question 4 ________________________________________________________________________
Responses to Question 4 _____________________________________________________________
1. Collect Data from Students in class ‐ ask as many students as possible the 4 questions. Put the responses here.
1. Is this an observational study or an experiment? Explain.
2. What type of sampling method did you use?
3. Now, enter the data in the Minitab worksheet lab03.mpj. There will be 4 columns: Gender, Units, Question3, Question4.
4. Create a graph that shows the percentage of each gender in your sample.
5. Create a graph that shows the distribution of Units.
6. Create a graph that shows the distribution of Question 3.
7. Create a graph that shows the distribution of Question 4.
8. Create a graph that shows the distribution of Question 3 by Gender.
9. Create a graph that shows the distribution of Question 4 by Units.
10. Write a paragraph describing the graphs, pointing out anything you found of interest.
15.3.04: Chapter 5 Lab
Cross‐tabulation and Two Way Tables
Open the Minitab file lab04.mpj from the website.
Here is a description of the data collected from elementary schools in Michigan:
1. Gender: (Boy, Girl)
2. Grade: 4, 5 or 6
3. Age: Age in years
4. Ethnicity: White, Other (Yes, that was the way it was reported when this data was collected!)
5. Location: Rural, Suburban, Urban
6. School: 1=Brentwood Elementary, 2=Brentwood Middle, 3=Ridge, 4=Sand, 5=Eureka, 6=Brown, 7=Main, 8=Portage, 9=Westdale Middle
7. Goals: Student's choice in the personal goals: 1=Make Good Grades, 2=Be Popular, 3=Be Good in Sports
8. Grades: Rank of "make good grades" (1=most important for popularity, 4=least important)
9. Sports: Rank of "being good at sports" (1=most important for popularity, 4=least important)
10. Looks: Rank of "being handsome or pretty" (1=most important for popularity, 4=least important)
11. Money: Rank of "having lots of money" (1=most important for popularity, 4=least important)
Cross Tabulation is a method of taking pairs of categorical variables and creating a two‐way table. The command can be found on the menu bar STAT>TABLES>CROSSTABULATION. Choose two data items and check that you want count, row percents and column percents. You can also make a clustered bar graph GRAPHS>BAR GRAPH>CLUSTERED. The example shows gender cross‐tabulated with grade level:
1. Cross‐tabulate Gender with Goal and create a two‐way table. Create a clustered bar graph. Paste them both here.
1. What is the probability a randomly selected student chooses sports as the most important goal? What type of probability is this (Marginal, Joint, or Conditional)?
2. What is the probability that a randomly selected student is a boy? What type of probability is this (Marginal, Joint, or Conditional)?
3. What is the probability that a randomly selected student is a boy and chooses sports as the most important goal? What type of probability is this (Marginal, Joint, or Conditional)?
4. What is the probability that a randomly selected boy chooses sports as the most important goal? What type of probability is this (Marginal, Joint, or Conditional)?
5. What conclusions can you make about Gender and Goal?
2. Cross‐tabulate Location with Goal and create a two‐way table. Create a pie graphs for Goal with a multiple variable Location on the same graph. Paste the cross‐tabulation and pie graphs here
1. What is the probability that a randomly selected student chooses sports as the most important goal?
2. What is the probability that a randomly selected suburban student chooses sports?
3. What is the probability that a randomly selected rural student chooses sports?
4. What is the probability that a randomly selected urban student chooses sports?
5. What conclusions can you make about Location and Goal?
3. Cross‐tabulate any two variables of your choice and create a two‐way table. Create a clustered bar graph. Paste them both here.
1. Calculate and explain any marginal probability of your choice.
2. Calculate and explain any joint probability of your choice.
3. Calculate and explain any conditional probability of your choice.
4. What conclusions can you make about these two variables?

15.3.05: Chapter 6 Lab
Open the MINITAB file lab05.mpj from the website.
Find Probabilities for a Binomial Random Variable (MINITAB>CALC>PROBABILITY DISTRIBUTIONS)
1. In a poll conducted in January 2015, 72% of American adults rated protecting freedom of speech ahead of not offending others. Assume this is the true proportion. You sample 64 American adults. Let \(X\) be the number in the sample who rated protecting freedom of speech ahead of not offending others.
1. Determine the probability that 44 American adults or fewer in the sample rated protecting freedom of speech ahead of not offending others. (Cumulative Probability) Is this result unusual?
2. Determine the probability that 56 American adults or more in the sample rated protecting freedom of speech ahead of not offending others. (Cumulative Probability plus Rule of Complement) Is this result unusual?
3. Create a Probability Distribution Plot of this binomial distribution (Under Graph Menu in Minitab).
4. What is the mean, variance and standard deviation of \(X\)?
5. Use the Empirical (68, 95, 99.7) Rule to determine between what two values would you expect to find 95% of the values of the random variable \(X\)? Is the result consistent with the graph?
Find Probabilities for a Poisson Random Variable (MINITAB>CALC>PROBABILITY DISTRIBUTIONS)
1. Strong earthquakes (of RM 5 or greater) occur on a fault at a Poisson rate of 1.45 per year.
1. Determine the probability of exactly 2 strong earthquakes in the next year. (Probability)
2. Determine the probability of at least 1 strong earthquake in the next year. (Cumulative Probability plus Rule of Complement)
3. Determine the probability of at least 1 strong earthquake in the next 3 years. (Cumulative Probability plus Rule of Complement)
4. Create a Probability Distribution Plot of this Poisson distribution (Under Graph Menu in Minitab).
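For readers without MINITAB, SciPy provides the same probability calculations; a minimal sketch (scipy assumed):

```python
from scipy.stats import binom, poisson

# Question 1: X ~ Binomial(n = 64, p = 0.72)
print(binom.cdf(44, 64, 0.72))   # P(X <= 44)
print(binom.sf(55, 64, 0.72))    # P(X >= 56) = 1 - P(X <= 55)
print(binom.mean(64, 0.72), binom.var(64, 0.72), binom.std(64, 0.72))

# Question 2: strong earthquakes at a Poisson rate of 1.45 per year
print(poisson.pmf(2, 1.45))      # exactly 2 in the next year
print(poisson.sf(0, 1.45))       # at least 1 in the next year
print(poisson.sf(0, 3 * 1.45))   # at least 1 in the next 3 years
```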
15.3.06: Chapter 7 Lab
Modeling Continuous Random Variables
Open the Minitab file lab6.mpj from the website.
Simulate a Uniform Random Variable
1. The Uniform random variable is described by two parameters, the minimum and the maximum. Each value between the minimum and the maximum has the same probability of being chosen, so the uniform random variable has a rectangular shape. In this simulation, we will model the amount of concrete in a building supply store, which follows a uniform distribution from 20 to 180 tons.
1. Using the formulas from the part 4 slides, find the population mean, median and standard deviation for this random variable.
2. Use the column heading Uniform Sim to save data and simulate 1000 trials in Minitab (use the menu item CALC>RANDOM DATA and choose Uniform.) Use the command STAT>BASIC STATISTICS>GRAPHICAL SUMMARY to calculate the sample mean, sample median and sample standard deviation of the simulated data as well as a box plot and histogram, and paste the output here. Compare the sample statistics to the corresponding population values you calculated in part a.
3. Describe the shape of the histogram. Does it appear to match the rectangular shape of the population probability graph shown above?
4. Identify the minimum and maximum values. Are they near the values 20 and 180 that you used to define the model?
Simulate an Exponential Random Variable
1. The Exponential random variable is described by one parameter, the expected value or $\mu$. The shape of the curve is an exponential decay model that we studied in Module 4. This random variable is often used to model the waiting time until an event occurs, in which the future waiting time is independent of the past waiting time. In this simulation, we will model trauma patients who arrive at a hospital’s Emergency Room at a rate of one every 7.2 minutes (7.2 minutes is the expected value).
1. Using the formulas from the part 4 slides, find the population mean, median and standard deviation for this random variable.
2. Use the column heading Exponential Sim to save data and simulate 1000 trials in Minitab (use the menu item CALC>RANDOM DATA and choose Exponential. The scale box will be $\mu$ and the Threshold box should remain at 0.0 ) Use the command STAT>BASIC STATISTICS>GRAPHICAL SUMMARY to calculate the sample mean, sample median and sample standard deviation of the simulated data as well as a box plot and histogram, and paste the output here. Compare the sample statistics to the corresponding population values you calculated in part a.
3. Describe the shape of the histogram. Does it appear to match the exponential decay shape of the population probability graph shown above?
4. Identify the minimum and maximum values. Determine if the maximum value is an extreme outlier.
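A parallel Python sketch for the exponential model is below (again assuming numpy). Note that for an exponential random variable the standard deviation equals the mean, and the population median is $\mu \ln 2$, noticeably below the mean because of the right skew.

```python
# Simulate 1000 ER inter-arrival times, Exponential with mean 7.2 minutes.
import numpy as np

rng = np.random.default_rng(seed=2)
mu = 7.2
sim = rng.exponential(scale=mu, size=1000)

print("population mean and sd:", mu, mu)          # equal for the exponential
print("population median:     ", mu * np.log(2))  # about 5.0 minutes
print("sample mean:           ", sim.mean())
print("sample median:         ", np.median(sim))
print("sample sd:             ", sim.std(ddof=1))
print("sample max:            ", sim.max())       # check for an extreme outlier
```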
Simulate a Normal Random Variable
1. The Normal random variable is described by two parameters, the expected value $\mu$ and the population standard deviation $\sigma$. The curve is bell‐shaped and frequently occurs in nature. In this simulation, we will model the popcorn cooking time, which follows a Normal random variable, with $\mu=4.75$ minutes and $\sigma=0.64$ minutes.
1. Use the column heading Normal Sim to save data and simulate 1000 trials (use the menu item CALC>RANDOM DATA and choose Normal.) Use the command STAT>BASIC STATISTICS>GRAPHICAL SUMMARY to calculate the sample mean, sample median and sample standard deviation of the simulated data as well as a box plot and histogram, and paste the output here.
2. Describe the shape of the histogram. Does it appear to match the bell‐shape of the population probability graph shown above?
3. Identify the minimum and maximum values. Determine the $Z$‐score of each. Do these values seem to be extreme outliers?
4. Compare the sample mean, median and standard deviation to the population values.
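A matching Python sketch for the popcorn model is below (numpy assumed). The z-scores of the sample extremes answer part 3: among 1000 draws from a normal distribution, values beyond roughly |z| = 3.3 would be unusual.

```python
# Simulate 1000 popcorn cooking times from Normal(mu=4.75, sigma=0.64).
import numpy as np

rng = np.random.default_rng(seed=3)
mu, sigma = 4.75, 0.64
sim = rng.normal(mu, sigma, size=1000)

print("sample mean:  ", sim.mean())
print("sample median:", np.median(sim))
print("sample sd:    ", sim.std(ddof=1))
print("z of minimum: ", (sim.min() - mu) / sigma)
print("z of maximum: ", (sim.max() - mu) / sigma)
```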
Central Limit Theorem
Open the Minitab file lab07.mpj from the website.
The lifetime of optical scanning drives follows a skewed distribution with $\mu=100$ and $\sigma=100$. The five columns labeled CLT $n$= represent 1000 simulated random samples of sizes 1, 5, 10, 30, and 100 from this population.
1. Make dot plots of all 5 sample sizes using the Multiple Y's Simple option and paste the result here.
1. As the sample size changes, describe the change in center.
2. As the sample size changes, describe the change in spread.
3. As the sample size changes, describe the change in shape.
1. Using the command STAT>DISPLAY DESCRIPTIVE STATISTICS, determine the mean and standard deviation for each of the five groups. Paste the results here.
1. The Central Limit Theorem states that the Expected Value of $\bar{X}$ is $\mu$. As the sample size increases, describe the change in mean. Is this consistent with the Central Limit Theorem?
2. The Central Limit Theorem states that the Standard Deviation of $\bar{X}$ is $\sigma / \sqrt{n}$. As the sample size increases, describe the change in standard deviation. Is this consistent with the Central Limit Theorem?
2. What you have observed are the three important parts of the Central Limit Theorem for the distribution of the sample mean $\bar{X}$. In your own words, describe these three important parts.
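The lab file already contains the five simulated columns, but the same experiment is easy to reproduce in Python. The sketch below models the skewed lifetime population as an exponential distribution, one convenient choice with mean = standard deviation = 100 (the lab does not say which skewed distribution was used, so treat this as an assumption).

```python
# Reproduce the CLT experiment: 1000 sample means at each sample size.
import numpy as np

rng = np.random.default_rng(seed=4)
mu = 100                                 # mean = sd = 100 for the exponential
for n in (1, 5, 10, 30, 100):
    xbars = rng.exponential(mu, size=(1000, n)).mean(axis=1)
    print(f"n={n:>3}: mean of x-bar = {xbars.mean():7.2f}, "
          f"sd of x-bar = {xbars.std(ddof=1):6.2f}, "
          f"theory sd = {mu / n**0.5:6.2f}")
```

The printed means stay near 100 while the standard deviation of the sample means tracks $\sigma / \sqrt{n}$; histogramming each row of output data would also show the shape becoming more bell-shaped as $n$ grows.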
15.3.08: Chapter 9 Lab
Open MINITAB file lab08.mpj from the website. This data represents information for 700 instructors from the popular website ratemyprofessors.com. All instructors are sampled from the Foothill‐De Anza Community College District. Here is a description of the data:
College Foothill or De Anza
Smiley Positive Neutral Negative
Photo Instructor has a photo
Hot Instructor has a chili pepper
Gender Male or Female
Dept Academic Department (example ‐ Mathematics)
Division Academic Division (example ‐ PSME)
Num Number of Ratings for that faculty member
Overall Average Overall Quality Rating (1‐5 scale, lowest to highest)
Easiness Average Easiness Rating (1‐5 scale, hardest to easiest)
The BASIC STATISTICS>GRAPHICAL SUMMARY feature of MINITAB allows you to create confidence intervals for the population mean and standard deviation. You can set the confidence level to what you want.
(Questions 1 to 7) For questions 4 to 7 you will need to use the "By Variables" option in MINITAB
1. Find a 95% confidence interval for the mean easiness rating of instructors. Analyze and interpret the confidence interval.
2. Find a 99% confidence interval for the mean easiness rating of instructors. Analyze and interpret the confidence interval. Compare your results to question 1 and explain why the confidence interval has a higher margin of error.
3. Find a 95% confidence interval for the standard deviation of easiness rating of instructors. Analyze and interpret the confidence interval.
4. Find 95% confidence intervals for the mean easiness rating of instructors at each college. Compare the confidence intervals. Do they seem to be different?
5. Find 95% confidence intervals for the mean easiness rating of instructors by gender. Compare the confidence intervals. Do they seem to be different?
6. Find 95% confidence intervals for the mean easiness rating of instructors by hotness rating. Compare the confidence intervals. Do they seem to be different?
7. Find 95% confidence intervals for the mean easiness rating of instructors by division. Compare the confidence intervals. Do they seem to be different?
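For reference, the same style of interval for the mean (questions 1 and 2 above) can be computed outside Minitab. A minimal Python sketch with scipy follows; the easiness ratings below are hypothetical stand-ins, since the actual values live in the lab's data file.

```python
# t-based confidence intervals for a mean, at two confidence levels.
import numpy as np
from scipy import stats

easiness = np.array([3.2, 2.8, 3.9, 3.1, 2.5, 3.6, 3.3, 2.9])  # hypothetical
n, xbar, s = len(easiness), easiness.mean(), easiness.std(ddof=1)

for level in (0.95, 0.99):
    lo, hi = stats.t.interval(level, df=n - 1, loc=xbar, scale=s / n**0.5)
    print(f"{level:.0%} CI for the mean: ({lo:.3f}, {hi:.3f})")
# The 99% interval is wider: a larger t* multiplier raises the margin of error.
```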
Confidence Intervals for proportions can be run in Minitab using the command STAT>BASIC STATISTICS>1 PROPORTION.
1. Analyze the proportion of male instructors.
1. Find a 95% Confidence Interval for the proportion of male instructors. What is the Margin of Error?
2. Interpret this confidence Interval.
3. Would you support a claim that women are underrepresented at these colleges? Explain.
You can use the Minitab command DATA>SUBSET WORKSHEET to look at an individual department, for example. We want to create a worksheet of just Mathematics Instructors, so select Condition and set 'Dept'="Mathematics".
1. Find a 95% Confidence Interval for the proportion of male mathematics instructors.
1. Interpret this confidence Interval.
2. Would you support a claim that women are underrepresented in the Mathematics Departments at these two colleges? Explain.
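The one-proportion interval can likewise be checked by hand or in code. The sketch below uses the normal-approximation formula \(\hat{p} \pm z^{*}\sqrt{\hat{p}(1-\hat{p})/n}\); the counts are hypothetical stand-ins for the instructor data.

```python
# Normal-approximation confidence interval for a proportion.
from scipy.stats import norm

x, n = 120, 200          # hypothetical: 120 male instructors out of 200
p_hat = x / n
z_star = norm.ppf(0.975)                        # z* for 95% confidence
moe = z_star * (p_hat * (1 - p_hat) / n) ** 0.5
print(f"p-hat = {p_hat:.3f}, margin of error = {moe:.3f}")
print(f"95% CI: ({p_hat - moe:.3f}, {p_hat + moe:.3f})")
```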
One Population Hypothesis Testing
Year Year of Sale
Price Sale price in \$Thousands
Bedrooms Number of bedrooms
SqrFeet Size of home in 100's of square feet
Pool Does a home have a pool ? (Yes/No)
Garage Does a home have a garage? (Yes/No)
Bath Number of Bathrooms
Distance Distance in miles from city center
City City Region (Fresno, Los Angeles, Sacramento, San Francisco, San Jose)
School School District Rating (Poor, Fair, Good, Excellent)
1. You want to conduct a hypothesis test about the mean home prices in California using the housing data file: housing.mpj. At the 1% significance level, design the test for the hypothesis that the mean housing price is over \$850,000.
1. First create a dotplot for the price data, and paste the results here. Does the value \$850,000 seem to be at the center of the data, above the center of the data, or below the center of the data?
2. State the null and alternative hypotheses in words.
3. State the null and alternative hypotheses in population parameters.
4. What model are you choosing and what assumptions are needed? Do you think the skewness and high outlier are a problem in choosing this model?
5. Conduct the test at a significance level of 1%, using MINITAB command Stat>Basic Statistics>1 Population \(t\)‐test. Make sure you choose options to set \(H_a\). Paste the results here. All price data is in \$thousands, so you would enter \$850,000 as 850.
6. Do you reject or fail to reject \(H_o\)?
7. State your conclusion in the context of the problem.
8. Using the online or Minitab power calculator, determine the power of the test if the population mean is really \$900,000. Assume the standard deviation is \$450,000. (Remember the data is entered in \$ thousands).
9. Using the online or Minitab power calculator, determine the sample size needed to have 95% power for the test.
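A computational cross-check for question 1 above: scipy's one-sample t-test supports one-sided alternatives (scipy 1.6 or later). The prices below are hypothetical stand-ins for housing.mpj, entered in $ thousands as in the lab.

```python
# One-sample t-test of Ho: mu = 850 vs Ha: mu > 850 ($ thousands).
import numpy as np
from scipy import stats

prices = np.array([720, 910, 1450, 680, 990, 860, 2100, 770, 930, 1100])
t_stat, p_val = stats.ttest_1samp(prices, popmean=850, alternative='greater')
print(f"t = {t_stat:.3f}, one-sided p-value = {p_val:.4f}")
# Reject Ho at the 1% significance level only if p_val < 0.01.
```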
2. You want to conduct a hypothesis test about the standard deviation of home prices in California using the housing data file: housing.mpj. At the 5% significance level, design a test to support the claim that the standard deviation of housing prices is not \$400,000.
1. State the null and alternative hypotheses in words.
2. State the null and alternative hypotheses in population parameters.
3. What model are you choosing and what assumptions are needed?
4. Conduct the test at a significance level of 5%, using MINITAB command Stat>Basic Statistics>1 Variance. Make sure you choose options to set \(H_a\). Paste the results here.
5. Do you reject or fail to reject \(H_o\)?
6. State your conclusion in the context of the problem.
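For question 2 above, scipy has no built-in one-sample variance test, but the chi-square statistic \((n-1)s^2 / \sigma_0^2\) is easy to compute directly; a minimal sketch with the same hypothetical prices:

```python
# Chi-square test of Ho: sigma = 400 vs Ha: sigma != 400 ($ thousands).
import numpy as np
from scipy.stats import chi2

prices = np.array([720, 910, 1450, 680, 990, 860, 2100, 770, 930, 1100])
n, s = len(prices), prices.std(ddof=1)
chi2_stat = (n - 1) * s**2 / 400**2

# Two-sided p-value: double the smaller of the two tails.
p_val = 2 * min(chi2.cdf(chi2_stat, n - 1), chi2.sf(chi2_stat, n - 1))
print(f"chi-square = {chi2_stat:.2f}, df = {n - 1}, p-value = {p_val:.4f}")
```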
3. For the housing data above, we want to support the claim that the percentage of homes in California with garages is over 60%. We are going to conduct a Hypothesis Test using a significance level of 10%.
1. State the null and alternative hypotheses in words.
2. State the null and alternative hypotheses in population parameters.
3. Create a bar chart of garages and under Chart Option, click the box to show \(y\) as a percentage. Does the bar graph support the claim that more than 60% of homes have garages?
4. What model are you choosing and what assumptions are needed?
5. Using the online power calculator, determine the power of the test if the population proportion under \(H_a\) is 0.65
6. Conduct the test at a significance level of 10%, using MINITAB command Stat>Basic Statistics>1 Proportion. Make sure you choose options to set \(H_a\). Paste the results here.
7. Do you reject or fail to reject \(H_o\)?
8. State your conclusion in the context of the problem.
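For question 3, the one-proportion z-test can also be computed directly; note that the standard error under \(H_o\) uses the hypothesized \(p_0\), not the sample proportion. The counts below are hypothetical stand-ins for the garage column.

```python
# One-proportion z-test of Ho: p = 0.60 vs Ha: p > 0.60.
from scipy.stats import norm

x, n, p0 = 68, 100, 0.60
p_hat = x / n
z = (p_hat - p0) / (p0 * (1 - p0) / n) ** 0.5   # SE uses p0 under Ho
p_val = norm.sf(z)                              # right-tail p-value
print(f"z = {z:.3f}, p-value = {p_val:.4f}")
```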
Two Population Hypothesis Testing
Open MINITAB file lab10.mpj from the website.
The National Basketball Association (NBA) announced that a new basketball would be used for the 2006–2007 season. Here is the announcement from the NBA about the new ball:
The NBA is introducing a new Official Game Ball for play beginning in the 2006–07 season. The new synthetic ball, manufactured by Spalding, features a new design and a new material that together offer better grip, feel, and consistency than the current leather ball. This marks the first change to the ball in over 35 years and only the second in 60 seasons.
Players in the NBA complained about the new ball, saying the ball reduced their performance. The NBA announced that the traditional leather ball would be used again beginning January 1, 2007.
For the following 4 problems, analyze data from NBA games that show the home team score and visiting team score for games played with the original leather ball and with the new synthetic ball. You will then conduct the following hypothesis tests. Make sure you show all steps:
1. Test for a difference in Standard deviation in home team score due to the type of ball.
1. State \(H_o\) and \(H_a\).
2. State the model used and the assumptions needed.
3. Conduct the test at a significance level of 5% ‐ paste results.
4. State the decision (Reject or Fail to Reject \(H_o\)).
5. State the appropriate conclusion in the context of the original problem.
2. Test for a difference in mean home team score due to the type of ball.
1. State \(H_o\) and \(H_a\).
2. Is this model independent or dependent sampling? Explain.
3. State the model used and the assumptions needed. Use the \(F\)‐test from question 1 if you have independent sampling.
4. Conduct the test at a significance level of 5% ‐ paste results.
5. State the decision (Reject or Fail to Reject \(H_o\))
6. State the appropriate conclusion in the context of the original problem.
7. Make grouped box plots of the home score by type of ball. Is the graph consistent with your decision?
3. Test for a difference in mean visiting team score due to the type of ball.
1. State \(H_o\) and \(H_a\).
2. Is this model independent or dependent sampling? Explain.
3. State the model used and the assumptions needed. You will need to conduct the \(F\)‐test if you have independent sampling.
4. Conduct the test at a significance level of 5% ‐ paste results.
5. State the decision (Reject or Fail to Reject \(H_o\))
6. State the appropriate conclusion in the context of the original problem.
7. Make grouped box plots of the visiting score by type of ball. Is the graph consistent with your decision?
4. Test for a mean difference in scores between home team and visiting team.
1. State \(H_o\) and \(H_a\).
2. Is this model independent or dependent sampling? Explain.
3. State the model used and the assumptions needed. You will need to conduct the \(F\)‐test only if you have independent sampling.
4. Conduct the test at a significance level of 5% ‐ paste results.
5. State the decision (Reject or Fail to Reject \(H_o\))
6. State the appropriate conclusion in the context of the original problem.
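The four designs above differ in an important way: questions 1 to 3 compare independent samples (games played with one ball versus the other), while question 4 pairs the home and visiting scores from the same games. The sketch below shows both styles of test in Python with scipy; the scores are hypothetical stand-ins for the lab data.

```python
# Independent versus paired two-sample t-tests.
import numpy as np
from scipy import stats

home_leather  = np.array([98, 104, 91, 110, 95, 102, 99, 107])
home_synth    = np.array([92, 101, 88, 97, 94, 90, 99, 96])
visit_leather = np.array([93, 106, 88, 102, 96, 98, 99, 101])  # same games as home_leather

# Independent samples: Welch's t-test avoids assuming equal variances.
print(stats.ttest_ind(home_leather, home_synth, equal_var=False))

# Dependent (paired) samples: home vs. visitor in the same games.
print(stats.ttest_rel(home_leather, visit_leather))
```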
15.3.11: Chapter 12 Lab
Chi‐square tests for categorical data
Open MINITAB file lab11.mpj from the website.
1. A sample of motor vehicle deaths for a recent year in Montana is broken down by day of the week. Test the claim that fatalities occur with equal frequency on the different days ($\alpha =5\%$).
Sun Mon Tue Wed Thu Fri Sat
35 21 22 18 23 29 45
1. State the null and alternative hypotheses in words.
2. State the null and alternative hypotheses in population parameters.
3. What model are you choosing and what assumptions are needed?
4. The data is in the first 2 columns of the Minitab worksheet. Conduct the test at a significance level of 5%, using MINITAB command: Stat>Table > Chi Square Goodness of Fit. Set the Observed Counts to the column you just entered and choose Equal Proportions. Paste the results here.
5. Do you reject or fail to reject $H_o$? Then state your conclusion in the context of the problem.
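The goodness-of-fit calculation for question 1 can be verified with scipy, since the observed counts are given in the table above and the null hypothesis of equal frequency makes the expected counts equal by default:

```python
# Chi-square goodness-of-fit test for the Montana fatality counts.
from scipy.stats import chisquare

observed = [35, 21, 22, 18, 23, 29, 45]   # Sun through Sat, from above
result = chisquare(observed)              # expected defaults to equal counts
print(result)                             # compare the p-value to alpha = 0.05
```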
1. Pew Research conducted a poll of 2000 American adults asking whether they Favor or Oppose same‐sex marriage. The data is summarized in a two‐way table entered in the Minitab file. Conduct a hypothesis test to determine whether Americans' opinions about same‐sex marriage are related to age.
1. Before conducting the test, determine the percentage in each group that supports same sex marriage. Describe the trend.
2. Now we will conduct the test. State the null and alternative hypotheses.
3. What model are you choosing and what assumptions are needed?
4. The table has been entered in columns 4 to 7 of the Minitab file. Conduct the test at a significance level of 1%, using MINITAB command: Stat>Table > Crosstabulation/Chi Square. Choose Summarized Data. Highlight the columns that contain the table. Paste the results here.
1. Do you reject or fail to reject $H_o$? Then state your conclusion in the context of the problem.
For questions 3 and 4, the Popular data set (a survey of elementary school students) starts in column 9. Use the MINITAB command Stat>Table > Crosstabulation. Choose Raw Data. To run the Chi Square test of independence, click Chi‐square and check the appropriate options. Run these tests at a significance level of 5%.
1. Test for dependence between location and goal for elementary school students.
1. State the null and alternative hypotheses.
2. Run the test and paste the results here.
3. Do you reject or fail to reject $H_o$? Then state your conclusion in the context of the problem.
2. Test for dependence between gender and goal for elementary school students.
1. State the null and alternative hypotheses.
2. Run the test and paste the results here.
3. Do you reject or fail to reject $H_o$? Then state your conclusion in the context of the problem.
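The test of independence for a two-way table takes one call in scipy; because the lab's summarized table is not reproduced here, the counts below are hypothetical placeholders that only show the mechanics.

```python
# Chi-square test of independence on a two-way table of counts.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[80, 40],    # rows: age groups; columns: Favor, Oppose
                  [70, 55],    # (hypothetical counts)
                  [60, 95]])
chi2_stat, p_val, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2_stat:.2f}, df = {dof}, p-value = {p_val:.4f}")
```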
Analysis of Variance
Open MINITAB file lab12.mpj from the website.
1. You want to address the question: “Is there a difference in overall quality due to Division?” There are five divisions (some were combined): Certificate Programs/Other, Creative Arts/Physical Ed, Social Studies/Humanities/Business, Language/International/Multicultural, Physical and Health Science. Conduct the test at a significance level of 1%.
1. What is response and what is the factor? How many levels?
2. State the hypotheses in words and parameters.
3. Run the appropriate one factor ANOVA test (use columns 1 and 2 from data). Make sure you select the Tukey Test under the Comparisons options. Paste the results here, including a graph comparing the means.
4. State a detailed conclusion using both the ANOVA results and the Tukey test results.
2. Columns 3‐5 of the Minitab file represent annual pay in \$ thousands for randomly sampled workers in San Jose, California, Ann Arbor, Michigan and Dallas, Texas. Test for a difference in mean pay among the three cities. Choose a significance level of 5%.
1. What is the response variable and what is the factor variable? How many levels are there?
2. State the hypotheses in words and parameters.
3. Run the appropriate one factor ANOVA test. Make sure you select the Tukey Test under the Comparisons options. Paste the results here, including a graph comparing the means.
4. State a detailed conclusion using both the ANOVA results and the Tukey test results.
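Both steps of this analysis, the overall F-test and the Tukey follow-up, have Python equivalents. Here is a minimal sketch assuming scipy and statsmodels are installed, with hypothetical pay values in $ thousands standing in for columns 3‐5:

```python
# One-factor ANOVA followed by Tukey's pairwise comparisons.
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

san_jose  = [112, 98, 135, 120, 104, 127]   # hypothetical pay, $ thousands
ann_arbor = [74, 81, 69, 88, 77, 83]
dallas    = [82, 95, 78, 90, 87, 84]

print(f_oneway(san_jose, ann_arbor, dallas))     # overall F-test

pay  = san_jose + ann_arbor + dallas
city = ['SJ'] * 6 + ['AA'] * 6 + ['DA'] * 6
print(pairwise_tukeyhsd(pay, city, alpha=0.05))  # which pairs differ
```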
15.3.13: Chapter 14 Lab
Simple Linear Regression
Open MINITAB file lab13.mpj from the website: this file contains geographic and weather data for several California cities.
1. First design a regression model in which Latitude (degrees north) is the independent variable and Precipitation (annual rainfall in inches) is the response. Run Minitab Stat>Regression>Fitted Line Plot
1. Make a scatterplot and graph the least squares line. Interpret the slope.
2. Conduct the appropriate hypothesis test for a significant correlation between precipitation and latitude using a significance level of 5%.
3. Find and interpret \(r^2\).
4. Run Minitab Stat>Regression>Regression>Fit Regression Model. Then find a 95% confidence interval for the expected precipitation for a city at latitude 40 degrees north using Stat>Regression>Regression>Predict. Interpret the interval.
2. Next design a regression model in which Altitude (feet above sea level) is the independent variable and Precipitation (annual rainfall in inches) is the response. Run Minitab Stat>Regression>Fitted Line Plot
1. Make a scatterplot and graph the least squares line. Interpret the slope.
2. Conduct the appropriate hypothesis test for a significant correlation between precipitation and altitude, using a significance level of 5%.
3. Find and interpret \(r^2\).
4. Run Minitab Stat>Regression>Regression>Fit Regression Model. Click Results and change Fits and Diagnostics to "For all observations". Then find a 95% prediction interval for the precipitation for a city at an altitude of 150 feet using Stat>Regression>Regression>Predict.
5. Interpret the interval. Analyze the residuals. Which city fits the model best? Which city fits the model worst?
3. Finally, design a regression model in which Distance from Coast (in miles) is the independent variable and Precipitation (annual rainfall in inches) is the response. Run Minitab Stat>Regression>Fitted Line Plot
1. Make a scatterplot and graph the least squares line. Interpret the slope.
2. Conduct the appropriate hypothesis test for a significant correlation between precipitation and distance from coast using a significance level of 5%.
3. Find and interpret \(r^2\).
4. Looking at the scatterplot, it seems that a non‐linear regression model might be a better fit for precipitation and distance from coast. Rerun the fitted line plot but choose cubic instead of linear. Paste the graph here. Under this model, what percentage of the variability in precipitation is explained by distance from coast?
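The linear pieces of this lab (slope, \(r^2\), and the test for significant correlation) map directly onto scipy's linregress. The city values below are hypothetical stand-ins for the lab13.mpj worksheet.

```python
# Simple linear regression of precipitation on latitude.
from scipy.stats import linregress

latitude = [32.7, 34.1, 36.7, 37.8, 38.5, 40.8, 41.3]   # degrees north
precip   = [10.3, 14.8, 11.2, 23.6, 18.5, 37.5, 39.8]   # inches per year

fit = linregress(latitude, precip)
print(f"slope = {fit.slope:.2f} inches per degree of latitude")
print(f"r^2 = {fit.rvalue**2:.3f}")        # share of variability explained
print(f"p-value for Ho: slope = 0 is {fit.pvalue:.4f}")
```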
15.04: Flash Animations
I have designed four interactive Flash animations that will provide students with deeper insight into the major concepts of inference and hypothesis testing. These animations are on the website http://nebula2.deanza.edu/~mo/.
Central Limit Theorem (Chapter 8)
Using die rolling with progressively increasing sample sizes, this animation shows the three main properties of the Central Limit Theorem.
Inference Process (Chapter 9)
This animation walks a student through the logic of statistical inference and is presented just before confidence intervals and hypothesis testing.
Confidence Intervals (Chapter 9)
This animation compares hypothesis testing to an unusual method of playing darts and compares it to a practical example from the 2008 presidential election.
Statistical Power in Hypothesis Testing (Chapter 10)
This animation explains power, Type I and Type II error conceptually, and demonstrates the effect of changing model assumptions.
15.05: PowerPoint Slides
I have developed PowerPoint Slides that follow the material presented in this text. This material is presented on the text website as a slideshow. There are also note pages that can be downloaded. | textbooks/stats/Introductory_Statistics/Inferential_Statistics_and_Probability_-_A_Holistic_Approach_(Geraghty)/15%3A_Appendix/15.03%3A_MINITAB_Labs/15.3.12%3A_Chapter_13_Lab.txt |
1. Talk of the Nation, National Public Radio Archives, http://www.npr.org/
2. John Cimbaro, Fish Anatomy, http://www.fws.gov/midwest/lacrossef...hotoAlbum.html
3. Chen Zheng‐Long, Chinese Koi Fish, http://www.orientaloutpost.com/prodd...l.php?prod=czl‐kf135‐1
4. Richard Christian Looijen, Holism and Reductionism in Biology and Ecology: The Mutual Dependence of Higher and Lower Level Research Programmes, Springer, 2000
5. The Poems of John Godfrey Saxe (Highgate Edition), Boston: Houghton, Mifflin and Company, 1881
6. Donna Young, American Society of Health System Pharmacists, April 6, 2007, http://www.ashp.org/import/News/Heal...e.aspx?id=2517
7. The Lancet, news release, June 29, 2009, http://www.nlm.nih.gov/medlineplus/n...ory_86206.html
8. Creative Commons, https://creativecommons.org/licenses/by‐sa/4.0/
9. Apple, Inc. (AAPL) (2017). Profile, business summary. Yahoo! Finance. Retrieved from https://finance.yahoo.com/quote/AAPL?p=AAPL
10. Gallup Organization (2017). Polling on Crime. Retrieved from http://www.gallup.com/poll/1603/crime.aspx
11. Pew Research Center (2013). Crime rises among second‐generation immigrants as they assimilate. Retrieved from http://www.pewresearch.org/fact‐tank/2013/10/15/crime‐rises‐among‐second‐generation‐immigrants‐as‐they‐assimilate/
12. Statista, the Statistics Portal (2017). Reported violent crime rate in the United States from 1990 to 2015. Retrieved from https://www.statista.com/statistics/191219/reported‐violent‐crime‐rate‐in‐the‐usa‐since‐1990/
13. The Next Big Future (2008). Deaths per TWH by energy source. Retrieved from https://www.nextbigfuture.com/2011/03/deaths‐per‐twh‐by‐energy‐source.html
14. 2000 United States Census, Sample of 500 adults from Santa Clara County, CA, 2000
15. Reuter/Ispos Polling, Approve/Disapprove of President Trump, August 17, 2017, http://polling.reuters.com/#poll/CP3_2/, August 18, 2017
16. ABC News Washington Post Poll, Biggest Gender Gaps in Job Approval. Retrieved from http://abcnews.go.com/Politics/28‐approve‐trumps‐response‐charlottesville‐poll/story?id=49334079, August 21, 2017.
17. mediamatters.org, Dishonest Fox Charts: Obamacare Enrollment Edition (2014). Retrieved from https://www.mediamatters.org/blog/20...3/31/dishonest‐fox‐charts‐obamacare‐enrollment‐editi/198679, August 26, 2017.
18. By Rodolfo Hermans (Godot) at en.wikipedia. ‐ Own work; transferred from en.wikipedia by Rodolfo Hermans (Godot)., CC BY‐SA 3.0, https://commons.wikimedia.org/w/inde...?curid=4567445
19. African American College Students in Computer Room, E‐Learning Africa News, http://ela‐newsportal.com/what‐matters‐is‐government‐policy‐on‐creating‐local‐open‐educational‐resources/african‐american‐college‐students‐in‐computer‐room‐3/, Feb 2013
20. By David Adam Kess (Own work) [CC BY‐SA 4.0 (http://creativecommons.org/licenses/by‐sa/4.0)], via Wikimedia Commons
21. By Benjamin D. Esham / Wikimedia Commons, CC BY‐SA 3.0 us, https://commons.wikimedia.org/w/inde...?curid=3433510
22. Axios, Irma captured America's attention more than other storms, Sept 25, 2017
23. By English: Airman Bo J. Flannigan, U.S. Navy [Public domain], via Wikimedia Commons
24. By LPS.1 ‐ Own work, CC0, https://commons.wikimedia.org/w/inde...curid=32591423
25. Lightbulb Books, The Average Bears: Mr. Mean, Mr. Median & Mr. Mode, http://www.lightbulbbooks.com/blog/wp‐content/uploads/Mean‐Median‐Mode.jpg
26. Skewness graph, A Square School, http://www.asquareschool.com/wp‐content/uploads/2015/08/skewness.jpg
27. "Daily high temperatures for downtown San Francisco and St. Louis Airport" NOAA National Centers for Environmental Information, 2016. Web. 5 Sep 2017. https://www.ncdc.noaa.gov/data‐access
28. CC BY‐SA 1.0, https://commons.wikimedia.org/w/index.php?curid=10087
29. By Daniel Schwen ‐ Own work, CC BY‐SA 4.0, https://commons.wikimedia.org/w/inde...?curid=6814969
30. Taleb, Nicholas, The Black Swan: The Impact of the Highly Improbable, Penguin, 2007.
31. Taleb, Nicholas, The Black Swan: The Impact of the Highly Improbable, Penguin, 2007.
32. PhysicalGeography.net Fundamentals eBook, The Science of Physical Geography, 2015, http://www.physicalgeography.net/fundamentals/3h.html
33. Nable, Mosley, Witt, Davis, Is GPA affected by hours studying, classes missed, and age?, February 2012, StatCrunch, https://www.statcrunch.com/5.0/viewr...reportid=23993
34. Leak, William B, Relationships of Tree Age to Diameter in Old‐Growth Northern Hardwoods and Spruce‐Fir, United States Department of Agriculture, 1985.
35. StatCrunch, Guns and Gun Deaths by Country, August 2016, https://www.statcrunch.com/app/index...dataid=1880699
36. National Geographic, Nick Cage Movies Vs. Drownings, and more strange (but spurious) correlations, Illustration Photographs by (L) Fotos International, Getty (R) Bernadett Szabo, Corbis, September 2015, http://phenomena.nationalgeographic....015/09/11/nick‐cage‐movies‐vs‐drownings‐and‐more‐strange‐but‐spurious‐correlations/
37. Sharks Ice Cream Store, Bloomfield New York, http://www.sharksicecream.com/
38. BBC, Bitsize ‐ Discussing Results, Drawing Inferences and Conclusions, http://www.bbc.co.uk/education/guide...dmn/revision/3, 2017
39. Munroe, Randall, Correlation, XKCD, https://imgs.xkcd.com/comics/correlation.png, 2016
40. "Censuses: Costing the count". The Economist. Jun, 2011.
41. Pew Research Center, 15% of American Adults Have Used Online Dating Sites or Mobile Dating Apps, Feb 2016, http://www.pewinternet.org/2016/02/11/15‐percent‐of‐american‐adults‐have‐used‐online‐dating‐sites‐or‐mobile‐dating‐apps/
42. By Corpse Reviver (Own work) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC‐BY‐SA‐3.0 (http://creativecommons.org/licenses/by‐sa/3.0/)], via Wikimedia Commons
43. Mentzoni, et. al., Tempo in electronic gaming machines affects behavior among at‐risk gamblers , Journal of Behavioral Addictions, Sep 22, 2012.
44. By Charles O'Rear (1941—), Environmental Protection Agency http://www.flickr.com/photos/usnatio...es/3678468445/ Public Domain, https://commons.wikimedia.org/w/inde...?curid=9384331
45. Z. Ondogan, O. Pamuk, E.N. Ondogan, A. Ozguney (2005)."Improving the Appearance of All Textile Products from Clothing to HomeTextile Using Laser Technology," Optics and Laser Technology, Vol. 37, pp. 631‐637.
46. By Dan Kernler (Own work) [CC BY‐SA 4.0 (http://creativecommons.org/licenses/by‐sa/4.0)], via Wikimedia Commons
47. TJ's Flying Adventure, Random airport traffic light, http://www.tjflyingadventures.com/2012/
48. By Dan Kernler ‐ Own work, CC BY‐SA 4.0, https://commons.wikimedia.org/w/inde...curid=36506022
49. By Dan Kernler ‐ Own work, CC BY‐SA 4.0, https://commons.wikimedia.org/w/inde...curid=36506021
50. Pew Research Center (2016). Social Meida Conversations about race, http://www.pewinternet.org/2016/08/15/social‐media‐conversations‐about‐race/
51. By Dan Kernler ‐ Own work, CC BY‐SA 4.0, https://commons.wikimedia.org/w/inde...curid=36506019
52. Morin, Parker, Stepler, Mercer. Behind the Badge, Pew Research Center (2017). http://www.pewsocialtrends.org/2017/01/11/behind‐the‐badge/
53. By Ed Yourdon from New York City, USA (Helping the homeless Uploaded by Gary Dee) [CC BY‐SA 2.0 (https://creativecommons.org/licenses/by‐sa/2.0)], via Wikimedia Commons
54. Bill Wilson Center of Santa Clara County, Count Me! Hidden in Plain Sight: Documenting Homeless Youth Populations in 2017, September 2017
55. Rogers, Katie. Boaty McBoatface: What You Get When You Let the Internet Decide, The New York Times, March 21, 2016.
56. 9gag.com, Boaty McBoatface wins \$370M ship naming competition, these are the other names in the poll, https://9gag.com/gag/aq5Bg2j/boaty‐mcboatface‐wins‐370m‐ship‐naming‐competition‐these‐are‐the‐other‐names‐in‐the‐poll
57. The Two‐way, Breaking News from NPR, http://www.npr.org/sections/thetwo‐way/2017/03/13/519976028/boaty‐mcboatface‐prepares‐for‐first‐antarctic‐mission, March 2017
58. FivethirtyEight.com, Al Gore’s New Movie Expose the Big Flaw in Online Movie Ratings, Sept 2017, https://fivethirtyeight.com/features/al‐gores‐new‐movie‐exposes‐the‐big‐flaw‐in‐online‐movie‐ratings/
59. The Hill, GOP rep's Obamacare Twitter poll backfires, January 4, 2017, http://thehill.com/blogs/in‐the‐know/in‐the‐know/312674‐gop‐reps‐twitter‐poll‐doesnt‐go‐as‐planned
60. Mathios, Diane. De Anza College, Notes on Selection and Response Biases
61. By Donald Trump August 19, 2015 (cropped).jpg: BU Rob13 Hillary Clinton by Gage Skidmore 2.jpg: Gage [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY‐SA 4.0 (https://creativecommons.org/licenses/by‐sa/4.0)], via Wikimedia Commons
62. Pew Research Center (2016). Why 2016 Election Polls Missed the Mark, http://www.pewresearch.org/fact‐tank/2016/11/09/why‐2016‐election‐polls‐missed‐their‐mark/
63. Dropp & Nyhan, Nearly Half of Americans Don’t Know Puerto Ricans Are Fellow Citizens, The New York Times, September 26, 2017. https://www.nytimes.com/2017/09/26/upshot/nearly‐half‐of‐americans‐ dont‐know‐people‐in‐puerto‐ricoans‐are‐fellow‐citizens.html?_r=0
64. CNN, Election 2016, National President Exit Polls, November 23, 2016, http://www.cnn.com/election/results/exit‐polls/national/president
65. Data and photo retrieved from the National Hurricane Center, NOAA, http://www.nhc.noaa.gov/
66. By Keith Allison from Hanover, MD, USA ‐ Draymond Green, CC BY‐SA 2.0, https://commons.wikimedia.org/w/inde...curid=46776515
67. By Heather Smith (The Alloy Valve Stockist's photo gallery.) [CC BY‐SA 3.0 (https://creativecommons.org/licenses/by‐sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons
68. Shell Oil Refinery, Martinez CA. CC SA 1.0, https://commons.wikimedia.org/w/inde...p?curid=110538
69. SountTransit, Sounder Commuter train. https://www.soundtransit.org/sounder. Schedule retrieved October 14, 2017
70. Fisher, Stacy B, The California Community Colleges Board of Governors Fee Waiver: A Comparison of State Aid Programs, California Community Colleges Chancellor’s Office, Jan 2016
71. Ronald Walpole & Raymond Meyers & Keying Ye, Probability and Statistics for Engineers and Scientists. Pearson Education, 2002, 7th edition.
72. Taleb, Nicholas, The Black Swan: The Impact of the Highly Improbable, Penguin, 2007.
73. Food and Drug Administration, FDA Consumer Magazine , Jan/Feb 2003
74. Mark Blumenthal, Is Polling as we Know it Doomed?, The National Journal Online, http://www.nationaljournal.com/njonl...90810_1804.php, August 10, 2009
75. Russ Lenth, Java Applets for Power and Sample Size, University of Iowa , http://www.stat.uiowa.edu/~rlenth/Power/ , 2009
76. J. B. Orris, MegaStat for Excel, Version 10.1, Butler University, 2007
77. The American Statistical Association, Statement on Statistical Significance and P‐Values ,March 7, 2016
78. Trafimow and Marks, Editorial, Basic and Applied Social Psychology, Volume 37, 2015, Issue 1.
79. Munroe, Randall, XKCD, Significant, https://xkcd.com/882/, 2013
80. Munroe, Randall, XKCD, P‐values, https://xkcd.com/1478/, 2015
81. Shlomo S. Sawilowsky, Fermat, Schubert, Einstein, and Behrens‐Fisher: The Probable Difference Between Two Means When $\sigma_{1}^{2} \neq \sigma_{2}^{2}$, Journal of Modern Applied Statistical Methods, Vol. 1, No 2, Fall 2002
82. Mastin, Luke, Right Left, Right Wrong? An Investigation of Handedness, http://www.rightleftrightwrong.com/statistics.html, 2012
83. Feuer, Alan. AR‐15 Rifles Are Beloved, Reviled and a Common Element in Mass Shootings, New York Times, https://www.nytimes.com/2016/06/14/nyregion/ar‐15‐rifles‐are‐beloved‐reviled‐and‐a‐common‐element‐in‐mass‐shootings.html, June 2016
84. Pew Research Center, Opinions on Gun Policy and the 2016 Campaign, Aug 2016, http://www.people‐press.org/2016/08/26/opinions‐on‐gun‐policy‐and‐the‐2016‐campaign/
85. Lowry, Richard. One Way ANOVA – Independent Samples. Vassar.edu, 2011
86. NIST/SEMATECH e‐Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/, Section 7.4.7.1.Tukey's Method, April, 2016
87. Munroe, Randall, XKCD, Linear Regression, http://xkcd.com/1725/, 2016
Additional reference used but not specifically cited: Dean Fearn, Elliot Nebenzahl, Maurice Geraghty, Student Guide for Elementary Business Statistics, Kendall/Hunt, 2003
16 years ago, I was co‐author of a Business Statistics textbook that was published by a boutique publisher. The textbook was expensive for students and I received little compensation for the work put into this text. Then Chancellor Martha Kanter of the Foothill‐De Anza Community College District initiated a movement to bring free Open Educational Resources, including textbooks, to our students. Following the lead of two of my colleagues, Barbara Illowsky and Susan Dean, I decided that any material I create will be provided to students free of charge. Whenever possible, I try to use online resources to help students who are suffering financial hardship from the cost of college.
In order to protect this material from being marketed without my permission, I publish all material using a Creative Commons Attribution‐ShareAlike 4.0 International License.8 What this means is:
• Anyone can download and use this material without permission.
• Anyone can remix, modify or add to this material for non‐commercial use, as long as proper attribution is given to this source.
• None of this material can ever be copyrighted by others; I retain all rights to the material.
So please feel free to download and modify the material and share it with the world. If all of us could spend our energy sharing and being creative instead of being fearful and protectionist, imagine how rich the library of open resource material would be.
15.08: Acknowledgments
No textbook can be written in a vacuum, and there are so many colleagues, students, administrators, friends and family members who have supported this endeavor.
I would first like to thank my colleagues and administrators at De Anza College, especially: Barbara Illowsky, Susan Dean and Frank Soler, Statistics authors who helped me with the process of writing my text and creating an online education resource; Diane Mathios, who allowed me to use her material on sampling bias; Doli Bambania, who shared with me ideas of adding rich context to material; Roberta Bloom, who I collaborated with in using other online resources; Lenore Desilets, Lisa Mesh, Hung Nguyen, and many others who have used some of my preliminary material and made suggestions; Martha Kanter, who initiated the OER initiative at FHDA as part of her life's mission to make college more affordable to students; and Jerry Rosenberg, for supporting my Professional Development Leave request to write this book.
I would also like to thank the many students who have inspired me to complete this material. I want to especially thank students and tutors who agreed to review some of the preliminary chapters and who have found errors in the text: Kairev Sheth, Alice Lee, Nikki Diep, Thanh Pham, Ana Chaverri, Kamyar Kazemi, Milanko Plavsic, Alyssa Melesurgo, Andrea Yepez, Natalia Ramos, Meidan Jing, Derek Esteban, Yuhan Tan, Hilary Lou, Qiong Wu, Dan Trinh, Deshan Yapabandara, Christopher Ton, Lily Tran, Emily Sabour and Tony Ton.
Finally, I want to thank my daughter Amy Geraghty, who patiently edited this text for grammar and my wife Rita Geraghty, who inspires me to stay present by being mindful and to meet challenges with loving kindness.
Thank you all.
This first chapter begins by discussing what statistics are and why the study of statistics is important. Subsequent sections cover a variety of topics all basic to the study of statistics. One theme common to all of these sections is that they cover concepts and ideas important for other chapters in the book.
• 1.1: What are Statistics?
Statistics include numerical facts and figures, but also involves math and relies upon calculations of numbers. It also relies heavily on how the numbers are chosen and how the statistics are interpreted.
• 1.2: Importance of Statistics
It is important to properly evaluate data and claims that bombard us every day. If you cannot distinguish good from faulty reasoning, then you are vulnerable to manipulation and to decisions that are not in your best interest. Statistics provides tools that you need in order to react intelligently to information you hear or read. In this sense, statistics is one of the most important things that you can study.
• 1.3: Descriptive Statistics
Descriptive statistics are numbers that are used to summarize and describe data. The word "data" refers to the information that has been collected from an experiment, a survey, a historical record, etc. Descriptive statistics are just descriptive. They do not involve generalizing beyond the data at hand. Generalizing from our data to another set of cases is the business of inferential statistics.
• 1.4: Inferential Statistics
In statistics, we often rely on a sample --- that is, a small subset of a larger set of data --- to draw inferences about the larger set. The larger set is known as the population from which the sample is drawn.
• 1.5: Sampling Demonstration
This demonstration is used to teach students how to distinguish between simple random sampling and stratified sampling and how often random and stratified sampling give exactly the same result.
• 1.6: Variables
Variables are properties or characteristics of some event, object, or person that can take on different values or amounts (as opposed to constants such as π that do not vary). When conducting research, experimenters often manipulate variables. When a variable is manipulated by an experimenter, it is called an independent variable. The experiment seeks to determine the effect of the independent variable on a dependent variable.
• 1.7: Percentiles
A test score in and of itself is usually difficult to interpret. For example, if you learned that your score on a measure of shyness was 35 out of a possible 50, you would have little idea how shy you are compared to other people. More relevant is the percentage of people with lower shyness scores than yours. This percentage is called a percentile.
• 1.8: Levels of Measurement
Before we can conduct a statistical analysis, we need to measure our dependent variable. Exactly how the measurement is carried out depends on the type of variable involved in the analysis. Different types are measured differently. To measure the time taken to respond to a stimulus, you might use a stop watch. Stop watches are of no use, of course, when it comes to measuring someone's attitude towards a political candidate.
• 1.9: Measurements
This is a demonstration of a very complex issue. Experts in the field disagree on how to interpret differences on an ordinal scale, so do not be discouraged if it takes you a while to catch on. In this demonstration you will explore the relationship between interval and ordinal scales. The demonstration is based on two brands of baked goods.
• 1.10: Distributions
Define "distribution" Interpret a frequency distribution Distinguish between a frequency distribution and a probability distribution Construct a grouped frequency distribution for a continuous variable
• 1.11: Summation Notation
Many statistical formulas involve summing numbers. Fortunately there is a convenient notation for expressing summation. This section covers the basics of this summation notation.
• 1.12: Linear Transformations
Often it is necessary to transform data from one measurement scale to another. For example, you might want to convert height measured in feet to height measured in inches.
• 1.13: Logarithms
The log transformation reduces positive skew. This can be valuable both for making the data more interpretable and for helping to meet the assumptions of inferential statistics.
• 1.14: Statistical Literacy
• 1.E: Introduction to Statistics (Exercises)
01: Introduction to Statistics
Learning Objectives
• Describe the range of applications of statistics
• Identify situations in which statistics can be misleading
• Define "Statistics"
Statistics include numerical facts and figures. For instance:
• The largest earthquake measured \(9.2\) on the Richter scale.
• Men are at least \(10\) times more likely than women to commit murder.
• One in every \(8\) South Africans is HIV positive.
• By the year \(2020\), there will be \(15\) people aged \(65\) and over for every new baby born.
The study of statistics involves math and relies upon calculations of numbers. But it also relies heavily on how the numbers are chosen and how the statistics are interpreted. For example, consider the following three scenarios and the interpretations based upon the presented statistics. You will find that the numbers may be right, but the interpretation may be wrong. Try to identify a major flaw with each interpretation before we describe it.
1) A new advertisement for Ben and Jerry's ice cream introduced in late May of last year resulted in a \(30\%\) increase in ice cream sales for the following three months. Thus, the advertisement was effective.
A major flaw is that ice cream consumption generally increases in the months of June, July, and August regardless of advertisements. This effect is called a history effect and leads people to interpret outcomes as the result of one variable when another variable (in this case, one having to do with the passage of time) is actually responsible.
2) The more churches in a city, the more crime there is. Thus, churches lead to crime.
A major flaw is that both increased churches and increased crime rates can be explained by larger populations. In bigger cities, there are both more churches and more crime. This problem, which we discuss in more detail in the section on Causation in Chapter 6, refers to the third-variable problem. Namely, a third variable can cause both situations; however, people erroneously believe that there is a causal relationship between the two primary variables rather than recognize that a third variable can cause both.
3) \(75\%\) more interracial marriages are occurring this year than \(25\) years ago. Thus, our society accepts interracial marriages.
A major flaw is that we don't have the information that we need. What is the rate at which marriages are occurring? Suppose only \(1\%\) of marriages \(25\) years ago were interracial and so now \(1.75\%\) of marriages are interracial (\(1.75\) is \(75\%\) higher than \(1\)). But this latter number is hardly evidence suggesting the acceptability of interracial marriages. In addition, the statistic provided does not rule out the possibility that the number of interracial marriages has seen dramatic fluctuations over the years and this year is not the highest. Again, there is simply not enough information to understand fully the impact of the statistics.
As a whole, these examples show that statistics are not only facts and figures; they are something more than that. In the broadest sense, "statistics" refers to a range of techniques and procedures for analyzing, interpreting, displaying, and making decisions based on data.
• Mikki Hebl
Learning Objectives
• Give examples of statistics encountered in everyday life
• Give examples of how statistics can lend credibility to an argument
Like most people, you probably feel that it is important to "take control of your life." But what does this mean? Partly, it means being able to properly evaluate the data and claims that bombard you every day. If you cannot distinguish good from faulty reasoning, then you are vulnerable to manipulation and to decisions that are not in your best interest. Statistics provides tools that you need in order to react intelligently to information you hear or read. In this sense, statistics is one of the most important things that you can study.
To be more specific, here are some claims that we have heard on several occasions. (We are not saying that each one of these claims is true!)
• \(4\) out of \(5\) dentists recommend Dentine.
• Almost \(85\%\) of lung cancers in men and \(45\%\) in women are tobacco-related.
• Condoms are effective \(94\%\) of the time.
• Native Americans are significantly more likely to be hit crossing the street than are people of other ethnicities.
• People tend to be more persuasive when they look others directly in the eye and speak loudly and quickly.
• Women make \(75\) cents to every dollar a man makes when they work the same job.
• A surprising new study shows that eating egg whites can increase one's life span.
• People predict that it is very unlikely there will ever be another baseball player with a batting average over \(400\).
• There is an \(80\%\) chance that in a room full of \(30\) people that at least two people will share the same birthday.
• \(79.48\%\) of all statistics are made up on the spot.
All of these claims are statistical in character. We suspect that some of them sound familiar; if not, we bet that you have heard other claims like them. Notice how diverse the examples are. They come from psychology, health, law, sports, business, etc. Indeed, data and data interpretation show up in discourse from virtually every facet of contemporary life.
Statistics are often presented in an effort to add credibility to an argument or advice. You can see this by paying attention to television advertisements. Many of the numbers thrown about in this way do not represent careful statistical analysis. They can be misleading and push you into decisions that you might find cause to regret. For these reasons, learning about statistics is a long step towards taking control of your life. (It is not, of course, the only step needed for this purpose.) The present textbook is designed to help you learn statistical essentials. It will make you into an intelligent consumer of statistical claims.
You can take the first step right away. To be an intelligent consumer of statistics, your first reflex must be to question the statistics that you encounter. The British Prime Minister Benjamin Disraeli is quoted by Mark Twain as having said, "There are three kinds of lies -- lies, damned lies, and statistics." This quote reminds us why it is so important to understand statistics. So let us invite you to reform your statistical habits from now on. No longer will you blindly accept numbers or findings. Instead, you will begin to think about the numbers, their sources, and most importantly, the procedures used to generate them.
We have put the emphasis on defending ourselves against fraudulent claims wrapped up as statistics. We close this section on a more positive note. Just as important as detecting the deceptive use of statistics is the appreciation of the proper use of statistics. You must also learn to recognize statistical evidence that supports a stated conclusion. Statistics are all around you, sometimes used well, sometimes not. We must learn how to distinguish the two cases.
• Mikki Hebl
Learning Objectives
• Define "descriptive statistics"
• Distinguish between descriptive statistics and inferential statistics
Descriptive statistics are numbers that are used to summarize and describe data. The word "data" refers to the information that has been collected from an experiment, a survey, a historical record, etc. (By the way, "data" is plural. One piece of information is called a "datum.") If we are analyzing birth certificates, for example, a descriptive statistic might be the percentage of certificates issued in New York State, or the average age of the mother. Any other number we choose to compute also counts as a descriptive statistic for the data from which the statistic is computed. Several descriptive statistics are often used at one time, to give a full picture of the data.
Descriptive statistics are just descriptive. They do not involve generalizing beyond the data at hand. Generalizing from our data to another set of cases is the business of inferential statistics, which you'll be studying in another Section. Here we focus on (mere) descriptive statistics. Some descriptive statistics are shown in Table \(1\). The table shows the average salaries for various occupations in the United States in \(1999\). (Click here to see how much individuals with other occupations earn.)
Table \(1\): Average salaries for various occupations in \(1999\).
Salary Occupation
\$112,760 pediatricians
\$106,130 dentists
\$100,090 podiatrists
\$ 76,140 physicists
\$ 53,410 architects
\$ 49,720 school, clinical, and counseling psychologists
\$ 47,910 flight attendants
\$ 39,560 elementary school teachers
\$ 38,710 police officers
\$ 18,980 floral designers
Descriptive statistics like these offer insight into American society. It is interesting to note, for example, that we pay the people who educate our children and who protect our citizens a great deal less than we pay people who take care of our feet or our teeth.
For more descriptive statistics, consider Table \(2\) which shows the number of unmarried men per \(100\) unmarried women in U.S. Metro Areas in \(1990\). From this table we see that men outnumber women most in Jacksonville, NC, and women outnumber men most in Sarasota, FL. You can see that descriptive statistics can be useful if we are looking for an opposite-sex partner! (These data come from the Information Please Almanac.)
Table \(2\): Number of unmarried men per \(100\) unmarried women in U.S. Metro Areas in \(1990\).
Cities with mostly men              Men per 100 women    Cities with mostly women    Men per 100 women
1. Jacksonville, NC                 224                  1. Sarasota, FL             66
2. Killeen-Temple, TX               123                  2. Bradenton, FL            68
3. Fayetteville, NC                 118                  3. Altoona, PA              69
4. Brazoria, TX                     117                  4. Springfield, IL          70
5. Lawton, OK                       116                  5. Jacksonville, TN         70
6. State College, PA                113                  6. Gadsden, AL              70
7. Clarksville-Hopkinsville, TN-KY  113                  7. Wheeling, WV             70
8. Anchorage, Alaska                112                  8. Charleston, WV           71
9. Salinas-Seaside-Monterey, CA     112                  9. St. Joseph, MO           71
10. Bryan-College Station, TX       111                  10. Lynchburg, VA           71
NOTE: Unmarried includes never-married, widowed, and divorced persons, \(15\) years or older.
These descriptive statistics may make us ponder why the numbers are so disparate in these cities. One potential explanation, for instance, as to why there are more women in Florida than men may involve the fact that elderly individuals tend to move down to the Sarasota region and that women tend to outlive men. Thus, more women might live in Sarasota than men. However, in the absence of proper data, this is only speculation.
You probably know that descriptive statistics are central to the world of sports. Every sporting event produces numerous statistics such as the shooting percentage of players on a basketball team. For the Olympic marathon (a foot race of \(26.2\) miles), we possess data that cover more than a century of competition. (The first modern Olympics took place in \(1896\).) Table \(3\) shows the winning times for both men and women (the latter have only been allowed to compete since \(1984\)).
Table \(3\): Winning Olympic marathon times.
Women
Year Winner Country Time
1984 Joan Benoit USA 2:24:52
1988 Rosa Mota POR 2:25:40
1992 Valentina Yegorova UT 2:32:41
1996 Fatuma Roba ETH 2:26:05
2000 Naoko Takahashi JPN 2:23:14
2004 Mizuki Noguchi JPN 2:26:20
Men
Year Winner Country Time
1896 Spiridon Louis GRE 2:58:50
1900 Michel Theato FRA 2:59:45
1904 Thomas Hicks USA 3:28:53
1906 Billy Sherring CAN 2:51:23
1908 Johnny Hayes USA 2:55:18
1912 Kenneth McArthur S. Afr. 2:36:54
1920 Hannes Kolehmainen FIN 2:32:35
1924 Albin Stenroos FIN 2:41:22
1928 Boughra El Ouafi FRA 2:32:57
1932 Juan Carlos Zabala ARG 2:31:36
1936 Sohn Kee-Chung JPN 2:29:19
1948 Delfo Cabrera ARG 2:34:51
1952 Emil Zátopek CZE 2:23:03
1956 Alain Mimoun FRA 2:25:00
1960 Abebe Bikila ETH 2:15:16
1964 Abebe Bikila ETH 2:12:11
1968 Mamo Wolde ETH 2:20:26
1972 Frank Shorter USA 2:12:19
1976 Waldemar Cierpinski E.Ger 2:09:55
1980 Waldemar Cierpinski E.Ger 2:11:03
1984 Carlos Lopes POR 2:09:21
1988 Gelindo Bordin ITA 2:10:32
1992 Hwang Young-Cho S. Kor 2:13:23
1996 Josia Thugwane S. Afr. 2:12:36
2000 Gezahenge Abera ETH 2:10:10
2004 Stefano Baldini ITA 2:10:55
There are many descriptive statistics that we can compute from the data in the table. To gain insight into the improvement in speed over the years, let us divide the men's times into two pieces, namely, the first \(13\) races (up to \(1952\)) and the second \(13\) (starting from \(1956\)). The mean winning time for the first \(13\) races is \(2\) hours, \(44\) minutes, and \(22\) seconds (written \(2:44:22\)). The mean winning time for the second \(13\) races is \(2:13:18\). This is quite a difference (over half an hour). Does this prove that the fastest men are running faster? Or is the difference just due to chance, no more than what often emerges from chance differences in performance from year to year? We can't answer this question with descriptive statistics alone. All we can affirm is that the two means are "suggestive."
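A quick way to verify these two averages is to convert each winning time in Table 3 to seconds and average the two eras; the Python sketch below uses only the standard library. It should reproduce 2:44:22 for the first era and land within a second of 2:13:18 for the second (any small gap comes from rounding in the source).

```python
# Average the men's winning marathon times from Table 3, by era.
men = ["2:58:50", "2:59:45", "3:28:53", "2:51:23", "2:55:18", "2:36:54",
       "2:32:35", "2:41:22", "2:32:57", "2:31:36", "2:29:19", "2:34:51",
       "2:23:03",                                   # 1896-1952 (13 races)
       "2:25:00", "2:15:16", "2:12:11", "2:20:26", "2:12:19", "2:09:55",
       "2:11:03", "2:09:21", "2:10:32", "2:13:23", "2:12:36", "2:10:10",
       "2:10:55"]                                   # 1956-2004 (13 races)

def to_seconds(hms):
    h, m, s = map(int, hms.split(":"))
    return 3600 * h + 60 * m + s

def to_hms(sec):
    sec = round(sec)
    return f"{sec // 3600}:{sec % 3600 // 60:02d}:{sec % 60:02d}"

for label, times in (("1896-1952", men[:13]), ("1956-2004", men[13:])):
    mean_sec = sum(map(to_seconds, times)) / len(times)
    print(label, "mean winning time:", to_hms(mean_sec))
```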
Examining Table 3 leads to many other questions. We note that Takahashi (the lead female runner in \(2000\)) would have beaten the male runner in \(1956\) and all male runners in the first \(12\) marathons. This fact leads us to ask whether the gender gap will close or remain constant. When we look at the times within each gender, we also wonder how much they will decrease (if at all) in the next century of the Olympics. Might we one day witness a sub-\(2\) hour marathon? The study of statistics can help you make reasonable guesses about the answers to these questions.
• Mikki Hebl
Learning Objectives
• Distinguish between a sample and a population
• Define inferential statistics
• Identify biased samples
• Distinguish between simple random sampling and stratified sampling
• Distinguish between random sampling and random assignment
Populations and samples
In statistics, we often rely on a sample, that is, a small subset of a larger set of data, to draw inferences about the larger set. The larger set is known as the population from which the sample is drawn.
Example \(1\)
You have been hired by the National Election Commission to examine how the American people feel about the fairness of the voting procedures in the U.S. Whom will you ask?
It is not practical to ask every single American how he or she feels about the fairness of the voting procedures. Instead, we query a relatively small number of Americans, and draw inferences about the entire country from their responses. The Americans actually queried constitute our sample of the larger population of all Americans. The mathematical procedures whereby we convert information about the sample into intelligent guesses about the population fall under the rubric of inferential statistics.
A sample is typically a small subset of the population. In the case of voting attitudes, we would sample a few thousand Americans drawn from the hundreds of millions that make up the country. In choosing a sample, it is therefore crucial that it not over-represent one kind of citizen at the expense of others. For example, something would be wrong with our sample if it happened to be made up entirely of Florida residents. If the sample held only Floridians, it could not be used to infer the attitudes of other Americans. The same problem would arise if the sample were comprised only of Republicans. Inferential statistics are based on the assumption that sampling is random. We trust a random sample to represent different segments of society in close to the appropriate proportions (provided the sample is large enough; see below).
Example \(2\)
We are interested in examining how many math classes have been taken on average by current graduating seniors at American colleges and universities during their four years in school. Whereas our population in the last example included all US citizens, now it involves just the graduating seniors throughout the country. This is still a large set since there are thousands of colleges and universities, each enrolling many students. (New York University, for example, enrolls \(48,000\) students.) It would be prohibitively costly to examine the transcript of every college senior. We therefore take a sample of college seniors and then make inferences to the entire population based on what we find. To make the sample, we might first choose some public and private colleges and universities across the United States. Then we might sample \(50\) students from each of these institutions. Suppose that the average number of math classes taken by the people in our sample were \(3.2\). Then we might speculate that \(3.2\) approximates the number we would find if we had the resources to examine every senior in the entire population. But we must be careful about the possibility that our sample is non-representative of the population. Perhaps we chose an overabundance of math majors, or chose too many technical institutions that have heavy math requirements. Such bad sampling makes our sample unrepresentative of the population of all seniors.
To solidify your understanding of sampling bias, consider the following example. Try to identify the population and the sample, and then reflect on whether the sample is likely to yield the information desired.
Example \(3\)
A substitute teacher wants to know how students in the class did on their last test. The teacher asks the \(10\) students sitting in the front row to state their latest test score. He concludes from their report that the class did extremely well. What is the sample? What is the population? Can you identify any problems with choosing the sample in the way that the teacher did?
In Example \(3\), the population consists of all students in the class. The sample is made up of just the \(10\) students sitting in the front row. The sample is not likely to be representative of the population. Those who sit in the front row tend to be more interested in the class and tend to perform higher on tests. Hence, the sample may perform at a higher level than the population.
Example \(4\)
A coach is interested in how many cartwheels the average college freshmen at his university can do. Eight volunteers from the freshman class step forward. After observing their performance, the coach concludes that college freshmen can do an average of \(16\) cartwheels in a row without stopping.
In Example \(4\), the population is the class of all freshmen at the coach's university. The sample is composed of the \(8\) volunteers. The sample is poorly chosen because volunteers are more likely to be able to do cartwheels than the average freshman; people who cannot do cartwheels probably did not volunteer! In the example, we are also not told of the gender of the volunteers. Were they all women, for example? That might affect the outcome, contributing to the non-representative nature of the sample (if the school is co-ed).
Simple Random Sampling
Researchers adopt a variety of sampling strategies. The most straightforward is simple random sampling. Such sampling requires every member of the population to have an equal chance of being selected into the sample. In addition, the selection of one member must be independent of the selection of every other member. That is, picking one member from the population must not increase or decrease the probability of picking any other member (relative to the others). In this sense, we can say that simple random sampling chooses a sample by pure chance. To check your understanding of simple random sampling, consider the following example. What is the population? What is the sample? Was the sample picked by simple random sampling? Is it biased?
Example \(5\)
A research scientist is interested in studying the experiences of twins raised together versus those raised apart. She obtains a list of twins from the National Twin Registry, and selects two subsets of individuals for her study. First, she chooses all those in the registry whose last name begins with \(Z\). Then she turns to all those whose last name begins with \(B\). Because there are so many names that start with \(B\), however, our researcher decides to incorporate only every other name into her sample. Finally, she mails out a survey and compares characteristics of twins raised apart versus together.
In Example \(5\), the population consists of all twins recorded in the National Twin Registry. It is important that the researcher only make statistical generalizations to the twins on this list, not to all twins in the nation or world. That is, the National Twin Registry may not be representative of all twins. Even if inferences are limited to the Registry, a number of problems affect the sampling procedure we described. For instance, choosing only twins whose last names begin with \(Z\) does not give every individual an equal chance of being selected into the sample. Moreover, such a procedure risks over-representing ethnic groups with many surnames that begin with \(Z\). There are other reasons why choosing just the \(Z's\) may bias the sample. Perhaps such people are more patient than average because they often find themselves at the end of the line! The same problem occurs with choosing twins whose last name begins with \(B\). An additional problem for the \(B's\) is that the “every-other-one” procedure prevented adjacent names on the \(B\) part of the list from both being selected. This defect alone means the sample was not formed through simple random sampling.
Sample size matters
Recall that the definition of a random sample is a sample in which every member of the population has an equal chance of being selected. This means that the sampling procedure rather than the results of the procedure define what it means for a sample to be random. Random samples, especially if the sample size is small, are not necessarily representative of the entire population. For example, if a random sample of \(20\) subjects were taken from a population with an equal number of males and females, there would be a nontrivial probability (\(0.06\)) that \(70\%\) or more of the sample would be female. (To see how to obtain this probability, see the section on the binomial distribution.) Such a sample would not be representative, although it would be drawn randomly. Only a large sample size makes it likely that our sample is close to representative of the population. For this reason, inferential statistics take into account the sample size when generalizing results from samples to populations. In later chapters, you'll see what kinds of mathematical techniques ensure this sensitivity to sample size.
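The \(0.06\) figure can be verified directly. Here is a minimal sketch (Python; not part of the original text) that sums the binomial probabilities of drawing \(14\) or more females in a random sample of \(20\) from a half-female population:

```python
from math import comb

n, p = 20, 0.5

# P(X >= 14): probability that 70% or more of a random sample of 20
# subjects is female when each draw is female with probability 0.5
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(14, n + 1))
print(round(prob, 3))  # 0.058, the "0.06" cited above
```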
More complex sampling
Sometimes it is not feasible to build a sample using simple random sampling. To see the problem, consider the fact that both Dallas and Houston are competing to be hosts of the \(2012\) Olympics. Imagine that you are hired to assess whether most Texans prefer Houston to Dallas as the host, or the reverse. Given the impracticality of obtaining the opinion of every single Texan, you must construct a sample of the Texas population. But now notice how difficult it would be to proceed by simple random sampling. For example, how will you contact those individuals who don’t vote and don’t have a phone? Even among people you find in the telephone book, how can you identify those who have just relocated to California (and had no reason to inform you of their move)? What do you do about the fact that since the beginning of the study, an additional \(4,212\) people took up residence in the state of Texas? As you can see, it is sometimes very difficult to develop a truly random procedure. For this reason, other kinds of sampling techniques have been devised. We now discuss two of them.
Random Assignment
In experimental research, populations are often hypothetical. For example, in an experiment comparing the effectiveness of a new anti-depressant drug with a placebo, there is no actual population of individuals taking the drug. In this case, a specified population of people with some degree of depression is defined and a random sample is taken from this population. The sample is then randomly divided into two groups; one group is assigned to the treatment condition (drug) and the other group is assigned to the control condition (placebo). This random division of the sample into two groups is called random assignment. Random assignment is critical for the validity of an experiment. For example, consider the bias that could be introduced if the first \(20\) subjects to show up at the experiment were assigned to the experimental group and the second \(20\) subjects were assigned to the control group. It is possible that subjects who show up late tend to be more depressed than those who show up early, thus making the experimental group less depressed than the control group even before the treatment was administered.
In experimental research of this kind, failure to assign subjects randomly to groups is generally more serious than having a non-random sample. Failure to randomize (the former error) invalidates the experimental findings. A non-random sample (the latter error) simply restricts the generalizability of the results.
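A minimal sketch of random assignment (Python; the subject IDs are hypothetical, invented for illustration):

```python
import random

# Hypothetical sample of 40 subjects drawn from the defined population
subjects = list(range(1, 41))

# Shuffle first, then split, so that arrival order (or anything else
# about the subjects) cannot bias which group a subject lands in
random.shuffle(subjects)
treatment = subjects[:20]  # drug condition
control = subjects[20:]    # placebo condition
```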
Stratified Sampling
Since simple random sampling often does not ensure a representative sample, a sampling method called stratified random sampling is sometimes used to make the sample more representative of the population. This method can be used if the population has a number of distinct "strata" or groups. In stratified sampling, you first identify the members of your population who belong to each group. Then you randomly sample from each of those subgroups in such a way that the sizes of the subgroups in the sample are proportional to their sizes in the population.
Let's take an example: Suppose you were interested in views of capital punishment at an urban university. You have the time and resources to interview \(200\) students. The student body is diverse with respect to age; many older people work during the day and enroll in night courses (average age is \(39\)), while younger students generally enroll in day classes (average age of \(19\)). It is possible that night students have different views about capital punishment than day students. If \(70\%\) of the students were day students, it makes sense to ensure that \(70\%\) of the sample consisted of day students. Thus, your sample of \(200\) students would consist of \(140\) day students and \(60\) night students. The proportion of day students in the sample and in the population (the entire university) would be the same. Inferences to the entire population of students at the university would therefore be more secure.
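A sketch of this procedure in code (Python; the rosters and their sizes are hypothetical, chosen to give the \(70\%/30\%\) split described above):

```python
import random

day_students = [f"day_{i}" for i in range(3500)]      # 70% of the population
night_students = [f"night_{i}" for i in range(1500)]  # 30% of the population
sample_size = 200

# Allocate the sample to each stratum in proportion to its size
n_total = len(day_students) + len(night_students)
n_day = round(sample_size * len(day_students) / n_total)  # 140
n_night = sample_size - n_day                             # 60

# Sample randomly within each stratum
sample = random.sample(day_students, n_day) + random.sample(night_students, n_night)
```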
• Mikki Hebl and David Lane
Learning Objectives
• Distinguish between simple random sampling and stratified sampling.
• Describe how often random and stratified sampling give exactly the same result.
Instructions
The sampling simulation uses a population of \(100\) animals: \(60\) lions, \(30\) turtles, \(10\) rabbits.
Options
Random Sampling: This option allows you to draw a sample of \(10\) animals at a time, with each animal having an equal chance of being selected.
Stratified Sampling: This option allows you to draw a sample of \(10\) animals at a time, with the number of animals drawn from each group proportional to that group's share of the population.
Simulation Results
The number of animals chosen from each group when a sample is drawn is shown next to the picture of the animal.
When you give it a try
Random Sampling
• Begin by leaving the Random Sampling option selected.
• Click the sample button; \(10\) animals will be selected out of the population.
Note: The selected animals become highlighted in blue, and a count of each animal selected is listed by each animal image.
• Each time you push the button, another sample is drawn and the new tally is shown to the right of the previous sample.
• You should get different tallies from sample to sample, although the computer may occasionally draw the same number from an animal category.
Stratified Sample
Note: Your animals should become highlighted in blue and a number count should be listed by each animal image.
• Click the clear button to reset the simulation.
• Select the Stratified Sampling option.
• Click the sample button a few times.
• With each new tally, notice that the number of animals drawn from each group stays the same, but the particular animals selected are not always the same.
Illustrated Instructions
The opening screen of the sampling simulation displays all \(100\) animals in the population. You can select between a random sample and a stratified sample directly below the population and then generate a sample of ten animals.
Below is an example of a random sample. Notice that animals selected are highlighted in the population and the total number of animals selected from each category is listed at the bottom of the simulation.
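The demonstration's behavior can be reproduced in a few lines of code. The sketch below (Python; not part of the demonstration itself) draws a sample of \(10\) from the \(100\)-animal population each way. The stratified counts come out to \(6\) lions, \(3\) turtles, and \(1\) rabbit on every draw, while the random counts vary from draw to draw:

```python
import random
from collections import Counter

population = ["lion"] * 60 + ["turtle"] * 30 + ["rabbit"] * 10

# Simple random sample: every animal has an equal chance of selection
print(Counter(random.sample(population, 10)))  # counts vary by draw

# Stratified sample: draw from each group in proportion to its size
strata = {"lion": 60, "turtle": 30, "rabbit": 10}
sample = []
for animal, size in strata.items():
    sample += random.sample([animal] * size, size * 10 // 100)
print(Counter(sample))  # always 6 lions, 3 turtles, 1 rabbit
```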
1.06: Variables
Learning Objectives
• Define and distinguish between independent and dependent variables
• Define and distinguish between discrete and continuous variables
• Define and distinguish between qualitative and quantitative variables
Independent and Dependent variables
Variables are properties or characteristics of some event, object, or person that can take on different values or amounts (as opposed to constants such as $\pi$ that do not vary). When conducting research, experimenters often manipulate variables. For example, an experimenter might compare the effectiveness of four types of antidepressants. In this case, the variable is "type of antidepressant." When a variable is manipulated by an experimenter, it is called an independent variable. The experiment seeks to determine the effect of the independent variable on relief from depression. In this example, relief from depression is called a dependent variable. In general, the independent variable is manipulated by the experimenter and its effects on the dependent variable are measured.
Example $1$
Can blueberries slow down aging? A study indicates that antioxidants found in blueberries may slow down the process of aging. In this study, $19$-month-old rats (equivalent to $60$-year-old humans) were fed either their standard diet or a diet supplemented by either blueberry, strawberry, or spinach powder. After eight weeks, the rats were given memory and motor skills tests. Although all supplemented rats showed improvement, those supplemented with blueberry powder showed the most notable improvement.
1. What is the independent variable?
2. What are the dependent variables?
Solution
1. dietary supplement: none, blueberry, strawberry, and spinach
2. memory test and motor skills test
Example $2$
Does beta-carotene protect against cancer? Beta-carotene supplements have been thought to protect against cancer. However, a study published in the Journal of the National Cancer Institute suggests this is false. The study was conducted with $39,000$ women aged $45$ and up. These women were randomly assigned to receive a beta-carotene supplement or a placebo, and their health was studied over their lifetime. Cancer rates for women taking the beta-carotene supplement did not differ systematically from the cancer rates of those women taking the placebo.
1. What is the independent variable?
2. What is the dependent variable?
Solution
1. supplements: beta-carotene or placebo
2. occurrence of cancer
Example $3$
How bright is right? An automobile manufacturer wants to know how bright brake lights should be in order to minimize the time required for the driver of a following car to realize that the car in front is stopping and to hit the brakes.
1. What is the independent variable?
2. What is the dependent variable?
Solution
1. brightness of brake lights
2. time to hit brakes
Levels of an Independent Variable: Experiments and Controls
If an experiment compares an experimental treatment with a control treatment, then the independent variable (type of treatment) has two levels: experimental and control. If an experiment were comparing five types of diets, then the independent variable (type of diet) would have five levels. In general, the number of levels of an independent variable is the number of experimental conditions.
Qualitative and Quantitative Variables
An important distinction between variables is between qualitative variables and quantitative variables. Qualitative variables are those that express a qualitative attribute such as hair color, eye color, religion, favorite movie, gender, and so on. The values of a qualitative variable do not imply a numerical ordering. Values of the variable “religion” differ qualitatively; no ordering of religions is implied. Qualitative variables are sometimes referred to as categorical variables. Quantitative variables are those variables that are measured in terms of numbers. Some examples of quantitative variables are height, weight, and shoe size.
In the study on the effect of diet discussed in Example $1$, the independent variable was type of supplement: none, strawberry, blueberry, and spinach. The variable "type of supplement" is a qualitative variable; there is nothing quantitative about it. In contrast, the dependent variable "memory test" is a quantitative variable since memory performance was measured on a quantitative scale (number correct).
Discrete and Continuous Variables
Variables such as number of children in a household are called discrete variables since the possible scores are discrete points on the scale. For example, a household could have three children or six children, but not $4.53$ children. Other variables such as "time to respond to a question" are continuous variables since the scale is continuous and not made up of discrete steps. The response time could be $1.64$ seconds, or it could be $1.64237123922121$ seconds. Of course, the practicalities of measurement preclude most measured variables from being truly continuous.
Contributors and Attributions
• Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University.
• Heidi Ziemer
Learning Objectives
• Define percentiles
• Use three formulas for computing percentiles
A test score in and of itself is usually difficult to interpret. For example, if you learned that your score on a measure of shyness was $35$ out of a possible $50$, you would have little idea how shy you are compared to other people. More relevant is the percentage of people with lower shyness scores than yours. This percentage is called a percentile. If $65\%$ of the scores were below yours, then your score would be the $65^{th}$ percentile.
Two Simple Definitions of Percentile
There is no universally accepted definition of a percentile. Using the $65^{th}$ percentile as an example, the $65^{th}$ percentile can be defined as the lowest score that is greater than $65\%$ of the scores. This is the way we defined it above and we will call this "$\text{Definition 1}$." The $65^{th}$ percentile can also be defined as the smallest score that is greater than or equal to $65\%$ of the scores. This we will call "$\text{Definition 2}$." Unfortunately, these two definitions can lead to dramatically different results, especially when there is relatively little data. Moreover, neither of these definitions is explicit about how to handle rounding. For instance, what rank is required to be higher than $65\%$ of the scores when the total number of scores is $50$? This is tricky because $65\%$ of $50$ is $32.5$. How do we find the lowest number that is higher than $32.5$ of the scores? A third way to compute percentiles (presented below) is a weighted average of the percentiles computed according to the first two definitions. This third definition handles rounding more gracefully than the other two and has the advantage that it allows the median to be defined conveniently as the $50^{th}$ percentile.
Third Definition
Unless otherwise specified, when we refer to "percentile," we will be referring to this third definition of percentiles. Let's begin with an example. Consider the $25^{th}$ percentile for the $8$ numbers in Table $1$. Notice the numbers are given ranks ranging from $1$ for the lowest number to $8$ for the highest number.
Table $1$: Test Scores.
Number 3 5 7 8 9 11 13 15
Rank 1 2 3 4 5 6 7 8
The first step is to compute the rank ($R$) of the $25^{th}$ percentile. This is done using the following formula:
$R = P/100 \times (N + 1)$
where $P$ is the desired percentile ($25$ in this case) and $N$ is the number of numbers ($8$ in this case). Therefore,
$R = 25/100 \times (8 + 1) = 9/4 = 2.25$
If $R$ is an integer, the $P^{th}$ percentile is the number with rank $R$. When $R$ is not an integer, we compute the $P^{th}$ percentile by interpolation as follows:
1. Define $IR$ as the integer portion of $R$ (the number to the left of the decimal point). For this example, $IR=2$.
2. Define $FR$ as the fractional portion of $R$. For this example, $FR=0.25$.
3. Find the scores with Rank $IR$ and with Rank $IR+1$. For this example, this means the score with Rank $2$ and the score with Rank $3$. The scores are $5$ and $7$.
4. Interpolate by multiplying the difference between the scores by $FR$ and add the result to the lower score. For these data, this is $(0.25)(7 - 5) + 5 = 5.5$.
Therefore, the $25^{th}$ percentile is $5.5$. If we had used the first definition (the smallest score greater than $25\%$ of the scores), the $25^{th}$ percentile would have been $7$. If we had used the second definition (the smallest score greater than or equal to $25\%$ of the scores), the $25^{th}$ percentile would have been $5$.
For a second example, consider the $20$ quiz scores shown in Table $2$.
Table $2$: $20$ Quiz Scores.
Number 4 4 4 5 5 5 6 6 6 7 7 7 8 8 9 9 9 10 10 10
Rank 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
We will compute the $25^{th}$ and the $85^{th}$ percentiles. For the $25^{th}$,
$R = 25/100 \times (20 + 1) = 21/4 = 5.25$
$IR=5\; and\; FR=0.25$
Since the score with a rank of $IR$ (which is $5$) and the score with a rank of $IR+1$ (which is $6$) are both equal to $5$, the $25^{th}$ percentile is $5$. In terms of the formula:
$25^{th}\; \text{percentile} = (0.25) \times (5 - 5) + 5 = 5$
For the $85^{th}$ percentile,
$R = 85/100 \times (20 + 1) = 17.85.$
$IR = 17\; and\; FR = 0.85$
Caution: $FR$ does not generally equal the percentile to be computed as it does here.
The score with a rank of $17$ is $9$ and the score with a rank of $18$ is $10$. Therefore, the $85^{th}$ percentile is:
$(0.85)(10 - 9) + 9 = 9.85$
Consider the $50^{th}$ percentile of the numbers $2, 3, 5, 9$.
$R = 50/100 \times (4 + 1) = 2.5$
$IR=2\; and\; FR=0.5$
The score with a rank of $IR$ is $3$ and the score with a rank of $IR+1$ is $5$. Therefore, the $50^{th}$ percentile is:
$(0.5)(5 - 3) + 3 = 4$
Finally, consider the $50^{th}$ percentile of the numbers $2, 3, 5, 9, 11$.
$R = 50/100 \times (5 + 1) = 3$
$IR=3\; and\; FR=0$
Whenever $FR=0$, you simply find the number with rank $IR$. In this case, the third number is equal to $5$, so the $50^{th}$ percentile is $5$. You will also get the right answer if you apply the general formula:
$50^{th}\; \text{percentile} = (0.00) (9 - 5) + 5 = 5$
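The third definition translates directly into code. Below is a minimal sketch (Python; the function name and the handling of ranks that fall outside \(1\) to \(N\) are our own choices, not specified in the text):

```python
def percentile(p, scores):
    """Percentile by the third definition: interpolation between ranks."""
    s = sorted(scores)
    n = len(s)
    r = p / 100 * (n + 1)        # rank R = P/100 x (N + 1)
    ir, fr = int(r), r - int(r)  # integer and fractional portions of R
    if ir < 1:                   # rank falls below the lowest score
        return s[0]
    if ir >= n:                  # rank falls at or above the highest score
        return s[-1]
    return s[ir - 1] + fr * (s[ir] - s[ir - 1])  # interpolate

print(percentile(25, [3, 5, 7, 8, 9, 11, 13, 15]))  # 5.5
print(percentile(50, [2, 3, 5, 9]))                 # 4.0
print(percentile(50, [2, 3, 5, 9, 11]))             # 5.0
```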
Contributors and Attributions
• Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University.
• David M. Lane
Learning Objectives
• Define and distinguish among nominal, ordinal, interval, and ratio scales
• Identify a scale type
• Discuss the type of scale used in psychological measurement
• Give examples of errors that can be made by failing to understand the proper use of measurement scales
Types of Scales
Before we can conduct a statistical analysis, we need to measure our dependent variable. Exactly how the measurement is carried out depends on the type of variable involved in the analysis. Different types are measured differently. To measure the time taken to respond to a stimulus, you might use a stopwatch. Stopwatches are of no use, of course, when it comes to measuring someone's attitude towards a political candidate. A rating scale is more appropriate in this case (with labels like "very favorable," "somewhat favorable," etc.). For a dependent variable such as "favorite color," you can simply note the color-word (like "red") that the subject offers.
Although procedures for measurement differ in many ways, they can be classified using a few fundamental categories. In a given category, all of the procedures share some properties that are important for you to know about. The categories are called "scale types," or just "scales," and are described in this section.
Nominal scales
When measuring using a nominal scale, one simply names or categorizes responses. Gender, handedness, favorite color, and religion are examples of variables measured on a nominal scale. The essential point about nominal scales is that they do not imply any ordering among the responses. For example, when classifying people according to their favorite color, there is no sense in which green is placed "ahead of" blue. Responses are merely categorized. Nominal scales embody the lowest level of measurement.
Ordinal scales
A researcher wishing to measure consumers' satisfaction with their microwave ovens might ask them to specify their feelings as either "very dissatisfied," "somewhat dissatisfied," "somewhat satisfied," or "very satisfied." The items in this scale are ordered, ranging from least to most satisfied. This is what distinguishes ordinal from nominal scales. Unlike nominal scales, ordinal scales allow comparisons of the degree to which two subjects possess the dependent variable. For example, our satisfaction ordering makes it meaningful to assert that one person is more satisfied than another with their microwave ovens. Such an assertion reflects the first person's use of a verbal label that comes later in the list than the label chosen by the second person.
On the other hand, ordinal scales fail to capture important information that will be present in the other scales we examine. In particular, the difference between two levels of an ordinal scale cannot be assumed to be the same as the difference between two other levels. In our satisfaction scale, for example, the difference between the responses "very dissatisfied" and "somewhat dissatisfied" is probably not equivalent to the difference between "somewhat dissatisfied" and "somewhat satisfied." Nothing in our measurement procedure allows us to determine whether the two differences reflect the same difference in psychological satisfaction. Statisticians express this point by saying that the differences between adjacent scale values do not necessarily represent equal intervals on the underlying scale giving rise to the measurements. (In our case, the underlying scale is the true feeling of satisfaction, which we are trying to measure.)
What if the researcher had measured satisfaction by asking consumers to indicate their level of satisfaction by choosing a number from one to four? Would the difference between the responses of one and two necessarily reflect the same difference in satisfaction as the difference between the responses two and three? The answer is No. Changing the response format to numbers does not change the meaning of the scale. We still are in no position to assert that the mental step from \(1\) to \(2\) (for example) is the same as the mental step from \(3\) to \(4\).
Interval scales
Interval scales are numerical scales in which intervals have the same interpretation throughout. As an example, consider the Fahrenheit scale of temperature. The difference between \(30\) degrees and \(40\) degrees represents the same temperature difference as the difference between \(80\) degrees and \(90\) degrees. This is because each \(10\)-degree interval has the same physical meaning (in terms of the kinetic energy of molecules).
Interval scales are not perfect, however. In particular, they do not have a true zero point even if one of the scaled values happens to carry the name "zero." The Fahrenheit scale illustrates the issue. Zero degrees Fahrenheit does not represent the complete absence of temperature (the absence of any molecular kinetic energy). In reality, the label "zero" is applied to its temperature for quite accidental reasons connected to the history of temperature measurement. Since an interval scale has no true zero point, it does not make sense to compute ratios of temperatures. For example, there is no sense in which the ratio of \(40\) to \(20\) degrees Fahrenheit is the same as the ratio of \(100\) to \(50\) degrees; no interesting physical property is preserved across the two ratios. After all, if the "zero" label were applied at the temperature that Fahrenheit happens to label as \(10\) degrees, the two ratios would instead be \(30\) to \(10\) and \(90\) to \(40\), no longer the same! For this reason, it does not make sense to say that \(80\) degrees is "twice as hot" as \(40\) degrees. Such a claim would depend on an arbitrary decision about where to "start" the temperature scale, namely, what temperature to call zero (whereas the claim is intended to make a more fundamental assertion about the underlying physical reality).
Ratio scales
The ratio scale of measurement is the most informative scale. It is an interval scale with the additional property that its zero position indicates the absence of the quantity being measured. You can think of a ratio scale as the three earlier scales rolled up in one. Like a nominal scale, it provides a name or category for each object (the numbers serve as labels). Like an ordinal scale, the objects are ordered (in terms of the ordering of the numbers). Like an interval scale, the same difference at two places on the scale has the same meaning. And in addition, the same ratio at two places on the scale also carries the same meaning.
The Fahrenheit scale for temperature has an arbitrary zero point and is therefore not a ratio scale. However, zero on the Kelvin scale is absolute zero. This makes the Kelvin scale a ratio scale. For example, if one temperature is twice as high as another as measured on the Kelvin scale, then it has twice the kinetic energy of the other temperature.
Another example of a ratio scale is the amount of money you have in your pocket right now (\(25\) cents, \(55\) cents, etc.). Money is measured on a ratio scale because, in addition to having the properties of an interval scale, it has a true zero point: if you have zero money, this implies the absence of money. Since money has a true zero point, it makes sense to say that someone with \(50\) cents has twice as much money as someone with \(25\) cents (or that Bill Gates has a million times more money than you do).
What level of measurement is used for psychological variables?
Rating scales are used frequently in psychological research. For example, experimental subjects may be asked to rate their level of pain, how much they like a consumer product, their attitudes about capital punishment, or their confidence in an answer to a test question. Typically these ratings are made on a \(5\)-point or a \(7\)-point scale. These scales are ordinal scales, since there is no assurance that a given difference represents the same thing across the range of the scale. For example, there is no way to be sure that a treatment that reduces pain from a rated pain level of \(3\) to a rated pain level of \(2\) represents the same level of relief as a treatment that reduces pain from a rated pain level of \(7\) to a rated pain level of \(6\).
In memory experiments, the dependent variable is often the number of items correctly recalled. What scale of measurement is this? You could reasonably argue that it is a ratio scale. First, there is a true zero point: some subjects may get no items correct at all. Moreover, a difference of one represents a difference of one item recalled across the entire scale. It is certainly valid to say that someone who recalled \(12\) items recalled twice as many items as someone who recalled only \(6\) items.
But number-of-items recalled is a more complicated case than it appears at first. Consider the following example in which subjects are asked to remember as many items as possible from a list of \(10\). Assume that (a) there are \(5\) easy items and \(5\) difficult items, (b) half of the subjects are able to recall all the easy items and different numbers of difficult items, while (c) the other half of the subjects are unable to recall any of the difficult items but they do remember different numbers of easy items. Some sample data are shown below.
Table \(1\)
Subject | Easy Items | Difficult Items | Score
A | 0 0 1 1 0 | 0 0 0 0 0 | 2
B | 1 0 1 1 0 | 0 0 0 0 0 | 3
C | 1 1 1 1 1 | 1 1 0 0 0 | 7
D | 1 1 1 1 1 | 0 1 1 0 1 | 8
Let's compare (1) the difference between Subject \(A's\) score of \(2\) and Subject \(B's\) score of \(3\) with (2) the difference between Subject \(C's\) score of \(7\) and Subject \(D's\) score of \(8\). The former difference is a difference of one easy item; the latter difference is a difference of one difficult item. Do these two differences necessarily signify the same difference in memory? We are inclined to respond "No" to this question since only a little more memory may be needed to retain the additional easy item whereas a lot more memory may be needed to retain the additional hard item. The general point is that it is often inappropriate to consider psychological measurement scales as either interval or ratio.
Consequences of level of measurement
Why are we so interested in the type of scale that measures a dependent variable? The crux of the matter is the relationship between the variable's level of measurement and the statistics that can be meaningfully computed with that variable. For example, consider a hypothetical study in which \(5\) children are asked to choose their favorite color from blue, red, yellow, green, and purple. The researcher codes the results as follows:
Table \(2\)
Color Code
Blue 1
Red 2
Yellow 3
Green 4
Purple 5
This means that if a child said her favorite color was "Red," then the choice was coded as "\(2\)," if the child said her favorite color was "Purple," then the response was coded as \(5\), and so forth. Consider the following hypothetical data:
Table \(3\)
Subject Color Code
1 Blue 1
2 Blue 1
3 Green 4
4 Green 4
5 Purple 5
Each code is a number, so nothing prevents us from computing the average code assigned to the children. The average happens to be \(3\), but you can see that it would be senseless to conclude that the average favorite color is yellow (the color with a code of \(3\)). Such nonsense arises because favorite color is a nominal scale, and taking the average of its numerical labels is like counting the number of letters in the name of a snake to see how long the beast is.
Does it make sense to compute the mean of numbers measured on an ordinal scale? This is a difficult question, one that statisticians have debated for decades. You will be able to explore this issue yourself in a simulation shown in the next section and reach your own conclusion. The prevailing (but by no means unanimous) opinion of statisticians is that for almost all practical situations, the mean of an ordinally-measured variable is a meaningful statistic. However, as you will see in the simulation, there are extreme situations in which computing the mean of an ordinally-measured variable can be very misleading.
Contributors and Attributions
• Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University.
• Dan Osherson and David M. Lane
Learning Objectives
• Understand what it means for a scale to be ordinal and its relationship to interval scales.
• Determine whether an investigator can be misled by computing the means of an ordinal scale.
Instructions
This is a demonstration of a very complex issue. Experts in the field disagree on how to interpret differences on an ordinal scale, so do not be discouraged if it takes you a while to catch on. In this demonstration you will explore the relationship between interval and ordinal scales. The demonstration is based on two brands of baked goods.
The data on the left side labeled "interval scores" show the amount of sugar in each of $12$ products. The column labeled "$\text{Brand 1}$" contains the sugar content of each of $12$ brand-one products. The second column ("$\text{Brand 2}$") shows the sugar content of the brand-two products. The amount of sugar is measured on an interval scale.
A rater tastes each of the products and rates them on a $5$-point "sweetness" scale. Rating scales are typically ordinal rather than interval.
The scale at the bottom shows the "mapping" of sugar content onto the ratings. Sugar content between $37$ and $43$ is rated as $1$, between $43$ and $49$ as $2$, and so on. Therefore, the difference between a rating of $1$ and a rating of $2$ represents, on average, a "sugar difference" of $6$. A difference between a rating of $2$ and a rating of $3$ also represents, on average, a "sugar difference" of $6$. As initially displayed, the rounded-off ratings happen to fall on an interval scale, although in practice a rater's ratings would likely not be on an interval scale. You can change the cutoff points between ratings by moving the vertical lines with the mouse. As you change these cutoffs, the ratings change automatically. For example, you might see what the ratings would look like if people did not consider something very sweet (rating of $5$) unless it was very, very sweet.
The mean amount of sugar in $\text{Data Set 1}$ is $50$ for the first brand and $55$ for the second brand. The obvious conclusion is that, on average, the second brand is sweeter than the first. However, pretend that you only had the ratings to go by and were not aware of the actual amounts of sugar. Would you reach the correct decision if you compared the mean ratings of the two brands? Change the cutoffs for mapping the interval sugar scale onto the ordinal rating scale. Do any mappings lead to incorrect interpretations? Try this with $\text{Data Set 1}$ and with $\text{Data Set 2}$. Try to find a situation where the mean sweetness rating is higher for $\text{Brand 2}$ even though the mean amount of sugar is greater for $\text{Brand 1}$. If you find such a situation, then you have found an instance in which using the means of ordinal data leads to incorrect conclusions. It is possible to find this situation, so look hard.
Keep in mind that in realistic situations, you only know the ratings and not the "true" interval scale that underlies them. If you knew the interval scale, you would use it.
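Here is a minimal sketch (Python) of the kind of situation the demonstration asks you to find. The sugar values and cutoffs below are invented for illustration, not taken from the demonstration's data sets; with them, Brand 1 has the higher mean sugar content, yet the mean rating favors Brand 2:

```python
import bisect

brand1 = [49, 49, 49, 81]  # mean sugar = 57
brand2 = [51, 51, 51, 51]  # mean sugar = 51

cutoffs = [50, 80]  # sugar < 50 -> rating 1; 50-79 -> 2; >= 80 -> 3

def rating(sugar):
    return bisect.bisect_right(cutoffs, sugar) + 1

mean = lambda xs: sum(xs) / len(xs)
print(mean(brand1), mean(brand2))         # 57.0 51.0 (Brand 1 sweeter)
print(mean([rating(s) for s in brand1]))  # 1.5
print(mean([rating(s) for s in brand2]))  # 2.0 (but Brand 2 rated higher)
```

With only the ratings in hand, you would wrongly conclude that Brand 2 is the sweeter brand.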
Learning Objectives
• Define "distribution"
• Interpret a frequency distribution
• Distinguish between a frequency distribution and a probability distribution
• Construct a grouped frequency distribution for a continuous variable
• Identify the skew of a distribution
• Identify bimodal, leptokurtic, and platykurtic distributions
Distributions of Discrete Variables
A recently purchased bag of Plain M&M's contained candies of six different colors. A quick count showed that there were \(55\) M&M's: \(17\) brown, \(18\) red, \(7\) yellow, \(7\) green, \(2\) blue, and \(4\) orange. These counts are shown below in Table \(1\).
Table \(1\): Frequencies in the Bag of M&M's
Color Frequency
Brown 17
Red 18
Yellow 7
Green 7
Blue 2
Orange 4
This table is called a frequency table and it describes the distribution of M&M color frequencies. Not surprisingly, this kind of distribution is called a frequency distribution. Often a frequency distribution is shown graphically as in Figure \(1\).
Figure \(1\): Distribution of \(55\) M&M's.
The distribution shown in Figure \(1\) concerns just my one bag of M&M's. You might be wondering about the distribution of colors for all M&M's. The manufacturer of M&M's provides some information about this matter, but they do not tell us exactly how many M&M's of each color they have ever produced. Instead, they report proportions rather than frequencies. Figure \(2\) shows these proportions. Since every M&M is one of the six familiar colors, the six proportions shown in the figure add to one. We call Figure \(2\) a probability distribution because if you choose an M&M at random, the probability of getting, say, a brown M&M is equal to the proportion of M&M's that are brown (\(0.30\)).
Notice that the distributions in Figures \(1\) and \(2\) are not identical. Figure \(1\) portrays the distribution in a sample of \(55\) M&M's. Figure \(2\) shows the proportions for all M&M's. Chance factors involving the machines used by the manufacturer introduce random variation into the different bags produced. Some bags will have a distribution of colors that is close to Figure \(2\); others will be further away.
Continuous Variables
The variable "color of M&M" used in this example is a discrete variable, and its distribution is also called discrete. Let us now extend the concept of a distribution to continuous variables. The data shown in Table \(2\) are the times it took one of us (DL) to move the mouse over a small target in a series of \(20\) trials. The times are sorted from shortest to longest. The variable "time to respond" is a continuous variable. With time measured accurately (to many decimal places), no two response times would be expected to be the same. Measuring time in milliseconds (thousandths of a second) is often precise enough to approximate a continuous variable in Psychology. As you can see in Table \(2\), measuring DL's responses this way produced times no two of which were the same. As a result, a frequency distribution would be uninformative: it would consist of the \(20\) times in the experiment, each with a frequency of \(1\).
Table \(2\): Response Times
568 720
577 728
581 729
640 777
641 808
645 824
657 825
673 865
696 875
703 1007
The solution to this problem is to create a grouped frequency distribution. In a grouped frequency distribution, scores falling within various ranges are tabulated. Table \(3\) shows a grouped frequency distribution for these \(20\) times.
Table \(3\): Grouped frequency distribution
Range Frequency
500-600 3
600-700 6
700-800 5
800-900 5
900-1000 0
1000-1100 1
Grouped frequency distributions can be portrayed graphically. Figure \(3\) shows a graphical representation of the frequency distribution in Table \(3\). This kind of graph is called a histogram. A later chapter contains an entire section devoted to histograms.
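As a sketch of the bookkeeping involved (Python; not part of the text), the grouped frequencies in Table \(3\) can be computed by assigning each time to the \(100\)-millisecond bin containing it:

```python
from collections import Counter

times = [568, 577, 581, 640, 641, 645, 657, 673, 696, 703,
         720, 728, 729, 777, 808, 824, 825, 865, 875, 1007]

# Each bin spans 100 ms: 568 falls in 500-600, 1007 in 1000-1100
bins = Counter((t // 100) * 100 for t in times)
for lower in range(500, 1100, 100):
    print(f"{lower}-{lower + 100}: {bins.get(lower, 0)}")
```

Running this reproduces the frequencies \(3, 6, 5, 5, 0, 1\) shown in Table \(3\).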
Probability Densities
The histogram in Figure \(3\) portrays just DL's \(20\) times in the one experiment he performed. To represent the probability associated with an arbitrary movement (which can take any positive amount of time), we must represent all these potential times at once. For this purpose, we plot the distribution for the continuous variable of time. Distributions for continuous variables are called continuous distributions. They also carry the fancier name probability density. Some probability densities have particular importance in statistics. A very important one is shaped like a bell, and called the normal distribution. Many naturally-occurring phenomena can be approximated surprisingly well by this distribution. It will serve to illustrate some features of all continuous distributions.
An example of a normal distribution is shown in Figure \(4\). Do you see the "bell"? The normal distribution doesn't represent a real bell, however, since the left and right tips extend indefinitely (we can't draw them any further so they look like they've stopped in our diagram). The \(Y\)-axis in the normal distribution represents the "density of probability." Intuitively, it shows the chance of obtaining values near corresponding points on the \(X\)-axis. In Figure \(4\), for example, the probability of an observation with value near \(40\) is about half of the probability of an observation with value near \(50\). (For more information, please see the chapter on normal distributions.)
Although this text does not discuss the concept of probability density in detail, you should keep the following ideas in mind about the curve that describes a continuous distribution (like the normal distribution). First, the area under the curve equals \(1\). Second, the probability of any exact value of \(X\) is \(0\). Finally, the area under the curve and bounded between two given points on the \(X\)-axis is the probability that a number chosen at random will fall between the two points. Let us illustrate with DL's hand movements. First, the probability that his movement takes some amount of time is one! (We exclude the possibility of him never finishing his gesture.) Second, the probability that his movement takes exactly \(598.956432342346576\) milliseconds is essentially zero. (We can make the probability as close as we like to zero by making the time measurement more and more precise.) Finally, suppose that the probability of DL's movement taking between \(600\) and \(700\) milliseconds is one tenth. Then the continuous distribution for DL's possible times would have a shape that places \(10\%\) of the area below the curve in the region bounded by \(600\) and \(700\) on the \(X\)-axis.
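These three facts can be checked numerically for a normal distribution. A minimal sketch, assuming SciPy is available (the mean and standard deviation below are invented for illustration, not fitted to DL's data):

```python
from scipy.stats import norm

dist = norm(loc=700, scale=120)  # hypothetical distribution of movement times

# The area under the whole curve is 1
print(dist.cdf(float("inf")) - dist.cdf(float("-inf")))  # 1.0

# The probability of landing between two points is the area between them
print(dist.cdf(700) - dist.cdf(600))  # about 0.30

# The probability of an exact value shrinks to 0 as the interval narrows
print(dist.cdf(600.0005) - dist.cdf(599.9995))  # essentially 0
```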
Shapes of Distributions
Distributions have different shapes; they don't all look like the normal distribution in Figure \(4\). For example, the normal probability density is higher in the middle compared to its two tails. Other distributions need not have this feature. There is even variation among the distributions that we call "normal." For example, some normal distributions are more spread out than the one shown in Figure \(4\) (their tails begin to hit the \(X\)-axis further from the middle of the curve -- for example, at \(10\) and \(90\) if drawn in place of Figure \(4\)). Others are less spread out (their tails might approach the \(X\)-axis at \(30\) and \(70\)). More information on the normal distribution can be found in a later chapter completely devoted to them.
The distribution shown in Figure \(4\) is symmetric; if you folded it in the middle, the two sides would match perfectly. Figure \(5\) shows the discrete distribution of scores on a psychology test. This distribution is not symmetric: the tail in the positive direction extends further than the tail in the negative direction. A distribution with the longer tail extending in the positive direction is said to have a positive skew. It is also described as "skewed to the right."
Figure \(6\) shows the salaries of major league baseball players in 1974 (in thousands of dollars). This distribution has an extreme positive skew.
A continuous distribution with a positive skew is shown in Figure \(7\).
Although less common, some distributions have a negative skew. Figure \(8\) shows the scores on a \(20\)-point problem on a statistics exam. Since the tail of the distribution extends to the left, this distribution is skewed to the left.
The histogram in Figure \(8\) shows the frequencies of various scores on a \(20\)-point question on a statistics test.
A continuous distribution with a negative skew is shown in Figure \(9\). The distributions shown so far all have one distinct high point or peak. The distribution in Figure \(10\) has two distinct peaks. A distribution with two peaks is called a bimodal distribution.
Distributions also differ from each other in terms of how large or "fat" their tails are. Figure \(11\) shows two distributions that differ in this respect. The upper distribution has relatively more scores in its tails; its shape is called leptokurtic. The lower distribution has relatively fewer scores in its tails; its shape is called platykurtic.
Contributors and Attributions
• Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University.
• David M. Lane and Heidi Ziemer
Learning Objectives
• Use summation notation to express the sum of all numbers
• Use summation notation to express the sum of a subset of numbers
• Use summation notation to express the sum of squares
Many statistical formulas involve summing numbers. Fortunately there is a convenient notation for expressing summation. This section covers the basics of this summation notation.
Let's say we have a variable $X$ that represents the weights (in grams) of $4$ grapes. The data are shown in Table $1$.
Table $1$: Weights of $4$ grapes.
Grape X
1 4.6
2 5.1
3 4.9
4 4.4
We label Grape $1's$ weight $X_1$, Grape $2's$ weight $X_2$, etc. The following formula means to sum up the weights of the four grapes:
$\sum_{i=1}^4 X_i$
The Greek letter capital sigma ($\sum$) indicates summation. The "$i = 1$" at the bottom indicates that the summation is to start with $X_1$ and the $4$ at the top indicates that the summation will end with $X_4$. The "$X_i$" indicates that $X$ is the variable to be summed as $i$ goes from $1$ to $4$. Therefore,
$\sum_{i=1}^4 X_i = X_1 + X_2 + X_3 + X_4 = 4.6 + 5.1 + 4.9 + 4.4 = 19.0$
The symbol
$\sum_{i=1}^3 X_i$
indicates that only the first $3$ scores are to be summed. The index variable $i$ goes from $1$ to $3$.
When all the scores of a variable (such as $X$) are to be summed, it is often convenient to use the following abbreviated notation:
$\sum X$
Thus, when no values of $i$ are shown, it means to sum all the values of $X$.
Many formulas involve squaring numbers before they are summed. This is indicated as
$\sum X^2 = 4.6^2 + 5.1^2 + 4.9^2 + 4.4^2 = 21.16 + 26.01 + 24.01 + 19.36 = 90.54$
Notice that:
$\left(\sum X \right)^2 \neq \sum X^2$
because the expression on the left means to sum up all the values of $X$ and then square the sum ($19^2 = 361$), whereas the expression on the right means to square the numbers and then sum the squares ($90.54$, as shown).
Some formulas involve the sum of cross products. Table $2$ shows the data for variables $X$ and $Y$. The cross products ($XY$) are shown in the third column. The sum of the cross products is $3+4+21 = 28$.
Table $2$: Cross Products.
X Y XY
1 3 3
2 2 4
3 7 21
In summation notation, this is written as:
$\sum XY = 28.$
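A sketch of these computations in code (Python; not part of the text):

```python
X = [4.6, 5.1, 4.9, 4.4]
Y = [3, 2, 7]  # paired with the X-values 1, 2, 3 from Table 2

print(round(sum(X), 2))                          # 19.0  -> sum of X
print(round(sum(X) ** 2, 2))                     # 361.0 -> (sum of X)^2
print(round(sum(x ** 2 for x in X), 2))          # 90.54 -> sum of X^2
print(sum(x * y for x, y in zip([1, 2, 3], Y)))  # 28    -> sum of XY
```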
• David M. Lane
1.12: Linear Transformations
Learning Objectives
• Give the formula for a linear transformation
• Determine whether a transformation is linear
• Describe what is linear about a linear transformation
Often it is necessary to transform data from one measurement scale to another. For example, you might want to convert height measured in feet to height measured in inches. Table $1$ shows the heights of four people measured in both feet and inches. To transform feet to inches, you simply multiply by $12$. Similarly, to transform inches to feet, you divide by $12$.
Table $1$: Converting between feet and inches
Feet Inches
5.00 60
6.25 75
5.50 66
5.75 69
Some conversions require that you multiply by a number and then add a second number. A good example of this is the transformation between degrees Celsius and degrees Fahrenheit. Table $2$ shows the temperatures of five US cities in the early afternoon of $\text{November 16, 2002}$.
Table $2$: Temperatures in 5 cities on $11/16/2002$
City Degrees Fahrenheit Degrees Celsius
Houston 54 12.22
Chicago 37 2.78
Minneapolis 31 -0.56
Miami 78 25.56
Phoenix 70 21.11
The formula to transform Celsius to Fahrenheit is:
$F = 1.8C + 32$
The formula for converting from Fahrenheit to Celsius is
$C = 0.5556F - 17.778$
The transformation consists of multiplying by a constant and then adding a second constant. For the conversion from Celsius to Fahrenheit, the first constant is $1.8$ and the second is $32$.
Figure $1$ shows a plot of degrees Celsius as a function of degrees Fahrenheit. Notice that the points form a straight line. This will always be the case if the transformation from one scale to another consists of multiplying by one constant and then adding a second constant. Such transformations are therefore called linear transformations.
Many transformations are not linear. With nonlinear transformations, the points in a plot of the transformed variable against the original variable would not fall on a straight line. Examples of nonlinear transformations are: square root, raising to a power, logarithm, and any of the trigonometric functions.
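A sketch of these conversions in code (Python; not part of the text):

```python
def c_to_f(c):
    return 1.8 * c + 32    # multiply by a constant, then add a constant

def f_to_c(f):
    return (f - 32) / 1.8  # equivalently, 0.5556 * f - 17.778

print(c_to_f(12.22))  # about 54, Houston's temperature in Table 2
print(f_to_c(37))     # about 2.78, Chicago's temperature
```

Because each function multiplies by one constant and then adds another, a plot of either output against its input is a straight line.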
• David M. Lane
1.13: Logarithms
Learning Objectives
• Compute logs using different bases
• Perform basic arithmetic operations using logs
• State the relationship between logs and proportional change
The log transformation reduces positive skew. This can be valuable both for making the data more interpretable and for helping to meet the assumptions of inferential statistics.
Basics of Logarithms (Logs)
Logs are, in a sense, the opposite of exponents. Consider the following simple expression:
$10^2 = 100$
Here we can say the base of $10$ is raised to the second power. Here is an example of a log:
$\log_{10}(100) = 2$
This can be read as: The log base ten of $100$ equals $2$. The result is the power that the base of $10$ has to be raised to in order to equal the value ($100$). Similarly,
$\log_{10}(1000) = 3$
since $10$ has to be raised to the third power in order to equal $1,000$.
These examples all used base $10$, but any base could have been used. There is a base which results in "natural logarithms" and that is called $e$ and equals approximately $2.718$. It is beyond the scope here to explain what is "natural" about it. Natural logarithms can be indicated either as: $\ln (x)\; or\; \log_e(x)$.
Changing the base of the log changes the result by a multiplicative constant. To convert from $\log _{10}$ to natural logs, you multiply by $2.303$. Analogously, to convert in the other direction, you divide by $2.303$.
$\ln X =2.303 \log_{10} X$
Taking the $\text{antilog}$ of a number undoes the operation of taking the $\log$. Therefore, since $\log_{10}(1000) = 3$, the $antilog_{10}$ of $3$ is $10^3 = 1,000$. Taking the $\text{antilog}$ of a number simply raises the base of the logarithm in question to that number.
Logs and Proportional Change
A series of numbers that increase proportionally will increase in equal amounts when converted to logs. For example, the numbers in the first column of Table $1$ increase by a factor of $1.5$, so that each row is $1.5$ times as high as the preceding row. The $\log_{10}$-transformed numbers increase in equal steps of $0.176$.
Table $1$: Proportional raw changes are equal in log units
Raw Log
4.0 0.602
6.0 0.778
9.0 0.954
13.5 1.130
As another example, if one student increased their score from $100$ to $200$ while a second student increased theirs from $150$ to $300$, the percentage change ($100\%$) is the same for both students. The log difference is also the same, as shown below.
$\log_{10}(100) = 2.000 \qquad \log_{10}(200) = 2.301 \qquad \text{Difference: } 0.301$

$\log_{10}(150) = 2.176 \qquad \log_{10}(300) = 2.477 \qquad \text{Difference: } 0.301$
Arithmetic Operations
Rules for logs of products and quotients are shown below.
$\log(AB) = \log(A) + \log(B)$
$\log\left(\dfrac{A}{B}\right) = \log(A) - \log(B)$
For example,
$\log_{10}(10 \times 100) = \log_{10}(10) + \log_{10}(100) = 1 + 2 = 3.$
Similarly,
$\log_{10}\left(\dfrac{100}{10}\right) = \log_{10}(100) - \log_{10}(10) = 2 - 1 = 1.$
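These relationships are easy to verify numerically. A minimal sketch (Python; not part of the text):

```python
import math

# Proportional change: equal ratios give equal log differences
print(round(math.log10(200) - math.log10(100), 3))  # 0.301
print(round(math.log10(300) - math.log10(150), 3))  # 0.301

# Base conversion: ln(x) is about 2.303 * log10(x)
print(round(math.log(1000), 3))            # 6.908
print(round(2.303 * math.log10(1000), 3))  # 6.909

# Product and quotient rules
print(math.log10(10 * 100), math.log10(10) + math.log10(100))  # 3.0 3.0
print(math.log10(100 / 10), math.log10(100) - math.log10(10))  # 1.0 1.0
```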
Contributors and Attributions
• Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University.
• David M. Lane
Do Athletes Get Special Treatment?
Prerequisites
Levels of Measurement
The Board of Trustees at a university commissioned a top management-consulting firm to address the admission processes for academic and athletic programs. The consulting firm wrote a report discussing the trade-off between maintaining academic and athletic excellence. One of their key findings was:
The standard for an athlete’s admission, as reflected in SAT scores alone, is lower than the standard for non-athletes by as much as \(20\) percent, with the weight of this difference being carried by the so-called “revenue sports” of football and basketball. Athletes are also admitted through a different process than the one used to admit non-athlete students.
What do you think?
Based on what you have learned in this chapter about measurement scales, does it make sense to compare SAT scores using percentages? Why or why not?
As you may know, the SAT has an arbitrarily determined lower limit on test scores of \(200\). Therefore, the SAT is measured on either an ordinal scale or, at most, an interval scale. However, it is clearly not measured on a ratio scale. Consequently, it is not meaningful to report SAT score differences in terms of percentages. For example, consider the effect of subtracting \(200\) from every student's score so that the lowest possible score is \(0\). How would that affect the difference as expressed in percentages?
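To make this concrete, here is a small numeric illustration with hypothetical scores (the numbers are invented for this example and are not taken from the consulting report):

```python
# Two hypothetical SAT scores 20% apart on the reported scale
athlete, non_athlete = 800, 1000
print((non_athlete - athlete) / non_athlete)   # 0.20, i.e., 20% lower

# Subtract the arbitrary floor of 200 from both scores
print(((non_athlete - 200) - (athlete - 200)) / (non_athlete - 200))   # 0.25
```

The raw difference is unchanged, but the percentage grows from \(20\%\) to \(25\%\); percentage comparisons therefore depend on where the scale's zero happens to sit.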
Statistical Errors in Politics
Prerequisites
Inferential Statistics
An article about ignorance of statistics in politics quotes a politician commenting on why the "American Community Survey" should be eliminated:
“We’re spending \(\$70\) per person to fill this out. That’s just not cost effective, especially since in the end this is not a scientific survey. It’s a random survey.”
What do you think?
What is wrong with this statement? Despite the error in this statement, what type of sampling could be done so that the sample will be more likely to be representative of the population?
Randomness is what makes the survey scientific. If the survey were not random, then it would be biased and therefore statistically meaningless, especially since the survey is conducted to make generalizations about the American population. Stratified sampling would likely be more representative of the population.
Reference
Mark C. C., scientopia.org
Contributors and Attributions
• Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University.
• Denise Harvey and David Lane
1.E: Introduction to Statistics (Exercises)
General Questions
Q1
A teacher wishes to know whether the males in his/her class have more conservative attitudes than the females. A questionnaire is distributed assessing attitudes and the males and the females are compared. Is this an example of descriptive or inferential statistics? (relevant section 1, relevant section 2)
Q2
A cognitive psychologist is interested in comparing two ways of presenting stimuli on subsequent memory. Twelve subjects are presented with each method and a memory test is given. What would be the roles of descriptive and inferential statistics in the analysis of these data? (relevant section 1 & relevant section 2)
Q3
If you are told that you scored in the $80^{th}$ percentile, from just this information would you know exactly what that means and how it was calculated? Explain. (relevant section)
Q4
A study is conducted to determine whether people learn better with spaced or massed practice. Subjects volunteer from an introductory psychology class. At the beginning of the semester $12$ subjects volunteer and are assigned to the massed-practice condition. At the end of the semester $12$ subjects volunteer and are assigned to the spaced-practice condition. This experiment involves two kinds of non-random sampling:
1. Subjects are not randomly sampled from some specified population
2. Subjects are not randomly assigned to conditions.
Which of the problems relates to the generality of the results? Which of the problems relates to the validity of the results? Which problem is more serious? (relevant section)
Q5
Give an example of an independent and a dependent variable. (relevant section)
Q6
Categorize the following variables as being qualitative or quantitative: (relevant section)
1. Rating of the quality of a movie on a $7$-point scale
2. Age
3. Country you were born in
4. Favorite Color
5. Time to respond to a question
Q7
Specify the level of measurement used for the items in Question 6. (relevant section)
Q8
Which of the following are linear transformations? (relevant section)
1. Converting from meters to kilometers
2. Squaring each side to find the area
3. Converting from ounces to pounds
4. Taking the square root of each person's height.
5. Multiplying all numbers by $2$ and then adding $5$
6. Converting temperature from Fahrenheit to Centigrade
Q9
The formula for finding each student's test grade ($g$) from his or her raw score ($s$) on a test is as follows: $g = 16 + 3s$
Is this a linear transformation? If a student got a raw score of $20$, what is his test grade? (relevant section)
Q10
For the numbers $1, 2, 4, 16$, compute the following: (relevant section)
1. $\sum X$
2. $\sum X^2$
3. $\left ( \sum X \right )^2$
Q11
Which of the frequency polygons has a large positive skew? Which has a large negative skew? (relevant section)
Q12
What is more likely to have a skewed distribution: time to solve an anagram problem (where the letters of a word or phrase are rearranged into another word or phrase like "dear" and "read" or "funeral" and "real fun") or scores on a vocabulary test? (relevant section)
Questions from Case Studies:
The following questions are from the Angry Moods (AM) case study.
Q13
(AM#1) Which variables are the participant variables? (They act as independent variables in this study.) (relevant section)
Q14
(AM#2) What are the dependent variables? (relevant section)
Q15
(AM#3) Is Anger-Out a quantitative or qualitative variable? (relevant section)
The following question is from the Teacher Ratings (TR) case study.
Q16
(TR#1) What is the independent variable in this study? (relevant section)
The following questions are from the ADHD Treatment (AT) case study.
Q17
(AT#1) What is the independent variable of this experiment? How many levels does it have? (relevant section)
Q18
(AT#2) What is the dependent variable? On what scale (nominal, ordinal, interval, ratio) was it measured? (relevant section)
Select Answers
S9
$76$
S10
$\sum X = 23,\; \sum X^2 = 277,\; \left(\sum X\right)^2 = 529$
Graphing data is the first and often most important step in data analysis. In this day of computers, researchers all too often see only the results of complex computer analyses without ever taking a close look at the data themselves. This is all the more unfortunate because computers can create many types of graphs quickly and easily. This chapter covers some classic types of graphs such as bar charts that were invented by William Playfair in the 18th century as well as graphs such as box plots invented by John Tukey in the 20th century.
• 2.1: Graphing Qualitative Variables
This section examines graphical methods for displaying the results of the interviews. We’ll learn some general lessons about how to graph data that fall into a small number of categories. A later section will consider how to graph numerical data in which each observation is represented by a number in some range. The key point about the qualitative data that occupy us in the present section is that they do not come with a pre-established ordering (the way numbers are ordered).
• 2.2: Quantitative Variables
Quantitative variables are variables measured on a numeric scale. Height, weight, response time, subjective rating of pain, temperature, and score on an exam are all examples of quantitative variables. Quantitative variables are distinguished from categorical (sometimes called qualitative) variables such as favorite color, religion, city of birth, and favorite sport in which there is no ordering or measuring involved.
• 2.3: Stem and Leaf Displays
A stem and leaf display is a graphical method of displaying data. It is particularly useful when your data are not too numerous. In this section, we will explain how to construct and interpret this kind of graph.
• 2.4: Histograms
A histogram is a graphical method for displaying the shape of a distribution. It is particularly useful when there are a large number of observations.
• 2.5: Frequency Polygons
Frequency polygons are a graphical device for understanding the shapes of distributions. They serve the same purpose as histograms, but are especially helpful for comparing sets of data. Frequency polygons are also a good choice for displaying cumulative frequency distributions.
• 2.6: Box Plots
Box plots are useful for identifying outliers and for comparing distributions.
• 2.7: Box Plot Demo
• 2.8: Bar Charts
Bar charts can be used to present other kinds of quantitative information, not just frequency counts in histograms.
• 2.9: Line Graphs
A line graph is a bar graph with the tops of the bars represented by points joined by lines (the rest of the bar is suppressed).
• 2.10: Dot Plots
Dot plots can be used to display various types of information.
• 2.11: Statistical Literacy
• 2.E: Graphing Distributions (Exercises)
02: Graphing Distributions
Learning Objectives
• Create a frequency table
• Determine when pie charts are valuable and when they are not
• Create and interpret bar charts
• Identify common graphical mistakes
When Apple Computer introduced the iMac computer in August \(1998\), the company wanted to learn whether the iMac was expanding Apple’s market share. Was the iMac just attracting previous Macintosh owners? Or was it purchased by newcomers to the computer market and by previous Windows users who were switching over? To find out, \(500\) iMac customers were interviewed. Each customer was categorized as a previous Macintosh owner, a previous Windows owner, or a new computer purchaser.
This section examines graphical methods for displaying the results of the interviews. We’ll learn some general lessons about how to graph data that fall into a small number of categories. A later section will consider how to graph numerical data in which each observation is represented by a number in some range. The key point about the qualitative data that occupy us in the present section is that they do not come with a pre-established ordering (the way numbers are ordered). For example, there is no natural sense in which the category of previous Windows users comes before or after the category of previous Macintosh users. This situation may be contrasted with quantitative data, such as a person’s weight. People of one weight are naturally ordered with respect to people of a different weight.
Frequency Tables
All of the graphical methods shown in this section are derived from frequency tables. Table \(1\) shows a frequency table for the results of the iMac study; it shows the frequencies of the various response categories. It also shows the relative frequencies, which are the proportion of responses in each category. For example, the relative frequency for "none" is \(85/500 = 0.17\).
Table \(1\): Frequency Table for the iMac Data
Previous Ownership Frequency Relative Frequency
None 85 0.17
Windows 60 0.12
Macintosh 355 0.71
Total 500 1.00
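Such a table is easy to produce in software. The following Python sketch rebuilds the raw responses from the published counts (the individual interview records are not available, so the list is a stand-in) and then tabulates frequencies and relative frequencies:

```python
from collections import Counter

# Stand-in raw responses matching the counts in Table 1
responses = ["None"] * 85 + ["Windows"] * 60 + ["Macintosh"] * 355

counts = Counter(responses)
total = sum(counts.values())

print(f"{'Previous Ownership':<20}{'Frequency':>10}{'Rel. Freq.':>12}")
for category in ["None", "Windows", "Macintosh"]:
    freq = counts[category]
    print(f"{category:<20}{freq:>10}{freq / total:>12.2f}")
print(f"{'Total':<20}{total:>10}{1.00:>12.2f}")
```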
Pie Charts
The pie chart in Figure \(1\) shows the results of the iMac study. In a pie chart, each category is represented by a slice of the pie. The area of the slice is proportional to the percentage of responses in the category. This is simply the relative frequency multiplied by \(100\). Although most iMac purchasers were Macintosh owners, Apple was encouraged by the \(12\%\) of purchasers who were former Windows users, and by the \(17\%\) of purchasers who were buying a computer for the first time.
Pie charts are effective for displaying the relative frequencies of a small number of categories. They are not recommended, however, when you have a large number of categories. Pie charts can also be confusing when they are used to compare the outcomes of two different surveys or experiments. In an influential book on the use of graphs, Edward Tufte asserted, "The only worse design than a pie chart is several of them."
Here is another important point about pie charts. If they are based on a small number of observations, it can be misleading to label the pie slices with percentages. For example, if just \(5\) people had been interviewed by Apple Computers, and \(3\) were former Windows users, it would be misleading to display a pie chart with the Windows slice showing \(60\%\). With so few people interviewed, such a large percentage of Windows users might easily have occurred since chance can cause large errors with small samples. In this case, it is better to alert the user of the pie chart to the actual numbers involved. The slices should therefore be labeled with the actual frequencies observed (e.g., \(3\)) instead of with percentages.
Bar charts
Bar charts can also be used to represent frequencies of different categories. A bar chart of the iMac purchases is shown in Figure \(2\). Frequencies are shown on the \(Y\)-axis and the type of computer previously owned is shown on the \(X\)-axis. Typically, the \(Y\)-axis shows the number of observations in each category rather than the percentage of observations as is typical in pie charts.
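For readers who want to reproduce such a chart, here is a minimal sketch assuming Python with the matplotlib plotting library:

```python
import matplotlib.pyplot as plt

categories = ["None", "Windows", "Macintosh"]
frequencies = [85, 60, 355]

fig, ax = plt.subplots()
ax.bar(categories, frequencies)
ax.set_xlabel("Previous Computer Ownership")
ax.set_ylabel("Number of Buyers")
# Keep a zero baseline; a raised baseline distorts comparisons,
# as discussed later in this section
ax.set_ylim(bottom=0)
plt.show()
```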
Comparing Distributions
Often we need to compare the results of different surveys, or of different conditions within the same overall survey. In this case, we are comparing the "distributions" of responses between the surveys or conditions. Bar charts are often excellent for illustrating differences between two distributions. Figure \(3\) shows the number of people playing card games at the Yahoo website on a Sunday and on a Wednesday in the Spring of \(2001\). We see that there were more players overall on Wednesday compared to Sunday. The number of people playing Pinochle was nonetheless the same on these two days. In contrast, there were about twice as many people playing hearts on Wednesday as on Sunday. Facts like these emerge clearly from a well-designed bar chart.
The bars in Figure \(3\) are oriented horizontally rather than vertically. The horizontal format is useful when you have many categories because there is more room for the category labels. We’ll have more to say about bar charts when we consider numerical quantities later in the section Bar Charts.
Some graphical mistakes to avoid
Don’t get fancy! People sometimes add features to graphs that don’t help to convey their information. For example, \(3\)-dimensional bar charts such as the one shown in Figure \(4\) are usually not as effective as their two-dimensional counterparts.
Here is another way that fanciness can lead to trouble. Instead of plain bars, it is tempting to substitute meaningful images. For example, Figure \(5\) presents the iMac data using pictures of computers. The heights of the pictures accurately represent the number of buyers, yet Figure \(5\) is misleading because the viewer's attention will be captured by areas. The areas can exaggerate the size differences between the groups. In terms of percentages, the ratio of previous Macintosh owners to previous Windows owners is about \(6\) to \(1\). But the ratio of the two areas in Figure \(5\) is about \(35\) to \(1\). A biased person wishing to hide the fact that many Windows owners purchased iMacs would be tempted to use Figure \(5\) instead of Figure \(2\)! Edward Tufte coined the term "lie factor" to refer to the ratio of the size of the effect shown in a graph to the size of the effect shown in the data. He suggests that lie factors greater than \(1.05\) or less than \(0.95\) produce unacceptable distortion.
Another distortion in bar charts results from setting the baseline to a value other than zero. The baseline is the bottom of the \(Y\)-axis, representing the least number of cases that could have occurred in a category. Normally, but not always, this number should be zero. Figure \(6\) shows the iMac data with a baseline of \(50\). Once again, the differences in areas suggest a different story than the true differences in percentages. The percentage of Windows-switchers seems minuscule compared to its true value of \(12\%\).
Finally, we note that it is a serious mistake to use a line graph when the \(X\)-axis contains merely qualitative variables. A line graph is essentially a bar graph with the tops of the bars represented by points joined by lines (the rest of the bar is suppressed). Figure \(7\) inappropriately shows a line graph of the card game data from Yahoo. The drawback to Figure \(7\) is that it gives the false impression that the games are naturally ordered in a numerical way when, in fact, they are ordered alphabetically.
Figure \(7\): A line graph used inappropriately to depict the number of people playing different card games on Sunday and Wednesday.
Summary
Pie charts and bar charts can both be effective methods of portraying qualitative data. Bar charts are better when there are more than just a few categories and for comparing two or more distributions. Be careful to avoid creating misleading graphs.
2.02: Quantitative Variables
As discussed in the section on variables in Chapter 1, quantitative variables are variables measured on a numeric scale. Height, weight, response time, subjective rating of pain, temperature, and score on an exam are all examples of quantitative variables. Quantitative variables are distinguished from categorical (sometimes called qualitative) variables such as favorite color, religion, city of birth, and favorite sport in which there is no ordering or measuring involved.
There are many types of graphs that can be used to portray distributions of quantitative variables. The upcoming sections cover the following types of graphs:
1. stem and leaf displays
2. histograms
3. frequency polygons
4. box plots
5. bar charts
6. line graphs
7. scatter plots (discussed in a different chapter)
8. dot plots
Some graph types such as stem and leaf displays are best-suited for small to moderate amounts of data, whereas others such as histograms are best-suited for large amounts of data. Graph types such as box plots are good at depicting differences between distributions. Scatter plots are used to show the relationship between two variables. | textbooks/stats/Introductory_Statistics/Introductory_Statistics_(Lane)/02%3A_Graphing_Distributions/2.01%3A_Graphing_Qualitative_Variables.txt |
Learning Objectives
• Create and interpret basic stem and leaf displays
• Create and interpret back-to-back stem and leaf displays
• Judge whether a stem and leaf display is appropriate for a given data set
A stem and leaf display is a graphical method of displaying data. It is particularly useful when your data are not too numerous. In this section, we will explain how to construct and interpret this kind of graph.
As usual, an example will get us started. Consider Table $1$ that shows the number of touchdown passes (TD passes) thrown by each of the $31$ teams in the National Football League in the $2000$ season.
Table $1$: Number of touchdown passes
$\begin{matrix} 37 & 33 & 33 & 32 & 29 & 28 & 28 & 23 & 22\ 22 & 22 & 21 & 21 & 21 & 20 & 20 & 19 & 19\ 18 & 18 & 18 & 18 & 16 & 15 & 14 & 14 & 14\ 12 & 12 & 9 & 6 & & & & & \end{matrix}$
A stem and leaf display of the data is shown in Figure $1$. The left portion of Figure $1$ contains the stems. They are the numbers $3, 2, 1,\; and\; 0$, arranged as a column to the left of the bars. Think of these numbers as $10’s$ digits. A stem of $3$, for example, can be used to represent the $10’s$ digit in any of the numbers from $30$ to $39$. The numbers to the right of the bar are leaves, and they represent the $1’s$ digits. Every leaf in the graph therefore stands for the result of adding the leaf to $10$ times its stem.
$\begin{array}{c|c c c c c c c c c c c c c} 3 &2 &3 &3 &7\ 2 &0 &0 &1 &1 &1 &2 &2 &2 &3 &8 &8 &9\ 1 &2 &2 &4 &4 &4 &5 &6 &8 &8 &8 &8 &9 &9\ 0 &6 &9\ \end{array}$
Figure $1$: Stem and leaf display of the number of touchdown passes
To make this clear, let us examine Figure $1$ more closely. In the top row, the four leaves to the right of stem $3$ are $2, 3, 3,\; and\; 7$. Combined with the stem, these leaves represent the numbers $32, 33, 33,\; and\; 37$, which are the numbers of TD passes for the first four teams in Table $1$. The next row has a stem of $2$ and $12$ leaves. Together, they represent $12$ data points, namely, two occurrences of $20$ TD passes, three occurrences of $21$ TD passes, three occurrences of $22$ TD passes, one occurrence of $23$ TD passes, two occurrences of $28$ TD passes, and one occurrence of $29$ TD passes. We leave it to you to figure out what the third row represents. The fourth row has a stem of $0$ and two leaves. It stands for the last two entries in Table $1$, namely $9$ TD passes and $6$ TD passes. (The latter two numbers may be thought of as $09$ and $06$.)
One purpose of a stem and leaf display is to clarify the shape of the distribution. You can see many facts about TD passes more easily in Figure $1$ than in Table $1$. For example, by looking at the stems and the shape of the plot, you can tell that most of the teams had between $10$ and $29$ passing TDs, with a few having more and a few having less. The precise numbers of TD passes can be determined by examining the leaves.
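The construction is simple enough to code directly: the stem is the tens digit of each score and the leaf is the ones digit. This Python sketch reproduces Figure $1$ from the data in Table $1$:

```python
# TD passes from Table 1
td_passes = [37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20,
             19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]

# Group each value by its tens digit (the stem); the ones digit is the leaf
stems = {}
for value in sorted(td_passes):
    stems.setdefault(value // 10, []).append(value % 10)

# Print stems from largest to smallest, as in Figure 1
for stem in sorted(stems, reverse=True):
    leaves = " ".join(str(leaf) for leaf in stems[stem])
    print(f"{stem} | {leaves}")
```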
We can make our figure even more revealing by splitting each stem into two parts. Figure $2$ shows how to do this. The top row is reserved for numbers from $35$ to $39$ and holds only the $37$ TD passes made by the first team in Table $1$. The second row is reserved for the numbers from $30$ to $34$ and holds the $32, 33,\; and\; 33$ TD passes made by the next three teams in the table. You can see for yourself what the other rows represent.
$\begin{array}{c|c c c c c c c c c c c c c} 3 &7\ 3 &2 &3 &3 \ 2 &8 &8 &9 \ 2 &0 &0 &1 &1 &1 &2 &2 &2 &3 \ 1 &5 &6 &8 &8 &8 &8 &9 &9\ 1 &2 &2 &4 &4 &4 \ 0 &6 &9 \end{array}$
Figure $2$: Stem and leaf display with the stems split in two
Figure $2$ is more revealing than Figure $1$ because the latter figure lumps too many values into a single row. Whether you should split stems in a display depends on the exact form of your data. If rows get too long with single stems, you might try splitting them into two or more parts.
There is a variation of stem and leaf displays that is useful for comparing distributions. The two distributions are placed back to back along a common column of stems. The result is a “back-to-back stem and leaf graph.” Figure $3$ shows such a graph. It compares the numbers of TD passes in the $1998$ and $2000$ seasons. The stems are in the middle, the leaves to the left are for the $1998$ data, and the leaves to the right are for the $2000$ data. For example, the second-to-last row shows that in 1998 there were teams with $11, 12,\; and\; 13$ TD passes, and in $2000$ there were two teams with $12$ and three teams with $14$ TD passes.
$\begin{array}{c|c|c c c c c c c c c c} \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; 1\; 1 & 4 \ &3 &7\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; 3\; 3\; 2 &3 &2 &3 &3\ \; \; \; \; \; \; \; \; \; \; \; \; \; 8\; 8\; 6\; 5 &2 &8 &8 &9\ \; \; 4\; 4\; 3\; 3\; 1\; 1\; 1\; 0 &2 &0 &0 &1 &1 &1 &2 &2 &2 &3\ 9\; 8\; 7\; 7\; 7\; 6\; 6\; 6\; 5 &1 &5 &6 &8 &8 &8 &8 &9 &9\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; 3\; 2\; 1 &1 &2 &2 &4 &4 &4\ \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; 7 &0 &6 &9 \end{array}$
Figure $3$: Back-to-back stem and leaf display.
The left side shows the $1998$ TD data and the right side shows the $2000$ TD data. Figure $3$ helps us see that the two seasons were similar, but that only in $1998$ did any teams throw more than $40$ TD passes.
There are two things about the football data that make them easy to graph with stems and leaves. First, the data are limited to whole numbers that can be represented with a one-digit stem and a one-digit leaf. Second, all the numbers are positive. If the data include numbers with three or more digits, or contain decimals, they can be rounded to two-digit accuracy. Negative values are also easily handled. Let us look at another example.
Table $2$ shows data from the case study Weapons and Aggression. Each value is the mean difference over a series of trials between the times it took an experimental subject to name aggressive words (like “punch”) under two conditions. In one condition, the words were preceded by a non-weapon word such as "bug." In the second condition, the same words were preceded by a weapon word such as "gun" or "knife." The issue addressed by the experiment was whether a preceding weapon word would speed up (or prime) pronunciation of the aggressive word compared to a non-weapon priming word. A positive difference implies greater priming of the aggressive word by the weapon word. Negative differences imply that the priming by the weapon word was less than for a neutral word.
Table $2$: The effects of priming (thousandths of a second)
$\begin{matrix} 43.2 & 42.9 & 35.6 & 25.6 & 25.4 & 23.6 & & \ 20.5 & 19.9 & 14.4 & 12.7 & 11.3 & 10.2 & & \ 10.0 & 9.1 & 7.5 & 5.4 & 4.7 & 3.8 & 2.1 & 1.2\ -0.2 & -6.3 & -6.7 & -8.8 & -10.4 & -10.5 & & \ -14.9 & -14.9 & -15.0 & -18.5 & -27.4 \end{matrix}$
You see that the numbers range from $43.2$ to $-27.4$. The first value indicates that one subject was $43.2$ milliseconds faster pronouncing aggressive words when they were preceded by weapon words than when preceded by neutral words. The value $-27.4$ indicates that another subject was $27.4$ milliseconds slower pronouncing aggressive words when they were preceded by weapon words.
The data are displayed with stems and leaves in Figure $4$. Since stem and leaf displays can only portray two whole digits (one for the stem and one for the leaf), the numbers are first rounded. Thus, the value $43.2$ is rounded to $43$ and represented with a stem of $4$ and a leaf of $3$. Similarly, $42.9$ is rounded to $43$. To represent negative numbers, we simply use negative stems. For example, the bottom row of the figure represents the number $-27$. The second-to-last row represents the numbers $-10, -10, -15$, etc. Once again, we have rounded the original values from Table $2$.
$\begin{array}{c|c c c c c c c} 4 & 3 & 3 \ 3 &6 \ 2 &0 &0 &4 &5 &6\ 1 &0 &0 &1 &3 &4\ 0 &1 &2 &4 &5 &5 &8 &9\ -0 &0 &6 &7 &9\ -1 &0 &0 &5 &5 &5 &9\ -2 &7 \end{array}$
Figure $4$: Stem and leaf display with negative numbers and rounding
Observe that the figure contains a row headed by "$0$" and another headed by "$-0$". The stem of $0$ is for numbers between $0$ and $9$, whereas the stem of $-0$ is for numbers between $0$ and $-9$. For example, the fifth row of the table holds the numbers $1, 2, 4, 5, 5, 8, 9$ and the sixth row holds $0, -6, -7,\; and\; -9$. Values that are exactly $0$ before rounding should be split as evenly as possible between the "$0$" and "$-0$" rows. In Table $2$, none of the values are $0$ before rounding. The "$0$" that appears in the "$-0$" row comes from the original value of $-0.2$ in the table.
Although stem and leaf displays are unwieldy for large data sets, they are often useful for data sets with up to $200$ observations. Figure $5$ portrays the distribution of populations of $185$ US cities in $1998$. To be included, a city had to have between $100,000$ and $500,000$ residents.
$\begin{array}{c|ccccccccccccccccccccccccccccccccccccccccccc} 4 &8 &9 &9 \ 4 &6 \ 4 &4 &4 &5 &5\ 4 &3 &3 &3\ 4 &0 &1\ 3 &9 &9\ 3 &6 &7 &7 &7 &7 &7\ 3 &5 &5\ 3 &2 &2 &3\ 3 &1 &1 &1\ 2 &8 &8 &9 &9\ 2 &6 &6 &6 &6 &6 &7\ 2 &4 &4 &4 &4 &5 &5\ 2 &2 &2 &3 &3 &3\ 2 &0 &0 &0 &0 &0 &0\ 1 &8 &8 &8 &8 &8 &8 &8 &8 &8 &8 &8 &8 &9 &9 &9 &9 &9 &9 &9 &9 &9 &9 &9\ 1 &6 &6 &6 &6 &6 &6 &7 &7 &7 &7 &7 &7\ 1 &4 &4 &4 &4 &4 &4 &4 &4 &4 &4 &4 &4 &5 &5 &5 &5 &5 &5 &5 &5 &5 &5 &5 &5\ 1 &2 &2 &2 &2 &2 &2 &2 &2 &2 &2 &2 &2 &2 &2 &2 &2 &2 &2 &2 &3 &3 &3 &3 &3 &3 &3 &3 &3\ 1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 \end{array}$
Figure $5$: Stem and leaf display of populations of $185$ US cities with populations between $100,000$ and $500,000$ in $1998$.
Since a stem and leaf plot shows only two-place accuracy, we had to round the numbers to the nearest $10,000$. For example, the largest number ($493,559$) was rounded to $490,000$ and then plotted with a stem of $4$ and a leaf of $9$. The fourth highest number ($463,201$) was rounded to $460,000$ and plotted with a stem of $4$ and a leaf of $6$. Thus, the stems represent units of $100,000$ and the leaves represent units of $10,000$. Notice that each stem value is split into five parts: $0-1, 2-3, 4-5, 6-7$, and $8-9$.
Whether your data can be suitably represented by a stem and leaf graph depends on whether they can be rounded without loss of important information. Also, their extreme values must fit into two successive digits, as the data in Figure $5$ fit into the $10,000$ and $100,000$ places (for leaves and stems, respectively). Deciding what kind of graph is best suited to displaying your data thus requires good judgment. Statistics is not just recipes!
Learning Objectives
• Create a grouped frequency distribution
• Create a histogram based on a grouped frequency distribution
• Determine an appropriate bin width
A histogram is a graphical method for displaying the shape of a distribution. It is particularly useful when there are a large number of observations. We begin with an example consisting of the scores of \(642\) students on a psychology test. The test consists of \(197\) items, each graded as "correct" or "incorrect." The students' scores ranged from \(46\) to \(167\).
The first step is to create a frequency table. Unfortunately, a simple frequency table would be too big, containing over \(100\) rows. To simplify the table, we group scores together as shown in Table \(1\).
Table \(1\): Grouped Frequency Distribution of Psychology Test Scores
Interval's Lower Limit Interval's Upper Limit Class Frequency
39.5 49.5 3
49.5 59.5 10
59.5 69.5 53
69.5 79.5 107
79.5 89.5 147
89.5 99.5 130
99.5 109.5 78
109.5 119.5 59
119.5 129.5 36
129.5 139.5 11
139.5 149.5 6
149.5 159.5 1
159.5 169.5 1
To create this table, the range of scores was broken into intervals, called class intervals. The first interval is from \(39.5\) to \(49.5\), the second from \(49.5\) to \(59.5\), etc. Next, the number of scores falling into each interval was counted to obtain the class frequencies. There are three scores in the first interval, \(10\) in the second, etc.
Class intervals of width \(10\) provide enough detail about the distribution to be revealing without making the graph too "choppy." More information on choosing the widths of class intervals is presented later in this section. Placing the limits of the class intervals midway between two numbers (e.g., \(49.5\)) ensures that every score will fall in an interval rather than on the boundary between intervals.
In a histogram, the class frequencies are represented by bars. The height of each bar corresponds to its class frequency. A histogram of these data is shown in Figure \(1\).
The histogram makes it plain that most of the scores are in the middle of the distribution, with fewer scores in the extremes. You can also see that the distribution is not symmetric: the scores extend to the right farther than they do to the left. The distribution is therefore said to be skewed. (We'll have more to say about shapes of distributions in the chapter " Summarizing Distributions.")
In our example, the observations are whole numbers. Histograms can also be used when the scores are measured on a more continuous scale such as the length of time (in milliseconds) required to perform a task. In this case, there is no need to worry about fence-sitters since they are improbable. (It would be quite a coincidence for a task to require exactly \(7\) seconds, measured to the nearest thousandth of a second.) We are therefore free to choose whole numbers as boundaries for our class intervals, for example, \(4000,\; 5000\), etc. The class frequency is then the number of observations that are greater than or equal to the lower bound, and strictly less than the upper bound. For example, one interval might hold times from \(4000\) to \(4999\) milliseconds. Using whole numbers as boundaries avoids a cluttered appearance, and is the practice of many computer programs that create histograms. Note also that some computer programs label the middle of each interval rather than the end points.
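This convention matches the way numpy's histogram routine bins data, as the sketch below illustrates. Because the $642$ individual scores are not listed in the text, the example uses simulated stand-in data:

```python
import numpy as np

# Simulated stand-in scores (the actual 642 scores are not listed)
rng = np.random.default_rng(0)
scores = rng.normal(90, 20, 642).round()

# Class boundaries midway between whole numbers, as in Table 1
edges = np.arange(39.5, 170.5, 10)   # 39.5, 49.5, ..., 169.5

# np.histogram uses half-open bins [lower, upper), matching the
# "greater than or equal to ... strictly less than" convention
# (only the final bin also includes its upper edge)
counts, _ = np.histogram(scores, bins=edges)
for lower, upper, count in zip(edges, edges[1:], counts):
    print(f"{lower} to {upper}: {count}")
```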
Histograms can be based on relative frequencies instead of actual frequencies. Histograms based on relative frequencies show the proportion of scores in each interval rather than the number of scores. In this case, the \(Y\)-axis runs from \(0\) to \(1\) (or somewhere in between if there are no extreme proportions). You can change a histogram based on frequencies to one based on relative frequencies by (a) dividing each class frequency by the total number of observations, and then (b) plotting the quotients on the \(Y\)-axis (labeled as proportion).
Sturges' rule
There is more to be said about the widths of the class intervals, sometimes called bin widths. Your choice of bin width determines the number of class intervals. This decision, along with the choice of starting point for the first interval, affects the shape of the histogram. There are some "rules of thumb" that can help you choose an appropriate width. (But keep in mind that none of the rules is perfect.) Sturges' rule is to set the number of intervals as close as possible to \(1 + \log_2(N)\), where \(\log_2(N)\) is the base \(2\) log of the number of observations. The formula can also be written as \(1 + 3.3\log_{10}(N)\), where \(\log_{10}(N)\) is the log base \(10\) of the number of observations. According to Sturges' rule, \(1000\) observations would be graphed with \(11\) class intervals since \(10\) is the closest integer to \(\log_2(1000)\). We prefer the Rice rule, which is to set the number of intervals to twice the cube root of the number of observations. In the case of \(1000\) observations, the Rice rule yields \(20\) intervals instead of the \(11\) recommended by Sturges' rule. For the psychology test example used above, Sturges' rule recommends \(10\) intervals while the Rice rule recommends \(17\). In the end, we compromised and chose \(13\) intervals for Figure \(1\) to create a histogram that seemed clearest. The best advice is to experiment with different choices of width, and to choose a histogram according to how well it communicates the shape of the distribution.
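Both rules reduce to one-line calculations. The sketch below reproduces the interval counts quoted above:

```python
import math

def sturges_bins(n):
    # The integer closest to 1 + log2(n)
    return round(1 + math.log2(n))

def rice_bins(n):
    # Twice the cube root of the number of observations
    return round(2 * n ** (1 / 3))

for n in (642, 1000):
    print(n, "observations -> Sturges:", sturges_bins(n), " Rice:", rice_bins(n))
# 642 observations -> Sturges: 10  Rice: 17
# 1000 observations -> Sturges: 11  Rice: 20
```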
To provide experience in constructing histograms, we have developed an interactive demonstration. The demonstration reveals the consequences of different choices of bin width and of lower boundary for the first interval.
Learning Objectives
• Create and interpret frequency polygons
• Create and interpret cumulative frequency polygons
• Create and interpret overlaid frequency polygons
Frequency polygons are a graphical device for understanding the shapes of distributions. They serve the same purpose as histograms, but are especially helpful for comparing sets of data. Frequency polygons are also a good choice for displaying cumulative frequency distributions.
To create a frequency polygon, start just as for histograms, by choosing a class interval. Then draw an \(X\)-axis representing the values of the scores in your data. Mark the middle of each class interval with a tick mark, and label it with the middle value represented by the class. Draw the \(Y\)-axis to indicate the frequency of each class. Place a point in the middle of each class interval at the height corresponding to its frequency. Finally, connect the points. You should include one class interval below the lowest value in your data and one above the highest value. The graph will then touch the \(X\)-axis on both sides.
A frequency polygon for \(642\) psychology test scores shown in Figure \(1\) was constructed from the frequency table shown in Table \(1\).
Table \(1\): Frequency Distribution of Psychology Test Scores.
Lower Limit Upper Limit Count Cumulative Count
29.5 39.5 0 0
39.5 49.5 3 3
49.5 59.5 10 13
59.5 69.5 53 66
69.5 79.5 107 173
79.5 89.5 147 320
89.5 99.5 130 450
99.5 109.5 78 528
109.5 119.5 59 587
119.5 129.5 36 623
129.5 139.5 11 634
139.5 149.5 6 640
149.5 159.5 1 641
159.5 169.5 1 642
169.5 179.5 0 642
The first label on the \(X\)-axis is \(35\). This represents an interval extending from \(29.5\) to \(39.5\). Since the lowest test score is \(46\), this interval has a frequency of \(0\). The point labeled \(45\) represents the interval from \(39.5\) to \(49.5\). There are three scores in this interval. There are \(147\) scores in the interval that surrounds \(85\).
You can easily discern the shape of the distribution from Figure \(1\). Most of the scores are between \(65\) and \(115\). It is clear that the distribution is not symmetric inasmuch as good scores (to the right) trail off more gradually than poor scores (to the left). In the terminology of Chapter 3 (where we will study shapes of distributions more systematically), the distribution is skewed.
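Assuming Python with matplotlib, a frequency polygon can be drawn directly from the counts in Table $1$; note the zero-count intervals at each end, which make the polygon touch the $X$-axis:

```python
import numpy as np
import matplotlib.pyplot as plt

# Counts from Table 1, including the empty interval at each end
counts = [0, 3, 10, 53, 107, 147, 130, 78, 59, 36, 11, 6, 1, 1, 0]
midpoints = np.arange(35, 180, 10)   # 35, 45, ..., 175

fig, ax = plt.subplots()
ax.plot(midpoints, counts, marker="o")
ax.set_xlabel("Test score")
ax.set_ylabel("Frequency")
plt.show()
```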
A cumulative frequency polygon for the same test scores is shown in Figure \(2\). The graph is the same as before except that the \(Y\) value for each point is the number of students in the corresponding class interval plus all numbers in lower intervals. For example, there are no scores in the interval labeled \(35\), three in the interval \(45\), and \(10\) in the interval \(55\). Therefore, the \(Y\) value corresponding to "\(55\)" is \(13\). Since \(642\) students took the test, the cumulative frequency for the last interval is \(642\).
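The cumulative version needs only a running total of the same counts; here is a sketch using numpy:

```python
import numpy as np
import matplotlib.pyplot as plt

counts = np.array([0, 3, 10, 53, 107, 147, 130, 78, 59, 36, 11, 6, 1, 1, 0])
midpoints = np.arange(35, 180, 10)

# Each Y value is the interval's count plus all counts in lower intervals
cumulative = np.cumsum(counts)
print(cumulative[-1])   # 642, the total number of students

fig, ax = plt.subplots()
ax.plot(midpoints, cumulative, marker="o")
ax.set_xlabel("Test score")
ax.set_ylabel("Cumulative frequency")
plt.show()
```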
Frequency polygons are useful for comparing distributions. This is achieved by overlaying the frequency polygons drawn for different data sets. Figure \(3\) provides an example. The data come from a task in which the goal is to move a computer cursor to a target on the screen as fast as possible. On \(20\) of the trials, the target was a small rectangle; on the other \(20\), the target was a large rectangle. Time to reach the target was recorded on each trial. The two distributions (one for each target) are plotted together in Figure \(3\). The figure shows that, although there is some overlap in times, it generally took longer to move the cursor to the small target than to the large one.
It is also possible to plot two cumulative frequency distributions in the same graph. This is illustrated in Figure \(4\) using the same data from the cursor task. The difference in distributions for the two targets is again evident.
Learning Objectives
• Define basic terms including hinges, H-spread, step, adjacent value, outside value, and far out value
• Create a box plot
• Create parallel box plots
• Determine whether a box plot is appropriate for a given data set
We have already discussed techniques for visually representing data (see histograms and frequency polygons). In this section, we present another important graph called a box plot. Box plots are useful for identifying outliers and for comparing distributions. We will explain box plots with the help of data from an in-class experiment. As part of the "Stroop Interference Case Study," students in introductory statistics were presented with a page containing $30$ colored rectangles. Their task was to name the colors as quickly as possible. Their times (in seconds) were recorded. We'll compare the scores for the $16$ men and $31$ women who participated in the experiment by making separate box plots for each gender. Such a display is said to involve parallel box plots.
There are several steps in constructing a box plot. The first relies on the $25^{th},\; 50^{th},\; and\; 75^{th}$ percentiles in the distribution of scores. Figure $1$ shows how these three statistics are used. For each gender, we draw a box extending from the $25^{th}$ percentile to the $75^{th}$ percentile. The $50^{th}$ percentile is drawn inside the box. Therefore,
• the bottom of each box is the $25^{th}$ percentile,
• the top is the $75^{th}$ percentile,
• and the line in the middle is the $50^{th}$ percentile.
The data for the women in our sample are shown in Table $1$.
Table $1$: Women's times
14 17 18 19 20 21 29
15 17 18 19 20 22
16 17 18 19 20 23
16 17 18 20 20 24
17 18 18 20 21 24
For these data, the $25^{th}$ percentile is $17$, the $50^{th}$ percentile is $19$, and the $75^{th}$ percentile is $20$. For the men (whose data are not shown), the $25^{th}$ percentile is $19$, the $50^{th}$ percentile is $22.5$, and the $75^{th}$ percentile is $25.5$.
Before proceeding, the terminology in Table $2$ is helpful.
Table $2$: Box plot terms and values for women's times
Name Formula Value
Upper Hinge 75th Percentile 20
Lower Hinge 25th Percentile 17
H-Spread Upper Hinge - Lower Hinge 3
Step 1.5 x H-Spread 4.5
Upper Inner Fence Upper Hinge + 1 Step 24.5
Lower Inner Fence Lower Hinge - 1 Step 12.5
Upper Outer Fence Upper Hinge + 2 Steps 29
Lower Outer Fence Lower Hinge - 2 Steps 8
Upper Adjacent Largest value below Upper Inner Fence 24
Lower Adjacent Smallest value above Lower Inner Fence 14
Outside Value A value beyond an Inner Fence but not beyond an Outer Fence 29
Far Out Value A value beyond an Outer Fence None
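Every quantity in Table $2$ follows mechanically from the two hinges. The sketch below computes them for the women's times; the hinge values are taken from the text because software percentile routines can differ slightly from the hinge definition used here:

```python
# Women's times from Table 1, sorted
times = [14, 15, 16, 16, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18,
         19, 19, 19, 20, 20, 20, 20, 20, 20, 21, 21, 22, 23, 24, 24, 29]

lower_hinge, upper_hinge = 17, 20   # 25th and 75th percentiles from the text

h_spread = upper_hinge - lower_hinge          # 3
step = 1.5 * h_spread                         # 4.5
upper_inner = upper_hinge + step              # 24.5
lower_inner = lower_hinge - step              # 12.5
upper_outer = upper_hinge + 2 * step          # 29.0
lower_outer = lower_hinge - 2 * step          # 8.0

# Adjacent values: the most extreme scores still inside the inner fences
upper_adjacent = max(t for t in times if t <= upper_inner)   # 24
lower_adjacent = min(t for t in times if t >= lower_inner)   # 14

# Outside values lie beyond an inner fence but not beyond an outer fence
outside = [t for t in times
           if (t > upper_inner or t < lower_inner)
           and lower_outer <= t <= upper_outer]
far_out = [t for t in times if t > upper_outer or t < lower_outer]
print(outside, far_out)   # [29] []
```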
Continuing with the box plots, we put "whiskers" above and below each box to give additional information about the spread of the data. Whiskers are vertical lines that end in a horizontal stroke. Whiskers are drawn from the upper and lower hinges to the upper and lower adjacent values ($24$ and $14$ for the women's data).
Although we don't draw whiskers all the way to outside or far out values, we still wish to represent them in our box plots. This is achieved by adding additional marks beyond the whiskers. Specifically, outside values are indicated by small "o"s and far out values are indicated by asterisks ($\ast$). In our data, there are no far out values and just one outside value. This outside value of $29$ is for the women and is shown in Figure $3$.
There is one more mark to include in box plots (although sometimes it is omitted). We indicate the mean score for a group by inserting a plus sign. Figure $4$ shows the result of adding means to our box plots.
Figure $4$ provides a revealing summary of the data. Since half the scores in a distribution are between the hinges (recall that the hinges are the $25^{th}$ and $75^{th}$ percentiles), we see that half the women's times are between $17$ and $20$ seconds, whereas half the men's times are between $19$ and $25.5$. We also see that women generally named the colors faster than the men did, although one woman was slower than almost all of the men. Figure $5$ shows the box plot for the women's data with detailed labels.
Box plots provide basic information about a distribution. For example, a distribution with a positive skew would have a longer whisker in the positive direction than in the negative direction. A larger mean than median would also indicate a positive skew. Box plots are good at portraying extreme values and are especially good at showing differences between distributions. However, many of the details of a distribution are not revealed in a box plot, and to examine these details one should create a histogram and/or a stem and leaf display.
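For readers who want to produce parallel box plots in software, here is a minimal sketch assuming matplotlib. The men's individual times are not listed in the text, so the values below are invented stand-ins, and matplotlib computes quartiles with its own method, which may not match the hinges above exactly:

```python
import matplotlib.pyplot as plt

women = [14, 15, 16, 16, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18,
         19, 19, 19, 20, 20, 20, 20, 20, 20, 21, 21, 22, 23, 24, 24, 29]
# Invented stand-ins; the men's individual times are not given in the text
men = [17, 19, 19, 20, 21, 22, 22, 22, 23, 23, 24, 25, 26, 26, 28, 31]

fig, ax = plt.subplots()
# showmeans marks each group's mean, like the plus signs in Figure 4
ax.boxplot([women, men], showmeans=True)
ax.set_xticklabels(["Women", "Men"])
ax.set_ylabel("Time (seconds)")
plt.show()
```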
Here are some other examples of box plots:
Example $1$: Time to move the mouse over a target
The data come from a task in which the goal is to move a computer mouse to a target on the screen as fast as possible. On $20$ of the trials, the target was a small rectangle; on the other $20$, the target was a large rectangle. Time to reach the target was recorded on each trial. The box plots of the two distributions are shown below. You can see that although there is some overlap in times, it generally took longer to move the mouse to the small target than to the large one.
Example $2$: Draft lottery
In $1969$ the war in Vietnam was at its height. An agency called the Selective Service was charged with finding a fair procedure to determine which young men would be conscripted ("drafted") into the U.S. military. The procedure was supposed to be fair in the sense of not favoring any culturally or economically defined subgroup of American men. It was decided that choosing "draftees" solely on the basis of a person’s birth date would be fair. A birthday lottery was thus devised. Pieces of paper representing the $366$ days of the year (including $\text{February 29}$) were placed in plastic capsules, poured into a rotating drum, and then selected one at a time. The lower the draft number, the sooner the person would be drafted. Men with high enough numbers were not drafted at all.
The first number selected was $258$, which meant that someone born on the $258^{th}$ day of the year ($\text{September 14}$) would be among the first to be drafted. The second number was $115$, so someone born on the $115^{th}$ day ($\text{April 24}$) was among the second group to be drafted. All $366$ birth dates were assigned draft numbers in this way.
To create box plots, we divided the $366$ days of the year into thirds. The first third goes from $\text{January 1 to May 1}$, the second from $\text{May 2 to August 31}$, and the last from $\text{September 1 to December 31}$. The three groups of birth dates yield three groups of draft numbers. The draft number for each birthday is the order it was picked in the drawing. The figure below contains box plots of the three sets of draft numbers. As you can see, people born later in the year tended to have lower draft numbers.
Variations on box plots
Statistical analysis programs may offer options on how box plots are created. For example, the box plots in Figure $8$ are constructed from our data but differ from the previous box plots in several ways.
1. It does not mark outliers.
2. The means are indicated by green lines rather than plus signs.
3. The mean of all scores is indicated by a gray line.
4. Individual scores are represented by dots. Since the scores have been rounded to the nearest second, any given dot might represent more than one score.
5. The box for the women is wider than the box for the men because the widths of the boxes are proportional to the number of subjects of each gender ($31$ women and $16$ men).
Each dot in Figure $8$ represents a group of subjects with the same score (rounded to the nearest second). An alternative graphing technique is to jitter the points. This means spreading out different dots at the same horizontal position, one dot for each subject. The exact horizontal position of a dot is determined randomly (under the constraint that different dots don't overlap exactly). Spreading out the dots helps you to see multiple occurrences of a given score. However, depending on the dot size and the screen resolution, some points may be obscured even if the points are jittered. Figure $9$ shows what jittering looks like.
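Jittering amounts to adding a small random horizontal offset to each dot, as in this sketch (assuming numpy and matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
times = [14, 15, 16, 16, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18,
         19, 19, 19, 20, 20, 20, 20, 20, 20, 21, 21, 22, 23, 24, 24, 29]

# One dot per subject; the random offset keeps tied scores from
# landing on exactly the same spot
x = 1 + rng.uniform(-0.08, 0.08, size=len(times))

fig, ax = plt.subplots()
ax.scatter(x, times)
ax.set_xticks([1])
ax.set_xticklabels(["Women"])
ax.set_ylabel("Time (seconds)")
plt.show()
```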
Different styles of box plots are best for different situations, and there are no firm rules for which to use. When exploring your data, you should try several ways of visualizing them. Which graphs you include in your report should depend on how well different graphs reveal the aspects of the data you consider most important.