Learning Objectives
• To use multiple logistic regression when you have one nominal variable and two or more measurement variables, and you want to know how the measurement variables affect the nominal variable. You can use it to predict probabilities of the dependent nominal variable, or if you're careful, you can use it for suggestions about which independent variables have a major effect on the dependent variable.
When to use it
Use multiple logistic regression when you have one nominal and two or more measurement variables. The nominal variable is the dependent ($Y$) variable; you are studying the effect that the independent ($X$) variables have on the probability of obtaining a particular value of the dependent variable. For example, you might want to know the effect that blood pressure, age, and weight have on the probability that a person will have a heart attack in the next year.
Heart attack vs. no heart attack is a binomial nominal variable; it only has two values. You can perform multinomial multiple logistic regression, where the nominal variable has more than two values, but I'm going to limit myself to binary multiple logistic regression, which is far more common.
The measurement variables are the independent ($X$) variables; you think they may have an effect on the dependent variable. While the examples I'll use here only have measurement variables as the independent variables, it is possible to use nominal variables as independent variables in a multiple logistic regression; see the explanation on the multiple linear regression page.
Epidemiologists use multiple logistic regression a lot, because they are concerned with dependent variables such as alive vs. dead or diseased vs. healthy, and they are studying people and can't do well-controlled experiments, so they have a lot of independent variables. If you are an epidemiologist, you're going to have to learn a lot more about multiple logistic regression than I can teach you here. If you're not an epidemiologist, you might occasionally need to understand the results of someone else's multiple logistic regression, and hopefully this handbook can help you with that. If you need to do multiple logistic regression for your own research, you should learn more than is on this page.
The goal of a multiple logistic regression is to find an equation that best predicts the probability of a value of the $Y$ variable as a function of the $X$ variables. You can then measure the independent variables on a new individual and estimate the probability of it having a particular value of the dependent variable. You can also use multiple logistic regression to understand the functional relationship between the independent variables and the dependent variable, to try to understand what might cause the probability of the dependent variable to change. However, you need to be very careful. Please read the multiple regression page for an introduction to the issues involved and the potential problems with trying to infer causes; almost all of the caveats there apply to multiple logistic regression, as well.
As an example of multiple logistic regression, in the 1800s, many people tried to bring their favorite bird species to New Zealand, release them, and hope that they become established in nature. (We now realize that this is very bad for the native species, so if you were thinking about trying this, please don't.) Veltman et al. (1996) wanted to know what determined the success or failure of these introduced species. They determined the presence or absence of $79$ species of birds in New Zealand that had been artificially introduced (the dependent variable) and $14$ independent variables, including number of releases, number of individuals released, migration (scored as $1$ for sedentary, $2$ for mixed, $3$ for migratory), body length, etc. Multiple logistic regression suggested that number of releases, number of individuals released, and migration had the biggest influence on the probability of a species being successfully introduced to New Zealand, and the logistic regression equation could be used to predict the probability of success of a new introduction. While hopefully no one will deliberately introduce more exotic bird species to new territories, this logistic regression could help understand what will determine the success of accidental introductions or the introduction of endangered species to areas of their native range where they had been eliminated.
Null hypothesis
The main null hypothesis of a multiple logistic regression is that there is no relationship between the $X$ variables and the $Y$ variable; in other words, the $Y$ values you predict from your multiple logistic regression equation are no closer to the actual $Y$ values than you would expect by chance. As you are doing a multiple logistic regression, you'll also test a null hypothesis for each $X$ variable, that adding that $X$ variable to the multiple logistic regression does not improve the fit of the equation any more than expected by chance. While you will get $P$ values for these null hypotheses, you should use them as a guide to building a multiple logistic regression equation; you should not use the $P$ values as a test of biological null hypotheses about whether a particular $X$ variable causes variation in $Y$.
How it works
Multiple logistic regression finds the equation that best predicts the value of the $Y$ variable for the values of the $X$ variables. The $Y$ variable is the probability of obtaining a particular value of the nominal variable. For the bird example, the values of the nominal variable are "species present" and "species absent." The $Y$ variable used in logistic regression would then be the probability of an introduced species being present in New Zealand. This probability could take values from $0$ to $1$. The limited range of this probability would present problems if used directly in a regression, so the odds, $Y/(1-Y)$, is used instead. (If the probability of a successful introduction is $0.25$, the odds of having that species are $0.25/(1-0.25)=1/3$. In gambling terms, this would be expressed as "$3$ to $1$ odds against having that species in New Zealand.") Taking the natural log of the odds makes the variable more suitable for a regression, so the result of a multiple logistic regression is an equation that looks like this:
$\ln \left [ \frac{Y}{1-Y} \right ]=a+b_1X_1+b_2X_2+b_3X_3+...$
You find the slopes ($b_1,\; b_2$, etc.) and intercept ($a$) of the best-fitting equation in a multiple logistic regression using the maximum-likelihood method, rather than the least-squares method used for multiple linear regression. Maximum likelihood is a computer-intensive technique; the basic idea is that it finds the values of the parameters under which you would be most likely to get the observed results.
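If you want to see what the fitting step looks like in practice, here is a minimal sketch in R; it assumes a data frame named birds with a $0/1$ status column and the predictors release, upland, and migr, matching the bird-introduction data shown in the SAS section below.
# Minimal sketch: fitting a binary logistic regression by maximum likelihood in R.
# Assumes a data frame 'birds' with a 0/1 column 'status' and numeric predictors
# 'release', 'upland', and 'migr' (as in the New Zealand bird data below).
fit <- glm(status ~ release + upland + migr, data=birds, family=binomial)
summary(fit)                      # intercept, slopes (log-odds scale), standard errors, P values
predict(fit, type="response")     # fitted probability of presence for each species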
You might want to have a measure of how well the equation fits the data, similar to the $R^2$ of multiple linear regression. However, statisticians do not agree on the best measure of fit for multiple logistic regression. Some use deviance, $D$, for which smaller numbers represent better fit, and some use one of several pseudo-$R^2$ values, for which larger numbers represent better fit.
Using nominal variables in a multiple logistic regression
You can use nominal variables as independent variables in multiple logistic regression; for example, Veltman et al. (1996) included upland use (frequent vs. infrequent) as one of their independent variables in their study of birds introduced to New Zealand. See the discussion on the multiple linear regression page about how to do this.
Selecting variables in multiple logistic regression
Whether the purpose of a multiple logistic regression is prediction or understanding functional relationships, you'll usually want to decide which variables are important and which are unimportant. In the bird example, if your purpose was prediction it would be useful to know that your prediction would be almost as good if you measured only three variables and didn't have to measure more difficult variables such as range and weight. If your purpose was understanding possible causes, knowing that certain variables did not explain much of the variation in introduction success could suggest that they are probably not important causes of the variation in success.
The procedures for choosing variables are basically the same as for multiple linear regression: you can use an objective method (forward selection, backward elimination, or stepwise), or you can use a careful examination of the data and understanding of the biology to subjectively choose the best variables. The main difference is that instead of using the change of $R^2$ to measure the difference in fit between an equation with or without a particular variable, you use the change in likelihood. Otherwise, everything about choosing variables for multiple linear regression applies to multiple logistic regression as well, including the warnings about how easy it is to get misleading results.
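For example, in R you could measure the change in likelihood between two nested logistic models with a likelihood-ratio test; this sketch assumes the same birds data frame as in the earlier sketch.
# Sketch: measuring the change in fit between nested logistic models.
full    <- glm(status ~ release + upland + migr, data=birds, family=binomial)
reduced <- glm(status ~ release + upland, data=birds, family=binomial)
anova(reduced, full, test="Chisq")   # likelihood-ratio test for adding 'migr'
drop1(full, test="Chisq")            # likelihood-ratio test for dropping each term in turn
step(full)                           # automated backward selection (note: uses AIC, not a P-value cutoff)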
Assumptions
Multiple logistic regression assumes that the observations are independent. For example, if you were studying the presence or absence of an infectious disease and had subjects who were in close contact, the observations might not be independent; if one person had the disease, people near them (who might be similar in occupation, socioeconomic status, age, etc.) would be likely to have the disease. Careful sampling design can take care of this.
Multiple logistic regression also assumes that the natural log of the odds ratio and the measurement variables have a linear relationship. It can be hard to see whether this assumption is violated, but if you have biological or statistical reasons to expect a non-linear relationship between one of the measurement variables and the log of the odds ratio, you may want to try data transformations.
Multiple logistic regression does not assume that the measurement variables are normally distributed.
Example
Some obese people get gastric bypass surgery to lose weight, and some of them die as a result of the surgery. Benotti et al. (2014) wanted to know whether they could predict who was at a higher risk of dying from one particular kind of surgery, Roux-en-Y gastric bypass surgery. They obtained records on $81,751$ patients who had had Roux-en-Y surgery, of which $123$ died within $30$ days. They did multiple logistic regression, with alive vs. dead after $30$ days as the dependent variable, and $6$ demographic variables (gender, age, race, body mass index, insurance type, and employment status) and $30$ health variables (blood pressure, diabetes, tobacco use, etc.) as the independent variables. Manually choosing the variables to add to their logistic model, they identified six that contribute to risk of dying from Roux-en-Y surgery: body mass index, age, gender, pulmonary hypertension, congestive heart failure, and liver disease.
Benotti et al. (2014) did not provide their multiple logistic equation, perhaps because they thought it would be too confusing for surgeons to understand. Instead, they developed a simplified version (one point for every decade over $40$, $1$ point for every $10$ BMI units over $40$, $1$ point for male, $1$ point for congestive heart failure, $1$ point for liver disease, and $2$ points for pulmonary hypertension). Using this RYGB Risk Score they could predict that a $43$-year-old woman with a BMI of $46$ and no heart, lung or liver problems would have an $0.03\%$ chance of dying within $30$ days, while a $62$-year-old man with a BMI of $52$ and pulmonary hypertension would have a $1.4\%$ chance.
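To make the scoring rule concrete, here is a sketch of it in R; the function name and arguments are made up for illustration, decades and BMI units are counted here as completed units, and the conversion from a score to a predicted probability of death is given in Benotti et al. (2014), not here.
# Sketch of the RYGB Risk Score rule described above (names are hypothetical).
rygb_score <- function(age, bmi, male, chf, liver, pulm_htn) {
  floor(pmax(age - 40, 0)/10) +    # 1 point per completed decade over 40
  floor(pmax(bmi - 40, 0)/10) +    # 1 point per completed 10 BMI units over 40
  male + chf + liver +             # 1 point each; enter 0 or 1
  2*pulm_htn                       # 2 points for pulmonary hypertension
}
rygb_score(age=62, bmi=52, male=1, chf=0, liver=0, pulm_htn=1)   # 2+1+1+0+0+2 = 6 points under this reading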
Graphing the results
Graphs aren't very useful for showing the results of multiple logistic regression; instead, people usually just show a table of the independent variables, with their $P$ values and perhaps the regression coefficients.
Similar tests
If the dependent variable is a measurement variable, you should do multiple linear regression.
There are numerous other techniques you can use when you have one nominal and three or more measurement variables, but I don't know enough about them to list them, much less explain them.
How to do multiple logistic regression
Spreadsheet
I haven't written a spreadsheet to do multiple logistic regression.
Web page
There's a very nice web page for multiple logistic regression. It will not do automatic selection of variables; if you want to construct a logistic model with fewer independent variables, you'll have to pick the variables yourself.
R
Salvatore Mangiafico's $R$ Companion has a sample R program for multiple logistic regression.
SAS
You use PROC LOGISTIC to do multiple logistic regression in SAS. Here is an example using the data on bird introductions to New Zealand.
DATA birds;
INPUT species $ status length mass range migr insect diet clutch
broods wood upland water release indiv;
DATALINES;
Cyg_olor 1 1520 9600 1.21 1 12 2 6 1 0 0 1 6 29
Cyg_atra 1 1250 5000 0.56 1 0 1 6 1 0 0 1 10 85
Cer_nova 1 870 3360 0.07 1 0 1 4 1 0 0 1 3 8
Ans_caer 0 720 2517 1.1 3 12 2 3.8 1 0 0 1 1 10
Ans_anse 0 820 3170 3.45 3 0 1 5.9 1 0 0 1 2 7
Bra_cana 1 770 4390 2.96 2 0 1 5.9 1 0 0 1 10 60
Bra_sand 0 50 1930 0.01 1 0 1 4 2 0 0 0 1 2
Alo_aegy 0 680 2040 2.71 1 . 2 8.5 1 0 0 1 1 8
Ana_plat 1 570 1020 9.01 2 6 2 12.6 1 0 0 1 17 1539
Ana_acut 0 580 910 7.9 3 6 2 8.3 1 0 0 1 3 102
Ana_pene 0 480 590 4.33 3 0 1 8.7 1 0 0 1 5 32
Aix_spon 0 470 539 1.04 3 12 2 13.5 2 1 0 1 5 10
Ayt_feri 0 450 940 2.17 3 12 2 9.5 1 0 0 1 3 9
Ayt_fuli 0 435 684 4.81 3 12 2 10.1 1 0 0 1 2 5
Ore_pict 0 275 230 0.31 1 3 1 9.5 1 1 1 0 9 398
Lop_cali 1 256 162 0.24 1 3 1 14.2 2 0 0 0 15 1420
Col_virg 1 230 170 0.77 1 3 1 13.7 1 0 0 0 17 1156
Ale_grae 1 330 501 2.23 1 3 1 15.5 1 0 1 0 15 362
Ale_rufa 0 330 439 0.22 1 3 2 11.2 2 0 0 0 2 20
Per_perd 0 300 386 2.4 1 3 1 14.6 1 0 1 0 24 676
Cot_pect 0 182 95 0.33 3 . 2 7.5 1 0 0 0 3 .
Cot_aust 1 180 95 0.69 2 12 2 11 1 0 0 1 11 601
Lop_nyct 0 800 1150 0.28 1 12 2 5 1 1 1 0 4 6
Pha_colc 1 710 850 1.25 1 12 2 11.8 1 1 0 0 27 244
Syr_reev 0 750 949 0.2 1 12 2 9.5 1 1 1 0 2 9
Tet_tetr 0 470 900 4.17 1 3 1 7.9 1 1 1 0 2 13
Lag_lago 0 390 517 7.29 1 0 1 7.5 1 1 1 0 2 4
Ped_phas 0 440 815 1.83 1 3 1 12.3 1 1 0 0 1 22
Tym_cupi 0 435 770 0.26 1 4 1 12 1 0 0 0 3 57
Van_vane 0 300 226 3.93 2 12 3 3.8 1 0 0 0 8 124
Plu_squa 0 285 318 1.67 3 12 3 4 1 0 0 1 2 3
Pte_alch 0 350 225 1.21 2 0 1 2.5 2 0 0 0 1 8
Pha_chal 0 320 350 0.6 1 12 2 2 2 1 0 0 8 42
Ocy_loph 0 330 205 0.76 1 0 1 2 7 1 0 1 4 23
Leu_mela 0 372 . 0.07 1 12 2 2 1 1 0 0 6 34
Ath_noct 1 220 176 4.84 1 12 3 3.6 1 1 0 0 7 221
Tyt_alba 0 340 298 8.9 2 0 3 5.7 2 1 0 0 1 7
Dac_nova 1 460 382 0.34 1 12 3 2 1 1 0 0 7 21
Lul_arbo 0 150 32.1 1.78 2 4 2 3.9 2 1 0 0 1 5
Ala_arve 1 185 38.9 5.19 2 12 2 3.7 3 0 0 0 11 391
Pru_modu 1 145 20.5 1.95 2 12 2 3.4 2 1 0 0 14 245
Eri_rebe 0 140 15.8 2.31 2 12 2 5 2 1 0 0 11 123
Lus_mega 0 161 19.4 1.88 3 12 2 4.7 2 1 0 0 4 7
Tur_meru 1 255 82.6 3.3 2 12 2 3.8 3 1 0 0 16 596
Tur_phil 1 230 67.3 4.84 2 12 2 4.7 2 1 0 0 12 343
Syl_comm 0 140 12.8 3.39 3 12 2 4.6 2 1 0 0 1 2
Syl_atri 0 142 17.5 2.43 2 5 2 4.6 1 1 0 0 1 5
Man_mela 0 180 . 0.04 1 12 3 1.9 5 1 0 0 1 2
Man_mela 0 265 59 0.25 1 12 2 2.6 . 1 0 0 1 80
Gra_cyan 0 275 128 0.83 1 12 3 3 2 1 0 1 1 .
Gym_tibi 1 400 380 0.82 1 12 3 4 1 1 0 0 15 448
Cor_mone 0 335 203 3.4 2 12 2 4.5 1 1 0 0 2 3
Cor_frug 1 400 425 3.73 1 12 2 3.6 1 1 0 0 10 182
Stu_vulg 1 222 79.8 3.33 2 6 2 4.8 2 1 0 0 14 653
Acr_tris 1 230 111.3 0.56 1 12 2 3.7 1 1 0 0 5 88
Pas_dome 1 149 28.8 6.5 1 6 2 3.9 3 1 0 0 12 416
Pas_mont 0 133 22 6.8 1 6 2 4.7 3 1 0 0 3 14
Aeg_temp 0 120 . 0.17 1 6 2 4.7 3 1 0 0 3 14
Emb_gutt 0 120 19 0.15 1 4 1 5 3 0 0 0 4 112
Poe_gutt 0 100 12.4 0.75 1 4 1 4.7 3 0 0 0 1 12
Lon_punc 0 110 13.5 1.06 1 0 1 5 3 0 0 0 1 8
Lon_cast 0 100 . 0.13 1 4 1 5 . 0 0 1 4 45
Pad_oryz 0 160 . 0.09 1 0 1 5 . 0 0 0 2 6
Fri_coel 1 160 23.5 2.61 2 12 2 4.9 2 1 0 0 17 449
Fri_mont 0 146 21.4 3.09 3 10 2 6 . 1 0 0 7 121
Car_chlo 1 147 29 2.09 2 7 2 4.8 2 1 0 0 6 65
Car_spin 0 117 12 2.09 3 3 1 4 2 1 0 0 3 54
Car_card 1 120 15.5 2.85 2 4 1 4.4 3 1 0 0 14 626
Aca_flam 1 115 11.5 5.54 2 6 1 5 2 1 0 0 10 607
Aca_flavi 0 133 17 1.67 2 0 1 5 3 0 1 0 3 61
Aca_cann 0 136 18.5 2.52 2 6 1 4.7 2 1 0 0 12 209
Pyr_pyrr 0 142 23.5 3.57 1 4 1 4 3 1 0 0 2 .
Emb_citr 1 160 28.2 4.11 2 8 2 3.3 3 1 0 0 14 656
Emb_hort 0 163 21.6 2.75 3 12 2 5 1 0 0 0 1 6
Emb_cirl 1 160 23.6 0.62 1 12 2 3.5 2 1 0 0 3 29
Emb_scho 0 150 20.7 5.42 1 12 2 5.1 2 0 0 1 2 9
Pir_rubr 0 170 31 0.55 3 12 2 4 . 1 0 0 1 2
Age_phoe 0 210 36.9 2 2 8 2 3.7 1 0 0 1 1 2
Stu_negl 0 225 106.5 1.2 2 12 2 4.8 2 0 0 0 1 2
;
PROC LOGISTIC DATA=birds DESCENDING;
MODEL status=length mass range migr insect diet clutch broods wood upland
water release indiv / SELECTION=STEPWISE SLENTRY=0.15 SLSTAY=0.15;
RUN;
In the MODEL statement, the dependent variable is to the left of the equals sign, and all the independent variables are to the right. SELECTION determines which variable selection method is used; choices include FORWARD, BACKWARD, STEPWISE, and several others. You can omit the SELECTION parameter if you want to see the logistic regression model that includes all the independent variables. SLENTRY is the significance level for entering a variable into the model, if you're using FORWARD or STEPWISE selection; in this example, a variable must have a $P$ value less than $0.15$ to be entered into the regression model. SLSTAY is the significance level for removing a variable in BACKWARD or STEPWISE selection; in this example, a variable with a $P$ value greater than $0.15$ will be removed from the model.
Summary of Stepwise Selection
Effect Number Score Wald
Step Entered Removed DF In Chi-Square Chi-Square Pr > ChiSq
1 release 1 1 28.4339 <.0001
2 upland 1 2 5.6871 0.0171
3 migr 1 3 5.3284 0.0210
The summary shows that "release" was added to the model first, yielding a $P$ value less than $0.0001$. Next, "upland" was added, with a $P$ value of $0.0171$. Next, "migr" was added, with a $P$ value of $0.0210$. SLSTAY was set to $0.15$, not $0.05$, because you might want to include a variable in a predictive model even if it's not quite significant. However, none of the other variables have a $P$ value less than $0.15$, and removing any of the variables caused a decrease in fit big enough that $P$ was less than $0.15$, so the stepwise process is done.
Analysis of Maximum Likelihood Estimates
Standard Wald
Parameter DF Estimate Error Chi-Square Pr > ChiSq
Intercept 1 -0.4653 1.1226 0.1718 0.6785
migr 1 -1.6057 0.7982 4.0464 0.0443
upland 1 -6.2721 2.5739 5.9380 0.0148
release 1 0.4247 0.1040 16.6807 <.0001
The "parameter estimates" are the partial regression coefficients; they show that the model is:
$\ln \left [ \frac{Y}{1-Y} \right ]=-0.4653-1.6057(migration)-6.2721(upland)+0.4247(release)$
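To turn this equation into a predicted probability, you back-transform the log odds. Here is a short sketch in R, with made-up values for the three predictors:
# Back-transforming the fitted equation above into a predicted probability.
migration <- 2; upland <- 0; release <- 10    # made-up example values
log_odds <- -0.4653 - 1.6057*migration - 6.2721*upland + 0.4247*release
exp(log_odds)/(1 + exp(log_odds))             # about 0.64 for these values (same as plogis(log_odds))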
Power analysis
You need to have several times as many observations as you have independent variables, otherwise you can get "overfitting"—it could look like every independent variable is important, even if they're not. A frequently seen rule of thumb is that you should have at least $10$ to $20$ times as many observations as you have independent variables. I don't know how to do a more detailed power analysis for multiple logistic regression.
06: Multiple Tests
• 6.1: Multiple Comparisons
When you perform a large number of statistical tests, some will have P values less than 0.05 purely by chance, even if all your null hypotheses are really true. The Bonferroni correction is one simple way to take this into account; adjusting the false discovery rate using the Benjamini-Hochberg procedure is a more powerful method.
• 6.2: Meta-Analysis
Use meta-analysis when you want to combine the results from different studies, making the equivalent of one big study, to see if an overall effect is significant.
Learning Objectives
• When you perform a large number of statistical tests, some will have $P$ values less than $0.05$ purely by chance, even if all your null hypotheses are really true. The Bonferroni correction is one simple way to take this into account; adjusting the false discovery rate using the Benjamini-Hochberg procedure is a more powerful method.
The problem with multiple comparisons
Any time you reject a null hypothesis because a $P$ value is less than your critical value, it's possible that you're wrong; the null hypothesis might really be true, and your significant result might be due to chance. A $P$ value of $0.05$ means that there's a $5\%$ chance of getting your observed result, if the null hypothesis were true. It does not mean that there's a $5\%$ chance that the null hypothesis is true.
For example, if you do $100$ statistical tests, and for all of them the null hypothesis is actually true, you'd expect about $5$ of the tests to be significant at the $P<0.05$ level, just due to chance. In that case, you'd have about $5$ statistically significant results, all of which were false positives. The cost, in time, effort and perhaps money, could be quite high if you based important conclusions on these false positives, and it would at least be embarrassing for you once other people did further research and found that you'd been mistaken.
This problem, that when you do multiple statistical tests, some fraction will be false positives, has received increasing attention in the last few years. This is important for such techniques as the use of microarrays, which make it possible to measure RNA quantities for tens of thousands of genes at once; brain scanning, in which blood flow can be estimated in $100,000$ or more three-dimensional bits of brain; and evolutionary genomics, where the sequences of every gene in the genome of two or more species can be compared. There is no universally accepted approach for dealing with the problem of multiple comparisons; it is an area of active research, both in the mathematical details and broader epistemological questions.
Controlling the familywise error rate - Bonferroni Correction
The classic approach to the multiple comparison problem is to control the familywise error rate. Instead of setting the critical $P$ level for significance, or alpha, to $0.05$, you use a lower critical value. If the null hypothesis is true for all of the tests, the probability of getting one result that is significant at this new, lower critical value is $0.05$. In other words, if all the null hypotheses are true, the probability that the family of tests includes one or more false positives due to chance is $0.05$.
The most common way to control the familywise error rate is with the Bonferroni correction. You find the critical value (alpha) for an individual test by dividing the familywise error rate (usually $0.05$) by the number of tests. Thus if you are doing $100$ statistical tests, the critical value for an individual test would be $0.05/100=0.0005$, and you would only consider individual tests with $P<0.0005$ to be significant. As an example, García-Arenzana et al. (2014) tested associations of $25$ dietary variables with mammographic density, an important risk factor for breast cancer, in Spanish women. They found the following results:
Dietary variable P value
Total calories <0.001
Olive oil 0.008
Whole milk 0.039
White meat 0.041
Proteins 0.042
Nuts 0.06
Cereals and pasta 0.074
White fish 0.205
Butter 0.212
Vegetables 0.216
Skimmed milk 0.222
Red meat 0.251
Fruit 0.269
Eggs 0.275
Blue fish 0.34
Legumes 0.341
Carbohydrates 0.384
Potatoes 0.569
Bread 0.594
Fats 0.696
Sweets 0.762
Dairy products 0.94
Semi-skimmed milk 0.942
Total meat 0.975
Processed meat 0.986
As you can see, five of the variables show a significant ($P<0.05$) $P$ value. However, because García-Arenzana et al. (2014) tested $25$ dietary variables, you'd expect one or two variables to show a significant result purely by chance, even if diet had no real effect on mammographic density. Applying the Bonferroni correction, you'd divide $P=0.05$ by the number of tests ($25$) to get the Bonferroni critical value, so a test would have to have $P<0.002$ to be significant. Under that criterion, only the test for total calories is significant.
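If you'd rather not do the arithmetic by hand, here is a minimal sketch of the same Bonferroni calculation in R, using the $P$ values from the table above (with "<0.001" entered as $0.001$):
# Bonferroni correction for the 25 dietary P values above.
p <- c(0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216,
       0.222, 0.251, 0.269, 0.275, 0.340, 0.341, 0.384, 0.569, 0.594, 0.696,
       0.762, 0.940, 0.942, 0.975, 0.986)
0.05/length(p)                              # Bonferroni critical value, 0.002
p < 0.05/length(p)                          # TRUE only for total calories
p.adjust(p, method="bonferroni") < 0.05     # same conclusion, using adjusted P values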
The Bonferroni correction is appropriate when a single false positive in a set of tests would be a problem. It is mainly useful when there are a fairly small number of multiple comparisons and you're looking for one or two that might be significant. However, if you have a large number of multiple comparisons and you're looking for many that might be significant, the Bonferroni correction may lead to a very high rate of false negatives. For example, let's say you're comparing the expression level of $20,000$ genes between liver cancer tissue and normal liver tissue. Based on previous studies, you are hoping to find dozens or hundreds of genes with different expression levels. If you use the Bonferroni correction, a $P$ value would have to be less than $0.05/20000=0.0000025$ to be significant. Only genes with huge differences in expression will have a $P$ value that low, and you could miss out on a lot of important differences just because you wanted to be sure that your results did not include a single false positive.
An important issue with the Bonferroni correction is deciding what a "family" of statistical tests is. García-Arenzana et al. (2014) tested $25$ dietary variables, so are these tests one "family," making the critical $P$ value $0.05/25$? But they also measured $13$ non-dietary variables such as age, education, and socioeconomic status; should they be included in the family of tests, making the critical $P$ value $0.05/38$? And what if in 2015, García-Arenzana et al. write another paper in which they compare $30$ dietary variables between breast cancer and non-breast cancer patients; should they include those in their family of tests, and go back and reanalyze the data in their 2014 paper using a critical $P$ value of $0.05/55$? There is no firm rule on this; you'll have to use your judgment, based on just how bad a false positive would be. Obviously, you should make this decision before you look at the results, otherwise it would be too easy to subconsciously rationalize a family size that gives you the results you want.
Controlling the false discovery rate: Benjamini–Hochberg procedure
An alternative approach is to control the false discovery rate. This is the proportion of "discoveries" (significant results) that are actually false positives. For example, let's say you're using microarrays to compare expression levels for $20,000$ genes between liver tumors and normal liver cells. You're going to do additional experiments on any genes that show a significant difference between the normal and tumor cells, and you're willing to accept up to $10\%$ of the genes with significant results being false positives; you'll find out they're false positives when you do the followup experiments. In this case, you would set your false discovery rate to $10\%$.
One good technique for controlling the false discovery rate was briefly mentioned by Simes (1986) and developed in detail by Benjamini and Hochberg (1995). Put the individual $P$ values in order, from smallest to largest. The smallest $P$ value has a rank of $i=1$, the next smallest has $i=2$, etc. Compare each individual $P$ value to its Benjamini-Hochberg critical value, $(i/m)Q$, where $i$ is the rank, $m$ is the total number of tests, and $Q$ is the false discovery rate you choose. The largest $P$ value that has $P<(i/m)Q$ is significant, and all of the $P$ values smaller than it are also significant, even the ones that aren't less than their Benjamini-Hochberg critical value.
To illustrate this, here are the data from García-Arenzana et al. (2014) again, with the Benjamini-Hochberg critical value for a false discovery rate of $0.25$.
Dietary variable P value Rank (i/m)Q
Total calories <0.001 1 0.010
Olive oil 0.008 2 0.020
Whole milk 0.039 3 0.030
White meat 0.041 4 0.040
Proteins 0.042 5 0.050
Nuts 0.060 6 0.060
Cereals and pasta 0.074 7 0.070
White fish 0.205 8 0.080
Butter 0.212 9 0.090
Vegetables 0.216 10 0.100
Skimmed milk 0.222 11 0.110
Red meat 0.251 12 0.120
Fruit 0.269 13 0.130
Eggs 0.275 14 0.140
Blue fish 0.34 15 0.150
Legumes 0.341 16 0.160
Carbohydrates 0.384 17 0.170
Potatoes 0.569 18 0.180
Bread 0.594 19 0.190
Fats 0.696 20 0.200
Sweets 0.762 21 0.210
Dairy products 0.94 22 0.220
Semi-skimmed milk 0.942 23 0.230
Total meat 0.975 24 0.240
Processed meat 0.986 25 0.250
Reading down the column of $P$ values, the largest one with $P<(i/m)Q$ is proteins, where the individual $P$ value ($0.042$) is less than the $(i/m)Q$ value of $0.050$. Thus the first five tests would be significant. Note that whole milk and white meat are significant, even though their $P$ values are not less than their Benjamini-Hochberg critical values; they are significant because they have $P$ values less than that of proteins.
When you use the Benjamini-Hochberg procedure with a false discovery rate greater than $0.05$, it is quite possible for individual tests to be significant even though their $P$ value is greater than $0.05$. Imagine that all of the $P$ values in the García-Arenzana et al. (2014) study were between $0.10$ and $0.24$. Then with a false discovery rate of $0.25$, all of the tests would be significant, even the one with $P=0.24$. This may seem wrong, but if all $25$ null hypotheses were true, you'd expect the largest $P$ value to be well over $0.90$; it would be extremely unlikely that the largest $P$ value would be less than $0.25$. You would only expect the largest $P$ value to be less than $0.25$ if most of the null hypotheses were false, and since a false discovery rate of $0.25$ means you're willing to reject a few true null hypotheses, you would reject them all.
You should carefully choose your false discovery rate before collecting your data. Usually, when you're doing a large number of statistical tests, your experiment is just the first, exploratory step, and you're going to follow up with more experiments on the interesting individual results. If the cost of additional experiments is low and the cost of a false negative (missing a potentially important discovery) is high, you should probably use a fairly high false discovery rate, like $0.10$ or $0.20$, so that you don't miss anything important. Sometimes people use a false discovery rate of $0.05$, probably because of confusion about the difference between false discovery rate and probability of a false positive when the null is true; a false discovery rate of $0.05$ is probably too low for many experiments.
The Benjamini-Hochberg procedure is less sensitive than the Bonferroni procedure to your decision about what is a "family" of tests. If you increase the number of tests, and the distribution of $P$ values is the same in the newly added tests as in the original tests, the Benjamini-Hochberg procedure will yield the same proportion of significant results. For example, if García-Arenzana et al. (2014) had looked at $50$ variables instead of $25$ and the new $25$ tests had the same set of P values as the original $25$, they would have $10$ significant results under Benjamini-Hochberg with a false discovery rate of $0.25$. This doesn't mean you can completely ignore the question of what constitutes a family; if you mix two sets of tests, one with some low $P$ values and a second set without low $P$ values, you will reduce the number of significant results compared to just analyzing the first set by itself.
Sometimes you will see a "Benjamini-Hochberg adjusted $P$ value." The adjusted $P$ value for a test is either the raw $P$ value times $m/i$ or the adjusted $P$ value for the next higher raw $P$ value, whichever is smaller (remember that m is the number of tests and i is the rank of each test, with $1$ the rank of the smallest $P$ value). If the adjusted $P$ value is smaller than the false discovery rate, the test is significant. For example, the adjusted $P$ value for proteins in the example data set is $0.042\times (25/5)=0.210$; the adjusted $P$ value for white meat is the smaller of $0.041\times (25/4)=0.256$ or $0.210$, so it is $0.210$. In my opinion "adjusted $P$ values" are a little confusing, since they're not really estimates of the probability ($P$) of anything. I think it's better to give the raw $P$ values and say which are significant using the Benjamini-Hochberg procedure with your false discovery rate, but if Benjamini-Hochberg adjusted P values are common in the literature of your field, you might have to use them.
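If you do need Benjamini-Hochberg adjusted $P$ values, R's p.adjust() function computes the same adjustment described above; here is a short sketch, using the vector p of $25$ raw $P$ values from the Bonferroni sketch earlier:
# Benjamini-Hochberg adjusted P values for the 25 dietary P values.
p_bh <- p.adjust(p, method="BH")
round(p_bh, 3)        # e.g. the adjusted P value for proteins (raw P=0.042) is 0.210
# a test is significant if its adjusted P value is below your chosen false discovery rate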
Assumption
The Bonferroni correction and Benjamini-Hochberg procedure assume that the individual tests are independent of each other, as when you are comparing sample A vs. sample B, C vs. D, E vs. F, etc. If you are comparing sample A vs. sample B, A vs. C, A vs. D, etc., the comparisons are not independent; if A is higher than B, there's a good chance that A will be higher than C as well. One place this occurs is when you're doing unplanned comparisons of means in anova, for which a variety of other techniques have been developed, such as the Tukey-Kramer test. Another experimental design with multiple, non-independent comparisons is when you compare multiple variables between groups, and the variables are correlated with each other within groups. An example would be knocking out your favorite gene in mice and comparing everything you can think of on knockout vs. control mice: length, weight, strength, running speed, food consumption, feces production, etc. All of these variables are likely to be correlated within groups; mice that are longer will probably also weigh more, would be stronger, run faster, eat more food, and poop more. To analyze this kind of experiment, you can use multivariate analysis of variance, or manova, which I'm not covering in this textbook.
Other, more complicated techniques, such as Reiner et al. (2003), have been developed for controlling false discovery rate that may be more appropriate when there is lack of independence in the data. If you're using microarrays, in particular, you need to become familiar with this topic.
When not to correct for multiple comparisons
The goal of multiple comparisons corrections is to reduce the number of false positives, because false positives can be embarrassing, confusing, and cause you and other people to waste your time. An unfortunate byproduct of correcting for multiple comparisons is that you may increase the number of false negatives, where there really is an effect but you don't detect it as statistically significant. If false negatives are very costly, you may not want to correct for multiple comparisons at all. For example, let's say you've gone to a lot of trouble and expense to knock out your favorite gene, mannose-6-phosphate isomerase (Mpi), in a strain of mice that spontaneously develop lots of tumors. Hands trembling with excitement, you get the first Mpi-/- mice and start measuring things: blood pressure, growth rate, maze-learning speed, bone density, coat glossiness, everything you can think of to measure on a mouse. You measure $50$ things on Mpi-/- mice and normal mice, run the appropriate statistical tests, and the smallest $P$ value is $0.013$ for a difference in tumor size. If you use a Bonferroni correction, that $P=0.013$ won't be close to significant; it might not be significant with the Benjamini-Hochberg procedure, either. Should you conclude that there's no significant difference between the Mpi-/- and Mpi+/+ mice, write a boring little paper titled "Lack of anything interesting in Mpi-/- mice," and look for another project? No, your paper should be "Possible effect of Mpi on cancer." You should be suitably cautious, of course, and emphasize in the paper that there's a good chance that your result is a false positive; but the cost of a false positive—if further experiments show that Mpi really has no effect on tumors—is just a few more experiments. The cost of a false negative, on the other hand, could be that you've missed out on a hugely important discovery.
How to do the tests
Spreadsheet
I have written a spreadsheet to do the Benjamini-Hochberg procedure benjaminihochberg.xls on up to $1000$ $P$ values. It will tell you which $P$ values are significant after controlling for the false discovery rate you choose. It will also give the Benjamini-Hochberg adjusted $P$ values, even though I think they're kind of stupid.
I have also written a spreadsheet to do the Bonferroni correction bonferroni.xls on up to $1000$ $P$ values.
Web pages
I'm not aware of any web pages that will perform the Benjamini-Hochberg procedure.
R
Salvatore Mangiafico's $R$ Companion has sample R programs for the Bonferroni, Benjamini-Hochberg, and several other methods for correcting for multiple comparisons.
SAS
There is a PROC MULTTEST that will perform the Benjamini-Hochberg procedure, as well as many other multiple-comparison corrections. Here's an example using the diet and mammographic density data from García-Arenzana et al. (2014).
DATA mammodiet;
INPUT food $ Raw_P;
cards;
Blue_fish .34
Bread .594
Butter .212
Carbohydrates .384
Cereals_and_pasta .074
Dairy_products .94
Eggs .275
Fats .696
Fruit .269
Legumes .341
Nuts .06
Olive_oil .008
Potatoes .569
Processed_meat .986
Proteins .042
Red_meat .251
Semi-skimmed_milk .942
Skimmed_milk .222
Sweets .762
Total_calories .001
Total_meat .975
Vegetables .216
White_fish .205
White_meat .041
Whole_milk .039
;
PROC SORT DATA=mammodiet OUT=sorted_p;
BY Raw_P;
PROC MULTTEST INPVALUES=sorted_p FDR;
RUN;
Note that the $P$ value variable must be named "Raw_P". I sorted the data by "Raw_P" before doing the multiple comparisons test, to make the final output easier to read. In the PROC MULTTEST statement, INPVALUES tells SAS which data set contains the Raw_P variable, and FDR tells SAS to run the Benjamini-Hochberg procedure.
The output is the original list of $P$ values and a column labeled "False Discovery Rate." If the number in this column is less than the false discovery rate you chose before doing the experiment, the original ("raw") $P$ value is significant.
Test Raw False Discovery Rate
1 0.0010 0.0250
2 0.0080 0.1000
3 0.0390 0.2100
4 0.0410 0.2100
5 0.0420 0.2100
6 0.0600 0.2500
7 0.0740 0.2643
8 0.2050 0.4911
9 0.2120 0.4911
10 0.2160 0.4911
11 0.2220 0.4911
12 0.2510 0.4911
13 0.2690 0.4911
14 0.2750 0.4911
15 0.3400 0.5328
16 0.3410 0.5328
17 0.3840 0.5647
18 0.5690 0.7816
19 0.5940 0.7816
20 0.6960 0.8700
21 0.7620 0.9071
22 0.9400 0.9860
23 0.9420 0.9860
24 0.9750 0.9860
25 0.9860 0.9860
So if you had chosen a false discovery rate of $0.25$, the first $6$ would be significant; if you'd chosen a false discovery rate of $0.15$, only the first two would be significant.
Learning Objectives
• To use meta-analysis when you want to combine the results from different studies, making the equivalent of one big study, to see if an overall effect is significant.
When to use it
Meta-analysis is a statistical technique for combining the results of different studies to see if the overall effect is significant. People usually do this when there are multiple studies with conflicting results—a drug does or does not work, reducing salt in food does or does not affect blood pressure, that sort of thing. Meta-analysis is a way of combining the results of all the studies; ideally, the result is the same as doing one study with a really big sample size, one large enough to conclusively demonstrate an effect if there is one, or conclusively reject an effect if there isn't one of an appreciable size.
I'm going to outline the general steps involved in doing a meta-analysis, but I'm not going to describe it in sufficient detail that you could do one yourself; if that's what you want to do, see Berman and Parker (2002), Gurevitch and Hedges (2001), Hedges and Olkin (1985), or some other book. Instead, I hope to explain some of the basic steps of a meta-analysis, so that you'll know what to look for when you read the results of a meta-analysis that someone else has done.
Decide which studies to include
Before you start collecting studies, it's important to decide which ones you're going to include and which you'll exclude. Your criteria should be as objective as possible; someone else should be able to look at your criteria and then include and exclude the exact same studies that you did. For example, if you're looking at the effects of a drug on a disease, you might decide that only double-blind, placebo-controlled studies are worth looking at, or you might decide that single-blind studies (where the investigator knows who gets the placebo, but the patient doesn't) are acceptable; or you might decide that any study at all on the drug and the disease should be included.
You shouldn't use sample size as a criterion for including or excluding studies. The statistical techniques used for the meta-analysis will give studies with smaller sample sizes the lower weight they deserve.
Finding studies
The next step in a meta-analysis is finding all of the studies on the subject. A critical issue in meta-analysis is what's known as the "file-drawer effect"; people who do a study and fail to find a significant result are less likely to publish it than if they find a significant result. Studies with non-significant results are generally boring; it's difficult to get up the enthusiasm to write them up, and it's difficult to get them published in decent journals. It's very tempting for someone with a bunch of boring, non-significant data to quietly put it in a file drawer, say "I'll write that up when I get some free time," and then never actually get enough free time.
The reason the file-drawer effect is important to a meta-analysis is that even if there is no real effect, \(5\%\) of studies will show a significant result at the \(P<0.05\) level; that's what \(P<0.05\) means, after all, that there's a \(5\%\) probability of getting that result if the null hypothesis is true. So if \(100\) people did experiments to see whether thinking about long fingernails made your fingernails grow faster, you'd expect \(95\) of them to find non-significant results. They'd say to themselves, "Well, that didn't work out, maybe I'll write it up for the Journal of Fingernail Science someday," then go on to do experiments on whether thinking about long hair made your hair grow longer and never get around to writing up the fingernail results. The \(5\) people who did find a statistically significant effect of thought on fingernail growth would jump up and down in excitement at their amazing discovery, then get their papers published in Science or Nature. If you did a meta-analysis on the published results on fingernail thought and fingernail growth, you'd conclude that there was a strong effect, even though the null hypothesis is true.
To limit the file-drawer effect, it's important to do a thorough literature search, including really obscure journals, then try to see if there are unpublished experiments. To find out about unpublished experiments, you could look through summaries of funded grant proposals, which for government agencies such as NIH and NSF are searchable online; look through meeting abstracts in the appropriate field; write to the authors of published studies; and send out appeals on e-mail mailing lists.
You can never be 100% sure that you've found every study on your topic ever done, but that doesn't mean you can cynically dismiss the results of every meta-analysis with the magic words "file-drawer effect." If your meta-analysis of the effects of thought on fingernail growth found \(5\) published papers with individually significant results, and a thorough search using every resource you could think of found \(5\) other unpublished studies with non-significant results, your meta-analysis would probably show a significant overall effect, and you should probably believe it. For the \(5\) significant results to all be false positives, there would have to be something like \(90\) additional unpublished studies that you didn't know about, and surely the field of fingernail science is small enough that there couldn't be that many studies that you haven't heard of. There are ways to estimate how many unpublished, non-significant studies there would have to be to make the overall effect in a meta-analysis non-significant. If that number is absurdly large, you can be more confident that your significant meta-analysis is not due to the file-drawer effect.
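One commonly used version of this calculation is Rosenthal's "fail-safe number": convert each study's result to a standard normal deviate \(z_i\), combine the \(k\) studies by summing the \(z\) values and dividing by the square root of the number of studies, and solve for the number \(X\) of unpublished studies averaging zero effect that would make the combined result non-significant at the one-tailed \(P=0.05\) level; this works out to approximately \(X=(\sum z_i)^2/1.645^2-k\).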
Extract the information
If the goal of a meta-analysis is to estimate the mean difference between two treatments, you need the means, sample sizes, and a measure of the variation: standard deviation, standard error, or confidence interval. If the goal is to estimate the association between two measurement variables, you need the slope of the regression, the sample size, and the \(r^2\). Hopefully this information is presented in the publication in numerical form. Boring, non-significant results are more likely to be presented in an incomplete form, so you shouldn't be quick to exclude papers from your meta-analysis just because all the necessary information isn't presented in easy-to-use form in the paper. If it isn't, you might need to write the authors, or measure the size and position of features on published graphs.
Do the meta-analysis
The basic idea of a meta-analysis is that you take a weighted average of the difference in means, slope of a regression, or other statistic across the different studies. Experiments with larger sample sizes get more weight, as do experiments with smaller standard deviations or higher \(r^2\) values. You can then test whether this common estimate is significantly different from zero.
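Here is a minimal sketch of that weighted average in R, for the common case of combining differences in means; the numbers are made up, the weights are the inverse of the squared standard errors (a fixed-effect analysis), and a real meta-analysis would usually be done with a dedicated package such as metafor and often a random-effects model.
# Sketch of an inverse-variance weighted (fixed-effect) meta-analysis.
est <- c(0.42, 0.15, 0.30, -0.05, 0.22)    # made-up effect sizes from five studies
se  <- c(0.20, 0.10, 0.25, 0.15, 0.08)     # their standard errors
w <- 1/se^2                                # studies with smaller standard errors get more weight
pooled <- sum(w*est)/sum(w)                # weighted average effect
pooled_se <- sqrt(1/sum(w))
2*pnorm(-abs(pooled/pooled_se))            # P value testing whether the overall effect is zero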
Interpret the results
Meta-analysis was invented to be a more objective way of surveying the literature on a subject. A traditional literature survey consists of an expert reading a bunch of papers, dismissing or ignoring those that they don't think are very good, then coming to some conclusion based on what they think are the good papers. The problem with this is that it's easier to see the flaws in papers that disagree with your preconceived ideas about the subject and dismiss them, while deciding that papers that agree with your position are acceptable.
The problem with meta-analysis is that a lot of scientific studies really are crap, and pushing a bunch of little piles of crap together just gives you one big pile of crap. For example, let's say you want to know whether moonlight-energized water cures headaches. You expose some water to moonlight, give little bottles of it to \(20\) of your friends, and say "Take this the next time you have a headache." You ask them to record the severity of their headache on a \(10\)-point scale, drink the moonlight-energized water, then record the severity of their headache \(30\) minutes later. This study is crap—any reported improvement could be due to the placebo effect, or headaches naturally getting better with time, or moonlight-energized water curing dehydration just as well as regular water, or your friends lying because they knew you wanted to see improvement. If you include this crappy study in a big meta-analysis of the effects of moonlight-energized water on pain, no amount of sophisticated statistical analysis is going to make its crappiness go away.
You're probably thinking "moonlight-energized water" is another ridiculously absurd thing that I just made up, aren't you? That no one could be stupid enough to believe in such a thing? Unfortunately, there are people that stupid.
The hard work of a meta-analysis is finding all the studies and extracting the necessary information from them, so it's tempting to be impressed by a meta-analysis of a large number of studies. A meta-analysis of \(50\) studies sounds more impressive than a meta-analysis of \(5\) studies; it's \(10\) times as big and represents \(10\) times as much work, after all. However, you have to ask yourself, "Why do people keep studying the same thing over and over? What motivated someone to do that \(50^{th}\) experiment when it had already been done \(49\) times before?" Often, the reason for doing that \(50^{th}\) study is that the preceding \(49\) studies were crappy in some way. If you've got \(50\) studies, and \(5\) of them are better by some objective criteria than the other \(45\), you'd be better off using just the \(5\) best studies in your meta-analysis.
Example
Chondroitin is a polysaccharide derived from cartilage. It is commonly used by people with arthritis in the belief that it will reduce pain, but clinical studies of its effectiveness have yielded conflicting results. Reichenbach et al. (2007) performed a meta-analysis of studies on chondroitin and arthritis pain of the knee and hip. They identified relevant studies by electronically searching literature databases and clinical trial registries, manual searching of conference proceedings and the reference lists of papers, and contacting various experts in the field. Only trials that involved comparing patients given chondroitin with control patients were used; the control could be either a placebo or no treatment. They obtained the necessary information about the amount of pain and the variation by measuring graphs in the papers, if necessary, or by contacting the authors.
The initial literature search yielded \(291\) potentially relevant reports, but after eliminating those that didn't use controls, those that didn't randomly assign patients to the treatment and control groups, those that used other substances in combination with chondroitin, those for which the necessary information wasn't available, etc., they were left with \(20\) trials.
The statistical analysis of all \(20\) trials showed a large, significant effect of chondroitin in reducing arthritis pain. However, the authors noted that earlier studies, published in 1987-2001, had large effects, while more recent studies (which you would hope are better) showed little or no effect of chondroitin. In addition, trials with smaller standard errors (due to larger sample sizes or less variation among patients) showed little or no effect. In the end, Reichenbach et al. (2007) analyzed just the three largest studies with what they considered the best designs, and they showed essentially zero effect of chondroitin. They concluded that there's no good evidence that chondroitin is effective for knee and hip arthritis pain. Other researchers disagree with their conclusion (Goldberg et al. 2007, Pelletier 2007); while a careful meta-analysis is a valuable way to summarize the available information, it is unlikely to provide the last word on a question that has been addressed with large numbers of poorly designed studies.
Learning Objectives
• You can do most, maybe all of your statistics using a spreadsheet such as Excel. Here are some general tips.
Introduction
If you're like most biologists, you can do all of your statistics with spreadsheets such as Excel. You may spend months getting the most technologically sophisticated new biological techniques to work, but in the end you'll be able to analyze your data with a simple chi-squared test, $t$–test, one-way anova or linear regression. The graphing abilities of spreadsheets make it easy to inspect data for errors and outliers, look for non-linear relationships and non-normal distributions, and display your final results. Even if you're going to use something like SAS or SPSS or $R$, there will be many times when it's easier to enter your data into a spreadsheet first, inspect it for errors, sort and arrange it, then export it into a format suitable for your fancy-schmancy statistics package.
Some statisticians are contemptuous of Excel for statistics. One of their main complaints is that it can't do more sophisticated tests. While it is true that you can't do advanced statistics with Excel, that doesn't make it wrong to use it for simple statistics; that Excel can't do principal components analysis doesn't make its answer for a two-sample $t$–test incorrect. If you are in a field that requires complicated multivariate analyses, such as epidemiology or ecology, you will definitely have to use something more advanced than spreadsheets. But if you are doing well designed, simple laboratory experiments, you may be able to analyze all of your data with the kinds of tests you can do in spreadsheets.
The more serious complaint about Excel is that some of the procedures gave incorrect results (McCullough and Heiser 2008, Yalta 2008). Most of these problems were with procedures more advanced than those covered in this handbook, such as exponential smoothing, or were errors in how Excel analyzes very unusual data sets that you're unlikely to get from a real experiment. After years of complaining, Microsoft finally fixed many of the problems in Excel 2010 (Keeling and Pavur 2011). So for the statistical tests I describe in this handbook, I feel confident that you can use Excel and get accurate results.
A free alternative to Excel is Calc, part of the free, open-source OpenOffice.org package. Calc does almost everything that Excel does, with just enough exceptions to be annoying. Calc will open Excel files and can save files in Excel format. The OpenOffice.org package is available for Windows, Mac, and Linux. OpenOffice.org also includes a word processor (like Word) and presentation software (like PowerPoint).
Gnumeric sounds like a good, free, open-source spreadsheet program; while it is primarily used by Linux users, it can be made to work with Mac. I haven't used it, so I don't know how well my spreadsheets will work with it.
The instructions on this web page apply to both Excel and Calc, unless otherwise noted.
Basic spreadsheet tasks
I'm going to assume you know how to enter data into a spreadsheet, copy and paste, insert and delete rows and columns, and other simple tasks. If you're a complete beginner, you may want to work through one of the many introductory Excel tutorials available online. Here are a few other things that will be useful for handling data.
Separate text into columns
Excel
When you copy columns of data from a web page or text document, then paste them into an Excel spreadsheet, all the data will be in one column. To put the data into multiple columns, select the cells you want to convert, then choose "Text to columns..." from the Data menu. If you choose "Delimited," you can tell it that the columns are separated by spaces, commas, or some other character. Check the "Treat consecutive delimiters as one" box (in Excel) or the "Merge Delimiters" box (in Calc) if numbers may be separated by more than one space, more than one tab, etc. The data will be entered into the columns to the right of the original column, so make sure they're empty.
If you choose "Fixed width" instead of "Delimited", you can do things like tell it that the first $10$ characters go in column $1$, the next $7$ characters go in column $2$, and so on.
If you paste more text into the same Excel spreadsheet, it will automatically be separated into columns using the same delimiters. If you want to turn this off, select the column where you want to paste the data, choose "Text to columns..." from the Data menu, and choose "Delimited." Then unclick all the boxes for delimiters (spaces, commas, etc.) and click "Finish." Now paste your data into the column.
Series fill
You'll mainly use this for numbering a bunch of rows or columns. Numbering them will help you keep track of which row is which, and it will be especially useful if you want to sort the data, then put them back in their original order later. Put the first number of your series in a cell and select it, then choose "Fill: Series..." from the Edit menu. Choose "Rows" or "Columns" depending on whether you want the series to be in a row or a column, set the "Step value" (the amount the series goes up by; usually you'll use 1) and the "Stop value" (the last number in the series). So if you had a bunch of data in cells $B2$ through $E101$ and you wanted to number the rows, you'd put a $1$ in cell $A2$, choose "Columns", set the "Step value" to $1$ and the "Stop value" to $100$, and the numbers $1$ through $100$ would be entered in cells $A2$ through $A101$.
Sorting
To sort a bunch of data, select the cells and choose "Sort" from the Data menu. If the first row of your data set has column headers identifying what is in each column, click on "My list has headers." You can sort by multiple columns; for example, you could sort data on a bunch of chickens by "Breed" in column $A$, "Sex" in column $C$, and "Weight" in column $B$, and it would sort the data by breeds, then within each breed have all the females first and then all the males, and within each breed/sex combination the chickens would be listed from smallest to largest.
If you've entered a bunch of data, it's a good idea to sort each column of numbers and look at the smallest and largest values. This may help you spot numbers with misplaced decimal points and other egregious typing errors, as they'll be much larger or much smaller than the correct numbers.
Graphing
See the web page on graphing with Excel. Drawing some quick graphs is another good way to check your data for weirdness. For example, if you've entered the height and leg length of a bunch of people, draw a quick graph with height on the $X$ axis and leg length on the $Y$ axis. The two variables should be pretty tightly correlated, so if you see some outlier who's $2.10$ meters tall and has a leg that's only $0.65$ meters long, you know to double-check the data for that person.
Absolute and relative cell references
In the formula "$=B1+C1$", $B1$ and $C1$ are relative cell references. If this formula is in cell $D1$, "$B1$" means "that cell that is two cells to the left." When you copy cell $D1$ into cell $D2$, the formula becomes "$=B2+C2$"; when you copy it into cell $G1$, it would become "$=E1+F1$". This is a great thing about spreadsheets; for example, if you have long columns of numbers in columns $A$ and $B$ and you want to know the sum of each pair, you don't need to type "$=B1+C1$" into cell $D1$, then type "$=B2+C2$" into cell $D2$, then type "$=B3+C3$" into cell $D3$, and so on; you just type "$=B1+C1$" once into cell $D1$, then copy and paste it into all the cells in column $D$ at once.
Sometimes you don't want the cell references to change when you copy a cell; in that case, you should use absolute cell references, indicated with a dollar sign. A dollar sign before the letter means the column won't change when you copy and paste into a different cell. If you enter "$=\$B1+C1$" into cell $D1$, then copy it into cell $E1$, it will change to "$=\$B1+D1$"; the $C1$ will change to $D1$ because you've copied it one column over, but the $B1$ won't change because it has a dollar sign in front of it. A dollar sign before the number means the row won't change; if you enter "$=B\$1+C1$" into cell $D1$ and then copy it to cell $D2$, it will change to "$=B\$1+C2$". And a dollar sign before both the column and the row means that nothing will change; if you enter "$=\$B\$1+C1$" into cell $D2$ and then copy it into cell $E2$, it will change to "$=\$B\$1+D2$". So if you had $100$ numbers in column $B$, you could enter "$=B1-\text{AVERAGE}(B\$1:B\$100)$" in cell $C1$, copy it into cells $C2$ through $C100$, and each value in column $B$ would have the average of the $100$ numbers subtracted from it.
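As a small worked example (the numbers here are made up just for illustration), suppose cells $B1$, $B2$, and $B3$ contain $10$, $20$, and $60$, and you enter "$=B1-\text{AVERAGE}(B\$1:B\$3)$" into cell $C1$. The average of the three numbers is $30$, so $C1$ shows $-20$. When you copy $C1$ into $C2$ and $C3$, the relative reference changes to $B2$ and then $B3$, while the absolute references keep pointing at $B\$1:B\$3$, so $C2$ shows $-10$ and $C3$ shows $30$.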
Paste Special
When a cell has a formula in it (such as "$=B1*C1+D1^2$"), you see the numerical result of the formula (such as "$7.15$") in the spreadsheet. If you copy and paste that cell, the formula will be pasted into the new cell; unless the formula only has absolute cell references, it will show a different numerical result. Even if you use only absolute cell references, the result of the formula will change every time you change the values in $B1,\; C1\; or\; D1$. When you want to copy and paste the number that results from a function in Excel, choose "Paste Special" from the Edit menu and then click the button that says "Values." The number ($7.15$, in this example) will be pasted into the cell.
In Calc, choose "Paste Special" from the Edit menu, uncheck the boxes labeled "Paste All" and "Formulas," and check the box labeled "Numbers."
Change number format
The default format in Excel and Calc displays $9$ digits to the right of the decimal point, if the column is wide enough. For example, the $P$ value corresponding to a chi-square of $4.50$ with $1$ degree of freedom, found with "=CHIDIST(4.50, 1)", will be displayed as $0.033894854$. This number of digits is almost always ridiculous. To change the number of decimal places that are displayed in a cell, choose "Cells..." from the Format menu, then choose the "Number" tab. Under "Category," choose "Number" and tell it how many decimal places you want to display. For the $P$ value above, you'd probably just need three digits, $0.034$. Note that this only changes the way the number is displayed; all of the digits are still in the cell, they're just invisible.
The disadvantage of setting the "Number" format to a fixed number of digits is that very small numbers will be rounded to $0$. Thus if you set the format to three digits to the right of the decimal, "=CHIDIST(24.50,1)" will display as "0.000" when it's really $0.00000074$. The default format ("General" format) automatically uses scientific notation for very small or large numbers, and will display $7.4309837243E-007$, which means $7.43\times 10^{-7}$; that's better than just rounding to $0$, but still has way too many digits. If you see a $0$ in a spreadsheet where you expect a non-zero number (such as a $P$ value), change the format to back to General.
For $P$ values and other results in the spreadsheets linked to this handbook, I created a user-defined format that uses $6$ digits right of the decimal point for larger numbers, and scientific notation for smaller numbers. I did this by choosing "Cells" from the Format menu and pasting the following into the box labeled "Format code":
$[>0.00001]0.\#\#\#\#\#\#;[<-0.00001]0.\#\#\#\#\#\#;0.00E-00$
This will display $0$ as $0.00E00$, but otherwise it works pretty well.
If a column is too narrow to display a number in the specified format, digits to the right of the decimal point will be rounded. If there are too many digits to the left of the decimal point to display them all, the cell will contain "$\#\#\#$". Make sure your columns are wide enough to display all your numbers.
Useful spreadsheet functions
There are hundreds of functions in Excel and Calc; here are the ones that I find most useful for statistics and general data handling. Note that where the argument (the part in parentheses) of a function is "$Y$", it means a single number or a single cell in the spreadsheet. Where the argument says "$Ys$", it means more than one number or cell. See AVERAGE(Ys) for an example.
All of the examples here are given in Excel format. Calc uses a semicolon instead of a comma to separate multiple parameters; for example, Excel would use "=ROUND(A1, 2)" to return the value in cell $A1$ rounded to $2$ decimal places, while Calc would use "=ROUND(A1; 2)". If you import an Excel file into Calc or export a Calc file to Excel format, Calc automatically converts between commas and semicolons. However, if you type a formula into Calc with a comma instead of a semicolon, Calc acts like it has no idea what you're talking about; all it says is "#NAME?".
I've typed the function names in all capital letters to make them stand out, but you can use lower case letters.
Math functions
ABS(Y) Returns the absolute value of a number.
EXP(Y) Returns $e$ to the $y^{th}$ power. This is the inverse of LN, meaning that "=EXP(LN(Y))" equals $Y$.
LN(Y) Returns the natural logarithm (logarithm to the base e) of $Y$.
LOG10(Y) Returns the base-$10$ logarithm of $Y$. The inverse of LOG10 is raising $10$ to the $Y^{th}$ power, meaning "=10^(LOG10(Y))" returns $Y$.
RAND() Returns a pseudorandom number, equal to or greater than zero and less than one. You must use empty parentheses so the spreadsheet knows that RAND is a function. For a pseudorandom number in some other range, just multiply; thus "=RAND()*79" would give you a number greater than or equal to $0$ and less than $79$. The value will change every time you enter something in any cell. One use of random numbers is for randomly assigning individuals to different treatments; you could enter "=RAND()" next to each individual, Copy and Paste Special the random numbers, Sort the individuals based on the column of random numbers, then assign the first $10$ individuals to the placebo, the next $10$ individuals to $10 mg$ of the trial drug, etc.
A "pseudorandom" number is generated by a mathematical function; if you started with the same starting number (the "seed"), you'd get the same series of numbers. Excel's pseudorandom number generator bases its seed on the time given by the computer's internal clock, so you won't get the same seed twice. There are problems with Excel's pseudorandom number generator that make it inappropriate for serious Monte Carlo simulations, but the numbers it produces are random enough for anything you're likely to do as an experimental biologist.
ROUND(Y,digits) Returns $Y$ rounded to the specified number of digits. For example, if cell $A1$ contains the number $37.38$, "=ROUND(A1, 1)" returns $37.4$, "=ROUND(A1, 0)" returns $37$, and "=ROUND(A1, -1)" returns $40$. Numbers ending in $5$ are rounded up (away from zero), so "=ROUND(37.35, 1)" returns $37.4$ and "=ROUND(-37.35, 1)" returns $-37.4$.
SQRT(Y) Returns the square root of $Y$.
SUM(Ys) Returns the sum of a set of numbers.
Logical functions
AND(logical_test1, logical_test2,...) Returns TRUE if logical_test1, logical_test2... are all true, otherwise returns FALSE. As an example, let's say that cells $A1,\; B1\; \text {and}\; C1$ all contain numbers, and you want to know whether they're all greater than $100$. One way to find out would be with the statement "=AND(A1>100, B1>100, C1>100)", which would return TRUE if all three were greater than $100$ and FALSE if any one were not greater than $100$.
IF(logical_test, A, B) Returns $A$ if the logical test is true, $B$ if it is false. As an example, let's say you have $1000$ rows of data in columns $A$ through $E$, with a unique ID number in column $A$, and you want to check for duplicates. Sort the data by column $A$, so if there are any duplicate ID numbers, they'll be adjacent. Then in cell $F1$, enter "=IF(A1=A2, "duplicate", "ok")". This will enter the word "duplicate" if the number in $A1$ equals the number in $A2$; otherwise, it will enter the word "ok". Then copy this into cells $F2$ through $F999$. Now you can quickly scan through the rows and see where the duplicates are.
ISNUMBER(Y) Returns TRUE if $Y$ is a number, otherwise returns FALSE. This can be useful for identifying cells with missing values. If you want to check the values in cells $A1$ to $A1000$ for missing data, you could enter "=IF(ISNUMBER(A1), "OK", "MISSING")" into cell $B1$, copy it into cells $B2$ to $B1000$, and then every cell in $A1$ that didn't contain a number would have "MISSING" next to it in column $B$.
OR(logical_test1, logical_test2,...) Returns TRUE if one or more of logical_test1, logical_test2... are true, otherwise returns FALSE. As an example, let's say that cells $A1,\; B1\; \text{and}\; C1$ all contain numbers, and you want to know whether any is greater than $100$. One way to find out would be with the statement "=OR(A1>100, B1>100, C1>100)", which would return TRUE if one or more were greater than 100 and FALSE if all three were not greater than 100.
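You can combine the logical functions for quick data checking, along the lines of the sorting and graphing checks described above. For example (the cutoff values here are made up; use whatever limits make sense for your data), if human heights in meters are in cells $A1$ through $A1000$, entering "=IF(OR(A1<1.0, A1>2.5), "check", "ok")" into cell $B1$ and copying it into cells $B2$ through $B1000$ will flag any height below $1$ meter or above $2.5$ meters, which would probably be a typing error.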
Statistical functions
AVERAGE(Ys) Returns the arithmetic mean of a set of numbers. For example, "=AVERAGE(B1..B17)" would give the mean of the numbers in cells $B1..B17$, and "=AVERAGE(7, A1, B1..C17)" would give the mean of $7$, the number in cell $A1$, and the numbers in the cells $B1..C17$. Note that Excel only counts those cells that have numbers in them; you could enter "=AVERAGE(A1:A100)", put numbers in cells $A1$ to $A9$, and Excel would correctly compute the arithmetic mean of those $9$ numbers. This is true for other functions that operate on a range of cells.
BINOMDIST(S, K, P, cumulative_probability) Returns the binomial probability of getting $S$ "successes" in $K$ trials, under the null hypothesis that the probability of a success is $P$. The argument "cumulative_probability" should be TRUE if you want the cumulative probability of getting $S$ or fewer successes, while it should be FALSE if you want the probability of getting exactly $S$ successes. (Calc uses $1$ and $0$ instead of TRUE and FALSE.) This has been renamed "BINOM.DIST" in newer versions of Excel, but you can still use "BINOMDIST".
CHIDIST(Y, df) Returns the probability associated with a variable, $Y$, that is chi-square distributed with $df$ degrees of freedom. If you use SAS or some other program and it gives the result as "Chi-sq=78.34, 1 d.f., P<0.0001", you can use the CHIDIST function to figure out just how small your $P$ value is; in this case, "=CHIDIST(78.34, 1)" yields $8.67\times 10^{-19}$. This has been renamed CHISQ.DIST.RT in newer versions of Excel, but you can still use CHIDIST.
CONFIDENCE(alpha, standard-deviation, sample-size) Returns the confidence interval of a mean, assuming you know the population standard deviation. Because you don't know the population standard deviation, you should never use this function; instead, see the web page on confidence intervals for instructions on how to calculate the confidence interval correctly.
COUNT(Ys) Counts the number of cells in a range that contain numbers; if you've entered data into cells $A1$ through $A9,\; A11,\; \text{and}\; A17$, "=COUNT(A1:A100)" will yield $11$.
COUNTIF(Ys, criterion) Counts the number of cells in a range that meet the given criterion.
"=COUNTIF(D1:E100,50)" would count the number of cells in the range $D1:E100$ that were equal to $50$;
"=COUNTIF(D1:E100,">50")" would count the number of cells that had numbers greater than $50$ (note the quotation marks around ">50");
"=COUNTIF(D1:E100,F3)" would count the number of cells that had the same contents as cell $F3$;
"=COUNTIF(D1:E100,"Bob")" would count the number of cells that contained just the word "Bob". You can use wildcards; "?" stands for exactly one character, so "Bo?" would count "Bob" or "Boo" but not "Bobble", while "Bo*" would count "Bob", "Boo", "Bobble" or "Bodacious".
DEVSQ(Ys) Returns the sum of squares of deviations of data points from the mean. This is what statisticians refer to as the "sum of squares." I use this in setting up spreadsheets to do anova, but you'll probably never need this.
FDIST(Y, df1, df2) Returns the probability value associated with a variable, $Y$, that is $F$-distributed with $df1$ degrees of freedom in the numerator and $df2$ degrees of freedom in the denominator. If you use SAS or some other program and it gives the result as "F=78.34, 1, 19 d.f., P<0.0001", you can use the FDIST function to figure out just how small your $P$ value is; in this case, "=FDIST(78.34, 1, 19)" yields $3.62\times 10^{-8}$. Newer versions of Excel call this function F.DIST.RT, but you can still use FDIST.
MEDIAN(Ys) Returns the median of a set of numbers. If the sample size is even, this returns the mean of the two middle numbers.
MIN(Ys) Returns the minimum of a set of numbers. Useful for finding the range, which is MAX(Ys)-MIN(Ys).
MAX(Ys) Returns the maximum of a set of numbers.
NORMINV(probability, mean, standard_deviation) Returns the inverse of the normal distribution for a given mean and standard deviation. This is useful for creating a set of random numbers that are normally distributed, which you can use for simulations and teaching demonstrations; if you paste "=NORMINV(RAND(),5,1.5)" into a range of cells, you'll get a set of random numbers that are normally distributed with a mean of $5$ and a standard deviation of $1.5$.
RANK.AVG(X, Ys, type) Returns the rank of $X$ in the set of $Ys$. If type is set to $0$, the largest number has a rank of $1$; if type is set to $1$, the smallest number has a rank of $1$. For example, if cells $A1:A8$ contain the numbers $10,\; 12,\; 14,\; 14,\; 16,\; 17,\; 20,\; 21$, "=RANK(A2, A\$1:A\$8, 0)" returns $7$ (the number $12$ is the $7^{th}$ largest in that list), and "=RANK(A2, A\$1:A\$8, 1)" returns $2$ (it's the $2^{nd}$ smallest).
The function "RANK.AVG" gives average ranks to ties; for the above set of numbers, "=RANK.AVG(A3, A\$1:A\$8, 0)" would return $5.5$, because the two values of $14$ are tied for fifth largest. Older versions of Excel and Calc don't have RANK.AVG; they have RANK, which handled ties incorrectly for statistical purposes. If you're using Calc or an older version of Excel, this formula shows how to get ranks with ties handled correctly:
$=\text{AVERAGE}(\text{RANK}(A1,\; A\$1:A\$8,\; 0),\; 1+\text{COUNT}(A\$1:A\$8)-\text{RANK}(A1,\; A\$1:A\$8,\; 1))$
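To see how this works, take the first of the tied $14$s in the list above, which is in cell $A3$ (so the copied formula refers to $A3$). "=RANK(A3, A\$1:A\$8, 0)" returns $5$, because RANK gives both tied values the top rank of the tied positions (the two $14$s occupy the $5^{th}$ and $6^{th}$ positions, and both get $5$), and "=RANK(A3, A\$1:A\$8, 1)" returns $3$, so $1+8-3$ is $6$; the average of $5$ and $6$ is the correct tied rank of $5.5$.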
STDEV(Ys) Returns an estimate of the standard deviation based on a population sample. This is the function you should use for standard deviation.
STDEVP(Ys) Returns the standard deviation of values from an entire population, not just a sample. You should never use this function.
SUM(Ys) Returns the sum of the $Ys$.
SUMSQ(Ys) Returns the sum of the squared values. Note that statisticians use "sum of squares" as a shorthand term for the sum of the squared deviations from the mean. SUMSQ does not give you the sum of squares in this statistical sense; for the statistical sum of squares, use DEVSQ. You will probably never use SUMSQ.
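In case you're curious, the relationship between the two is that the statistical sum of squares is the sum of the squared values minus a correction term; "=DEVSQ(A1:A100)" gives the same result as "=SUMSQ(A1:A100)-SUM(A1:A100)^2/COUNT(A1:A100)".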
TDIST(Y, df, tails) Returns the probability value associated with a variable, $Y$, that is $t$-distributed with $df$ degrees of freedom and tails equal to one or two (you'll almost always want the two-tailed test). If you use SAS or some other program and it gives the result as "t=78.34, 19 d.f., P<0.0001", you can use the TDIST function to figure out just how small your $P$ value is; in this case, "=TDIST(78.34, 19, 2)" yields $2.55\times 10^{-25}$. Newer versions of Excel have renamed this function T.DIST.2T, but you can still use TDIST.
VAR(Ys) Returns an estimate of the variance based on a population sample. This is the function you should use for variance.
VARP(Ys) Returns the variance of values from an entire population, not just a sample. You should never use this function.
Learning Objectives
• It's not easy, but you can force spreadsheets to make publication-quality scientific graphs. This page explains how.
Introduction
Drawing graphs is an important part of analyzing your data and presenting the results of your research. Here I describe the features of clear, effective graphs, and I outline techniques for generating graphs using Excel. Most of these instructions also apply if you're using Calc, part of the free OpenOffice.org suite of programs, instead of Excel.
Many of the default conditions for Excel graphs are annoying, but with a little work, you can get it to produce graphs that are good enough for presentations and web pages. With a little more work, you can make publication-quality graphs. If you're drawing a lot of graphs, you may find it easier to use a specialized scientific graphing program.
General tips for all graphs
• Don't clutter up your graph with unnecessary junk. Grid lines, background patterns, $3-D$ effects, unnecessary legends, excessive tick marks, etc. all distract from the message of your graph.
• Do include all necessary information. Clearly label both axes of your graph, including measurement units if appropriate. You should identify symbols and patterns in a legend on the graph, or in the caption. If the graph has "error bars," you should say in the caption whether they're $95\%$ confidence interval, standard error, standard deviation, comparison interval, or something else.
• Don't use color in graphs for publication. If your paper is a success, many people will be reading photocopies or will print it on a black-and-white printer. If the caption of a graph says "Red bars are mean HDL levels for patients taking 2000 mg niacin/day, while blue bars are patients taking the placebo," some of your readers will just see gray bars and will be confused and angry. For bars, use solid black, empty, gray, cross-hatching, vertical stripes, horizontal stripes, etc. Don't use different shades of gray, they may be hard to distinguish in photocopies. There are enough different symbols that you shouldn't need to use colors.
• Do use color in graphs for presentations. It's pretty, and it makes it easier to distinguish different categories of bars or symbols. But don't use red type on a blue background (or vice-versa), as the eye has a hard time focusing on both colors at once and it creates a distracting $3-D$ effect. And don't use both red and green bars or symbols on the same graph; from $5\%$ to $10\%$ of the men in your audience (and less than $1\%$ of the women) have red-green colorblindness and can't distinguish red from green.
Choosing the right kind of graph
There are many kinds of graphs—bubble graphs, pie graphs, doughnut graphs, radar graphs—and each may be the best for some kinds of data. But by far the most common graphs in scientific publications are scatter graphs and bar graphs, so that's all that I'll talk about here.
Use a scatter graph (also known as an $X-Y$ graph) for graphing data sets consisting of pairs of numbers. These could be measurement variables, or they could be nominal variables summarized as percentages. Plot the independent variable on the $X$ axis (the horizontal axis), and plot the dependent variable on the $Y$ axis.
The independent variable is the one that you manipulate, and the dependent variable is the one that you observe. For example, you might manipulate salt content in the diet and observe the effect this has on blood pressure. Sometimes you don't really manipulate either variable, you observe them both. In that case, if you are testing the hypothesis that changes in one variable cause changes in the other, put the variable that you think causes the changes on the $X$ axis. For example, you might plot "height, in cm" on the $X$ axis and "number of head-bumps per week" on the $Y$ axis if you are investigating whether being tall causes people to bump their heads more often. Finally, there are times when there is no cause-and-effect relationship, in which case you can plot either variable on the $X$ axis; an example would be a graph showing the correlation between arm length and leg length.
There are a few situations where it is common to put the independent variable on the $Y$ axis. For example, oceanographers often put "distance below the surface of the ocean" on the $Y$ axis, with the top of the ocean at the top of the graph, and the dependent variable (such as chlorophyll concentration, salinity, fish abundance, etc.) on the $X$ axis. Don't do this unless you're really sure that it's a strong tradition in your field.
Use a bar graph for plotting means or percentages for different values of a nominal variable, such as mean blood pressure for people on four different diets. Usually, the mean or percentage is on the $Y$ axis, and the different values of the nominal variable are on the $X$ axis, yielding vertical bars.
In general, I recommend using a bar graph when the variable on the $X$ axis is nominal, and a scatter graph when the variable on the $X$ axis is measurement. Sometimes it is not clear whether the variable on the $X$ axis is a measurement or nominal variable, and thus whether the graph should be a scatter graph or a bar graph. This is most common with measurements taken at different times. In this case, I think a good rule is that if you could have had additional data points in between the values on your $X$ axis, then you should use a scatter graph; if you couldn't have additional data points, a bar graph is appropriate. For example, if you sample the pollen content of the air on January 15, February 15, March 15, etc., you should use a scatter graph, with "day of the year" on the $X$ axis. Each point represents the pollen content on a single day, and you could have sampled on other days; there could be points in between January 15 and February 15. However, if you sampled the pollen every day of the year and then calculated the mean pollen content for each month, you should plot a bar graph, with a separate bar for each month. This is because you have one mean for January, and one mean for February, and of course there are no months between January and February. This is just a recommendation on my part; if most people in your field plot this kind of data with a scatter graph, you probably should too.
Drawing scatter graphs with Excel
1. Put your independent variable in one column, with the dependent variable in the column to its right. You can have more than one dependent variable, each in its own column; each will be plotted with a different symbol.
2. If you are plotting 95% confidence intervals, standard errors, standard deviation, or some other kind of error bar, put the values in the next column. These should be intervals, not limits; thus if your first data point has an $X$ value of $7$ and a $Y$ value of $4\pm 1.5$, you'd have $7$ in the first column, $4$ in the second column, and $1.5$ in the third column. For limits that are asymmetrical, such as the confidence limits on a binomial percentage, you'll need two columns, one for the difference between the percentage and the lower confidence limit, and one for the difference between the percentage and the upper confidence limit.
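Here's an example of what the spreadsheet layout might look like; the first data point is the one just described ($X$ value of $7$, $Y$ value of $4\pm 1.5$), and the other rows are made-up numbers included only to show the arrangement of columns. Row $1$ holds the column labels, column $A$ has the $X$ values, column $B$ has the $Y$ values, and column $C$ has the error intervals; the steps below refer to cell ranges in a layout like this:
2   7    4.0   1.5
3   9    4.9   1.2
4   11   5.8   1.7
5   13   6.3   1.4
6   15   7.1   1.6
7   17   8.0   1.3
8   19   8.8   1.8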
1. Select the cells that have the data in them. Don't select the cells that contain the confidence intervals. In the above example, you'd select cells $A2$ through $B8$.
2. From the Insert menu, choose "Chart". Choose "Scatter" (called "$X\; Y$" in some versions of Excel) as your chart type, then "Marked Scatter" (the one with just dots, not lines) as your chart subtype. Do not choose "Line"; the little picture may look like a scatter graph, but it isn't. And don't choose the other types of scatter graphs, even if you're going to put lines on your graph; you'll add the lines to your "Marked Scatter" graph later.
3. As you can see, the default graph looks horrible, so you need to fix it by formatting the various parts of the graph. Depending on which version of Excel you're using, you may need to click on the "Chart Layout" tab, or choose "Formatting Palette" from the View menu. If you don't see those, you can usually click once on the part of the graph you want to format, then choose it from the Format menu.
4. You can enter a "Chart Title", which you will need for a presentation graph. You probably won't want a title for a publication graph (since the graph will have a detailed caption there). Then enter titles for the $X$ axis and $Y$ axis, and be sure to include the units. By clicking on an axis title and then choosing "Axis Title..." from the Format menu, you can format the font and other aspects of the titles.
5. Use the "Legend" tab to get rid of the legend if you only have one set of $Y$ values. If you have more than one set of $Y$ values, get rid of the legend if you're going to explain the different symbols in the figure caption; leave the legend on if you think that's the most effective way to explain the symbols.
6. Click on the "Axes" tab, choose the $Y$ axis, and choose "Axis options". Modify the "Scale" (the minimum and maximum values of the $Y$ axis). The maximum should be a nice round number, somewhat larger than the highest point on the graph. If you're plotting a binomial percentage, don't make the $Y$ scale greater than $100\%$. If you're going to be adding error bars, the maximum $Y$ should be high enough to include them. The minimum value on the $Y$ scale should usually be zero, unless your observed values vary over a fairly narrow range. A good rule of thumb (that I made up, so don't take it too seriously) is that if your maximum observed $Y$ is more than twice as large as your minimum observed $Y$, your $Y$ scale should go down to zero. If you're plotting multiple graphs of similar data, they should all have the same scales for easier comparison.
7. Also use the "Axes" tab to format the "Number" (the format for the numbers on the $Y$ axis), "Ticks" (the position of tick marks, and whether you want "minor" tick marks in between the "major" ones). Use "Font" to set the font of the labels. Most publications recommend sans-serif fonts (such as Arial, Geneva, or Helvetica) for figures, and you should use the same font for axis labels, titles, and any other text on your graph.
8. Format your $X$ axis the same way you formatted your $Y$ axis.
9. Use the "Gridlines" tab get rid of the gridlines; they're ugly and unnecessary.
10. If you want to add a regression line to your graph, click on one of the symbols, then choose "Add Trendline..." from the Chart menu. You will almost always want the linear trendline. Only add a regression line if it conveys useful information about your data; don't just automatically add one as decoration to all scatter graphs.
11. If you want to add error bars, ignore the "Error Bars" tab; instead, click on one of the symbols on the graph, and choose "Data Series" from the Format menu. Click on "Error Bars" on the left side, and then choose "Y Error Bars". Ignore "Error Bars with Standard Error" and "Error Bars with Standard Deviation", because they are not what they sound like; click on "Custom" instead. Click on the "Specify value" button and click on the little picture to the right of the "Positive Error Value". Then drag to select the range of cells that contains your positive error intervals. In the above example, you would select cells $C2$ to $C8$. Click on the picture next to the box, and use the same procedure to select the cells containing your negative error intervals (which will be the same range of cells as the positive intervals, unless your error bars are asymmetrical). If you want horizontal ($X$ axis) error bars as well, repeat this procedure.
1. To format the symbols, click on one, and choose "Data Series" from the Format menu. Use "Marker Style" to set the shape of the markers, "Marker Line" to set the color and thickness of the line around the symbols, and "Marker Fill" to set the color that fills the marker. Repeat this for each set of symbols.
2. Click in the graph area, inside the graph, to select the whole graph. Choose "Plot Area" from the Format menu. Choose "Line" and set the color to black, to draw a black line on all four sides of the graph.
3. Click in the graph area, outside the graph, to select the whole box that includes the graph and the labels. Choose "Chart Area" from the Format menu. Choose "Line" and set the color to "No Line". On the "Properties" tab, choose "Don't move or size with cells," so the graph won't change size if you adjust the column widths of the spreadsheet.
4. You should now have a beautiful, beautiful graph. You can click once on the graph area (in the blank area outside the actual graph), copy it, and paste it into a word processing document, graphics program or presentation.
Drawing bar graphs with Excel
1. Put the values of the independent variable (the nominal variable) in one column, with the dependent variable in the column to its right. The first column will be used to label the bars or clusters of bars. You can have more than one dependent variable, each in its own column; each will be plotted with a different pattern of bar.
2. If you are plotting $95\%$ confidence intervals or some other kind of error bar, put the values in the next column. These should be confidence intervals, not confidence limits; thus if your first row has a $Y$ value of $4\pm 1.5$, you'd have Control in the first column, $4$ in the second column, and $1.5$ in the third column. For confidence limits that are asymmetrical, such as the confidence intervals on a binomial percentage, you'll need two columns, one for the lower confidence interval, and one for the upper confidence interval.
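Here's an example of what the spreadsheet layout might look like for a bar graph; the first row is the one just described (Control, $4\pm 1.5$), and the other names and numbers are made up only to show the arrangement of columns. Row $1$ holds the column labels, column $A$ has the values of the nominal variable, column $B$ has the means, and column $C$ has the error intervals; the steps below refer to cell ranges in a layout like this:
2   Control       4.0   1.5
3   Treatment A   5.2   1.7
4   Treatment B   4.8   1.2
5   Treatment C   6.1   1.9
6   Treatment D   5.5   1.4
7   Treatment E   6.6   1.6
8   Treatment F   7.0   1.8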
1. Select the cells that have the data in them. Include the first column, with the values of the nominal variable, but don't select cells that contain confidence intervals.
2. From the Insert menu, choose "Chart". Choose "Column" as your chart type, and then "Clustered Column" under "$2-D$ Column." Do not choose the three-dimensional bars, as they just add a bunch of clutter to your graph without conveying any additional information.
3. The default graph looks horrible, so you need to fix it by formatting the various parts of the graph. Depending on which version of Excel you're using, you may need to click on the "Chart Layout" tab, or choose "Formatting Palette" from the View menu. If you don't see those, you can usually click once on the part of the graph you want to format, then choose it from the Format menu.
4. You can enter a "Chart Title", which you will need for a presentation, but probably not for a publication (since the graph will have a detailed caption there). Then enter a title for the $Y$ axis, including the units. You may or may not need an $X$ axis title, depending on how self-explanatory the column labels are. By clicking on "Axis title options..." you can format the font and other aspects of the titles.
5. Use the "Legend" tab to get rid of the legend if you only have one set of bars. If you have more than one set of bars, get rid of the legend if you're going to explain the different patterns in the figure caption; leave the legend on if you think that's the most effective way to explain the bar patterns.
6. Click on the "Axes" tab, choose the Y axis, and choose "Axis options". Modify the "Scale" (the minimum and maximum values of the $Y$ axis). The maximum should be a nice round number, somewhat larger than the highest point on the graph. If you're plotting a binomial percentage, don't make the $Y$ scale greater than $100\%$. If you're going to be adding error bars, the maximum $Y$ should be high enough to include them. The minimum value on the $Y$ scale should usually be zero, unless your observed values vary over a fairly narrow range. A good rule of thumb (that I made up, so don't take it too seriously) is that if your maximum observed $Y$ is more than twice as large as your minimum observed $Y$, your $Y$ scale should go down to zero. If you're plotting multiple graphs of similar data, they should all have the same scales for easier comparison.
7. Also use the "Axes" tab to format the "Number" (the format for the numbers on the $Y$ axis), Ticks (the position of tick marks, and whether you want "minor" tick marks in between the "major" ones). Use "Font" to set the font of the labels. Most publications recommend sans-serif fonts (such as Arial, Geneva, or Helvetica) for figures, and you should use the same font for axis labels, titles, and any other text on your graph.
8. Format your $X$ axis the same way you formatted your Y axis.
9. Use the "Gridlines" tab get rid of the gridlines; they're ugly and unnecessary.
10. If you want to add error bars, ignore the "Error Bars" tab; instead, click on one of the bars on the graph, and choose "Data Series" from the Format menu. Click on "Error Bars" on the left side. Ignore "Standard Error" and "Standard Deviation", because they are not what they sound like; click on "Custom" instead. Click on the "Specify value" button and click on the little picture to the right of the "Positive Error Value". Then drag to select the range of cells that contains your positive error intervals. In the above example, you would select cells $C2$ to $C8$. Click on the picture next to the box, and use the same procedure to select the cells containing your negative error intervals (which will be the same range of cells as the positive intervals, unless your error bars are asymmetrical).
11. To format the bars, click on one, and choose "Data Series" from the "Format" menu. Use "Line" to set the color and thickness of the lines around the bars, and "Fill" to set the color and pattern that fills the bars. Repeat this for each set of bars. Use "Options" to adjust the "Gap width," the space between sets of bars, and "Overlap" to adjust the space between bars within a set. Negative values for "Overlap" will produce a gap between bars within the same group.
12. Click in the graph area, inside the graph, to select the whole graph. Choose "Plot Area" from the Format menu. Choose "Line" and set the color to black, to draw a black line on all four sides of the graph.
13. Click in the graph area, outside the graph, to select the whole box that includes the graph and the labels. Choose "Chart Area" from the Format menu. Choose "Line" and set the color to "No Line". On the "Properties" tab, choose "Don't move or size with cells," so the graph won't change size if you adjust the column widths of the spreadsheet.
14. You should now have a beautiful, beautiful graph.
Exporting Excel graphs to other formats
Once you've produced a graph, you'll probably want to export it to another program. You may want to put the graph in a presentation (Powerpoint, Keynote, Impress, etc.) or a word processing document. This is easy; click in the graph area to select the whole thing, copy it, then paste it into your presentation or word processing document. Sometimes, this will be good enough quality for your purposes.
Sometimes, you'll want to put the graph in a graphics program, so you can refine the graphics in ways that aren't possible in Excel, or so you can export the graph as a separate graphics file. This is particularly important for publications, where you need each figure to be a separate graphics file in the format and high resolution demanded by the publisher. To do this, right-click on the graph area (control-click on a Mac) somewhere outside the graph, then choose "Save as Picture". Change the format to PDF and you will create a pdf file containing just your graph. You can then open the pdf in a vector graphics program such as Adobe Illustrator or the free program Inkscape, ungroup the different elements of the graph, modify it, and export it in whatever format you need.
Learning Objectives
• Here are some tips for presenting scientific information in tables.
Graph or table
For a presentation, you should almost always use a graph, rather than a table, to present your data. It's easier to compare numbers to each other if they're represented by bars or symbols on a graph, rather than numbers. Here's data from the one-way anova page presented in both a graph and a table:
Length of the anterior adductor muscle scar divided by total length in Mytilus trossulus. SE: standard error, N: sample size
Location Mean AAM/length SE N
Tillamook 0.080 0.0038 10
Newport 0.075 0.0030 8
Petersburg 0.103 0.0061 7
Magadan 0.078 0.0046 8
Tvarminne 0.096 0.0053 6
It's a lot easier to look at the graph and quickly see that the AAM/length ratio is highest at Petersburg and Tvarminne, while the other three locations are lower and about the same as each other. If you put this table in a presentation, you would have to point your laser frantically at one of the $15$ numbers and say, "Here! Look at this number!" as your audience's attention slowly drifted away from your science and towards the refreshments table. "Would it be piggish to take a couple of cookies on the way out of the seminar, to eat later?" they'd be thinking. "Mmmmm, cookies...."
In a publication, the choice between a graph and a table is trickier. A graph is still easier to read and understand, but a table provides more detail. Most of your readers will probably be happy with a graph, but a few people who are deeply interested in your results may want more detail than you can show in a graph. If anyone is going to do a meta-analysis of your data, for example, they'll want means, sample sizes, and some measure of variation (standard error, standard deviation, or confidence limits). If you've done a bunch of statistical tests and someone wants to reanalyze your data using a correction for multiple comparisons, they'll need the exact $P$ values, not just stars on a graph indicating significance. Someone who is planning a similar experiment to yours who is doing power analysis will need some measure of variation, as well.
Editors generally won't let you show a graph with the exact same information that you're also presenting in a table. What you can do for many journals, however, is put graphs in the main body of the paper, then put tables as supplemental material. Because these supplemental tables are online-only, you can put as much detail in them as you want; you could even have the individual measurements, not just means, if you thought it might be useful to someone.
Making a good table
Whatever word processor you're using probably has the ability to make good tables. Here are some tips:
• Each column should have a heading. It should include the units, if applicable.
• Don't separate columns with vertical lines. In the olden days of lead type, it was difficult for printers to make good-looking vertical lines; it would be easy now, but most journals still prohibit them.
• When you have a column of numbers, make sure the decimal points are aligned vertically with each other.
• Use a reasonable number of digits. For nominal variables summarized as proportions, use two digits for $n$ less than $101$, three digits for $n$ from $101$ to $1000$, etc. This way, someone can use the proportion and the $n$ and calculate your original numbers. For example, if $n$ is $143$ and you give the proportion as $0.22$, it could be $31/143$ or $32/143$; reporting it as $0.217$ lets anyone who's interested calculate that it was $31/143$. For measurement variables, you should usually report the mean using one more digit than the individual measurement has; for example, if you've measured hip extension to the nearest degree, report the mean to the nearest tenth of a degree. The standard error or other measure of variation should have two or three digits. $P$ values are usually reported with two digits ($P=0.44,\; P=0.032,\; P=2.7\times 10^{-5}$, etc.).
• Don't use excessive numbers of horizontal lines. You'll want horizontal lines at the top and bottom of the table, and a line separating the heading from the main body, but that's probably about it. The exception is when you have multiple lines that should be grouped together. If the table of AAM/length ratios above had separate numbers for male and female mussels at each location, it might be acceptable to separate the locations with horizontal lines.
• Table formats sometimes don't translate well from one computer program to another; if you prepare a beautiful table using a Brand X word processor, then save it in Microsoft Word format or as a pdf to send to your collaborators or submit to a journal, it may not look so beautiful. So don't wait until the last minute; try out any format conversions you'll need, well before your deadline.
Learning Objectives
• This page gives an introduction to using the statistical software package SAS. Some of it is specific to the University of Delaware, but most of it should be useful for anyone using SAS.
Introduction
SAS, SPSS and Stata are some of the most popular software packages for doing serious statistics. I have a little experience with SAS, so I've prepared this web page to get you started on the basics. UCLA's Academic Technology Services department has prepared very useful guides to SAS, SPSS and Stata.
An increasingly popular tool for serious statistics is R, a free software package for Windows, Mac, Linux, and Unix. There are free online manuals, and many online and printed tutorials. I've never used R, so I can't help you with it.
SAS may seem intimidating and old-fashioned; accomplishing anything with it requires writing what is, in essence, a computer program, one where a misplaced semicolon can have disastrous results. But I think that if you take a deep breath and work your way patiently through the examples, you'll soon be able to do some pretty cool statistics.
The instructions here are for the University of Delaware, but most of it should apply anywhere that SAS is installed. There are four ways of using SAS:
• on a mainframe, in batch mode. This is what I'll describe below.
• on a mainframe, interactively in line mode. I don't recommend this, because it just seems to add complication and confusion.
• on a mainframe, interactively with the Display Manager System. From what I've seen, this isn't very easy. If you really want to try it, instructions are available online. Keep in mind that "interactive" doesn't mean "user friendly graphical interface like you're used to"; you still have to write the same SAS programs.
• on your Windows personal computer. I've never done this. Before you buy SAS for your computer, see if you can use it for free on your institution's mainframe computer.
To use SAS on a mainframe computer, you'll have to connect your personal computer to the mainframe; at the University of Delaware, you connect to a computer called Strauss. The operating system for mainframes like Strauss is Unix; in order to run SAS in batch mode, you'll have to learn a few Unix commands.
Getting connected to a mainframe from a Mac
On a Mac, find the program Terminal; it should be in the Utilities folder, inside your Applications folder. You'll probably want to drag it to your taskbar for easy access in the future. The first time you run Terminal, go to "Preferences" in the Terminal menu, choose "Settings", then choose "Advanced". Set "Declare terminal as:" to "vt100". Then check the box that says "Delete sends Ctrl-H". (Some versions of Terminal may have the preferences arranged somewhat differently, and you may need to look for a box to check that says "Delete key sends backspace.") Then quit and restart Terminal. You won't need to change these settings again.
When you start up Terminal, you'll get a prompt that looks like this:
Your-Names-Computer:~ yourname\$
After the dollar sign, type ssh userid@computer.url, where userid is your user id name and computer.url is the address of the mainframe. At Delaware the mainframe is Strauss, so if your userid is joeblow, you'd type ssh joeblow@strauss.udel.edu. Then hit Return. It will ask you for your password; type it and hit Return (it won't look like you've typed anything, but it will work). You'll then be connected to the mainframe, and you'll get a prompt like this:
strauss.udel.edu%
You're now ready to start typing Unix commands.
Getting connected to a mainframe from Windows
Unlike Macs, Windows computers don't come with a built-in terminal emulator, so you'll need to ask your site administrator which "terminal emulator" they recommend. PuTTY is one popular (and free) program, and good instructions for it are available online. Whichever terminal emulator you use, you'll need to enter the "host name" (the name of the mainframe computer you're trying to connect to; at Delaware, it's strauss.udel.edu), your user ID, and your password. You may need to specify that your "Protocol" is "SSH". When you type your password, it may look like nothing's happening, but just type it and hit Enter. If it works, you'll be connected to the mainframe and get a prompt like this:
strauss.udel.edu%
You're now ready to start typing Unix commands.
Getting connected to a mainframe from Linux
If you're running Linux, you're already enough of a geek that you don't need my help getting connected to your mainframe.
A little bit of Unix
The operating system for mainframes like Strauss is Unix, so you've got to learn a few Unix commands. Unix was apparently written by people for whom typing is physically painful, as most of the commands are a small number of cryptic letters. Case does matter; don't enter CD and think it means the same thing as cd. Here is all the Unix you need to know to run SAS. Commands are in bold and file and directory names, which you choose, are in italics.
ls Lists all of the file names in your current directory.
pico filename pico is a text editor; you'll use it for writing SAS programs. Enter pico yourfilename.sas to open an existing file named yourfilename.sas, or create it if it doesn't exist. To exit pico, enter the control and x keys. You have to use the arrow keys, not the mouse, to move around the text once you're in a file. For this reason, I prefer to create and edit SAS programs in a text editor on my computer (TextEdit on a Mac, NotePad on Windows), then copy and paste them into a file I've created with pico. I then use pico for minor tweaking of the program.
Don't copy and paste from a word processor like Word into pico, as word processor files contain invisible characters that will confuse pico.
Note that there are other popular text editors, such as vi and emacs, and one of the defining characters of a serious computer geek is a strong opinion about the superiority of their favorite text editor and total loserness of all other text editors. To avoid becoming one of them, try not to get emotional about pico.
Unix filenames should be made of letters and numbers, dashes (-), underscores (_), and periods. Don't use spaces or other punctuation (slashes, parentheses, exclamation marks, etc.), as they have special meanings in Unix and may confuse the computer. It is common to use an extension after a period, such as .sas to indicate a SAS program, but that is for your convenience in recognizing what kind of file it is; it isn't required by Unix.
cat filename Opens a file for viewing and printing, but not editing. It will automatically take you to the end of the file, so you'll have to scroll up. To print, you may want to copy what you want, then paste it into a word processor document for easier formatting. You should use cat instead of pico for viewing the output files (.log and .lst) that SAS creates.
mv oldname newname Changes the name of a file from oldname to newname. When you run SAS on the file practice.sas, the output will be in a file called practice.lst. Before you make changes to practice.sas and run it again, you may want to change the name of practice.lst to something else, so it won't be overwritten.
cp oldname newname Makes a copy of file oldname with the name newname.
rm filename Deletes a file.
logout Logs you out of the mainframe.
mkdir directoryname Creates a new directory. You don't need to do this, but if you end up creating a lot of files, you may find it helpful to keep them organized into different directories.
cd directoryname Changes from one directory to another. For example, if you have a directory named sasfiles in your home directory, enter cd sasfiles. To go from within a directory up to your home directory, just enter cd.
rmdir directoryname Deletes a directory, if it doesn't have any files in it. If you want to delete a directory and the files in it, first go into the directory, delete all the files in it using rm, then delete the directory using rmdir.
sas filename Runs SAS. Be sure to enter sas filename.sas. If you just enter sas and then hit return, you'll be in interactive SAS mode, which is scary; enter ;endsas; if that happens and you need to get out of it.
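Putting these commands together, a typical session for running a SAS program might look something like this (the file names here are the ones used in the example below; yours can be anything you like):
strauss.udel.edu% pico practice.sas
strauss.udel.edu% sas practice.sas
strauss.udel.edu% ls
practice.log practice.lst practice.sas
strauss.udel.edu% cat practice.log
strauss.udel.edu% cat practice.lst
You'd create or edit the program with pico, run SAS on it, list the files to confirm that the .log and .lst files were created, check the log for errors, and then look at the results.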
Writing a SAS program
To use SAS, you first use pico to create an empty file; you can call the first one practice.sas. Type in the SAS program that you've written (or copy it from a text file you created with TextEdit or Notepad), then save the file by hitting the control and \(x\) keys. Once you've exited pico, enter sas practice.sas; the word sas is the command that tells Unix to run the SAS program, and practice.sas is the file it is going to run SAS on. SAS then creates a file named practice.log, which reports any errors. If there are no fatal errors, SAS also creates a file named practice.lst, which contains the results of the analysis.
The SAS program (which you write using pico) consists of a series of commands. Each command is one or more words, followed by a semicolon. You can put comments into your program to remind you of what you're trying to do; these comments have a slash and asterisk on each side, like this:
/*This is a comment. It is not read by the SAS program.*/
The SAS program has two basic parts, the DATA step and the PROC step. (Note--I'll capitalize all SAS commands to make them stand out, but you don't have to when you write your programs; unlike Unix, SAS is not case-sensitive.) The DATA step reads in data, either from another file or from within the program.
In a DATA step, you first say "DATA dataset;" where dataset is an arbitrary name you give the dataset. Then you say "INPUT variable1 variable2...;" giving an arbitrary name to each of the variables that is on a line in your data. So if you have a data set consisting of the length and width of mussels from two different species, you could start the program by writing:
DATA mussels;
INPUT species \$ length width;
A variable name for a nominal variable (a name or character) has a space and a dollar sign (\(\$\)) after it. In our practice data set, "species" is a nominal variable. If you want to treat a number as a nominal variable, such as an ID number, remember to put a dollar sign after the name of the variable. Don't use spaces within variable names or the values of variables; use Medulis or M_edulis, not M. edulis (there are ways of handling variables containing spaces, but they're complicated).
If you are putting the data directly in the program, the next step is a line that says "DATALINES;", followed by the data. A semicolon on a line by itself tells SAS it's done reading the data. You can put each observation on a separate line, with the variables separated by one or more spaces:
DATA mussels; /* names the data set "mussels" */
INPUT species \$ length width; /* names the variables, defines "species" as a nominal variable */
DATALINES; /* tells SAS that the data starts on the next line */
edulis 49.0 11.0
trossulus 51.2 9.1
trossulus 45.9 9.4
edulis 56.2 13.2
edulis 52.7 10.7
edulis 48.4 10.4
trossulus 47.6 9.5
trossulus 46.2 8.9
trossulus 37.2 7.1
; /* the semicolon tells SAS to stop reading data */
You can also have more than one set of data on each line, if you put "@@" at the end of the INPUT statement:
DATA mussels;
INPUT species \$ length width @@;
DATALINES;
edulis 49.0 11.0 trossulus 51.2 9.1 trossulus 45.9 9.4 edulis 56.2 13.2
edulis 52.7 10.7 edulis 48.4 10.4 trossulus 47.6 9.5 trossulus 46.2 8.9
trossulus 37.2 7.1
;
If you have a large data set, it will be more convenient to keep it in a separate file from your program. To read in data from another file, use an INFILE datafile; statement, with the name of the data file in single quotes. If you do this, you don't use the DATALINES statement. Here I've created a separate file (in the same directory) called "shells.dat" that has a huge amount of data in it, and this is how I tell SAS to read it:
DATA mussels;
INFILE 'shells.dat';
INPUT species \$ length width;
When you have your data in a separate file, it's a good idea to have one or more lines at the start of the file that explain what the variables are. You should then use FIRSTOBS=linenumber as an option in the INFILE statement to tell SAS which line has the first row of data. Here I tell SAS to start reading data on line \(3\) of the shells.dat data file, because the first two lines have explanatory information:
DATA mussels;
INFILE 'shells.dat' FIRSTOBS=3;
INPUT species \$ length width;
The DATA statement can create new variables from mathematical operations on the original variables. Here I make two new variables, "loglength," which is just the base-\(10\) log of length, and "shellratio," the width divided by the length. SAS can do statistics on these variables just as it does on the original variables.
DATA mussels;
INPUT species \$ length width;
loglength=log10(length);
shellratio=width/length;
DATALINES;
The PROC step
Once you've entered in the data, it's time to analyze it using one or more PROC commands. The PROC statement tells SAS which procedure to run, and almost always has some options. For example, to calculate the mean and standard deviation of the lengths, widths, and log-transformed lengths, you would use PROC MEANS. It is followed by certain options. DATA=dataset tells it which data set to analyze. MEAN and STD are options that tell PROC MEANS to calculate the mean and standard deviation; there are several other options that could have been used with PROC MEANS. VAR variable1 variable2 ... tells PROC MEANS which variables to calculate the mean and standard deviation of. RUN tells SAS to run.
PROC MEANS DATA=mussels MEAN STD; /* tells PROC MEANS to calculate mean and standard deviation */
VAR length width loglength; /* tells PROC MEANS which variables to analyze */
RUN; /* makes PROC MEANS run */
Now that you've read through a basic introduction to SAS, put it all together and run a SAS program. Connect to your mainframe and use pico to create a file named "practice.sas". Copy and paste the following into the file:
DATA mussels;
INPUT species \$ length width;
loglength=log10(length);
shellratio=width/length;
DATALINES;
edulis 49.0 11.0
tross 51.2 9.1
tross 45.9 9.4
edulis 56.2 13.2
edulis 52.7 10.7
edulis 48.4 10.4
tross 47.6 9.5
tross 46.2 8.9
tross 37.2 7.1
;
PROC MEANS DATA=mussels MEAN STD;
VAR length width loglength;
RUN;
Then exit pico (hit control-x). At the dollar sign prompt, enter sas practice.sas. Then enter ls to list the file names; you should see new files named practice.log and practice.lst. First, enter cat practice.log to look at the log file. This will tell you whether there are any errors in your SAS program. Then enter cat practice.lst to look at the output from your program. You should see something like this:
The SAS System: The MEANS Procedure
Variable Mean Std Dev
-----------------------------------------
length 48.2666667 5.2978769
width 9.9222222 1.6909892
loglength 1.6811625 0.0501703
If you do, you've successfully run SAS. Yay!
PROC SORT and PROC PRINT
I describe specific statistical procedures on the web page for each test. Two that are of general use are PROC SORT and PROC PRINT. PROC SORT sorts the data by one or more variables. For some procedures, you need to sort the data first. PROC PRINT writes the data set, including any new variables you've created (like loglength and shellratio in our example) to the output file. You can use it to make sure that SAS has read the data correctly, and your transformations, sorting, etc. have worked properly. You can sort the data by more than one variable; this example sorts the mussel data, first by species, then by length.
PROC SORT DATA=mussels;
BY species length;
RUN;
PROC PRINT DATA=mussels;
RUN;
Adding PROC SORT and PROC PRINT to the SAS file produces the following output:
The SAS System
Obs species length width loglength shellratio
1 edulis 48.4 10.4 1.68485 0.21488
2 edulis 49.0 11.0 1.69020 0.22449
3 edulis 52.7 10.7 1.72181 0.20304
4 edulis 56.2 13.2 1.74974 0.23488
5 trossulus 37.2 7.1 1.57054 0.19086
6 trossulus 45.9 9.4 1.66181 0.20479
7 trossulus 46.2 8.9 1.66464 0.19264
8 trossulus 47.6 9.5 1.67761 0.19958
9 trossulus 51.2 9.1 1.70927 0.17773
As you can see, the data were sorted first by species, then within each species, they were sorted by length.
Graphs in SAS
It's possible to draw graphs with SAS, but I don't find it to be very easy. I recommend you take whatever numbers you need from SAS, put them into a spreadsheet or specialized graphing program, and use that to draw your graphs.
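That said, if you do want a quick, rough plot without leaving SAS, one option in newer versions of SAS (9.2 and later) is PROC SGPLOT, which produces simple graphs with very little code. Here is a minimal sketch using the mussel data set from above; for polished figures, the advice above still applies.
PROC SGPLOT DATA=mussels;
HISTOGRAM length; /* histogram of shell lengths */
RUN;
PROC SGPLOT DATA=mussels;
SCATTER X=length Y=width; /* scatterplot of width vs. length */
RUN;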
Getting data from a spreadsheet into SAS
I find it easiest to enter my data into a spreadsheet first, even if I'm going to analyze it using SAS. But if you try to copy data directly from a spreadsheet into a SAS file, the numbers will be separated by tabs, which SAS will choke on; your log file will say "NOTE: Invalid data in line...". To get SAS to recognize data separated by tabs, use the DELIMITER option in an INFILE statement. For inline data, add an INFILE DATALINES DELIMITER='09'x; statement before the INPUT statement (SAS calls tabs '09'x):
DATA mussels;
INFILE DATALINES DELIMITER='09'x;
INPUT species $ length width;
DATALINES;
edulis 49.0 11.0
tross 51.2 9.1
tross 45.9 9.4
edulis 56.2 13.2
edulis 52.7 10.7
edulis 48.4 10.4
tross 47.6 9.5
tross 46.2 8.9
tross 37.2 7.1
;
If your data are in a separate file, you include DELIMITER='09'x in the INFILE statement like this:
DATA mussels;
INFILE 'shells.dat' DELIMITER='09'x;
INPUT species $ length width;
More information about SAS
The user manuals for SAS are available online for free. They're essential for advanced users, but they're not very helpful for beginners.
The UCLA Academic Technology Services has put together an excellent set of examples of how to do the most common statistical tests in SAS, SPSS or Stata; it's a good place to start if you're looking for more information about a particular test.
Learning Objectives
• This table is designed to help you decide which statistical test or descriptive statistic is appropriate for your experiment. In order to use it, you must be able to identify all the variables in the data set and tell what kind of variables they are.
test nominal variables measurement variables ranked variables purpose notes example
Exact test for goodness-of-fit \(1\) test fit of observed frequencies to expected frequencies use for small sample sizes (less than \(1000)\) count the number of red, pink and white flowers in a genetic cross, test fit to expected \(1:2:1 \) ratio, total sample \(<1000\)
Chi-square test of goodness-of-fit \(1\) test fit of observed frequencies to expected frequencies use for large sample sizes (greater than \(1000\)) count the number of red, pink and white flowers in a genetic cross, test fit to expected \(1:2:1\) ratio, total sample \(>1000\)
G–test of goodness-of-fit \(1\) test fit of observed frequencies to expected frequencies used for large sample sizes (greater than \(1000\)) count the number of red, pink and white flowers in a genetic cross, test fit to expected \(1:2:1\) ratio, total sample \(>1000\)
Repeated G–tests of goodness-of-fit \(2\) test fit of observed frequencies to expected frequencies in multiple experiments - count the number of red, pink and white flowers in a genetic cross, test fit to expected \(1:2:1\) ratio, do multiple crosses
test nominal variables measurement variables ranked variables purpose notes example
Fisher's exact test \(2\) test hypothesis that proportions are the same in different groups use for small sample sizes (less than \(1000\)) count the number of live and dead patients after treatment with drug or placebo, test the hypothesis that the proportion of live and dead is the same in the two treatments, total sample \(<1000\)
Chi-square test of independence \(2\) test hypothesis that proportions are the same in different groups use for large sample sizes (greater than \(1000\)) count the number of live and dead patients after treatment with drug or placebo, test the hypothesis that the proportion of live and dead is the same in the two treatments, total sample \(>1000\)
G–test of independence \(2\) test hypothesis that proportions are the same in different groups large sample sizes (greater than \(1000\)) count the number of live and dead patients after treatment with drug or placebo, test the hypothesis that the proportion of live and dead is the same in the two treatments, total sample \(>1000\)
Cochran-Mantel-Haenszel test \(3\) test hypothesis that proportions are the same in repeated pairings of two groups alternate hypothesis is a consistent direction of difference count the number of live and dead patients after treatment with drug or placebo, test the hypothesis that the proportion of live and dead is the same in the two treatments, repeat this experiment at different hospitals
test nominal variables measurement variables ranked variables purpose notes example
Arithmetic mean \(1\) description of central tendency of data - -
Median \(1\) description of central tendency of data more useful than mean for very skewed data median height of trees in forest, if most trees are short seedlings and the mean would be skewed by a few very tall trees
Range \(1\) description of dispersion of data used more in everyday life than in scientific statistics -
Variance \(1\) description of dispersion of data forms the basis of many statistical tests; in squared units, so not very understandable -
Standard deviation \(1\) description of dispersion of data in same units as original data, so more understandable than variance -
Standard error of the mean \(1\) description of accuracy of an estimate of a mean - -
Confidence interval \(1\) description of accuracy of an estimate of a mean - -
test nominal variables measurement variables ranked variables purpose notes example
One-sample t–test \(1\) test the hypothesis that the mean value of the measurement variable equals a theoretical expectation - blindfold people, ask them to hold arm at \(45^{\circ}\) angle, see if mean angle is equal to \(45^{\circ}\)
Two-sample t–test \(1\) \(1\) test the hypothesis that the mean values of the measurement variable are the same in two groups just another name for one-way anova when there are only two groups compare mean heavy metal content in mussels from Nova Scotia and New Jersey
One-way anova \(1\) \(1\) test the hypothesis that the mean values of the measurement variable are the same in different groups - compare mean heavy metal content in mussels from Nova Scotia, Maine, Massachusetts, Connecticut, New York and New Jersey
Tukey-Kramer test \(1\) \(1\) after a significant one-way anova, test for significant differences between all pairs of groups - compare mean heavy metal content in mussels from Nova Scotia vs. Maine, Nova Scotia vs. Massachusetts, Maine vs. Massachusetts, etc.
Bartlett's test \(1\) \(1\) test the hypothesis that the standard deviation of a measurement variable is the same in different groups usually used to see whether data fit one of the assumptions of an anova compare standard deviation of heavy metal content in mussels from Nova Scotia, Maine, Massachusetts, Connecticut, New York and New Jersey
test nominal variables measurement variables ranked variables purpose notes example
Nested anova \(2+\) \(1\) test hypothesis that the mean values of the measurement variable are the same in different groups, when each group is divided into subgroups subgroups must be arbitrary (model II) compare mean heavy metal content in mussels from Nova Scotia, Maine, Massachusetts, Connecticut, New York and New Jersey; several mussels from each location, with several metal measurements from each mussel
Two-way anova \(2\) \(1\) test the hypothesis that different groups, classified two ways, have the same means of the measurement variable - compare cholesterol levels in blood of male vegetarians, female vegetarians, male carnivores, and female carnivores
Paired t–test \(2\) \(1\) test the hypothesis that the means of the continuous variable are the same in paired data just another name for two-way anova when one nominal variable represents pairs of observations compare the cholesterol level in blood of people before vs. after switching to a vegetarian diet
Wilcoxon signed-rank test \(2\) \(1\) test the hypothesis that the means of the measurement variable are the same in paired data used when the differences of pairs are severely non-normal compare the cholesterol level in blood of people before vs. after switching to a vegetarian diet, when differences are non-normal
test nominal variables measurement variables ranked variables purpose notes example
Linear regression \(2\) see whether variation in an independent variable causes some of the variation in a dependent variable; estimate the value of one unmeasured variable corresponding to a measured variable - measure chirping speed in crickets at different temperatures, test whether variation in temperature causes variation in chirping speed; or use the estimated relationship to estimate temperature from chirping speed when no thermometer is available
Correlation \(2\) see whether two variables covary - measure salt intake and fat intake in different people's diets, to see if people who eat a lot of fat also eat a lot of salt
Polynomial regression \(2\) test the hypothesis that an equation with \(X^2\), \(X^3\), etc. fits the \(Y\) variable significantly better than a linear regression - -
Analysis of covariance (ancova) \(1\) \(2\) test the hypothesis that different groups have the same regression lines first test the homogeneity of slopes; if they are not significantly different, test the homogeneity of the \(Y\)-intercepts measure chirping speed vs. temperature in four species of crickets, see if there is significant variation among the species in the slope or \(Y\)-intercept of the relationships
test nominal variables measurement variables ranked variables purpose notes example
Multiple regression \(3+\) fit an equation relating several \(X\) variables to a single \(Y\) variable - measure air temperature, humidity, body mass, leg length, see how they relate to chirping speed in crickets
Simple logistic regression \(1\) \(1\) fit an equation relating an independent measurement variable to the probability of a value of a dependent nominal variable - give different doses of a drug (the measurement variable), record who lives or dies in the next year (the nominal variable)
Multiple logistic regression \(1\) \(2+\) fit an equation relating more than one independent measurement variable to the probability of a value of a dependent nominal variable - record height, weight, blood pressure, age of multiple people, see who lives or dies in the next year
test nominal variables measurement variables ranked variables purpose notes example
Sign test \(2\) \(1\) test randomness of direction of difference in paired data - compare the cholesterol level in blood of people before vs. after switching to a vegetarian diet, only record whether it is higher or lower after the switch
Kruskal–Wallis test \(1\) \(1\) test the hypothesis that rankings are the same in different groups often used as a non-parametric alternative to one-way anova \(40\) ears of corn (\(8\) from each of \(5\) varieties) are ranked for tastiness, and the mean rank is compared among varieties
Spearman rank correlation \(2\) see whether the ranks of two variables covary often used as a non-parametric alternative to regression or correlation \(40\) ears of corn are ranked for tastiness and prettiness, see whether prettier corn is also tastier
In this introductory section for PHC 6050 and PHC 6052, we will:
• Define statistics and biostatistics
• List the five steps in a typical research project and discuss the roles biostatistics can play in each
• Introduce the Big Picture of Statistics, which is the foundation of our course, and define and discuss its four components
• Introduce fundamental definitions related to data, datasets, and variables.
• Explain the different types/classifications of variables and introduce why this is important in biostatistics
Here are links to a few other online materials similar to 6050/6052, which you may find useful as secondary references.
Preliminaries
Our first course objective will be addressed throughout the semester, as you will be adding to your understanding of biostatistics in an ongoing manner during the course.
CO-1: Describe the roles biostatistics serves in the discipline of public health.
What is Biostatistics?
Learning Objectives
LO 1.1: Define statistics and biostatistics.
Biostatistics is the application of statistics to a variety of topics in biology. In this course, we tend to focus on biological topics in the health sciences as we learn about statistics.
In an introductory course such as ours, there is essentially no difference between “biostatistics” and “statistics” and thus you will notice that we focus on learning “statistics” in general but use as many examples from and applications to the health sciences as possible.
Note
Statistics is all about converting data into useful information. Statistics is therefore a process where we are:
• collecting data,
• summarizing data, and
• interpreting data.
The following video adapted from material available from Johns Hopkins – Introduction to Biostatistics provides a few examples of statistics in use.
Video
Statistics Examples (3:14)
The following reading from the online version of Little Handbook of Statistical Practice contains excellent comments about common reasons why many people feel that “statistics is hard” and how to overcome them! We will suggest returning to and reviewing this document as we cover some of the topics mentioned in the reading.
Reading
Is Statistics Hard? (≈ 1500 words)
Steps in a Research Project
Learning Objectives
LO 1.2: Identify the steps in a research project.
In practice, every research project or study involves the following steps.
1. Planning/design of study
2. Data collection
3. Data analysis
4. Presentation
5. Interpretation
The following video adapted from material available at Johns Hopkins – Introduction to Biostatistics provides an overview of the steps in a research project and the role biostatistics and biostatisticians play in each step.
Video
(Optional) Outside Reading: Role of Biostatistics in Modern Medicine (≈ 1000 words)
The Big Picture
CO-1: Describe the roles biostatistics serves in the discipline of public health.
Throughout the course, we will add to our understanding of the definitions, concepts, and processes which are introduced here. You are not expected to gain a full understanding of this process until much later in the course!
To really understand how this process works, we need to put it in a context. We will do that by introducing one of the central ideas of this course, the Big Picture of Statistics.
We will introduce the Big Picture by building it gradually and explaining each component.
At the end of the introductory explanation, once you have the full Big Picture in front of you, we will show it again using a concrete example.
Learning Objectives
LO 1.3: Identify and differentiate between the components of the Big Picture of Statistics
Video
The process of statistics starts when we identify what group we want to study or learn something about. We call this group the population.
Note that the word “population” here (and in the entire course) is not just used to refer to people; it is used in the more broad statistical sense, where population can refer not only to people, but also to animals, things etc. For example, we might be interested in:
• the opinions of the population of U.S. adults about the death penalty; or
• how the population of mice react to a certain chemical; or
• the average price of the population of all one-bedroom apartments in a certain city.
Note
The population, then, is the entire group that is the target of our interest.
In most cases, the population is so large that as much as we might want to, there is absolutely no way that we can study all of it (imagine trying to get the opinions of all U.S. adults about the death penalty…).
A more practical approach would be to examine and collect data only from a sub-group of the population, which we call a sample. We call this first component, which involves choosing a sample and collecting data from it, Producing Data.
Note
A sample is a subset of the population from which we collect data.
It should be noted that since, for practical reasons, we need to compromise and examine only a sub-group of the population rather than the whole population, we should make an effort to choose a sample in such a way that it will represent the population well.
For example, if we choose a sample from the population of U.S. adults, and ask their opinions about a particular federal health care program, we do not want our sample to consist of only Republicans or only Democrats.
Once the data have been collected, what we have is a long list of answers to questions, or numbers, and in order to explore and make sense of the data, we need to summarize that list in a meaningful way.
This second component, which consists of summarizing the collected data, is called Exploratory Data Analysis or Descriptive Statistics.
Now we’ve obtained the sample results and summarized them, but we are not done. Remember that our goal is to study the population, so what we want is to be able to draw conclusions about the population based on the sample results.
Before we can do so, we need to look at how the sample we’re using may differ from the population as a whole, so that we can factor that into our analysis. To examine this difference, we use Probability which is the third component in the big picture.
The third component in the Big Picture of Statistics, probability is in essence the “machinery” that allows us to draw conclusions about the population based on the data collected in the sample.
Finally, we can use what we’ve discovered about our sample to draw conclusions about our population.
We call this final component in the process Inference.
This is the Big Picture of Statistics.
EXAMPLE: Polling Public Opinion
At the end of April 2005, a poll was conducted (by ABC News and the Washington Post), for the purpose of learning the opinions of U.S. adults about the death penalty.
1. Producing Data: A (representative) sample of 1,082 U.S. adults was chosen, and each adult was asked whether he or she favored or opposed the death penalty.
2. Exploratory Data Analysis (EDA): The collected data were summarized, and it was found that 65% of the sampled adults favor the death penalty for persons convicted of murder.
3 and 4. Probability and Inference: Based on the sample result (of 65% favoring the death penalty) and our knowledge of probability, it was concluded (with 95% confidence) that the percentage of those who favor the death penalty in the population is within 3% of what was obtained in the sample (i.e., between 62% and 68%). The following figure summarizes the example:
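As a preview of where the 3% margin of error comes from (using a formula we will develop in the inference unit): for a sample proportion of 0.65 based on a sample of 1,082 adults, the 95% margin of error is approximately \(1.96\sqrt{\frac{0.65(1-0.65)}{1082}} \approx 0.028\), or about 3 percentage points. You are not expected to follow this calculation yet.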
Course Structure
The structure of this entire course is based on the big picture.
The course will have 4 units; one for each of the components in the big picture.
As the figure below shows, even though it is second in the process of statistics, we will start this course with exploratory data analysis (EDA), continue to discuss producing data, then go on to probability, so that at the end we will be able to discuss inference.
The main reasons we begin with EDA is that we need to understand enough about what we want to do with our data before we can discuss the issues related to how to collect it!!
This also allows us to introduce many important concepts early in the course so that you will have ample time to master them before we return to inference at the end of the course.
The following figure summarizes the structure of the course.
As you will see, the Big Picture is the basis upon which the entire course is built, both conceptually and structurally.
We will refer to it often, and having it in mind will help you as you go through the course.
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
CO-7: Use statistical software to analyze public health data.
Classifying Types of Variables
Learning Objectives
LO 4.1: Determine the type (categorical or quantitative) of a given variable.
Learning Objectives
LO 4.2: Classify a given variable as nominal, ordinal, discrete, or continuous.
Video
Types of Variables (3 Parts; 13:25 total time)
Variables can be broadly classified into one of two types:
• Quantitative
• Categorical
Below we define these two main types of variables and provide further sub-classifications for each type.
Note
Categorical variables take category or label values, and place an individual into one of several groups.
Categorical variables are often further classified as either:
• Nominal, when there is no natural ordering among the categories.
Common examples would be gender, eye color, or ethnicity.
• Ordinal, when there is a natural order among the categories, such as, ranking scales or letter grades.
However, ordinal variables are still categorical and do not provide precise measurements.
Differences are not precisely meaningful, for example, if one student scores an A and another a B on an assignment, we cannot say precisely the difference in their scores, only that an A is larger than a B.
Note
Quantitative variables take numerical values, and represent some kind of measurement.
Quantitative variables are often further classified as either:
• Discrete, when the variable takes on a countable number of values.
Most often these variables indeed represent some kind of count such as the number of prescriptions an individual takes daily.
• Continuous, when the variable can take on any value in some range of values.
Our precision in measuring these variables is often limited by our instruments.
Units should be provided.
Common examples would be height (inches), weight (pounds), or time to recovery (days).
One special variable type occurs when a variable has only two possible values.
Note
A variable is said to be Binary or Dichotomous, when there are only two possible levels.
These variables can usually be phrased in a “yes/no” question. Whether or not someone is a smoker is an example of a binary variable.
Currently we are primarily concerned with classifying variables as either categorical or quantitative.
Sometimes, however, we will need to consider further and sub-classify these variables as defined above.
These concepts will be discussed and reviewed as needed but here is a quick practice on sub-classifying categorical and quantitative variables.
Did I Get This?
Types of Variables
EXAMPLE: Medical Records
Let’s revisit the dataset showing medical records for a sample of patients
In our example of medical records, there are several variables of each type:
• Age, Weight, and Height are quantitative variables.
• Race, Gender, and Smoking are categorical variables.
Comments:
• Notice that the values of the categorical variable Smoking have been coded as the numbers 0 or 1.
It is quite common to code the values of a categorical variable as numbers, but you should remember that these are just codes.
They have no arithmetic meaning (i.e., it does not make sense to add, subtract, multiply, divide, or compare the magnitude of such values).
Usually, if such a coding is used, all categorical variables will be coded and we will tend to do this type of coding for datasets in this course.
• Sometimes, quantitative variables are divided into groups for analysis; in such a situation, although the original variable was quantitative, the variable analyzed is categorical.
A common example is to provide information about an individual’s Body Mass Index by stating whether the individual is underweight, normal, overweight, or obese.
This categorized BMI is an example of an ordinal categorical variable.
• Categorical variables are sometimes called qualitative variables, but in this course we’ll use the term “categorical.”
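Returning to the first comment above: if you are using SAS, one way to keep 0/1 codes in the data while still displaying meaningful labels is PROC FORMAT. The sketch below is only an illustration (the data set and variable names are made up), but it shows the general idea:
PROC FORMAT; /* define a label for each code */
VALUE smokefmt 0='non-smoker' 1='smoker';
RUN;
DATA patients;
INPUT gender $ age weight height smoking;
FORMAT smoking smokefmt.; /* attach the labels to the coded variable */
DATALINES;
F 45 130 64 0
M 52 180 70 1
;
PROC FREQ DATA=patients;
TABLES smoking; /* the output shows the labels rather than the raw codes */
RUN;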
Software Activity
Learning Objectives
LO 7.1: View a dataset in EXCEL, text editor, or other spreadsheet or statistical software.
Learning Objectives
LO 4.1: Determine the type (categorical or quantitative) of a given variable.
Learn By Doing:
Exploring a Dataset using Software
Why Does the Type of Variable Matter?
Note
The types of variables you are analyzing directly relate to the available descriptive and inferential statistical methods.
It is important to:
• assess how you will measure the effect of interest and
• know how this determines the statistical methods you can use.
As we proceed in this course, we will continually emphasize the types of variables that are appropriate for each method we discuss.
For example:
EXAMPLE:
To compare the number of polio cases in the two treatment arms of the Salk Polio vaccine trial, you could use
• Fisher’s Exact Test
• Chi-Square Test
To compare blood pressures in a clinical trial evaluating two blood pressure-lowering medications, you could use
• Two-sample t-Test
• Wilcoxon Rank-Sum Test
(Optional) Great Resource: UCLA Institute for Digital Research and Education – What statistical analysis should I use?
CO-1: Describe the roles biostatistics serves in the discipline of public health.
Before we jump into Exploratory Data Analysis, and really appreciate its importance in the process of statistical analysis, let’s take a step back for a minute and ask:
What do we really mean by data?
Learning Objectives
LO 1.4: Define basic terms regarding data and recognize common variations in terminology.
Video
What is Data? (2:49)
Data are pieces of information about individuals organized into variables.
• By an individual, we mean a particular person or object.
• By a variable, we mean a particular characteristic of the individual.
A dataset is a set of data identified with a particular experiment, scenario, or circumstance.
Datasets are typically displayed in tables, in which rows represent individuals and columns represent variables.
EXAMPLE: Medical Records
The following dataset shows medical records for a sample of patients.
In this example,
• the individuals are patients,
• and the variables are Gender, Age, Weight, Height, Smoking, and Race.
Each row, then, gives us all of the information about a particular individual (in this case, patient), and each column gives us information about a particular characteristic of all of the patients.
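If the data have been read into a SAS data set, two quick ways to see this individuals-by-variables structure are PROC CONTENTS, which lists the variables (columns), and PROC PRINT, which lists the individuals (rows). A minimal sketch, assuming the medical records have been read into a data set called patients:
PROC CONTENTS DATA=patients; /* lists the variables and their types */
RUN;
PROC PRINT DATA=patients (OBS=10); /* prints the first 10 individuals */
RUN;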
Individuals, Observations, or Cases
Note
The rows in a dataset (representing individuals) might also be called observations or cases, or be referred to by a description specific to the individuals and the scenario.
For example, if we were interested in studying flu vaccinations in school children across the U.S., we could collect data where each observation was a
• student
• school
• school district
• city
• county
• state
Each of these would result in a different way to investigate questions about flu vaccinations in school children.
Independent Observations
Note
In our course, we will present methods that can be used when the observations being analyzed are independent of each other. If the observations (rows in our dataset) are not independent, a more complex analysis is needed. Clear violations of independent observations occur when
• we have more than one row for a given individual such as if we gather the same measurements at many different times for individuals in our study
• individuals are paired or matched in some way.
As we begin this course, you should start with an awareness of the types of data we will be working with and learn to recognize situations which are more complex than those covered in this course.
Variables
Note
The columns in a dataset (representing variables) are often grouped and labeled by their role in our analysis.
For example, in many studies involving people, we often collect demographic variables such as gender, age, race, ethnicity, socioeconomic status, marital status, and many more.
Note
The role a variable plays in our analysis must also be considered.
• In studies where we wish to predict one variable using one or more of the remaining variables, the variable we wish to predict is commonly called the response variable, the outcome variable, or the dependent variable.
• Any variable we are using to predict or explain differences in the outcome is commonly called an explanatory variable, an independent variable, a predictor variable, or a covariate.
Various Uses of the Term INDEPENDENT in Statistics
Note: The word “independent” is used in statistics in numerous ways. Be careful to understand in what way the words “independent” or “independence” (as well as dependent or dependence) are used when you see them used in the materials.
• Here we have discussed independent observations (also called cases, individuals, or subjects).
• We have also used the term independent variable as another term for our explanatory variables.
• Later we will learn the formal probability definitions of independent events and dependent events.
• And when comparing groups we will define independent samples and dependent samples.
CO-1: Describe the roles biostatistics serves in the discipline of public health.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Video
Exploratory Data Analysis Introduction (2 videos, 7:04 total)
The Big Picture
Learning Objectives
LO 1.3: Identify and differentiate between the components of the Big Picture of Statistics
Recall “The Big Picture,” the four-step process that encompasses statistics (as it is presented in this course):
1. Producing Data — Choosing a sample from the population of interest and collecting data.
2. Exploratory Data Analysis (EDA) {Descriptive Statistics} — Summarizing the data we’ve collected.
3. and 4. Probability and Inference — Drawing conclusions about the entire population based on the data collected from the sample.
Even though in practice it is the second step in the process, we are going to look at Exploratory Data Analysis (EDA) first. (If you have forgotten why, review the course structure information at the end of the page on The Big Picture and in the video covering The Big Picture.)
Exploratory Data Analysis
Learning Objectives
LO 1.5: Explain the uses and important features of exploratory data analysis.
As you can tell from the examples of datasets we have seen, raw data are not very informative. Exploratory Data Analysis (EDA) is how we make sense of the data by converting them from their raw form to a more informative one.
Note
In particular, EDA consists of:
• organizing and summarizing the raw data,
• discovering important features and patterns in the data and any striking deviations from those patterns, and then
• interpreting our findings in the context of the problem
And can be useful for:
• describing the distribution of a single variable (center, spread, shape, outliers)
• checking data (for errors or other problems)
• checking assumptions to more complex statistical analyses
• investigating relationships between variables
Exploratory data analysis (EDA) methods are often called Descriptive Statistics due to the fact that they simply describe, or provide estimates based on, the data at hand.
In Unit 4 we will cover methods of Inferential Statistics which use the results of a sample to make inferences about the population under study.
Comparisons can be visualized and values of interest estimated using EDA but descriptive statistics alone will provide no information about the certainty of our conclusions.
Important Features of Exploratory Data Analysis
There are two important features to the structure of the EDA unit in this course:
Note
• The material in this unit covers two broad topics:
Examining Distributions — exploring data one variable at a time.
Examining Relationships — exploring data two variables at a time.
Note
• In Exploratory Data Analysis, our exploration of data will always consist of the following two elements:
visual displays, supplemented by
numerical measures.
Try to remember these structural themes, as they will help you orient yourself along the path of this unit.
Examining Distributions
Learning Objectives
LO 6.1: Explain the meaning of the term distribution in statistics.
We will begin the EDA part of the course by exploring (or looking at) one variable at a time.
As we have seen, the data for each variable consist of a long list of values (whether numerical or not), and are not very informative in that form.
In order to convert these raw data into useful information, we need to summarize and then examine the distribution of the variable.
Note
By distribution of a variable, we mean:
• what values the variable takes, and
• how often the variable takes those values.
We will first learn how to summarize and examine the distribution of a single categorical variable, and then do the same for a single quantitative variable.
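As a software preview, the distribution of a single categorical variable is usually summarized with a frequency table, which in SAS comes from PROC FREQ. A minimal sketch, assuming a data set called students with a categorical variable body_image:
PROC FREQ DATA=students;
TABLES body_image; /* frequency and percent for each category */
RUN;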
Unit 1: Exploratory Data Analysis
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.20: Classify a data analysis situation involving two variables according to the “role-type classification.”
Learning Objectives
LO 4.21: For a data analysis situation involving two variables, determine the appropriate graphical display(s) and/or numerical measures(s) that should be used to summarize the data.
Video
Video: Case C-C (10:34)
Related SAS Tutorials
Related SPSS Tutorials
Two Categorical Variables
Recall the role-type classification table for framing our discussion about the relationship between two variables:
We are done with case C→Q, and will now move on to case C→C, where we examine the relationship between two categorical variables.
Earlier in the course (when we discussed the distribution of a single categorical variable), we examined the data obtained when a random sample of 1,200 U.S. college students were asked about their body image (underweight, overweight, or about right). We are now returning to this example to address the following question:
If we had separated our sample of 1,200 U.S. college students by gender and looked at males and females separately, would we have found a similar distribution across body-image categories? More specifically, are men and women just as likely to think their weight is about right? Among those students who do not think their weight is about right, is there a difference between the genders in feelings about body image?
Answering these questions requires us to examine the relationship between two categorical variables, gender and body image. Because the question of interest is whether there is a gender effect on body image,
• the explanatory variable is gender, and
• the response variable is body image.
Here is what the raw data look like when we include the gender of each student:
Once again the raw data is a long list of 1,200 genders and responses, and thus not very useful in that form.
Contingency Tables
Learning Objectives
LO 4.22: Define and explain the process of creating a contingency table (two-way table).
To start our exploration of how body image is related to gender, we need an informative display that summarizes the data. In order to summarize the relationship between two categorical variables, we create a display called a two-way table or contingency table.
Here is the two-way table for our example:
The table has the possible genders in the rows, and the possible responses regarding body image in the columns. At each intersection between row and column, we put the counts for how many times that combination of gender and body image occurred in the data. We sum across the rows to fill in the Total column, and we sum across the columns to fill in the Total row.
Complete the following activities related to this data.
Learn By Doing: Case C-C
Comments:
Note that from the way the two-way table is constructed, the Total row or column is a summary of one of the two categorical variables, ignoring the other. In our example:
• The Total row gives the summary of the categorical variable body image:
• The Total column gives the summary of the categorical variable gender: (These are the same counts we found earlier in the course when we looked at the single categorical variable body image, and did not consider gender.)
Finding Conditional (Row and Column) Percents
Learning Objectives
LO 4.23: Given a contingency table (two-way table), interpret the information it reveals about the association between two categorical variables by calculating and comparing conditional percentages.
So far we have organized the raw data in a much more informative display — the two-way table:
Remember, though, that our primary goal is to explore how body image is related to gender. Exploring the relationship between two categorical variables (in this case body image and gender) amounts to comparing the distributions of the response variable (in this case body image) across the different values of the explanatory variable (in this case males and females):
Note that it doesn’t make sense to compare raw counts, because there are more females than males overall. So for example, it is not very informative to say “there are 560 females who responded ‘about right’ compared to only 295 males,” since the 560 females are out of a total of 760, and the 295 males are out of a total of only 440.
We need to supplement our display, the two-way table, with some numerical measures that will allow us to compare the distributions. These numerical measures are found by simply converting the counts to percents within (or restricted to) each value of the explanatory variable separately.
In our example: We look at each gender separately, and convert the counts to percents within that gender. Let’s start with females:
Note that each count is converted to percents by dividing by the total number of females, 760. These numerical measures are called conditional percents, since we find them by “conditioning” on one of the genders.
Now complete the following activities to calculate the row percentages for males.
Learn By Doing: Calculating Row Percentages
Comments:
• In our example, we chose to organize the data with the explanatory variable gender in rows and the response variable body image in columns, and thus our conditional percents were row percents, calculated within each row separately. Similarly, if the explanatory variable happens to sit in columns and the response variable in rows, our conditional percents will be column percents, calculated within each column separately. For an example, see the “Did I Get This?” exercises below.
• Another way to visualize the conditional percents, instead of a table, is the double bar chart. This display is quite common in newspapers.
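In SAS, the two-way table and the conditional (row) percentages can be produced together with PROC FREQ. A minimal sketch, assuming a data set called students with variables gender and body_image; the NOCOL and NOPERCENT options suppress the column and overall cell percentages, leaving the counts and row percentages:
PROC FREQ DATA=students;
TABLES gender*body_image / NOCOL NOPERCENT; /* rows = explanatory variable */
RUN;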
Now that we have summarized the relationship between the categorical variables gender and body image, let’s go back and interpret the results in the context of the questions that we posed.
Learn By Doing: Interpretation in Case C-C
Learn By Doing: Case C-C (Software)
For additional practice complete the following activities.
Did I Get This?: Case C-C
Let’s Summarize
• The relationship between two categorical variables is summarized using:
• Data display: two-way table, supplemented by
• Numerical measures: conditional percentages.
• Conditional percentages are calculated for each value of the explanatory variable separately. They can be row percentages, if the explanatory variable “sits” in the rows, or column percentages, if the explanatory variable “sits” in the columns.
• When we try to understand the relationship between two categorical variables, we compare the distributions of the response variable for values of the explanatory variable. In particular, we look at how the pattern of conditional percentages differs between the values of the explanatory variable.
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.20: Classify a data analysis situation involving two variables according to the “role-type classification.”
Learning Objectives
LO 4.21: For a data analysis situation involving two variables, determine the appropriate graphical display(s) and/or numerical measures(s) that should be used to summarize the data.
Video
Video: Case C-Q (6:34)
Related SAS Tutorials
Related SPSS Tutorials
Categorical Explanatory and Quantitative Response
Learning Objectives
LO 4.18: Compare and contrast distributions (of quantitative data) from two or more groups, and produce a brief summary, interpreting your findings in context.
Recall the role-type classification table for framing our discussion about the relationship between two variables:
We are now ready to start with Case C→Q, exploring the relationship between two variables where the explanatory variable is categorical, and the response variable is quantitative. As you’ll discover, exploring relationships of this type is something we’ve already discussed in this course, but we didn’t frame the discussion this way.
EXAMPLE: Hot Dogs
Background: People who are concerned about their health may prefer hot dogs that are low in calories. A study was conducted by a concerned health group in which 54 major hot dog brands were examined, and their calorie contents recorded. In addition, each brand was classified by type: beef, poultry, and meat (mostly pork and beef, but up to 15% poultry meat). The purpose of the study was to examine whether the number of calories a hot dog has is related to (or affected by) its type. (Reference: Moore, David S., and George P. McCabe (1989). Introduction to the Practice of Statistics. Original source: Consumer Reports, June 1986, pp. 366-367.)
Answering this question requires us to examine the relationship between the categorical variable, Type and the quantitative variable Calories. Because the question of interest is whether the type of hot dog affects calorie content,
• the explanatory variable is Type, and
• the response variable is Calories.
Here is what the raw data look like:
The raw data are a list of types and calorie contents, and are not very useful in that form. To explore how the number of calories is related to the type of hot dog, we need an informative visual display of the data that will compare the three types of hot dogs with respect to their calorie content.
The visual display that we’ll use is side-by-side boxplots (which we’ve seen before). The side-by-side boxplots will allow us to compare the distribution of calorie counts within each category of the explanatory variable, hot dog type:
As before, we supplement the side-by-side boxplots with the descriptive statistics of the calorie content (response) for each type of hot dog separately (i.e., for each level of the explanatory variable separately):
Let’s summarize the results we obtained and interpret them in the context of the question we posed:
Statistic Beef Meat Poultry
min 111 107 86
Q1 139.5 138.5 100.5
Median 152.5 153 113
Q3 179.75 180.5 142.5
Max 190 195 152
By examining the three side-by-side boxplots and the numerical measures, we see at once that poultry hot dogs, as a group, contain fewer calories than those made of beef or meat. The median number of calories in poultry hot dogs (113) is less than the median (and even the first quartile) of either of the other two distributions (medians 152.5 and 153). The spread of the three distributions is about the same, if IQR is considered (all slightly above 40), but the (full) ranges vary slightly more (beef: 80, meat: 88, poultry: 66). The general recommendation to the health-conscious consumer is to eat poultry hot dogs. It should be noted, though, that since each of the three types of hot dogs shows quite a large spread among brands, simply buying a poultry hot dog does not guarantee a low-calorie food.
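In SAS, the descriptive statistics for each group and the side-by-side boxplots can be obtained with PROC MEANS and PROC SGPLOT. A minimal sketch, assuming a data set called hotdogs with variables type and calories:
PROC MEANS DATA=hotdogs MIN Q1 MEDIAN Q3 MAX;
CLASS type; /* compute the statistics separately for each type */
VAR calories;
RUN;
PROC SGPLOT DATA=hotdogs;
VBOX calories / CATEGORY=type; /* one boxplot of calories for each type */
RUN;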
What we learn from this example is that when exploring the relationship between a categorical explanatory variable and a quantitative response (Case C→Q), we essentially compare the distributions of the quantitative response for each category of the explanatory variable using side-by-side boxplots supplemented by descriptive statistics. Recall that we have actually done this before when we talked about the boxplot and argued that boxplots are most useful when presented side by side for comparing distributions of two or more groups. This is exactly what we are doing here!
Here is another example:
EXAMPLE: SSHA
Background: The Survey of Study Habits and Attitudes (SSHA) is a psychological test designed to measure the motivation, study habits, and attitudes toward learning of college students. Is there a relationship between gender and SSHA scores? In other words, is there a “gender effect” on SSHA scores? Data were collected from 40 randomly selected college students, and here is what the raw data look like:
(Reference: Moore and McCabe. (2003). Introduction to the Practice of Statistics)
Side-by-side boxplots supplemented by descriptive statistics allow us to compare the distribution of SSHA scores within each category of the explanatory variable—gender:
Statistic Female Male
min 103 70
Q1 128.75 95
Median 153 114.5
Q3 163.75 144.5
Max 200 187
Let’s summarize our results and interpret them:
By examining the side-by-side boxplots and the numerical measures, we see that in general females perform better on the SSHA than males. The median SSHA score of females is higher than the median score for males (153 vs. 114), and in fact, it is even higher than the third quartile of the males’ distribution (144.5). On the other hand, the males’ scores display more variability, both in terms of IQR (49.5 vs. 35) and in terms of the full range of scores (117 vs. 97). Based on these results, it seems that there is a gender effect on SSHA score. It should be noted, though, that our sample consists of only 20 males and 20 females, so we should be cautious about making any kind of generalizations beyond this study. One interesting question that comes to mind is, “Why did we observe this relationship between gender and SSHA scores?” In other words, is there maybe an explanation for why females score higher on the SSHA? Let’s leave it to the psychologists to try and answer that one.
Let’s Summarize
• The relationship between a categorical explanatory variable and a quantitative response variable is summarized using:
• Visual display: side-by-side boxplots
• Numerical measures: descriptive statistics used for one quantitative variable calculated in each group
• Exploring the relationship between a categorical explanatory variable and a quantitative response variable amounts to comparing the distributions of the quantitative response for each category of the explanatory variable. In particular, we look at how the distribution of the response variable differs between the values of the explanatory variable.
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.20: Classify a data analysis situation involving two variables according to the “role-type classification.”
Learning Objectives
LO 4.21: For a data analysis situation involving two variables, determine the appropriate graphical display(s) and/or numerical measures(s) that should be used to summarize the data.
Video
Video: Case Q-Q (2:30)
Related SAS Tutorials
Related SPSS Tutorials
Introduction – Two Quantitative Variables
Here again is the role-type classification table for framing our discussion about the relationship between two variables:
Before reading further, try this interactive online data analysis applet.
Interactive Applet: Case Q-Q
We are done with cases C→Q and C→C, and now we will move on to case Q→Q, where we examine the relationship between two quantitative variables.
In this section we will discuss scatterplots, which are the appropriate visual display in this case along with numerical methods for linear relationships including correlation and linear regression.
Scatterplots
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.21: For a data analysis situation involving two variables, determine the appropriate graphical display(s) and/or numerical measures(s) that should be used to summarize the data.
Video
Video: Scatterplots (7:20)
Related SAS Tutorials
Related SPSS Tutorials
In the previous two cases we had a categorical explanatory variable, and therefore exploring the relationship between the two variables was done by comparing the distribution of the response variable for each category of the explanatory variable:
• In case C→Q we compared distributions of the quantitative response.
• In case C→C we compared distributions of the categorical response.
Case Q→Q is different in the sense that both variables (in particular the explanatory variable) are quantitative. As you will discover, although we are still in essence comparing the distribution of one variable for different values of the other, this case will require a different kind of treatment and tools.
Learning Objectives
LO 4.24: Explain the process of creating a scatterplot.
Creating Scatterplots
Let’s start with an example:
EXAMPLE: Highway Signs
A Pennsylvania research firm conducted a study in which 30 drivers (of ages 18 to 82 years old) were sampled, and for each one, the maximum distance (in feet) at which he/she could read a newly designed sign was determined. The goal of this study was to explore the relationship between a driver’s age and the maximum distance at which signs were legible, and then use the study’s findings to improve safety for older drivers. (Reference: Utts and Heckard, Mind on Statistics (2002). Original source: Data collected by Last Resource, Inc, Bellfonte, PA.)
Since the purpose of this study is to explore the effect of age on maximum legibility distance,
• the explanatory variable is Age, and
• the response variable is Distance.
Here is what the raw data look like:
Note that the data structure is such that for each individual (in this case driver 1….driver 30) we have a pair of values (in this case representing the driver’s age and distance). We can therefore think about these data as 30 pairs of values: (18, 510), (32, 410), (55, 420), … , (82, 360).
The first step in exploring the relationship between driver age and sign legibility distance is to create an appropriate and informative graphical display. The appropriate graphical display for examining the relationship between two quantitative variables is the scatterplot. Here is how a scatterplot is constructed for our example:
To create a scatterplot, each pair of values is plotted, so that the value of the explanatory variable (X) is plotted on the horizontal axis, and the value of the response variable (Y) is plotted on the vertical axis. In other words, each individual (driver, in our example) appears on the scatterplot as a single point whose X-coordinate is the value of the explanatory variable for that individual, and whose Y-coordinate is the value of the response variable. Here is an illustration:
And here is the completed scatterplot:
Comment:
• It is important to mention again that when creating a scatterplot, the explanatory variable should always be plotted on the horizontal X-axis, and the response variable should be plotted on the vertical Y-axis. If in a specific example we do not have a clear distinction between explanatory and response variables, each of the variables can be plotted on either axis.
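In SAS, a scatterplot can be drawn with PROC SGPLOT. A minimal sketch, assuming a data set called signs with variables age and distance:
PROC SGPLOT DATA=signs;
SCATTER X=age Y=distance; /* explanatory variable on X, response on Y */
RUN;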
Interpreting Scatterplots
Learning Objectives
LO 4.25: Describe the relationship displayed in a scatterplot including: a) the overall pattern, b) striking deviations from the pattern.
How do we explore the relationship between two quantitative variables using the scatterplot? What should we look at, or pay attention to?
Recall that when we described the distribution of a single quantitative variable with a histogram, we described the overall pattern of the distribution (shape, center, spread) and any deviations from that pattern (outliers). We do the same thing with the scatterplot. The following figure summarizes this point:
As the figure explains, when describing the overall pattern of the relationship we look at its direction, form and strength.
Direction
• The direction of the relationship can be positive, negative, or neither:
A positive (or increasing) relationship means that an increase in one of the variables is associated with an increase in the other.
A negative (or decreasing) relationship means that an increase in one of the variables is associated with a decrease in the other.
Not all relationships can be classified as either positive or negative.
Form
• The form of the relationship is its general shape. When identifying the form, we try to find the simplest way to describe the shape of the scatterplot. There are many possible forms. Here are a couple that are quite common:
Relationships with a linear form are most simply described as points scattered about a line:
Relationships with a non-linear (sometimes called curvilinear) form are most simply described as points dispersed around the same curved line:
There are many other possible forms for the relationship between two quantitative variables, but linear and curvilinear forms are quite common and easy to identify. Another form-related pattern that we should be aware of is clusters in the data:
Strength
• The strength of the relationship is determined by how closely the data follow the form of the relationship. Let’s look, for example, at the following two scatterplots displaying positive, linear relationships:
The strength of the relationship is determined by how closely the data points follow the form. We can see that in the left scatterplot the data points follow the linear pattern quite closely. This is an example of a strong relationship. In the right scatterplot, the points also follow the linear pattern, but much less closely, and therefore we can say that the relationship is weaker. In general, though, assessing the strength of a relationship just by looking at the scatterplot is quite problematic, and we need a numerical measure to help us with that. We will discuss that later in this section.
• Data points that deviate from the pattern of the relationship are called outliers. We will see several examples of outliers during this section. Two outliers are illustrated in the scatterplot below:
Let’s go back now to our example, and use the scatterplot to examine the relationship between the age of the driver and the maximum sign legibility distance.
EXAMPLE: Highway Signs
Here is the scatterplot:
The direction of the relationship is negative, which makes sense in context, since as you get older your eyesight weakens, and in particular older drivers tend to be able to read signs only at lesser distances. An arrow drawn over the scatterplot illustrates the negative direction of this relationship:
The form of the relationship seems to be linear. Notice how the points tend to be scattered about the line. Although, as we mentioned earlier, it is problematic to assess the strength without a numerical measure, the relationship appears to be moderately strong, as the data is fairly tightly scattered about the line. Finally, all the data points seem to “obey” the pattern — there do not appear to be any outliers.
We will now look at two more examples:
EXAMPLE: Average Gestation Period
The average gestation period, or time of pregnancy, of an animal is closely related to its longevity (the length of its lifespan). Data on the average gestation period and longevity (in captivity) of 40 different species of animals have been examined, with the purpose of exploring how the gestation period of an animal is related to (or can be predicted from) its longevity. (Source: Rossman and Chance. (2001). Workshop statistics: Discovery with data and Minitab. Original source: The 1993 world almanac and book of facts).
Here is the scatterplot of the data.
What can we learn about the relationship from the scatterplot? The direction of the relationship is positive, which means that animals with longer life spans tend to have longer times of pregnancy (this makes intuitive sense). An arrow drawn over the scatterplot below illustrates this:
The form of the relationship is again essentially linear. There appears to be one outlier, indicating an animal with an exceptionally long longevity and gestation period. (This animal happens to be the elephant.) Note that while this outlier definitely deviates from the rest of the data in terms of its magnitude, it does follow the direction of the data.
Comment:
• Another feature of the scatterplot that is worth observing is how the variation in gestation increases as longevity increases. This fact is illustrated by the two red vertical lines at the bottom left part of the graph. Note that the gestation periods for animals that live 5 years range from about 30 days up to about 120 days. On the other hand, the gestation periods of animals that live 12 years vary much more, and range from about 60 days up to more than 400 days.
EXAMPLE: Fuel Usage
As a third example, consider the relationship between the average amount of fuel used (in liters) to drive a fixed distance in a car (100 kilometers), and the speed at which the car is driven (in kilometers per hour). (Source: Moore and McCabe, (2003). Introduction to the practice of statistics. Original source: T.N. Lam. (1985). “Estimating fuel consumption for engine size,” Journal of Transportation Engineering, vol. 111)
The data describe a relationship that decreases and then increases — the amount of fuel consumed decreases rapidly to a minimum for a car driving 60 kilometers per hour, and then increases gradually for speeds exceeding 60 kilometers per hour. This suggests that the speed at which a car economizes on fuel the most is about 60 km/h. This forms a non-linear (curvilinear) relationship that seems to be very strong, as the observations seem to perfectly fit the curve. Finally, there do not appear to be any outliers.
Learn By Doing: Scatterplots
EXAMPLE: Return on Incentives
The example in the last activity provides a great opportunity for interpretation of the form of the relationship in context. Recall that the example examined how the percentage of participants who completed a survey is affected by the monetary incentive that researchers promised to participants. Here again is the scatterplot that displays the relationship:
The positive relationship definitely makes sense in context, but what is the interpretation of the non-linear (curvilinear) form in the context of the problem? How can we explain (in context) the fact that the relationship seems at first to be increasing very rapidly, but then slows down? The following graph will help us:
Note that when the monetary incentive increases from \$0 to \$10, the percentage of returned surveys increases sharply — an increase of 27% (from 16% to 43%). However, the same increase of \$10, from \$30 to \$40, doesn’t result in the same dramatic increase in the percentage of returned surveys — it results in an increase of only 3% (from 54% to 57%). The form displays the phenomenon of “diminishing returns” — a return rate that after a certain point fails to increase proportionately to additional outlays of investment. \$10 is worth more to people relative to \$0 than \$30 is relative to \$10.
A Labeled (or Grouped) Scatterplot
In certain circumstances, it may be reasonable to indicate different subgroups or categories within the data on the scatterplot, by labeling each subgroup differently. The result is sometimes called a labeled scatterplot or grouped scatterplot, and can provide further insight about the relationship we are exploring. Here is an example.
EXAMPLE: Hot Dogs
The scatterplot below displays the relationship between the sodium and calorie content of 54 brands of hot dogs. Note that in this example there is no clear explanatory-response distinction, and we decided to have sodium content as the explanatory variable, and calorie content as the response variable.
The scatterplot displays a positive relationship, which means that hot dogs containing more sodium tend to be higher in calories.
The form of the relationship, however, is kind of hard to determine. Maybe if we label the scatterplot, indicating the type of hot dogs, we will get a better understanding of the form.
Here is the labeled scatterplot, with the three different colors representing the three types of hot dogs, as indicated.
The display does give us more insight about the form of the relationship between sodium and calorie content.
It appears that there is a positive relationship within all three types. In other words, we can generally expect hot dogs that are higher in sodium to be higher in calories, no matter what type of hot dog we consider. In addition, we can see that hot dogs made of poultry (indicated in blue) are generally lower in calories. This is a result we have seen before.
Interestingly, it appears that the form of the relationship specifically for poultry is further clustered, and we can only speculate about whether there is another categorical variable that describes these apparent sub-categories of poultry hot dogs.
Learn By Doing: Scatterplots (Software)
Let’s Summarize
• The relationship between two quantitative variables is visually displayed using the scatterplot, where each point represents an individual. We always plot the explanatory variable on the horizontal X axis, and the response variable on the vertical Y axis.
• When we explore a relationship using the scatterplot we should describe the overall pattern of the relationship and any deviations from that pattern. To describe the overall pattern consider the direction, form and strength of the relationship. Assessing the strength just by looking at the scatterplot can be problematic; using a numerical measure to determine strength will be discussed later in this course.
• Adding labels to the scatterplot that indicate different groups or categories within the data might help us get more insight about the relationship we are exploring.
Linear Relationships – Correlation
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.21: For a data analysis situation involving two variables, determine the appropriate graphical display(s) and/or numerical measure(s) that should be used to summarize the data.
Video
Video: Linear Relationships – Correlation (8:37)
Related SAS Tutorials
Related SPSS Tutorials
Introduction
So far we have visualized relationships between two quantitative variables using scatterplots, and described the overall pattern of a relationship by considering its direction, form, and strength. We noted that assessing the strength of a relationship just by looking at the scatterplot is quite difficult, and therefore we need to supplement the scatterplot with some kind of numerical measure that will help us assess the strength.
In this part, we will restrict our attention to the special case of relationships that have a linear form, since they are quite common and relatively simple to detect. More importantly, there exists a numerical measure that assesses the strength of the linear relationship between two quantitative variables with which we can supplement the scatterplot. We will introduce this numerical measure here and discuss it in detail.
Even though from this point on we are going to focus only on linear relationships, it is important to remember that not every relationship between two quantitative variables has a linear form. We have actually seen several examples of relationships that are not linear. The statistical tools that will be introduced here are appropriate only for examining linear relationships, and as we will see, when they are used in nonlinear situations, these tools can lead to errors in reasoning.
Let’s start with a motivating example. Consider the following two scatterplots.
We can see that in both cases, the direction of the relationship is positive and the form of the relationship is linear. What about the strength? Recall that the strength of a relationship is the extent to which the data follow its form.
Learn By Doing: Strength of Correlation
The purpose of this example was to illustrate how assessing the strength of the linear relationship from a scatterplot alone is problematic, since our judgment might be affected by the scale on which the values are plotted. This example, therefore, provides a motivation for the need to supplement the scatterplot with a numerical measure that will measure the strength of the linear relationship between two quantitative variables.
The Correlation Coefficient — r
Learning Objectives
LO 4.26: Explain the limitations of Pearson’s correlation coefficient (r) as a measure of the association between two quantitative variables.
Learning Objectives
LO 4.27: In the special case of a linear relationship, interpret Pearson’s correlation coefficient (r) in context.
The numerical measure that assesses the strength of a linear relationship is called the correlation coefficient, and is denoted by r. We will:
• give a definition of the correlation r,
• discuss the calculation of r,
• explain how to interpret the value of r, and
• talk about some of the properties of r.
Correlation Coefficient: The correlation coefficient (r) is a numerical measure that measures the strength and direction of a linear relationship between two quantitative variables.
Calculation: r is calculated using the following formula:
$r=\dfrac{1}{n-1} \sum_{i=1}^{n}\left(\dfrac{x_{i}-\bar{x}}{s_{x}}\right)\left(\dfrac{y_{i}-\bar{y}}{s_{y}}\right)$
However, the calculation of the correlation (r) is not the focus of this course. We will use a statistics package to calculate r for us, and the emphasis of this course will be on the interpretation of its value.
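For those curious about what the statistics package actually does, here is a minimal sketch in Python; the paired x and y values are invented purely for demonstration. The direct translation of the formula and the library call produce the same number:

    import numpy as np

    # Invented paired data, for demonstration only
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    # Direct translation of the formula: average product of standardized values
    n = len(x)
    zx = (x - x.mean()) / x.std(ddof=1)   # standardized x (using the sample SD)
    zy = (y - y.mean()) / y.std(ddof=1)   # standardized y
    r_formula = (zx * zy).sum() / (n - 1)

    # Library shortcut
    r_library = np.corrcoef(x, y)[0, 1]

    print(r_formula, r_library)           # the two values agree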
Interpretation
Once we obtain the value of r, its interpretation with respect to the strength of linear relationships is quite simple, as these images illustrate:
In order to get a better sense for how the value of r relates to the strength of the linear relationship, take a look at the following applets.
Interactive Applets: Correlation
If you will be using correlation often in your research, I strongly urge you to read the following more detailed discussion of correlation.
(Optional) Outside Reading: Correlation Coefficients (≈ 2700 words)
Now that we understand the use of r as a numerical measure for assessing the direction and strength of linear relationships between quantitative variables, we will look at a few examples.
EXAMPLE: Highway Sign Visibility
Earlier, we used the scatterplot below to find a negative linear relationship between the age of a driver and the maximum distance at which a highway sign was legible. What about the strength of the relationship? It turns out that the correlation between the two variables is r = -0.793.
Since r < 0, it confirms that the direction of the relationship is negative (although we really didn’t need r to tell us that). Since r is relatively close to -1, it suggests that the relationship is moderately strong. In context, the negative correlation confirms that the maximum distance at which a sign is legible generally decreases with age. Since the value of r indicates that the linear relationship is moderately strong, but not perfect, we can expect the maximum distance to vary somewhat, even among drivers of the same age.
EXAMPLE: Statistics Courses
A statistics department is interested in tracking the progress of its students from entry until graduation. As part of the study, the department tabulates the performance of 10 students in an introductory course and in an upper-level course required for graduation. What is the relationship between the students’ course averages in the two courses? Here is the scatterplot for the data:
The scatterplot suggests a relationship that is positive in direction, linear in form, and seems quite strong. The value of the correlation that we find between the two variables is r = 0.931, which is very close to 1, and thus confirms that indeed the linear relationship is very strong.
Comments:
• Note that in both examples we supplemented the scatterplot with the correlation (r). Now that we have the correlation (r), why do we still need to look at a scatterplot when examining the relationship between two quantitative variables?
• The correlation coefficient can only be interpreted as the measure of the strength of a linear relationship, so we need the scatterplot to verify that the relationship indeed looks linear. This point and its importance will be clearer after we examine a few properties of r.
Did I Get This? Correlation Coefficient
Properties of r
We will now discuss and illustrate several important properties of the correlation coefficient as a numerical measure of the strength of a linear relationship.
• The correlation does not change when the units of measurement of either one of the variables change. In other words, if we change the units of measurement of the explanatory variable and/or the response variable, this has no effect on the correlation (r).
To illustrate this, below are two versions of the scatterplot of the relationship between sign legibility distance and driver’s age:
The top scatterplot displays the original data where the maximum distances are measured in feet. The bottom scatterplot displays the same relationship, but with maximum distances changed to meters. Notice that the Y-values have changed, but the correlations are the same. This is an example of how changing the units of measurement of the response variable has no effect on r, but as we indicated above, the same is true for changing the units of the explanatory variable, or of both variables.
This might be a good place to comment that the correlation (r) is “unitless”. It is just a number.
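This property is easy to check numerically. In the hedged sketch below the distances are invented, but the point holds for any dataset: converting the response from feet to meters leaves r unchanged.

    import numpy as np

    ages = np.array([20, 30, 40, 50, 60, 70, 80])
    distance_feet = np.array([530, 500, 460, 420, 400, 360, 330])   # invented values

    distance_meters = distance_feet * 0.3048    # change the units of the response

    r_feet = np.corrcoef(ages, distance_feet)[0, 1]
    r_meters = np.corrcoef(ages, distance_meters)[0, 1]

    print(r_feet, r_meters)                     # the same value: r is unitless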
• The correlation only measures the strength of a linear relationship between two variables. It ignores any other type of relationship, no matter how strong it is. For example, consider the relationship between the average fuel usage of driving a fixed distance in a car, and the speed at which the car drives:
Our data describe a fairly simple non-linear (sometimes called curvilinear) relationship: the amount of fuel consumed decreases rapidly to a minimum for a car driving 60 kilometers per hour, and then increases gradually for speeds exceeding 60 kilometers per hour. The relationship is very strong, as the observations seem to perfectly fit the curve.
Although the relationship is strong, the correlation r = -0.172 indicates a weak linear relationship. This makes sense considering that the data fails to adhere closely to a linear form:
• The correlation by itself is not enough to determine whether or not a relationship is linear. To see this, let’s consider the study that examined the effect of monetary incentives on the return rate of questionnaires. Below is the scatterplot relating the percentage of participants who completed a survey to the monetary incentive that researchers promised to participants, in which we find a strong non-linear (sometimes called curvilinear) relationship:
The relationship is non-linear (sometimes called curvilinear), yet the correlation r = 0.876 is quite close to 1.
In the last two examples we have seen two very strong non-linear (sometimes called curvilinear) relationships, one with a correlation close to 0, and one with a correlation close to 1. Therefore, the correlation alone does not indicate whether a relationship is linear or not. The important principle here is:
Always look at the data!
• The correlation is heavily influenced by outliers. As you will learn in the next two activities, the way in which the outlier influences the correlation depends upon whether or not the outlier is consistent with the pattern of the linear relationship.
Interactive Applet: Correlation and Outliers
Hopefully, you’ve noticed the correlation decreasing when you created this kind of outlier, which is not consistent with the pattern of the relationship.
The next activity will show you how an outlier that is consistent with the direction of the linear relationship actually strengthens it.
Learn By Doing: Correlation and Outliers (Software)
In the previous activity, we saw an example where there was a positive linear relationship between the two variables, and including the outlier just “strengthened” it. Consider the hypothetical data displayed by the following scatterplot:
In this case, the low outlier gives an “illusion” of a positive linear relationship, whereas in reality, there is no linear relationship between X and Y.
Linear Relationships – Linear Regression
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.21: For a data analysis situation involving two variables, determine the appropriate graphical display(s) and/or numerical measure(s) that should be used to summarize the data.
Video
Video: Linear Relationships – Linear Regression (5:58)
Related SAS Tutorials
Related SPSS Tutorials
Summarizing the Pattern of the Data with a Line
Learning Objectives
LO 4.28: In the special case of a linear relationship, interpret the slope of the regression line and use the regression line to make predictions.
So far we’ve used the scatterplot to describe the relationship between two quantitative variables, and in the special case of a linear relationship, we have supplemented the scatterplot with the correlation (r).
The correlation, however, doesn’t fully characterize the linear relationship between two quantitative variables — it only measures the strength and direction. We often want to describe more precisely how one variable changes with the other (by “more precisely,” we mean more than just the direction), or predict the value of the response variable for a given value of the explanatory variable.
In order to be able to do that, we need to summarize the linear relationship with a line that best fits the linear pattern of the data. In the remainder of this section, we will introduce a way to find such a line, learn how to interpret it, and use it (cautiously) to make predictions.
Again, let’s start with a motivating example:
Earlier, we examined the linear relationship between the age of a driver and the maximum distance at which a highway sign was legible, using both a scatterplot and the correlation coefficient. Suppose a government agency wanted to predict the maximum distance at which the sign would be legible for 60-year-old drivers, and thus make sure that the sign could be used safely and effectively.
How would we make this prediction?
It would be useful if we could find a line (such as the one that is presented on the scatterplot) that represents the general pattern of the data, because then,
we would simply use this line to find the distance that corresponds to an age of 60, and predict that 60-year-old drivers could see the sign from a distance of just under 400 feet, like this:
How and why did we pick this particular line (the one shown in red in the above walkthrough) to describe the dependence of the maximum distance at which a sign is legible upon the age of a driver? What line exactly did we choose? We will return to this example once we can answer that question with a bit more precision.
Interactive Applets: Regression by Eye
The technique that specifies the dependence of the response variable on the explanatory variable is called regression. When that dependence is linear (which is the case in our examples in this section), the technique is called linear regression. Linear regression is therefore the technique of finding the line that best fits the pattern of the linear relationship (or in other words, the line that best describes how the response variable linearly depends on the explanatory variable).
To understand how such a line is chosen, consider the following very simplified version of the age-distance example (we left just 6 of the drivers on the scatterplot):
There are many lines that look like they would be good candidates to be the line that best fits the data:
It is doubtful that everyone would select the same line in the plot above. We need to agree on what we mean by “best fits the data”; in other words, we need to agree on a criterion by which we would select this line. We want the line we choose to be close to the data points. In other words, whatever criterion we choose, it had better somehow take into account the vertical deviations of the data points from the line, which are marked with blue arrows in the plot below:
The most commonly used criterion is called the least squares criterion. This criterion says: Among all the lines that look good on your data, choose the one that has the smallest sum of squared vertical deviations. Visually, each squared deviation is represented by the area of one of the squares in the plot below. Therefore, we are looking for the line that will have the smallest total yellow area.
This line is called the least-squares regression line, and, as we’ll see, it fits the linear pattern of the data very well.
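To make the least squares criterion concrete, the sketch below uses six invented (age, distance) points and a small helper function (sum_squared_deviations, defined here only for illustration) to compare two hand-picked candidate lines; the line produced by a least-squares fit makes this sum as small as possible.

    import numpy as np

    # Six invented (age, distance) points, echoing the simplified example above
    age = np.array([20, 30, 40, 50, 60, 70])
    dist = np.array([510, 480, 450, 410, 390, 350])

    def sum_squared_deviations(a, b):
        """Sum of squared vertical deviations of the points from the line y = a + b*x."""
        predicted = a + b * age
        return np.sum((dist - predicted) ** 2)

    # Two hand-picked candidate lines
    print(sum_squared_deviations(580, -3.0))
    print(sum_squared_deviations(560, -2.5))

    # The least-squares regression line minimizes this sum
    b_ls, a_ls = np.polyfit(age, dist, 1)       # polyfit returns slope, then intercept
    print(sum_squared_deviations(a_ls, b_ls))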
For the remainder of this lesson, you’ll need to feel comfortable with the algebra of a straight line. In particular you’ll need to be familiar with the slope and the intercept in the equation of a line, and their interpretation.
Many Students Wonder: Algebra Review – Linear Equation
Interactive Applet: Linear Equations – Effect of Changing the Slope or Intercept on the Line
Like any other line, the equation of the least-squares regression line for summarizing the linear relationship between the response variable (Y) and the explanatory variable (X) has the form: Y = a + bX
All we need to do is calculate the intercept a, and the slope b, which we will learn to do using software.
The slope of the least squares regression line can be interpreted as the estimated (or predicted) change in the mean (or average) value of the response variable when the explanatory variable increases by 1 unit.
EXAMPLE: Age-Distance
Let’s revisit our age-distance example, and find the least-squares regression line. The following output will be helpful in getting the 5 values we need:
• Dependent Variable: Distance
• Independent Variable: Age
• Correlation Coefficient (r) = -0.7929
• The least squares regression line for this example is:
Distance $= 576 + (-3 * \text{Age})$
• This means that for every 1-unit increase of the explanatory variable, there is, on average, a 3-unit decrease in the response variable. The interpretation in context of the slope (-3) is, therefore: In this dataset, when age increases by 1 year the average maximum distance at which subjects can read a sign is expected to decrease by 3 feet.
• Here is the regression line plotted on the scatterplot:
As we can see, the regression line fits the linear pattern of the data quite well.
Let’s go back now to our motivating example, in which we wanted to predict the maximum distance at which a sign is legible for a 60-year-old. Now that we have found the least squares regression line, this prediction becomes quite easy:
EXAMPLE: Age-Distance
Practically, what the figure tells us is that in order to find the predicted legibility distance for a 60-year-old, we plug Age = 60 into the regression line equation, to find that:
Predicted distance = 576 + (- 3 * 60) = 396
396 feet is our best prediction for the maximum distance at which a sign is legible for a 60-year-old.
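Here is the same prediction expressed as a short Python sketch; the intercept and slope are the values reported above, and the helper function predicted_distance is just an illustrative name.

    # The fitted least-squares line reported above: Distance = 576 - 3 * Age
    intercept, slope = 576.0, -3.0

    def predicted_distance(age):
        """Predicted maximum legibility distance (feet) for a driver of a given age."""
        return intercept + slope * age

    print(predicted_distance(60))   # 396.0 feet, matching the hand calculation

    # With the raw (age, distance) data in hand, a statistics package would
    # estimate these same two coefficients automatically via a least-squares fit.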
Did I Get This?: Linear Regression
Comment About Predictions:
• Suppose a government agency wanted to design a sign appropriate for an even wider range of drivers than were present in the original study. They want to predict the maximum distance at which the sign would be legible for a 90-year-old. Using the least squares regression line again as our summary of the linear dependence of the distances upon the drivers’ ages, the agency predicts that 90-year-old drivers can see the sign at no more than 576 + (- 3 * 90) = 306 feet:
(Figure: the age-distance scatterplot with the regression line, Distance = 576 - 3 * Age, extended to the right past the oldest observed driver; the line is drawn in red where data exist and in green beyond age 82, where there are no data points.)
(The green segment of the line is the region of ages beyond 82, the age of the oldest individual in the study.)
Question: Is our prediction for 90-year-old drivers reliable?
Answer: Our original age data ranged from 18 (youngest driver) to 82 (oldest driver), and our regression line is therefore a summary of the linear relationship in that age range only. When we plug the value 90 into the regression line equation, we are assuming that the same linear relationship extends beyond the range of our age data (18-82) into the green segment. There is no justification for such an assumption. It might be the case that the vision of drivers older than 82 falls off more rapidly than it does for younger drivers (i.e., the slope changes from -3 to something more negative). Our prediction for age = 90 is therefore not reliable.
In General
Prediction for ranges of the explanatory variable that are not in the data is called extrapolation. Since there is no way of knowing whether a relationship holds beyond the range of the explanatory variable in the data, extrapolation is not reliable, and should be avoided. In our example, like most others, extrapolation can lead to very poor or illogical predictions.
Interactive Applets: Linear Regression
Learn By Doing: Linear Regression (Software)
Let’s Summarize
• A special case of the relationship between two quantitative variables is the linear relationship. In this case, a straight line simply and adequately summarizes the relationship.
• When the scatterplot displays a linear relationship, we supplement it with the correlation coefficient (r), which measures the strength and direction of a linear relationship between two quantitative variables. The correlation ranges between -1 and 1. Values near -1 indicate a strong negative linear relationship, values near 0 indicate a weak linear relationship, and values near 1 indicate a strong positive linear relationship.
• The correlation is only an appropriate numerical measure for linear relationships, and is sensitive to outliers. Therefore, the correlation should only be used as a supplement to a scatterplot (after we look at the data).
• The most commonly used criterion for finding a line that summarizes the pattern of a linear relationship is “least squares.” The least squares regression line has the smallest sum of squared vertical deviations of the data points from the line.
• The slope of the least squares regression line can be interpreted as the estimated (or predicted) change in the mean (or average) value of the response variable when the explanatory variable increases by 1 unit.
• The intercept of the least squares regression line is the average value of the response variable when the explanatory variable is zero. Thus, this is only of interest if it makes sense for the explanatory variable to be zero AND we have observed data in that range (explanatory variable around zero) in our sample.
• The least squares regression line predicts the value of the response variable for a given value of the explanatory variable. Extrapolation is prediction for values of the explanatory variable that fall outside the range of the data. Since there is no way of knowing whether a relationship holds beyond the range of the explanatory variable in the data, extrapolation is not reliable, and should be avoided.
CO-1: Describe the roles biostatistics serves in the discipline of public health.
Video
Video: Causation (8:45)
Introduction
Learning Objectives
LO 1.6: Recognize the distinction between association and causation.
Learning Objectives
LO 1.7: Identify potential lurking variables for explaining an observed relationship.
So far we have discussed different ways in which data can be used to explore the relationship (or association) between two variables. To frame our discussion we followed the role-type classification table:
We have now completed learning how to explore the relationship in cases C→Q, C→C, and Q→Q. (As noted before, case Q→C will not be discussed in this course.)
When we explore the relationship between two variables, there is often a temptation to conclude from the observed relationship that changes in the explanatory variable cause changes in the response variable. In other words, you might be tempted to interpret the observed association as causation.
The purpose of this part of the course is to convince you that this kind of interpretation is often wrong! The motto of this section is one of the most fundamental principles of this course:
WORDS TO LIVE BY: Statistical analysis alone will never prove causation!
PRINCIPLE: Association does not imply causation!
Outside Reading: Cause & Effect (≈ 1700 words)
Let’s start by looking at the following example:
EXAMPLE: Fire Damage
The scatterplot below illustrates how the number of firefighters sent to fires (X) is related to the amount of damage caused by fires (Y) in a certain city.
The scatterplot clearly displays a fairly strong (slightly curved) positive relationship between the two variables. Would it, then, be reasonable to conclude that sending more firefighters to a fire causes more damage, or that the city should send fewer firefighters to a fire, in order to decrease the amount of damage done by the fire? Of course not! So what is going on here?
There is a third variable in the background — the seriousness of the fire — that is responsible for the observed relationship. More serious fires require more firefighters, and also cause more damage.
The following figure will help you visualize this situation:
Here, the seriousness of the fire is a lurking variable. A lurking variable is a variable that is not among the explanatory or response variables in a study, but could substantially affect your interpretation of the relationship among those variables.
Here we have the following three relationships:
• Damage increases with the number of firefighters
• Number of firefighters increases with severity of fire
• Damage increases with the severity of fire
• Thus the increase in damage with the number of firefighters may be partially or fully explained by severity of fire.
In particular, as in our example, the lurking variable might have an effect on both the explanatory and the response variables. This common effect creates the observed association between the explanatory and response variables, even though there is no causal link between them. This possibility, that there might be a lurking variable (which we might not be thinking about) that is responsible for the observed relationship leads to our principle:
PRINCIPLE: Association does not imply causation!
The next example will illustrate another way in which a lurking variable might interfere and prevent us from reaching any causal conclusions.
EXAMPLE: SAT Test
For U.S. colleges and universities, a standard entrance examination is the SAT test. The side-by-side boxplots below provide evidence of a relationship between the student’s country of origin (the United States or another country) and the student’s SAT Math score.
The distribution of international students’ scores is higher than that of U.S. students. The international students’ median score (about 700) exceeds the third quartile of U.S. students’ scores. Can we conclude that the country of origin is the cause of the difference in SAT Math scores, and that students in the United States are weaker at math than students in other countries?
No, not necessarily. While it might be true that U.S. students differ in math ability from other students — i.e. due to differences in educational systems — we can’t conclude that a student’s country of origin is the cause of the disparity. One important lurking variable that might explain the observed relationship is the educational level of the two populations taking the SAT Math test. In the United States, the SAT is a standard test, and therefore a broad cross-section of all U.S. students (in terms of educational level) take this test. Among all international students, on the other hand, only those who plan on coming to the U.S. to study, which is usually a more selected subgroup, take the test.
The following figure will help you visualize this explanation:
Here, the explanatory variable (X) may have a causal relationship with the response variable (Y), but the lurking variable might be a contributing factor as well, which makes it very hard to isolate the effect of the explanatory variable and prove that it has a causal link with the response variable. In this case, we say that the lurking variable is confounded with the explanatory variable, since their effects on the response variable cannot be distinguished from each other.
Note that in each of the above two examples, the lurking variable interacts differently with the variables studied. In example 1, the lurking variable has an effect on both the explanatory and the response variables, creating the illusion that there is a causal link between them. In example two, the lurking variable is confounded with the explanatory variable, making it hard to assess the isolated effect of the explanatory variable on the response variable.
The distinction between these two types of interactions is not as important as the fact that in either case, the observed association can be at least partially explained by the lurking variable. The most important message from these two examples is therefore: An observed association between two variables is not enough evidence that there is a causal relationship between them.
In other words …
PRINCIPLE: Association does not imply causation!
Learn By Doing: Causation
Simpson’s Paradox
Learning Objectives
LO 1.8: Recognize and explain the phenomenon of Simpson’s Paradox as it relates to interpreting the relationship between two variables.
So far, we have:
• discussed what lurking variables are,
• demonstrated different ways in which the lurking variables can interact with the two studied variables, and
• understood that the existence of a possible lurking variable is the main reason why we say that association does not imply causation.
As you recall, a lurking variable, by definition, is a variable that was not included in the study, but could have a substantial effect on our understanding of the relationship between the two studied variables.
What if we did include a lurking variable in our study? What kind of effect could that have on our understanding of the relationship? These are the questions we are going to discuss next.
Let’s start with an example:
EXAMPLE: Hospital Death Rates
Background: A government study collected data on the death rates in nearly 6,000 hospitals in the United States. These results were then challenged by researchers, who said that the federal analyses failed to take into account the variation among hospitals in the severity of patients’ illnesses when they were hospitalized. As a result, said the researchers, some hospitals were treated unfairly in the findings, which named hospitals with higher-than-expected death rates. What the researchers meant is that when the federal government explored the relationship between the two variables — hospital and death rate — it also should have included in the study (or taken into account) the lurking variable — severity of illness.
We will use a simplified version of this study to illustrate the researchers’ claim, and see what the possible effect could be of including a lurking variable in a study. (Reference: Moore and McCabe (2003). Introduction to the Practice of Statistics.)
Consider the following two-way table, which summarizes the data about the status of patients who were admitted to two hospitals in a certain city (Hospital A and Hospital B). Note that since the purpose of the study is to examine whether there is a “hospital effect” on patients’ status, “Hospital” is the explanatory variable, and “Patient’s Status” is the response variable.
When we supplement the two-way table with the conditional percents within each hospital:
we find that Hospital A has a higher death rate (3%) than Hospital B (2%). Should we jump to the conclusion that a sick patient admitted to Hospital A is 50% more likely to die than if he/she were admitted to Hospital B? Not so fast …
Maybe Hospital A gets most of the severe cases, and that explains why it has a higher death rate. In order to explore this, we need to include (or account for) the lurking variable “severity of illness” in our analysis. To do this, we go back to the two-way table and split it up to look separately at patients who are severely ill, and patients who are not.
As we can see, Hospital A did admit many more severely ill patients than Hospital B (1,500 vs. 200). In fact, from the way the totals were split, we see that in Hospital A, severely ill patients were a much higher proportion of the patients — 1,500 out of a total of 2,100 patients. In contrast, only 200 out of 800 patients at Hospital B were severely ill. To better see the effect of including the lurking variable, we need to supplement each of the two new two-way tables with its conditional percentages:
Note that despite our earlier finding that overall Hospital A has a higher death rate (3% vs. 2%), when we take into account the lurking variable, we find that actually it is Hospital B that has the higher death rate both among the severely ill patients (4% vs. 3.8%) and among the not severely ill patients (1.3% vs. 1%). Thus, we see that adding a lurking variable can change the direction of an association.
Here we have the following three relationships:
• A greater percentage of Hospital A’s patients died compared to Hospital B.
• Patients who are severely ill are less likely to survive.
• Hospital A admits more severely ill patients.
• In this case, after further careful analysis, we see that once we account for severity of illness, Hospital A actually has a lower percentage of patients who died than Hospital B in both groups of patients! (The short sketch below reproduces this arithmetic.)
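In the sketch below, the death counts are reconstructed from the group sizes and percentages quoted above (for example, 3.8% of 1,500 severely ill patients is 57 deaths), so treat the numbers as an illustration of the calculation rather than the original federal table.

    # Counts reconstructed from the percentages quoted above: (patients, deaths)
    hospitals = {
        "A": {"severely ill": (1500, 57), "not severely ill": (600, 6)},
        "B": {"severely ill": (200, 8),   "not severely ill": (600, 8)},
    }

    for name, groups in hospitals.items():
        total_patients = sum(n for n, _ in groups.values())
        total_deaths = sum(d for _, d in groups.values())
        print(name, "overall:", round(100 * total_deaths / total_patients, 1), "% died")
        for label, (n, d) in groups.items():
            print("   ", label + ":", round(100 * d / n, 1), "% died")

    # Overall, Hospital A looks worse (3.0% vs. 2.0%), yet within each severity
    # group Hospital A does better (3.8% vs. 4.0% and 1.0% vs. 1.3%) -- Simpson's paradox.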
Whenever including a lurking variable causes us to rethink the direction of an association, this is called Simpson’s paradox.
The possibility that a lurking variable can have such a dramatic effect is another reason we must adhere to the principle:
PRINCIPLE: Association does not imply causation!
A Final Example – Gaining a Deeper Understanding of the Relationship
It is not always the case that including a lurking variable makes us rethink the direction of the association. In the next example we will see how including a lurking variable just helps us gain a deeper understanding of the observed relationship.
EXAMPLE: College Entrance Exams
As discussed earlier, in the United States, the SAT is a widely used college entrance examination, required by the most prestigious schools. In some states, a different college entrance examination is prevalent, the ACT. The scatterplot below displays, for each state, the percentage of students who took the SAT and the state’s median SAT Math score.
Note that:
• the explanatory variable is the percentage taking the SAT,
• the response variable is the median SAT Math score, and
• each data point on the scatterplot represents one of the states, so for example, in Illinois, in the year these data were collected, 16% of the students took the SAT Math, and their median score was 528.
Notice that there is a negative relationship between the percentage of students who take the SAT in a state, and the median SAT Math score in that state. What could the explanation behind this negative trend be? Why might having more people take the test be associated with lower scores?
Note that another visible feature of the data is the presence of a gap in the middle of the scatterplot, which creates two distinct clusters in the data. This suggests that maybe there is a lurking variable that separates the states into these two clusters, and that including this lurking variable in the study (as we did, by creating this labeled scatterplot) will help us understand the negative trend.
It turns out that indeed, the clusters represent two groups of states:
• The “blue group” on the right represents the states where the SAT is the test of choice for students and colleges.
• The “red group” on the left represents the states where the ACT college entrance examination is commonly used.
It makes sense then, that in the “ACT states” on the left, a smaller percentage of students take the SAT. Moreover, the students who do take the SAT in the ACT states are probably students who are applying to more prestigious national colleges, and therefore represent a more select group of students. This is the reason why we see high SAT Math scores in this group.
On the other hand, in the “SAT states” on the right, larger percentages of students take the test. These students represent a much broader cross-section of the population, and therefore we see lower (more average) SAT Math scores.
To summarize: In this case, including the lurking variable “ACT state” versus “SAT state” helped us better understand the observed negative relationship in our data.
Learn By Doing: Causation and Lurking Variables
Did I Get This?: Simpson’s Paradox
The last two examples showed us that including a lurking variable in our exploration may:
• lead us to rethink the direction of an association (as in the Hospital/Death Rate example) or,
• help us to gain a deeper understanding of the relationship between variables (as in the SAT/ACT example).
Let’s Summarize
• A lurking variable is a variable that was not included in your analysis, but that could substantially change your interpretation of the data if it were included.
• Because of the possibility of lurking variables, we adhere to the principle that association does not imply causation.
• Including a lurking variable in our exploration may:
• help us to gain a deeper understanding of the relationship between variables, or
• lead us to rethink the direction of an association (Simpson’s Paradox)
• Whenever including a lurking variable causes us to rethink the direction of an association, this is an instance of Simpson’s paradox.
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Video
One Categorical Variable (4:57)
Note
Note: These videos are listed for reference. If you would like to follow along in your first reading, then you will need to see the preceding tutorial videos. These videos are also linked in the programming assignments.
• All SAS tutorial videos
• All SPSS tutorial videos
Related SAS Tutorials
Related SPSS Tutorials
Distribution of One Categorical Variable
Learning Objectives
LO 4.3: Using appropriate numerical measures and/or visual displays, describe the distribution of a categorical variable in context.
What is your perception of your own body? Do you feel that you are overweight, underweight, or about right?
A random sample of 1,200 U.S. college students were asked this question as part of a larger survey. The following table shows part of the responses:
Student Body Image
student 25 overweight
student 26 about right
student 27 underweight
student 28 about right
student 29 about right
Here is some information that would be interesting to get from these data:
• What percentage of the sampled students fall into each category?
• How are students divided across the three body image categories? Are they equally divided? If not, do the percentages follow some other kind of pattern?
There is no way that we can answer these questions by looking at the raw data, which are in the form of a long list of 1,200 responses, and thus not very useful.
Both of these questions will be easily answered once we summarize and look at the distribution of the variable Body Image (i.e., once we summarize how often each of the categories occurs).
Numerical Measures
In order to summarize the distribution of a categorical variable, we first create a table of the different values (categories) the variable takes, how many times each value occurs (count) and, more importantly, how often each value occurs (by converting the counts to percentages).
The result is often called a Frequency Distribution or Frequency Table.
Note
A Frequency Distribution or Frequency Table is the primary set of numerical measures for one categorical variable.
• Consists of a table with each category along with the count and percentage for each category.
• Provides a summary of the distribution for one categorical variable.
Here is the table for our example:
Category Count Percent
About right 855 (855/1200)*100 = 71.3%
Overweight 235 (235/1200)*100 = 19.6%
Underweight 110 (110/1200)*100 = 9.2%
Total n=1200 100%
Comments:
1. If you add the percentages in the above table you will get a total of 100.1% (instead of the true value which is, of course, 100%). This can occur whenever rounding has taken place. You should be aware of this possibility when working with real data. If you add the ratios directly as fractions, you will always get exactly 1 (or 100%).
2. In general, although it might be “less confusing” if we recorded the full values above (71.25% instead of 71.3% and so on), we prefer not to display too many decimal places as this can distract from the conclusions we want to illustrate. We don’t want those who are reading our results to be overwhelmed or distracted by unneeded digits.
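A frequency table like the one above takes only a couple of lines in software. The sketch below uses Python with pandas for illustration (the course's tutorials use SAS and SPSS); the list of responses is abbreviated, since the real survey had 1,200 of them.

    import pandas as pd

    # Abbreviated list of responses; the full survey would contain 1,200 entries
    body_image = pd.Series(["about right", "overweight", "about right",
                            "underweight", "about right", "overweight"])

    counts = body_image.value_counts()                        # count per category
    percents = body_image.value_counts(normalize=True) * 100  # percent per category

    freq_table = pd.DataFrame({"Count": counts, "Percent": percents.round(1)})
    print(freq_table)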
Visual or Graphical Displays
In order to visualize the numerical measures we’ve obtained, we need a graphical display.
Note
There are two simple graphical displays for visualizing the distribution of one categorical variable:
• Pie Charts
• Bar Charts
Bar Chart
Note that the pie chart and bar chart are visual representations of the information in the frequency table.
Study the bar charts above and then answer the following question.
Learn By Doing: Bar Charts
Now that we have summarized the distribution of values in the Body Image variable, let’s go back and interpret the results in the context of the questions that we posed. Study the frequency table and graphs and answer the following questions.
Learn By Doing: Describe the Distribution of a Categorical Variable
Now that we’ve interpreted the results, there are some other interesting questions that arise:
• Can we reliably generalize our results to the entire population of interest and conclude that a similar distribution across body image categories exists among all U.S. college students? In particular, can we make such a generalization even though our sample consisted of only 1,200 students, which is a very small fraction of the entire population?
• If we had separated our sample by gender and looked at males and females separately, would we have found a similar distribution across body image categories?
These are the types of questions that we will deal with in future sections of the course.
Recall: Categorical variables take category or label values, and place an individual into one of several groups. Categorical variables are often further classified as either
• Nominal, when there is no natural ordering among the categories. Common examples would be gender, eye color, or ethnicity.
• Ordinal, when there is a natural order among the categories, such as, ranking scales or letter grades. However, ordinal variables are categorical and do not provide precise measurements. Differences are not precisely meaningful, for example, if one student scores an A and another a B on an assignment, we cannot say precisely the difference in their scores, only that an A is larger than a B.
Note: For ordinal categorical variables, pie charts are seldom used since the information about the order can be lost in such a display. Be careful that bar charts for ordinal variables display the data in a reasonable order given the scenario.
While both the pie chart and the bar chart help us visualize the distribution of a categorical variable, the pie chart emphasizes how the different categories relate to the whole, and the bar chart emphasizes how the different categories compare with each other.
Pictograms
A variation on the pie chart and bar chart that is very commonly used in the media is the pictogram. Here are two examples:
Source: USA Today Snapshots and the Impulse Research for Northern Confidential Bathroom survey
Source: Market Facts for the Association of Dressings and Sauces
Beware: Pictograms can be misleading. Consider the following pictogram:
This graph is aimed at advertisers deciding where to spend their budgets, and clearly suggests that Time magazine attracts by far the largest amount of advertising spending.
Are the differences really as dramatic as the graph suggests?
If we look carefully at the numbers above the pens, we find that advertisers spend in Time only \$4,433,879 / \$2,698,386 = 1.64 times more than in Newsweek, and only \$4,433,879 / \$1,537,617 = 2.88 times more than in U.S. News.
By looking at the pictogram, however, we get the impression that Time is much further ahead. Why?
In order to magnify the picture without distorting it, we must increase both its height and width. As a result, the area of Time’s pen is 1.64 * 1.64 = 2.7 times larger than the Newsweek pen, and 2.88 * 2.88 = 8.3 times larger than the U.S. News pen. Our eyes capture the area of the pens rather than only the height, and so we are misled to think that Time is a bigger winner than it really is.
Learn By Doing: One Categorical Variable (College Student Survey)
Let’s Summarize
The distribution of a categorical variable is summarized using:
• Visual display: pie chart or bar chart, supplemented by
• Numerical measures: frequency table of category counts and percentages.
A variation on pie charts and bar charts is the pictogram. Pictograms can be misleading, so make sure to use a critical approach when interpreting the information the pictogram is trying to convey.
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Video
Video: One Quantitative Variable (4:16)
Note
Related SAS Tutorials
Related SPSS Tutorials
Distribution of One Quantitative Variable
Learning Objectives
LO 4.4: Using appropriate graphical displays and/or numerical measures, describe the distribution of a quantitative variable in context: a) describe the overall pattern, b) describe striking deviations from the pattern
In the previous section, we explored the distribution of a categorical variable using graphs (pie chart, bar chart) supplemented by numerical measures (percent of observations in each category).
In this section, we will explore the data collected from a quantitative variable, and learn how to describe and summarize the important features of its distribution.
We will learn how to display the distribution using graphs and discuss a variety of numerical measures.
An introduction to each of these topics follows.
Graphs
To display data from one quantitative variable graphically, we can use either a histogram or boxplot.
We will also present several “by-hand” displays such as the stemplot and dotplot (although we will not rely on these in this course).
Numerical Measures
The overall pattern of the distribution of a quantitative variable is described by its shape, center, and spread.
By inspecting the histogram or boxplot, we can describe the shape of the distribution, but we can only get a rough estimate for the center and spread.
A description of the distribution of a quantitative variable must include, in addition to the graphical display, a more precise numerical description of the center and spread of the distribution.
In this section we will learn:
• how to display the distribution of one quantitative variable using various graphs;
• how to quantify the center and spread of the distribution of one quantitative variable with various numerical measures;
• some of the properties of those numerical measures;
• how to choose the appropriate numerical measures of center and spread to supplement the graph(s); and
• how to identify potential outliers in the distribution of one quantitative variable
• We will also discuss a few measures of position (also called measures of location). These measures
• allow us to quantify where a particular value is relative to the distribution of all values
• do provide information about the distribution itself
• also use the information about the distribution to learn more about an INDIVIDUAL
We will present the material in a logical sequence which builds in difficulty, intermingling discussion of visual displays and numerical measures as we proceed.
Before reading further, try this interactive applet which will give you a preview of some of the topics we will be learning about in this section on exploratory data analysis for one quantitative variable.
Histograms & Stemplots
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.4: Using appropriate graphical displays and/or numerical measures, describe the distribution of a quantitative variable in context: a) describe the overall pattern, b) describe striking deviations from the pattern
Video
Video: Histograms and Stemplots (5:03)
Note
Related SAS Tutorials
Related SPSS Tutorials
Histograms
Learning Objectives
LO 4.5: Explain the process of creating a histogram.
The idea is to break the range of values into intervals and count how many observations fall into each interval.
EXAMPLE: Exam Grades
Here are the exam grades of 15 students:
88, 48, 60, 51, 57, 85, 69, 75, 97, 72, 71, 79, 65, 63, 73
We first need to break the range of values into intervals (also called “bins” or “classes”).
In this case, since our dataset consists of exam scores, it will make sense to choose intervals that typically correspond to the range of a letter grade, 10 points wide: [40,50), [50, 60), … [90, 100).
By counting how many of the 15 observations fall in each of the intervals, we get the following table:
Score Count
[40-50) 1
[50-60) 2
[60-70) 4
[70-80) 5
[80-90) 2
[90-100) 1
Note: The observation 60 was counted in the 60-70 interval. See comment 1 below.
To construct the histogram from this table we plot the intervals on the X-axis, and show the number of observations in each interval (frequency of the interval) on the Y-axis, which is represented by the height of a rectangle located above the interval:
The previous table can also be turned into a relative frequency table using the following steps:
• Add a row on the bottom and include the total number of observations in the dataset that are represented in the table.
• Add a column, at the end of the table, and calculate the relative frequency for each interval, by dividing the number of observations in each row by the total number of observations.
These two steps are illustrated in red in the following frequency distribution table:
It is also possible to determine the number of scores for an interval, if you have the total number of observations and the relative frequency for that interval.
• For instance, suppose there are 15 scores (or observations) in a set of data and the relative frequency for an interval is 0.13.
• To determine the number of scores in that interval, multiply the total number of observations by the relative frequency and round to the nearest whole number: 15*0.13 = 1.95, which rounds to 2 observations.
A relative frequency table, like the one above, can be used to determine the frequency of scores occurring at or across intervals.
Here are some examples, using this frequency table:
What is the percentage of exam scores that were 70 and up to, but not including, 80?
• To determine the answer, we look at the relative frequency associated with the [70-80) interval.
• The relative frequency is 0.33; to convert to percentage, multiply by 100 (0.33*100= 33) or 33%.
What is the percentage of exam scores that are at least 70? To determine the answer, we need to:
• Add together the relative frequencies for the intervals that contain scores of at least 70.
• Thus, we would need to add together the relative frequencies from [70-80), [80-90), and [90-100)
= 0.33 + 0.13 + 0.07 = 0.53.
• To get the percentage, we need to multiply the calculated relative frequency by 100.
• In this case, it would be 0.53*100 = 53 or 53%.
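If you are curious how these counts and relative frequencies could be computed rather than tallied by hand, here is a minimal sketch in Python (not required for this course; the variable names are ours, and the intervals are the same 10-point bins used above):

```python
# Tally the 15 exam scores into the intervals [40,50), [50,60), ..., [90,100)
scores = [88, 48, 60, 51, 57, 85, 69, 75, 97, 72, 71, 79, 65, 63, 73]
n = len(scores)

edges = list(range(40, 101, 10))                   # 40, 50, 60, 70, 80, 90, 100
for low, high in zip(edges, edges[1:]):
    count = sum(low <= s < high for s in scores)   # left endpoint included, right excluded
    print(f"[{low},{high}): count = {count}, relative frequency = {count / n:.2f}")

# Percent of scores that are at least 70 (compare with 0.33 + 0.13 + 0.07 = 0.53)
print(sum(s >= 70 for s in scores) / n * 100)      # about 53%
```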
Study the histogram and table again and answer the following question.
Learn By Doing: Histograms
Comments:
• It is very important that each observation be counted only in one interval. For the most part, it is clear which interval an observation falls in. However, in our example, we needed to decide whether to include 60 in the interval 50-60, or the interval 60-70, and we chose to count it in the latter.
• In fact, this decision is captured by the way we wrote the intervals. If you’ll scroll up and look at the table, you’ll see that we wrote the intervals in a peculiar way: [40,50), [50,60), [60,70) etc.
• The square bracket means “including” and the parenthesis means “not including”. For example, [50,60) is the interval from 50 to 60, including 50 and not including 60; [60,70) is the interval from 60 to 70, including 60, and not including 70, etc.
• It really does not matter how you decide to set up your intervals, as long as you are consistent.
• When you look at a histogram such as the one above it is important to know that values falling on the border are only counted in one interval, even if you do not know which way this was done for a particular graph.
• When data are displayed in a histogram, some information is lost. Note that by looking at the histogram
• we can answer: “How many students scored 70 or above?” (5+2+1=8)
• But we cannot answer: “What was the lowest score?” All we can say is that the lowest score is somewhere between 40 and 50.
• Obviously, we could have chosen to break the data into intervals differently — for example: [45, 50), [50, 55), [55, 60) etc.
To see how our choice of bins or intervals affects a histogram, you can use the applet linked below, which lets you change the intervals dynamically.
(OPTIONAL) Interactive Applet: Histograms
Many Students Wonder: Histograms
Question: How do I know what interval width to choose?
Answer: There are many valid choices for interval widths and starting points. There are a few rules of thumb used by software packages to find optimal values. In this course, we will rely on a statistical package to produce the histogram for us, and we will focus instead on describing and summarizing the distribution as it appears from the histogram.
The following exercises provide more practice working with histograms created from a single quantitative variable.
Did I Get This?: Histograms
Stemplot (Stem and Leaf Plot)
Learning Objectives
LO 4.6: Explain the process of creating a stemplot.
The stemplot (also called stem and leaf plot) is another graphical display of the distribution of a quantitative variable.
Note
To create a stemplot, the idea is to separate each data point into a stem and leaf, as follows:
• The leaf is the right-most digit.
• The stem is everything except the right-most digit.
• So, if the data point is 34, then 3 is the stem and 4 is the leaf.
• If the data point is 3.41, then 3.4 is the stem and 1 is the leaf.
• Note: For this to work, ALL data points should be rounded to the same number of decimal places.
EXAMPLE: Best Actress Oscar Winners
We will continue with the Best Actress Oscar winners example (Link to the Best Actress Oscar Winners data).
34 34 26 37 42 41 35 31 41 33 30 74 33 49 38 61 21 41 26 80 43 29 33 35 45 49 39 34 26 25 35 33
To make a stemplot:
• Separate each observation into a stem and a leaf.
• Write the stems in a vertical column with the smallest at the top, and draw a vertical line at the right of this column.
• Go through the data points, and write each leaf in the row to the right of its stem.
• Rearrange the leaves in an increasing order.
* When some of the stems hold a large number of leaves, we can split each stem into two: one holding the leaves 0-4, and the other holding the leaves 5-9. A statistical software package will often do the splitting for you, when appropriate.
Note that when rotated 90 degrees counterclockwise, the stemplot visually resembles a histogram:
The stemplot has additional unique features:
• It preserves the original data.
• It sorts the data (which will become very useful in the next section).
You will not need to create these plots by hand but you may need to be able to discuss the information they contain.
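For those who are curious, the steps above are also easy to automate. The following Python sketch (illustration only, not something you need to be able to write) builds the stems and leaves for the Best Actress ages:

```python
# Simple stem-and-leaf display: stem = all but the last digit, leaf = last digit
ages = [34, 34, 26, 37, 42, 41, 35, 31, 41, 33, 30, 74, 33, 49, 38, 61,
        21, 41, 26, 80, 43, 29, 33, 35, 45, 49, 39, 34, 26, 25, 35, 33]

stems = {}
for age in sorted(ages):
    stem, leaf = divmod(age, 10)          # e.g. 34 -> stem 3, leaf 4
    stems.setdefault(stem, []).append(leaf)

for stem in range(min(stems), max(stems) + 1):
    leaves = "".join(str(leaf) for leaf in stems.get(stem, []))
    print(f"{stem} | {leaves}")           # empty stems (here, 5) are shown with no leaves
```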
To see more stemplots, use the interactive applet we introduced earlier.
In particular, notice how the raw data are rounded and look at the stemplot with and without split stems.
Comments: ABOUT DOTPLOTS
• There is another type of display that we can use to summarize a quantitative variable graphically — the dotplot.
• The dotplot, like the stemplot, shows each observation, but displays it with a dot rather than with its actual value.
• We will not use these in this course but you may see them occasionally in practice and they are relatively easy to create by-hand.
• Here is the dotplot for the ages of Best Actress Oscar winners.
Many Students Wonder: Graphs
Question: How do we know which graph to use: the histogram, stemplot, or dotplot?
Answer: Since for the most part we are not going to deal with very small data sets in this course, we will generally display the distribution of a quantitative variable using a histogram generated by a statistical software package.
Let’s Summarize
• The histogram is a graphical display of the distribution of a quantitative variable. It plots the number (count) of observations that fall in intervals of values.
• The stemplot is a simple, but useful visual display of a quantitative variable. Its principal virtues are:
• Easy and quick to construct for small, simple datasets.
• Retains the actual data.
• Sorts (ranks) the data.
Describing Distributions
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.4: Using appropriate graphical displays and/or numerical measures, describe the distribution of a quantitative variable in context: a) describe the overall pattern, b) describe striking deviations from the pattern
Video
Video: Describing Distributions (2 videos, 7:38 total)
Note
Related SAS Tutorials
Related SPSS Tutorials
Features of Distributions of Quantitative Variables
Learning Objectives
LO 4.7: Define and describe the features of the distribution of one quantitative variable (shape, center, spread, outliers).
Once the distribution has been displayed graphically, we can describe the overall pattern of the distribution and mention any striking deviations from that pattern.
Note
More specifically, we should consider the following features of the Distribution for One Quantitative Variable:
Shape
When describing the shape of a distribution, we should consider:
• Symmetry/skewness of the distribution.
• Peakedness (modality) — the number of peaks (modes) the distribution has.
We distinguish between:
Symmetric Distributions
Note
A distribution is called symmetric if, as in the histograms above, the distribution forms an approximate mirror image with respect to the center of the distribution.
The center of the distribution is easy to locate and both tails of the distribution are approximately the same length.
Note that all three distributions are symmetric, but are different in their modality (peakedness).
• The first distribution is unimodal — it has one mode (roughly at 10) around which the observations are concentrated.
• The second distribution is bimodal — it has two modes (roughly at 10 and 20) around which the observations are concentrated.
• The third distribution is kind of flat, or uniform. The distribution has no modes, or no value around which the observations are concentrated. Rather, we see that the observations are roughly uniformly distributed among the different values.
Skewed Right Distributions
A distribution is called skewed right if, as in the histogram above, the right tail (larger values) is much longer than the left tail (small values).
Note that in a skewed right distribution, the bulk of the observations are small/medium, with a few observations that are much larger than the rest.
• An example of a real-life variable that has a skewed right distribution is salary. Most people earn in the low/medium range of salaries, with a few exceptions (CEOs, professional athletes etc.) that are distributed along a large range (long “tail”) of higher values.
Skewed Left Distributions
A distribution is called skewed left if, as in the histogram above, the left tail (smaller values) is much longer than the right tail (larger values).
Note that in a skewed left distribution, the bulk of the observations are medium/large, with a few observations that are much smaller than the rest.
• An example of a real life variable that has a skewed left distribution is age of death from natural causes (heart disease, cancer etc.). Most such deaths happen at older ages, with fewer cases happening at younger ages.
Comments:
1. Distributions with more than two peaks are generally called multimodal.
2. Bimodal or multimodal distributions can be evidence that two distinct groups are represented.
3. Unimodal, bimodal, and multimodal distributions may or may not be symmetric.
Here is an example. A medium size neighborhood 24-hour convenience store collected data from 537 customers on the amount of money spent in a single visit to the store. The following histogram displays the data.
Note that the overall shape of the distribution is skewed to the right with a clear mode around $25. In addition, it has another (smaller) “peak” (mode) around $50-55.
The majority of the customers spend around $25 but there is a cluster of customers who enter the store and spend around $50-55.
Center
The center of the distribution is often used to represent a typical value.
One way to define the center is as the value that divides the distribution so that approximately half the observations take smaller values, and approximately half the observations take larger values.
Another common way to measure the center of a distribution is to use the average value.
From looking at the histogram we can get only a rough estimate for the center of the distribution. More exact ways of finding measures of center will be discussed in the next section.
Spread
One way to measure the spread (also called variability or variation) of the distribution is to use the approximate range covered by the data.
From looking at the histogram, we can approximate the smallest observation (min), and the largest observation (max), and thus approximate the range. (More exact ways of finding measures of spread will be discussed soon.)
Outliers
Outliers are observations that fall outside the overall pattern.
For example, the following histogram represents a distribution with a highly probable outlier:
[Figure: A histogram with frequency on the Y-axis. Moving from left to right along the X-axis, the frequency increases to a peak at x = 5 and then decreases, reaching 0 at x = 11. All values of x greater than 10 have a frequency of 0, except x = 15, which has a frequency greater than zero; this observation is an outlier.]
EXAMPLE: Exam Grades
As you can see from the histogram, the grades distribution is roughly symmetric and unimodal with no outliers.
The center of the grades distribution is roughly 70 (7 students scored below 70, and 8 students scored above 70).
approximate min: 45 (the middle of the lowest interval of scores)
approximate max: 95 (the middle of the highest interval of scores)
approximate range: 95-45=50
Let’s look at a new example.
EXAMPLE: Best Actress Oscar Winners
To provide an example of a histogram applied to actual data, we will look at the ages of Best Actress Oscar winners from 1970 to 2001.
The histogram for the data is shown below. (Link to the Best Actress Oscar Winners data).
We will now summarize the main features of the distribution of ages as it appears from the histogram:
Shape: The distribution of ages is skewed right. We have a concentration of data among the younger ages and a long tail to the right. The vast majority of the “best actress” awards are given to young actresses, with very few awards given to actresses who are older.
Center: The data seem to be centered around 35 or 36 years old. Note that this implies that roughly half the awards are given to actresses who are less than 35 years old.
Spread: The data range from about 20 to about 80, so the approximate range equals 80 – 20 = 60.
Outliers: There seem to be two probable outliers to the far right and possibly a third around 61 years old.
You can see how informative it is to know “what to look at” in a histogram.
Learn By Doing: Shapes of Distributions (Best Actor Oscar Winners)
The following exercises provide more practice with shapes of distributions for one quantitative variable.
Did I Get This?: Shapes of Distributions
Did I Get This?: Shapes of Distributions Part 2
Let’s Summarize
• When examining the distribution of a quantitative variable, one should describe the overall pattern of the data (shape, center, spread), and any deviations from the pattern (outliers).
• When describing the shape of a distribution, one should consider:
• Symmetry/skewness of the distribution
• Peakedness (modality) — the number of peaks (modes) the distribution has.
• Not all distributions have a simple, recognizable shape.
• Outliers are data points that fall outside the overall pattern of the distribution and need further research before continuing the analysis.
• It is always important to interpret what the features of the distribution mean in the context of the data.
Measures of Center
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.4: Using appropriate graphical displays and/or numerical measures, describe the distribution of a quantitative variable in context: a) describe the overall pattern, b) describe striking deviations from the pattern
Learning Objectives
LO 4.7: Define and describe the features of the distribution of one quantitative variable (shape, center, spread, outliers).
Video
Video: Measures of Center (2 videos, 6:09 total)
Note
Related SAS Tutorials
Related SPSS Tutorials
Introduction
Intuitively speaking, a numerical measure of center describes a “typical value” of the distribution.
The two main numerical measures for the center of a distribution are the mean and the median.
In this unit on Exploratory Data Analysis, we will be calculating these results based upon a sample and so we will often emphasize that the values calculated are the sample mean and sample median.
Each one of these measures is based on a completely different idea of describing the center of a distribution.
We will first present each one of the measures, and then compare their properties.
Mean
Learning Objectives
LO 4.8: Define and calculate the sample mean of a quantitative variable.
The mean is the average of a set of observations (i.e., the sum of the observations divided by the number of observations).
• If the n observations are written as
$x_1, x_2, \cdots, x_n$
• their mean can be written mathematically as:
$\bar{x}=\dfrac{x_{1}+x_{2}+\cdots+x_{n}}{n}=\dfrac{\sum_{i=1}^{n} x_{i}}{n}$
We read the symbol as “x-bar.” The bar notation is commonly used to represent the sample mean, i.e. the mean of the sample.
Using any appropriate letter to represent the variable (x, y, etc.), we can indicate the sample mean of this variable by adding a bar over the variable notation.
EXAMPLE: Best Actress Oscar Winners
We will continue with the Best Actress Oscar winners example (Link to the Best Actress Oscar Winners data).
34 34 26 37 42 41 35 31 41 33 30 74 33 49 38 61 21 41 26 80 43 29 33 35 45 49 39 34 26 25 35 33
The mean age of the 32 actresses is:
$\bar{x}=\dfrac{34+34+26+\ldots+35+33}{32}=\dfrac{1233}{32}=38.5$
We add all of the ages to get 1233 and divide by the number of ages which was 32 to get 38.5.
We denote this result as x-bar and called the sample mean.
Note that the sample mean gives a measure of center which is higher than our approximation of the center from looking at the histogram (which was 35). The reason for this will be clear soon.
EXAMPLE: World Cup Soccer
Often we have large sets of data and use a frequency table to display the data more efficiently.
Data were collected from the last three World Cup soccer tournaments. A total of 192 games were played. The table below lists the number of goals scored per game (not including any goals scored in shootouts).
Total # Goals/Game Frequency
0 17
1 45
2 51
3 37
4 25
5 11
6 3
7 2
8 1
To find the mean number of goals scored per game, we would need to find the sum of all 192 numbers, and then divide that sum by 192.
Rather than add 192 numbers, we use the fact that the same numbers appear many times. For example, the number 0 appears 17 times, the number 1 appears 45 times, the number 2 appears 51 times, etc.
If we add up 17 zeros, we get 0. If we add up 45 ones, we get 45. If we add up 51 twos, we get 102. Repeated addition is multiplication.
Thus, the sum of the 192 numbers
= 0(17) + 1(45) + 2(51) + 3(37) + 4(25) + 5(11) + 6(3) + 7(2) + 8(1) = 453.
The sample mean is then 453 / 192 = 2.359.
Note that, in this example, the values of 1, 2, and 3 are the most common and our average falls in this range representing the bulk of the data.
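This "multiply, then add" shortcut translates directly into code. Here is a minimal Python sketch (for illustration only; the variable names are ours):

```python
# Mean from a frequency table: multiply each value by its frequency, sum, then divide by n
goals_freq = {0: 17, 1: 45, 2: 51, 3: 37, 4: 25, 5: 11, 6: 3, 7: 2, 8: 1}

n = sum(goals_freq.values())                                           # 192 games
total_goals = sum(value * freq for value, freq in goals_freq.items())  # 453 goals

print(n, total_goals, round(total_goals / n, 3))                       # 192 453 2.359
```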
Did I Get This?: Mean
Median
Learning Objectives
LO 4.9: Define and calculate the sample median of a quantitative variable.
The median M is the midpoint of the distribution. It is the number such that half of the observations fall above, and half fall below.
To find the median:
• Order the data from smallest to largest.
• Consider whether n, the number of observations, is even or odd.
• If n is odd, the median M is the center observation in the ordered list. This observation is the one “sitting” in the (n + 1) / 2 spot in the ordered list.
• If n is even, the median M is the mean of the two center observations in the ordered list. These two observations are the ones “sitting” in the (n / 2) and (n / 2) + 1 spots in the ordered list.
EXAMPLE: Median(1)
For a simple visualization of the location of the median, consider the following two simple cases of n = 7 and n = 8 ordered observations, with each observation represented by a solid circle:
Comments:
• In the images above, the dots are equally spaced; this need not indicate that the data values are actually equally spaced, as we are only interested in listing them in order.
• In fact, in the above pictures, two subsequent dots could have exactly the same value.
• It is clear that the value of the median will be in the same position regardless of the distance between data values.
Did I Get This?: Median
EXAMPLE: Median(2)
To find the median age of the Best Actress Oscar winners, we first need to order the data.
It would be useful, then, to use the stemplot, a diagram in which the data are already ordered.
• Here n = 32 (an even number), so the median M, will be the mean of the two center observations
• These are located at the (n / 2) = 32 / 2 = 16th and (n / 2) + 1 = (32 / 2) + 1 = 17th
Counting from the top, we find that:
• the 16th ranked observation is 35
• the 17th ranked observation also happens to be 35
Therefore, the median M = (35 + 35) / 2 = 35
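In practice, software does these calculations for us. As a quick illustration (a Python sketch using the standard statistics module; not required for the course), both measures for the 32 ages can be reproduced in a few lines:

```python
import statistics

ages = [34, 34, 26, 37, 42, 41, 35, 31, 41, 33, 30, 74, 33, 49, 38, 61,
        21, 41, 26, 80, 43, 29, 33, 35, 45, 49, 39, 34, 26, 25, 35, 33]

print(statistics.mean(ages))    # 38.53125, about 38.5
print(statistics.median(ages))  # 35.0, the mean of the 16th and 17th ordered values
```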
Learn By Doing: Measures of Center #1
Comparing the Mean and the Median
Learning Objectives
LO 4.10: Choose the appropriate measures for a quantitative variable based upon the shape of the distribution.
Note
As we have seen, the mean and the median, the most common measures of center, each describe the center of a distribution of values in a different way.
• The mean describes the center as an average value, in which the actual values of the data points play an important role.
• The median, on the other hand, locates the middle value as the center, and the order of the data is the key.
To get a deeper understanding of the differences between these two measures of center, consider the following example. Here are two datasets:
Data set A → 64 65 66 68 70 71 73
Data set B → 64 65 66 68 70 71 730
For dataset A, the mean is 68.1, and the median is 68.
Looking at dataset B, notice that all of the observations except the last one are close together. The observation 730 is very large, and is certainly an outlier.
In this case, the median is still 68, but the mean will be influenced by the high outlier, and shifted up to 162.
The message that we should take from this example is:
The mean is very sensitive to outliers (because it factors in their magnitude), while the median is resistant (or robust) to outliers.
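Here is a small Python sketch (illustration only) that reproduces this comparison for datasets A and B:

```python
import statistics

data_A = [64, 65, 66, 68, 70, 71, 73]
data_B = [64, 65, 66, 68, 70, 71, 730]   # identical to A except for the high outlier 730

print(statistics.mean(data_A), statistics.median(data_A))  # about 68.1 and 68
print(statistics.mean(data_B), statistics.median(data_B))  # 162 and 68: the mean moves, the median does not
```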
Interactive Applet: Comparing the Mean and Median
Therefore:
• For symmetric distributions with no outliers: the mean is approximately equal to the median.
• For skewed right distributions and/or datasets with high outliers: the mean is greater than the median.
• For skewed left distributions and/or datasets with low outliers: the mean is less than the median.
Conclusions… When to use which measures?
• Use the sample mean as a measure of center for symmetric distributions with no outliers.
• Otherwise, the median will be a more appropriate measure of the center of our data.
Did I Get This?: Measures of Center
Learn By Doing: Measures of Center #2
Learn By Doing: Measures of Center – Additional Practice
Let’s Summarize
• The two main numerical measures for the center of a distribution are the mean and the median. The mean is the average value, while the median is the middle value.
• The mean is very sensitive to outliers (as it factors in their magnitude), while the median is resistant to outliers.
• The mean is an appropriate measure of center for symmetric distributions with no outliers. In all other cases, the median is often a better measure of the center of the distribution.
Measures of Spread
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.4: Using appropriate graphical displays and/or numerical measures, describe the distribution of a quantitative variable in context: a) describe the overall pattern, b) describe striking deviations from the pattern
Learning Objectives
LO 4.7: Define and describe the features of the distribution of one quantitative variable (shape, center, spread, outliers).
Video
Video: Measures of Spread (3 videos, 8:44 total)
Note
Related SAS Tutorials
Related SPSS Tutorials
Introduction
So far we have learned about different ways to quantify the center of a distribution. A measure of center by itself is not enough, though, to describe a distribution.
Consider the following two distributions of exam scores. Both distributions are centered at 70 (the median of both distributions is approximately 70), but the distributions are quite different.
The first distribution has a much larger variability in scores compared to the second one.
In order to describe the distribution, we therefore need to supplement the graphical display not only with a measure of center, but also with a measure of the variability (or spread) of the distribution.
In this section, we will discuss the three most commonly used measures of spread:
• Range
• Inter-quartile range (IQR)
• Standard deviation
Although the two measures of center approach the question differently, they both attempt to measure the same point in the distribution and thus are comparable.
However, the three measures of spread provide very different ways to quantify the variability of the distribution and do not try to estimate the same quantity.
In fact, the three measures of spread provide information about three different aspects of the spread of the distribution which, together, give a more complete picture of the spread of the distribution.
Range
Learning Objectives
LO 4.11: Define and calculate the range of one quantitative variable.
The range covered by the data is the most intuitive measure of variability. The range is exactly the distance between the smallest data point (min) and the largest one (Max).
• Range = Max – min
Note: When we first looked at the histogram, and tried to get a first feel for the spread of the data, we were actually approximating the range, rather than calculating the exact range.
EXAMPLE: Best Actress Oscar Winners
Here we have the Best Actress Oscar winners’ data
34 34 26 37 42 41 35 31 41 33 30 74 33 49 38 61 21 41 26 80 43 29 33 35 45 49 39 34 26 25 35 33
In this example:
• min = 21 (Marlee Matlin for Children of a Lesser God, 1986)
• Max = 80 (Jessica Tandy for Driving Miss Daisy, 1989)
The range covered by all the data is 80 – 21 = 59 years.
Inter-Quartile Range (IQR)
Learning Objectives
LO 4.12: Define and calculate Q1, Q3, and the IQR for one quantitative variable
While the range quantifies the variability by looking at the range covered by ALL the data,
the Inter-Quartile Range or IQR measures the variability of a distribution by giving us the range covered by the MIDDLE 50% of the data.
• IQR = Q3 – Q1
• Q3 = 3rd Quartile = 75th Percentile
• Q1 = 1st Quartile = 25th Percentile
The following picture illustrates this idea: (Think about the horizontal line as the data ranging from the min to the Max). IMPORTANT NOTE: The “lines” in the following illustrations are not to scale. The equal distances indicate equal amounts of data NOT equal distance between the numeric values.
Although we will use software to calculate the quartiles and IQR, we will illustrate the basic process to help you fully understand.
To calculate the IQR:
1. Arrange the data in increasing order, and find the median M. Recall that the median divides the data, so that 50% of the data points are below the median, and 50% of the data points are above the median.
2. Find the median of the lower 50% of the data. This is called the first quartile of the distribution, and the point is denoted by Q1. Note from the picture that Q1 divides the lower 50% of the data into two halves, containing 25% of the data points in each half. Q1 is called the first quartile, since one quarter of the data points fall below it.
3. Repeat this again for the top 50% of the data. Find the median of the top 50% of the data. This point is called the third quartile of the distribution, and is denoted by Q3.
Note from the picture that Q3 divides the top 50% of the data into two halves, with 25% of the data points in each. Q3 is called the third quartile, since three quarters of the data points fall below it.
4. The middle 50% of the data falls between Q1 and Q3, and therefore: IQR = Q3 – Q1
Comments:
1. The last picture shows that Q1, M, and Q3 divide the data into four quarters with 25% of the data points in each, where the median is essentially the second quartile. The use of IQR = Q3 – Q1 as a measure of spread is therefore particularly appropriate when the median M is used as a measure of center.
2. We can define a bit more precisely what is considered the bottom or top 50% of the data. The bottom (top) 50% of the data is all the observations whose position in the ordered list is to the left (right) of the location of the overall median M. The following picture will visually illustrate this for the simple cases of n = 7 and n = 8.
Note that when n is odd (as in n = 7 above), the median is not included in either the bottom or top half of the data; When n is even (as in n = 8 above), the data are naturally divided into two halves.
EXAMPLE: Best Actress Oscar Winners
To find the IQR of the Best Actress Oscar winners’ distribution, it will be convenient to use the stemplot.
Q1 is the median of the bottom half of the data. Since there are 16 observations in that half, Q1 is the mean of the 8th and 9th ranked observations in that half:
Q1 = (31 + 33) / 2 = 32
Similarly, Q3 is the median of the top half of the data, and since there are 16 observations in that half, Q3 is the mean of the 8th and 9th ranked observations in that half:
Q3 = (41 + 42) / 2 = 41.5
IQR = 41.5 – 32 = 9.5
Note that in this example, the range covered by all the ages is 59 years, while the range covered by the middle 50% of the ages is only 9.5 years. While the whole dataset is spread over a range of 59 years, the middle 50% of the data is packed into only 9.5 years. Looking again at the histogram will illustrate this:
Comment:
• Software packages use different formulas to calculate the quartiles Q1 and Q3. This should not worry you, as long as you understand the idea behind these concepts. For example, here are the quartile values provided by three different software packages for the age of best actress Oscar winners:
[Output from R, Minitab, and Excel is shown here; each package reports slightly different values for Q1 and Q3.]
Q1 and Q3 as reported by the various software packages differ from each other and are also slightly different from the ones we found here. This should not worry you.
There are different acceptable ways to find the median and the quartiles. These can give different results occasionally, especially for datasets where n (the number of observations) is fairly small.
As long as you know what the numbers mean, and how to interpret them in context, it doesn’t really matter much what method you use to find them, since the differences are negligible.
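To make the by-hand method concrete, here is a Python sketch (illustration only; the helper function name is ours) of the "median of each half" approach used above. As just noted, your statistical package may use a slightly different formula and report slightly different quartiles.

```python
import statistics

def quartiles_by_halves(data):
    """Q1 and Q3 as the medians of the lower and upper halves (median excluded when n is odd)."""
    x = sorted(data)
    n = len(x)
    half = n // 2
    lower = x[:half]                                    # bottom half of the ordered data
    upper = x[half:] if n % 2 == 0 else x[half + 1:]    # top half, skipping the median if n is odd
    return statistics.median(lower), statistics.median(upper)

ages = [34, 34, 26, 37, 42, 41, 35, 31, 41, 33, 30, 74, 33, 49, 38, 61,
        21, 41, 26, 80, 43, 29, 33, 35, 45, 49, 39, 34, 26, 25, 35, 33]

q1, q3 = quartiles_by_halves(ages)
print(q1, q3, q3 - q1)   # 32.0 41.5 9.5
```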
Standard Deviation
Learning Objectives
LO 4.13: Define and calculate the standard deviation and variance of one quantitative variable.
So far, we have introduced two measures of spread; the range (covered by all the data) and the inter-quartile range (IQR), which looks at the range covered by the middle 50% of the distribution. We also noted that the IQR should be paired as a measure of spread with the median as a measure of center.
We now move on to another measure of spread, the standard deviation, which quantifies the spread of a distribution in a completely different way.
Idea
The idea behind the standard deviation is to quantify the spread of a distribution by measuring how far the observations are from their mean. The standard deviation gives the average (or typical) distance between a data point and the mean.
Notation
There are many notations for the standard deviation: SD, s, Sd, StDev. Here, we’ll use SD as an abbreviation for standard deviation, and use s as the symbol.
Formula
The sample standard deviation formula is:
$s=\sqrt{\dfrac{\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}}{n-1}}$
where
$s =$ sample standard deviation
$n =$ number of scores in the sample
$\sum =$ the sum of ...
$\bar{x} =$ sample mean
Calculation
In order to get a better understanding of the standard deviation, it would be useful to see an example of how it is calculated. In practice, we will use a computer to do the calculation.
EXAMPLE: Video Store Customers
The following are the number of customers who entered a video store in 8 consecutive hours:
7, 9, 5, 13, 3, 11, 15, 9
To find the standard deviation of the number of hourly customers:
1. Find the mean, x-bar, of your data:
(7 + 9 + 5 + 13 + 3 + 11 + 15 + 9)/8 = 9
2. Find the deviations from the mean:
• The differences between each observation and the mean here are
(7 – 9), (9 – 9), (5 – 9), (13 – 9), (3 – 9), (11 – 9), (15 – 9), (9 – 9)
-2, 0, -4, 4, -6, 2, 6, 0
• Since the standard deviation attempts to measure the average (typical) distance between the data points and their mean, it would make sense to average the deviation we obtained.
• Note, however, that the sum of the deviations is zero.
• This is always the case, and is the reason why we need a more complex calculation.
3. To solve the previous problem, in our calculation, we square each of the deviations.
(-2)², (0)², (-4)², (4)², (-6)², (2)², (6)², (0)²
4, 0, 16, 16, 36, 4, 36, 0
4. Sum the squared deviations and divide by n – 1:
(4 + 0 + 16 + 16 + 36 + 4 + 36 + 0)/(8 – 1)
(112)/(7) = 16
• The reason we divide by n-1 will be discussed later.
• This value, the sum of the squared deviations divided by n – 1, is called the variance. However, the variance is not used as a measure of spread directly as the units are the square of the units of the original data.
5. The standard deviation of the data is the square root of the variance calculated in step 4:
• In this case, we have the square root of 16 which is 4. We will use the lower case letter s to represent the standard deviation.
s = 4
• We take the square root to obtain a measure which is in the original units of the data. The units of the variance of 16 are in “squared customers” which is difficult to interpret.
• The units of the standard deviation are in “customers” which makes this measure of variation more useful in practice than the variance.
Recall that the average of the number of customers who enter the store in an hour is 9.
The interpretation of the standard deviation is that, on average, the actual number of customers who enter the store each hour is about 4 away from the mean of 9.
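The five steps above translate directly into code. Here is a Python sketch of the calculation for the video store data (illustration only); the built-in statistics.stdev function gives the same answer in one line.

```python
import math
import statistics

customers = [7, 9, 5, 13, 3, 11, 15, 9]

n = len(customers)
mean = sum(customers) / n                    # step 1: mean = 9
deviations = [x - mean for x in customers]   # step 2: deviations, which sum to 0
squared = [d ** 2 for d in deviations]       # step 3: square each deviation
variance = sum(squared) / (n - 1)            # step 4: 112 / 7 = 16 (the variance)
sd = math.sqrt(variance)                     # step 5: standard deviation = 4

print(variance, sd)                    # 16.0 4.0
print(statistics.stdev(customers))     # 4.0, the same answer from the built-in function
```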
Comment: The importance of the numerical figure that we found in #4 above called the variance (=16 in our example) will be discussed much later in the course when we get to the inference part.
Learn By Doing: Standard Deviation
Properties of the Standard Deviation
1. It should be clear from the discussion thus far that the SD should be paired as a measure of spread with the mean as a measure of center.
2. Note that the only way, mathematically, in which the SD = 0, is when all the observations have the same value (Ex: 5, 5, 5, … , 5), in which case, the deviations from the mean (which is also 5) are all 0. This is intuitive, since if all the data points have the same value, we have no variability (spread) in the data, and expect the measure of spread (like the SD) to be 0. Indeed, in this case, not only is the SD equal to 0, but the range and the IQR are also equal to 0. Do you understand why?
3. Like the mean, the SD is strongly influenced by outliers in the data. Consider the example concerning video store customers: 3, 5, 7, 9, 9, 11, 13, 15 (data ordered). If the largest observation was wrongly recorded as 150, then the average would jump up to 25.9, and the standard deviation would jump up to SD = 50.3. Note that in this simple example, it is easy to see that while the standard deviation is strongly influenced by outliers, the IQR is not. The IQR would be the same in both cases, since, like the median, the calculation of the quartiles depends only on the order of the data rather than the actual values.
The last comment leads to the following very important conclusion:
Choosing Numerical Measures
Learning Objectives
LO 4.10: Choose the appropriate measures for a quantitative variable based upon the shape of the distribution.
• Use the mean and the standard deviation as measures of center and spread for reasonably symmetric distributions with no extreme outliers.
• For all other cases, use the five-number summary = min, Q1, Median, Q3, Max (which gives the median, and easy access to the IQR and range). We will discuss the five-number summary in the next section in more detail.
Let’s Summarize
• The range covered by the data is the most intuitive measure of spread and is exactly the distance between the smallest data point (min) and the largest one (Max).
• Another measure of spread is the inter-quartile range (IQR), which is the range covered by the middle 50% of the data.
• IQR = Q3 – Q1, the difference between the third and first quartiles.
• The first quartile (Q1) is the value such that one quarter (25%) of the data points fall below it, or the median of the bottom half of the data.
• The third quartile (Q3) is the value such that three quarters (75%) of the data points fall below it, or the median of the top half of the data.
• The IQR is generally used as a measure of spread of a distribution when the median is used as a measure of center.
• The standard deviation measures the spread by reporting a typical (average) distance between the data points and their mean.
• It is appropriate to use the standard deviation as a measure of spread with the mean as the measure of center.
• Since the mean and standard deviations are highly influenced by extreme observations, they should be used as numerical descriptions of the center and spread only for distributions that are roughly symmetric, and have no extreme outliers. In all other situations, we prefer the 5-number summary.
Measures of Position
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.4: Using appropriate graphical displays and/or numerical measures, describe the distribution of a quantitative variable in context: a) describe the overall pattern, b) describe striking deviations from the pattern
Learning Objectives
LO 4.14: Define and interpret measures of position (percentiles, quartiles, the five-number summary, z-scores).
Video
Video: Measures of Position (2 videos, 4:20 total)
Note
Related SAS Tutorials
Related SPSS Tutorials
Although not a required aspect of describing distributions of one quantitative variable, we are often interested in where a particular value falls in the distribution. Is the value unusually low or high or about what we would expect?
Answers to these questions rely on measures of position (or location). These measures give information about the distribution but also give information about how individual values relate to the overall distribution.
Percentiles
A common measure of position is the percentile. Although there are some mathematical considerations involved with calculating percentiles which we will not discuss, you should have a basic understanding of their interpretation.
In general, the P-th percentile can be interpreted as a location in the data for which approximately P% of the values in the distribution fall below the P-th percentile and (100 – P)% fall above it.
The quartiles Q1 and Q3 are special cases of percentiles and thus are measures of position.
Five-Number Summary
The combination of the five numbers (min, Q1, M, Q3, Max) is called the five number summary, and provides a quick numerical description of both the center and spread of a distribution.
Each of the values represents a measure of position in the dataset.
The min and max provide the boundaries, and the quartiles and median provide information about the 25th, 50th, and 75th percentiles.
Standardized Scores (Z-Scores)
Standardized scores, also called z-scores, use the mean and standard deviation as the primary measures of center and spread and are therefore most useful when the mean and standard deviation are appropriate, i.e. when the distribution is reasonably symmetric with no extreme outliers.
For any individual, the z-score tells us how many standard deviations the raw score for that individual deviates from the mean and in what direction. A positive z-score indicates the individual is above average and a negative z-score indicates the individual is below average.
To calculate a z-score, we take the individual value and subtract the mean and then divide this difference by the standard deviation.
$z_{i}=\dfrac{x_{i}-\bar{x}}{s}$
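As a small illustration (a Python sketch, not a required tool for this course), here is the z-score of the oldest Best Actress winner (age 80), using the sample mean and standard deviation of the 32 ages:

```python
import statistics

ages = [34, 34, 26, 37, 42, 41, 35, 31, 41, 33, 30, 74, 33, 49, 38, 61,
        21, 41, 26, 80, 43, 29, 33, 35, 45, 49, 39, 34, 26, 25, 35, 33]

x_bar = statistics.mean(ages)   # about 38.5 years
s = statistics.stdev(ages)      # about 13 years

z_80 = (80 - x_bar) / s         # z-score of the oldest winner
print(round(z_80, 2))           # about 3.2: roughly 3.2 standard deviations above the mean
```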
Measures of Position
Measures of position also allow us to compare values from different distributions. For example, we can present the percentiles or z-scores of an individual’s height and weight. These two measures together would provide a better picture of how the individual fits in the overall population than either would alone.
Although measures of position are not stressed in this course as much as measures of center and spread, we have seen and will see many measures of position used in various aspects of examining the distribution of one variable and it is good to recognize them as measures of position when they appear.
Outliers
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.4: Using appropriate graphical displays and/or numerical measures, describe the distribution of a quantitative variable in context: a) describe the overall pattern, b) describe striking deviations from the pattern
Learning Objectives
LO 4.7: Define and describe the features of the distribution of one quantitative variable (shape, center, spread, outliers).
Video
Video: Outliers (2:30)
Using the IQR to Detect Outliers
Learning Objectives
LO 4.15: Define and use the 1.5(IQR) and 3(IQR) criterion to identify potential outliers and extreme outliers.
So far we have quantified the idea of center, and we are in the middle of the discussion about measuring spread, but we haven’t really talked about a method or rule that will help us classify extreme observations as outliers. The IQR is commonly used as the basis for a rule of thumb for identifying outliers.
The 1.5(IQR) Criterion for Outliers
An observation is considered a suspected outlier or potential outlier if it is:
• below Q1 – 1.5(IQR) or
• above Q3 + 1.5(IQR)
The following picture (not to scale) illustrates this rule:
EXAMPLE: Best Actress Oscar Winners
We will continue with the Best Actress Oscar winners example (Link to the Best Actress Oscar Winners data).
34 34 26 37 42 41 35 31 41 33 30 74 33 49 38 61 21 41 26 80 43 29 33 35 45 49 39 34 26 25 35 33
Recall that when we first looked at the histogram of ages of Best Actress Oscar winners, there were three observations that looked like possible outliers:
We can now use the 1.5(IQR) criterion to check whether the three highest ages should indeed be classified as potential outliers:
• For this example, we found Q1 = 32 and Q3 = 41.5 which give an IQR = 9.5
• Q1 – 1.5 (IQR) = 32 – (1.5)(9.5) = 17.75
• Q3 + 1.5 (IQR) = 41.5 + (1.5)(9.5) = 55.75
The 1.5(IQR) criterion tells us that any observation with an age that is below 17.75 or above 55.75 is considered a suspected outlier.
We therefore conclude that the observations with ages of 61, 74 and 80 should be flagged as suspected outliers in the distribution of ages. Note that since the smallest observation is 21, there are no suspected low outliers in this distribution.
The 3(IQR) Criterion for Outliers
An observation is considered an EXTREME outlier if it is:
• below Q1 – 3(IQR) or
• above Q3 + 3(IQR)
EXAMPLE: Best Actress Oscar Winners
We can now use the 3(IQR) criterion to check whether any of the three suspected outliers can be classified as extreme outliers:
• For this example, we found Q1 = 32 and Q3 = 41.5 which give an IQR = 9.5
• Q1 – 3 (IQR) = 32 – (3)(9.5) = 3.5
• Q3 + 3 (IQR) = 41.5 + (3)(9.5) = 70
The 3(IQR) criterion tells us that any observation that is below 3.5 or above 70 is considered an extreme outlier.
We therefore conclude that the observations with ages 74 and 80 should be flagged as extreme outliers in the distribution of ages.
Note that since there were no suspected outliers on the low end there can be no extreme outliers on the low end of the distribution. Thus there was no real need for us to calculate the low cutoff for extreme outliers, i.e. Q1 – 3(IQR) = 3.5.
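Both criteria are easy to express as a small function. The following Python sketch (illustration only; the function name is ours) reproduces the cutoffs and the flagged ages found above:

```python
def iqr_outlier_fences(q1, q3):
    """Return (suspected_low, suspected_high, extreme_low, extreme_high) cutoffs."""
    iqr = q3 - q1
    return (q1 - 1.5 * iqr, q3 + 1.5 * iqr,
            q1 - 3.0 * iqr, q3 + 3.0 * iqr)

ages = [34, 34, 26, 37, 42, 41, 35, 31, 41, 33, 30, 74, 33, 49, 38, 61,
        21, 41, 26, 80, 43, 29, 33, 35, 45, 49, 39, 34, 26, 25, 35, 33]

low15, high15, low3, high3 = iqr_outlier_fences(q1=32, q3=41.5)
print(low15, high15, low3, high3)                         # 17.75 55.75 3.5 70.0

suspected = [a for a in ages if a < low15 or a > high15]  # ages outside the 1.5(IQR) fences
extreme = [a for a in ages if a < low3 or a > high3]      # ages outside the 3(IQR) fences
print(sorted(suspected), sorted(extreme))                 # [61, 74, 80] [74, 80]
```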
See the histogram below, and consider the outliers individually.
• The observation with age 61 is visually much closer to the center of the data. We might have a difficult time deciding if this value is really an outlier using this graph alone.
• However, the ages of 74 and 80 are clearly far from the bulk of the distribution. We might feel very comfortable deciding these values are outliers based only on the graph.
Did I Get This?: Identifying Outliers using IQR Method
Understanding Outliers
Learning Objectives
LO 4.16: Discuss possible methods for handling outliers in practice.
We just practiced one way to ‘flag’ possible outliers. Why is it important to identify possible outliers, and how should they be dealt with? The answers to these questions depend on the reasons for the outlying values. Here are several possibilities:
1. Even though it is an extreme value, if an outlier can be understood to have been produced by essentially the same sort of physical or biological process as the rest of the data, and if such extreme values are expected to eventually occur again, then such an outlier indicates something important and interesting about the process you’re investigating, and it should be kept in the data.
2. If an outlier can be explained to have been produced under fundamentally different conditions from the rest of the data (or by a fundamentally different process), such an outlier can be removed from the data if your goal is to investigate only the process that produced the rest of the data.
3. An outlier might indicate a mistake in the data (like a typo, or a measuring error), in which case it should be corrected if possible or else removed from the data before calculating summary statistics or making inferences from the data (and the reason for the mistake should be investigated).
Here are examples of each of these types of outliers:
1. The following histogram displays the magnitude of 460 earthquakes in California, occurring in the year 2000, between August 28 and September 9:
Identifying the outlier: On the very far right edge of the display (beyond 4.8), we see a low bar; this represents one earthquake (because the bar has height of 1) that was much more severe than the others in the data.
Understanding the outlier: In this case, the outlier represents a much stronger earthquake, which is relatively rarer than the smaller quakes that happen more frequently in California.
How to handle the outlier: For many purposes, the relatively severe quakes represented by the outlier might be the most important (because, for instance, that sort of quake has the potential to do more damage to people and infrastructure). The smaller-magnitude quakes might not do any damage, or even be felt at all. So, for many purposes it could be important to keep this outlier in the data.
2. The following histogram displays the monthly percent return on the stock of Phillip Morris (a large tobacco company) from July 1990 to May 1997:
Identifying the outlier: On the display, we see a low bar far to the left of the others; this represents one month’s return (because the bar has height of 1), where the value of Phillip Morris stock was unusually low.
Understanding the outlier: The explanation for this particular outlier is that, in the early 1990s, there were highly-publicized federal hearings being conducted regarding the addictiveness of smoking, and there was growing public sentiment against the tobacco companies. The unusually low monthly value in the Phillip Morris dataset was due to public pressure against smoking, which negatively affected the company’s stock for that particular month.
How to handle the outlier: In this case, the outlier was due to unusual conditions during one particular month that aren’t expected to be repeated, and that were fundamentally different from the conditions that produced the values in all the other months. So in this case, it would be reasonable to remove the outlier, if we wanted to characterize the “typical” monthly return on Phillip Morris stock.
3. When archaeologists dig up objects such as pieces of ancient pottery, chemical analysis can be performed on the artifacts. The chemical content of pottery can vary depending on the type of clay as well as the particular manufacturing technique. The following histogram displays the results of one such actual chemical analysis, performed on 48 ancient Roman pottery artifacts from archaeological sites in Britain:
As appeared in Tubb, et al. (1980). “The analysis of Romano-British pottery by atomic absorption spectrophotometry.” Archaeometry, vol. 22, reprinted in Statistics in Archaeology by Michael Baxter, p. 21.
Identifying the outlier: On the display, we see a low bar far to the right of the others; this represents one piece of pottery (because the bar has a height of 1), which has a suspiciously high manganous oxide value.
Understanding the outlier: Based on comparison with other pieces of pottery found at the same site, and based on expert understanding of the typical content of this particular compound, it was concluded that the unusually high value was most likely a typo that was made when the data were published in the original 1980 paper (it was typed as “.394” but it was probably meant to be “.094”).
How to handle the outlier: In this case, since the outlier was judged to be a mistake, it should be removed from the data before further analysis. In fact, removing the outlier is useful not only because it’s a mistake, but also because doing so reveals important structure that was otherwise hidden. This feature is evident on the next display:
When the outlier is removed, the display is re-scaled so that now we can see the set of 10 pottery pieces that had almost no manganous oxide. These 10 pieces might have been made with a different potting technique, so identifying them as different from the rest is historically useful. This feature was only evident after the outlier was removed.
Reading: Outliers (≈ 1400 words)
Boxplots
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.4: Using appropriate graphical displays and/or numerical measures, describe the distribution of a quantitative variable in context: a) describe the overall pattern, b) describe striking deviations from the pattern
Learning Objectives
LO 4.7: Define and describe the features of the distribution of one quantitative variable (shape, center, spread, outliers).
Video
Video: Boxplots (2 videos, 7:02 total)
Note
Related SAS Tutorials
Related SPSS Tutorials
Introduction
Now we introduce another graphical display of the distribution of a quantitative variable, the boxplot.
The Five Number Summary
So far, in our discussion about measures of spread, some key players were:
• the extremes (min and Max), which provide the range covered by all the data; and
• the quartiles (Q1, M and Q3), which together provide the IQR, the range covered by the middle 50% of the data.
Recall that the combination of all five numbers (min, Q1, M, Q3, Max) is called the five number summary, and provides a quick numerical description of both the center and spread of a distribution.
EXAMPLE: Best Actress Oscar Winners
We will continue with the Best Actress Oscar winners example (Link to the Best Actress Oscar Winners data).
34 34 26 37 42 41 35 31 41 33 30 74 33 49 38 61 21 41 26 80 43 29 33 35 45 49 39 34 26 25 35 33
The five number summary of the age of Best Actress Oscar winners (1970-2001) is:
min = 21, Q1 = 32, M = 35, Q3 = 41.5, Max = 80
To sketch the boxplot we will need to know the 5-number summary as well as identify any outliers. We will also need to locate the largest and smallest values which are not outliers. The stemplot below might be helpful as it displays the data in order.
Learn By Doing: 5-Number Summary
Now that you understand what each of the five numbers means, you can appreciate how much information about the distribution is packed into the five-number summary. All this information can also be represented visually by using the boxplot.
The Boxplot
Learning Objectives
LO 4.17: Explain the process of creating a boxplot (including appropriate indication of outliers).
The boxplot graphically represents the distribution of a quantitative variable by visually displaying the five-number summary and any observation that was classified as a suspected outlier using the 1.5(IQR) criterion.
EXAMPLE: Constructing a boxplot
1. The central box spans from Q1 to Q3. In our example, the box spans from 32 to 41.5. Note that the width of the box has no meaning.
2. A line in the box marks the median M, which in our case is 35.
3. Lines extend from the edges of the box to the smallest and largest observations that were not classified as suspected outliers (using the 1.5xIQR criterion). In our example, we have no low outliers, so the bottom line goes down to the smallest observation, which is 21. Since we have three high outliers (61,74, and 80), the top line extends only up to 49, which is the largest observation that has not been flagged as an outlier.
4. Outliers are marked with asterisks (*).
To summarize: the following information is visually depicted in the boxplot:
• the five number summary (blue)
• the range and IQR (red)
• outliers (green)
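If you would like to experiment on your own, a boxplot like the one described here can be drawn with the matplotlib library in Python (not required for this course). Its default whisker rule is the same 1.5(IQR) criterion, although its quartile calculation may differ slightly from our by-hand method:

```python
import matplotlib.pyplot as plt

ages = [34, 34, 26, 37, 42, 41, 35, 31, 41, 33, 30, 74, 33, 49, 38, 61,
        21, 41, 26, 80, 43, 29, 33, 35, 45, 49, 39, 34, 26, 25, 35, 33]

fig, ax = plt.subplots()
ax.boxplot(ages, whis=1.5)   # whiskers extend to the last observations inside the 1.5(IQR) fences
ax.set_ylabel("Age at win (years)")
ax.set_title("Best Actress Oscar winners, 1970-2001")
plt.show()
```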
Learn By Doing: Boxplots
Did I Get This?: Boxplots
Side-By-Side (Comparative) Boxplots
Learning Objectives
LO 4.18: Compare and contrast distributions (of quantitative data) from two or more groups, and produce a brief summary, interpreting your findings in context.
As we learned earlier, the distribution of a quantitative variable is best represented graphically by a histogram. Boxplots are most useful when presented side-by-side for comparing and contrasting distributions from two or more groups.
EXAMPLE: Best Actress/Actor Oscar Winners
So far we have examined the age distributions of Oscar winners for males and females separately. It will be interesting to compare the age distributions of actors and actresses who won best acting Oscars. To do that we will look at side-by-side boxplots of the age distributions by gender.
Recall also that we found the five-number summary and means for both distributions. For the Best Actress dataset, we did the calculations by hand. For the Best Actor dataset, we used statistical software, and here are the results:
• Actors: min = 31, Q1 = 37.25, M = 42.5, Q3 = 50.25, Max = 76
• Actresses: min = 21, Q1 = 32, M = 35, Q3 = 41.5, Max = 80
Based on the graph and numerical measures, we can make the following comparison between the two distributions:
Center: The graph reveals that the age distribution of the males is higher than the females’ age distribution. This is supported by the numerical measures. The median age for females (35) is lower than for males (42.5). Actually, it should be noted that even the third quartile of the females’ distribution (41.5) is lower than the median age for males. We therefore conclude that in general, actresses win the Best Actress Oscar at a younger age than actors do.
Spread: Judging by the range of the data, there is much more variability in the females’ distribution (range = 59) than there is in the males’ distribution (range = 45). On the other hand, if we look at the IQR, which measures the variability only among the middle 50% of the distribution, we see more spread in the ages of males (IQR = 13) than females (IQR = 9.5). We conclude that among all the winners, the actors’ ages are more alike than the actresses’ ages. However, the middle 50% of the age distribution of actresses is more homogeneous than the actors’ age distribution.
Outliers: We see that we have outliers in both distributions. There is only one high outlier in the actors’ distribution (76, Henry Fonda, On Golden Pond), compared with three high outliers in the actresses’ distribution.
EXAMPLE: Temperature of Pittsburgh vs. San Francisco
In order to compare the average high temperatures of Pittsburgh to those in San Francisco we will look at the following side-by-side boxplots, and supplement the graph with the descriptive statistics of each of the two distributions.
Statistic Pittsburgh San Francisco
min 33.7 56.3
Q1 41.2 60.2
Median 61.4 62.7
Q3 77.75 65.35
Max 82.6 68.7
When looking at the graph, the similarities and differences between the two distributions are striking. Both distributions have roughly the same center (the medians are 61.4 for Pittsburgh and 62.7 for San Francisco). However, the temperatures in Pittsburgh have much larger variability than the temperatures in San Francisco (range: 48.9 vs. 12.4; IQR: 36.55 vs. 5.15).
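These spread measures come directly from the five-number summaries in the table above:
• $\text{Range} = \text{Max} - \text{min}$: Pittsburgh $82.6 - 33.7 = 48.9$, San Francisco $68.7 - 56.3 = 12.4$
• $\text{IQR} = Q3 - Q1$: Pittsburgh $77.75 - 41.2 = 36.55$, San Francisco $65.35 - 60.2 = 5.15$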
The practical interpretation of the results we obtained is that the weather in San Francisco is much more consistent than the weather in Pittsburgh, which varies a lot during the year. Also, because the temperatures in San Francisco vary so little during the year, knowing that the median temperature is around 63 is actually very informative. On the other hand, knowing that the median temperature in Pittsburgh is around 61 is practically useless, since temperatures vary so much during the year, and can get much warmer or much colder.
Note that this example provides more intuition about variability by interpreting small variability as consistency, and large variability as lack of consistency. Also, through this example we learned that the center of the distribution is more meaningful as a typical value for the distribution when there is little variability (or, as statisticians say, little “noise”) around it. When there is large variability, the center loses its practical meaning as a typical value.
Learn By Doing: Comparing Distributions with Boxplots
Let’s Summarize
• The five-number summary of a distribution consists of the median (M), the two quartiles (Q1, Q3) and the extremes (min, Max).
• The five-number summary provides a complete numerical description of a distribution. The median describes the center, and the extremes (which give the range) and the quartiles (which give the IQR) describe the spread.
• The boxplot graphically represents the distribution of a quantitative variable by visually displaying the five number summary and any observation that was classified as a suspected outlier using the 1.5(IQR) criterion. (Some software packages indicate extreme outliers with a different symbol)
• Boxplots are most useful when presented side-by-side to compare and contrast distributions from two or more groups.
The "Normal" Shape
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 4.4: Using appropriate graphical displays and/or numerical measures, describe the distribution of a quantitative variable in context: a) describe the overall pattern, b) describe striking deviations from the pattern
Learning Objectives
LO 4.7: Define and describe the features of the distribution of one quantitative variable (shape, center, spread, outliers).
Video
Video: The Normal Shape (5:34)
Related SAS Tutorials
Related SPSS Tutorials
The Standard Deviation Rule
Learning Objectives
LO 6.2: Apply the standard deviation rule to the special case of distributions having the “normal” shape.
In the previous activity we tried to help you develop better intuition about the concept of standard deviation. The rule that we are about to present, called “The Standard Deviation Rule” (also known as “The Empirical Rule”) will hopefully also contribute to building your intuition about this concept.
Consider a symmetric mound-shaped distribution:
For distributions having this shape (later we will define this shape as “normally distributed”), the following rule applies:
The Standard Deviation Rule:
• Approximately 68% of the observations fall within 1 standard deviation of the mean.
• Approximately 95% of the observations fall within 2 standard deviations of the mean.
• Approximately 99.7% (or virtually all) of the observations fall within 3 standard deviations of the mean.
The following picture illustrates this rule:
This rule provides another way to interpret the standard deviation of a distribution, and thus also provides a bit more intuition about it.
Interactive Applet: The Standard Deviation Rule
To see how this rule works in practice, consider the following example:
EXAMPLE: MALE HEIGHT
The following histogram represents height (in inches) of 50 males. Note that the data are roughly normal, so we would like to see how the Standard Deviation Rule works for this example.
Below are the actual data, and the numerical measures of the distribution. Note that the key players here, the mean and standard deviation, have been highlighted.
Statistic Height
N 50
Mean 70.58
StDev 2.858
min 64
Q1 68
Median 70.5
Q3 72
Max 77
To see how well the Standard Deviation Rule works for this case, we will find what percentage of the observations falls within 1, 2, and 3 standard deviations from the mean, and compare it to what the Standard Deviation Rule tells us this percentage should be.
It turns out the Standard Deviation Rule works very well in this example.
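If you would like to carry out this kind of check yourself, here is a minimal Python sketch (our own; the name heights is a placeholder for the 50 recorded heights, which are listed in the course materials):

```python
import numpy as np

def sd_rule_check(x):
    """Percent of observations within 1, 2, and 3 standard deviations of the mean."""
    x = np.asarray(x, dtype=float)
    mean, sd = x.mean(), x.std(ddof=1)        # sample mean and standard deviation
    for k, expected in zip((1, 2, 3), (68, 95, 99.7)):
        within = np.mean(np.abs(x - mean) <= k * sd) * 100
        print(f"within {k} SD: {within:.1f}%  (rule says about {expected}%)")

# sd_rule_check(heights)   # 'heights' would hold the 50 male heights from this example
```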
The following example illustrates how we can apply the Standard Deviation Rule to variables whose distribution is known to be approximately normal.
EXAMPLE: Length of Human Pregnancy
The length of the human pregnancy is not fixed. It is known that it varies according to a distribution which is roughly normal, with a mean of 266 days, and a standard deviation of 16 days. (Source: Figures are from Moore and McCabe, Introduction to the Practice of Statistics).
First, let’s apply the Standard Deviation Rule to this case by drawing a picture:
We can now use the information provided by the Standard Deviation Rule about the distribution of the length of human pregnancy to answer some questions. For example:
• Question: How long do the middle 95% of human pregnancies last?
• Answer: The middle 95% of pregnancies last within 2 standard deviations of the mean, or in this case 234-298 days.
• Question: What percent of pregnancies last more than 298 days?
• Answer: To answer this, consider the following picture: since 95% of the pregnancies last between 234 and 298 days, the remaining 5% of pregnancies last either less than 234 days or more than 298 days. Since the normal distribution is symmetric, these 5% of pregnancies are divided evenly between the two tails, and therefore 2.5% of pregnancies last more than 298 days.
• Question: How short are the shortest 2.5% of pregnancies?
• Answer: Using the same reasoning as in the previous question, the shortest 2.5% of human pregnancies last less than 234 days.
• Question: What percent of human pregnancies last more than 266 days?
• Answer: Since 266 days is the mean, approximately 50% of pregnancies last more than 266 days.
Here is a complete picture of the information provided by the standard deviation rule.
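In numbers, the three intervals described by the rule for this distribution are:
• $266 \pm 16$, that is, 250 to 282 days, containing approximately 68% of pregnancies;
• $266 \pm 2(16)$, that is, 234 to 298 days, containing approximately 95% of pregnancies;
• $266 \pm 3(16)$, that is, 218 to 314 days, containing approximately 99.7% of pregnancies.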
Did I Get This?: Standard Deviation Rule
Visual Methods of Assessing Normality
Learning Objectives
LO 6.3: Use histograms and QQ-plots (or Normal Probability Plots) to visually assess the normality of distributions of quantitative variables.
The normal distribution exists in theory but rarely, if ever, in real life. Histograms provide an excellent graphical display to help us assess normality. We can add a “normal curve” to the histogram which shows the normal distribution having the same mean and standard deviation as our sample. The more closely the histogram follows this curve, the more nearly normal the sample.
In the examples below, the graph on the top is approximately normally distributed whereas the graph on the bottom is clearly skewed right.
Unfortunately, this method does not let us quantify how far the distribution is from normal, but it is helpful for making qualitative judgments about whether the data approximate the normal curve.
Another common graph for assessing normality is the Q-Q plot (or Normal Probability Plot). In these graphs, the percentiles or quantiles of the theoretical distribution (in this case the standard normal distribution) are plotted against those from the data. If the data match the theoretical distribution, the points will fall close to a straight line. The graph below shows a distribution which closely follows a normal model.
Note: QQ-plots are not scatterplots (which we will discuss soon); they only display information about one quantitative variable, graphed against the theoretical or expected values from a normal distribution with the same mean and standard deviation as our data. Other distributions can also be used.
In most cases the distributions that you encounter will only be approximations of the normal curve, or they will not resemble the normal distribution at all! However, it can be important to consider how well the data being analyzed approximates the normal curve since this distribution is a key assumption of many statistical analyses.
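As a rough sketch of how such displays can be produced (our own illustration, using simulated data rather than the datasets pictured in this section, and assuming numpy, scipy, and matplotlib are available):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=70, scale=3, size=50)        # made-up sample (e.g., 50 heights)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Histogram with the normal curve that has the same mean and SD as the sample
ax1.hist(x, bins=10, density=True)
grid = np.linspace(x.min() - 1, x.max() + 1, 200)
ax1.plot(grid, stats.norm.pdf(grid, x.mean(), x.std(ddof=1)))
ax1.set_title("Histogram with normal curve")

# QQ-plot: sample quantiles plotted against theoretical normal quantiles
stats.probplot(x, dist="norm", plot=ax2)
ax2.set_title("QQ-plot")

plt.tight_layout()
plt.show()
```

If the points in the QQ-plot fall close to the reference line, the data are consistent with a normal model.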
Here are a few more examples:
EXAMPLE: Some Real Data
The following gives the QQ-plot, histogram and boxplot for variables from a dataset from a population of women who were at least 21 years old, of Pima Indian heritage and living near Phoenix, Arizona, who were tested for diabetes according to World Health Organization criteria. The data were collected by the US National Institute of Diabetes and Digestive and Kidney Diseases. We used the 532 complete records after dropping the (mainly missing) data on serum insulin.
Body Mass Index is definitely unimodal and symmetric and could easily have come from a population which is normally distributed.
The Diabetes Pedigree Function scores were unimodal and skewed right. These data do not seem to have come from a population which is normally distributed.
The Triceps Skin Fold Thickness is basically symmetric with one extreme outlier (and one potential but mild outlier).
Be careful not to call such a distribution “skewed right” as it is only the single outlier which really shows that pattern here. At a minimum remove the outlier and recreate the graphs to see how skewed the rest of the data might be.
EXAMPLE: Randomly Generated Data
Since there were no skewed left examples in the real data, here are two randomly generated skewed left distributions. Notice that the first is less skewed left than the second and this is indicated clearly in all three plots.
Comments:
• Even if the population is exactly normally distributed, samples from this population can appear non-normal, especially for small sample sizes. See this document containing 21 samples of size n = 50 from a normal distribution with a mean of 200 and a standard deviation of 30. The samples that produce results which are skewed or otherwise seemingly non-normal are highlighted, but even among those not highlighted, notice the variation in shapes seen: Normal Samples
• The standard deviation rule can also help in assessing normality in that the closer the percentage of data points within 1, 2, and 3 standard deviations is to that of the rule, the closer the data itself fits a normal distribution.
• In our example of male heights, we see that the histogram resembles a normal distribution and the sample percentages are very close to that predicted by the standard deviation rule.
Did I Get This?: Assessing Normality
(Optional) Reading: The Normal Distribution (≈ 500 words)
Standardized Scores (Z-Scores)
Learning Objectives
LO 4.14: Define and interpret measures of position (percentiles, quartiles, the five-number summary, z-scores).
We have already learned the standard deviation rule, which, for normally distributed data, provides approximations for the proportion of data values within 1, 2, and 3 standard deviations of the mean. From this we know that approximately 5% of the data values would be expected to fall OUTSIDE 2 standard deviations of the mean.
If we calculate the standardized scores (or z-scores) for our data, it would be easy to identify these unusually large or small values in our data. To calculate a z-score, recall that we take the individual value and subtract the mean and then divide this difference by the standard deviation.
$z_{i}=\dfrac{x_{i}-\bar{x}}{S}$
For any individual, the z-score tells us how many standard deviations the raw score for that individual deviates from the mean and in what direction. A positive z-score indicates the individual is above average and a negative z-score indicates the individual is below average.
Comments:
• Standardized scores can be used to help identify potential outliers
• For approximately normal distributions, z-scores greater than 2 or less than -2 are rare (will happen approximately 5% of the time).
• For any distribution, z-scores greater than 4 or less than -4 are rare (will happen less than 6.25% of the time; see the note after these comments).
• Standardized scores, along with other measures of position, are useful when comparing individuals in different datasets since the comparison takes into account the relative position of the individuals in their dataset. With z-scores, we can tell which individual has a relatively higher or lower position in their respective dataset.
• Later in the course, we will see that this idea of standardizing is used often in statistical analyses.
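The 6.25% figure mentioned in the comments above comes from Chebyshev’s inequality, which holds for any distribution regardless of its shape: the proportion of observations that fall more than $k$ standard deviations from the mean is at most $1/k^{2}$. For $k = 4$ this bound is $1/4^{2} = 1/16 = 6.25\%$; for data with the normal shape, the standard deviation rule gives much smaller percentages.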
EXAMPLE: Best Actress Oscar Winners
We will continue with the Best Actress Oscar winners example (Link to the Best Actress Oscar Winners data).
34 34 26 37 42 41 35 31 41 33 30 74 33 49 38 61 21 41 26 80 43 29 33 35 45 49 39 34 26 25 35 33
In previous examples, we identified three observations (ages 61, 74, and 80) as outliers, two of which (74 and 80) were classified as extreme outliers.
The mean of this sample is 38.5 and the standard deviation is 12.95.
• The z-score for the actress with age = 80 is
$z=\dfrac{80-38.5}{12.95} = 3.20$
Thus, among our female Oscar winners from our sample, this actress is 3.20 standard deviations older than average.
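Here is a minimal Python sketch (our own illustration using the same 32 ages) that computes all of the z-scores and flags the unusually large or small values:

```python
import numpy as np

ages = np.array([34, 34, 26, 37, 42, 41, 35, 31, 41, 33, 30, 74, 33, 49, 38, 61,
                 21, 41, 26, 80, 43, 29, 33, 35, 45, 49, 39, 34, 26, 25, 35, 33])

z = (ages - ages.mean()) / ages.std(ddof=1)                  # standardized scores
print(round(ages.mean(), 1), round(ages.std(ddof=1), 2))     # about 38.5 and 12.95
print(ages[np.abs(z) > 2])                                   # [74 80]: the two extreme outliers
print(round(float(z[ages == 80][0]), 2))                     # about 3.20, as computed above
# Note: the age of 61 (z of about 1.74) is flagged by the 1.5(IQR) criterion
# but not by the |z| > 2 rule of thumb; different outlier rules can disagree.
```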
Did I Get This?: Z-Scores
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Video
Video: Role-Type Classification (Two Parts; 9:46 total time)
While it is fundamentally important to know how to describe the distribution of a single variable, most studies pose research questions that involve exploring the relationship between two (or more) variables. These research questions are investigated using a sample from the population of interest.
Reading: Form a Research Question (short)
Here are a few examples of such research questions with the two variables highlighted:
EXAMPLES:
1. Is there a relationship between gender and test scores on a particular standardized test? Other ways of phrasing the same research question:
• Is performance on the test related to gender?
• Is there a gender effect on test scores?
• Are there differences in test scores between males and females?
2. How is the number of calories in a hot dog related to (or affected by) the type of hot dog (beef, meat or poultry)? In other words, are there differences in the number of calories among the three types of hot dogs?
3. Is there a relationship between the type of light a baby sleeps with (no light, night-light, lamp) and whether or not the child develops nearsightedness?
4. Are the smoking habits of a person (yes, no) related to the person’s gender?
5. How well can we predict a student’s freshman year GPA from his/her SAT score?
6. What is the relationship between driver’s age and sign legibility distance (the maximum distance at which the driver can read a sign)?
7. Is there a relationship between the time a person has practiced driving while having a learner’s permit, and whether or not this person passed the driving test?
8. Can you predict a person’s favorite type of music (classical, rock, jazz) based on his/her IQ level?
Role of a Variable in a Study
Learning Objectives
LO 4.19: For a data analysis situation involving two variables, identify the role of each variable in the scenario.
In most studies involving two variables, each of the variables has a role. We distinguish between:
• the response variable — the outcome of the study; and
• the explanatory variable — the variable that claims to explain, predict or affect the response.
As we mentioned earlier the variable we wish to predict is commonly called the dependent variable, the outcome variable, or the response variable. Any variable we are using to predict (or explain differences) in the outcome is commonly called an explanatory variable, an independent variable, a predictor variable, or a covariate.
Comment:
• Typically the explanatory variable is denoted by X, and the response variable by Y.
Now let’s go back to some of the examples and classify the two relevant variables according to their roles in the study:
EXAMPLE 1:
Is there a relationship between gender and test scores on a particular standardized test? Other ways of phrasing the same research question:
• Is performance on the test related to gender?
• Is there a gender effect on test scores?
• Are there differences in test scores between males and females?
We want to explore whether the outcome of the study — the score on a test — is affected by the test-taker’s gender. Therefore:
Gender is the explanatory variable
Test score is the response variable
EXAMPLE 3:
Is there a relationship between the type of light a baby sleeps with (no light, night-light, lamp) and whether or not the child develops nearsightedness?
In this study we explore whether the nearsightedness of a person can be explained by the type of light that person slept with as a baby. Therefore:
Light type is the explanatory variable
Nearsightedness is the response variable
EXAMPLE 5:
How well can we predict a student’s freshman year GPA from his/her SAT score?
Here we are examining whether a student’s SAT score is a good predictor for the student’s GPA freshman year. Therefore:
SAT score is the explanatory variable
GPA of freshman year is the response variable
EXAMPLE 7:
Is there a relationship between the time a person has practiced driving while having a learner’s permit, and whether or not this person passed the driving test?
Here we are examining whether a person’s outcome on the driving test (pass/fail) can be explained by the length of time this person has practiced driving prior to the test. Therefore:
Time is the explanatory variable
Driving test outcome is the response variable
Now, using the same reasoning, the following exercise will help you to classify the two variables in the other examples.
Learn By Doing: Role Classification
Many Students Wonder: Role Classification
Question: Is the role classification of variables always clear? In other words, is it always clear which of the variables is the explanatory and which is the response?
Answer: No. There are studies in which the role classification is not really clear. This mainly happens in cases when both variables are categorical or both are quantitative. An example is a study that explores the relationship between students’ SAT Math and SAT Verbal scores. In cases like this, any classification choice would be fine (as long as it is consistent throughout the analysis).
Role-Type Classification
Learning Objectives
LO 4.20: Classify a data analysis situation involving two variables according to the “role-type classification.”
If we further classify each of the two relevant variables according to type (categorical or quantitative), we get the following 4 possibilities for “role-type classification”
1. Categorical explanatory and quantitative response (Case CQ)
2. Categorical explanatory and categorical response (Case CC)
3. Quantitative explanatory and quantitative response (Case QQ)
4. Quantitative explanatory and categorical response (Case QC)
This role-type classification can be summarized and easily visualized in the following table (note that the explanatory variable is always listed first):
This role-type classification serves as the infrastructure for this entire section. In each of the 4 cases, different statistical tools (displays and numerical measures) should be used in order to explore the relationship between the two variables.
This suggests the following important principle:
PRINCIPLE: When confronted with a research question that involves exploring the relationship between two variables, the first and most crucial step is to determine which of the 4 cases represents the data structure of the problem. In other words, the first step should be classifying the two relevant variables according to their role and type, and only then can we determine what statistical tools should be used to analyze them.
Now let’s go back to our 8 examples and determine which of the 4 cases represents the data structure of each:
EXAMPLE 1:
Is there a relationship between gender and test scores on a particular standardized test? Other ways of phrasing the same research question:
• Is performance on the test related to gender?
• Is there a gender effect on test scores?
• Are there differences in test scores between males and females?
We want to explore whether the outcome of the study — the score on a test — is affected by the test-taker’s gender.
Gender is the explanatory variable and it is categorical.
Test score is the response variable and it is quantitative.
Therefore this is an example of case CQ.
EXAMPLE 3:
Is there a relationship between the type of light a baby sleeps with (no light, night-light, lamp) and whether or not the child develops nearsightedness?
In this study we explore whether the nearsightedness of a person can be explained by the type of light that person slept with as a baby.
Light type is the explanatory variable and it is categorical.
Nearsightedness is the response variable and it is categorical.
Therefore this is an example of case CC.
EXAMPLE 5:
How well can we predict a student’s freshman year GPA from his/her SAT score?
Here we are examining whether a student’s SAT score is a good predictor for the student’s GPA freshman year.
SAT score is the explanatory variable and it is quantitative.
GPA of freshman year is the response variable and it is quantitative.
Therefore this is an example of case QQ.
EXAMPLE 7:
Is there a relationship between the time a person has practiced driving while having a learner’s permit, and whether or not this person passed the driving test?
Here we are examining whether a person’s outcome on the driving test (pass/fail) can be explained by the length of time this person has practiced driving prior to the test.
Time is the explanatory variable and it is quantitative.
Driving test outcome is the response variable and it is categorical.
Therefore this is an example of case QC.
Now you complete the rest…
Learn By Doing: Role-Type Classification
The remainder of this section on exploring relationships will be guided by this role-type classification. In the next three parts we will elaborate on cases C→Q, C→C, and Q→Q. More specifically, we will learn the appropriate statistical tools (visual display and numerical measures) that will allow us to explore the relationship between the two variables in each of the cases. Case Q→C will not be discussed in this course, and is typically covered in more advanced courses. The section will conclude with a discussion on causal relationships.
Did I Get This?: Role-Type Classification
(Optional) Outside Reading: Look at the Data! (≈1200 words)
(Optional) Outside Reading: Creating Data Files (≈1200 words)
This summary provides a quick recap of the material in the Exploratory Data Analysis unit. Please note that this summary does not provide complete coverage of the material, only lists the main points.
• The purpose of exploratory data analysis (EDA) is to convert the available data from their raw form to an informative one, in which the main features of the data are illuminated.
• When performing EDA, we should always:
• use visual displays (graphs or tables) plus numerical measures.
• describe the overall pattern and mention any striking deviations from that pattern.
• interpret the results we find in context.
• When examining the distribution of a single variable, we distinguish between a categorical variable and a quantitative variable.
• The distribution of a categorical variable is summarized using:
• Display: pie-chart or bar-chart (variation: pictogram → can be misleading — beware!)
• Numerical measures: category (group) percentages.
• The distribution of a quantitative variable is summarized using:
• Display: histogram (or stemplot, mainly for small data sets). When describing the distribution as displayed by the histogram, we should describe the:
• Overall pattern → shape, center, spread.
• Deviations from the pattern → outliers.
• Numerical measures: descriptive statistics (measure of center plus measure of spread):
• If distribution is symmetric with no outliers, use mean and standard deviation.
• Otherwise, use the five-number summary, in particular, median and IQR (inter-quartile range).
• The five-number summary and the 1.5(IQR) Criterion for detecting outliers are the ingredients we need to build the boxplot. Boxplots are most effective when used side-by-side for comparing distributions (see also case C→Q in examining relationships).
• In the special case of a distribution having the normal shape, the Standard Deviation Rule applies. This rule tells us approximately what percent of the observations fall within 1, 2, or 3 standard deviations away from the mean. In particular, when a distribution is approximately normal, almost all the observations (99.7%) fall within 3 standard deviations of the mean.
• When examining the relationship between two variables, the first step is to classify the two relevant variables according to their role and type (the role-type classification), and only then to determine the appropriate tools for summarizing the data. (We don’t deal with case Q→C in this course.)
• Case C→Q: Exploring the relationship amounts to comparing the distributions of the quantitative response variable for each category of the explanatory variable. To do this, we use:
• Display: side-by-side boxplots.
• Numerical measures: descriptive statistics of the response variable, for each value (category) of the explanatory variable separately.
• Case C→C: Exploring the relationship amounts to comparing the distributions of the categorical response variable, for each category of the explanatory variable. To do this, we use:
• Display: two-way table.
• Numerical measures: conditional percentages (of the response variable for each value (category) of the explanatory variable separately).
• Case Q→Q: We examine the relationship using:
• Display: scatterplot. When describing the relationship as displayed by the scatterplot, be sure to consider:
• Overall pattern → direction, form, strength.
• Deviations from the pattern → outliers.
Labeling the scatterplot (including a relevant third categorical variable in our analysis) might add some insight into the nature of the relationship.
In the special case that the scatterplot displays a linear relationship (and only then), we supplement the scatterplot with:
• Numerical measures: Pearson’s correlation coefficient (r) measures the direction and, more importantly, the strength of the linear relationship. The closer r is to 1 (or -1), the stronger the positive (or negative) linear relationship. r is unitless, influenced by outliers, and should be used only as a supplement to the scatterplot.
• When the relationship is linear (as displayed by the scatterplot, and supported by the correlation r), we can summarize the linear pattern using the least squares regression line. Remember that:
• The slope of the regression line tells us the average change in the response variable that is associated with a 1-unit increase in the explanatory variable.
• When using the regression line for predictions, you should beware of extrapolation.
• When examining the relationship between two variables (regardless of the case), any observed relationship (association) does not imply causation, due to the possible presence of lurking variables.
• When we include a lurking variable in our analysis, we might need to rethink the direction of the relationship → Simpson’s paradox.
CO-1: Describe the roles biostatistics serves in the discipline of public health.
Video
Video: Producing Data Introduction (4:35)
Review of the Big Picture
Learning Objectives
LO 1.3: Identify and differentiate between the components of the Big Picture of Statistics
Recall “The Big Picture,” the four-step process that encompasses statistics: data production, exploratory data analysis, probability, and inference.
In the previous unit, we considered exploratory data analysis — the discovery of patterns in the raw data. In this unit, we go back and examine the first step in the process: the production of data. This unit has two main topics: sampling and study design.
Introduction to Producing Data
In the first step of the statistics “Big Picture,” we produce data. The production of data has two stages.
• First we need to choose the individuals from the population that will be included in the sample.
• Then, once we have chosen the individuals, we need to collect data from them.
The first stage is called sampling, and the second stage is called study design.
As we have seen, exploratory data analysis seeks to illuminate patterns in the data by summarizing the distributions of quantitative or categorical variables, or the relationships between variables.
In the final part of the course, statistical inference, we will use the summaries about variables or relationships that were obtained in the study to draw conclusions about what is true for the entire population from which the sample was chosen.
For this process to “work” reliably, it is essential that the sample be truly representative of the larger population. For example, if researchers want to determine whether the antidepressant Zoloft is effective for teenagers in general, then it would not be a good idea to only test it on a sample of teens who have been admitted to a psychiatric hospital, because their depression may be more severe, and less treatable, than that of teens in general.
Thus, the very first stage in data production, sampling, must be carried out in such a way that the sample really does represent the population of interest.
Choosing a sample is only the first stage in producing data, so it is not enough to just make sure that the sample is representative. We must also remember that our summaries of variables and their relationships are only valid if these have been assessed properly.
For instance, if researchers want to test the effectiveness of Zoloft versus Prozac for treating teenagers, it would not be a good idea to simply compare levels of depression for a group of teenagers who happen to be using Zoloft to levels of depression for a group of teenagers who happen to be using Prozac. If they discover that one group of patients turns out to be less depressed, it could just be that teenagers with less serious depression are more likely to be prescribed one of the drugs over the other.
In situations like this, the design for producing data must be considered carefully. Studies should be designed to discover what we want to know about the variables of interest for the individuals in the sample.
In particular, if what you want to know about the variables is whether there is a causal relationship between them, special care should be given to the design of the study (since, as we know, association does not imply causation).
In this unit, we will focus on these two stages of data production: obtaining a sample, and designing a study.
Throughout this unit, we establish guidelines for the ideal production of data. While we will hold these guidelines as standards to strive for, realistically it is rarely possible to carry out a study that is completely free of flaws. Common sense must frequently be applied in order to decide which imperfections we can live with and which ones could completely undermine a study’s results.
A sample that produces data that is not representative because of the systematic under- or over-estimation of the values of the variable of interest is called biased. Bias may result from either a poor sampling plan or from a poor design for evaluating the variable of interest.
We begin this unit by focusing on what constitutes a good — or a bad — sampling plan after which we will discuss study design.
CO-3: Describe the strengths and limitations of designed experiments and observational studies.
Learning Objectives
LO 3.2: Explain how the study design impacts the types of conclusions that can be drawn.
Learning Objectives
LO 3.3: Identify and define key features of experimental design (randomized, blind etc.).
Video
Video: Causation and Experiments (8:57)
Recall that in an experiment, it is the researchers who assign values of the explanatory variable to the participants. The key to ensuring that individuals differ only with respect to explanatory values — which is also the key to establishing causation — lies in the way this assignment is carried out. Let’s return to the smoking cessation study as a context to explore the essential ingredients of experimental design.
EXAMPLE:
In our discussion of the distinction between observational studies and experiments, we described the following experiment: collect a representative sample of 1,000 individuals from the population of smokers who are just now trying to quit. We divide the sample into 4 groups of 250 and instruct each group to use a different method to quit. One year later, we contact the same 1,000 individuals and determine whose attempts succeeded while using our designated method.
This was an experiment, because the researchers themselves determined the values of the explanatory variable of interest for the individuals studied, rather than letting them choose.
We will begin by using the context of this smoking cessation example to illustrate the specialized vocabulary of experiments. First of all, the explanatory variable, or factor, in this case is the method used to quit. The different imposed values of the explanatory variable, or treatments (common abbreviation: ttt), consist of the four possible quitting methods. The groups receiving different treatments are called treatment groups. The group that tries to quit without drugs or therapy could be called the control group — those individuals on whom no specific treatment was imposed. Ideally, the subjects (human participants in an experiment) in each treatment group differ from those in the other treatment groups only with respect to the treatment (quitting method). As mentioned in our discussion of why lurking variables prevent us from establishing causation in observational studies, eliminating all other differences among treatment groups will be the key to asserting causation via an experiment. How can this be accomplished?
Randomized Controlled Experiments
Your intuition may already tell you, correctly, that random assignment to treatments is the best way to prevent treatment groups of individuals from differing from each other in ways other than the treatment assigned. Either computer software or tables can be utilized to accomplish the random assignment. The resulting design is called a randomized controlled experiment, because researchers control values of the explanatory variable with a randomization procedure. Under random assignment, the groups should not differ significantly with respect to any potential lurking variable. Then, if we see a relationship between the explanatory and response variables, we have evidence that it is a causal one.
Comments:
• Note that in a randomized controlled experiment, a randomization procedure may be used in two phases. First, a sample of subjects is collected. Ideally it would be a random sample so that it would be perfectly representative of the entire population.
• Often researchers have no choice but to recruit volunteers. Using volunteers may help to offset one of the drawbacks to experimentation which will be discussed later, namely the problem of noncompliance.
• Second, we assign individuals randomly to the treatment groups to ensure that the only difference between them will be due to the treatment and we can get evidence of causation. At this stage, randomization is vital.
Let’s discuss some other issues related to experimentation.
Inclusion of a Control Group
A common misconception is that an experiment must include a control group of individuals receiving no treatment. There may be situations where a complete lack of treatment is not an option, or where including a control group is ethically questionable, or where researchers explore the effects of a treatment without making a comparison. Here are a few examples:
EXAMPLE:
If doctors want to conduct an experiment to determine whether Prograf or Cyclosporin is more effective as an immunosuppressant, they could randomly assign transplant patients to take one or the other of the drugs. It would, of course, be unethical to include a control group of patients not receiving any immunosuppressants.
EXAMPLE:
Recently, experiments have been conducted in which the treatment is a highly invasive brain surgery. The only way to have a legitimate control group in this case is to randomly assign half of the subjects to undergo the entire surgery except for the actual treatment component (inserting stem cells into the brain). This, of course, is also ethically problematic (but, believe it or not, is being done).
EXAMPLE:
There may even be an experiment designed with only a single treatment. For example, makers of a new hair product may ask a sample of individuals to treat their hair with that product over a period of several weeks, then assess how manageable their hair has become. Such a design is clearly flawed because of the absence of a comparison group, but it is still an experiment because use of the product has been imposed by its manufacturers, rather than chosen naturally by the individuals. A flawed experiment is nevertheless an experiment.
Comment:
• The word control is used in at least three different senses.
• In the context of observational studies, we control for a confounding variable by separating it out.
• Referring to an experiment as a controlled experiment stresses that the values of the experiment’s explanatory variables (factors) have been assigned by researchers, as opposed to having occurred naturally.
• In the context of experiments, the control group consists of subjects who do not receive a treatment, but who are otherwise handled identically to those who do receive the treatment.
Learn By Doing: Random Assignment to Treatment Groups (Software)
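As a rough idea of how software can carry out such an assignment (our own sketch, not the activity linked above), randomly assigning the 1,000 smokers to the four quitting methods might look like this in Python:

```python
import numpy as np

rng = np.random.default_rng(42)              # seeded only so the example is reproducible
subjects = np.arange(1, 1001)                # ID numbers of the 1,000 recruited smokers
treatments = ["drugs", "therapy", "drugs + therapy", "neither"]

shuffled = rng.permutation(subjects)         # put the subjects in a random order
groups = np.array_split(shuffled, 4)         # then split into four groups of 250

for ttt, group in zip(treatments, groups):
    print(ttt, len(group), group[:5], "...")
```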
Blind and Double-Blind Experiments
Suppose the experiment about methods for quitting smoking were carried out with randomized assignments of subjects to the four treatments, and researchers determined that the percentage succeeding with the combination drug/therapy method was highest, and the percentage succeeding with no drugs or therapy was lowest. In other words, suppose there is clear evidence of an association between method used and success rate. Could it be concluded that the drug/therapy method causes success more than trying to quit without using drugs or therapy? Perhaps.
Although randomized controlled experiments do give us a better chance of pinning down the effects of the explanatory variable of interest, they are not completely problem-free. For example, suppose that the manufacturers of the smoking cessation drug had just launched a very high-profile advertising campaign with the goal of convincing people that their drug is extremely effective as a method of quitting.
Even with a randomized assignment to treatments, there would be an important difference among subjects in the four groups: those in the drug and combination drug/therapy groups would perceive their treatment as being a promising one, and may be more likely to succeed just because of added confidence in the success of their assigned method. Therefore, the ideal circumstance is for the subjects to be unaware of which treatment is being administered to them: in other words, subjects in an experiment should be (if possible) blind to which treatment they received.
How could researchers arrange for subjects to be blind when the treatment involved is a drug? They could administer a placebo pill to the control group, so that there are no psychological differences between those who receive the drug and those who do not. The word “placebo” is derived from a Latin word that means “to please.” It is so named because of the natural tendency of human subjects to improve just because of the “pleasing” idea of being treated, regardless of the benefits of the treatment itself. When patients improve because they are told they are receiving treatment, even though they are not actually receiving treatment, this is known as the placebo effect.
Next, how could researchers arrange for subjects to be blind when the treatment involved is a type of therapy? This is more problematic. Clearly, subjects must be aware of whether they are undergoing some type of therapy or not. There is no practical way to administer a “placebo” therapy to some subjects. Thus, the relative success of the drug/therapy treatment may be due to subjects’ enhanced confidence in the success of the method they happened to be assigned. We may feel fairly certain that the method itself causes success in quitting, but we cannot be absolutely sure.
When the response of interest is fairly straightforward, such as giving up cigarettes or not, then recording its values is a simple process in which researchers need not use their own judgment in making an assessment. There are many experiments where the response of interest is less definite, such as whether or not a cancer patient has improved, or whether or not a psychiatric patient is less depressed. In such cases, it is important for researchers who evaluate the response to be blind to which treatment the subject received, in order to prevent the experimenter effect from influencing their assessments. If neither the subjects nor the researchers know who was assigned what treatment, then the experiment is called double-blind.
The most reliable way to determine whether the explanatory variable is actually causing changes in the response variable is to carry out a randomized controlled double-blind experiment. Depending on the variables of interest, such a design may not be entirely feasible, but the closer researchers get to achieving this ideal design, the more convincing their claims of causation (or lack thereof) are.
Did I Get This?: Experiments
Pitfalls in Experimentation
Some of the inherent difficulties that may be encountered in experimentation are the Hawthorne effect, lack of realism, noncompliance, and treatments that are unethical, impossible, or impractical to impose.
We already introduced a hypothetical experiment to determine if people tend to snack more while they watch TV:
• Recruit participants for the study.
• While they are presumably waiting to be interviewed, half of the individuals sit in a waiting room with snacks available and a TV on. The other half sit in a waiting room with snacks available and no TV, just magazines.
• Researchers determine whether people consume more snacks in the TV setting.
Suppose that, in fact, the subjects who sat in the waiting room with the TV consumed more snacks than those who sat in the room without the TV. Could we conclude that in their everyday lives, and in their own homes, people eat more snacks when the TV is on? Not necessarily, because people’s behavior in this very controlled setting may be quite different from their ordinary behavior.
If they suspect their snacking behavior is being observed, they may alter their behavior, either consciously or subconsciously. This phenomenon, whereby people in an experiment behave differently from how they would normally behave, is called the Hawthorne effect. Even if they don’t suspect they are being observed in the waiting room, the relationship between TV and snacking in the waiting room might not be representative of what it is in real life.
One of the greatest advantages of an experiment — that researchers take control of the explanatory variable — can also be a disadvantage in that it may result in a rather unrealistic setting. Lack of realism (also called lack of ecological validity) is a possible drawback to the use of an experiment rather than an observational study to explore a relationship. Depending on the explanatory variable of interest, it may be quite easy or it may be virtually impossible to take control of the variable’s values and still maintain a fairly natural setting.
In our hypothetical smoking cessation example, both the observational study and the experiment were carried out on a random sample of 1,000 smokers with intentions to quit. In the case of the observational study, it would be reasonably feasible to locate 1,000 such people in the population at large, identify their intended method, and contact them again a year later to establish whether they succeeded or not.
In the case of the experiment, it is not so easy to take control of the explanatory variable (cessation method) merely by telling all 1,000 subjects what method they must use. Noncompliance (failure to submit to the assigned treatment) could enter in on such a large scale as to render the results invalid.
In order to ensure that the subjects in each treatment group actually undergo the assigned treatment, researchers would need to pay for the treatment and make it easily available. The cost of doing that for a group of 1,000 people would go beyond the budget of most researchers.
Even if the drugs or therapy were paid for, it is very unlikely that most of the subjects contacted at random would be willing to use a method not of their own choosing, but dictated by the researchers. From a practical standpoint, such a study would most likely be carried out on a smaller group of volunteers, recruited via flyers or some other sort of advertisement.
The fact that they are volunteers might make them somewhat different from the larger population of smokers with intentions to quit, but it would reduce the more worrisome problem of non-compliance. Volunteers may have a better overall chance of success, but if researchers are primarily concerned with which method is most successful, then the relative success of the various methods should be roughly the same for the volunteer sample as it would be for the general population, as long as the methods are randomly assigned. Thus, the most vital stage for randomization in an experiment is during the assignment of treatments, rather than the selection of subjects.
There are other, more serious drawbacks to experimentation, as illustrated in the following hypothetical examples:
EXAMPLE:
Suppose researchers want to determine if the drug Ecstasy causes memory loss. One possible design would be to take a group of volunteers and randomly assign some to take Ecstasy on a regular basis, while the others are given a placebo. Test them periodically to see if the Ecstasy group experiences more memory problems than the placebo group.
The obvious flaw in this experiment is that it is unethical (and actually also illegal) to administer a dangerous drug like Ecstasy, even if the subjects are volunteers. The only feasible design to seek answers to this particular research question would be an observational study.
EXAMPLE:
Suppose researchers want to determine whether females wash their hair more frequently than males.
It is impossible to assign some subjects to be female and others male, and so an experiment is not an option here. Again, an observational study would be the only way to proceed.
EXAMPLE:
Suppose researchers want to determine whether being in a lower income bracket may be responsible for obesity in women, at least to some extent, because they can’t afford more nutritious meals and don’t have the means to participate in fitness activities.
The socioeconomic status of the study subject is a variable that cannot be controlled by the researchers, so an experiment is impossible. (Even if the researchers could somehow raise the money to provide a random sample of women with substantial salaries, the effects of their eating habits during their lives before the study began would still be present, and would affect the study’s outcome.)
These examples should convince you that, depending on the variables of interest, researching their relationship via an experiment may be too unrealistic, unethical, or impractical. Observational studies are subject to flaws, but often they are the only recourse.
Let’s summarize what we’ve learned so far:
1. Observational studies:
• The explanatory variable’s values are allowed to occur naturally.
• Because of the possibility of lurking variables, it is difficult to establish causation.
• If possible, control for suspected lurking variables by studying groups of similar individuals separately.
• Some lurking variables are difficult to control for; others may not be identified.
2. Experiments
• The explanatory variable’s values are controlled by researchers (treatment is imposed).
• Randomized assignment to treatments automatically controls for all lurking variables.
• Making subjects blind prevents the placebo effect from being mistaken for a treatment effect.
• Making researchers blind avoids conscious or subconscious influences on their subjective assessment of responses.
• A randomized controlled double-blind experiment is generally optimal for establishing causation.
• A lack of realism may prevent researchers from generalizing experimental results to real-life situations.
• Noncompliance may undermine an experiment. A volunteer sample might solve (at least partially) this problem.
• It is impossible, impractical, or unethical to impose some treatments.
More About Experiments
CO-3: Describe the strengths and limitations of designed experiments and observational studies.
Learning Objectives
LO 3.2: Explain how the study design impacts the types of conclusions that can be drawn.
Learning Objectives
LO 3.3: Identify and define key features of experimental design (randomized, blind etc.).
Video
Video: More About Experiments (4:09)
Experiments With More Than One Explanatory Variable
It is not uncommon for experiments to feature two or more explanatory variables (called factors). In this course, we focus on exploratory data analysis and statistical inference in situations which involve only one explanatory variable. Nevertheless, we will now consider the design for experiments involving several explanatory variables, in order to familiarize students with their basic structure.
EXAMPLE:
Suppose researchers are not only interested in the effect of diet on blood pressure, but also the effect of two new drugs. Subjects are assigned to either Control Diet (no restrictions), Diet #1, or Diet #2 (the variable Diet, then, has 3 possible values), and are also assigned to receive either Placebo, Drug #1, or Drug #2 (the variable Drug, then, also has three values). This is an example where the experiment has two explanatory variables and a response variable. In order to set up such an experiment, there has to be one treatment group for every combination of categories of the two explanatory variables. Thus, in this case there are 3 * 3 = 9 combinations of the two variables to which the subjects are assigned. The treatment groups are illustrated and labeled in the following table:
Subjects would be randomly assigned to one of the nine treatment groups. If we find differences in the proportions of subjects who achieve the lower “moderate zone” blood pressure among the nine treatment groups, then we have evidence that the diets and/or drugs may be effective for reducing blood pressure.
Comments:
• Recall that randomization may be employed at two stages of an experiment: in the selection of subjects, and in the assignment of treatments. The former may be helpful in allowing us to generalize what occurs among our subjects to what would occur in the general population, but the reality of most experimental settings is that a convenience or volunteer sample is used. Most likely the blood pressure study described above would use volunteer subjects. The important thing is to make sure these subjects are randomly assigned to one of the nine treatment combinations.
• In order to gain optimal information about individuals in all the various treatment groups, we would like to make assignments not just randomly, but also evenly. If there are 90 subjects in the blood pressure study described above, and 9 possible treatment groups, then each group should be filled randomly with 10 individuals. A simple random sample of 10 could be taken from the larger group of 90, and those individuals would be assigned to the first treatment group. Next, the second treatment group would be filled by a simple random sample of 10 taken from the remaining 80 subjects. This process would be repeated until all 9 groups are filled with 10 individuals each.
Did I Get This?: Experiments #2
Modifications to Randomization
In some cases, an experiment’s design may be enhanced by relaxing the requirement of total randomization and blocking the subjects first, dividing them into groups of individuals who are similar with respect to an outside variable that may be important in the relationship being studied. This can help ensure that the effect of treatments, as well as background variables, are most precisely measured. In blocking, we simply split the sampled subjects into blocks based upon the different values of the background variable, and then randomly allocate treatments within each block. Thus, blocking in the assignment of subjects is analogous to stratification in sampling.
For example, consider again our experiment examining the differences between three versions of software from the last Learn By Doing activity. If we suspected that gender might affect individuals’ software preferences, we might choose to allocate subjects to separate blocks, one for males and one for females. Within each block, subjects are randomly assigned to treatments and the treatment proceeds as usual. A diagram of blocking in this situation is below:
EXAMPLE:
Suppose producers of gasoline want to compare which of two types of gas results in better mileage for automobiles. In case the size of the vehicle plays a role in the effectiveness of different types of gasoline, they could first block by vehicle size, then randomly assign some cars within each block to Gasoline A and others to Gasoline B:
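In software, this within-block randomization could be sketched as follows (a rough illustration with a made-up list of cars; the car IDs and size categories are hypothetical):

```python
import random

# Hypothetical data: each car has an ID and a size category (the blocking variable)
cars = [("car01", "compact"), ("car02", "compact"),
        ("car03", "midsize"), ("car04", "midsize"),
        ("car05", "full-size"), ("car06", "full-size")]

random.seed(1)                                # seeded only for reproducibility
assignment = {}
for size in ("compact", "midsize", "full-size"):      # one block per vehicle size
    block = [car_id for car_id, s in cars if s == size]
    random.shuffle(block)                     # randomize within the block...
    half = len(block) // 2
    for car_id in block[:half]:               # ...then split the block between the gasolines
        assignment[car_id] = "Gasoline A"
    for car_id in block[half:]:
        assignment[car_id] = "Gasoline B"

print(assignment)
```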
In the extreme, researchers may examine a relationship for a sample of blocks of just two individuals who are similar in many important respects, or even the same individual whose responses are compared for two explanatory values.
EXAMPLE:
For example, researchers could compare the effects of Gasoline A and Gasoline B when both are used on the same car, for a sample of many cars of various sizes and models.
Such a study design, called matched pairs, may enable us to pinpoint the effects of the explanatory variable by comparing responses for the same individual under two explanatory values, or for two individuals who are as similar as possible except that the first gets one treatment, and the second gets another (or serves as the control). Treatments should usually be assigned at random within each pair, or the order of treatments should be randomized for each individual. In our gasoline example, for each car the order of testing (Gasoline A first, or Gasoline B first) should be randomized.
EXAMPLE:
Suppose researchers want to compare the relative merits of toothpastes with and without tartar control ingredients. In order to make the comparison between individuals who are as similar as possible with respect to background and diet, they could obtain a sample of identical twins. One of each pair would randomly be assigned to brush with the tartar control toothpaste, while the other would brush with regular toothpaste of the same brand. These would be provided in unmarked tubes, so that the subjects would be blind. To make the experiment double-blind, dentists who evaluate the results would not know who used which toothpaste.
“Before-and-after” studies are another common type of matched pairs design. For each individual, the response variable of interest is measured twice: first before the treatment, then again after the treatment. The categorical explanatory variable is which treatment was applied, or whether a treatment was applied, to that participant.
Comment:
• We have explained data production as a two-stage process: first obtain the sample, then evaluate the variables of interest via an appropriate study design. Even though the steps are carried out in this order chronologically, it is generally best for researchers to decide on a study design before they actually obtain the sample. For the toothpaste example above, researchers would first decide to use the matched pairs design, then obtain a sample of identical twins, then carry out the experiment and assess the results.
These examples should convince you that, depending on the variables of interest, researching their relationship via an experiment may be unrealistic, unethical, or impractical. Observational studies are subject to flaws, but often they are the only recourse.
Did I Get This?: More About Experiments
CO-3: Describe the strengths and limitations of designed experiments and observational studies.
Learning Objectives
LO 3.2: Explain how the study design impacts the types of conclusions that can be drawn.
Video
Video: Causation and Observational Studies (3:09)
Suppose the observational study described earlier was carried out, and researchers determined that the percentage succeeding with the combination drug/therapy method was highest, while the percentage succeeding with neither therapy nor drugs was lowest. In other words, suppose there is clear evidence of an association between method used and success rate. Could they then conclude that the combination drug/therapy method causes success more than using neither therapy nor a drug?
It is at precisely this point that we confront the underlying weakness of most observational studies: some members of the sample have opted for certain values of the explanatory variable (method of quitting), while others have opted for other values. Those individuals may differ in additional ways that would also play a role in the response of interest.
For instance, suppose women are more likely to choose certain methods to quit, and suppose women in general tend to quit more successfully than men. The data would make it appear that the method itself was responsible for success, whereas in truth it may just be that being female is the reason for success.
We can express this scenario in terms of the key variables involved. In addition to the explanatory variable (method) and the response variable (success or failure), a third, lurking variable (gender) is tied in (or confounded) with the explanatory variable’s values, and may itself cause the response to be a success or failure. The following diagram illustrates this situation.
Since the difficulty arises because of the lurking variable’s values being tied in with those of the explanatory variable, one way to attempt to unravel the true nature of the relationship between explanatory and response variables is to separate out the effects of the lurking variable. In general, we control for the effects of a lurking variable by separately studying groups that are defined by this variable.
Caution
We could control for the lurking variable “gender” by studying women and men separately. Then, if both women and men who chose one method have higher success rates than those opting for another method, we would be closer to producing evidence of causation.
The diagram above demonstrates how straightforward it is to control for the lurking variable gender.
Notice that we did not claim that controlling for gender would allow us to make a definite claim of causation, only that we would be closer to establishing a causal connection. This is due to the fact that other lurking variables may also be involved, such as the level of the participants’ desire to quit. Specifically, those who have chosen to use the drug/therapy method may already be the ones who are most determined to succeed, while those who have chosen to quit without investing in drugs or therapy may, from the outset, be less committed to quitting. The following diagram illustrates this scenario.
To attempt to control for this lurking variable, we could interview the individuals at the outset in order to rate their desire to quit on a scale of 1 (weakest) to 5 (strongest), and study the relationship between method and success separately for each of the five groups. But desire to quit is obviously a very subjective thing, difficult to assign a specific number to. Realistically, we may be unable to effectively control for the lurking variable “desire to quit.”
Furthermore, who’s to say that gender and/or desire to quit are the only lurking variables involved? There may be other subtle differences among individuals who choose one of the four various methods that researchers fail to imagine as they attempt to control for possible lurking variables.
For example, smokers who opt to quit using neither therapy nor drugs may tend to be in a lower income bracket than those who opt for (and can afford) drugs and/or therapy. Perhaps smokers in a lower income bracket also tend to be less successful in quitting because more of their family members and co-workers smoke. Thus, socioeconomic status is yet another possible lurking variable in the relationship between cessation method and success rate.
It is because of the existence of a virtually unlimited number of potential lurking variables that we can never be 100% certain of a claim of causation based on an observational study. On the other hand, observational studies are an extremely common tool used by researchers to attempt to draw conclusions about causal connections.
If great care is taken to control for the most likely lurking variables (and to avoid other pitfalls which we will discuss presently), and if common sense indicates that there is good reason for one variable to cause changes in the other, then researchers may assert that an observational study provides good evidence of causation.
Observational studies are subject to other pitfalls besides lurking variables, arising from various aspects of the design for evaluating the explanatory and response values. The next pair of examples illustrates some other difficulties that may arise.
EXAMPLE:
Suppose researchers want to determine if people tend to snack more while they watch TV. One possible design that we considered was to recruit participants for an observational study, and give them journals to record their hourly activities the following day, including TV watched and snacks consumed. Then they could review the journals to determine if snack consumption was higher during TV times.
We identified this as a prospective observational study, carried forward in time. Studying people in the more natural setting of their own homes makes the study more realistic than a contrived experimental setting. Still, when people are obliged to record their behavior as it occurs, they may be too self-conscious to act naturally. They may want to avoid embarrassment and so they may cut back on their TV viewing, or their snack consumption, or the combination of the two.
Yet another possible design is to recruit participants for a retrospective observational study. Ask them to recall, for each hour of the previous day, whether they were watching TV, and what snacks they consumed each hour. Determine if food consumption was higher during the TV times.
This design has the advantage of not disturbing people’s natural behavior in terms of TV viewing or snacking. It has the disadvantage of relying on people’s memories to record those variables’ values from the day before. But one day is a relatively short period of time to remember such details, and as long as people are willing to be honest, the results of this study could be fairly reliable. The issue of eliciting honest responses will be addressed in our discussion of sample surveys.
By now you should have an idea of how difficult — or perhaps even impossible — it is to establish causation in an observational study, especially due to the problem of lurking variables.
The key to establishing causation is to rule out the possibility of any lurking variable, or in other words, to ensure that individuals differ only with respect to the values of the explanatory variable.
In general, this is a goal which we have a much better chance of accomplishing by carrying out a well-designed experiment.
CO-3: Describe the strengths and limitations of designed experiments and observational studies.
Video
Video: Designing Studies (1:34)
Now that we have learned about the first stage of data production — sampling — we can move on to the next stage — designing studies.
Introduction
Obviously, sampling is not done for its own sake. After this first stage in the data production process is completed, we come to the second stage, that of gaining information about the variables of interest from the sampled individuals. Now we’ll discuss three study designs; each design enables you to determine the values of the variables in a different way.
You can:
• Carry out an observational study, in which values of the variable or variables of interest are recorded as they naturally occur. There is no interference by the researchers who conduct the study.
• Take a sample survey, which is a particular type of observational study in which individuals report variables’ values themselves, frequently by giving their opinions.
• Perform an experiment. Instead of assessing the values of the variables as they naturally occur, the researchers interfere, and they are the ones who assign the values of the explanatory variable to the individuals. The researchers “take control” of the values of the explanatory variable because they want to see how changes in the value of the explanatory variable affect the response variable. (Note: By nature, any experiment involves at least two variables.)
The type of design used, and the details of the design, are crucial, since they will determine what kind of conclusions we may draw from the results. In particular, when studying relationships in the Exploratory Data Analysis unit, we stressed that an association between two variables does not guarantee that a causal relationship exists. Here we will explore how the details of a study design play a crucial role in determining our ability to establish evidence of causation.
Here is how this topic is organized:
We’ll start by learning how to identify study types. In particular, we will highlight the distinction between observational studies and experiments.
We will then discuss each of the three study designs mentioned above.
• We’ll discuss observational studies, focusing on why it is difficult to establish causation in these types of studies, as well as other possible flaws.
• We’ll then focus on experiments, learning, among other things, that when appropriately designed, experiments can provide evidence of causation.
• We’ll end by discussing surveys and sample size.
Identifying Study Design
Learning Objectives
LO 3.1: Identify the design of a study (controlled experiment vs. observational study)
Because each type of study design has its own advantages and trouble spots, it is important to begin by determining what type of study we are dealing with. The following example helps to illustrate how we can distinguish among the three basic types of design mentioned in the introduction — observational studies, sample surveys, and experiments.
EXAMPLE:
Suppose researchers want to determine whether people tend to snack more while they watch television. In other words, the researchers would like to explore the relationship between the explanatory variable “TV” (a categorical variable that takes the values “on'” and “not on”) and the response variable “snack consumption.”
Identify each of the following designs as being an observational study, a sample survey, or an experiment.
1. Recruit participants for a study. While they are presumably waiting to be interviewed, half of the individuals sit in a waiting room with snacks available and a TV on. The other half sit in a waiting room with snacks available and no TV, just magazines. Researchers determine whether people consume more snacks in the TV setting.
This is an experiment, because the researchers take control of the explanatory variable of interest (TV on or not) by assigning each individual to either watch TV or not, and determine the effect that has on the response variable of interest (snack consumption).
2. Recruit participants for a study. Give them journals to record hour by hour their activities the following day, including when they watch TV and when they consume snacks. Determine if snack consumption is higher during TV times.
This is an observational study, because the participants themselves determine whether or not to watch TV. There is no attempt on the researchers’ part to interfere.
3. Recruit participants for a study. Ask them to recall, for each hour of the previous day, whether they were watching TV, and what snacks they consumed each hour. Determine whether snack consumption was higher during the TV times.
This is also an observational study; again, it was the participants themselves who decided whether or not to watch TV. Do you see the difference between 2 and 3? See the comment below.
4. Poll a sample of individuals with the following question: While watching TV, do you tend to snack: (a) less than usual; (b) more than usual; or (c) the same amount as usual?
This is a sample survey, because the individuals self-assess the relationship between TV watching and snacking.
Comment:
• Notice that in Example 2, the values of the variables of interest (TV watching and snack consumption) are recorded forward in time. Such observational studies are called prospective. In contrast, in Example 3, the values of the variables of interest are recorded backward in time. This is called a retrospective observational study.
Did I Get This?: Study Design
While some studies are designed to gather information about a single variable, many studies attempt to draw conclusions about the relationship between two variables. In particular, researchers often would like to produce evidence that one variable actually causes changes in the other.
For example, the research question addressed in the previous example sought to establish evidence that watching TV could cause an increase in snacking. Such studies may be especially useful and interesting, but they are also especially vulnerable to flaws that could invalidate the conclusion of causation.
In several of the examples we will see that although evidence of an association between two variables may be quite clear, the question of whether one variable is actually causing changes in the other may be too murky to be entirely resolved. In general, with a well-designed experiment we have a better chance of establishing causation than with an observational study.
However, experiments are also subject to certain pitfalls, and there are many situations in which an experiment is not an option. A well-designed observational study may still provide fairly convincing evidence of causation under the right circumstances.
Experiments vs. Observational Studies
Before assessing the effectiveness of observational studies and experiments for producing evidence of a causal relationship between two variables, we will illustrate the essential differences between these two designs.
EXAMPLE:
Every day, a huge number of people are engaged in a struggle whose outcome could literally affect the length and quality of their life: they are trying to quit smoking. Just the array of techniques, products, and promises available shows that quitting is not easy, nor is its success guaranteed. Researchers would like to determine which of the following is the best method:
1. Drugs that alleviate nicotine addiction.
2. Therapy that trains smokers to quit.
3. A combination of drugs and therapy.
4. Neither form of intervention (quitting “cold turkey”).
The explanatory variable is the method (1, 2, 3 or 4), while the response variable is eventual success or failure in quitting. In an observational study, values of the explanatory variable occur naturally. In this case, this means that the participants themselves choose a method of trying to quit smoking. In an experiment, researchers assign the values of the explanatory variable. In other words, they tell people what method to use. Let us consider how we might compare the four techniques, via either an observational study or an experiment.
1. An observational study of the relationship between these two variables requires us to collect a representative sample from the population of smokers who are beginning to try to quit. We can imagine that a substantial proportion of that population is trying one of the four above methods. In order to obtain a representative sample, we might use a nationwide telephone survey to identify 1,000 smokers who are just beginning to quit smoking. We record which of the four methods the smokers use. One year later, we contact the same 1,000 individuals and determine whether they succeeded.
2. In an experiment, we again collect a representative sample from the population of smokers who are just now trying to quit, using a nationwide telephone survey of 1,000 individuals. This time, however, we divide the sample into 4 groups of 250 and assign each group to use one of the four methods to quit. One year later, we contact the same 1,000 individuals and determine whose attempts succeeded while using our designated method.
The following figures illustrate the two study designs:
Observational study:
Experiment:
Both the observational study and the experiment begin with a random sample from the population of smokers just now beginning to quit. In both cases, the individuals in the sample can be divided into categories based on the values of the explanatory variable: method used to quit. The response variable is success or failure after one year. Finally, in both cases, we would assess the relationship between the variables by comparing the proportions of success of the individuals using each method, using a two-way table and conditional percentages.
The only difference between the two methods is the way the sample is divided into categories for the explanatory variable (method). In the observational study, individuals are divided based upon the method by which they choose to quit smoking. The researcher does not assign the values of the explanatory variable, but rather records them as they naturally occur. In the experiment, the researcher deliberately assigns one of the four methods to each individual in the sample. The researcher intervenes by controlling the explanatory variable, and then assesses its relationship with the response variable.
Now that we have outlined two possible study designs, let’s return to the original question: which of the four methods for quitting smoking is most successful? Suppose the study’s results indicate that individuals who try to quit with the combination drug/therapy method have the highest rate of success, and those who try to quit with neither form of intervention have the lowest rate of success, as illustrated in the hypothetical two-way table below:
Can we conclude that using the combination drugs and therapy method caused the smokers to quit most successfully? The type of design that was implemented will play an important role in the answer to this question.
Did I Get This?: Study Design #2
CO-3: Describe the strengths and limitations of designed experiments and observational studies.
Learning Objectives
LO 3.2: Explain how the study design impacts the types of conclusions that can be drawn.
Learning Objectives
LO 3.4: Identify common problems with surveys and determine the potential impact(s) of each on the collected data and the accuracy of the data.
Video
Video: Sample Surveys (2:58)
Concepts of Sample Surveys
A sample survey is a particular type of observational study in which individuals report variables’ values themselves, frequently by giving their opinions. Researchers have several options to choose from when deciding how to survey the individuals involved: in person, or via telephone, Internet, or mail.
The following issues in the design of sample surveys will be discussed:
• open vs. closed questions
• unbalanced response options
• leading questions
• planting ideas with questions
• complicated questions
• sensitive questions
These issues are best illustrated with a variety of concrete examples.
Suppose you want to determine the musical preferences of all students at your university, based on a sample of students. In the Sampling section, we discussed various ways to obtain the sample, such as taking a simple random sample from all students at the university, then contacting the chosen subjects via email to request their responses and following up with a second email to those who did not respond the first time.
This method would ensure a sample that is fairly representative of the entire population of students at the university, and avoids the bias that might result from a flawed design such as a convenience sample or a volunteer sample.
However, even if we managed to select a representative sample for a survey, we are not yet home free: we must still compose the survey question itself so that the information we gather from the sampled students correctly represents what is true about their musical preferences.
Let’s consider some possibilities:
Question: “What is your favorite kind of music?”
This is what we call an open question, which allows for almost unlimited responses. It may be difficult to make sense of all the possible categories and subcategories of music that survey respondents could come up with.
Some may be more general than what you had in mind (“I like modern music the best”) and others too specific (“I like Japanese alternative electronic rock by Cornelius”). Responses are much easier to handle if they come from a closed question:
Question: Which of these types of music do you prefer: classical, rock, pop, or hip-hop?
What will happen if a respondent is asked the question as worded above, and he or she actually prefers jazz or folk music or gospel? He or she may pick a second-favorite from the options presented, or try to pencil in the real preference, or may just not respond at all. Whatever the outcome, it is likely that overall, the responses to the question posed in this way will not give us very accurate information about general music preferences. If a closed question is used, then great care should be taken to include all the reasonable options that are possible, including “not sure.” Also, in case an option was overlooked, “other:___________” should be included for the sake of thoroughness.
Many surveys ask respondents to assign a rating to a variable, such as in the following:
Question: How do you feel about classical music? Circle one of these: I love it, I like it very much, I like it, I don’t like it, I hate it.
Notice that the options provided are rather “top-heavy,” with three favorable options vs. two unfavorable. If someone feels somewhat neutral, they may opt for the middle choice, “I like it,” and a summary of the survey’s results would distort the respondents’ true opinions.
Some survey questions are either deliberately or unintentionally biased towards certain responses:
Question: “Do you agree that classical music is the best type of music, because it has survived for centuries and is not only enjoyable, but also intellectually rewarding? (Answer yes or no.)”
This sort of wording puts ideas in people’s heads, urging them to report a particular opinion. One way to test for bias in a survey question is to ask yourself, “Just from reading the question, would a respondent have a good idea of what response the surveyor is hoping to elicit?” If the answer is yes, then the question should have been worded more neutrally.
Sometimes, survey questions are ordered in such a way as to deliberately bias the responses by planting an idea in an earlier question that will sway people’s thoughts in a later question.
Question: In the year 2002, there was much controversy over the fact that the Augusta National Golf Club, which hosts the Masters Golf Tournament each year, does not accept women as members. Defenders of the club created a survey that included the following statements. Respondents were supposed to indicate whether they agreed or disagreed with each statement:
“The First Amendment of the U.S. Constitution applies to everyone regardless of gender, race, religion, age, profession, or point of view.”
“The First Amendment protects the right of individuals to create a private organization consisting of a specific group of people based on age, gender, race, ethnicity, or interest.”
“The First Amendment protects the right of organizations like the Boy Scouts, the Girl Scouts, and the National Association for the Advancement of Colored People to exist.”
“Individuals have a right to join a private group, club, or organization that consists of people who share the same interests and personal backgrounds as they do if they so desire.”
“Private organizations that are not funded by the government should be allowed to decide who becomes a member and who does not become a member on their own, without being forced to take input from other outside people or organizations.”
Notice how the first and second statements steer people to favor the opinion that specialized groups may form private clubs. The third statement reminds people of organizations that are formed by groups on the basis of gender and race, setting the stage for them to agree with the fourth statement, which supports people’s rights to join any private club. This in turn leads into the fifth statement, which focuses on a private organization’s right to decide on its membership. As a group, the questions attempt to relentlessly steer a respondent towards ultimately agreeing with the club’s right to exclude women.
Sometimes surveyors attempt to get feedback on more than one issue at a time.
Question: “Do you agree or disagree with this statement: ‘I don’t go out of my way to listen to modern music unless there are elements of jazz, or else lyrics that are clear and make sense.'”
Put yourself in the place of people who enjoy jazz and straightforward lyrics, but don’t have an issue with music being “too modern,” per se. The logic of the question (or lack thereof) may escape the respondents, and they would be too confused to supply an answer that correctly conveys their opinion. Clearly, simple questions are much better than complicated ones; rather than try to gauge opinions on several issues at once, complex survey questions like this should be broken down into shorter, more concise ones.
Depending on the topic, we cannot always assume that survey respondents will answer honestly.
Question 1: “Have you eaten rutabagas in the past year?”
If respondents answer no, then we have good reason to believe that they did not eat rutabagas in the past year.
Question 2: “Have you used illegal drugs in the past year?”
If respondents answer no, then it is still a possibility that they did use illegal drugs, but didn’t want to admit it.
Effective techniques for collecting accurate data on sensitive questions are a main area of inquiry in statistics. One simple method is randomized response, which allows individuals in the sample to answer anonymously, while the researcher still gains information about the population. This technique is best illustrated by an example.
EXAMPLE:
For the question, “Have you used illegal drugs in the past year?” respondents are told to flip a fair coin (in private) before answering and then answer based on the result of the coin flip: if the coin flip results in “Heads,” they should answer “Yes” (regardless of the truth), if a coin flip results in “Tails,” they should answer truthfully. Thus, roughly half of the respondents are “truth-tellers,” and the other half give the uncomfortable answer “Yes,” without the interviewer’s knowledge of who is in which group. The respondent who flips “Tails” and answers truthfully knows that he or she cannot be distinguished from someone who got “Heads” in the coin toss. Hopefully, this is enough to encourage respondents to answer truthfully. As we will learn later in the course, the surveyor can then use probability methods to estimate the proportion of respondents who admit they used illegal drugs in this scenario, while being unable to identify exactly which respondents have been drug abusers.
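To sketch the probability reasoning the surveyor would rely on: under this scheme, a respondent answers “Yes” with probability 0.5 (the coin lands Heads) plus 0.5 times the true proportion p of drug users, so p can be estimated by working backward from the observed proportion of “Yes” answers. The observed proportion below is made up purely for illustration.

```python
# Randomized response: P(answer "Yes") = 0.5 * 1 + 0.5 * p,
# where p is the unknown true proportion who used illegal drugs.
# Solving for p gives the estimate below.

observed_yes = 0.62                     # hypothetical proportion of "Yes" answers

p_estimate = 2 * (observed_yes - 0.5)   # equivalently, 2 * observed_yes - 1

print(f"Estimated proportion of drug users: {p_estimate:.2f}")  # prints 0.24
```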
Besides using the randomized response method, surveyors may encourage honest answers from respondents in various other ways. Tactful wording of questions can be very helpful. Giving people a feeling of anonymity by having them complete questionnaires via computer, rather than paper and pencil, is another commonly used technique.
Did I Get This?: Sample Surveys
Let’s summarize
• A sample survey is a type of observational study in which respondents assess variables’ values (often by giving an opinion).
• Open questions are less restrictive, but responses are more difficult to summarize.
• Closed questions may be biased by the options provided.
• Closed questions should permit options such as “other:______” and/or “not sure” if those options may apply.
• Questions should be worded neutrally.
• Earlier questions should not deliberately influence responses to later questions.
• Questions shouldn’t be confusing or complicated.
• Survey method and questions should be carefully designed to elicit honest responses if there are sensitive issues involved.
CO-2: Differentiate among different sampling methods and discuss their strengths and limitations.
Video
Video: Sampling (12:38)
Sampling Plans
As mentioned in the introduction to this unit, we will begin with the first stage of data production — sampling. Our discussion will be framed around the following examples:
Suppose you want to determine the musical preferences of all students at your university, based on a sample of students. Here are some examples of the many possible ways to pursue this problem.
EXAMPLES: Sampling
Example 1: Post a music-lovers’ survey on a university Internet bulletin board, asking students to vote for their favorite type of music.
This is an example of a volunteer sample, where individuals have selected themselves to be included. Such a sample is almost guaranteed to be biased. In general, volunteer samples tend to be comprised of individuals who have a particularly strong opinion about an issue, and are looking for an opportunity to voice it. Whether the variable’s values obtained from such a sample are over- or under-stated, and to what extent, cannot be determined. As a result, data obtained from a voluntary response sample is quite useless when you think about the “Big Picture,” since the sampled individuals only provide information about themselves, and we cannot generalize to any larger group at all.
Comment:
• It should be mentioned that in some cases volunteer samples are the only ethical way to obtain a sample. In medical studies, for example, in which new treatments are tested, subjects must choose to participate by signing a consent form that highlights the potential risks and benefits. As we will discuss in the next topic on study design, a volunteer sample is not so problematic in a study conducted for the purpose of comparing several treatments.
Example 2: Stand outside the Student Union, across from the Fine Arts Building, and ask the students passing by to respond to your question about musical preference.
This is an example of a convenience sample, where individuals happen to be at the right time and place to suit the schedule of the researcher. Depending on what variable is being studied, it may be that a convenience sample provides a fairly representative group. However, there are often subtle reasons why the sample’s results are biased. In this case, the proximity to the Fine Arts Building might result in a disproportionate number of students favoring classical music. A convenience sample may also be susceptible to bias because certain types of individuals are more likely to be selected than others. In the extreme, some convenience samples are designed in such a way that certain individuals have no chance at all of being selected, as in the next example.
Example 3: Ask your professors for email rosters of all the students in your classes. Randomly sample some addresses, and email those students with your question about musical preference.
Here is a case where the sampling frame — list of potential individuals to be sampled — does not match the population of interest. The population of interest consists of all students at the university, whereas the sampling frame consists of only your classmates. There may be bias arising because of this discrepancy. For example, students with similar majors will tend to take the same classes as you, and their musical preferences may also be somewhat different from those of the general population of students. It is always best to have the sampling frame match the population as closely as possible.
Example 4: Obtain a student directory with email addresses of all the university’s students, and send the music poll to every 50th name on the list.
This is called systematic sampling. It may not be subject to any clear bias, but it would not be as safe as taking a random sample.
If individuals are sampled completely at random, and without replacement, then each group of a given size is just as likely to be selected as all the other groups of that size. This is called a simple random sample (SRS). In contrast, a systematic sample would rarely allow sibling students to both be selected: in an alphabetical directory, siblings sharing a last name appear next to each other, so selecting every 50th name can include at most one of them. In a simple random sample, sibling students would have just as much of a chance of both being selected as any other pair of students. Therefore, there may be subtle sources of bias in using a systematic sampling plan.
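The contrast between the two selection mechanisms can be sketched in a few lines of code; the directory below is a hypothetical list of names, not real data.

```python
import random

random.seed(4)  # fixed seed so the illustration is reproducible

directory = [f"student_{i}" for i in range(1, 1001)]  # hypothetical directory of 1,000 names
n = 20                                                # desired sample size

# Systematic sample: random starting point, then every 50th name (1000 / 20 = 50).
step = len(directory) // n
start = random.randrange(step)
systematic_sample = directory[start::step]

# Simple random sample: every group of 20 names is equally likely to be chosen.
srs = random.sample(directory, n)

print(len(systematic_sample), len(srs))  # both plans yield 20 names
```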
Example 5: Obtain a student directory with email addresses of all the university’s students, and send your music poll to a simple random sample of students.
As long as all of the students respond, then the sample is not subject to any bias, and should succeed in being representative of the population of interest.
But what if only 40% of those selected email you back with their vote?
The results of this poll would not necessarily be representative of the population, because of the potential problems associated with volunteer response. Since individuals are not compelled to respond, often a relatively small subset take the trouble to participate. Volunteer response is not as problematic as a volunteer sample (presented in example 1 above), but there is still a danger that those who do respond are different from those who don’t, with respect to the variable of interest. An improvement would be to follow up with a second email, asking politely for the students’ cooperation. This may boost the response rate, resulting in a sample that is fairly representative of the entire population of interest, and it may be the best that you can do, under the circumstances. Nonresponse is still an issue, but at least you have managed to reduce its impact on your results.
So far we’ve discussed several sampling plans, and determined that a simple random sample is the only one we discussed that is not subject to any bias.
A simple random sample is the easiest way to base a selection on randomness. There are other, more sophisticated, sampling techniques that utilize randomness that are often preferable in real-life circumstances. Any plan that relies on random selection is called a probability sampling plan (or technique). The following three probability sampling plans are among the most commonly used:
• Simple Random Sampling is, as the name suggests, the simplest probability sampling plan. It is equivalent to “selecting names out of a hat.” Each individual has the same chance of being selected.
• Cluster Sampling — This sampling technique is used when our population is naturally divided into groups (which we call clusters). For example, all the students in a university are divided into majors; all the nurses in a certain city are divided into hospitals; all registered voters are divided into precincts (election districts). In cluster sampling, we take a random sample of clusters, and use all the individuals within the selected clusters as our sample. For example, in order to get a sample of high-school seniors from a certain city, you choose 3 high schools at random from among all the high schools in that city, and use all the high school seniors in the three selected high schools as your sample.
• Stratified Sampling — Stratified sampling is used when our population is naturally divided into sub-populations, which we call strata (singular: stratum). For example, all the students in a certain college are divided by gender or by year in college; all the registered voters in a certain city are divided by race. In stratified sampling, we choose a simple random sample from each stratum, and our sample consists of all these simple random samples put together. For example, in order to get a random sample of high-school seniors from a certain city, we choose a random sample of 25 seniors from each of the high schools in that city. Our sample consists of all these samples put together.
Each of those probability sampling plans, if applied correctly, are not subject to any bias, and thus produce samples that represent well the population from which they were drawn.
Comment: Cluster vs. Stratified
• Students sometimes get confused about the difference between cluster sampling and stratified sampling. Even though both methods start out with the population somehow divided into groups, the two methods are very different.
• In cluster sampling, we take a random sample of whole groups of individuals (taking everyone in each selected group, but not taking all groups), while in stratified sampling we take a simple random sample from each group (and all groups are represented).
• For example, say we want to conduct a study on the sleeping habits of undergraduate students at a certain university, and need to obtain a sample. The students are naturally divided by majors, and let’s say that in this university there are 40 different majors.
• In cluster sampling, we would randomly choose, say, 5 majors (groups) out of the 40, and use all the students in these five majors as our sample.
• In stratified sampling, we would obtain a random sample of, say, 10 students from each of the 40 majors (groups), and use the 400 chosen students as the sample.
• Clearly in this example, stratified sampling is much better, since the major of the student might have an effect on the student’s sleeping habits, and so we would like to make sure that we have representatives from all the different majors. We’ll stress this point again following the example and activity.
EXAMPLE:
Suppose you would like to study the job satisfaction of hospital nurses in a certain city based on a sample. Besides taking a simple random sample, here are two additional ways to obtain such a sample.
1. Suppose that the city has 10 hospitals. Choose one of the 10 hospitals at random and interview all the nurses in that hospital regarding their job satisfaction. This is an example of cluster sampling, in which the hospitals are the clusters.
2. Choose a random sample of 50 nurses from each of the 10 hospitals and interview these 50 * 10 = 500 nurses regarding their job satisfaction. This is an example of stratified sampling, in which each hospital is a stratum.
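A minimal sketch of how these two plans differ when carried out in software follows; the hospital rosters are hypothetical placeholders with 200 nurses per hospital.

```python
import random

random.seed(5)  # fixed seed so the illustration is reproducible

# Hypothetical rosters: 10 hospitals, each with 200 nurse IDs.
hospitals = {f"hospital_{h}": [f"h{h}_nurse_{i}" for i in range(1, 201)]
             for h in range(1, 11)}

# Cluster sample: pick 1 hospital at random and take ALL of its nurses.
chosen_hospital = random.choice(list(hospitals))
cluster_sample = hospitals[chosen_hospital]

# Stratified sample: take a simple random sample of 50 nurses from EACH hospital.
stratified_sample = [nurse
                     for roster in hospitals.values()
                     for nurse in random.sample(roster, 50)]

print(len(cluster_sample))     # 200 nurses, all from one hospital
print(len(stratified_sample))  # 500 nurses, 50 from each of the 10 hospitals
```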
Cluster or Stratified — which one is better?
Let’s go back and revisit the job satisfaction of hospital nurses example and discuss the pros and cons of the two sampling plans that are presented. Certainly, it will be much easier to conduct the study using the cluster sample, since all interviews are conducted in one hospital as opposed to the stratified sample, in which the interviews need to be conducted in 10 different hospitals. However, the hospital that a nurse works in probably has a direct impact on his/her job satisfaction, and in that sense, getting data from just one hospital might provide biased results. In this case, it will be very important to have representation from all the city hospitals, and therefore the stratified sample is definitely preferable. On the other hand, say that instead of job satisfaction, our study focuses on the age or weight of hospital nurses.
In this case, it is probably not as crucial to get representation from the different hospitals, and therefore the more easily obtained cluster sample might be preferable.
Comment:
• Another commonly used sampling technique is multistage sampling, which is essentially a “complex form” of cluster sampling. When conducting cluster sampling, it might be unrealistic, or too expensive to sample all the individuals in the chosen clusters. In cases like this, it would make sense to have another stage of sampling, in which you choose a sample from each of the randomly selected clusters, hence the term multistage sampling.
For example, say you would like to study the exercise habits of college students in the state of California. You might choose 8 colleges (clusters) at random, but you are certainly not going to use all the students in these 8 colleges as your sample. It is simply not realistic to conduct your study that way. Instead you move on to stage 2 of your sampling plan, in which you choose a random sample of 100 males and a random sample of 100 females from each of the 8 colleges you selected in stage 1.
So in total you have 8 * (100+100) = 1,600 college students in your sample.
In this case, stage 1 was a cluster sample of 8 colleges and stage 2 was a stratified sample within each college where the stratum was gender.
Multistage sampling can have more than 2 stages. For example, to obtain a random sample of physicians in the United States, you choose 10 states at random (stage 1, cluster). From each state you choose at random 8 hospitals (stage 2, cluster). Finally, from each hospital, you choose 5 physicians from each sub-specialty (stage 3, stratified).
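Here is a sketch of the two-stage plan from the college example above (8 colleges chosen at random, then 100 males and 100 females sampled from each). All college names and rosters are hypothetical placeholders.

```python
import random

random.seed(6)  # fixed seed so the illustration is reproducible

# Hypothetical population: 100 colleges, each with male and female rosters.
colleges = {f"college_{c}": {"male": [f"c{c}_m{i}" for i in range(1, 501)],
                             "female": [f"c{c}_f{i}" for i in range(1, 501)]}
            for c in range(1, 101)}

# Stage 1 (cluster): choose 8 colleges at random.
stage1 = random.sample(list(colleges), 8)

# Stage 2 (stratified by gender within each chosen college): a simple random
# sample of 100 males and 100 females from each of the 8 colleges.
sample = [student
          for college in stage1
          for gender in ("male", "female")
          for student in random.sample(colleges[college][gender], 100)]

print(len(sample))  # 8 * (100 + 100) = 1,600 students
```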
Did I Get This?: Sampling
Overview So Far
We have defined the following:
Sampling Frame: List of potential individuals to be sampled. We want the sampling frame to match the population as closely as possible. The sampling frame is embedded within the population and the sample is embedded inside the sampling frame.
Biased Sample: A sample that produces data that is not representative because of the systematic under- or over-estimation of the values of the variable of interest.
Volunteer Sample: Individuals have selected themselves to be included.
Convenience Sample: Individuals happen to be at the right time and place to suit the schedule of the researcher
Systematic Sample: Starting from a randomly chosen individual in the ordered sampling frame, select every i-th individual to be included in the sample.
Simple Random Sample (SRS): Individuals are sampled completely at random, and without replacement. The result is that EVERY group of a given size is just as likely to be selected as all the other groups of that size. Each individual is also equally likely to be chosen.
Cluster Sampling: Used when “natural” groupings are evident in a statistical population and each group is generally representative of the population. In this technique, the total population is divided into these groups (or clusters) and a sample of these groups is selected. For example, randomly selecting courses from all courses offered and surveying ALL students in the selected courses.
Stratified Sampling: When subpopulations within an overall population vary, it can be advantageous to take samples from each subpopulation (stratum) independently. For example, take a random sample of males and a separate random sample of females.
Nonresponse: Individuals selected to participate do not respond or refuse to participate.
Sample Size
So far, we have made no mention of sample size. Our first priority is to make sure the sample is representative of the population, by using some form of probability sampling plan. Next, we must keep in mind that in order to get a more precise idea of what values are taken by the variable of interest for the entire population, a larger sample does a better job than a smaller one. We will discuss the issue of sample size in more detail in the Inference unit, and we will actually see how changes in the sample size affect the conclusions we can draw about the population.
EXAMPLE:
Suppose hospital administrators would like to find out how the staff would rate the quality of food in the hospital cafeteria. Which of the four sampling plans below would be best?
1. The person responsible for polling stands outside the cafeteria door and asks the next 5 staff members who come out to give the food a rating on a scale of 1 to 10.
2. The person responsible for polling stands outside the cafeteria door and asks the next 50 staff members who come out to give the food a rating on a scale of 1 to 10.
3. The person responsible for polling takes a random sample of 5 staff members from the list of all those employed at the hospital and asks them to rate the cafeteria food on a scale of 1 to 10.
4. The person responsible for polling takes a random sample of 50 staff members from the list of all those employed at the hospital and asks them to rate the cafeteria food on a scale of 1 to 10.
Plans 1 and 2 would be biased in favor of higher ratings, since staff members with unfavorable opinions about cafeteria food would be likely to eat elsewhere. Plan 3, since it is random, would be unbiased. However, with such a small sample, you run the risk of including people who provide unusually low or unusually high ratings. In other words, the average rating could vary quite a bit depending on who happens to be included in that small sample. Plan 4 would be best, as the participants have been chosen at random to avoid bias and the larger sample size provides more information about the opinions of all hospital staff members.
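The claim that a random sample of 5 produces much more variable results than a random sample of 50 can be checked with a quick simulation. The population of ratings below is entirely made up; the point is only to compare how much the average rating bounces around for the two sample sizes.

```python
import random
import statistics

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical population: cafeteria ratings (1-10) for 1,000 staff members.
population = [random.randint(1, 10) for _ in range(1000)]

def simulated_means(sample_size, repetitions=1000):
    """Average rating from many repeated random samples of the given size."""
    return [statistics.mean(random.sample(population, sample_size))
            for _ in range(repetitions)]

means_n5 = simulated_means(5)    # plan 3: random samples of 5 staff members
means_n50 = simulated_means(50)  # plan 4: random samples of 50 staff members

# The sample means vary far more from sample to sample when n = 5 than when n = 50.
print(round(statistics.stdev(means_n5), 2), round(statistics.stdev(means_n50), 2))
```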
EXAMPLE:
Suppose a student enrolled in a statistics course is required to complete and turn in several hundred homework problems throughout the semester. The teaching assistant responsible for grading suggests the following plan to the course professor: instead of grading all of the problems for each student, he will grade a random sample of problems.
His first offer, to grade a random sample of just 3 problems for each student, is not well-received by the professor, who fears that such a small sample may not provide a very precise estimate of a student’s overall homework performance.
Students are particularly concerned that the random selection may happen to include one or two problems on which they performed poorly, thereby lowering their grade.
The next offer, to grade a random sample of 25 problems for each student, is deemed acceptable by both the professor and the students.
Comment:
• In practice, we are confronted with many trade-offs in statistics. A larger sample is more informative about the population, but it is also more costly in terms of time and money. Researchers must make an effort to keep their costs down, but still obtain a sample that is large enough to allow them to report fairly precise results.
Learn By Doing: Sampling (Software)
Let’s Summarize
Our goal, in statistics, is to use information from a sample to draw conclusions about the larger group, called the population. The first step in this process is to obtain a sample of individuals that are truly representative of the population. If this step is not carried out properly, then the sample is subject to bias, a systematic tendency to misrepresent the variables of interest in the population.
Bias is almost guaranteed if a volunteer sample is used. If the individuals select themselves for the study, they are often different in an important way from the individuals who did not volunteer.
A convenience sample, chosen because individuals were in the right place at the right time to suit the researcher, may be different from the general population in a subtle but important way. However, for certain variables of interest, a convenience sample may still be fairly representative.
The sampling frame of individuals from whom the sample is actually selected should match the population of interest; bias may result if parts of the population are systematically excluded.
Systematic sampling takes an organized (but not random) approach to the selection process, as in picking every 50th name on a list, or the first product to come off the production line each hour. Just as with convenience sampling, there may be subtle sources of bias in such a plan, or it may be adequate for the purpose at hand.
Most studies are subject to some degree of nonresponse, referring to individuals who do not go along with the researchers’ intention to include them in a study. If there are too many non-respondents, and they are different from respondents in an important way, then the sample turns out to be biased.
In general, bias may be eliminated (in theory), or at least reduced (in practice), if researchers do their best to implement a probability sampling plan that utilizes randomness.
The most basic probability sampling plan is a simple random sample, where every group of individuals has the same chance of being selected as every other group of the same size. This is achieved by sampling at random and without replacement.
In a cluster sample, groups of individuals are randomly selected, such as all people in the same household. In a cluster sample, all members of each selected group participate in the study.
A stratified sample divides the population into groups called strata before selecting study participants at random from within those groups.
Multistage sampling makes the sampling process more manageable by working down from a large population to successively smaller groups within the population, taking advantage of stratifying along the way, and sometimes finishing up with a cluster sample or a simple random sample.
Assuming the various sources of bias have been avoided, researchers can learn more about the variables of interest for the population by taking larger samples. The “extreme” (meaning, the largest possible sample) would be to study every single individual in the population (the goal of a census), but in practice, such a design is rarely feasible. Instead, researchers must try to obtain the largest sample that fits in their budget (in terms of both time and money), and must take great care that the sample is truly representative of the population of interest.
We will further discuss the topic of sample size when we cover sampling distributions and inferential statistics.
In this short section on sampling, we learned various techniques by which one can choose a sample of individuals from an entire population to collect data from. This is seemingly a simple step in the big picture of statistics, but it turns out that it has a crucial effect on the conclusions we can draw from the sample about the entire population (i.e., inference).
Caution
Generally speaking, a probability sampling plan (such as a simple random sample, cluster, or stratified sampling) will result in a nonbiased sample, which can be safely used to make inferences. Moreover, the inferential procedures that we will learn later in this course assume that the sample was chosen at random.
That being said, other (nonrandom) sampling techniques are available, and sometimes using them is the best we can do. It is important, though, when these techniques are used, to be aware of the types of bias that they introduce, and thus the limitations of the conclusions that can be drawn from the resulting samples.
In this unit, we discussed the first step in the big picture of statistics — production of data.
Production of data happens in two stages: sampling and study design.
Our goal in sampling is to get a sample that represents the population of interest well, so that when we get to the inference stage, making conclusions based on this sample about the entire population will make sense.
We discussed several biased sampling plans, but also introduced the “family” of probability sampling plans, the simplest of which is the simple random sample, that (at least in theory) are supposed to provide a sample that is not subject to any biases.
In the section on study design, we introduced 3 types of design: observational study, controlled experiment, and sample survey.
We distinguished among different types of studies and learned the details of each type of study design. By doing so, we also expanded our understanding of the issue of establishing causation that was first discussed in the previous unit of the course. In the Exploratory Data Analysis unit, we learned that in general, association does not imply causation, due to the fact that lurking variables might be responsible for the association we observe, which means we cannot establish that there is a causal relationship between our “explanatory” variable and our response variable.
In this unit, we completed the causation puzzle by learning under what circumstances an observed association between variables CAN be interpreted as causation.
We saw that in observational studies, the best we can do is to control for what we think might be potential lurking variables, but we can never be sure that there aren’t any others that we didn’t anticipate. Therefore, we can come closer to establishing causation, but never really establish it.
The only way we can, at least in theory, eliminate the effect of (or control for) ALL lurking variables is by conducting a randomized controlled experiment, in which subjects are randomly assigned to one of the treatment groups. Only in this case can we interpret an observed association as causation.
Obviously, due to ethical or other practical reasons, not every study can be conducted as a randomized experiment. Where possible, however, a double-blind randomized controlled experiment is about the best study design we can use.
Another very common study design is the survey. While a survey is a special kind of observational study, it really is treated as a separate design, since it is so common and is the type of study that the general public is most often exposed to (polls). It is important that we be aware of the fact that the wording, ordering, or type of questions asked in a poll could have an impact on the response. In order for a survey’s results to be reliable, these issues should be carefully considered when the survey is designed.
We saw that with observational studies it is difficult to establish convincing evidence of a causal relationship, because of lack of control over outside variables (called lurking variables). Other pitfalls that may arise are that individuals’ behaviors may be affected if they know they are participating in an observational study, and that individuals’ memories may be faulty if they are asked to recall information from the past.
Experiments allow researchers to take control of lurking variables by randomized assignment to treatments, which helps provide more convincing evidence of causation. The design may be enhanced by making sure that subjects and/or researchers are blind to who receives what treatment. Depending on what relationship is being researched, it may be difficult to design an experiment whose setting is realistic enough that we can safely generalize the conclusions to real life.
Another reason that observational studies are utilized rather than experiments is that certain explanatory variables — such as income or alcohol intake — either cannot or should not be controlled by researchers.
Sample surveys are occasionally used to examine relationships, but often they assess values of many separate variables, such as respondents’ opinions on various matters. Survey questions should be designed carefully, in order to ensure unbiased assessment of the variables’ values.
Throughout this unit, we established guidelines for the ideal production of data, which should be held as standards to strive for. Realistically, however, it is rarely possible to carry out a study which is completely free of flaws. Therefore, common sense must frequently be applied in order to decide which imperfections we can live with, and which ones could completely undermine a study’s results.
(Optional) Outside Reading: Little Handbook – Design & Sampling (one long & one short)
CO-1: Describe the roles biostatistics serves in the discipline of public health.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Video
Video: Unit 3A: Introduction (5:36)
Review of the Big Picture
Learning Objectives
LO 1.3: Identify and differentiate between the components of the Big Picture of Statistics
Recall the Big Picture — the four-step process that encompasses statistics (as it is presented in this course):
So far, we’ve discussed the first two steps:
Producing data — how data are obtained, and what considerations affect the data production process.
Exploratory data analysis — tools that help us get a first feel for the data, by exposing their features using visual displays and numerical summaries which help us explore distributions, compare distributions, and investigate relationships.
(Recall that the structure of this course is such that Exploratory Data Analysis was covered first, followed by Producing Data.)
Our eventual goal is Inference — drawing reliable conclusions about the population based on what we’ve discovered in our sample.
In order to really understand how inference works, though, we first need to talk about Probability, because it is the underlying foundation for the methods of statistical inference.
The probability unit starts with an introduction, which will give you some motivating examples and an intuitive and informal perspective on probability.
Why do we need to understand probability?
• We often want to estimate the chance that an event (of interest to us) will occur.
• Many values of interest are probabilities or are derived from probabilities, for example, prevalence rates, incidence rates, and sensitivity/specificity of tests for disease.
• Plus!! Inferential statistics relies on probability to
• Test hypotheses
• Estimate population values, such as the population mean or population proportion.
Probability and Inference
We will use an example to try to explain why probability is so essential to inference.
First, here is the general idea:
As we all know, the way statistics works is that we use a sample to learn about the population from which it was drawn. Ideally, the sample should be random so that it represents the population well.
Recall from the discussion about sampling that when we say that a random sample represents the population well we mean that there is no inherent bias in this sampling technique.
It is important to acknowledge, though, that this does not mean that all random samples are necessarily “perfect.” Random samples are still random, and therefore no random sample will be exactly the same as another.
One random sample may give a fairly accurate representation of the population, while another random sample might be “off,” purely due to chance.
Unfortunately, when looking at a particular sample (which is what happens in practice), we will never know how much it differs from the population.
This uncertainty is where probability comes into the picture. This gives us a way to draw conclusions about the population in the face of the uncertainty that is generated by the use of a random sample.
We use probability to quantify how much we expect random samples to vary.
The following example will illustrate this important point.
EXAMPLE:
Suppose that we are interested in estimating the percentage of U.S. adults who favor the death penalty.
In order to do so, we choose a random sample of 1,200 U.S. adults and ask their opinion: either in favor of or against the death penalty.
We find that 744 out of the 1,200, or 62%, are in favor. (Comment: although this is only an example, this figure of 62% is quite realistic, given some recent polls).
Here is a picture that illustrates what we have done and found in our example:
Our goal here is inference — to learn and draw conclusions about the opinions of the entire population of U.S. adults regarding the death penalty, based on the opinions of only 1,200 of them.
Can we conclude that 62% of the population favors the death penalty?
• Not exactly. Another random sample could give a somewhat different result, so our estimate carries uncertainty.
But since our sample is random, we know that our uncertainty is due to chance, and not due to problems with how the sample was collected.
So we can use probability to describe the likelihood that our sample is within a desired level of precision.
For example, probability can answer the question, “How likely is it that our sample estimate is no more than 3% from the true percentage of all U.S. adults who are in favor of the death penalty?”
The answer to this question (which we find using probability) is obviously going to have an important impact on the confidence we can attach to the inference step.
In particular, if we find it quite unlikely that the sample percentage will be very different from the population percentage, then we have a lot of confidence that we can draw conclusions about the population based on the sample.
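To get a feel for this, here is a small simulation sketch in Python (our own illustration, not part of the course materials). It assumes, purely for illustration, that the true population percentage in favor is 62%, and asks how often a random sample of 1,200 adults gives an estimate within 3 percentage points of that truth:

```python
import random

# Simulation sketch (illustration only). ASSUMPTION: the true population
# percentage in favor is 62%, matching the sample figure above.
true_p, n, sims = 0.62, 1200, 10_000

within_3 = 0
for _ in range(sims):
    in_favor = sum(random.random() < true_p for _ in range(n))  # sampled count in favor
    if abs(in_favor / n - true_p) <= 0.03:
        within_3 += 1

print(within_3 / sims)   # typically around 0.97
```

Runs of this sketch typically give a proportion around 0.97, which is exactly the kind of statement that probability will eventually let us make precisely rather than by simulation.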
In the health sciences, a comparable situation to the death penalty example would be when we wish to determine the prevalence of a certain disease or condition.
In epidemiology, the prevalence of a health-related state (typically disease, but also other things like smoking or seat belt use) in a statistical population is defined as the total number of cases in the population, divided by the number of individuals in the population.
As we will see, this is a form of probability.
In practice, we will need to estimate the prevalence using a sample and in order to make inferences about the population from a sample, we will need to understand probability.
EXAMPLE:
The CDC estimated that in 2011, 8.3% of the U.S. population had diabetes. In other words, the CDC estimated the prevalence of diabetes in the U.S. to be 8.3%.
Fact Sheet on Diabetes from the CDC.
There are numerous statistics and graphs reported in this document that you should now be able to understand!!
Other common probabilities used in the health sciences are
• (Cumulative) Incidence: the probability that a person with no prior disease will develop disease over some specified time period
• Sensitivity of a diagnostic or screening test: the probability that a person tests positive, given that the person has the disease. Specificity of a diagnostic or screening test: the probability that a person tests negative, given that the person does not have the disease. Related quantities include predictive value positive, predictive value negative, the false positive rate, and the false negative rate.
• Survival probability: the probability an individual survives beyond a certain time
Unit 3A: Probability
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.4: Relate the probability of an event to the likelihood of this event occurring.
Learning Objectives
LO 6.5: Apply the relative frequency approach to estimate the probability of an event.
Learning Objectives
LO 6.6: Apply basic logic and probability rules in order to find the empirical probability of an event.
Video
Video: Basic Probability Rules (25:17)
In the previous section, we introduced probability as a way to quantify the uncertainty that arises from conducting experiments using a random sample from the population of interest.
We saw that the probability of an event (for example, the event that a randomly chosen person has blood type O) can be estimated by the relative frequency with which the event occurs in a long series of trials. So we would collect data from lots of individuals to estimate the probability of someone having blood type O.
In this section, we will establish the basic methods and principles for finding probabilities of events.
We will also cover some of the basic rules of probability which can be used to calculate probabilities.
Introduction
We will begin with a classical probability example of tossing a fair coin three times.
Since heads and tails are equally likely on each toss in this scenario, each of the possible results of three tosses is also equally likely, so we can list all possible outcomes and use this list to calculate probabilities.
Since our focus in this course is on data and statistics (not theoretical probability), in most of our future problems we will use a summarized dataset, usually a frequency table or two-way table, to calculate probabilities.
EXAMPLE: Toss a fair coin three times
Let’s list each possible outcome (or possible result):
{HHH, THH, HTH, HHT, HTT, THT, TTH, TTT}
Now let’s define the following events:
Event A: “Getting no H”
Event B: “Getting exactly one H”
Event C: “Getting at least one H”
Note that each event is indeed a statement about the outcome that the experiment is going to produce. In practice, each event corresponds to some collection (subset) of the possible outcomes.
Event A: “Getting no H” → TTT
Event B: “Getting exactly one H” → HTT, THT, TTH
Event C: “Getting at least one H” → HTT, THT, TTH, THH, HTH, HHT, HHH
Here is a visual representation of events A, B and C.
From this visual representation of the events, it is easy to see that event B is totally included in event C, in the sense that every outcome in event B is also an outcome in event C. Also, note that event A stands apart from events B and C, in the sense that they have no outcome in common, or no overlap. At this point these are only noteworthy observations, but as you’ll discover later, they are very important ones.
What if we added the new event:
Event D: “Getting a T on the first toss” → THH, THT, TTH, TTT
How would it look if we added event D to the diagram above?
Remember, since H and T are equally likely on each toss, and since there are 8 possible outcomes, the probability of each outcome is 1/8.
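If you would like to check your answers with a few lines of code, here is a short Python sketch (our own illustration; the event definitions simply restate events A–D above) that lists the 8 equally likely outcomes and counts how many fall in each event:

```python
from itertools import product

# List the 8 equally likely outcomes of three tosses: 'HHH', 'HHT', ..., 'TTT'
outcomes = [''.join(t) for t in product('HT', repeat=3)]

events = {
    'A': [o for o in outcomes if o.count('H') == 0],   # getting no H
    'B': [o for o in outcomes if o.count('H') == 1],   # exactly one H
    'C': [o for o in outcomes if o.count('H') >= 1],   # at least one H
    'D': [o for o in outcomes if o[0] == 'T'],         # T on the first toss
}

n = len(outcomes)   # 8 equally likely outcomes
for name, ev in events.items():
    print(f"P({name}) = {len(ev)}/{n} = {len(ev)/n}")   # 1/8, 3/8, 7/8, 4/8
```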
See if you can answer the following questions using the diagrams and/or the list of outcomes for each event along with what you have learned so far about probability.
Learn By Doing: Tossing a Fair Coin Three Times
If you were able to answer those questions correctly, you likely have a good instinct for calculating probability! Read on to learn how we will apply this knowledge.
If not, we will try to help you develop this skill in this section.
Comment:
• Note that in event C, “Getting at least one head” there is only one possible outcome which is missing, “Getting NO heads” = TTT. We will address this again when we talk about probability rules, in particular the complement rule. At this point, we just want you to think about how these two events are “opposites” in this scenario.
It is VERY important to realize that just because we can list out the possible outcomes, this does not imply that each outcome is equally likely.
This is the (funny) message in the Daily Show clip we provided on the previous page. But let’s think about this again. In that clip, Walter is claiming that since there are two possible outcomes, the probability is 0.5. The two possible outcomes are
• The world will be destroyed due to use of the Large Hadron Collider
• The world will NOT be destroyed due to use of the Large Hadron Collider
Hopefully it is clear that these two outcomes are not equally likely!!
Let’s consider a more common example.
EXAMPLE: Birth Defects
Suppose we randomly select three children and we are interested in the probability that none of the children have any birth defects.
We use the notation D to represent a child born with a birth defect and N to represent a child born with NO birth defect. We can list the possible outcomes just as we did for the coin toss; they are:
{DDD, NDD, DND, DDN, DNN, NDN, NND, NNN}
Are the events DDD (all three children are born with birth defects) and NNN (none of the children are born with birth defects) equally likely?
It should be reasonable to you that P(NNN) is much larger than P(DDD).
This is because P(N) and P(D) are not equally likely events.
It is rare (certainly not 50%) for a randomly selected child to be born with a birth defect.
Rules of Probability
Now we move on to learning some of the basic rules of probability.
Fortunately, these rules are very intuitive, and as long as they are applied systematically, they will let us solve more complicated problems; in particular, those problems for which our intuition might be inadequate.
Since most of the probabilities you will be asked to find can be calculated using both
• logic and counting
and
• the rules we will be learning,
we give the following advice as a principle.
PRINCIPLE:
If you can calculate a probability using logic and counting, you do not NEED a probability rule (although the correct rule can always be applied).
Probability Rule One
Our first rule simply reminds us of the basic property of probability that we’ve already learned.
The probability of an event, which informs us of the likelihood of it occurring, can range anywhere from 0 (indicating that the event will never occur) to 1 (indicating that the event is certain).
Probability Rule One:
• For any event A, 0 ≤ P(A) ≤ 1.
NOTE: One practical use of this rule is that it can be used to identify any probability calculation that comes out to be more than 1 (or less than 0) as incorrect.
Before moving on to the other rules, let’s first look at an example that will provide a context for illustrating the next several rules.
EXAMPLE: Blood Types
As previously discussed, all human blood can be typed as O, A, B or AB.
In addition, the frequency of the occurrence of these blood types varies by ethnic and racial groups.
According to Stanford University's Blood Center (bloodcenter.stanford.edu), these are the probabilities of human blood types in the United States (the probability for type A has been omitted on purpose):

Blood type:   O       A      B       AB
Probability:  0.44    ?      0.10    0.04
Motivating question for rule 2: A person in the United States is chosen at random. What is the probability of the person having blood type A?
Answer: Our intuition tells us that since the four blood types O, A, B, and AB exhaust all the possibilities, their probabilities together must sum to 1, which is the probability of a "certain" event (a person has one of these 4 blood types for certain).
Since the probabilities of O, B, and AB together sum to 0.44 + 0.1 + 0.04 = 0.58, the probability of type A must be the remaining 0.42 (1 – 0.58 = 0.42):
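Here is the same calculation written as a tiny Python sketch (our own illustration), using the three probabilities given in the table:

```python
# Rule Two in action: the four probabilities must sum to 1, so the missing
# probability for type A is whatever remains.
known = {'O': 0.44, 'B': 0.10, 'AB': 0.04}
p_A = 1 - sum(known.values())
print(round(p_A, 2))   # 0.42
```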
Probability Rule Two
This example illustrates our second rule, which tells us that the probability of all possible outcomes together must be 1.
Probability Rule Two:
The sum of the probabilities of all possible outcomes is 1.
This is a good place to compare and contrast what we’re doing here with what we learned in the Exploratory Data Analysis (EDA) section.
• Notice that in this problem we are essentially focusing on a single categorical variable: blood type.
• We summarized this variable above, as we summarized single categorical variables in the EDA section, by listing what values the variable takes and how often it takes them.
• In EDA we used percentages, and here we’re using probabilities, but the two convey the same information.
• In the EDA section, we learned that a pie chart provides an appropriate display when a single categorical variable is involved, and similarly we can use it here (using percentages instead of probabilities):
Even though what we're doing here is indeed similar to what we've done in the EDA section, there is a subtle but important difference between the underlying situations:
• In EDA, we summarized data that were obtained from a sample of individuals for whom values of the variable of interest were recorded.
• Here, when we present the probability of each blood type, we have in mind the entire population of people in the United States, for which we are presuming to know the overall frequency of values taken by the variable of interest.
Did I Get This?: Probability Rule Two
Probability Rule Three
In probability and in its applications, we are frequently interested in finding out the probability that a certain event will not occur.
An important point to understand here is that “event A does not occur” is a separate event that consists of all the possible outcomes that are not in A and is called “the complement event of A.”
Notation: we will write “not A” to denote the event that A does not occur. Here is a visual representation of how event A and its complement event “not A” together represent all possible outcomes.
Comment:
• Such a visual display is called a “Venn diagram.” A Venn diagram is a simple way to visualize events and the relationships between them using rectangles and circles.
Rule 3 deals with the relationship between the probability of an event and the probability of its complement event.
Given that event A and event “not A” together make up all possible outcomes, and since rule 2 tells us that the sum of the probabilities of all possible outcomes is 1, the following rule should be quite intuitive:
Probability Rule Three (The Complement Rule):
• P(not A) = 1 – P(A)
• that is, the probability that an event does not occur is 1 minus the probability that it does occur.
EXAMPLE: Blood Types
Back to the blood type example:
Here is some additional information:
• A person with type A can donate blood to a person with type A or AB.
• A person with type B can donate blood to a person with type B or AB.
• A person with type AB can donate blood to a person with type AB only.
• A person with type O blood can donate to anyone.
What is the probability that a randomly chosen person cannot donate blood to everyone? In other words, what is the probability that a randomly chosen person does not have blood type O? We need to find P(not O). Using the Complement Rule, P(not O) = 1 – P(O) = 1 – 0.44 = 0.56. In other words, 56% of the U.S. population does not have blood type O:
Clearly, we could also find P(not O) directly by adding the probabilities of B, AB, and A.
Comment:
• Note that the Complement Rule, P(not A) = 1 – P(A), can be re-formulated as P(A) = 1 – P(not A).
• This seemingly trivial algebraic manipulation has an important application, and actually captures the strength of the complement rule.
• In some cases, when finding P(A) directly is very complicated, it might be much easier to find P(not A) and then just subtract it from 1 to get the desired P(A).
• We will come back to this comment soon and provide additional examples.
Did I Get This?: Probability Rule Three
Comments:
• The complement rule can be useful whenever it is easier to calculate the probability of the complement of the event rather than the event itself.
• Notice, we again used the phrase “at least one.”
• Now we have seen that the complement of “at least one …” is “none … ” or “no ….” (as we mentioned previously in terms of the events being “opposites”).
• In the above activity we see that
• P(NONE of these two side effects) = 1 – P(at least one of these two side effects)
• This is a common application of the complement rule which you can often recognize by the phrase “at least one” in the problem.
Probabilities Involving Multiple Events
We will often be interested in finding probabilities involving multiple events such as
• P(A or B) = P(event A occurs or event B occurs or both occur)
• P(A and B) = P(both event A occurs and event B occurs)
A common issue with terminology relates to how we usually think of “or” in our daily life. For example, when a parent says to his or her child in a toy store “Do you want toy A or toy B?”, this means that the child is going to get only one toy and he or she has to choose between them. Getting both toys is usually not an option.
In contrast:
In probability, “OR” means either one or the other or both.
and so P(A or B) = P(event A occurs or event B occurs or BOTH occur)
Having said that, it should be noted that there are some cases where it is simply impossible for the two events to both occur at the same time.
Probability Rule Four
The distinction between events that can happen together and those that cannot is an important one.
Disjoint: Two events that cannot occur at the same time are called disjoint or mutually exclusive. (We will use disjoint.)
It should be clear from the Venn diagrams of these two cases that
• in the first case, where the events are NOT disjoint, P(A and B) ≠ 0
• in the second case, where the events ARE disjoint, P(A and B) = 0.
Here are two examples:
EXAMPLE:
Consider the following two events:
A — a randomly chosen person has blood type A, and
B — a randomly chosen person has blood type B.
In rare cases, it is possible for a person to have more than one type of blood flowing through his or her veins, but for our purposes, we are going to assume that each person can have only one blood type. Therefore, it is impossible for the events A and B to occur together.
• Events A and B are DISJOINT
On the other hand …
EXAMPLE:
Consider the following two events:
A — a randomly chosen person has blood type A
B — a randomly chosen person is a woman.
In this case, it is possible for events A and B to occur together.
• Events A and B are NOT DISJOINT.
The Venn diagrams suggest that another way to think about disjoint versus not disjoint events is that disjoint events do not overlap. They do not share any of the possible outcomes, and therefore cannot happen together.
On the other hand, events that are not disjoint are overlapping in the sense that they share some of the possible outcomes and therefore can occur at the same time.
We now begin with a simple rule for finding P(A or B) for disjoint events.
Probability Rule Four (The Addition Rule for Disjoint Events):
• If A and B are disjoint events, then P(A or B) = P(A) + P(B).
Comment:
• When dealing with probabilities, the word “or” will always be associated with the operation of addition; hence the name of this rule, “The Addition Rule.”
EXAMPLE: Blood Types
Recall the blood type example:
Here is some additional information
• A person with type A can donate blood to a person with type A or AB.
• A person with type B can donate blood to a person with type B or AB.
• A person with type AB can donate blood to a person with type AB only.
• A person with type O blood can donate to anyone.
What is the probability that a randomly chosen person is a potential donor for a person with blood type A?
From the information given, we know that being a potential donor for a person with blood type A means having blood type A or O.
We therefore need to find P(A or O). Since the events A and O are disjoint, we can use the addition rule for disjoint events to get:
• P(A or O) = P(A) + P(O) = 0.42 + 0.44 = 0.86.
It is easy to see why adding the probabilities actually makes sense.
If 42% of the population has blood type A and 44% of the population has blood type O,
• then 42% + 44% = 86% of the population has either blood type A or O, and thus are potential donors to a person with blood type A.
This reasoning about why the addition rule makes sense can be visualized using the pie chart below:
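If you like to verify such arithmetic with software, here is a short Python sketch (our own illustration) that applies the complement rule and the addition rule for disjoint events to the blood type probabilities:

```python
# Blood type probabilities from this example
p = {'O': 0.44, 'A': 0.42, 'B': 0.10, 'AB': 0.04}

p_not_O = 1 - p['O']              # complement rule: 0.56
p_donor_to_A = p['A'] + p['O']    # A and O are disjoint, so probabilities add: 0.86
print(round(p_not_O, 2), round(p_donor_to_A, 2))
```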
Learn By Doing: Probability Rule Four
Comment:
• The Addition Rule for Disjoint Events can naturally be extended to more than two disjoint events. Let’s take three, for example. If A, B and C are three disjoint events
then P(A or B or C) = P(A) + P(B) + P(C). The rule is the same for any number of disjoint events.
Did I Get This?: Probability Rule Four
We are now finished with the first version of the Addition Rule (Rule four) which is the version restricted to disjoint events. Before covering the second version, we must first discuss P(A and B).
Finding P(A and B) using Logic
We now turn to calculating
• P(A and B) = P(both event A occurs and event B occurs)
Later, we will discuss the rules for calculating P(A and B).
First, we want to illustrate that a rule is not needed whenever you can determine the answer through logic and counting.
Special Case:
There is one special case for which we know what P(A and B) equals without applying any rule.
Learn by Doing: Finding P(A and B) #1
So, if events A and B are disjoint, then (by definition) P(A and B)= 0. But what if the events are not disjoint?
Recall that rule 4, the Addition Rule, has two versions. One is restricted to disjoint events, which we’ve already covered, and we’ll deal with the more general version later in this module. The same will be true of probabilities involving AND.
However, except in special cases, we will rely on LOGIC to find P(A and B) in this course.
Before covering any formal rules, let’s look at an example where the events are not disjoint.
EXAMPLE: Periodontal Status and Gender
Learn by Doing: Periodontal Status and Gender
We like to ask probability questions similar to the previous example (using a two-way table based upon data) as this allows you to make connections between these topics and helps you keep some of what you have learned about data fresh in your mind.
Caution
Remember, our primary goal in this course is to analyze real-life data!
Probability Rule Five
We are now ready to move on to the extended version of the Addition Rule.
In this section, we will learn how to find P(A or B) when A and B are not necessarily disjoint.
• We’ll call this extended version the “General Addition Rule” and state it as Probability Rule Five.
We will begin by stating the rule and providing an example similar to the types of problems we generally ask in this course. Then we will present another example, in which we do not have raw data from a sample to work with.
As we witnessed in previous examples, when the two events are not disjoint, there is some overlap between the events.
• If we simply add the two probabilities together, we will get the wrong answer because we have counted some “probability” twice!
• Thus, we must subtract out this “extra” probability to arrive at the correct answer. The Venn diagram and the two-way tables are helpful in visualizing this idea.
Probability Rule Five (The General Addition Rule):
• For any two events A and B, P(A or B) = P(A) + P(B) – P(A and B).
This rule is more general since it works for any pair of events (even disjoint events). Our advice is still to try to answer the question using logic and counting whenever possible; otherwise, we must be extremely careful to choose the correct rule for the problem.
PRINCIPLE:
If you can calculate a probability using logic and counting, you do not NEED a probability rule (although the correct rule can always be applied).
Notice that, if A and B are disjoint, then P(A and B) = 0 and rule 5 reduces to rule 4 for this special case.
Let’s revisit the last example:
EXAMPLE: Periodontal Status and Gender
Consider randomly selecting one individual from those represented in the following table regarding the periodontal status of individuals and their gender. Periodontal status refers to gum disease where individuals are classified as either healthy, have gingivitis, or have periodontal disease.
Let’s review what we have learned so far. We can calculate any probability in this scenario if we can determine how many individuals satisfy the event or combination of events.
• P(Male) = 3009/8027 = 0.3749
• P(Female) = 5018/8027 = 0.6251
• P(Healthy) = 3750/8027 = 0.4672
• P(Not Healthy) = P(Gingivitis or Perio) = (2419 + 1858)/8027 = 4277/8027 = 0.5328
We could also calculate this using the complement rule: 1 – P(Healthy).
We also previously found that
• P(Male AND Healthy) = 1143/8027 = 0.1424
Recall rule 5, P(A or B) = P(A) + P(B) – P(A and B). We now use this rule to calculate P(Male OR Healthy)
• P(Male or Healthy) = P(Male) + P(Healthy) – P(Male and Healthy) = 0.3749 + 0.4672 – 0.1424 = 0.6997 or about 70%
We solved this question earlier by simply counting how many individuals are either Male or Healthy or both. The picture below illustrates the values we need to combine. We need to count
• All males
• All healthy individuals
• BUT, not count anyone twice!!
Using this logical approach we would find
• P(Male or Healthy) = (1143 + 929 + 937 + 2607)/8027 = 5616/8027 = 0.6996
We have a minor difference in our answers in the last decimal place due to the rounding that occurred when we calculated P(Male), P(Healthy), and P(Male and Healthy) and then applied rule 5.
Clearly the answer is effectively the same, about 70%. If we carried our answers to more decimal places or if we used the original fractions, we could eliminate this small discrepancy entirely.
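In fact, working with the original fractions is easy to do with software. The short Python sketch below (our own illustration) repeats the Rule Five calculation with exact fractions, so no rounding discrepancy appears:

```python
from fractions import Fraction

total     = 8027
p_male    = Fraction(3009, total)
p_healthy = Fraction(3750, total)
p_both    = Fraction(1143, total)          # Male AND Healthy

# General Addition Rule (Rule Five), computed exactly:
p_male_or_healthy = p_male + p_healthy - p_both
print(p_male_or_healthy, float(p_male_or_healthy))   # 5616/8027 ≈ 0.69964
```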
Let’s look at one final example to illustrate Probability Rule 5 when the rule is needed – i.e. when we don’t have actual data.
EXAMPLE: Important Delivery!
It is vital that a certain document reach its destination within one day. To maximize the chances of on-time delivery, two copies of the document are sent using two services, service A and service B. It is known that the probabilities of on-time delivery are:
• 0.90 for service A (P(A) = 0.90)
• 0.80 for service B (P(B) = 0.80)
• 0.75 for both services being on time (P(A and B) = 0.75)
(Note that A and B are not disjoint. They can happen together with probability 0.75.)
The Venn diagrams below illustrate the probabilities P(A), P(B), and P(A and B) [not drawn to scale]:
In the context of this problem, the obvious question of interest is:
• What is the probability of on-time delivery of the document using this strategy (of sending it via both services)?
The document will reach its destination on time as long as it is delivered on time by service A or by service B or by both services. In other words, the document arrives on time when event A occurs or event B occurs or both occur. So …
P(on-time delivery using this strategy) = P(A or B), which is represented by the shaded region in the diagram below:
We can now
• use the three Venn diagrams representing P(A), P(B) and P(A and B)
• to see that we can find P(A or B) by adding P(A) (represented by the left circle) and P(B) (represented by the right circle),
• then subtracting P(A and B) (represented by the overlap), since we included it twice, once as part of P(A) and once as part of P(B).
This is shown in the following image:
If we apply this to our example, we find that:
• P(A or B)= P(on-time delivery using this strategy)= 0.90 + 0.80 – 0.75 = 0.95.
So our strategy of using two delivery services increases our probability of on-time delivery to 0.95.
While the Venn diagrams were great for visualizing the General Addition Rule, in cases like these it is much easier to display the information in, and work with, a two-way table of probabilities, much as we examined the relationship between two categorical variables in the Exploratory Data Analysis section.
We will simply show you the table, not derive it, as you won’t be asked to do this yourself. You should be able to see that some logic and simple addition/subtraction is all we used to fill in the table below (a short sketch of this logic appears below).
When using a two-way table, we must remember to look at the entire row or column to find overall probabilities involving only A or only B.
• P(A) = 0.90 means that in 90% of the cases when service A is used, it delivers the document on time. To find this we look at the total probability for the row containing A. In finding P(A), we do not know whether B happens or not.
• P(B) = 0.80 means that in 80% of the cases when service B is used, it delivers the document on time. To find this we look at the total probability for the column containing B. In finding P(B), we do not know whether A happens or not.
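For those curious about the logic behind the table, here is a short Python sketch (our own illustration, not required work) showing the addition and subtraction used to fill in its cells from the three given probabilities:

```python
p_A, p_B, p_A_and_B = 0.90, 0.80, 0.75

p_A_only  = p_A - p_A_and_B          # A on time, B not: 0.15
p_B_only  = p_B - p_A_and_B          # B on time, A not: 0.05
p_A_or_B  = p_A + p_B - p_A_and_B    # General Addition Rule: 0.95
p_neither = 1 - p_A_or_B             # complement rule: 0.05

print(round(p_A_only, 2), round(p_B_only, 2), round(p_A_or_B, 2), round(p_neither, 2))
```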
Comment
• When we used two-way tables in the Exploratory Data Analysis (EDA) section, it was to record values of two categorical variables for a concrete sample of individuals.
• In contrast, the information in a probability two-way table is for an entire population, and the values are rather abstract.
• If we had treated something like the delivery example in the EDA section, we would have recorded the actual numbers of on-time (and not-on-time) deliveries for samples of documents mailed with service A or B.
• In this section, the long-term probabilities are presented as being known.
• Presumably, the reported probabilities in this delivery example were based on relative frequencies recorded over many repetitions.
Interactive Applet: Probability Venn Diagram
Rounding Rule of Thumb for Probability:
Follow these general guidelines in this course. If in doubt, carry more decimal places. If we specify a particular precision, give exactly what is requested.
• In general you should carry probabilities to at least 4 decimal places for intermediate steps.
• We often round our final answer to two or three decimal places.
• For extremely small probabilities, it is important to keep one or two significant digits (non-zero digits), such as 0.000001 or 0.000034, etc.
Many computer packages might display extremely small values using scientific notation such as
• 1.58 × 10^-5 or 1.58E-5 to represent 0.0000158
Let’s Summarize
So far in our study of probability, you have been introduced to the sometimes counter-intuitive nature of probability and to the fundamentals that underlie it, such as relative frequency.
We also gave you some tools to help you find the probabilities of events — namely the probability rules.
You probably noticed that the probability section was significantly different from the two previous sections; it has a much larger technical/mathematical component, so the results tend to be more of the “right or wrong” nature.
In the Exploratory Data Analysis section, for the most part, the computer took care of the technical aspect of things, and our tasks were to tell it to do the right thing and then interpret the results.
In probability, we do the work from beginning to end, from choosing the right tool (rule) to use, to using it correctly, to interpreting the results.
Here is a summary of the rules we have presented so far.
1. Probability Rule #1 states:
• For any event A, 0 ≤ P(A) ≤ 1
2. Probability Rule #2 states:
• The sum of the probabilities of all possible outcomes is 1
3. The Complement Rule (#3) states that
• P(not A) = 1 – P(A)
or when rearranged
• P(A) = 1 – P(not A)
The latter representation of the Complement Rule is especially useful when we need to find probabilities of events of the sort “at least one of …”
4. The General Addition Rule (#5) states that for any two events,
• P(A or B) = P(A) + P(B) – P(A and B),
where, by P(A or B) we mean P(A occurs or B occurs or both).
In the special case of disjoint events, events that cannot occur together, the General Addition Rule can be reduced to the Addition Rule for Disjoint Events (#4), which is
• P(A or B) = P(A) + P(B). *
*ONLY use when you are CONVINCED the events are disjoint (they do NOT overlap)
5. The restricted version of the addition rule (for disjoint events) can be easily extended to more than two events.
6. So far, we have only found P(A and B) using logic and counting in simple examples.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.4: Relate the probability of an event to the likelihood of this event occurring.
Learning Objectives
LO 6.5: Apply the relative frequency approach to estimate the probability of an event.
Learning Objectives
LO 6.6: Apply basic logic and probability rules in order to find the empirical probability of an event.
Review: Unit 1 Case C-C
• In particular, the idea of conditional percentages is equivalent to the idea of conditional probabilities discussed in this section.
Video
Video: Conditional Probability and Independence (28:13)
In the last section, we established some of the basic rules of probability, which included:
• Basic Properties of Probability (Rule One and Rule Two)
• The Complement Rule (Rule Three)
• The Addition Rule for Disjoint Events (Rule Four)
• The General Addition Rule for which the events need not be disjoint (Rule Five)
In order to complete our set of rules, we still require two Multiplication Rules for finding P(A and B) and the important concepts of independent events and conditional probability.
We’ll first introduce the idea of independent events, then introduce the Multiplication Rule for independent events which gives a way to find P(A and B) in cases when the events A and B are independent.
Next we will define conditional probability and use it to formalize our definition of independent events, which is initially presented only in an intuitive way.
We will then develop the General Multiplication Rule, a rule that will tell us how to find P(A and B) in cases when the events A and B are not necessarily independent.
We’ll conclude with a discussion of probability applications in the health sciences.
Independent Events
Learning Objectives
LO 6.7: Determine whether two events are independent or dependent and justify your conclusion.
We begin with a verbal definition of independent events (later we will use probability notation to define this more precisely).
Independent Events:
• Two events A and B are said to be independent if the fact that one event has occurred does not affect the probability that the other event will occur.
• If whether or not one event occurs does affect the probability that the other event will occur, then the two events are said to be dependent.
Here are a few examples:
EXAMPLE:
A woman’s pocket contains two quarters and two nickels.
She randomly extracts one of the coins and, after looking at it, replaces it before picking a second coin.
Let Q1 be the event that the first coin is a quarter and Q2 be the event that the second coin is a quarter.
Are Q1 and Q2 independent events?
• Q1 and Q2 are independent. Why?
Since the first coin that was selected is replaced, whether or not Q1 occurred (i.e., whether the first coin was a quarter) has no effect on the probability that the second coin will be a quarter, P(Q2).
In either case (whether Q1 occurred or not), when she is selecting the second coin she still has two quarters and two nickels in her pocket,
and therefore P(Q2) = 2/4 = 1/2 regardless of whether Q1 occurred.
EXAMPLE:
A woman’s pocket contains two quarters and two nickels.
She randomly extracts one of the coins, and without placing it back into her pocket, she picks a second coin.
As before, let Q1 be the event that the first coin is a quarter, and Q2 be the event that the second coin is a quarter.
Are Q1 and Q2 independent events?
• Q1 and Q2 are not independent. They are dependent. Why?
Since the first coin that was selected is not replaced, whether Q1 occurred (i.e., whether the first coin was a quarter) does affect the probability that the second coin is a quarter, P(Q2).
If Q1 occurred (i.e., the first coin was a quarter), then when the woman is selecting the second coin she has only one quarter and two nickels left in her pocket.
• In this case, P(Q2) = 1/3.
However, if Q1 has not occurred (i.e., the first coin was not a quarter, but a nickel), then when the woman is selecting the second coin she has two quarters and one nickel left in her pocket.
• In this case, P(Q2) = 2/3.
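If you want to convince yourself of these numbers, here is a small simulation sketch in Python (our own illustration; the helper functions draw_two and estimate are ours) that draws the two coins many times, with and without replacement, and estimates P(Q2) separately for the cases where Q1 did and did not occur:

```python
import random

def draw_two(replace):
    """Draw two coins from a pocket with two quarters and two nickels."""
    pocket = ['Q', 'Q', 'N', 'N']
    first = random.choice(pocket)
    if not replace:
        pocket.remove(first)          # without replacement, the first coin is gone
    second = random.choice(pocket)
    return first, second

def estimate(replace, trials=100_000):
    draws = [draw_two(replace) for _ in range(trials)]
    given_q1     = [s == 'Q' for f, s in draws if f == 'Q']
    given_not_q1 = [s == 'Q' for f, s in draws if f == 'N']
    return sum(given_q1) / len(given_q1), sum(given_not_q1) / len(given_not_q1)

print("with replacement   :", estimate(True))    # both close to 1/2    -> independent
print("without replacement:", estimate(False))   # close to 1/3 and 2/3 -> dependent
```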
In these last two examples, we could actually have done some calculation in order to check whether or not the two events are independent.
Sometimes we can just use common sense to guide us as to whether two events are independent. Here is an example.
EXAMPLE:
Two people are selected simultaneously and at random from all people in the United States.
Let B1 be the event that one of the people has blue eyes and B2 be the event that the other person has blue eyes.
In this case, since they were chosen at random, whether one of them has blue eyes has no effect on the likelihood that the other one has blue eyes, and therefore B1 and B2 are independent.
On the other hand …
EXAMPLE:
A family has 4 children, two of whom are selected at random.
Let B1 be the event that one child has blue eyes, and B2 be the event that the other chosen child has blue eyes.
In this case, B1 and B2 are not independent, since we know that eye color is hereditary.
Thus, whether or not one child is blue-eyed will increase or decrease the chances that the other child has blue eyes, respectively.
Comments:
• It is quite common for students to initially get confused about the distinction between the idea of disjoint events and the idea of independent events. The purpose of this comment (and the activity that follows it) is to help students develop more understanding about these very different ideas.
The idea of disjoint events is about whether or not it is possible for the events to occur at the same time (see the examples on the page for Basic Probability Rules).
The idea of independent events is about whether or not the events affect each other in the sense that the occurrence of one event affects the probability of the occurrence of the other (see the examples above).
The following activity deals with the distinction between these concepts.
The purpose of this activity is to help you strengthen your understanding about the concepts of disjoint events and independent events, and the distinction between them.
Learn by Doing: Independent Events
Let’s summarize the three parts of the activity:
• In Example 1: A and B are not disjoint and independent
• In Example 2: A and B are not disjoint and not independent
• In Example 3: A and B are disjoint and not independent.
Why did we leave out the case when the events are disjoint and independent?
The reason is that this case DOES NOT EXIST!
                         A and B Independent    A and B Not Independent
A and B Disjoint         DOES NOT EXIST         Example 3
A and B Not Disjoint     Example 1              Example 2
If events are disjoint then they must be not independent, i.e. they must be dependent events.
Why is that?
• Recall: If A and B are disjoint then they cannot happen together.
• In other words, A and B being disjoint events implies that if event A occurs then B does not occur and vice versa.
• Well… if that’s the case, knowing that event A has occurred dramatically changes the likelihood that event B occurs – that likelihood is zero.
• This implies that A and B are not independent.
Now that we understand the idea of independent events, we can finally get to rules for finding P(A and B) in the special case in which the events A and B are independent.
Later we will present a more general version for use when the events are not necessarily independent.
Multiplication Rule for Independent Events (Rule Six)
Learning Objectives
LO 6.8: Apply the multiplication rule for independent events to calculate P(A and B) for independent events.
We now turn to rules for calculating
• P(A and B) = P(both event A occurs and event B occurs)
beginning with the multiplication rule for independent events.
Using a Venn diagram, we can visualize “A and B,” which is represented by the overlap between events A and B:
Probability Rule Six (The Multiplication Rule for Independent Events):
• If A and B are two INDEPENDENT events, then P(A and B) = P(A) * P(B).
Comment:
• When dealing with probability rules, the word “and” will always be associated with the operation of multiplication; hence the name of this rule, “The Multiplication Rule.”
EXAMPLE:
Recall the blood type example:
Two people are selected simultaneously and at random from all people in the United States.
What is the probability that both have blood type O?
• Let O1 = “person 1 has blood type O” and
• O2 = “person 2 has blood type O”
We need to find P(O1 and O2)
Since they were chosen simultaneously and at random, the blood type of one has no effect on the blood type of the other. Therefore, O1 and O2 are independent, and we may apply Rule 6:
• P(O1 and O2) = P(O1) * P(O2) = 0.44 * 0.44 = 0.1936.
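As a quick check, here is the same multiplication in a tiny Python sketch (our own illustration):

```python
p_O = 0.44
print(p_O * p_O)    # P(O1 and O2) = 0.1936
print(p_O ** 3)     # three independent selections would multiply again: ≈ 0.0852
```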
Did I Get This?: Probability Rule Six
Comments:
• We now have an Addition Rule that says
P(A or B) = P(A) + P(B) for disjoint events,
and a Multiplication Rule that says
P(A and B) = P(A) * P(B) for independent events.
The purpose of this comment is to point out the magnitude of P(A or B) and of P(A and B) relative to either one of the individual probabilities.
Since probabilities are never negative, the probability of one event or another is always at least as large as either of the individual probabilities.
Since probabilities are never more than 1, the probability of one event and another generally involves multiplying numbers that are less than 1, therefore can never be more than either of the individual probabilities.
Here is an example:
EXAMPLE:
Consider the event A that a randomly chosen person has blood type A.
Modify it to a more general event — that a randomly chosen person has blood type A or B — and the probability increases.
Modify it to a more specific (or restrictive) event — that not just one randomly chosen person has blood type A, but that out of two simultaneously randomly chosen people, person 1 will have type A and person 2 will have type B — and the probability decreases.
It is important to mention this in order to root out a common misconception.
• The word “and” is associated in our minds with “adding more stuff.” Therefore, some students incorrectly think that P(A and B) should be larger than either one of the individual probabilities, while it is actually smaller, since it is a more specific (restrictive) event.
• Also, the word “or” is associated in our minds with “having to choose between” or “losing something,” and therefore some students incorrectly think that P(A or B) should be smaller than either one of the individual probabilities, while it is actually larger, since it is a more general event.
Practically, you can use this comment to check yourself when solving problems.
For example, if you solve a problem that involves “or,” and the resulting probability is smaller than either one of the individual probabilities, then you know you have made a mistake somewhere.
Did I Get This?: Comparing P(A and B) to P(A or B)
Comment:
• Probability rule six can be used as a test to see if two events are independent or not.
• If you can easily find P(A), P(B), and P(A and B) using logic or are provided these values, then we can test for independent events using the multiplication rule for independent events:
IF P(A)*P(B) = P(A and B) THEN A and B are independent events, otherwise, they are dependent events.
As you’ve seen, the last three rules that we’ve introduced (the Complement Rule, the Addition Rules, and the Multiplication Rule for Independent Events) are frequently used in solving problems.
Before we move on to our next rule, here are two comments that will help you use these rules in broader types of problems and more effectively.
Comment:
• As we mentioned before, the Addition Rule for Disjoint events (rule four) can be extended to more than two disjoint events.
• Likewise, the Multiplication Rule for independent events (rule six) can be extended to more than two independent events.
• So if A, B and C are three independent events, for example, then P(A and B and C) = P(A) * P(B) * P(C).
• These extensions are quite straightforward, as long as you remember that “or” requires us to add, while “and” requires us to multiply.
EXAMPLE:
Three people are chosen simultaneously and at random.
What is the probability that all three have blood type B?
We’ll use the usual notation of B1, B2 and B3 for the events that persons 1, 2 and 3 have blood type B, respectively.
We need to find P(B1 and B2 and B3). Let’s solve this one together:
Learn by Doing: Extending Probability Rule Six
Here is another example that might be quite surprising.
EXAMPLE:
A fair coin is tossed 10 times. Which of the following two outcomes is more likely?
(a) HHHHHHHHHH
(b) HTTHHTHTTH
Learn by Doing: A Surprising Result using Probability Rule Six?
In fact, they are equally likely. The 10 tosses are independent, so we’ll use the Multiplication Rule for Independent Events:
• P(HHHHHHHHHH) = P(H) * P(H) * … * P(H) = 1/2 * 1/2 * … * 1/2 = (1/2)^10
• P(HTTHHTHTTH) = P(H) * P(T) * … * P(H) = 1/2 * 1/2 * … * 1/2 = (1/2)^10
Here is the idea:
Our random experiment here is tossing a coin 10 times.
• You can imagine how huge the sample space is.
• There are actually 1,024 possible outcomes to this experiment, all of which are equally likely.
Therefore,
• while it is true that it is more likely to get an outcome that has 5 heads and 5 tails than an outcome that has only heads
since there is only one possible outcome which gives all heads
and many possible outcomes which give 5 heads and 5 tails
• if we are comparing 2 specific outcomes, as we do here, they are equally likely.
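The contrast between a specific sequence and the event “exactly 5 heads” can be made concrete with a couple of lines of Python (our own illustration; it uses the built-in math.comb to count the sequences with exactly 5 heads):

```python
from math import comb

p_specific   = (1/2) ** 10               # any ONE particular sequence: ≈ 0.000977
p_five_heads = comb(10, 5) * p_specific  # 252 sequences have exactly 5 heads: ≈ 0.246
print(p_specific, p_five_heads)
```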
IMPORTANT Comments:
• Only use the multiplication rule for independent events, rule six, which says P(A and B) = P(A)P(B) if you are certain the two events are independent.
• Probability rule six is ONLY true for independent events.
• When finding P(A or B) using the general addition rule: P(A) + P(B) – P(A and B),
• do NOT use the multiplication rule for independent events to calculate P(A and B), use only logic and counting.
Conditional Probability (Rule Seven)
Learning Objectives
LO 6.9: Apply logic or probability rules to calculate conditional probabilities, P(A|B), and interpret them in context.
Now we will introduce the concept of conditional probability.
The idea here is that the probabilities of certain events may be affected by whether or not other events have occurred.
The term “conditional” refers to the fact that we will have additional conditions, restrictions, or other information when we are asked to calculate this type of probability.
Let’s illustrate this idea with a simple example:
EXAMPLE:
All the students in a certain high school were surveyed, then classified according to gender and whether they had either of their ears pierced:

          Pierced   Not Pierced   Total
Male      36        144           180
Female    288       32            320
Total     324       176           500
(Note that this is a two-way table of counts that was first introduced when we talked about the relationship between two categorical variables.
It is not surprising that we are using it again in this example, since we indeed have two categorical variables here:
• Gender: M or F (in our notation, “not M”)
• Pierced: Yes or No
Suppose a student is selected at random from the school.
• Let M and not M denote the events of being male and female, respectively,
• and E and not E denote the events of having ears pierced or not, respectively.
What is the probability that the student has either of their ears pierced?
Since a student is chosen at random from the group of 500 students, out of which 324 are pierced,
• P(E) = 324/500 = 0.648
What is the probability that the student is male?
Since a student is chosen at random from the group of 500 students, out of which 180 are male,
• P(M) = 180/500 = 0.36.
What is the probability that the student is male and has ear(s) pierced?
Since a student is chosen at random from the group of 500 students out of which 36 are male and have their ear(s) pierced,
• P(M and E) = 36/500 = 0.072
Now something new:
Given that the student that was chosen is male, what is the probability that he has one or both ears pierced?
At this point, new notation is required, to express the probability of a certain event given that another event holds.
We will write
• the probability of having either ear pierced (E), given that a student is male (M)
• as P(E | M).
A word about this new notation:
• The event whose probability we seek (in this case E) is written first,
• the vertical line stands for the word “given” or “conditioned on,”
• and the event that is given (in this case M) is written after the “|” sign.
We call this probability the
• conditional probability of having either ear pierced, given that a student is male:
• it assesses the probability of having pierced ears under the condition of being male.
Now to find the probability, we observe that choosing from only the males in the school essentially alters the sample space from all students in the school to all male students in the school.
The total number of possible outcomes is no longer 500, but has changed to 180.
Out of those 180 males, 36 have ear(s) pierced, and thus:
• P(E | M) = 36/180 = 0.20.
A good visual illustration of this conditional probability is provided by the two-way table, which shows us that conditional probability in this example is the same as the conditional percents we calculated back in section 1. In this visual illustration, it is clear we are calculating a row percent.
EXAMPLE:
Consider the piercing example, with the two-way table given above.
Recall also that M represents the event of being a male (“not M” represents being a female), and E represents the event of having one or both ears pierced.
Did I Get This?: Conditional Probability
Another way to visualize conditional probability is using a Venn diagram:
In both the two-way table and the Venn diagram,
• the reduced sample space (comprised of only males) is shaded light green,
• and within this sample space, the event of interest (having ears pierced) is shaded darker green.
The two-way table illustrates the idea via counts, while the Venn diagram converts the counts to probabilities, which are presented as regions rather than cells.
We may work with counts, as presented in the two-way table, to write
• P(E | M) = 36/180.
Or we can work with probabilities, as presented in the Venn diagram, by writing
• P(E | M) = (36/500) / (180/500).
We will want, however, to write our formal expression for conditional probabilities in terms of other, ordinary, probabilities and therefore the definition of conditional probability will grow out of the Venn diagram.
Notice that
• P(E | M) = (36/500) / (180/500) = P(M and E) / P(M).
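Both versions of the calculation are easy to mirror in a short Python sketch (our own illustration), and exact fractions make it clear that they agree:

```python
from fractions import Fraction

count_M_and_E, count_M, total = 36, 180, 500

p_from_counts = Fraction(count_M_and_E, count_M)                            # 36/180
p_from_probs  = Fraction(count_M_and_E, total) / Fraction(count_M, total)   # (36/500)/(180/500)

print(p_from_counts, p_from_probs, float(p_from_counts))   # 1/5 1/5 0.2
```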
Probability Rule Seven (Conditional Probability Rule):
• The conditional probability of event B, given event A, is P(B | A) = P(A and B) / P(A)
Comments:
• Note that when we evaluate the conditional probability, we always divide by the probability of the given event. The probability of both goes in the numerator.
• The above formula holds as long as P(A) > 0, since we cannot divide by 0. In other words, we should not seek the probability of an event given that an impossible event has occurred.
Let’s see how we can use this formula in practice:
EXAMPLE:
On the “Information for the Patient” label of a certain antidepressant, it is claimed that based on some clinical trials,
• there is a 14% chance of experiencing sleeping problems known as insomnia (denote this event by I),
• there is a 26% chance of experiencing headache (denote this event by H),
• and there is a 5% chance of experiencing both side effects (I and H).
(a) Suppose that the patient experiences insomnia; what is the probability that the patient will also experience headache?
Since we know (or it is given) that the patient experienced insomnia, we are looking for P(H | I).
According to the definition of conditional probability:
• P(H | I) = P(H and I) / P(I) = 0.05/0.14 = 0.357.
(b) Suppose the drug induces headache in a patient; what is the probability that it also induces insomnia?
Here, we are given that the patient experienced headache, so we are looking for P(I | H).
Using the definition
• P(I | H) = P(I and H) / P(H) = 0.05/0.26 = 0.1923.
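Here are both conditional probabilities computed in a short Python sketch (our own illustration), using the three probabilities from the label:

```python
p_I, p_H, p_I_and_H = 0.14, 0.26, 0.05

p_H_given_I = p_I_and_H / p_I    # ≈ 0.357
p_I_given_H = p_I_and_H / p_H    # ≈ 0.192
print(round(p_H_given_I, 3), round(p_I_given_H, 3))
```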
Comment:
• Note that the answers to (a) and (b) above are different.
• In general, P(A | B) does not equal P(B | A). We’ll come back and illustrate this point later.
Now that we have introduced conditional probability, try the interactive demonstration below which uses a Venn diagram to illustrate the basic probabilities we have been discussing.
Now you can investigate the conditional probabilities as well.
Interactive Applet: Conditional Probability
Independent Events (Part 2)
Learning Objectives
LO 6.7: Determine whether two events are independent or dependent and justify your conclusion.
As we saw in the Exploratory Data Analysis section, whenever a situation involves more than one variable, it is generally of interest to determine whether or not the variables are related.
In probability, we talk about independent events, and earlier we said that two events A and B are independent if event A occurring does not affect the probability that event B will occur.
Now that we’ve introduced conditional probability, we can formalize the definition of independence of events and develop four simple ways to check whether two events are independent or not.
We will introduce these “independence checks” using examples, and then summarize.
EXAMPLE:
Consider again the two-way table for all 500 students in a particular high school, classified according to gender and whether or not they have one or both ears pierced.
Would you expect those two variables to be related?
• That is, would you expect having pierced ears to depend on whether the student is male or female?
• Or, to put it yet another way, would knowing a student’s gender affect the probability that the student’s ears are pierced?
To answer this, we may compare the overall probability of having pierced ears to the conditional probability of having pierced ears, given that a student is male.
Our intuition would tell us that the latter should be lower:
• male students tend not to have their ears pierced, whereas female students do.
Indeed, for students in general, the probability of having pierced ears (event E) is
• P(E) = 324/500 = 0.648.
But the probability of having pierced ears given that a student is male is only
• P(E | M) = 36/180 = 0.20.
As we anticipated, P(E | M) is lower than P(E).
The probability of a student having pierced ears changes (in this case, gets lower) when we know that the student is male, and therefore the events E and M are dependent.
Remember, if E and M were independent, knowing or not knowing that the student is male would not have made a difference … but it did.
The previous example illustrates that one method for determining whether two events are independent is to compare P(B | A) and P(B).
• If the two are equal (i.e., knowing or not knowing whether A has occurred has no effect on the probability of B occurring), then the two events are independent.
• Otherwise, if the probability changes depending on whether we know that A has occurred or not, then the two events are not independent.
Similarly, using the same reasoning, we can compare P(A | B) and P(A).
EXAMPLE:
Recall the side effects activity (from the bottom of the page Basic Probability Rules).
On the “Information for the Patient” label of a certain antidepressant, it is claimed that based on some clinical trials,
• there is a 14% chance of experiencing sleeping problems known as insomnia (denote this event by I),
• there is a 26% chance of experiencing headache (denote this event by H),
• and there is a 5% chance of experiencing both side effects (I and H).
Are the two side effects independent of each other?
To check whether the two side effects are independent, let’s compare P(H | I) and P(H).
In the previous part of this section, we found that
• P(H | I)= P(H and I) / P(I) = 0.05/0.14 = 0.357,
• while P(H) = 0.26.
Knowing that a patient experienced insomnia increases the likelihood that he/she will also experience headache from 0.26 to 0.357.
The conclusion, therefore, is that the two side effects are not independent; they are dependent.
Alternatively, we could have compared P(I | H) to P(I).
• P(I) = 0.14,
• and previously we found that P(I | H) = P(I and H) / P(H) = 0.05/0.26 = 0.1923.
Again, since the two are not equal, we can conclude that the two side effects I and H are dependent.
Comment:
• Recall the pierced ears example. We checked the independence of the events M (being a male) and E (having pierced ears) by comparing P(E) to P(E | M).
An alternative method of checking for dependence would be to compare P(E | M) with P(E | not M) [same as P(E | F)].
In our case, P(E | M) = 36/180 = 0.2, while P(E | not M) = 288/320 = 0.9, and since the two are very different, we can say that the events E and M are not independent.
In general, another method for checking the independence of events A and B is to compare P(B | A) and P(B | not A).
In other words, two events are independent if the probability of one event does not change whether we know that the other event has occurred or we know that the other event has not occurred.
It can be shown that P(B | A) and P(B | not A) would differ whenever P(B) and P(B | A) differ, so this is another perfectly legitimate way to establish dependence or independence.
Before we establish a general rule for independence, let’s consider an example that will illustrate another method that we can use to check whether two events are independent:
EXAMPLE:
A group of 100 college students was surveyed about their gender and whether they had decided on a major. (Of the 100 students, 60 were female, 45 had decided on a major, and 27 were both female and decided.)
Offhand, we wouldn’t necessarily have any compelling reason to expect that deciding on a major would depend on a student’s gender.
We can check for independence by comparing the overall probability of being decided to the probability of being decided given that a student is female:
• P(D) = 45/100 = 0.45 and P(D | F) = 27/60 = 0.45.
The fact that the two are equal tells us that, as we might expect, deciding on a major is independent of gender.
Now let’s approach the issue of independence in a different way: first, we may note that the overall probability of being decided is 45/100 = 0.45.
And the overall probability of being female is 60/100 = 0.60.
If being decided is independent of gender, then 45% of the 60% of the class who are female should have a decided major;
in other words, the probability of being female and decided should equal the probability of being female multiplied by the probability of being decided.
If the events F and D are independent, we should have P(F and D) = P(F) * P(D).
In fact, P(F and D) = 27/100 = 0.27, and P(F) * P(D) = 0.60 * 0.45 = 0.27, so the two are indeed equal.
This confirms our alternate verification of independence.
In general, another method for checking the independence of events A and B is to
• compare P(A and B) to P(A) * P(B).
• If the two are equal, then A and B are independent, otherwise the two are not independent.
Let’s summarize all the possible methods we’ve seen for checking the independence of events in one rule:
Tests for Independent Events: Two events A and B are independent if any one of the following hold:
• P(B | A) = P(B)
• P(A | B) = P(A)
• P(B | A) = P(B | not A)
• P(A and B) = P(A) * P(B)
Comment:
• These various equalities turn out to be equivalent, so that if one of them holds, all of them hold, and if one of them fails, all of them fail. (This is the case for the same reason that knowing one of the values P(A and B), P(A and not B), P(not A and B), or P(not A and not B), along with P(A) and P(B), allows you to determine the remaining cells of a two-way probability table.)
• Therefore, in order to check whether events A and B are independent or not, it is sufficient to check only whether one of the four equalities holds — whichever is easiest for you.
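As a quick illustration (our own sketch, not part of the course materials), the following Python code runs all four checks on the pierced-ears two-way table; since the checks are equivalent, they all fail together here.

```python
# Independence checks for E (pierced ears) and M (male) using the
# two-way table counts: 500 students, 180 males, 324 with pierced
# ears, and 36 who are both male and have pierced ears.
total = 500
n_M, n_E, n_M_and_E = 180, 324, 36

p_M = n_M / total
p_E = n_E / total
p_M_and_E = n_M_and_E / total

p_E_given_M = p_M_and_E / p_M                        # P(E | M)
p_M_given_E = p_M_and_E / p_E                        # P(M | E)
p_E_given_not_M = (n_E - n_M_and_E) / (total - n_M)  # P(E | not M)

print(p_E_given_M, p_E)               # 0.2 vs. 0.648     -> not equal
print(p_M_given_E, p_M)               # 0.111 vs. 0.36    -> not equal
print(p_E_given_M, p_E_given_not_M)   # 0.2 vs. 0.9       -> not equal
print(p_M_and_E, p_M * p_E)           # 0.072 vs. 0.23328 -> not equal
# All four checks fail, so E and M are dependent.
```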
The purpose of the next activity is to practice checking the independence of two events using the four different possible methods that we’ve provided, and see that all of them will lead us to the same conclusion, regardless of which of the four methods we use.
Learn by Doing: Tests for Independent Events
General Multiplication Rule (Rule Eight)
Learning Objectives
LO 6.10: Use the general multiplication rule to calculate P(A and B) for any events A and B.
Now that we have an understanding of conditional probabilities and can express them with concise notation, and have a more formal understanding of what it means for two events to be independent, we can finally establish the General Multiplication Rule, a formal rule for finding P(A and B) that applies to any two events, whether they are independent or dependent.
We begin with an example that contrasts P(A and B) for independent and dependent cases.
EXAMPLE:
Suppose you pick two cards at random from four cards consisting of one of each suit: club, diamond, heart, and spade, where the first card is replaced before the second card is picked.
What is the probability of picking a club and then a diamond?
Because the sampling is done with replacement, whether or not a diamond is picked on the second selection is independent of whether or not a club has been picked on the first selection.
Rule 6, the multiplication rule for independent events, tells us that:
• P(C1 and D2) = P(C1) * P(D2) = 1/4 * 1/4 = 1/16.
Here we denote the event “club picked on first selection” as C1 and the event “diamond picked on second selection” as D2.
The display below shows that 1/4 of the time we’ll pick a club first, and of these times, 1/4 will result in a diamond on the second pick: 1/4 * 1/4 = 1/16 of the selections will have a club first and then a diamond.
EXAMPLE:
Suppose you pick two cards at random from four cards consisting of one of each suit: club, diamond, heart, and spade, without replacing the first card before the second card is picked.
What is the probability of picking a club and then a diamond?
The probability in this case is not 1/4 * 1/4 = 1/16.
• Because the sampling is done without replacement, whether or not a diamond is picked on the second selection does depend on what was picked on the first selection.
• For instance, if a diamond was picked on the first selection, the probability of another diamond is zero!
• As in the example above, 1/4 of the time we’ll pick a club first.
• But since the club has been removed, 1/3 of these selections with a club first will have a diamond second.
The probability of a club and then a diamond is 1/4 * 1/3 = 1/12.
• This is the probability of getting a club first, multiplied by the probability of getting a diamond second, given that a club was picked first.
Using the notation of conditional probabilities, we can write
• P(C1 and D2) = P(C1) * P(D2 | C1) = 1/4 * 1/3 = 1/12.
For independent events A and B, we had the rule P(A and B) = P(A) * P(B).
Due to independence, to find the probability of A and B, we could multiply the probability of A by the simple probability of B, because the occurrence of A would have no effect on the probability of B occurring.
Now, for events A and B that may be dependent, to find the probability of A and B, we multiply the probability of A by the conditional probability of B, taking into account that A has occurred.
Thus, our general multiplication rule is stated as follows:
General Multiplication Rule – Probability Rule Eight:
• For any two events A and B, P(A and B) = P(A) * P(B | A)
Comments:
1. Note that although the motivation for this rule was to find P(A and B) when A and B are not independent, this rule is general in the sense that if A and B happen to be independent, then P(B | A) = P(B) is true, and we’re back to Rule 6 — the Multiplication Rule for Independent Events: P(A and B) = P(A) * P(B).
2. The General Multiplication Rule is just the definition of conditional probability in disguise. Recall the definition of conditional probability: P(B | A) = P(A and B) / P(A) Let’s isolate P(A and B) by multiplying both sides of the equation by P(A), and we get: P(A and B) = P(A) * P(B | A). That’s it … this is the General Multiplication Rule.
3. The General Multiplication Rule is useful when two events, A and B, occur in stages, first A and then B (like the selection of the two cards in the previous example). Thinking about it this way makes the General Multiplication Rule very intuitive. For both A and B to occur you first need A to occur (which happens with probability P(A)), and then you need B to occur, knowing that A has already occurred (which happens with probability P(B | A)).
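If you would like to see the rule confirmed empirically, here is a minimal simulation sketch (our own, not part of the course materials) of the without-replacement card example; it estimates P(C1 and D2) and compares it to 1/4 * 1/3 = 1/12.

```python
import random

# Estimate P(club first and diamond second) when two of the four
# suit cards are drawn without replacement.
cards = ["club", "diamond", "heart", "spade"]
trials = 100_000
hits = 0
for _ in range(trials):
    first, second = random.sample(cards, 2)  # draws without replacement
    if first == "club" and second == "diamond":
        hits += 1

print("simulated probability:", hits / trials)       # close to 0.0833
print("rule: P(C1) * P(D2 | C1) =", (1/4) * (1/3))
```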
Did I Get This?: The General Multiplication Rule
Let’s look at another, more realistic example:
EXAMPLE:
In a certain region, one in every thousand people (0.001) is infected by HIV, the virus that causes AIDS.
• Tests for presence of the virus are fairly accurate but not perfect.
• If someone actually has HIV, the probability of testing positive is 0.95.
Let H denote the event of having HIV, and T the event of testing positive.
(a) Express the information that is given in the problem in terms of the events H and T.
• “one in every thousand people (0.001) is infected with HIV” → P(H) = 0.001
• “If someone actually has HIV, the probability of testing positive is 0.95” → P(T | H) = 0.95
(b) Use the General Multiplication Rule to find the probability that someone chosen at random from the population has HIV and tests positive.
• P(H and T)= P(H) * P(T | H) = 0.001*0.95 = 0.00095.
(c) If someone has HIV, what is the probability of testing negative? Here we need to find P(not T | H).
• The Complement Rule works with conditional probabilities as long as we condition on the same event, therefore:
• P(not T | H)= 1 – P(T | H) = 1 – 0.95 = 0.05.
The purpose of the next activity is to give you guided practice in expressing information in terms of conditional probabilities, and in using the General Multiplication Rule.
Learn by Doing: Conditional Probability and the General Multiplication Rule
Let’s Summarize
This section introduced you to the fundamental concepts of independent events and conditional probability — the probability of an event given that another event has occurred.
We saw that sometimes the knowledge that another event has occurred has no impact on the probability (when the two events are independent), and sometimes it does (when the two events are not independent).
We further discussed the idea of independence and discussed different ways to check whether two events are independent or not.
Understanding the concept of conditional probability also allowed us to introduce our final probability rule, the General Multiplication Rule.
The General Multiplication Rule tells us how to find P(A and B) when A and B are not necessarily independent.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Video
Video: Probability Introduction (7:41)
Now that we understand how probability fits into the Big Picture as a key element behind statistical inference, we are ready to learn more about it. Our first goal is to introduce some fundamental terminology (the language) and notation that is used when discussing probability.
Probability is Not Always Intuitive
Although most of the probability calculations we will conduct will be rather intuitive due to their simplicity, we start with two fun examples that will illustrate the interesting and sometimes complex nature of probability.
Often, relying only on our intuition is not enough to determine probability, so we’ll need some tools to work with, which is exactly what we’ll study in this section.
Caution
For the next two examples, do not be concerned with how to solve the problem. Focus only on the fact that the answers to probability questions are not always easy to believe or to work out intuitively.
Here is the first of two motivating examples:
EXAMPLE: The "Let's Make a Deal" Paradox
“Let’s Make a Deal” was the name of a popular television game show, which first aired in the 1960s. The “Let’s Make a Deal” Paradox is named after that show. In the show, the contestant had to choose between three doors. One of the doors had a big prize behind it such as a car or a lot of cash, and the other two were empty. (Actually, for entertainment’s sake, each of the other two doors had some stupid gift behind it, like a goat or a chicken, but we’ll refer to them here as empty.)
The contestant had to choose one of the three doors, but instead of revealing the chosen door, the host revealed one of the two unchosen doors to be empty. At this point of the game, there were two unopened doors (one of which had the prize behind it) — the door that the contestant had originally chosen and the remaining unchosen door.
The contestant was given the option either to stay with the door that he or she had initially chosen, or switch to the other door.
What do you think the contestant should do, stay or switch? What do you think is the probability that you will win the big prize if you stay? What about if you switch?
In order for you to gain a feel for this game, you can play it a few times using an applet.
Interactive Applet: Let’s Make a Deal
Now, what do you think a contestant should do?
Learn By Doing: Let’s Make a Deal
The intuition of most people is that the chance of winning is equal whether we stay or switch — that there is a 50-50 chance of winning with either selection. This, however, is not the case.
Actually, there is a 67% chance — or a probability of 2/3 (2 out of 3) — of winning by switching, and only a 33% chance — or a probability of 1/3 (1 out of 3) — of winning by staying with the door that was originally chosen.
This means that a contestant is twice as likely to win if he/she switches to the unchosen door. Isn’t this a bit counterintuitive and confusing? Most people think so, when they are first faced with this problem.
We will now try to explain this paradox to you in two different ways:
Video: Let’s Make a Deal (Explanation #1) (1:10)
If you are still not convinced (or even if you are), here is a different way of explaining the paradox:
Video: Let’s Make a Deal (Explanation #2) (1:37)
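If you would rather convince yourself numerically, here is a minimal simulation sketch (our own, not part of the course materials) that plays the game many times under each strategy and reports the observed winning fractions.

```python
import random

def play(switch, trials=100_000):
    """Simulate the game and return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        prize = random.choice(doors)
        choice = random.choice(doors)
        # The host opens an empty door that the contestant did not choose.
        opened = random.choice([d for d in doors if d != choice and d != prize])
        if switch:
            # Switch to the one remaining unopened, unchosen door.
            choice = next(d for d in doors if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print("stay:  ", play(switch=False))   # close to 1/3
print("switch:", play(switch=True))    # close to 2/3
```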
If this example still did not persuade you that probability is not always intuitive, the next example should definitely do the trick.
EXAMPLE: The Birthday Problem
Suppose that you are at a party with 59 other people (for a total of 60). What are the chances (or, what is the probability) that at least 2 of the 60 guests share the same birthday?
To clarify, by “share the same birthday,” we mean that 2 people were born on the same date, not necessarily in the same year. Also, for the sake of simplicity, ignore leap years, and assume that there are 365 days in each year.
Learn By Doing: Birthday Problem
Indeed, there is a 99.4% chance that at least 2 of the 60 guests share the same birthday. In other words, it is almost certain that at least 2 of the guests share the same birthday. This is very counterintuitive.
Unlike the “Let’s Make a Deal” example, for this scenario, we don’t really have a good step-by-step explanation that will give you insight into this surprising answer.
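That said, the number itself is easy to check. Here is a minimal sketch (our own, not part of the course materials) that computes the exact answer with the complement rule: first find the probability that all 60 birthdays are different, then subtract from 1.

```python
# Probability that at least 2 of 60 guests share a birthday,
# assuming 365 equally likely birthdays and no leap years.
n_guests = 60
p_all_different = 1.0
for k in range(n_guests):
    p_all_different *= (365 - k) / 365   # k-th guest avoids the first k birthdays

p_at_least_one_match = 1 - p_all_different
print(round(p_at_least_one_match, 3))    # about 0.994
```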
From these two examples, (maybe) you have seen that your original hunches cannot always be counted upon to give you correct predictions of probabilities.
We won’t think any more about these examples, as they are from the “harder” end of the complexity spectrum, but hopefully they have motivated you to learn more about probability. You do not need to be convinced of their solutions to continue!
In general, probability is not always intuitive.
Need a Laugh?
Watch this (funny) video which has an excellent point about “how probability DOES NOT work”: clip from the Daily Show with Jon Stewart about the Large Hadron Collider (5:58).
It is possible viewers in other countries may not be able to view the clip from this source. You may or may not be able to find it online through searching. Here is the transcript summary I sometimes use in class to get the point across (it isn’t quite as funny but I think you can still figure out what is wrong here):
• John Oliver: So, roughly speaking, what are the chances that the world is going to be destroyed? (by the large hadron collider) One-in-a-million? One-in-a-billion?
• Walter: Well, the best we can say right now is about a one-in-two chance.
• John Oliver: 50-50?
• Walter: Yeah, 50-50… It’s a chance; it’s a 50-50 chance.
• John Oliver: You keep coming back to this 50-50 thing, it’s weird Walter.
• Walter: Well, if you have something that can happen and something that won’t necessarily happen, it’s going to either happen or it’s going to not happen. And, so, it’s … the best guess is 1 in 2.
• John Oliver: I’m not sure that’s how probability works, Walter.
And … John Oliver is correct! :-)
What is Probability?
Learning Objectives
LO 6.4: Relate the probability of an event to the likelihood of this event occurring.
Eventually we will need to develop a more formal approach to probability, but we will begin with an informal discussion of what probability is.
Probability is a mathematical description of randomness and uncertainty. It is a way to measure or quantify uncertainty. Another way to think about probability is that it is the official name for “chance.”
Probability is the Likelihood of Something Happening
One way to think of probability is that it is the likelihood that something will occur.
Probability is used to answer the following types of questions:
• What is the chance that it will rain tomorrow?
• What is the chance that a stock will go up in price?
• What is the chance that I will have a heart attack?
• What is the chance that I will live longer than 70 years?
• What is the likelihood that when rolling a pair of dice, I will roll doubles?
• What is the probability that I will win the lottery?
• What is the probability that I will become diabetic?
Each of these examples has some uncertainty. For some, the chances are quite good, so the probability would be quite high. For others, the chances are not very good, so the probability is quite low (especially winning the lottery).
Certainly, the chance of rain is different each day, and is higher during some seasons. Your chance of having a heart attack, or of living longer than 70 years, depends on things like your current age, your family history, and your lifestyle. However, you could use your intuition to predict some of those probabilities fairly accurately, while others you might have no instinct about at all.
Notation
We think you will agree that the word probability is a bit long to include in equations, graphs and charts, so it is customary to use some simplified notation instead of the entire word.
If we wish to indicate “the probability it will rain tomorrow,” we use the notation “P(rain tomorrow).” We can abbreviate the probability of anything. If we let A represent what we wish to find the probability of, then P(A) would represent that probability.
We can think of “A” as an “event.”
NOTATION AND MEANING:
• P(win lottery): the probability that a person who has a lottery ticket will win that lottery
• P(A): the probability that event A will occur
• P(B): the probability that event B will occur
PRINCIPLE: The “probability” of an event tells us how likely it is that the event will occur.
What values can the probability of an event take, and what does the value tell us about the likelihood of the event occurring?
Video
Video: Basic Properties of Probability (0:53)
Did I Get This?: Basic Properties of Probability
PRINCIPLE: The probability that an event will occur is between 0 and 1 or 0 ≤ P(A) ≤ 1.
Many people prefer to express probability in percentages. Since all probabilities are numbers between 0 and 1, each can be changed to an equivalent percentage. Thus, the latest principle is equivalent to saying, “The chance that an event will occur is between 0% and 100%.”
Probabilities can be determined in two fundamental ways. Keep reading to find out what they are.
Determining Probability
There are 2 fundamental ways in which we can determine probability:
• Theoretical (also known as Classical)
• Empirical (also known as Observational)
Classical methods are used for games of chance, such as flipping coins, rolling dice, spinning spinners, roulette wheels, or lotteries.
The probabilities in this case are determined by the game (or scenario) itself and are often found relatively easily using logic and/or probability rules.
Although we will not focus on this type of probability in this course, we will mention a few examples to get you thinking about probability and how it works.
EXAMPLE: Flipping a Coin
A coin has two sides; we usually call them “heads” and “tails.”
For a “fair” coin (one that is not unevenly weighted, and does not have identical images on both sides), the two sides are equally likely to face up after a “flip.”
Thus, P(heads) = P(tails) = 1/2 or 0.5.
Letting H represent “heads,” we can abbreviate the probability: P(H) = 0.5.
Classical probabilities can also be used for more realistic and useful situations.
A practical use of a coin flip would be for you and your roommate to decide randomly who will go pick up the pizza you ordered for dinner. A common expression is “Let’s flip for it.” This is because a coin can be used to make a random choice with two options. Many sporting events begin with a coin flip to determine which side of the field or court each team will play on, or which team will have control of the ball first.
EXAMPLE: Rolling a Fair Die
Each traditional (cube-shaped) die has six sides, marked in dots with the numbers 1 through 6.
On a “fair” die, these numbers are equally likely to end up face-up when the die is rolled.
Thus, P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6 or about 0.167.
Here, again, is a practical use of classical probability.
Suppose six people go out to dinner. You want to randomly decide who will pick up the check and pay for everyone. Again, the P(each person) = 1/6.
EXAMPLE: Spinners
This particular spinner has three colors, but each color is not equally likely to be the result of a spin, since the portions are not the same size.
Since the blue is half of the spinner, P(blue) = 1/2. The red and yellow make up the other half of the spinner and are the same size. Thus, P(red) = P(yellow) = 1/4.
Suppose there are 2 freshmen, 1 sophomore, and 1 junior in a study group. You want to randomly select one person. The P(F) = 2/4 = 1/2; P(S) = 1/4; and P(J) = 1/4, just like the spinner.
EXAMPLE: Selecting Students
Suppose we had three students and wished to select one of them randomly. To do this you might have each person write his/her name on a (same-sized) piece of paper, then put the three papers in a hat, and select one paper from the hat without looking.
Since we are selecting randomly, each is equally likely to be chosen. Thus, each has a probability of 1/3 of being chosen.
A slightly more complicated, but more interesting, probability question would be to propose selecting 2 of the three students and ask, “What is the probability that the two students selected will be different genders?”
We will now shift our discussion to empirical ways to determine probabilities.
A Question
A single flip of a coin has an uncertain outcome. So, every time a coin is flipped, the outcome of that flip is unknown until the flip occurs.
However, if you flip a fair coin over and over again, would you expect the proportion of heads you observe to be exactly 0.5? In other words, would you expect there to be exactly the same number of “heads” results as “tails” results?
The following activity will allow you to discover the answer.
Learn By Doing: Empirical Probability #1
The above Learn by Doing activity was our first example of the second way of determining probability: Empirical (Observational) methods. In the activity, we determined that the probability of getting the result “heads” is 0.5 by flipping a fair coin many, many times.
A Second Question
After doing this experiment, an important question naturally comes to mind. How would we know if the coin was not fair? Certainly, classical probability methods would never be able to answer this question. In addition, classical methods could never tell us the actual P(H). The only way to answer this question is to perform another experiment.
The next activity will allow you to do just that.
Learn By Doing: Empirical Probability #2
So, these types of experiments can verify classical probabilities and they can also determine when games of chance are not following fair practices. However, their real importance is to answer probability questions that arise when we are faced with a situation that does not follow any pattern and cannot be predetermined. In reality, most of the probabilities of interest to us fit the latter description.
To Summarize So Far
1. Probability is a way of quantifying uncertainty.
2. We are interested in the probability of an event — the likelihood of the event occurring.
3. The probability of an event ranges from 0 to 1. The closer the probability is to 0, the less likely the event is to occur. The closer the probability is to 1, the more likely the event is to occur.
4. There are two ways to determine probability: Theoretical (Classical) and Empirical (Observational).
5. Theoretical methods use the nature of the situation to determine probabilities.
6. Empirical methods use a series of trials that produce outcomes that cannot be predicted in advance (hence the uncertainty).
Relative Frequency
Learning Objectives
LO 6.5: Apply the relative frequency approach to estimate the probability of an event.
If we toss a coin, roll a die, or spin a spinner many times, we hardly ever achieve the exact theoretical probabilities that we know we should get, but we can get pretty close. When we run a simulation or when we use a random sample and record the results, we are using empirical probability. This is often called the Relative Frequency definition of probability.
Here is a realistic example where the relative frequency method was used to find the probabilities:
EXAMPLE: Blood Type
Researchers discovered at the beginning of the 20th century that human blood comes in various types (A, B, AB, and O), and that some types are more common than others. How could researchers determine the probability of a particular blood type, say O?
Just looking at one or two or a handful of people would not be very helpful in determining the overall chance that a randomly chosen person would have blood type O. But sampling many people at random, and finding the relative frequency of blood type O occurring, provides an adequate estimate.
For example, it is now well known that the probability of blood type O among white people in the United States is 0.45. This was found by sampling many (say, 100,000) white people in the country, finding that roughly 45,000 of them had blood type O, and then using the relative frequency: 45,000 / 100,000 = 0.45 as the estimate for the probability for the event “having blood type O.”
(Comment: Note that there are racial and ethnic differences in the probabilities of blood types. For example, the probability of blood type O among black people in the United States is 0.49, and the probability that a randomly chosen Japanese person has blood type O is only 0.3).
Let’s review the relative frequency method for finding probabilities:
To estimate the probability of event A, written P(A), we may repeat the random experiment many times and count the number of times event A occurs. Then P(A) is estimated by the ratio of the number of times A occurs to the number of repetitions, which is called the relative frequency of event A.
Did I Get This?: Relative Frequency
Learn By Doing: Relative Frequency
So, we’ve seen how the relative frequency idea works, and hopefully the activities have convinced you that the relative frequency of an event does indeed approach the theoretical probability of that event as the number of repetitions increases. This is called the Law of Large Numbers.
The Law of Large Numbers states that as the number of trials increases, the relative frequency approaches the actual probability. So, using this law, as the number of trials increases, the empirical probability gets closer and closer to the theoretical probability.
PRINCIPLE: Law of Large Numbers – The actual (or true) probability of an event (A) is estimated by the relative frequency with which the event occurs in a long series of trials.
Interactive Applet: Law of Large Numbers
Comments:
1. Note that the relative frequency approach provides only an estimate of the probability of an event. However, we can control how good this estimate is by the number of times we repeat the random experiment. The more repetitions that are performed, the closer the relative frequency gets to the true probability of the event.
2. One interesting question would be: “How many times do I need to repeat the random experiment in order for the relative frequency to be, say, within 0.001 of the actual probability of the event?” We will come back to that question in the inference section.
3. A pedagogical comment: We’ve introduced relative frequency here in a more practical approach, as a method for estimating the probability of an event. More traditionally, relative frequency is not presented as a method, but as a definition:
Relative Frequency: (Definition) The probability of an event (A) is the relative frequency with which the event occurs in a long series of trials.
4. There are many situations of interest in which physical circumstances do not make the probability obvious. In fact, most of the time it is impossible to find the theoretical probability, and we must use empirical probabilities instead.
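To see the Law of Large Numbers in action without the applet, here is a minimal simulation sketch (our own, not part of the course materials) that flips a fair coin and prints the relative frequency of heads at several sample sizes.

```python
import random

# Track the relative frequency of heads as the number of flips grows.
heads = 0
checkpoints = {10, 100, 1_000, 10_000, 100_000}
for n in range(1, 100_001):
    heads += random.random() < 0.5   # one flip of a fair coin
    if n in checkpoints:
        print(f"after {n:>6} flips, relative frequency of heads = {heads / n:.4f}")
# The printed values wander at first and then settle near 0.5.
```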
Let’s Summarize
Probability is a way of quantifying uncertainty. In this section, we defined probability as the likelihood or chance that something will occur and introduced the basic notation of probability such as P(win lottery).
You have seen that all probabilities are values between 0 and 1, where an event with no chance of occurring has a probability of 0 and an event which will always occur has a probability of 1.
We have discussed the two primary methods of calculating probabilities
• Theoretical or Classical Probability: uses the nature of the situation to determine probabilities
• Empirical or Observational Probability: uses a series of trials that produce outcomes that cannot be predicted in advance (hence the uncertainty)
In our course we will focus on Empirical probability and will often calculate probabilities from a sample using relative frequencies.
This is useful in practice since the Law of Large Numbers allows us to estimate the actual (or true) probability of an event by the relative frequency with which the event occurs in a long series of trials. We can collect this information as data and we can analyze this data using statistics.
Video
Video: Live Examples – Calcium Oxalate Crystals (40:18 Total)
Video
Video: Live Examples – Diabetes (10:33 Total)
This summary provides a quick recap of the material you’ve learned in the probability unit so far. Please note that this summary does not provide complete coverage of the material, but just lists the main points. We therefore recommend that you use this summary only as a checklist or a review before going on to the next unit, or before an exam.
General Remarks
• Probability is a discipline by itself. In the context of the big picture of this course, probability is used to quantify the imperfection associated with drawing conclusions about the entire population based only on a random sample drawn from it.
• The probability of an event can be as low as 0 (when the event is impossible) and as high as 1 (when the event is certain).
• In some cases, the only way to find the probability of an event of interest is by repeating the random experiment many times and using the relative frequency approach.
• When all the possible outcomes of a random experiment are equally likely, the probability of an event is the fraction of outcomes which satisfy it.
• There are many applications of probability in the health sciences including sensitivity, specificity, predictive value positive, predictive value negative, relative risk, odds ratios, to name a few.
Probability Principles
Probability principles help us find the probability of events of certain types:
• The Complement Rule, P(not A) = 1 – P(A), is especially useful for finding events of the type “at least one of …”
• To find the probability of events of the type “A or B” (interpreted as A occurs or B occurs or both), we use the General Addition Rule: P(A or B) = P(A) + P(B) – P(A and B). In the special case when A and B are disjoint (cannot happen together; P(A and B) = 0), the Addition Rule reduces to: P(A or B) = P(A) + P(B).
• To find the probability of events of the type “A and B” (interpreted as both A and B occur), we use the General Multiplication Rule: P(A and B) = P(A) * P(B | A). In the special case when A and B are independent (the occurrence of one event has no effect on the probability of the other occurring; P(B | A) = P(B)) the Multiplication Rule reduces to: P(A and B) = P(A) * P(B).
• Both restricted versions of the addition rule (for disjoint events) and the multiplication rule (for independent events) can be extended to more than two events.
• P(B | A), the conditional probability of event B occurring given that event A has occurred, can be viewed as a reduction of the sample space S to event A. The conditional probability, then, is the fraction of event A where B occurs as well, P(B | A) = P(A and B) / P(A).
• Be sure to follow reasonable rounding rules for probability, including enough significant digits and avoiding any rounding in intermediate steps.
(Optional) Outside Reading: Little Handbook – Probability (≈ 1000 words)
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Video
Video: Unit 3B Random Variables (10:00)
Introduction
In the remaining sections in Unit 3 we will begin to make the connection between probability and statistics so that we can apply these concepts in the final Unit on statistical inference.
These concepts bridge the gap between the mathematics of descriptive statistics and probability and true “Inferential Statistics” where we will formalize statistical hypothesis tests.
In other words, the topics in Unit 3B provide the mathematical background and concepts that will be needed for our study of inferential statistics.
In the previous sections we learned principles and tools that help us find probabilities of events in general.
Now that we’ve become proficient at doing that, we’ll talk about random variables.
Just like any other variable, random variables can take on multiple values.
What differentiates random variables from other variables is that the values for these variables are determined by a random trial, random sample, or simulation.
The probabilities for the values can be determined by theoretical or observational means.
Such probabilities play a vital role in the theory behind statistical inference, our ultimate goal in this course.
Random Variables
Learning Objectives
LO 6.11: Distinguish between discrete and continuous random variables
We first discussed variables in the Exploratory Data Analysis portion of the course. A variable is a characteristic of an individual.
We also made an important distinction between categorical variables, whose values are groups or categories (and an individual can be placed into one of them), and quantitative variables, which have numerical values for which arithmetic operations make sense.
In the previous sections, we focused mostly on events which arise when there is a categorical variable in the background: blood type, pierced ears (yes/no), gender, on time delivery (yes/no), side effect (yes/no), etc.
Now we will begin to consider quantitative variables that arise when a random experiment is performed. We will need to define this new type of variable.
A random variable assigns a unique numerical value to the outcome of a random experiment.
A random variable can be thought of as a function that associates exactly one of the possible numerical outcomes to each trial of a random experiment. However, that number can be the same for many of the trials.
Before we go any further, here are some simple examples:
EXAMPLE: Theoretical
Consider the random experiment of flipping a coin twice.
• The sample space of possible outcomes is S = { HH, HT, TH, TT }.
Now, let’s define the variable X to be the number of tails that the random experiment will produce.
• If the outcome is HH, we have no tails, so the value for X is 0.
• If the outcome is HT, we got one tail, so the value for X is 1.
• If the outcome is TH, we again got one tail, so the value for X is 1.
• Lastly, if the outcome is TT, we got two tails, so the value for X is 2.
As the definition suggests, X is a quantitative variable that takes the possible values of 0, 1, or 2.
It is random because we do not know which of the three values the variable will eventually take.
We can ask questions like:
• What is the probability that X will be 2? In other words, what is the probability of getting 2 tails?
• What is the probability that X will be at least 1? In other words, what is the probability of getting at least 1 tail?
As you can see, random variables are not really a new thing, but just a different way to look at the same problem.
Note that if we had tossed a coin three times, the possible values for the number of tails would be 0, 1, 2, or 3. In general, if we toss a coin “n” times, the possible number of tails would be 0, 1, 2, 3, … , or n.
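Because this is a theoretical situation, we can answer such questions by listing the sample space. Here is a minimal Python sketch (our own, not part of the course materials) that enumerates the four equally likely outcomes of two tosses and answers the two questions posed above.

```python
from itertools import product

# All equally likely outcomes of two tosses of a fair coin.
outcomes = list(product("HT", repeat=2))   # ('H','H'), ('H','T'), ('T','H'), ('T','T')

# X = number of tails in each outcome.
x_values = [outcome.count("T") for outcome in outcomes]

p_x_equals_2 = sum(1 for x in x_values if x == 2) / len(outcomes)
p_x_at_least_1 = sum(1 for x in x_values if x >= 1) / len(outcomes)

print("P(X = 2)  =", p_x_equals_2)     # 0.25
print("P(X >= 1) =", p_x_at_least_1)   # 0.75
```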
EXAMPLE: Observational
Consider getting data from a random sample on the number of ears in which a person wears one or more earrings.
We define the variable X to be the number of ears in which a randomly selected person wears an earring.
• If the selected person does not wear any earrings, then X = 0.
• If the selected person wears earrings in either the left or the right ear, then X = 1.
• If the selected person wears earrings in both ears, then X = 2.
As the definition suggests, X is a quantitative variable which takes the possible values of 0, 1, or 2.
We can ask questions like:
• What is the probability that a randomly selected person will have earrings in both ears?
• What is the probability that a randomly selected person will not be wearing any earrings in either ear?
NOTE… We identified the first example as theoretical and the second as observational.
Let’s discuss the distinction.
• To answer probability questions about a theoretical situation, we only need the principles of probability.
• However, if we have an observational situation, the only way to answer probability questions is to use the relative frequency we obtain from a random sample.
Here is a different type of example:
EXAMPLE: Lightweight Boxer
Assume we choose a lightweight male boxer at random and record his exact weight.
According to the boxing rules, a lightweight male boxer must weigh between 130 and 135 pounds, so the sample space here is
• S = { All the numbers in the interval 130-135 }.
Note that we can’t list all the possible outcomes here!
We’ll define X to be the weight of the boxer. Again, as the definition suggests, X is a quantitative variable whose value is the result of our random experiment.
Here X can take any value between 130 and 135.
We can ask questions like:
• What is the probability that X will be more than 132? In other words, what is the probability that the boxer will weigh more than 132 pounds?
• What is the probability that X will be between 131 and 133? In other words, what is the probability that the boxer weighs between 131 and 133 pounds?
What is the difference between the random variables in these examples? Let’s see:
• They all arise from a random experiment (tossing a coin twice, choosing a person at random, choosing a lightweight boxer at random).
• They are all quantitative (number of tails, number of ears, weight).
Where they differ is in the type of possible values they can take:
• In the first two examples, X has three distinct possible values: 0, 1, and 2. You can list them.
• In contrast, in the third example, X takes any value in the interval 130-135, and thus the possible values of X cover an infinite range of possibilities, and cannot be listed.
Types of Random Variables
A random variable like the one in the first two examples, whose possible values are a list of distinct values, is called a discrete random variable.
A random variable like the one in the third example, that can take any value in an interval, is called a continuous random variable.
The main distinction between these two types of random variables is that,
• although they can both take on a potentially infinite number of values,
• for discrete random variables there is always a GAP between any two possible values
• whereas a continuous random variable has no gaps in its range of possible values – it can take on any value in an interval; our precision in measurement is limited only by our level of technology in taking that measurement.
Just as the distinction between categorical and quantitative variables was important in Exploratory Data Analysis, the distinction between discrete and continuous random variables is important here, as each one gets a different treatment when it comes to calculating probabilities and other quantities of interest.
Before we go any further, a few observations about the nature of discrete and continuous random variables should be mentioned.
Comments:
• Sometimes, continuous random variables are “rounded” and are therefore “in a discrete disguise.” For example:
• time spent watching TV in a week, rounded to the nearest hour (or minute)
• outside temperature, to the nearest degree
• a person’s weight, to the nearest pound.
Even though they “look like” discrete variables, these are still continuous random variables, and we will in most cases treat them as such.
• On the other hand, there are some variables which are discrete in nature, but take so many distinct possible values that it will be much easier to treat them as continuous rather than discrete.
• the IQ of a randomly chosen person
• the SAT score of a randomly chosen student
• the annual salary of a randomly chosen CEO, whether rounded to the nearest dollar or the nearest cent
• Sometimes we have a discrete random variable but do not know the extent of its possible values.
• For example: How many accidents will occur in a particular intersection this month?
• We may know from previously collected data that this number is from 0-5. But, 6, 7, or more accidents could be possible.
• A good rule of thumb is that discrete random variables are things we count, while continuous random variables are things we measure.
• We counted the number of tails and the number of ears with earrings. These were discrete random variables.
• We measured the weight of the lightweight boxer. This was a continuous random variable.
Often we can have a subject matter for which we can collect data that could involve a discrete or a continuous random variable, depending on the information we wish to know.
EXAMPLE: Soft Drinks
Suppose we want to know how many days per week you drink a soft drink.
• The sample space would be S = { 0, 1, 2, 3, 4, 5, 6, 7 }.
• There are a finite number of values for this variable.
• This would be a discrete random variable.
Instead, suppose we want to know how many ounces of soft drinks you consume per week.
• Even if we round to the nearest ounce, the answer is a measurement.
• Thus, this would be a continuous random variable.
EXAMPLE: x-bar
Suppose we are interested in the weights of all males.
• We take a random sample and get the mean for that sample, namely x-bar.
• We then take another random sample (with the same sample size) and get another x-bar.
• We would expect the values of the x-bars from these two samples to be different, but pretty close in value.
• Each time we take a sample we’ll get a different x-bar.
• We will take lots of samples and thus get many x-bar values.
The value of x-bar from these repeated samples is a random variable.
Since it can take on any value within an interval of possible male weights it is a continuous random variable.
Did I Get This?: Random Variables
We devote a great deal of attention to random variables, since random variables and the probabilities that are associated with them play a vital role in the theory behind statistical inference, our ultimate goal in this course.
We’ll start with discrete random variables, including a discussion of binomial random variables and then move on to continuous random variables where we will formalize our understanding of the normal distribution.
Unit 3B: Random Variables
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Review:
Video: Binomial Random Variables (12:52)
So far, in our discussion about discrete random variables, we have been introduced to:
1. The probability distribution, which tells us which values a variable takes, and how often it takes them.
2. The mean of the random variable, which tells us the long-run average value that the random variable takes.
3. The standard deviation of the random variable, which tells us a typical (or long-run average) distance between the mean of the random variable and the values it takes.
We will now introduce a special class of discrete random variables that are very common, because as you’ll see, they will come up in many situations – binomial random variables.
Here’s how we’ll present this material.
• First, we’ll explain what kind of random experiments give rise to a binomial random variable, and how the binomial random variable is defined in those types of experiments.
• We’ll then present the probability distribution of the binomial random variable, which will be presented as a formula, and explain why the formula makes sense.
• We’ll conclude our discussion by presenting the mean and standard deviation of the binomial random variable.
As we just mentioned, we’ll start by describing what kind of random experiments give rise to a binomial random variable. We’ll call this type of random experiment a “binomial experiment.”
Binomial Experiment
Learning Objectives
LO 6.14: When appropriate, apply the binomial model to find probabilities.
Binomial experiments are random experiments that consist of a fixed number of repeated trials, like tossing a coin 10 times, randomly choosing 10 people, rolling a die 5 times, etc.
These trials, however, need to be independent in the sense that the outcome in one trial has no effect on the outcome in other trials.
In each of these repeated trials there is one outcome that is of interest to us (we call this outcome “success”), and each of the trials is identical in the sense that the probability that the trial will end in a “success” is the same in each of the trials.
So for example, if our experiment is tossing a coin 10 times, and we are interested in the outcome “heads” (our “success”), then this will be a binomial experiment, since the 10 trials are independent, and the probability of success is 1/2 in each of the 10 trials.
Let’s summarize and give more examples.
The requirements for a random experiment to be a binomial experiment are:
• a fixed number (n) of trials
• each trial must be independent of the others
• each trial has just two possible outcomes, called “success” (the outcome of interest) and “failure”
• there is a constant probability (p) of success for each trial, the complement of which is the probability (1 – p) of failure, sometimes denoted as q = (1 – p)
In binomial random experiments, the number of successes in n trials is random.
It can be as low as 0, if all the trials end up in failure, or as high as n, if all n trials end in success.
The random variable X that represents the number of successes in those n trials is called a binomial random variable, and is determined by the values of n and p. We say, “X is binomial with n = … and p = …”
EXAMPLE: Random Experiments (Binomial or Not?)
Let’s consider a few random experiments.
In each of them, we’ll decide whether the random variable is binomial. If it is, we’ll determine the values for n and p. If it isn’t, we’ll explain why not.
Example A:
A fair coin is flipped 20 times; X represents the number of heads.
X is binomial with n = 20 and p = 0.5.
Example B:
You roll a fair die 50 times; X is the number of times you get a six.
X is binomial with n = 50 and p = 1/6.
Example C:
Roll a fair die repeatedly; X is the number of rolls it takes to get a six.
X is not binomial, because the number of trials is not fixed.
Example D:
Draw 3 cards at random, one after the other, without replacement, from a set of 4 cards consisting of one club, one diamond, one heart, and one spade; X is the number of diamonds selected.
X is not binomial, because the selections are not independent. (The probability (p) of success is not constant, because it is affected by previous selections.)
Example E:
Draw 3 cards at random, one after the other, with replacement, from a set of 4 cards consisting of one club, one diamond, one heart, and one spade; X is the number of diamonds selected. Sampling with replacement ensures independence.
X is binomial with n = 3 and p = 1/4
Example F:
Approximately 1 in every 20 children has a certain disease. Let X be the number of children with the disease out of a random sample of 100 children. Although the children are sampled without replacement, it is assumed that we are sampling from such a vast population that the selections are virtually independent.
X is binomial with n = 100 and p = 1/20 = 0.05.
Example G:
The probability of having blood type B is 0.1. Choose 4 people at random; X is the number with blood type B.
X is binomial with n = 4 and p = 0.1.
Example H:
A student answers 10 quiz questions completely at random; the first five are true/false, the second five are multiple choice, with four options each. X represents the number of correct answers.
X is not binomial, because p changes from 1/2 to 1/4.
Comments:
• Example D above was not binomial because sampling without replacement resulted in dependent selections.
• In particular, the probability of the second card being a diamond is very dependent on whether or not the first card was a diamond:
• the probability is 0 if the first card was a diamond, 1/3 if the first card was not a diamond.
• In contrast, Example E was binomial because sampling with replacement resulted in independent selections:
• the probability of any of the 3 cards being a diamond is 1/4 no matter what the previous selections have been.
• On the other hand, when you take a relatively small random sample of subjects from a large population, even though the sampling is without replacement, we can assume independence because the mathematical effect of removing one individual from a very large population on the next selection is negligible.
• For example, in Example F, we sampled 100 children out of the population of all children.
• Even though we sampled the children without replacement, whether one child has the disease or not really has no effect on whether another child has the disease or not.
• The same is true for Example (G.).
Did I Get This?: Binomial or Not?
Binomial Probability Distribution – Using Probability Rules
Now that we understand what a binomial random variable is, and when it arises, it’s time to discuss its probability distribution. We’ll start with a simple example and then generalize to a formula.
EXAMPLE: Deck of Cards
Consider a regular deck of 52 cards, in which there are 13 cards of each suit: hearts, diamonds, clubs and spades. We select 3 cards at random with replacement. Let X be the number of diamond cards we got (out of the 3).
We have 3 trials here, and they are independent (since the selection is with replacement). The outcome of each trial can be either success (diamond) or failure (not diamond), and the probability of success is 1/4 in each of the trials.
X, then, is binomial with n = 3 and p = 1/4.
Let’s build the probability distribution of X as we did in the chapter on probability distributions. Recall that we begin with a table in which we:
• record all possible outcomes in 3 selections, where each selection may result in success (a diamond, D) or failure (a non-diamond, N).
• find the value of X that corresponds to each outcome.
• use simple probability principles to find the probability of each outcome.
With the help of the addition principle, we condense the information in this table to construct the actual probability distribution table:
In order to establish a general formula for the probability that a binomial random variable X takes any given value x, we will look for patterns in the above distribution. From the way we constructed this probability distribution, we know that, in general:
• P(X = x) = (number of possible outcomes with x successes out of 3) * (probability of x successes out of 3)
Let’s start with the second part, the probability that there will be x successes out of 3, where the probability of success is 1/4.
Notice that the fractions multiplied in each case are for the probability of x successes (where each success has a probability of p = 1/4) and the remaining (3 – x) failures (where each failure has probability of 1 – p = 3/4).
So, in general, the probability of x successes (and 3 – x failures) out of 3 is $\left(\dfrac{1}{4}\right)^{x}\left(\dfrac{3}{4}\right)^{3-x}$.
Let’s move on to talk about the number of possible outcomes with x successes out of three. Here it is harder to see the pattern, so we’ll give the following mathematical result.
Counting Outcomes
Consider a random experiment that consists of n trials, each one ending up in either success or failure. The number of possible outcomes in the sample space that have exactly k successes out of n is:
$\dbinom{n}{k}=\dfrac{n!}{k!(n-k)!}$
The notation on the left is often read as “n choose k.” Note that n! is read “n factorial” and is defined to be the product 1 * 2 * 3 * … * n. 0! is defined to be 1.
EXAMPLE: Ear Piercings
You choose 12 male college students at random and record whether they have any ear piercings (success) or not. There are many possible outcomes to this experiment (actually, 4,096 of them!).
In how many of the possible outcomes of this experiment are there exactly 8 successes (students who have at least one ear pierced)?
There is no way that we would start listing all these possible outcomes. The result above comes to our rescue.
The result says that in an experiment like this, where you repeat a trial n times (in our case, we repeat it n = 12 times, once for each student we choose), the number of possible outcomes with exactly 8 successes (out of 12) is:
$\dfrac{12!}{8!(12-8)!} = \dfrac{1*2*3*\cdots*12}{(1*2*3*\cdots*8)(1*2*3*4)} = 495$
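If you would like to verify counts like this computationally, here is a minimal Python sketch (Python is not required for this course); it simply evaluates the same factorial expression in two equivalent ways.

```python
from math import comb, factorial

# Number of possible outcomes with exactly 8 successes in 12 trials
print(comb(12, 8))                                      # 495
print(factorial(12) // (factorial(8) * factorial(4)))   # 495, straight from the definition
```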
Did I Get This?: Counting Outcomes
EXAMPLE: Card Revisited
Let’s go back to our example, in which we have n = 3 trials (selecting 3 cards). We saw that there were 3 possible outcomes with exactly 2 successes out of 3. The result confirms this since:
$\dfrac{3!}{2!(3-2)!} = \dfrac{1*2*3}{(1*2)(1)} = \dfrac{6}{2} = 3$
In general, then, the number of possible outcomes with x successes out of 3 is $\dfrac{3!}{x!(3-x)!}$.
Putting it all together, we get that the probability distribution of X, which is binomial with n = 3 and p = 1/4, is:
$P(X=x)=\dfrac{3 !}{x !(3-x) !}\left(\dfrac{1}{4}\right)^{x}\left(\dfrac{3}{4}\right)^{3-x} \quad x=0,1,2,3$
In general, the number of ways to get x successes (and n – x failures) in n trials is
$\dbinom{n}{x}=\dfrac{n!}{x!(n-x)!}$
Therefore, the probability of x successes (and n – x failures) in n trials, where the probability of success in each trial is p (and the probability of failure is 1 – p) is equal to the number of outcomes in which there are x successes out of n trials, times the probability of x successes, times the probability of n – x failures:
Binomial Probability Formula for P(X = x)
$P(X=x)=\dfrac{n !}{x !(n-x) !} p^{x}(1-p)^{(n-x)}$
where x may take any value 0, 1, … , n.
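For readers who want to experiment, here is a minimal Python sketch of this formula applied to the card example (n = 3, p = 1/4); the helper name binom_pmf is our own choice, not part of any standard package.

```python
from math import comb

def binom_pmf(x, n, p):
    """P(X = x) = C(n, x) * p^x * (1 - p)^(n - x) for a binomial random variable."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Card example: n = 3 selections with replacement, p = 1/4 chance of a diamond
for x in range(4):
    print(x, binom_pmf(x, 3, 0.25))
# Prints 0.421875, 0.421875, 0.140625, 0.015625 for x = 0, 1, 2, 3; they sum to 1.
```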
Let’s look at another example:
EXAMPLE: Blood Type A
The probability of having blood type A is 0.4. Choose 4 people at random and let X be the number with blood type A.
X is a binomial random variable with n = 4 and p = 0.4.
As a review, let’s first find the probability distribution of X the long way: construct an interim table of all possible outcomes in S, the corresponding values of X, and probabilities. Then construct the probability distribution table for X.
As usual, the addition rule lets us combine probabilities for each possible value of X:
Now let’s apply the formula for the probability distribution of a binomial random variable, and see that by using it, we get exactly what we got the long way.
Recall that the general formula for the probability distribution of a binomial random variable with n trials and probability of success p is:
$P(X=x)=\dfrac{n !}{x !(n-x) !} p^{x}(1-p)^{(n-x)} \text { for } \mathrm{x}=0,1,2,3, \ldots, \mathrm{n}$
In our case, X is a binomial random variable with n = 4 and p = 0.4, so its probability distribution is:
$P(X=x)=\dfrac{4 !}{x !(4-x) !}(0.4)^{x}(0.6)^{4-x} \text { for } \mathrm{x}=0,1,2,3,4$
Let’s use this formula to find P(X = 2) and see that we get exactly what we got before.
$P(X=2)=\dfrac{4 !}{2 !(4-2) !}(0.4)^{2}(0.6)^{4-2}=\dfrac{1 \cdot 2 \cdot 3 \cdot 4}{(1 \cdot 2)(1 \cdot 2)}(0.4)^{2}(0.6)^{2}=6(0.16)(0.36)=0.3456$
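If you prefer software to by-hand calculation, the same probabilities can be obtained with a statistical package; for example, in Python the scipy library provides a binomial distribution object (this is just one option among many, not a package required for this course).

```python
from scipy.stats import binom

# X ~ Binomial(n = 4, p = 0.4): number with blood type A among 4 randomly chosen people
print(binom.pmf(2, 4, 0.4))          # 0.3456, matching the by-hand calculation
print(binom.pmf(range(5), 4, 0.4))   # full distribution: 0.1296, 0.3456, 0.3456, 0.1536, 0.0256
```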
Learn by Doing: Binomial Probabilities (Using Online Calculator)
Now let’s look at some truly practical applications of binomial random variables.
EXAMPLE: Airline Flights
Past studies have shown that 90% of the booked passengers actually arrive for a flight. Suppose that a small shuttle plane has 45 seats. We will assume that passengers arrive independently of each other. (This assumption is not really accurate, since not all people travel alone, but we’ll use it for the purposes of our experiment).
Many times airlines “overbook” flights. This means that the airline sells more tickets than there are seats on the plane. This is due to the fact that sometimes passengers don’t show up, and the plane must be flown with empty seats. However, if they do overbook, they run the risk of having more passengers than seats. So, some passengers may be unhappy. They also have the extra expense of putting those passengers on another flight and possibly supplying lodging.
With these risks in mind, the airline decides to sell more than 45 tickets. If they wish to keep the probability of having more than 45 passengers show up to get on the flight to less than 0.05, how many tickets should they sell?
This is a binomial random variable that represents the number of passengers that show up for the flight. It has p = 0.90, and n to be determined.
Suppose the airline sells 50 tickets. Now we have n = 50 and p = 0.90. We want to know P(X > 45), which is 1 – P(X ≤ 45) = 1 – 0.57 or 0.43. Obviously, all the details of this calculation were not shown, since a statistical technology package was used to calculate the answer. This is certainly more than 0.05, so the airline must sell fewer seats.
If we reduce the number of tickets sold, we should be able to reduce this probability. We have calculated the probabilities in the following table:
# tickets sold P(X > 45)
50 0.43
49 0.26
48 0.13
47 0.04
46 0.008
From this table, we can see that by selling 47 tickets, the airline can reduce the probability that it will have more passengers show up than there are seats to less than 5%.
Note: For practice in finding binomial probabilities, you may wish to verify one or more of the results from the table above.
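For example, one way to reproduce the whole table with software is shown below; this is a Python sketch using the scipy library, but any statistical technology package will give the same numbers.

```python
from scipy.stats import binom

# P(X > 45) = 1 - P(X <= 45) when n tickets are sold and each passenger shows up with p = 0.90
for n in range(46, 51):
    print(n, round(1 - binom.cdf(45, n, 0.90), 3))
# Approximately 0.008, 0.044, 0.129, 0.265, 0.432 for n = 46, 47, 48, 49, 50,
# which match the table above after rounding.
```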
Learn by Doing: Binomial Application
Mean and Standard Deviation of the Binomial Random Variable
Learning Objectives
LO 6.15: Find the mean, variance, and standard deviation of a binomial random variable.
Now that we understand how to find probabilities associated with a random variable X which is binomial, using either its probability distribution formula or software, we are ready to talk about the mean and standard deviation of a binomial random variable. Let’s start with an example:
EXAMPLE: Blood Type B - Mean
Overall, the proportion of people with blood type B is 0.1. In other words, roughly 10% of the population has blood type B.
Suppose we sample 120 people at random. On average, how many would you expect to have blood type B?
The answer, 12, seems obvious; automatically, you’d multiply the number of people, 120, by the probability of blood type B, 0.1.
This suggests the general formula for finding the mean of a binomial random variable:
Claim:
If X is binomial with parameters n and p, then the mean or expected value of X is:
$\mu_X = np$
Although the formula for mean is quite intuitive, it is not at all obvious what the variance and standard deviation should be. It turns out that:
Claim:
If X is binomial with parameters n and p, then the variance and standard deviation of X are:
\begin{aligned}
\sigma_{X}^{2} &=n p(1-p) \\
\sigma_{X} &=\sqrt{n p(1-p)}
\end{aligned}
Comments:
• The binomial mean and variance are special cases of our general formulas for the mean and variance of any random variable. Clearly it is much simpler to use the “shortcut” formulas presented above than it would be to calculate the mean and variance or standard deviation from scratch.
• Remember, these “shortcut” formulas only hold in cases where you have a binomial random variable.
EXAMPLE: Blood Type B - Standard Deviation
Suppose we sample 120 people at random. The number with blood type B should be about 12, give or take how many? In other words, what is the standard deviation of the number X who have blood type B?
Since n = 120 and p = 0.1,
$\sigma_{X}^{2}=120(0.1)(1-0.1)=10.8 ; \quad \sigma_{X}=\sqrt{10.8} \approx 3.3$
In a random sample of 120 people, we should expect there to be about 12 with blood type B, give or take about 3.3.
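A quick computational check of these two numbers (again an optional Python sketch, using nothing beyond the shortcut formulas):

```python
from math import sqrt

n, p = 120, 0.1
mean = n * p                  # 12 people with blood type B expected
variance = n * p * (1 - p)    # 10.8
sd = sqrt(variance)           # about 3.3
print(mean, variance, round(sd, 2))
```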
Did I Get This?: Binomial Distribution
Before we move on to continuous random variables, let’s investigate the shape of binomial distributions.
Learn by Doing: Shapes of Binomial Distributions
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Video
Video: Continuous Random Variables (3:59)
In the previous section, we discussed discrete random variables: random variables whose possible values are a list of distinct numbers. We talked about their probability distributions, means, and standard deviations.
We are now moving on to discuss continuous random variables: random variables which can take any value in an interval, so that all of their possible values cannot be listed (such as height, weight, temperature, time, etc.)
As it turns out, most of the methods for dealing with continuous random variables require a higher mathematical level than we needed to deal with discrete random variables. For the most part, the calculation of probabilities associated with a continuous random variable, and its mean and standard deviation, requires knowledge of calculus, and is beyond the scope of this course.
What we will do in this part is discuss the idea behind the probability distribution of a continuous random variable, and show how calculations involving such variables become quite complicated very fast!
We’ll then move on to a special class of continuous random variables – normal random variables. Normal random variables are very common, and play a very important role in statistical inference.
We’ll finish this section by presenting an important connection between the binomial random variable (the special discrete random variable that we presented earlier) and the normal random variable (the special continuous random variable that we’ll present here).
The Probability Distribution of a Continuous Random Variable
Learning Objectives
LO 6.16: Explain how a density function is used to find probabilities involving continuous random variables.
In order to shift our focus from discrete to continuous random variables, let us first consider the probability histogram below for the shoe size of adult males. Let X represent these shoe sizes. Thus, X is a discrete random variable, since shoe sizes can only take whole and half number values, nothing in between.
Recall that in all of the previous probability histograms we’ve seen, the X-values were whole numbers. Thus, the width of each bar was 1. The height of each bar was the same as the probability for its corresponding X-value. Due to the principle that states the sum of probabilities of all possible outcomes in the sample space must be 1, the heights of all the rectangles in the histogram must sum to 1. This meant that the area was also 1.
This histogram uses half-sizes. We wish to keep the area = 1, but we still want the horizontal scale to represent half-sizes. Therefore, we must adjust the vertical scale of the histogram. As is, the total area of the histogram rectangles would be .50 times the sum of the probabilities, since the width of each bar is .50. Thus, the area is .50(1) = .50. If we double the vertical scale, the area will double and be 1, just like we want. This means we are changing the vertical scale from “Probability” to “Probability per half size.” The shape and the horizontal scale remain unchanged.
Now we can tell the probability of shoe size taking a value in any interval, just by finding the area of the rectangles over that interval. For instance, the area of the rectangles up to and including 9 shows the probability of having a shoe size less than or equal to 9.
Recall that for a discrete random variable like shoe size, the probability is affected by whether we want strict inequality or not. For example, the area -and corresponding probability – is reduced if we only consider shoe sizes strictly less than 9:
Did I Get This?: Probability for Discrete Random Variables
Transition to Continuous Random Variables
Now we are going to be making the transition from discrete to continuous random variables. Recall that continuous random variables represent measurements and can take on any value within an interval.
For our shoe size example, this would mean measuring shoe sizes in smaller units, such as tenths, or hundredths. As the number of intervals increases, the width of the bars becomes narrower and narrower, and the graph approaches a smooth curve.
To illustrate this, the following graphs represent two steps in this process of narrowing the widths of the intervals. Specifically, the interval widths are 0.25 and 0.10.
We’ll use these smooth curves to represent the probability distributions of continuous random variables. This idea will be discussed in more detail on the next page.
Now consider another random variable X = foot length of adult males. Unlike shoe size, this variable is not limited to distinct, separate values, because foot lengths can take any value over a continuous range of possibilities, so we cannot present this variable with a probability histogram or a table. The probability distribution of foot length (or any other continuous random variable) can be represented by a smooth curve called a probability density curve.
Like the modified probability histogram above, the total area under the density curve equals 1, and the curve represents probabilities by area.
The probability that X gets values in any interval is represented by the area above this interval and below the density curve. In our foot length example, if our interval of interest is between 10 and 12 (marked in red below), and we would like to know P(10 < X < 12), the probability that a randomly chosen male has a foot length anywhere between 10 and 12 inches, we’ll have to find the area above our interval of interest (10,12) and below our density curve, shaded in blue:
If, for example, we are interested in P(X < 9), the probability that a randomly chosen male has a foot length of less than 9 inches, we’ll have to find the area shaded in blue below:
Comments:
• We have seen that for a discrete random variable like shoe size, whether we have a strict inequality or not does matter when solving for probabilities. In contrast, for a continuous random variable like foot length, the probability of a foot length of less than or equal to 9 will be the same as the probability of a foot length of strictly less than 9. In other words, P(X < 9) = P(X ≤ 9). Visually, in terms of our density curve, the area under the curve up to and including a certain point is the same as the area up to and excluding the point, because there is no area over a single point. Conceptually, because a continuous random variable has infinitely many possible values, technically the probability of any single value occurring is zero!
• It should be clear now why the total area under any probability density curve must be 1. The total area under the curve represents P(X gets a value in the interval of its possible values). Clearly, according to the rules of probability this must be 1, or always true.
• Density curves, like probability histograms, may have any shape imaginable as long as the total area underneath the curve is 1.
Let’s Summarize
The probability distribution of a continuous random variable is represented by a probability density curve.
The probability that X gets a value in any interval of interest is the area above this interval and below the density curve.
Now that we see how probabilities are found for continuous random variables, we understand why it is more complicated than finding probabilities in the discrete case. As anyone who has studied calculus can attest, finding the area under a curve can be difficult. The general approach is to use integrals. For those of you who did study calculus, the following should be familiar:
$P(a \leq X \leq b)=\int_{a}^{b} f(x) \, dx$
where f(x) represents the density curve.
For those who did not study calculus, don’t worry about it. This kind of calculation is definitely beyond the scope of this course.
In this course, we will encounter several important density curves—those for normal random variables, t random variables, chi-square random variables, and F random variables. Normal and t distributions are bell-shaped (single-peaked and symmetric) like the density curve in the foot length example; chi-square and F distributions are single-peaked and skewed right, like in the figure above.
Rather than get bogged down in the calculus of solving for areas under curves, we will find probabilities for the above-mentioned random variables by consulting tables. Also, statistical software automatically provides such probabilities in the appropriate context.
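To give a concrete (and entirely optional) illustration of what the software is doing, the Python sketch below treats foot length as a normal random variable; the particular mean of 11 inches and standard deviation of 1.5 inches are assumed values used only for this example.

```python
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 11, 1.5   # assumed mean and standard deviation of foot length, in inches

# P(10 < X < 12) is the area under the density curve between 10 and 12
area, _ = quad(lambda x: norm.pdf(x, mu, sigma), 10, 12)
print(round(area, 3))                                                 # about 0.495
print(round(norm.cdf(12, mu, sigma) - norm.cdf(10, mu, sigma), 3))    # same answer, no calculus needed
```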
In the next section, we will study in more depth one of those random variables, the normal random variable, and see how we can find probabilities associated with it using software and tables.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Review:
Video
Video: Discrete Random Variables (22:40 Total)
We begin with discrete random variables: variables whose possible values are a list of distinct values. In order to decide on some notation, let’s look at the coin toss example again:
A fair coin is tossed twice.
• Let the random variable X be the number of tails we get in this random experiment.
• In this case, the possible values that X can assume are
• 0 (if we get HH),
• 1 (if get HT or TH),
• and 2 (if we get TT).
Notation
If we want to find the probability of the event “getting 1 tail,” we’ll write: P(X = 1)
If we want to find the probability of the event “getting 0 tails,” we’ll write: P(X = 0)
In general, we’ll write: P(X = x) or P(X = k) to denote the probability that the discrete random variable X gets the value x or k respectively.
Many students prefer the second notation as keeping track of the difference between X and x can cause confusion.
• Here the X represents the random variable and x or k denote the value of interest in the current problem (0, 1, etc. ).
• Note that for the random variables we’ll use a capital letter, and for the value we’ll use a lowercase letter.
Section Plan
The way this section on discrete random variables is organized is very similar to the way we organized our discussion about one quantitative variable in the Exploratory Data Analysis unit.
It will be separated into four sections.
1. We’ll first discuss the probability distribution of a discrete random variable, ways to display it, and how to use it in order to find probabilities of interest.
2. We’ll then move on to talk about the mean and standard deviation of a discrete random variable, which are measures of the center and spread of its distribution.
3. We’ll conclude this part by discussing a special and very common class of discrete random variable: the binomial random variable.
Probability Distributions
Learning Objectives
LO 6.12: Use the probability distribution for a discrete random variable to find the probability of events of interest.
When we learned how to find probabilities by applying the basic principles, we generally focused on just one particular outcome or event, like the probability of getting exactly one tail when a coin is tossed twice, or the probability of getting a 5 when a die is rolled.
Now that we have mastered the solution of individual probability problems, we’ll proceed to look at the big picture by considering all the possible values of a discrete random variable, along with their associated probabilities.
This list of possible values and probabilities is called the probability distribution of the random variable.
Comments:
• In the Exploratory Data Analysis unit of this course, we often looked at the distribution of sample values in a quantitative data set. We would display the values with a histogram, and summarize them by reporting their mean.
• In this section, when we look at the probability distribution of a random variable, we consider all its possible values and their overall probabilities of occurrence.
• Thus, we have in mind an entire population of values for a variable. When we display them with a histogram or summarize them with a mean, these are representing a population of values, not a sample.
• The distinction between sample and population is an essential concept in statistics, because an ultimate goal is to draw conclusions about unknown values for a population, based on what is observed in the sample.
In the examples which follow we will sometimes illustrate how the probability distribution is created.
We do this to demonstrate the usefulness of the probability rules we previously discussed and to illustrate clearly how probability distributions can be created.
As we are more focused on data driven methods, you will often be given a probability distribution based upon data as opposed to constructing the theoretical probability distribution based upon flipping coins or similar classical probability experiments.
Recall our first example, when we introduced the idea of a random variable. In this example we tossed a coin twice.
EXAMPLE: Flipping a Coin Twice
What is the probability distribution of X, where the random variable X is the number of tails appearing in two tosses of a fair coin?
We first note that since the coin is fair, each of the four outcomes HH, HT, TH, TT in the sample space S is equally likely, and so each has a probability of 1/4.
(Alternatively, the multiplication principle can be applied to find the probability of each outcome to be 1/2 * 1/2 = 1/4.)
X takes the value 0 only for the outcome HH, so the probability that X = 0 is 1/4.
X takes the value 1 for outcomes HT or TH. By the addition principle, the probability that X = 1 is 1/4 + 1/4 = 1/2.
Finally, X takes the value 2 only for the outcome TT, so the probability that X = 2 is 1/4.
The probability distribution of the random variable X is easily summarized in a table:
As mentioned before, we write “P(X = x)” to denote “the probability that the random variable X takes the value x.”
The way to interpret this table is:
• X takes the values 0, 1, 2 and P(X = 0) = 1/4, P(X = 1) = 1/2, P(X = 2) = 1/4.
Note that events of the type (X = x) are subject to the principles of probability established earlier, and will provide us with a way of systematically exploring the behavior of random variables.
In particular, the first two principles in the context of probability distributions of random variables will now be stated.
Any probability distribution of a discrete random variable must satisfy:
1. $0 \leq P(X=x) \leq 1$
2. $\sum_{x} P(X=x)=1$
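As a small illustration of both conditions, here is an optional Python sketch that rebuilds the two-coin-toss distribution by brute force and confirms that the probabilities behave as required.

```python
from itertools import product
from collections import Counter
from fractions import Fraction

outcomes = list(product("HT", repeat=2))               # HH, HT, TH, TT -- all equally likely
counts = Counter(seq.count("T") for seq in outcomes)   # value of X = number of tails

dist = {x: Fraction(c, len(outcomes)) for x, c in counts.items()}
print(dist)                                      # probabilities 1/4, 1/2, 1/4 for x = 0, 1, 2
print(all(0 <= p <= 1 for p in dist.values()))   # condition 1 holds: True
print(sum(dist.values()) == 1)                   # condition 2 holds: True
```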
The probability distribution for two flips of a coin was simple enough to construct at once.
For more complicated random experiments, it is common to first construct a table of all the outcomes and their probabilities, then use the addition principle to condense that information into the actual probability distribution table.
EXAMPLE: Flipping a Coin Three Times
A coin is tossed three times. Let the random variable X be the number of tails.
Find the probability distribution of X.
We’ll follow the same reasoning we used in the previous example:
First, we specify the 8 possible outcomes in S, along with the number and the probability of that outcome.
• Because they are all equally likely, each has probability 1/8.
• Alternatively, by the multiplication principle, each particular sequence of three coin faces has probability 1/2 * 1/2 * 1/2 = 1/8.
Then we figure out what the value of X is (number of tails) for each possible outcome.
Next, we use the addition principle to assert that
• P(X = 1) = P(HHT or HTH or THH) = P(HHT) + P(HTH) + P(THH) = 1/8 + 1/8 + 1/8 = 3/8.
• Similarly, P(X = 2) = P(HTT or THT or TTH) = 3/8.
The resulting probability distribution is:
In the previous two examples, we needed to specify the probability distributions ourselves, based on the physical circumstances of the situation.
In some situations, the probability distribution may be specified with a formula.
Such a formula must be consistent with the constraints imposed by the laws of probability, so that the probability of each outcome must be between 0 and 1, and the probabilities of all possible outcomes together must sum to 1.
We will see this with the binomial distribution.
Probability Histograms
We learned to display the distribution of sample values for a quantitative variable with a histogram in which the horizontal axis represented the range of values in the sample.
• The vertical axis represented the frequency or relative frequency (sometimes given as a percentage) of sample values occurring in that interval.
• The width of each rectangle in the histogram was an interval, or part of the possible values for the quantitative variable.
• The height of each rectangle was the frequency (or relative frequency) for that interval.
Similarly, we can display the probability distribution of a random variable with a probability histogram.
• The horizontal axis represents the range of all possible values of the random variable
• The vertical axis represents the probabilities of those values.
Here is an example of a probability histogram.
(Such probabilities are not always increasing; they just happen to be so in this example).
Area of a Probability Histogram
Notice that each rectangle in the histogram has a width of 1 unit. The height of each rectangle is the probability that it will occur.
Thus, the area of each rectangle is base times height, which for these rectangles is 1 times its probability for each value of X.
This means that for probability distributions of discrete random variables, the sum of the areas of all of the rectangles is the same as the sum of all of the probabilities. The total area = 1.
For probability distributions of discrete random variables, this is equivalent to the property that the sum of all of the probabilities must equal 1.
Learn by Doing: Probability Distributions
Finding Probabilities
We’ve seen how probability distributions are created. Now it’s time to use them to find probabilities.
EXAMPLE: Changing Majors
A random sample of graduating seniors was surveyed just before graduation. One question that was asked is:
How many times did you change majors?
The results are displayed in a probability distribution.
Using this probability distribution, we can answer probability questions such as:
What is the probability that a randomly selected senior has changed majors more than once?
This can be written as P(X > 1).
We can find this probability by adding the appropriate individual probabilities in the probability distribution.
• P(X > 1)
• = P(X = 2) + P(X = 3) + P(X = 4) + P(X = 5)
• = 0.23 + 0.09 + 0.02 + 0.01
• = 0.35
As you just saw in this example, we need to pay attention to the wording of the probability question.
The key words that told us which values to use for X are more than.
The following will clarify and reinforce the key words and their meanings.
Key Words
Let’s begin with some everyday situations using at least and at most.
Suppose someone said to you, “I need you to write at least 10 pages for a term paper.”
• What does this mean?
• It means that 10 pages is the smallest amount you are going to write.
• In other words, you will write 10 or more pages for the term paper.
• This would be the same as saying, “not less than 10 pages.”
• So, for example, writing 9 pages would be unacceptable.
On the other hand, suppose you are considering the number of children you will have. You want at most 3 children.
• This means that 3 children is the most that you wish to have.
• In other words, you will have 3 or fewer children.
• This would be the same as saying, “not more than 3 children.”
• So, for example, you would not want to have 4 children.
The following table gives a list of some key words to know.
Suppose a random variable X had possible values of 0 through 5.
Key Words Meaning Symbols Values for X
more than 2 strictly larger than 2 X > 2 3, 4, 5
no more than 2 2 or fewer X ≤ 2 0, 1, 2
fewer than 2 strictly smaller than 2 X < 2 0, 1
no less than 2 2 or more X ≥ 2 2, 3, 4, 5
at least 2 2 or more X ≥ 2 2, 3, 4, 5
at most 2 2 or fewer X ≤ 2 0, 1, 2
exactly 2 2, no more or no less, only 2 X = 2 2
Before we move on to the next section on the means and variances of a probability distribution, let’s revisit the changing majors example:
EXAMPLE: Changing Major
Question: Based upon this distribution, do you think it would be unusual to change majors 2 or more times?
Answer:
• P(X ≥ 2) = 0.35.
• So, 35% of the time a student changes majors 2 or more times.
• This means that it is not unusual to do so.
Question: Do you think it would be unusual to change majors 4 or more times?
Answer:
• P(X ≥ 4) = 0.03.
• So, 3% of the time a student changes majors 4 or more times.
• This means that it is fairly unusual to do so.
We can even answer more difficult questions using our probability rules!
Question: What is the probability of changing majors only once given at least one change in major.
Answer:
• P(X = 1 | X ≥ 1) = P(X = 1 AND X ≥ 1)/P(X ≥ 1) [using Probability Rule 7]
• = P(X = 1)/P(X ≥ 1) [since the only outcome that satisfies both X = 1 and X ≥ 1 is X = 1]
• = (0.37)/(0.37+0.23+0.09+0.02+0.01) = 0.37/0.72 = 0.5139.
• So, among students who change majors, 51% of these students will only change majors one time.
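If you would like to check these two answers with a few lines of code, here is an optional Python sketch. Note that P(X = 0) = 0.28 is not listed above, but it is implied, since all of the probabilities must add to 1.

```python
# X = number of major changes; probabilities taken from the example (P(X = 0) = 0.28 is implied)
dist = {0: 0.28, 1: 0.37, 2: 0.23, 3: 0.09, 4: 0.02, 5: 0.01}

p_more_than_one = sum(p for x, p in dist.items() if x > 1)
print(round(p_more_than_one, 2))              # 0.35 = P(X > 1)

p_at_least_one = sum(p for x, p in dist.items() if x >= 1)
print(round(dist[1] / p_at_least_one, 4))     # about 0.5139 = P(X = 1 | X >= 1)
```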
After we learn about means and standard deviations, we will have another way to answer these types of questions.
Mean of a Discrete Random Variable
Learning Objectives
LO 6.13: Find the mean, variance, and standard deviation of a discrete random variable.
In the Exploratory Data Analysis (EDA) section, we displayed the distribution of one quantitative variable with a histogram, and supplemented it with numerical measures of center and spread.
We are doing the same thing here.
• We display the probability distribution of a discrete random variable with a table, formula or histogram.
• And supplement it with numerical measures of the center and spread of the probability distribution.
These measures are the mean and standard deviation of the random variable.
This section will be devoted to introducing these measures. As before, we’ll start with the numerical measure of center, the mean. Let’s begin by revisiting an example we saw in EDA.
EXAMPLE: World Cup Soccer
Recall that we used the following data from 3 World Cup tournaments (a total of 192 games) to introduce the idea of a weighted average.
We’ve added a third column to our table that gives us relative frequencies.
total # goals/game frequency relative frequency
0 17 17 / 192 = 0.089
1 45 45 / 192 = 0.234
2 51 51 / 192 = 0.266
3 37 37 / 192 = 0.193
4 25 25 / 192 = 0.130
5 11 11 / 192 = 0.057
6 3 3 / 192 = 0.016
7 2 2 / 192 = 0.010
8 1 1 / 192 = 0.005
The mean for this data is:
$\dfrac{0(17)+1(45)+2(51)+3(37)+4(25)+5(11)+6(3)+7(2)+8(1)}{192}$
Distributing the division by 192 we get:
$0\left(\dfrac{17}{192}\right)+1\left(\dfrac{45}{192}\right)+2\left(\dfrac{51}{192}\right)+\cdots+8\left(\dfrac{1}{192}\right)$
Notice that the mean is each number of goals per game multiplied by its relative frequency.
Since we usually write the relative frequencies as decimals, we can see that:
Mean number of goals per game =
• 0(0.089) + 1(0.234) + 2(0.266) + 3(0.193) + 4(0.130) + 5(0.057) + 6(0.016) + 7(0.010) + 8(0.005)
= 2.36, rounded to two decimal places.
In Exploratory Data Analysis, we used the mean of a sample of quantitative values—their arithmetic average—to tell the center of their distribution. We also saw how a weighted mean was used when we had a frequency table. These frequencies can be changed to relative frequencies.
So we are essentially using the relative frequency approach to find probabilities. We can use this to find the mean, or center, of a probability distribution for a discrete random variable, which will be a weighted average of its values; the more probable a value is the more weight it gets.
As always, it is important to distinguish between a concrete sample of observed values for a variable versus an abstract population of all values taken by a random variable in the long run.
Whereas we denoted the mean of a sample as x-bar, we now denote the mean of a random variable using the Greek letter mu with a subscript for the random variable we are using.
Let’s see how this is done by looking at a specific example.
EXAMPLE: Xavier's Production Line
Xavier’s production line produces a variable number of defective parts in an hour, with probabilities shown in this table:
How many defective parts are typically produced in an hour on Xavier’s production line? If we sum up the possible values of X, each weighted with its probability, we have
$\mu_{X}=0(0.15)+1(0.30)+2(0.25)+3(0.20)+4(0.10)=1.8$
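The same weighted average is a one-line computation in software; here is an optional Python sketch for Xavier's distribution.

```python
values = [0, 1, 2, 3, 4]                  # possible numbers of defective parts in an hour
probs  = [0.15, 0.30, 0.25, 0.20, 0.10]   # their probabilities

mean = sum(x * p for x, p in zip(values, probs))
print(round(mean, 2))                     # 1.8 defective parts per hour, on average
```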
Here is the general definition of the mean of a discrete random variable:
In general, for any discrete random variable X with probability distribution
The mean of X is defined to be
$\mu_{X}=x_{1} p_{1}+x_{2} p_{2}+\ldots+x_{n} p_{n}=\sum_{i=1}^{n} x_{i} p_{i}$
• In general, the mean of a random variable tells us its “long-run” average value.
• It is sometimes referred to as the expected value of the random variable.
Although “expected value” is a common, and even preferred term in the field of statistics, this expression may be somewhat misleading, because in many cases it is impossible for a random variable to actually equal its expected value.
For example, the mean number of goals for a World Cup soccer game is 2.36. But we can never expect any single game to result in 2.36 goals, since it is not possible to score a fraction of a goal. Rather, 2.36 is the long-run average of all World Cup soccer games.
In the case of Xavier’s production line, the mean number of defective parts produced in an hour is 1.8. But the actual number of defective parts produced in any given hour can never equal 1.8, since it must take whole number values.
To get a better feel for the mean of a random variable, let’s extend the defective parts example:
EXAMPLE: Xavier's and Yves' Production Lines
Recall the probability distribution of the random variable X, representing the number of defective parts in an hour produced by Xavier’s production line.
The number of defective parts produced each hour by Yves’ production line is a random variable Y with the following probability distribution:
Look at both probability distributions. Both X and Y take the same possible values (0, 1, 2, 3, 4).
However, they are very different in the way the probability is distributed among these values.
Learn by Doing: Comparing Probability Distributions #1
Did I Get This?: Mean of Discrete Random Variable
Variance and Standard Deviation of a Discrete Random Variable
Learning Objectives
LO 6.13: Find the mean, variance, and standard deviation of a discrete random variable.
In Exploratory Data Analysis, we used the mean of a sample of quantitative values (their arithmetic average, x-bar) to tell the center of their distribution, and the standard deviation (s) to tell the typical distance of sample values from their mean.
We described the center of a probability distribution for a random variable by reporting its mean which we denoted by the Greek letter mu.
Now we would like to establish an accompanying measure of spread.
Our measure of spread will still report the typical distance of values from their means, but in order to distinguish the spread of a population of all of a random variable’s values from the spread (s) of sample values, we will denote the standard deviation of the random variable X with the Greek lower case “sigma,” and use a subscript to remind us what is the variable of interest (there may be more than one in later problems):
We will also focus more frequently than before on the squared standard deviation, called the variance, because some important rules we need to invoke are in terms of variance rather than standard deviation.
EXAMPLE: Xavier's Production Line
Recall that the number of defective parts produced each hour by Xavier’s production line is a random variable X with the following probability distribution:
We found the mean number of defective parts produced per hour to be 1.8.
Obviously, there is variation about this mean: some hours as few as 0 defective parts are produced, whereas in other hours as many as 4 are produced.
Typically, how far does the number of defective parts fall from the mean of 1.8?
As we did for the spread of sample values, we measure the spread of a random variable by calculating the square root of the average squared deviation from the mean.
Now “average” is a weighted average, where more probable values of the random variable are accordingly given more weight.
Let’s begin with the variance, or average squared deviation from the mean, and then take its square root to find the standard deviation:
\begin{aligned}
\text { Variance }&=\sigma_{X}^{2}=(0-1.8)^{2}(0.15)+(1-1.8)^{2}(0.30)+(2-1.8)^{2}(0.25) \\
&+(3-1.8)^{2}(0.20)+(4-1.8)^{2}(0.1) \\
&= 1.46
\end{aligned}
standard deviation $=\sigma_{X}=\sqrt{1.46}=1.21$
How do we interpret the standard deviation of X?
• Xavier’s production line produces an average of 1.80 defective parts per hour.
• The number of defective parts varies from hour to hour; typically (or, on average), it is about 1.21 away from the mean 1.80.
Here is the formal definition:
In general, for any discrete random variable X with probability distribution
The variance of X is defined to be
\begin{aligned}
\sigma_{X}^{2} &=\left(x_{1}-\mu_{X}\right)^{2} p_{1}+\left(x_{2}-\mu_{X}\right)^{2} p_{2}+\ldots+\left(x_{n}-\mu_{X}\right)^{2} p_{n} \\
&=\sum_{i=1}^{n}\left(x_{i}-\mu_{X}\right)^{2} p_{i}
\end{aligned}
There is also a “short-cut” formula which is faster for by-hand calculation. In the formula below we have dropped the subscript for the variable in the notation. In this short-cut, we simply need to
• square each X,
• multiply by the probability of that X,
• then sum those values.
• From that result we subtract the square of the mean to find the variance.
$\operatorname{Var}(X)=\sigma^{2}=\sum_{i=1}^{n}\left[x_{i}^{2} P\left(X=x_{i}\right)\right]-\mu^{2}$
The standard deviation is the square root of the variance
$\sigma_{X}=\sqrt{\sigma_{X}^{2}}$
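Here is a short, optional Python sketch that computes the variance of Xavier's distribution both ways, confirming that the definitional formula and the shortcut formula agree.

```python
values = [0, 1, 2, 3, 4]
probs  = [0.15, 0.30, 0.25, 0.20, 0.10]

mean = sum(x * p for x, p in zip(values, probs))                      # 1.8

# Definitional formula: weighted average of squared deviations from the mean
var_def = sum((x - mean) ** 2 * p for x, p in zip(values, probs))

# Shortcut formula: sum of x^2 * P(X = x), minus the squared mean
var_short = sum(x ** 2 * p for x, p in zip(values, probs)) - mean ** 2

print(round(var_def, 2), round(var_short, 2))   # both 1.46
print(round(var_def ** 0.5, 2))                 # standard deviation, about 1.21
```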
Did I Get This?: Standard Deviation of a Discrete Random Variable
The purpose of the next activity is to give you better intuition about the mean and standard deviation of a random variable.
Learn by Doing: Comparing Probability Distributions #2
EXAMPLE: Xavier's and Yves' Production Lines
Recall the probability distribution of the random variable X, representing the number of defective parts per hour produced by Xavier’s production line, and the probability distribution of the random variable Y, representing the number of defective parts per hour produced by Yves’ production line:
Look carefully at both probability distributions. Both X and Y take the same possible values (0, 1, 2, 3, 4).
However, they are very different in the way the probability is distributed among these values. We saw before that this makes a difference in means:
$\mu_X = 1.8$
$\mu_Y = 2.7$
We now want to get a sense about how the different probability distributions impact their standard deviations.
Recall that the standard deviation of a random variable can be interpreted as a typical (or the long-run average) distance between the value of X and its mean.
Learn by Doing: Comparing Probability Distributions #3
So, 75% of the time Y will assume a value (3) that is very close to its mean (2.7), while X will assume a value (2) that is close to its mean (1.8) much less often—only 25% of the time.
The long-run average, then, of the distance between the values of Y and their mean will be much smaller than the long-run average of the distance between the values of X and their mean.
Therefore
$\sigma_Y < \sigma_X = 1.21$
Actually we have
$\sigma_Y = 0.85$
So we can draw the following conclusion:
Yves’ production line produces an average of 2.70 defective parts per hour.
The number of defective parts varies from hour to hour; typically (or, on average), it is about 0.85 away from 2.70.
Here are the histograms for the production lines:
When we compare distributions, the distribution in which it is more likely to find values that are further from the mean will have a larger standard deviation.
Likewise, the distribution in which it is less likely to find values that are further from the mean will have the smaller standard deviation.
Did I Get This?: Standard Deviation of a Discrete Random Variable #2
Comment:
As we have stated before, using the mean and standard deviation gives us another way to assess which values of a random variable are unusual.
For reasonably symmetric distributions, any values of a random variable that fall within 2 or 3 standard deviations of the mean would be considered ordinary (not unusual).
For any distribution, it is unusual for values to fall outside of 3 or 4 standard deviations – depending on your definition of “unusual.”
EXAMPLE: Xavier's Production Line-Unusual or Not?
Looking once again at the probability distribution for Xavier’s production line:
Would it be considered unusual to have 4 defective parts per hour?
We know that the mean is 1.8 and the standard deviation is 1.21.
Ordinary values are within 2 (or 3) standard deviations of the mean.
• 1.8 – 2(1.21) = -0.62 and
• 1.8 + 2(1.21) = 4.22.
This gives us an interval from -0.62 to 4.22.
Since we cannot have a negative number of defective parts, the interval is essentially from 0 to 4.22.
Because 4 is within this interval, it would be considered ordinary. Therefore, it is not unusual.
Would it be considered unusual to have no defective parts?
Zero is within 2 standard deviations of the mean, so it would not be considered unusual to have no defective parts.
The following activity will reinforce this idea.
Learn by Doing: Unusual or Not?
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.2: Apply the standard deviation rule to the special case of distributions having the “normal” shape.
Video
Video: Normal Random Variables (2:08)
In the Exploratory Data Analysis unit of this course, we encountered data sets, such as lengths of human pregnancies, whose distributions naturally followed a symmetric unimodal bell shape, bulging in the middle and tapering off at the ends.
Many variables, such as pregnancy lengths, shoe sizes, foot lengths, and other human physical characteristics exhibit these properties: symmetry indicates that the variable is just as likely to take a value a certain distance below its mean as it is to take a value that same distance above its mean; the bell-shape indicates that values closer to the mean are more likely, and it becomes increasingly unlikely to take values far from the mean in either direction.
The particular shape exhibited by these variables has been studied since the early part of the nineteenth century, when they were first called “normal” as a way of suggesting their depiction of a common, natural pattern.
Observations of Normal Distributions
There are many normal distributions. Even though all of them have the bell-shape, they vary in their center and spread.
More specifically, the center of the distribution is determined by its mean (mu, μ) and the spread is determined by its standard deviation (sigma, σ).
Some observations we can make as we look at this graph are:
• The black and the red normal curves have means or centers at μ = mu = 10. However, the red curve is more spread out and thus has a larger standard deviation. As you look at these two normal curves, notice that as the red graph is squished down, the spread gets larger, thus allowing the area under the curve to remain the same.
• The black and the green normal curves have the same standard deviation or spread (the range of the black curve is 6.5-13.5, and the green curve’s range is 10.5-17.5).
Even more important than the fact that many variables themselves follow the normal curve is the role played by the normal curve in sampling theory, as we’ll see in the next section in our unit on probability.
Understanding the normal distribution is an important step in the direction of our overall goal, which is to relate sample means or proportions to population means or proportions. The goal of this section is to better understand normal random variables and their distributions.
The Standard Deviation Rule for Normal Random Variables
We began to get a feel for normal distributions in the Exploratory Data Analysis (EDA) section, when we introduced the Standard Deviation Rule (or the 68-95-99.7 rule) for how values in a normally-shaped sample data set behave relative to their sample mean (x-bar) and sample standard deviation (s).
This is the same rule that dictates how the distribution of a normal random variable behaves relative to its mean (mu, μ) and standard deviation (sigma, σ). Now we use probability language and notation to describe the random variable’s behavior.
For example, in the EDA section, we would have said “68% of pregnancies in our data set fall within 1 standard deviation (s) of their mean (x-bar).” The analogous statement now would be “If X, the length of a randomly chosen pregnancy, is normal with mean (mu, μ) and standard deviation (sigma, σ), then
$0.68 = P(\mu - \sigma < X < \mu + \sigma)$
In general, if X is a normal random variable, then the probability is
• 68% that X falls within 1 standard deviation (sigma, σ) of the mean (mu, μ)
• 95% that X falls within 2 standard deviations (sigma, σ) of the mean (mu, μ)
• 99.7% that X falls within 3 standard deviation (sigma, σ) of the mean (mu, μ).
Using probability notation, we may write
\begin{aligned}
&0.68=P(\mu-\sigma<X<\mu+\sigma) \\
&0.95=P(\mu-2 \sigma<X<\mu+2 \sigma) \\
&0.997=P(\mu-3 \sigma<X<\mu+3 \sigma)
\end{aligned}
Comment
• Notice that the information from the rule can be interpreted from the perspective of the tails of the normal curve:
• Since 0.68 is the probability of being within 1 standard deviation of the mean, (1 – 0.68) / 2 = 0.16 is the probability of being further than 1 standard deviation below the mean (or further than 1 standard deviation above the mean.)
• Likewise, (1 – 0.95) / 2 = 0.025 is the probability of being more than 2 standard deviations below (or above) the mean.
• And (1 – 0.997) / 2 = 0.0015 is the probability of being more than 3 standard deviations below (or above) the mean.
• The three figures below illustrate this.
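These three probabilities are easy to confirm with software; the optional Python sketch below uses the standard normal distribution, and the answers do not depend on the particular mean or standard deviation.

```python
from scipy.stats import norm

# P(mu - k*sigma < X < mu + k*sigma) for a normal random variable, for k = 1, 2, 3
for k in (1, 2, 3):
    print(k, round(norm.cdf(k) - norm.cdf(-k), 4))
# Approximately 0.6827, 0.9545, 0.9973: the Standard Deviation Rule
```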
EXAMPLE: Foot Length
Suppose that foot length of a randomly chosen adult male is a normal random variable with mean μ = mu = 11 and standard deviation σ = sigma =1.5. Then the Standard Deviation Rule lets us sketch the probability distribution of X as follows:
(a) What is the probability that a randomly chosen adult male will have a foot length between 8 and 14 inches?
0.95, or 95%.
(b) An adult male is almost guaranteed (.997 probability) to have a foot length between what two values?
6.5 and 15.5 inches.
(c) The probability is only 2.5% that an adult male will have a foot length greater than how many inches?
14. (See image below)
Now you should try a few. (Use the figure that is just before part (a) to help you.)
Learn by Doing: Using the Standard Deviation Rule
Comment
• Notice that there are two types of problems we may want to solve: those like (a), (d) and (e), in which a particular interval of values of a normal random variable is given, and we are asked to find a probability, and those like (b), (c) and (f), in which a probability is given and we are asked to identify what the normal random variable’s values would be.
Did I Get This?: Using the Standard Deviation Rule
Learn by Doing: Normal Random Variables
Let’s go back to our example of foot length:
EXAMPLE: Foot Length
How likely or unlikely is it for a male’s foot length to be more than 13 inches?
Since 13 inches doesn’t happen to be exactly 1, 2, or 3 standard deviations away from the mean, we would only be able to give a very rough estimate of the probability at this point.
Clearly, the Standard Deviation Rule only describes the tip of the iceberg, and while it serves well as an introduction to the normal curve, and gives us a good sense of what would be considered likely and unlikely values, it is very limited in the probability questions it can help us answer.
Here is another familiar normal distribution:
EXAMPLE: SAT Scores
Suppose we are interested in knowing the probability that a randomly selected student will score 633 or more on the math portion of his or her SAT (this is represented by the red area). Again, 633 does not fall exactly 1, 2, or 3 standard deviations above the mean.
Notice, however, that an SAT score of 633 and a foot length of 13 are both about 1/3 of the way between 1 and 2 standard deviations. As you continue to read, you’ll realize that this positioning relative to the mean is the key to finding probabilities.
Standard Normal Distribution
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Video
Video: Standard Normal Distribution (4:12)
Finding Probabilities for a Normal Random Variable
Learning Objectives
LO 6.17: Find probabilities associated with a specified normal distribution.
As we saw, the Standard Deviation Rule is very limited in helping us answer probability questions, and basically limited to questions involving values that fall exactly 1, 2, and 3 standard deviations away from the mean. How do we answer probability questions in general? The key is the position of the value relative to the mean, measured in standard deviations.
We can approach the answering of probability questions two possible ways: a table and technology. In the next sections, you will learn how to use the “standard normal table,” and then how the same calculations can be done with technology.
Standardizing Values
The first step to assessing a probability associated with a normal value is to determine the relative value with respect to all the other values taken by that normal variable. This is accomplished by determining how many standard deviations below or above the mean that value is.
EXAMPLE: Foot Length
How many standard deviations below or above the mean male foot length is 13 inches? Since the mean is 11 inches, 13 inches is 2 inches above the mean.
Since a standard deviation is 1.5 inches, this would be 2 / 1.5 = 1.33 standard deviations above the mean. Combining these two steps, we could write:
(13 in. – 11 in.) / (1.5 inches per standard deviation) = (13 – 11) / 1.5 standard deviations = +1.33 standard deviations.
In the language of statistics, we have just found the z-score for a male foot length of 13 inches to be z = +1.33. Or, to put it another way, we have standardized the value of 13.
In general, the standardized value z tells how many standard deviations below or above the mean the original value is, and is calculated as follows:
z-score = (value – mean)/standard deviation
The convention is to denote a value of our normal random variable X with the letter “x.”
$z=\dfrac{x-\mu}{\sigma}$
Notice that since the standard deviation (sigma, σ) is always positive, for values of x above the mean (mu, μ), z will be positive; for values of x below the mean (mu, μ), z will be negative.
Let’s go back to our foot length example, and answer some more questions.
EXAMPLE: Foot Length
(a) What is the standardized value for a male foot length of 8.5 inches? How does this foot length relate to the mean?
z = (8.5 – 11) / 1.5 = -1.67. This foot length is 1.67 standard deviations below the mean.
(b) A man’s standardized foot length is +2.5. What is his actual foot length in inches?
If z = +2.5, then his foot length is 2.5 standard deviations above the mean. Since the mean is 11, and each standard deviation is 1.5, we get that the man’s foot length is: 11 + 2.5(1.5) = 14.75 inches.
Note that z-scores also allow us to compare values of different normal random variables. Here is an example:
(c) In general, women’s foot length is shorter than men’s. Assume that women’s foot length follows a normal distribution with a mean of 9.5 inches and standard deviation of 1.2. Ross’ foot length is 13.25 inches, and Candace’s foot length is only 11.6 inches. Which of the two has a longer foot relative to his or her gender group?
To answer this question, let’s find the z-score of each of these two normal values, bearing in mind that each of the values comes from a different normal distribution.
Ross: z-score = (13.25 – 11) / 1.5 = 1.5 (Ross’ foot length is 1.5 standard deviations above the mean foot length for men).
Candace: z-score = (11.6 – 9.5) / 1.2 = 1.75 (Candace’s foot length is 1.75 standard deviations above the mean foot length for women).
Note that even though Ross’ foot is longer than Candace’s, Candace’s foot is longer relative to their respective genders.
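The standardization step itself is only a subtraction and a division; here is an optional Python sketch that reproduces the z-scores from these examples (the function name z_score is our own choice).

```python
def z_score(x, mu, sigma):
    """How many standard deviations x lies above (+) or below (-) the mean mu."""
    return (x - mu) / sigma

print(round(z_score(13, 11, 1.5), 2))      # 1.33, the 13-inch male foot length from earlier
print(round(z_score(13.25, 11, 1.5), 2))   # Ross: 1.5
print(round(z_score(11.6, 9.5, 1.2), 2))   # Candace: 1.75
```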
Comment:
• Part (c) above illustrates how z-scores become crucial when you want to compare distributions.
Did I Get This?: Standardized Scores (z-scores)
Finding Probabilities with the Normal Calculator and Table
Now that you have learned to assess the relative value of any normal value by standardizing, the next step is to evaluate probabilities. As mentioned before, we will first take the conventional approach of referring to a normal table, which tells the probability of a normal variable taking a value less than any standardized score z.
Standard Normal Table
Since normal curves are symmetric about their mean, it follows that the curve of z scores must be symmetric about 0. Since the total area under any normal curve is 1, it follows that the areas on either side of z = 0 are both 0.5. Also, according to the Standard Deviation Rule, most of the area under the standardized curve falls between z = -3 and z = +3.
The normal table outlines the precise behavior of the standard normal random variable Z, the number of standard deviations a normal value x is below or above its mean. The normal table provides probabilities that a standardized normal random variable Z would take a value less than or equal to a particular value z*.
These particular values are listed in the form *.* in rows along the left margins of the table, specifying the ones and tenths. The columns fine-tune these values to hundredths, allowing us to look up the probability of being below any standardized value z of the form *.**.
For example, in the part of the table shown below, we can see that for a z-score of -2.81, we would find P(Z < -2.81) = 0.0025.
By construction, the probability P(Z < z*) equals the area under the z curve to the left of that particular value z*.
A quick sketch is often the key to solving normal problems easily and correctly.
Although normal tables are the traditional way to solve these problems, you can also use the normal calculator.
Normal Distribution Calculator: Non-JAVA Version
The image below illustrates the results of using the online calculator to find P(Z < -2.81) and P(Z < 1.15). Notice that the calculator behaves exactly as the table.
It is your choice to use the table or the online calculator but we will usually illustrate with the online calculator.
EXAMPLE: Standard Normal Probabilities
(a) What is the probability of a normal random variable taking a value less than 2.8 standard deviations above its mean?
P(Z < 2.8) = 0.9974 or 99.74%.
(b) What is the probability of a normal random variable taking a value lower than 1.47 standard deviations below its mean?
P(Z < -1.47) = 0.0708, or 7.08%.
(c) What is the probability of a normal random variable taking a value more than 0.75 standard deviations above its mean?
The fact that the problem involves the word “more” rather than “less” should not be overlooked! Our normal calculator provides left-tail probabilities, and adjustments must be made for any other type of problem.
Method 1:
By symmetry of the z curve centered on 0,
P(Z > +0.75) = P(Z < -0.75) = 0.2266.
Method 2:
Because the total area under the normal curve is 1,
P(Z > +0.75) = 1 – P(Z < +0.75) = 1 – 0.7734 = 0.2266.
[Note: most students prefer to use Method 1, which does not require subtracting 4-digit probabilities from 1.]
(d) What is the probability of a normal random variable taking a value between 1 standard deviation below and 1 standard deviation above its mean?
To find the probability of falling between two values, we must express it in terms of “less than” probabilities. A sketch is especially helpful here:
P(-1 < Z < +1) = P(Z < +1) – P(Z < -1) = 0.8413 – 0.1587 = 0.6826.
Here are the normal calculator results which would be needed.
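If you would like to check these answers with software instead of the table or the online calculator, the standard normal CDF (the area to the left of a z-score) is available in most statistical packages. The following is a minimal sketch using Python with scipy, which is an assumption on my part and not part of the original course materials.

```python
# Standard normal left-tail probabilities with scipy (scipy assumed installed).
# norm.cdf(z) returns the area to the left of z, just like the normal table.
from scipy.stats import norm

print(norm.cdf(2.8))               # (a) P(Z < 2.8)   ≈ 0.9974
print(norm.cdf(-1.47))             # (b) P(Z < -1.47) ≈ 0.0708
print(norm.cdf(-0.75))             # (c) P(Z > 0.75) by symmetry (Method 1) ≈ 0.2266
print(1 - norm.cdf(0.75))          #     same answer via the complement (Method 2)
print(norm.cdf(1) - norm.cdf(-1))  # (d) P(-1 < Z < 1) ≈ 0.6827 (table: 0.6826)
```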
Did I Get This?: Standard Normal Probabilities
Comments:
• So far, we have used the normal calculator or table to find a probability, given the number (z) of standard deviations below or above the mean. The solution process when using the table involved first locating the given z value of the form *.** in the margins, then finding the corresponding probability of the form 0.**** inside the table as our answer.
• Now, in Example 2, a probability will be given and we will be asked to find a z value. The solution process using the table involves first locating the given probability of the form 0.**** inside the table, then finding the corresponding z value of the form *.** as our answer. For the online calculator, the solution is as simple as typing in the given probability and having the calculator solve, in reverse, for the z-score.
Finding Standard Normal Scores
Learning Objectives
LO 6.18: Given a probability, find scores associated with a specified normal distribution.
It is often good to think about this process as the reverse of finding probabilities. In these problems, we will be given some information about the area in a range and asked to provide the z-score(s) associated with that range. Common types of questions are
• Find the standard normal z-score corresponding to the top (or bottom) 8%.
• Find the standard normal z-score associated with the 25th percentile.
• Find the standard normal z-scores which contain the middle 40%.
EXAMPLE: Given Probabilities - Find Z-Scores
(a) What standard normal z-score is associated with the bottom (or lowest) 1%? The probability is 0.01 that a standardized normal variable takes a value below what particular value of z?
The closest we can come to a probability of 0.01 inside the table is 0.0099, in the z = -2.3 row and 0.03 column: z = -2.33. In other words, the probability is 0.01 that the value of a normal variable is lower than 2.33 standard deviations below its mean.
Using the online calculator, we simply use the calculator in reverse by typing in 0.01 in the “area” box (outlined in blue) and then click “compute” to see the associated z-score. Remember that, like the table, we always need to provide this calculator with the area to the left of the z-score we are currently trying to find.
(b) What standard normal z-score corresponds to the top (or upper) 15%? The probability is 0.15 that a standardized normal variable takes a value above what particular value of z?
Remember that the calculator and table only provide probabilities of being below a certain value, not above. Once again, we must rely on one of the properties of the normal curve to make an adjustment.
Method 1: According to the table, 0.15 (actually 0.1492) is the probability of being below -1.04. By symmetry, 0.15 must also be the probability of being above +1.04. Using the calculator, we can enter 0.15 exactly and find that the corresponding z-score is -1.036, giving a final answer of z = +1.036, or +1.04 if we round to two decimal places, which is our preference (this way the answer is the same whether you use the table or the online calculator).
Method 2: If 0.15 is the probability of being above the value we seek, then 1 – 0.15 = 0.85 must be the probability of being below the value we seek. According to the table, 0.85 (actually 0.8508) is the probability of being below +1.04.
In other words, we have found 0.15 to be the probability that a normal variable takes a value more than 1.04 standard deviations above its mean.
(c) What standard normal z-scores contain the middle 95%? The probability is 0.95 that a normal variable takes a value within how many standard deviations of its mean?
A symmetric area of 0.95 centered at 0 extends to values -z* and +z* such that the remaining (1 – 0.95) / 2 = 0.025 is below -z* and also 0.025 above +z*. The probability is 0.025 that a standardized normal variable is below -1.96. Thus, the probability is 0.95 that a normal variable takes a value within 1.96 standard deviations of its mean. Once again, the Standard Deviation Rule is shown to be just roughly accurate, since it states that the probability is 0.95 that a normal variable takes a value within 2 standard deviations of its mean.
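For readers who want to check these “reverse” lookups with software, the inverse of the standard normal CDF (often called the quantile or percent point function) does exactly what the calculator does when you type in an area. A short sketch with scipy (an assumption, not part of the original materials) follows.

```python
# norm.ppf is the inverse CDF: it returns the z-score whose left-tail area
# equals the given probability (scipy assumed installed).
from scipy.stats import norm

print(norm.ppf(0.01))    # (a) bottom 1%: ≈ -2.33
print(norm.ppf(0.85))    # (b) top 15% means area 0.85 to the left: ≈ +1.04
print(norm.ppf(0.975))   # (c) middle 95% leaves 0.025 in each tail: ≈ +1.96
```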
Did I Get This?: Finding Standard Normal Scores
Although the online calculator can provide results for any probability or z-score, our standard normal table, like most, only provides probabilities for z values between -3.49 and +3.49. The following example demonstrates how to handle cases where z exceeds 3.49 in absolute value.
EXAMPLE: Extreme Probabilities
(a) What is the probability of a normal variable being lower than 5.2 standard deviations below its mean?
There is no need to panic about going “off the edge” of the normal table. We already know from the Standard Deviation Rule that the probability is only about (1 – 0.997) / 2 = 0.0015 that a normal value would be more than 3 standard deviations away from its mean in one direction or the other. The table provides information for z values as extreme as plus or minus 3.49: the probability is only 0.0002 that a normal variable would be lower than 3.49 standard deviations below its mean. Any more standard deviations than that, and we generally say the probability is approximately zero.
In this case, we would say the probability of being lower than 5.2 standard deviations below the mean is approximately zero:
P(Z < -5.2) = 0 (approx.)
(b) What is the probability of the value of a normal variable being higher than 6 standard deviations below its mean?
Since the probability of being lower than 6 standard deviations below the mean is approximately zero, the probability of being higher than 6 standard deviations below the mean must be approximately 1.
P(Z > -6) = 1 (approx.)
(c) What is the probability of a normal variable being less than 8 standard deviations above the mean?
Approximately 1. P(Z < +8) = 1 (approx.)
(d) What is the probability of a normal variable being greater than 3.5 standard deviations above the mean?
Approximately 0. P(Z > +3.5) = 0 (approx.)
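Software has no trouble with values that go “off the edge” of the printed table, which is one reason the “approximately zero / approximately one” answers above are safe. A quick check with scipy (assumed, not part of the original course):

```python
# Extreme-tail standard normal probabilities beyond the range of the printed table.
from scipy.stats import norm

print(norm.cdf(-5.2))      # (a) ≈ 1e-7, effectively 0
print(1 - norm.cdf(-6))    # (b) ≈ 1
print(norm.cdf(8))         # (c) ≈ 1
print(1 - norm.cdf(3.5))   # (d) ≈ 0.0002, effectively 0
```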
Normal Applications
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Video
Video: Normal Applications (9:41)
Working with Non-standard Normal Values
Learning Objectives
LO 6.17: Find probabilities associated with a specified normal distribution.
In a much earlier example, we wondered,
“How likely or unlikely is a male foot length of more than 13 inches?” We were unable to solve the problem, because 13 inches didn’t happen to be one of the values featured in the Standard Deviation Rule.
Subsequently, we learned how to standardize a normal value (tell how many standard deviations below or above the mean it is) and how to use the normal calculator or table to find the probability of falling in an interval a certain number of standard deviations below or above the mean.
By combining these two skills, we will now be able to answer questions like the one above.
To convert between a non-standard normal (X) and the standard normal (Z) use the following equations, as needed:
$Z = \dfrac{x - \mu}{\sigma} \quad \quad X = \mu + z\sigma$
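These two conversions are simple enough to write as one-line helper functions. The sketch below (Python, with hypothetical helper names chosen for illustration) simply restates the formulas above.

```python
# Convert between a non-standard normal value x and its z-score,
# for a normal distribution with mean mu and standard deviation sigma.
def standardize(x, mu, sigma):
    return (x - mu) / sigma

def unstandardize(z, mu, sigma):
    return mu + z * sigma

# Example from this section: a 13-inch male foot length (mu = 11, sigma = 1.5).
print(standardize(13, 11, 1.5))       # ≈ 1.33 standard deviations above the mean
print(unstandardize(1.33, 11, 1.5))   # ≈ 13 inches, back where we started
```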
EXAMPLE: Male Foot Length
Male foot lengths have a normal distribution, with mean (mu, μ) = 11 inches, and standard deviation (sigma, σ) = 1.5 inches.
(a) What is the probability of a foot length of more than 13 inches?
First, we standardize:
$z = \dfrac{x-\mu}{\sigma} = \dfrac{13-11}{1.5} = +1.33$
The probability that we seek, P(X > 13), is the same as the probability that a normal variable takes a value greater than 1.33 standard deviations above its mean, i.e. P(Z > +1.33)
This can be solved with the normal calculator or table, after applying the property of symmetry:
P(Z > +1.33) = P(Z < -1.33) = 0.0918.
A male foot length of more than 13 inches is on the long side, but not too unusual: its probability is about 9%.
We can streamline the solution in terms of probability notation and write:
P(X > 13) = P(Z > 1.33) = P(Z < −1.33) = 0.0918
(b) What is the probability of a male foot length between 10 and 12 inches?
The standardized values of 10 and 12 are, respectively,
$\dfrac{10-11}{1.5} = -0.67$ and $\dfrac{12-11}{1.5} = 0.67$
Note: The two z-scores in a “between” problem will not always be the same value. You must calculate both or, in this case, you could recognize that both values are the same distance from the mean and hence result in z-scores which are equal but of opposite signs.
P(-0.67 < Z < +0.67) = P(Z < +0.67) – P(Z < -0.67) = 0.7486 – 0.2514 = 0.4972.
Or, if you prefer the streamlined notation,
P(10 < X < 12) = P(−0.67 < Z < +0.67) = P( Z < +0.67) − P(Z < −0.67) = 0.7486 − 0.2514 = 0.4972.
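Software can also skip the explicit standardization step by working with the non-standard normal directly. The sketch below (scipy assumed; loc is the mean and scale is the standard deviation) reproduces parts (a) and (b); the small differences from the answers above come from rounding z to two decimal places in the hand calculation.

```python
# Foot length X ~ Normal(mean 11, sd 1.5); scipy standardizes internally.
from scipy.stats import norm

print(1 - norm.cdf(13, loc=11, scale=1.5))    # (a) P(X > 13) ≈ 0.091
print(norm.cdf(12, loc=11, scale=1.5)
      - norm.cdf(10, loc=11, scale=1.5))      # (b) P(10 < X < 12) ≈ 0.495
# The hand answers 0.0918 and 0.4972 use z rounded to 1.33 and 0.67.
```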
Comments:
By solving the above example, we inadvertently discovered the quartiles of a normal distribution! P(Z < -0.67) = 0.2514 tells us that roughly 25%, or one quarter, of a normal variable’s values are less than 0.67 standard deviations below the mean.
P(Z < +0.67) = 0.7486 tells us that roughly 75%, or three quarters, are less than 0.67 standard deviations above the mean.
And of course, since the distribution is symmetric, the median is equal to the mean: the median is 0 standard deviations away from the mean.
Be sure to verify these results for yourself using the calculator or table!
Let’s look at another example.
EXAMPLE: Length of a Human Pregnancy
Length (in days) of a randomly chosen human pregnancy is a normal random variable with mean (mu, μ) = 266 and standard deviation (sigma, σ) = 16.
(a) Find Q1, the median, and Q3. Using the z-scores we found in the previous example we have
Q1 = 266 – 0.67(16) = 255
median = mean = 266
Q3 = 266 + 0.67(16) = 277
Thus, the probability is 1/4 that a pregnancy will last less than 255 days; 1/2 that it will last less than 266 days; 3/4 that it will last less than 277 days.
(b) What is the probability that a randomly chosen pregnancy will last less than 246 days?
Since (246 – 266) / 16 = -1.25, we write
P(X < 246) = P(Z < −1.25) = 0.1056
(c) What is the probability that a randomly chosen pregnancy will last longer than 240 days?
Since (240 – 266) / 16 = -1.63, we write
P(X > 240) = P(Z > −1.63) = P(Z < +1.63) = 0.9484
Since the mean is 266 and the standard deviation is 16, most pregnancies last longer than 240 days.
(d) What is the probability that a randomly chosen pregnancy will last longer than 500 days?
Method 1:
Common sense tells us that this would be impossible.
Method 2:
The standardized value of 500 is (500 – 266) / 16 = +14.625.
P(X > 500) = P(Z > 14.625) = 0.
(e) Suppose a pregnant woman’s husband has scheduled his business trips so that he will be in town between the 235th and 295th days. What is the probability that the birth will take place during that time?
The standardized values are (235 – 266) / 16 = -1.94 and (295 – 266) / 16 = +1.81.
P(235 < X < 295) = P(−1.94 < Z < +1.81) = P(Z < +1.81) − P(Z < −1.94) = 0.9649 − 0.0262 = 0.9387.
There is close to a 94% chance that the husband will be in town for the birth.
Be sure to verify these results for yourself using the calculator or table!
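One way to “verify these results for yourself” is with software. The sketch below (scipy assumed, not part of the original materials) recomputes the quartiles and probabilities for pregnancy length; tiny differences from the text come from rounding z-scores to two decimal places.

```python
# Pregnancy length X ~ Normal(mean 266 days, sd 16 days).
from scipy.stats import norm

mu, sigma = 266, 16
print(norm.ppf(0.25, loc=mu, scale=sigma))     # Q1 ≈ 255 days
print(norm.ppf(0.75, loc=mu, scale=sigma))     # Q3 ≈ 277 days
print(norm.cdf(246, loc=mu, scale=sigma))      # (b) P(X < 246) ≈ 0.106
print(1 - norm.cdf(240, loc=mu, scale=sigma))  # (c) P(X > 240) ≈ 0.948 (hand: 0.9484 with z = -1.63)
print(norm.cdf(295, loc=mu, scale=sigma)
      - norm.cdf(235, loc=mu, scale=sigma))    # (e) P(235 < X < 295) ≈ 0.939
```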
The purpose of the next activity is to give you guided practice at solving word problems that involve normal random variables. In particular, we’ll solve problems like the examples you just went over, in which you are asked to find the probability that a normal random variable falls within a certain interval.
Learn by Doing: Find Normal Probabilities
The previous examples mostly followed the same general form: given values of a normal random variable, you were asked to find an associated probability. The two basic steps in the solution process were to
• Standardize to Z;
• Find associated probabilities using the standard normal calculator or table.
Finding Normal Scores
Learning Objectives
LO 6.18: Given a probability, find scores associated with a specified normal distribution.
The next example will be a different type of problem: given a certain probability, you will be asked to find the associated value of the normal random variable. The solution process will go more or less in reverse order from what it was in the previous examples.
EXAMPLE: Foot Length
Again, foot length of a randomly chosen adult male is a normal random variable with a mean of 11 and standard deviation of 1.5.
(a) The probability is 0.04 that a randomly chosen adult male foot length will be less than how many inches?
According to the normal calculator or table, a probability of 0.04 below (actually 0.0401) is associated with z = -1.75.
In other words, the probability is 0.04 that a normal variable takes a value lower than 1.75 standard deviations below its mean.
For adult male foot lengths, this would be 11 – 1.75(1.5) = 8.375. The probability is 0.04 that an adult male foot length would be less than 8.375 inches.
(b) The probability is 0.10 that an adult male foot will be longer than how many inches? Caution is needed here because of the word “longer.”
Once again, we must remind ourselves that the calculator and table only show the probability of a normal variable taking a value lower than a certain number of standard deviations below or above its mean. Adjustments must be made for problems that involve probabilities besides “lower than” or “less than.” As usual, we have a choice of invoking either symmetry or the fact that the total area under the normal curve is 1. Students should examine both methods and decide which they prefer to use for their own purposes.
Method 1:
According to the calculator or table, a probability of 0.10 below is associated with a z value of -1.28. By symmetry, it follows that a probability of 0.10 above has z = +1.28.
We seek the foot length that is 1.28 standard deviations above its mean: 11 + 1.28(1.5) = 12.92, or just under 13 inches.
Method 2: If the probability is 0.10 that a foot will be longer than the value we seek, then the probability is 0.90 that a foot will be shorter than that same value, since the probabilities must sum to 1.
According to the calculator or table, a probability of 0.90 below is associated with a z value of +1.28. Again, we seek the foot length that is 1.28 standard deviations above its mean, or 12.92 inches.
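Both methods correspond to a single software call: ask for the value whose left-tail area matches the problem. A sketch with scipy (assumed) for parts (a) and (b):

```python
# Foot length X ~ Normal(mean 11, sd 1.5); ppf returns the value with the given
# area to its left, which is the "reverse" lookup described above.
from scipy.stats import norm

print(norm.ppf(0.04, loc=11, scale=1.5))   # (a) ≈ 8.37 inches (hand answer 8.375 uses z = -1.75)
print(norm.ppf(0.90, loc=11, scale=1.5))   # (b) longest 10% of feet start at ≈ 12.92 inches
```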
Comment:
• Part (a) in the above example could have been re-phrased as: “0.04 is the proportion of all adult male foot lengths that are below what value?”, which takes the perspective of thinking about the probability as a proportion of occurrences in the long-run. As originally stated, it focuses on the chance of a randomly chosen individual having a normal value in a given interval.
EXAMPLE: Money Spent for Lunch
A study reported that the amount of money spent each week for lunch by a worker in a particular city is a normal random variable with a mean of $35 and a standard deviation of $5.
(a) The probability is 0.97 that a worker will spend less than how much money in a week on lunch?
The z associated with a probability of 0.9700 below is +1.88. The amount that is 1.88 standard deviations above the mean is 35 + 1.88(5) = 44.4, or $44.40.
(b) There is a 30% chance of spending more than how much for lunches in a week?
The z associated with a probability of 0.30 above is +0.52. The amount is 35 + 0.52(5) = 37.6, or $37.60.
Comment:
• Another way of expressing part (a) of the example above would be to ask, “What is the 97th percentile for the amount (X) spent by workers in a week for their lunch?” Many normal variables, such as heights, weights, or exam scores, are often expressed in terms of percentiles.
EXAMPLE:
The height X (in inches) of a randomly chosen woman is a normal random variable with a mean of 65 and a standard deviation of 2.5.
What is the height of a woman who is in the 80th percentile?
A probability of 0.7995 in the table corresponds to z = +0.84. Her height is 65 + 0.84(2.5) = 67.1 inches.
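Percentile questions like these are again one inverse-CDF call each. A sketch with scipy (assumed) for the lunch-money and height examples:

```python
# "Given a probability, find the normal value" via the inverse CDF.
from scipy.stats import norm

print(norm.ppf(0.97, loc=35, scale=5))    # lunch (a): 97th percentile ≈ $44.40
print(norm.ppf(0.70, loc=35, scale=5))    # lunch (b): top 30% starts at ≈ $37.6
print(norm.ppf(0.80, loc=65, scale=2.5))  # height, 80th percentile ≈ 67.1 inches
```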
By now we have had practice in solving normal probability problems in both directions: those where a normal value is given and we are asked to report a probability and those where a probability is given and we are asked to report a normal value. Strategies for solving such problems are outlined below:
• Given a normal value x, solve for probability:
• Standardize: calculate
$Z = \dfrac{x-\mu}{\sigma}$
• If you are using the online calculator: Type the z-score for which you wish to find the area to the left and hit “compute.”
• If you are using the table: Locate z in the margins of the normal table (ones and tenths for the row, hundredths for the column). Find the corresponding probability (given to four decimal places) of a normal random variable taking a value below z inside the table.
• (Adjust if the problem involves something other than a “less-than” probability, by invoking either symmetry or the fact that the total area under the normal curve is 1.)
• Given a probability, solve for normal value x:
• (Adjust if the problem involves something other than a “less-than” probability, by invoking either symmetry or the fact that the total area under the normal curve is 1.)
• Locate the probability (given to four decimal places) inside the normal table. Using the table, find the corresponding z value in the margins (row for ones and tenths, column for hundredths). Using the calculator, provide the area to the left of the z-score you wish to find and hit “compute.”
• “Unstandardize”: calculate
$X = \mu + z\sigma$
This next activity is a continuation of the previous one, and will give you guided practice in solving word problems involving the normal distribution. In particular, we’ll solve problems like the ones you just solved, in which you are given a probability and you are asked to find the normal value associated with it.
Learn by Doing: Find Normal Scores
Normal Approximation for Binomial
The normal distribution can be used as a reasonable approximation to other distributions under certain circumstances. Here we will illustrate this approximation for the binomial distribution.
We will not do any calculations here as we simply wish to illustrate the concept. In the next section on sampling distributions, we will look at another measure related to the binomial distribution, the sample proportion, and at that time we will discuss the underlying normal distribution.
Consider the binomial probability distribution displayed below for n = 20 and p = 0.5.
Now we overlay a normal distribution with the same mean and standard deviation.
And in the final image, we can see the regions for the exact and approximate probabilities shaded.
Unfortunately, for the probability that X is less than or equal to 8, the approximated probability, 0.1867, is quite a bit different from the actual probability, 0.2517. However, this example constitutes something of a “worst-case scenario” according to the usual criteria for use of a normal approximation.
Rule of Thumb
Probabilities for a binomial random variable X with parameters n and p may be approximated by those for a normal random variable having the same mean and standard deviation, as long as the sample size n is large enough relative to the proportions of successes and failures, p and 1 – p. Our Rule of Thumb will be to require that
np ≥ 10 and n(1 − p) ≥ 10
Continuity Correction
It is possible to improve the normal approximation to the binomial by adjusting for the discrepancy that arises when we make the shift from the areas of histogram rectangles to the area under a smooth curve. For example, if we want to find the binomial probability that X is less than or equal to 8, we are including the area of the entire rectangle over 8, which actually extends to 8.5. Our normal approximation only included the area up to 8.
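To see the continuity correction in action for the n = 20, p = 0.5 example above, the sketch below (scipy assumed, not part of the original materials) compares the exact binomial probability P(X ≤ 8) with the plain normal approximation and with the continuity-corrected version.

```python
# Exact binomial probability vs. normal approximation, with and without
# the continuity correction, for n = 20 and p = 0.5.
import math
from scipy.stats import binom, norm

n, p = 20, 0.5
mu = n * p                              # 10
sigma = math.sqrt(n * p * (1 - p))      # ≈ 2.236

print(binom.cdf(8, n, p))                   # exact P(X <= 8)        ≈ 0.2517
print(norm.cdf(8, loc=mu, scale=sigma))     # no correction          ≈ 0.186 (text's 0.1867 rounds z to -0.89)
print(norm.cdf(8.5, loc=mu, scale=sigma))   # continuity correction  ≈ 0.251, much closer to the exact value
```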
We have almost reached the end of our discussion of probability. We were introduced to the important concept of random variables, which are quantitative variables whose value is determined by the outcome of a random experiment.
We discussed discrete and continuous random variables.
We saw that all the information about a discrete random variable is packed into its probability distribution. Using that, we can answer probability questions about the random variable and find its mean and standard deviation. We ended the part on discrete random variables by presenting a special class of discrete random variables – binomial random variables.
As we dove into continuous random variables, we saw how calculations can get complicated very quickly, when probabilities associated with a continuous random variable are found by calculating areas under its density curve.
As an example of a continuous random variable, we presented the normal random variable, and discussed it at length. The normal distribution is extremely important, not just because many variables in real life follow the normal distribution, but mainly because of the important role it plays in statistical inference, which is the ultimate goal of this course.
We learned how we can avoid calculus by using the standard normal calculator or table to find probabilities associated with the normal distribution, and learned how it can be used as an approximation to the binomial distribution under certain conditions.
Random Variables
A random variable is a variable whose values are numerical results of a random experiment.
• A discrete random variable is summarized by its probability distribution — a list of its possible values and their corresponding probabilities.
The sum of the probabilities of all possible values must be 1.
The probability distribution can be represented by a table, histogram, or sometimes a formula.
• The probability distribution of a random variable can be supplemented with numerical measures of the center and spread of the random variable.
Center: The center of a random variable is measured by its mean (which is sometimes also referred to as the expected value).
The mean of a random variable can be interpreted as its long run average.
The mean is a weighted average of the possible values of the random variable weighted by their corresponding probabilities.
Spread: The spread of a random variable is measured by its variance, or more typically by its standard deviation (the square root of the variance).
The standard deviation of a random variable can be interpreted as the typical (or long-run average) distance between the value that the random variable assumes and its mean.
Binomial Random Variables
• The binomial random variable is a type of discrete random variable that is quite common.
• The binomial random variable is defined in a random experiment that consists of n independent trials, each having two possible outcomes (called “success” and “failure”), and each having the same probability of success: p. Such a random experiment is called the binomial random experiment.
• The binomial random variable represents the number of successes (out of n) in a binomial experiment. It can therefore have values as low as 0 (if none of the n trials was a success) and as high as n (if all n trials were successes).
• There are “many” binomial random variables, depending on the number of trials (n) and the probability of success (p).
• The probability distribution of the binomial random variable is given in the form of a formula and can be used to find probabilities. Technology can be used as well.
• The mean and standard deviation of a binomial random variable can be easily found using short-cut formulas.
Continuous Random Variables
The probability distribution of a continuous random variable is represented by a probability density curve. The probability that the random variable takes a value in any interval of interest is the area above this interval and below the density curve.
An important example of a continuous random variable is the normal random variable, whose probability density curve is symmetric (bell-shaped), bulging in the middle and tapering at the ends.
• There are “many” normal random variables, each determined by its mean μ (mu) (which determines where the density curve is centered) and standard deviation σ (sigma) (which determines how spread out (wide) the normal density curve is).
• Any normal random variable follows the Standard Deviation Rule, which can help us find probabilities associated with the normal random variable.
• Another way to find probabilities associated with the normal random variable is using the standard normal table. This process involves finding the z-score of values, which tells us how many standard deviations below or above the mean the value is.
• An important application of the normal random variable is that it can be used as an approximation of the binomial random variable (under certain conditions). A continuity correction can improve this approximation.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
NOTE: The following videos discuss all three pages related to sampling distributions.
Review: We will apply the concepts of normal random variables to two random variables which are summary statistics from a sample – these are the sample mean (x-bar) and the sample proportion (p-hat).
Video
Video: Sampling Distributions (34:00 total time)
Introduction
Already on several occasions we have pointed out the important distinction between a population and a sample. In Exploratory Data Analysis, we learned to summarize and display values of a variable for a sample, such as displaying the blood types of 100 randomly chosen U.S. adults using a pie chart, or displaying the heights of 150 males using a histogram and supplementing it with appropriate numerical measures such as the sample mean (x-bar) and sample standard deviation (s).
In our study of Probability and Random Variables, we discussed the long-run behavior of a variable, considering the population of all possible values taken by that variable. For example, we talked about the distribution of blood types among all U.S. adults and the distribution of the random variable X, representing a male’s height.
Now we focus directly on the relationship between the values of a variable for a sample and its values for the entire population from which the sample was taken. This material is the bridge between probability and our ultimate goal of the course, statistical inference. In inference, we look at a sample and ask what we can say about the population from which it was drawn.
Now, we’ll pose the reverse question: If I know what the population looks like, what can I expect the sample to look like? Clearly, inference poses the more practical question, since in practice we can look at a sample, but rarely do we know what the whole population looks like. This material will be more theoretical in nature, since it poses a problem which is not really practical, but will present important ideas which are the foundation for statistical inference.
Parameters vs. Statistics
Learning Objectives
LO 6.19: Identify and distinguish between a parameter and a statistic.
Learning Objectives
LO 6.20: Explain the concepts of sampling variability and sampling distribution.
To better understand the relationship between sample and population, let’s consider the two examples that were mentioned in the introduction.
EXAMPLE 1: Blood Type - Sampling Variability
In the probability section, we presented the distribution of blood types in the entire U.S. population:
Assume now that we take a sample of 500 people in the United States, record their blood type, and display the sample results:
Note that the percentages (or proportions) that we found in our sample are slightly different than the population percentages. This is really not surprising. Since we took a sample of just 500, we cannot expect that our sample will behave exactly like the population, but if the sample is random (as it was), we expect to get results which are not that far from the population (as we did). If we took yet another sample of size 500:
we again get sample results that are slightly different from the population figures, and also different from what we found in the first sample. This very intuitive idea, that sample results change from sample to sample, is called sampling variability.
Let’s look at another example:
EXAMPLE 2: Heights of Adults Males - Sampling Variability
Heights among the population of all adult males follow a normal distribution with a mean μ = mu =69 inches and a standard deviation σ = sigma =2.8 inches. Here is a probability display of this population distribution:
A sample of 200 males was chosen, and their heights were recorded. Here are the sample results:
The sample mean (x-bar) is 68.7 inches and the sample standard deviation (s) is 2.95 inches.
Again, note that the sample results are slightly different from the population. The histogram for this sample resembles the normal distribution, but is not as fine, and also the sample mean and standard deviation are slightly different from the population mean and standard deviation. Let’s take another sample of 200 males:
The sample mean (x-bar) is 69.1 inches and the sample standard deviation (s) is 2.66 inches.
Again, as in Example 1 we see the idea of sampling variability. In this second sample, the results are pretty close to the population, but different from the results we found in the first sample.
In both the examples, we have numbers that describe the population, and numbers that describe the sample. In Example 1, the number 42% is the population proportion of blood type A, and 39.6% is the sample proportion (in sample 1) of blood type A. In Example 2, 69 and 2.8 are the population mean and standard deviation, and (in sample 1) 68.7 and 2.95 are the sample mean and standard deviation.
A parameter is a number that describes the population.
A statistic is a number that is computed from the sample.
EXAMPLE 3: Parameters vs. Statistics from Example 1 and 2
In Example 1: 42% (0.42) is the parameter and 39.6% (0.396) is a statistic (and 43.2% is another statistic).
In Example 2: 69 and 2.8 are the parameters and 68.7 and 2.95 are statistics (69.1 and 2.66 are also statistics).
In this course, as in the examples above, we focus on the following parameters and statistics:
• population proportion and sample proportion
• population mean and sample mean
• population standard deviation and sample standard deviation
The following table summarizes the three pairs, and gives the notation
The only new notation here is p for population proportion (p = 0.42 for type A in Example 1), and p-hat (using the “hat” symbol ∧ over the p) for the sample proportion (which is 0.396 in Example 1, sample 1).
Comments:
• Parameters are usually unknown, because it is impractical or impossible to know exactly what values a variable takes for every member of the population.
• Statistics are computed from the sample, and vary from sample to sample due to sampling variability.
In the last part of the course, statistical inference, we will learn how to use a statistic to draw conclusions about an unknown parameter, either by estimating it or by deciding whether it is reasonable to conclude that the parameter equals a proposed value.
Now we’ll learn about the behavior of the statistics assuming that we know the parameters. So, for example, if we know that the population proportion of blood type A in the population is 0.42, and we take a random sample of size 500, what do we expect the sample proportion p-hat to be? Specifically we ask:
• What is the distribution of all possible sample proportions from samples of size 500?
• Where is it centered?
• How much variation exists among different sample proportions from samples of size 500?
• How far off the true value of 0.42 might we expect to be?
Here are some more examples:
EXAMPLE 4: Parameters vs. Statistics
If students picked numbers completely at random from the numbers 1 to 20, the proportion of times that the number 7 would be picked is 0.05. When 15 students picked a number “at random” from 1 to 20, 3 of them picked the number 7. Identify the parameter and accompanying statistic in this situation.
The parameter is the population proportion of random selections resulting in the number 7, which is p = 0.05. The accompanying statistic is the sample proportion (p-hat) of selections resulting in the number 7, which is 3/15=0.20.
Note: Unrelated to our current discussion, this is an interesting illustration of how we (humans) are not very good at doing things randomly. I used to ask a similar question in introductory statistics courses where I asked students to RANDOMLY pick a number between 1 and 10. The number of students choosing 7 is almost always MUCH larger than would be predicted if the results were truly random.
Try it with some of your friends and family and see if you get similar results. We really like the number 7! Interestingly, if students were aware of this phenomenon, then they tended to pick 3 most often. This is interesting since if choices were truly random, we should see a relatively equal proportion for each number :-)
EXAMPLE 5: Parameters vs. Statistics
The length of human pregnancies has a mean of 266 days and a standard deviation of 16 days. A random sample of 9 pregnant women was observed to have a mean pregnancy length of 270 days, with a standard deviation of 14 days. Identify the parameters and accompanying statistics in this situation.
The parameters are population mean μ = mu =266 and population standard deviation σ = sigma = 16. The accompanying statistics are sample mean (x-bar) = 270 and sample standard deviation (s) = 14.
The first step to drawing conclusions about parameters based on the accompanying statistics is to understand how sample statistics behave relative to the parameter(s) that summarizes the entire population. We begin with the behavior of sample proportion relative to population proportion (when the variable of interest is categorical). After that, we will explore the behavior of sample mean relative to population mean (when the variable of interest is quantitative).
Did I Get This?: Parameters vs. Statistics
Unit 3B: Sampling Distribution
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Behavior of the Sample Mean (x-bar)
Learning Objectives
LO 6.22: Apply the sampling distribution of the sample mean as summarized by the Central Limit Theorem (when appropriate). In particular, be able to identify unusual samples from a given population.
So far, we’ve discussed the behavior of the statistic p-hat, the sample proportion, relative to the parameter p, the population proportion (when the variable of interest is categorical).
We are now moving on to explore the behavior of the statistic x-bar, the sample mean, relative to the parameter μ (mu), the population mean (when the variable of interest is quantitative).
Let’s begin with an example.
EXAMPLE 9: Behavior of Sample Means
Birth weights are recorded for all babies in a town. The mean birth weight is 3,500 grams, µ = mu = 3,500 g. If we collect many random samples of 9 babies at a time, how do you think sample means will behave?
Here again, we are working with a random variable, since random samples will have means that vary unpredictably in the short run but exhibit patterns in the long run.
Based on our intuition and what we have learned about the behavior of sample proportions, we might expect the following about the distribution of sample means:
Center: Some sample means will be on the low side — say 3,000 grams or so — while others will be on the high side — say 4,000 grams or so. In repeated sampling, we might expect that the random samples will average out to the underlying population mean of 3,500 g. In other words, the mean of the sample means will be µ (mu), just as the mean of sample proportions was p.
Spread: For large samples, we might expect that sample means will not stray too far from the population mean of 3,500. Sample means lower than 3,000 or higher than 4,000 might be surprising. For smaller samples, we would be less surprised by sample means that varied quite a bit from 3,500. In others words, we might expect greater variability in sample means for smaller samples. So sample size will again play a role in the spread of the distribution of sample measures, as we observed for sample proportions.
Shape: Sample means closest to 3,500 will be the most common, with sample means far from 3,500 in either direction progressively less likely. In other words, the shape of the distribution of sample means should bulge in the middle and taper at the ends with a shape that is somewhat normal. This, again, is what we saw when we looked at the sample proportions.
Comment:
• The distribution of the values of the sample mean (x-bar) in repeated samples is called the sampling distribution of x-bar.
Let’s look at a simulation:
Video
Video: Simulation #3 (x-bar) (4:31)
Did I Get This?: Simulation #3 (x-bar)
The results we found in our simulations are not surprising. Advanced probability theory confirms that by asserting the following:
The Sampling Distribution of the Sample Mean
If repeated random samples of a given size n are taken from a population of values for a quantitative variable, where the population mean is μ (mu) and the population standard deviation is σ (sigma), then the mean of all sample means (x-bars) is the population mean μ (mu).
As for the spread of all sample means, theory dictates the behavior much more precisely than saying that there is less spread for larger samples. In fact, the standard deviation of all sample means is directly related to the sample size, n as indicated below.
The standard deviation of all sample means ($\bar{x}$) is exactly $\dfrac{\sigma}{\sqrt{n}}$
Since the square root of sample size n appears in the denominator, the standard deviation does decrease as sample size increases.
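You can check both facts (the mean of the sample means is μ (mu), and their standard deviation is σ/√n) with a quick simulation. The sketch below (Python with numpy, an assumption and not part of the original course) reuses the male-height population from earlier in this unit (μ = 69 inches, σ = 2.8 inches) and samples of size 200.

```python
# Simulate many random samples and check the center and spread of the sample means.
import math
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 69, 2.8, 200             # male heights (inches), samples of size 200

# 10,000 samples of size n, then the mean of each sample.
sample_means = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)

print(sample_means.mean())              # ≈ 69, the population mean mu
print(sample_means.std())               # ≈ 0.198
print(sigma / math.sqrt(n))             # theory: sigma / sqrt(n) ≈ 0.198
```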
Learn by Doing: Sampling Distribution (x-bar)
Let’s compare and contrast what we now know about the sampling distributions for sample means and sample proportions.
Now we will investigate the shape of the sampling distribution of sample means. When we were discussing the sampling distribution of sample proportions, we said that this distribution is approximately normal if np ≥ 10 and n(1 – p) ≥ 10. In other words, we had a guideline based on sample size for determining the conditions under which we could use normal probability calculations for sample proportions.
When will the distribution of sample means be approximately normal? Does this depend on the size of the sample?
It seems reasonable that a population with a normal distribution will have sample means that are normally distributed even for very small samples. We saw this illustrated in the previous simulation with samples of size 10.
What happens if the distribution of the variable in the population is heavily skewed? Do sample means have a skewed distribution also? If we take really large samples, will the sample means become more normally distributed?
In the next simulation, we will investigate these questions.
Video
Video: Simulation #4 (x-bar) (5:02)
Did I Get This?: Simulation #4 (x-bar)
To summarize, the distribution of sample means will be approximately normal as long as the sample size is large enough. This discovery is probably the single most important result presented in introductory statistics courses. It is stated formally as the Central Limit Theorem.
We will depend on the Central Limit Theorem again and again in order to do normal probability calculations when we use sample means to draw conclusions about a population mean. We now know that we can do this even if the population distribution is not normal.
How large a sample size do we need in order to assume that sample means will be normally distributed? Well, it really depends on the population distribution, as we saw in the simulation. The general rule of thumb is that samples of size 30 or greater will have a fairly normal distribution regardless of the shape of the distribution of the variable in the population.
Comment:
• For categorical variables, our claim that sample proportions are approximately normal for large enough n is actually a special case of the Central Limit Theorem. In this case, we think of the data as 0’s and 1’s and the “average” of these 0’s and 1’s is equal to the proportion we have discussed.
Before we work some examples, let’s compare and contrast what we now know about the sampling distributions for sample means and sample proportions.
Learn by Doing: Using the Sampling Distribution of x-bar
EXAMPLE 10: Using the Sampling Distribution of x-bar
Household size in the United States has a mean of 2.6 people and standard deviation of 1.4 people. It should be clear that this distribution is skewed right as the smallest possible value is a household of 1 person but the largest households can be very large indeed.
(a) What is the probability that a randomly chosen household has more than 3 people?
A normal approximation should not be used here, because the distribution of household sizes would be considerably skewed to the right. We do not have enough information to solve this problem.
(b) What is the probability that the mean size of a random sample of 10 households is more than 3?
By anyone’s standards, 10 is a small sample size. The Central Limit Theorem does not guarantee that a sample mean coming from a skewed population will be approximately normal unless the sample size is large.
(c) What is the probability that the mean size of a random sample of 100 households is more than 3?
Now we may invoke the Central Limit Theorem: even though the distribution of household size X is skewed, the distribution of sample mean household size (x-bar) is approximately normal for a large sample size such as 100. Its mean is the same as the population mean, 2.6, and its standard deviation is the population standard deviation divided by the square root of the sample size:
$\dfrac{\sigma}{\sqrt{n}}=\dfrac{1.4}{\sqrt{100}}=0.14$
To find
$P(\bar{x}>3)$
we standardize 3 into a z-score by subtracting the mean and dividing the result by the standard deviation (of the sample mean). Then we can find the probability using the standard normal calculator or table.
$P(\bar{x}>3)=P\left(Z>\dfrac{3-2.6}{\dfrac{1.4}{\sqrt{100}}}\right)=P(Z>2.86)=0.0021$
Households of more than 3 people are, of course, quite common, but it would be extremely unusual for the mean size of a sample of 100 households to be more than 3.
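The same calculation can be done in software (scipy assumed): the sampling distribution of x-bar is treated as a normal distribution with mean 2.6 and standard deviation 1.4/√100.

```python
# P(x-bar > 3) for random samples of 100 households, using the Central Limit Theorem.
import math
from scipy.stats import norm

mu, sigma, n = 2.6, 1.4, 100
se = sigma / math.sqrt(n)                  # standard deviation of x-bar = 0.14

print(1 - norm.cdf(3, loc=mu, scale=se))   # ≈ 0.0021
```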
The purpose of the next activity is to give guided practice in finding the sampling distribution of the sample mean (x-bar), and use it to learn about the likelihood of getting certain values of x-bar.
Learn by Doing: Using the Sampling Distribution of x-bar #2
Did I Get This?: Using the Sampling Distribution of x-bar
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Behavior of Sample Proportions
Learning Objectives
LO 6.21: Apply the sampling distribution of the sample proportion (when appropriate). In particular, be able to identify unusual samples from a given population.
EXAMPLE 6: Behavior of Sample Proportions
Approximately 60% of all part-time college students in the United States are female. (In other words, the population proportion of females among part-time college students is p = 0.6.) What would you expect to see in terms of the behavior of a sample proportion of females (p-hat) if random samples of size 100 were taken from the population of all part-time college students?
As we saw before, due to sampling variability, sample proportion in random samples of size 100 will take numerical values which vary according to the laws of chance: in other words, sample proportion is a random variable. To summarize the behavior of any random variable, we focus on three features of its distribution: the center, the spread, and the shape.
Based only on our intuition, we would expect the following:
Center: Some sample proportions will be on the low side — say, 0.55 or 0.58 — while others will be on the high side — say, 0.61 or 0.66. It is reasonable to expect all the sample proportions in repeated random samples to average out to the underlying population proportion, 0.6. In other words, the mean of the distribution of p-hat should be p.
Spread: For samples of 100, we would expect sample proportions of females not to stray too far from the population proportion 0.6. Sample proportions lower than 0.5 or higher than 0.7 would be rather surprising. On the other hand, if we were only taking samples of size 10, we would not be at all surprised by a sample proportion of females even as low as 4/10 = 0.4, or as high as 8/10 = 0.8. Thus, sample size plays a role in the spread of the distribution of sample proportion: there should be less spread for larger samples, more spread for smaller samples.
Shape: Sample proportions closest to 0.6 would be most common, and sample proportions far from 0.6 in either direction would be progressively less likely. In other words, the shape of the distribution of sample proportion should bulge in the middle and taper at the ends: it should be somewhat normal.
Comment:
• The distribution of the values of the sample proportions (p-hat) in repeated samples (of the same size) is called the sampling distribution of p-hat.
The purpose of the next video and activity is to check whether our intuition about the center, spread and shape of the sampling distribution of p-hat was correct via simulations.
Video
Video: Simulation #1 (p-hat) (4:13)
Did I Get This?: Simulation #1 (p-hat)
At this point, we have a good sense of what happens as we take random samples from a population. Our simulation suggests that our initial intuition about the shape and center of the sampling distribution is correct. If the population has a proportion of p, then random samples of the same size drawn from the population will have sample proportions close to p. More specifically, the distribution of sample proportions will have a mean of p.
We also observed that for this situation, the sample proportions are approximately normal. We will see later that this is not always the case. But if sample proportions are normally distributed, then the distribution is centered at p.
Now we want to use simulation to help us think more about the variability we expect to see in the sample proportions. Our intuition tells us that larger samples will better approximate the population, so we might expect less variability in large samples.
In the next walk-through we will use simulations to investigate this idea. After that walk-through, we will tie these ideas to more formal theory.
Video
Video: Simulation #2 (p-hat) (4:55)
Did I Get This?: Simulation #2 (p-hat)
The simulations reinforced what makes sense to our intuition. Larger random samples will better approximate the population proportion. When the sample size is large, sample proportions will be closer to p. In other words, the sampling distribution for large samples has less variability. Advanced probability theory confirms our observations and gives a more precise way to describe the standard deviation of the sample proportions. This is described next.
The Sampling Distribution of the Sample Proportion
If repeated random samples of a given size n are taken from a population of values for a categorical variable, where the proportion in the category of interest is p, then the mean of all sample proportions (p-hat) is the population proportion (p).
As for the spread of all sample proportions, theory dictates the behavior much more precisely than saying that there is less spread for larger samples. In fact, the standard deviation of all sample proportions is directly related to the sample size, n as indicated below.
Since the sample size n appears in the denominator of the square root, the standard deviation does decrease as sample size increases. Finally, the shape of the distribution of p-hat will be approximately normal as long as the sample size n is large enough. The convention is to require both np and n(1 – p) to be at least 10.
We can summarize all of the above by the following:
Let’s apply this result to our example and see how it compares with our simulation.
In our example, n = 25 (sample size) and p = 0.6. Note that np = 15 ≥ 10 and n(1 – p) = 10 ≥ 10. Therefore we can conclude that the distribution of p-hat is approximately normal with mean p = 0.6 and standard deviation
$\sqrt{\dfrac{0.6(1-0.6)}{25}} \approx 0.098$
(which is very close to what we saw in our simulation).
Comment:
• These results are similar to those for binomial random variables (X) discussed previously. Be careful not to confuse the results for the mean and standard deviation of X with those of p-hat.
Learn by Doing: Sampling Distribution of p-hat
Did I Get This?: Sampling Distribution of p-hat
If a sampling distribution is normally shaped, then we can apply the Standard Deviation Rule and use z-scores to determine probabilities. Let’s look at some examples.
EXAMPLE 7: Using the Sample Distribution of p-hat
A random sample of 100 students is taken from the population of all part-time students in the United States, for which the overall proportion of females is 0.6.
(a) There is a 95% chance that the sample proportion (p-hat) falls between what two values?
First note that the distribution of p-hat has mean p = 0.6, standard deviation
$\sigma_{\hat{p}}=\sqrt{\dfrac{p(1-p)}{n}}=\sqrt{\dfrac{0.6(1-0.6)}{100}}=0.05$
and a shape that is close to normal, since np = 100(0.6) = 60 and n(1 – p) = 100(0.4) = 40 are both greater than 10. The Standard Deviation Rule applies: the probability is approximately 0.95 that p-hat falls within 2 standard deviations of the mean, that is, between 0.6 – 2(0.05) and 0.6 + 2(0.05). There is roughly a 95% chance that p-hat falls in the interval (0.5, 0.7) for samples of this size.
(b) What is the probability that sample proportion p-hat is less than or equal to 0.56?
To find
$P(\hat{p} \leq 0.56)$
we standardize 0.56 into a z-score by subtracting the mean and dividing the result by the standard deviation. Then we can find the probability using the standard normal calculator or table.
$P(\hat{p} \leq 0.56)=P\left(Z \leq \dfrac{0.56-0.6}{0.05}\right)=P(Z \leq-0.80)=0.2119$
To see the impact of the sample size on these probability calculations, consider the following variation of our example.
EXAMPLE 8: Using the Sample Distribution of p-hat
A random sample of 2500 students is taken from the population of all part-time students in the United States, for which the overall proportion of females is 0.6.
(a) There is a 95% chance that the sample proportion (p-hat) falls between what two values?
First note that the distribution of p-hat has mean p = 0.6, standard deviation
$\sigma_{\hat{p}}=\sqrt{\dfrac{p(1-p)}{n}}=\sqrt{\dfrac{0.6(1-0.6)}{2500}}=0.01$
and a shape that is close to normal, since np = 2500(0.6) = 1500 and n(1 – p) = 2500(0.4) = 1000 are both greater than 10. The Standard Deviation Rule applies: the probability is approximately 0.95 that p-hat falls within 2 standard deviations of the mean, that is, between 0.6 – 2(0.01) and 0.6 + 2(0.01). There is roughly a 95% chance that p-hat falls in the interval (0.58, 0.62) for samples of this size.
(b) What is the probability that sample proportion p-hat is less than or equal to 0.56?
To find
$P(\hat{p} \leq 0.56)$
we standardize 0.56 into a z-score by subtracting the mean and dividing the result by the standard deviation. Then we can find the probability using the standard normal calculator or table.
$P(\hat{p} \leq 0.56)=P\left(Z \leq \dfrac{0.56-0.6}{0.01}\right)=P(Z \leq-4) \approx 0$
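Here is the same pair of calculations in software (scipy assumed, not part of the original course). Note that the text rounds the standard deviations to 0.05 and 0.01; using the unrounded values changes the first answer slightly, but not the conclusion.

```python
# P(p-hat <= 0.56) for samples of size 100 and 2500, with p = 0.6.
import math
from scipy.stats import norm

p = 0.6
for n in (100, 2500):
    se = math.sqrt(p * (1 - p) / n)            # ≈ 0.049 and ≈ 0.0098
    print(n, norm.cdf(0.56, loc=p, scale=se))  # ≈ 0.207 and ≈ 0.00002 (effectively 0)
# The text's 0.2119 for n = 100 comes from rounding the standard deviation to 0.05.
```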
Comment:
• As long as the sample is truly random, the distribution of p-hat is centered at p, no matter what size sample has been taken. Larger samples have less spread. Specifically, when we multiplied the sample size by 25, increasing it from 100 to 2,500, the standard deviation was reduced to 1/5 of the original standard deviation. Sample proportion strays less from population proportion 0.6 when the sample is larger: it tends to fall anywhere between 0.5 and 0.7 for samples of size 100, whereas it tends to fall between 0.58 and 0.62 for samples of size 2,500. It is not so improbable to take a value as low as 0.56 for samples of 100 (probability is more than 20%) but it is almost impossible to take a value as low as 0.56 for samples of 2,500 (probability is virtually zero).
Summary (Unit 3B - Sampling Distributions)
We have finally reached the end of our discussion of probability with our discussion of sampling distributions, which can be viewed in two ways. On the one hand, Sampling Distributions can be viewed as a special case of Random Variables, since we discussed two special random variables: the sample mean (x-bar) and the sample proportion (p-hat). On the other hand, Sampling Distributions can be viewed as the bridge that takes us from probability to statistical inference.
As mentioned in the introduction, this last concept in probability is the bridge between the probability section and inference. It focuses on the relationship between sample values (statistics) and population values (parameters). Statistics vary from sample to sample due to sampling variability, and therefore can be regarded as random variables whose distribution we call the sampling distribution.
In our discussion of sampling distributions, we focused on two statistics, the sample proportion, p-hat and the sample mean, x-bar. Our goal was to explore the sampling distribution of these two statistics relative to their respective population parameters, p and μ (mu), and we found in both cases that under certain conditions the sampling distribution is approximately normal. This result is known as the Central Limit Theorem. As we’ll see in the next section, the Central Limit Theorem is the foundation for statistical inference.
Outside Reading: Little Handbook – Behavior of Sample Means (≈ 3000 words)
Sampling Distributions
A parameter is a number that describes the population, and a statistic is a number that describes the sample.
• Parameters are fixed, and in practice, usually unknown.
• Statistics change from sample to sample due to sampling variability.
• The behavior of the possible values the statistic can take in repeated samples is called the sampling distribution of that statistic.
• The following table summarizes the important information about the two sampling distributions we covered. Both of these results follow from the central limit theorem which basically states that as the sample size increases, the distribution of the average from a sample of size n becomes increasingly normally distributed.
CO-1: Describe the roles biostatistics serves in the discipline of public health.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Review: We are about to move into the inference component of the course and it is a good time to be sure you understand the basic ideas presented regarding exploratory data analysis.
Video
Video: Unit 4A: Introduction to Statistical Inference (15:45)
Recall again the Big Picture, the four-step process that encompasses statistics: data production, exploratory data analysis, probability and inference.
We are about to start the fourth and final unit of this course, where we draw on principles learned in the other units (Exploratory Data Analysis, Producing Data, and Probability) in order to accomplish what has been our ultimate goal all along: use a sample to infer (or draw conclusions) about the population from which it was drawn.
As you will see in the introduction, the specific form of inference called for depends on the type of variables involved — either a single categorical or quantitative variable, or a combination of two variables whose relationship is of interest.
Introduction
Learning Objectives
LO 6.23: Explain how the concepts covered in Units 1 – 3 provide the basis for statistical inference.
We are about to start the fourth and final part of this course — statistical inference, where we draw conclusions about a population based on the data obtained from a sample chosen from it.
The purpose of this introduction is to review how we got here and how the previous units fit together to allow us to make reliable inferences. Also, we will introduce the various forms of statistical inference that will be discussed in this unit, and give a general outline of how this unit is organized.
In the Exploratory Data Analysis unit, we learned to display and summarize data that were obtained from a sample. Regardless of whether we had one variable and we examined its distribution, or whether we had two variables and we examined the relationship between them, it was always understood that these summaries applied only to the data at hand; we did not attempt to make claims about the larger population from which the data were obtained.
Such generalizations were, however, a long-term goal from the very beginning of the course. For this reason, in the unit on Producing Data, we took care to establish principles of sampling and study design that would be essential in order for us to claim that, to some extent, what is true for the sample should be also true for the larger population from which the sample originated.
These principles should be kept in mind throughout this unit on statistical inference, since the results that we will obtain will not hold if there was bias in the sampling process, or flaws in the study design under which variables’ values were measured.
Perhaps the most important principle stressed in the Producing Data unit was that of randomization. Randomization is essential, not only because it prevents bias, but also because it permits us to rely on the laws of probability, which is the scientific study of random behavior.
In the Probability unit, we established basic laws for the behavior of random variables. We ultimately focused on two random variables of particular relevance: the sample mean (x-bar) and the sample proportion (p-hat), and the last section of the Probability unit was devoted to exploring their sampling distributions.
We learned what probability theory tells us to expect from the values of the sample mean and the sample proportion, given that the corresponding population parameters — the population mean (mu, μ) and the population proportion (p) — are known.
As we mentioned in that section, the value of such results is more theoretical than practical, since in real-life situations we seldom know what is true for the entire population. All we know is what we see in the sample, and we want to use this information to say something concrete about the larger population.
Probability theory has set the stage to accomplish this: learning what to expect from the value of the sample mean, given that the population mean takes a certain value, teaches us (as we’ll soon learn) what to expect from the value of the unknown population mean, given that a particular value of the sample mean has been observed.
Similarly, since we have established how the sample proportion behaves relative to population proportion, we will now be able to turn this around and say something about the value of the population proportion, based on an observed sample proportion. This process — inferring something about the population based on what is measured in the sample — is (as you know) called statistical inference.
Types of Inference
Learning Objectives
LO: 1.9 Distinguish between situations using a point estimate, an interval estimate, or a hypothesis test.
We will introduce three forms of statistical inference in this unit, each one representing a different way of using the information obtained in the sample to draw conclusions about the population. These forms are:
• Point Estimation
• Interval Estimation
• Hypothesis Testing
Obviously, each one of these forms of inference will be discussed at length in this section, but it would be useful to get at least an intuitive sense of the nature of each of these inference forms, and the difference between them in terms of the types of conclusions they draw about the population based on the sample results.
Point Estimation
In point estimation, we estimate an unknown parameter using a single number that is calculated from the sample data.
EXAMPLE:
Based on sample results, we estimate that p, the proportion of all U.S. adults who are in favor of stricter gun control, is 0.6.
Interval Estimation
In interval estimation, we estimate an unknown parameter using an interval of values that is likely to contain the true value of that parameter (and state how confident we are that this interval indeed captures the true value of the parameter).
EXAMPLE:
Based on sample results, we are 95% confident that p, the proportion of all U.S. adults who are in favor of stricter gun control, is between 0.57 and 0.63.
Hypothesis Testing
In hypothesis testing, we begin with a claim about the population (which we will call the null hypothesis), and we check whether or not the data obtained from the sample provide evidence AGAINST this claim.
EXAMPLE:
It was claimed that among all U.S. adults, about half are in favor of stricter gun control and about half are against it. In a recent poll of a random sample of 1,200 U.S. adults, 60% were in favor of stricter gun control. These data, therefore, provide some evidence against the claim.
Soon we will determine the probability that we could have seen such a result (60% in favor) or more extreme IF in fact the true proportion of all U.S. adults who favor stricter gun control is actually 0.5 (the value in the claim the data attempts to refute).
EXAMPLE:
It is claimed that among drivers 18-23 years of age (our population) there is no relationship between drunk driving and gender.
A roadside survey collected data from a random sample of 5,000 drivers and recorded their gender and whether they were drunk.
The collected data showed roughly the same percent of drunk drivers among males and among females. These data, therefore, do not give us any reason to reject the claim that there is no relationship between drunk driving and gender.
Did I Get This?: Types of Inference
In terms of organization, the Inference unit consists of two main parts: Inference for One Variable and Inference for Relationships between Two Variables. The organization of each of these parts will be discussed further as we proceed through the unit.
Inference for One Variable
The next two topics in the inference unit will deal with inference for one variable. Recall that in the Exploratory Data Analysis (EDA) unit, when we learned about summarizing the data obtained from one variable by examining its distribution, we distinguished between two cases: categorical data and quantitative data.
We will make a similar distinction here in the inference unit. In the EDA unit, the type of variable determined the displays and numerical measures we used to summarize the data. In Inference, the type of variable of interest (categorical or quantitative) will determine what population parameter is of interest.
• When the variable of interest is categorical, the population parameter that we will infer about is the population proportion (p) associated with that variable. For example, if we are interested in studying opinions about the death penalty among U.S. adults, and thus our variable of interest is “death penalty (in favor/against),” we’ll choose a sample of U.S. adults and use the collected data to make an inference about p, the proportion of U.S. adults who support the death penalty.
• When the variable of interest is quantitative, the population parameter that we infer about is the population mean (mu, µ) associated with that variable. For example, if we are interested in studying the annual salaries in the population of teachers in a certain state, we’ll choose a sample from that population and use the collected salary data to make an inference about µ, the mean annual salary of all teachers in that state.
The following outlines describe some of the important points about the process of inferential statistics as well as compare and contrast how researchers and statisticians approach this process.
Outline of Process of Inference
Here is another restatement of the big picture of statistical inference as it pertains to the two simple examples we will discuss first.
• A simple random sample is taken from a population of interest.
• In order to estimate a population parameter, a statistic is calculated from the sample. For example:
Sample mean (x-bar)
Sample proportion (p-hat)
• We then learn about the DISTRIBUTION of this statistic in repeated sampling (theoretically). We now know these are called sampling distributions!
• Using THIS sampling distribution we can make inferences about our population parameter based upon our sample statistic.
It is this last step of statistical inference that we are interested in discussing now.
Applied Steps (What do researchers do?)
One issue for students is that the theoretical process of statistical inference is only a small part of the applied steps in a research project. Previously, in our discussion of the role of biostatistics, we defined these steps to be:
1. Planning/design of study
2. Data collection
3. Data analysis
4. Presentation
5. Interpretation
You can see that:
• Both exploratory data analysis and inferential methods will fall into the category of “Data Analysis” in our previous list.
• Probability is hiding in the applied steps in the form of probability sampling plans, estimation of desired probabilities, and sampling distributions.
Among researchers, the following represent some of the important questions to address when conducting a study.
• What is the population of interest?
• What is the question or statistical problem?
• How to sample to best address the question given the available resources?
• How to analyze the data?
• How to report the results?
AFTER you know what you are going to do, then you can begin collecting data!
Theoretical Steps (What do statisticians do?)
Statisticians, on the other hand, need to ask questions like these:
• What assumptions can be reasonably made about the population?
• What parameter(s) in the population do we need to estimate in order to address the research question?
• What statistic(s) from our sample data can be used to estimate the unknown parameter(s)?
• How does each statistic behave?
• Is it unbiased?
• How variable will it be for the planned sample size?
• What is the distribution of this statistic? (Sampling Distribution)
Then, we will see that we can use the sampling distribution of a statistic to:
• Provide confidence interval estimates for the corresponding parameter.
• Conduct hypothesis tests about the corresponding parameter.
Standard Error of a Statistic
Learning Objectives
LO: 1.10: Define the standard error of a statistic precisely and relate it to the concept of the sampling distribution of a statistic.
In our discussion of sampling distributions, we discussed the variability of sample statistics; here is a quick review of this general concept and a formal definition of the standard error of a statistic.
• All statistics calculated from samples are random variables.
• The distribution of a statistic (from a sample of a given sample size) is called the sampling distribution of the statistic.
• The standard deviation of the sampling distribution of a particular statistic is called the standard error of the statistic and measures variability of the statistic for a particular sample size.
The standard error of a statistic is the standard deviation of the sampling distribution of that statistic, where the sampling distribution is defined as the distribution of a particular statistic in repeated sampling.
• The standard error is an extremely common measure of the variability of a sample statistic.
EXAMPLE:
In our discussion of sampling distributions, we looked at a situation involving a random sample of 100 students taken from the population of all part-time students in the United States, for which the overall proportion of females is 0.6. Here we have a categorical variable of interest, gender.
We determined that the distribution of all possible values of p-hat (that we could obtain for repeated simple random samples of this size from this population) has mean p = 0.6 and standard deviation
$\sigma_{\hat{p}}=\sqrt{\dfrac{p(1-p)}{n}}=\sqrt{\dfrac{0.6(1-0.6)}{100}}=0.05$
which we have now learned is more formally called the standard error of p-hat. In this case, the true standard error of p-hat will be 0.05.
We also showed how we can use this information along with information about the center (mean or expected value) to calculate probabilities associated with particular values of p-hat. For example, what is the probability that sample proportion p-hat is less than or equal to 0.56? After verifying the sample size requirements are reasonable, we can use a normal distribution to approximate
$P(\hat{p} \leq 0.56)=P\left(Z \leq \dfrac{0.56-0.6}{0.05}\right)=P(Z \leq-0.80)=0.2119$
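For readers who want to check this calculation with software, here is a minimal Python sketch using only the standard library; the variable names are ours and not part of the course materials.

```python
from math import sqrt
from statistics import NormalDist

p, n = 0.6, 100                      # population proportion and sample size
se = sqrt(p * (1 - p) / n)           # standard error of p-hat: sqrt(p(1 - p)/n), about 0.049
z = (0.56 - p) / se                  # standardize the observed value p-hat = 0.56
prob = NormalDist().cdf(z)           # P(p-hat <= 0.56) under the normal approximation

print(f"SE = {se:.3f}, z = {z:.2f}, P = {prob:.4f}")
# The text first rounds the standard error to 0.05, which gives z = -0.80 and P = 0.2119;
# carrying full precision gives z of about -0.82 and P of about 0.207.
```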
EXAMPLE:
Similarly, for a quantitative variable, we looked at an example of household size in the United States which has a mean of 2.6 people and standard deviation of 1.4 people.
If we consider taking a simple random sample of 100 households, we found that the distribution of sample means (x-bar) is approximately normal for a large sample size such as n = 100.
The sampling distribution of x-bar has a mean which is the same as the population mean, 2.6, and its standard deviation is the population standard deviation divided by the square root of the sample size:
$\dfrac{\sigma}{\sqrt{n}}=\dfrac{1.4}{\sqrt{100}}=0.14$
Again, this standard deviation of the sampling distribution of x-bar is more commonly called the standard error of x-bar, in this case 0.14. And we can use this information (the center and spread of the sampling distribution) to find probabilities involving particular values of x-bar.
$P(\bar{x}>3)=P\left(Z>\dfrac{3-2.6}{\dfrac{1.4}{\sqrt{100}}}\right)=P(Z>2.86)=0.0021$
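Here is a companion Python sketch (standard library only, our own variable names) for the household-size calculation above.

```python
from math import sqrt
from statistics import NormalDist

mu, sigma, n = 2.6, 1.4, 100         # population mean, population SD, sample size
se = sigma / sqrt(n)                 # standard error of x-bar: sigma / sqrt(n) = 0.14
z = (3 - mu) / se                    # standardize the value x-bar = 3
prob = 1 - NormalDist().cdf(z)       # P(x-bar > 3)

print(f"SE = {se:.2f}, z = {z:.2f}, P = {prob:.4f}")   # SE = 0.14, z = 2.86, P ≈ 0.0021
```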
Unit 4A: Introduction to Statistical Inference
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Video
Video: Estimation (11:40)
Introduction
In our Introduction to Inference we defined point estimates and interval estimates.
• In point estimation, we estimate an unknown parameter using a single number that is calculated from the sample data.
• In interval estimation, we estimate an unknown parameter using an interval of values that is likely to contain the true value of that parameter (and state how confident we are that this interval indeed captures the true value of the parameter).
In this section, we will introduce the concept of a confidence interval and learn to calculate confidence intervals for population means and population proportions (when certain conditions are met).
In Unit 4B, we will see that confidence intervals are useful whenever we wish to use data to estimate an unknown population parameter, even when this parameter is estimated using multiple variables (such as our cases: CC, CQ, QQ).
For example, we can construct confidence intervals for the slope of a regression equation or the correlation coefficient. In doing so we are always using our data to provide an interval estimate for an unknown population parameter (the TRUE slope, or the TRUE correlation coefficient).
Point Estimation
Learning Objectives
LO 4.29: Determine and use the correct point estimates for specified population parameters.
Point estimation is the form of statistical inference in which, based on the sample data, we estimate the unknown parameter of interest using a single value (hence the name point estimation). As the following two examples illustrate, this form of inference is quite intuitive.
EXAMPLE:
Suppose that we are interested in studying the IQ levels of students at Smart University (SU). In particular (since IQ level is a quantitative variable), we are interested in estimating µ (mu), the mean IQ level of all the students at SU.
A random sample of 100 SU students was chosen, and their (sample) mean IQ level was found to be 115 (x-bar).
If we wanted to estimate µ (mu), the population mean IQ level, by a single number based on the sample, it would make intuitive sense to use the corresponding quantity in the sample, the sample mean, which is 115. We say that 115 is the point estimate for µ (mu), and in general, we’ll always use the sample mean (x-bar) as the point estimator for µ (mu). (Note that when we talk about the specific value (115), we use the term estimate, and when we talk in general about the statistic x-bar, we use the term estimator.) The following figure summarizes this example:
Here is another example.
EXAMPLE:
Suppose that we are interested in the opinions of U.S. adults regarding legalizing the use of marijuana. In particular, we are interested in the parameter p, the proportion of U.S. adults who believe marijuana should be legalized.
Suppose a poll of 1,000 U.S. adults finds that 560 of them believe marijuana should be legalized. If we wanted to estimate p, the population proportion, using a single number based on the sample, it would make intuitive sense to use the corresponding quantity in the sample, the sample proportion p-hat = 560/1000 = 0.56. We say in this case that 0.56 is the point estimate for p, and in general, we’ll always use p-hat as the point estimator for p. (Note, again, that when we talk about the specific value (0.56), we use the term estimate, and when we talk in general about the statistic p-hat, we use the term estimator.) Here is a visual summary of this example:
Did I Get This?: Point Estimation
Desired Properties of Point Estimators
You may feel that since it is so intuitive, you could have figured out point estimation on your own, even without the benefit of an entire course in statistics. Certainly, our intuition tells us that the best estimator for the population mean (mu, µ) should be x-bar, and the best estimator for the population proportion p should be p-hat.
Probability theory does more than this; it actually gives an explanation (beyond intuition) why x-bar and p-hat are the good choices as point estimators for µ (mu) and p, respectively. In the Sampling Distributions section of the Probability unit, we learned about the sampling distribution of x-bar and found that as long as a sample is taken at random, the distribution of sample means is exactly centered at the value of population mean.
Our statistic, x-bar, is therefore said to be an unbiased estimator for µ (mu). Any particular sample mean might turn out to be less than the actual population mean, or it might turn out to be more. But in the long run, such sample means are “on target” in that they will not underestimate any more or less often than they overestimate.
Likewise, we learned that the sampling distribution of the sample proportion, p-hat, is centered at the population proportion p (as long as the sample is taken at random), thus making p-hat an unbiased estimator for p.
As stated in the introduction, probability theory plays an essential role as we establish results for statistical inference. Our assertion above that sample mean and sample proportion are unbiased estimators is the first such instance.
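If you would like to see this unbiasedness claim in action, here is a small simulation sketch in Python; the population values (an IQ-like variable with mean 100 and standard deviation 15) are chosen by us purely for illustration.

```python
import random
from statistics import mean

random.seed(1)
mu, sigma, n = 100, 15, 25           # hypothetical population mean, SD, and sample size
num_samples = 10_000                 # number of repeated random samples to draw

# Draw many random samples and record the sample mean of each one.
sample_means = [
    mean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(num_samples)
]

# "x-bar is an unbiased estimator of mu" means the long-run average of these
# sample means sits at the population mean, even though individual x-bars vary.
print(round(mean(sample_means), 2))  # very close to 100
```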
Importance of Sampling and Design
Notice how important the principles of sampling and design are for our above results: if the sample of U.S. adults in (example 2 on the previous page) was not random, but instead included predominantly college students, then 0.56 would be a biased estimate for p, the proportion of all U.S. adults who believe marijuana should be legalized.
If the survey design were flawed, such as loading the question with a reminder about the dangers of marijuana leading to hard drugs, or a reminder about the benefits of marijuana for cancer patients, then 0.56 would be biased on the low or high side, respectively.
Caution
Our point estimates are truly unbiased estimates for the population parameter only if the sample is random and the study design is not flawed.
Standard Error and Sample Size
Not only are the sample mean and sample proportion on target as long as the samples are random, but their precision improves as sample size increases.
Again, there are two “layers” here for explaining this.
Intuitively, larger sample sizes give us more information with which to pin down the true nature of the population. We can therefore expect the sample mean and sample proportion obtained from a larger sample to be closer to the population mean and proportion, respectively. In the extreme, when we sample the whole population (which is called a census), the sample mean and sample proportion will exactly coincide with the population mean and population proportion.

There is another layer here that, again, comes from what we learned about the sampling distributions of the sample mean and the sample proportion. Let’s use the sample mean for the explanation.
Recall that the sampling distribution of the sample mean x-bar is, as we mentioned before, centered at the population mean µ (mu) and has a standard error (the standard deviation of the statistic x-bar) of
$\dfrac{\sigma}{\sqrt{n}}$
As a result, as the sample size n increases, the sampling distribution of x-bar gets less spread out. This means that values of x-bar that are based on a larger sample are more likely to be closer to µ (mu) (as the figure below illustrates):
Similarly, since the sampling distribution of p-hat is centered at p and has a
standard deviation of $\sqrt{\dfrac{p(1-p)}{n}}$
which decreases as the sample size gets larger, values of p-hat are more likely to be closer to p when the sample size is larger.
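A quick numerical illustration (ours, not from the course) of how the standard error of p-hat shrinks as the sample size grows:

```python
from math import sqrt

p = 0.6                              # a hypothetical population proportion
for n in (100, 400, 1600):
    se = sqrt(p * (1 - p) / n)       # standard error of p-hat for this sample size
    print(n, round(se, 3))
# 100 -> 0.049, 400 -> 0.024, 1600 -> 0.012: each quadrupling of n halves the standard error
```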
Another Point Estimator
Another example of a point estimator is using sample standard deviation,
$s=\sqrt{\dfrac{\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}}{n-1}}$
to estimate population standard deviation, σ (sigma).
In this course, we will not be concerned with estimating the population standard deviation for its own sake, but since we will often substitute the sample standard deviation (s) for σ (sigma) when standardizing the sample mean, it is worth explaining why the formula for s divides by n – 1.

If we had divided by n instead of n – 1, then in the long run our sample variance would be guilty of a slight underestimation of the population variance. Division by n – 1 accomplishes the goal of making the sample variance (s²) an unbiased point estimator of the population variance (σ²).

The reason that our formula for s, introduced in the Exploratory Data Analysis unit, involves division by n – 1 instead of by n is that we wish to use unbiased (or nearly unbiased) estimators in practice.
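The following small simulation sketch (our own, with arbitrary population values) shows the underestimation that division by n would cause, and how dividing by n – 1 removes it.

```python
import random

random.seed(2)
mu, sigma, n = 0, 10, 5              # small samples make the bias easy to see
num_samples = 20_000                 # population variance is sigma**2 = 100

sum_var_n = sum_var_n1 = 0.0
for _ in range(num_samples):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)   # sum of squared deviations from the sample mean
    sum_var_n += ss / n                     # divide by n (biased version)
    sum_var_n1 += ss / (n - 1)              # divide by n - 1 (usual sample variance)

print(round(sum_var_n / num_samples, 1))    # noticeably below 100 (around 80 for n = 5)
print(round(sum_var_n1 / num_samples, 1))   # close to 100
```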
Let’s Summarize
• We use p-hat (sample proportion) as a point estimator for p (population proportion). It is an unbiased estimator: its long-run distribution is centered at p as long as the sample is random.
• We use x-bar (sample mean) as a point estimator for µ (mu, population mean). It is an unbiased estimator: its long-run distribution is centered at µ (mu) as long as the sample is random.
• In both cases, the larger the sample size, the more precise the point estimator is. In other words, the larger the sample size, the more likely it is that the sample mean (proportion) is close to the unknown population mean (proportion).
Did I Get This?: Properties of Point Estimators
Interval Estimation
Point estimation is simple and intuitive, but also a bit problematic. Here is why:
When we estimate μ (mu) by the sample mean x-bar we are almost guaranteed to make some kind of error. Even though we know that the values of x-bar fall around μ (mu), it is very unlikely that the value of x-bar will fall exactly at μ (mu).
Given that such errors are a fact of life for point estimates (by the mere fact that we are basing our estimate on one sample that is a small fraction of the population), these estimates are in themselves of limited usefulness, unless we are able to quantify the extent of the estimation error. Interval estimation addresses this issue. The idea behind interval estimation is, therefore, to enhance the simple point estimates by supplying information about the size of the error attached.
In this introduction, we’ll provide examples that will give you a solid intuition about the basic idea behind interval estimation.
EXAMPLE:
Consider the example that we discussed in the point estimation section:
Suppose that we are interested in studying the IQ levels of students attending Smart University (SU). In particular (since IQ level is a quantitative variable), we are interested in estimating μ (mu), the mean IQ level of all the students in SU. A random sample of 100 SU students was chosen, and their (sample) mean IQ level was found to be 115 (x-bar).
In point estimation we used x-bar = 115 as the point estimate for μ (mu). However, we had no idea of what the estimation error involved in such an estimation might be. Interval estimation takes point estimation a step further and says something like:
“I am 95% confident that by using the point estimate x-bar = 115 to estimate μ (mu), I am off by no more than 3 IQ points. In other words, I am 95% confident that μ (mu) is within 3 of 115, or between 112 (115 – 3) and 118 (115 + 3).”
Yet another way to say the same thing is: I am 95% confident that μ (mu) is somewhere in (or covered by) the interval (112,118). (Comment: At this point you should not worry about, or try to figure out, how we got these numbers. We’ll do that later. All we want to do here is make sure you understand the idea.)
Note that while point estimation provided just one number as an estimate for μ (mu) of 115, interval estimation provides a whole interval of “plausible values” for μ (mu) (between 112 and 118), and also attaches the level of our confidence that this interval indeed includes the value of μ (mu) to our estimation (in our example, 95% confidence). The interval (112,118) is therefore called “a 95% confidence interval for μ (mu).”
Let’s look at another example:
EXAMPLE:
Let’s consider the second example from the point estimation section.
Suppose that we are interested in the opinions of U.S. adults regarding legalizing the use of marijuana. In particular, we are interested in the parameter p, the proportion of U.S. adults who believe marijuana should be legalized.
Suppose a poll of 1,000 U.S. adults finds that 560 of them believe marijuana should be legalized.
If we wanted to estimate p, the population proportion, by a single number based on the sample, it would make intuitive sense to use the corresponding quantity in the sample, the sample proportion p-hat = 560/1000=0.56.
Interval estimation would take this a step further and say something like:
“I am 90% confident that by using 0.56 to estimate the true population proportion, p, I am off by (or, I have an error of) no more than 0.03 (or 3 percentage points). In other words, I am 90% confident that the actual value of p is somewhere between 0.53 (0.56 – 0.03) and 0.59 (0.56 + 0.03).”
Yet another way of saying this is: “I am 90% confident that p is covered by the interval (0.53, 0.59).”
In this example, (0.53, 0.59) is a 90% confidence interval for p.
Let’s summarize
The two examples showed us that the idea behind interval estimation is, instead of providing just one number for estimating an unknown parameter of interest, to provide an interval of plausible values of the parameter plus a level of confidence that the value of the parameter is covered by this interval.
We are now going to go into more detail and learn how these confidence intervals are created and interpreted in context. As you’ll see, the ideas that were developed in the “Sampling Distributions” section of the Probability unit will, again, be very important. Recall that for point estimation, our understanding of sampling distributions leads to verification that our statistics are unbiased and gives us precise formulas for the standard error of our statistics.
We’ll start by discussing confidence intervals for the population mean μ (mu), and later discuss confidence intervals for the population proportion p.
Population Means (Part 1)
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.30: Interpret confidence intervals for population parameters in context.
Learning Objectives
LO 4.31: Find confidence intervals for the population mean using the normal distribution (Z) based confidence interval formula (when required conditions are met) and perform sample size calculations.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.24: Explain the connection between the sampling distribution of a statistic, and its properties as a point estimator.
Learning Objectives
LO 6.25: Explain what a confidence interval represents and determine how changes in sample size and confidence level affect the precision of the confidence interval.
Video
Video: Population Means – Part 1 (11:14)
As the introduction mentioned, we’ll start our discussion on interval estimation with interval estimation for the population mean μ (mu). We’ll start by showing how a 95% confidence interval is constructed, and later generalize to other levels of confidence. We’ll also discuss practical issues related to interval estimation.
Recall the IQ example:
EXAMPLE:
Suppose that we are interested in studying the IQ levels of students at Smart University (SU). In particular (since IQ level is a quantitative variable), we are interested in estimating μ (mu), the mean IQ level of all the students at SU.
We will assume that from past research on IQ scores in different universities, it is known that the IQ standard deviation in such populations is σ (sigma) = 15. In order to estimate μ (mu), a random sample of 100 SU students was chosen, and their (sample) mean IQ level is calculated (let’s assume, for now, that we have not yet found the sample mean).
We will now show the rationale behind constructing a 95% confidence interval for the population mean μ (mu).
• We learned in the “Sampling Distributions” section of probability that according to the central limit theorem, the sampling distribution of the sample mean x-bar is approximately normal with a mean of μ (mu) and standard deviation of σ/sqrt(n) = sigma/sqrt(n). In our example, then (where σ (sigma) = 15 and n = 100), the distribution of the possible values of x-bar, the sample mean IQ level of 100 randomly chosen students, is approximately normal, with mean μ (mu) and standard deviation 15/sqrt(100) = 1.5.
• Next, we recall and apply the Standard Deviation Rule for the normal distribution, and in particular its second part: There is a 95% chance that the sample mean we will find in our sample falls within 2 * 1.5 = 3 of μ (mu).
Obviously, if there is a certain distance between the sample mean and the population mean, we can describe that distance by starting at either value. So, if the sample mean (x-bar) falls within a certain distance of the population mean μ (mu), then the population mean μ (mu) falls within the same distance of the sample mean.
Therefore, the statement, “There is a 95% chance that the sample mean x-bar falls within 3 units of μ (mu)” can be rephrased as: “We are 95% confident that the population mean μ (mu) falls within 3 units of the x-bar we found in our sample.”
So, if we happen to get a sample mean of x-bar = 115, then we are 95% confident that μ (mu) falls within 3 units of 115, or in other words that μ (mu) is covered by the interval (115 – 3, 115 + 3) = (112,118).
(On later pages, we will use similar reasoning to develop a general formula for a confidence interval.)
Comment:
• Note that the first phrasing is about x-bar, which is a random variable; that’s why it makes sense to use probability language. But the second phrasing is about μ (mu), which is a parameter, and thus is a “fixed” value that does not change, and that’s why we should not use probability language to discuss it. In these problems, it is our x-bar that will change when we repeat the process, not μ (mu). This point will become clearer after you do the activities which follow.
The General Case
Let’s generalize the IQ example. Suppose that we are interested in estimating the unknown population mean (μ, mu) based on a random sample of size n. Further, we assume that the population standard deviation (σ, sigma) is known.
Caution
Note: The assumption that the population standard deviation is known is not usually realistic, however, we make it here to be able to introduce the concepts in the simplest case. Later, we will discuss the changes which need to be made when we do not know the population standard deviation.
The values of x-bar follow a normal distribution with (unknown) mean μ (mu) and standard deviation σ/sqrt(n) = sigma/sqrt(n) (known, since both σ (sigma) and n are known). In the standard deviation rule, we stated that approximately 95% of values fall within 2 standard deviations of μ (mu). From now on, we will be a little more precise and use the standard normal table to find the exact value for 95%.
Our picture is as follows:
Try using the applet in the post for Learn by Doing – Normal Random Variables to find the cutoff illustrated above.
We can also verify the z-score using a calculator or table by finding the z-score with an area of 0.025 to its left (which would give us -1.96) or the z-score with an area of 0.975 = 0.95 + 0.025 to its left (which would give us +1.96).
Thus, there is a 95% chance that our sample mean x-bar will fall within 1.96*σ/sqrt(n) = 1.96*sigma/sqrt(n) of μ (mu).
Which means we are 95% confident that μ (mu) falls within 1.96*σ/sqrt(n) = 1.96*sigma/sqrt(n) of our sample mean x-bar.
Here, then, is the general result:
Suppose a random sample of size n is taken from a normal population of values for a quantitative variable whose mean (μ, mu) is unknown, when the standard deviation (σ, sigma) is given.
A 95% confidence interval (CI) for μ (mu) is:
$\bar{x} \pm 1.96 * \dfrac{\sigma}{\sqrt{n}}$
Comment:
• Note that for now we require the population standard deviation (σ, sigma) to be known. Practically, σ (sigma) is rarely known, but for some cases, especially when a lot of research has been done on the quantitative variable whose mean we are estimating (such as IQ, height, weight, scores on standardized tests), it is reasonable to assume that σ (sigma) is known. Eventually, we will see how to proceed when σ (sigma) is unknown, and must be estimated with sample standard deviation (s).
Let’s look at another example.
EXAMPLE:
An educational researcher was interested in estimating μ (mu), the mean score on the math part of the SAT (SAT-M) of all community college students in his state. To this end, the researcher has chosen a random sample of 650 community college students from his state, and found that their average SAT-M score is 475. Based on a large body of research that was done on the SAT, it is known that the scores roughly follow a normal distribution with the standard deviation σ (sigma) =100.
Here is a visual representation of this story, which summarizes the information provided:
Based on this information, let’s estimate μ (mu) with a 95% confidence interval.
Using the formula we developed earlier
$\bar{x} \pm 1.96 * \dfrac{\sigma}{\sqrt{n}}$
the 95% confidence interval for μ (mu) is:
\begin{aligned}
475 \pm 1.96 * \frac{100}{\sqrt{650}} &=\left(475-1.96 * \frac{100}{\sqrt{650}}, 475+1.96 * \frac{100}{\sqrt{650}}\right) \\
&=(475-7.7,475+7.7) \\
&=(467.3,482.7)
\end{aligned}
We will usually provide information on how to round your final answer. In this case, one decimal place is enough precision for this scenario. You could also round to the nearest whole number without much loss of information here.
We are not done yet. An equally important part is to interpret what this means in the context of the problem.
We are 95% confident that the mean SAT-M score of all community college students in the researcher’s state is covered by the interval (467.3, 482.7). Note that the confidence interval was obtained by taking 475 ± 7.7. This means that we are 95% confident that by using the sample mean (x-bar = 475) to estimate μ (mu), our error is no more than 7.7 points.
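If you prefer to let software do the arithmetic, here is a minimal Python sketch reproducing this interval; nothing in it is specific to any statistical package.

```python
from math import sqrt

xbar, sigma, n = 475, 100, 650       # sample mean, known population SD, sample size
z_star = 1.96                        # multiplier for 95% confidence
m = z_star * sigma / sqrt(n)         # margin of error, about 7.7

print(round(m, 1), (round(xbar - m, 1), round(xbar + m, 1)))   # 7.7 (467.3, 482.7)
```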
Learn by Doing: Confidence Intervals: Means #1
You just gained practice computing and interpreting a confidence interval for a population mean. Note that the way a confidence interval is used is that we hope the interval contains the population mean μ (mu). This is why we call it an “interval for the population mean.”
The following activity is designed to help give you a better understanding of the underlying reasoning behind the interpretation of confidence intervals. In particular, you will gain a deeper understanding of why we say that we are “95% confident that the population mean is covered by the interval.”
Learn by Doing: Connection between Confidence Intervals and Sampling Distributions with Video (1:18)
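In the same spirit as the activity above, here is a minimal simulation sketch (our own; the population values are invented for illustration) that builds many 95% intervals from repeated samples and checks how often they cover the true mean.

```python
import random
from math import sqrt

random.seed(3)
mu, sigma, n = 115, 15, 100          # hypothetical population mean, known SD, sample size
z_star = 1.96
num_intervals = 10_000

covered = 0
for _ in range(num_intervals):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    m = z_star * sigma / sqrt(n)     # margin of error (sigma assumed known)
    if xbar - m <= mu <= xbar + m:   # does this particular interval cover mu?
        covered += 1

print(covered / num_intervals)       # close to 0.95: about 95% of such intervals cover mu
```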
We just saw that one interpretation of a 95% confidence interval is that we are 95% confident that the population mean (μ, mu) is contained in the interval. Another useful interpretation in practice is that, given the data, the confidence interval represents the set of plausible values for the population mean μ (mu).
EXAMPLE:
As an illustration, let’s return to the example of mean SAT-Math score of community college students. Recall that we had constructed the confidence interval (467.3, 482.7) for the unknown mean SAT-M score for all community college students.
Here is a way that we can use the confidence interval:
Do the results of this study provide evidence that μ (mu), the mean SAT-M score of community college students, is lower than the mean SAT-M score in the general population of college students in that state (which is 480)?
The 95% confidence interval for μ (mu) was found to be (467.3, 482.7). Note that 480, the mean SAT-M score in the general population of college students in that state, falls inside the interval, which means that it is one of the plausible values for μ (mu).
This means that μ (mu) could be 480 (or even higher, up to 483), and therefore we cannot conclude that the mean SAT-M score among community college students in the state is lower than the mean in the general population of college students in that state. (Note that the fact that most of the plausible values for μ (mu) fall below 480 is not a consideration here.)
Population Means (Part 2)
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.30: Interpret confidence intervals for population parameters in context.
Learning Objectives
LO 4.31: Find confidence intervals for the population mean using the normal distribution (Z) based confidence interval formula (when required conditions are met) and perform sample size calculations.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.24: Explain the connection between the sampling distribution of a statistic, and its properties as a point estimator.
Learning Objectives
LO 6.25: Explain what a confidence interval represents and determine how changes in sample size and confidence level affect the precision of the confidence interval.
Video
Video: Population Means – Part 2 (4:04)
Other Levels of Confidence
95% is the most commonly used level of confidence. However, we may wish to increase our level of confidence and produce an interval that’s almost certain to contain μ (mu). Specifically, we may want to report an interval for which we are 99% confident that it contains the unknown population mean, rather than only 95%.
Using the same reasoning as in the last comment, in order to create a 99% confidence interval for μ (mu), we should ask: There is a probability of 0.99 that any normal random variable takes values within how many standard deviations of its mean? The precise answer is 2.576, and therefore, a 99% confidence interval for μ (mu) is:
$\bar{x} \pm 2.576 * \dfrac{\sigma}{\sqrt{n}}$
Another commonly used level of confidence is a 90% level of confidence. Since there is a probability of 0.90 that any normal random variable takes values within 1.645 standard deviations of its mean, the 90% confidence interval for μ (mu) is:
$\bar{x} \pm 1.645 * \dfrac{\sigma}{\sqrt{n}}$
EXAMPLE:
Let’s go back to our first example, the IQ example:
The IQ level of students at a particular university has an unknown mean (μ, mu) and known standard deviation σ (sigma) =15. A simple random sample of 100 students is found to have a sample mean IQ of 115 (x-bar). Estimate μ (mu) with a 90%, 95%, and 99% confidence interval.
A 90% confidence interval for μ (mu) is:
$\bar{x} \pm 1.645 \dfrac{\sigma}{\sqrt{n}} = 115 \pm 1.645(\dfrac{15}{\sqrt{100}}) = 115 \pm 2.5 = (112.5, 117.5)$.
A 95% confidence interval for μ (mu) is:
$\bar{x} \pm 1.96 \dfrac{\sigma}{\sqrt{n}} = 115 \pm 1.96 (\dfrac{15}{\sqrt{100}}) = 115 \pm 2.9 = (112.1, 117.9)$.
A 99% confidence interval for μ (mu) is:
$\bar{x} \pm 2.576 \dfrac{\sigma}{\sqrt{n}} = 115 \pm 2.576 (\dfrac{15}{\sqrt{100}}) = 115 \pm 4.0 = (111,119)$.
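Here is a short Python sketch (standard library only) that reproduces all three intervals and shows where the multipliers 1.645, 1.96, and 2.576 come from, namely the inverse CDF of the standard normal distribution. The 99% interval is printed to one decimal place; the text rounds it to whole numbers.

```python
from math import sqrt
from statistics import NormalDist

xbar, sigma, n = 115, 15, 100

for conf in (0.90, 0.95, 0.99):
    # z* cuts off the middle `conf` area of the standard normal distribution
    z_star = NormalDist().inv_cdf((1 + conf) / 2)
    m = z_star * sigma / sqrt(n)
    print(f"{conf:.0%}: z* = {z_star:.3f}, interval = ({xbar - m:.1f}, {xbar + m:.1f})")
# 90%: z* = 1.645, (112.5, 117.5)
# 95%: z* = 1.960, (112.1, 117.9)
# 99%: z* = 2.576, (111.1, 118.9)
```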
The purpose of this next activity is to give you guided practice at calculating and interpreting confidence intervals, and drawing conclusions from them.
Did I Get This?: Confidence Intervals: Means #1
Note from the previous example and the previous “Did I Get This?” activity, that the more confidence I require, the wider the confidence interval for μ (mu). The 99% confidence interval is wider than the 95% confidence interval, which is wider than the 90% confidence interval.
This is not very surprising, given that in the 99% interval we multiply the standard deviation of the statistic by 2.576, in the 95% by 1.96, and in the 90% only by 1.645. Beyond this numerical explanation, there is a very clear intuitive explanation and an important implication of this result.
Let’s start with the intuitive explanation. The more certain I want to be that the interval contains the value of μ (mu), the more plausible values the interval needs to include in order to account for that extra certainty. I am 95% certain that the value of μ (mu) is one of the values in the interval (112.1, 117.9). In order to be 99% certain that one of the values in the interval is the value of μ (mu), I need to include more values, and thus provide a wider confidence interval.
Learn by Doing: Visualizing the Relationship between Confidence and Width
In our example, the wider 99% confidence interval (111, 119) gives us a less precise estimation about the value of μ (mu) than the narrower 90% confidence interval (112.5, 117.5), because the smaller interval ‘narrows-in’ on the plausible values of μ (mu).
The important practical implication here is that researchers must decide whether they prefer to state their results with a higher level of confidence or produce a more precise interval. In other words,
Caution
There is a trade-off between the level of confidence and the precision with which the parameter is estimated.
The price we have to pay for a higher level of confidence is that the unknown population mean will be estimated with less precision (i.e., with a wider confidence interval). If we would like to estimate μ (mu) with more precision (i.e. a narrower confidence interval), we will need to sacrifice and report an interval with a lower level of confidence.
Did I Get This?: Confidence Intervals: Means #2
So far we’ve developed the confidence interval for the population mean “from scratch” based on results from probability, and discussed the trade-off between the level of confidence and the precision of the interval. The price you pay for a higher level of confidence is a lower level of precision of the interval (i.e., a wider interval).
Is there a way to bypass this trade-off? In other words, is there a way to increase the precision of the interval (i.e., make it narrower) without compromising on the level of confidence? We will answer this question shortly, but first we’ll need to get a deeper understanding of the different components of the confidence interval and its structure.
Understanding the General Structure of Confidence Intervals
We explored the confidence interval for μ (mu) for different levels of confidence, and found that in general, it has the following form:
$\bar{x} \pm z^{*} \cdot \dfrac{\sigma}{\sqrt{n}}$
where z* is a general notation for the multiplier that depends on the level of confidence. As we discussed before:
• For a 90% level of confidence, z* = 1.645
• For a 95% level of confidence, z* = 1.96
• For a 99% level of confidence, z* = 2.576
To start our discussion about the structure of the confidence interval, let’s denote
$m = z^{*} \cdot \dfrac{\sigma}{\sqrt{n}}$
The confidence interval, then, has the form:
$\bar{x} \pm m$
To summarize, we have
X-bar is the sample mean, the point estimator for the unknown population mean (μ, mu).
m is called the margin of error, since it represents the maximum estimation error for a given level of confidence.
For example, for a 95% confidence interval, we are 95% confident that our estimate will not depart from the true population mean by more than m, the margin of error and m is further made up of the product of two components:
Here is a summary of the different components of the confidence interval and its structure:
This structure, estimate ± margin of error, where the margin of error is the product of a confidence multiplier and the standard deviation of the statistic (or, as we’ll see, the standard error), is the general structure of all confidence intervals that we will encounter in this course.
Obviously, even though each confidence interval has the same components, the formula for these components is different from confidence interval to confidence interval, depending on what unknown parameter the confidence interval aims to estimate.
Since the structure of the confidence interval is such that it has a margin of error on either side of the estimate, it is centered at the estimate (in our current case, x-bar), and its width (or length) is exactly twice the margin of error:
The margin of error, m, is therefore “in charge” of the width (or precision) of the confidence interval, and the estimate is in charge of its location (and has no effect on the width).
Did I Get This?: Margin of Error
Let us now go back to the confidence interval for the mean, and more specifically, to the question that we posed at the beginning of the previous page:
Is there a way to increase the precision of the confidence interval (i.e., make it narrower) without compromising on the level of confidence?
Since the width of the confidence interval is a function of its margin of error, let’s look closely at the margin of error of the confidence interval for the mean and see how it can be reduced:
$m = z^{*} \cdot \dfrac{\sigma}{\sqrt{n}}$
Since z* controls the level of confidence, we can rephrase our question above in the following way:
Is there a way to reduce this margin of error other than by reducing z*?
If you look closely at the margin of error, you’ll see that the answer is yes. We can do that by increasing the sample size n (since it appears in the denominator).
Many Students Wonder: Confidence Intervals (Population Mean)
Question: Isn’t it true that another way to reduce the margin of error (for a fixed z*) is to reduce σ (sigma)?
Answer: While it is true that strictly mathematically speaking the smaller the value of σ (sigma), the smaller the margin of error, practically speaking we have absolutely no control over the value of σ (sigma) (i.e., we cannot make it larger or smaller). σ (sigma) is the population standard deviation; it is a fixed value (which here we assume is known) that has an effect on the width of the confidence interval (since it appears in the margin of error), but is definitely not a value we can change.
Let’s look at an example first and then explain why increasing the sample size is a way to increase the precision of the confidence interval without compromising on the level of confidence.
EXAMPLE:
Recall the IQ example:
The IQ level of students at a particular university has an unknown mean (μ, mu) and a known standard deviation of σ (sigma) =15. A simple random sample of 100 students is found to have the sample mean IQ of 115 (x-bar).
For simplicity, in this question, we will round z* = 1.96 to 2. You should use z* = 1.96 in all problems unless you are specifically instructed to do otherwise.
A 95% confidence interval for μ (mu) in this case is:
$\bar{x} \pm 2 \dfrac{\sigma}{\sqrt{n}}=115 \pm 2\left(\dfrac{15}{\sqrt{100}}\right)=115 \pm 3.0=(112,118)$
Note that the margin of error is m = 3, and therefore the width of the confidence interval is 6.
Now, what if we change the problem slightly by increasing the sample size, and assume that it was 400 instead of 100?
In this case, a 95% confidence interval for μ (mu) is:
$\bar{x} \pm 2 \dfrac{\sigma}{\sqrt{n}}=115 \pm 2\left(\dfrac{15}{\sqrt{400}}\right)=115 \pm 1.5=(113.5,116.5)$
The margin of error here is only m = 1.5, and thus the width is only 3.
Note that for the same level of confidence (95%) we now have a narrower, and thus more precise, confidence interval.
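A quick check of this comparison (using the rounded multiplier z* = 2, as in the example):

```python
from math import sqrt

sigma, z_star = 15, 2                # the example rounds z* = 1.96 to 2 for simplicity
for n in (100, 400):
    m = z_star * sigma / sqrt(n)     # margin of error
    print(n, m, 2 * m)               # interval width is twice the margin of error
# n = 100 -> m = 3.0, width 6.0
# n = 400 -> m = 1.5, width 3.0 (quadrupling n halves the margin of error)
```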
Let’s try to understand why is it that a larger sample size will reduce the margin of error for a fixed level of confidence. There are three ways to explain this: mathematically, using probability theory, and intuitively.
We’ve already alluded to the mathematical explanation; the margin of error is
$m = z^{*} \cdot \dfrac{\sigma}{\sqrt{n}}$
and since n, the sample size, appears in the denominator, increasing n will reduce the margin of error.
As we saw in our discussion about point estimates, probability theory tells us that the standard deviation of x-bar, σ/sqrt(n) = sigma/sqrt(n), gets smaller as the sample size n increases, so the sampling distribution of x-bar becomes more tightly concentrated around μ (mu).
This explains why with a larger sample size the margin of error (which represents how far apart we believe x-bar might be from μ (mu) for a given level of confidence) is smaller.
On an intuitive level, if our estimate x-bar is based on a larger sample (i.e., a larger fraction of the population), we have more faith in it, or it is more reliable, and therefore we need to account for less error around it.
Comment:
• While it is true that for a given level of confidence, increasing the sample size increases the precision of our interval estimation, in practice, increasing the sample size is not always possible.
• Consider a study in which there is a non-negligible cost involved for collecting data from each participant (an expensive medical procedure, for example). If the study has some budgetary constraints, which is usually the case, increasing the sample size from 100 to 400 is just not possible in terms of cost-effectiveness.
• Another instance in which increasing the sample size is impossible is when a larger sample is simply not available, even if we had the money to afford it. For example, consider a study on the effectiveness of a drug on curing a very rare disease among children. Since the disease is rare, there are a limited number of children who could be participants.
• This is the reality of statistics. Sometimes theory collides with reality, and you simply do the best you can.
Did I Get This?: Sample Size and Confidence
Population Means (Part 3)
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.30: Interpret confidence intervals for population parameters in context.
Learning Objectives
LO 4.31: Find confidence intervals for the population mean using the normal distribution (Z) based confidence interval formula (when required conditions are met) and perform sample size calculations.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.24: Explain the connection between the sampling distribution of a statistic, and its properties as a point estimator.
Learning Objectives
LO 6.25: Explain what a confidence interval represents and determine how changes in sample size and confidence level affect the precision of the confidence interval.
Video
Video: Population Means – Part 3 (6:02)
Sample Size Calculations
As we just learned, for a given level of confidence, the sample size determines the size of the margin of error and thus the width, or precision, of our interval estimation. This process can be reversed.
In situations where a researcher has some flexibility as to the sample size, the researcher can calculate in advance what the sample size is that he/she needs in order to be able to report a confidence interval with a certain level of confidence and a certain margin of error. Let’s look at an example.
EXAMPLE:
Recall the example about the SAT-M scores of community college students.
An educational researcher is interested in estimating μ (mu), the mean score on the math part of the SAT (SAT-M) of all community college students in his state. To this end, the researcher has chosen a random sample of 650 community college students from his state, and found that their average SAT-M score is 475. Based on a large body of research that was done on the SAT, it is known that the scores roughly follow a normal distribution, with the standard deviation σ (sigma) =100.
The 95% confidence interval for μ (mu) is
\begin{aligned}
475 \pm 1.96 * \frac{100}{\sqrt{650}} &=\left(475-1.96 * \frac{100}{\sqrt{650}}, 475+1.96 * \frac{100}{\sqrt{650}}\right) \\
&=(475-7.7,475+7.7) \\
&=(467.3,482.7)
\end{aligned}
which is roughly 475 ± 8, or (467, 483). For a sample size of n = 650, our margin of error is 8.
Now, let’s think about this problem in a slightly different way:
An educational researcher is interested in estimating μ (mu), the mean score on the math part of the SAT (SAT-M) of all community college students in his state with a margin of error of (only) 5, at the 95% confidence level. What is the sample size needed to achieve this? σ (sigma), of course, is still assumed to be 100.
To solve this (rounding the multiplier z* = 1.96 to 2 for simplicity), we set:
$m=2 \cdot \frac{100}{\sqrt{n}}=5 \quad \text { so } \quad \sqrt{n}=\frac{2(100)}{5} \quad \text { and } \quad n=\left(\frac{2(100)}{5}\right)^{2}=1600$
So, for a sample size of 1,600 community college students, the researcher will be able to estimate μ (mu) with a margin of error of 5, at the 95% level. In this example, we can also imagine that the researcher has some flexibility in choosing the sample size, since there is a minimal cost (if any) involved in recording students’ SAT-M scores, and there are many more than 1,600 community college students in each state.
Rather than take the same steps to isolate n every time we solve such a problem, we may obtain a general expression for the required n for a desired margin of error m and a certain level of confidence.
Since
$m = z^{*} \cdot \dfrac{\sigma}{\sqrt{n}}$
is the formula to determine m for a given n, we can use simple algebra to express n in terms of m (multiply both sides by the square root of n, divide both sides by m, and square both sides) to get
$n = \left(\dfrac{z^{*} \sigma}{m}\right)^{2}$
Comment:
• Clearly, the sample size n must be an integer.
• In the previous example we got n = 1,600, but in other situations, the calculation may give us a non-integer result.
• In these cases, we should always round up to the next highest integer.
• Using this “conservative approach,” we’ll achieve an interval at least as narrow as the one desired.
EXAMPLE:
IQ scores are known to vary normally with a standard deviation of 15. How many students should be sampled if we want to estimate the population mean IQ at 99% confidence with a margin of error equal to 2?
$n=\left(\dfrac{z^{*} \sigma}{m}\right)^{2}=\left(\dfrac{2.576(15)}{2}\right)^{2}=373.26$
Round up to be safe, and take a sample of 374 students.
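As a quick illustration of this formula and the rounding-up rule, here is a minimal Python sketch (our own addition, not part of the original course materials) that reproduces both sample-size calculations above.

```python
import math

def required_n(z_star, sigma, m):
    """Sample size needed for margin of error m at the given multiplier; always round up."""
    return math.ceil((z_star * sigma / m) ** 2)

print(required_n(2, 100, 5))      # SAT-M example with the rounded multiplier 2 -> 1600
print(required_n(2.576, 15, 2))   # IQ example at 99% confidence: 373.26 rounds up to 374
```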
The purpose of the next activity is to give you guided practice in sample size calculations for obtaining confidence intervals with a desired margin of error, at a certain confidence level. Consider the example from the previous Learn By Doing activity:
Learn by Doing: Sample Size
Comment:
• In the preceding activity, you saw that in order to calculate the sample size when planning a study, you needed to know the population standard deviation, sigma (σ). In practice, sigma is usually not known, because it is a parameter. (The rare exceptions are certain variables like IQ score or standardized tests that might be constructed to have a particular known sigma.)
Therefore, when researchers wish to compute the required sample size in preparation for a study, they use an estimate of sigma. Usually, sigma is estimated based on the standard deviation obtained in prior studies.
However, in some cases, there might not be any prior studies on the topic. In such instances, a researcher still needs to get a rough estimate of the standard deviation of the (yet-to-be-measured) variable, in order to determine the required sample size for the study. One way to get such a rough estimate is with the “range rule of thumb.” We will not cover this topic in depth but mention here that a very rough estimate of the standard deviation of a population is the range/4.
There are a few more things we need to discuss:
• Is it always OK to use the confidence interval we developed for μ (mu) when σ (sigma) is known?
• What if σ (sigma) is unknown?
• How can we use statistical software to calculate confidence intervals for us?
When is it safe to use the confidence interval we developed?
One of the most important things to learn with any inference method is the conditions under which it is safe to use it. It is very tempting to apply a certain method, but if the conditions under which this method was developed are not met, then using this method will lead to unreliable results, which can then lead to wrong and/or misleading conclusions. As you’ll see throughout this section, we will always discuss the conditions under which each method can be safely used.
In particular, the confidence interval for μ (mu), when σ (sigma) is known:
$\bar{x} \pm z^* \cdot \dfrac{\sigma}{\sqrt{n}}$
was developed assuming that the sampling distribution of x-bar is normal; in other words, that the Central Limit Theorem applies. In particular, this allowed us to determine the values of z*, the confidence multiplier, for different levels of confidence.
First, the sample must be random. Assuming that the sample is random, recall from the Probability unit that the Central Limit Theorem works when the sample size is large (a common rule of thumb for “large” is n > 30), or, for smaller sample sizes, if it is known that the quantitative variable of interest is distributed normally in the population. The only situation when we cannot use the confidence interval, then, is when the sample size is small and the variable of interest is not known to have a normal distribution. In that case, other methods, called non-parametric methods, which are beyond the scope of this course, need to be used. This can be summarized in the following table:

| | Variable varies normally in the population | Variable is not known to vary normally |
|---|---|---|
| Large sample (n > 30) | Safe to use the confidence interval | Safe to use the confidence interval |
| Small sample (n ≤ 30) | Safe to use the confidence interval | Cannot use the confidence interval (non-parametric methods are needed) |
Did I Get This?: When to Use Z-Interval (Means)
In the following activity, you have the opportunity to use software to summarize the raw data provided.
Did I Get This?: Confidence Intervals: Means #3
What if σ (sigma) is unknown?
As we discussed earlier, when variables have been well-researched in different populations it is reasonable to assume that the population standard deviation (σ, sigma) is known. However, this is rarely the case. What if σ (sigma) is unknown?
Well, there is some good news and some bad news.
The good news is that we can easily replace the population standard deviation, σ (sigma), with the sample standard deviation, s.
The bad news is that once σ (sigma) has been replaced by s, we lose the Central Limit Theorem, together with the normality of x-bar, and therefore the confidence multipliers z* for the different levels of confidence (1.645, 1.96, 2.576) are (generally) not correct any more. The new multipliers come from a different distribution called the “t distribution” and are therefore denoted by t* (instead of z*). We will discuss the t distribution in more detail when we talk about hypothesis testing.
The confidence interval for the population mean (μ, mu) when (σ, sigma) is unknown is therefore:
$\bar{x} \pm t^{*} \cdot \dfrac{s}{\sqrt{n}}$
(Note that this interval is very similar to the one when σ (sigma) is known, with the obvious changes: s replaces σ (sigma), and t* replaces z* as discussed above.)
There is an important difference between the confidence multipliers we have used so far (z*) and those needed for the case when σ (sigma) is unknown (t*). Unlike the confidence multipliers we have used so far (z*), which depend only on the level of confidence, the new multipliers (t*) have the added complexity that they depend on both the level of confidence and on the sample size (for example: the t* used in a 95% confidence when n = 10 is different from the t* used when n = 40). Due to this added complexity in determining the appropriate t*, we will rely heavily on software in this case.
Comments:
• Since it is quite rare that σ (sigma) is known, this interval (sometimes called a “one-sample t confidence interval”) is more commonly used as the confidence interval for estimating μ (mu). (Nevertheless, we could not have presented it without our extended discussion up to this point, which also provided you with a solid understanding of confidence intervals.)
• The quantity s/sqrt(n) is called the estimated standard error of x-bar. The Central Limit Theorem tells us that σ/sqrt(n) (sigma divided by the square root of n) is the standard deviation of x-bar (and this is the quantity used in the confidence interval when σ (sigma) is known). In general, the standard error is the standard deviation of the sampling distribution of a statistic. When we substitute s for σ (sigma) we are estimating the true standard error. You may see the term “standard error” used for both the true standard error and the estimated standard error depending on the author and audience. What is important to understand about the standard error is that it measures the variation of a statistic calculated from a sample of a specified sample size (not the variation of the original population).
• As before, to safely use this confidence interval (one-sample t confidence interval), the sample must be random, and the only case when this interval cannot be used is when the sample size is small and the variable is not known to vary normally.
Final Comment:
• It turns out that for large values of n, the t* multipliers are not that different from the z* multipliers, and therefore using the interval formula:
$\bar{x} \pm z^* \cdot \dfrac{s}{\sqrt{n}}$
for μ (mu) when σ (sigma) is unknown provides a pretty good approximation.
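Since we will mostly rely on software for t-based intervals, here is a minimal sketch of how such an interval could be computed in Python with SciPy (our own illustration; the data values are hypothetical and used only to show the mechanics).

```python
import numpy as np
from scipy import stats

# Hypothetical data (illustration only): minutes of daily exercise for 8 adults
x = np.array([14, 9, 12, 18, 11, 15, 10, 13])

n = len(x)
xbar = x.mean()
s = x.std(ddof=1)                       # sample standard deviation
t_star = stats.t.ppf(0.975, df=n - 1)   # 95% multiplier from the t distribution

margin = t_star * s / np.sqrt(n)
print((xbar - margin, xbar + margin))   # one-sample t confidence interval for mu
```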
Population Means (Summary)
Let’s summarize
• When the population is normal and/or the sample is large, a confidence interval for unknown population mean μ (mu) when σ (sigma) is known is:
$\bar{x} \pm z^* \cdot \dfrac{\sigma}{\sqrt{n}}$
where z* is 1.645 for 90% confidence, 1.96 for 95% confidence, and 2.576 for 99% confidence.
• There is a trade-off between the level of confidence and the precision of the interval estimation. For a given sample size, the price we have to pay for more precision is sacrificing level of confidence.
• The general form of confidence intervals is an estimate +/- the margin of error (m). In this case, the estimate = x-bar and
$m = z^* \cdot \dfrac{\sigma}{\sqrt{n}}$
The confidence interval is therefore centered at the estimate and its width is exactly 2m.
• For a given level of confidence, the width of the interval depends on the sample size. We can therefore do a sample size calculation to figure out what sample size is needed in order to get a confidence interval with a desired margin of error m, and a certain level of confidence (assuming we have some flexibility with the sample size). To do the sample size calculation we use:
$n = \left(\dfrac{z^* \sigma}{m}\right)^2$
(and round up to the next integer). We estimate σ (sigma) when necessary.
• When σ (sigma) is unknown, we use the sample standard deviation, s, instead, but as a result we also need to use a different set of confidence multipliers (t*) associated with the t distribution. We will use software to calculate intervals in this case; however, the formula for the confidence interval is
$\bar{x} \pm t^* \cdot \dfrac{s}{\sqrt{n}}$
• These new multipliers have the added complexity that they depend not only on the level of confidence, but also on the sample size. Software is therefore very useful for calculating confidence intervals in this case.
• For large values of n, the t* multipliers are not that different from the z* multipliers, and therefore using the interval formula:
$\bar{x} \pm z^* \cdot \dfrac{s}{\sqrt{n}}$
for μ (mu) when σ (sigma) is unknown provides a pretty good approximation.
Population Proportions
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.30: Interpret confidence intervals for population parameters in context.
Learning Objectives
LO 4.32: Find confidence intervals for the population proportion using the formula (when required conditions are met) and perform sample size calculations.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.24: Explain the connection between the sampling distribution of a statistic, and its properties as a point estimator.
Learning Objectives
LO 6.25: Explain what a confidence interval represents and determine how changes in sample size and confidence level affect the precision of the confidence interval.
Video
Video: Population Proportions (4:13)
Confidence Intervals
As we mentioned in the introduction to Unit 4A, when the variable that we’re interested in studying in the population is categorical, the parameter we are trying to infer about is the population proportion (p) associated with that variable. We also learned that the point estimator for the population proportion p is the sample proportion p-hat.
We are now moving on to interval estimation of p. In other words, we would like to develop a set of intervals that, with different levels of confidence, will capture the value of p. We’ve actually done all the groundwork and discussed all the big ideas of interval estimation when we talked about interval estimation for μ (mu), so we’ll be able to go through it much faster. Let’s begin.
Recall that the general form of any confidence interval for an unknown parameter is:
estimate ± margin of error
Since the unknown parameter here is the population proportion p, the point estimator (as I reminded you above) is the sample proportion p-hat. The confidence interval for p, therefore, has the form:
$\hat{p} \pm m$
(Recall that m is the notation for the margin of error.) The margin of error (m) gives us the maximum estimation error with a certain confidence. In this case it tells us that p-hat is different from p (the parameter it estimates) by no more than m units.
From our previous discussion on confidence intervals, we also know that the margin of error is the product of two components: the confidence multiplier and the standard deviation of the point estimator.
To figure out what these two components are, we need to go back to a result we obtained in the Sampling Distributions section of the Probability unit about the sampling distribution of p-hat. We found that under certain conditions (which we’ll come back to later), p-hat has a normal distribution with mean p, and a standard deviation of
$\sqrt{\dfrac{p(1-p)}{n}}$
This result makes things very simple for us, because it reveals what the two components are that the margin of error is made of:
• Since, like the sampling distribution of x-bar, the sampling distribution of p-hat is normal, the confidence multipliers that we’ll use in the confidence interval for p will be the same z* multipliers we use for the confidence interval for μ (mu) when σ (sigma) is known (using exactly the same reasoning and the same probability results). The multipliers we’ll use, then, are: 1.645, 1.96, and 2.576 at the 90%, 95% and 99% confidence levels, respectively.
• The standard deviation of our estimator p-hat is
$\sqrt{\dfrac{p(1-p)}{n}}$
Putting it all together, we find that the confidence interval for p should be:
$\hat{p} \pm z^{*} \cdot \sqrt{\dfrac{p(1-p)}{n}}$
We just have to solve one practical problem and we’re done. We’re trying to estimate the unknown population proportion p, so having it appear in the confidence interval doesn’t make any sense. To overcome this problem, we’ll do the obvious thing …
We’ll replace p with its sample counterpart, p-hat, and work with the estimated standard error of p-hat:
$\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}$
Now we’re done. The confidence interval for the population proportion p is:
$\hat{p} \pm z^{*} \cdot \sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}$
EXAMPLE:
The drug Viagra became available in the U.S. in May, 1998, in the wake of an advertising campaign that was unprecedented in scope and intensity. A Gallup poll found that by the end of the first week in May, 643 out of a random sample of 1,005 adults were aware that Viagra was an impotency medication (based on “Viagra A Popular Hit,” a Gallup poll analysis by Lydia Saad, May 1998).
Let’s estimate the proportion p of all adults in the U.S. who by the end of the first week of May 1998 were already aware of Viagra and its purpose by setting up a 95% confidence interval for p.
We first need to calculate the sample proportion p-hat. Out of 1,005 sampled adults, 643 knew what Viagra is used for, so p-hat = 643/1005 = 0.64
Therefore, a 95% confidence interval for p is
\begin{aligned}
\hat{p} \pm 1.96 \cdot \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} &=0.64 \pm 1.96 \cdot \sqrt{\frac{0.64(1-0.64)}{1005}} \\
&=0.64 \pm 0.03 \\
&=(0.61,\; 0.67)
\end{aligned}
We can be 95% confident that the proportion of all U.S. adults who were already familiar with Viagra by that time was between 0.61 and 0.67 (or 61% and 67%).
The fact that the margin of error equals 0.03 says we can be 95% confident that the unknown population proportion p is within 0.03 (3%) of the observed sample proportion 0.64 (64%). In other words, we are 95% confident that 64% is “off” by no more than 3%.
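Here is a minimal Python sketch (our own illustration, not part of the original poll analysis) that reproduces this interval from the raw counts.

```python
import numpy as np
from scipy import stats

successes, n = 643, 1005
p_hat = successes / n                                # about 0.64
z_star = stats.norm.ppf(0.975)                       # about 1.96

margin = z_star * np.sqrt(p_hat * (1 - p_hat) / n)   # about 0.03
print((p_hat - margin, p_hat + margin))              # roughly (0.61, 0.67)
```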
Did I Get This?: Confidence Intervals – Proportions #1
Comment:
• We would like to share with you the methodology portion of the official poll release for the Viagra example. We hope you see that you now have the tools to understand how poll results are analyzed:
“The results are based on telephone interviews with a randomly selected national sample of 1,005 adults, 18 years and older, conducted May 8-10, 1998. For results based on samples of this size, one can say with 95 percent confidence that the error attributable to sampling and other random effects could be plus or minus 3 percentage points. In addition to sampling error, question wording and practical difficulties in conducting surveys can introduce error or bias into the findings of public opinion polls.”
The purpose of the next activity is to provide guided practice in calculating and interpreting the confidence interval for the population proportion p, and drawing conclusions from it.
Learn by Doing: Confidence Intervals – Proportions #1
Two important results that we discussed at length when we talked about the confidence interval for μ (mu) also apply here:
1. There is a trade-off between level of confidence and the width (or precision) of the confidence interval. The more precision you would like the confidence interval for p to have, the more you have to pay by having a lower level of confidence.
2. Since n appears in the denominator of the margin of error of the confidence interval for p, for a fixed level of confidence, the larger the sample, the narrower (more precise) the interval is. This brings us naturally to our next point.
Sample Size Calculations
Just as we did for means, when we have some level of flexibility in determining the sample size, we can set a desired margin of error for estimating the population proportion and find the sample size that will achieve that.
For example, a final poll on the day before an election would want the margin of error to be quite small (with a high level of confidence) in order to be able to predict the election results with the most precision. This is particularly relevant when it is a close race between the candidates. The polling company needs to figure out how many eligible voters it needs to include in their sample in order to achieve that.
Let’s see how we do that.
(Comment: For our discussion here we will focus on a 95% confidence level (z* = 1.96), since this is the most commonly used level of confidence.)
The confidence interval for p is
$\hat{p} \pm 1.96 \cdot \sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}$
The margin of error, then, is
$m = 1.96 \cdot \sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}$
Now we isolate n (i.e., express it as a function of m):
$n = \dfrac{1.96^{2}\, \hat{p}(1-\hat{p})}{m^{2}}$
There is a practical problem with this expression that we need to overcome.
Practically, you first determine the sample size, then you choose a random sample of that size, and then use the collected data to find p-hat.
So the fact that the expression above for determining the sample size depends on p-hat is problematic.
The way to overcome this problem is to take the conservative approach by setting p-hat = 1/2 = 0.5.
Why do we call this approach conservative?
It is conservative because the expression that appears in the numerator,
$\hat{p}(1-\hat{p}),$
is maximized when p-hat = 1/2 = 0.5.
That way, the n we get will work in giving us the desired margin of error regardless of what the value of p-hat is. This is a “worst case scenario” approach. So when we do that we get:
$n = \dfrac{1.96^{2} \cdot 0.5 \cdot 0.5}{m^{2}} = \dfrac{1.96^{2}}{4 m^{2}}$
In general, for any confidence level we have
• If we know a reasonable estimate of the proportion we can use:
$n=\dfrac{\left(z^{*}\right)^{2} \hat{p}(1-\hat{p})}{m^{2}}$
• If we choose the conservative estimate assuming we know nothing about the true proportion we use:
$n=\dfrac{\left(z^{*}\right)^{2}}{4 \cdot m^{2}}$
EXAMPLE:
It seems like media polls usually use a sample size of 1,000 to 1,200. This could be puzzling.
How could the results obtained from, say, 1,100 U.S. adults give us information about the entire population of U.S. adults? 1,100 is such a tiny fraction of the actual population. Here is the answer:
What sample size n is needed if a margin of error m = 0.03 is desired?
$n=\dfrac{(1.96)^{2}}{4 \cdot(0.03)^{2}}=1067.1 \rightarrow 1068$
(remember, always round up). In fact, 0.03 is a very commonly used margin of error, especially for media polls. For this reason, most media polls work with a sample of around 1,100 people.
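The following minimal Python sketch (our own illustration) carries out this conservative calculation and applies the round-up rule.

```python
import math

def conservative_n(z_star, m):
    """Conservative sample size for a proportion (worst case p-hat = 0.5); round up."""
    return math.ceil(z_star ** 2 / (4 * m ** 2))

print(conservative_n(1.96, 0.03))   # 1067.1 rounds up to 1068, matching the media-poll example
```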
Did I Get This?: Confidence Intervals – Proportions #2
When is it safe to use these methods?
As we mentioned before, one of the most important things to learn with any inference method is the conditions under which it is safe to use it.
As we did for the mean, the assumption we made in order to develop the methods in this unit was that the sampling distribution of the sample proportion, p-hat is roughly normal. Recall from the Probability unit that the conditions under which this happens are that
$n p \geq 10 \text { and } n(1-p) \geq 10$
Since p is unknown, we will replace it with its estimate, the sample proportion, and set
$n \hat{p} \geq 10 \text { and } n(1-\hat{p}) \geq 10$
to be the conditions under which it is safe to use the methods we developed in this section.
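As a quick sanity check, here is a tiny Python sketch (our own illustration) verifying these conditions for the Viagra poll data used earlier.

```python
n, p_hat = 1005, 643 / 1005

# Both counts must be at least 10 for the normal approximation to p-hat to be reasonable
print(n * p_hat >= 10, n * (1 - p_hat) >= 10)   # True True: the counts are 643 and 362
```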
Here is one final practice for these confidence intervals!!
Did I Get This?: Confidence Intervals – Proportions #3
Let’s summarize
In general, a confidence interval for the unknown population proportion (p) is
$\hat{p} \pm z^{*} \cdot \sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}$
where z* is 1.645 for 90% confidence, 1.96 for 95% confidence, and 2.576 for 99% confidence.
To obtain a desired margin of error (m) in a confidence interval for an unknown population proportion, a conservative sample size is
$n=\dfrac{\left(z^{*}\right)^{2}}{4 \cdot m^{2}}$
If a reasonable estimate of the true proportion is known, the sample size can be calculated using
$n = \dfrac{\left(z^{*}\right)^{2} \hat{p} (1-\hat{p})}{m^2}$
The methods developed in this unit are safe to use as long as
$n \hat{p} \geq 10 \text{ and } n(1-\hat{p}) \geq 10$
Wrap-Up (Estimation)
In this section on estimation, we have discussed the basic process for constructing confidence intervals from point estimates. In doing so we must calculate the margin of error using the standard error (or estimated standard error) and a z* or t* value.
As we wrap up this topic, we wanted to again discuss the interpretation of a confidence interval.
What do we mean by “confidence”?
Suppose we find a 95% confidence interval for an unknown parameter. What does the 95% mean exactly?
• If we repeat the process for all possible samples of this size for the population, 95% of the intervals we construct will contain the parameter
This is NOT the same as saying “the probability that μ (mu) is contained in (the interval constructed from my sample) is 95%.” Why?!
Answer
• Once we have a particular confidence interval, the true value is either in the interval constructed from our sample (probability = 1) or it is not (probability = 0). We simply do not know which it is. If we were to say “the probability that μ (mu) is contained in (the interval constructed from my sample) is 95%,” we know we would be incorrect since it is either 0 (No) or 1 (Yes) for any given sample. The probability comes from the “long run” view of the process.
• The probability we used to construct the confidence interval was based upon the fact that the sample statistic (x-bar, p-hat) will vary in a manner we understand (because we know the sampling distribution).
• The probability is associated with the randomness of our statistic so that for a particular interval we only speak of being “95% confident” which translates into an understanding about the process.
• In other words, in statistics, “95% confident” means our confidence in the process and implies that in the long run, we will be correct by using this process 95% of the time but that 5% of the time we will be incorrect. For one particular use of this process we cannot know if we are one of the 95% which are correct or one of the 5% which are incorrect. That is the statistical definition of confidence.
• We can say that in the long run, 95% of these intervals will contain the true parameter and 5% will not.
Correct Interpretations:
Example: Suppose a 95% confidence interval for the proportion of U.S. adults who are not active at all is (0.23, 0.27).
• Correct Interpretation #1: We are 95% confident that the true proportion of U.S. adults who are not active at all is between 23% and 27%
• Correct Interpretation #2: We are 95% confident that the true proportion of U.S. adults who are not active at all is covered by the interval (23%, 27%)
• A More Thorough Interpretation: Based upon our sample, the true proportion of U.S. adults who are not active at all is estimated to be 25%. With 95% confidence, this value could be as small as 23% to as large as 27%.
• A Common Interpretation in Journal Articles: Based upon our sample, the true proportion of U.S. adults who are not active at all is estimated to be 25% (95% CI 23%-27%).
Now let’s look at an INCORRECT interpretation which we have seen before
• INCORRECT Interpretation: There is a 95% chance that the true proportion of U.S. adults who are not active at all is between 23% and 27%. We know this is incorrect because at this point, the true proportion and the numbers in our interval are fixed. The probability is either 1 or 0 depending on whether the interval is one of the 95% that cover the true proportion, or one of the 5% that do not.
For confidence intervals regarding a population mean, we have an additional caution to discuss about interpretations.
Example: Suppose a 95% confidence interval for the average minutes per day of exercise for U.S. adults is (12, 18).
• Correct Interpretation: We are 95% confident that the true mean minutes per day of exercise for U.S. adults is between 12 and 18 minutes.
• INCORRECT Interpretation: We are 95% confident that an individual U.S. adult exercises between 12 and 18 minutes per day. We must remember that our intervals are about the parameter, in this case the population mean. They do not apply to an individual as we expect individuals to have much more variation.
• INCORRECT Interpretation: We are 95% confident that U.S. adults exercise between 12 and 18 minutes per day. This interpretation implies that this is true for all U.S. adults. It is an incorrect interpretation for the same reason as the previous incorrect interpretation!
As we continue to study inferential statistics, we will see that confidence intervals are used in many situations. The goal is always to provide confidence in our interval estimate of a quantity of interest. Population means and proportions are common parameters, however, any quantity that can be estimated from data has a population counterpart which we may wish to estimate.
(Optional) Outside Reading: Little Handbook – Confidence Intervals (and More) (4 Readings, ≈ 5500 words)
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.26: Outline the logic and process of hypothesis testing.
Learning Objectives
LO 6.27: Explain what the p-value is and how it is used to draw conclusions.
Video
Video: Hypothesis Testing (8:43)
Introduction
We are in the middle of the part of the course that has to do with inference for one variable.
So far, we talked about point estimation and learned how interval estimation enhances it by quantifying the magnitude of the estimation error (with a certain level of confidence) in the form of the margin of error. The result is the confidence interval — an interval that, with a certain confidence, we believe captures the unknown parameter.
We are now moving to the other kind of inference, hypothesis testing. We say that hypothesis testing is “the other kind” because, unlike the inferential methods we presented so far, where the goal was estimating the unknown parameter, the idea, logic and goal of hypothesis testing are quite different.
In the first two parts of this section we will discuss the idea behind hypothesis testing, explain how it works, and introduce new terminology that emerges in this form of inference. The final two parts will be more specific and will discuss hypothesis testing for the population proportion (p) and the population mean (μ, mu).
If this is your first statistics course, you will need to spend considerable time on this topic as there are many new ideas. Many students find this process and its logic difficult to understand in the beginning.
In this section, we will use the hypothesis test for a population proportion to motivate our understanding of the process. We will conduct these tests manually. For all future hypothesis test procedures, including problems involving means, we will use software to obtain the results and focus on interpreting them in the context of our scenario.
General Idea and Logic of Hypothesis Testing
The purpose of this section is to gradually build your understanding about how statistical hypothesis testing works. We start by explaining the general logic behind the process of hypothesis testing. Once we are confident that you understand this logic, we will add some more details and terminology.
To start our discussion about the idea behind statistical hypothesis testing, consider the following example:
EXAMPLE:
A case of suspected cheating on an exam is brought in front of the disciplinary committee at a certain university.
There are two opposing claims in this case:
• The student’s claim: I did not cheat on the exam.
• The instructor’s claim: The student did cheat on the exam.
Adhering to the principle “innocent until proven guilty,” the committee asks the instructor for evidence to support his claim. The instructor explains that the exam had two versions, and shows the committee members that on three separate exam questions, the student used in his solution numbers that were given in the other version of the exam.
The committee members all agree that it would be extremely unlikely to get evidence like that if the student’s claim of not cheating had been true. In other words, the committee members all agree that the instructor brought forward strong enough evidence to reject the student’s claim, and conclude that the student did cheat on the exam.
What does this example have to do with statistics?
While it is true that this story seems unrelated to statistics, it captures all the elements of hypothesis testing and the logic behind it. Before you read on to understand why, it would be useful to read the example again. Please do so now.
Statistical hypothesis testing is defined as:
• Assessing evidence provided by the data against the null claim (the claim which is to be assumed true unless enough evidence exists to reject it).
Here is how the process of statistical hypothesis testing works:
1. We have two claims about what is going on in the population. Let’s call them claim 1 (this will be the null claim or hypothesis) and claim 2 (this will be the alternative). Much like the story above, where the student’s claim is challenged by the instructor’s claim, the null claim 1 is challenged by the alternative claim 2. (For us, these claims are usually about the value of population parameter(s) or about the existence or nonexistence of a relationship between two variables in the population).
2. We choose a sample, collect relevant data and summarize them (this is similar to the instructor collecting evidence from the student’s exam). For statistical tests, this step will also involve checking any conditions or assumptions.
3. We figure out how likely it is to observe data like the data we obtained, if claim 1 is true. (Note that the wording “how likely …” implies that this step requires some kind of probability calculation). In the story, the committee members assessed how likely it is to observe evidence such as the instructor provided, had the student’s claim of not cheating been true.
4. Based on what we found in the previous step, we make our decision:
• If, after assuming claim 1 is true, we find that it would be extremely unlikely to observe data as strong as ours or stronger in favor of claim 2, then we have strong evidence against claim 1, and we reject it in favor of claim 2. Later we will see this corresponds to a small p-value.
• If, after assuming claim 1 is true, we find that observing data as strong as ours or stronger in favor of claim 2 is NOT VERY UNLIKELY, then we do not have enough evidence against claim 1, and therefore we cannot reject it in favor of claim 2. Later we will see this corresponds to a p-value which is not small.
In our story, the committee decided that it would be extremely unlikely to find the evidence that the instructor provided had the student’s claim of not cheating been true. In other words, the members felt that it is extremely unlikely that it is just a coincidence (random chance) that the student used the numbers from the other version of the exam on three separate problems. The committee members therefore decided to reject the student’s claim and concluded that the student had, indeed, cheated on the exam. (Wouldn’t you conclude the same?)
Hopefully this example helped you understand the logic behind hypothesis testing.
Interactive Applet: Reasoning of a Statistical Test
To strengthen your understanding of the process of hypothesis testing and the logic behind it, let’s look at three statistical examples.
EXAMPLE:
A recent study estimated that 20% of all college students in the United States smoke. The head of Health Services at Goodheart University (GU) suspects that the proportion of smokers may be lower at GU. In hopes of confirming her claim, the head of Health Services chooses a random sample of 400 Goodheart students, and finds that 70 of them are smokers.
Let’s analyze this example using the 4 steps outlined above:
1. Stating the claims: There are two claims here:
• claim 1: The proportion of smokers at Goodheart is 0.20.
• claim 2: The proportion of smokers at Goodheart is less than 0.20.
Claim 1 basically says “nothing special goes on at Goodheart University; the proportion of smokers there is no different from the proportion in the entire country.” This claim is challenged by the head of Health Services, who suspects that the proportion of smokers at Goodheart is lower.
2. Choosing a sample and collecting data: A sample of n = 400 was chosen, and summarizing the data revealed that the sample proportion of smokers is p-hat = 70/400 = 0.175. While it is true that 0.175 is less than 0.20, it is not clear whether this is strong enough evidence against claim 1. We must account for sampling variation.
3. Assessment of evidence: In order to assess whether the data provide strong enough evidence against claim 1, we need to ask ourselves: How surprising is it to get a sample proportion as low as p-hat = 0.175 (or lower), assuming claim 1 is true? In other words, we need to find how likely it is that in a random sample of size n = 400 taken from a population where the proportion of smokers is p = 0.20 we’ll get a sample proportion as low as p-hat = 0.175 (or lower). It turns out that the probability that we’ll get a sample proportion as low as p-hat = 0.175 (or lower) in such a sample is roughly 0.106 (do not worry about how this was calculated at this point – however, if you think about it hopefully you can see that the key is the sampling distribution of p-hat; a sketch of one way to obtain this number with software appears just after step 4 below).
4. Conclusion: Well, we found that if claim 1 were true there is a probability of 0.106 of observing data like that observed or more extreme. Now you have to decide … Do you think that a probability of 0.106 makes our data rare enough (surprising enough) under claim 1 so that the fact that we did observe it is enough evidence to reject claim 1? Or do you feel that a probability of 0.106 means that data like we observed are not very likely when claim 1 is true, but they are not unlikely enough to conclude that getting such data is sufficient evidence to reject claim 1? Basically, this is your decision. However, it would be nice to have some kind of guideline about what is generally considered surprising enough.
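As promised in step 3, here is a minimal Python sketch (our own illustration) of one way the 0.106 could be obtained, using the normal approximation to the sampling distribution of p-hat under claim 1.

```python
import numpy as np
from scipy import stats

p0, n = 0.20, 400                   # claim 1: the proportion of smokers is 0.20
p_hat = 70 / 400                    # observed sample proportion, 0.175

se = np.sqrt(p0 * (1 - p0) / n)     # SD of p-hat when claim 1 is true (0.02)
z = (p_hat - p0) / se               # -1.25
print(stats.norm.cdf(z))            # P(p-hat <= 0.175 | claim 1 is true), about 0.106
```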
EXAMPLE:
A certain prescription allergy medicine is supposed to contain an average of 245 parts per million (ppm) of a certain chemical. If the concentration is higher than 245 ppm, the drug will likely cause unpleasant side effects, and if the concentration is below 245 ppm, the drug may be ineffective. The manufacturer wants to check whether the mean concentration in a large shipment is the required 245 ppm or not. To this end, a random sample of 64 portions from the large shipment is tested, and it is found that the sample mean concentration is 250 ppm with a sample standard deviation of 12 ppm.
1. Stating the claims:
• Claim 1: The mean concentration in the shipment is the required 245 ppm.
• Claim 2: The mean concentration in the shipment is not the required 245 ppm.
Note that again, claim 1 basically says: “There is nothing unusual about this shipment, the mean concentration is the required 245 ppm.” This claim is challenged by the manufacturer, who wants to check whether that is, indeed, the case or not.
2. Choosing a sample and collecting data: A sample of n = 64 portions is chosen and after summarizing the data it is found that the sample mean concentration is x-bar = 250 and the sample standard deviation is s = 12. Is the fact that x-bar = 250 is different from 245 strong enough evidence to reject claim 1 and conclude that the mean concentration in the whole shipment is not the required 245? In other words, do the data provide strong enough evidence to reject claim 1?
3. Assessing the evidence: In order to assess whether the data provide strong enough evidence against claim 1, we need to ask ourselves the following question: If the mean concentration in the whole shipment were really the required 245 ppm (i.e., if claim 1 were true), how surprising would it be to observe a sample of 64 portions where the sample mean concentration is off by 5 ppm or more (as we did)? It turns out that it would be extremely unlikely to get such a result if the mean concentration were really the required 245. There is only a probability of 0.0007 (i.e., 7 in 10,000) of that happening. (Do not worry about how this was calculated at this point, but again, the key will be the sampling distribution.)
4. Making conclusions: Here, it is pretty clear that a sample like the one we observed or more extreme is VERY rare (or extremely unlikely) if the mean concentration in the shipment were really the required 245 ppm. The fact that we did observe such a sample therefore provides strong evidence against claim 1, so we reject it and conclude with very little doubt that the mean concentration in the shipment is not the required 245 ppm.
Do you think that you’re getting it? Let’s make sure, and look at another example.
EXAMPLE:
Is there a relationship between gender and combined scores (Math + Verbal) on the SAT exam?
Following a report on the College Board website, which showed that in 2003, males scored generally higher than females on the SAT exam, an educational researcher wanted to check whether this was also the case in her school district. The researcher chose random samples of 150 males and 150 females from her school district, collected data on their SAT performance and found the following:
| | n | mean | standard deviation |
|---|---|---|---|
| Females | 150 | 1010 | 206 |
| Males | 150 | 1025 | 212 |
Again, let’s see how the process of hypothesis testing works for this example:
1. Stating the claims:
• Claim 1: Performance on the SAT is not related to gender (males and females score the same).
• Claim 2: Performance on the SAT is related to gender – males score higher.
Note that again, claim 1 basically says: “There is nothing going on between the variables SAT and gender.” Claim 2 represents what the researcher wants to check, or suspects might actually be the case.
2. Choosing a sample and collecting data: Data were collected and summarized as given above. Is the fact that the sample mean score of males (1,025) is higher than the sample mean score of females (1,010) by 15 points strong enough information to reject claim 1 and conclude that in this researcher’s school district, males score higher on the SAT than females?
3. Assessment of evidence: In order to assess whether the data provide strong enough evidence against claim 1, we need to ask ourselves: If SAT scores are in fact not related to gender (claim 1 is true), how likely is it to get data like the data we observed, in which the difference between the males’ average and females’ average score is as high as 15 points or higher? It turns out that the probability of observing such a sample result if SAT score is not related to gender is approximately 0.29 (Again, do not worry about how this was calculated at this point).
4. Conclusion: Here, we have an example where observing a sample like the one we observed or more extreme is definitely not surprising (roughly 30% chance) if claim 1 were true (i.e., if indeed there is no difference in SAT scores between males and females). We therefore conclude that our data do not provide enough evidence to reject claim 1.
Comment:
• Go back and read the conclusion sections of the three examples, and pay attention to the wording. Note that there are two types of conclusions:
• “The data provide enough evidence to reject claim 1 and accept claim 2”; or
• “The data do not provide enough evidence to reject claim 1.”
In particular, note that in the second type of conclusion we did not say: “I accept claim 1,” but only “I don’t have enough evidence to reject claim 1.” We will come back to this issue later, but this is a good place to make you aware of this subtle difference.
Hopefully by now, you understand the logic behind the statistical hypothesis testing process. Here is a summary:
1. State the two claims (the null claim and the alternative claim).
2. Choose a sample, check any conditions, and collect and summarize the data.
3. Assess how likely it would be to observe data like ours (or more extreme) if the null claim were true.
4. Draw conclusions: if such data would be very unlikely, reject the null claim in favor of the alternative; otherwise, do not reject it.
Learn by Doing: Logic of Hypothesis Testing
Did I Get This?: Logic of Hypothesis Testing
Steps in Hypothesis Testing
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.26: Outline the logic and process of hypothesis testing.
Learning Objectives
LO 6.27: Explain what the p-value is and how it is used to draw conclusions.
Video
Video: Steps in Hypothesis Testing (16:02)
Now that we understand the general idea of how statistical hypothesis testing works, let’s go back to each of the steps and delve slightly deeper, getting more details and learning some terminology.
Hypothesis Testing Step 1: State the Hypotheses
In all three examples, our aim is to decide between two opposing points of view, Claim 1 and Claim 2. In hypothesis testing, Claim 1 is called the null hypothesis (denoted “Ho“), and Claim 2 plays the role of the alternative hypothesis (denoted “Ha“). As we saw in the three examples, the null hypothesis suggests nothing special is going on; in other words, there is no change from the status quo, no difference from the traditional state of affairs, no relationship. In contrast, the alternative hypothesis disagrees with this, stating that something is going on, or there is a change from the status quo, or there is a difference from the traditional state of affairs. The alternative hypothesis, Ha, usually represents what we want to check or what we suspect is really going on.
Let’s go back to our three examples and apply the new notation:
In example 1:
• Ho: The proportion of smokers at GU is 0.20.
• Ha: The proportion of smokers at GU is less than 0.20.
In example 2:
• Ho: The mean concentration in the shipment is the required 245 ppm.
• Ha: The mean concentration in the shipment is not the required 245 ppm.
In example 3:
• Ho: Performance on the SAT is not related to gender (males and females score the same).
• Ha: Performance on the SAT is related to gender – males score higher.
Learn by Doing: State the Hypotheses
Did I Get This?: State the Hypotheses
Hypothesis Testing Step 2: Collect Data, Check Conditions and Summarize Data
This step is pretty obvious. This is what inference is all about. You look at sampled data in order to draw conclusions about the entire population. In the case of hypothesis testing, based on the data, you draw conclusions about whether or not there is enough evidence to reject Ho.
There is, however, one detail that we would like to add here. In this step we collect data and summarize it. Go back and look at the second step in our three examples. Note that in order to summarize the data we used simple sample statistics such as the sample proportion (p-hat), sample mean (x-bar) and the sample standard deviation (s).
In practice, you go a step further and use these sample statistics to summarize the data with what’s called a test statistic. We are not going to go into any details right now, but we will discuss test statistics when we go through the specific tests.
This step will also involve checking any conditions or assumptions required to use the test.
Hypothesis Testing Step 3: Assess the Evidence
As we saw, this is the step where we calculate how likely is it to get data like that observed (or more extreme) when Ho is true. In a sense, this is the heart of the process, since we draw our conclusions based on this probability.
• If this probability is very small (see example 2), then that means that it would be very surprising to get data like that observed (or more extreme) if Ho were true. The fact that we did observe such data is therefore evidence against Ho, and we should reject it.
• On the other hand, if this probability is not very small (see example 3) this means that observing data like that observed (or more extreme) is not very surprising if Ho were true. The fact that we observed such data does not provide evidence against Ho. This crucial probability, therefore, has a special name. It is called the p-value of the test.
In our three examples, the p-values were given to you (and you were reassured that you didn’t need to worry about how these were derived yet):
• Example 1: p-value = 0.106
• Example 2: p-value = 0.0007
• Example 3: p-value = 0.29
Obviously, the smaller the p-value, the more surprising it is to get data like ours (or more extreme) when Ho is true, and therefore, the stronger the evidence the data provide against Ho.
Looking at the three p-values of our three examples, we see that the data that we observed in example 2 provide the strongest evidence against the null hypothesis, followed by example 1, while the data in example 3 provides the least evidence against Ho.
Comment:
• Right now we will not go into specific details about p-value calculations, but just mention that since the p-value is the probability of getting data like those observed (or more extreme) when Ho is true, it would make sense that the calculation of the p-value will be based on the data summary, which, as we mentioned, is the test statistic. Indeed, this is the case. In practice, we will mostly use software to provide the p-value for us.
Hypothesis Testing Step 4: Making Conclusions
Since our statistical conclusion is based on how small the p-value is, or in other words, how surprising our data are when Ho is true, it would be nice to have some kind of guideline or cutoff that will help determine how small the p-value must be, or how “rare” (unlikely) our data must be when Ho is true, for us to conclude that we have enough evidence to reject Ho.
This cutoff exists, and because it is so important, it has a special name. It is called the significance level of the test and is usually denoted by the Greek letter α (alpha). The most commonly used significance level is α (alpha) = 0.05 (or 5%). This means that:
• if the p-value < α (alpha) (usually 0.05), then the data we obtained are considered to be “rare (or surprising) enough” under the assumption that Ho is true, and we say that the data provide statistically significant evidence against Ho, so we reject Ho and thus accept Ha.
• if the p-value ≥ α (alpha) (usually 0.05), then our data are not considered to be “surprising enough” under the assumption that Ho is true, and we say that our data do not provide enough evidence to reject Ho (or, equivalently, that the data do not provide enough evidence to accept Ha).
Now that we have a cutoff to use, here are the appropriate conclusions for each of our examples based upon the p-values we were given.
In Example 1:
• Using our cutoff of 0.05, we fail to reject Ho.
• Conclusion: There IS NOT enough evidence that the proportion of smokers at GU is less than 0.20
• Still we should consider: Does the evidence seen in the data provide any practical evidence towards our alternative hypothesis?
In Example 2:
• Using our cutoff of 0.05, we reject Ho.
• Conclusion: There IS enough evidence that the mean concentration in the shipment is not the required 245 ppm.
• Still we should consider: Does the evidence seen in the data provide any practical evidence towards our alternative hypothesis?
In Example 3:
• Using our cutoff of 0.05, we fail to reject Ho.
• Conclusion: There IS NOT enough evidence that males score higher on average than females on the SAT.
• Still we should consider: Does the evidence seen in the data provide any practical evidence towards our alternative hypothesis?
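To recap the decision rule compactly, here is a minimal Python sketch (our own addition) applying the α = 0.05 cutoff to the three p-values above.

```python
alpha = 0.05
p_values = {"Example 1 (smokers)": 0.106,
            "Example 2 (concentration)": 0.0007,
            "Example 3 (SAT)": 0.29}

for name, p in p_values.items():
    decision = "reject Ho" if p < alpha else "fail to reject Ho"
    print(f"{name}: p-value = {p} -> {decision}")
```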
Notice that all of the above conclusions are written in terms of the alternative hypothesis and are given in the context of the situation. In no situation have we claimed the null hypothesis is true. Be very careful of this and other issues discussed in the following comments.
Comments:
1. Although the significance level provides a good guideline for drawing our conclusions, it should not be treated as an incontrovertible truth. There is a lot of room for personal interpretation. What if your p-value is 0.052? You might want to stick to the rules and say “0.052 > 0.05 and therefore I don’t have enough evidence to reject Ho,” but you might decide that 0.052 is small enough for you to believe that Ho should be rejected. It should be noted that scientific journals do consider 0.05 to be the cutoff point for which any p-value below the cutoff indicates enough evidence against Ho, and any p-value above it, or even equal to it, indicates there is not enough evidence against Ho, although a p-value between 0.05 and 0.10 is often reported as marginally statistically significant.
2. It is important to draw your conclusions in context. It is never enough to say: “p-value = …, and therefore I have enough evidence to reject Ho at the 0.05 significance level.” You should always word your conclusion in terms of the data. Although we will use the terminology of “rejecting Ho” or “failing to reject Ho,” this is mostly because we are instructing you in these concepts; in practice, this language is rarely used. We also suggest writing your conclusion in terms of the alternative hypothesis: is there or is there not enough evidence that the alternative hypothesis is true?
3. Let’s go back to the issue of the nature of the two types of conclusions that I can make.
• Either I reject Ho (when the p-value is smaller than the significance level)
• or I cannot reject Ho (when the p-value is larger than the significance level).
As we mentioned earlier, note that the second conclusion does not imply that I accept Ho, but just that I don’t have enough evidence to reject it. Saying (by mistake) “I don’t have enough evidence to reject Ho so I accept it” indicates that the data provide evidence that Ho is true, which is not necessarily the case. Consider the following slightly artificial yet effective example:
EXAMPLE:
An employer claims to subscribe to an “equal opportunity” policy, not hiring men any more often than women for managerial positions. Is this credible? You’re not sure, so you want to test the following two hypotheses:
• Ho: The proportion of male managers hired is 0.5
• Ha: The proportion of male managers hired is more than 0.5
Data: You choose at random three of the new managers who were hired in the last 5 years and find that all 3 are men.
Assessing Evidence: If the proportion of male managers hired is really 0.5 (Ho is true), then the probability that the random selection of three managers will yield three males is therefore 0.5 * 0.5 * 0.5 = 0.125. This is the p-value (using the multiplication rule for independent events).
Conclusion: Using 0.05 as the significance level, you conclude that since the p-value = 0.125 > 0.05, the fact that the three randomly selected managers were all males is not enough evidence to reject the employer’s claim of subscribing to an equal opportunity policy (Ho).
However, the data (all three selected are males) definitely does NOT provide evidence to accept the employer’s claim (Ho).
Learn By Doing: Using p-values
Did I Get This?: Using p-values
Comment about wording: Another common wording in scientific journals is:
• “The results are statistically significant” – when the p-value < α (alpha).
• “The results are not statistically significant” – when the p-value > α (alpha).
Often you will see significance levels reported with additional description to indicate the degree of statistical significance. A general guideline (although not required in our course) is:
• If 0.01 ≤ p-value < 0.05, then the results are (statistically) significant.
• If 0.001 ≤ p-value < 0.01, then the results are highly statistically significant.
• If p-value < 0.001, then the results are very highly statistically significant.
• If p-value > 0.05, then the results are not statistically significant (NS).
• If 0.05 ≤ p-value < 0.10, then the results are marginally statistically significant.
Let’s summarize
We learned quite a lot about hypothesis testing. We learned the logic behind it, what the key elements are, and what types of conclusions we can and cannot draw in hypothesis testing. Here is a quick recap:
Video
Video: Hypothesis Testing Overview (2:20)
Here are a few more activities if you need some additional practice.
Did I Get This?: Hypothesis Testing Overview
Comments:
• Notice that the p-value is an example of a conditional probability. We calculate the probability of obtaining results like those of our data (or more extreme) GIVEN the null hypothesis is true. We could write P(Obtaining results like ours or more extreme | Ho is True).
• Another common phrase used to define the p-value is: “The probability of obtaining a statistic as or more extreme than your result given the null hypothesis is TRUE“.
• We could write P(Obtaining a test statistic as or more extreme than ours | Ho is True).
• In this case we are asking “Assuming the null hypothesis is true, how rare is it to observe something as or more extreme than what I have found in my data?”
• If after assuming the null hypothesis is true, what we have found in our data is extremely rare (small p-value), this provides evidence to reject our assumption that Ho is true in favor of Ha.
• The p-value can also be thought of as the probability, assuming the null hypothesis is true, that the result we have seen is solely due to random error (or random chance). We have already seen that statistics from samples collected from a population vary. There is random error or random chance involved when we sample from populations.
In this setting, if the p-value is very small, this implies, assuming the null hypothesis is true, that it is extremely unlikely that the results we have obtained would have happened due to random error alone, and thus our assumption (Ho) is rejected in favor of the alternative hypothesis (Ha).
• It is EXTREMELY important that you find a definition of the p-value which makes sense to you. New students often need to contemplate this idea repeatedly through a variety of examples and explanations before becoming comfortable with this idea. It is one of the two most important concepts in statistics (the other being confidence intervals).
Remember:
• We infer that the alternative hypothesis is true ONLY by rejecting the null hypothesis.
• A statistically significant result is one that has a very low probability of occurring if the null hypothesis is true.
• Results which are statistically significant may or may not have practical significance and vice versa.
Error and Power
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.28: Define a Type I and Type II error in general and in the context of specific scenarios.
Learning Objectives
LO 6.29: Explain the concept of the power of a statistical test including the relationship between power, sample size, and effect size.
Video
Video: Errors and Power (12:03)
Type I and Type II Errors in Hypothesis Tests
We have not yet discussed the fact that we are not guaranteed to make the correct decision by this process of hypothesis testing. Maybe you are beginning to see that there is always some level of uncertainty in statistics.
Let’s think about what we know already and define the possible errors we can make in hypothesis testing. When we conduct a hypothesis test, we choose one of two possible conclusions based upon our data.
If the p-value is smaller than your pre-specified significance level (α, alpha), you reject the null hypothesis and either
• You have made the correct decision since the null hypothesis is false
OR
• You have made an error (Type I) and rejected Ho when in fact Ho is true (your data happened to be a RARE EVENT under Ho)
If the p-value is greater than (or equal to) your chosen significance level (α, alpha), you fail to reject the null hypothesis and either
• You have made the correct decision since the null hypothesis is true
OR
• You have made an error (Type II) and failed to reject Ho when in fact Ho is false (the alternative hypothesis, Ha, is true)
The following summarizes the four possible results which can be obtained from a hypothesis test. Notice the rows represent the decision made in the hypothesis test and the columns represent the (usually unknown) truth in reality.
Although the truth is unknown in practice – or we would not be conducting the test – we know it must be the case that either the null hypothesis is true or the null hypothesis is false. It is also the case that either decision we make in a hypothesis test can result in an incorrect conclusion!
A TYPE I Error occurs when we Reject Ho when, in fact, Ho is True. In this case, we mistakenly reject a true null hypothesis.
• P(TYPE I Error) = P(Reject Ho | Ho is True) = α = alpha = Significance Level
A TYPE II Error occurs when we fail to Reject Ho when, in fact, Ho is False. In this case we fail to reject a false null hypothesis.
• P(TYPE II Error) = P(Fail to Reject Ho | Ho is False) = β = beta
When our significance level is 5%, we are saying that we will allow ourselves to make a Type I error at most 5% of the time. In the long run, if we repeat the process when the null hypothesis is in fact true, about 5% of the time we will obtain a p-value small enough to reject Ho even though Ho is true.
In this case, our data represent a rare occurrence which is unlikely to happen but is still possible. For example, suppose we toss a coin 10 times and obtain 10 heads, this is unlikely for a fair coin but not impossible. We might conclude the coin is unfair when in fact we simply saw a very rare event for this fair coin.
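If you are curious, the chance of that coin-flip outcome is easy to compute with software. Here is a minimal, optional Python sketch, just to put a number on how rare the event is:

```python
from scipy.stats import binom

# Probability of observing 10 heads in 10 tosses of a fair coin (p = 0.5)
print(binom.pmf(10, n=10, p=0.5))   # 0.0009765625, i.e., about 1 in 1024
```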
Our testing procedure CONTROLS for the Type I error when we set a pre-determined value for the significance level.
Notice that these probabilities are conditional probabilities. This is one more reason why conditional probability is an important concept in statistics.
Unfortunately, calculating the probability of a Type II error requires us to know the truth about the population. In practice we can only calculate this probability using a series of “what if” calculations which depend upon the type of problem.
Caution
Comment: As you initially read through the examples below, focus on the broad concepts instead of the small details. It is not important to understand how to calculate these values yourself at this point.
• Try to understand the pictures we present. Which pictures represent an assumed null hypothesis and which represent an alternative?
• It may be useful to come back to this page (and the activities here) after you have reviewed the rest of the section on hypothesis testing and have worked a few problems yourself.
Interactive Applet: Statistical Significance
Here are two examples of using an older version of this applet. It looks slightly different but the same settings and options are available in the version above.
In both cases we will consider IQ scores.
Our null hypothesis is that the true mean is 100. Assume the standard deviation is 16 and we will specify a significance level of 5%.
EXAMPLE:
In this example we will specify that the true mean is indeed 100 so that the null hypothesis is true. Most of the time (95%), when we generate a sample, we should fail to reject the null hypothesis since the null hypothesis is indeed true.
Here is one sample that results in a correct decision:
In the sample above, we obtain an x-bar of 105, which is drawn on the distribution which assumes μ (mu) = 100 (the null hypothesis is true). Notice the sample is shown as blue dots along the x-axis and the shaded region shows for which values of x-bar we would reject the null hypothesis. In other words, we would reject Ho whenever the x-bar falls in the shaded region.
Enter the same values and generate samples until you obtain a Type I error (you falsely reject the null hypothesis). You should see something like this:
If you were to generate 100 samples, you should have around 5% where you rejected Ho. These would be samples which would result in a Type I error.
The previous example illustrates a correct decision and a Type I error when the null hypothesis is true. The next example illustrates a correct decision and Type II error when the null hypothesis is false. In this case, we must specify the true population mean.
EXAMPLE:
Let’s suppose we are sampling from an honors program and that the true mean IQ for this population is 110. We do not know the probability of a Type II error without more detailed calculations.
Let’s start with a sample which results in a correct decision.
In the sample above, we obtain an x-bar of 111, which is drawn on the distribution which assumes μ (mu) = 100 (the null hypothesis is true).
Enter the same values and generate samples until you obtain a Type II error (you fail to reject the null hypothesis). You should see something like this:
You should notice that in this case (when Ho is false), it is easier to obtain an incorrect decision (a Type II error) than it was in the case where Ho is true. If you generate 100 samples, you can approximate the probability of a Type II error.
We can find the probability of a Type II error by visualizing both the assumed distribution and the true distribution together. The image below is adapted from an applet we will use when we discuss the power of a statistical test.
There is a 37.4% chance that, in the long run, we will make a Type II error and fail to reject the null hypothesis when in fact the true mean IQ is 110 in the population from which we sample our 10 individuals.
Can you visualize what will happen if the true population mean is really 115 or 108? When will the Type II error increase? When will it decrease? We will look at this idea again when we discuss the concept of power in hypothesis tests.
Comments:
• It is important to note that there is a trade-off between the probability of a Type I and a Type II error. If we decrease the probability of one of these errors, the probability of the other will increase! The practical result of this is that if we require stronger evidence to reject the null hypothesis (smaller significance level = probability of a Type I error), we will increase the chance that we will be unable to reject the null hypothesis when in fact Ho is false (increases the probability of a Type II error).
• When α (alpha) = 0.05 we obtained a Type II error probability of 0.374 = β = beta
• When α (alpha) = 0.01 (smaller than before) we obtain a Type II error probability of 0.644 = β = beta (larger than before)
• As the blue line in the picture moves farther right, the significance level (α, alpha) is decreasing and the Type II error probability is increasing.
• As the blue line in the picture moves farther left, the significance level (α, alpha) is increasing and the Type II error probability is decreasing (a short calculation approximating these β values is sketched below).
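These β values can be approximated with a short calculation. The sketch below is optional and assumes the settings the applet appears to use: a one-sided z-test of Ho: μ = 100 versus Ha: μ > 100 with σ = 16 and n = 10. Because of rounding and the applet's exact configuration, the results (about 0.37 and 0.64) differ slightly from the 0.374 and 0.644 quoted above.

```python
from scipy.stats import norm

mu0, mu_true, sigma, n = 100, 110, 16, 10   # null mean, true mean, SD, sample size (assumed settings)
se = sigma / n ** 0.5                       # standard error of x-bar

for alpha in (0.05, 0.01):
    # One-sided test (Ha: mu > 100): reject Ho when x-bar exceeds this cutoff
    cutoff = mu0 + norm.ppf(1 - alpha) * se
    # Type II error: probability that x-bar falls below the cutoff when the true mean is 110
    beta = norm.cdf(cutoff, loc=mu_true, scale=se)
    print(f"alpha = {alpha}: beta = {beta:.3f}, power = {1 - beta:.3f}")
```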
Let’s return to our very first example and define these two errors in context.
EXAMPLE:
A case of suspected cheating on an exam is brought in front of the disciplinary committee at a certain university.
There are two opposing claims in this case:
• Ho = The student’s claim: I did not cheat on the exam.
• Ha = The instructor’s claim: The student did cheat on the exam.
Adhering to the principle “innocent until proven guilty,” the committee asks the instructor for evidence to support his claim.
There are four possible outcomes of this process. There are two possible correct decisions:
• The student did cheat on the exam and the instructor brings enough evidence to reject Ho and conclude the student did cheat on the exam. This is a CORRECT decision!
• The student did not cheat on the exam and the instructor fails to provide enough evidence that the student did cheat on the exam. This is a CORRECT decision!
Both the correct decisions and the possible errors are fairly easy to understand but with the errors, you must be careful to identify and define the two types correctly.
TYPE I Error: Reject Ho when Ho is True
• The student did not cheat on the exam but the instructor brings enough evidence to reject Ho and conclude the student cheated on the exam. This is a Type I Error.
TYPE II Error: Fail to Reject Ho when Ho is False
• The student did cheat on the exam but the instructor fails to provide enough evidence that the student cheated on the exam. This is a Type II Error.
In most situations, including this one, it is more “acceptable” to have a Type II error than a Type I error. Although allowing a student who cheats to go unpunished might be considered a very bad problem, punishing a student for something he or she did not do is usually considered to be a more severe error. This is one reason we control for our Type I error in the process of hypothesis testing.
Did I Get This?: Type I and Type II Errors (in context)
Comment:
• The probabilities of Type I and Type II errors are closely related to the concepts of sensitivity and specificity that we discussed previously. Consider the following hypotheses:
Ho: The individual does not have diabetes (status quo, nothing special happening)
Ha: The individual does have diabetes (something is going on here)
In this setting:
When someone tests positive for diabetes we would reject the null hypothesis and conclude the person has diabetes (we may or may not be correct!).
When someone tests negative for diabetes we would fail to reject the null hypothesis so that we fail to conclude the person has diabetes (we may or may not be correct!).
Let’s take it one step further:
Sensitivity = P(Test + | Have Disease) which in this setting equals
P(Reject Ho | Ho is False) = 1 – P(Fail to Reject Ho | Ho is False) = 1 – β = 1 – beta
Specificity = P(Test – | No Disease) which in this setting equals
P(Fail to Reject Ho | Ho is True) = 1 – P(Reject Ho | Ho is True) = 1 – α = 1 – alpha
Notice that sensitivity and specificity relate to the probability of making a correct decision whereas α (alpha) and β (beta) relate to the probability of making an incorrect decision.
Usually α (alpha) = 0.05 so that the specificity listed above is 0.95 or 95%.
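Using the numbers from the IQ example earlier (where β = 0.374 when α = 0.05), this mapping works out to:
$\text{Sensitivity}=1-\beta=1-0.374=0.626 \quad \text{ and } \quad \text{Specificity}=1-\alpha=1-0.05=0.95$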
Next, we will see that the sensitivity listed above is the power of the hypothesis test!
Reasons for a Type I Error in Practice
Assuming that you have obtained a quality sample:
• The reason for a Type I error is random chance.
• When a Type I error occurs, our observed data represented a rare event which indicated evidence in favor of the alternative hypothesis even though the null hypothesis was actually true.
Reasons for a Type II Error in Practice
Again, assuming that you have obtained a quality sample, now we have a few possibilities depending upon the true difference that exists.
• The sample size is too small to detect an important difference. This is the worst case; you should have obtained a larger sample. In this situation, you may notice that the effect seen in the sample seems PRACTICALLY significant and yet the p-value is not small enough to reject the null hypothesis.
• The sample size is reasonable for the important difference but the true difference (which might be somewhat meaningful or interesting) is smaller than your test was capable of detecting. This is tolerable as you were not interested in being able to detect this difference when you began your study. In this situation, you may notice that the effect seen in the sample seems to have some potential for practical significance.
• The sample size is more than adequate and the difference that was not detected is meaningless in practice. This is not a problem at all and is in effect a “correct decision” since the difference you did not detect would have no practical meaning.
• Note: We will discuss the idea of practical significance later in more detail.
Power of a Hypothesis Test
It is often the case that we truly wish to prove the alternative hypothesis. It is therefore reasonable that we would be interested in the probability of correctly rejecting the null hypothesis, that is, the probability of rejecting the null hypothesis when in fact the null hypothesis is false. This can also be thought of as the probability of being able to detect a (pre-specified) difference of interest to the researcher.
Let’s begin with a realistic example of how power can be described in a study.
EXAMPLE:
In a clinical trial to study two medications for weight loss, we have an 80% chance to detect a difference in the weight loss between the two medications of 10 pounds. In other words, the power of the hypothesis test we will conduct is 80%.
In other words, if one medication comes from a population with an average weight loss of 25 pounds and the other comes from a population with an average weight loss of 15 pounds, we will have an 80% chance to detect that difference using the sample we have in our trial.
If we were to repeat this trial many times, 80% of the time we will be able to reject the null hypothesis (that there is no difference between the medications) and 20% of the time we will fail to reject the null hypothesis (and make a Type II error!).
The difference of 10 pounds in the previous example is often called the effect size. The measure of the effect differs depending on the particular test you are conducting but is always some measure related to the true effect in the population. In this example, it is the difference between two population means.
Recall the definition of a Type II error:
A TYPE II Error occurs when we fail to Reject Ho when, in fact, Ho is False. In this case we fail to reject a false null hypothesis.
P(TYPE II Error) = P(Fail to Reject Ho | Ho is False) = β = beta
Notice that P(Reject Ho | Ho is False) = 1 – P(Fail to Reject Ho | Ho is False) = 1 – β = 1 – beta.
The POWER of a hypothesis test is the probability of rejecting the null hypothesis when the null hypothesis is false. This can also be stated as the probability of correctly rejecting the null hypothesis.
POWER = P(Reject Ho | Ho is False) = 1 – β = 1 – beta
Power is the test’s ability to correctly reject the null hypothesis. A test with high power has a good chance of being able to detect the difference of interest to us, if it exists.
As we mentioned at the bottom of the previous page, this can be thought of as the sensitivity of the hypothesis test if you imagine Ho = No disease and Ha = Disease.
Factors Affecting the Power of a Hypothesis Test
The power of a hypothesis test is affected by numerous quantities (similar to the margin of error in a confidence interval).
Assume that the null hypothesis is false for a given hypothesis test. All else being equal, we have the following:
• Larger samples result in a greater chance to reject the null hypothesis which means an increase in the power of the hypothesis test.
• If the effect size is larger, it will become easier for us to detect. This results in a greater chance to reject the null hypothesis which means an increase in the power of the hypothesis test. The effect size varies for each test and is usually closely related to the difference between the hypothesized value and the true value of the parameter under study.
• From the relationship between the probability of a Type I and a Type II error (as α (alpha) decreases, β (beta) increases), we can see that as α (alpha) decreases, Power = 1 – β = 1 – beta also decreases.
• There are other mathematical ways to change the power of a hypothesis test, such as changing the population standard deviation; however, these are not quantities that we can usually control so we will not discuss them here.
Caution
In practice, we specify a significance level and a desired power to detect a difference which will have practical meaning to us, and this determines the sample size required for the experiment or study.
For most grants involving statistical analysis, power calculations must be completed to illustrate that the study will have a reasonable chance to detect an important effect. Otherwise, the money spent on the study could be wasted. The goal is usually to have a power close to 80%.
For example, if there is only a 5% chance to detect an important difference between two treatments in a clinical trial, this would result in a waste of time, effort, and money on the study since, when the alternative hypothesis is true, the chance a treatment effect can be found is very small.
Comment:
• In order to calculate the power of a hypothesis test, we must specify the “truth.” As we mentioned previously when discussing Type II errors, in practice we can only calculate this probability using a series of “what if” calculations which depend upon the type of problem.
The following activity involves working with an interactive applet to study power more carefully.
Learn by Doing: Power of Hypothesis Tests
The following reading is an excellent discussion about Type I and Type II errors.
(Optional) Outside Reading: A Good Discussion of Power (≈ 2500 words)
We will not be asking you to perform power calculations manually. You may be asked to use online calculators and applets. Most statistical software packages offer some ability to complete power calculations. There are also many online calculators for power and sample size on the internet, for example, Russ Lenth’s power and sample-size page.
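As an optional illustration of what such software does, here is a minimal sketch using the Python package statsmodels to find the sample size for a study like the weight-loss example above (80% power, α = 0.05, two-sided, a 10-pound difference). The standard deviation of 15 pounds is an assumed value chosen only for illustration; the example in this section does not specify one.

```python
from statsmodels.stats.power import TTestIndPower

assumed_sd = 15                   # hypothetical standard deviation of weight loss, in pounds
effect_size = 10 / assumed_sd     # standardized effect size (Cohen's d) for a 10-pound difference

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80,
                                          alternative="two-sided")
print(round(n_per_group))         # approximate number of participants needed in EACH group
```

Changing the assumed standard deviation, the desired power, or the significance level and re-running the calculation is a quick way to see the relationships described in this section.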
Proportions (Introduction & Step 1)
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.33: In a given context, distinguish between situations involving a population proportion and a population mean and specify the correct null and alternative hypothesis for the scenario.
Learning Objectives
LO 4.34: Carry out a complete hypothesis test for a population proportion by hand.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.26: Outline the logic and process of hypothesis testing.
Video
Video: Proportions (Introduction & Step 1) (7:18)
Now that we understand the process of hypothesis testing and the logic behind it, we are ready to start learning about specific statistical tests (also known as significance tests).
The first test we are going to learn is the test about the population proportion (p).
This test is widely known as the “z-test for the population proportion (p).”
Introduction
We will understand later where the “z-test” part is coming from.
This will be the only type of problem you will complete entirely “by-hand” in this course. Our goal is to use this example to give you the tools you need to understand how this process works. After working a few problems, you should review the earlier material again. You will likely need to review the terminology and concepts a few times before you fully understand the process.
In reality, you will often be conducting more complex statistical tests and allowing software to provide the p-value. In these settings it will be important to know what test to apply for a given situation and to be able to explain the results in context.
Review: Types of Variables
When we conduct a test about a population proportion, we are working with a categorical variable. Later in the course, after we have learned a variety of hypothesis tests, we will need to be able to identify which test is appropriate for which situation. Identifying the variable as categorical or quantitative is an important component of choosing an appropriate hypothesis test.
Learn by Doing: Review Types of Variables
One Sample Z-Test for a Population Proportion
In this part of our discussion on hypothesis testing, we will go into details that we did not go into before. More specifically, we will use this test to introduce the idea of a test statistic, and details about how p-values are calculated.
Let’s start by introducing the three examples, which will be the leading examples in our discussion. Each example is followed by a figure illustrating the information provided, as well as the question of interest.
EXAMPLE:
A machine is known to produce 20% defective products, and is therefore sent for repair. After the machine is repaired, 400 products produced by the machine are chosen at random and 64 of them are found to be defective. Do the data provide enough evidence that the proportion of defective products produced by the machine (p) has been reduced as a result of the repair?
The following figure displays the information, as well as the question of interest:
The question of interest helps us formulate the null and alternative hypotheses in terms of p, the proportion of defective products produced by the machine following the repair:
Ho: p = 0.20 (No change; the repair did not help).
Ha: p < 0.20 (The repair was effective at reducing the proportion of defective parts).
EXAMPLE:
There are rumors that students at a certain liberal arts college are more inclined to use drugs than U.S. college students in general. Suppose that in a simple random sample of 100 students from the college, 19 admitted to marijuana use. Do the data provide enough evidence to conclude that the proportion of marijuana users among the students in the college (p) is higher than the national proportion, which is 0.157? (This number is reported by the Harvard School of Public Health.)
Again, the following figure displays the information as well as the question of interest:
As before, we can formulate the null and alternative hypotheses in terms of p, the proportion of students in the college who use marijuana:
Ho: p = 0.157 (same as among all college students in the country).
Ha: p > 0.157 (higher than the national figure).
EXAMPLE:
Polls on certain topics are conducted routinely in order to monitor changes in the public’s opinions over time. One such topic is the death penalty. In 2003 a poll estimated that 64% of U.S. adults support the death penalty for a person convicted of murder. In a more recent poll, 675 out of 1,000 U.S. adults chosen at random were in favor of the death penalty for convicted murderers. Do the results of this poll provide evidence that the proportion of U.S. adults who support the death penalty for convicted murderers (p) changed between 2003 and the later poll?
Here is a figure that displays the information, as well as the question of interest:
Again, we can formulate the null and alternative hypotheses in term of p, the proportion of U.S. adults who support the death penalty for convicted murderers.
Ho: p = 0.64 (No change from 2003).
Ha: p ≠ 0.64 (Some change since 2003).
Learn by Doing: Proportions (Overview)
Did I Get This?: Proportions (Overview)
Recall that there are basically 4 steps in the process of hypothesis testing:
• STEP 1: State the appropriate null and alternative hypotheses, Ho and Ha.
• STEP 2: Obtain a random sample, collect relevant data, and check whether the data meet the conditions under which the test can be used. If the conditions are met, summarize the data using a test statistic.
• STEP 3: Find the p-value of the test.
• STEP 4: Based on the p-value, decide whether or not the results are statistically significant and draw your conclusions in context.
• Note: In practice, we should always consider the practical significance of the results as well as the statistical significance.
We are now going to go through these steps as they apply to the hypothesis testing for the population proportion p. It should be noted that even though the details will be specific to this particular test, some of the ideas that we will add apply to hypothesis testing in general.
Step 1. Stating the Hypotheses
Here again are the three sets of hypotheses that are being tested in each of our three examples:
EXAMPLE:
Has the proportion of defective products been reduced as a result of the repair?
• Ho: p = 0.20 (No change; the repair did not help).
• Ha: p < 0.20 (The repair was effective at reducing the proportion of defective parts).
EXAMPLE:
Is the proportion of marijuana users in the college higher than the national figure?
• Ho: p = 0.157 (same as among all college students in the country).
• Ha: p > 0.157 (higher than the national figure).
EXAMPLE:
Did the proportion of U.S. adults who support the death penalty change between 2003 and a later poll?
• Ho: p = 0.64 (No change from 2003).
• Ha: p ≠ 0.64 (Some change since 2003).
The null hypothesis always takes the form:
• Ho: p = some value
and the alternative hypothesis takes one of the following three forms:
• Ha: p < that value (like in example 1) or
• Ha: p > that value (like in example 2) or
• Ha: p ≠ that value (like in example 3).
Note that it was quite clear from the context which form of the alternative hypothesis would be appropriate. The value that is specified in the null hypothesis is called the null value, and is generally denoted by p0. We can say, therefore, that in general the null hypothesis about the population proportion (p) would take the form:
• Ho: p = p0
We write Ho: p = p0 to say that we are making the hypothesis that the population proportion has the value of p0. In other words, p is the unknown population proportion and p0 is the number we think p might be for the given situation.
The alternative hypothesis takes one of the following three forms (depending on the context):
• Ha: p < p0 (one-sided)
• Ha: p > p0 (one-sided)
• Ha: p ≠ p0 (two-sided)
The first two possible forms of the alternatives (where the = sign in Ho is challenged by < or >) are called one-sided alternatives, and the third form of alternative (where the = sign in Ho is challenged by ≠) is called a two-sided alternative. To understand the intuition behind these names let’s go back to our examples.
Example 3 (death penalty) is a case where we have a two-sided alternative:
• Ho: p = 0.64 (No change from 2003).
• Ha: p ≠ 0.64 (Some change since 2003).
In this case, in order to reject Ho and accept Ha we will need to get a sample proportion of death penalty supporters which is very different from 0.64 in either direction, either much larger or much smaller than 0.64.
In example 2 (marijuana use) we have a one-sided alternative:
• Ho: p = 0.157 (same as among all college students in the country).
• Ha: p > 0.157 (higher than the national figure).
Here, in order to reject Ho and accept Ha we will need to get a sample proportion of marijuana users which is much higher than 0.157.
Similarly, in example 1 (defective products), where we are testing:
• Ho: p = 0.20 (No change; the repair did not help).
• Ha: p < 0.20 (The repair was effective at reducing the proportion of defective parts).
in order to reject Ho and accept Ha, we will need to get a sample proportion of defective products which is much smaller than 0.20.
Learn by Doing: State Hypotheses (Proportions)
Did I Get This?: State Hypotheses (Proportions)
Proportions (Step 2)
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.33: In a given context, distinguish between situations involving a population proportion and a population mean and specify the correct null and alternative hypothesis for the scenario.
Learning Objectives
LO 4.34: Carry out a complete hypothesis test for a population proportion by hand.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.26: Outline the logic and process of hypothesis testing.
Video
Video: Proportions (Step 2) (12:38)
Step 2. Collect Data, Check Conditions, and Summarize Data
After the hypotheses have been stated, the next step is to obtain a sample (on which the inference will be based), collect relevant data, and summarize them.
It is extremely important that our sample is representative of the population about which we want to draw conclusions. This is ensured when the sample is chosen at random. Beyond the practical issue of ensuring representativeness, choosing a random sample has theoretical importance that we will mention later.
In the case of hypothesis testing for the population proportion (p), we will collect data on the relevant categorical variable from the individuals in the sample and start by calculating the sample proportion p-hat (the natural quantity to calculate when the parameter of interest is p).
Let’s go back to our three examples and add this step to our figures.
EXAMPLE:
Has the proportion of defective products been reduced as a result of the repair?
EXAMPLE:
Is the proportion of marijuana users in the college higher than the national figure?
EXAMPLE:
Did the proportion of U.S. adults who support the death penalty change between 2003 and a later poll?
As we mentioned earlier without going into details, when we summarize the data in hypothesis testing, we go a step beyond calculating the sample statistic and summarize the data with a test statistic. Every test has a test statistic, which to some degree captures the essence of the test. In fact, the p-value, which so far we have looked upon as “the king” (in the sense that everything is determined by it), is actually determined by (or derived from) the test statistic. We will now introduce the test statistic.
The test statistic is a measure of how far the sample proportion p-hat is from the null value p0, the value that the null hypothesis claims is the value of p. In other words, since p-hat is what the data estimates p to be, the test statistic can be viewed as a measure of the “distance” between what the data tells us about p and what the null hypothesis claims p to be.
Let’s use our examples to understand this:
EXAMPLE:
Has the proportion of defective products been reduced as a result of the repair?
The parameter of interest is p, the proportion of defective products following the repair.
The data estimate p to be p-hat = 0.16
The null hypothesis claims that p = 0.20
The data are therefore 0.04 (or 4 percentage points) below the null hypothesis value.
It is hard to evaluate whether this difference of 4% in defective products is enough evidence to say that the repair was effective at reducing the proportion of defective products, but clearly, the larger the difference, the more evidence it is against the null hypothesis. So if, for example, our sample proportion of defective products had been, say, 0.10 instead of 0.16, then I think you would all agree that cutting the proportion of defective products in half (from 20% to 10%) would be extremely strong evidence that the repair was effective at reducing the proportion of defective products.
EXAMPLE:
Is the proportion of marijuana users in the college higher than the national figure?
The parameter of interest is p, the proportion of students in a college who use marijuana.
The data estimate p to be p-hat = 0.19
The null hypothesis claims that p = 0.157
The data are therefore 0.033 (or 3.3 percentage points) above the null hypothesis value.
EXAMPLE:
Did the proportion of U.S. adults who support the death penalty change between 2003 and a later poll?
The parameter of interest is p, the proportion of U.S. adults who support the death penalty for convicted murderers.
The data estimate p to be p-hat = 0.675
The null hypothesis claims that p = 0.64
There is a difference of 0.035 (or 3.5 percentage points) between the data and the null hypothesis value.
The problem with looking only at the difference between the sample proportion, p-hat, and the null value, p0, is that we have not taken into account the variability of our estimator p-hat which, as we know from our study of sampling distributions, depends on the sample size.
For this reason, the test statistic cannot simply be the difference between p-hat and p0, but must be some measure of that difference that also accounts for the sample size. In other words, we need to somehow standardize the difference so that comparison between different situations will be possible. We are very close to revealing the test statistic, but before we construct it, let’s be reminded of the following two facts from probability:
Fact 1: When we take a random sample of size n from a population with population proportion p, then, as long as np ≥ 10 and n(1 - p) ≥ 10, the sample proportion p-hat has approximately a normal distribution with mean p and standard deviation $\sqrt{\dfrac{p(1-p)}{n}}$
Fact 2: The z-score of any normal value (a value that comes from a normal distribution) is calculated by finding the difference between the value and the mean and then dividing that difference by the standard deviation (of the normal distribution associated with the value). The z-score represents how many standard deviations below or above the mean the value is.
Thus, our test statistic should be a measure of how far the sample proportion p-hat is from the null value p0 relative to the variation of p-hat (as measured by the standard error of p-hat).
Recall that the standard error is the standard deviation of the sampling distribution for a given statistic. For p-hat, when the conditions above are met, the sampling distribution is approximately normal, centered at p, with standard error $\sqrt{\dfrac{p(1-p)}{n}}$.
To find the p-value, we will need to determine how surprising our value is assuming the null hypothesis is true. We already have the tools needed for this process from our study of sampling distributions.
EXAMPLE:
Has the proportion of defective products been reduced as a result of the repair?
If we assume the null hypothesis is true, we can specify that the center of the distribution of all possible values of p-hat from samples of size 400 would be 0.20 (our null value).
We can calculate the standard error, assuming p = 0.20 as
$\sqrt{\dfrac{p_{0}\left(1-p_{0}\right)}{n}}=\sqrt{\dfrac{0.2(1-0.2)}{400}}=0.02$
The following picture represents the sampling distribution of all possible values of p-hat for samples of size 400, assuming the true proportion p is 0.20 and our other requirements for the sampling distribution to be normal are met (we will review these conditions shortly).
In order to calculate probabilities for the picture above, we would need to find the z-score associated with our result.
This z-score is the test statistic! In this example, the numerator of our z-score is the difference between p-hat (0.16) and null value (0.20) which we found earlier to be -0.04. The denominator of our z-score is the standard error calculated above (0.02) and thus quickly we find the z-score, our test statistic, to be -2.
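Written out in the same form we will use for the remaining examples, this calculation is:
$z=\dfrac{\hat{p}-p_{0}}{\sqrt{\dfrac{p_{0}\left(1-p_{0}\right)}{n}}}=\dfrac{0.16-0.20}{\sqrt{\dfrac{0.2(1-0.2)}{400}}}=\dfrac{-0.04}{0.02}=-2$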
The sample proportion based upon this data is 2 standard errors below the null value.
Hopefully you now understand more about the reasons we need probability in statistics!!
Now we will formalize the definition and look at our remaining examples before moving on to the next step, which will be to determine if a normal distribution applies and calculate the p-value.
Test Statistic for Hypothesis Tests for One Proportion is:
$z=\dfrac{\hat{p}-p_{0}}{\sqrt{\dfrac{p_{0}\left(1-p_{0}\right)}{n}}}$
It represents the difference between the sample proportion and the null value, measured in units of the standard error of p-hat.
The picture above is a representation of the sampling distribution of p-hat assuming p = p0. In other words, this is a model of how p-hat behaves if we are drawing random samples from a population for which Ho is true.
Notice the center of the sampling distribution is at p0, which is the hypothesized proportion given in the null hypothesis (Ho: p = p0.) We could also mark the axis in standard error units,
$\sqrt{\dfrac{p_{0}\left(1-p_{0}\right)}{n}}$
For example, if our null hypothesis claims that the proportion of U.S. adults supporting the death penalty is 0.64, then the sampling distribution is drawn as if the null is true. We draw a normal distribution centered at 0.64 (p0) with a standard error dependent on sample size,
$\sqrt{\dfrac{0.64(1-0.64)}{n}}$.
Important Comment:
• Note that under the assumption that Ho is true (and if the conditions for the sampling distribution to be normal are satisfied) the test statistic follows a N(0,1) (standard normal) distribution. Another way to say the same thing which is quite common is: “The null distribution of the test statistic is N(0,1).”
By “null distribution,” we mean the distribution under the assumption that Ho is true. As we’ll see and stress again later, the null distribution of the test statistic is what the calculation of the p-value is based on.
Let’s go back to our remaining two examples and find the test statistic in each case:
EXAMPLE:
Is the proportion of marijuana users in the college higher than the national figure?
Since the null hypothesis is Ho: p = 0.157, the standardized (z) score of p-hat = 0.19 is
$z=\dfrac{0.19-0.157}{\sqrt{\dfrac{0.157(1-0.157)}{100}}} \approx 0.91$
This is the value of the test statistic for this example.
We interpret this to mean that, assuming that Ho is true, the sample proportion p-hat = 0.19 is 0.91 standard errors above the null value (0.157).
EXAMPLE:
Did the proportion of U.S. adults who support the death penalty change between 2003 and a later poll?
Since the null hypothesis is Ho: p = 0.64, the standardized (z) score of p-hat = 0.675 is
$z=\dfrac{0.675-0.64}{\sqrt{\dfrac{0.64(1-0.64)}{1000}}} \approx 2.31$
This is the value of the test statistic for this example.
We interpret this to mean that, assuming that Ho is true, the sample proportion p-hat = 0.675 is 2.31 standard errors above the null value (0.64).
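If you would like to check these calculations with software (optional; in this course you can do them by hand or with a statistical package), here is a minimal Python sketch that reproduces the three test statistics above:

```python
from math import sqrt

def z_stat(p_hat, p0, n):
    """One-proportion z test statistic: (p-hat - p0) / sqrt(p0 * (1 - p0) / n)."""
    return (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

print(round(z_stat(0.16, 0.20, 400), 2))    # defective products: -2.0
print(round(z_stat(0.19, 0.157, 100), 2))   # marijuana use: about 0.91
print(round(z_stat(0.675, 0.64, 1000), 2))  # death penalty: about 2.31
```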
Learn by Doing: Proportions (Step 2)
Comments about the Test Statistic:
• We mentioned earlier that to some degree, the test statistic captures the essence of the test. In this case, the test statistic measures the difference between p-hat and p0 in standard errors. This is exactly what this test is about. Get data, and look at the discrepancy between what the data estimates p to be (represented by p-hat) and what Ho claims about p (represented by p0).
• You can think about this test statistic as a measure of evidence in the data against Ho. The larger the test statistic is in magnitude, the “further the data are from Ho” and therefore the more evidence the data provide against Ho.
Learn by Doing: Proportions (Step 2) Understanding the Test Statistic
Did I Get This?: Proportions (Step 2)
Comments:
• It should now be clear why this test is commonly known as the z-test for the population proportion. The name comes from the fact that it is based on a test statistic that is a z-score.
• Recall fact 1 that we used for constructing the z-test statistic. Here is part of it again:
When we take a random sample of size n from a population with population proportion p0, the possible values of the sample proportion p-hat (when certain conditions are met) have approximately a normal distribution with a mean of p0 and a standard deviation of $\sqrt{\dfrac{p_{0}\left(1-p_{0}\right)}{n}}$
This result provides the theoretical justification for constructing the test statistic the way we did, and therefore the assumptions under which this result holds (the random sample and the conditions noted above) are the conditions that our data need to satisfy so that we can use this test. These two conditions are:
i. The sample has to be random.
ii. The conditions under which the sampling distribution of p-hat is normal are met. In other words: $n p_{0} \geq 10$ and $n\left(1-p_{0}\right) \geq 10$.
• Here we will pause to say more about condition (i.) above, the need for a random sample. In the Probability Unit we discussed sampling plans based on probability (such as a simple random sample, cluster, or stratified sampling) that produce a non-biased sample, which can be safely used in order to make inferences about a population. We noted in the Probability Unit that, in practice, other (non-random) sampling techniques are sometimes used when random sampling is not feasible. It is important though, when these techniques are used, to be aware of the type of bias that they introduce, and thus the limitations of the conclusions that can be drawn from them. For our purpose here, we will focus on one such practice, the situation in which a sample is not really chosen randomly, but in the context of the categorical variable that is being studied, the sample is regarded as random. For example, say that you are interested in the proportion of students at a certain college who suffer from seasonal allergies. For that purpose, the students in a large engineering class could be considered as a random sample, since there is nothing about being in an engineering class that makes you more or less likely to suffer from seasonal allergies. Technically, the engineering class is a convenience sample, but it is treated as a random sample in the context of this categorical variable. On the other hand, if you are interested in the proportion of students in the college who have math anxiety, then the class of engineering students clearly could not possibly be viewed as a random sample, since engineering students probably have a much lower incidence of math anxiety than the college population overall.
Learn by Doing: Proportions (Step 2) Valid or Invalid Sampling?
Let’s check the conditions in our three examples.
EXAMPLE:
Has the proportion of defective products been reduced as a result of the repair?
i. The 400 products were chosen at random.
ii. n = 400, p0 = 0.2 and therefore:
$n p_{0}=400(0.2)=80 \geq 10$
$n\left(1-p_{0}\right)=400(1-0.2)=320 \geq 10$
EXAMPLE:
Is the proportion of marijuana users in the college higher than the national figure?
i. The 100 students were chosen at random.
ii. n = 100, p0 = 0.157 and therefore:
$n p_{0}=100(0.157)=15.7 \geq 10$
$n\left(1-p_{0}\right)=100(1-0.157)=84.3 \geq 10$
EXAMPLE:
Did the proportion of U.S. adults who support the death penalty change between 2003 and a later poll?
i. The 1000 adults were chosen at random.
ii. n = 1000, p0 = 0.64 and therefore:
$n p_{0}=1000(0.64)=640 \geq 10$
$n\left(1-p_{0}\right)=1000(1-0.64)=360 \geq 10$
Learn by Doing: Proportions (Step 2) Verify Conditions
Checking that our data satisfy the conditions under which the test can be reliably used is a very important part of the hypothesis testing process. Be sure to consider this for every hypothesis test you conduct in this course and certainly in practice.
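The arithmetic in these condition checks is simple enough to wrap in a tiny helper if you are using software anyway. This is just an optional sketch; the function name and threshold argument are ours, not part of any standard package.

```python
def conditions_met(n, p0, threshold=10):
    """Check that the expected counts of successes and failures under Ho are both large enough."""
    return n * p0 >= threshold and n * (1 - p0) >= threshold

print(conditions_met(400, 0.20))    # defective products: True (80 and 320)
print(conditions_met(100, 0.157))   # marijuana use: True (15.7 and 84.3)
print(conditions_met(1000, 0.64))   # death penalty: True (640 and 360)
```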
The Four Steps in Hypothesis Testing
• STEP 1: State the appropriate null and alternative hypotheses, Ho and Ha.
• STEP 2: Obtain a random sample, collect relevant data, and check whether the data meet the conditions under which the test can be used. If the conditions are met, summarize the data using a test statistic.
• STEP 3: Find the p-value of the test.
• STEP 4: Based on the p-value, decide whether or not the results are statistically significant and draw your conclusions in context.
• Note: In practice, we should always consider the practical significance of the results as well as the statistical significance.
With respect to the z-test for the population proportion that we are currently discussing, we have:
Step 1: Completed
Step 2: Completed
Step 3: This is what we will work on next.
Proportions (Step 3)
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.33: In a given context, distinguish between situations involving a population proportion and a population mean and specify the correct null and alternative hypothesis for the scenario.
Learning Objectives
LO 4.34: Carry out a complete hypothesis test for a population proportion by hand.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.26: Outline the logic and process of hypothesis testing.
Learning Objectives
LO 6.27: Explain what the p-value is and how it is used to draw conclusions.
Video
Video: Proportions (Step 3) (14:46)
Calculators and Tables
Step 3. Finding the P-value of the Test
So far we’ve talked about the p-value at the intuitive level: understanding what it is (or what it measures) and how we use it to draw conclusions about the statistical significance of our results. We will now go more deeply into how the p-value is calculated.
It should be mentioned that eventually we will rely on technology to calculate the p-value for us (as well as the test statistic), but in order to make intelligent use of the output, it is important to first understand the details, and only then let the computer do the calculations for us. Again, our goal is to use this simple example to give you the tools you need to understand the process entirely. Let’s start.
Recall that so far we have said that the p-value is the probability of obtaining data like those observed assuming that Ho is true. Like the test statistic, the p-value is, therefore, a measure of the evidence against Ho. In the case of the test statistic, the larger it is in magnitude (positive or negative), the further p-hat is from p0, the more evidence we have against Ho. In the case of the p-value, it is the opposite; the smaller it is, the more unlikely it is to get data like those observed when Ho is true, the more evidence it is against Ho. One can actually draw conclusions in hypothesis testing just using the test statistic, and as we’ll see the p-value is, in a sense, just another way of looking at the test statistic. The reason that we actually take the extra step in this course and derive the p-value from the test statistic is that even though in this case (the test about the population proportion) and some other tests, the value of the test statistic has a very clear and intuitive interpretation, there are some tests where its value is not as easy to interpret. On the other hand, the p-value keeps its intuitive appeal across all statistical tests.
How is the p-value calculated?
Intuitively, the p-value is the probability of observing data like those observed assuming that Ho is true. Let’s be a bit more formal:
• Since this is a probability question about the data, it makes sense that the calculation will involve the data summary, the test statistic.
• What do we mean by “like” those observed? By “like” we mean “as extreme or even more extreme.”
Putting it all together, we get that in general:
The p-value is the probability of observing a test statistic as extreme as that observed (or even more extreme) assuming that the null hypothesis is true.
By “extreme” we mean extreme in the direction(s) of the alternative hypothesis.
Specifically, for the z-test for the population proportion:
1. If the alternative hypothesis is Ha: p < p0 (less than), then “extreme” means small or less than, and the p-value is: The probability of observing a test statistic as small as that observed or smaller if the null hypothesis is true.
2. If the alternative hypothesis is Ha: p > p0 (greater than), then “extreme” means large or greater than, and the p-value is: The probability of observing a test statistic as large as that observed or larger if the null hypothesis is true.
3. If the alternative is Ha: p ≠ p0 (different from), then “extreme” means extreme in either direction, either small or large (i.e., large in magnitude), and the p-value therefore is: The probability of observing a test statistic as large in magnitude as that observed or larger if the null hypothesis is true. (Examples: If z = -2.5: p-value = probability of observing a test statistic as small as -2.5 or smaller or as large as 2.5 or larger. If z = 1.5: p-value = probability of observing a test statistic as large as 1.5 or larger, or as small as -1.5 or smaller.)
OK, hopefully that makes (some) sense. But how do we actually calculate it?
Recall the important comment from our discussion about our test statistic,
which said that when the null hypothesis is true (i.e., when p = p0), the possible values of our test statistic follow a standard normal (N(0,1), denoted by Z) distribution. Therefore, the p-value calculations (which assume that Ho is true) are simply standard normal distribution calculations for the 3 possible alternative hypotheses.
Alternative Hypothesis is “Less Than”
The probability of observing a test statistic as small as that observed or smaller, assuming that the values of the test statistic follow a standard normal distribution. We will now represent this probability in symbols and also using the normal distribution.
Looking at the shaded region, you can see why this is often referred to as a left-tailed test. We shaded to the left of the test statistic, since less than is to the left.
Alternative Hypothesis is “Greater Than”
The probability of observing a test statistic as large as that observed or larger, assuming that the values of the test statistic follow a standard normal distribution. Again, we will represent this probability in symbols and using the normal distribution
Looking at the shaded region, you can see why this is often referred to as a right-tailed test. We shaded to the right of the test statistic, since greater than is to the right.
Alternative Hypothesis is “Not Equal To”
The probability of observing a test statistic which is as large in magnitude as that observed or larger, assuming that the values of the test statistic follow a standard normal distribution.
This is often referred to as a two-tailed test, since we shaded in both directions.
Next, we will apply this to our three examples. But first, work through the following activities, which should help your understanding.
Learn by Doing: Proportions (Step 3)
Did I Get This?: Proportions (Step 3)
EXAMPLE:
Has the proportion of defective products been reduced as a result of the repair?
The p-value in this case is:
• The probability of observing a test statistic as small as -2 or smaller, assuming that Ho is true.
OR (recalling what the test statistic actually means in this case),
• The probability of observing a sample proportion that is 2 standard deviations or more below the null value (p0 = 0.20), assuming that p0 is the true population proportion.
OR, more specifically,
• The probability of observing a sample proportion of 0.16 or lower in a random sample of size 400, when the true population proportion is p0 = 0.20
In either case, the p-value is found as shown in the following figure:
To find P(Z ≤ -2) we can either use the calculator or table we learned to use in the probability unit for normal random variables. Eventually, after we understand the details, we will use software to run the test for us and the output will give us all the information we need. The p-value that the statistical software provides for this specific example is 0.023. The p-value tells us that it is pretty unlikely (probability of 0.023) to get data like those observed (test statistic of -2 or less) assuming that Ho is true.
EXAMPLE:
Is the proportion of marijuana users in the college higher than the national figure?
The p-value in this case is:
• The probability of observing a test statistic as large as 0.91 or larger, assuming that Ho is true.
OR (recalling what the test statistic actually means in this case),
• The probability of observing a sample proportion that is 0.91 standard deviations or more above the null value (p0 = 0.157), assuming that p0 is the true population proportion.
OR, more specifically,
• The probability of observing a sample proportion of 0.19 or higher in a random sample of size 100, when the true population proportion is p0 = 0.157
In either case, the p-value is found as shown in the following figure:
Again, at this point we can either use the calculator or table to find that the p-value is 0.182; this is P(Z ≥ 0.91).
The p-value tells us that it is not very surprising (probability of 0.182) to get data like those observed (which yield a test statistic of 0.91 or higher) assuming that the null hypothesis is true.
EXAMPLE:
Did the proportion of U.S. adults who support the death penalty change between 2003 and a later poll?
The p-value in this case is:
• The probability of observing a test statistic as large as 2.31 (or larger) or as small as -2.31 (or smaller), assuming that Ho is true.
OR (recalling what the test statistic actually means in this case),
• The probability of observing a sample proportion that is 2.31 standard deviations or more away from the null value (p0 = 0.64), assuming that p0 is the true population proportion.
OR, more specifically,
• The probability of observing a sample proportion as different as 0.675 is from 0.64, or even more different (i.e., as high as 0.675 or higher or as low as 0.605 or lower) in a random sample of size 1,000, when the true population proportion is p0 = 0.64
In either case, the p-value is found as shown in the following figure:
Again, at this point we can either use the calculator or table to find that the p-value is 0.021; this is P(Z ≤ -2.31) + P(Z ≥ 2.31) = 2*P(Z ≥ 2.31).
The p-value tells us that it is pretty unlikely (probability of 0.021) to get data like those observed (test statistic as high as 2.31 or higher or as low as -2.31 or lower) assuming that Ho is true.
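If you want to verify these three p-values with software (optional), here is a minimal Python sketch. It recomputes each test statistic from the sample data first, so that the rounding matches the quoted p-values:

```python
from math import sqrt
from scipy.stats import norm

def z_stat(p_hat, p0, n):
    """One-proportion z test statistic."""
    return (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# Left-tailed (Ha: p < p0): defective products, p-value = P(Z <= z)
print(round(norm.cdf(z_stat(0.16, 0.20, 400)), 3))            # 0.023

# Right-tailed (Ha: p > p0): marijuana use, p-value = P(Z >= z)
print(round(norm.sf(z_stat(0.19, 0.157, 100)), 3))            # 0.182

# Two-tailed (Ha: p != p0): death penalty, p-value = 2 * P(Z >= |z|)
print(round(2 * norm.sf(abs(z_stat(0.675, 0.64, 1000))), 3))  # 0.021
```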
Comment:
• We’ve just seen that finding p-values involves probability calculations about the value of the test statistic assuming that Ho is true. In this case, when Ho is true, the values of the test statistic follow a standard normal distribution (i.e., the sampling distribution of the test statistic when the null hypothesis is true is N(0,1)). Therefore, p-values correspond to areas (probabilities) under the standard normal curve.
Similarly, in any test, p-values are found using the sampling distribution of the test statistic when the null hypothesis is true (also known as the “null distribution” of the test statistic). In this case, it was relatively easy to argue that the null distribution of our test statistic is N(0,1). As we’ll see, in other tests, other distributions come up (like the t-distribution and the F-distribution), which we will just mention briefly, and rely heavily on the output of our statistical package for obtaining the p-values.
We’ve just completed our discussion about the p-value, and how it is calculated both in general and more specifically for the z-test for the population proportion. Let’s go back to the four-step process of hypothesis testing and see what we’ve covered and what still needs to be discussed.
The Four Steps in Hypothesis Testing
• STEP 1: State the appropriate null and alternative hypotheses, Ho and Ha.
• STEP 2: Obtain a random sample, collect relevant data, and check whether the data meet the conditions under which the test can be used. If the conditions are met, summarize the data using a test statistic.
• STEP 3: Find the p-value of the test.
• STEP 4: Based on the p-value, decide whether or not the results are statistically significant and draw your conclusions in context.
• Note: In practice, we should always consider the practical significance of the results as well as the statistical significance.
With respect to the z-test for the population proportion:
Step 1: Completed
Step 2: Completed
Step 3: Completed
Step 4. This is what we will work on next.
Learn by Doing: Proportions (Step 3) Understanding P-values
Proportions (Step 4 & Summary)
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.33: In a given context, distinguish between situations involving a population proportion and a population mean and specify the correct null and alternative hypothesis for the scenario.
Learning Objectives
LO 4.34: Carry out a complete hypothesis test for a population proportion by hand.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.26: Outline the logic and process of hypothesis testing.
Learning Objectives
LO 6.27: Explain what the p-value is and how it is used to draw conclusions.
Video
Video: Proportions (Step 4 & Summary) (4:30)
Step 4. Drawing Conclusions Based on the P-Value
This last part of the four-step process of hypothesis testing is the same across all statistical tests, and actually, we’ve already said basically everything there is to say about it, but it can’t hurt to say it again.
The p-value is a measure of how much evidence the data present against Ho. The smaller the p-value, the more evidence the data present against Ho.
We already mentioned that what determines what constitutes enough evidence against Ho is the significance level (α, alpha), a cutoff point below which the p-value is considered small enough to reject Ho in favor of Ha. The most commonly used significance level is 0.05.
• If p-value ≤ 0.05 then WE REJECT Ho
• Conclusion: There IS enough evidence that Ha is True
• If p-value > 0.05 then WE FAIL TO REJECT Ho
• Conclusion: There IS NOT enough evidence that Ha is True
Where instead of Ha is True, we write what this means in the words of the problem, in other words, in the context of the current scenario.
It is important to mention again that this step has essentially two sub-steps:
• (i) Based on the p-value, determine whether or not the results are statistically significant (i.e., the data present enough evidence to reject Ho).
• (ii) State your conclusions in the context of the problem.
Note: We must always still consider whether the results have any practical significance, particularly if they are statistically significant, as a statistically significant result that has no practical use is essentially meaningless!
Let’s go back to our three examples and draw conclusions.
EXAMPLE:
Has the proportion of defective products been reduced as a result of the repair?
We found that the p-value for this test was 0.023.
Since 0.023 is small (in particular, 0.023 < 0.05), the data provide enough evidence to reject Ho.
Conclusion:
• There IS enough evidence that the proportion of defective products is less than 20% after the repair.
The following figure is the complete story of this example, and includes all the steps we went through, starting from stating the hypotheses and ending with our conclusions:
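As a quick recap of the computation for this example (using the sample summary that appears later in this section: p-hat = 0.16 based on n = 400, with null value p0 = 0.20):

$z=\dfrac{0.16-0.20}{\sqrt{\dfrac{0.20(1-0.20)}{400}}}=-2.0, \quad \text{p-value}=P(Z \leq -2.0) \approx 0.023$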
EXAMPLE:
Is the proportion of marijuana users in the college higher than the national figure?
We found that the p-value for this test was 0.182.
Since 0.182 is not small (in particular, 0.182 > 0.05), the data do not provide enough evidence to reject Ho.
Conclusion:
• There IS NOT enough evidence that the proportion of students at the college who use marijuana is higher than the national figure.
Here is the complete story of this example:
[Figure summary: A large circle represents the population of students at the college; we want to know p, the population proportion of students using marijuana. The hypotheses are Ho: p = 0.157 and Ha: p > 0.157. A smaller circle represents a sample of 100 students, of whom 19 use marijuana, so p-hat = 19/100 = 0.19, z = 0.91, and p-value = 0.182. Since the p-value is too large, we conclude that Ho cannot be rejected.]
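For reference, the numbers in this summary come from the standard one-proportion z calculation:

$z=\dfrac{0.19-0.157}{\sqrt{\dfrac{0.157(1-0.157)}{100}}} \approx 0.91, \quad \text{p-value}=P(Z \geq 0.91) \approx 0.182$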
Learn by Doing: Proportions (Step 4)
EXAMPLE:
Did the proportion of U.S. adults who support the death penalty change between 2003 and a later poll?
We found that the p-value for this test was 0.021.
Since 0.021 is small (in particular, 0.021 < 0.05), the data provide enough evidence to reject Ho
Conclusion:
• There IS enough evidence that the proportion of adults who support the death penalty for convicted murderers has changed since 2003.
Here is the complete story of this example:
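For reference, the numbers for this example (p-hat = 0.675 based on n = 1,000, as described later in this section) come from:

$z=\dfrac{0.675-0.64}{\sqrt{\dfrac{0.64(1-0.64)}{1000}}} \approx 2.31, \quad \text{p-value}=2P(Z \geq 2.31) \approx 0.021$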
Did I Get This?: Proportions (Step 4)
Many Students Wonder: Hypothesis Testing for the Population Proportion
Many students wonder why 5% is often selected as the significance level in hypothesis testing, and why 1% is the next most typical level. This is largely due to just convenience and tradition.
When Ronald Fisher (one of the founders of modern statistics) published one of his tables, he used a mathematically convenient scale that included 5% and 1%. Later, these same 5% and 1% levels were used by other people, in part just because Fisher was so highly esteemed. But mostly these are arbitrary levels.
The idea of selecting some sort of relatively small cutoff was historically important in the development of statistics; but it’s important to remember that there is really a continuous range of increasing confidence towards the alternative hypothesis, not a single all-or-nothing value. There isn’t much meaningful difference, for instance, between a p-value of .049 or .051, and it would be foolish to declare one case definitely a “real” effect and to declare the other case definitely a “random” effect. In either case, the study results were roughly 5% likely by chance if there’s no actual effect.
Whether such a p-value is sufficient for us to reject a particular null hypothesis ultimately depends on the risk of making the wrong decision, and the extent to which the hypothesized effect might contradict our prior experience or previous studies.
Let’s Summarize!!
We have now completed going through the four steps of hypothesis testing, and in particular we learned how they are applied to the z-test for the population proportion. Here is a brief summary:
• Step 1: State the hypotheses
State the null hypothesis:
Ho: p = p0
State the alternative hypothesis:
Ha: p < p0 (one-sided)
Ha: p > p0 (one-sided)
Ha: p ≠ p0 (two-sided)
where the choice of the appropriate alternative (out of the three) is usually quite clear from the context of the problem. If you feel it is not clear, it is most likely a two-sided problem. Students are usually good at recognizing the “more than” and “less than” terminology, but a “not equal to” (two-sided) difference can sometimes be more difficult to spot, often because you have preconceived ideas of how you think it should be! Use only the information given in the problem.
• Step 2: Obtain data, check conditions, and summarize data
Obtain data from a sample and:
(i) Check whether the data satisfy the conditions which allow you to use this test.
random sample (or at least a sample that can be considered random in context)
the conditions under which the sampling distribution of p-hat is normal are met
(ii) Calculate the sample proportion p-hat, and summarize the data using the test statistic:
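For reference, this is the standard one-proportion z statistic:

$z=\dfrac{\hat{p}-p_{0}}{\sqrt{\dfrac{p_{0}\left(1-p_{0}\right)}{n}}}$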
(Recall: This standardized test statistic represents how many standard deviations above or below p0 our sample proportion p-hat is.)
• Step 3: Find the p-value of the test by using the test statistic as follows
IMPORTANT FACT: In all future tests, we will rely on software to obtain the p-value.
When the alternative hypothesis is “less than,” the p-value is the probability of observing a test statistic as small as that observed or smaller, assuming that the values of the test statistic follow a standard normal distribution. We will now represent this probability in symbols and also using the normal distribution.
Looking at the shaded region, you can see why this is often referred to as a left-tailed test. We shaded to the left of the test statistic, since less than is to the left.
When the alternative hypothesis is “greater than,” the p-value is the probability of observing a test statistic as large as that observed or larger, assuming that the values of the test statistic follow a standard normal distribution. Again, we will represent this probability in symbols and using the normal distribution.
Looking at the shaded region, you can see why this is often referred to as a right-tailed test. We shaded to the right of the test statistic, since greater than is to the right.
When the alternative hypothesis is “not equal to,” the p-value is the probability of observing a test statistic which is as large in magnitude as that observed or larger, assuming that the values of the test statistic follow a standard normal distribution.
This is often referred to as a two-tailed test, since we shaded in both directions.
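In symbols, writing z for the observed value of the test statistic, the three cases are:

$\text{less than: } P(Z \leq z), \quad \text{greater than: } P(Z \geq z), \quad \text{not equal to: } 2P(Z \geq |z|)$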
• Step 4: Conclusion
Reach a conclusion first regarding the statistical significance of the results, and then determine what it means in the context of the problem.
If p-value ≤ 0.05 then WE REJECT Ho
Conclusion: There IS enough evidence that Ha is True
If p-value > 0.05 then WE FAIL TO REJECT Ho
Conclusion: There IS NOT enough evidence that Ha is True
Recall that: If the p-value is small (in particular, smaller than the significance level, which is usually 0.05), the results are statistically significant (in the sense that there is a statistically significant difference between what was observed in the sample and what was claimed in Ho), and so we reject Ho.
If the p-value is not small, we do not have enough statistical evidence to reject Ho, and so we continue to believe that Ho may be true. (Remember: In hypothesis testing we never “accept” Ho).
Finally, in practice, we should always consider the practical significance of the results as well as the statistical significance.
Learn by Doing: Z-Test for a Population Proportion
What’s next?
Before we move on to the next test, we are going to use the z-test for proportions to bring up and illustrate a few more very important issues regarding hypothesis testing. This might also be a good time to review the concepts of Type I error, Type II error, and Power before continuing on.
More about Hypothesis Testing
CO-1: Describe the roles biostatistics serves in the discipline of public health.
Learning Objectives
LO 1.11: Recognize the distinction between statistical significance and practical significance.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.26: Outline the logic and process of hypothesis testing.
Learning Objectives
LO 6.30: Use a confidence interval to determine the correct conclusion to the associated two-sided hypothesis test.
Video
Video: More about Hypothesis Testing (18:25)
The issues regarding hypothesis testing that we will discuss are:
1. The effect of sample size on hypothesis testing.
2. Statistical significance vs. practical importance.
3. Hypothesis testing and confidence intervals—how are they related?
Let’s begin.
1. The Effect of Sample Size on Hypothesis Testing
We have already seen the effect that the sample size has on inference, when we discussed point and interval estimation for the population mean (μ, mu) and population proportion (p). Intuitively …
Larger sample sizes give us more information to pin down the true nature of the population. We can therefore expect the sample mean and sample proportion obtained from a larger sample to be closer to the population mean and proportion, respectively. As a result, for the same level of confidence, we can report a smaller margin of error, and get a narrower confidence interval. What we’ve seen, then, is that larger sample size gives a boost to how much we trust our sample results.
In hypothesis testing, larger sample sizes have a similar effect. We have also discussed that the power of our test increases when the sample size increases, all else remaining the same. This means, we have a better chance to detect the difference between the true value and the null value for larger samples.
The following two examples will illustrate that a larger sample size provides more convincing evidence (the test has greater power), and how the evidence manifests itself in hypothesis testing. Let’s go back to our example 2 (marijuana use at a certain liberal arts college).
EXAMPLE:
Is the proportion of marijuana users in the college higher than the national figure?
[Figure summary: A large circle represents the population of students at the college; we want to know p, the population proportion of students using marijuana. The hypotheses are Ho: p = 0.157 and Ha: p > 0.157. A smaller circle represents a sample of 100 students, of whom 19 use marijuana, so p-hat = 19/100 = 0.19, z = 0.91, and p-value = 0.182. Since the p-value is too large, we conclude that Ho cannot be rejected.]
We do not have enough evidence to conclude that the proportion of students at the college who use marijuana is higher than the national figure.
Now, let’s increase the sample size.
There are rumors that students in a certain liberal arts college are more inclined to use drugs than U.S. college students in general. Suppose that in a simple random sample of 400 students from the college, 76 admitted to marijuana use. Do the data provide enough evidence to conclude that the proportion of marijuana users among the students in the college (p) is higher than the national proportion, which is 0.157? (Reported by the Harvard School of Public Health).
[Figure summary: A large circle represents the population of students at the college; we want to know p, the population proportion of students using marijuana. The hypotheses are Ho: p = 0.157 and Ha: p > 0.157. A smaller circle represents a sample of 400 students, of whom 76 use marijuana. Conditions are met to use our method, so p-hat = 76/400 = 0.19, z = 1.81, and p-value = 0.035. The p-value is low enough to let us conclude that we can reject Ho.]
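The calculation behind this summary parallels the one for the smaller sample, with only the sample size changed:

$z=\dfrac{0.19-0.157}{\sqrt{\dfrac{0.157(1-0.157)}{400}}} \approx 1.81, \quad \text{p-value}=P(Z \geq 1.81) \approx 0.035$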
Our results here are statistically significant. In other words, in example 2* the data provide enough evidence to reject Ho.
• Conclusion: There is enough evidence that the proportion of marijuana users at the college is higher than among all U.S. students.
What do we learn from this?
We see that sample results that are based on a larger sample carry more weight (have greater power).
In example 2, we saw that a sample proportion of 0.19 based on a sample of size of 100 was not enough evidence that the proportion of marijuana users in the college is higher than 0.157. Recall, from our general overview of hypothesis testing, that this conclusion (not having enough evidence to reject the null hypothesis) doesn’t mean the null hypothesis is necessarily true (so, we never “accept” the null); it only means that the particular study didn’t yield sufficient evidence to reject the null. It might be that the sample size was simply too small to detect a statistically significant difference.
However, in example 2*, we saw that when the sample proportion of 0.19 is obtained from a sample of size 400, it carries much more weight, and in particular, provides enough evidence that the proportion of marijuana users in the college is higher than 0.157 (the national figure). In this case, the sample size of 400 was large enough to detect a statistically significant difference.
The following activity will allow you to practice the ideas and terminology used in hypothesis testing when a result is not statistically significant.
Learn by Doing: Interpreting Non-significant Results
2. Statistical significance vs. practical importance.
Now, we will address the issue of statistical significance versus practical importance (which also involves issues of sample size).
The following activity will let you explore the effect of the sample size on the statistical significance of the results yourself, and more importantly will discuss issue 2: Statistical significance vs. practical importance.
Important Fact: In general, with a sufficiently large sample size you can make any result that has very little practical importance statistically significant! A large sample size alone does NOT make a “good” study!!
This suggests that when interpreting the results of a test, you should always think not only about the statistical significance of the results but also about their practical importance.
Learn by Doing: Statistical vs. Practical Significance
3. Hypothesis Testing and Confidence Intervals
The last topic we want to discuss is the relationship between hypothesis testing and confidence intervals. Even though the flavor of these two forms of inference is different (confidence intervals estimate a parameter, and hypothesis testing assesses the evidence in the data against one claim and in favor of another), there is a strong link between them.
We will explain this link (using the z-test and confidence interval for the population proportion), and then explain how confidence intervals can be used after a test has been carried out.
Recall that a confidence interval gives us a set of plausible values for the unknown population parameter. We may therefore examine a confidence interval to informally decide if a proposed value of population proportion seems plausible.
For example, if a 95% confidence interval for p, the proportion of all U.S. adults already familiar with Viagra in May 1998, was (0.61, 0.67), then it seems clear that we should be able to reject a claim that only 50% of all U.S. adults were familiar with the drug, since based on the confidence interval, 0.50 is not one of the plausible values for p.
In fact, the information provided by a confidence interval can be formally related to the information provided by a hypothesis test. (Comment: The relationship is more straightforward for two-sided alternatives, and so we will not present results for the one-sided cases.)
Suppose we want to carry out the two-sided test:
• Ho: p = p0
• Ha: p ≠ p0
using a significance level of 0.05.
An alternative way to perform this test is to find a 95% confidence interval for p and check:
• If p0 falls outside the confidence interval, reject Ho.
• If p0 falls inside the confidence interval, do not reject Ho.
In other words,
• If p0 is not one of the plausible values for p, we reject Ho.
• If p0 is a plausible value for p, we cannot reject Ho.
(Comment: Similarly, the results of a test using a significance level of 0.01 can be related to the 99% confidence interval.)
Let’s look at an example:
EXAMPLE:
Recall example 3, where we wanted to know whether the proportion of U.S. adults who support the death penalty for convicted murderers has changed since 2003, when it was 0.64.
We are testing:
• Ho: p = 0.64 (No change from 2003).
• Ha: p ≠ 0.64 (Some change since 2003).
and as the figure reminds us, we took a sample of 1,000 U.S. adults, and the data told us that 675 supported the death penalty for convicted murderers (p-hat = 0.675).
A 95% confidence interval for p, the proportion of all U.S. adults who support the death penalty, is:
$0.675 \pm 1.96 \sqrt{\dfrac{0.675(1-0.675)}{1000}} \approx 0.675 \pm 0.029=(0.646,0.704)$
Since the 95% confidence interval for p does not include 0.64 as a plausible value for p, we can reject Ho and conclude (as we did before) that there is enough evidence that the proportion of U.S. adults who support the death penalty for convicted murderers has changed since 2003.
EXAMPLE:
You and your roommate are arguing about whose turn it is to clean the apartment. Your roommate suggests that you settle this by tossing a coin and takes one out of a locked box he has on the shelf. Suspecting that the coin might not be fair, you decide to test it first. You toss the coin 80 times, thinking to yourself that if, indeed, the coin is fair, you should get around 40 heads. Instead you get 48 heads. You are puzzled. You are not sure whether getting 48 heads out of 80 is enough evidence to conclude that the coin is unbalanced, or whether this a result that could have happened just by chance when the coin is fair.
Statistics can help you answer this question.
Let p be the true proportion (probability) of heads. We want to test whether the coin is fair or not.
We are testing:
• Ho: p = 0.5 (the coin is fair).
• Ha: p ≠ 0.5 (the coin is not fair).
The data we have are that out of n = 80 tosses, we got 48 heads, or that the sample proportion of heads is p-hat = 48/80 = 0.6.
A 95% confidence interval for p, the true proportion of heads for this coin, is:
$0.6 \pm 1.96 \sqrt{\dfrac{0.6(1-0.6)}{80}} \approx 0.6 \pm 0.11=(0.49,0.71)$
Since in this case 0.5 is one of the plausible values for p, we cannot reject Ho. In other words, the data do not provide enough evidence to conclude that the coin is not fair.
Comment
The context of the last example is a good opportunity to bring up an important point that was discussed earlier.
Even though we use 0.05 as a cutoff to guide our decision about whether the results are statistically significant, we should not treat it as inviolable and we should always add our own judgment. Let’s look at the last example again.
It turns out that the p-value of this test is 0.0734. In other words, it is maybe not extremely unlikely, but it is quite unlikely (probability of 0.0734) that when you toss a fair coin 80 times you’ll get a sample proportion of heads of 48/80 = 0.6 (or even more extreme). It is true that using the 0.05 significance level (cutoff), 0.0734 is not considered small enough to conclude that the coin is not fair. However, if you really don’t want to clean the apartment, the p-value might be small enough for you to ask your roommate to use a different coin, or to provide one yourself!
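Although this course relies on dedicated statistical software for p-values, the calculation itself is simple enough to reproduce with any tool. Here is a minimal sketch (assuming Python with the scipy package, which is not the software used in this course; the variable names are just for illustration) that reproduces the numbers for the coin example:

```python
from math import sqrt
from scipy.stats import norm

p0, n, heads = 0.5, 80, 48      # null value, number of tosses, observed heads
p_hat = heads / n               # sample proportion = 0.6

# one-proportion z statistic: the standard error is computed under Ho (uses p0)
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# two-sided alternative: add up the area in both tails of the standard normal
p_value = 2 * norm.sf(abs(z))

print(round(z, 2), round(p_value, 4))   # approximately 1.79 and 0.074
```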
Did I Get This?: Connection between Confidence Intervals and Hypothesis Tests
Did I Get This?: Hypothesis Tests for Proportions (Extra Practice)
Here is our final point on this subject:
When the data provide enough evidence to reject Ho, we can conclude (depending on the alternative hypothesis) that the population proportion is either less than, greater than, or not equal to the null value p0. However, we do not get a more informative statement about its actual value. It might be of interest, then, to follow the test with a 95% confidence interval that will give us more insight into the actual value of p.
EXAMPLE:
In our example 3,
we concluded that the proportion of U.S. adults who support the death penalty for convicted murderers has changed since 2003, when it was 0.64. It is probably of interest not only to know that the proportion has changed, but also to estimate what it has changed to. We’ve calculated the 95% confidence interval for p on the previous page and found that it is (0.646, 0.704).
We can combine our conclusions from the test and the confidence interval and say:
Data provide evidence that the proportion of U.S. adults who support the death penalty for convicted murderers has changed since 2003, and we are 95% confident that it is now between 0.646 and 0.704. (i.e. between 64.6% and 70.4%).
EXAMPLE:
Let’s look at our example 1 to see how a confidence interval following a test might be insightful in a different way.
Here is a summary of example 1:
We conclude that as a result of the repair, the proportion of defective products has been reduced to below 0.20 (which was the proportion prior to the repair). It is probably of great interest to the company not only to know that the proportion of defective products has been reduced, but also to estimate what it has been reduced to, to get a better sense of how effective the repair was. A 95% confidence interval for p in this case is:
$0.16 \pm 1.96 \sqrt{\dfrac{0.16(1-0.16)}{400}} \approx 0.16 \pm 0.036=(0.124,0.196)$
We can therefore say that the data provide evidence that the proportion of defective products has been reduced, and we are 95% confident that it has been reduced to somewhere between 12.4% and 19.6%. This is very useful information, since it tells us that even though the results were significant (i.e., the repair reduced the number of defective products), the repair might not have been effective enough, if it managed to reduce the number of defective products only to the range provided by the confidence interval. This, of course, ties back in to the idea of statistical significance vs. practical importance that we discussed earlier. Even though the results are statistically significant (Ho was rejected), practically speaking, the repair might still be considered ineffective.
Learn by Doing: Hypothesis Tests and Confidence Intervals
Let’s summarize
Even though this portion of the current section is about the z-test for population proportion, it is loaded with very important ideas that apply to hypothesis testing in general. We’ve already summarized the details that are specific to the z-test for proportions, so the purpose of this summary is to highlight the general ideas.
The process of hypothesis testing has four steps:
I. Stating the null and alternative hypotheses (Ho and Ha).
II. Obtaining a random sample (or at least one that can be considered random) and collecting data. Using the data:
Check that the conditions under which the test can be reliably used are met.
Summarize the data using a test statistic.
• The test statistic is a measure of the evidence in the data against Ho. The larger the test statistic is in magnitude, the more evidence the data present against Ho.
III. Finding the p-value of the test. The p-value is the probability of getting data like those observed (or even more extreme) assuming that the null hypothesis is true, and is calculated using the null distribution of the test statistic. The p-value is a measure of the evidence against Ho. The smaller the p-value, the more evidence the data present against Ho.
IV. Making conclusions.
Conclusions about the statistical significance of the results:
If the p-value is small, the data present enough evidence to reject Ho (and accept Ha).
If the p-value is not small, the data do not provide enough evidence to reject Ho.
To help guide our decision, we use the significance level as a cutoff for what is considered a small p-value. The significance cutoff is usually set at 0.05.
Conclusions should then be provided in the context of the problem.
Additional Important Ideas about Hypothesis Testing
• Results that are based on a larger sample carry more weight; for the same observed effect, increasing the sample size makes the results more statistically significant.
• Even a very small and practically unimportant effect becomes statistically significant with a large enough sample size. The distinction between statistical significance and practical importance should therefore always be considered.
• Confidence intervals can be used in order to carry out two-sided tests (95% confidence for the 0.05 significance level). If the null value is not included in the confidence interval (i.e., is not one of the plausible values for the parameter), we have enough evidence to reject Ho. Otherwise, we cannot reject Ho.
• If the results are statistically significant, it might be of interest to follow up the tests with a confidence interval in order to get insight into the actual value of the parameter of interest.
• It is important to be aware that there are two types of errors in hypothesis testing (Type I and Type II) and that the power of a statistical test is an important measure of how likely we are to be able to detect a difference of interest to us in a particular problem.
Means (All Steps)
NOTE: Beginning on this page, the Learn By Doing and Did I Get This activities are presented as interactive PDF files. The interactivity may not work on mobile devices or with certain PDF viewers. Use an official ADOBE product such as ADOBE READER.
If you have any issues with the Learn By Doing or Did I Get This interactive PDF files, you can view all of the questions and answers presented on this page in this document:
Learning Objectives
LO 4.33: In a given context, distinguish between situations involving a population proportion and a population mean and specify the correct null and alternative hypothesis for the scenario.
CO-6: Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.
Learning Objectives
LO 6.26: Outline the logic and process of hypothesis testing.
Learning Objectives
LO 6.27: Explain what the p-value is and how it is used to draw conclusions.
Learning Objectives
LO 6.30: Use a confidence interval to determine the correct conclusion to the associated two-sided hypothesis test.
Video
Video: Means (All Steps) (13:11)
So far we have talked about the logic behind hypothesis testing and then illustrated how this process proceeds in practice, using the z-test for the population proportion (p).
We are now moving on to discuss testing for the population mean (μ, mu), which is the parameter of interest when the variable of interest is quantitative.
A few comments about the structure of this section:
• The basic groundwork for carrying out hypothesis tests has already been laid in our general discussion and in our presentation of tests about proportions.
Therefore we can easily modify the four steps to carry out tests about means instead, without going into all of the details again.
We will use this approach for all future tests so be sure to go back to the discussion in general and for proportions to review the concepts in more detail.
• In our discussion about confidence intervals for the population mean, we made the distinction between whether the population standard deviation, σ (sigma) was known or if we needed to estimate this value using the sample standard deviation, s.
In this section, we will only discuss the second case as in most realistic settings we do not know the population standard deviation.
In this case we need to use the t-distribution instead of the standard normal distribution for the probability aspects of confidence intervals (choosing table values) and hypothesis tests (finding p-values).
• Although we will discuss some theoretical or conceptual details for some of the analyses we will learn, from this point on we will rely on software to conduct tests and calculate confidence intervals for us, while we focus on understanding which methods are used for which situations and what the results say in context.
If you are interested in more information about the z-test, where we assume the population standard deviation σ (sigma) is known, you can review the Carnegie Mellon Open Learning Statistics Course (you will need to click “ENTER COURSE”).
Like any other tests, the t-test for the population mean follows the four-step process:
• STEP 1: Stating the hypotheses Ho and Ha.
• STEP 2: Collecting relevant data, checking that the data satisfy the conditions which allow us to use this test, and summarizing the data using a test statistic.
• STEP 3: Finding the p-value of the test, the probability of obtaining data as extreme as those collected (or even more extreme, in the direction of the alternative hypothesis), assuming that the null hypothesis is true. In other words, how likely is it that the only reason for getting data like those observed is sampling variability (and not because Ho is not true)?
• STEP 4: Drawing conclusions, assessing the statistical significance of the results based on the p-value, and stating our conclusions in context. (Do we or don’t we have evidence to reject Ho and accept Ha?)
• Note: In practice, we should also always consider the practical significance of the results as well as the statistical significance.
We will now go through the four steps specifically for the t-test for the population mean and apply them to our two examples.
Tests About μ (mu) When σ (sigma) is Unknown – The t-test for a Population Mean
Only in a few cases is it reasonable to assume that the population standard deviation, σ (sigma), is known and so we will not cover hypothesis tests in this case. We discussed both cases for confidence intervals so that we could still calculate some confidence intervals by hand.
For this and all future tests we will rely on software to obtain our summary statistics, test statistics, and p-values for us.
The case where σ (sigma) is unknown is much more common in practice. What can we use to replace σ (sigma)? If you don’t know the population standard deviation, the best you can do is find the sample standard deviation, s, and use it instead of σ (sigma). (Note that this is exactly what we did when we discussed confidence intervals).
Is that it? Can we just use s instead of σ (sigma), and the rest is the same as the previous case? Unfortunately, it’s not that simple, but not very complicated either.
Here, when we use the sample standard deviation, s, as our estimate of σ (sigma) we can no longer use a normal distribution to find the cutoff for confidence intervals or the p-values for hypothesis tests.
Instead we must use the t-distribution (with n-1 degrees of freedom) to obtain the p-value for this test.
We discussed this issue for confidence intervals. We will talk more about the t-distribution after we discuss the details of this test for those who are interested in learning more.
It isn’t really necessary for us to understand this distribution but it is important that we use the correct distributions in practice via our software.
We will wait until UNIT 4B to look at how to accomplish this test in the software. For now focus on understanding the process and drawing the correct conclusions from the p-values given.
Now let’s go through the four steps in conducting the t-test for the population mean.
Step 1: State the hypotheses
The null and alternative hypotheses for the t-test for the population mean (μ, mu) have exactly the same structure as the hypotheses for z-test for the population proportion (p):
The null hypothesis has the form:
• Ho: μ = μ0 (mu = mu_zero)
(where μ0 (mu_zero) is often called the null value)
The alternative hypothesis takes one of the following three forms (depending on the context):
• Ha: μ < μ0 (mu < mu_zero) (one-sided)
• Ha: μ > μ0 (mu > mu_zero) (one-sided)
• Ha: μ ≠ μ0 (mu ≠ mu_zero) (two-sided)
where the choice of the appropriate alternative (out of the three) is usually quite clear from the context of the problem.
If you feel it is not clear, it is most likely a two-sided problem. Students are usually good at recognizing the “more than” and “less than” terminology, but a “not equal to” (two-sided) difference can sometimes be more difficult to spot, often because you have preconceived ideas of how you think it should be! You also cannot use the information from the sample to help you determine the hypotheses; we would not know our data when we originally asked the question.
Now try it yourself. Here are a few exercises on stating the hypotheses for tests for a population mean.
Learn by Doing: State the Hypotheses for a test for a population mean
Here are a few more activities for practice.
Did I Get This?: State the Hypotheses for a test for a population mean
When setting up hypotheses, be sure to use only the information in the research question. We cannot use our sample data to help us set up our hypotheses.
For this test, it is still important to correctly choose the alternative hypothesis as “less than”, “greater than”, or “different” although generally in practice two-sample tests are used.
Step 2: Obtain data, check conditions, and summarize data
Obtain data from a sample:
• In this step we would obtain data from a sample. This is not something we do much of in courses but it is done very often in practice!
Check the conditions:
• Then we check the conditions under which this test (the t-test for one population mean) can be safely carried out – which are:
• The sample is random (or at least can be considered random in context).
• We are in one of the three situations marked with a green check mark in the following table (which ensure that x-bar is at least approximately normal and the test statistic using the sample standard deviation, s, is therefore a t-distribution with n-1 degrees of freedom – proving this is beyond the scope of this course):
• For large samples, we don’t need to check for normality in the population. We can rely on the sample size as the basis for the validity of using this test.
• For small samples, we need to have data from a normal population in order for the p-values and confidence intervals to be valid.
In practice, for small samples, it can be very difficult to determine if the population is normal. Here is a simulation to give you a better understanding of the difficulties.
Now try it yourself with a few activities.
Comments:
• It is always a good idea to look at the data and get a sense of their pattern regardless of whether you actually need to do it in order to assess whether the conditions are met.
• This idea of looking at the data is relevant to all tests in general. In the next module—inference for relationships—conducting exploratory data analysis before inference will be an integral part of the process.
Here are a few more problems for extra practice.
Calculate Test Statistic
Assuming that the conditions are met, we calculate the sample mean x-bar and the sample standard deviation, s (which estimates σ (sigma)), and summarize the data with a test statistic.
The test statistic for the t-test for the population mean is:
$t=\dfrac{\bar{x} - \mu_0}{s/ \sqrt{n}}$
Recall that such a standardized test statistic represents how many standard deviations above or below μ0 (mu_zero) our sample mean x-bar is.
Therefore our test statistic is a measure of how different our data are from what is claimed in the null hypothesis. This is an idea that we mentioned in the previous test as well.
Again we will rely on the p-value to determine how unusual our data would be if the null hypothesis is true.
As we mentioned, the test statistic in the t-test for a population mean does not follow a standard normal distribution. Rather, it follows another bell-shaped distribution called the t-distribution.
We will present the details of this distribution at the end for those interested but for now we will work on the process of the test.
Here are a few important facts.
• In statistical language we say that the null distribution of our test statistic is the t-distribution with (n-1) degrees of freedom. In other words, when Ho is true (i.e., when μ = μ0 (mu = mu_zero)), our test statistic has a t-distribution with (n-1) d.f., and this is the distribution under which we find p-values.
• For a large sample size (n), the null distribution of the test statistic is approximately Z, so whether we use t(n – 1) or Z to calculate the p-values does not make a big difference. However, software will use the t-distribution regardless of the sample size and so will we.
Although we will not calculate p-values by hand for this test, we can still easily calculate the test statistic.
Try it yourself:
From this point in this course and certainly in practice we will allow the software to calculate our test statistics and we will use the p-values provided to draw our conclusions.
Step 3: Find the p-value of the test by using the test statistic as follows
We will use software to obtain the p-value for this (and all future) tests but here are the images illustrating how the p-value is calculated in each of the three cases corresponding to the three choices for our alternative hypothesis.
Note that due to the symmetry of the t distribution, for a given value of the test statistic t, the p-value for the two-sided test is twice as large as the p-value of either of the one-sided tests. The same thing happens when p-values are calculated under the t distribution as when they are calculated under the Z distribution.
[Figure summary: Three t(n-1) curves illustrating how the p-value is found. Top (Ha: mu < mu_zero): the area under the curve to the left of the observed t is the p-value. Middle (Ha: mu > mu_zero): the area under the curve to the right of the observed t is the p-value. Bottom (Ha: mu ≠ mu_zero): the sum of the areas to the left of -|t| and to the right of |t| is the p-value.]
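In symbols, writing t for the observed value of the test statistic, the three cases are:

$\text{less than: } P\left(t_{n-1} \leq t\right), \quad \text{greater than: } P\left(t_{n-1} \geq t\right), \quad \text{not equal to: } 2P\left(t_{n-1} \geq |t|\right)$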
We will show some examples of p-values obtained from software in our examples. For now let’s continue our summary of the steps.
Step 4: Conclusion
As usual, based on the p-value (and some significance level of choice) we assess the statistical significance of results, and draw our conclusions in context.
To review what we have said before:
If p-value ≤ 0.05 then WE REJECT Ho
• Conclusion: There IS enough evidence that Ha is True
If p-value > 0.05 then WE FAIL TO REJECT Ho
• Conclusion: There IS NOT enough evidence that Ha is True
Where instead of Ha is True, we write what this means in the words of the problem, in other words, in the context of the current scenario.
This step has essentially two sub-steps:
(i) Based on the p-value, determine whether or not the results are statistically significant (i.e., the data present enough evidence to reject Ho).
(ii) State your conclusions in the context of the problem.
We are now ready to look at two examples.
EXAMPLE:
A certain prescription medicine is supposed to contain an average of 250 parts per million (ppm) of a certain chemical. If the concentration is higher than this, the drug may cause harmful side effects; if it is lower, the drug may be ineffective.
The manufacturer runs a check to see if the mean concentration in a large shipment conforms to the target level of 250 ppm or not.
A simple random sample of 100 portions is tested, and the sample mean concentration is found to be 247 ppm with a sample standard deviation of 12 ppm.
Here is a figure that represents this example:
1. The hypotheses being tested are:
• Ho: μ = 250
• Ha: μ ≠ 250
• Where μ = population mean concentration of the chemical (in parts per million) in the entire shipment
2. The conditions that allow us to use the t-test are met since:
• The sample is random
• The sample size is large enough for the Central Limit Theorem to apply and ensure the normality of x-bar. We do not need normality of the population in order to be able to conduct this test for the population mean. We are in the 2nd column in the table below.
• The test statistic is:
$t=\dfrac{\bar{x}-\mu_{0}}{s / \sqrt{n}}=\dfrac{247-250}{12 / \sqrt{100}}=-2.5$
• The data (represented by the sample mean) are 2.5 standard errors below the null value.
3. Finding the p-value.
• To find the p-value we use statistical software, and we calculate a p-value of 0.014.
4. Conclusions:
• The p-value is small (0.014), indicating that at the 5% significance level, the results are statistically significant.
• We reject the null hypothesis.
• OUR CONCLUSION IN CONTEXT:
• There is enough evidence to conclude that the mean concentration in the entire shipment is not the required 250 ppm.
• It is difficult to comment on the practical significance of this result without more understanding of the practical considerations of this problem.
Here is a summary:
Comments:
• The 95% confidence interval for μ (mu) can be used here in the same way as for proportions to conduct the two-sided test (checking whether the null value falls inside or outside the confidence interval) or following a t-test where Ho was rejected to get insight into the value of μ (mu).
• We find the 95% confidence interval to be (244.619, 249.381). Since 250 is not in the interval we know we would reject our null hypothesis that μ (mu) = 250. The confidence interval gives additional information. By accounting for estimation error, it estimates that the population mean is likely to be between 244.62 and 249.38. This is lower than the target concentration and that information might help determine the seriousness and appropriate course of action in this situation.
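As an illustration of how software arrives at the p-value and confidence interval reported above, here is a minimal sketch (assuming Python with the scipy package, which is not the statistical software used in this course; the variable names are just for illustration) based only on the summary statistics:

```python
from math import sqrt
from scipy.stats import t

x_bar, s, n, mu0 = 247, 12, 100, 250    # sample mean, sample SD, sample size, null value
se = s / sqrt(n)                        # estimated standard error = 1.2

# one-sample t statistic and two-sided p-value under the t(n-1) null distribution
t_stat = (x_bar - mu0) / se             # -2.5
p_value = 2 * t.sf(abs(t_stat), df=n - 1)

# 95% confidence interval for mu using the t(n-1) critical value
t_crit = t.ppf(0.975, df=n - 1)
ci = (x_bar - t_crit * se, x_bar + t_crit * se)

print(round(t_stat, 2), round(p_value, 3), tuple(round(v, 3) for v in ci))
# approximately -2.5, 0.014, (244.619, 249.381)
```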
Caution
In most situations in practice we use TWO-SIDED HYPOTHESIS TESTS, followed by confidence intervals to gain more insight.
For completeness in covering one-sample t-tests for a population mean, we still cover all three possible alternative hypotheses here. HOWEVER, this will be the last test for which we will do so.
EXAMPLE:
A research study measured the pulse rates of 57 college men and found a mean pulse rate of 70 beats per minute with a standard deviation of 9.85 beats per minute.
Researchers want to know if the mean pulse rate for all college men is different from the current standard of 72 beats per minute.
1. The hypotheses being tested are:
• Ho: μ = 72
• Ha: μ ≠ 72
• Where μ = population mean heart rate among college men
2. The conditions that allow us to use the t-test are met since:
• The sample is random.
• The sample size is large (n = 57) so we do not need normality of the population in order to be able to conduct this test for the population mean. We are in the 2nd column in the table below.
• The test statistic is:
$t=\dfrac{\bar{x}-\mu_{0}}{s / \sqrt{n}}=\dfrac{70-72}{9.85 / \sqrt{57}}=-1.53$
• The data (represented by the sample mean) are 1.53 estimated standard errors below the null value.
3. Finding the p-value.
• Recall that in general the p-value is calculated under the null distribution of the test statistic, which, in the t-test case, is t(n-1). In our case, in which n = 57, the p-value is calculated under the t(56) distribution. Using statistical software, we find that the p-value is 0.132.
• Here is how we calculated the p-value. http://homepage.stat.uiowa.edu/~mbognar/applets/t.html.
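In symbols, the p-value reported by the software is:

$\text{p-value}=2P\left(t_{56} \geq |-1.53|\right) \approx 0.132$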
4. Making conclusions.
• The p-value (0.132) is not small, indicating that the results are not significant.
• We fail to reject the null hypothesis.
• OUR CONCLUSION IN CONTEXT:
• There is not enough evidence to conclude that the mean pulse rate for all college men is different from the current standard of 72 beats per minute.
• The results from this sample do not appear to have any practical significance either: a mean pulse rate of 70 is very similar to the hypothesized value of 72, relative to the variation expected in pulse rates.
Now try a few yourself.
Learn by Doing: Hypothesis Testing for the Population Mean
From this point in this course and certainly in practice we will allow the software to calculate our test statistic and p-value and we will use the p-values provided to draw our conclusions.
That concludes our discussion of hypothesis tests in Unit 4A.
In the next unit we will continue to use both confidence intervals and hypothesis tests to investigate the relationship between two variables in the cases we covered in Unit 1 on exploratory data analysis – we will look at Case CQ, Case CC, and Case QQ.
Before moving on, we will discuss the details about the t-distribution as a general object.
The t-Distribution
We have seen that variables can be visually modeled by many different sorts of shapes, and we call these shapes distributions. Several distributions arise so frequently that they have been given special names, and they have been studied mathematically.
So far in the course, the only one we’ve named, for continuous quantitative variables, is the normal distribution, but there are others. One of them is called the t-distribution.
The t-distribution is another bell-shaped (unimodal and symmetric) distribution, like the normal distribution; and the center of the t-distribution is standardized at zero, like the center of the standard normal distribution.
Like all distributions that are used as probability models, the normal and the t-distribution are both scaled, so the total area under each of them is 1.
So how is the t-distribution fundamentally different from the normal distribution?
• The spread.
The following picture illustrates the fundamental difference between the normal distribution and the t-distribution:
You can see in the picture that the t-distribution has slightly less area near the expected central value than the normal distribution does, and you can see that the t distribution has correspondingly more area in the “tails” than the normal distribution does. (It’s often said that the t-distribution has “fatter tails” or “heavier tails” than the normal distribution.)
This reflects the fact that the t-distribution has a larger spread than the normal distribution. The same total area of 1 is spread out over a slightly wider range on the t-distribution, making it a bit lower near the center compared to the normal distribution, and giving the t-distribution slightly more probability in the ‘tails’ compared to the normal distribution.
Therefore, the t-distribution ends up being the appropriate model in certain cases where there is more variability than would be predicted by the normal distribution. One of these cases is stock values, which have more variability (or “volatility,” to use the economic term) than would be predicted by the normal distribution.
There’s actually an entire family of t-distributions. They all have similar formulas (but the math is beyond the scope of this introductory course in statistics), and they all have slightly “fatter tails” than the normal distribution. But some are closer to normal than others.
The t-distributions that have higher “degrees of freedom” are closer to normal (degrees of freedom is a mathematical concept that we won’t study in this course, beyond merely mentioning it here). So, there’s a t-distribution “with one degree of freedom,” another t-distribution “with 2 degrees of freedom” which is slightly closer to normal, another t-distribution “with 3 degrees of freedom” which is a bit closer to normal than the previous ones, and so on.
The following picture illustrates this idea with just a couple of t-distributions (note that “degrees of freedom” is abbreviated “d.f.” on the picture):
The test statistic for our t-test for one population mean is a t-score which follows a t-distribution with (n – 1) degrees of freedom. Recall that each t-distribution is indexed according to “degrees of freedom.” Notice that, in the context of a test for a mean, the degrees of freedom depend on the sample size in the study.
Remember that we said that higher degrees of freedom indicate that the t-distribution is closer to normal. So in the context of a test for the mean, the larger the sample size, the higher the degrees of freedom, and the closer the t-distribution is to a normal z distribution.
As a result, in the context of a test for a mean, the effect of the t-distribution is most important for a study with a relatively small sample size.
We are now done introducing the t-distribution. What are the implications of all of this?
• The null distribution of our t-test statistic is the t-distribution with (n-1) d.f. In other words, when Ho is true (i.e., when μ = μ0 (mu = mu_zero)), our test statistic has a t-distribution with (n-1) d.f., and this is the distribution under which we find p-values.
• For a large sample size (n), the null distribution of the test statistic is approximately Z, so whether we use t(n – 1) or Z to calculate the p-values does not make a big difference.
Wrap-Up (Inference for One Variable)
Video
Video: Summary Examples Unit 4A (34:51)
We’ve now completed the two main sections about inference for one variable. In these sections we introduced the three forms of inference:
• Point estimation—estimating an unknown parameter with a single value
• Interval estimation—estimating an unknown parameter with a confidence interval (an interval of plausible values for the parameter, which with some level of confidence we believe captures the true value of the parameter in it).
• Hypothesis testing — a four-step process in which we are assessing the statistical evidence provided by the data in favor or against some claim about the population.
Much like in the Exploratory Data Analysis section for one variable, we distinguished between the case when the variable of interest is categorical, and the case when it is quantitative.
• When the variable of interest is categorical, we are making an inference about the population proportion (p), which represents the proportion of the population that falls into one of the categories of the variable of interest.
• When the variable of interest is quantitative, the inference is about the population mean (μ, mu). | textbooks/stats/Applied_Statistics/Biostatistics_-_Open_Learning_Textbook/Unit_4A%3A_Introduction_to_Statistical_Inference/Hypothesis_Testing.txt |
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.20: Classify a data analysis situation involving two variables according to the “role-type classification.”
Learning Objectives
LO 4.35: For a data analysis situation involving two variables, choose the appropriate inferential method for examining the relationship between the variables and justify the choice.
Learning Objectives
LO 4.36: For a data analysis situation involving two variables, carry out the appropriate inferential method for examining relationships between the variables and draw the correct conclusions in context.
REVIEW: Unit 1 Role-Type Classification before continuing.
Video
Video: Unit 4B: Inference for Relationships (5:15)
In the previous unit, we learned to perform inference for a single categorical or quantitative variable in the form of point estimation, confidence intervals or hypothesis testing.
The inference was actually
• about the population proportion (when the variable of interest was categorical) and
• about the population mean (when the variable of interest was quantitative).
Our next (and final) goal for this course is to perform inference about relationships between two variables in a population, based on an observed relationship between variables in a sample. Here is what the process looks like:
We are interested in studying whether a relationship exists between the variables X and Y in a population of interest. We choose a random sample and collect data on both variables from the subjects.
Our goal is to determine whether these data provide strong enough evidence for us to generalize the observed relationship in the sample and conclude (with some acceptable and agreed-upon level of uncertainty) that a relationship between X and Y exists in the entire population.
The primary form of inference that we will use in this unit is hypothesis testing but we will discuss confidence intervals both to estimate unknown parameters of interest involving two variables and as an alternative way of determining the conclusion to our hypothesis test.
Conceptually, across all the inferential methods that we will learn, we’ll test some form of:
Ho: There is no relationship between X and Y
Ha: There is a relationship between X and Y
(We will also discuss point and interval estimation, but our discussion about these forms of inference will be framed around the test.)
Recall that when we discussed examining the relationship between two variables in the Exploratory Data Analysis unit, our discussion was framed around the role-type classification. This part of the course will be structured exactly in the same way.
In other words, we will look at hypothesis testing in the 3 sections corresponding to cases C→Q, C→C, and Q→Q in the table below.
Recall that case Q→C is not specifically addressed in this course other than that we may investigate the association between these variables using the same methods as case C→Q.
It is also important to remember what we learned about lurking variables and causation.
• If our explanatory variable was part of a well-designed experiment then it may be possible for us to claim a causal effect
• But if it was based upon an observational study, we must be cautious to imply only a relationship or association between the two variables, not a direct causal link between the explanatory and response variable.
Unlike the previous part of the course on Inference for One Variable, where we discussed in some detail the theory behind the machinery of the test (such as the null distribution of the test statistic, under which the p-values are calculated), in the inferential procedures that we will introduce in Inference for Relationships, we will discuss much less of that kind of detail.
The principles are the same, but the details behind the null distribution of the test statistic (under which the p-value is calculated) become more complicated and require knowledge of theoretical results that are beyond the scope of this course.
Instead, within each of the inferential methods we will focus on:
• When the inferential method is appropriate for use.
• Under what conditions the procedure can safely be used.
• The conceptual idea behind the test (as it is usually captured by the test statistic).
• How to use software to carry out the procedure in order to get the p-value of the test.
• Interpreting the results in the context of the problem.
• Also, we will continue to introduce each test according to the four-step process of hypothesis testing.
Two-Sided Tests
From this point forward, we will generally focus on
• TWO-SIDED tests and
• Supplement with confidence intervals for the effect of interest to give further information
Using two-sided tests is standard practice in clinical research EVEN when there is a direction of interest for the research hypothesis, such as the desire to prove a new treatment is better than the current treatment.
We are now ready to start with Case C→Q.
Unit 4B: Inference for Relationships
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.35: For a data analysis situation involving two variables, choose the appropriate inferential method for examining the relationship between the variables and justify the choice.
Learning Objectives
LO 4.36: For a data analysis situation involving two variables, carry out the appropriate inferential method for examining relationships between the variables and draw the correct conclusions in context.
CO-5: Determine preferred methodological alternatives to commonly used statistical methods when assumptions are not met.
Review: Unit 1 Case C-C
Video
Video: Case C→C (47:09)
Related SAS Tutorials
Related SPSS Tutorials
Introduction
The last procedures we studied (two-sample t, paired t, ANOVA, and their non-parametric alternatives) all involve the relationship between a categorical explanatory variable and a quantitative response variable (case C→Q). In all of these procedures, the result is a comparison of the quantitative response variable (Y) among the groups defined by the categorical explanatory variable (X). The standard tests result in a comparison of the population means of Y within each group defined by X.
Next, we will consider inferences about the relationships between two categorical variables, corresponding to case C→C.
For case C→C, we will learn the following tests:
Independent Samples
Standard Tests
• Continuity Corrected Chi-square Test for Independence (2×2 case)
• Chi-square Test for Independence (RxC case)
Non-Parametric Test
• Fisher’s exact test
Dependent Samples (Not Discussed)
Standard Test
• McNemar’s Test – 2×2 Case
In the Exploratory Data Analysis unit of the course, we summarized the relationship between two categorical variables for a given data set (using a two-way table and conditional percents), without trying to generalize beyond the sample data.
Now we will perform statistical inference for two categorical variables, using the sample data to draw conclusions about whether or not we have evidence that the variables are related in the larger population from which the sample was drawn.
In other words, we would like to assess whether the relationship between X and Y that we observed in the data is due to a real relationship between X and Y in the population, or if it is something that could have happened just by chance due to sampling variability.
Before moving into the statistical tests, let’s look at a few (fake) examples.
RxC Tables
Suppose our explanatory variable X has r levels and our response variable Y has c levels. We usually arrange our table with the explanatory variable in the rows and the response variable in the columns.
EXAMPLE: RxC Table
Suppose we have the following partial (fake) data summarized in a two-way table using X = BMI category (r = 4 levels) and Y = Diabetes Status (c = 3 levels).
|  | No Diabetes | Pre-Diabetes | Diabetes | Total |
|---|---|---|---|---|
| Underweight |  |  |  | 100 |
| Normal |  |  |  | 400 |
| Overweight |  |  |  | 300 |
| Obese |  |  |  | 200 |
| Total | 700 | 200 | 100 | 1000 |
From our study of probability we can determine:
• P(No Diabetes) = 700/1000 = 0.7
• P(Pre-Diabetes) = 200/1000 = 0.20
• P(Diabetes) = 100/1000 = 0.10
In the test we are going to use, our null hypothesis will be:
Ho: There is no relationship between X and Y.
Which in this case would be:
Ho: There is no relationship between BMI category (X) and diabetes status (Y).
If there were no relationship between X and Y, this would imply that the distribution of diabetes status is the same for each BMI category.
In this case (C→C), the distribution of diabetes status consists of the probability of each diabetes status group and the null hypothesis becomes:
Ho: BMI category (X) and diabetes status (Y) are INDEPENDENT.
Since the probability of “No Diabetes” is 0.7 in the entire dataset, if there were no differences in the distribution of diabetes status between BMI categories, we would obtain the same proportion in each row. Using the row totals we can find the EXPECTED counts as follows.
Notice that the formula used below is simply the formula for the mean, or expected value, of a binomial random variable with n “trials” and probability of “success” p, which was μ = E(X) = np, where X = the number of successes in a sample of size n.
|  | No Diabetes | Pre-Diabetes | Diabetes | Total |
|---|---|---|---|---|
| Underweight | 100(0.7) = 70 |  |  | 100 |
| Normal | 400(0.7) = 280 |  |  | 400 |
| Overweight | 300(0.7) = 210 |  |  | 300 |
| Obese | 200(0.7) = 140 |  |  | 200 |
| Total | 700 | 200 | 100 | 1000 |
Notice that these do indeed add to 700.
Similarly we can determine the EXPECTED counts for the remaining two columns since 20% of our sample were classified as having pre-diabetes and 10% were classified as having diabetes.
|  | No Diabetes | Pre-Diabetes | Diabetes | Total |
|---|---|---|---|---|
| Underweight | 70 | 100(0.2) = 20 | 100(0.1) = 10 | 100 |
| Normal | 280 | 400(0.2) = 80 | 400(0.1) = 40 | 400 |
| Overweight | 210 | 300(0.2) = 60 | 300(0.1) = 30 | 300 |
| Obese | 140 | 200(0.2) = 40 | 200(0.1) = 20 | 200 |
| Total | 700 | 200 | 100 | 1000 |
What we have created, using only the row totals, column totals, and column percents, is a table of what we would expect to happen if the null hypothesis of no relationship between X and Y were true. Here is the final result.
|  | No Diabetes | Pre-Diabetes | Diabetes | Total |
|---|---|---|---|---|
| Underweight | 70 | 20 | 10 | 100 |
| Normal | 280 | 80 | 40 | 400 |
| Overweight | 210 | 60 | 30 | 300 |
| Obese | 140 | 40 | 20 | 200 |
| Total | 700 | 200 | 100 | 1000 |
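The same arithmetic is easy to reproduce with software. Here is a minimal Python sketch (illustrative only; the course itself relies on SAS and SPSS) that rebuilds the expected counts in the table above from the row and column totals:

```python
import numpy as np

# Row and column totals from the (fake) BMI / diabetes status table
row_totals = np.array([100, 400, 300, 200])   # Underweight, Normal, Overweight, Obese
col_totals = np.array([700, 200, 100])        # No Diabetes, Pre-Diabetes, Diabetes
grand_total = row_totals.sum()                # 1000

# Expected count for each cell = row total x column proportion
#                              = (row total x column total) / grand total
expected = np.outer(row_totals, col_totals) / grand_total

print(expected)
# [[ 70.  20.  10.]
#  [280.  80.  40.]
#  [210.  60.  30.]
#  [140.  40.  20.]]
```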
Suppose we gather data and find the following (expected counts are in parentheses for easy comparison):
|  | No Diabetes | Pre-Diabetes | Diabetes | Total |
|---|---|---|---|---|
| Underweight | 65 (70) | 22 (20) | 13 (10) | 100 |
| Normal | 285 (280) | 78 (80) | 37 (40) | 400 |
| Overweight | 216 (210) | 53 (60) | 31 (30) | 300 |
| Obese | 134 (140) | 47 (40) | 19 (20) | 200 |
| Total | 700 | 200 | 100 | 1000 |
If we compare the observed counts to the expected counts, they are fairly close. These data would not give much evidence of a difference in the distribution of diabetes status among the BMI categories. In other words, these data would not give much evidence of a relationship (or association) between BMI category and diabetes status.
The standard test we will learn in case C→C is based upon comparing the OBSERVED cell counts (our data) to the EXPECTED cell counts (using the method discussed above).
We want you to see how the expected cell counts are created so that you will understand what kind of evidence is being used to reject the null hypothesis in case C→C.
Suppose instead that we gather data and we obtain the following counts (expected counts are in parentheses and row percentages are provided):
|  | No Diabetes | Pre-Diabetes | Diabetes | Total |
|---|---|---|---|---|
| Underweight | 90 (70), 90% | 7 (20), 7% | 3 (10), 3% | 100 |
| Normal | 340 (280), 85% | 40 (80), 10% | 20 (40), 5% | 400 |
| Overweight | 180 (210), 60% | 90 (60), 30% | 30 (30), 10% | 300 |
| Obese | 90 (140), 45% | 63 (40), 31.5% | 47 (20), 23.5% | 200 |
| Total | 700 | 200 | 100 | 1000 |
In this case, most of the differences are drastic and there seems to be clear evidence that the distribution of diabetes status is not the same among the four BMI categories.
Although this data is entirely fabricated, it illustrates the kind of evidence we need to reject the null hypothesis in case C→C.
2×2 Tables
One special case occurs when we have two categorical variables where both of these variables have two levels. Two-level categorical variables are often called binary variables or dichotomous variables and when possible are usually coded as 1 for “Yes” or “Success” and 0 for “No” or “Failure.”
Here is another (fake) example.
EXAMPLE: 2x2 Table
Suppose we have the following partial (fake) data summarized in a two-way table using X = treatment and Y = significant improvement in symptoms.
|  | No Improvement | Improvement | Total |
|---|---|---|---|
| Control |  |  | 100 |
| Treatment |  |  | 100 |
| Total | 120 | 80 | 200 |
From our study of probability we can determine:
• P(No Improvement) = 120/200 = 0.6
• P(Improvement) = 80/200 = 0.4
Since the probability of “No Improvement” is 0.6 in the entire dataset and the probability of “Improvement” is 0.4, if there were no difference between the groups we would obtain the same proportions in each row. Using the row totals we can find the EXPECTED counts as follows.
|  | No Improvement | Improvement | Total |
|---|---|---|---|
| Control | 100(0.6) = 60 | 100(0.4) = 40 | 100 |
| Treatment | 100(0.6) = 60 | 100(0.4) = 40 | 100 |
| Total | 120 | 80 | 200 |
Suppose we obtain the following data:
|  | No Improvement | Improvement | Total |
|---|---|---|---|
| Control | 80 | 20 | 100 |
| Treatment | 40 | 60 | 100 |
| Total | 120 | 80 | 200 |
In this example we are interested in the probability of improvement and the above data seem to indicate the treatment provides a greater chance for improvement than the control.
We use this example to mention two ways of comparing probability (sometimes “risk”) in 2×2 tables. Many of you may remember these topics from Epidemiology or may see these topics again in Epidemiology courses in the future!
Risk Difference:
For this data, a larger proportion of subjects in the treatment group showed improvement compared to the control group. In fact, the estimated probability of improvement is 0.4 higher for the treatment group than the control group.
This value (0.4) is called a risk-difference and is one common measure in 2×2 tables. Estimates and confidence intervals can be obtained.
For a fixed sample size, the larger this difference, the more evidence against our null hypothesis (no relationship between X and Y).
The population risk-difference is often denoted p1 – p2, and is the difference between two population proportions. We estimate these proportions in the same manner as Unit 1, once for each sample.
For the current example, we obtain
$\hat{p}_{1}=\hat{p}_{\mathrm{TRT}}=\dfrac{60}{100}=0.60$
and
$\hat{p}_{2}=\hat{p}_{\text {Control }}=\dfrac{20}{100}=0.20$
from which we find the risk difference
$\hat{p}_{\text {TRT }}-\hat{p}_{\text {Control }}=0.60-0.20=0.40$
Odds Ratio:
Another common measure in 2×2 tables is the odds ratio, which is defined as the odds of the event occurring in one group divided by the odds of the event occurring in another group.
In this case, the odds of improvement in the treatment group is
$\mathrm{ODDS}_{\mathrm{TRT}}=\dfrac{P(\text { Improvement } \mid \mathrm{TRT})}{P(\text { No Improvement } \mid \mathrm{TRT})}=\dfrac{0.6}{0.4}=1.5$
and the odds of improvement in the control group is
$\mathrm{ODDS}_{\text {Control }}=\dfrac{P(\text { Improvement } \mid \text { Control })}{P(\text { No Improvement } \mid \text { Control })}=\dfrac{0.2}{0.8}=0.25$
so the odds ratio to compare the treatment group to the control group is
$\text { Odds Ratio }=\dfrac{\text { ODDS }_{\mathrm{TRT}}}{\mathrm{ODDS}_{\mathrm{Control}}}=\dfrac{1.5}{0.25}=6$
This value means that the odds of improvement are 6 times higher in the treatment group than in the control group.
Properties of Odds Ratios:
• The odds ratio is always larger than 0.
• An odds ratio of 1 implies the odds are equal in the two groups.
• Values much larger than 1 indicate the event is more likely in the treatment group (numerator group) than the control group (denominator group). This would give evidence that our null hypothesis is false.
• Values much smaller than 1 (closer to zero) would indicate the event is much less likely in the treatment group than the control group. This would also give evidence that our null hypothesis is false.
• Notice: if we compared control to treatment (instead of treatment to control), we would obtain an odds ratio of 1/6, which says that the odds of improvement in the control group are 1/6 the odds of improvement in the treatment group. This leads us to exactly the same conclusion, just worded in the opposite direction.
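These two measures are simple enough to compute by hand, but as an illustration, here is a short Python sketch (not part of the course’s SAS/SPSS materials) that reproduces the risk difference and odds ratio for the 2×2 treatment example above:

```python
# Observed 2x2 table (rows: Control, Treatment; columns: No Improvement, Improvement)
#   Control:    80, 20
#   Treatment:  40, 60

p_trt = 60 / 100          # estimated probability of improvement in the treatment group
p_control = 20 / 100      # estimated probability of improvement in the control group

risk_difference = p_trt - p_control              # 0.60 - 0.20 = 0.40

odds_trt = p_trt / (1 - p_trt)                   # 0.6 / 0.4 = 1.5
odds_control = p_control / (1 - p_control)       # 0.2 / 0.8 = 0.25
odds_ratio = odds_trt / odds_control             # 1.5 / 0.25 = 6

print(round(risk_difference, 2), round(odds_ratio, 2))   # 0.4 6.0
```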
Chi-square Test for Independence
Learning Objectives
LO 4.43: In a given context, determine the appropriate standard method for examining the relationship between two categorical variables. Given the appropriate software output choose the correct p-value and provide the correct conclusions in context.
Learning Objectives
LO 4.44: In a given context, set up the appropriate null and alternative hypotheses for examining the relationship between two categorical variables.
Step 1: State the hypotheses
The hypotheses are:
Ho: There is no relationship between the two categorical variables. (They are independent.)
Ha: There is a relationship between the two categorical variables. (They are not independent.)
Note: for 2×2 tables, these hypotheses can also be written in terms of two population proportions, analogous to the hypotheses we used for comparing two population means. This can be done for RxC tables as well, but it is not common since it requires more notation to compare multiple group proportions.
• Ho: p1 – p2 = 0 (which is the same as p1 = p2)
• Ha: p1 – p2 ≠ 0 (which is the same as p1 ≠ p2) (two-sided)
Step 2: Obtain data, check conditions, and summarize data
(i) The sample should be random with independent observations (all observations are independent of all other observations).
(ii) In general, the larger the sample, the more precise and reliable the test results are. There are different versions of what the conditions are that will ensure reliable use of the test, all of which involve the expected counts. One version of the conditions says that all expected counts need to be greater than 1, and at least 80% of expected counts need to be greater than 5. A more conservative version requires that all expected counts are larger than 5. Some software packages will provide a warning if the sample size is “too small.”
Test Statistic of the Chi-square Test for Independence:
The single number that summarizes the overall difference between observed and expected counts is the chi-square statistic, which tells us in a standardized way how far what we observed (data) is from what would be expected if Ho were true.
Here it is:
$\chi^{2}=\sum_{\text {all cells }} \dfrac{(\text { observed count }-\text { expected count })^{2}}{\text { expected count }}$
Step 3: Find the p-value of the test by using the test statistic as follows
We will rely on software to obtain this value for us. We can also request the expected counts using software.
The p-values are calculated using a chi-square distribution with (r-1)(c-1) degrees of freedom (where r = number of levels of the row variable and c = number of levels of the column variable). We will rely on software to obtain the p-value for this test.
IMPORTANT NOTE
• Use Continuity Correction for 2×2 Tables: For 2×2 tables, a continuity correction is used to improve the approximation of the p-value. Software will only calculate this corrected value for 2×2 tables, that is, when both variables are binary (have only two levels).
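If you happen to work in Python rather than SAS or SPSS, the whole procedure (expected counts, test statistic, degrees of freedom, and p-value) is available from `scipy.stats.chi2_contingency`. The sketch below uses the “drastic differences” BMI table from earlier; note that the function applies the continuity correction automatically when the table is 2×2 (its `correction` argument defaults to `True`):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts from the fake BMI / diabetes status table with drastic differences
observed = np.array([
    [ 90,  7,  3],    # Underweight
    [340, 40, 20],    # Normal
    [180, 90, 30],    # Overweight
    [ 90, 63, 47],    # Obese
])

chi2, p_value, dof, expected = chi2_contingency(observed)

print(dof)             # (4 - 1) x (3 - 1) = 6 degrees of freedom
print(expected)        # matches the expected counts built by hand earlier
print(chi2, p_value)   # a very large statistic and a tiny p-value: reject Ho
```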
Step 4: Conclusion
As usual, we use the magnitude of the p-value to draw our conclusions. A small p-value indicates that the evidence provided by the data is strong enough to reject Ho and conclude (beyond a reasonable doubt) that the two variables are related. In particular, if a significance level of 0.05 is used, we will reject Ho if the p-value is less than 0.05.
Non-Parametric Alternative: Fisher’s Exact Test
Learning Objectives
LO 5.1: For a data analysis situation involving two variables, determine the appropriate alternative (non-parametric) method when assumptions of our standard methods are not met.
We will look at one non-parametric test in case C→C. Fisher’s exact test is an exact method of obtaining a p-value for the hypotheses tested in a standard chi-square test for independence. This test is often used when the sample size requirement of the chi-square test is not satisfied and can be used for 2×2 and RxC tables.
Step 1: State the hypotheses
The hypotheses are:
Ho: There is no relationship between the two categorical variables. (They are independent.)
Ha: There is a relationship between the two categorical variables. (They are not independent, they are dependent.)
Step 2: Obtain data, check conditions, and summarize data
The sample should be random with independent observations (all observations are independent of all other observations).
Step 3: Find the p-value of the test by using the test statistic as follows
The p-values are calculated using a distribution specific to this test. We will rely on software to obtain the p-value for this test. The p-value measures the chance of obtaining a table as or more extreme (against the null hypothesis) than our table.
Step 4: Conclusion
As usual, we use the magnitude of the p-value to draw our conclusions. A small p-value indicates that the evidence provided by the data is strong enough to reject Ho and conclude (beyond a reasonable doubt) that the two variables are related. In particular, if a significance level of 0.05 is used, we will reject Ho if the p-value is less than 0.05.
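As an illustration (again outside the course’s SAS/SPSS workflow), `scipy.stats.fisher_exact` computes this exact p-value for a 2×2 table; note that scipy’s implementation handles only the 2×2 case, whereas SAS and SPSS can also produce Fisher’s exact p-values for larger RxC tables:

```python
from scipy.stats import fisher_exact

# 2x2 table (rows: Control, Treatment; columns: No Improvement, Improvement)
table = [[80, 20],
         [40, 60]]

odds_ratio, p_value = fisher_exact(table)   # two-sided test by default
print(odds_ratio, p_value)                  # sample odds ratio and exact p-value
```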
Now let’s look at some examples with real data.
EXAMPLE: Risk Factor for Low Birth Weight
Low birth weight is an outcome of concern due to the fact that infant mortality rates and birth defect rates are very high for babies with low birth weight. A woman’s behavior during pregnancy (including diet, smoking habits, and obtaining prenatal care) can greatly alter her chances of carrying the baby to term and, consequently, of delivering a baby of normal birth weight.
In this example, we will use a 1986 study (Hosmer and Lemeshow (2000), Applied Logistic Regression: Second Edition) in which data were collected from 189 women (of whom 59 had low birth weight infants) at the Baystate Medical Center in Springfield, MA. The goal of the study was to identify risk factors associated with giving birth to a low birth weight baby.
Response Variable:
• LOW – Low birth weight
• 0=No (birth weight >= 2500 g)
• 1=Yes (birth weight < 2500 g)
Possible Explanatory Variables (variables we will use in this example are in bold):
• RACE – Race of mother (1=White, 2=Black, 3=Other)
• SMOKE – Smoking status during pregnancy (0=No, 1=Yes)
• PTL – History of premature labor (0=None, 1=One, etc.)
• HT – History of hypertension (0=No, 1=Yes)
• UI – Presence of uterine irritability (0=No, 1=Yes)
• FTV – Number of physician visits during the first trimester
• BWT – The actual birth weight (in grams)
• AGE – Age of mother (in years)
• LWT – Weight of mother at the last menstrual period (in pounds)
Results:
Step 1: State the hypotheses
The hypotheses are:
Ho: There is no relationship between the categorical explanatory variable and presence of low birth weight. (They are independent.)
Ha: There is a relationship between the categorical explanatory variable and presence of low birth weight. (They are not independent, they are dependent.)
Steps 2 & 3: Obtain data, check conditions, summarize data, and find the p-value
| Explanatory Variable | Which Test is Appropriate? | P-value | Decision |
|---|---|---|---|
| RACE | Min. Expected Count = 8.12; 3×2 table; use Pearson Chi-square (since RxC) | 0.0819 (Chi-square – SAS); 0.082 (Chi-square – SPSS) | Fail to Reject Ho |
| SMOKE | Min. Expected Count = 23.1; 2×2 table; use Continuity Correction (since 2×2) | 0.040 (Continuity Correction – SPSS); 0.0396 (Continuity Adj – SAS) | Reject Ho |
| PTL | Min. Expected Count = 0.31; 4×2 table; Fisher’s Exact test is more appropriate | 3.106E-04 = 0.0003106 (Fisher’s – SAS); 0.000 (Fisher’s – SPSS); 0.0008 (Chi-square – SAS); 0.001 (Chi-square – SPSS) | Reject Ho |
| HT | Min. Expected Count = 3.75; 2×2 table; Fisher’s Exact test may be more appropriate | 0.0516 (Fisher’s – SAS); 0.052 (Fisher’s – SPSS) | Fail to Reject Ho (Barely) |
| UI | Min. Expected Count = 8.74; 2×2 table; use Continuity Correction | 0.0355 (Continuity Adj. – SAS); 0.035 (Continuity Correction – SPSS) | Reject Ho |
Step 4: Conclusion
When considered individually, presence of uterine irritability, history of premature labor, and smoking during pregnancy are all significantly associated (p-value < 0.05) with the presence/absence of a low birth weight infant whereas history of hypertension and race were only marginally significant (0.05 ≤ p-value < 0.10).
Practical Significance:
| Explanatory Variable | Comparison of Conditional Percentages of Low Birth Weight |
|---|---|
| RACE | Race = White: 23.96%; Race = Black: 42.31%; Race = Other: 37.31% |
| SMOKE | Smoke = No: 25.22%; Smoke = Yes: 40.54% |
| PTL | History of Premature Labor = 0: 25.79%; = 1: 66.67%; = 2: 40.00% (note small sample size of 5 for this row); = 3: 0.00% (note small sample size of 1 for this row) |
| HT | Hypertension = No: 29.38%; Hypertension = Yes: 58.33% (note small sample size of 12 for this row) |
| UI | Presence of uterine irritability = No: 27.95%; Presence of uterine irritability = Yes: 50.00% |
• Despite our failing to reject the null hypothesis in two of the five tests, all of these results seem to have some practical significance. However, the small sample sizes for some portions of the results may be producing misleading information, and further study would likely be needed to confirm the results seen here.
SPSS Output for tests
EXAMPLE: 2x2 Table - Revisiting "Looks vs. Personality" with Binary Categorized Response
If, instead of simply analyzing the “looks vs. personality” rating scale, we categorized the responses into groups, then we would be in case C→C instead of case C→Q (see the previous example in Case C-Q for Two Independent Samples).
Recall that the rating score was from 1 to 25, with 1 = personality most important (looks not important at all) and 25 = looks most important (personality not important at all). A score of 13 would indicate that looks and personality are equally important, and scores around 13 indicate that looks and personality are nearly equal in importance.
For our purposes we will use a rating of 16 or larger to indicate that looks were indeed more important than personality (by enough to matter).
Data: SPSS format, SAS format
Response Variable:
• Looks – “Looks were (much) more important?”
• 0=No (Less than 16 on the looks vs. personality rating)
• 1=Yes (16 or higher on the looks vs. personality rating)
Results:
Step 1: State the hypotheses
The hypotheses are:
Ho: The proportion of college students who find looks more important than personality is the same for males and females. (The two variables are independent)
Ha: The proportion of college students who find looks more important than personality is different for males and females. (The two variables are dependent)
Steps 2 & 3: Obtain data, check conditions, summarize data, and find the p-value
The minimum expected cell count is 13.38. This is a 2×2 table so we will use the continuity corrected chi-square statistic.
The p-value is found to be 0.001 (SPSS) or 0.0007 (SAS).
Step 4: Conclusion
There is a significant association between gender and whether or not the individual rated looks more important than personality.
Among males, 27.1% rated looks higher than personality while among females this value was only 9.3%.
For fun: The odds ratio here is
$\text{Odds Ratio} = \dfrac{0.271/(1-0.271)}{0.093/(1-0.093)} = \dfrac{0.37174}{0.10254} = 3.63$
which means, based upon our data, we estimate that the odds of rating looks more important than personality are 3.6 times higher among males than among females.
Practical Significance:
It seems clear that the difference between 27.1% and 9.3% is practically significant as well as statistically significant. This difference is large and likely represents a meaningful difference in the views of males and females regarding the importance of looks compared to personality.
SPSS Output | textbooks/stats/Applied_Statistics/Biostatistics_-_Open_Learning_Textbook/Unit_4B%3A_Inference_for_Relationships/Case_C%E2%86%92C.txt |
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
REVIEW: Unit 1 Case C-Q
Video
Video: Case C→Q (5:23)
Introduction
Recall the role-type classification table framing our discussion on inference about the relationship between two variables.
We start with case C→Q, where the explanatory variable is categorical and the response variable is quantitative.
Recall that in the Exploratory Data Analysis unit, examining the relationship between X and Y in this situation amounts, in practice, to:
• Comparing the distributions of the (quantitative) response Y for each value (category) of the explanatory X.
To do that, we used
• side-by-side boxplots (each representing the distribution of Y in one of the groups defined by X),
• and supplemented the display with the corresponding descriptive statistics.
We will need to add one layer of difficulty here with the possibility that we may have paired or matched samples as opposed to independent samples or groups. Note that all of the examples we discussed in Case CQ in Unit 1 consisted of independent samples.
First we will review the general scenario.
Comparing Means between Groups
To understand the logic, we’ll start with an example and then generalize.
EXAMPLE: GPA and Year in College
Suppose that our variable of interest is the GPA of college students in the United States. From Unit 4A, we know that since GPA is quantitative, we will conduct inference on μ, the (population) mean GPA among all U.S. college students.
Since this section is about relationships, let’s assume that what we are really interested in is not simply GPA, but the relationship between:
• X : year in college (1 = freshmen, 2 = sophomore, 3 = junior, 4 = senior) and
• Y : GPA
In other words, we want to explore whether GPA is related to year in college.
The way to think about this is that the population of U.S. college students is now broken into 4 sub-populations: freshmen, sophomores, juniors and seniors. Within each of these four groups, we are interested in the GPA.
The inference must therefore involve the 4 sub-population means:
• μ1 : mean GPA among freshmen in the United States.
• μ2 : mean GPA among sophomores in the United States
• μ3 : mean GPA among juniors in the United States
• μ4 : mean GPA among seniors in the United States
It makes sense that the inference about the relationship between year and GPA has to be based on some kind of comparison of these four means.
If we infer that these four means are not all equal (i.e., that there are some differences in GPA across years in college) then that’s equivalent to saying GPA is related to year in college. Let’s summarize this example with a figure:
In general, making inferences about the relationship between X and Y in Case C→Q boils down to comparing the means of Y in the sub-populations, which are created by the categories defined by X (say k categories). The following figure summarizes this:
We will split this into two different scenarios (k = 2 and k > 2), where k is the number of categories defined by X.
For example:
• If we are interested in whether GPA (Y) is related to gender (X), this is a scenario where k = 2 (since gender has only two categories: M, F), and the inference will boil down to comparing the mean GPA in the sub-population of males to that in the sub-population of females.
• On the other hand, in the example we looked at earlier, the relationship between GPA (Y) and year in college (X) is a scenario where k > 2 or more specifically, k = 4 (since year has four categories).
Caution
In terms of inference, these two situations (k = 2 and k > 2) will be treated differently!
Scenario with k > 2
In this scenario, the entire population (for which we ask whether there is a relationship between Y and X) is broken up into k > 2 sub-populations, each with its own mean μ. To draw inferences about the relationship between Y and X, we will need to compare these k means.
Dependent vs. Independent Samples (k = 2)
Learning Objectives
LO 4.37: Identify and distinguish between independent and dependent samples.
Furthermore, within the scenario of comparing two means (i.e., examining the relationship between X and Y, when X has only two categories, k = 2) we will distinguish between two scenarios.
Here, the distinction is somewhat subtle, and has to do with how the samples from each of the two sub-populations we’re comparing are chosen. In other words, it depends upon what type of study design will be implemented.
We have learned that many experiments, as well as observational studies, make a comparison between two groups (sub-populations) defined by the categories of the explanatory variable (X), in order to see if the response (Y) differs.
In some situations, one group (sub-population 1) is defined by one category of X, and another independent group (sub-population 2) is defined by the other category of X. Independent samples are then taken from each group for comparison.
EXAMPLE:
Suppose we are conducting a clinical trial. Participants are randomized into two independent subpopulations:
• those who are given a drug and
• those who are given a placebo.
Each individual appears in only one of these two groups and individuals are not matched or paired in any way. Thus the two samples or groups are independent. We can say those given the drug are independent from those given the placebo.
Recall: By randomly assigning individuals to the treatment we control for both known and unknown lurking variables.
EXAMPLE:
Suppose the Highway Patrol wants to study the reaction times of drivers with a blood alcohol content of half the legal limit in their state.
An observational study was designed which would also serve as publicity on the topic of drinking and driving. At a large event where enough alcohol would be consumed to obtain plenty of potential study participants, officers set up an obstacle course and provided the vehicles. (Other considerations were also implemented to keep the car and track conditions consistent for each participant.)
Volunteers were recruited from those in attendance and given a breathalyzer test to determine their blood alcohol content. Two types of volunteers were chosen to participate:
• Those with a blood alcohol content of zero – as measured by the breathalyzer – of which 10 were chosen to drive the course.
• Those with a blood alcohol content within a small range of half the legal limit (in Florida this would be around 0.04%) – of which 9 were chosen.
Here also, we have two independent groups. Even though the participants were originally drawn from the same pool of volunteers, each individual appears in only one of the two groups, so the comparison of the reaction times is a comparison between two independent groups.
However, in this study, there was NO random assignment to the treatment and so we would need to be much more concerned about the possibility of lurking variables in this study compared to one in which individuals were randomized into one of these two groups.
We will see it may be more appropriate in some studies to use the same individual as a subject in BOTH treatments – this will result in dependent samples.
When a matched pairs sample design is used, each observation in one sample is matched/paired/linked with an observation in the other sample. These are sometimes called “dependent samples.”
Matching could be by person (if the same person is measured twice), or could actually be a pair of individuals who belong together in a relevant way (husband and wife, siblings).
In this design, then, the same individual or a matched pair of individuals is used to make two measurements of the response – one for each of the two levels of the categorical explanatory variable.
Advantages of a paired sample approach include:
• Reduced measurement error since the variance within subjects is typically smaller than that between subjects
• Requires a smaller number of subjects than independent sample methods to achieve the same power.
Disadvantages of a paired sample approach include:
• An order effect based upon which treatment individuals received first.
• A carryover effect such as a drug remaining in the system.
• A testing effect, such as participants learning the obstacle course in the first run and improving their performance in the second.
EXAMPLE:
Suppose we are conducting a study on a pain blocker which can be applied to the skin and are comparing two different dosage levels of the solution which in this study will be applied to the forearm.
For each participant both solutions are applied with the following protocol:
• Which drug is applied to which arm is random.
• Patients and clinical staff are blind to the two treatment applications.
• Pain tolerance is measured on both arms using the same standard test with the order of testing randomized.
Here we have dependent samples since the same patient appears in both dosage groups.
Again, randomization is employed to help minimize other issues related to study design such as an order or testing effect.
EXAMPLE:
Suppose the department of motor vehicles wants to check whether drivers are impaired after drinking two beers.
The reaction times (measured in seconds) in an obstacle course are measured for 8 randomly selected drivers before and then after the consumption of two beers.
We have a matched-pairs design, since each individual was measured twice, once before and once after.
In matched pairs, the comparison between the reaction times is done for each individual.
Comment:
• Note that in the first figure, where the samples are independent, the sample sizes of the two independent samples need not be the same.
• On the other hand, it is obvious from the design that in the matched pairs the sample sizes of the two samples must be the same (and thus we used n for both).
• Dependent samples can occur in many other settings but for now we focus on the case of investigating the relationship between a two-level categorical explanatory variable and a quantitative response variable.
Let’s Summarize:
We will begin our discussion of Inference for Relationships with Case C-Q, where the explanatory variable (X) is categorical and the response variable (Y) is quantitative. We discussed that inference in this case amounts to comparing population means.
• We distinguish between scenarios where the explanatory variable (X) has only two categories and scenarios where the explanatory variable (X) has MORE than two categories.
• When comparing two means, we make the further distinction between situations where we have independent samples and those where we have matched pairs.
• For comparing more than two means in this course, we will focus only on the situation where we have independent samples. For studies with more than two groups based on dependent samples, a commonly used method is repeated measures ANOVA, but we will not cover it here.
• We will first discuss comparing two population means starting with matched pairs (dependent samples) then independent samples and conclude with comparing more than two population means in the case of independent samples.
Now test your skills at identifying the three scenarios in Case C-Q.
Did I Get This?: Scenarios in Case C-Q
(Non-Interactive Version – Spoiler Alert)
Looking Ahead – Methods in Case C-Q
• Methods in BOLD will be our main focus in this unit.
Here is a summary of the tests we will learn for the scenario where k = 2.
Independent Samples
Standard Tests
• Two Sample T-Test Assuming Equal Variances
• Two Sample T-Test Assuming Unequal Variances
Non-Parametric Test
• Mann-Whitney U (or Wilcoxon Rank-Sum) Test
Dependent Samples (Less Emphasis)
Standard Test
• Paired T-Test
Non-Parametric Tests
• Sign Test
• Wilcoxon Signed-Rank Test
Here is a summary of the tests we will learn for the scenario where k > 2.
Independent Samples
Standard Test
• One-way ANOVA (Analysis of Variance)
Non-Parametric Test
• Kruskal–Wallis One-way ANOVA
Dependent Samples (Not Discussed)
Standard Test
• Repeated Measures ANOVA (or similar)
Paired Samples
Caution
As we mentioned at the end of the Introduction to Unit 4B, we will focus only on two-sided tests for the remainder of this course. One-sided tests are often possible but rarely used in clinical research.
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.35: For a data analysis situation involving two variables, choose the appropriate inferential method for examining the relationship between the variables and justify the choice.
Learning Objectives
LO 4.36: For a data analysis situation involving two variables, carry out the appropriate inferential method for examining relationships between the variables and draw the correct conclusions in context.
CO-5: Determine preferred methodological alternatives to commonly used statistical methods when assumptions are not met.
Video
Video: Paired Samples (27:19)
Related SAS Tutorials
Related SPSS Tutorials
Introduction – Matched Pairs (Paired t-test)
Learning Objectives
LO 4.37: Identify and distinguish between independent and dependent samples.
Learning Objectives
LO 4.38: In a given context, determine the appropriate standard method for comparing groups and provide the correct conclusions given the appropriate software output.
Learning Objectives
LO 4.39: In a given context, set up the appropriate null and alternative hypotheses for comparing groups.
We are in Case CQ of inference about relationships, where the explanatory variable is categorical and the response variable is quantitative.
As we mentioned in the summary of the introduction to Case C→Q, the first case that we will deal with is that involving matched pairs. In this case:
• The samples are paired or matched. Every observation in one sample is linked with an observation in the other sample.
• In other words, the samples are dependent.
Notice from this point forward we will use the terms population 1 and population 2 instead of sub-population 1 and sub-population 2. Either terminology is correct.
One of the most common cases where dependent samples occur is when both samples have the same subjects and they are “paired by subject.” In other words, each subject is measured twice on the response variable, typically before and then after some kind of treatment/intervention in order to assess its effectiveness.
EXAMPLE: SAT Prep Class
Suppose you want to assess the effectiveness of an SAT prep class.
It would make sense to use the matched pairs design and record each sampled student’s SAT score before and after the SAT prep classes are attended:
Recall that the two populations represent the two values of the explanatory variable. In this situation, those two values come from a single set of subjects.
• In other words, both populations really have the same students.
• However, each population has a different value of the explanatory variable. Those values are: no prep class, prep class.
This, however, is not the only case where the paired design is used. Other cases are when the pairs are “natural pairs,” such as siblings, twins, or couples.
Notes about graphical summaries for paired data in Case CQ:
• Due to the paired nature of this type of data, we cannot really use side-by-side boxplots to visualize this data as the information contained in the pairing is completely lost.
• We will need to provide graphical summaries of the differences themselves in order to explore this type of data.
The Idea Behind Paired t-Test
The idea behind the paired t-test is to reduce this two-sample situation, where we are comparing two means, to a single sample situation where we are doing inference on a single mean, and then use a simple t-test that we introduced in the previous module.
In this setting, we can easily reduce the raw data to a set of differences and conduct a one-sample t-test.
• Thus we simplify our inference procedure to a problem where we are making an inference about a single mean: the mean of the differences.
In other words, by reducing the two samples to one sample of differences, we are essentially reducing the problem from a problem where we’re comparing two means (i.e., doing inference on μ1−μ2) to a problem in which we are studying one mean.
In general, in every matched pairs problem, our data consist of 2 samples which are organized in n pairs:
We reduce the two samples to only one by calculating the difference between the two observations for each pair.
For example, think of Sample 1 as “before” and Sample 2 as “after”. We can find the difference between the before and after results for each participant, which gives us only one sample, namely “before – after”. We label this difference as “d” in the illustration below.
The paired t-test is based on this one sample of n differences,
and it uses those differences as data for a one-sample t-test on a single mean — the mean of the differences.
This is the general idea behind the paired t-test; it is nothing more than a regular one-sample t-test for the mean of the differences!
Test Procedure for Paired T-Test
We will now go through the 4-step process of the paired t-test.
• Step 1: State the hypotheses
Recall that in the t-test for a single mean our null hypothesis was: Ho: μ = μ0 and the alternative was one of Ha: μ < μ0 or μ > μ0 or μ ≠ μ0. Since the paired t-test is a special case of the one-sample t-test, the hypotheses are the same except that:
Instead of simply μ we use the notation μd to denote that the parameter of interest is the mean of the differences.
In this course our null value μ0 is always 0. In other words, going back to our original paired samples, our null hypothesis claims that there is no difference between the two means. (Technically, it does not have to be zero if you are interested in a more specific difference – for example, you might be interested in showing that there is a reduction in blood pressure of more than 10 points – but we will not specifically look at such situations.)
Therefore, in the paired t-test: The null hypothesis is always:
Ho: μd = 0
(There IS NO association between the categorical explanatory variable and the quantitative response variable)
We will focus on the two-sided alternative hypothesis of the form:
Ha: μd ≠ 0
(There IS AN association between the categorical explanatory variable and the quantitative response variable)
Some students find it helpful to know that it turns out that μd = μ1 – μ2 (in other words, the difference between the means is the same as the mean of the differences). You may find it easier to first think about the hypotheses in terms of μ1 – μ2 and then represent it in terms of μd.
Did I Get This? Setting up Hypotheses
(Non-Interactive Version – Spoiler Alert)
• Step 2: Obtain data, check conditions, and summarize data
The paired t-test, as a special case of a one-sample t-test, can be safely used as long as:
The sample of differences is random (or at least can be considered random in context).
The distribution of the differences in the population should be approximately normal if you have a small sample. If the sample size is large, it is safe to use the paired t-test regardless of whether the differences vary normally or not. In other words, the paired t-test is safe to use for large samples, and for small samples when the differences show no clear departure from normality.
Note: normality is checked by looking at the histogram of differences, and as long as no clear violation of normality (such as extreme skewness and/or outliers) is apparent, the normality assumption is reasonable.
Assuming that we can safely use the paired t-test, the data are summarized by a test statistic:
$t = \dfrac{\bar{y}_d - 0}{s_d / \sqrt{n}}$
where
$\bar{y}_d = \text{ sample mean of the differences}$
$s_d = \text{sample standard deviation of the differences}$
This test statistic measures (in standard errors) how far our data are (represented by the sample mean of the differences) from the null hypothesis (represented by the null value, 0).
Notice this test statistic has the same general form as those discussed earlier:
$\text{test statistic} = \dfrac{\text{estimator - null value}}{\text{standard error of estimator}}$
• Step 3: Find the p-value of the test by using the test statistic as follows
As a special case of the one-sample t-test, the null distribution of the paired t-test statistic is a t distribution (with n – 1 degrees of freedom), which is the distribution under which the p-values are calculated. We will use software to find the p-value for us.
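To make the “one-sample t-test on the differences” idea concrete, here is a small Python sketch with made-up before/after data (the numbers are purely illustrative, not from any course dataset). It shows that the paired t-test and a one-sample t-test on the differences give the same test statistic and p-value:

```python
import numpy as np
from scipy import stats

# Hypothetical before/after measurements for n = 8 paired subjects
before = np.array([4.85, 5.10, 4.90, 5.30, 4.70, 5.05, 4.95, 5.20])
after  = np.array([5.20, 5.25, 5.10, 5.60, 4.95, 5.35, 5.05, 5.45])

differences = before - after
n = len(differences)

# Paired t statistic from the formula above: t = (mean(d) - 0) / (s_d / sqrt(n))
t_by_hand = differences.mean() / (differences.std(ddof=1) / np.sqrt(n))

# The same thing via the built-in tests
t_paired, p_paired = stats.ttest_rel(before, after)    # paired t-test
t_one, p_one = stats.ttest_1samp(differences, 0.0)     # one-sample t-test on the differences

print(t_by_hand, t_paired, t_one)   # all three t statistics agree
print(p_paired, p_one)              # and so do the (two-sided) p-values
```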
• Step 4: Conclusion
As usual, we draw our conclusion based on the p-value. Be sure to write your conclusions in context by specifying your current variables and/or precisely describing the population mean difference in terms of the current variables.
In particular, if a cutoff probability, α (significance level), is specified, we reject Ho if the p-value is less than α. Otherwise, we fail to reject Ho.
If the p-value is small, there is a statistically significant difference between what was observed in the sample and what was claimed in Ho, so we reject Ho.
Conclusion: There is enough evidence that the categorical explanatory variable is associated with the quantitative response variable. More specifically, there is enough evidence that the population mean difference is not equal to zero.
Remember: a small p-value tells us that there is very little chance of getting data like those observed (or even more extreme) if the null hypothesis were true. Therefore, a small p-value indicates that we should reject the null hypothesis.
If the p-value is not small, we do not have enough statistical evidence to reject Ho.
Conclusion: There is NOT enough evidence that the categorical explanatory variable is associated with the quantitative response variable. More specifically, there is NOT enough evidence that the population mean difference is not equal to zero.
Notice how much better the first sentence sounds! It can get difficult to correctly phrase these conclusions in terms of the mean difference without confusing double negatives.
Learning Objectives
LO 4.40: Based upon the output for a paired t-test, correctly interpret in context the appropriate confidence interval for the population mean-difference.
As in previous methods, we can follow-up with a confidence interval for the mean difference, μd and interpret this interval in the context of the problem.
Interpretation: We are 95% confident that the population mean difference (described in context) is between (lower bound) and (upper bound).
Confidence intervals can also be used to determine whether or not to reject the null hypothesis of the test based upon whether or not the null value of zero falls outside the interval or inside.
If the null value, 0, falls outside the confidence interval, Ho is rejected. (Zero is NOT a plausible value based upon the confidence interval)
If the null value, 0, falls inside the confidence interval, Ho is not rejected. (Zero IS a plausible value based upon the confidence interval)
NOTE: Be careful to choose the correct confidence interval about the population mean difference and not the individual confidence intervals for the means in the groups themselves.
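As a sketch of how such an interval is computed (again with made-up differences, purely for illustration), the 95% confidence interval for μd uses the t distribution with n - 1 degrees of freedom:

```python
import numpy as np
from scipy import stats

# Hypothetical sample of before-minus-after differences
differences = np.array([-0.35, -0.15, -0.20, -0.30, -0.25, -0.30, -0.10, -0.25])

n = len(differences)
mean_d = differences.mean()
se_d = differences.std(ddof=1) / np.sqrt(n)

# 95% CI: mean difference +/- t* x (standard error), with n - 1 degrees of freedom
t_star = stats.t.ppf(0.975, df=n - 1)
lower, upper = mean_d - t_star * se_d, mean_d + t_star * se_d

print(lower, upper)   # if 0 falls outside this interval, Ho: mu_d = 0 is rejected at the 0.05 level
```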
Now let’s look at an example.
EXAMPLE: Drinking and Driving
Note: In some of the videos presented in the course materials, we do conduct the one-sided test for this data instead of the two-sided test we conduct below. In Unit 4B we are going to restrict our attention to two-sided tests supplemented by confidence intervals as needed to provide more information about the effect of interest.
Drunk driving is one of the main causes of car accidents. Interviews with drunk drivers who were involved in accidents and survived revealed that one of the main problems is that drivers do not realize that they are impaired, thinking “I only had 1-2 drinks … I am OK to drive.”
A sample of 20 drivers was chosen, and their reaction times in an obstacle course were measured before and after drinking two beers. The purpose of this study was to check whether drivers are impaired after drinking two beers. Here is a figure summarizing this study:
• Note that the categorical explanatory variable here is “drinking 2 beers (Yes/No)”, and the quantitative response variable is the reaction time.
• By using the matched pairs design in this study (i.e., by measuring each driver twice), the researchers isolated the effect of the two beers on the drivers and eliminated any other confounding factors that might influence the reaction times (such as the driver’s experience, age, etc.).
• For each driver, the two measurements are the total reaction time before drinking two beers, and after. You can see the data by following the links in Step 2 below.
Since the measurements are paired, we can easily reduce the raw data to a set of differences and conduct a one-sample t-test.
Here are some of the results for this data:
Step 1: State the hypotheses
We define μd = the population mean difference in reaction times (Before – After).
As we mentioned, the null hypothesis is:
• Ho: μd = 0 (indicating that the population of the differences are centered at a number that IS ZERO)
The null hypothesis claims that the differences in reaction times are centered at (or around) 0, indicating that drinking two beers has no real impact on reaction times. In other words, drivers are not impaired after drinking two beers.
Although we really want to know whether their reaction times are longer after the two beers, we will still focus on conducting two-sided hypothesis tests. We will be able to address whether the reaction times are longer after two beers when we look at the confidence interval.
Therefore, we will use the two-sided alternative:
• Ha: μd ≠ 0 (indicating that the population of the differences are centered at a number that is NOT ZERO)
Step 2: Obtain data, check conditions, and summarize data
Let’s first check whether we can safely proceed with the paired t-test, by checking the two conditions.
• The sample of drivers was chosen at random.
• The sample size is not large (n = 20), so in order to proceed, we need to look at the histogram or QQ-plot of the differences and make sure there is no evidence that the normality assumption is not met.
We can see from the histogram above that there is no evidence of violation of the normality assumption (on the contrary, the histogram looks quite normal).
Also note that the vast majority of the differences are negative (i.e., the total reaction times for most of the drivers are larger after the two beers), suggesting that the data provide evidence against the null hypothesis.
The question (which the p-value will answer) is whether these data provide strong enough evidence or not against the null hypothesis. We can safely proceed to calculate the test statistic (which in practice we leave to the software to calculate for us).
Test Statistic: We will use software to calculate the test statistic which is t = -2.58.
• Recall: This indicates that the data (represented by the sample mean of the differences) are 2.58 standard errors below the null hypothesis (represented by the null value, 0).
Step 3: Find the p-value of the test by using the test statistic as follows
As a special case of the one-sample t-test, the null distribution of the paired t-test statistic is a t distribution (with n – 1 degrees of freedom), which is the distribution under which the p-values are calculated.
We will let the software find the p-value for us, and in this case, gives us a p-value of 0.0183 (SAS) or 0.018 (SPSS).
The small p-value tells us that there is very little chance of getting data like those observed (or even more extreme) if the null hypothesis were true. More specifically, there is less than a 2% chance (0.018=1.8%) of obtaining a test statistic of -2.58 (or lower) or 2.58 (or higher), assuming that 2 beers have no impact on reaction times.
Step 4: Conclusion
In our example, the p-value is 0.018, indicating that the data provide enough evidence to reject Ho.
• Conclusion: There is enough evidence that drinking two beers is associated with differences in reaction times of drivers.
Follow-up Confidence Interval:
As a follow-up to this conclusion, we quantify the effect that two beers have on the driver, using the 95% confidence interval for μd.
Using statistical software, we find that the 95% confidence interval for μd, the mean of the differences (before – after), is roughly (-0.9, -0.1).
Note: Since the differences were calculated before-after, longer reaction times after the beers would translate into negative differences.
• Interpretation: We are 95% confident that after drinking two beers, the true mean increase in total reaction time of drivers is between 0.1 and 0.9 of a second.
• Thus, the results of the study do indicate impairment of drivers (longer reaction times) not the other way around!
Since the confidence interval does not contain the null value of zero, we can use it to decide to reject the null hypothesis. Zero is not a plausible value of the population mean difference based upon the confidence interval. Notice that using this method is not always practical as often we still need to provide the p-value in clinical research. (Note: this is NOT the interpretation of the confidence interval but a method of using the confidence interval to conduct a hypothesis test.)
Practical Significance:
We should definitely ask ourselves if this is practically significant and I would argue that it is.
• Although a difference in the mean reaction time of 0.1 second might not be too bad, a difference of 0.9 seconds is likely a problem.
• Even at a difference in reaction time of 0.4 seconds, if you were traveling 60 miles per hour (88 feet per second), this would translate into a distance traveled of about 88 × 0.4 ≈ 35 feet before reacting.
Many Students Wonder: One-sided vs. Two-sided P-values
In the output, we are generally provided the two-sided p-value. We must be very careful when converting this to a one-sided p-value if it is not provided by the software (see the short sketch after the list below).
• IF the data are in the direction of our alternative hypothesis then we can simply take half of the two-sided p-value.
• IF, however, the data are NOT in the direction of the alternative, the correct p-value is VERY LARGE and is the complement of (one minus) half the two-sided p-value.
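A tiny sketch of this rule, using the two-sided p-value of 0.018 from the drinking-and-driving example:

```python
def one_sided_p(two_sided_p, data_in_direction_of_ha):
    """Convert a reported two-sided p-value to the corresponding one-sided p-value."""
    if data_in_direction_of_ha:
        return two_sided_p / 2        # data support the direction stated in Ha
    return 1 - two_sided_p / 2        # data point in the opposite direction

print(one_sided_p(0.018, True))    # 0.009
print(one_sided_p(0.018, False))   # 0.991
```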
The “driving after having 2 beers” example is a case in which observations are paired by subject. In other words, both samples have the same subject, so that each subject is measured twice. Typically, as in our example, one of the measurements occurs before a treatment/intervention (2 beers in our case), and the other measurement after the treatment/intervention.
Our next example is another typical type of study where the matched pairs design is used—it is a study involving twins.
EXAMPLE: IQ Scores
Researchers have long been interested in the extent to which intelligence, as measured by IQ score, is affected by “nurture” as opposed to “nature”: that is, are people’s IQ scores mainly a result of their upbringing and environment, or are they mainly an inherited trait?
A study was designed to measure the effect of home environment on intelligence, or more specifically, the study was designed to address the question: “Are there statistically significant differences in IQ scores between people who were raised by their birth parents, and those who were raised by someone else?”
In order to be able to answer this question, the researchers needed to get two groups of subjects (one from the population of people who were raised by their birth parents, and one from the population of people who were raised by someone else) who are as similar as possible in all other respects. In particular, since genetic differences may also affect intelligence, the researchers wanted to control for this confounding factor.
We know from our discussion on study design (in the Producing Data unit of the course) that one way to (at least theoretically) control for all confounding factors is randomization—randomizing subjects to the different treatment groups. In this case, however, this is not possible. This is an observational study; you cannot randomize children to either be raised by their birth parents or to be raised by someone else. How else can we eliminate the genetics factor? We can conduct a “twin study.”
Because identical twins are genetically the same, a good design for obtaining information to answer this question would be to compare IQ scores for identical twins, one of whom is raised by birth parents and the other by someone else. Such a design (matched pairs) is an excellent way of making a comparison between individuals who only differ with respect to the explanatory variable of interest (upbringing) but are as alike as they can possibly be in all other important aspects (inborn intelligence). Identical twins raised apart were studied by Susan Farber, who published her studies in the book “Identical Twins Reared Apart” (1981, Basic Books).
In this problem, we are going to use the data that appear in Farber’s book in table E6, of the IQ scores of 32 pairs of identical twins who were reared apart.
Here is a figure that will help you understand this study:
Here are the important things to note in the figure:
• We are essentially comparing the mean IQ scores in two populations that are defined by our (two-valued categorical) explanatory variable, upbringing (X), whose two values are: raised by birth parents, raised by someone else.
• This is a matched pairs design (as opposed to a two independent samples design), since each observation in one sample is linked (matched) with an observation in the second sample. The observations are paired by twins.
Each of the 32 rows represents one pair of twins. Keeping the notation that we used above, twin 1 is the twin that was raised by his/her birth parents, and twin 2 is the twin that was raised by someone else. Let’s carry out the analysis.
Step 1: State the hypotheses
Recall that in matched pairs, we reduce the data from two samples to one sample of differences:
The hypotheses are stated in terms of the mean of the differences, where μd = population mean difference in IQ scores (Birth Parents – Someone Else):
• Ho: μd = 0 (indicating that the population of the differences is centered at a number that IS ZERO)
• Ha: μd ≠ 0 (indicating that the population of the differences is centered at a number that is NOT ZERO)
Step 2: Obtain data, check conditions, and summarize data
Is it safe to use the paired t-test in this case?
• Clearly, the samples of twins are not random samples from the two populations. However, in this context, they can be considered as random, assuming that there is nothing special about the IQ of a person just because he/she has an identical twin.
• The sample size here is n = 32. Even though our sample can be considered large by the n > 30 rule of thumb, it is a borderline case, so just to be on the safe side, we should look at the histogram of the differences to make sure that we do not see anything extreme. (Comment: Looking at the histogram of differences is useful even if the sample is very large, just in order to get a sense of the data. Recall: “Always look at the data.”)
The data don’t reveal anything that we should be worried about (like very extreme skewness or outliers), so we can safely proceed. Looking at the histogram, we note that most of the differences are negative, indicating that in most of the 32 pairs of twins, twin 2 (raised by someone else) has a higher IQ.
From this point we rely on statistical software, and find that:
• t-value = -1.85
• p-value = 0.074
Our test statistic is -1.85.
Our data (represented by the sample mean of the differences) are 1.85 standard errors below the null hypothesis (represented by the null value 0).
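As a side note for readers who like to see the computation, here is a minimal sketch of how a paired analysis like this could be run in Python with scipy (a tool not used in this course). The IQ values below are made-up placeholders, not Farber's 32 twin pairs; they are included only to show that the paired t-test is identical to a one-sample t-test on the differences.

```python
# A sketch of the paired analysis in Python. The IQ values are made-up
# placeholders (NOT the 32 twin pairs from Farber's table E6).
import numpy as np
from scipy import stats

birth_parents = np.array([113, 94, 99, 77, 81, 91])   # hypothetical twin 1 IQs
someone_else  = np.array([109, 100, 86, 92, 94, 98])  # hypothetical twin 2 IQs

differences = birth_parents - someone_else

# Two equivalent ways to run the test:
t_paired, p_paired = stats.ttest_rel(birth_parents, someone_else)  # paired t-test
t_one, p_one = stats.ttest_1samp(differences, popmean=0)           # one-sample t on the differences

print(t_paired, p_paired)   # identical to the line below
print(t_one, p_one)
```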
Step 3: Find the p-value of the test by using the test statistic as follows
The p-value is 0.074, indicating that there is a 7.4% chance of obtaining data like those observed (or even more extreme) assuming that Ho is true (i.e., assuming that there are no differences in IQ scores between people who were raised by their natural parents and those who weren’t).
Step 4: Conclusion
Using the conventional significance level (cut-off probability) of .05, our p-value is not small enough, and we therefore cannot reject Ho.
• Conclusion: Our data do not provide enough evidence to conclude that whether a person was raised by his/her natural parents has an impact on the person’s intelligence (as measured by IQ scores).
Confidence Interval:
The 95% confidence interval for the population mean difference is (-6.11322, 0.30072).
Interpretation:
• We are 95% confident that the population mean IQ for twins raised by someone else is between 6.11 points higher and 0.3 points lower than that for twins raised by their birth parents.
• OR … We are 95% confident that the population mean IQ for twins raised by their birth parents is between 6.11 points lower and 0.3 points higher than that for twins raised by someone else.
• Note: The order of the groups as well as the numbers provided in the interval can vary; what is important is to match “lower” and “higher” with the correct value based upon the group order being used.
• Here we used Birth Parents – Someone Else, and thus a positive number for the population mean difference indicates that the birth parents group is higher (the someone else group is lower) and a negative number indicates that the someone else group is higher (the birth parents group is lower).
This confidence interval does contain zero and thus results in the same conclusion to the hypothesis test. Zero IS a plausible value of the population mean difference and thus we cannot reject the null hypothesis.
Practical Significance:
• The confidence interval does “lean” towards the difference being negative, consistent with the earlier observation that in most of the 32 pairs of twins, twin 2 (raised by someone else) has a higher IQ. The sample mean difference is -2.9, so we would need to consider whether this value and the range of plausible values have any real practical significance.
• In this case, I don’t think I would consider a difference in IQ score of around 3 points to be very important in practice (but others could reasonably disagree).
It is very important to pay attention to whether the two-sample t-test or the paired t-test is appropriate. In other words, being aware of the study design is extremely important. Consider the “driving after having 2 beers” example: if we had not “caught” that it was a matched pairs design, and had analyzed the data as if the two samples were independent using the two-sample t-test, we would have obtained a p-value of 0.114.
Note that using this (wrong) method to analyze the data, and a significance level of 0.05, we would conclude that the data do not provide enough evidence for us to conclude that reaction times differed after drinking two beers. This is an example of how using the wrong statistical method can lead you to wrong conclusions, which in this context can have very serious implications.
Comments:
• The 95% confidence interval for μd can be used here in the same way as for proportions to conduct the two-sided test (checking whether the null value falls inside or outside the confidence interval) or, following a t-test where Ho was rejected, to get insight into the value of μd.
• In most situations in practice we use two-sided hypothesis tests, followed by confidence intervals to gain more insight.
Now try a complete example for yourself.
Additional Data for Practice
Here are two other datasets with paired samples.
Non-Parametric Alternatives for Matched Pair Data
Learning Objectives
LO 5.1: For a data analysis situation involving two variables, determine the appropriate alternative (non-parametric) method when assumptions of our standard methods are not met.
The statistical tests we have previously discussed (and many we will discuss) require assumptions about the distribution in the population or about the requirements to use a certain approximation as the sampling distribution. These methods are called parametric.
When these assumptions are not valid, alternative methods often exist to test similar hypotheses. Tests which require only minimal distributional assumptions, if any, are called non-parametric or distribution-free tests.
At the end of this section we will provide some details (see Details for Non-Parametric Alternatives), for now we simply want to mention that there are two common non-parametric alternatives to the paired t-test. They are:
• Sign Test
• Wilcoxon Signed-Rank Test
It is no coincidence that both of these tests have the word “sign” in their names: both focus on whether each difference has a positive or a negative sign. This can also help you remember that they are paired methods, where we are often interested in whether there was an increase (positive sign) or a decrease (negative sign).
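As a preview of those details, here is a minimal sketch, using a made-up set of paired differences, of how these two tests could be computed in Python with scipy. The sign test is carried out here as a binomial test on the number of positive differences; none of this data or code comes from the course materials.

```python
# A sketch of the two non-parametric alternatives, applied to a made-up
# set of paired differences (e.g., after - before).
import numpy as np
from scipy import stats

differences = np.array([-4, -7, 2, -3, -6, 1, -5, -2])  # hypothetical differences

# Wilcoxon signed-rank test (uses the sign AND the magnitude of each difference)
w_stat, w_p = stats.wilcoxon(differences)

# Sign test (uses only the sign): binomial test on the number of positive
# differences among the non-zero differences (requires scipy >= 1.7)
n_pos = int(np.sum(differences > 0))
n_nonzero = int(np.sum(differences != 0))
sign_p = stats.binomtest(n_pos, n_nonzero, p=0.5).pvalue

print(w_p, sign_p)
```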
Let’s Summarize
• The paired t-test is used to compare two population means when the two samples (drawn from the two populations) are dependent in the sense that every observation in one sample can be linked to an observation in the other sample. Such a design is called “matched pairs.”
• The most common case in which the matched pairs design is used is when the same subjects are measured twice, usually before and then after some kind of treatment and/or intervention. Another classic case are studies involving twins.
• In the background, we have a two-valued categorical explanatory whose categories define the two populations we are comparing and whose effect on the response variable we are trying to assess.
• The idea behind the paired t-test is to reduce the data from two samples to just one sample of the differences, and use these observed differences as data for inference about a single mean — the mean of the differences, μd.
• The paired t-test is therefore simply a one-sample t-test for the mean of the differences μd, where the null value is 0.
• Once we verify that we can safely proceed with the paired t-test, we use software output to carry it out.
• A 95% confidence interval for μd can be very insightful after a test has rejected the null hypothesis, and can also be used for testing in the two-sided case.
• Two non-parametric alternatives to the paired t-test are the sign test and the Wilcoxon signed-rank test. (See Details for Non-Parametric Alternatives.)
Two Independent Samples
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results
Learning Objectives
LO 4.35: For a data analysis situation involving two variables, choose the appropriate inferential method for examining the relationship between the variables and justify the choice.
Learning Objectives
LO 4.36: For a data analysis situation involving two variables, carry out the appropriate inferential method for examining relationships between the variables and draw the correct conclusions in context.
CO-5: Determine preferred methodological alternatives to commonly used statistical methods when assumptions are not met.
REVIEW: Unit 1 Case C-Q
Video
Video: Two Independent Samples (38:56)
Related SAS Tutorials
Related SPSS Tutorials
Introduction
Here is a summary of the tests we will learn for the scenario where k = 2. Methods in BOLD will be our main focus.
We have completed our discussion on dependent samples and now we move on to independent samples.
Independent Samples
Standard Tests
• Two Sample T-Test Assuming Equal Variances
• Two Sample T-Test Assuming Unequal Variances
Non-Parametric Test
• Mann-Whitney U (or Wilcoxon Rank-Sum) Test
Dependent Samples (Less Emphasis)
Standard Test
• Paired T-Test
Non-Parametric Tests
• Sign Test
• Wilcoxon Signed-Rank Test
Dependent vs. Independent Samples
Learning Objectives
LO 4.37: Identify and distinguish between independent and dependent samples.
We have discussed the dependent sample case where observations are matched/paired/linked between the two samples. Recall that in that scenario observations can be the same individual or two individuals who are matched between samples. To analyze data from dependent samples, we simply took the differences and analyzed the difference using one-sample techniques.
Now we will discuss the independent sample case. In this case, all individuals are independent of all other individuals in their sample as well as all individuals in the other sample. This is most often accomplished by either:
• Taking a random sample from each of the two groups under study. For example, to compare heights of males and females, we could take a random sample of 100 females and another random sample of 100 males. The result would be two samples which are independent of each other.
• Taking a random sample from the entire population and then dividing it into two sub-samples based upon the grouping variable of interest. For example, we take a random sample of U.S. adults and then split them into two samples based upon gender. This results in a sub-sample of females and a sub-sample of males which are independent of each other.
Comparing Two Means – Two Independent Samples T-test
Learning Objectives
LO 4.38: In a given context, determine the appropriate standard method for comparing groups and provide the correct conclusions given the appropriate software output.
Learning Objectives
LO 4.39: In a given context, set up the appropriate null and alternative hypotheses for comparing groups.
Recall that here we are interested in the effect of a two-valued (k = 2) categorical variable (X) on a quantitative response (Y). Random samples from the two sub-populations (defined by the two categories of X) are obtained and we need to evaluate whether or not the data provide enough evidence for us to believe that the two sub-population means are different.
In other words, our goal is to test whether the means μ1 and μ2 (which are the means of the variable of interest in the two sub-populations) are equal or not, and in order to do that we have two samples, one from each sub-population, which were chosen independently of each other.
The test that we will learn here is commonly known as the two-sample t-test. As the name suggests, this is a t-test, which as we know means that the p-values for this test are calculated under some t-distribution.
Here are figures that illustrate some of the examples we will cover. Notice how the original variables X (categorical variable with two levels) and Y (quantitative variable) are represented. Think about the fact that we are in case C → Q!
As in our discussion of dependent samples, we will often simplify our terminology and simply use the terms “population 1” and “population 2” instead of referring to these as sub-populations. Either terminology is fine.
Many Students Wonder: Two Independent Samples
Question: Does it matter which population we label as population 1 and which as population 2?
Answer: No, it does not matter as long as you are consistent, meaning that you do not switch labels in the middle.
• BUT… considering how you label the populations is important in stating the hypotheses and in the interpretation of the results.
Steps for the Two-Sample T-test
Recall that our goal is to compare the means μ1 and μ2 based on the two independent samples.
• Step 1: State the hypotheses
The hypotheses represent our goal to compare μ1 and μ2.
The null hypothesis is always:
Ho: μ1 – μ2 = 0 (which is the same as μ1 = μ2)
(There IS NO association between the categorical explanatory variable and the quantitative response variable)
We will focus on the two-sided alternative hypothesis of the form:
Ha: μ1 – μ2 ≠ 0 (which is the same as μ1 ≠ μ2) (two-sided)
(There IS AN association between the categorical explanatory variable and the quantitative response variable)
Note that the null hypothesis claims that there is no difference between the means. Conceptually, Ho claims that there is no relationship between the two relevant variables (X and Y).
Our parameter of interest in this case (the parameter about which we are making an inference) is the difference between the means (μ1 – μ2) and the null value is 0. The alternative hypothesis claims that there is a difference between the means.
Did I Get This? What do our hypotheses mean in context?
(Non-Interactive Version – Spoiler Alert)
• Step 2: Obtain data, check conditions, and summarize data
The two-sample t-test can be safely used as long as the following conditions are met:
The two samples are indeed independent.
We are in one of the following two scenarios:
(i) Both populations are normal, or more specifically, the distribution of the response Y in both populations is normal, and both samples are random (or at least can be considered as such). In practice, checking normality in the populations is done by looking at each of the samples using a histogram and checking whether there are any signs that the populations are not normal. Such signs could be extreme skewness and/or extreme outliers.
(ii) The populations are known or discovered not to be normal, but the sample size of each of the random samples is large enough (we can use the rule of thumb that a sample size greater than 30 is considered large enough).
Did I Get This? Conditions for Two Independent Samples
(Non-Interactive Version – Spoiler Alert)
Assuming that we can safely use the two-sample t-test, we need to summarize the data, and in particular, calculate our data summary—the test statistic.
Test Statistic for Two-Sample T-test:
There are two choices for our test statistic, and we must choose the appropriate one to summarize our data. We will see how to choose between the two test statistics in the next section. The two options are as follows:
We use the following notation to describe our samples:
$n_1, n_2$ = sample sizes of the samples from population 1 and population 2
$\bar{y}_1, \bar{y}_2$ = sample means of the samples from population 1 and population 2
$s_1, s_2$ = sample standard deviations of the samples from population 1 and population 2
$s_p$ = pooled estimate of a common population standard deviation
Here are the two cases for our test statistic.
(A) Equal Variances: If it is safe to assume that the two populations have equal standard deviations, we can pool our estimates of this common population standard deviation and use the following test statistic.
$t=\dfrac{\bar{y}_{1}-\bar{y}_{2}-0}{s_{p} \sqrt{\frac{1}{n_{1}}+\frac{1}{n_{2}}}}$
where
$s_{p}=\sqrt{\dfrac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{n_{1}+n_{2}-2}}$
(B) Unequal Variances: If it is NOT safe to assume that the two populations have equal standard deviations, we have unequal standard deviations and must use the following test statistic.
$t=\dfrac{\bar{y}_{1}-\bar{y}_{2}-0}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}$
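To make the two formulas concrete, here is a minimal sketch that computes both versions of the test statistic from summary statistics in Python. The sample means, standard deviations, and sample sizes below are made-up placeholders, not values from the examples in this section.

```python
# A sketch of the two test statistics computed from summary statistics.
# The summary numbers below are made-up placeholders.
import numpy as np

def pooled_t(ybar1, ybar2, s1, s2, n1, n2):
    """Case (A): equal-variances (pooled) two-sample t statistic."""
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (ybar1 - ybar2 - 0) / (sp * np.sqrt(1 / n1 + 1 / n2))

def unpooled_t(ybar1, ybar2, s1, s2, n1, n2):
    """Case (B): unequal-variances two-sample t statistic."""
    return (ybar1 - ybar2 - 0) / np.sqrt(s1**2 / n1 + s2**2 / n2)

print(pooled_t(10.0, 12.5, 4.0, 4.5, 40, 50))
print(unpooled_t(10.0, 12.5, 4.0, 4.5, 40, 50))
```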
Comments:
• It is possible to never assume equal variances; however, if the assumption of equal variances is satisfied the equal variances t-test will have greater power to detect the difference of interest.
• We will not be calculating the values of these test statistics by hand in this course. We will instead rely on software to obtain the value for us.
• Both of these test statistics measure (in standard errors) how far our data are (represented by the difference of the sample means) from the null hypothesis (represented by the null value, 0).
• These test statistics have the same general form as others we have discussed. We will not discuss the derivation of the standard errors in each case but you should understand this general form and be able to identify each component for a specific test statistic.
$\text{test statistic} = \dfrac{\text{estimator - null value}}{\text{standard error of estimator}}$
• Step 3: Find the p-value of the test by using the test statistic as follows
Each of these tests rely on a particular t-distribution under which the p-values are calculated. In the case where equal variances are assumed, the degrees of freedom are simply:
$n_1 + n_2 - 2$
whereas in the case of unequal variances, the formula for the degrees of freedom is more complex. We will rely on the software to obtain the degrees of freedom in both cases and to provide us with the correct p-value (usually this will be a two-sided p-value).
• Step 4: Conclusion
As usual, we draw our conclusion based on the p-value. Be sure to write your conclusions in context by specifying your current variables and/or precisely describing the difference in population means in terms of the current variables.
If the p-value is small, there is a statistically significant difference between what was observed in the sample and what was claimed in Ho, so we reject Ho.
Conclusion: There is enough evidence that the categorical explanatory variable is related to (or associated with) the quantitative response variable. More specifically, there is enough evidence that the difference in population means is not equal to zero.
If the p-value is not small, we do not have enough statistical evidence to reject Ho.
Conclusion: There is NOT enough evidence that the categorical explanatory variable is related to (or associated with) the quantitative response variable. More specifically, there is NOT enough evidence that the difference in population means is different from zero.
In particular, if a cutoff probability, α (significance level), is specified, we reject Ho if the p-value is less than α. Otherwise, we do not reject Ho.
Learning Objectives
LO 4.41: Based upon the output for a two-sample t-test, correctly interpret in context the appropriate confidence interval for the difference between population means
As in previous methods, we can follow-up with a confidence interval for the difference between population means, μ1 – μ2 and interpret this interval in the context of the problem.
Interpretation: We are 95% confident that the population mean for (one group) is between __________________ compared to the population mean for (the other group).
Confidence intervals can also be used to determine whether or not to reject the null hypothesis of the test based upon whether or not the null value of zero falls outside the interval or inside.
If the null value, 0, falls outside the confidence interval, Ho is rejected. (Zero is NOT a plausible value based upon the confidence interval)
If the null value, 0, falls inside the confidence interval, Ho is not rejected. (Zero IS a plausible value based upon the confidence interval)
NOTE: Be careful to choose the correct confidence interval about the difference between population means using the same assumption (variances equal or variances unequal) and not the individual confidence intervals for the means in the groups themselves.
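Here is a minimal sketch of how such a confidence interval could be computed from summary statistics under the equal-variances assumption, again using made-up numbers rather than output from our examples; the last line applies the decision rule just described.

```python
# A sketch of a 95% confidence interval for mu1 - mu2 assuming equal
# variances, from made-up summary statistics.
import numpy as np
from scipy import stats

ybar1, s1, n1 = 10.0, 4.0, 40   # hypothetical group 1 summaries
ybar2, s2, n2 = 12.5, 4.5, 50   # hypothetical group 2 summaries

sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
se = sp * np.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)   # two-sided 95% critical value

diff = ybar1 - ybar2
ci = (diff - t_crit * se, diff + t_crit * se)
print(ci)   # if 0 falls outside this interval, reject Ho at the 0.05 level
```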
Many Students Wonder: Reading Statistical Software Output for Two-Sample T-test
Test for Equality of Variances (or Standard Deviations)
Learning Objectives
LO 4.42: Based upon the output for a two-sample t-test, determine whether to use the results assuming equal variances or those assuming unequal variances.
Since we have two possible tests we can conduct, based upon whether or not we can assume the population standard deviations (or variances) are equal, we need a method to determine which test to use.
Although you can make a reasonable guess using information from the data (i.e., look at the distributions and estimates of the standard deviations and see if you feel they are reasonably equal), we have a test which can help us here, called the test for Equality of Variances. This output is automatically displayed in many software packages when a two-sample t-test is requested, although the particular test used may vary. The hypotheses of this test are:
Ho: σ1 = σ2 (the standard deviations in the two populations are the same)
Ha: σ1 ≠ σ2 (the standard deviations in the two populations are not the same)
• If the p-value of this test for equal variances is small, there is enough evidence that the standard deviations in the two populations are different and we cannot assume equal variances.
• IMPORTANT! In this case, when we conduct the two-sample t-test to compare the population means, we use the test statistic for unequal variances.
• If the p-value of this test is large, there is not enough evidence that the standard deviations in the two populations are different. In this case we will assume equal variances since we have no clear evidence to the contrary.
• IMPORTANT! In this case, when we conduct the two-sample t-test to compare the population means, we use the test statistic for equal variances.
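Here is a minimal sketch of this two-step decision in Python with scipy. Note the assumptions: the samples are made-up placeholders, and scipy's levene() is used as the equality-of-variances test, whereas SPSS uses Levene's test and SAS uses a folded F test by default.

```python
# A sketch of the two-step decision: test equality of variances first,
# then run the matching two-sample t-test. The scores are made-up.
import numpy as np
from scipy import stats

group1 = np.array([15, 12, 14, 6, 17, 20, 9, 11])   # hypothetical scores, group 1
group2 = np.array([13, 10, 14, 8, 12, 7, 9, 11])    # hypothetical scores, group 2

# Step A: test Ho: sigma1 = sigma2 (Levene's test is one common choice)
lev_stat, lev_p = stats.levene(group1, group2)

# Step B: large p-value -> assume equal variances; small p-value -> do not
equal_var = lev_p >= 0.05
t_stat, p_val = stats.ttest_ind(group1, group2, equal_var=equal_var)

print(f"Levene p = {lev_p:.3f}, equal variances assumed = {equal_var}")
print(f"t = {t_stat:.2f}, two-sided p = {p_val:.4f}")
```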
Now let’s look at a complete example of conducting a two-sample t-test, including the embedded test for equality of variances.
EXAMPLE: What is more important - personality or looks?
This question was asked of a random sample of 239 college students, who were to answer on a scale of 1 to 25. An answer of 1 means personality has maximum importance and looks no importance at all, whereas an answer of 25 means looks have maximum importance and personality no importance at all. The purpose of this survey was to examine whether males and females differ with respect to the importance of looks vs. personality.
Note that the data have the following format:
Score (Y) Gender (X)
15 Male
13 Female
10 Female
12 Male
14 Female
14 Male
6 Male
17 Male
etc.
The format of the data reminds us that we are essentially examining the relationship between the two-valued categorical variable, gender, and the quantitative response, score. The two values of the categorical explanatory variable (k = 2) define the two populations that we are comparing — males and females. The comparison is with respect to the response variable score. Here is a figure that summarizes the example:
Comments:
• Note that this figure emphasizes that, because our explanatory variable is a two-valued categorical variable, in practice we are comparing two populations (defined by these two values) with respect to our response Y.
• Note that even though the problem description just says that we had 239 students, the figure tells us that there were 85 males in the sample, and 150 females.
• Following up on the previous comment, note that 85 + 150 = 235 and not 239. In these data (which are real) there are four “missing observations,” 4 students for whom we do not have the value of the response variable, “importance.” This could be due to a number of reasons, such as recording error or non-response. The bottom line is that even though data were collected from 239 students, effectively we have data from only 235. (Recommended: Go through the data file and note that there are 4 cases of missing observations: students 34, 138, 179, and 183).
Step 1: State the hypotheses
Recall that the purpose of this survey was to examine whether the opinions of females and males differ with respect to the importance of looks vs. personality. The hypotheses in this case are therefore:
Ho: μ1 – μ2 = 0 (which is the same as μ1 = μ2)
Ha: μ1 – μ2 ≠ 0 (which is the same as μ1 ≠ μ2)
where μ1 represents the mean “looks vs personality score” for females and μ2 represents the mean “looks vs personality score” for males.
It is important to understand that conceptually, the two hypotheses claim:
Ho: Score (of looks vs. personality) is not related to gender
Ha: Score (of looks vs. personality) is related to gender
Step 2: Obtain data, check conditions, and summarize data
• Data: Looks SPSS format, SAS format, Excel format, CSV format
• Let’s first check whether the conditions that allow us to safely use the two-sample t-test are met.
• Here, 239 students were chosen and were naturally divided into a sample of females and a sample of males. Since the students were chosen at random, the sample of females is independent of the sample of males.
• Here we are in the second scenario — the sample sizes (150 and 85) are definitely large enough, and so we can proceed regardless of whether the populations are normal or not.
• In the output below we first look at the test for equality of variances (outlined in orange). The two-sample t-test results we will use are outlined in blue.
• There are TWO TESTS represented in this output and we must make the correct decision for BOTH of these tests to correctly proceed.
• SOFTWARE OUTPUT In SPSS:
• The p-value for the test of equality of variances is reported as 0.849 in the SIG column under Levene’s test for equality of variances. (Note that this differs from the p-value found using SAS; the two programs use different tests by default.)
• So we fail to reject the null hypothesis that the variances, or equivalently the standard deviations, are equal (Ho: σ1 = σ2).
• Conclusion to test for equality of variances: We cannot conclude there is a difference in the variance of looks vs. personality score between males and females.
• This results in using the row for Equal variances assumed to find the t-test results including the test statistic, p-value, and confidence interval for the difference. (Outlined in BLUE)
The output might also be broken up if you export or copy the items in certain ways. The results are the same but it can be more difficult to read.
• SOFTWARE OUTPUT In SAS:
• The p-value for the test of equality of variances is reported as 0.5698 in the Pr > F column under equality of variances. (Note that this differs from the p-value found using SPSS; the two programs use different tests by default.)
• So we fail to reject the null hypothesis that the variances, or equivalently the standard deviations, are equal (Ho: σ1 = σ2).
• Conclusion to test for equality of variances: We cannot conclude there is a difference in the variance of looks vs. personality score between males and females.
• This results in using the row for POOLED method where equal variances are assumed to find the t-test results including the test statistic, p-value, and confidence interval for the difference. (Outlined in BLUE)
• TEST STATISTIC for Two-Sample T-test: In all of the results above, we determine that we will use the test which assumes the variances are EQUAL, and we find our test statistic of t = -4.58.
Step 3: Find the p-value of the test by using the test statistic as follows
• We will let the software find the p-value for us, and in this case, the p-value is less than our significance level of 0.05; in fact, it is practically 0.
• This is found in SPSS in the equal variances assumed row under t-test in the SIG. (two-tailed) column given as 0.000 and in SAS in the POOLED ROW under Pr > |t| column given as <0.0001.
• A p-value which is practically 0 means that it would be almost impossible to get data like that observed (or even more extreme) had the null hypothesis been true.
• More specifically, in our example, if there were no differences between females and males with respect to whether they value looks vs. personality, it would be almost impossible (probability approximately 0) to get data where the difference between the sample means of females and males is -2.6 (that difference is 10.73 – 13.33 = -2.6) or more extreme.
• Comment: Note that the output tells us that the difference μ1 – μ2 is approximately -2.6. But more importantly, we want to know if this difference is statistically significant. To answer this, we use the fact that this difference is 4.58 standard errors below the null value.
Step 4: Conclusion
As usual a small p-value provides evidence against Ho. In our case our p-value is practically 0 (which is smaller than any level of significance that we will choose). The data therefore provide very strong evidence against Ho so we reject it.
• Conclusion: There is enough evidence that the mean Importance score (of looks vs personality) of males differs from that of females. In other words, males and females differ with respect to how they value looks vs. personality.
As a follow-up to this conclusion, we can construct a confidence interval for the difference between population means. In this case we will construct a confidence interval for μ1 – μ2 the population mean “looks vs personality score” for females minus the population mean “looks vs personality score” for males.
• Using statistical software, we find that the 95% confidence interval for μ1 – μ2 is roughly (-3.7, -1.5).
• This is found in SPSS in the equal variances assumed row under 95% confidence interval columns given as -3.712 to -1.480 and in SAS in the POOLED ROW under 95% CL MEAN column given as -3.7118 to -1.4804 (be careful NOT to choose the confidence interval for the standard deviation in the last column, 95% CL Std Dev).
• Interpretation:
• We are 95% confident that the population mean “looks vs personality score” for females is between 3.7 and 1.5 points lower than that of males.
• OR
• We are 95% confident that the population mean “looks vs personality score” for males is between 3.7 and 1.5 points higher than that of females.
• The confidence interval therefore quantifies the effect that the explanatory variable (gender) has on the response (looks vs personality score).
• Since low values correspond to personality being more important and high values correspond to looks being more important, the result of our investigation suggests that, on average, females place personality higher than do males. Alternatively we could say that males place looks higher than do females.
• Note: The confidence interval does not contain zero (both values are negative based upon how we chose our groups) and thus using the confidence interval we can reject the null hypothesis here.
Practical Significance:
We should definitely ask ourselves if this is practically significant.
• Is a true difference in population means as represented by our estimate from this data meaningful here? I will let you consider and answer for yourself.
SPSS Output for this example (Non-Parametric Output for Examples 1 and 2)
SAS Output and SAS Code (Includes Non-Parametric Test)
Here is another example.
EXAMPLE: BMI vs. Gender in Heart Attack Patients
A study was conducted which enrolled and followed heart attack patients in a certain metropolitan area. In this example we are interested in determining if there is a relationship between Body Mass Index (BMI) and gender. Individuals presenting to the hospital with a heart attack were randomly selected to participate in the study.
Step 1: State the hypotheses
Ho: μ1 – μ2 = 0 (which is the same as μ1 = μ2)
Ha: μ1 – μ2 ≠ 0 (which is the same as μ1 ≠ μ2)
where μ1 represents the mean BMI for males and μ2 represents the mean BMI for females.
It is important to understand that conceptually, the two hypotheses claim:
Ho: BMI is not related to gender in heart attack patients
Ha: BMI is related to gender in heart attack patients
Step 2: Obtain data, check conditions, and summarize data
• Data: WHAS500 SPSS format, SAS format
• Let’s first check whether the conditions that allow us to safely use the two-sample t-test are met.
• Here, subjects were chosen and were naturally divided into a sample of females and a sample of males. Since the subjects were chosen at random, the sample of females is independent of the sample of males.
• Here, we are in the second scenario — the sample sizes are extremely large, and so we can proceed regardless of whether the populations are normal or not.
• In the output below we first look at the test for equality of variances (outlined in orange). The two-sample t-test results we will use are outlined in blue.
• There are TWO TESTS represented in this output and we must make the correct decision for BOTH of these tests to correctly proceed.
• SOFTWARE OUTPUT In SPSS:
• The p-value for the test of equality of variances is reported as 0.001 in the SIG column under Levene’s test for equality of variances.
• So we reject the null hypothesis that the variances, or equivalently the standard deviations, are equal (Ho: σ1 = σ2).
• Conclusion to test for equality of variances: We conclude there is enough evidence of a difference in the variance of BMI between males and females.
• This results in using the row for Equal variances NOT assumed to find the t-test results including the test statistic, p-value, and confidence interval for the difference. (Outlined in BLUE)
• SOFTWARE OUTPUT In SAS:
• The p-value for the test of equality of variances is reported as 0.0004 in the Pr > F column under equality of variances.
• So we reject the null hypothesis that the variances, or equivalently the standard deviations, are equal (Ho: σ1 = σ2).
• Conclusion to test for equality of variances: We conclude there is enough evidence of a difference in the variance of BMI between males and females.
• This results in using the row for SATTERTHWAITE method where UNEQUAL variances are assumed to find the t-test results including the test statistic, p-value, and confidence interval for the difference. (Outlined in BLUE)
• TEST STATISTIC for Two-Sample T-test: In all of the results above, we determine that we will use the test which assumes the variances are UNEQUAL, and we find our test statistic of t = 3.21.
Step 3: Find the p-value of the test by using the test statistic as follows
• We will let the software find the p-value for us, and in this case, the p-value is less than our significance level of 0.05.
• This is found in SPSS in the UNEQUAL variances assumed row under t-test in the SIG. (two-tailed) column given as 0.001 and in SAS in the SATTERTHWAITE ROW under Pr > |t| column given as 0.0015.
• This p-value means that it would be extremely rare to get data like that observed (or even more extreme) had the null hypothesis been true.
• More specifically, in our example, if there were no differences between females and males with respect to BMI, it would be highly unlikely (probability about 0.001) to get data where the difference between the sample mean BMIs of males and females is 1.64 or more extreme.
• Comment: Note that the output tells us that the difference μ1 – μ2 is approximately 1.64. But more importantly, we want to know if this difference is statistically significant. To answer this, we use the fact that this difference is 3.21 standard errors above the null value.
Step 4: Conclusion
As usual a small p-value provides evidence against Ho. In our case our p-value is 0.001 (which is smaller than any level of significance that we will choose). The data therefore provide very strong evidence against Ho so we reject it.
• Conclusion: There is enough evidence that the mean BMI of males differs from that of females. In other words, males and females differ with respect to BMI among heart attack patients.
As a follow-up to this conclusion, we can construct a confidence interval for the difference between population means. In this case we will construct a confidence interval for μ1 – μ2 the population mean BMI for males minus the population mean BMI for females.
• Using statistical software, we find that the 95% confidence interval for μ1 – μ2 is roughly (0.63, 2.64).
• This is found in SPSS in the UNEQUAL variances assumed row under 95% confidence interval columns and in SAS in the SATTERTHWAITE ROW under 95% CL MEAN column.
• Interpretation:
• We are 95% confident that the population mean BMI for males is between 0.63 and 2.64 units larger than that of females.
• OR
• We are 95% confident that the population mean BMI for females is between 0.63 and 2.64 units smaller than that of males.
• The confidence interval therefore quantifies the effect of the explanatory variable (gender) on the response (BMI). Notice that we cannot imply a causal effect of gender on BMI based upon this result alone as there could be many lurking variables, unaccounted for in this analysis, which might be partially or even completely responsible for this difference.
• Note: The confidence interval does not contain zero (both values are positive based upon how we chose our groups) and thus using the confidence interval we can reject the null hypothesis here.
Practical Significance:
• We should definitely ask ourselves if this is practically significant.
• Is a true difference in population means as represented by our estimate from these data meaningful here? Is a difference in BMI of between 0.63 and 2.64 of interest?
• I will let you consider and answer for yourself.
SPSS Output for this example (Non-Parametric Output for Examples 1 and 2)
SAS Output and SAS Code (Includes Non-Parametric Test)
Note: In the SAS output the variable gender is not formatted, in this case Males = 0 and Females = 1.
Comments:
You might ask yourself: “Where do we use the test statistic?”
It is true that for all practical purposes all we have to do is check that the conditions which allow us to use the two-sample t-test are met, lift the p-value from the output, and draw our conclusions accordingly.
However, we feel that it is important to mention the test statistic for two reasons:
• The test statistic is what’s behind the scenes; based on its null distribution and its value, the p-value is calculated.
• Apart from being the key for calculating the p-value, the test statistic is also itself a measure of the evidence stored in the data against Ho. As we mentioned, it measures (in standard errors) how far our data are from what is claimed in the null hypothesis.
Now try some more activities for yourself.
Did I Get This? Two-Sample T-test and Related Confidence Interval
(Non-Interactive Version – Spoiler Alert)
Non-Parametric Alternative: Wilcoxon Rank-Sum Test (Mann-Whitney U)
Learning Objectives
LO 5.1: For a data analysis situation involving two variables, determine the appropriate alternative (non-parametric) method when assumptions of our standard methods are not met.
We will look at one non-parametric test in the two-independent samples setting. More details will be discussed later (Details for Non-Parametric Alternatives).
• The Wilcoxon rank-sum test (Mann-Whitney U test) is a general test to compare two distributions in independent samples. It is a commonly used alternative to the two-sample t-test when the assumptions are not met.
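For reference, here is a minimal sketch of this test in Python with scipy, using two small made-up samples (our course relies on SAS and SPSS output instead).

```python
# A sketch of the Wilcoxon rank-sum / Mann-Whitney U test on two small
# made-up samples.
import numpy as np
from scipy import stats

group1 = np.array([3.1, 4.7, 2.9, 5.0, 3.8])   # hypothetical values
group2 = np.array([4.4, 5.9, 6.1, 5.2, 4.8])   # hypothetical values

u_stat, p_val = stats.mannwhitneyu(group1, group2, alternative="two-sided")
print(u_stat, p_val)
```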
k > 2 Independent Samples
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.35: For a data analysis situation involving two variables, choose the appropriate inferential method for examining the relationship between the variables and justify the choice.
Learning Objectives
LO 4.36: For a data analysis situation involving two variables, carry out the appropriate inferential method for examining relationships between the variables and draw the correct conclusions in context.
CO-5: Determine preferred methodological alternatives to commonly used statistical methods when assumptions are not met.
REVIEW: Unit 1 Case C-Q
Related SAS Tutorials
Related SPSS Tutorials
Introduction
In this part, we continue to handle situations involving one categorical explanatory variable and one quantitative response variable, which is case C→Q.
Here is a summary of the tests we have covered for the case where k = 2. Methods in BOLD are our main focus in this unit.
So far we have discussed the two samples and matched pairs designs, in which the categorical explanatory variable is two-valued. As we saw, in these cases, examining the relationship between the explanatory and the response variables amounts to comparing the mean of the response variable (Y) in two populations, which are defined by the two values of the explanatory variable (X). The difference between the two samples and matched pairs designs is that in the former, the two samples are independent, and in the latter, the samples are dependent.
Independent Samples
Standard Tests
• Two Sample T-Test Assuming Equal Variances
• Two Sample T-Test Assuming Unequal Variances
Non-Parametric Test
• Mann-Whitney U (or Wilcoxon Rank-Sum) Test
Dependent Samples (Less Emphasis)
Standard Test
• Paired T-Test
Non-Parametric Tests
• Sign Test
• Wilcoxon Signed-Rank Test
We now move on to the case where k > 2 when we have independent samples. Here is a summary of the tests we will learn for the case where k > 2. Notice we will not cover the dependent samples case in this course.
Independent Samples
Standard Test
• One-way ANOVA (Analysis of Variance)
Non-Parametric Test
• Kruskal–Wallis One-way ANOVA
Dependent Samples (Not Discussed)
Standard Test
• Repeated Measures ANOVA (or similar)
Here, as in the two-valued case, making inferences about the relationship between the explanatory (X) and the response (Y) variables amounts to comparing the means of the response variable in the populations defined by the values of the explanatory variable, where the number of means we are comparing depends, of course, on the number of values of X.
Unlike the two-valued case, where we looked at two sub-cases, (1) when the samples are independent (two samples design) and (2) when the samples are dependent (matched pairs design), here we are just going to discuss the case where the samples are independent. In other words, we are just going to extend the two samples design to more than two independent samples.
The inferential method for comparing more than two means that we will introduce in this part is called ANalysis Of VAriance (abbreviated as ANOVA), and the test associated with this method is called the ANOVA F-test.
In most software, the data need to be arranged so that each row contains one observation with one variable recording X and another variable recording Y for each observation.
Comparing Two or More Means – The ANOVA F-test
Learning Objectives
LO 4.38: In a given context, determine the appropriate standard method for comparing groups and provide the correct conclusions given the appropriate software output.
Learning Objectives
LO 4.39: In a given context, set up the appropriate null and alternative hypotheses for comparing groups.
As we mentioned earlier, the test that we will present is called the ANOVA F-test, and as you’ll see, this test is different in two ways from all the tests we have presented so far:
• Unlike the previous tests, where we had three possible alternative hypotheses to choose from (depending on the context of the problem), in the ANOVA F-test there is only one alternative, which actually makes life simpler.
• The test statistic will not have the same structure as the test statistics we’ve seen so far. In other words, it will not have the form:
$\text{test statistic} = \dfrac{\text{estimator - null value}}{\text{standard error of estimator}}$
but a different structure that captures the essence of the F-test, and clarifies where the name “analysis of variance” is coming from.
What is the idea behind comparing more than two means?
The question we need to answer is: Are the differences among the sample means due to true differences among the μ’s (alternative hypothesis), or merely due to sampling variability or random chance (null hypothesis)?
Here are two sets of boxplots representing two possible scenarios:
Scenario #1
• Because of the large amount of spread within the groups, this data shows boxplots with plenty of overlap.
• One could imagine the data arising from 4 random samples taken from 4 populations, all having the same mean of about 11 or 12.
• The first group of values may have been a bit on the low side, and the other three a bit on the high side, but such differences could conceivably have come about by chance.
• This would be the case if the null hypothesis, claiming equal population means, were true.
Scenario #2
• Because of the small amount of spread within the groups, this data shows boxplots with very little overlap.
• It would be very hard to believe that we are sampling from four groups that have equal population means.
• This would be the case if the null hypothesis, claiming equal population means, were false.
Thus, in the language of hypothesis tests, we would say that if the data were configured as they are in scenario 1, we would not reject the null hypothesis that population means were equal for the k groups.
If the data were configured as they are in scenario 2, we would reject the null hypothesis, and we would conclude that not all population means are the same for the k groups.
Let’s summarize what we learned from this.
• The question we need to answer is: Are the differences among the sample means due to true differences among the μ’s (alternative hypothesis), or merely due to sampling variability (null hypothesis)?
In order to answer this question using data, we need to look at the variation among the sample means, but this alone is not enough.
We need to look at the variation among the sample means relative to the variation within the groups. In other words, we need to look at the quantity:
$F=\dfrac{\text{variation among the sample means}}{\text{variation within the groups}}$
which measures to what extent the difference among the sample means for our groups dominates over the usual variation within sampled groups (which reflects differences in individuals that are typical in random samples).
When the variation within groups is large (like in scenario 1), the variation (differences) among the sample means may become negligible resulting in data which provide very little evidence against Ho. When the variation within groups is small (like in scenario 2), the variation among the sample means dominates over it, and the data have stronger evidence against Ho.
This F statistic has a different structure from all the test statistics we’ve looked at so far, but it is similar in that it is still a measure of the evidence against Ho. The larger F is (which happens when the denominator, the variation within groups, is small relative to the numerator, the variation among the sample means), the more evidence we have against Ho.
Looking at this ratio of variations is the idea behind comparing more than two means; hence the name analysis of variance (ANOVA).
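Here is a minimal sketch that illustrates this idea with simulated data: the four group means are the same in both scenarios, but shrinking the within-group spread turns a modest F into a very large one. The group means and spreads are arbitrary choices, not data from the course.

```python
# A sketch of the idea: same differences among group means, different
# within-group spread. All numbers are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
means = [10, 11, 12, 13]

# Scenario 1: large spread within groups -> boxplots overlap -> smaller F
wide = [rng.normal(m, 8.0, size=30) for m in means]
# Scenario 2: small spread within groups -> little overlap -> much larger F
narrow = [rng.normal(m, 0.5, size=30) for m in means]

print(stats.f_oneway(*wide))     # smaller F, larger p-value
print(stats.f_oneway(*narrow))   # much larger F, tiny p-value
```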
Now test your understanding of this idea.
Learn By Doing: Idea of One-Way ANOVA
(Non-Interactive Version – Spoiler Alert)
Comments
• The focus here is for you to understand the idea behind this test statistic, so we do not go into detail about how the two variations are measured. We instead rely on software output to obtain the F-statistic.
• This test is called the ANOVA F-test.
• So far, we have explained the ANOVA part of the name.
• Based on the previous tests we introduced, it should not be surprising that the “F-test” part comes from the fact that the null distribution of the test statistic, under which the p-values are calculated, is called an F-distribution.
• We will say very little about the F-distribution in this course, which will essentially be limited to this comment and the next one.
• It is fairly straightforward to decide if a z-statistic is large. Even without tables, we should realize by now that a z-statistic of 0.8 is not especially large, whereas a z-statistic of 2.5 is large.
• In the case of the t-statistic, it is less straightforward, because there is a different t-distribution for every sample size n (and degrees of freedom n − 1).
• However, the fact that a t-distribution with a large number of degrees of freedom is very close to the z (standard normal) distribution can help to assess the magnitude of the t-test statistic.
• When the size of the F-statistic must be assessed, the task is even more complicated, because there is a different F-distribution for every combination of the number of groups we are comparing and the total sample size.
• We will nevertheless say that for most situations, an F-statistic greater than 4 would be considered rather large, but tables or software are needed to get a truly accurate assessment.
Steps for One-Way ANOVA
Here is a full statement of the process for the ANOVA F-Test:
Step 1: State the hypotheses
The null hypothesis claims that there is no relationship between X and Y. Since the relationship is examined by comparing the means of Y in the populations defined by the values of X (μ1, μ2, …, μk), no relationship would mean that all the means are equal.
Therefore the null hypothesis of the F-test is:
• Ho: μ1 = μ2 = … = μk. (There is no relationship between X and Y.)
As we mentioned earlier, here we have just one alternative hypothesis, which claims that there is a relationship between X and Y. In terms of the means μ1, μ2, …, μk, it simply says the opposite of the null hypothesis, that not all the means are equal, and we simply write:
• Ha: not all μ’s are equal. (There is a relationship between X and Y.)
Learn By Doing: One-Way ANOVA – STEP 1
(Non-Interactive Version – Spoiler Alert)
Comments:
• The alternative of the ANOVA F-test simply states that not all of the means are equal, and is not specific about the way in which they are different.
• Another way to phrase the alternative is
• Ha: at least two μ’s are different
• Warning: It is incorrect to say that the alternative is μ1 ≠ μ2 ≠ … ≠ μk. This statement is MUCH stronger than our alternative hypothesis and says ALL means are different from ALL other means.
• Note that there are many ways for μ1, μ2, μ3, μ4 not to be all equal, and μ1 ≠ μ2 ≠ μ3 ≠ μ4 is just one of them. Another way could be μ1 = μ2 = μ3 ≠ μ4 or μ1 = μ2 ≠ μ3 = μ4.
Step 2: Obtain data, check conditions, and summarize data
The ANOVA F-test can be safely used as long as the following conditions are met:
• The samples drawn from each of the populations we’re comparing are independent.
• We are in one of the following two scenarios:
(i) Each of the populations are normal, or more specifically, the distribution of the response Y in each population is normal, and the samples are random (or at least can be considered as such). In practice, checking normality in the populations is done by looking at each of the samples using a histogram and checking whether there are any signs that the populations are not normal. Such signs could be extreme skewness and/or extreme outliers.
(ii) The populations are known or discovered not to be normal, but the sample size of each of the random samples is large enough (we can use the rule of thumb that a sample size greater than 30 is considered large enough).
• The populations all have the same standard deviation.
We can check this condition using the rule of thumb that the ratio between the largest sample standard deviation and the smallest is less than 2. If that is the case, this condition is considered to be satisfied.
We can also check this condition using a formal test similar to that used in the two-sample t-test, although we will not cover any formal tests here.
Learn By Doing: One-Way ANOVA – STEP 2
(Non-Interactive Version – Spoiler Alert)
Test Statistic
• If our conditions are satisfied, we compute the F test statistic, the ratio of the variation among the sample means to the variation within the groups.
• The statistic follows an F-distribution with k-1 numerator degrees of freedom and n-k denominator degrees of freedom.
• Where n is the total (combined) sample size and k is the number of groups being compared.
• We will rely on software to calculate the test statistic and p-value for us.
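For readers who want to see what “relying on software” might look like outside of SAS, SPSS, or Minitab, here is a minimal sketch of the ANOVA F-test in Python with scipy, using three small made-up groups.

```python
# A sketch of the ANOVA F-test on three small made-up groups.
import numpy as np
from scipy import stats

group_a = np.array([11, 13, 12, 15, 14])
group_b = np.array([18, 17, 20, 16, 19])
group_c = np.array([12, 14, 13, 15, 11])

f_stat, p_val = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
# Here k = 3 and n = 15, so the F-distribution has k - 1 = 2 numerator
# and n - k = 12 denominator degrees of freedom.
```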
Step 3: Find the p-value of the test by using the test statistic as follows
• The p-value of the ANOVA F-test is the probability of getting an F statistic as large as we obtained (or even larger), had Ho been true (all k population means are equal).
• In other words, it tells us how surprising it is to find data like those observed, assuming that there is no difference among the population means μ1, μ2, …, μk.
Step 4: Conclusion
As usual, we base our conclusion on the p-value.
• A small p-value tells us that our data contain a lot of evidence against Ho. More specifically, a small p-value tells us that the differences between the sample means are statistically significant (unlikely to have happened by chance), and therefore we reject Ho.
• Conclusion: There is enough evidence that the categorical explanatory variable is related to (or associated with) the quantitative response variable. More specifically, there is enough evidence that there are differences between at least two of the population means (there are some differences in the population means).
• If the p-value is not small, we do not have enough statistical evidence to reject Ho.
• Conclusion: There is NOT enough evidence that the categorical explanatory variable is related to (or associated with) the quantitative response variable. More specifically, there is NOT enough evidence that there are differences between at least two of the population means.
• A significance level (cut-off probability) of 0.05 can help determine what is considered a small p-value.
Final Comment
Note that when we reject Ho in the ANOVA F-test, all we can conclude is that
• not all the means are equal, or
• there are some differences between the means, or
• the response Y is related to explanatory X.
However, the ANOVA F-test does not provide any immediate insight into why Ho was rejected, or in other words, it does not tell us in what way the population means of the groups are different. As an exploratory (or visual) aid to get that insight, we may take a look at the confidence intervals for group population means. More specifically, we can look at which of the confidence intervals overlap and which do not.
Multiple Comparisons:
• When we compare standard 95% confidence intervals in this way, we have an increased chance of making a type I error as each interval has a 5% error individually.
• There are many multiple comparison procedures all of which propose alternative methods for determining which pairs of means are different.
• We will look at a few of these in the software just to show you a little about this topic but we will not cover this officially in this course.
• The goal is to provide an overall type I error rate no larger than 5% for all comparisons made.
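To give a flavor of the idea, here is a minimal sketch of one very simple adjustment, the Bonferroni correction, applied to all pairwise two-sample t-tests among three made-up groups. This is just one of many procedures and is not necessarily the one your software reports.

```python
# A sketch of a Bonferroni adjustment for all pairwise two-sample t-tests
# among three made-up groups.
from itertools import combinations
import numpy as np
from scipy import stats

groups = {
    "A": np.array([11, 13, 12, 15, 14, 12]),
    "B": np.array([18, 17, 20, 16, 19, 18]),
    "C": np.array([12, 14, 13, 15, 11, 13]),
}

pairs = list(combinations(groups, 2))
for name1, name2 in pairs:
    t, p = stats.ttest_ind(groups[name1], groups[name2])
    adj_p = min(p * len(pairs), 1.0)   # Bonferroni: multiply each p-value by the number of comparisons
    print(f"{name1} vs {name2}: raw p = {p:.4f}, adjusted p = {adj_p:.4f}")
```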
Now let’s look at some examples using real data.
EXAMPLE: Is "academic frustration" related to major?
A college dean believes that students with different majors may experience different levels of academic frustration. Random samples of size 35 of Business, English, Mathematics, and Psychology majors are asked to rate their level of academic frustration on a scale of 1 (lowest) to 20 (highest).
The figure highlights what we have already mentioned: examining the relationship between major (X) and frustration level (Y) amounts to comparing the mean frustration levels among the four majors defined by X. Also, the figure reminds us that we are dealing with a case where the samples are independent.
Step 1: State the hypotheses
The correct hypotheses are:
• Ho: μ1 = μ2 = μ3 = μ4.
(There is NO relationship between major and academic frustration level.)
• Ha: not all μ’s are equal.
(There IS a relationship between major and academic frustration level.)
Step 2: Obtain data, check conditions, and summarize data
In our example all the conditions are satisfied:
• All the samples were chosen randomly, and are therefore independent.
• The sample sizes are large enough (n = 35) that we really don’t have to worry about the normality; however, let’s look at the data using side-by-side boxplots, just to get a sense of it:
• The data suggest that the frustration level of the business students is generally lower than students from the other three majors. The ANOVA F-test will tell us whether these differences are significant.
The rule of thumb is satisfied since 3.082/2.088 ≈ 1.48 < 2. We will look at the formal test in the software.
Test statistic: (Minitab output)
• The parts of the output that we will focus on here have been highlighted. In particular, note that the F-statistic is 46.60, which is very large, indicating that the data provide a lot of evidence against Ho (we can also see that the p-value is so small that it is reported to be 0, which supports that conclusion as well).
Step 3: Find the p-value of the test by using the test statistic as follows
• As we already noticed before, the p-value in our example is so small that it is reported to be 0.000, telling us that it would be next to impossible to get data like those observed had the mean frustration level of the four majors been the same (as the null hypothesis claims).
Step 4: Conclusion
• In our example, the p-value is extremely small – close to 0 – indicating that our data provide extremely strong evidence to reject Ho.
• Conclusion: There is enough evidence that the population mean frustration levels of the four majors are not all the same, or in other words, that majors do have an effect on students’ academic frustration levels at the school where the test was conducted.
As a follow-up, we can construct confidence intervals (or conduct multiple comparisons as we will do in the software). This allows us to understand better which population means are likely to be different.
In this case, the business majors are clearly lower on the frustration scale than other majors. It is also possible that English majors are lower than psychology majors based upon the individual 95% confidence intervals in each group.
SPSS Output
SAS Output and SAS Code (Includes Non-Parametric Test)
Here is another example
EXAMPLE: Reading Level in Advertising
Do advertisers alter the reading level of their ads based on the target audience of the magazine they advertise in?
In 1981, a study of magazine advertisements was conducted (F.K. Shuptrine and D.D. McVicker, “Readability Levels of Magazine Ads,” Journal of Advertising Research, 21:5, October 1981). Researchers selected random samples of advertisements from each of three groups of magazines:
• Group 1—highest educational level magazines (such as Scientific American, Fortune, The New Yorker)
• Group 2—middle educational level magazines (such as Sports Illustrated, Newsweek, People)
• Group 3—lowest educational level magazines (such as National Enquirer, Grit, True Confessions)
The measure that the researchers used to assess the level of the ads was the number of words in the ad. 18 ads were randomly selected from each of the magazine groups, and the number of words per ad was recorded.
The following figure summarizes this problem:
Our question of interest is whether the number of words in ads (Y) is related to the educational level of the magazine (X). To answer this question, we need to compare μ1, μ2, and μ3, the mean number of words in ads of the three magazine groups. Note in the figure that the sample means are provided. It seems that what the data suggest makes sense; the magazines in group 1 have the largest number of words per ad (on average) followed by group 2, and then group 3.
The question is whether these differences between the sample means are significant. In other words, are the differences among the observed sample means due to true differences among the μ’s or merely due to sampling variability? To answer this question, we need to carry out the ANOVA F-test.
Step 1: Stating the hypotheses.
We are testing:
• Ho: μ1 = μ2 = μ3 .
(There is NO relationship between educational level and number of words in ads.)
• Ha: not all μ’s are equal.
(There IS a relationship between educational level and number of words in ads.)
Conceptually, the null hypothesis claims that the number of words in ads is not related to the educational level of the magazine, and the alternative hypothesis claims that there is a relationship.
Step 2: Checking conditions and summarizing the data.
• (i) The ads were selected at random from each magazine group, so the three samples are independent.
In order to check the next two conditions, we’ll need to look at the data (condition ii), and calculate the sample standard deviations of the three samples (condition iii).
• Here are the side-by-side boxplots of the data:
• And the standard deviations:
• Group 1 StDev: 74.0
• Group 2 StDev: 64.3
• Group 3 StDev: 57.6
Using the above, we can address conditions (ii) and (iii)
• (ii) The graph does not display any alarming violations of the normality assumption. It seems like there is some skewness in groups 2 and 3, but not extremely so, and there are no outliers in the data.
• (iii) We can assume that the equal standard deviation assumption is met since the rule of thumb is satisfied: the largest sample standard deviation of the three is 74 (group 1), the smallest one is 57.6 (group 3), and 74/57.6 < 2.
Before we move on, let’s look again at the graph. It is easy to see the trend of the sample means (indicated by red circles).
However, there is so much variation within each of the groups that there is almost a complete overlap between the three boxplots, and the differences between the means are over-shadowed and seem like something that could have happened just by chance.
Let’s move on and see whether the ANOVA F-test will support this observation.
• Test Statistic: Using statistical software to conduct the ANOVA F-test, we find that the F statistic is 1.18, which is not very large. We also find that the p-value is 0.317.
Step 3. Finding the p-value.
• The p-value is 0.317, which tells us that getting data like those observed is not very surprising assuming that there are no differences between the three magazine groups with respect to the mean number of words in ads (which is what Ho claims).
• In other words, the large p-value tells us that it is quite reasonable that the differences between the observed sample means could have happened just by chance (i.e., due to sampling variability) and not because of true differences between the means.
Step 4: Making conclusions in context.
• The large p-value indicates that the results are not statistically significant, and that we cannot reject Ho.
• Conclusion: The study does not provide evidence that the mean number of words in ads is related to the educational level of the magazine. In other words, the study does not provide evidence that advertisers alter the reading level of their ads (as measured by the number of words) based on the educational level of the target audience of the magazine.
Now try one for yourself.
Learn By Doing: One-Way ANOVA – Flicker Frequency
(Non-Interactive Version – Spoiler Alert)
Confidence Intervals
The ANOVA F-test does not provide any insight into why H0 was rejected; it does not tell us in what way μ1, μ2, μ3, …, μk are not all equal. We would like to know which pairs of μ’s are not equal. As an exploratory (or visual) aid to get that insight, we may take a look at the confidence intervals for the group population means μ1, μ2, μ3, …, μk that appear in the output. More specifically, we should look at the position of the confidence intervals and the overlap (or lack of overlap) between them.
* If the confidence interval for, say, μi overlaps with the confidence interval for μj, then μi and μj share some plausible values, which means that based on the data we have no evidence that these two μ’s are different.
* If the confidence interval for μi does not overlap with the confidence interval for μj, then μi and μj do not share plausible values, which means that the data suggest that these two μ’s are different.
Furthermore, if, as in the figure above, the confidence interval (set of plausible values) for μi lies entirely below the confidence interval (set of plausible values) for μj, then the data suggest that μi is smaller than μj.
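To make the overlap comparison concrete, here is a rough sketch (Python, with made-up group summaries; in this course the intervals are simply read from the software output) that computes a standard 95% t confidence interval for each group mean and checks whether two of them overlap:

```python
from scipy import stats

# Made-up group summaries: (sample mean, sample SD, sample size) for each group
groups = {"Group 1": (7.3, 3.1, 35), "Group 2": (11.8, 2.9, 35), "Group 3": (13.2, 3.0, 35)}

def ci_95(mean, sd, n):
    # Standard 95% t confidence interval for a single population mean
    margin = stats.t.ppf(0.975, df=n - 1) * sd / n ** 0.5
    return (mean - margin, mean + margin)

intervals = {name: ci_95(*summary) for name, summary in groups.items()}
for name, (lo, hi) in intervals.items():
    print(f"{name}: ({lo:.2f}, {hi:.2f})")

# Two intervals overlap if each one starts before the other one ends
(lo1, hi1), (lo2, hi2) = intervals["Group 1"], intervals["Group 2"]
print("Groups 1 and 2 overlap" if lo1 <= hi2 and lo2 <= hi1 else "No overlap: these means look different")
```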
EXAMPLE
Consider our first example on the level of academic frustration.
Based on the small p-value, we rejected Ho and concluded that not all four frustration level means are equal, or in other words that frustration level is related to the student’s major. To get more insight into that relationship, we can look at the confidence intervals above (marked in red). The top confidence interval is the set of plausible values for μ1, the mean frustration level of business students. The confidence interval below it is the set of plausible values for μ2, the mean frustration level of English students, etc.
What we see is that the business confidence interval is way below the other three (it doesn’t overlap with any of them). The math confidence interval overlaps with both the English and the psychology confidence intervals; however, there is no overlap between the English and psychology confidence intervals.
This gives us the impression that the mean frustration level of business students is lower than the mean in the other three majors. Within the other three majors, we get the impression that the mean frustration of math students may not differ much from the mean of both English and psychology students, however the mean frustration of English students may be lower than the mean of psychology students.
Note that this is only an exploratory/visual way of getting an impression of why Ho was rejected, not a formal one. There is a formal way of doing it that is called “multiple comparisons,” which is beyond the scope of this course. An extension to this course will include this topic in the future.
Non-Parametric Alternative: Kruskal-Wallis Test
Learning Objectives
LO 5.1: For a data analysis situation involving two variables, determine the appropriate alternative (non-parametric) method when assumptions of our standard methods are not met.
We will look at one non-parametric test in the k > 2 independent sample setting. We will cover more details later (Details for Non-Parametric Alternatives).
The Kruskal-Wallis test is a general test to compare multiple distributions in independent samples and is a common alternative to the one-way ANOVA.
Details for Non-Parametric Alternatives in Case C-Q
Caution
As we mentioned at the end of the Introduction to Unit 4B, we will focus only on two-sided tests for the remainder of this course. One-sided tests are often possible but rarely used in clinical research.
CO-5: Determine preferred methodological alternatives to commonly used statistical methods when assumptions are not met.
Video
Video: Details for Non-Parametric Alternatives (17:38)
Related SAS Tutorials
Related SPSS Tutorials
We mentioned some non-parametric alternatives to the paired t-test, two-sample t-test for independent samples, and the one-way ANOVA.
Here we provide more details and resources for these tests for those of you who wish to conduct them in practice.
Non-Parametric Tests
The statistical tests we have previously discussed require assumptions about the distribution in the population or about the requirements to use a certain approximation as the sampling distribution. These methods are called parametric.
When these assumptions are not valid, alternative methods often exist to test similar hypotheses. Tests which require only minimal distributional assumptions, if any, are called non-parametric or distribution-free tests.
In some cases, these tests may be called exact tests due to the fact that their methods of calculating p-values or confidence intervals require no mathematical approximation (a foundation of many statistical methods).
However, note that when the assumptions are precisely satisfied, some “parametric” tests can also be considered “exact.”
Case CQ – Matched Pairs
We will look at two non-parametric tests in the paired sample setting.
The Sign Test
The sign test is a very general test used to compare paired samples. It can be used instead of the paired t-test when the assumptions are not met, although the next test we discuss is usually a better option in that case, as we will see. However, the sign test does have some advantages and is worth understanding.
• The idea behind the test is to find the sign of the differences (positive or negative) and use this information to determine if the medians between the two groups are the same.
• If the two paired measurements came from populations with equal medians, we would expect half of the differences to be positive and half to be negative. Thus the sampling distribution of our statistic is simply a binomial with p = 0.5, as the short sketch below illustrates.
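As an illustration of that idea (not a method you are asked to carry out), here is a minimal Python sketch with made-up before/after pairs; it counts the positive differences, discards ties, and gets a p-value from the binomial distribution with p = 0.5 (using scipy's binomtest, available in recent scipy versions):

```python
from scipy import stats

# Made-up paired measurements (before vs. after) for 10 subjects
before = [140, 132, 155, 128, 150, 147, 138, 160, 142, 135]
after  = [135, 134, 148, 128, 144, 140, 139, 151, 138, 130]

diffs = [a - b for a, b in zip(after, before)]
nonzero = [d for d in diffs if d != 0]          # ties (zero differences) are discarded
n_positive = sum(d > 0 for d in nonzero)

# Under Ho (equal medians) the count of positive differences is Binomial(n, 0.5)
result = stats.binomtest(n_positive, n=len(nonzero), p=0.5, alternative="two-sided")
print(f"positive differences: {n_positive} of {len(nonzero)}, p-value = {result.pvalue:.3f}")
```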
Outline of Procedure for the SIGN TEST
• Step 1: State the hypotheses
The hypotheses are:
Ho: the medians are equal
Ha: the medians are not equal (one-sided tests are possible)
• Step 2: Obtain data, check conditions, and summarize data
We require a random sample (or at least can be considered random in context).
The sign test can be used for any data for which the sign of the difference can be obtained. Thus, it can be used for:
quantitative measures (continuous or discrete)
Examples: Systolic Blood Pressure, Number of Drinks
(categorical) ordinal measures
Examples: Rating scales, Letter Grades
(categorical) binary measures where we can only tell whether one pair is “larger” or “smaller” compared to the other pair
Examples: Is the left arm more or less sunburned than the right arm?, Was there an improvement in pain after treatment?
For this reason, this test is very widely applicable!
The data are summarized by a test statistic which counts the number of positive (or negative) differences. Any ties (zero differences) are discarded.
• Step 3: Find the p-value of the test by using the test statistic as follows
The p-values are calculated using the binomial distribution (or a normal approximation for large samples). We will rely on software to obtain the p-value for this test.
• Step 4: Conclusion
The decision is made in the same manner as other tests.
We can word our conclusion in terms of the medians in the two populations or in terms of the relationship between the categorical explanatory variable (X) and the response variable (Y).
OPTIONAL: For more details visit The Sign Test in Penn State’s online content for STAT 415.
The Wilcoxon Signed-Rank Test:
The Wilcoxon signed-rank Test is a general test to compare distributions in paired samples. This test is usually the preferred alternative to the Paired t-test when the assumptions are not satisfied.
The idea behind the test is to determine if the two populations seem to be the same or different based upon the ranks of the absolute differences (instead of the magnitude of the differences). Ranking procedures are commonly used in non-parametric methods as this moderates the effect of any outliers.
We have one assumption for this test. We assume the distribution of the differences is symmetric.
Under this assumption, if the two paired measurements came from the populations with equal means/medians, we would expect the two sets of ranks (those for positive differences and those for negative differences) to be distributed similarly. If there is a large difference here, this gives evidence of a true difference.
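For those curious, here is a minimal sketch (Python with made-up paired data, purely illustrative since the course relies on SAS/SPSS) of how software carries out this test:

```python
from scipy import stats

# Made-up paired measurements (before vs. after) for 10 subjects
before = [140, 132, 155, 128, 150, 147, 138, 160, 142, 135]
after  = [135, 134, 148, 126, 144, 140, 139, 151, 138, 130]

# Wilcoxon signed-rank test: ranks the absolute differences, then compares
# the ranks coming from positive vs. negative differences
stat, p_value = stats.wilcoxon(after, before)
print(f"W = {stat:.1f}, p-value = {p_value:.3f}")
```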
Outline of Procedure for the Wilcoxon Signed-Rank Test
• Step 1: State the hypotheses
The hypotheses are:
Ho: the means/medians are equal
Ha: the means/medians are not equal (one-sided tests are possible)
• Step 2: Obtain data, check conditions, and summarize data
We have a random sample and we assume the distribution of the differences is symmetric so we should check to be sure that there is no clear skewness to the distribution of the differences.
The Wilcoxon signed-Rank test can be used for quantitative or ordinal data (but not binary as for the sign test).
The data are summarized by a test statistic which counts the sum of the positive (or negative) ranks. Any zero differences are discarded.
To rank the pairs, we find the differences (much as we did in the paired t-test), take the absolute value of these differences and rank the pairs from 1 = smallest non-zero difference to m = largest non-zero difference, where m = number of non-zero pairs.
Then we determine which ranks came from positive (or negative) differences and find the sum of these ranks.
You will not be conducting this test by hand. We simply wish to explain some of the logic behind the scenes for these tests.
• Step 3: Find the p-value of the test by using the test statistic as follows
The p-values are calculated using a distribution specific to this test. We will rely on software to obtain the p-value for this test.
• Step 4: Conclusion
The decision is made in the same manner as other tests. We can word our conclusion in terms of the means or medians in the two populations or in terms of the existence or non-existence of a relationship between the categorical explanatory variable (X) and the response variable (Y).
OPTIONAL: For more details on these tests visit The Wilcoxon Signed Rank Test in Penn State’s online content for STAT 415.
Comments:
• The sign test tends to have much lower power than the paired t-test or the Wilcoxon signed-Rank test. In other words, the sign test has less chance of being able to detect a true difference than the other tests. It is, however, applicable in the case where we only know “better” or “worse” for each pair, where the other two methods are not.
• The Wilcoxon signed-rank test is comparable to the paired t-test in power and can even perform better than the paired t-test under certain conditions. In particular, this can occur when there are a few very large outliers as these outliers can greatly affect our estimate of the standard error in the paired t-test since it is based upon the sample standard deviation which is highly affected by such outliers.
• Both the sign Test and the Wilcoxon signed-rank test can also be used for one sample. In that case, you must specify the null value and calculate differences between the observed value and the null value (instead of the difference between two pairs).
Case CQ – Two Independent Samples – Wilcoxon Rank-Sum Test (Mann-Whitney U):
We will look at one non-parametric test in the two-independent samples setting.
The Wilcoxon rank-sum test (Mann-Whitney U test) is a general test to compare two distributions in independent samples. It is a commonly used alternative to the two-sample t-test when the assumptions are not met.
The idea behind the test is to determine if the two populations seem to be the same or different based upon the ranks of the values instead of the magnitude. Ranking procedures are commonly used in non-parametric methods as this moderates the effect of any outliers.
There are many ways to formulate this test. For our purposes, we will assume the quantitative variable (Y) is a continuous random variable (or can be treated as continuous, such as for very large counts) and that we are interested in testing whether there is a “shift” in the distribution. In other words, we assume that the distribution is the same except that in one group the distribution is higher (or lower) than in the other.
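A minimal sketch (Python with made-up samples, purely illustrative) of how software carries out this test:

```python
from scipy import stats

# Made-up independent samples from two groups (illustrative only)
group1 = [23, 31, 28, 35, 27, 30, 29, 33]
group2 = [19, 25, 22, 27, 24, 21, 26, 20]

# Wilcoxon rank-sum / Mann-Whitney U: combines and ranks all observations,
# then compares the sum of ranks coming from each sample
u_stat, p_value = stats.mannwhitneyu(group1, group2, alternative="two-sided")
print(f"U = {u_stat:.1f}, p-value = {p_value:.4f}")
```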
• Step 1: State the hypotheses
We assume the distributions of the two populations are the same except for a horizontal shift in location.
The hypotheses are:
Ho: the medians are equal
Ha: the medians are not equal (one-sided tests are possible)
• Step 2: Obtain data, check conditions, and summarize data
(i) We have two independent random samples. All observations in each sample must be independent of all other observations.
(ii) The version of the Wilcoxon rank-sum test (Mann-Whitney U test) we are using assumes that the quantitative response variable is a continuous random variable.
(iii) We assume there is only a location shift so we should check that the two distributions are similar except possibly for their locations.
(iv) The data are summarized by a test statistic which counts the sum of the sample 1 (or sample 2) ranks.
To rank the observations, we combine all observations in both samples and rank from smallest to largest.
Then we determine which ranks came from sample 1 (or sample 2) and find the sum of these ranks.
You will not be conducting this test by hand. We simply wish to explain some of the logic behind the scenes for these tests.
• Step 3: Find the p-value of the test by using the test statistic as follows
The p-values are calculated using a distribution specific to this test. We will rely on software to obtain the p-value for this test.
• Step 4: Conclusion
The decision is made in the same manner as other tests. We can word our conclusion in terms of the medians in the two populations or in terms of the existence or non-existence of a relationship between the categorical explanatory variable (X) and the response variable (Y).
OPTIONAL: For more details on this test visit The Wilcoxon Rank-Sum Test from Boston University School of Public Health
Case CQ – K > 2 – The Kruskal-Wallis Test
We will look at one non-parametric test in the k > 2 independent sample setting.
The Kruskal-Wallis test is a general test to compare multiple distributions in independent samples.
The idea behind the test is to determine if the k populations seem to be the same or different based upon the ranks of the values instead of the magnitude. Ranking procedures are commonly used in non-parametric methods as this moderates the effect of any outliers.
The test assumes identically-shaped and scaled distributions for each group, except for any difference in medians.
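A minimal sketch (Python with made-up samples, purely illustrative) of how software carries out this test:

```python
from scipy import stats

# Made-up independent samples from three groups (illustrative only)
group1 = [23, 31, 28, 35, 27, 30]
group2 = [19, 25, 22, 27, 24, 21]
group3 = [29, 33, 26, 31, 28, 34]

# Kruskal-Wallis: ranks all observations together and compares the
# average ranks across the k groups
h_stat, p_value = stats.kruskal(group1, group2, group3)
print(f"H = {h_stat:.2f}, p-value = {p_value:.4f}")
```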
Step 1: State the hypotheses
The hypotheses are:
• Ho: the medians of all groups are equal
• Ha: the medians are not all equal
Step 2: Obtain data, check conditions, and summarize data
(i) We have independent random samples from our k populations. All observations in each sample must be independent of all other observations.
(ii) We have an ordinal, discrete, or continuous response variable Y.
(iii) We assume there is only a location shift so we should check that the distributions are similar except possibly for their locations.
(iv) The data are summarized by a test statistic which involves the ranks of observations in each group.
To rank the observations, we combine all observations in all samples and rank from smallest to largest.
Then we determine which ranks came from which sample and use these to obtain the test statistic.
Step 3: Find the p-value of the test by using the test statistic as follows
The p-values are calculated using a distribution specific to this test. We will rely on software to obtain the p-value for this test.
Step 4: Conclusion
The decision is made in the same manner as other tests. We can word our conclusion in terms of the medians in the k populations or in terms of the existence or non-existence of a relationship between the categorical explanatory variable (X) and the response variable (Y).
OPTIONAL: For more details on this test visit The Kruskal-Wallis Test from Boston University School of Public Health
Let’s Summarize
• We have presented the basic idea for the non-parametric alternatives for Case C-Q.
• The sign test and the Wilcoxon signed-rank test are possible alternatives to the paired t-test in the case of two dependent samples.
• The Wilcoxon rank-sum test (also known as the Mann-Whitney U test) is a possible alternative to the two-sample t-test in the case of two independent samples.
• The Kruskal-Wallis test is a possible alternative to the one-way ANOVA in the case of more than two independent samples.
• In this course, we simply want you to be aware of which non-parametric alternatives are commonly used to address issues with the assumptions.
• We are not asking you to conduct these tests but we do still provide information for those interested in being able to conduct these tests in practice.
Wrap-Up (Case C-Q)
Caution
As we mentioned at the end of the Introduction to Unit 4B, we will focus only on two-sided tests for the remainder of this course. One-sided tests are often possible but rarely used in clinical research.
We are now done with case C→Q.
• We learned that this case is further classified into sub-cases, depending on the number of groups that we are comparing (i.e., the number of categories that the explanatory variable has), and the design of the study (independent vs. dependent samples).
• For each of the three sub-cases that we covered, we learned the appropriate inferential method, and emphasized the idea behind the method, the conditions under which it can be safely used, how to carry it out using software, and the interpretation of the results.
• We also learned which non-parametric tests are applicable and under what circumstances they might be used instead of the standard methods.
The following table summarizes when each of the three standard tests covered in this module is used:
The following summary discusses each of the above named sub-cases of C→Q within the context of the hypothesis testing process.
Step 1: Stating the null and alternative hypotheses (H0 and Ha)
• Although the one-sided alternatives are provided here where possible, remember that we will focus only on two-sided tests supplemented by confidence intervals for methods in Unit 4B.
• For the Two-Sample t-test: H0: μ1 − μ2 = 0 (same as μ1 = μ2), and one of Ha: μ1 − μ2 < 0 (μ1 < μ2), Ha: μ1 − μ2 > 0 (μ1 > μ2), or Ha: μ1 − μ2 ≠ 0 (μ1 ≠ μ2).
• For the Paired t-test: H0: μd = 0, and one of Ha: μd < 0, Ha: μd > 0, or Ha: μd ≠ 0.
• For ANOVA: H0: μ1 = μ2 = … = μk, and Ha: not all μ’s are equal.
Step 2: Check Conditions and Summarize the Data Using a Test Statistic
We need to check that the conditions under which the test can be reliably used are met.
For the Paired t-test (as a special case of a one-sample t-test), the conditions are:
• The sample of differences is random (or at least can be considered so in context).
• We are in one of the three situations marked with a green check mark in the following table:
For the Two-Sample t-test, the conditions are:
• Two samples are independent and random
• One of the following two scenarios holds:
• Both populations are normal
• Populations are not normal, but large sample size (>30)
For an ANOVA, the conditions are:
• The samples drawn from each of the populations being compared are independent.
• The response variable varies normally within each of the populations being compared. As is often the case, we do not have to worry about this assumption for large sample sizes.
• The populations all have the same standard deviation.
Now we summarize the data using a test statistic.
• Although we will not be calculating these test statistics by hand, we will review the formulas for each test statistic here.
For the Paired t-test the test statistic is:
$t=\dfrac{\bar{y}_{d}-0}{s_{d} / \sqrt{n}}$
For the Two-Sample t-test assuming equal variances the test statistic is:
$t=\dfrac{\bar{y}_{1}-\bar{y}_{2}-0}{s_{p} \sqrt{\frac{1}{n_{1}}+\frac{1}{n_{2}}}}$
where
$s_{p}=\sqrt{\dfrac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{n_{1}+n_{2}-2}}$
For the Two-Sample t-test assuming unequal variances the test statistic is:
$t=\dfrac{\bar{y}_{1}-\bar{y}_{2}-0}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}$
For an ANOVA, the test statistic is the F statistic: the ratio of the between-group variation (variation among the sample means) to the within-group variation, which we read from the ANOVA table in the software output. The short sketch below illustrates how each of these test statistics can be obtained with statistical software.
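The following hedged sketch (Python with made-up data; in this course the same quantities come from SAS/SPSS output) shows how each test statistic can be obtained:

```python
from scipy import stats

# Made-up data (illustrative only)
before = [140, 132, 155, 128, 150, 147, 138, 160]
after  = [135, 134, 148, 126, 144, 140, 139, 151]
group1 = [23, 31, 28, 35, 27, 30]
group2 = [19, 25, 22, 27, 24, 21]
group3 = [29, 33, 26, 31, 28, 34]

# Paired t-test (a one-sample t-test on the differences)
t_paired, p_paired = stats.ttest_rel(after, before)

# Two-sample t-tests: pooled (equal variances) and Welch (unequal variances)
t_pooled, p_pooled = stats.ttest_ind(group1, group2, equal_var=True)
t_welch, p_welch = stats.ttest_ind(group1, group2, equal_var=False)

# One-way ANOVA F statistic
f_stat, p_anova = stats.f_oneway(group1, group2, group3)

print(f"paired t = {t_paired:.2f}, pooled t = {t_pooled:.2f}, "
      f"Welch t = {t_welch:.2f}, F = {f_stat:.2f}")
```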
Step 3: Finding the p-value of the test
Use statistical software to determine the p-value.
• The p-value is the probability of getting data like those observed (or even more extreme) assuming that the null hypothesis is true, and is calculated using the null distribution of the test statistic.
• The p-value is a measure of the evidence against H0.
• The smaller the p-value, the more evidence the data present against H0.
The p-values for three C→Q tests are obtained from the output.
Step 4: Making conclusions
Conclusions about the significance of the results:
• If the p-value is small, the data present enough evidence to reject Ho (and accept Ha).
• If the p-value is not small, the data do not provide enough evidence to reject H0.
• To help guide our decision, we use the significance level as a cutoff for what is considered a small p-value. The significance cutoff is usually set at .05, but should not be considered inviolable.
Conclusions should always be stated in the context of the problem and can all be written in the basic form below:
• There (IS or IS NOT) enough evidence that there is an association between (X) and (Y). Where X and Y should be given in context.
Following the test…
• For a paired t-test, a 95% confidence interval for μd can be very insightful after a test has rejected the null hypothesis, and can also be used for testing in the two-sided case.
• For a two-sample t-test, a 95% confidence interval for μ1−μ2 can be very insightful after a test has rejected the null hypothesis, and can also be used for testing in the two-sided case.
• If the ANOVA F-test has rejected the null hypothesis, looking at the confidence intervals for the population means that are in the output can provide visual insight into why the H0 was rejected (i.e., which of the means differ).
Non-parametric Alternatives
• For a Paired t-test we might investigate using the Wilcoxon Signed-Rank test or the Sign test.
• For a Two-Sample t-test we might investigate using the Wilcoxon Rank-Sum test (Mann-Whitney U test).
• For an ANOVA we might investigate using the Kruskal-Wallis test.
CO-4: Distinguish among different measurement scales, choose the appropriate descriptive and inferential statistical methods based on these distinctions, and interpret the results.
Learning Objectives
LO 4.35: For a data analysis situation involving two variables, choose the appropriate inferential method for examining the relationship between the variables and justify the choice.
Learning Objectives
LO 4.36: For a data analysis situation involving two variables, carry out the appropriate inferential method for examining relationships between the variables and draw the correct conclusions in context.
CO-5: Determine preferred methodological alternatives to commonly used statistical methods when assumptions are not met.
Review: From UNIT 1
Video
Video: Case Q→Q (60:27)
Related SAS Tutorials
Related SPSS Tutorials
Introduction
In inference for relationships, so far we have learned inference procedures for both cases C→Q and C→C from the role/type classification table below.
The last case to be considered in this course is case Q→Q, where both the explanatory and response variables are quantitative. (Case Q→C requires statistical methods that go beyond the scope of this course, one of which is logistic regression).
For case Q→Q, we will learn the following tests:
Independent Samples
Standard Test(s)
• Test for Significance of Pearson’s Correlation Coefficient
• Test for Significance of the Slope in Linear Regression
Non-Parametric Test(s)
• Test for Significance of Spearman’s Rank Correlation
Dependent Samples
• Not Covered (Longitudinal Data Analysis, etc.)
In the Exploratory Data Analysis section, we examined the relationship between sample values for two quantitative variables by looking at a scatterplot and if the relationship was linear, we supplemented the scatterplot with the correlation coefficient r and the linear regression equation. We discussed the regression equation but made no attempt to claim that the relationship observed in the sample necessarily held for the larger population from which the sample originated.
Now that we have a better understanding of the process of statistical inference, we will discuss a few methods for inferring something about the relationship between two quantitative variables in an entire population, based on the relationship seen in the sample.
In particular, we will focus on linear relationships and will answer the following questions:
• Is the correlation coefficient different from zero in the population, or could it be that we obtained the result in the data just by chance?
• Is the slope different from zero in the population, or could it be that we obtained the result in the data just by chance?
If we satisfy the assumptions and conditions to use the methods, we can estimate the slope and correlation coefficient for our population and conduct hypothesis tests about these parameters.
For the standard tests, the tests for the slope and the correlation coefficient are equivalent; they will always produce the same p-value and conclusion. This is because they are directly related to each other.
In this section, we can state our null and alternative hypotheses as:
Ho: There is no relationship between the two quantitative variables X and Y.
Ha: There is a relationship between the two quantitative variables X and Y.
Pearson’s Correlation Coefficient
Learning Objectives
LO 4.45: In a given context, set up the appropriate null and alternative hypotheses for examining the relationship between two quantitative variables.
Learning Objectives
LO 4.46: In a given context, determine the appropriate standard method for examining the relationship between two quantitative variables and interpret the results provided in the appropriate software output in context.
What we know from Unit 1:
• r only measures the LINEAR association between two quantitative variables X and Y
• -1 ≤ r ≤ 1
• If the relationship is linear then:
r = 0 implies no relationship between X and Y (note this is our null hypothesis!!)
r > 0 implies a positive relationship between X and Y (as X increases, Y also increases)
r < 0 implies a negative relationship between X and Y (as X increases, Y decreases)
Now here are the steps for hypothesis testing for Pearson’s Correlation Coefficient:
Step 1: State the hypotheses
Consider the information above and our general null hypothesis,
Ho: There is no relationship between the two quantitative variables X and Y.
Before we can write this in terms of correlation, we must define the population correlation coefficient. In statistics, we use the Greek letter ρ (rho) to denote the population correlation coefficient. Thus, if there is no linear relationship between the two quantitative variables X and Y in our population, this hypothesis is equivalent to
Ho: ρ = 0 (rho = 0).
The alternative hypothesis will be
Ha: ρ ≠ 0 (rho is not equal to zero),
although one-sided tests are possible.
Step 2: Obtain data, check conditions, and summarize data
(i) The sample should be random with independent observations (all observations are independent of all other observations).
(ii) The relationship should be reasonably linear which we can check using a scatterplot. Any clearly non-linear relationship should not be analyzed using this method.
(iii) To conduct this test, both variables should be normally distributed which we can check using histograms and QQ-plots. Outliers can cause problems.
Although there is an intermediate test statistic, in effect, the value of r itself serves as our test statistic.
Step 3: Find the p-value of the test by using the test statistic as follows
We will rely on software to obtain the p-value for this test. We have seen this p-value already when we calculated correlation in Unit 1.
Step 4: Conclusion
As usual, we use the magnitude of the p-value to draw our conclusions. A small p-value indicates that the evidence provided by the data is strong enough to reject Ho and conclude (beyond a reasonable doubt) that the two variables are related (ρ ≠ 0). In particular, if a significance level of 0.05 is used, we will reject Ho if the p-value is less than 0.05.
Confidence intervals can be obtained to estimate the true population correlation coefficient, ρ (rho), however, we will not compute these intervals in this course. You could be asked to interpret or use a confidence interval which has been provided to you.
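Before moving to the non-parametric alternative, here is a minimal sketch (Python, made-up data, purely illustrative since this course uses SAS/SPSS) of how software produces r and its p-value:

```python
from scipy import stats

# Made-up quantitative measurements on 10 subjects (illustrative only)
x = [3, 6, 9, 12, 15, 18, 21, 24, 27, 30]
y = [55, 60, 58, 66, 64, 70, 69, 75, 72, 80]

# Pearson's correlation coefficient r and the p-value for Ho: rho = 0
r, p_value = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p-value = {p_value:.4f}")
```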
Non-Parametric Alternative: Spearman’s Rank Correlation
Learning Objectives
LO 5.1: For a data analysis situation involving two variables, determine the appropriate alternative (non-parametric) method when assumptions of our standard methods are not met.
Learning Objectives
LO 5.2: Recognize situations in which Spearman’s rank correlation is a more appropriate measure of the relationship between two quantitative variables
We will look at one non-parametric test in case Q→Q. Spearman’s rank correlation uses the same calculations as for Pearson’s correlation coefficient except that it uses the ranks instead of the original data. This test is useful when there are outliers or when the variables do not appear to be normally distributed.
• This measure and test are most useful when the relationship between X and Y is monotonic (either non-increasing or non-decreasing throughout), even when it is not linear.
• If the relationship has both increasing and decreasing components, Spearman’s rank correlation is not usually helpful as a measure of correlation.
This measure behaves similarly to r in that:
• it ranges from -1 to 1
• a value of 0 implies no relationship
• positive values imply a positive relationship
• negative values imply a negative relationship.
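A minimal sketch (Python, made-up data containing one large outlier, purely illustrative) comparing Spearman's rank correlation with Pearson's r:

```python
from scipy import stats

# Made-up data with one large outlier (illustrative only)
x = [3, 6, 9, 12, 15, 18, 21, 24, 27, 90]
y = [55, 60, 58, 66, 64, 70, 69, 75, 72, 130]

# Spearman's rank correlation replaces the data with their ranks,
# which limits the influence of the outlier
rho, p_spearman = stats.spearmanr(x, y)
r, p_pearson = stats.pearsonr(x, y)
print(f"Spearman rho = {rho:.3f} (p = {p_spearman:.4f}); Pearson r = {r:.3f} (p = {p_pearson:.4f})")
```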
Now an example:
EXAMPLE: IQ vs. Cry Count
A method for predicting IQ as soon as possible after birth could be important for early intervention in cases such as brain abnormalities or learning disabilities. It has been thought that greater infant vocalization (for instance, more crying) is associated with higher IQ. In 1964, a study was undertaken to see if IQ at 3 years of age is associated with amount of crying at newborn age. In the study, 38 newborns were made to cry after being tapped on the foot and the number of distinct cry vocalizations within 20 seconds was counted. The subjects were followed up at 3 years of age and their IQs were measured.
Response Variable:
• IQ at three years of age
Explanatory Variable:
• Newborn cry count in 20 seconds
Results:
Step 1: State the hypotheses
The hypotheses are:
Ho: There is no relationship between newborn cry count and IQ at three years of age
Ha: There is a relationship between newborn cry count and IQ at three years of age
Steps 2 & 3: Obtain data, check conditions, summarize data, and find the p-value
(i) To the best of our knowledge the subjects are independent.
(ii) The scatterplot shows a relationship that is reasonably linear although not very strong.
(iii) The histograms and QQ-plots for both variables are slightly skewed right. We would prefer more symmetric distributions; however, the skewness is not extreme so we will proceed with caution.
Pearson’s correlation coefficient is 0.402 with a p-value of 0.012.
Spearman’s rank correlation is 0.354 with a p-value of 0.029.
Step 4: Conclusion
Based upon the scatterplot and correlation results, there is a statistically significant, but somewhat weak, positive correlation between newborn cry count and IQ at age 3.
SPSS Output for tests
Simple Linear Regression
Learning Objectives
LO 4.46: In a given context, determine the appropriate standard method for examining the relationship between two quantitative variables and interpret the results provided in the appropriate software output in context.
In Unit 1, we discussed the least squares method for estimating the regression line and used software to obtain the slope and intercept of the linear regression equation. These estimates can be considered as the sample statistics which estimate the true population slope and intercept.
Now we will formalize simple linear regression which will require some additional notation.
A regression model expresses two essential ingredients:
• a tendency of the response variable Y to vary with the explanatory variable X in a systematic fashion (deterministic)
• a stochastic scattering of points around the curve of statistical relationship (random)
Regression is a vast subject which handles a wide variety of possible relationships.
All regression methods begin with a theoretical model which specifies the form of the relationship and includes any needed assumptions or conditions. Now we will introduce a more “statistical” definition of the regression model and define the parameters in the population.
Simple Linear Regression Model:
We will use a different notation here than in the beginning of the semester. Now we use regression model style notation.
We assume the relationship in the population is linear and therefore our regression model can be written as:
$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$
where
• The parameter β0 (beta_zero) is the intercept (in the population) and is the average value of Y when X = 0
• The parameter β1 (beta_1) is the slope (in the population) and is the change in the average Y for each 1 unit increase in X.
• Xi is the value of the explanatory variable for the i-th subject
• Yi is the value of the response variable for the i-th subject
• εi (epsilon_i) is the error term for the i-th subject
• the error terms are assumed to:
• be normally distributed with mean zero (check with a histogram and QQ-plot of the residuals)
• have constant variance (check with a scatterplot of Y vs. X for simple linear regression)
• be statistically independent (difficult to check; be sure to have independent observations in the data, since different methods are required for dependent observations!)
The following picture illustrates the components of this model.
Each orange dot represents an individual observation in the scatterplot. Each observed value is modeled using the previous equation.
$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$
The red line is the true linear regression line. The blue dot represents the predicted value for a particular X value and illustrates that our predicted value only estimates the mean, average, or expected value of Y at that X value.
The error for an individual is expected and is due to the variation in our data. In the previous illustration, it is labeled with εi (epsilon_i) and denoted by a bracket which gives the distance between the orange dot for the observed value and the blue dot for the predicted value for a particular value of X. In practice, we cannot observe the true error for an individual but we will be able to estimate them using the residuals, which we will soon define mathematically.
The regression line represents the average Y for a given X and can be expressed in symbols as the expected value of Y for a given X, E(Y|X), which is estimated by Y-hat.
$E(Y|X_i) = \hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i$
In Unit 1, we used a to represent the intercept and b to represent the slope that we estimated from our data.
In formal regression procedures, we commonly use beta to represent the population parameter and beta-hat to represent the parameter estimate.
These parameter estimates, which are sample statistics estimated from our data, are also sometimes referred to as the coefficients using algebra terminology.
For each observation in our dataset, we also have a residual which is defined as the difference between the observed value and the predicted value for that observation.
$\text{residual}_i = Y_i - \hat{Y}_i$
The residuals are used to check our assumptions of normality and constant variance.
In effect, since we have a quantitative response variable, we are still comparing population means. However, now we must do so for EVERY possible value of X. We want to know if the distribution of Y is the same or different over our range of X values.
This idea is illustrated (including our assumption of normality) in the following picture which shows a case where the distribution of Y is changing as the values of the explanatory variable X change. This change is reflected by only a shift in means since we assume normality and constant variation of Y for all X.
The method used is mathematically equivalent to ANOVA but our interpretations are different due to the quantitative nature of our explanatory variable.
This image shows a scatterplot and regression line on the X-Y plane – as if flat on a table. Then standing up – in the vertical axis – we draw normal curves centered at the regression line for four different X-values – with X increasing for each.
The center of the distributions of the normal distributions which are displayed shows an increase in the mean but constant variation.
The idea is that the model assumes a normal distribution is a good approximation for how the Y-values will vary around the regression line for a particular value of X.
Coefficient of Determination
Learning Objectives
LO 4.47: For simple linear regression models, interpret the coefficient of determination in context.
There is one additional measure which is often of interest in linear regression: the coefficient of determination, R2, which, for simple linear regression, is simply the square of the correlation coefficient, r.
The value of R2 is interpreted as the proportion of variation in our response variable Y, which can be explained by the linear regression model using our explanatory variable X.
Important Properties of R2
• 0 ≤ R2 ≤ 1
• R2 = 0 implies the model explains none of the variation in Y.
• R2 = 1 implies the model explains all of the variation in Y (a perfect fit, which is very unlikely with real data)
A large R2 may or MAY NOT mean that the model fits our data well.
The image below illustrates data with a fairly large R2 yet the model does not fit the data well.
A small R2 may or MAY NOT mean that there is no relationship between X and Y – we must be careful as the relationship that exists may simply not be specified in our model – currently a simple linear model.
The image below illustrates data with a very small R2 yet the true relationship is very strong.
Test Procedure for the Slope in Simple Linear Regression
Now we move into our formal test procedure for simple linear regression.
Step 1: State the hypotheses
In order to test the hypothesis that
Ho: There is no relationship between the two quantitative variables X and Y,
assuming our model is correct (a linear model is sufficient), we can write the above hypothesis as
Ho: β1 = 0 (Beta_1 = 0, the slope of our linear equation = 0 in the population).
The alternative hypothesis will be
Ha: β1 ≠ 0 (Beta_1 is not equal to zero).
Step 2: Obtain data, check conditions, and summarize data
(i) The sample should be random with independent observations (all observations are independent of all other observations).
(ii) The relationship should be linear which we can check using a scatterplot.
(iii) The residuals should be reasonably normally distributed with constant variance which we can check using the methods discussed below.
Normality: Histogram and QQ-plot of the residuals.
Constant Variance: Scatterplot of Y vs. X and/or a scatterplot of the residuals vs. the predicted values (Y-hat). We would like to see random scatter with no pattern and approximately the same spread for all values of X.
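For those who want to try this in practice, here is a hedged sketch (Python with statsmodels and matplotlib, using simulated data) that produces the residual plots described above; in this course the equivalent plots come from SAS/SPSS:

```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Simulated data that roughly satisfies the simple linear regression model
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 60)
y = 5 + 2 * x + rng.normal(0, 2, 60)

model = sm.OLS(y, sm.add_constant(x)).fit()
residuals = model.resid
fitted = model.fittedvalues

fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))
axes[0].hist(residuals, bins=12)                 # normality check: histogram of residuals
axes[0].set_title("Histogram of residuals")
sm.qqplot(residuals, line="s", ax=axes[1])       # normality check: QQ-plot of residuals
axes[1].set_title("QQ-plot of residuals")
axes[2].scatter(fitted, residuals)               # constant variance check: residuals vs. predicted values
axes[2].axhline(0, color="gray")
axes[2].set_title("Residuals vs. predicted values")
plt.tight_layout()
plt.show()
```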
Large outliers which fall outside the pattern of the data can cause problems and exert undue influence on our estimates. We saw in Unit 1 that one observation which is far away on the x-axis can have a large impact on the values of the correlation and slope.
Here are two examples each using the two plots mentioned above.
Example 1: Has constant variance (homoscedasticity)
Scatterplot of Y vs. X (above)
Scatterplot of residuals vs. predicted values (above)
Example 2: Does not have constant variance (heteroscedasticity)
Scatterplot of Y vs. X (above)
Scatterplot of residuals vs. predicted values (above)
The test statistic is similar to those we have studied for other t-tests:
$t = \dfrac{\hat{\beta}_1 - 0}{\text{SE}_{\hat{\beta}_1}}$
where
$\text{SE}_{\hat{\beta}_1}$ = standard error of $\hat{\beta}_1$.
Both of these values, along with the test statistic, are provided in the output from the software.
Step 3: Find the p-value of the test by using the test statistic as follows
Under the null hypothesis, the test statistic follows a t-distribution with n-2 degrees of freedom. We will rely on software to obtain the p-value for this test.
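A hedged sketch (Python with statsmodels, simulated data, purely illustrative) of the slope-test output that software provides: the slope estimate, its standard error, the t statistic with n − 2 degrees of freedom, the p-value, and a 95% confidence interval:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data (illustrative only)
rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 40)
y = 5 + 1.5 * x + rng.normal(0, 3, 40)

model = sm.OLS(y, sm.add_constant(x)).fit()

slope, se_slope = model.params[1], model.bse[1]
t_stat, p_value = model.tvalues[1], model.pvalues[1]
ci_low, ci_high = model.conf_int(alpha=0.05)[1]

print(f"slope = {slope:.3f}, SE = {se_slope:.3f}, t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"95% CI for the slope: ({ci_low:.3f}, {ci_high:.3f}), R-squared = {model.rsquared:.3f}")
```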
Step 4: Conclusion
As usual, we use the magnitude of the p-value to draw our conclusions. A small p-value indicates that the evidence provided by the data is strong enough to reject Ho, and we would conclude there is enough evidence that the slope in the population is not zero and therefore that the two variables are related. In particular, if a significance level of 0.05 is used, we will reject Ho if the p-value is less than 0.05.
Confidence intervals will also be obtained in the software to estimate the true population slope, β1 (beta_1).
EXAMPLE: IQ vs. Cry Count
A method for predicting IQ as soon as possible after birth could be important for early intervention in cases such as brain abnormalities or learning disabilities. It has been thought that greater infant vocalization (for instance, more crying) is associated with higher IQ. In 1964, a study was undertaken to see if IQ at 3 years of age is associated with amount of crying at newborn age. In the study, 38 newborns were made to cry after being tapped on the foot and the number of distinct cry vocalizations within 20 seconds was counted. The subjects were followed up at 3 years of age and their IQs were measured.
Response Variable:
• IQ at three years of age
Explanatory Variable:
• Newborn cry count in 20 seconds
Results:
Step 1: State the hypotheses
The hypotheses are:
Ho: There is no (linear) relationship between newborn cry count and IQ at three years of age
Ha: There is a (linear) relationship between newborn cry count and IQ at three years of age
Steps 2 & 3: Obtain data, check conditions, summarize data, and find the p-value
(i) To the best of our knowledge the subjects are independent.
(ii) The scatterplot shows a relationship that is reasonably linear although not very strong.
(iii) The histogram and QQ-plot of the residuals are both reasonably normally distributed. The scatterplots of Y vs. X and the residuals vs. the predicted values both show no evidence of non-constant variance.
The estimated regression equation is
$\hat{IQ} = 90.76 + 1.54 (\text{cry count})$
The parameter estimate of the slope is 1.54 which means that for each 1-unit increase in cry count, the average IQ is expected to increase by 1.54 points.
The standard error of the estimate of the slope is 0.584, which gives a test statistic of 2.63 in the output; using unrounded values from the output and the formula:
$t = \dfrac{\hat{\beta}_1 - 0}{\text{SE}_{\hat{\beta}_1}} = \dfrac{1.536 - 0}{0.584} = 2.63$.
The p-value is found to be 0.0124. Notice this is exactly the same p-value as we obtained for these data in our test of Pearson’s correlation coefficient. These two methods are equivalent and will always produce the same conclusion about the statistical significance of the linear relationship between X and Y.
The 95% confidence interval for β1 (beta_1) given in the output is (0.353, 2.720).
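As a quick check of the reported output, here is a sketch (Python; the slope estimate, its standard error, and n = 38 are taken from the example above) that reproduces the test statistic and the 95% confidence interval using a t critical value with n − 2 = 36 degrees of freedom:

```python
from scipy import stats

slope, se_slope, n = 1.536, 0.584, 38          # values reported in the output above

t_stat = (slope - 0) / se_slope                # test statistic for Ho: beta_1 = 0
t_crit = stats.t.ppf(0.975, df=n - 2)          # critical value for a 95% CI
ci = (slope - t_crit * se_slope, slope + t_crit * se_slope)
p_value = 2 * (1 - stats.t.cdf(abs(t_stat), df=n - 2))

print(f"t = {t_stat:.2f}, p-value = {p_value:.4f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
# Up to rounding this matches the reported t = 2.63, p = 0.0124, and CI (0.353, 2.720)
```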
This regression model has a coefficient of determination of R2 = 0.161, which means that 16.1% of the variation in IQ score at age three can be explained by our linear regression model using newborn cry count. This confirms a relatively weak relationship, as we found in our previous example using correlations (Pearson’s correlation coefficient and Spearman’s rank correlation).
Step 4: Conclusion
Conclusion of the test for the slope: Based upon the scatterplot and linear regression analysis, since the relationship is linear and the p-value = 0.0124, there is a statistically significant positive linear relationship between newborn cry count and IQ at age 3.
Interpretation of R-squared: Based upon our R2 and scatterplot, the relationship is somewhat weak, with only 16.1% of the variation in IQ score at age three being explained by our linear regression model using newborn cry count.
Interpretation of the slope: For each 1-unit increase in cry count, the population mean IQ is expected to increase by 1.54 points, however, the 95% confidence interval suggests this value could be as low as 0.35 points to as high as 2.72 points.
SPSS Output for tests
EXAMPLE: Gestation vs. Longevity in Animals
We return to the data from an earlier activity (Learn By Doing – Correlation and Outliers (Software)). The average gestation period, or time of pregnancy, of an animal is closely related to its longevity, the length of its lifespan. Data on the average gestation period and longevity (in captivity) of 40 different species of animals have been recorded. Here is a summary of the variables in our dataset:
• animal: the name of the animal species.
• gestation: the average gestation period of the species, in days.
• longevity: the average longevity of the species, in years.
In this case, whether we include the outlier or not, there is a problem of non-constant variance. You can clearly see that, in general, as longevity increases, the variation of gestation increases.
This data is not a particularly good candidate for simple linear regression analysis (without further modification such as transformations or the use of alternative methods).
Pearson’s correlation coefficient (or Spearman’s rank correlation), may still provide a reasonable measure of the strength of the relationship, which is clearly a positive relationship from the scatterplot and our previous measure of correlation.
Output – Contains scatterplots with linear equations and LOESS curves (running average) for the dataset with and without the outlier. Pay particular attention to the problem with non-constant variance seen in these scatterplots.
EXAMPLE: Insurance Premiums
The data used in the analysis provided below contains the monthly premiums, driving experience, and gender for a random sample of drivers.
To analyze this data, we have looked at males and females as two separate groups and estimated the correlation and linear regression equation for each gender. We wish to predict the monthly premium using years of driving experience.
Use this output for additional practice with these concepts. For each gender consider the following:
• Are the assumptions satisfied?
• Is the correlation statistically significant? Is it positive or negative? Weak or strong?
• Is the slope statistically significant? What does the slope mean in context? What is the confidence interval for the slope?
• What is R2 and what does it mean in context?
SPSS Output
Wrap-Up (Inference for Relationships)
Video
Video: Full Course Overview & Summary (68:32)
We’ve just completed the part of the course about the inferential methods for relationships between variables. The overall goal of inference for relationships is to assess whether the observed data provide evidence of a significant relationship between the two variables (i.e., a true relationship that exists in the population).
Much like the unit about relationships in the Exploratory Data Analysis (EDA) unit, this part of the course was organized according to the role and type classification of the two variables involved.
However, unlike the EDA unit, when it comes to inferential methods, we further distinguished between three sub-cases in case C→Q, so essentially we covered 5 cases in total.
The following very detailed role-type classification table summarizes both EDA and inference for the relationship between variables:
Case C-Q
Here is a summary of the tests for the scenario where k = 2.
Independent Samples
Standard Tests
• Two Sample T-Test Assuming Equal Variances
• Two Sample T-Test Assuming Unequal Variances
Non-Parametric Test
• Mann-Whitney U (or Wilcoxon Rank-Sum) Test
Dependent Samples (Less Emphasis)
Standard Test
• Paired T-Test
Non-Parametric Tests
• Sign Test
• Wilcoxon Signed-Rank Test
Here is a summary of the tests for the scenario where k > 2.
Independent Samples
Standard Test
• One-way ANOVA (Analysis of Variance)
Non-Parametric Test
• Kruskal–Wallis One-way ANOVA
Dependent Samples (Not Discussed)
Standard Test
• Repeated Measures ANOVA (or similar)
Case C-C
Independent Samples
Standard Tests
• Continuity Corrected Chi-square Test for Independence (2×2 case)
• Chi-square Test for Independence (RxC case)
Non-Parametric Test
• Fisher’s exact test
Dependent Samples (Not Discussed)
Standard Test
• McNemar’s Test – 2×2 Case
Case Q-Q
Independent Samples
Standard Tests
• Test for Significance of Pearson’s Correlation Coefficient
• Test for Significance of the Slope in Linear Regression
Non-Parametric Test
• Test for Significance of Spearman’s Rank Correlation
Dependent Samples (Not Discussed)
Standard Test
• Not Covered (Longitudinal Data Analysis, etc.)
Empirical research, as outlined in this book, is based on the scientific method. Science is a particular way that some epistemologists believe we can understand the world around us. Science, as a method, relies on both logic, as captured by theory, and empirical observation of the world to determine whether the theory we have developed conforms to what we actually observe. We seek to explain the world with our theories, and we test our theories by deducing and testing hypotheses. When a working hypothesis is supported, we have more confidence in our theory. When the null hypothesis is supported, it undermines our proposed theory.
Science seeks a particular kind of knowledge and has certain biases. When we are engaging in scientific research we are interested in reaching generalizations. Rather than wanting to explain why President Trump’s approval dropped, we are interested in explaining why presidential approval drops across various presidents, or, better yet, how economic conditions affect presidential approval. These generalizations should be logical (which is nothing more than saying they should be grounded in a strong theory) and they should be empirically verified (which, we will see means that we have tested hypotheses deduced from our theory). We also look for generalizations that are causal in nature. Scientists actively seek explanations grounded in causation rather than correlation. Scientific knowledge should be replicable – meaning that other scholars should be able to reach the same conclusions that you do. There should be inter-subjective agreement on scientific findings – meaning that people, with different personal experiences and biases, should still reach the same conclusion.
Scientists also tend to prefer simple explanations to complex ones. They have a bias that says the world is pretty simple and that our theories should reflect that belief. Of course, people are complex, so in the social sciences it can be dangerous to look only for the simplest explanation as most concepts we consider have multiple causes.
1.02: Theory and Empirical Research
This book is concerned with the connection between theoretical claims and empirical data. It is about using statistical modeling, in particular the tool of regression analysis, to develop and refine theories. We define theory broadly as a set of interrelated propositions that seek to explain and, in some cases, predict an observed phenomenon.
Theory: A set of interrelated propositions that seek to explain and predict an observed phenomenon.
Theories contain three important characteristics that we discuss in detail below.
Characteristics of Good Theories
• Coherent and internally consistent
• Causal in nature
• Generate testable hypotheses
1.2.1 Coherent and Internally Consistent
The set of interrelated propositions that constitute a well-structured theory are based on concepts. In well-developed theories, the expected relationships among these concepts are both coherent and internally consistent. Coherence means the identification of concepts and the specified relationships among them are logical, ordered, and integrated. An internally consistent theory will explain relationships with respect to a set of common underlying causes and conditions, providing for consistency in expected relationships (and avoidance of contradictions). For systematic quantitative research, the relevant theoretical concepts are defined such that they can be measured and quantified. Some concepts are relatively easy to quantify, such as the number of votes cast for the winning Presidential candidate in a specified year or the frequency of arrests for gang-related crimes in a particular region and time period. Others are more difficult, such as the concepts of democratization, political ideology or presidential approval. Concepts that are more difficult to measure must be carefully operationalized, which is a process of relating a concept to an observation that can be measured using a defined procedure. For example, political ideology is often operationalized through public opinion surveys that ask respondents to place themselves on a Likert-type scale of ideological categories.
Concepts and Variables
A concept is a commonality across observed individual events or cases. It is a regularity that we find in a complex world. Concepts are our building blocks to understanding the world and to developing theory that explains the world. Once we have identified concepts we seek to explain them by developing theories based on them. Once we have explained a concept we need to define it. We do so in two steps. First, we give it a dictionary-like definition, called a nominal definition. Then, we develop an operational definition that identifies how we can measure and quantify it.
Once a concept has been quantified, it is employed in modeling as a variable. In statistical modeling, variables are thought of as either dependent or independent variables. A dependent variable, Y, is the outcome variable; this is the concept we are trying to explain and/or predict. The independent variable(s), X, is the variable(s) that is used to predict or explain the dependent variable. The expected relationships between (and among) the variables are specified by the theory.
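In R, which is used throughout this text, this role distinction is expressed directly in formula notation: the dependent variable appears to the left of the tilde and the independent variable(s) to the right. The sketch below is illustrative only; `y`, `x1`, `x2`, and `mydata` are hypothetical placeholder names rather than variables from our dataset.

```r
# A minimal sketch of formula notation; the names are placeholders.
# The dependent variable (Y) goes on the left of ~, the independent
# variables (X) go on the right.
model <- lm(y ~ x1 + x2, data = mydata)
summary(model)  # estimated relationships between the X's and Y
```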
Measurement
When measuring concepts, the indicators that are used in building and testing theories should be both valid and reliable. Validity refers to how well the measurement captures the concept. Face validity, for example, refers to the plausibility and general acceptance of the measure, while the domain validity of the measure concerns the degree to which it captures all relevant aspects of the concept. Reliability, by contrast, refers to how consistent the measure is with repeated applications. A measure is reliable if, when applied to the repeated observations in similar settings, the outcomes are consistent.
Assessing the Quality of a Measure
Measurement is the process of assigning numbers to the phenomenon or concept that you are interested in. Measurement is straight-forward when we can directly observe the phenomenon. One agrees on a metric, such as inches or pounds, and then figures out how many of those units are present for the case in question. Measurement becomes more challenging when you cannot directly observe the concept of interest. In political science and public policy, some of the things we want to measure are directly observable: how many dollars were spent on a project or how many votes the incumbent receives, but many of our concepts are not observable: is issue X on the public’s agenda, how successful is a program, or how much do citizens trust the president. When the concept is not directly observable the operational definition is especially important. The operational definition explains exactly what the researcher will do to assign a number for each subject/case.
In reality, there is always some possibility that the number assigned does not reflect the true value for that case, i.e., there may be some error involved. Error can come about for any number of reasons, including mistakes in coding, the need for subjective judgments, or a measuring instrument that lacks precision. These kinds of error will generally produce inconsistent results; that is, they reduce reliability. We can assess the reliability of an indicator using one of two general approaches. One approach is a test-retest method where the same subjects are measured at two different points in time. If the measure is reliable the correlation between the two observations should be high. We can also assess reliability by using multiple indicators of the same concept and determining if there is a strong inter-correlation among them using statistical formulas such as Cronbach’s alpha or Kuder-Richardson Formula 20 (KR-20).
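As a rough illustration of these two approaches, the sketch below shows how a test-retest correlation and Cronbach's alpha might be computed in R. The `survey` data frame and its trust items are hypothetical, and the `cronbach_alpha()` function is written out from the standard formula rather than taken from a package.

```r
# A sketch only; 'survey' and its trust items are hypothetical.

# Test-retest reliability: correlate the same indicator measured twice.
cor(survey$trust_t1, survey$trust_t2, use = "complete.obs")

# Internal consistency: Cronbach's alpha from its standard formula,
# alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the scale).
cronbach_alpha <- function(items) {
  items <- na.omit(items)            # drop incomplete responses
  k <- ncol(items)                   # number of items
  item_vars <- apply(items, 2, var)  # variance of each item
  scale_var <- var(rowSums(items))   # variance of the summed scale
  (k / (k - 1)) * (1 - sum(item_vars) / scale_var)
}
cronbach_alpha(survey[, c("trust1", "trust2", "trust3")])
```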
We can also have error when our measure is not valid. Valid indicators measure the concept we think they are measuring. The indicator should both converge with the concept and discriminate between the concept and similar yet different concepts. Unfortunately, there is no failsafe way to determine whether an indicator is valid. There are, however, a few things you can do to gain confidence in the validity of the indicator. First, you can simply look at it from a logical perspective and ask if it seems like it is valid. Does it have face validity? Second, you can see if it correlates well with other indicators that are considered valid, and in ways that are consistent with theory. This is called construct validity. Third, you can determine if it works in the way expected, which is referred to as predictive validity. Finally, we have more confidence if other researchers using the same concept agree that the indicator is considered valid. This consensual validity at least ensures that different researchers are talking about the same thing.
Measurement of Different Kinds of Concepts
Measurement can be applied to different kinds of concepts, which causes measures of different concepts to vary. There are three primary levels of measurement: ordinal, interval, and nominal. Ordinal level measures indicate relative differences, such as more or less, but do not provide equal distances between intervals on the measurement scale. Therefore, ordinal measures cannot tell us how much more or less one observation is than another. Imagine a survey question asking respondents to identify their annual income. Respondents are given a choice of four different income levels: $0-20,000, $20,000-50,000, $50,000-100,000, and $100,000+. This measure gives us an idea of the rank order of respondents’ income, but it is impossible for us to identify consistent differences between these responses. With an interval level measure, the variable is ordered and the differences between values are consistent. Sticking with the example of income, survey respondents are now asked to provide their annual income to the nearest ten thousand dollar mark (e.g., $10,000, $20,000, $30,000, etc.). This measurement technique produces an interval level variable because we have both a rank ordering and equal spacing between values. Ratio scales are interval measures with the special characteristic that the value of zero (0) indicates the absence of some property. A value of zero (0) income in our example may indicate a person does not have a job. Another example of a ratio scale is the Kelvin temperature scale, because zero (0) degrees Kelvin indicates the complete absence of heat. Finally, a nominal level measure identifies categorical differences among observations. Numerical values assigned to nominal variables have no inherent meaning, but only differentiate one “type” (e.g., gender, race, religion) from another.
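These levels of measurement map loosely onto how variables are stored in R: nominal measures as unordered factors, ordinal measures as ordered factors, and interval or ratio measures as numeric vectors. The values in the sketch below are made up purely for illustration.

```r
# Illustrative only; the values are invented.
religion <- factor(c("Protestant", "Catholic", "None"))           # nominal
income_bracket <- factor(c("0-20k", "20-50k", "50-100k", "100k+"),
                         levels = c("0-20k", "20-50k", "50-100k", "100k+"),
                         ordered = TRUE)                           # ordinal
income_dollars <- c(0, 20000, 50000, 100000)                       # ratio (true zero)
```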
1.2.2 Theories and Causality
Theories should be causal in nature, meaning that an independent variable is thought to have a causal influence on the dependent variable. In other words, a change in the independent variable causes a change in the dependent variable. Causality can be thought of as the “motor” that drives the model and provides the basis for explanation and (possibly) prediction.
The Basis of Causality in Theories
1. Time Ordering: The cause precedes the effect, X→Y
2. Co-Variation: Changes in X are associated with changes in Y
3. Non-Spuriousness: There is not a variable Z that causes both X and Y
To establish causality we want to demonstrate that a change in the independent variable is a necessary and sufficient condition for a change in the dependent variable (though more complex, interdependent relationships can also be quantitatively modeled). We can think of the independent variable as a treatment, τ, and we speculate that τ causes a change in our dependent variable, Y. The “gold standard” for causal inference is an experiment where (a) the level of τ is controlled by the researcher and (b) subjects are randomly assigned to a treatment or control group. The group that receives the treatment has outcome Y1 and the control group has outcome Y0; the treatment effect can be defined as τ = Y1 − Y0. Causality is inferred because the treatment was only given to one group, and since these groups were randomly assigned, other influences should wash out. Thus the difference τ = Y1 − Y0 can be attributed to the treatment.
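The logic of randomization can be illustrated with a small simulation, sketched below under invented values (a true treatment effect of 5): because assignment to treatment is random, a simple difference in group means recovers τ.

```r
# A toy simulation of random assignment; the numbers are invented.
set.seed(1)
n <- 1000
treat <- sample(c(0, 1), n, replace = TRUE)   # random assignment to groups
y <- 50 + 5 * treat + rnorm(n, sd = 10)       # outcomes with true tau = 5
tau_hat <- mean(y[treat == 1]) - mean(y[treat == 0])
tau_hat                                       # close to the true effect of 5
```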
Given the nature of social science and public policy theorizing, we often can’t control the treatment of interest. For example, our case study in this text concerns the effect of political ideology on views about the environment. For this type of relationship, we cannot randomly assign ideology in an experimental sense. Instead, we employ statistical controls to account for the possible influences of confounding factors, such as age and gender. Using multiple regression we control for other factors that might influence the dependent variable.1
1.2.3 Generation of Testable Hypothesis
Theory building is accomplished through the testing of hypotheses derived from theory. In simple form, a theory implies (sets of) relationships among concepts. These concepts are then operationalized. Finally, models are developed to examine how the measures are related. Properly specified hypotheses can be tested with empirical data, which are derived from the application of valid and reliable measures to relevant observations. The testing and re-testing of hypotheses develops levels of confidence that we can have for the core propositions that constitute the theory. In short, empirically grounded theories must be able to posit clear hypotheses that are testable. In this text, we discuss hypotheses and test them using relevant models and data.
As noted above, this text uses the concepts of political ideology and views about the environment as a case study in order to generate and test hypotheses about the relationships between these variables. For example, based on popular media accounts, it is plausible to expect that political conservatives are less likely to be concerned about the environment than political moderates or liberals. Therefore, we can pose the working hypothesis that measures of political ideology will be systematically related to measures of concern for the environment – with conservatives showing less concern for the environment. In classical hypothesis testing, the working hypothesis is tested against a null hypothesis. A null hypothesis is an implicit hypothesis that posits the independent variable has no effect (i.e., null effect) on the dependent variable. In our example, the null hypothesis states ideology has no effect on environmental concern.
Closely related to hypothesis testing in empirical research is the concept of functional relationships – or functions. Hypotheses posit systematic relationships between variables, and those relationships are expressed as functions. For example, we can hypothesize that an individual’s productivity is related to coffee consumption (productivity is a function of coffee consumption).2
Functions are ubiquitous. When we perceive relational order or patterns in the world around us, we are observing functions. Individual decisions about when to cross the street, whether to take a nap, or engage in a barroom brawl can all be ascribed to patterns (the “walk” light was lit; someone stayed up too late last night; a Longhorn insulted the Sooner football team). Patterns are how we make sense of the world, and patterns are expressed as functions. That does not mean the functions we perceive are always correct, or that they allow us to predict perfectly. However, without functions we don’t know what to expect; chaos prevails.
In mathematical terms, a function relates an outcome variable, y, to one or more inputs, x. This can be expressed more generally as \(y = f(x_1, x_2, x_3, \ldots, x_n)\), which means y is a function of the x’s, or, y varies as a function of the x’s.
Functions form the basis of the statistical models that will be developed throughout the text. In particular, this text will focus on linear regression, which is based on linear functions such as y=f(x)=5+x, where 5 is a constant and x is a variable. We can plot this function with the values of x ranging from −5 to 5. This is shown in Figure \(1\).
As you can see, the x values range from −5 to 5 and the corresponding y values range from 0 to 10. The function produces a straight line because the changes in y are consistent across all values of x. This type of function is the basis of the linear models we will develop; therefore, these models are said to have a linear functional form.
However, non-linear functional forms are also common. For example, \(y = f(x) = 3 - x^2\) is a quadratic function, which is a type of polynomial function since it contains a square term (an exponent). It is plotted in Figure \(2\). This function is non-linear because the changes in y are not consistent across the full range of x.
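If you want to reproduce rough versions of these two plots yourself, the base R sketch below draws the linear function y = 5 + x and the quadratic y = 3 − x² over the range −5 to 5 (the published figures were produced differently, so this is only an approximation).

```r
# Rough sketches of the linear and quadratic functions discussed above.
x <- seq(-5, 5, by = 0.1)
par(mfrow = c(1, 2))                 # show the two plots side by side
plot(x, 5 + x, type = "l", ylab = "y", main = "y = f(x) = 5 + x")
plot(x, 3 - x^2, type = "l", ylab = "y", main = "y = f(x) = 3 - x^2")
```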
Examples of Functions in Social Science Theories
As noted, functions are the basis of statistical models that are used to test hypotheses. Below are a few examples of functions that are related to social science theories.
• Welfare and work incentives
• Employment =f(welfare programs, education level, work experience,…)
• Nuclear weapons proliferation
• Decision to develop nuclear weapons =f(perceived threat, incentives, sanctions,…)
• Priming and political campaign contributions
• Contribution ($) = f(Prime (suggested $), income,…)
• Successful program implementation
• Implementation =f(clarity of law, level of public support, problem complexity,…)
Try your hand at this with theories that are familiar to you. First, identify the dependent and independent variables of interest; then develop your own conjectures about the form of the functional relationship(s) among them.
1.04: 1.4 Theory in Social Science
Theories play several crucial roles in the development of scientific knowledge. Some of these include providing patterns for data interpretation, linking the results of related studies together, providing frameworks for the study of concepts, and allowing the interpretation of more general meanings from any single set of findings. Hoover and Donovan (2004) provide a very useful discussion of the role of theories in scientific thinking – find it and read it!
The Role of Theory in Social Science
Adapted from The Elements of Social Scientific Thinking by Kenneth Hoover and Todd Donovan (2004, 37)
• Theory provides patterns for the interpretation of data
• Theory links one study with another
• Theory supplies frameworks within which concepts acquire significance
• Theory allows us to interpret the larger meaning of our findings
Perhaps, in the broadest sense, theories tie the enterprise of the social (or any) science together, as we build, revise, criticize and destroy theories in that collective domain referred to as “the literature.”
The goal of this text is to develop an understanding of how to build theories by testing hypotheses using empirical data and statistical models. There are three necessary ingredients of strong empirical research. The first is a carefully constructed theory that generates empirically testable hypotheses. Once tested, these hypotheses should have implications for the development of theory. The second ingredient is quality data. The data should be valid, reliable, and relevant. The final ingredient is using the appropriate model design and execution. Specifically, the appropriate statistical models must be used to test the hypotheses. Appropriate models are those that are properly specified, estimated, and use data that conforms to the statistical assumptions. This course focuses on model design and execution.
As noted, this text uses political ideology and views on the environment as a case study to examine theory building in the social sciences.3 The text is organized by the idealized steps of the research process. As a first step, this first chapter discussed theories and hypothesis testing, which should always be (but often are not!) the first consideration. The second chapter focuses on research design and issues of internal and external validity. Chapter 3 examines data and covers specific ways to understand how the variables in the data are distributed. This is vital to know before doing any type of statistical modeling. The fourth chapter is an introduction to probability. The fifth chapter covers inference and how to reach conclusions regarding a population when you are studying a sample. The sixth chapter explores how to understand basic relationships that can hold between two variables, including cross-tabulations, covariance, correlation, and difference of means tests. These relationships are the foundation of more sophisticated statistical approaches, and therefore understanding them is often a precursor to the later steps of statistical analysis. The seventh through tenth chapters focus on bi-variate ordinary least squares (OLS) regression, or OLS regression with a dependent variable and one independent variable. This allows us to understand the mechanics of regression before moving on to the third section (chapters eleven to fifteen), which covers multiple OLS regression. The final section of the book (chapter sixteen) covers logistic (logit) regression. Logit regression is an example of a class of models called generalized linear models (GLM). GLMs allow for linear analysis to be performed on different types of dependent variables that may not be appropriate for OLS regression.
As a final note, this text makes extensive use of `R`. The code to reproduce all of the examples is included in the text in such a way that it can be easily copied and pasted into your `R` console. The data used for the examples is available as well. You can find it here.
1. This matter will be discussed in more detail in the multiple regression section.↩
2. The more coffee, the greater the productivity – up to a point! Beyond some level of consumption, coffee may induce jitters and ADD-type behavior, thereby undercutting productivity. Therefore the posited function that links coffee consumption to productivity is non-linear, initially positive but then flat or negative as consumption increases.↩
3. As you may have already realized, social scientists often take these steps out of order … we may “back into” an insight, or skip a step and return to it later. There is no reliable cookbook for what we do. Rather, think of the idealized steps of the scientific process as an important heuristic that helps us think through our line of reasoning and analysis – often after the fact – to help us be sure that we learned what we think we learned from our analysis.↩
Often scholars rely on data collected by other researchers and end up, de facto, with the research design developed by the original scholars. But if you are collecting your own data this stage becomes the key to the success of your project and the decisions you make at this stage will determine both what you will be able to conclude and what you will not be able to conclude. It is at this stage that all the elements of science come together.
We can think of research as starting with a problem or a research question and moving to an attempt to provide an answer to that problem by developing a theory. If we want to know how good (empirically accurate) that theory is we will want to put it to one or more tests. Framing a research question and developing a theory could all be done from the comforts of your backyard hammock. Or, they could be done by a journalist (or, for that matter, by the village idiot) rather than a scientist. To move beyond that stage requires more. To test the theory, we deduce one or more hypotheses from the theory, i.e., statements that should be true if the theory accurately depicts the world. We test those hypotheses by systematically observing the world—the empirical end of the scientific method. It requires you to get out of that hammock and go observe the world. The observations you make allow you to accept or reject your hypotheses, providing insights into the accuracy and value of your theory. Those observations are conducted according to a plan or a research design.
2.02: Internal and External Validity
Developing a research design should be more than just a matter of convenience (although there is an important element of that which we will discuss at the end of this chapter). Not all designs are created equally and there are trade-offs we make when opting for one type of design over another. The two major components of an assessment of a research design are its internal validity and its external validity. Internal validity basically means we can make a causal statement within the context of our study. We have internal validity if, for our study, we can say our independent variable caused our dependent variable. To make that statement we need to satisfy the conditions of causality we identified previously. The major challenge is the issue of spuriousness. We have to ask if our design allows us to say our independent variable makes our dependent variable vary systematically as it changes and that those changes in the dependent variable are not due to some third or extraneous factor. It is worth noting that even with internal validity, you might have serious problems when it comes to your theory. Suppose your hypothesis is that being well-fed makes one more productive. Further, suppose that you operationalize “being well-fed” as consuming twenty Hostess Twinkies in an hour. If the Twinkie eaters are more productive than those who did not get the Twinkies, you might be able to show causality, but if your theory is based on the idea that “well-fed” means a balanced and healthy diet then you still have a problematic research design. It has internal validity because what you manipulated (Twinkie eating) affected your dependent variable, but that conclusion does not really bring any enlightenment to your theory.
The second basis for evaluating your research design is to assess its external validity. External validity means that we can generalize the results of our study. It asks whether our findings are applicable in other settings. Here we consider what population we are interested in generalizing to. We might be interested in adult Americans, but if we have studied a sample of first-year college students then we might not be able to generalize to our target population. External validity means that we believe we can generalize to our (and perhaps other) population(s). Along with other factors discussed below, replication is key to demonstrating external validity.
There are many ways to classify systematic, scientific research designs, but the most common approach is to classify them as experimental or observational. Experimental designs are most easily thought of as a standard laboratory experiment. In an experimental design the researcher controls (holds constant) as many variables as possible and then assigns subjects to groups, usually at random. If randomization works (and it will if the sample size is large enough, but technically that means infinite in size), then the two groups are identical. The researcher then manipulates the experimental treatment (independent variable) so that one group is exposed to it and the other is not. The dependent variable is then observed. If the dependent variable is different for the two groups, we can have quite a bit of confidence that the independent variable caused the dependent variable. That is, we have good internal validity. In other words, the conditions that need to be satisfied to demonstrate causality can be met with an experimental design. Correlation can be determined, time order is evident, and spuriousness is not a problem—there simply is no alternative explanation.
Unfortunately, in the social sciences, the artificiality of the experimental setting often creates suspect external validity. We may want to know the effects of a news story on views towards climate change so we conduct an experiment where participants are brought into a lab setting and some (randomly selected) see the story and others watch a video clip with a cute kitten. If the experiment is conducted appropriately, we can determine the consequences of being exposed to the story. But, can we extrapolate from that study and have confidence that the same consequences would be found in a natural setting, e.g., in one’s living room with kids running around and a cold beverage in your hand? Maybe not. A good researcher will do things that minimize the artificiality of the setting, but external validity will often remain suspect.
Observational designs tend to have the opposite strengths and weaknesses. In an observational design, the researcher cannot control who is exposed to the experimental treatment; therefore, there is no random assignment and there is no control. Does smoking cause heart disease? A researcher might approach that research question by collecting detailed medical and lifestyle histories of a group of subjects. If there is a correlation between those who smoke and heart disease, can we conclude a causal relationship? Generally, the answer to that question is no, because any other difference between the two groups is an alternative explanation (meaning that the relationship might be spurious). For better or worse, though, there are fewer threats to external validity (see below for more detail) because of the natural research setting.
A specific type of observational design, the natural experiment, requires mention because they are increasingly used to great value. In a natural experiment, subjects are exposed to different environmental conditions that are outside the control of the researcher, but the process governing exposure to the different conditions arguably resembles random assignment. Weather, for example, is an environmental condition that arguably mimics random assignment. For example, imagine a natural experiment where one part of New York City gets a lot of snow on election day, whereas another part gets almost no snow. Researchers do not control the weather but might argue that patterns of snowfall are basically random, or, at the very least, exogenous to voting behavior. If you buy this argument, then you might use this as a natural experiment to estimate the impact of weather conditions on voter turnout. Because the experiment takes place in a natural setting, external validity is less of a problem. But, since we do not have control over all events, we may still have internal validity questions.
2.04: Threats to Validity
To understand the pros and cons of various designs and to be able to better judge specific designs, we identify specific threats to internal and external validity. Before we do so, it is important to note that a (perhaps the) primary challenge to establishing internal validity in the social sciences is the fact that most of the phenomena we care about have multiple causes and are often a result of some complex set of interactions. For example, X may be only a partial cause of Y or X may cause Y, but only when Z is present. Multiple causation and interactive effects make it very difficult to demonstrate causality, both internally and externally. Turning now to more specific threats, Table 2.1 identifies common threats to internal validity and Table 2.2 identifies common threats to external validity.
Figure \(1\): Common Threats to Internal Validity
• History: Any event that occurs while the experiment is in progress might be an alternative explanation; using a control group mitigates this concern.
• Maturation: Normal changes over time (e.g., fatigue or aging) might affect the dependent variable; using a control group mitigates this concern.
• Selection Bias: If randomization is not used to assign participants, the groups may not be equivalent.
• Experimental Mortality: If groups lose participants (e.g., due to dropping out of the experiment), they may not be equivalent.
• Testing: A pre-test may confound the influence of the experimental treatment; using a control group mitigates this concern.
• Instrumentation: Changes or differences in the measurement process might alternatively account for differences.
• Statistical Regression: The natural tendency for extreme scores to regress, or move toward, the mean.
Figure \(2\): Common Threats to External Validity
In this section we look at some common research designs, the notation used to symbolize them, and then consider the internal and external validity of the designs. We start with the most basic experimental design, the post-test only design (Figure \(3\)). In this design, subjects are randomly assigned to one of two groups, with one group receiving the experimental treatment.4 There are advantages to this design in that it is relatively inexpensive and eliminates the threats associated with pre-testing. If randomization worked, the (unobserved) pre-test measures would be the same, so any differences in the observations would be due to the experimental treatment. The problem is that randomization could fail us, especially if the sample size is small.
Many experimental groups are small and many researchers are not comfortable relying on randomization without empirical verification that the groups are the same, so another common design is the Pre-test, Post-test Design (Figure \(4\)). By conducting a pre-test, we can be sure that the groups are identical when the experiment begins. The disadvantages are that adding groups drives the cost up (and/or decreases the size of the groups) and that the various threats due to testing start to be a concern. Consider the example used above concerning a news story and views on climate change. If subjects were given a pre-test on their views on climate change and then exposed to the news story, they might become more attentive to the story. If a change occurs, we can say it was due to the story (internal validity), but we have to wonder whether we can generalize to people who had not been sensitized in advance.
A final experimental design deals with all the drawbacks of the previous two by combining them into what is called the Solomon Four Group Design (Figure \(5\)). Intuitively it is clear that the concerns of the previous two designs are dealt with in this design, but the actual analysis is complicated. Moreover, this design is expensive so while it may represent an ideal, most researchers find it necessary to compromise.
Even the Solomon Four Group design does not solve all of our validity problems. It still likely suffers from the artificiality of the experimental setting. Researchers generally try a variety of tactics to minimize the artificiality of the setting through a variety of efforts such as watching the aforementioned news clip in a living room-like setting rather than on a computer monitor in a cubicle or doing jury research in the courthouse rather than the basement of a university building.
Observational designs lack random assignment, so all of the above designs can be considered observational designs when the assignment to groups is not random. You might, for example, want to consider the effects of a new teaching style on student test scores. One classroom might get the intervention (the new teaching style) and another not be exposed to it (the old teaching style). Since students are not randomly assigned to classrooms it is not experimental and the threats that result from selection bias become a concern (along with all the same concerns we have in the experimental setting). What we gain, of course, is the elimination or minimization of the concern about the experimental setting.
A final design that is commonly used is the repeated measures or longitudinal research design, where repeated observations are made over time and, at some point, there is an intervention (experimental treatment), followed by subsequent observations (Figure \(6\)). Selection bias and testing threats are obvious concerns with this design. But there are also concerns about history, maturation, and mortality. Anything that occurs between \(O_n\) and \(O_{n+1}\) becomes an alternative explanation for any changes we find. This design may also have a control group, which would give clues regarding the threat of history. Because of the extended time involved in this type of design, the researcher also has to be concerned about experimental mortality and maturation.
This brief discussion illustrates major research designs and the challenges to maximize internal and external validity. With these experimental designs, we worry about external validity, but since we have said we seek the ability to make causal statements, it seems that preference might be given to research via experimental designs. Certainly, we see more and more experimental designs in political science with important contributions. But, before we dismiss observational designs, we should note that in later chapters, we will provide an approach to providing statistical controls which, in part, substitutes for the control we get with experimental designs.
2.06: Plan Meets Reality
Research design is the process of linking together all the elements of your research project. None of the elements can be taken in isolation, but must all come together to maximize your ability to speak to your theory (and research question) while maximizing internal and external validity within the constraints of your time and budget. The planning process is not straightforward and there are times that you will feel you are taking a step backward. That kind of “progress” is normal.
Additionally, there is no single right way to design a piece of research to address your research problem. Different scholars, for a variety of reasons, would end up with quite different designs for the same research problem. Design includes trade-offs, e.g., internal vs. external validity, and compromises based on time, resources, and opportunities. Knowing the subject matter – both previous research and the subject itself – helps the researcher understand where a contribution can be made and when opportunities present themselves.
1. The symbol R means there is a random assignment to the group. X symbolizes exposure to experimental treatment. O is an observation or measurement.↩
What does it mean to characterize your data? First, it means knowing how many observations are contained in your data and the distribution of those observations over the range of your variable(s). What kinds of measures (interval, ordinal, nominal) do you have, and what are the ranges of valid measures for each variable? How many cases of missing (no data) or miscoded (measures that fall outside the valid range) do you have? What do the coded values represent? While seemingly trivial, checking and evaluating your data for these attributes can save you major headaches later. For example, missing values for an observation often get a special code – say, “-99” – to distinguish them from valid observations. If you neglect to treat these values properly, R (or any other statistics program) will treat that value as if it were valid and thereby turn your results into a royal hairball. We know of cases in which even seasoned quantitative scholars have made the embarrassing mistake of failing to properly handle missing values in their analyses. In at least one case, a published paper had to be retracted for this reason. So don’t skimp on the most basic forms of data characterization!
The dataset used for purposes of illustration in this version of this text is taken from a survey of Oklahomans, conducted in 2016 by OU’s Center for Risk and Crisis Management. The survey question wording and background will be provided in class. However, for purposes of this chapter, note that the measure of `ideology` consists of a self-report of political ideology on a scale that ranges from 1 (strong liberal) to 7 (strong conservative); the measure of the `perceived risk of climate change` ranges from zero (no risk) to 10 (extreme risk). `Age` was measured in years.
It is often useful to graph the variables in your dataset to get a better idea of their distribution. In addition, we may want to compare the distribution of a variable to a theoretical distribution (typically a normal distribution). This can be accomplished in several ways, but we will show two here—a histogram and a density curve—and more will be discussed in later chapters. For now, we examine the distribution of the variable measuring age. The red line on the density visualization presents the normal distribution given the mean and standard deviation of our variable.
A histogram creates intervals of equal length, called bins, and displays the frequency of observations in each of the bins. To produce a histogram in R simply use the `geom_histogram` command in the `ggplot2` package. Next, we plot the density of the observed data along with a normal curve. This can be done with the `geom_density` command in the `ggplot2` package.
```r
library(ggplot2)
ggplot(ds, aes(age)) +
  geom_histogram()
```

```r
ggplot(ds, aes(age)) +
  geom_density() +
  stat_function(fun = dnorm,
                args = list(mean = mean(ds$age, na.rm = TRUE),
                            sd = sd(ds$age, na.rm = TRUE)),
                color = "red")
```
You can also get an overview of your data using a table known as a frequency distribution. The frequency distribution summarizes how often each value of your variable occurs in the dataset. If your variable has a limited number of values that it can take on, you can report all values, but if it has a large number of possible values (e.g., age of respondent), then you will want to create categories, or bins, to report those frequencies. In such cases, it is generally easier to make sense of the percentage distribution. Table 3.3 is a frequency distribution for the ideology variable. From that table, we see, for example, that about one-third of all respondents are moderates. We see the numbers decrease as we move away from that category, but not uniformly. There are a few more people on the conservative extreme than on the liberal side, and the number of people placing themselves in the penultimate categories on either end is greater than the number in the categories toward the middle. The histogram and density curve would, of course, show the same pattern.
The other thing to watch for here (or in the charts) is whether there is an unusual observation. If one person scored 17 in this table, you could be pretty sure a coding error was made somewhere. You cannot find all your errors this way, but you can find some, including the ones that have the potential to most seriously adversely affect your analysis.
Figure \(3\): Frequency Distribution for Ideology
In R, we can obtain the data for the above table with the following functions:
```r
# frequency counts for each level
table(ds$ideol)
```

```
##
##   1   2   3   4   5   6   7
## 122 279 185 571 328 688 351
```

```r
# To view percentages
library(dplyr)
table(ds$ideol) %>% prop.table()
```

```
##
##          1          2          3          4          5          6
## 0.04833597 0.11053883 0.07329635 0.22622821 0.12995246 0.27258320
##          7
## 0.13906498
```

```r
# multiply the numbers by 100
table(ds$ideol) %>% prop.table() * 100
```

```
##
##         1         2         3         4         5         6         7
##  4.833597 11.053883  7.329635 22.622821 12.995246 27.258320 13.906498
```
Having obtained a sample, it is important to be able to characterize that sample. In particular, it is important to understand the probability distributions associated with each variable in the sample.
3.1.1 Central Tendency
Measures of central tendency are useful because a single statistic can be used to describe the distribution. We focus on three measures of central tendency: the mean, the median, and the mode.
Measures of Central Tendency
The Mean: The arithmetic average of the values
The Median: The value at the center of the distribution
The Mode: The most frequently occurring value
We will primarily rely on the mean, because of its efficient property of representing the data. But medians – particularly when used in conjunction with the mean - can tell us a great deal about the shape of the distribution of our data. We will return to this point shortly.
3.1.2 Level of Measurement and Central Tendency
The three measures of central tendency – the mean, median, and mode – each tell us something different about our data, but each has some limitations as well (especially when used alone). Knowing the mode tells us what is most common, but we do not know how common; using it alone would not even leave us confident that it is an indicator of anything very central. When getting to know your data, it is generally a good idea to examine all of the descriptive statistics that you can in order to get a good feel for the data.
One issue, though, is that your ability to use any statistic is dependent on the level of measurement for the variable. The mean requires you to add all your observations together. But you cannot perform mathematical functions on ordinal or nominal level measures. Your data must be measured at the interval level to calculate a meaningful mean. (If you ask R to calculate the mean student id number, it will, but what you get will be nonsense.) Finding the middle item in an ordered listing of your observations (the median) requires the ability to order your data, so your level of measurement must be at least ordinal. Therefore, if you have nominal level data, you can only report the mode (but no median or mean), so it is critical that you also look beyond the central tendency to the overall distribution of the data.
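In R, the mean and median have built-in functions, while the mode can be read off a frequency table; the sketch below assumes the same `ds` data frame used earlier in this chapter.

```r
# Central tendency for the survey data (assumes the ds data frame above).
mean(ds$age, na.rm = TRUE)         # mean: requires interval-level data
median(ds$age, na.rm = TRUE)       # median: requires at least ordinal data
names(which.max(table(ds$ideol)))  # mode: the most frequently occurring category
```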
3.1.3 Moments
In addition to measures of central tendency, “moments” are important ways to characterize the shape of the distribution of a sample variable. Moments are applicable when the data are measured at the interval level. The first four moments are those that are used most often.
1. Expected Value: The expected value of a variable, E(X), is its mean.

\[ E(X) = \bar{X} = \frac{\sum X_i}{n} \]
There are two basic ways to find simple probabilities. One way to find a probability is a priori or using logic without any real-world evidence or experience. If we know a die is not loaded, we know the probability of rolling a two is 1 out of 6 or .167. Probabilities are easy to find if every possible outcome has the same probability of occurring. If that is the case, the probability is the number of ways your outcome can be achieved over all possible outcomes.
The second method to determine a probability is called posterior, which uses the experience and evidence that has accumulated over time to determine the likelihood of an event. If we do not know that the probability of getting a head is the same as the probability of getting a tail when we flip a coin (and, therefore, we cannot use an a priori methodology), we can flip the coin repeatedly. After flipping the coin, say, 6000 times, if we get 3000 heads we can conclude the probability of getting a head is .5, i.e., 3000 divided by 6000.
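The posterior approach is easy to mimic with a quick simulation; the sketch below flips a fair coin 6,000 times and estimates the probability of a head from the observed proportion.

```r
# Estimating P(head) from repeated trials (the posterior approach).
set.seed(42)
flips <- sample(c("H", "T"), 6000, replace = TRUE)
mean(flips == "H")   # proportion of heads; close to .5
```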
Sometimes we want to look at probabilities in a more complex way. Suppose we want to know how Martinez fares against right-handed pitchers. That kind of probability is referred to as a conditional probability. The formal way that we might word that interest is: what is Martinez’s probability of getting a hit given that the pitcher is right-handed? We are establishing a condition (right-handed pitcher) and are only interested in the cases that satisfy the condition. The calculation is the same as a simple probability, but it eliminates his at-bats against lefties and only considers those at-bats against right-handed pitchers. In this case, he has 23 hits in 56 at-bats (against right-handed pitchers) so his probability of getting a hit against a right-handed pitcher is 23/56 or .411. (This example uses the posterior method to find the probability, by the way.) A conditional probability is symbolized as P(A|B) where A is getting a hit and B is the pitcher is right-handed. It is read as the probability of A given B or the probability that Martinez will get a hit given that the pitcher is right-handed.
Another type of probability that we often want is a joint probability. A joint probability tells the likelihood of two (or more) events both occurring. Suppose you want to know the probability that you will like this course and that you will get an A in it, simultaneously – the best of all possible worlds. The formula for finding a joint probability is:
\[ P(A \cap B) = P(A) \times P(B|A) \text{ or } P(B) \times P(A|B) \tag{4.1} \]
The probability of two events occurring at the same time is the probability that the first one will occur times the probability the second one will occur given that the first one has occurred.
If events are independent the calculation is even easier. Events are independent if the occurrence or non-occurrence of one does not affect whether the other occurs. Suppose you want to know the probability of liking this course and not needing to get gas on the way home (your definition of a perfect day). Those events are presumably independent, so P(B|A) = P(B) and the joint formula for independent events becomes:
\[ P(A \cap B) = P(A) \times P(B) \tag{4.2} \]
The final type of probability is the union of two probabilities. The union of two probabilities is the probability that either one event will occur or the other will occur – either, or, it does not matter which one. You might go into a statistics class with some dread and you might say a little prayer to yourself: Please let me either like this class or get an A. I do not care which one, but please give me at least one of them." The formula and symbols for that kind of probability are:
\[ P(A \cup B) = P(A) + P(B) - P(A \cap B) \tag{4.3} \]
It is easy to understand why we just add P(A) and P(B), but it may be less clear why we subtract the joint probability. The answer is simple – because we counted where they overlap twice (those instances in both A and in B), so we have to subtract out one instance.
If, though, the events are mutually exclusive, we do not need to subtract the overlap. Mutually exclusive events are events that cannot occur at the same time, so there is no overlap. Suppose you are from Chicago and will be happy if either the Cubs or the White Sox win the World Series. Those events are mutually exclusive since only one team can win the World Series so to find the union of those probabilities we simply have to add the probability of the Cubs winning to the probability of the White Sox winning.
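The joint and union formulas are simple enough to check by hand; the sketch below plugs in made-up probabilities just to show the arithmetic of Equations (4.1) through (4.3).

```r
# Made-up probabilities, used only to illustrate the formulas.
p_a  <- 0.6    # P(A), e.g., liking the course
p_b  <- 0.3    # P(B), e.g., getting an A
p_ba <- 0.5    # P(B|A)

p_joint <- p_a * p_ba   # Equation 4.1: P(A and B)
p_a * p_b               # Equation 4.2: joint probability if A and B are independent
p_a + p_b - p_joint     # Equation 4.3: P(A or B)
p_a + p_b               # union if A and B were mutually exclusive (no overlap to subtract)
```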
If we want to find the probability of a score falling in a certain range, e.g., between 3 and 7, or more than 12, we can use the normal distribution to determine that probability. Our ability to make that determination is based on some known characteristics of the normal curve. We know that for all normal curves 68.26% of all scores fall within one standard deviation of the mean, that 95.44% fall within two standard deviations, and that 99.72% fall within three standard deviations. (The normal distribution is dealt with more formally in the next chapter.) So, we know that something that is three or more standard deviations above the mean is pretty rare. Figure \(1\) illustrates the probabilities associated with the normal curve.7
According to Figure \(1\), there is a .3413 probability of an observation falling between the mean and one standard deviation above the mean and, therefore, a .6826 probability of a score falling within (+/−) one standard deviation of the mean. There is also a .8413 probability of a score being one standard deviation above the mean or less (.5 probability of a score falling below the mean and a .3413 probability of a score falling between the mean and one standard deviation above it). (Using the language we learned in Chapter 3, another way to articulate that finding is to say that a score one standard deviation above the mean is at the 84th percentile.) There is also a .1587 probability of a score being a standard deviation above the mean or higher (1.0 − .8413).
Intelligence tests have a mean of 100 and a standard deviation of 15. Someone with an IQ of 130, then, is two standard deviations above the mean, meaning they score higher than 97.72% of the population. Suppose, though, your IQ is 140. Using Figure \(1\) would enable us only to approximate how high that score is. To find out more precisely, we have to find out how many standard deviations above the mean 140 is and then go to a more precise normal curve table.
To find out how many standard deviations from the mean an observation is, we calculate a standardized score, or Z-score. The formula to convert a raw score to a Z-score is:
\[ Z = \frac{x - \mu}{\sigma} \tag{4.4} \]
In this case, the Z-score is (140 − 100)/15, or 2.67. Looking at the formula, you can see that a Z-score of zero puts that score at the mean; a Z-score of one is one standard deviation above the mean; and a Z-score of 2.67 is 2.67 standard deviations above the mean.
The next step is to go to a normal curve table to interpret that Z-score. The normal curve table at the end of the chapter contains these values. To use the table you combine rows and columns to find a Z-score of 2.67. Where they cross we see the value .4962. That value means there is a .4962 probability of scoring between the mean and a Z-score of 2.67. Since there is a .5 probability of scoring below the mean, adding the two values together gives a .9962 probability of finding an IQ of 140 or lower, or a .0038 probability of someone having an IQ of 140 or better.
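R's `pnorm()` function can stand in for the normal curve table; the sketch below reproduces the IQ example (any small differences from the table values come only from rounding the Z-score).

```r
# The IQ example using R's normal distribution functions.
z <- (140 - 100) / 15                 # Z-score, about 2.67
pnorm(z)                              # P(IQ <= 140), about .9962
1 - pnorm(140, mean = 100, sd = 15)   # P(IQ >= 140), about .0038
```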
Bernoulli Probabilities
We can use a calculation known as the Bernoulli Process to determine the probability of a certain number of successes in a given number of trials. For example, if you want to know the probability of getting exactly three heads when you flip a coin four times, you can use the Bernoulli calculation. To perform the calculation you need to determine the number of trials (n), the number of successes you care about (k), the probability of success on a single trial (p), and the probability of not a success (q = 1 − p). The operative formula is:
\[\frac{n!}{k!(n-k)!} \times p^k \times q^{n-k}\]
The symbol n! is "n factorial," or n × (n − 1) × (n − 2) … × 1. So if you want to know the probability of getting three heads on four flips of a coin, n = 4, k = 3, p = .5, and q = .5:
\[\frac{4!}{3!(4-3)!} \times .5^3 \times .5^{4-3} = .25\]
The normal curve can be used to approximate these Bernoulli (binomial) probabilities when both n × p and n × q are greater than ten. The Bernoulli calculation itself is most useful when you are interested in exactly k successes. If you want to know the probability of k or more, or k or fewer, successes, it is easier to use the normal curve. Bernoulli could still be used if your data are discrete, but you would have to do repeated calculations.
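As an illustration, the three-heads-in-four-flips example can be checked in R, either by writing out the formula or with the built-in binomial function; this is just a sketch of the calculation described above.

n <- 4; k <- 3; p <- .5; q <- 1 - p
factorial(n)/(factorial(k)*factorial(n-k)) * p^k * q^(n-k)   # 0.25, by the formula
dbinom(k, size = n, prob = p)                                # same answer from R's binomial function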
4.03: Summary
Probabilities are simple statistics but are important when we want to know the likelihood of some event occurring. There are frequent real-world instances where we find that information valuable. We will see, starting in the next chapter, that probabilities are also central to the concept of inference.
The basis of hypothesis testing with statistical analysis is inference. In short, inference—and inferential statistics by extension—means deriving knowledge about a population from a sample of that population. Given that in most contexts it is not possible to have all the data on an entire population of interest, we, therefore, need to sample from that population.8 However, in order to be able to rely on inference, the sample must cover the theoretically relevant variables, variable ranges, and contexts.
5.1.1 Populations and Samples
In statistical analysis, we differentiate between populations and samples. The population is the total set of items that we care about. The sample is a subset of those items that we study in order to understand the population. While we are interested in the population, we often need to resort to studying a sample due to time, financial, or logistic constraints that might make studying the entire population infeasible. Instead, we use inferential statistics to make inferences about the population from a sample.
5.1.2 Sampling and Knowing
Take a relatively common – but perhaps less commonly examined – expression about what we “know” about the world around us. We commonly say we “know” people, and some we know better than others. What does it mean to know someone? In part, it must mean that we can anticipate how that person would behave in a wide array of situations. If we know that person from experience, then it must be that we have observed their behavior across a sufficient variety of situations in the past to be able to infer how they would behave in future situations. Put differently, we have “sampled” their behavior across a relevant range of situations and contexts to be confident that we can anticipate their behavior in the future.9 Similar considerations about sampling might apply to “knowing” a place, a group, or an institution. Of equal importance, samples of observations across different combinations of variables are necessary to identify relationships (or functions) between variables. In short, samples – whether deliberately drawn and systematic or otherwise – are integral to what we think we know of the world around us.
5.1.3 Sampling Strategies
Given the importance of sampling, it should come as little surprise that there are numerous strategies designed to provide useful inference about populations. For example, how can we judge whether the temperature of soup is appropriate before serving it? We might stir the pot, to assure uniformity of temperature across possible (spoon-sized) samples, then sample a spoonful. A particularly thorny problem in sampling concerns the practice of courtship, in which participants may attempt to put “their best foot forward” to make a good impression. Put differently, the participants often seek to bias the sample of relational experiences to make themselves look better than they might on average. Sampling in this context usually involves (a) getting opinions of others, thereby broadening (if only indirectly) the size of the sample, and (b) observing the courtship partner over a wide range of circumstances in which the intended bias may be difficult to maintain. Put formally, we may try to stratify the sample by taking observations in appropriate “cells” that correspond to different potential influences on behavior – say high-stress environments involving preparation for final exams or meeting parents. In the best possible case, however, we try to wash out the effect of various influences on our samples through randomization. To pursue the courtship example (perhaps a bit too far!), observations of behavior could be taken across interactions from a randomly assigned array of partners and situations. But, of course, by then all bets are off on things working out anyway.
5.1.4 Sampling Techniques
When engaging in inferential statistics to infer the characteristics of a population from a sample, it is essential to be clear about how the sample was drawn. Sampling can be a very complex practice with multiple stages involved in drawing the final sample. It is desirable that the sample is some form of a probability sample, i.e., a sample in which each member of the population has a known probability of being sampled. The most direct form of an appropriate probability sample is a random sample where everyone has the same probability of being sampled. A random sample has the advantages of simplicity (in theory) and ease of inference as no adjustments to the data are needed. But, the reality of conducting a random sample may make the process quite challenging. Before we can draw subjects at random, we need a list of all members of the population. For many populations (e.g. adult US residents) that list is impossible to get. Not too long ago, it was reasonable to conclude that a list of telephone numbers was a reasonable approximation of such a listing for American households. During the era that landlines were ubiquitous, pollsters could randomly call numbers (and perhaps ask for the adult in the household who had the most recent birthday) to get a good approximation of a national random sample. (It was also an era before caller identification and specialized ringtones, which meant that calls were routinely answered, therefore decreasing - but not eliminating - concern with response bias.) Of course, telephone habits have changed and pollsters find it increasingly difficult to make the case that random dialing of landlines serves as a representative sample of adult Americans.
Other forms of probability sampling are frequently used to overcome some of the difficulties that pure random sampling presents. Suppose our analysis will call upon us to make comparisons based on race. Only 12.6% of Americans are African-American. Suppose we also want to take into account religious preference. Only 5% of African-Americans are Catholic, which means that only .6% of the population is both. If our sample size is 500, we might end up with three Catholic African-Americans. A stratified random sample (not to be confused with a non-probability quota sample) can address that problem. A stratified random sample is similar to a simple random sample but will draw from different subpopulations, strata, at different rates. The total sample needs to be weighted, then, to be representative of the entire population.
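As a rough sketch of what drawing a stratified sample can look like in code, the snippet below assumes a hypothetical sampling frame `pop.frame` with a `race` column; the variable names and the equal quota of 50 per stratum are illustrative assumptions only, and the resulting sample would still need to be weighted back to the population proportions.

library(dplyr)
stratified <- pop.frame %>%
  group_by(race) %>%        # one stratum per racial category
  slice_sample(n = 50)      # oversample small strata relative to their population share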
Another type of probability sample that is common in face-to-face surveys relies on cluster sampling. Cluster sampling initially samples based on clusters (generally geographic units, such as census tracts) and then samples participants within those units. In fact, this approach often uses multi-level sampling where the first level might be a sample of congressional districts, then census tracts, and then households. The final sample will need to be weighted in a complex way to reflect varying probabilities that individuals will be included in the sample.
Non-probability samples, or those for which the probability of inclusion of a member of the population in the sample is unknown, can raise difficult issues for statistical inference; however, under some conditions, they can be considered representative and used for inferential statistics.
Convenience samples (e.g., undergraduate students in the Psychology Department subject pool) are accessible and relatively low cost but may differ from the larger population to which you want to infer in important respects. Necessity may push a researcher to use a convenience sample, but inference should be approached with caution. A convenience sample based on “I asked people who came out of the bank” might provide quite different results from a sample based on “I asked people who came out of a payday loan establishment”.
Some non-probability samples are used because the researcher does not want to make inferences to a larger population. A purposive or judgmental sample relies on the researcher’s discretion regarding who can bring useful information to bear on the subject matter. If we want to know why a piece of legislation was enacted, it makes sense to sample the author and co-authors of the bill, committee members, leadership, etc., rather than a random sample of members of the legislative body.
Snowball sampling is similar to a purposive sample in that we look for people with certain characteristics but rely on subjects to recommend others who meet the criteria we have in place. We might want to know about struggling young artists. They may be hard to find, though, since their works are not hanging in galleries, so we may start with one or more that we can find and then ask them who else we should interview.
Increasingly, various kinds of non-probability samples are employed in social science research, and when this is done it is critical that the potential biases associated with the samples be evaluated. But there is also growing evidence that non-probability samples can be used inferentially - when done very carefully, using complex adjustments. Wang, et al. (2014) demonstrate that a sample of Xbox users could be used to forecast the 2012 presidential election outcome. 10 An overview of their technique is relatively simple, but the execution is more challenging. They divided their data into cells based on politically and demographically relevant variables (e.g., party id, gender, race, etc.) and ended up with over 175,000 cells - post stratification. (There were about three-quarters of a million participants in the Xbox survey). Basically, they found the vote intention within each cell and then weighted each cell based on a national survey using multilevel regression. Their final results were strikingly accurate. Similarly, Nate Silver, with FiveThirtyEight, has demonstrated remarkable ability to forecast based on his weighted sample of polls taken by others.
Sampling techniques can be relatively straightforward, but as one moves away from simple random sampling, the sampling process either becomes more complex or limits our ability to draw inferences about a population. Researchers use all of these techniques for good purposes and the best technique will depend on a variety of factors, such as budget, expertise, need for precision, and what research question is being addressed. For the remainder of this text, though, when we talk about drawing inferences, the data will be based upon an appropriately drawn probability sample.
5.1.5 So How is it That We Know?
So why is it that the characteristics of samples can tell us a lot about the characteristics of populations? If samples are properly drawn, the observations taken will provide a range of values on the measures of interest that reflect those of the larger population. The connection is that we expect the phenomenon we are measuring will have a distribution within the population, and a sample of observations drawn from the population will provide useful information about that distribution. The theoretical connection comes from probability theory, which concerns the analysis of random phenomena. For present purposes, if we randomly draw a sample of observations on a measure for an individual (say, discrete acts of kindness), we can use probability theory to make inferences about the characteristics of the overall population of the phenomenon in question. More specifically, probability theory allows us to make inferences about the shape of that distribution – how frequently are acts of kindness committed, or what proportion of acts evidence kindness?
In sum, samples provide information about probability distributions. Probability distributions include all possible values and the probabilities associated with those values. The normal distribution is the key probability distribution in inferential statistics.
Figure \(1\): The Normal Distribution
For purposes of statistical inference, the normal distribution is one of the most important types of probability distributions. It forms the basis of many of the assumptions needed to do quantitative data analysis, and is the basis for a wide range of hypothesis tests. A standardized normal distribution has a mean, μ, of 0 and a standard deviation (s.d.), σ, of 1. The distribution of an outcome variable, Y, can be described:
\[Y \sim N(\mu_Y, \sigma^2_Y) \tag{5.1}\]
Where $\sim$ stands for “distributed as”, $N$ indicates the normal distribution, and the mean $\mu_Y$ and variance $\sigma^2_Y$ are the parameters. The probability function of the normal distribution is expressed below:
The Normal Probability Density Function: The probability density function (PDF) of a normal distribution with mean μ and standard deviation σ:
$f(x) = \frac{1}{\sigma \sqrt{2 \pi}} e^{-(x-\mu)^{2}/2\sigma^{2}}$
The Standard Normal Probability Density Function: The standard normal PDF has μ = 0 and σ = 1
$f(x) = \frac{1}{\sqrt{2 \pi}}e^{-x^{2}/2}$
Using the standard normal PDF, we can plot a normal distribution in R.
x <- seq(-4,4,length=200)
y <- 1/sqrt(2*pi)*exp(-x^2/2)
plot(x,y, type="l", lwd=2)
Note that the tails go to ±∞. In addition, the density of a distribution over the range of x is the key to hypothesis testing. With a normal distribution, ∼68% of the observations will fall within 1 standard deviation of the mean, ∼95% will fall within 2 standard deviations, and ∼99.7% within 3 standard deviations. This is illustrated in Figures 5.2, 5.3, and 5.4.
The normal distribution is characterized by several important properties. The distribution of observations is symmetrical around the mean μ; the frequency of observations is highest (the mode) at μ, with more extreme values occurring with lower frequency (this can be seen in the curve plotted above); and only the mean and variance are needed to characterize data and test simple hypotheses.
The Properties of the Normal Distribution
• It is symmetrical around its mean and median, μ
• The highest probability (aka “the mode”) occurs at its mean value
• Extreme values occur in the tails
• It is fully described by its two parameters, μ and $\sigma^2$
If the values for μ and $\sigma^2$ are known, which might be the case with a population, then we can calculate a Z-score to compare differences in μ and $\sigma^2$ between two normal distributions or obtain the probability for a given value given μ and $\sigma^2$. The Z-score is calculated:
\[Z = \frac{Y - \mu_Y}{\sigma} \tag{5.2}\]
Therefore, if we have a normal distribution with a μ of 70 and a $\sigma^2$ of 9, we can calculate a probability for $i = 75$. First we calculate the Z-score, then we determine the probability of that score based on the normal distribution.
z <- (75-70)/3
z
## [1] 1.666667
p <- pnorm(1.67)
p
## [1] 0.9525403
p <- 1-p
p
## [1] 0.04745968
As shown, a score of 75 is 1.67 standard deviations above the mean, placing it just beyond the 95th percentile (>0.95); the probability of obtaining a score that high or higher when μ = 70 and $\sigma^2$ = 9 is just under 5%.
5.2.1 Standardizing a Normal Distribution and Z-scores
A distribution can be plotted using the raw scores found in the original data. That plot will have a mean and standard deviation calculated from the original data. To utilize the normal curve to determine probability functions and for inferential statistics we will want to convert that data so that it is standardized. We standardize so that the distribution is consistent across all distributions. That standardization produces a set of scores that have a mean of zero and a standard deviation of one. A standardized or Z-score of 1.5 means, therefore, that the score is one and a half standard deviations above the mean. A Z-score of -2.0 means that the score is two standard deviations below the mean.
As formula (4.4) indicated, standardizing is a simple process. To move the mean from its original value to a mean of zero, all you have to do is subtract the mean from each score. To standardize the standard deviation to one, all that is necessary is to divide each score by the standard deviation.
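A brief sketch of both steps in R, using a small made-up set of scores; the built-in `scale` function performs the same standardization.

raw <- c(2, 4, 4, 4, 5, 5, 7, 9)
z.by.hand <- (raw - mean(raw))/sd(raw)   # subtract the mean, then divide by the standard deviation
z.scaled <- scale(raw)                   # scale() returns the same standardized scores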
5.2.2 The Central Limit Theorem
An important property of samples is associated with the Central Limit Theorem (CLT). Imagine for a moment that we have a very large (or even infinite) population, from which we can draw as many samples as we’d like. According to the CLT, as the n-size (number of observations) within a sample drawn from that population increases, the more the distribution of the means taken from samples of that size will resemble a normal distribution. This is illustrated in Figure \(5\). Also note that the population does not need to have a normal distribution for the CLT to apply. Finally, a distribution of means from a normal population will be approximately normal at any sample size.
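The CLT is easy to see by simulation. The sketch below draws repeated samples from a decidedly non-normal (uniform) population and plots the distribution of the sample means; the number of samples and the sample sizes are arbitrary choices for illustration.

means.n5 <- replicate(1000, mean(runif(5)))      # 1,000 sample means with n = 5
means.n50 <- replicate(1000, mean(runif(50)))    # 1,000 sample means with n = 50
par(mfrow = c(1, 2))
hist(means.n5, main = "n = 5")
hist(means.n50, main = "n = 50")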
Another key implication of the Central Limit Theorem that is illustrated in Figure \(5\) is that the mean of the repeated sample means is the same, regardless of sample size, and that the mean of the sample means is the population mean (assuming a large enough number of samples). Those conclusions lead to the important point that the sample mean is the best estimate of the population mean, i.e., the sample mean is an unbiased estimate of the population mean. Figure \(5\) also illustrates that, as the sample size increases, the efficiency of the estimate increases. As the sample size increases, the mean of any particular sample is more likely to approximate the population mean.
When we begin our research we should have some population in mind - the set of items that we want to draw conclusions about. We might want to know about all adult Americans or about human beings (past, present, and future) or about a specific meteorological condition. There is only one way to know with certainty about that population and that is to examine all cases that fit the definition of our population. Most of the time, though, we cannot do that – in the case of adult Americans it would be very time-consuming, expensive, and logistically quite challenging, and in the other two cases it simply would be impossible. Our research, then, often forces us to rely on samples.
Because we rely on samples, inferential statistics are probability-based. As Figure \(5\) illustrates, our sample could perfectly reflect our population; it could be (and is likely to be) at least a reasonable approximation of the population, or the sample could deviate substantially from the population. Two critical points are being made here: the best estimates we have of our population parameters are our sample statistics, and we never know with certainty how good that estimate is. We make decisions (statistical and real-world) based on probabilities.
5.3.1 Confidence Intervals
Because we are dealing with probabilities, if we are estimating a population parameter using a sample statistic, we will want to know how much confidence to place in that estimate. If we want to know a population mean, but only have a sample, the best estimate of that population mean is the sample mean. To know how much confidence to have in a sample mean, we put a confidence interval around it. A confidence interval will report both a range for the estimate and the probability the population value falls in that range. We say, for example, that we are 95% confident that the true value is between A and B.
To find that confidence interval, we rely on the standard error of the estimate. Figure \(5\) plots the distribution of sample statistics drawn from repeated samples. As the sample size increases, the estimates cluster closer to the true population value, i.e., the standard deviation is smaller. We could use the standard deviation from repeated samples to determine the confidence we can have in any particular sample, but in reality, we are no more likely to draw repeated samples than we are to study the entire population. The standard error, though, provides an estimate of the standard deviation we would have if we did draw a number of samples. The standard error is based on the sample size and the distribution of observations in our data:
\[SE = \frac{s}{\sqrt{n}} \tag{5.3}\]
where s is the sample standard deviation, and n is the size (number of observations) of the sample.
The standard error can be interpreted just like a standard deviation. If we have a large sample, we can say that 68.26% of all of our samples (assuming we drew repeated samples) would fall within one standard error of our sample statistic or that 95.44% would fall within two standard errors.
If our sample size is not large, instead of using z-scores to estimate confidence intervals, we use t-scores to estimate the interval. T-scores are calculated just like z-scores, but our interpretation of them is slightly different. The confidence interval formula is:
\[\bar{x} \pm SE_x \times t \tag{5.4}\]
To find the appropriate value for t, we need to decide what level of confidence we want (generally 95%) and our degrees of freedom (df), which is $n - 1$. We can find a confidence interval with `R` using the `t.test` function. By default, `t.test` will test the hypothesis that the mean of our variable of interest (`glbcc_risk`) is equal to zero. It will also find the mean score and a confidence interval for the `glbcc_risk` variable:
``t.test(ds\$glbcc_risk)``
``````##
## One Sample t-test
##
## data: ds\$glbcc_risk
## t = 97.495, df = 2535, p-value < 0.00000000000000022
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 5.826388 6.065568
## sample estimates:
## mean of x
## 5.945978``````
Moving from the bottom up on the output we see that our mean score is 5.95. Next, we see that the 95% confidence interval is between 5.83 and 6.07. We are, therefore, 95% confident that the population mean is somewhere between those two scores. The first part of the output tests the null hypothesis that the mean value is equal to zero – a topic we will cover in the next section.
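The interval reported by `t.test` can also be reproduced by hand from formula (5.4), combining the sample mean, the standard error, and the appropriate t value; this is only a sketch, and it assumes the same `glbcc_risk` variable used above.

``````n <- sum(!is.na(ds$glbcc_risk))                 # number of non-missing observations
m <- mean(ds$glbcc_risk, na.rm = TRUE)          # sample mean
se <- sd(ds$glbcc_risk, na.rm = TRUE)/sqrt(n)   # standard error, formula (5.3)
m + c(-1, 1) * qt(.975, df = n - 1) * se        # 95 percent confidence interval``````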
5.3.2 The Logic of Hypothesis Testing
We can use the same set of tools to test hypotheses. In this section, we introduce the logic of hypothesis testing. In the next chapter, we address it in more detail. Remember that a hypothesis is a statement about the way the world is and that it may be true or false. Hypotheses are generally deduced from our theory and if our expectations are confirmed, we gain confidence in our theory. Hypothesis testing is where our ideas meet the real world.
Due to the nature of inferential statistics, we cannot directly test hypotheses, but instead, we can test a null hypothesis. While a hypothesis is a statement of an expected relationship between two variables, the null hypothesis is a statement that says there is no relationship between the two variables. A null hypothesis might read: As X increases, Y does not change. (We will discuss this topic more in the next chapter, but we want to understand the logic of the process here.)
Suppose a principal wants to cut down on absenteeism in her school and offers an incentive program for perfect attendance. Before the program, suppose the attendance rate was 85%. After having the new program in place for a while, she wants to know what the current rate is so she takes a sample of days and estimates the current attendance rate to be 88%. Her research hypothesis is: the attendance rate has gone up since the announcement of the new program (i.e., attendance is greater than 85%). Her null hypothesis is that the attendance rate has not gone up since the announcement of the new program (i.e., attendance is less than or equal to 85%). At first, it seems that her null hypothesis is wrong (88% > 85%), but since we are using a sample, it is possible that the true population value is less than 85%. Based on her sample, how likely is it that the true population value is less than 85%? If the likelihood is small (and remember there will always be some chance), then we say our null hypothesis is wrong, i.e., we reject our null hypothesis, but if the likelihood is reasonable we accept our null hypothesis. The standard we normally use to make that determination is .05 – we want less than a .05 probability that we could have found our sample value (here 88%) if our null hypothesized value (85%) is true for the population. We use the t-statistic to find that probability. The formula is:
\[t = \frac{\bar{x} - \mu}{SE} \tag{5.5}\]
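To make the attendance example concrete, here is a sketch of that calculation in R; the sample size of 30 days and the standard deviation of .06 are made-up numbers for illustration, not values from the example itself.

``````n <- 30                                 # hypothetical number of sampled days
se <- .06/sqrt(n)                       # hypothetical standard error of the attendance rate
t <- (.88 - .85)/se                     # formula (5.5)
pt(t, df = n - 1, lower.tail = FALSE)   # probability of a sample mean this high if the true rate is .85``````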
If we return to the output presented above on `glbcc_risk`, we can see that R tested the null hypothesis that the true population value of `glbcc_risk` is equal to zero. It reports t = 97.495 and a p-value of 2.2e-16. This p-value is less than .05, so we can reject our null hypothesis and be very confident that the true population value is greater than zero.
5.3.3 Some Miscellaneous Notes about Hypothesis Testing
Before suspending our discussion of hypothesis testing, there are a few loose ends to tie up. First, you might be asking yourself where the .05 standard of hypothesis testing comes from. Is there some magic to that number? The answer is no; .05 is simply the standard, but some researchers report .10 or .01. The p-value of .05, though, is generally considered to provide a reasonable balance between making it nearly impossible to reject a null hypothesis and too easily cluttering our knowledge box with things that we think are related but actually are not. Even using the .05 standard means that 5% of the time when we reject the null hypothesis, we are wrong - there is no relationship. (Besides giving you pause wondering what we are wrong about, it should also help you see why science deems replication to be so important.)
Second, as we just implied, anytime we make a decision to either accept or reject our null hypothesis, we could be wrong. The probabilities tell us that if p = 0.05, 5% of the time when we reject the null hypothesis, we are wrong because it is actually true. We call that type of mistake a Type I Error. However, when we accept the null hypothesis, we could also be wrong – there may be a relationship within the population. We call that a Type II Error. As should be evident, there is a trade-off between the two. If we decide to use a p-value of .01 instead of .05, we make fewer Type I errors – just one out of 100, instead of 5 out of 100. Yet that also means that we increase the likelihood that we are accepting a null hypothesis that is false – a Type II Error. To rephrase the previous paragraph: .05 is normally considered to be a reasonable balance between the probability of committing Type I Errors as opposed to Type II Errors. Of course, if the consequence of one type of error or the other is greater, then you can adjust the p-value.
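A simulation makes the meaning of a Type I error concrete: if we repeatedly test a null hypothesis that is actually true, a .05 threshold should lead us to reject it about 5% of the time. The sketch below assumes nothing beyond base R.

``````set.seed(1)
p.values <- replicate(5000, t.test(rnorm(50))$p.value)  # 5,000 tests of a true null (mu = 0)
mean(p.values < .05)                                    # share of false rejections, close to .05``````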
Third, when testing hypotheses, we can use either a one-tailed test or a two-tailed test. The question is whether the entire .05 goes in one tail or is split evenly between the two tails (making, effectively, the p-value equal to .025). Generally speaking, if we have a directional hypothesis (e.g., as X increases so does Y), we will use a one-tail test. If we are expecting a positive relationship, but find a strong negative relationship, we generally conclude that we have a sampling quirk and that the relationship is null, rather than the opposite of what we expected. If, for some reason, you have a hypothesis that does not specify the direction, you would be interested in values in either tail and use a two-tailed test.
5.4 Differences Between Groups
In addition to covariance and correlation (discussed in the next chapter), we can also examine differences in some variables of interest between two or more groups. For example, we may want to compare the mean of the perceived climate change risk variable for males and females. First, we can examine these variables visually.
As coded in our dataset, gender (gender) is a numeric variable with a 1 for males and 0 for females. However, we may want to make gender a categorical variable with labels for Female and Male, as opposed to a numeric variable coded as 0’s and 1’s. To do this we make a new variable and use the `factor` command, which will tell `R` that the new variable is a categorical variable. Then we will tell `R` that this new variable has two levels or factors, Male and Female. Finally, we will label the factors of our new variable and name it f.gend.
``ds\$f.gend <- factor(ds\$gender, levels = c(0, 1), labels = c("Female","Male"))``
We can then observe differences in the distributions of perceived risk for males and females by creating density curves:
``````library(tidyverse)
ds %>%
drop_na(f.gend) %>%
ggplot(aes(glbcc_risk)) +
geom_density() +
facet_wrap(~ f.gend, scales = "fixed")``````
Based on the density plots, it appears that some differences exist between males and females regarding perceived climate change risk. We can also use the `by` command to see the mean of climate change risk for males and females.
``by(ds\$glbcc_risk, ds\$f.gend, mean, na.rm=TRUE)``
``````## ds\$f.gend: Female
## [1] 6.134259
## --------------------------------------------------------
## ds\$f.gend: Male
## [1] 5.670577``````
Again there appears to be a difference, with females perceiving greater risk on average (6.13) than males (5.67). However, we want to know whether these differences are statistically significant. To test for the statistical significance of the difference between groups, we use a t-test.
5.4.1 t-tests
The t-test is based on the t distribution. The t distribution, also known as the Student’s t distribution, is the probability distribution for sample estimates. It has similar properties to, and is related to, the normal distribution. The normal distribution is based on a population where μ and $\sigma^2$ are known; however, the t distribution is based on a sample where μ and $\sigma^2$ are estimated, as the mean $\bar{X}$ and variance $s^2_x$. The mean of the t distribution, like the normal distribution, is 0, but the variance, $s^2_x$, is conditioned by $n - 1$ degrees of freedom (df). Degrees of freedom are the values used to calculate statistics that are “free” to vary.11 A t distribution approaches the standard normal distribution as the number of degrees of freedom increases.
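The relationship between the t and normal distributions is easy to see by plotting them together; the degrees of freedom used below are arbitrary choices for illustration.

``````curve(dnorm(x), from = -4, to = 4, lwd = 2)   # standard normal
curve(dt(x, df = 5), add = TRUE, lty = 2)     # t with 5 df has heavier tails
curve(dt(x, df = 50), add = TRUE, lty = 3)    # t with 50 df is nearly normal``````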
In summary, we want to know the difference of means between females and males, $d = \bar{X}_f - \bar{X}_m$, and whether that difference is statistically significant. This amounts to a hypothesis test in which our working hypothesis, H1, is that males are less likely than females to view climate change as risky. The null hypothesis, H0, is that there is no difference between males and females regarding the risks associated with climate change. To test H1 we use the t-test, which is calculated:
\[t = \frac{\bar{X}_f - \bar{X}_m}{SE_d} \tag{5.6}\]
where $SE_d$ is the standard error of the estimated difference between the two groups. To estimate $SE_d$, we need the SE of the estimated mean for each group. The SE is calculated:
\[SE = \frac{s}{\sqrt{n}} \tag{5.7}\]
where s is the s.d. of the variable. H1 states that there is a difference between males and females; therefore, under H1 it is expected that t > 0, since zero is the mean of the t distribution. However, under H0 it is expected that t = 0.
We can calculate this in `R`. First, we calculate the n size for males and females. Then we calculate the SE for males and females.
``````n.total <- length(ds\$gender)
nM <- sum(ds\$gender, na.rm=TRUE)
nF <- n.total-nM
by(ds\$glbcc_risk, ds\$f.gend, sd, na.rm=TRUE)``````
``````## ds\$f.gend: Female
## [1] 2.981938
## --------------------------------------------------------
## ds\$f.gend: Male
## [1] 3.180171``````
``````# note: the hand-entered standard deviations below (2.82 and 2.35) are rounded
# values that differ slightly from the by() output above (3.18 and 2.98)
sdM <- 2.82
seM <- 2.82/(sqrt(nM))
seM``````
``## [1] 0.08803907``
``````sdF <- 2.35
seF <- 2.35/(sqrt(nF))
seF``````
``## [1] 0.06025641``
Next, we need to calculate the $SE_d$:
\[SE_d = \sqrt{SE_M^2 + SE_F^2} \tag{5.8}\]
``````seD <- sqrt(seM^2+seF^2)
seD``````
``## [1] 0.1066851``
Finally, we can calculate our tt-score, and use the `t.test` function to check.
``by(ds\$glbcc_risk, ds\$f.gend, mean, na.rm=TRUE)``
``````## ds\$f.gend: Female
## [1] 6.134259
## --------------------------------------------------------
## ds\$f.gend: Male
## [1] 5.670577``````
``````# note: the hand-entered means below (6.96 and 6.42) do not match the by()
# output above (6.13 and 5.67); the t.test below uses the exact values
meanF <- 6.96
meanM <- 6.42
t <- (meanF-meanM)/seD
t``````
``## [1] 5.061625``
``t.test(ds\$glbcc_risk~ds\$gender)``
``````##
## Welch Two Sample t-test
##
## data: ds\$glbcc_risk by ds\$gender
## t = 3.6927, df = 2097.5, p-value = 0.0002275
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.2174340 0.7099311
## sample estimates:
## mean in group 0 mean in group 1
## 6.134259 5.670577``````
For the difference in the perceived risk between women and men, the hand calculation above gives a t-value of about 5.06, while the Welch `t.test` (which uses the exact group means and standard deviations rather than the rounded values entered by hand) reports t = 3.69. Either way, the result is greater than zero, as expected by H1. In addition, as shown in the `t.test` output, the p-value (the probability of obtaining our result if the population difference was 0) is extremely low at .0002275 (that’s the same as 2.275e-04). Therefore, we reject the null hypothesis and conclude that there are differences (on average) in the ways that males and females perceive climate change risk.
5.5 Summary
In this chapter we gained an understanding of inferential statistics, how to use them to place confidence intervals around an estimate, and an overview of how to use them to test hypotheses. In the next chapter, we turn, more formally, to testing hypotheses using crosstabs and by comparing means of different groups. We then continue to explore hypothesis testing and model building using regression analysis.
8. It is important to keep in mind that, for purposes of theory building, the population of interest may not be finite. For example, if you theorize about general properties of human behavior, many of the members of the human population are not yet (or are no longer) alive. Hence it is not possible to include all of the population of interest in your research. We therefore rely on samples.↩
9. Of course, we also need to estimate changes – both gradual and abrupt – in how people behave over time, which is the province of time-series analysis.↩
10. Wei Wang, David Rothschild, Sharad Goel, and Andrew Gelman (2014), “Forecasting Elections with Non-Representative Polls,” preprint submitted to the International Journal of Forecasting, March 31, 2014.↩
11. In a difference of means test across two groups, we “use up” one observation when we separate the observations into two groups. Hence the denominator reflects the loss of that used-up observation: n-1.↩
6.1 Cross-Tabulation
To determine if there is an association between two variables measured at the nominal or ordinal levels, we use cross-tabulation and a set of supporting statistics. A cross-tabulation (or just crosstab) is a table that looks at the distribution of two variables simultaneously. Table 6.1 provides a sample layout of a 2 X 2 table.
As Table 6.1 illustrates, a crosstab is set up so that the independent variable is on the top, forming columns, and the dependent variable is on the side, forming rows. Toward the upper left-hand corner of the table are the low, or negative, variable categories. Generally, a table will be displayed in a percentage format. The marginals for a table are the column totals and the row totals and are the same as a frequency distribution would be for that variable. Each cross-classification reports how many observations have that shared characteristic. The cross-classification groups are referred to as cells, so Table 6.1 is a four-celled table.
A table like Table 6.1 provides a basis to begin to answer the question of whether our independent and dependent variables are related. Remember that our null hypothesis says there is no relationship between our IV and our DV. Looking at Table 6.1, we can say that of those low on the IV, 60% of them will also be low on the DV, and that those high on the IV will be low on the DV 40% of the time. Our null hypothesis says there should be no difference, but in this case, there is a 20% difference so it appears that our null hypothesis is incorrect. What we learned in our inferential statistics chapter, though, tells us that it is still possible that the null hypothesis is true. The question is how likely is it that we could have a 20% difference in our sample even if the null hypothesis is true?12
We use the chi-square statistic to test our null hypothesis when using crosstabs. To find chi-square ($\chi^2$), we begin by assuming the null hypothesis to be true and find the expected frequencies for each cell in our table. We do so using a posterior methodology based on the marginals for our dependent variable. We see that 53% of our total sample is low on the dependent variable. If our null hypothesis is correct, then where one is located on the independent variable should not matter: 53% of those who are low on the IV should be low on the DV and 53% of those who are high on the IV should be low on the DV. Tables 6.2 & 6.3 illustrate this pattern. To find the expected frequency for each cell, we simply multiply the expected cell percentage times the number of people in each category of the IV: the expected frequency for the low-low cell is .53 × 200 = 106; for the low-high cell, it is .47 × 200 = 94; for the high-low cell it is .53 × 100 = 53; and for the high-high cell, the expected frequency is .47 × 100 = 47. (See Tables 6.2 & 6.3.)
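The same expected frequencies can be generated in R. The observed counts below are implied by the percentages discussed above (200 cases low on the IV and 100 high, with 60% and 40% of them low on the DV, respectively); they are reconstructed for illustration rather than taken from an actual dataset.

``````observed <- matrix(c(120, 80, 40, 60), nrow = 2,
                   dimnames = list(DV = c("Low", "High"), IV = c("Low", "High")))
# expected count for each cell: row total times column total, divided by the grand total
expected <- outer(rowSums(observed), colSums(observed))/sum(observed)
expected``````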
The formula for the chi-square takes the expected frequency for each of the cells and subtracts the observed frequency from it, squares those differences, divides by the expected frequency, and sums those values:
\[\chi^2 = \sum \frac{(O - E)^2}{E} \tag{6.1}\]
where:
$\chi^2$ = The Test Statistic
$\sum$ = The Summation Operator
O = Observed Frequencies
E = Expected Frequencies
Table 6.4 provides those calculations. It shows a final chi-square of 10.73. With that chi-square, we can go to a chi-square table to determine whether to accept or reject the null hypothesis. Before going to that chi-square table, we need to figure out two things. First, we need to determine the level of significance we want, presumably .05. Second, we need to determine our degrees of freedom. We will provide more on that concept as we go on, but for now, know that it is the number of rows minus one times the number of columns minus one. In this case, we have (2 − 1)(2 − 1) = 1 degree of freedom.
Table 6.9 (at the end of this chapter) is a chi-square table that shows the critical values for various levels of significance and degrees of freedom. The critical value for one degree of freedom with a .05 level of significance is 3.84. Since our chi-square is larger than that we can reject our null hypothesis - there is less than a .05 probability that we could have found the results in our sample if there is no relationship in the population. In fact, if we follow the row for one degree of freedom across, we see we can reject our null hypothesis even at the .005 level of significance and, almost but not quite, at the .001 level of significance.
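Continuing the sketch above, the chi-square itself and the critical value can be checked in R, using the `observed` and `expected` objects defined earlier.

``````chi.sq <- sum((observed - expected)^2/expected)
chi.sq                   # roughly 10.7, matching the hand calculation
qchisq(.95, df = 1)      # critical value at the .05 level with 1 df: 3.84``````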
Having rejected the null hypothesis, we believe there is a relationship between the two variables, but we still want to know how strong that relationship is. Measures of association are used to determine the strength of a relationship. One type of measure of association relies on a co-variation model as elaborated upon in Sections 6.2 and 6.3. Co-variation models are directional models and require ordinal or interval level measures; otherwise, the variables have no direction. Here we consider alternative models.
If one or both of our variables is nominal, we cannot specify directional change. Still, we might see a recognizable pattern of change in one variable as the other variable varies. Women might be more concerned about climate change than are men, for example. For that type of case, we may use a reduction in error or a proportional reduction in error (PRE) model. We consider how well we predict using a naive model (assuming no relationship) and compare it to how much better we predict when we use our independent variable to make that prediction. These measures of association only range from 0 to 1.0, since the sign otherwise indicates direction. Generally, we use this type of measure when at least one of our variables is nominal, but we will also use a PRE model measure, $r^2$, in regression analysis. Lambda is a commonly used PRE-based measure of association for nominal level data, but it can underestimate the relationship in some circumstances.
Another set of measures of association suitable for nominal level data is based on chi-square. Cramer’s V is a simple chi square-based indicator, but like chi-square itself, its value is affected by the sample size and the dimensions of the table. Phi corrects for sample size but is appropriate only for a 2 X 2 table. The contingency coefficient, C, also corrects for sample size and can be applied to larger tables, but requires a square table, i.e., the same number of rows and columns.
If we have ordinal level data, we can use a co-variation model, but the specific model developed below in Section 6.3 looks at how observations are distributed around their means. Since we cannot find a mean for ordinal level data, we need an alternative. Gamma is commonly used with ordinal level data and provides a summary comparing how many observations fall around the diagonal in the table that supports a positive relationship (e.g. observations in the low-low cell and the high-high cells) as opposed to observations following the negative diagonal (e.g. the low-high cell and the high-low cells). Gamma ranges from −1.0 to +1.0.
Crosstabulations and their associated statistics can be calculated using R. In this example we continue to use the Global Climate Change dataset (ds). The dataset includes measures of survey respondents: gender (female = 0, male = 1); perceived risk posed by climate change, or glbcc_risk (0 = Not Risk; 10 = extreme risk), and political ideology (1 = strong liberal, 7 = strong conservative). Here we look at whether there is a relationship between gender and the glbcc_risk variable. The glbcc_risk variable has eleven categories; to make the table more manageable, we recode it to five categories.
``````# Factor the gender variable
ds\$f.gend <- factor(ds\$gender, levels=c(0,1), labels = c("Women", "Men"))
# recode glbcc_risk to five categories
library(car)
ds\$r.glbcc_risk <- car::recode(ds\$glbcc_risk, "0:1=1; 2:3=2; 4:6=3; 7:8=4;
9:10=5; NA=NA")``````
Using the `table` function, we produce a frequency table reflecting the relationship between gender and the recoded glbcc_risk variable.
``````# create the table
table(ds\$r.glbcc_risk, ds\$f.gend)``````
``````##
## Women Men
## 1 134 134
## 2 175 155
## 3 480 281
## 4 330 208
## 5 393 245``````
``````# create the table as an R Object
glbcc.table <- table(ds\$r.glbcc_risk, ds\$f.gend)``````
This table is difficult to interpret because the numbers of men and women are different. To make the table easier to interpret, we convert it to percentages using the `prop.table` function. Looking at the new table, we can see that there are more men at the lower end of the perceived risk scale and more women at the upper end.
``````# Multiply by 100
prop.table(glbcc.table, 2) * 100``````
``````##
## Women Men
## 1 8.862434 13.098729
## 2 11.574074 15.151515
## 3 31.746032 27.468231
## 4 21.825397 20.332356
## 5 25.992063 23.949169``````
The percentaged table suggests that there is a relationship between the two variables, but also illustrates the challenge of relying on percentage differences to determine the significance of that relationship. So, to test our null hypothesis, we calculate our chi-square using the `chisq.test` function.
``````# Chi Square Test
chisq.test(glbcc.table)``````
``````##
## Pearson's Chi-squared test
##
## data: glbcc.table
## X-squared = 21.729, df = 4, p-value = 0.0002269``````
R reports our chi-square to equal 21.73. It also tells us that we have 4 degrees of freedom and a p-value of .0002269. Since that p-value is substantially less than .05, we can reject our null hypothesis with great confidence. There is, evidently, a relationship between gender and perceived risk of climate change.
Finally, we want to know how strong the relationship is. We use the `assocstats` function to get several measures of association. Since the table is not a 2 X 2 table nor square, neither phi nor the contingency coefficient is appropriate, but we can report Cramer’s V. Cramer’s V is .093, indicating a relatively weak relationship between gender and the perceived global climate change risk variable.
``````library(vcd)
assocstats(glbcc.table)``````
``````## X^2 df P(> X^2)
## Likelihood Ratio 21.494 4 0.00025270
## Pearson 21.729 4 0.00022695
##
## Phi-Coefficient : NA
## Contingency Coeff.: 0.092
## Cramer's V : 0.093``````
6.1.1 Crosstabulation and Control
In Chapter 2 we talked about the importance of experimental control if we want to make causal statements. In experimental designs, we rely on physical control and randomization to provide that control to give us confidence in the causal nature of any relationship we find. With quasi-experimental designs, however, we do not have that type of control and have to wonder whether any relationship that we find might be spurious. At that point, we promised that the situation is not hopeless with quasi-experimental designs and that there are statistical substitutes for the control naturally afforded to us in experimental designs. In this section, we will describe that process when using crosstabulation. We will first look at some hypothetical data to get some clean examples of what might happen when you control for an alternative explanatory variable before looking at a real example using R.
The process used to control for an alternative explanatory variable, commonly referred to as a third variable, is straightforward. To control for a third variable, we first construct our original table between our independent and dependent variables. Then we sort our data into subsets based on the categories of our third variable and reconstruct new tables using our IV and DV for each subset of our data.
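In R, this amounts to adding the third variable to the `table` call, which produces one partial table for each of its categories; the variable names below are hypothetical stand-ins for a dependent, an independent, and a control variable.

``````partial.tables <- table(ds$dv, ds$iv, ds$control)   # one crosstab of DV by IV per control category
prop.table(partial.tables, margin = c(2, 3)) * 100  # column percentages within each partial table``````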
Suppose we hypothesize that people who are contacted about voting are more likely to vote. Table 6.5 illustrates what we might find. (Remember all of these data are fabricated to illustrate our points.) According to the first table, people who are contacted are 50% more likely to vote than those who are not. But, a skeptic might say campaigns target previous voters for contact and that previous voters are more likely to vote in subsequent elections. That skeptic is making the argument that the relationship between contact and voting is spurious and that the true cause of voting is voting history. To test that theory, we control for voting history by sorting respondents into two sets – those who voted in the last election and those who did not. We then reconstruct the original table for the two sets of respondents. The new tables indicate that previous voters are 50% more likely to vote when contacted and that those who did not vote previously are 50% more likely to vote when contacted. The skeptic is wrong; the pattern found in our original data persists even after controlling for the alternative explanation. We still remain reluctant to use causal language because another skeptic might have another alternative explanation (which would require us to go through the same process with the new third variable), but we do have more confidence in the possible causal nature of the relationship between contact and voting.
The next example tests the hypothesis that those who are optimistic about the future are more likely to vote for the incumbent than those who are pessimistic. Table 6.6 shows that optimistic people are 25% more likely to vote for the incumbent than are pessimistic people. But our skeptic friend might argue that feelings about the world are not nearly as important as real-life conditions. People with jobs vote for the incumbent more often than those without a job and, of course, those with a job are more likely to feel good about the world. To test that alternative, we control for whether the respondent has a job and reconstruct new tables. When we do, we find that among those with a job, 70% vote for the incumbent - regardless of their level of optimism about the world. And, among those without a job, 40% vote for the incumbent, regardless of their optimism. In other words, after controlling for job status, there is no relationship between the level of optimism and voting behavior. The original relationship was spurious.
A third outcome of controlling for a third variable might be some form of interaction or specification effect. The third variable affects how the first two are related, but it does not completely undermine the original relationship. For example, we might find the original relationship to be stronger for one category of the control variable than another - or even to be present in one case and not the other. The pattern might also suggest that both variables have an influence on the dependent variable, resembling some form of joint causation. In fact, it is possible for your relationship to appear to be null in your original table, but when you control you might find a positive relationship for one category of your control variable and negative for another.
Using an example from the Climate and Weather survey, we might hypothesize that liberals are more likely to think that greenhouse gases are causing global warming. We start by recoding ideology from 7 levels to 3, then construct a frequency table and convert it to a percentage table of the relationship.
``````# recode variables ideology to 3 categories
library(car)
ds$r.ideol<-car::recode(ds$ideol, "1:2=1; 3:5=2; 6:7=3; NA=NA")
# factor the variables to add labels.
ds$f.ideol<- factor(ds$r.ideol, levels=c(1, 2, 3), labels=c("Liberal",
"Moderate", "Conservative"))
ds$f.glbcc <- factor(ds$glbcc, levels=c(0, 1),
labels = c("GLBCC No", "GLBCC Yes"))
# 3 Two variable table glbcc~ideology
v2.glbcc.table <- table(ds$f.glbcc, ds$f.ideol)
v2.glbcc.table``````
``````##
## Liberal Moderate Conservative
## GLBCC No 26 322 734
## GLBCC Yes 375 762 305``````
``````# Percentages by Column
prop.table(v2.glbcc.table, 2) * 100``````
``````##
## Liberal Moderate Conservative
## GLBCC No 6.483791 29.704797 70.644851
## GLBCC Yes 93.516209 70.295203 29.355149``````
It appears that our hypothesis is supported, as there is more than a 40% difference between liberals and conservatives with moderates in between. However, let’s consider the chi-square before we reject our null hypothesis:
``````# Chi-squared
chisq.test(v2.glbcc.table, correct = FALSE)``````
``````##
## Pearson's Chi-squared test
##
## data: v2.glbcc.table
## X-squared = 620.76, df = 2, p-value < 0.00000000000000022``````
The chi-square is very large and our p-value is very small. We can, therefore, reject our null hypothesis with great confidence. Next, we consider the strength of the association using Cramer’s V (since neither Phi nor the contingency coefficient is appropriate for a 3 X 2 table):
``````# Cramer's V
library(vcd)
assocstats(v2.glbcc.table)``````
``````## X^2 df P(> X^2)
## Likelihood Ratio 678.24 2 0
## Pearson 620.76 2 0
##
## Phi-Coefficient : NA
## Contingency Coeff.: 0.444
## Cramer's V : 0.496``````
The Cramer’s V value of .496 indicates that we have a strong relationship between political ideology and beliefs about climate change.
We might, though, want to look at gender as a control variable, since we know gender is related both to perceptions of climate change and to ideology. First, we need to generate a new table with the control variable gender added. We start by factoring the gender variable.
``````# factor the variables to add labels.
ds$f.gend <- factor(ds$gend, levels=c(0, 1), labels=c("Women", "Men"))``````
We then create a new table. The R output is shown, in which the line `## , , = Women` indicates the results for women and `## , , = Men` displays the results for men.
``````# 3 Two variable table glbcc~ideology+gend
v3.glbcc.table <- table(ds$f.glbcc, ds$f.ideol, ds$f.gend)
v3.glbcc.table``````
``````## , , = Women
##
##
## Liberal Moderate Conservative
## GLBCC No 18 206 375
## GLBCC Yes 239 470 196
##
## , , = Men
##
##
## Liberal Moderate Conservative
## GLBCC No 8 116 358
## GLBCC Yes 136 292 109``````
``````# Percentages by Column for Women
prop.table(v3.glbcc.table[,,1], 2) * 100 ``````
``````##
## Liberal Moderate Conservative
## GLBCC No 7.003891 30.473373 65.674256
## GLBCC Yes 92.996109 69.526627 34.325744``````
``chisq.test(v3.glbcc.table[,,1])``
``````##
## Pearson's Chi-squared test
##
## data: v3.glbcc.table[, , 1]
## X-squared = 299.39, df = 2, p-value < 0.00000000000000022``````
``assocstats(v3.glbcc.table[,,1])``
``````## X^2 df P(> X^2)
## Likelihood Ratio 326.13 2 0
## Pearson 299.39 2 0
##
## Phi-Coefficient : NA
## Contingency Coeff.: 0.407
## Cramer's V : 0.446``````
``````# Percentages by Column for Men
prop.table(v3.glbcc.table[,,2], 2) * 100 ``````
``````##
## Liberal Moderate Conservative
## GLBCC No 5.555556 28.431373 76.659529
## GLBCC Yes 94.444444 71.568627 23.340471``````
``chisq.test(v3.glbcc.table[,,2])``
``````##
## Pearson's Chi-squared test
##
## data: v3.glbcc.table[, , 2]
## X-squared = 320.43, df = 2, p-value < 0.00000000000000022``````
``assocstats(v3.glbcc.table[,,2])``
``````## X^2 df P(> X^2)
## Likelihood Ratio 353.24 2 0
## Pearson 320.43 2 0
##
## Phi-Coefficient : NA
## Contingency Coeff.: 0.489
## Cramer's V : 0.561``````
For both men and women, we still see more than a 40% difference, the p-values for both tables' chi-squares are less than 2.2e-16, and both Cramer’s V values are greater than .30. It is clear that even when controlling for gender, there is a robust relationship between ideology and perceived risk of climate change. However, these tables also suggest that women are slightly more inclined to believe greenhouse gases play a role in climate change than are men. We may have an instance of joint causation, where both ideology and gender affect ("cause" is still too strong a word) views concerning the impact of greenhouse gases on climate change.
Crosstabs, chi-square, and measures of association are used with nominal and ordinal data to provide an overview of a relationship, its statistical significance, and the strength of a relationship. In the next section, we turn to ways to consider the same set of questions with interval level data before turning to the more advanced technique of regression analysis in Part 2 of this book. | textbooks/stats/Applied_Statistics/Book%3A_Quantitative_Research_Methods_for_Political_Science_Public_Policy_and_Public_Administration_(Jenkins-Smith_et_al.)/06%3A_Association_of_Variables/6.01%3A_Cross-Tabulation.txt |
Covariance is a simple measure of the way two variables move together, or “co-vary”. The covariance of two variables, $X$ and $Y$, can be expressed in population notation as:
$$cov(X,Y) = E[(X-\mu_{x})(Y-\mu_{y})] \tag{6.2}$$
Therefore, the covariance between $X$ and $Y$ is the expected value of the product of $X$'s variation around its expected value and $Y$'s variation around its expected value. The sample covariance is expressed as:
$$cov(X,Y) = \frac{\sum (X-\bar{X})(Y-\bar{Y})}{n-1} \tag{6.3}$$
Covariance can be positive, negative, or zero. If the covariance is positive, both variables move in the same direction: if $X$ increases, $Y$ increases, and if $X$ decreases, $Y$ decreases. Negative covariance means that the variables move in opposite directions; if $X$ increases, $Y$ decreases. Finally, zero covariance indicates that there is no covariance between $X$ and $Y$.
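As a quick illustration, the sample covariance formula can be computed "by hand" in `R` and checked against the built-in `cov` function; the two vectors below are made-up values used only for this sketch.
``````x <- c(4, 2, 4, 3, 5, 7, 4, 9)   # hypothetical X values
y <- c(2, 1, 5, 3, 6, 4, 2, 7)   # hypothetical Y values
# sample covariance by hand: cross-products of deviations divided by n - 1
sum((x - mean(x)) * (y - mean(y))) / (length(x) - 1)
cov(x, y)                        # check against the built-in function``````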
6.03: Correlation
Correlation is closely related to covariance. In essence, correlation standardizes covariance so it can be compared across variables. Correlation is represented by a correlation coefficient, $\rho$, and is calculated by dividing the covariance of the two variables by the product of their standard deviations. For populations it is expressed as:
$$\rho = \frac{cov(X,Y)}{\sigma_{x}\sigma_{y}} \tag{6.4}$$
For samples it is expressed as:
$$r = \frac{\sum (X-\bar{X})(Y-\bar{Y})/(n-1)}{s_{x}s_{y}} \tag{6.5}$$
Like covariance, correlations can be positive, negative, and zero. The possible values of the correlation coefficient $r$ range from -1, a perfect negative relationship, to 1, a perfect positive relationship. If $r=0$, that indicates no correlation. Correlations can be calculated in `R`, using the `cor` function.
``````ds %>% dplyr::select(education, ideol, age, glbcc_risk) %>% na.omit() %>%
cor()``````
``````## education ideol age glbcc_risk
## education 1.00000000 -0.13246843 -0.06149090 0.09115774
## ideol -0.13246843 1.00000000 0.08991177 -0.59009431
## age -0.06149090 0.08991177 1.00000000 -0.07514098
## glbcc_risk 0.09115774 -0.59009431 -0.07514098 1.00000000``````
Note that each variable is perfectly (and positively) correlated with itself - naturally! Age is slightly and surprisingly negatively correlated with education (-0.06) and unsurprisingly positively correlated with political ideology (+0.09). What this means is that, in this dataset and on average, older people are slightly less educated and more conservative than younger people. Now notice the correlation coefficient for the relationship between ideology and perceived risk of climate change (glbcc_risk). This correlation (-0.59) indicates that on average, the more conservative the individual is, the less risky climate change is perceived to be.
6.04: Scatterplots
As noted earlier, it is often useful to try and see patterns between two variables. We examined the density plots of males and females with regard to climate change risk, then we tested these differences for statistical significance. However, we often want to know more than the mean difference between groups; we may also want to know if differences exist for variables with several possible values. For example, here we examine the relationship between ideology and perceived risk of climate change. One of the more efficient ways to do this is to produce a scatterplot. Because ideology and glbcc risk are discrete variables (i.e., whole numbers), we use `geom_jitter` to "jitter" the data; if your values are continuous, use `geom_point`.13 The result is shown in the figure below.
``````ds %>%
ggplot(aes(ideol, glbcc_risk)) +
geom_jitter(shape = 1)``````
We can see that the density of values indicates that strong liberals—$1$'s on the ideology scale—tend to view climate change as quite risky, whereas strong conservatives—$7$'s on the ideology scale—tend to view climate change as less risky. Like our previous example, we want to know more about the nature of this relationship. Therefore, we can plot a regression line and a “loess” line. These lines are the linear and nonlinear estimates of the relationship between political ideology and the perceived risk of climate change. We’ll have more to say about the linear estimates when we turn to regression analysis in the next chapter.
``````ds %>%
drop_na(glbcc_risk, ideol) %>%
ggplot(aes(ideol, glbcc_risk)) +
geom_jitter(shape = 1) +
geom_smooth(method = "loess", color = "green") +
geom_smooth(method = "lm", color = "red")``````
Note that the regression lines both slope downward, with average perceived risk ranging from over 8 for the strong liberals (ideology=1) to less than 5 for strong conservatives (ideology=7). This illustrates how scatterplots can provide information about the nature of the relationship between two variables. We will take the next step – to bivariate regression analysis – in the next chapter.
1. To reiterate the general decision rule: if the probability of observing a 20% difference in our sample when the null hypothesis is true is less than .05, we will reject our null hypothesis.↩
2. That means a “jit” (a very small value) is applied to each observed point on the plot, so you can see observations that are “stacked” on the same coordinate. Ha! Just kidding; they’re not called jits. We don’t know what they’re called. But they ought to be called jits.↩ | textbooks/stats/Applied_Statistics/Book%3A_Quantitative_Research_Methods_for_Political_Science_Public_Policy_and_Public_Administration_(Jenkins-Smith_et_al.)/06%3A_Association_of_Variables/6.02%3A_Covariance.txt |
Models, as discussed earlier, are an essential component in theory building. They simplify theoretical concepts, provide a precise way to evaluate relationships between variables, and serve as a vehicle for hypothesis testing. As discussed in Chapter 1, one of the central features of a theoretical model is the presumption of causality, and causality is based on three factors: time ordering (observational or theoretical), co-variation, and non-spuriousness. Of these three, co-variation is the one analyzed using OLS. The often-repeated adage, "correlation is not causation," is key. Causation is driven by theory, but co-variation is a critical part of empirical hypothesis testing.
When describing relationships, it is important to distinguish between those that are deterministic versus stochastic. Deterministic relationships are “fully determined” such that, knowing the values of the independent variable, you can perfectly explain (or predict) the value of the dependent variable. Philosophers of Old (like Kant) imagined the universe to be like a massive and complex clock which, once wound up and set ticking, would permit perfect prediction of the future if you had all the information on the starting conditions. There is no “error” in the prediction. Stochastic relationships, on the other hand, include an irreducible random component, such that the independent variables permit only a partial prediction of the dependent variable. But that stochastic (or random) component of the variation in the dependent variable has a probability distribution that can be analyzed statistically.
7.1.1 Deterministic Linear Model
The deterministic linear model serves as the basis for evaluating theoretical models. It is expressed as:
$$Y_i = \alpha + \beta X_i \tag{7.1}$$
A deterministic model is systematic and contains no error; therefore $Y$ is perfectly predicted by $X$. This is illustrated in Figure \(1\). $\alpha$ and $\beta$ are the model parameters and are constant terms. $\beta$ is the slope, or the change in $Y$ over the change in $X$. $\alpha$ is the intercept, or the value of $Y$ when $X$ is zero.
Given that in social science we rarely work with deterministic models, nearly all models contain a stochastic, or random, component.
7.1.2 Stochastic Linear Model
The stochastic, or statistical, linear model contains a systematic component, $\alpha + \beta X$, and a stochastic component called the error term. The error term is the difference between the expected value of $Y_i$ and the observed value of $Y_i$; $Y_i - \mu$. This model is expressed as:
$$Y_i = \alpha + \beta X_i + \epsilon_i \tag{7.2}$$
where $\epsilon_i$ is the error term. In the deterministic model, each value of $Y$ fits along the regression line; however, in a stochastic model the expected value of $Y$ is conditioned by the values of $X$. This is illustrated in Figure \(2\).
Figure \(2\) shows the conditional population distributions of $Y$ for several values of $X$, $p(Y|X)$. The conditional means of $Y$ given $X$ are denoted $\mu$.
$$\mu_i \equiv E(Y_i) \equiv E(Y|X_i) = \alpha + \beta X_i \tag{7.3}$$
where
• $\alpha = E(Y) \equiv \mu$ when $X = 0$
• Each 1 unit increase in $X$ increases $E(Y)$ by $\beta$
However, in the stochastic linear model variation in $Y$ is caused by more than $X$; it is also caused by the error term $\epsilon$. The error term is expressed as:
$$\epsilon_i = Y_i - E(Y_i) = Y_i - (\alpha + \beta X_i) = Y_i - \alpha - \beta X_i$$
Therefore:
$$Y_i = E(Y_i) + \epsilon_i = \alpha + \beta X_i + \epsilon_i$$
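One way to see the systematic and stochastic components at work is to simulate a stochastic linear model in `R`. The values chosen for $\alpha$, $\beta$, and the error standard deviation below are arbitrary and used only for illustration.
``````set.seed(1234)                      # for reproducibility
x <- runif(100, 1, 7)               # hypothetical independent variable
a <- 2; b <- 0.5                    # arbitrary "true" alpha and beta
e <- rnorm(100, mean = 0, sd = 1)   # normal i.i.d. error term
y <- a + b*x + e                    # systematic component plus error
plot(x, y)                          # observations scatter around the line
lines(sort(x), a + b*sort(x))       # the systematic component, E(Y|X)``````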
We make several important assumptions about the error term that are discussed in the next section.
7.1.3 Assumptions about the Error Term
There are three key assumptions about the error term: a) errors have identical distributions, b) errors are independent, and c) errors are normally distributed.14
Error Assumptions
• Errors have identical distributions
$E(\epsilon_i^2) = \sigma^2_{\epsilon}$
• Errors are independent of $X$ and other $\epsilon_i$
$E(\epsilon_i) \equiv E(\epsilon|x_i) = 0$
and
$E(\epsilon_i) \neq E(\epsilon_j)$ for $i \neq j$
• Errors are normally distributed
$\epsilon_i \sim N(0, \sigma^2_{\epsilon})$
Taken together these assumptions mean that the error term has a normal, independent, and identical distribution (normal i.i.d.). However, we don’t know if, in any particular case, these assumptions are met. Therefore we must estimate a linear model. | textbooks/stats/Applied_Statistics/Book%3A_Quantitative_Research_Methods_for_Political_Science_Public_Policy_and_Public_Administration_(Jenkins-Smith_et_al.)/07%3A_The_Logic_of_Ordinary_Least_Squares_Estimation/7.01%3A_heoretical_Models.txt |
With stochastic models we don’t know if the error assumptions are met, nor do we know the values of $\alpha$ and $\beta$; therefore we must estimate them, as denoted by a hat (e.g., $\hat{\alpha}$ is the estimate for $\alpha$). The stochastic model as shown in Equation (7.4) is estimated as:
$$Y_i = \hat{\alpha} + \hat{\beta}X_i + \epsilon_i \tag{7.4}$$
where $\epsilon_i$ is the residual term, or the estimated error term. Since no line can perfectly pass through all the data points, we introduce a residual, $\epsilon$, into the regression equation. Note that the predicted value of $Y$ is denoted $\hat{Y}$ ($y$-hat).
$$Y_i = \hat{\alpha} + \hat{\beta}X_i + \epsilon_i = \hat{Y}_i + \epsilon_i$$
$$\epsilon_i = Y_i - \hat{Y}_i = Y_i - \hat{\alpha} - \hat{\beta}X_i$$
7.2.1 Residuals
Residuals measure prediction error: how far observation $Y_i$ is from predicted $\hat{Y}_i$. This is shown in Figure \(3\).
The residual term contains the accumulation (sum) of errors that can result from measurement issues, modeling problems, and irreducible randomness. Ideally, the residual term contains lots of small and independent influences that result in an overall random quality of the distribution of the errors. When that distribution is not random – that is, when the distribution of error has some systematic quality – the estimates of $\hat{\alpha}$ and $\hat{\beta}$ may be biased. Thus, when we evaluate our models we will focus on the shape of the distribution of our errors.
What’s in $\epsilon$?
Measurement Error
• Imperfect operationalizations
• Imperfect measure application
Modeling Error
• Modeling error/mis-specification
• Missing model explanation
• Incorrect assumptions about associations
• Incorrect assumptions about distributions
Stochastic “noise”
• Unpredictable variability in the dependent variable
The goal of regression analysis is to minimize the error associated with the model estimates. As noted, the residual term is the estimated error, or overall "miss" (e.g., $Y_i - \hat{Y}_i$). Specifically, the goal is to minimize the sum of the squared errors, $\sum \epsilon^2$. Therefore, we need to find the values of $\hat{\alpha}$ and $\hat{\beta}$ that minimize $\sum \epsilon^2$.
Note that, for a fixed set of data, each possible choice of values for $\hat{\alpha}$ and $\hat{\beta}$ corresponds to a specific residual sum of squares, $\sum \epsilon^2$. This can be expressed by the following functional form:
$$S(\hat{\alpha}, \hat{\beta}) = \sum_{i=1}^{n} \epsilon_i^2 = \sum (Y_i - \hat{Y}_i)^2 = \sum (Y_i - \hat{\alpha} - \hat{\beta}X_i)^2 \tag{7.5}$$
Minimizing this function requires specifying estimators for $\hat{\alpha}$ and $\hat{\beta}$ such that $S(\hat{\alpha}, \hat{\beta}) = \sum \epsilon^2$ is at the lowest possible value. Finding this minimum value requires the use of calculus, which will be discussed in the next chapter. Before that, we walk through a quick example of simple regression.
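To build intuition for the idea that every choice of $\hat{\alpha}$ and $\hat{\beta}$ implies a particular $\sum \epsilon^2$, a simple grid search in `R` can evaluate the sum of squared errors for many candidate intercepts and slopes and report the pair with the smallest value. The data and grid ranges below are made up for illustration.
``````x <- c(4, 2, 4, 3, 5, 7, 4, 9)      # hypothetical data
y <- c(2, 1, 5, 3, 6, 4, 2, 7)
a.grid <- seq(-2, 3, by = 0.01)      # candidate intercepts
b.grid <- seq(-1, 2, by = 0.01)      # candidate slopes
sse <- matrix(NA, length(a.grid), length(b.grid))
for (i in seq_along(a.grid)) {
  for (j in seq_along(b.grid)) {
    # residual sum of squares for this candidate (intercept, slope) pair
    sse[i, j] <- sum((y - a.grid[i] - b.grid[j]*x)^2)
  }
}
best <- which(sse == min(sse), arr.ind = TRUE)
c(a = a.grid[best[1]], b = b.grid[best[2]])   # should land near the OLS estimates``````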
7.03: An Example of Simp
The following example uses a measure of peoples’ political ideology to predict their perceptions of the risks posed by global climate change. OLS regression can be done using the `lm` function in `R`. For this example, we are again using the class data set.
``````ols1 <- lm(ds$glbcc_risk~ds$ideol)
summary(ols1)``````
``````##
## Call:
## lm(formula = ds$glbcc_risk ~ ds$ideol)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.726 -1.633 0.274 1.459 6.506
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.81866 0.14189 76.25 <0.0000000000000002 ***
## ds$ideol -1.04635 0.02856 -36.63 <0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.479 on 2511 degrees of freedom
## (34 observations deleted due to missingness)
## Multiple R-squared: 0.3483, Adjusted R-squared: 0.348
## F-statistic: 1342 on 1 and 2511 DF, p-value: < 0.00000000000000022``````
The output in R provides quite a lot of information about the relationship between the measures of ideology and perceived risks of climate change. It provides an overview of the distribution of the residuals; the estimated coefficients for $\hat{\alpha}$ and $\hat{\beta}$; the results of hypothesis tests; and overall measures of model "fit" – all of which we will discuss in detail in later chapters. For now, note that the estimated $B$ for ideology is negative, which indicates that as the value for ideology increases—in our data this means more conservative—the perceived risk of climate change decreases. Specifically, for each one-unit increase in the ideology scale, perceived climate change risk decreases by 1.0463463.
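As a sketch of how the estimated equation can be used, we can plug particular ideology values into the fitted coefficients to obtain predicted risk scores; this assumes the `ols1` object estimated above is still in your workspace, and the rounded predictions in the comment are approximate.
``````# Predicted climate change risk for ideology scores of 1, 4, and 7
a.hat <- coef(ols1)[1]           # estimated intercept
b.hat <- coef(ols1)[2]           # estimated slope for ideology
a.hat + b.hat * c(1, 4, 7)       # roughly 9.8, 6.6, and 3.5``````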
We can also examine the distribution of the residuals, using a histogram and a density curve. This is shown in Figure \(4\) and Figure \(5\). Note that we will discuss residual diagnostics in detail in future chapters.
``````data.frame(ols1$residuals) %>%
ggplot(aes(ols1$residuals)) +
geom_histogram(bins = 16)``````
``````data.frame(ols1$residuals) %>%
ggplot(aes(ols1$residuals)) +
geom_density(adjust = 1.5) ``````
For purposes of this Chapter, be sure that you can run the basic bivariate OLS regression model in `R`. If you can – congratulations! If not, try again. And again. And again…
1. Actually, we assume only that the means of the errors drawn from repeated samples of observations will be normally distributed – but we will deal with that wrinkle later on.↩ | textbooks/stats/Applied_Statistics/Book%3A_Quantitative_Research_Methods_for_Political_Science_Public_Policy_and_Public_Administration_(Jenkins-Smith_et_al.)/07%3A_The_Logic_of_Ordinary_Least_Squares_Estimation/7.02%3A_Estimating_Linear_.txt |
In calculus, the derivative is a measure of the slope of any function of $x$, or $f(x)$, at each given value of $x$. For the function $f(x)$, the derivative is denoted $f'(x)$, pronounced "f prime x." Because the formula for $\sum \epsilon^2$ is known and can be treated as a function, the derivative of that function permits the calculation of the change in the sum of the squared error over each possible value of $\hat{\alpha}$ and $\hat{\beta}$. For that reason, we need to find the derivative of $\sum \epsilon^2$ with respect to changes in $\hat{\alpha}$ and $\hat{\beta}$. That, in turn, will permit us to "derive" the values of $\hat{\alpha}$ and $\hat{\beta}$ that result in the lowest possible $\sum \epsilon^2$.
Look – we understand that this all sounds complicated. But it’s not all that complicated. In this chapter, we will walk through all the steps so you’ll see that it's really rather simple and, well, elegant. You will see that differential calculus (the kind of calculus that is concerned with rates of change) is built on a set of clearly defined rules for finding the derivative for any function $f(x)$. It’s like solving a puzzle. The next section outlines these rules, so we can start solving puzzles.
8.1.1 Rules of Derivation
Derivative Rules
1. Power Rule
2. Constant Rule
3. A Constant Times a Function
4. Differentiating a Sum
5. Product Rule
6. Quotient Rule
7. Chain Rule
The following sections provide examples of the application of each rule.
Rule 1: The Power Rule
Example: $f(x)=x^6$, so $f'(x)=6 \ast x^{6-1}=6x^5$
A second example can be plotted in `R`. The function is $f(x)=x^2$ and therefore, using the power rule, the derivative is $f'(x)=2x$.
``````x <- c(-5:5)
x``````
``## [1] -5 -4 -3 -2 -1 0 1 2 3 4 5``
``````y <- x^2
y``````
``## [1] 25 16 9 4 1 0 1 4 9 16 25``
``plot(x,y, type="o", pch=19)``
Rule 2: The Constant Rule
Example: $f(x)=346$, so $f'(x)=0$
Rule 3: A Constant Times a Function
Example: $f(x)=5x^2$, so $f'(x)=5 \ast 2x^{2-1}=10x$
Rule 4: Differentiating a Sum
Example:
$f(x)=4x^2+32x$
$f'(x)=(4x^2)'+(32x)' = 4 \ast 2x^{2-1}+32 = 8x+32$
Rule 5: The Product Rule
Example: $f(x)=x^3(x-5)$
$f'(x)=(x^3)'(x-5)+(x^3)(x-5)' = 3x^2(x-5)+(x^3) \ast 1 = 3x^3-15x^2+x^3 = 4x^3-15x^2$
In a second example, the product rule is applied to the function $y=f(x)=x^2-6x+5$. The derivative of this function is $f'(x)=2x-6$. This function can be plotted in `R`.
``````x <- c(-1:7)
x``````
``## [1] -1 0 1 2 3 4 5 6 7``
``````y <- x^2-6*x+5
y``````
``## [1] 12 5 0 -3 -4 -3 0 5 12``
``````plot(x,y, type="o", pch=19)
abline(h=0,v=0)``````
We can also use the derivative and `R` to calculate the slope for each value of $X$.
``````b <- 2*x-6
b``````
``## [1] -8 -6 -4 -2 0 2 4 6 8``
The values for $X$, which are shown in Figure \(2\), range from -1 to +7 and return derivatives (slopes at a point) ranging from -8 to +8.
Rule 6: the Quotient Rule
Example: $f(x)=\dfrac{x}{x^2+5}$
$f'(x)=\dfrac{(x^2+5)(x)'-(x^2+5)'(x)}{(x^2+5)^2}=\dfrac{(x^2+5)-(2x)(x)}{(x^2+5)^2}=\dfrac{-x^2+5}{(x^2+5)^2}$
Rule 7: The Chain Rule
Example: $f(x)=(7x^2-2x+13)^5$
$f'(x)=5(7x^2-2x+13)^4 \ast (7x^2-2x+13)' = 5(7x^2-2x+13)^4 \ast (14x-2)$
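If you want to check a derivative you have worked out by hand, `R` can approximate it numerically with a central (finite) difference. Here is a minimal sketch that checks the chain rule example at an arbitrarily chosen point, $x = 1$; the function names and step size `h` are our own choices.
``````f <- function(x) (7*x^2 - 2*x + 13)^5                         # the original function
f.prime <- function(x) 5*(7*x^2 - 2*x + 13)^4 * (14*x - 2)    # chain rule result
h <- 1e-6                                                     # a small step size
(f(1 + h) - f(1 - h)) / (2*h)                                 # numerical approximation at x = 1
f.prime(1)                                                    # analytical derivative at x = 1``````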
8.1.2 Critical Points
Our goal is to use derivatives to find the values of $\hat{\alpha}$ and $\hat{\beta}$ that minimize the sum of the squared error. To do this we need to find the minima of a function. The minimum is the smallest value that a function takes, whereas the maximum is the largest value. To find the minima and maxima, the critical points are key. A critical point is where the derivative of the function is equal to $0$, or $f'(x)=0$. Note that this is equivalent to saying the slope is equal to $0$.
Example: Finding the Critical Points
To find the critical point for the function
$y=f(x)=(x^2-4x+5)$:
• First find the derivative: $f'(x)=2x-4$
• Set the derivative equal to $0$: $f'(x)=2x-4=0$
• Solve for $x$: $x=2$
• Substitute $2$ for $x$ into the function and solve for $y$: $y=f(2)=(2^2-4(2)+5)=1$
• Thus, the critical point (there’s only one in this case) of the function is $(2,1)$
Once a critical point is identified, the next step is to determine whether that point is a minimum or a maximum. The most straightforward way to do this is to identify the x,y coordinates and plot them. This can be done in `R`, as we will show using the function $y=f(x)=(x^2-4x+5)$. The plot is shown in Figure \(3\).
``````x <- c(-5:5)
x``````
``## [1] -5 -4 -3 -2 -1 0 1 2 3 4 5``
``````y <- x^2-4*x+5
y``````
``## [1] 50 37 26 17 10 5 2 1 2 5 10``
``plot(x,y, type="o", pch=19)``
As can be seen, the critical point $(2,1)$ is a minimum.
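We can also let `R` confirm the location of the minimum numerically with the base `optimize` function, searching over an arbitrarily chosen interval that contains the critical point.
``````f <- function(x) x^2 - 4*x + 5
optimize(f, interval = c(-5, 5))   # minimum near x = 2, objective (y) near 1``````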
8.1.3 Partial Derivation
When an equation includes two variables, one can take a partial derivative with respect to only one variable while the other variable is treated as a constant. This is particularly useful in our case because the function $\sum \epsilon^2$ has two variables – $\hat{\alpha}$ and $\hat{\beta}$.
Let’s take an example. For the function $y=f(x,z)=x^3+4xz-5z^2$, we first take the derivative of $x$ holding $z$ constant.
$$\frac{\partial y}{\partial x} = \frac{\partial f(x,z)}{\partial x} = 3x^2+4z$$
Next we take the derivative of $z$ holding $x$ constant.
∂y∂z=∂f(x,z)∂z=4x−10z | textbooks/stats/Applied_Statistics/Book%3A_Quantitative_Research_Methods_for_Political_Science_Public_Policy_and_Public_Administration_(Jenkins-Smith_et_al.)/08%3A_Linear_Estimation_and_Minimizing_Error/8.01%3A_Minimizing_Error_using_Der.txt |
Now that we have developed some of the rules for differential calculus, we can see how OLS finds values of $\hat{\alpha}$ and $\hat{\beta}$ that minimize the sum of the squared error. In formal terms, let’s define the set, $S(\hat{\alpha}, \hat{\beta})$, as a pair of regression estimators that jointly determine the residual sum of squares given that $Y_i = \hat{Y}_i + \epsilon_i = \hat{\alpha} + \hat{\beta}X_i + \epsilon_i$. This function can be expressed:
$$S(\hat{\alpha}, \hat{\beta}) = \sum_{i=1}^{n} \epsilon_i^2 = \sum (Y_i - \hat{Y}_i)^2 = \sum (Y_i - \hat{\alpha} - \hat{\beta}X_i)^2$$
First, we will derive $\hat{\alpha}$.
8.2.1 OLS Derivation of $\hat{\alpha}$
Take the partial derivative of $S(\hat{\alpha}, \hat{\beta})$ with respect to (w.r.t.) $\hat{\alpha}$ in order to determine the formulation of $\hat{\alpha}$ that minimizes $S(\hat{\alpha}, \hat{\beta})$. Using the chain rule,
$$\frac{\partial S(\hat{\alpha},\hat{\beta})}{\partial \hat{\alpha}} = \sum 2(Y_i-\hat{\alpha}-\hat{\beta}X_i)^{2-1} \ast (Y_i-\hat{\alpha}-\hat{\beta}X_i)'$$
$$= \sum 2(Y_i-\hat{\alpha}-\hat{\beta}X_i)^{1} \ast (-1)$$
$$= -2\sum (Y_i-\hat{\alpha}-\hat{\beta}X_i)$$
$$= -2\sum Y_i+2n\hat{\alpha}+2\hat{\beta}\sum X_i$$
Next, set the derivative equal to $0$.
$$\frac{\partial S(\hat{\alpha},\hat{\beta})}{\partial \hat{\alpha}} = -2\sum Y_i+2n\hat{\alpha}+2\hat{\beta}\sum X_i = 0$$
Then, shift non-$\hat{\alpha}$ terms to the other side of the equal sign:
$$2n\hat{\alpha} = 2\sum Y_i-2\hat{\beta}\sum X_i$$
Finally, divide through by $2n$:
$$\frac{2n\hat{\alpha}}{2n} = \frac{2\sum Y_i-2\hat{\beta}\sum X_i}{2n}$$
$$\hat{\alpha} = \frac{\sum Y_i}{n}-\hat{\beta}\frac{\sum X_i}{n} = \bar{Y}-\hat{\beta}\bar{X}$$
$$\therefore \hat{\alpha} = \bar{Y}-\hat{\beta}\bar{X} \tag{8.1}$$
8.2.2 OLS Derivation of $\hat{\beta}$
Having found $\hat{\alpha}$, the next step is to derive $\hat{\beta}$. This time we will take the partial derivative w.r.t. $\hat{\beta}$. As you will see, the steps are a little more involved for $\hat{\beta}$ than they were for $\hat{\alpha}$.
$$\frac{\partial S(\hat{\alpha},\hat{\beta})}{\partial \hat{\beta}} = \sum 2(Y_i-\hat{\alpha}-\hat{\beta}X_i)^{2-1} \ast (Y_i-\hat{\alpha}-\hat{\beta}X_i)'$$
$$= \sum 2(Y_i-\hat{\alpha}-\hat{\beta}X_i)^{1} \ast (-X_i)$$
$$= 2\sum (-X_iY_i+\hat{\alpha}X_i+\hat{\beta}X_i^2)$$
$$= -2\sum X_iY_i+2\hat{\alpha}\sum X_i+2\hat{\beta}\sum X_i^2$$
Since we know that $\hat{\alpha} = \bar{Y}-\hat{\beta}\bar{X}$, we can substitute $\bar{Y}-\hat{\beta}\bar{X}$ for $\hat{\alpha}$.
$$\frac{\partial S(\hat{\alpha},\hat{\beta})}{\partial \hat{\beta}} = -2\sum X_iY_i+2(\bar{Y}-\hat{\beta}\bar{X})\sum X_i+2\hat{\beta}\sum X_i^2$$
$$= -2\sum X_iY_i+2\bar{Y}\sum X_i-2\hat{\beta}\bar{X}\sum X_i+2\hat{\beta}\sum X_i^2$$
Next, we can substitute $\frac{\sum Y_i}{n}$ for $\bar{Y}$ and $\frac{\sum X_i}{n}$ for $\bar{X}$ and set the expression equal to $0$.
$$\frac{\partial S(\hat{\alpha},\hat{\beta})}{\partial \hat{\beta}} = -2\sum X_iY_i+\frac{2\sum Y_i\sum X_i}{n}-\frac{2\hat{\beta}\sum X_i\sum X_i}{n}+2\hat{\beta}\sum X_i^2 = 0$$
Then, multiply through by $\frac{n}{2}$ and put all the $\hat{\beta}$ terms on the same side.
$$n\hat{\beta}\sum X_i^2-\hat{\beta}\left(\sum X_i\right)^2 = n\sum X_iY_i-\sum X_i\sum Y_i$$
$$\hat{\beta}\left(n\sum X_i^2-\left(\sum X_i\right)^2\right) = n\sum X_iY_i-\sum X_i\sum Y_i$$
$$\therefore \hat{\beta} = \frac{n\sum X_iY_i-\sum X_i\sum Y_i}{n\sum X_i^2-\left(\sum X_i\right)^2}$$
The $\hat{\beta}$ term can be rearranged such that:
$$\hat{\beta} = \frac{\Sigma(X_i-\bar{X})(Y_i-\bar{Y})}{\Sigma(X_i-\bar{X})^2} \tag{8.2}$$
Now remember what we are doing here: we used the partial derivatives for $\sum \epsilon^2$ with respect to $\hat{\alpha}$ and $\hat{\beta}$ to find the values for $\hat{\alpha}$ and $\hat{\beta}$ that will give us the smallest value for $\sum \epsilon^2$. Put differently, the formulas for $\hat{\beta}$ and $\hat{\alpha}$ allow the calculation of the error-minimizing slope (change in $Y$ given a one-unit change in $X$) and intercept (value for $Y$ when $X$ is zero) for any data set representing a bivariate, linear relationship. No other formulas will give us a line, using the same data, that will result in as small a squared-error. Therefore, OLS is referred to as the Best Linear Unbiased Estimator (BLUE).
8.2.3 Interpreting $\hat{\beta}$ and $\hat{\alpha}$
In a regression equation, $Y = \hat{\alpha}+\hat{\beta}X$, where $\hat{\alpha}$ is shown in Equation (8.1) and $\hat{\beta}$ is shown in Equation (8.2). Equation (8.2) shows that for each 1-unit increase in $X$ you get $\hat{\beta}$ units of change in $Y$. Equation (8.1) shows that when $X$ is $0$, $Y$ is equal to $\hat{\alpha}$. Note that in a regression model with no independent variables, $\hat{\alpha}$ is simply the expected value (i.e., mean) of $Y$.
The intuition behind these formulas can be shown by using `R` to calculate "by hand" the slope ($\hat{\beta}$) and intercept ($\hat{\alpha}$) coefficients. A theoretical simple regression model is structured as follows:
$$Y_i = \alpha+\beta X_i+\epsilon_i$$
• $\alpha$ and $\beta$ are constant terms
• $\alpha$ is the intercept
• $\beta$ is the slope
• $X_i$ is a predictor of $Y_i$
• $\epsilon$ is the error term
The model to be estimated is expressed as $Y = \hat{\alpha}+\hat{\beta}X+\epsilon$.
As noted, the goal is to calculate the intercept coefficient:
$$\hat{\alpha} = \bar{Y}-\hat{\beta}\bar{X}$$
and the slope coefficient:
$$\hat{\beta} = \frac{\Sigma(X_i-\bar{X})(Y_i-\bar{Y})}{\Sigma(X_i-\bar{X})^2}$$
Using `R`, this can be accomplished in a few steps. First, create a vector of values for `x` and `y` (note that we chose these values arbitrarily for the purpose of this example).
``````x <- c(4,2,4,3,5,7,4,9)
x``````
``## [1] 4 2 4 3 5 7 4 9``
``````y <- c(2,1,5,3,6,4,2,7)
y``````
``## [1] 2 1 5 3 6 4 2 7``
Then, create objects for $\bar{X}$ and $\bar{Y}$:
``````xbar <- mean(x)
xbar``````
``## [1] 4.75``
``````ybar <- mean(y)
ybar``````
``## [1] 3.75``
Next, create objects for $(X-\bar{X})$ and $(Y-\bar{Y})$, the deviations of $X$ and $Y$ around their means:
``````x.m.xbar <- x-xbar
x.m.xbar``````
``## [1] -0.75 -2.75 -0.75 -1.75 0.25 2.25 -0.75 4.25``
``````y.m.ybar <- y-ybar
y.m.ybar``````
``## [1] -1.75 -2.75 1.25 -0.75 2.25 0.25 -1.75 3.25``
Then, calculate $\hat{\beta}$:
$$\hat{\beta} = \frac{\Sigma(X_i-\bar{X})(Y_i-\bar{Y})}{\Sigma(X_i-\bar{X})^2}$$
``````B <- sum((x.m.xbar)*(y.m.ybar))/sum((x.m.xbar)^2)
B``````
``## [1] 0.7183099``
Finally, calculate $\hat{\alpha}$:
$$\hat{\alpha} = \bar{Y}-\hat{\beta}\bar{X}$$
``````A <- ybar-B*xbar
A``````
``## [1] 0.3380282``
To see the relationship, we can produce a scatterplot of `x` and `y` and add our regression line, as shown in Figure \(4\). So, for each unit increase in $x$, $y$ increases by 0.7183099, and when $x$ is $0$, $y$ is equal to 0.3380282.
``````plot(x,y)
lines(x,A+B*x)``````
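As a sanity check, we can compare our hand-calculated `A` and `B` to the coefficients that `R`'s `lm` function returns for the same `x` and `y` vectors; the two sets of estimates should match.
``````coef(lm(y ~ x))   # intercept and slope should match A and B calculated above``````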
8.03: Summary
Whoa! Think of what you’ve accomplished here: You learned enough calculus to find a minimum for an equation with two variables, then applied that to the equation for $\sum \epsilon^2$. You derived the error-minimizing values for $\hat{\alpha}$ and $\hat{\beta}$, then used those formulae in `R` to calculate "by hand" the OLS regression for a small dataset.
Congratulate yourself – you deserve it! | textbooks/stats/Applied_Statistics/Book%3A_Quantitative_Research_Methods_for_Political_Science_Public_Policy_and_Public_Administration_(Jenkins-Smith_et_al.)/08%3A_Linear_Estimation_and_Minimizing_Error/8.02%3A_8.2_Deriving_OLS_Estimator.txt |
Hypothesis testing is the key to theory building. This chapter is focused on empirical hypothesis testing using OLS regression, with examples drawn from the accompanying class dataset. Here we will use the responses to the political ideology question (ranging from 1=strong liberal to 7=strong conservative), as well as responses to a question concerning the survey respondents’ level of risk that global warming poses for people and the environment.15
Using the data from these questions, we posit the following hypothesis:
$H_1$: On average, as respondents become more politically conservative, they will be less likely to express increased risk associated with global warming.
The null hypothesis, $H_0$, is $\beta = 0$: it posits that a respondent’s ideology has no relationship with their views about the risks of global warming for people and the environment. Our working hypothesis, $H_1$, is $\beta < 0$. We expect $\beta$ to be less than zero because we expect a negative slope between our measures of ideology and levels of risk associated with global warming, given that a larger numeric value for ideology indicates a more conservative respondent. Note that this is a directional hypothesis, since we are positing a negative relationship. Typically, a directional hypothesis implies a one-tailed test where the critical value is 0.05 on one side of the distribution. A non-directional hypothesis, $\beta \neq 0$, does not imply a particular direction; it only implies that there is a relationship. This requires a two-tailed test where the critical value is 0.025 on both sides of the distribution.
To test this hypothesis, we run the following code in `R`.
Before we begin, for this chapter we will need to make a special data set that just contains the variables `glbcc_risk` and `ideol` with their missing values removed.
``````#Filtering a data set with only variables glbcc_risk and ideol
ds.omit <- filter(ds) %>%
dplyr::select(glbcc_risk,ideol) %>%
na.omit()
#Run the na.omit function to remove the missing values``````
``````ols1 <- lm(glbcc_risk ~ ideol, data = ds.omit)
summary(ols1)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ ideol, data = ds.omit)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.726 -1.633 0.274 1.459 6.506
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.81866 0.14189 76.25 <0.0000000000000002 ***
## ideol -1.04635 0.02856 -36.63 <0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.479 on 2511 degrees of freedom
## Multiple R-squared: 0.3483, Adjusted R-squared: 0.348
## F-statistic: 1342 on 1 and 2511 DF, p-value: < 0.00000000000000022``````
To know whether to reject the null hypothesis, we need to first understand the standard error associated with the model and our coefficients. We start, therefore, with consideration of the residual standard error of the regression model.
9.1.1 Residual Standard Error
The residual standard error (or standard error of the regression) measures the spread of our observations around the regression line. As will be discussed below, the residual standard error is used to calculate the standard errors of the regression coefficients, $A$ and $B$.
The formula for the residual standard error is as follows:
$$SE = \sqrt{\frac{\Sigma E_i^2}{n-2}} \tag{9.1}$$
To calculate this in `R`, based on the model we just ran, we create an object called `Se` and use the `sqrt` and `sum` commands.
``````Se <- sqrt(sum(ols1$residuals^2)/(length(ds.omit$glbcc_risk)-2))
Se``````
``## [1] 2.479022``
Note that this result matches the result provided by the `summary` function in `R`, as shown above.
For our model, the results indicate that: $Y_i = 10.8186624 - 1.0463463X_i + E_i$. Another sample of 2513 observations would almost certainly lead to different estimates for $A$ and $B$. If we drew many such samples, we’d get the sample distribution of the estimates. Because we typically cannot draw many samples, we need to estimate the sample distribution, based on our sample size and variance. To do that, we calculate the standard error of the slope and intercept coefficients, $SE(B)$ and $SE(A)$. These standard errors are our estimates of how much variation we would expect in the estimates of $B$ and $A$ across different samples. We use them to evaluate whether $B$ and $A$ are larger than would be expected to occur by chance if the real values of $B$ and/or $A$ are zero (the null hypotheses).
The standard error for $B$, $SE(B)$, is:
$$SE(B) = \frac{SE}{\sqrt{TSS_X}} \tag{9.2}$$
where $SE$ is the residual standard error of the regression (as shown earlier in Equation 9.1). $TSS_X$ is the total sum of squares for $X$, that is, the total sum of the squared deviations of $X$ from its mean $\bar{X}$: $\sum(X_i-\bar{X})^2$. Note that the greater the deviation of $X$ around its mean as a proportion of the standard error of the model, the smaller the $SE(B)$. The smaller $SE(B)$ is, the less variation we would expect in repeated estimates of $B$ across multiple samples.
The standard error for $A$, $SE(A)$, is defined as:
$$SE(A) = SE \ast \sqrt{\frac{1}{n}+\frac{\bar{X}^2}{TSS_X}} \tag{9.3}$$
Again, $SE$ is the residual standard error, as shown in Equation 9.1.
For $A$, the larger the data set, and the larger the deviation of $X$ around its mean, the more precise our estimate of $A$ (i.e., the smaller $SE(A)$ will be).
We can calculate the $SE$ of $A$ and $B$ in `R` in a few steps. First, we create an object `TSSx` that is the total sum of squares for the $X$ variable.
``````TSSx <- sum((ds.omit$ideol-mean(ds.omit$ideol, na.rm = TRUE))^2)
TSSx``````
``## [1] 7532.946``
Then, we create an object called `SEa`.
``````SEa <- Se*sqrt((1/length(ds.omit$glbcc_risk))+(mean(ds.omit$ideol,na.rm=T)^2/TSSx))
SEa``````
``## [1] 0.1418895``
Finally, we create `SEb`.
``````SEb <- Se/(sqrt(TSSx))
SEb``````
``## [1] 0.02856262``
Using the standard errors, we can determine how likely it is that our estimate of $\beta$ differs from $0$; that is, how many standard errors our estimate is away from $0$. To determine this we use the $t$ value. The $t$ score is derived by dividing the regression coefficient by its standard error. For our model, the $t$ value for $\beta$ is as follows:
``````t <- ols1$coef[2]/SEb
t``````
``````## ideol
## -36.63342``````
The $t$ value for our $B$ is -36.6334214, meaning that $B$ is 36.6334214 standard errors away from zero. We can then ask: What is the probability, the $p$ value, of obtaining this result if $\beta = 0$? According to the results shown earlier, $p = 2e-16$. That is remarkably close to zero. This result indicates that we can reject the null hypothesis that $\beta = 0$.
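For reference, the reported p-value can be recovered from the $t$ score and the model's residual degrees of freedom (2511 here) using the $t$ distribution; a minimal sketch:
``````# Two-tailed p-value implied by the t score, with n - 2 = 2511 degrees of freedom
2 * pt(abs(t), df = 2511, lower.tail = FALSE)   # effectively zero``````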
In addition, we can calculate the confidence interval (CI) for our estimate of $B$. This means that in 95 out of 100 repeated applications, the confidence interval will contain $\beta$.
In the following example, we calculate a $95\%$ CI. The CI is calculated as follows:
$$B \pm 1.96(SE(B)) \tag{9.4}$$
We can easily calculate this in `R`. First, we calculate the upper limit then the lower limit and then we use the `confint` function to check.
``````Bhi <- ols1$coef[2]-1.96*SEb
Bhi``````
``````## ideol
## -1.102329``````
``````Blow <- ols1$coef[2]+1.96*SEb
Blow``````
``````## ideol
## -0.9903636``````
``confint(ols1)``
``````## 2.5 % 97.5 %
## (Intercept) 10.540430 11.0968947
## ideol -1.102355 -0.9903377``````
As shown, the upper limit of our estimated $B$ is -0.9903636, which is far below $0$, providing further support for rejecting $H_0$.
So, using our example data, we tested the working hypothesis that political ideology is negatively related to the perceived risk of global warming to people and the environment. Using simple OLS regression, we find support for this working hypothesis and can reject the null. | textbooks/stats/Applied_Statistics/Book%3A_Quantitative_Research_Methods_for_Political_Science_Public_Policy_and_Public_Administration_(Jenkins-Smith_et_al.)/09%3A_Bi-Variate_Hypothesis_Testing_and_Model_Fit/9.01%3A_Hypothesis_Tests_for_.txt |
Once we have constructed a regression model, it is natural to ask: how good is the model at explaining variation in our dependent variable? We can answer this question with a number of statistics that indicate "model fit." Basically, these statistics provide measures of the degree to which the estimated relationships account for the variance in the dependent variable, $Y$.
There are several ways to examine how well the model "explains" the variance in $Y$. First, we can examine the covariance of $X$ and $Y$, which is a general measure of the sample variance for $X$ and $Y$. Then we can use a measure of sample correlation, which is the standardized measure of covariation. Both of these measures provide indicators of the degree to which variation in $X$ can account for variation in $Y$. Finally, we can examine $R^2$, also known as the coefficient of determination, which is the standard measure of the goodness of fit for OLS models.
9.2.1 Sample Covariance and Correlations
The sample covariance for a simple regression model is defined as:
$$S_{XY} = \frac{\Sigma(X_i-\bar{X})(Y_i-\bar{Y})}{n-1} \tag{9.5}$$
Intuitively, this measure tells you, on average, whether a higher value of $X$ (relative to its mean) is associated with a higher or lower value of $Y$. Is the association negative or positive? Covariance can be obtained quite simply in `R` by using the `cov` function.
``````Sxy <- cov(ds.omit$ideol, ds.omit$glbcc_risk)
Sxy``````
``## [1] -3.137767``
The problem with covariance is that its magnitude will be entirely dependent on the scales used to measure $X$ and $Y$. That is, it is non-standard, and its meaning will vary depending on what it is that is being measured. In order to compare sample covariation across different samples and different measures, we can use the sample correlation.
The sample correlation, $r$, is found by dividing $S_{XY}$ by the product of the standard deviations of $X$, $S_X$, and $Y$, $S_Y$.
$$r = \frac{S_{XY}}{S_X S_Y} = \frac{\Sigma(X_i-\bar{X})(Y_i-\bar{Y})}{\sqrt{\Sigma(X_i-\bar{X})^2 \Sigma(Y_i-\bar{Y})^2}} \tag{9.6}$$
To calculate this in `R`, we first make objects for $S_X$ and $S_Y$ using the `sd` function.
``````Sx <- sd(ds.omit$ideol)
Sx``````
``## [1] 1.7317``
``````Sy <- sd(ds.omit$glbcc_risk)
Sy``````
``## [1] 3.070227``
Then to find $r$:
``````r <- Sxy/(Sx*Sy)
r``````
``## [1] -0.5901706``
To check this we can use the `cor` function in `R`.
``````rbyR <- cor(ds.omit$ideol, ds.omit$glbcc_risk)
rbyR``````
``## [1] -0.5901706``
So what does the correlation coefficient mean? The values range from +1 to -1. A value of +1 means there is a perfect positive relationship between $X$ and $Y$: each increment of increase in $X$ is matched by a constant increase in $Y$ – with all observations lining up neatly on a positive slope. A correlation coefficient of -1, or a perfect negative relationship, would indicate that each increment of increase in $X$ corresponds to a constant decrease in $Y$ – or a negatively sloped line. A correlation coefficient of zero would describe no relationship between $X$ and $Y$.
9.2.2 Coefficient of Determination: $R^2$
The most often used measure of goodness of fit for OLS models is $R^2$. $R^2$ is derived from three components: the total sum of squares, the explained sum of squares, and the residual sum of squares. $R^2$ is the ratio of ESS (explained sum of squares) to TSS (total sum of squares).
Components of $R^2$
• Total sum of squares (TSS): The total sum of the squared deviations of $Y$ around its mean
• Residual sum of squares (RSS): The variation in $Y$ not accounted for by the model
• Explained sum of squares (ESS): The variation in $Y$ accounted for in the model. It is the difference between the TSS and the RSS.
• $R^2$: The proportion of the total variance of $Y$ explained by the model, or the ratio of $ESS$ to $TSS$
$$R^2 = \frac{ESS}{TSS} = \frac{TSS-RSS}{TSS} = 1-\frac{RSS}{TSS}$$
The components of $R^2$ are illustrated in Figure \(1\). As shown, for each observation $Y_i$, variation around the mean can be decomposed into that which is "explained" by the regression and that which is not. In Figure \(1\), the deviation between the mean of $Y$ and the predicted value of $Y$, $\hat{Y}$, is the proportion of the variation of $Y_i$ that can be explained (or predicted) by the regression. That is shown as a blue line. The deviation of the observed value of $Y_i$ from the predicted value $\hat{Y}$ (aka the residual, as discussed in the previous chapter) is the unexplained deviation, shown in red. Together, the explained and unexplained variation make up the total variation of $Y_i$ around the mean $\bar{Y}$.
To calculate $R^2$ "by hand" in `R`, we must first determine the total sum of squares, which is the sum of the squared differences of the observed values of $Y$ from the mean of $Y$, $\Sigma(Y_i-\bar{Y})^2$. Using `R`, we can create an object called `TSS`.
``````TSS <- sum((ds.omit$glbcc_risk-mean(ds.omit$glbcc_risk))^2)
TSS``````
``## [1] 23678.85``
Remember that $R^2$ is the ratio of the explained sum of squares to the total sum of squares (ESS/TSS). Therefore to calculate $R^2$ we need to create an object called `RSS`, the squared sum of our model residuals.
``````RSS <- sum(ols1$residuals^2)
RSS``````
``## [1] 15431.48``
Next, we create an object called `ESS`, which is equal to TSS-RSS.
``````ESS <- TSS-RSS
ESS``````
``## [1] 8247.376``
Finally, we calculate $R^2$.
``````R2 <- ESS/TSS
R2``````
``## [1] 0.3483013``
Note–happily–that the $R^2$ calculated "by hand" in `R` matches the results provided by the `summary` command.
The values for $R^2$ can range from zero to 1. In the case of simple regression, a value of 1 indicates that the modeled coefficient ($B$) "accounts for" all of the variation in $Y$. Put differently, all of the squared deviations in $Y_i$ around the mean ($\bar{Y}$) are in ESS, with none in the residual (RSS).16 A value of zero would indicate that all of the deviations in $Y_i$ around the mean are in RSS – all residual or "error." Our example shows that the variation in political ideology (our $X$) accounts for roughly 34.8 percent of the variation in our measure of the perceived risk of climate change ($Y$).
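Since this is a bivariate model, we can also verify the footnoted point that $R^2$ equals the square of the correlation coefficient we calculated earlier:
``````rbyR^2   # squares the correlation coefficient; matches the R-squared of about 0.348``````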
9.2.3 Visualizing Bivariate Regression
The `ggplot2` package provides a mechanism for viewing the effect of the independent variable, ideology, on the dependent variable, perceived risk of climate change. Adding `geom_smooth` will calculate and visualize a regression line that represents the relationship between your IV and DV while minimizing the residual sum of squares. Graphically (Figure \(2\)), we see that as an individual becomes more conservative (ideology = 7), their perception of the risk of global warming decreases.
``````ggplot(ds.omit, aes(ideol, glbcc_risk)) +
geom_smooth(method = lm)``````
Cleaning up the R Environment
If you recall, at the beginning of the chapter, we created several temporary data sets. We should take the time to clear up our workspace for the next chapter. The `rm` function in `R` will remove them for us.
``rm(ds.omit) ``
9.03: Summary
This chapter has focused on two key aspects of simple regression models: hypothesis testing and measures of the goodness of model fit. With respect to the former, we focused on the residual standard error and its role in determining the probability that our model estimates, $B$ and $A$, are just random departures from a population in which $\beta$ and $\alpha$ are zero. We showed, using `R`, how to calculate the residual standard errors for $A$ and $B$ and, using them, how to calculate the t-statistics and associated probabilities for hypothesis testing. For model fit, we focused on model covariation and correlation and finished up with a discussion of the coefficient of determination – $R^2$. So you are now in a position to use simple regression and to wage unremitting geek-war on those whose models are endowed with lesser $R^2$s.
1. The question wording was as follows: "On a scale from zero to ten, where zero means no risk and ten means extreme risk, how much risk do you think global warming poses for people and the environment?"↩
2. Note that with a bivariate model, R2R2 is equal to the square of the correlation coefficient.↩ | textbooks/stats/Applied_Statistics/Book%3A_Quantitative_Research_Methods_for_Political_Science_Public_Policy_and_Public_Administration_(Jenkins-Smith_et_al.)/09%3A_Bi-Variate_Hypothesis_Testing_and_Model_Fit/9.02%3A_Measuring_Goodness_of.txt |
Recall from Chapter 4 that we identified three key assumptions about the error term that are necessary for OLS to provide unbiased, efficient linear estimators: a) errors have identical distributions, b) errors are independent, and c) errors are normally distributed.17
Error Assumptions
• Errors have identical distributions
$E(\epsilon_i^2) = \sigma^2_{\epsilon}$
• Errors are independent of $X$ and other $\epsilon_i$
$E(\epsilon_i) \equiv E(\epsilon|x_i) = 0$
and
$E(\epsilon_i) \neq E(\epsilon_j)$ for $i \neq j$
• Errors are normally distributed
$\epsilon_i \sim N(0, \sigma^2_{\epsilon})$
Taken together these assumptions mean that the error term has a normal, independent, and identical distribution (normal i.i.d.). Figure \(1\) shows what these assumptions would imply for the distribution of residuals around the predicted values of $Y$ given $X$.
How can we determine whether our residuals approximate the expected pattern? The most straightforward approach is to visually examine the distribution of the residuals over the range of the predicted values for $Y$. If all is well, there should be no obvious pattern to the residuals – they should appear as a "sneeze plot" (i.e., it looks like you sneezed on the plot. How gross!) as shown in Figure \(2\).
Generally, there is no pattern in such a sneeze plot of residuals. One of the difficulties we have, as human beings, is that we tend to look at randomness and perceive patterns. Our brains are wired to see patterns, even where they are none. Moreover, with random distributions, there will in some samples be clumps and gaps that do appear to depict some kind of order when in fact there is none. There is the danger, then, of over-interpreting the pattern of residuals to see problems that aren’t there. The key is to know what kinds of patterns to look for, so when you do observe one you will know it.
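A quick way to produce such a residual plot in `R` is to graph a model's residuals against its fitted (predicted) values. As a sketch—assuming the `ols1` object from the previous chapters is still in your workspace—this looks like:
``````plot(ols1$fitted.values, ols1$residuals,
     xlab = "Predicted values of Y", ylab = "Residuals")
abline(h = 0, lty = 2)   # residuals should scatter evenly around this line``````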
10.02: When Things Go
Residual analysis is the process of looking for signature patterns in the residuals that are indicative of a failure in the underlying assumptions of OLS regression. Different kinds of problems lead to different patterns in the residuals.
10.2.1 “Outlier” Data
Sometimes our data include unusual cases that behave differently from most of our observations. This may happen for a number of reasons. The most typical is that the data have been mis-coded, with some subgroup of the data having numerical values that lead to large residuals. Cases like this can also arise when a subgroup of the cases differs from the others in how $X$ influences $Y$, and that difference has not been captured in the model. This is a problem referred to as the omission of important independent variables.18 Figure \(3\) shows a stylized example, with a cluster of residuals falling at a considerable distance from the rest.
This is a case of influential outliers. The effect of such outliers can be significant, as the OLS estimates of $A$ and $B$ seek to minimize overall squared error. In the case of Figure \(3\), the effect would be to shift the estimate of $B$ to accommodate the unusual observations, as illustrated in Figure \(4\). One possible response would be to omit the unusual observations, as shown in Figure \(4\). Another would be to consider, theoretically and empirically, why these observations are unusual. Are they, perhaps, miscoded? Or are they codes representing missing values (e.g., "-99")?
If they are not mis-codes, perhaps these outlier observations manifest a different kind of relationship between $X$ and $Y$, which might, in turn, require a revised theory and model. We will address some modeling options to address this possibility when we explore multiple regression, in Part III of this book.
In sum, outlier analysis looks at residuals for patterns in which some observations deviate widely from others. If that deviation is influential, changing estimates of $A$ and $B$ as shown in Figure \(4\), then you must examine the observations to determine whether they are miscoded. If not, you can evaluate whether the cases are theoretically distinct, such that the influence of $X$ on $Y$ is likely to be different than for other cases. If you conclude that this is so, you will need to respecify your model to account for these differences. We will discuss some options for doing that later in this chapter, and again in our discussion of multiple regression.
10.2.2 Non-Constant Variance
A second thing to look for in visual diagnostics of residuals is non-constant variance, or heteroscedasticity. In this case, the variation in the residuals over the range of predicted values for $Y$ should be roughly even. A problem occurs when that variation changes substantially as the predicted value of $Y$ changes, as is illustrated in Figure \(5\).
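To see what a fan-shaped residual pattern looks like, we can simulate data in which the error variance grows with $X$ and then plot the residuals from a linear fit; all values below are made up for illustration.
``````set.seed(99)
x <- 1:100
e <- rnorm(100, mean = 0, sd = x/10)   # error spread grows with x
y <- 2 + 0.5*x + e
het.mod <- lm(y ~ x)
plot(fitted(het.mod), resid(het.mod),
     xlab = "Predicted values", ylab = "Residuals")
abline(h = 0, lty = 2)   # note the widening "fan" as predicted values increase``````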
(Figure \(5\) is built from simulated data in two groups: a "first" set of observations with small, tightly clustered residuals and a "second" set with much larger and more variable residuals.)
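The fan-shaped pattern in Figure \(5\) can be reproduced with simulated data. The following is a minimal sketch (not the code used to build the figure) in which the standard deviation of the error grows with \(X\), so the residuals from an OLS fit spread out as the fitted values increase; the object names are ours.

``````# Illustrative simulation only: error spread grows with x, so residuals fan out.
set.seed(42)
x <- 1:100
y <- 2 + 0.5 * x + rnorm(100, mean = 0, sd = 0.1 * x)
het.mod <- lm(y ~ x)
plot(fitted(het.mod), resid(het.mod),
     xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 2)``````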
As Figure \(5\) shows, the width of the spread of the residuals grows as the predicted value of \(Y\) increases, making a fan-shaped pattern. Equally concerning would be a "reverse fan," or a pattern with a bulge in the middle and very "tight" distributions of residuals at either extreme. These would all be cases in which the assumption of constant variance in the residuals (or "homoscedasticity") fails, and they are referred to as instances of heteroscedasticity.
What are the implications of heteroscedasticity? Our hypothesis tests for the estimated coefficients (\(A\) and \(B\)) are based on the assumption that the standard errors of the estimates (see the prior chapter) are normally distributed. If inspection of your residuals provides evidence to question that assumption, then the interpretation of the t-values and p-values may be problematic. Intuitively, in such a case the precision of our estimates of \(A\) and \(B\) is not constant, but rather will depend on the predicted value of \(Y\). So you might be estimating \(B\) relatively precisely in some ranges of \(Y\), and less precisely in others. That means you cannot depend on the estimated t and p-values to test your hypotheses.
10.2.3 Non-Linearity in the Parameters
One of the primary assumptions of simple OLS regression is that the estimated slope parameter (\(B\)) will be constant, and therefore the model will be linear. Put differently, the effect of any change in \(X\) on \(Y\) should be constant over the range of \(Y\). Thus, if our assumption is correct, the pattern of the residuals should be roughly symmetric, above and below zero, over the range of predicted values.
If the real relationship between \(X\) and \(Y\) is not linear, however, the predicted (linear) values for \(Y\) will systematically depart from the (curved) relationship that is represented in the data. Figure \(6\) shows the kind of pattern we would expect in our residuals if the observed relationship between \(X\) and \(Y\) is a strong curve when we attempt to model it as if it were linear.
What are the implications of non-linearity? First, because the slope is non-constant, the estimate of \(B\) will be biased. In the illustration shown in Figure \(6\), \(B\) would underestimate the value of \(Y\) in both the low and high ranges of the predicted value of \(Y\), and overestimate it in the mid-range. In addition, the standard errors of the residuals will be large, due to systematic over- and under-estimation of \(Y\), making the model very inefficient (or imprecise).
Thus far we have used rather simple illustrations of residual diagnostics and the kinds of patterns to look for. But you should be warned that, in real applications, the patterns are rarely so clear. So we will walk through an example diagnostic session, using the `tbur` data set.
Our in-class lab example focuses on the relationship between political ideology ("ideol" in our data set) as a predictor of the perceived risks posed by climate change ("glbcc_risk"). The model is specified in `R` as follows:
``OLS_env <- lm(ds$glbcc_risk ~ ds$ideol)``
Using the summary command in `R`, we can review the results.
``summary(OLS_env)``
``````##
## Call:
## lm(formula = ds\$glbcc_risk ~ ds\$ideol)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.726 -1.633 0.274 1.459 6.506
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.81866 0.14189 76.25 <0.0000000000000002 ***
## ds\$ideol -1.04635 0.02856 -36.63 <0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.479 on 2511 degrees of freedom
## (34 observations deleted due to missingness)
## Multiple R-squared: 0.3483, Adjusted R-squared: 0.348
## F-statistic: 1342 on 1 and 2511 DF, p-value: < 0.00000000000000022``````
Note that, as was discussed in the prior chapter, the estimated value for \(B\) is negative and highly statistically significant. This indicates that the more conservative the survey respondent, the lower the perceived risks attributed to climate change. Now we will use these model results and the associated residuals to evaluate the key assumptions of OLS, beginning with linearity.
10.3.1 Testing for Non-Linearity
One way to test for non-linearity is to fit the model to a polynomial functional form. This sounds impressive but is quite easy to do and understand (really!). All you need to do is include the square of the independent variable as a second predictor in the model. A significant regression coefficient on the squared variable indicates problems with linearity. To do this, we first produce the squared variable.
``````#first we square the ideology variable and create a new variable to use in our model.
ds$ideology2 <- ds$ideol^2
summary(ds$ideology2)``````
``````## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
## 1.00 16.00 25.00 24.65 36.00 49.00 23``````
Next, we run the regression with the original independent variable and our new squared variable. Finally, we check the regression output.
``````OLS_env2 <- lm(glbcc_risk ~ ideol + ideology2, data = ds)
summary(OLS_env2)``````
A significant coefficient on the squared ideology variable informs us that we probably have a non-linearity problem. The significant and negative coefficient for the square of ideology means that the curve steepens (perceived risks fall faster) as the scale shifts further up on the conservative side of the scale. We can supplement the polynomial regression test by producing a residual plot with a formal Tukey test. The residual plot (the `residualPlots` function in the `car` package) plots the Pearson residuals against the model's fitted values. Ideally, the plots will produce flat red lines; curved lines represent non-linearity. The output for the Tukey test is visible in the `R` console. The null hypothesis for the Tukey test is a linear relationship, so a significant p-value is indicative of non-linearity. The Tukey test is reported as part of the `residualPlots` function in the `car` package.
``````#A significant p-value indicates non-linearity using the Tukey test
library(car)
residualPlots(OLS_env)``````
``````## Test stat Pr(>|Test stat|)
## ds\$ideol -5.0181 0.0000005584 ***
## Tukey test -5.0181 0.0000005219 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
The curved red lines in Figure \(7\) in the residual plots and the significant Tukey test indicate a non-linear relationship in the model. This is a serious violation of a core assumption of OLS regression, which means that the estimate of \(B\) is likely to be biased. Our findings suggest that the relationship between ideology and perceived risks of climate change is approximately linear from "strong liberals" to those who are "leaning Republican". But perceived risks seem to drop off more rapidly as the scale rises toward "strong Republican."
10.3.2 Testing for Normality in Model Residuals
Testing for normality in the model residuals will involve using many of the techniques demonstrated in previous chapters. The first step is to graphically display the residuals in order to see how closely the model residuals resemble a normal distribution. A formal test for normality is also included in the demonstration.
Start by creating a histogram of the model residuals.
``````OLS_env$residuals %>% # Pipe the residuals to a data frame
data.frame() %>% # Pipe the data frame to ggplot
ggplot(aes(OLS_env$residuals)) +
geom_histogram(bins = 16)``````
The histogram in Figure \(8\) indicates that the residuals are approximately normally distributed, but there appears to be a negative skew. Next, we can create a smoothed density of the model residuals compared to a theoretical normal distribution.
``````OLS_env$residuals %>% # Pipe the residuals to a data frame
data.frame() %>% # Pipe the data frame to ggplot
ggplot(aes(OLS_env$residuals)) +
geom_density(adjust = 2) +
stat_function(fun = dnorm, args = list(mean = mean(OLS_env$residuals),
sd = sd(OLS_env$residuals)),
color = "red")``````
Figure \(9\) indicates that the model residuals deviate slightly from a normal distribution because of a slightly negative skew and a mean higher than we would expect in a normal distribution. Our final ocular examination of the residuals will be a quantile-quantile (Q-Q) plot (using the `stat_qq` function from the `ggplot2` package).
``````OLS_env$residuals %>% # Pipe the residuals to a data frame
data.frame() %>% # Pipe the data frame to ggplot
ggplot(aes(sample = OLS_env$residuals)) +
stat_qq() +
stat_qq_line()``````
According to Figure \(10\), it appears as if the residuals are normally distributed except for the tails of the distribution. Taken together the graphical representations of the residuals suggest modest non-normality. As a final step, we can conduct a formal Shapiro-Wilk test for normality. The null hypothesis for a Shapiro-Wilk test is a normal distribution, so we do not want to see a significant p-value.
``````#a significant p-value potentially indicates that the data are not normally distributed.
shapiro.test(OLS_env$residuals)``````
``````##
## Shapiro-Wilk normality test
##
## data: OLS_env\$residuals
## W = 0.98901, p-value = 0.000000000000551``````
The Shapiro-Wilk test confirms what we observed in the graphical displays of the model residuals – the residuals are not normally distributed. Recall that our dependent variable (glbcc_risk) appears to have a non-normal distribution. This could be the root of the non-normality found in the model residuals. Given this information, steps must be taken to ensure that the model residuals meet the required OLS assumptions. One possibility would be to transform the dependent variable (glbcc_risk) in order to induce a normal distribution. Another might be to add a polynomial term to the independent variable (ideology) as was done above. In either case, you would need to recheck the residuals in order to see if the model revisions adequately dealt with the problem. We suggest that you do just that!
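One hedged illustration of that advice is sketched below. The square-root transformation is only one of several candidates (the text does not prescribe a particular one), and the residuals of the revised model would still need to be re-examined with the graphical tools shown above.

``````# Illustrative sketch only: transform the DV, refit, and re-test the residuals.
OLS_env_sqrt <- lm(sqrt(glbcc_risk) ~ ideol, data = ds)
shapiro.test(OLS_env_sqrt$residuals)``````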
10.3.3 Testing for Non-Constant Variance in the Residuals
Testing for non-constant variance (heteroscedasticity) in a model is fairly straightforward. We can start by creating a spread-level plot that fits the studentized residuals against the model’s fitted values. A line with a non-zero slope is indicative of heteroscedasticity. Figure \(11\) displays the spread-level plot from the `car` package.
``spreadLevelPlot(OLS_env)``
``````##
## Suggested power transformation: 1.787088``````
The negative slope on the red line in Figure \(11\) indicates the model may contain heteroscedasticity. We can also perform a formal test for non-constant variance. The null hypothesis is constant variance, so we do not want to see a significant p-value.
``````#a significant value indicates potential heteroscedasticity issues.
ncvTest(OLS_env)``````
``````## Non-constant Variance Score Test
## Variance formula: ~ fitted.values
## Chisquare = 68.107 Df = 1 p = 0.0000000000000001548597``````
The significant p-value on the non-constant variance test informs us that there is a problem with heteroscedasticity in the model. This is yet another violation of the core assumptions of OLS regression, and it brings into doubt our hypothesis tests.
10.3.4 Examining Outlier Data
There are a number of ways to examine outlying observations in an OLS regression. This section briefly illustrates a subset of analytical tests that will provide a useful assessment of potentially important outliers. The purpose of examining outlier data is twofold. First, we want to make sure there are not any mis-coded or invalid data influencing our regression. For example, an outlying observation with a value of “-99” would very likely bias our results and obviously needs to be corrected. Second, outlier data may indicate the need to theoretically reconceptualize our model. Perhaps the relationship in the model is mis-specified, with outliers at the extremes of a variable suggesting a non-linear relationship. Or it may be that a subset of cases responds differently to the independent variable, and therefore must be treated as “special cases” in the model. Examining outliers allows us to identify and address these potential problems.
One of the first things we can do is perform a Bonferroni Outlier Test. The Bonferroni Outlier Test uses a \(t\) distribution to test whether the model's largest studentized residual is statistically distinguishable from the other observations in the model. A significant p-value indicates an extreme outlier that warrants further examination. We use the `outlierTest` function in the `car` package to perform a Bonferroni Outlier Test.
``````#a significant p-value indicates extreme case for review
outlierTest(OLS_env)``````
``````## No Studentized residuals with Bonferonni p < 0.05
## Largest |rstudent|:
## rstudent unadjusted p-value Bonferonni p
## 589 -3.530306 0.00042255 NA``````
According to the R output, the Bonferroni p-value for the largest (absolute) residual is not statistically significant. While this test is important for identifying a potentially significant outlying observation, it is not a panacea for checking for patterns in outlying data. Next we will examine the model's df.betas in order to see which observations exert the most influence on the model's regression coefficients. Dfbetas are measures of how much the regression coefficient changes when observation \(i\) is omitted. Larger values indicate an observation that has considerable influence on the model.
A useful method for finding dfbeta observations is to use the `dfbetaPlots` function in the `car` package. We specify the option `id.n=3` to flag the observations with the largest df.betas. See Figure \(12\).
``plotdb <- dfbetaPlots(OLS_env, id.n=3)``
``````# Check the observations with high dfbetas.
# We see the values 589 and 615 returned.
# We only want to see the glbcc_risk and ideol columns in ds.
ds[c(589,615),c("glbcc_risk", "ideol")]``````
``````## glbcc_risk ideol
## 589 0 2
## 615 0 2``````
These observations are interesting because they identify a potential problem in our model specification. Both observations are considered outliers because the respondents self-identified as "liberal" (ideol = 2) and rated their perceived risk of global climate change as 0. These values deviate substantially from the norm for other liberals in the dataset. Remember, as we saw earlier, our model has a problem with non-linearity – these outlying observations seem to corroborate this finding. The examination of outliers sheds some light on the issue.
Finally, we can produce a plot that combines studentized residuals, "hat values", and Cook's D distances (these are measures of the amount of influence observations have on the model) using circles as an indicator of influence – the larger the circle, the greater the influence. Figure \(13\) displays the combined influence plot. In addition, the `influencePlot` function returns the values with the greatest influence.
``influencePlot(OLS_env)``
``````## StudRes Hat CookD
## 20 0.09192603 0.002172497 0.000009202846
## 30 0.09192603 0.002172497 0.000009202846
## 589 -3.53030574 0.001334528 0.008289418537
## 615 -3.53030574 0.001334528 0.008289418537``````
Figure \(13\) indicates that there are a number of cases that warrant further examination. We are already familiar with 589 and 615. Let's add 20, 30, 90, and 1052.
``````#review the results
ds[c(589,615,20,30,90,1052),c("glbcc_risk", "ideol")]``````
``````## glbcc_risk ideol
## 589 0 2
## 615 0 2
## 20 10 1
## 30 10 1
## 90 10 1
## 1052 3 6``````
One important take-away from a visual examination of these observations is that there do not appear to be any completely mis-coded or invalid data affecting our model. In general, even the most influential observations do not appear to be implausible cases. Observations 589 and 615 present an interesting problem regarding the theoretical and model specifications. These observations represent respondents who self-reported as "liberal" (ideol = 2) and also rated the perceived risk of global climate change as 0 out of 10. These observations therefore deviate from the model's expected values ("liberal" respondents, on average, believed global climate change represents a high risk). Earlier in our diagnostic testing, we found a problem with non-linearity. Taken together, it looks like the non-linearity in our model is due to observations at the ideological extremes. One way we can deal with this problem is to include a squared ideology variable (a polynomial) in the model, as illustrated earlier in this chapter. However, it is also important to note this non-linear relationship in the theoretical conceptualization of our model. Perhaps there is something special about people with extreme ideologies that needs to be taken into account when attempting to predict the perceived risk of global climate change. This finding should also inform our examination of post-estimation predictions – something that will be covered later in this text.
What should you do if you observe patterns in the residuals that seem to violate the assumptions of OLS? If you find deviant cases – outliers that are shown to be highly influential – you need to first evaluate the specific cases (observations). Is it possible that the data were miscoded? We hear of many instances in which missing value codes (often “-99”) were inadvertently left in the dataset. `R` would treat such values as if they were real data, often generating glaring and influential outliers. Should that be the case, recode the offending variable observation as missing (“NA”) and try again.
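A minimal sketch of that recode is shown below, using placeholder names (`ds` for the data frame, `x` for the offending variable, and `y` for the dependent variable); substitute your own variable names.

``````# Hypothetical example: convert -99 codes to missing before re-estimating.
ds$x[ds$x == -99] <- NA
summary(lm(y ~ x, data = ds))   # lm() drops rows with missing values by default``````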
But what if there is no obvious coding problem? It may be that the influential outlier is appropriately measured, but that the observation is different in some theoretically important way. Suppose, for example, that your model included some respondents who – rather than diligently answering your questions – just responded at random to your survey questions. They would introduce noise and error. If you could measure these slackers, you could either exclude them or include a control variable in your model to account for their different patterns of responses. We will discuss inclusion of model controls when we turn to multiple regression modeling in later chapters.
What if your residual analysis indicates the presence of heteroscedasticity? Recall that this will undermine your ability to do hypothesis tests in OLS. There are several options. If the variation in fit over the range of the predicted value of \(Y\) could plausibly result from the omission of an important explanatory variable, you should respecify your model accordingly (more on this later in this book). It is often the case that you can improve the distribution of residuals by including important but previously omitted variables. Measures of income, when left out of consumer behavior models, often have this effect.
Another approach is to use a different modeling approach that accounts for the heteroscedasticity in the estimated standard error. Of particular utility are robust estimators, which can be employed using the `rlm` (robust linear model) function in the `MASS` package. This approach increases the magnitude of the estimated standard errors, reducing the t-values and resulting p-values. That means that the “cost” of running robust estimators is that the precision of the estimates is reduced.
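A minimal sketch of that approach, fitting the same bivariate model used earlier in this chapter with `rlm` (output omitted; the exact estimates will differ somewhat from OLS):

``````# Robust regression via MASS::rlm(), using the same formula as OLS_env.
library(MASS)
OLS_env_rlm <- rlm(glbcc_risk ~ ideol, data = ds)
summary(OLS_env_rlm)``````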
Evidence of non-linearity in the residuals presents a thorny problem. This is a basic violation of a central assumption of OLS, resulting in biased estimates of \(A\) and \(B\). What can you do? First, you can respecify your model to include a polynomial; you would include both the \(X\) variable and a square of the \(X\) variable. Note that this will require you to recode \(X\). In this approach, the linear term in \(X\) changes at a constant rate, while the squared term grows increasingly quickly as \(X\) rises. So a relationship in which \(Y\) decreases as the square of \(X\) increases will produce a progressively steeper slope as \(X\) rises. This is the kind of pattern we observed in the example in which political ideology was used to predict the perceived risk posed by climate change.
10.05: Summary
Now you are in a position to employ diagnostics – both visual and statistical – to evaluate the results of your statistical models. Note that, once you have made your model corrections, you will need to regenerate and re-evaluate your model residuals to determine whether the problem has been ameliorated. Think of diagnostics as an iterative process in which you use the model results to evaluate, diagnose, revise, re-run, and re-evaluate your model. This is where the real learning happens, as you challenge your theory (as specified in your model) with observed data. So – have at it!
1. Again, we assume only that the means of the errors drawn from repeated samples of observations will be normally distributed – but we will often find that errors in a particular sample deviate significantly from a normal distribution.↩
2. Political scientists who study US electoral politics have had to account for unusual observations in the Southern states. Failure in the model to account for these differences would lead to prediction error and ugly patterns in the residuals. Sadly, Professor Gaddie notes that scholars have not been sufficiently careful – or perhaps well-trained? – to do this right. Professor Gaddie notes: “… instead of working to achieve better model specification through the application of theory and careful thought, in the 1960s and 1970s electoral scholars instead just threw out the South and all senate races, creating the perception that the United States had 39 states and a unicameral legislature.”↩
3. Of note, observations 20, 30, 90, and 1052 are returned as well. There doesn't appear to be anything special about these four observations. Part of this may be due to the bivariate relationship and how the `influencePlot` function weights the data. The results are included for your review.↩
Matrix algebra is widely used for the derivation of multiple regression because it permits a compact, intuitive depiction of regression analysis. For example, an estimated multiple regression model in scalar notation is expressed as: \(Y = A + B_1X_1 + B_2X_2 + B_3X_3 + E\). Using matrix notation, the same equation can be expressed in a more compact and (believe it or not!) intuitive form: \(y = Xb + e\).
In addition, matrix notation is flexible in that it can handle any number of independent variables. Operations performed on the model matrix \(X\) are performed on all independent variables simultaneously. Lastly, you will see that matrix expression is widely used in statistical presentations of the results of OLS analysis. For all these reasons, then, we begin with the development of multiple regression in matrix form.
11.02: The Basics of Matrix Algebra
A matrix is a rectangular array of numbers with rows and columns. As noted, operations performed on matrices are performed on all elements of a matrix simultaneously. In this section, we provide the basic understanding of matrix algebra that is necessary to make sense of the expression of multiple regression in matrix form.
11.2.1 Matrix Basics
The individual numbers in a matrix are referred to as "elements". The elements of a matrix can be identified by their location in a row and column, denoted as \(A_{r,c}\). In the following example, \(m\) will refer to the matrix row and \(n\) will refer to the column.
\[A_{m,n} = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{bmatrix}\]
Therefore, in the following matrix,
\[A = \begin{bmatrix} 10 & 5 & 8 \\ -12 & 1 & 0 \end{bmatrix}\]
element \(a_{2,3} = 0\) and \(a_{1,2} = 5\).
11.2.2 Vectors
A vector is a matrix with a single column or row. Here are some examples:
\[A = \begin{bmatrix} 6 \\ -1 \\ 8 \\ 11 \end{bmatrix}\]
or
\[A = \begin{bmatrix} 1 & 2 & 8 & 7 \end{bmatrix}\]
11.2.3 Matrix Operations
There are several "operations" that can be performed with and on matrices. Most of these can be computed with `R`, so we will use `R` examples as we go along. As always, you will understand the operations better if you work the problems in `R` as we go. There is no need to load a data set this time – we will enter all the data we need in the examples.
11.2.4 Transpose
Transposing, or taking the "prime" of a matrix, switches the rows and columns. The matrix
\[A = \begin{bmatrix} 10 & 5 & 8 \\ -12 & 1 & 0 \end{bmatrix}\]
once transposed is:
\[A' = \begin{bmatrix} 10 & -12 \\ 5 & 1 \\ 8 & 0 \end{bmatrix}\]
Note that the operation "hinges" on the element in the upper left-hand corner of \(A\), \(A_{1,1}\), so the first column of \(A\) becomes the first row of \(A'\). To transpose a matrix in `R`, create a matrix object then simply use the `t` command.
``````A <- matrix(c(10,-12,5,1,8,0),2,3)
A``````
``````## [,1] [,2] [,3]
## [1,] 10 5 8
## [2,] -12 1 0``````
``t(A)``
``````## [,1] [,2]
## [1,] 10 -12
## [2,] 5 1
## [3,] 8 0``````
11.2.5 Adding Matrices
To add matrices together, they must have the same dimensions, meaning that the matrices must have the same number of rows and columns. Then, you simply add each element to its counterpart by row and column. For example:
\[A = \begin{bmatrix} 4 & -3 \\ 2 & 0 \end{bmatrix} \quad B = \begin{bmatrix} 8 & 1 \\ 4 & -5 \end{bmatrix} \quad A + B = \begin{bmatrix} 4+8 & -3+1 \\ 2+4 & 0+(-5) \end{bmatrix} = \begin{bmatrix} 12 & -2 \\ 6 & -5 \end{bmatrix}\]
To add matrices together in `R`, simply create two matrix objects and add them together.
``````A <- matrix(c(4,2,-3,0),2,2)
A``````
``````## [,1] [,2]
## [1,] 4 -3
## [2,] 2 0``````
``````B <- matrix(c(8,4,1,-5),2,2)
B``````
``````## [,1] [,2]
## [1,] 8 1
## [2,] 4 -5``````
``A + B``
``````## [,1] [,2]
## [1,] 12 -2
## [2,] 6 -5``````
See – how easy is that? No need to be afraid of a little matrix algebra!
11.2.6 Multiplication of Matrices
To multiply matrices they must be conformable, which means the number of columns in the first matrix must match the number of rows in the second matrix.
\[A_{r \times q} * B_{q \times c} = C_{r \times c}\]
Then, multiply the row elements of the first matrix by the corresponding column elements of the second matrix, as shown here:
\[A = \begin{bmatrix} 2 & 5 \\ 1 & 0 \\ 6 & -2 \end{bmatrix} \quad B = \begin{bmatrix} 4 & 2 & 1 \\ 5 & 7 & 2 \end{bmatrix}\]
\[A \times B = \begin{bmatrix} (2 \times 4)+(5 \times 5) & (2 \times 2)+(5 \times 7) & (2 \times 1)+(5 \times 2) \\ (1 \times 4)+(0 \times 5) & (1 \times 2)+(0 \times 7) & (1 \times 1)+(0 \times 2) \\ (6 \times 4)+(-2 \times 5) & (6 \times 2)+(-2 \times 7) & (6 \times 1)+(-2 \times 2) \end{bmatrix} = \begin{bmatrix} 33 & 39 & 12 \\ 4 & 2 & 1 \\ 14 & -2 & 2 \end{bmatrix}\]
To multiply matrices in `R`, create two matrix objects and multiply them using the `%*%` operator.
``````A <- matrix(c(2,1,6,5,0,-2),3,2)
A``````
``````## [,1] [,2]
## [1,] 2 5
## [2,] 1 0
## [3,] 6 -2``````
``````B <- matrix(c(4,5,2,7,1,2),2,3)
B``````
``````## [,1] [,2] [,3]
## [1,] 4 2 1
## [2,] 5 7 2``````
``A %*% B``
``````## [,1] [,2] [,3]
## [1,] 33 39 12
## [2,] 4 2 1
## [3,] 14 -2 2``````
11.2.7 Identity Matrices
The identity matrix is a square matrix with 1’s on the diagonal and 0’s elsewhere. For a 4 x 4 matrix, it looks like this:
\[I = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\]
It acts like a 1 in algebra; a matrix (\(A\)) times the identity matrix (\(I\)) is \(A\). This can be demonstrated in `R`.
``````A <- matrix(c(5,3,2,4),2,2)
A``````
``````## [,1] [,2]
## [1,] 5 2
## [2,] 3 4``````
``````I <- matrix(c(1,0,0,1),2,2)
I``````
``````## [,1] [,2]
## [1,] 1 0
## [2,] 0 1``````
``A %*% I``
``````## [,1] [,2]
## [1,] 5 2
## [2,] 3 4``````
Note that, if you want to square a column matrix (that is, multiply it by itself), you can simply take the transpose of the column (thereby making it a row matrix) and multiply them. The square of column matrix \(A\) is \(A'A\).
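For example, squaring the column vector shown in the vector example above (a quick illustrative check in `R`):

``````A <- matrix(c(6,-1,8,11),4,1)
t(A) %*% A   # a 1 x 1 matrix containing 36 + 1 + 64 + 121 = 222``````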
11.2.8 Matrix Inversion
The matrix inversion operation is a bit like dividing any number by itself in algebra. An inverse of the \(A\) matrix is denoted \(A^{-1}\). Any matrix multiplied by its inverse is equal to the identity matrix:
\[AA^{-1} = A^{-1}A = I\]
For example,
\[A = \begin{bmatrix} 1 & -1 \\ -1 & -1 \end{bmatrix} \text{ and } A^{-1} = \begin{bmatrix} 0.5 & -0.5 \\ -0.5 & -0.5 \end{bmatrix}, \text{ therefore } A \times A^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\]
However, matrix inversion is only applicable to a square (i.e., number of rows equals number of columns) matrix; only a square matrix can have an inverse.
Finding the Inverse of a Matrix
To find the inverse of a matrix (the values that will produce the identity matrix), create a second matrix of variables and solve for \(I\).
\[A = \begin{bmatrix} 3 & 1 \\ 2 & 4 \end{bmatrix} \times \begin{bmatrix} a & c \\ b & d \end{bmatrix} = \begin{bmatrix} 3a+b & 3c+d \\ 2a+4b & 2c+4d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\]
Set \(3a+b=1\) and \(2a+4b=0\) and solve for \(a\) and \(b\). In this case \(a = \frac{2}{5}\) and \(b = -\frac{1}{5}\). Likewise, set \(3c+d=0\) and \(2c+4d=1\); solving for \(c\) and \(d\) produces \(c = -\frac{1}{10}\) and \(d = \frac{3}{10}\). Therefore,
\[A^{-1} = \begin{bmatrix} \frac{2}{5} & -\frac{1}{10} \\ -\frac{1}{5} & \frac{3}{10} \end{bmatrix}\]
Finding the inverse matrix can also be done in `R` using the `solve` command.
``````A <- matrix(c(3,2,1,4),2,2)
A``````
``````## [,1] [,2]
## [1,] 3 1
## [2,] 2 4``````
``````A.inverse <- solve(A)
A.inverse``````
``````## [,1] [,2]
## [1,] 0.4 -0.1
## [2,] -0.2 0.3``````
``A %*% A.inverse``
``````## [,1] [,2]
## [1,] 1 0
## [2,] 0 1``````
OK – now we have all the pieces we need to apply matrix algebra to multiple regression.
As was the case with simple regression, we want to minimize the sum of the squared errors, \(e\). In matrix notation, the OLS model is \(y = Xb + e\), where \(e = y - Xb\). The sum of the squared \(e\) is:
\[\sum e_i^2 = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix} = e'e \tag{11.1}\]
Therefore, we want to find the \(b\) that minimizes this function:
\[e'e = (y - Xb)'(y - Xb) = y'y - b'X'y - y'Xb + b'X'Xb = y'y - 2b'X'y + b'X'Xb\]
To do this we take the derivative of \(e'e\) with respect to \(b\) and set it equal to \(0\).
\[\frac{\partial e'e}{\partial b} = -2X'y + 2X'Xb = 0\]
To solve this we subtract \(2X'Xb\) from both sides:
\[-2X'Xb = -2X'y\]
Then, to remove the \(-2\)'s, we multiply each side by \(-1/2\). This leaves us with:
\[(X'X)b = X'y\]
To solve for \(b\) we multiply both sides by the inverse of \(X'X\), \((X'X)^{-1}\). Note that for matrices this is equivalent to dividing each side by \(X'X\). Therefore:
\[b = (X'X)^{-1}X'y \tag{11.2}\]
The \(X'X\) matrix is square, and therefore invertible (i.e., the inverse exists). However, the \(X'X\) matrix can be non-invertible (i.e., singular) if \(n < k\) (the number of independent variables exceeds the sample size) or if one or more of the independent variables is perfectly correlated with another independent variable. This is termed perfect multicollinearity and will be discussed in more detail in Chapter 14. Also note that the \(X'X\) matrix contains the basis for all the necessary means, variances, and covariances among the \(X\)'s.
\[X'X = \begin{bmatrix} n & \sum X_1 & \sum X_2 & \sum X_3 \\ \sum X_1 & \sum X_1^2 & \sum X_1X_2 & \sum X_1X_3 \\ \sum X_2 & \sum X_2X_1 & \sum X_2^2 & \sum X_2X_3 \\ \sum X_3 & \sum X_3X_1 & \sum X_3X_2 & \sum X_3^2 \end{bmatrix}\]
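To see the multicollinearity problem concretely, consider a small illustrative sketch (not part of the running example, and with object names of our own choosing) in which one column of the model matrix is an exact multiple of another; the resulting \(X'X\) has a determinant of zero and therefore no inverse.

``````# Illustrative sketch: perfect multicollinearity makes X'X singular.
X1 <- c(1,2,3,4,5)
X2 <- 2*X1                      # X2 is an exact linear function of X1
X.sing <- cbind(1, X1, X2)      # constant column plus the two collinear IVs
det(t(X.sing) %*% X.sing)       # effectively zero: X'X cannot be inverted``````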
Regression in Matrix Form
Assume a model using \(n\) observations, \(k\) parameters, and \(k-1\) \(X_i\) (independent) variables.
\[y = Xb + e\]
\[\hat{y} = Xb\]
\[b = (X'X)^{-1}X'y\]
• \(y\) = \(n \times 1\) column vector of observations of the DV, \(Y\)
• \(\hat{y}\) = \(n \times 1\) column vector of predicted \(Y\) values
• \(X\) = \(n \times k\) matrix of observations of the IVs; first column is 1s
• \(b\) = \(k \times 1\) column vector of regression coefficients; first row is \(A\)
• \(e\) = \(n \times 1\) column vector of \(n\) residual values
Using the following steps, we will use `R` to calculate \(b\), a vector of regression coefficients; \(\hat{y}\), a vector of predicted \(y\) values; and \(e\), a vector of residuals.
We want to fit the model \(y = Xb + e\) to the following matrices:
\[y = \begin{bmatrix} 6 \\ 11 \\ 4 \\ 3 \\ 5 \\ 9 \\ 10 \end{bmatrix} \quad X = \begin{bmatrix} 1 & 4 & 5 & 4 \\ 1 & 7 & 2 & 3 \\ 1 & 2 & 6 & 4 \\ 1 & 1 & 9 & 6 \\ 1 & 3 & 4 & 5 \\ 1 & 7 & 3 & 4 \\ 1 & 8 & 2 & 5 \end{bmatrix}\]
Create two objects, the \(y\) matrix and the \(X\) matrix.
``````y <- matrix(c(6,11,4,3,5,9,10),7,1)
y``````
``````## [,1]
## [1,] 6
## [2,] 11
## [3,] 4
## [4,] 3
## [5,] 5
## [6,] 9
## [7,] 10``````
``````X <- matrix(c(1,1,1,1,1,1,1,4,7,2,1,3,7,8,5,2,6,9,4,3,2,4,3,4,6,5,4,5),7,4)
X``````
``````## [,1] [,2] [,3] [,4]
## [1,] 1 4 5 4
## [2,] 1 7 2 3
## [3,] 1 2 6 4
## [4,] 1 1 9 6
## [5,] 1 3 4 5
## [6,] 1 7 3 4
## [7,] 1 8 2 5``````
Calculate \(b\): \(b = (X'X)^{-1}X'y\).
We can calculate this in `R` in just a few steps. First, we transpose \(X\) to get \(X'\).
``````X.prime <- t(X)
X.prime``````
``````## [,1] [,2] [,3] [,4] [,5] [,6] [,7]
## [1,] 1 1 1 1 1 1 1
## [2,] 4 7 2 1 3 7 8
## [3,] 5 2 6 9 4 3 2
## [4,] 4 3 4 6 5 4 5``````
Then we multiply \(X'\) by \(X\) to get \(X'X\).
``````X.prime.X <- X.prime %*% X
X.prime.X``````
``````## [,1] [,2] [,3] [,4]
## [1,] 7 32 31 31
## [2,] 32 192 104 134
## [3,] 31 104 175 146
## [4,] 31 134 146 143``````
Next, we find the inverse of \(X'X\): \((X'X)^{-1}\).
``````X.prime.X.inv<-solve(X.prime.X)
X.prime.X.inv``````
``````## [,1] [,2] [,3] [,4]
## [1,] 12.2420551 -1.04528602 -1.01536017 -0.63771186
## [2,] -1.0452860 0.12936970 0.13744703 -0.03495763
## [3,] -1.0153602 0.13744703 0.18697034 -0.09957627
## [4,] -0.6377119 -0.03495763 -0.09957627 0.27966102``````
Then, we multiply \((X'X)^{-1}\) by \(X'\).
``````X.prime.X.inv.X.prime<-X.prime.X.inv %*% X.prime
X.prime.X.inv.X.prime``````
``````## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 0.43326271 0.98119703 1.50847458 -1.7677436 1.8561970 -0.6718750
## [2,] 0.01959746 0.03032309 -0.10169492 0.1113612 -0.2821769 0.1328125
## [3,] 0.07097458 0.02198093 -0.01694915 0.2073623 -0.3530191 0.1093750
## [4,] -0.15677966 -0.24258475 -0.18644068 0.1091102 0.2574153 -0.0625000
## [,7]
## [1,] -1.33951271
## [2,] 0.08977754
## [3,] -0.03972458
## [4,] 0.28177966``````
Finally, to obtain the \(b\) vector we multiply \((X'X)^{-1}X'\) by \(y\).
``````b<-X.prime.X.inv.X.prime %*% y
b``````
``````## [,1]
## [1,] 3.96239407
## [2,] 1.06064619
## [3,] 0.04396186
## [4,] -0.48516949``````
We can use the `lm` function in `R` to check and see whether our “by hand” matrix approach gets the same result as does the “canned” multiple regression routine:
``lm(y~0+X)``
``````##
## Call:
## lm(formula = y ~ 0 + X)
##
## Coefficients:
## X1 X2 X3 X4
## 3.96239 1.06065 0.04396 -0.48517``````
Calculate \(\hat{y}\): \(\hat{y} = Xb\).
To calculate the \(\hat{y}\) vector in `R`, simply multiply `X` and `b`.
``````y.hat <- X %*% b
y.hat``````
``````## [,1]
## [1,] 6.484110
## [2,] 10.019333
## [3,] 4.406780
## [4,] 2.507680
## [5,] 4.894333
## [6,] 9.578125
## [7,] 10.109640``````
Calculate \(e\).
To calculate \(e\), the vector of residuals, simply subtract the vector \(\hat{y}\) from the vector \(y\).
``````e <- y-y.hat
e``````
``````## [,1]
## [1,] -0.4841102
## [2,] 0.9806674
## [3,] -0.4067797
## [4,] 0.4923199
## [5,] 0.1056674
## [6,] -0.5781250
## [7,] -0.1096398``````
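As a quick check, the residuals computed by hand should match those returned by `lm` (a minimal sketch reusing the `y`, `X`, and `e` objects created above; the object names below are ours):

``````e.lm <- residuals(lm(as.numeric(y) ~ 0 + X))
all.equal(as.numeric(e), as.numeric(e.lm))   # TRUE, up to rounding``````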
11.04: Summary
Whew! Now, using matrix algebra and calculus, you have derived the squared-error minimizing formula for multiple regression. Not only that, you can use the matrix form, in `R`, to calculate the estimated slope and intercept coefficients, predict \(Y\), and even calculate the regression residuals. We're on our way to true Geekdome!
Next stop: the key assumptions necessary for OLS to provide the best linear unbiased estimates (BLUE) and the basis for statistical controls using multiple independent variables in regression models.
1. It is useful to keep in mind the difference between “multiple regression” and “multivariate regression”. The latter predicts 2 or more dependent variables using an independent variable.↩
2. The use of "prime" in matrix algebra should not be confused with the use of "prime" in the expression of a derivative, as in \(X'\).↩
As with simple regression, the theoretical multiple regression model contains a systematic component, \(Y = \alpha + \beta_1X_{i1} + \beta_2X_{i2} + \ldots + \beta_kX_{ik}\), and a stochastic component, \(\epsilon_i\). The overall theoretical model is expressed as:
\[Y = \alpha + \beta_1X_{i1} + \beta_2X_{i2} + \ldots + \beta_kX_{ik} + \epsilon_i\]
where
• \(\alpha\) is the constant term
• \(\beta_1\) through \(\beta_k\) are the parameters of IVs 1 through \(k\)
• \(k\) is the number of IVs
• \(\epsilon\) is the error term
In matrix form the theoretical model can be much more simply expressed as: \(y = X\beta + \epsilon\).
The empirical model that will be estimated can be expressed as:
\[Y_i = A + B_1X_{i1} + B_2X_{i2} + \ldots + B_kX_{ik} + E_i = \hat{Y}_i + E_i\]
Therefore, the residual sum of squares (RSS) for the model is expressed as:
\[RSS = \sum E_i^2 = \sum (Y_i - \hat{Y}_i)^2 = \sum (Y_i - (A + B_1X_{i1} + B_2X_{i2} + \ldots + B_kX_{ik}))^2\]
12.1.1 Assumptions of OLS Regression
There are several important assumptions necessary for multiple regression. These assumptions include linearity, fixed \(X\)'s, and errors that are normally distributed.
OLS Assumptions
Systematic Component
• Linearity
• Fixed \(X\)
Stochastic Component
• Errors have identical distributions
• Errors are independent of \(X\) and other \(\epsilon_i\)
• Errors are normally distributed
Linearity
When OLS is used, it is assumed that a linear functional form is the correct specification for the model being estimated. Note that linearity is assumed in the parameters (that is, for the \(B\)s); therefore, the expected value of the dependent variable is a linear function of the parameters, not necessarily of the variables themselves. So, as we will discuss in later chapters, it is possible to transform the variables (the \(X\)s) to introduce non-linearity into the model while retaining linear estimated coefficients. For example, a model with a squared \(X\) term can be estimated with OLS:
\[Y = A + BX_i^2 + E\]
However, a model with a squared \(B\) term cannot.
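A brief illustrative sketch of such a model in `R`, using placeholder simulated data (not our survey data) and `I()` to include the squared term:

``````# Placeholder illustration: linear in the parameters, non-linear in X.
set.seed(123)
dat <- data.frame(x = 1:50)
dat$y <- 3 + 2*dat$x - 0.05*dat$x^2 + rnorm(50)
ols.sq <- lm(y ~ x + I(x^2), data = dat)
summary(ols.sq)``````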
Fixed \(X\)
The assumption of fixed values of \(X\) means that the value of \(X\) in our observations is not systematically related to the value of the other \(X\)'s. We can see this most clearly in an experimental setting where the researcher can manipulate the experimental variable while controlling for all other possible \(X\)s through random assignment to a treatment and control group. In that case, the value of the experimental treatment is completely unrelated to the value of the other \(X\)s – or, put differently, the treatment variable is orthogonal to the other \(X\)s. This assumption is carried through to observational studies as well. Note that if \(X\) is assumed to be fixed, then changes in \(Y\) are assumed to be a result of the independent variations in the \(X\)'s and error (and nothing else).
12.02: Partial Effects
As noted in Chapter 1, multiple regression "controls" for the effects of other variables on the dependent variable. This is in order to manage possible spurious relationships, where the variable \(Z\) influences the value of both \(X\) and \(Y\). Figure \(1\) illustrates the nature of spurious relationships between variables.
To control for spurious relationships, multiple regression accounts for the partial effects of one \(X\) on another \(X\). Partial effects deal with the shared variance between \(Y\) and the \(X\)'s. This is illustrated in Figure \(2\). In this example, the number of deaths resulting from house fires is positively associated with the number of fire trucks that are sent to the scene of the fire. A simple-minded analysis would conclude that if fewer trucks are sent, fewer fire-related deaths would occur. Of course, the number of trucks sent to the fire, and the number of fire-related deaths, are both driven by the magnitude of the fire. An appropriate control for the size of the fire would therefore presumably eliminate the positive association between the number of fire trucks at the scene and the number of deaths (and may even reverse the direction of the relationship, as the larger number of trucks may more quickly suppress the fire).
In Figure \(2\), the Venn diagram on the left shows a pair of \(X\)s that would jointly predict \(Y\) better than either \(X\) alone. However, the overlapped area between \(X_1\) and \(X_2\) causes some confusion. That would need to be removed to estimate the "pure" effect of \(X_1\) on \(Y\). The diagram on the right represents a dangerous case. Overall, \(X_1\) + \(X_2\) explain \(Y\) well, but we don't know how the individual \(X_1\) or \(X_2\) influence \(Y\). This clouds our ability to see the effects of either of the \(X\)s on \(Y\). In the extreme case of wholly overlapping explanations by the IVs, we face the condition of multicollinearity that makes estimation of the partial regression coefficients (the \(B\)s) impossible.
In calculating the effect of \(X_1\) on \(Y\), we need to remove the effect of the other \(X\)s on both \(X_1\) and \(Y\). While multiple regression does this for us, we will walk through an example to illustrate the concepts.
Partial Effects
In a case with two IVs, \(X_1\) and \(X_2\):
\[Y = A + B_1X_{i1} + B_2X_{i2} + E_i\]
• Remove the effect of \(X_2\) on \(Y\):
\[\hat{Y}_i = A_1 + B_1X_{i2} + E_{iY|X_2}\]
• Remove the effect of \(X_2\) on \(X_1\):
\[\hat{X}_i = A_2 + B_2X_{i2} + E_{iX_1|X_2}\]
So,
\[E_{iY|X_2} = 0 + B_3E_{iX_1|X_2}\]
and \(B_3E_{iX_1|X_2} = B_1X_{i1}\).
As an example, we will use age and ideology to predict perceived climate change risk.
``````ds.temp <- filter(ds) %>% dplyr::select(glbcc_risk, ideol, age) %>%
na.omit()
ols1 <- lm(glbcc_risk ~ ideol+age, data = ds.temp)
summary(ols1)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ ideol + age, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.7913 -1.6252 0.2785 1.4674 6.6075
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.096064 0.244640 45.357 <0.0000000000000002 ***
## ideol -1.042748 0.028674 -36.366 <0.0000000000000002 ***
## age -0.004872 0.003500 -1.392 0.164
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.479 on 2510 degrees of freedom
## Multiple R-squared: 0.3488, Adjusted R-squared: 0.3483
## F-statistic: 672.2 on 2 and 2510 DF, p-value: < 0.00000000000000022``````
Note that the estimated coefficient for ideology is -1.0427478. To see how multiple regression removes the shared variance we first regress climate change risk on age and create an object `ols2.resids` of the residuals.
``````ols2 <- lm(glbcc_risk ~ age, data = ds.temp)
summary(ols2)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.4924 -2.1000 0.0799 2.5376 4.5867
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 6.933835 0.267116 25.958 < 0.0000000000000002 ***
## age -0.016350 0.004307 -3.796 0.00015 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.062 on 2511 degrees of freedom
## Multiple R-squared: 0.005706, Adjusted R-squared: 0.00531
## F-statistic: 14.41 on 1 and 2511 DF, p-value: 0.0001504``````
``ols2.resids <- ols2$residuals``
Note that, when modeled alone, the estimated effect of age on glbcc_risk is larger (-0.0164) than it was in the multiple regression with ideology (-0.00487). This is because age is correlated with ideology, and – because ideology is also related to glbcc_risk – when we don't "control for" ideology, the age variable carries some of the influence of ideology.
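We can confirm that age and ideology are correlated in these data with a quick check, reusing the same `ds.temp` object:

``cor(ds.temp$age, ds.temp$ideol)   # a modest positive correlation``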
Next, we regress ideology on age and create an object of the residuals.
``````ols3 <- lm(ideol ~ age, data = ds.temp)
summary(ols3)``````
``````##
## Call:
## lm(formula = ideol ~ age, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -3.9492 -0.8502 0.2709 1.3480 2.7332
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 3.991597 0.150478 26.526 < 0.0000000000000002 ***
## age 0.011007 0.002426 4.537 0.00000598 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.725 on 2511 degrees of freedom
## Multiple R-squared: 0.00813, Adjusted R-squared: 0.007735
## F-statistic: 20.58 on 1 and 2511 DF, p-value: 0.000005981``````
``ols3.resids <- ols3$residuals``
Finally, we regress the residuals from ols2 on the residuals from ols3. Note that this regression does not include an intercept term.
``````ols4 <- lm(ols2.resids ~ 0 + ols3.resids)
summary(ols4)``````
``````##
## Call:
## lm(formula = ols2.resids ~ 0 + ols3.resids)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.7913 -1.6252 0.2785 1.4674 6.6075
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## ols3.resids -1.04275 0.02866 -36.38 <0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.478 on 2512 degrees of freedom
## Multiple R-squared: 0.3451, Adjusted R-squared: 0.3448
## F-statistic: 1324 on 1 and 2512 DF, p-value: < 0.00000000000000022``````
As shown, the estimated \(B\) for \(E_{iX_1|X_2}\) matches the estimated \(B\) for ideology in the first regression. What we have done, and what multiple regression does, is "clean" both \(Y\) and \(X_1\) (ideology) of their correlations with \(X_2\) (age) by using the residuals from the bivariate regressions.
``````library(psych)
describe(data.frame(ds.temp$glbcc_risk,ds.temp$ideol,
ds.temp$age))``````
``````## vars n mean sd median trimmed mad min max
## ds.temp.glbcc_risk 1 2513 5.95 3.07 6 6.14 2.97 0 10
## ds.temp.ideol 2 2513 4.66 1.73 5 4.76 1.48 1 7
## ds.temp.age 3 2513 60.38 14.19 62 61.01 13.34 18 99
## range skew kurtosis se
## ds.temp.glbcc_risk 10 -0.32 -0.94 0.06
## ds.temp.ideol 6 -0.45 -0.79 0.03
## ds.temp.age 81 -0.38 -0.23 0.28``````
``````library(car)
scatterplotMatrix(data.frame(ds.temp$glbcc_risk,
ds.temp$ideol,ds.temp$age),
diagonal="density")``````
In this section, we walk through another example of multiple regression. First, we start with our two IV model.
``````ols1 <- lm(glbcc_risk ~ age+ideol, data=ds.temp)
summary(ols1)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age + ideol, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.7913 -1.6252 0.2785 1.4674 6.6075
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.096064 0.244640 45.357 <0.0000000000000002 ***
## age -0.004872 0.003500 -1.392 0.164
## ideol -1.042748 0.028674 -36.366 <0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.479 on 2510 degrees of freedom
## Multiple R-squared: 0.3488, Adjusted R-squared: 0.3483
## F-statistic: 672.2 on 2 and 2510 DF, p-value: < 0.00000000000000022``````
The results show that the relationship between age and perceived risk (glbcc_risk) is negative and insignificant. The relationship between ideology and perceived risk is negative and significant. The coefficients of the \(X\)'s are interpreted in the same way as with simple regression, except that we are now controlling for the effect of the other \(X\)'s by removing their influence on the estimated coefficient. Therefore, we say that as ideology increases one unit, perceptions of the risk of climate change (glbcc_risk) decrease by 1.0427478, controlling for the effect of age.
As was the case with simple regression, multiple regression finds the intercept and slopes that minimize the sum of the squared residuals. With only one IV the relationship can be represented in a two-dimensional plane (a graph) as a line, but each IV adds another dimension. Two IVs create a regression plane within a cube, as shown in Figure \(3\). The Figure shows a scatterplot of perceived climate change risk, age, and ideology coupled with the regression plane. Note that this is a sample of 200 observations from the larger data set. Were we to add more IVs, we would generate a hypercube… and we haven’t found a clever way to draw that yet.
``````ds200 <- ds.temp[sample(1:nrow(ds.temp), 200, replace=FALSE),]
library(scatterplot3d)
s3d <-scatterplot3d(ds200$age,
ds200$ideol,
ds200$glbcc_risk
,pch=16, highlight.3d=TRUE,
type="h", main="3D Scatterplot")
s3d$plane3d(ols1)``````
In the next example education is added to the model.
``````ds.temp <- filter(ds) %>%
dplyr::select(glbcc_risk, age, education, income, ideol) %>%
na.omit()
ols2 <- lm(glbcc_risk ~ age + education + ideol, data = ds.temp)
summary(ols2)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age + education + ideol, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.8092 -1.6355 0.2388 1.4279 6.6334
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.841669 0.308416 35.153 <0.0000000000000002 ***
## age -0.003246 0.003652 -0.889 0.374
## education 0.036775 0.028547 1.288 0.198
## ideol -1.044827 0.029829 -35.027 <0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.437 on 2268 degrees of freedom
## Multiple R-squared: 0.3607, Adjusted R-squared: 0.3598
## F-statistic: 426.5 on 3 and 2268 DF, p-value: < 0.00000000000000022``````
We see that as a respondent's education increases one unit on the education scale, perceived risk appears to increase by 0.0367752, keeping age and ideology constant. However, this result is not significant. In the final example, income is added to the model. Note that the size and significance of the education coefficient actually increase once income is included, indicating that education only has bearing on the perceived risks of climate change once the independent effect of income is considered.
``````options(scipen = 999) #to turn off scientific notation
ols3 <- lm(glbcc_risk ~ age + education + income + ideol, data = ds.temp)
summary(ols3)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age + education + income + ideol, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.7991 -1.6654 0.2246 1.4437 6.5968
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.9232861851 0.3092149750 35.326 < 0.0000000000000002 ***
## age -0.0044231931 0.0036688855 -1.206 0.22810
## education 0.0632823391 0.0299443094 2.113 0.03468 *
## income -0.0000026033 0.0000009021 -2.886 0.00394 **
## ideol -1.0366154295 0.0299166747 -34.650 < 0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.433 on 2267 degrees of freedom
## Multiple R-squared: 0.363, Adjusted R-squared: 0.3619
## F-statistic: 323 on 4 and 2267 DF, p-value: < 0.00000000000000022``````
12.3.1 Hypothesis Testing and \(t\)-tests
The logic of hypothesis testing with multiple regression is a straightforward extension from simple regression as described in Chapter 7. Below we will demonstrate how to use the standard error of the ideology variable to test whether ideology influences perceptions of the perceived risk of global climate change. Specifically, we posit:
\(H_1\): As respondents become more conservative, they will perceive climate change to be less risky, all else equal.
Therefore, \(\beta_{ideology} < 0\). The null hypothesis is that \(\beta_{ideology} = 0\).
To test \(H_1\) we first need to find the standard error of the \(B\) for ideology (\(B_j\)).
\[SE(B_j) = \frac{SE}{\sqrt{RSS_j}} \tag{12.1}\]
where \(RSS_j =\) the residual sum of squares from the regression of \(X_j\) (ideology) on the other \(X\)s (age, education, income) in the model. \(RSS_j\) captures all of the independent variation in \(X_j\). Note that the bigger \(RSS_j\), the smaller \(SE(B_j)\), and the smaller \(SE(B_j)\), the more precise the estimate of \(B_j\).
\(SE\) (the standard error of the model) is:
\[SE = \sqrt{\frac{RSS}{n-k-1}}\]
We can use `R` to find the \(RSS_j\) for ideology in our model. First we find the \(SE\) of the model:
``````Se <- sqrt((sum(ols3$residuals^2))/(length(ds.temp$ideol)-5-1))
Se``````
``## [1] 2.43312``
Then we find the \(RSS_j\) for ideology:
``````ols4 <- lm(ideol ~ age + education + income, data = ds.temp)
summary(ols4)``````
``````##
## Call:
## lm(formula = ideol ~ age + education + income, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4.2764 -1.1441 0.2154 1.4077 3.1288
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.5945481422 0.1944108986 23.633 < 0.0000000000000002 ***
## age 0.0107541759 0.0025652107 4.192 0.0000286716948757 ***
## education -0.1562812154 0.0207596525 -7.528 0.0000000000000738 ***
## income 0.0000028680 0.0000006303 4.550 0.0000056434561990 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.707 on 2268 degrees of freedom
## Multiple R-squared: 0.034, Adjusted R-squared: 0.03272
## F-statistic: 26.6 on 3 and 2268 DF, p-value: < 0.00000000000000022``````
``````RSSideol <- sum(ols4$residuals^2)
RSSideol``````
``## [1] 6611.636``
Finally, we calculate the \(SE\) for ideology:
``````SEideol <- Se/sqrt(RSSideol)
SEideol``````
``## [1] 0.02992328``
Once the \(SE(B_j)\) is known, the \(t\)-test for the ideology coefficient can be calculated. The \(t\) value is the ratio of the estimated coefficient to its standard error.
\[t = \frac{B_j}{SE(B_j)} \tag{12.2}\]
This can be calculated using `R`.
``ols3$coef[5]/SEideol``
``````## ideol
## -34.64245``````
As we see, the result is statistically significant, and therefore we reject the null hypothesis. Also note that the results match those from the `R` output for the full model, as was shown earlier.
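If desired, the corresponding two-tailed p-value can be recovered from the \(t\) distribution, using the quantities computed above (a minimal sketch; `t.ideol` is our own object name):

``````t.ideol <- ols3$coef[5]/SEideol
2*pt(abs(t.ideol), df = ols3$df.residual, lower.tail = FALSE)``````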
12.04: Summary
The use of multiple regression, when compared to simple bivariate regression, allows for more sophisticated and interesting analyses. The most important feature is the ability of the analyst (that's you!) to statistically control for the effects of all other IVs when estimating any \(B\). In essence, we "clean" the estimated relationship between any \(X\) and \(Y\) of the influence of all other \(X\)s in the model. Hypothesis testing in multiple regression requires that we identify the independent variation in each \(X\), but otherwise the estimated standard error for each \(B\) is analogous to that for simple regression.
So, maybe it's a little more complicated. But look at what we can observe! Our estimates from the examples in this chapter show that age, income and education are all related to political ideology, but even when we control for their effects, ideology retains a potent influence on the perceived risks of climate change. Politics matter.
Model building is the process of deciding which independent variables to include in the model.22 For our purposes, when deciding which variables to include, theory and findings from the extant literature should be the most prominent guides. Apart from theory, however, this chapter examines empirical strategies that can help determine if the addition of new variables improves overall model fit. In general, when adding a variable, check for: a) improved prediction based on empirical indicators, b) statistically and substantively significant estimated coefficients, and c) stability of model coefficients—do other coefficients change when adding the new one – particularly look for sign changes.
13.1.1 Theory and Hypotheses
The most important guidance for deciding whether a variable (or variables) should be included in your model is provided by theory and prior research. Simply put, knowing the literature on your topic is vital to knowing what variables are important. You should be able to articulate a clear theoretical reason for including each variable in your model. In those cases where you don’t have much theoretical guidance, however, you should use model parsimony, which is a function of simplicity and model fit, as your guide. You can focus on whether the inclusion of a variable improves model fit. In the next section, we will explore several empirical indicators that can be used to evaluate the appropriateness of variable inclusion.
13.1.2 Empirical Indicators
When building a model, it is best to start with a few IVs and then begin adding other variables. However, when adding a variable, check for:
• Improved prediction (increase in adjusted $R^2$)
• Statistically and substantively significant estimated coefficients
• Stability of model coefficients
• Do other coefficients change when adding the new one?
• Particularly look for sign changes for estimated coefficients.
Coefficient of Determination: $R^2$
$R^2$ was previously discussed within the context of simple regression. The extension to multiple regression is straightforward, except that multiple regression leads us to place greater weight on the use of the adjusted $R^2$. Recall that the adjusted $R^2$ corrects for the inclusion of multiple independent variables; $R^2$ is the ratio of the explained sum of squares to the total sum of squares (ESS/TSS).
$R^2$ is expressed as:
$$R^2 = 1 - \frac{RSS}{TSS} \tag{13.1}$$
However, this formulation of $R^2$ is insensitive to the complexity of the model and the degrees of freedom provided by your data. This means that an increase in the number of independent variables, $k$, can increase the $R^2$. Adjusted $R^2$ penalizes the $R^2$ by correcting for the degrees of freedom. It is defined as:
$$\text{adjusted } R^2 = 1 - \frac{RSS/(n-k-1)}{TSS/(n-1)} \tag{13.2}$$
The $R^2$ of two models can be compared, as illustrated by the following example. The first (simpler) model consists of basic demographics (age, education, and income) as predictors of climate change risk. The second (more complex) model adds the variable measuring political ideology to the explanation.
``````ds.temp <- filter(ds) %>%
dplyr::select(glbcc_risk, age, education, income, ideol) %>%
na.omit()
ols1 <- lm(glbcc_risk ~ age + education + income, data = ds.temp)
summary(ols1)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age + education + income, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.9189 -2.0546 0.0828 2.5823 5.1908
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 6.160506689 0.342491831 17.987 < 0.0000000000000002 ***
## age -0.015571138 0.004519107 -3.446 0.00058 ***
## education 0.225285858 0.036572082 6.160 0.000000000858 ***
## income -0.000005576 0.000001110 -5.022 0.000000551452 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.008 on 2268 degrees of freedom
## Multiple R-squared: 0.02565, Adjusted R-squared: 0.02437
## F-statistic: 19.91 on 3 and 2268 DF, p-value: 0.0000000000009815``````
``````ols2 <- lm(glbcc_risk ~ age + education + income + ideol, data = ds.temp)
summary(ols2)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age + education + income + ideol, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.7991 -1.6654 0.2246 1.4437 6.5968
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.9232861851 0.3092149750 35.326 < 0.0000000000000002 ***
## age -0.0044231931 0.0036688855 -1.206 0.22810
## education 0.0632823391 0.0299443094 2.113 0.03468 *
## income -0.0000026033 0.0000009021 -2.886 0.00394 **
## ideol -1.0366154295 0.0299166747 -34.650 < 0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.433 on 2267 degrees of freedom
## Multiple R-squared: 0.363, Adjusted R-squared: 0.3619
## F-statistic: 323 on 4 and 2267 DF, p-value: < 0.00000000000000022``````
As can be seen by comparing the model results, the more complex model that includes political ideology has a higher $R^2$ than does the simpler model. This indicates that the more complex model explains a greater fraction of the variance in perceived risks of climate change. However, we don’t know if this improvement is statistically significant. In order to determine whether the more complex model adds significantly to the explanation of perceived risks, we can utilize the $F$-test.
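As a quick check (a sketch, assuming the `ols2` object and `ds.temp` data estimated above), the multiple and adjusted $R^2$ reported by `summary()` can be reproduced by hand from equations (13.1) and (13.2):
``````n <- nrow(ds.temp) # number of observations
k <- 4 # number of IVs in ols2
RSS <- sum(ols2\$residuals^2)
TSS <- sum((ds.temp\$glbcc_risk - mean(ds.temp\$glbcc_risk))^2)
1 - RSS/TSS # multiple R-squared
1 - (RSS/(n - k - 1))/(TSS/(n - 1)) # adjusted R-squared``````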
$F$-test
The $F$-test is a test statistic based on the $F$ distribution, in the same way the $t$-test is based on the $t$ distribution. The $F$ distribution skews right and ranges between $0$ and $\infty$. Just like the $t$ distribution, the $F$ distribution approaches normal as the degrees of freedom increase.^[Note that the $F$ distribution is the square of a $t$-distributed variable with $m$ degrees of freedom. The $F$ distribution has $1$ degree of freedom in the numerator and $m$ degrees of freedom in the denominator: $t^2_m = F_{1,m}$.]
$F$-tests are used to test for the statistical significance of the overall model fit. The null hypothesis for an $F$-test is that the model offers no improvement for predicting $Y_i$ over the mean of $Y$, $\bar{Y}$.
The formula for the $F$-test is:
$$F = \frac{ESS/k}{RSS/(n-k-1)} \tag{13.3}$$
where $k$ is the number of parameters and $n-k-1$ are the degrees of freedom. Therefore, $F$ is a ratio of the explained variance to the residual variance, correcting for the number of observations and parameters. The $F$-value is compared to the $F$-distribution, just like a $t$-distribution, to obtain a $p$-value. Note that the `R` output includes the $F$ statistic and $p$-value.
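To make the calculation concrete, here is a minimal sketch (assuming `ols2` and `ds.temp` from above) that reproduces the overall $F$ statistic and $p$-value reported at the bottom of the `summary(ols2)` output:
``````n <- nrow(ds.temp)
k <- 4 # number of IVs in ols2
RSS <- sum(ols2\$residuals^2)
ESS <- sum((ds.temp\$glbcc_risk - mean(ds.temp\$glbcc_risk))^2) - RSS
F.overall <- (ESS/k)/(RSS/(n - k - 1)) # compare to the reported F of 323
pf(F.overall, df1 = k, df2 = n - k - 1, lower.tail = FALSE) # p-value``````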
Nested $F$-test
For model building we turn to the nested $F$-test, which tests whether a more complex model (with more IVs) adds to the explanatory power over a simpler model (with fewer IVs). To find out, we calculate an $F$-statistic for the model improvement:
$$F = \frac{(ESS_1 - ESS_0)/q}{RSS_1/(n-k-1)} \tag{13.4}$$
where $q$ is the difference in the number of IVs between the simpler and the more complex models. The complex model has $k$ IVs (and estimates $k$ parameters), and the simpler model has $k-q$ IVs (and estimates only $k-q$ parameters). $ESS_1$ is the explained sum of squares for the complex model. $RSS_1$ is the residual sum of squares for the complex model. $ESS_0$ is the explained sum of squares for the simpler model. So the nested-$F$ represents the ratio of the additional explanation per added IV, over the residual sum of squares divided by the model degrees of freedom.
We can use `R` to calculate the $F$ statistic based on our previous example.
``````TSS <- sum((ds.temp\$glbcc_risk-mean(ds.temp\$glbcc_risk))^2)
TSS``````
``## [1] 21059.86``
``````RSS.mod1 <- sum(ols1\$residuals^2)
RSS.mod1``````
``## [1] 20519.57``
``````ESS.mod1 <- TSS-RSS.mod1
ESS.mod1``````
``## [1] 540.2891``
``````RSS.mod2 <- sum(ols2\$residuals^2)
RSS.mod2``````
``## [1] 13414.89``
``````ESS.mod2 <- TSS-RSS.mod2
ESS.mod2``````
``## [1] 7644.965``
``````F <- ((ESS.mod2 - ESS.mod1)/1)/(RSS.mod2/(length(ds.temp\$glbcc_risk)-4-1))
F``````
``## [1] 1200.629``
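The corresponding $p$-value for this nested $F$ can be obtained from the $F$ distribution with $q = 1$ and $n-k-1$ degrees of freedom (a sketch, using the objects computed above):
``````# p-value for the nested F computed above (df1 = q = 1)
pf(F, df1 = 1, df2 = length(ds.temp\$glbcc_risk) - 4 - 1, lower.tail = FALSE)``````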
Or, you can simply use the `anova` function in `R`:
``anova(ols1,ols2) ``
``````## Analysis of Variance Table
##
## Model 1: glbcc_risk ~ age + education + income
## Model 2: glbcc_risk ~ age + education + income + ideol
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 2268 20520
## 2 2267 13415 1 7104.7 1200.6 < 0.00000000000000022 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
As shown using both approaches, the inclusion of ideology significantly improves model fit.
13.1.3 Risks in Model Building
As is true of most things in life, there are risks to consider when building statistical models. First, are you including irrelevant $X$’s? These can increase model complexity, reduce adjusted $R^2$, and increase model variability across samples. Remember that you should have a theoretical basis for inclusion of all of the variables in your model.
Second, are you omitting relevant $X$’s? Not including important variables can reduce model fit and can bias other estimated coefficients, particularly when the omitted $X$ is related to both other $X$’s and to the dependent variable $Y$.
Finally, remember that we are using sample data. Therefore, about 5% of the time, our sample will include random observations of $X$’s that result in $B$’s that meet classical hypothesis tests, resulting in a Type I error. Conversely, the $B$’s may be important, but the sample data will randomly include observations of $X$ that result in estimated parameters that do not meet the classical statistical tests, resulting in a Type II error. That’s why we rely on theory, prior hypotheses, and replication.
Almost all statistical software packages (including `R`) permit a number of mechanical “search strategies” for finding IVs that make a statistically significant contribution to the prediction of the model’s dependent variable. The most common of these is called stepwise regression, which may also be referred to as forward, backward (or maybe even upside down!) stepwise regression. Stepwise procedures do not require that the analyst think: you just have to designate a pool of possible IVs and let the package go to work, sifting through the IVs to identify those that (on the basis of your sample data) appear to be related to the model dependent variable. The stepwise procedures use sequential $F$-tests, sequentially adding variables that “improve the fit” of the mindless model until there are no more IVs that meet some threshold (usually $p<0.05$) of statistical significance. These procedures are like mechanically wringing all of the explanation you can get for $Y$ out of some pool of $X$.
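For illustration only (a sketch, not a recommendation, assuming the `ols2` model and `ds.temp` data from the previous section are still available), this is roughly what such a mechanical search looks like in `R`. Note that base `R`’s `step()` function selects on AIC rather than on the $p$-value threshold described above, but the mechanical sifting is the same in spirit:
``````# Backward stepwise search starting from the full model (illustrative only)
step(ols2, direction = "backward")``````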
You should already recognize that these kinds of methods pose serious problems. First and foremost, this is an atheoretical approach to model building. But what if you have no theory to start with – is a stepwise approach appropriate then? No, for several reasons. If any of the candidate $X$ variables are strongly correlated, the inclusion of the first one will “use up” some of the explanation of the second, because of the way OLS calculates partial regression coefficients. For that reason, once one of the variables is mechanically selected, the other will tend to be excluded because it will have less to contribute to $Y$. Perhaps more damning, stepwise approaches are highly susceptible to inclusion of spuriously related variables. Recall that we are using samples drawn from the larger population, and that samples are subject to random variation. If the stepwise process uses the classical 0.05 cut-off for inclusion of a variable, that means that one time in twenty (in the long run) we will include a variable that meets the criterion only by random chance.23 Recall that the classical hypothesis test requires that we specify our hypothesis in advance; stepwise processes simply rummage around within a set of potential IVs to find those that fit.
There have been notable cases in which mechanical model building has resulted in seriously problematic “findings” that have very costly implications for society. One is recounted in the PBS Frontline episode called “Currents of Fear”.^[The program was written, produced and directed by Jon Palfreman, and it was first broadcast on June 13, 1995.] The story concerns whether electromagnetic fields (EMFs) from technologies including high-voltage power lines cause cancer in people who are exposed. The problem was that “cancer clusters” could be identified that were proximate to the power lines, but no laboratory experiments could find a connection. However, concerned citizens and activists persisted in believing there was a causal relationship. In that context, the Swedish government sponsored a very ambitious study to settle the question. Here is the text of the discussion from the Frontline program:
… in 1992, a landmark study appeared from Sweden. A huge investigation, it enrolled everyone living within 300 meters of Sweden’s high-voltage transmission line system over a 25-year period. They went far beyond all previous studies in their efforts to measure magnetic fields, calculating the fields that the children were exposed to at the time of their cancer diagnosis and before. This study reported an apparently clear association between magnetic field exposure and childhood leukemia, with a risk ratio for the most highly exposed of nearly 4.
The Swedish government announced it was investigating new policy options, including whether to move children away from schools near power lines. Surely, here was the proof that power lines were dangerous, the proof that even the physicists and biological naysayers would have to accept. But three years after the study was published, the Swedish research no longer looks so unassailable. This is a copy of the original contractor’s report, which reveals the remarkable thoroughness of the Swedish team. Unlike the published article, which just summarizes part of the data, the report shows everything they did in great detail, all the things they measured and all the comparisons they made.
When scientists saw how many things they had measured – nearly 800 risk ratios are in the report – they began accusing the Swedes of falling into one of the most fundamental errors in epidemiology, sometimes called the multiple comparisons fallacy.
So, according to the Frontline report, the Swedish EMF study regressed the incidence of nearly 800 possible cancers onto the proximity of its citizens to high-voltage power lines. In some cases, there appeared to be a positive relationship. These they reported. In other cases, there was no relationship, and in some the relationship was negative - which would seem to imply (if you were so silly as to do so) that living near the high voltage lines actually protected people from cancer. But only the positive relationships were included in the reports, leading to a false impression that the study had confirmed that proximity to high-voltage lines causes cancer. Embarrassing to the study authors, to put it mildly.
13.03: Summary
This chapter has focused on multiple regression model building. The keys to that process are understanding (a) the critical role of theory and prior research findings in model specification, and (b) the meaning of the partial regression coefficients produced by OLS. When theory is not well-developed, you can thoughtfully employ nested F-tests to evaluate whether the hypothesized inclusion of an $X$ variable meaningfully contributes to the explanation of $Y$. But you should avoid reliance on mechanical model-building routines, like step-wise regression, because these can lead you down into statistical perdition. None of us want to see that happen!
1. Model building also concerns decisions about model functional form, which we address in the next chapter.↩
2. Add to that the propensity of journals to publish articles that have new and exciting findings, in the form of statistically significant modeled coefficients, and you can see that there would be a substantial risk: that of finding and promoting nonsense findings.↩
Thus far, we have considered OLS models that include variables measured on interval level scales (or, in a pinch and with caution, ordinal scales). That is fine when we have variables for which we can develop valid and reliable interval (or ordinal) measures. But in the policy and social science worlds, we often want to include in our analysis concepts that do not readily admit to interval measure – including many cases in which a variable has an “on - off”, or “present - absent” quality. In other cases we want to include a concept that is essentially nominal in nature, such that an observation can be categorized as a subset but not measured on a “high-low” or “more-less” type of scale. In these instances we can utilize what are generally known as dummy variables, which are also referred to as indicator variables, Boolean variables, or categorical variables.
What the Heck are “Dummy Variables”?
• A dichotomous variable, with values of 0 and 1;
• A value of 1 represents the presence of some quality, a zero its absence;
• The 1s are compared to the 0s, who are known as the "referent group";
• Dummy variables are often thought of as a proxy for a qualitative variable.
Dummy variables allow for tests of the differences in overall value of $Y$ for different nominal groups in the data. They are akin to a difference of means test for the groups identified by the dummy variable. Dummy variables allow for comparisons between an included (the 1s) and an omitted (the 0s) group. Therefore, it is important to be clear about which group is omitted and serving as the "comparison category."
It is often the case that there are more than two groups represented by a set of nominal categories. In that case, the variable will consist of two or more dummy variables, with 0/1 codes for each category except the referent group (which is omitted). Several examples of categorical variables that can be represented in multiple regression with dummy variables include:
• Experimental treatment and control groups (treatment=1, control=0)
• Gender (male=1, female=0 or vice versa)
• Race and ethnicity (a dummy for each group, with one omitted referent group)
• Region of residence (dummy for each region with one omitted reference region)
• Type of education (dummy for each type with omitted reference type)
• Religious affiliation (dummy for each religious denomination with omitted reference)
The value of the dummy coefficient represents the estimated difference in $Y$ between the dummy group and the reference group. Because the estimated difference is the average over all of the $Y$ observations, the dummy is best understood as a change in the value of the intercept ($A$) for the "dummied" group. This is illustrated in Figure \(1\). In this illustration, the value of $Y$ is a function of $X_1$ (a continuous variable) and $X_2$ (a dummy variable). When $X_2$ is equal to 0 (the referent case) the top regression line applies. When $X_2=1$, the value of $Y$ is reduced to the bottom line. In short, $X_2$ has a negative estimated partial regression coefficient represented by the difference in height between the two regression lines.
For a case with multiple nominal categories (e.g., region) the procedure is as follows: (a) determine which category will be assigned as the referent group; (b) create a dummy variable for each of the other categories. For example, if you are coding a dummy for four regions (North, South, East and West), you could designate the South as the referent group. Then you would create dummies for the other three regions. Then, all observations from the North would get a value of 1 in the North dummy, and zeros in all others. Similarly, East and West observations would receive a 1 in their respective dummy category and zeros elsewhere. The observations from the South region would be given values of zero in all three categories. The interpretation of the partial regression coefficients for each of the three dummies would then be the estimated difference in $Y$ between observations from the North, East and West and those from the South.
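A small sketch illustrates how `R` constructs such dummies from a factor; here `region` is a hypothetical variable created just for the example, not one of the survey measures used elsewhere in this chapter:
``````# Hypothetical region factor; relevel() sets South as the omitted referent
region <- factor(c("North", "South", "East", "West", "South", "North"))
region <- relevel(region, ref = "South")
model.matrix(~ region) # one 0/1 dummy for each non-referent region``````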
Now let’s walk through an example of an `R` model with a dummy variable and the interpretation of that model. We will predict climate change risk using age, education, income, ideology, and "gender", a dummy variable for gender for which 1 = male and 0 = female.
``````ds.temp <- filter(ds) %>%
dplyr::select("glbcc_risk","age","education","income","ideol","gender") %>% na.omit()
ols1 <- lm(glbcc_risk ~ age + education + income + ideol + gender, data = ds.temp)
summary(ols1)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age + education + income + ideol +
## gender, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.8976 -1.6553 0.1982 1.4814 6.7046
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.9396287313 0.3092105590 35.379 < 0.0000000000000002 ***
## age -0.0040621210 0.0036713524 -1.106 0.26865
## education 0.0665255149 0.0299689664 2.220 0.02653 *
## income -0.0000023716 0.0000009083 -2.611 0.00908 **
## ideol -1.0321209152 0.0299808687 -34.426 < 0.0000000000000002 ***
## gender -0.2221178483 0.1051449213 -2.112 0.03475 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.431 on 2265 degrees of freedom
## Multiple R-squared: 0.364, Adjusted R-squared: 0.3626
## F-statistic: 259.3 on 5 and 2265 DF, p-value: < 0.00000000000000022``````
First note that the inclusion of the dummy variable does not change the manner in which you interpret the other (non-dummy) variables in the model; the estimated partial regression coefficients for age, education, income and ideology should all be interpreted as described in the prior chapter. Note that the estimated partial regression coefficient for "gender" is negative and statistically significant, indicating that males are less likely to be concerned about the environment than are females. The estimate indicates that, all else being equal, the average difference between men and women on the climate change risk scale is -0.2221178.
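A brief sketch (assuming the `ols1` model and `ds.temp` data from above) makes this interpretation concrete: two respondents identical on every other variable, one female and one male, have predicted risk scores that differ by exactly the estimated gender coefficient.
``````# Predicted risk for a female (gender = 0) and a male (gender = 1)
# respondent who are otherwise at the sample means
new.obs <- data.frame(age = mean(ds.temp\$age),
 education = mean(ds.temp\$education),
 income = mean(ds.temp\$income),
 ideol = mean(ds.temp\$ideol),
 gender = c(0, 1))
predict(ols1, newdata = new.obs) # difference is about -0.22``````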
Dummy variables can also be used to estimate the ways in which the effect of a variable differs across subsets of cases. These kinds of effects are generally called "interactions." When an interaction occurs, the effect of one $X$ is dependent on the value of another. Typically, an OLS model is additive, where the $B$’s are added together to predict $Y$;
$Y_i = A + BX_1 + BX_2 + BX_3 + BX_4 + E_i$.
However, an interaction model has a multiplicative effect where two of the IVs are multiplied;
$Y_i = A + BX_1 + BX_2 + BX_3 \times BX_4 + E_i$.
A "slope dummy" is a special kind of interaction in which a dummy variable is interacted with (multiplied by) a scale (ordinal or higher) variable. Suppose, for example, that you hypothesized that the effects of political ideology on perceived risks of climate change were different for men and women. Perhaps men are more likely than women to consistently integrate ideology into climate change risk perceptions. In such a case, a dummy variable (0=women, 1=men) could be interacted with ideology (1=strong liberal, 7=strong conservative) to predict levels of perceived risk of climate change (0=no risk, 10=extreme risk). If your hypothesized interaction was correct, you would observe the kind of pattern shown in Figure \(2\).
We can test our hypothesized interaction in `R`, controlling for the effects of age and income.
``````ols2 <- lm(glbcc_risk ~ age + income + education + gender * ideol, data = ds.temp)
summary(ols2)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age + income + education + gender *
## ideol, data = ds.temp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -8.718 -1.704 0.166 1.468 6.929
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.6004885194 0.3296900513 32.153 < 0.0000000000000002 ***
## age -0.0041366805 0.0036653120 -1.129 0.25919
## income -0.0000023222 0.0000009069 -2.561 0.01051 *
## education 0.0682885587 0.0299249903 2.282 0.02258 *
## gender 0.5971981026 0.2987398877 1.999 0.04572 *
## ideol -0.9591306050 0.0389448341 -24.628 < 0.0000000000000002 ***
## gender:ideol -0.1750006234 0.0597401590 -2.929 0.00343 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.427 on 2264 degrees of freedom
## Multiple R-squared: 0.3664, Adjusted R-squared: 0.3647
## F-statistic: 218.2 on 6 and 2264 DF, p-value: < 0.00000000000000022``````
The results indicate a negative and significant interaction effect for gender and ideology. Consistent with our hypothesis, this means that the effect of ideology on climate change risk is more pronounced for males than females. Put differently, the slope of ideology is steeper for males than it is for females. This is shown in Figure \(3\).
``````ds.temp\$gend.factor <- factor(ds.temp\$gender, levels=c(0,1),labels=c("Female","Male"))
library(effects)
ols3 <- lm(glbcc_risk~ age + income + education + ideol * gend.factor, data = ds.temp)
plot(effect("ideol*gend.factor",ols3),ylim=0:10)``````
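A short sketch (using the `ols2` coefficients estimated above) makes the two implied ideology slopes explicit:
``````# Implied slope of ideology for each gender from the interaction model:
# females (gender = 0) get the ideol coefficient alone; males (gender = 1)
# add the gender:ideol interaction term.
b <- coef(ols2)
c(female = unname(b["ideol"]),
 male = unname(b["ideol"] + b["gender:ideol"]))``````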
In sum, dummy variables add greatly to the flexibility of OLS model specification. They permit the inclusion of categorical variables, and they allow for testing hypotheses about interactions of groups with other IVs within the model. This kind of flexibility is one reason that OLS models are widely used by social scientists and policy analysts.
14.03: Standardized Regression Coefficients
In most cases, the various IVs in a model are represented on different measurement scales. For example, ideology ranges from 1 to 7, while age ranges from 18 to over 90 years old. These different scales make comparing the effects of the various IVs difficult. If we want to directly compare the magnitudes of the effects of ideology and age on levels of environmental concern, we would need to standardize the variables.
One way to standardize variables is to create a $Z$-score based on each variable. Variables are standardized in this way as follows:
$$Z_i = \frac{X_i - \bar{X}}{s_x} \tag{14.1}$$
where $s_x$ is the s.d. of $X$. Standardizing the variables by creating $Z$-scores re-scales them so that each variable has a mean of $0$ and a s.d. of $1$. Therefore, all variables have the same mean and s.d. It is important to realize (and it is somewhat counter-intuitive) that the standardized variables retain all of the variation that was in the original measure.
A second way to standardize variables converts the unstandardized $B$ into a standardized $B'$.
$$B'_k = B_k\frac{s_k}{s_Y} \tag{14.2}$$
where $B_k$ is the unstandardized coefficient of $X_k$, $s_k$ is the s.d. of $X_k$, and $s_Y$ is the s.d. of $Y$. Standardized regression coefficients, also known as beta weights or “betas”, are those we would get if we regress a standardized $Y$ onto standardized $X$’s.
Interpreting Standardized Betas
• The standard deviation change in $Y$ for a one-standard deviation change in $X$
• All $X$’s are on an equal footing, so one can compare the strength of the effects of the $X$’s
• Cannot be used for comparisons across samples
• Variances will differ across different samples
We can use the `scale` function in `R` to calculate a $Z$-score for each of our variables, and then re-run our model.
``````stan.ds <- ds.temp %>%
dplyr::select(glbcc_risk, age, education, income, ideol, gender) %>%
scale %>%
data.frame()
ols3 <- lm(glbcc_risk ~ age + education + income + ideol + gender, data = stan.ds)
summary(ols3)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age + education + income + ideol +
## gender, data = stan.ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.92180 -0.54357 0.06509 0.48646 2.20164
##
## Coefficients:
## Estimate Std. Error t value
## (Intercept) 0.0000000000000001685 0.0167531785616065292 0.000
## age -0.0187675384877126518 0.0169621356203379960 -1.106
## education 0.0395657731919867237 0.0178239180606745221 2.220
## income -0.0466922668201090602 0.0178816880127353542 -2.611
## ideol -0.5882792369403809785 0.0170882328807871603 -34.426
## gender -0.0359158695199312886 0.0170016561132237121 -2.112
## Pr(>|t|)
## (Intercept) 1.00000
## age 0.26865
## education 0.02653 *
## income 0.00908 **
## ideol < 0.0000000000000002 ***
## gender 0.03475 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.7984 on 2265 degrees of freedom
## Multiple R-squared: 0.364, Adjusted R-squared: 0.3626
## F-statistic: 259.3 on 5 and 2265 DF, p-value: < 0.00000000000000022``````
In addition, we can convert the original unstandardized coefficient for ideology, to a standardized coefficient.
``````sdX <- sd(ds.temp\$ideol, na.rm=TRUE)
sdY <- sd(ds.temp\$glbcc_risk, na.rm=TRUE)
ideology.prime <- ols1\$coef[5]*(sdX/sdY)
ideology.prime``````
``````## ideol
## -0.5882792``````
Using either approach, standardized coefficients allow us to compare the magnitudes of the effects of each of the IVs on $Y$.
14.04: Summary
This chapter has focused on options in designing and using OLS models. We first covered the use of dummy variables to capture the effects of group differences on estimates of $Y$. We then explained how dummy variables, when interacted with scale variables, can provide estimates of the differences in how the scale variable affects $Y$ across the different subgroups represented by the dummy variable. Finally, we introduced the use of standardized regression coefficients as a means to compare the effects of different $X$s on $Y$ when the scales of the $X$s differ. Overall, these refinements in the use of OLS permit great flexibility in the application of regression models to estimation and hypothesis testing in policy analysis and social science research.
As described in earlier chapters, there is a set of key assumptions that must be met to justify the use of the $t$ and $F$ distributions in the interpretation of OLS model results. In particular, these assumptions are necessary for hypothesis tests and the generation of confidence intervals. When met, the assumptions make OLS more efficient than any other unbiased estimator.
OLS Assumptions
Systematic Component
• Linearity
• Fixed $X$
Stochastic Component
• Errors have constant variance across the range of $X$
$E(\epsilon_i^2) = \sigma_\epsilon^2$
• Errors are independent of $X$ and other $\epsilon_i$
$E(\epsilon_i) \equiv E(\epsilon|x_i) = 0$
and
$E(\epsilon_i) \neq E(\epsilon_j)$ for $i \neq j$
• Errors are normally distributed
$\epsilon_i \sim N(0, \sigma_\epsilon^2)$
There is an additional set of assumptions needed for “correct” model specification. An ideal OLS model would have the following characteristics:
• $Y$ is a linear function of the modeled $X$ variables
• No $X$’s are omitted that affect $E(Y)$ and that are correlated with included $X$’s. Note that exclusion of other $X$s that are related to $Y$, but are not related to the $X$s in the model, does not critically undermine the model estimates; however, it does reduce the overall ability to explain $Y$.
• All $X$’s in the model affect $E(Y)$.
Note that if we omit an $X$ that is related to $Y$ and other $X$s in the model, we will bias the estimate of the included $X$s. Also consider the problem of including $X$s that are related to other $X$s in the model, but not related to $Y$. This scenario would reduce the independent variance in $X$ used to predict $Y$.
Table 15.1 summarizes the various classes of assumption failures and their implications.
When considering the assumptions, our data permit empirical tests for some assumptions, but not all. Specifically, we can check for linearity, normality of the residuals, homoscedasticity, data “outliers” and multicollinearity. However, we can’t check for correlation between the error and the $X$’s, whether the mean error equals zero, and whether all the relevant $X$’s are included.
15.02: OLS Diagnostic Techniques
In this section, we examine the residuals from a multiple regression model for potential problems. Note that we use a subsample of the first 500 observations, drawn from the larger "tbur.data" dataset, to permit easier evaluation of the plots of residuals. We begin with an evaluation of the assumption of the linearity of the relationship between the $X$s and $Y$, and then evaluate assumptions regarding the error term.
Our multiple regression model predicts survey respondents’ levels of perceived risk of climate change ($Y$) using political ideology, age, household income, and educational achievement as independent variables ($X$s). The results of the regression model are as follows:
``````ols1 <- lm(glbcc_risk ~ age + education + income + ideol, data = ds.small)
summary(ols1)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age + education + income + ideol, data = ds.small)
##
## Residuals:
## Min 1Q Median 3Q Max
## -7.1617 -1.7131 -0.0584 1.7216 6.8981
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 12.0848259959 0.7246993630 16.676 <0.0000000000000002 ***
## age -0.0055585796 0.0084072695 -0.661 0.509
## education -0.0186146680 0.0697901408 -0.267 0.790
## income 0.0000001923 0.0000022269 0.086 0.931
## ideol -1.2235648372 0.0663035792 -18.454 <0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.353 on 445 degrees of freedom
## Multiple R-squared: 0.4365, Adjusted R-squared: 0.4315
## F-statistic: 86.19 on 4 and 445 DF, p-value: < 0.00000000000000022``````
On the basis of the `R` output, the model appears to be quite reasonable, with a statistically significant estimated partial regression coefficient for political ideology. But let’s take a closer look.
15.2.1 Non-Linearity
One of the most critical assumptions of OLS is that the relationships between variables are linear in their functional form. We start with a stylized example (a fancy way of saying we made it up!) of what a linear and nonlinear pattern of residuals would look like. Figure \(2\) shows an illustration of how the residuals would look with a clearly linear relationship, and Figure \(3\) illustrates how the residuals would look with a clearly non-linear relationship.
Figure \(2\): Linear
Now let’s look at the residuals from our example model. We can check the linear nature of the relationship between the DV and the IVs in several ways. First we can plot the residuals by the values of the IVs. We also can add a lowess line to demonstrate the relationship between each of the IVs and the residuals, and add a line at $0$ for comparison.
``````ds.small\$fit.r <- ols1\$residuals
ds.small\$fit.p <- ols1\$fitted.values``````
``````library(reshape2)
ds.small %>%
melt(measure.vars = c("age", "education", "income", "ideol", "fit.p")) %>%
ggplot(aes(value, fit.r, group = variable)) +
geom_point(shape = 1) +
geom_smooth(method = loess) +
geom_hline(yintercept = 0) +
facet_wrap(~ variable, scales = "free")``````
As we can see in Figure \(4\), the plots of residuals by both income and ideology seem to indicate non-linear relationships. We can check this “ocular impression” by squaring each term and using the `anova` function to compare model fit.
``````ds.small\$age2 <- ds.small\$age^2
ds.small\$edu2 <- ds.small\$education^2
ds.small\$inc2 <- ds.small\$income^2
ds.small\$ideology2<-ds.small\$ideol^2
ols2 <- lm(glbcc_risk ~ age+age2+education+edu2+income+inc2+ideol+ideology2, data=ds.small)
summary(ols2)``````
``````##
## Call:
## lm(formula = glbcc_risk ~ age + age2 + education + edu2 + income +
## inc2 + ideol + ideology2, data = ds.small)
##
## Residuals:
## Min 1Q Median 3Q Max
## -7.1563 -1.5894 0.0389 1.4898 7.3417
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 9.66069872535646 1.93057305147186 5.004 0.000000812 ***
## age 0.02973349791714 0.05734762412523 0.518 0.604385
## age2 -0.00028910659305 0.00050097599702 -0.577 0.564175
## education -0.48137978481400 0.35887879735475 -1.341 0.180499
## edu2 0.05131569933892 0.03722361864679 1.379 0.168723
## income 0.00000285263412 0.00000534134363 0.534 0.593564
## inc2 -0.00000000001131 0.00000000001839 -0.615 0.538966
## ideol -0.05726196851107 0.35319018414228 -0.162 0.871279
## ideology2 -0.13270718319750 0.03964680646295 -3.347 0.000886 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.33 on 441 degrees of freedom
## Multiple R-squared: 0.4528, Adjusted R-squared: 0.4429
## F-statistic: 45.61 on 8 and 441 DF, p-value: < 0.00000000000000022``````
The model output indicates that ideology may have a non-linear relationship with risk perceptions of climate change. For ideology, only the squared term is significant, indicating that levels of perceived risk of climate change decline at an increasing rate for those on the most conservative end of the scale. Again, this is consistent with the visual inspection of the relationship between ideology and the residuals in Figure \(4\). The question remains whether the introduction of these non-linear (polynomial) terms improves overall model fit. We can check that with an analysis of variance across the simple model (without polynomial terms) and the models with the squared terms.
``anova(ols1,ols2)``
``````## Analysis of Variance Table
##
## Model 1: glbcc_risk ~ age + education + income + ideol
## Model 2: glbcc_risk ~ age + age2 + education + edu2 + income + inc2 +
## ideol + ideology2
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 445 2464.2
## 2 441 2393.2 4 71.059 3.2736 0.01161 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
As we can see, the Anova test indicates that including the squared terms improves model fit; therefore, the relationships include nonlinear components.
A final way to check for non-linearity is Ramsey’s Regression Equation Specification Error Test (RESET). This tests the functional form of the model. Similar to our test using squared terms, the RESET test calculates an $F$ statistic that compares the linear model with a model(s) that raises the IVs to various powers. Specifically, it tests whether there are statistically significant differences in the $R^2$ of each of the models. Similar to a nested $F$ test, it is calculated by:
$$F = \frac{(R^2_1 - R^2_0)/q}{(1 - R^2_1)/(n - k_1)} \tag{15.1}$$
where $R^2_0$ is the $R^2$ of the linear model, $R^2_1$ is the $R^2$ of the polynomial model(s), $q$ is the number of new regressors, and $k_1$ is the number of IVs in the polynomial model(s). The null hypothesis is that the functional relationship between the $X$’s and $Y$ is linear, therefore the coefficients of the second and third powers to the IVs are zero. If there is a low $p$-value (i.e., if we can reject the null hypothesis), non-linear relationships are suspected. This test can be run using the `resettest` function from the `lmtest` package. Here we are setting the IVs to the second and third powers and we are examining the regressor variables.24
``````library(lmtest)
resettest(ols1,power=2:3,type="regressor")``````
``````##
## RESET test
##
## data: ols1
## RESET = 2.2752, df1 = 8, df2 = 437, p-value = 0.02157``````
Again, the test provides evidence that we have a non-linear relationship.
What should we do when we identify a nonlinear relationship between our $Y$ and $X$s? The first step is to look closely at the bi-variate plots, to try to discern the correct functional form for each $X$ regressor. If the relationship looks curvilinear, try a polynomial regression in which you include both $X$ and $X^2$ for the relevant IVs. It may also be the case that a skewed DV or IV is causing the problem. This is not unusual when, for example, the income variable plays an important role in the model, and the distribution of income is skewed upward. In such a case, you can try transforming the skewed variable, using an appropriate log form.
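As a sketch of the log-transformation idea (assuming the `ds.small` data from above; `log.income` and `ols.log` are illustrative names, not objects used elsewhere in this chapter), one could refit the model with income on the log scale:
``````# log1p() is used rather than log() in case any incomes are zero
ds.small\$log.income <- log1p(ds.small\$income)
ols.log <- lm(glbcc_risk ~ age + education + log.income + ideol, data = ds.small)
summary(ols.log)``````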
It is possible that variable transformations won’t suffice, however. In that case, you may have no other option but to try non-linear forms of regression. These non-OLS kinds of models typically use maximum likelihood functions (see the next chapter) to fit the model to the data. But that takes us considerably beyond the focus of this book.
15.2.2 Non-Constant Variance, or Heteroscedasticity
Recall that OLS requires constant variance because an even spread of residuals is assumed for both $F$ and $t$ tests. To examine constant variance, we can produce (read as “make up”) a baseline plot to demonstrate what constant variance in the residuals "should" look like.
As we can see in Figure \(5\), the residuals are spread evenly and in a seemingly random fashion, much like the "sneeze plot" discussed in Chapter 10. This is the ideal pattern, indicating that the residuals do not vary systematically over the range of the predicted value for $X$. The residuals are homoscedastic, and thus provide the appropriate basis for the $F$ and $t$ tests needed for evaluating your hypotheses.
The first step in determining whether we have constant variance is to plot the residuals by the fitted values for $Y$, as follows:25
``````ds.small\$fit.r <- ols1\$residuals
ds.small\$fit.p <- ols1\$fitted.values
ggplot(ds.small, aes(fit.p, fit.r)) +
geom_jitter(shape = 1) +
geom_hline(yintercept = 0, color = "red") +
ylab("Residuals") +
xlab("Fitted")``````
Based on the pattern evident in Figure \(7\), the residuals appear to show heteroscedasticity. We can test for non-constant error using the Breusch-Pagan (aka Cook-Weisberg) test. This tests the null hypothesis that the error variance is constant, therefore a small $p$-value would indicate that we have heteroscedasticity. In `R` we can use the `ncvTest` function from the `car` package.
``````library(car)
ncvTest(ols1)``````
``````## Non-constant Variance Score Test
## Variance formula: ~ fitted.values
## Chisquare = 12.70938 Df = 1 p = 0.0003638269``````
The non-constant variance test provides confirmation that the residuals from our model are heteroscedastic.
What are the implications? Our $t$-tests for the estimated partial regression coefficients assumed constant variance. With the evidence of heteroscedasticity, we conclude that these tests are unreliable (the precision of our estimates will be greater in some ranges of $X$ than others).
There are several steps that can be considered when confronted by heteroscedasticity in the residuals. First, we can consider whether we need to re-specify the model, possibly because we have some omitted variables. If model re-specification does not correct the problem, we can use non-OLS regression techniques that include robust estimated standard errors. Robust standard errors are appropriate when error variance is unknown. Robust standard errors do not change the estimate of $B$, but adjust the estimated standard error of each coefficient, $SE(B)$, thus giving more accurate $p$-values. In this example, we draw on White’s (1980)26 method to calculate robust standard errors.
White uses a heteroscedasticity consistent covariance matrix (hccm) to calculate standard errors when the error term has non-constant variance. Under the OLS assumption of constant error variance, the covariance matrix of $b$ is:
$V(b) = (X'X)^{-1}X'V(y)X(X'X)^{-1}$
where $V(y) = \sigma^2_e I_n$,
therefore,
$V(b) = \sigma^2_e (X'X)^{-1}$.
If the error terms have distinct variances, a consistent estimator constrains $\Sigma$ to a diagonal matrix of the squared residuals,
$\Sigma = diag(\sigma^2_1, \ldots, \sigma^2_n)$
where $\sigma^2_i$ is estimated by $e^2_i$. Therefore the hccm estimator is expressed as:
$V_{hccm}(b) = (X'X)^{-1}X'diag(e^2_i, \ldots, e^2_n)X(X'X)^{-1}$
We can use the `hccm` function from the `car` package to calculate the robust standard errors for our regression model, predicting perceived environmental risk ($Y$) with political ideology, age, education and income as the $X$ variables.
``````library(car)
hccm(ols1) %>% diag() %>% sqrt()``````
``````## (Intercept) age education income ideol
## 0.668778725013 0.008030365625 0.069824489564 0.000002320899 0.060039031426``````
Using the `hccm` function we can create a function in `R` that will calculate the robust standard errors and the subsequent $t$-values and $p$-values.
``````library(car)
robust.se <- function(model) {
s <- summary(model)
wse <- sqrt(diag(hccm(ols1)))
t <- model\$coefficients/wse
p <- 2*pnorm(-abs(t))
results <- cbind(model\$coefficients, wse, t, p)
dimnames(results) <- dimnames(s\$coefficients)
results
}``````
We can then compare our results with the original simple regression model results.
``summary(ols1)``
``````##
## Call:
## lm(formula = glbcc_risk ~ age + education + income + ideol, data = ds.small)
##
## Residuals:
## Min 1Q Median 3Q Max
## -7.1617 -1.7131 -0.0584 1.7216 6.8981
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 12.0848259959 0.7246993630 16.676 <0.0000000000000002 ***
## age -0.0055585796 0.0084072695 -0.661 0.509
## education -0.0186146680 0.0697901408 -0.267 0.790
## income 0.0000001923 0.0000022269 0.086 0.931
## ideol -1.2235648372 0.0663035792 -18.454 <0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.353 on 445 degrees of freedom
## Multiple R-squared: 0.4365, Adjusted R-squared: 0.4315
## F-statistic: 86.19 on 4 and 445 DF, p-value: < 0.00000000000000022``````
``robust.se(ols1)``
``````## Estimate Std. Error t value
## (Intercept) 12.0848259958670 0.668778725013 18.06999168
## age -0.0055585796372 0.008030365625 -0.69219509
## education -0.0186146679570 0.069824489564 -0.26659225
## income 0.0000001922905 0.000002320899 0.08285175
## ideol -1.2235648372311 0.060039031426 -20.37948994
## Pr(>|t|)
## (Intercept) 0.00000000000000000000000000000000000000000000000000000000000000000000000054921988962793404326183377
## age 0.48881482326776815039437451559933833777904510498046875000000000000000000000000000000000000000000000
## education 0.78978312137982031870819810137618333101272583007812500000000000000000000000000000000000000000000000
## income 0.93396941638148500697269582815351895987987518310546875000000000000000000000000000000000000000000000
## ideol 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002542911``````
As we see, the estimated $B$’s remain the same, but the estimated standard errors, $t$-values and $p$-values are adjusted to reflect the robust estimation. Despite these adjustments, the results of the hypothesis test remain unchanged.
It is important to note that, while robust estimators can help atone for heteroscedasticity in your models, their use should not be seen as an alternative to careful model construction. The first step should always be to evaluate your model specification and functional form (e.g., the use of polynomials, inclusion of relevant variables), as well as possible measurement error, before resorting to robust estimation.
15.2.3 Independence of $E$
As noted above, we cannot test for the assumption that the error term $E$ is independent of the $X$’s. However we can test to see whether the error terms, $E_i$, are correlated with each other. One of the assumptions of OLS is that $E(\epsilon_i) \neq E(\epsilon_j)$ for $i \neq j$. When there is a relationship between the residuals, this is referred to as serial correlation or autocorrelation. Autocorrelation is most likely to occur with time-series data, however it can occur with cross-sectional data as well. To test for autocorrelation we use the Durbin-Watson, $d$, test statistic. The $d$ statistic is expressed as:
$$d = \frac{\sum_{i=2}^{n}(E_i - E_{i-1})^2}{\sum_{i=1}^{n}E_i^2} \tag{15.2}$$
The $d$ statistic ranges from $0$ to $4$: $0 \leq d \leq 4$. A $0$ indicates perfect positive correlation, $4$ indicates perfect negative correlation, and a $2$ indicates no autocorrelation. Therefore, we look for values of $d$ that are close to $2$.
We can use the `dwtest` function in the `lmtest` package to test the null hypothesis that autocorrelation is $0$, meaning that we don’t have autocorrelation.
``````library(lmtest)
dwtest(ols1)``````
``````##
## Durbin-Watson test
##
## data: ols1
## DW = 1.9008, p-value = 0.1441
## alternative hypothesis: true autocorrelation is greater than 0``````
Generally, a Durbin-Watson result between 1.5 and 2.5 indicates that any autocorrelation in the data will not have a discernible effect on your estimates. The test for our example model indicates that we do not have an autocorrelation problem with this model. If we did find autocorrelation, we would need to respecify our model to account for (or estimate) the relationships among the error terms. In time series analysis, where observations are taken sequentially over time, we would typically include a "lag" term (in which the value of $Y$ in period $t$ is predicted by the value of $Y$ in period $t-1$). This is a typical $AR1$ model, which would be discussed in a time-series analysis course. The entangled residuals can, of course, be much more complex, and require more specialized models (e.g., ARIMA or vector-autoregression models). These approaches are beyond the scope of this text.
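As a purely illustrative sketch of what a lag term looks like (using a built-in example series rather than the survey data analyzed in this chapter), an AR(1)-style specification simply regresses the series on its own prior value:
``````# Illustrative only: LakeHuron is a built-in annual time series in R
y <- as.numeric(LakeHuron)
y.lag <- dplyr::lag(y) # value in period t - 1 (first element is NA)
summary(lm(y ~ y.lag)) # lm() drops the NA row by default``````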
15.2.4 Normality of the Residuals
This is a critical assumption for OLS because (along with homoscedasticity) it is required for hypothesis tests and confidence interval estimation. It is particularly sensitive with small samples. Note that non-normality will increase sample-to-sample variation in model estimates.
To examine normality of the residuals we first plot the residuals and then run what is known as the Shapiro-Wilk normality test. Here we run the test on our example model, and plot the residuals.
``````p1 <- ggplot(ds.small, aes(fit.r)) +
geom_histogram(bins = 10, color = "black", fill = "white")``````
``````p2 <- ggplot(ds.small, aes(fit.r)) +
geom_density() +
stat_function(fun = dnorm, args = list(mean = mean(ds.small\$fit.r),
sd = sd(ds.small\$fit.r)),
color = "dodgerblue", size = 2, alpha = .5)``````
``````p3 <- ggplot(ds.small, aes("", fit.r)) +
geom_boxplot() ``````
``````p4 <- ggplot(ds.small, aes(sample = fit.r)) +
stat_qq(shape = 1) +
stat_qq_line(size = 1.5, alpha = .5)``````
It appears from the graphs, on the basis of an "ocular test", that the residuals are potentially normally distributed. Therefore, to perform a statistical test for non-normality, we use the Shapiro-Wilk, $W$, test statistic. $W$ is expressed as:
$$W = \frac{\left(\sum_{i=1}^{n}a_i x_{(i)}\right)^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2} \tag{15.3}$$
where $x_{(i)}$ are the ordered sample values and $a_i$ are constants generated from the means, variances, and covariances of the order statistics from a normal distribution. The Shapiro-Wilk test evaluates the null hypothesis that the residuals are normally distributed. To perform this test in `R`, use the `shapiro.test` function.
``shapiro.test(ols1\$residuals)``
``````##
## Shapiro-Wilk normality test
##
## data: ols1\$residuals
## W = 0.99566, p-value = 0.2485``````
Since we have a relatively large $p$-value, we fail to reject the null hypothesis of normally distributed errors. Our residuals are, according to our visual examination and this test, normally distributed.
To adjust for non-normal errors we can use robust estimators, as discussed earlier with respect to heteroscedasticity. Robust estimators correct for non-normality, but produce estimated standard errors of the partial regression coefficients that tend to be larger, and hence produce less model precision. Other possible steps, where warranted, include transformation of variables that may have non-linear relationships with $Y$. Typically this involves taking log transformations of the suspect variables.
15.2.5 Outliers, Leverage, and Influence
Apart from the distributional behavior of residuals, it is also important to examine the residuals for "unusual" observations. Unusual observations in the data may be cases of mis-coding (e.g., $-99$), mis-measurement, or perhaps special cases that require different kinds of treatment in the model. All of these may appear as unusual cases that are observed in your diagnostic analysis. The unusual cases that we should be most concerned about are regression outliers, that are potentially influential and that are suspect because of their differences from other cases.
Why should we worry about outliers? Recall that OLS minimizes the sum of the squared residuals for a model. Unusual cases – which by definition will have large residuals – have the potential to substantially influence our estimates of $B$ because their already large residuals are squared. A large outlier can thus result in OLS estimates that change the model intercept and slope.
There are several steps that can help identify outliers and their effects on your model. The first – and most obvious – is to examine the range of values in your $Y$ and $X$ variables. Do they fall within the appropriate ranges?
This step – too often omitted even by experienced analysts – can help you avoid often agonizing mis-steps that result from inclusion of miscoded data or missing values (e.g., -99) that need to be recoded before running your model. If you fail to identify these problems, they will show up in your residual analysis as outliers. But it is much easier to catch the problem before you run your model.
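A minimal sketch of this range check (assuming the `ds.small` data used above):
``````# Inspect the observed ranges before modeling to catch mis-codes such as -99
ds.small %>%
 dplyr::select(glbcc_risk, age, education, income, ideol) %>%
 summary()``````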
But sometimes we find outliers for reasons other than mis-codes, and identification requires careful examination of your residuals. First we discuss how to find outliers – unusual values of $Y$ – and leverage – unusual values of $X$ – since they are closely related.
15.2.6 Outliers
A regression outlier is an observation that has an unusual value on the dependent variable $Y$, conditioned on the values of the independent variables, $X$. Note that an outlier can have a large residual value, but not necessarily affect the estimated slope or intercept. Below we examine a few ways to identify potential outliers, and their effects on our estimated slope coefficients.
Using the regression example, we first plot the residuals to look for any possible outliers. In this plot we are plotting the raw residuals for each of the 500 observations. This is shown in Figure \(9\).
``````ggplot(ds.small, aes(row.names(ds.small), fit.r)) +
geom_point(shape = 1) +
geom_hline(yintercept = 0, color = "red")``````
Next, we can sort the residuals and find the case with the largest absolute value and examine that case.
``````# Sort the residuals
output.1 <- sort(ols1\$residuals) # smallest (most negative) first
output.2 <- sort(ols1\$residuals, decreasing = TRUE) # largest (most positive) first
# The head function returns the top result; the argument 1 returns 1 value only
head(output.1, 1) # most negative residual``````
``````## 333
## -7.161695``````
``head(output.2, 1) # largest positive residual``
``````## 104
## 6.898077``````
Then, we can examine the XX and YY values of those cases on key variables. Here we examine the values across all independent variables in the model.
``ds.small[c(298,94),c("age","education","income","ideol","glbcc_risk")] # [c(row numbers),c(column numbers)]``
``````## age education income ideol glbcc_risk
## 333 69 6 100000 2 2
## 104 55 7 94000 7 10``````
By examining case 298 (row name 333 in the output above), we can see that this is an outlier because the observed value of $Y$ is far from what would be expected, given the values of $X$. A wealthy older liberal would most likely rate climate change as riskier than a 2. In case 94 (row name 104), a strong conservative rates climate change risk at the highest possible value. This observation, while not consistent with the estimated relationship between ideology and environmental concern, is certainly not implausible. But the unusual appearance of a case with a strong conservative leaning and a high perceived risk of climate change results in a large residual.
What we really want to know is: does any particular case substantially change the regression results? If a case substantially changes the results, then it is said to have influence. Individual cases can be outliers without necessarily being influential. Note that DFBETAS are case statistics, therefore a DFBETA value will be calculated for each variable for each case.
DFBETAS
DFBETAS measure the influence of case $i$ on the $j$th estimated coefficient. Specifically, DFBETAS asks by how many standard errors $B_j$ changes when case $i$ is removed. DFBETAS are expressed as:
$$DFBETAS_{ij} = \frac{B_{j(-i)} - B_j}{SE(B_j)} \tag{15.4}$$
Note that if DFBETAS $> 0$, then case $i$ pulls $B_j$ up, and if DFBETAS $< 0$, then case $i$ pulls $B_j$ down. In general, if $|DFBETAS_{ij}| > \frac{2}{\sqrt{n}}$ then these cases warrant further examination. Note that this approach gets the top 5% of influential cases, given the sample size. For both simple (bi-variate) and multiple regression models the DFBETA cut-offs can be calculated in `R`.
``````df <- 2/sqrt(500)
df``````
``## [1] 0.08944272``
In this case, if $|DFBETAS|>0.0894427$ then the case can be examined for possible influence. Note, however, that in large datasets this may prove to be difficult, so you should examine the largest DFBETAS first. In our example, we will look only at the largest 5 DFBETAS.
To calculate the DFBETAS we use the `dfbetas` function. Then we examine the DFBETA values for the first five rows of our data.
``````df.ols1 <- dfbetas(ols1)
df.ols1[1:5,]``````
``````## (Intercept) age education income ideol
## 1 -0.004396485 0.005554545 0.01043817 -0.01548697 -0.005616679
## 2 0.046302381 -0.007569305 -0.02671961 -0.01401653 -0.042323468
## 3 -0.002896270 0.018301623 -0.01946054 0.02534233 -0.023111519
## 5 -0.072106074 0.060263914 0.02966501 0.01243482 0.015464937
## 7 -0.057608817 -0.005345142 -0.04948456 0.06456577 0.134103149``````
We can then plot the DFBETAS for each of the IVs in our regression models, and create lines for $\pm 0.089$. Figure \(10\) shows the DFBETAS for each variable in the multiple regression model.
``````library(reshape2) # melt() comes from the reshape2 package, if not already loaded
melt(df.ols1, varnames = c("index", "variable")) %>%
ggplot(aes(index, value)) +
geom_point() +
geom_hline(yintercept = df) +
geom_hline(yintercept = -df) +
facet_wrap(~ variable, scales = "free")``````
As can be seen, several cases seem to exceed the 0.089 cut-off. Next we find the case with the highest absolute DFBETA value, and examine the $X$ and $Y$ values for that case.
``````# Find the largest absolute DFBETA value
names(df.ols1) <- row.names(ds.small)
df.ols1[abs(df.ols1) == max(abs(df.ols1))] ``````
``````## <NA>
## 0.4112137``````
``````# an observation name may not be returned - let's figure out which observation it is
# convert df.ols1 from a matrix to a dataframe
class(df.ols1)``````
``## [1] "matrix"``
``````df2.ols1 <- as.data.frame(df.ols1)
# add an id variable
df2.ols1$id <- 1:450 # generate a new observation number
# the head function returns one row, because of the final argument 1
# syntax - head(data_set[with(data_set, order(+/-variable)), ], 1)
# Ideology
head(df2.ols1[with(df2.ols1, order(-ideol)), ], 1) # order declining``````
``````## (Intercept) age education income ideol id
## 333 -0.001083869 -0.1276632 -0.04252348 -0.07591519 0.2438799 298``````
``head(df2.ols1[with(df2.ols1, order(+ideol)), ], 1) # order increasing``
``````## (Intercept) age education income ideol id
## 148 -0.0477082 0.1279219 -0.03641922 0.04291471 -0.09833372 131``````
``````# Income
head(df2.ols1[with(df2.ols1, order(-income)), ], 1) # order declining``````
``````## (Intercept) age education income ideol id
## 494 -0.05137992 -0.01514244 -0.009938873 0.4112137 -0.03873292 445``````
``head(df2.ols1[with(df2.ols1, order(+income)), ], 1) # order increasing``
``````## (Intercept) age education income ideol id
## 284 0.06766781 -0.06611698 0.08166577 -0.4001515 0.04501527 254``````
``````# Age
head(df2.ols1[with(df2.ols1, order(-age)), ], 1) # order declining``````
``````## (Intercept) age education income ideol id
## 87 -0.2146905 0.1786665 0.04131316 -0.01755352 0.1390403 78``````
``head(df2.ols1[with(df2.ols1, order(+age)), ], 1) # order increasing``
``````## (Intercept) age education income ideol id
## 467 0.183455 -0.2193257 -0.1906404 0.02477437 0.1832784 420``````
``````# Education - the largest education DFBETA belongs to id 308
head(df2.ols1[with(df2.ols1, order(-education)), ], 1) # order declining``````
``````## (Intercept) age education income ideol id
## 343 -0.1751724 0.06071469 0.1813973 -0.05557382 0.09717012 308``````
``head(df2.ols1[with(df2.ols1, order(+education)), ], 1) # order increasing``
``````## (Intercept) age education income ideol id
## 105 0.05091437 0.1062966 -0.2033285 -0.02741242 -0.005880984 95``````
``````# View the output
df.ols1[abs(df.ols1) == max(abs(df.ols1))] ``````
``````## <NA>
## 0.4112137``````
``df.ols1[c(308),] # row 308 holds the largest education DFBETA``
``````## (Intercept) age education income ideol
## -0.17517243 0.06071469 0.18139726 -0.05557382 0.09717012``````
``ds.small[c(308), c("age", "education", "income", "ideol", "glbcc_risk")]``
``````## age education income ideol glbcc_risk
## 343 51 2 81000 3 4``````
Note that this “severe outlier” is indeed an interesting case – a 51 year old with a high school diploma, relatively high income, who is slightly liberal and perceives low risk for climate change. But this outlier is not implausible, and therefore we can be reassured that – even in this most extreme case – we do not have problematic outliers.
So, having explored the residuals from our model, we found a number of outliers, some with significant influence on our model results. Inspection of the most extreme outlier gave us no cause to worry that the observations were inappropriately distorting our model results. But what should you do if you find puzzling, implausible observations that may influence your model?
First, as always, evaluate your theory. Is it possible that the case represented a class of observations that behave systematically differently than the other cases? This is of particular concern if you have a cluster of cases, all determined to be outliers, that have similar properties. You may need to modify your theory to account for this subgroup. One such example can be found in the study of American politics, wherein the Southern states routinely appeared to behave differently than others. Most careful efforts to model state (and individual) political behavior account for the unique aspects of southern politics, in ways ranging from the addition of dummy variables to interaction terms in regression models.
How would you determine whether the model (and theory) should be revised? Look closely at the deviant cases – what can you learn from them? Try experiments by running the models with controls – dummies and interaction terms. What effects do you observe? If your results suggest theoretical revisions, you will need to collect new data to test your new hypotheses. Remember: In empirical studies, you need to keep your discoveries distinct from your hypothesis tests.
As a last resort, if you have troubling outliers for which you cannot account in theory, you might decide to omit those observations from your model and re-run your analyses. We do not recommend this course of action, because it can appear to be a case of "jiggering the data" to get the results you want.
15.2.7 Multicollinearity
Multicollinearity is the correlation of the IVs in the model. Note that if any $X_i$ is a linear combination of other $X$'s in the model, $B_i$ cannot be estimated. As discussed previously, the partial regression coefficient strips both the $X$'s and $Y$ of the overlapping covariation by regressing one $X$ variable on all other $X$ variables:
\[E_{X_i|X_j}=X_i-\hat{X}_i \quad \text{where} \quad \hat{X}_i=A+BX_j\]
If an $X$ is perfectly predicted by the other $X$'s, then $E_{X_i|X_j}=0$ and $R^2_k=1$, where $R^2_k$ is the $R^2$ obtained from regressing all $X_k$ on all other $X$'s.
We rarely find perfect multicollinearity in practice, but high multicollinearity results in a loss of statistical resolution, such as:
• Large standard errors
• Low $t$-stats, high $p$-values
• This erodes the resolution of our hypothesis tests
• Enormous sensitivity to small changes in:
• Data
• Model specification
You should always check the correlations between the IVs during the model building process. This is a way to quickly identify possible multicollinearity issues.
``````ds %>%
dplyr::select(age, education, income, ideol) %>%
na.omit() %>%
data.frame() %>%
cor()``````
``````## age education income ideol
## age 1.00000000 -0.06370223 -0.11853753 0.08535126
## education -0.06370223 1.00000000 0.30129917 -0.13770584
## income -0.11853753 0.30129917 1.00000000 0.04147114
## ideol 0.08535126 -0.13770584 0.04147114 1.00000000``````
There do not appear to be any variables that are so highly correlated that they would result in problems with multicollinearity.
We will discuss two more formal ways to check for multicollinearity: the first is the Variance Inflation Factor (VIF), and the second is tolerance. The VIF is the degree to which the variance of other coefficients is increased due to the inclusion of the specified variable. It is expressed as:
\[VIF=\frac{1}{1-R^2_k} \qquad (15.5)\]
Note that as $R^2_k$ increases, the variance – and thus the standard error – of the estimated coefficient $B_k$ increases. A general rule of thumb is that $VIF>5$ is problematic.
Another, and related, way to measure multicollinearity is tolerance. The tolerance of any $X$, $X_k$, is the proportion of its variance not shared with the other $X$'s.
\[\text{tolerance}=1-R^2_k \qquad (15.6)\]
Note that this is mathematically equivalent to $\frac{1}{VIF}$. The rule of thumb for acceptable tolerance is partly a function of $n$-size:
• If $n<50$, tolerance should exceed $0.7$
• If $n<300$, tolerance should exceed $0.5$
• If $n<600$, tolerance should exceed $0.3$
• If $n<1000$, tolerance should exceed $0.1$
Both VIF and tolerance can be calculated in `R`.
``````library(car)
vif(ols1)``````
``````## age education income ideol
## 1.024094 1.098383 1.101733 1.009105``````
``1/vif(ols1)``
``````## age education income ideol
## 0.9764731 0.9104295 0.9076611 0.9909775``````
Note that, for our example model, we are well within acceptable limits on both VIF and tolerance.
If multicollinearity is suspected, what can you do? One option is to drop one of the highly collinear variables. However, this may result in model mis-specification. As with other modeling considerations, you must use theory as a guide. A second option would be to add new data, thereby lessening the threat posed by multicollinearity. A third option would be to obtain data from specialized samples that maximize independent variation in the collinear variables (e.g., elite samples may disentangle the effects of income, education, and other SES-related variables).
Yet another strategy involves reconsidering why your data are so highly correlated. It may be that your measures are in fact different “indicators” of the same underlying theoretical concept. This can happen, for example, when you measure sets of attitudes that are all influenced by a more general attitude or belief system. In such a case, data scaling is a promising option. This can be accomplished by building an additive scale, or using various scaling options in `R`. Another approach would be to use techniques such as factor analysis to tease out the underlying (or “latent”) variables represented by your indicator variables. Indeed, the combination of factor analysis and regression modeling is an important and widely used approach, referred to as structural equation modeling (SEM). But that is a topic for another book and another course.
15.03: Summary
In this chapter we have described how you can approach the diagnostic stage for OLS multiple regression analysis. We described the key threats to the necessary assumptions of OLS, and listed them and their effects in Table 15.1. But we also noted that diagnostics are more of an art than a simple recipe. In this business you will learn as you go, both in the analysis of a particular model (or set of models) and in the development of your own approach and procedures. We wish you well, Grasshopper!
1. See the `lmtest` package documentation for more options and information.↩
2. Note that we jitter the points to make them easier to see.↩
3. H White, 1980. “A Heteroscedasticity-consistent covariance matrix estimator and a direct test for heteroscedasticity.” Econometrica 48: 817-838.↩
• 16.1: Generalized Linear Models
Generalized Linear Models (GLMs) provide a modeling structure that can relate a linear model to response variables that do not have normal distributions. The distribution of $Y$ is assumed to belong to one of an exponential family of distributions, including the Gaussian, Binomial, and Poisson distributions. GLMs are fit to the data by the method of maximum likelihood.
• 16.2: Logit Estimation
Logit is used when predicting limited dependent variables. By virtue of the binary dependent variable, these models do not meet the key assumptions of OLS. Logit uses maximum likelihood estimation (MLE), which is a counterpart to minimizing least squares. MLE identifies the probability of obtaining the sample as a function of the model parameters. It answers the question, what are the values for the $B$'s that make the sample most likely?
• 16.3: Summary
16: Logit Regression
Generalized Linear Models (GLMs) provide a modeling structure that can relate a linear model to response variables that do not have normal distributions. The distribution of $Y$ is assumed to belong to one of an exponential family of distributions, including the Gaussian, Binomial, and Poisson distributions. GLMs are fit to the data by the method of maximum likelihood.
Like OLS, GLMs contain a stochastic component and a systematic component. The systematic component is expressed as:
\[\eta=\alpha+\beta_1 X_{i1}+\beta_2 X_{i2}+\ldots+\beta_k X_{ik} \qquad (16.1)\]
However, GLMs also contain a “link function” that relates the response variable, $Y_i$, to the systematic linear component, $\eta$. Table 16.1 shows the major exponential “families” of GLM models, and indicates the kinds of link functions involved in each. Note that OLS models would fall within the Gaussian family. In the next section we focus on the binomial family, and on logit estimation in particular.
16.02: Logit Estimation
Logit is used when predicting limited dependent variables, specifically those in which $Y$ is represented by $0$'s and $1$'s. By virtue of the binary dependent variable, these models do not meet the key assumptions of OLS. Logit uses maximum likelihood estimation (MLE), which is a counterpart to minimizing least squares. MLE identifies the probability of obtaining the sample as a function of the model parameters (i.e., the $X$'s). It answers the question, what are the values for the $B$'s that make the sample most likely? In other words, the likelihood function expresses the probability of obtaining the observed data as a function of the model parameters. Estimates of $A$ and $B$ are based on maximizing a likelihood function of the observed $Y$ values.
In logit estimation we seek $P(Y=1)$, the probability that $Y=1$. The odds that $Y=1$ are expressed as:
\[O(Y=1)=\frac{P(Y=1)}{1-P(Y=1)}\]
Logits, $L$, are the natural logarithm of the odds:
\[L=\log_e O=\log_e \frac{P}{1-P}\]
They can range from $-\infty$, when $P=0$, to $\infty$, when $P=1$. $L$ is the estimated systematic linear component:
\[L=A+B_1X_{i1}+\ldots+B_kX_{ik}\]
By reversing the logit we can obtain the predicted probability that $Y=1$ for each of the $i$ observations:
\[P_i=\frac{1}{1+e^{-L_i}} \qquad (16.2)\]
where $e=2.71828\ldots$, the base number of natural logarithms. Note that $L$ is a linear function, but $P$ is a non-linear, $S$-shaped function, as shown in Figure \(2\). Also note that Equation 16.2 is the link function that relates the linear component to the non-linear response variable.
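To make the link function concrete, the following minimal sketch (the logit values are hypothetical, not drawn from the survey data) converts a few logits into predicted probabilities by hand; `plogis()` in base `R` performs the same transformation.
``````# A few hypothetical logit (logged odds) values
L <- c(-4, -2, 0, 2, 4)
# Convert logits to probabilities by hand, following Equation 16.2
1 / (1 + exp(-L))
# plogis() is base R's built-in logistic function and gives the same result
plogis(L)``````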
In more formal terms, each observation, $i$, contributes to the likelihood function by $P_i$ if $Y_i=1$, and by $1-P_i$ if $Y_i=0$. This is defined as:
\[P_i^{Y_i}(1-P_i)^{1-Y_i}\]
The likelihood function is the product (multiplication) of all these individual contributions:
\[\ell=\prod P_i^{Y_i}(1-P_i)^{1-Y_i}\]
The likelihood function is largest for the model that best predicts $Y=1$ or $Y=0$; therefore when the predicted value of $Y$ is correct and close to $1$ or $0$, the likelihood function is maximized.
To estimate the model parameters, we seek to maximize the log of the likelihood function. We use the log because it converts the multiplication into addition, and is therefore easier to calculate. The log likelihood is:
\[\log_e \ell=\sum_{i=1}^{n}\left[Y_i\log_e P_i+(1-Y_i)\log_e (1-P_i)\right]\]
The solution involves taking the first derivative of the log likelihood with respect to each of the $B$'s, setting them to zero, and solving the simultaneous equations. The solution of the equations isn't linear, so it can't be solved directly. Instead, it's solved through a sequential estimation process that looks for successively better “fits” of the model.
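To see what maximizing the log likelihood means in practice, the sketch below simulates a small binary dataset (all values hypothetical) and estimates a one-predictor logit by handing the log likelihood to `optim()`; the result should closely match `glm()`.
``````# Illustrative only: simulate a small binary dataset with one predictor
set.seed(1234)
x <- rnorm(200)                          # hypothetical predictor
p <- 1 / (1 + exp(-(0.5 + 1.2 * x)))     # true probabilities
y <- rbinom(200, size = 1, prob = p)     # simulated 0/1 outcome
# Negative log likelihood for an intercept b[1] and slope b[2]
negLL <- function(b) {
  P <- 1 / (1 + exp(-(b[1] + b[2] * x)))
  -sum(y * log(P) + (1 - y) * log(1 - P))
}
# Numerically search for the parameter values that maximize the likelihood
optim(c(0, 0), negLL)$par
# glm() arrives at essentially the same estimates via its own iterative fitting
coef(glm(y ~ x, family = binomial()))``````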
For the most part, the key assumptions required for logit models are analogous to those required for OLS. The key differences are that (a) we do not assume a linear relationship between the $X$'s and $Y$, and (b) we do not assume normally distributed, homoscedastic residuals. The key assumptions that are retained are shown below.
Logit Assumptions and Qualifiers
• The model is correctly specified
• True conditional probabilities are a logistic function of the $X$'s
• No important $X$'s omitted; no extraneous $X$'s included
• No significant measurement error
• The cases are independent
• No $X$ is a linear function of other $X$'s
• Increased multicollinearity leads to greater imprecision
• Influential cases can bias estimates
• Sample size: $n-k-1$ should exceed $100$
• Independent covariation between the $X$'s and $Y$ is critical
The following example uses demographic information to predict beliefs about anthropogenic climate change.
``````ds.temp <- ds %>%
dplyr::select(glbcc, age, education, income, ideol, gender) %>%
na.omit()
logit1 <- glm(glbcc ~ age + gender + education + income, data = ds.temp, family = binomial())
summary(logit1)``````
``````##
## Call:
## glm(formula = glbcc ~ age + gender + education + income, family = binomial(),
## data = ds.temp)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.707 -1.250 0.880 1.053 1.578
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.4431552007 0.2344093710 1.891 0.058689 .
## age -0.0107882966 0.0031157929 -3.462 0.000535 ***
## gender -0.3131329979 0.0880376089 -3.557 0.000375 ***
## education 0.1580178789 0.0251302944 6.288 0.000000000322 ***
## income -0.0000023799 0.0000008013 -2.970 0.002977 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 3114.5 on 2281 degrees of freedom
## Residual deviance: 3047.4 on 2277 degrees of freedom
## AIC: 3057.4
##
## Number of Fisher Scoring iterations: 4``````
As we can see, age and gender are both negative and statistically significant predictors of climate change opinion. Below we discuss logit hypothesis tests, goodness of fit, and how to interpret the logit coefficients.
16.2.1 Logit Hypothesis Tests
In some ways, hypothesis testing with logit is quite similar to that using OLS. The same use of $p$-values is employed; however, they differ in how they are derived. The logit analysis makes use of the Wald $z$-statistic, which is similar to the $t$-stat in OLS. The Wald $z$ score compares the estimated coefficient to its asymptotic standard error (relying on the asymptotic normal distribution). The $p$-value is derived from the asymptotic standard-normal distribution. Each estimated coefficient has a Wald $z$-score and a $p$-value that shows the probability that the null hypothesis is correct, given the data.
\[z=\frac{B_j}{SE(B_j)} \qquad (16.3)\]
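As a quick check, the Wald $z$ scores in the `summary()` output can be reproduced by dividing each estimated coefficient by its standard error. A minimal sketch, assuming the `logit1` model estimated above:
``````# Reproduce the Wald z scores and p-values reported by summary(logit1)
tab <- coef(summary(logit1))
z <- tab[, "Estimate"] / tab[, "Std. Error"]
z
2 * pnorm(abs(z), lower.tail = FALSE) # two-tailed p-values``````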
16.2.2 Goodness of Fit
Given that logit regression is estimated using MLE, the goodness-of-fit statistics differ from those of OLS. Here we examine three measures of fit: log-likelihood, the pseudo $R^2$, and the Akaike information criterion (AIC).
Log-Likelihood
To test for the overall null hypothesis that all $B$'s are equal to zero (similar to an overall $F$-test in OLS), we can compare the log-likelihood of the demographic model with 4 IVs to the initial “null model,” which includes only the intercept term. In general, a larger log-likelihood (equivalently, a smaller deviance) indicates a better fit. Using the deviance statistic $G^2$ (aka the likelihood-ratio test statistic), we can determine whether the difference is statistically significant. $G^2$ is expressed as:
\[G^2=2(\log_e L_1-\log_e L_0) \qquad (16.4)\]
where $L_1$ is the demographic model and $L_0$ is the null model. The $G^2$ test statistic takes the difference between the log likelihoods of the two models and compares that to a $\chi^2$ distribution with $q$ degrees of freedom, where $q$ is the difference in the number of IVs. We can calculate this in `R`. First, we run a null model predicting belief that greenhouse gases are causing the climate to change, using only the intercept:
``````logit0 <- glm(glbcc ~ 1, data = ds.temp)
summary(logit0)``````
``````##
## Call:
## glm(formula = glbcc ~ 1, data = ds.temp)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.5732 -0.5732 0.4268 0.4268 0.4268
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.57318 0.01036 55.35 <0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for gaussian family taken to be 0.2447517)
##
## Null deviance: 558.28 on 2281 degrees of freedom
## Residual deviance: 558.28 on 2281 degrees of freedom
## AIC: 3267.1
##
## Number of Fisher Scoring iterations: 2``````
We then calculate the log likelihood for the null model,
\[\log_e L_0 \qquad (16.5)\]
``logLik(logit0)``
``## 'log Lik.' -1631.548 (df=2)``
Next, we calculate the log likelihood for the demographic model,
\[\log_e L_1 \qquad (16.6)\]
Recall that we generated this model (dubbed “logit1”) earlier:
``logLik(logit1)``
``## 'log Lik.' -1523.724 (df=5)``
Finally, we calculate the GG statistic and perform the chi-square test for statistical significance:
``````G <- 2*(-1523 - (-1631))
G``````
``## [1] 216``
``pchisq(G, df = 3, lower.tail = FALSE)``
``## [1] 0.0000000000000000000000000000000000000000000001470144``
We can see by the very low p-value that the demographic model offers a significant improvement in fit.
The same approach can be used to compare nested models, similar to nested $F$-tests in OLS. For example, we can include ideology in the model and use the `anova` function to see if the ideology variable improves model fit. Note that we specify the $\chi^2$ test.
``````logit2 <- glm(glbcc ~ age + gender + education + income + ideol,
family = binomial(), data = ds.temp)
summary(logit2)``````
``````##
## Call:
## glm(formula = glbcc ~ age + gender + education + income + ideol,
## family = binomial(), data = ds.temp)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.6661 -0.8939 0.3427 0.8324 2.0212
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 4.0545788430 0.3210639034 12.629 < 0.0000000000000002 ***
## age -0.0042866683 0.0036304540 -1.181 0.237701
## gender -0.2044012213 0.1022959122 -1.998 0.045702 *
## education 0.1009422741 0.0293429371 3.440 0.000582 ***
## income -0.0000010425 0.0000008939 -1.166 0.243485
## ideol -0.7900118618 0.0376321895 -20.993 < 0.0000000000000002 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 3114.5 on 2281 degrees of freedom
## Residual deviance: 2404.0 on 2276 degrees of freedom
## AIC: 2416
##
## Number of Fisher Scoring iterations: 4``````
``anova(logit1, logit2, test = "Chisq")``
``````## Analysis of Deviance Table
##
## Model 1: glbcc ~ age + gender + education + income
## Model 2: glbcc ~ age + gender + education + income + ideol
## Resid. Df Resid. Dev Df Deviance Pr(>Chi)
## 1 2277 3047.4
## 2 2276 2404.0 1 643.45 < 0.00000000000000022 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
As we can see, adding ideology significantly improves the model.
Pseudo $R^2$
A measure that is equivalent to the $R^2$ in OLS does not exist for logit. Remember that explaining variance in $Y$ is not the goal of MLE. However, a “pseudo” $R^2$ measure exists that compares the residual deviance of the null model with that of the full model. Like the $R^2$ measure, the pseudo $R^2$ ranges from $0$ to $1$ with values closer to $1$ indicating improved model fit.
Deviance is analogous to the residual sum of squares for a linear model. It is expressed as:
\[\text{deviance}=-2(\log_e L) \qquad (16.7)\]
It is simply the log-likelihood of the model multiplied by $-2$. The pseudo $R^2$ is $1$ minus the ratio of the deviance of the full model $L_1$ to the deviance of the null model $L_0$:
\[\text{pseudo } R^2=1-\frac{-2(\log_e L_1)}{-2(\log_e L_0)} \qquad (16.8)\]
This can be calculated in `R` using the full model with ideology.
``````pseudoR2 <- 1 - (logit2$deviance/logit2$null.deviance)
pseudoR2``````
``## [1] 0.2281165``
The pseudo $R^2$ of the model is 0.2281165. Note that the pseudo $R^2$ is only an approximation of explained variance, and should be used in combination with other measures of fit such as AIC.
Akaike Information Criterion
Another way to examine goodness-of-fit is the Akaike information criterion (AIC). Like the adjusted $R^2$ for OLS, the AIC takes into account the parsimony of the model by penalizing for the number of parameters. But AIC is useful only in a comparative manner – either with the null model or an alternative model. It does not purport to describe the percent of variance in $Y$ accounted for, as does the pseudo $R^2$.
AIC is defined as the residual deviance of the model (that is, $-2$ times its log-likelihood) plus two times the number of parameters, or $k$ IVs plus the intercept:
\[AIC=-2(\log_e L)+2(k+1) \qquad (16.9)\]
Note that smaller values are indicative of a better fit. The AIC is most useful when comparing the fit of alternative (not necessarily nested) models. In `R`, AIC is given as part of the `summary` output for a `glm` object, but we can also calculate it and verify.
``````aic.logit2 <- logit2$deviance + 2*6 # residual deviance plus 2*(k + 1), with k + 1 = 6 parameters
aic.logit2``````
``## [1] 2416.002``
``logit2$aic``
``## [1] 2416.002``
16.2.3 Interpreting Logits
The logits, $L$, are logged odds, and therefore the coefficients that are produced must be interpreted as logged odds. This means that for each one-unit increase in ideology, the predicted logged odds of believing climate change has an anthropogenic cause decrease by 0.79. This interpretation, though mathematically straightforward, is not terribly informative. Below we discuss two ways to make the interpretation of logit analysis more intuitive.
Calculate Odds
Logits can be used to directly calculate odds by taking the antilog of any of the coefficients:
\[\text{antilog}=e^B\]
For example, the following returns the odds ratios for all the IVs.
``logit2 %>% coef() %>% exp()``
``````## (Intercept) age gender education income ideol
## 57.6608736 0.9957225 0.8151353 1.1062128 0.9999990 0.4538394``````
Therefore, for each 1-unit increase in the ideology scale (i.e., becoming more conservative), the odds of believing that climate change is human caused are multiplied by 0.4538394 – that is, they decrease by roughly 55 percent.
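These antilogs are often re-expressed as a percent change in the odds, $100 \times (e^B-1)$. A brief sketch using the `logit2` coefficients from above:
``````# Percent change in the odds for a one-unit increase in each IV
100 * (exp(coef(logit2)) - 1)``````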
Predicted Probabilities
The most straightforward way to interpret logits is to transform them into predicted probabilities. To calculate the effect of a particular independent variable, $X_i$, on the probability of $Y=1$, set all $X_j$'s at their means, then calculate:
\[\hat{P}=\frac{1}{1+e^{-\hat{L}}}\]
We can then evaluate the change in predicted probabilities that $Y=1$ across the range of values in $X_i$.
This procedure can be demonstrated in two steps. First, create a data frame holding all the variables except ideology at their mean. Second, use the `augment` function to calculate the predicted probabilities for each level of ideology. Indicate `type.predict = "response"`.
``````library(broom)
``````log.data <- data.frame(age = mean(ds.temp$age),
gender = mean(ds.temp$gender),
education = mean(ds.temp$education),
income = mean(ds.temp$income),
ideol = 1:7)
log.data <- logit2 %>%
augment(newdata = log.data, type.predict = "response")
log.data``````
``````## # A tibble: 7 x 7
## age gender education income ideol .fitted .se.fit
## * <dbl> <dbl> <dbl> <dbl> <int> <dbl> <dbl>
## 1 60.1 0.412 5.09 70627. 1 0.967 0.00523
## 2 60.1 0.412 5.09 70627. 2 0.929 0.00833
## 3 60.1 0.412 5.09 70627. 3 0.856 0.0115
## 4 60.1 0.412 5.09 70627. 4 0.730 0.0127
## 5 60.1 0.412 5.09 70627. 5 0.551 0.0124
## 6 60.1 0.412 5.09 70627. 6 0.357 0.0139
## 7 60.1 0.412 5.09 70627. 7 0.202 0.0141``````
The output shows, for each case, the ideology measure for the respondent followed by the estimated probability ($p$) that the individual believes man-made greenhouse gasses are causing climate change. We can also graph the results with $95\%$ confidence intervals. This is shown in Figure \(3\).
``````log.df <- log.data %>%
mutate(upper = .fitted + 1.96 * .se.fit,
lower = .fitted - 1.96 * .se.fit)
ggplot(log.df, aes(ideol, .fitted)) +
geom_point() +
geom_errorbar(aes(ymin = lower, ymax = upper, width = .2)) ``````
We can see that as respondents become more conservative, the probability of believing that climate change is man-made decreases at what appears to be an increasing rate.
16.03: Summary
As an analysis and research tool, logit modeling expands your capabilities beyond those that can reasonably be estimated with OLS. Now you can accommodate models with binary dependent variables. Logit models are a family of generalized linear models that are useful for predicting the odds or probabilities of outcomes for binary dependent variables. This chapter has described the manner in which logits are calculated, how model fit can be characterized, and several methods for making the logit results readily interpretable.
Perhaps one of the greatest difficulties in applications of logit models is the clear communication of the meaning of the results. The estimated coefficients show the change in the log of the odds for a one unit increase in the $X$ variable – not the usual way to describe effects. However, as described in this chapter, these estimated coefficients can be readily transformed into changes in the odds, or the logit itself can be “reversed” to provide estimated probabilities. Of particular utility are logit graphics, showing the estimated shift in $Y$ from values of zero to one; the estimated probabilities of $Y=1$ for cases with specified combinations of values in the $X$ variables; and estimates of the ranges of probabilities for $Y=1$ across the ranges of values in any $X$.
In sum, the use of logit models will expand your ability to test hypotheses to include a range of outcomes that are binary in nature. Given that a great deal of the phenomena of interest in the policy and social sciences are of this kind, you will find this capability to be an important part of your research toolkit.
R is a language and environment for statistical computing and graphics. It was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. It is based on another language called S. R is an integrated suite of software facilities for data manipulation, calculation, and graphical display. It includes:
• an effective data handling and storage facility,
• a suite of operators for calculations on arrays, in particular matrices,
• a large, coherent, integrated collection of intermediate tools for data analysis,
• graphical facilities for data analysis and display either on-screen or on hardcopy, and
• a well-developed, simple and effective programming language which includes conditionals, loops, user-defined recursive functions, and input and output facilities.
R is a powerful and effective tool for computing, statistics and analysis, and producing graphics. However, many applications exist that can do these or similar things. R has a number of benefits that make it particularly useful for a book such as this. First, similar to the book itself, R is open source and free. This comes with a set of associated advantages. Free is, of course, the best price. Additionally, this allows you, the student or reader, to take this tool with you wherever you go. You are not dependent on your employer to buy or have a license of a particular software. This is especially relevant as other software with similar functionality often cost hundreds, if not thousands, of dollars for a single license. The open source nature of R has resulted in a robust set of users, across a wide variety of disciplines–including political science–who are constantly updating and revising the language. R therefore has some of the most up-to-date and innovative functionality and methods available to its users should they know where to look. Within R, these functions and tools are often implemented as packages. Packages allow advanced users of R to contribute statistical methods and computing tools to the general users of R. These packages are reviewed and vetted and then added to the CRAN repository. Later, we will cover some basic packages used throughout the book. The CRAN repository is where we will download R.
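Once R is installed (covered below), a contributed package is downloaded from CRAN with `install.packages()` and then loaded in each session with `library()`; for example, with `dplyr`, which this book uses later:
``````# Install a package from CRAN (only needs to be done once per computer)
# install.packages("dplyr")
# Load the package at the start of each session so its functions are available
library(dplyr)``````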
17.02: Downloading R and RStudio
In this section we will provide instructions to downloading R and RStudio. RStudio is an integrated development environment (IDE) that makes R a bit more user-friendly. In the class associated with this text, RStudio will primarily be used; however, it should be noted other IDEs exist for R. Additionally, R can be used without the aid of an IDE should you decide to do so.
First, to download R, we need to go to the R Project website repository mentioned before, which can be found at r-project.org. This website has many references relevant to R users. To download R, go to the CRAN. It is recommended that individuals choose the mirror that is nearest their actual location. (For the purposes of this class, we therefore recommend the Revolution Analytics mirror in Dallas, though really any mirror will do just fine.) Once here, you will want to click the link that says “Download R” for your relevant operating system (Mac, Windows, or Linux). On the next page, you will click the link that says “install R for the first time.” This will open a page that should look something like this:
Here you will click the “Download R” link at the top of the page. This should download the Installation Wizard for R. Once this has begun, you will click through the Wizard. Unless you have particular advanced preferences, the default settings will work and are preferred.
At this point, you now have R downloaded on your device and can be pretty much ready to go. However, as stated previously, we are also going to show you how to download RStudio. You will find the site to download RStudio here.
Once here, you will scroll down until it looks like the screen in 17.2. Then you will want to use the links under the installer subtitle for your relevant operating system. You do not need to use the links under the zip/tarball header. As with R, you should then simply follow the default locations and settings in the Installer of RStudio. As we said before, RStudio simply makes the use of R a little easier and more user-friendly. It includes some of the functionality that often makes other statistical software packages preferred for initially teaching students statistics. Once you have R and RStudio downloaded, you are prepared to dive right in. However, before we do that we want to introduce you to some common terminology in the fields of programming – as well as statistics – that may be helpful in your understanding of R.
In many respects, R is a programming language similar to other languages such as Java, Python, and others. As such, it comes with a terminology that may be unfamiliar to most readers. In this section we introduce some of this terminology in order to give readers the working knowledge necessary to utilize the rest of the book to the best of its ability. One particular thing to note is that R is an object-oriented programming language. This means the program is organized around the data we are feeding it, rather than the logical procedures used to manipulate it. This introduces the important concept of data types and structures. For R, and programming languages generally, there is no agreed upon or common usage of the terms data type versus data structure. For the purposes of this book, we will attempt to use the term data structure to refer to the ways in which data are organized and data type to the characteristics of the particular data within the structure. Data types make up the building blocks of data structures. There are many data types; we will cover only the most common ones that are relevant to our book. The first is the character type. This is simply a single Unicode character. The second is a string. Strings are simply a set of characters. This data type can contain, among other things, respondents’ names and other common text data. The next data type is the logical type. This type indicates whether or not a statement or condition is True or False. It is often represented as a 0/1 in many cases. Finally, there are numerical data types. One is the integer, which is, as you may recall, a number with nothing after the decimal point. On the other hand, the float data type allows for numbers before and after the decimal point.
In R, there are a plethora of data structures to organize our data types. We will again focus on a few common ones. Probably the simplest data structure is a vector. A vector is an object where all elements are of the same data type. A scalar is simply a vector with only one value. For the purposes of this book, a variable is often represented as a vector or the column of a dataset. Factors are vectors with a fixed set of values called levels. A common example of this in the social sciences is sex, with only two levels – male or female. A matrix is a two-dimensional collection of values, all of the same type. Thus, a matrix is simply a collection of vectors. An array is a matrix with more than 2 dimensions. The data structure we will use most is a dataframe. A dataframe is simply a matrix where the values do not all have to be the same type. Therefore, a dataframe can have a vector that is a text data type, a vector that is a numerical data type, and a vector that is a logical data type, or any possible combination. Finally, lists are collections of these data structures. They are essentially a method of gathering together a set of dataframes, matrices, etc. These will not commonly be used in our book but are important in many applications. Now that we have covered the basic types and structures of data, we are going to explain how to load data into R.
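Before moving on, the short sketch below (all values hypothetical) shows how these structures look in R code:
``````ages <- c(25, 40, 33)                      # a numeric vector
person <- c("Ann", "Bo", "Cy")             # a character (string) vector
sex <- factor(c("male", "female", "male")) # a factor with two levels
m <- matrix(1:6, nrow = 2)                 # a 2 x 3 matrix, all one type
df <- data.frame(person, ages, sex)        # a dataframe mixes types by column
my.list <- list(people = df, codes = m)    # a list collects other structures
str(my.list)                               # str() displays each structure``````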
Reading Data
R can handle a variety of different file types as data. The primary type that will be used for the book and accompanying course is a comma separated file, or .csv file type. A CSV is a convenient file type that is portable across many operating platforms (Mac, Windows, etc.) as well as statistical/data manipulation software. Other common file types are text (.txt) and Excel files (.xls or .xlsx). R also has its own file type called an R data file with the .RData extension. Other statistical software packages also have their own file types, such as Stata’s .dta file extension. R has built-in functionality to deal with .csv and .txt as well as a few other file extensions. Uploading other data types requires special packages (haven, foreign, and readxl are popular for these purposes). These methods work for uploading files from the hard drives on our computers. You can also directly download data from the internet into R from a variety of sources and using a variety of packages.
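As a hedged sketch, the calls below show how each of these file types might be read into R; the file names are placeholders, and the readxl and haven lines require installing those packages first:
``````# ds <- read.csv("mydata.csv")                    # comma separated values (base R)
# txt <- read.table("mydata.txt", header = TRUE)  # plain text file (base R)
# xl <- readxl::read_excel("mydata.xlsx")         # Excel files (readxl package)
# dta <- haven::read_dta("mydata.dta")            # Stata files (haven package)
# load("mydata.RData")                            # R's own .RData format (base R)``````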
For the purposes of the book, we will acquire our data by going here. You will then type your e-mail where it says Request Data. You should then receive an e-mail with the data attached as a .csv file. First, you will want to download this data onto your computer. We recommend creating a folder specifically for the book and its data (and, if you’re in the class, for your classwork). This folder will be your working directory. For each script we run in class, you will have to set your working directory. An easy way to do this in RStudio is to go to the Session tab. Scroll about halfway down to the option that says “Set Working Directory” and then click “Choose Directory…” This will open up an explorer or search panel that allows you to choose the folder that you have saved the data in. This will then create a line of code in the console of RStudio that you then copy and paste into the code editor to set the working directory for your data. You then run this code by hitting Ctrl+Enter on the highlighted line.
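The line RStudio generates looks something like the sketch below (the folder path is hypothetical); `getwd()` confirms where R is currently pointed:
``````# setwd("C:/Users/yourname/Documents/quant-methods") # hypothetical folder path
# getwd() # prints the current working directory``````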
Once this has been done, it is a good idea to check your directory. One easy way to do this is the `list.files()` command, which will list all files saved in the folder you have set as your working directory.
``# list.files()``
If you have done this correctly, the data you downloaded should show up as a file. Once you have done this, uploading the data will be easy. Simply write one line of code:
``# ds<-read.csv("w1_w13_longdata.csv")``
This line of code loads our data saved as a .csv into R and saves it as an object (remember the object oriented programming from earlier) that we call ds (short for dataset). This is the convention for the entire book. Now that we have the data downloaded from the internet and uploaded into R, we are going to briefly introduce you to some data manipulation techniques.
R is a very flexible tool for manipulating data into various subsets and forms. There are many useful packages and functions for doing this, including the dplyr package, tidyr package, and more. R and its packages will allow users to transform their data from long to wide formats, remove NA values, recode variables, etc. In order to make the downloaded data more manageable for the book, we are going to do two things. First, we want to restrict our data to one wave. The data we downloaded represent many waves of a quarterly survey that is sent to a panel of Oklahoma residents on weather, climate and policy preferences. This book will not venture into panel data analysis or time series analysis, as it is an introductory text, and therefore we simply want one cross section of data for our analysis. This can be done with one line of code:
``# ds<-subset(ds, ds$wave_id == "Wave 12 (Fall 2016)")``
What this line of code is doing is creating an object, which we have again named ds in order to overwrite our old object, that has only the 12th wave of data from the survey. In effect, this is removing all rows in which wave_id, the variable that indicates the survey wave, does not equal twelve. Across these many waves, many different questions are asked and various variables are collected. We now want to remove all columns or variables that were not collected in wave twelve. This can also be done with one line of code:
``# ds<-ds[, !apply(is.na(ds), 2, all)]``
This line of code is a bit more complicated, but what it is essentially doing is first searching all of ds for NA values using the is.na function. It is then returning a logical value of TRUE or FALSE: if a cell does have an NA, then the value returned is TRUE, and vice versa. It is then searching by column, which is represented by the number 2 (rows are represented by the number 1), to see if all of the values are TRUE or FALSE. This then returns a logical value for the column: either TRUE if all of the rows/cells are NAs, or FALSE if at least one row/cell in the column is not an NA. The ! then reverses the TRUE and FALSE meanings. Now TRUE means a column that is not all NA and therefore one we want to keep. Finally, the brackets are another way to subset our data set. This allows us to keep all columns where the returned value is TRUE, or not all values were NA. Because we are concerned with columns, we write the function after the comma. If we wanted to do a similar thing but with rows, we would put the function before the comma. Finally, we want to save this dataset to our working directory, which will be explained in the following section.
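To see that logic on a small scale, the toy dataframe below (hypothetical values) has one column that is entirely NA, and the same one-liner drops it:
``````toy <- data.frame(a = c(1, 2, NA), b = c(NA, NA, NA), c = c("x", "y", "z"))
apply(is.na(toy), 2, all)          # FALSE, TRUE, FALSE: only column b is all NA
toy[, !apply(is.na(toy), 2, all)]  # keeps columns a and c``````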
Writing Data
Saving or writing data that we have manipulated is a useful tool. It allows us to easily share datasets we have created with others. This is useful for collaboration, especially with other users who may not use R. Additionally, this will be useful for the book, as our new dataset is the one that will be worked with throughout the book. This dataset is much smaller than the one we originally downloaded and therefore will allow for quicker load times as well as hopefully reduce potential confusion. The code to save this data set is rather simple as well:
``# write.csv(ds, "Class Data Set.csv")``
This line of code allows us to save the dataset we created, and saved in the object named ds, as a new .csv file in our working directory called “Class Data Set.” Having successfully downloaded R and RStudio, learned some basic programming and data manipulation techniques, and saved the class data set to your working directory, you are ready to use the rest of the book to its fullest potential.
17.07: The Tidyverse
This edition of the book employs the tidyverse family of R functions for both statistical analysis and data visualization. The tidyverse is a collection of functions that provide an efficient, consistent, and intuitive method of both working with your data and visualizing it. Packages like dplyr are used as the primary method of data exploration and wrangling, and ggplot2 is used for visualization. More information about the tidyverse can be found at tidyverse.org.
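As a brief, hedged illustration of that workflow (assuming the class dataset `ds` and the variables used elsewhere in this book), a dplyr pipeline can feed directly into a ggplot2 graphic:
``````library(dplyr)
library(ggplot2)
ds %>%
  filter(!is.na(ideol), !is.na(glbcc_risk)) %>%   # drop missing values
  group_by(ideol) %>%                             # one group per ideology score
  summarize(mean_risk = mean(glbcc_risk)) %>%     # mean risk perception by group
  ggplot(aes(ideol, mean_risk)) +
  geom_col()``````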
You are probably asking yourself the question, "When and where will I use statistics?" If you read any newspaper, watch television, or use the Internet, you will see statistical information. There are statistics about crime, sports, education, politics, and real estate. Typically, when you read a newspaper article or watch a television news program, you are given sample information. With this information, you may make a decision about the correctness of a statement, claim, or "fact." Statistical methods can help you make the "best educated guess."
Since you will undoubtedly be given statistical information at some point in your life, you need to know some techniques for analyzing the information thoughtfully. Think about buying a house or managing a budget. Think about your chosen profession. The fields of economics, business, psychology, education, biology, law, computer science, police science, and early childhood development require at least one course in statistics.
Included in this chapter are the basic ideas and words of probability and statistics. You will soon understand that statistics and probability work together. You will also learn how data are gathered and what "good" data can be distinguished from "bad."
1.01: Definitions of Statistics Probability and Key Terms
The science of statistics deals with the collection, analysis, interpretation, and presentation of data. We see and use data in our everyday lives.
In this course, you will learn how to organize and summarize data. Organizing and summarizing data is called descriptive statistics. Two ways to summarize data are by graphing and by using numbers (for example, finding an average). After you have studied probability and probability distributions, you will use formal methods for drawing conclusions from "good" data. The formal methods are called inferential statistics. Statistical inference uses probability to determine how confident we can be that our conclusions are correct.
Effective interpretation of data (inference) is based on good procedures for producing data and thoughtful examination of the data. You will encounter what will seem to be too many mathematical formulas for interpreting data. The goal of statistics is not to perform numerous calculations using the formulas, but to gain an understanding of your data. The calculations can be done using a calculator or a computer. The understanding must come from you. If you can thoroughly grasp the basics of statistics, you can be more confident in the decisions you make in life.
Probability
Probability is a mathematical tool used to study randomness. It deals with the chance (the likelihood) of an event occurring. For example, if you toss a fair coin four times, the outcomes may not be two heads and two tails. However, if you toss the same coin 4,000 times, the outcomes will be close to half heads and half tails. The expected theoretical probability of heads in any one toss is $\frac{1}{2}$ or 0.5. Even though the outcomes of a few repetitions are uncertain, there is a regular pattern of outcomes when there are many repetitions. After reading about the English statistician Karl Pearson who tossed a coin 24,000 times with a result of 12,012 heads, one of the authors tossed a coin 2,000 times. The results were 996 heads. The fraction $\frac{996}{2000}$ is equal to 0.498 which is very close to 0.5, the expected probability.
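Because the earlier chapters in this collection use R, a quick simulation (a sketch, not part of the original example) shows the same long-run pattern:
``````set.seed(42)
tosses <- sample(c("heads", "tails"), size = 2000, replace = TRUE)
mean(tosses == "heads") # proportion of heads; close to the expected 0.5``````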
The theory of probability began with the study of games of chance such as poker. Predictions take the form of probabilities. To predict the likelihood of an earthquake, of rain, or whether you will get an A in this course, we use probabilities. Doctors use probability to determine the chance of a vaccination causing the disease the vaccination is supposed to prevent. A stockbroker uses probability to determine the rate of return on a client's investments. You might use probability to decide to buy a lottery ticket or not. In your study of statistics, you will use the power of mathematics through probability calculations to analyze and interpret your data.
Key Terms
In statistics, we generally want to study a population. You can think of a population as a collection of persons, things, or objects under study. To study the population, we select a sample. The idea of sampling is to select a portion (or subset) of the larger population and study that portion (the sample) to gain information about the population. Data are the result of sampling from a population.
Because it takes a lot of time and money to examine an entire population, sampling is a very practical technique. If you wished to compute the overall grade point average at your school, it would make sense to select a sample of students who attend the school. The data collected from the sample would be the students' grade point averages. In presidential elections, opinion poll samples of 1,000–2,000 people are taken. The opinion poll is supposed to represent the views of the people in the entire country. Manufacturers of canned carbonated drinks take samples to determine if a 16 ounce can contains 16 ounces of carbonated drink.
From the sample data, we can calculate a statistic. A statistic is a number that represents a property of the sample. For example, if we consider one math class to be a sample of the population of all math classes, then the average number of points earned by students in that one math class at the end of the term is an example of a statistic. The statistic is an estimate of a population parameter, in this case the mean. A parameter is a numerical characteristic of the whole population that can be estimated by a statistic. Since we considered all math classes to be the population, then the average number of points earned per student over all the math classes is an example of a parameter.
One of the main concerns in the field of statistics is how accurately a statistic estimates a parameter. The accuracy really depends on how well the sample represents the population. The sample must contain the characteristics of the population in order to be a representative sample. We are interested in both the sample statistic and the population parameter in inferential statistics. In a later chapter, we will use the sample statistic to test the validity of the established population parameter.
A variable, or random variable, usually notated by capital letters such as $X$ and $Y$, is a characteristic or measurement that can be determined for each member of a population. Variables may be numerical or categorical. Numerical variables take on values with equal units such as weight in pounds and time in hours. Categorical variables place the person or thing into a category. If we let $X$ equal the number of points earned by one math student at the end of a term, then $X$ is a numerical variable. If we let $Y$ be a person's party affiliation, then some examples of $Y$ include Republican, Democrat, and Independent. $Y$ is a categorical variable. We could do some math with values of $X$ (calculate the average number of points earned, for example), but it makes no sense to do math with values of $Y$ (calculating an average party affiliation makes no sense).
Data are the actual values of the variable. They may be numbers or they may be words. Datum is a single value.
Two words that come up often in statistics are mean and proportion. If you were to take three exams in your math classes and obtain scores of 86, 75, and 92, you would calculate your mean score by adding the three exam scores and dividing by three (your mean score would be 84.3 to one decimal place). If, in your math class, there are 40 students and 22 are men and 18 are women, then the proportion of men students is $\frac{22}{40}$ and the proportion of women students is $\frac{18}{40}$. Mean and proportion are discussed in more detail in later chapters.
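Though this textbook does not rely on software, the arithmetic can be checked quickly in R (the language used earlier in this collection):
``````mean(c(86, 75, 92)) # mean exam score, about 84.3
22/40               # proportion of men in the class
18/40               # proportion of women in the class``````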
NOTE
The words "mean" and "average" are often used interchangeably. The substitution of one word for the other is common practice. The technical term is "arithmetic mean," and "average" is technically a center location. However, in practice among non-statisticians, "average" is commonly accepted for "arithmetic mean."
Example 1.1
Determine what the key terms refer to in the following study. We want to know the average (mean) amount of money first year college students spend at ABC College on school supplies that do not include books. We randomly surveyed 100 first year students at the college. Three of those students spent \$150, \$200, and \$225, respectively.
Answer
Solution 1.1
The population is all first year students attending ABC College this term.
The sample could be all students enrolled in one section of a beginning statistics course at ABC College (although this sample may not represent the entire population).
The parameter is the average (mean) amount of money spent (excluding books) by first year college students at ABC College this term: the population mean.
The statistic is the average (mean) amount of money spent (excluding books) by first year college students in the sample.
The variable could be the amount of money spent (excluding books) by one first year student. Let $X$ = the amount of money spent (excluding books) by one first year student attending ABC College.
The data are the dollar amounts spent by the first year students. Examples of the data are \$150, \$200, and \$225.
Exercise 1.1
Determine what the key terms refer to in the following study. We want to know the average (mean) amount of money spent on school uniforms each year by families with children at Knoll Academy. We randomly survey 100 families with children in the school. Three of the families spent \$65, \$75, and \$95, respectively.
Example 1.2
Determine what the key terms refer to in the following study.
A study was conducted at a local college to analyze the average cumulative GPA’s of students who graduated last year. Fill in the letter of the phrase that best describes each of the items below.
1. Population ____
2. Statistic ____
3. Parameter ____
4. Sample ____
5. Variable ____
6. Data ____
1. all students who attended the college last year
2. the cumulative GPA of one student who graduated from the college last year
3. 3.65, 2.80, 1.50, 3.90
4. a group of students who graduated from the college last year, randomly selected
5. the average cumulative GPA of students who graduated from the college last year
6. all students who graduated from the college last year
7. the average cumulative GPA of students in the study who graduated from the college last year
(These options correspond to the letters a through g, in order, referenced in the solution below.)
Answer
Solution 1.2
1. f; 2. g; 3. e; 4. d; 5. b; 6. c
Example 1.3
Determine what the key terms refer to in the following study.
As part of a study designed to test the safety of automobiles, the National Transportation Safety Board collected and reviewed data about the effects of an automobile crash on test dummies. Here is the criterion they used:
Speed at which cars crashed Location of “driver” (i.e. dummies)
35 miles/hour Front Seat
Table 1.1
Cars with dummies in the front seats were crashed into a wall at a speed of 35 miles per hour. We want to know the proportion of dummies in the driver’s seat that would have had head injuries, if they had been actual drivers. We start with a simple random sample of 75 cars.
Answer
Solution 1.3
The population is all cars containing dummies in the front seat.
The sample is the 75 cars, selected by a simple random sample.
The parameter is the proportion of driver dummies (if they had been real people) who would have suffered head injuries in the population.
The statistic is proportion of driver dummies (if they had been real people) who would have suffered head injuries in the sample.
The variable $X$ = the number of driver dummies (if they had been real people) who would have suffered head injuries.
The data are either: yes, had head injury, or no, did not.
Example 1.4
Determine what the key terms refer to in the following study.
An insurance company would like to determine the proportion of all medical doctors who have been involved in one or more malpractice lawsuits. The company selects 500 doctors at random from a professional directory and determines the number in the sample who have been involved in a malpractice lawsuit.
Answer
Solution 1.4
The population is all medical doctors listed in the professional directory.
The parameter is the proportion of medical doctors who have been involved in one or more malpractice suits in the population.
The sample is the 500 doctors selected at random from the professional directory.
The statistic is the proportion of medical doctors who have been involved in one or more malpractice suits in the sample.
The variable $X$ = the number of medical doctors who have been involved in one or more malpractice suits.
The data are either: yes, was involved in one or more malpractice lawsuits, or no, was not.
1.02: Data, Sampling, and Variation in Data and Sampling
Data may come from a population or from a sample. Lowercase letters like \(x\) or \(y\) generally are used to represent data values. Most data can be put into the following categories:
• Qualitative
• Quantitative
Qualitative data are the result of categorizing or describing attributes of a population. Qualitative data are also often called categorical data. Hair color, blood type, ethnic group, the car a person drives, and the street a person lives on are examples of qualitative (categorical) data. Qualitative (categorical) data are generally described by words or letters. For instance, hair color might be black, dark brown, light brown, blonde, gray, or red. Blood type might be AB+, O-, or B+. Researchers often prefer to use quantitative data over qualitative (categorical) data because it lends itself more easily to mathematical analysis. For example, it does not make sense to find an average hair color or blood type.
Quantitative data are always numbers. Quantitative data are the result of counting or measuring attributes of a population. Amount of money, pulse rate, weight, number of people living in your town, and number of students who take statistics are examples of quantitative data. Quantitative data may be either discrete or continuous.
All data that are the result of counting are called quantitative discrete data. These data take on only certain numerical values. If you count the number of phone calls you receive for each day of the week, you might get values such as zero, one, two, or three.
Data that are not only made up of counting numbers, but that may include fractions, decimals, or irrational numbers, are called quantitative continuous data. Continuous data are often the results of measurements like lengths, weights, or times. A list of the lengths in minutes for all the phone calls that you make in a week, with numbers like 2.4, 7.5, or 11.0, would be quantitative continuous data.
Example \(1\): DATA SAMPLE OF QUANTITATIVE DISCRETE DATA
The data are the number of books students carry in their backpacks. You sample five students. Two students carry three books, one student carries four books, one student carries two books, and one student carries one book. The numbers of books (three, four, two, and one) are the quantitative discrete data.
Exercise \(1\)
The data are the number of machines in a gym. You sample five gyms. One gym has 12 machines, one gym has 15 machines, one gym has ten machines, one gym has 22 machines, and the other gym has 20 machines. What type of data is this?
Example \(2\): DATA SAMPLE OF QUANTITATIVE CONTINUOUS DATA
The data are the weights of backpacks with books in them. You sample the same five students. The weights (in pounds) of their backpacks are 6.2, 7, 6.8, 9.1, 4.3. Notice that backpacks carrying three books can have different weights. Weights are quantitative continuous data.
Exercise \(2\)
The data are the areas of lawns in square feet. You sample five houses. The areas of the lawns are 144 sq. feet, 160 sq. feet, 190 sq. feet, 180 sq. feet, and 210 sq. feet. What type of data is this?
Example \(3\)
You go to the supermarket and purchase three cans of soup (19 ounces tomato bisque, 14.1 ounces lentil, and 19 ounces Italian wedding), two packages of nuts (walnuts and peanuts), four different kinds of vegetables (broccoli, cauliflower, spinach, and carrots), and two desserts (16 ounces pistachio ice cream and 32 ounces chocolate chip cookies).
Name data sets that are quantitative discrete, quantitative continuous, and qualitative (categorical).
Answer
One Possible Solution:
• The three cans of soup, two packages of nuts, four kinds of vegetables and two desserts are quantitative discrete data because you count them.
• The weights of the soups (19 ounces, 14.1 ounces, 19 ounces) are quantitative continuous data because you measure weights as precisely as possible.
• Types of soups, nuts, vegetables and desserts are qualitative (categorical) data because they are categorical.
Try to identify additional data sets in this example.
Example \(4\)
The data are the colors of backpacks. Again, you sample the same five students. One student has a red backpack, two students have black backpacks, one student has a green backpack, and one student has a gray backpack. The colors red, black, black, green, and gray are qualitative (categorical) data.
Exercise \(4\)
The data are the colors of houses. You sample five houses. The colors of the houses are white, yellow, white, red, and white. What type of data is this?
You may collect data as numbers and report it categorically. For example, the quiz scores for each student are recorded throughout the term. At the end of the term, the quiz scores are reported as A, B, C, D, or F.
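As a small illustration of collecting numeric data and reporting it categorically, the sketch below maps quiz averages to letter grades; the cutoffs are hypothetical, not a scheme prescribed in this chapter.

```python
# Numeric quiz averages (quantitative data as collected).
quiz_averages = [93.5, 71.0, 88.2, 59.5]

def letter_grade(score):
    # Hypothetical cutoffs used only for illustration.
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

# Letter grades (qualitative/categorical data as reported).
print([letter_grade(s) for s in quiz_averages])   # ['A', 'C', 'B', 'F']
```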
Example \(5\)
Work collaboratively to determine the correct data type (quantitative or qualitative). Indicate whether quantitative data are continuous or discrete. Hint: Data that are discrete often start with the words "the number of."
1. the number of pairs of shoes you own
2. the type of car you drive
3. the distance from your home to the nearest grocery store
4. the number of classes you take per school year
5. the type of calculator you use
6. weights of sumo wrestlers
7. number of correct answers on a quiz
8. IQ scores (This may cause some discussion.)
Answer
Items a, d, and g are quantitative discrete; items c, f, and h are quantitative continuous; items b and e are qualitative, or categorical.
Exercise \(5\)
Determine the correct data type (quantitative or qualitative) for the number of cars in a parking lot. Indicate whether quantitative data are continuous or discrete.
Example \(6\)
A statistics professor collects information about the classification of her students as freshmen, sophomores, juniors, or seniors. The data she collects are summarized in the pie chart Figure 1.2. What type of data does this graph show?
Answer
This pie chart shows the students in each year, which is qualitative (or categorical) data.
Exercise \(6\)
The registrar at State University keeps records of the number of credit hours students complete each semester. The data he collects are summarized in the histogram. The class boundaries are 10 to less than 13, 13 to less than 16, 16 to less than 19, 19 to less than 22, and 22 to less than 25.
What type of data does this graph show?
Qualitative Data Discussion
Below are tables comparing the number of part-time and full-time students at De Anza College and Foothill College enrolled for the spring 2010 quarter. The tables display counts (frequencies) and percentages or proportions (relative frequencies). The percent columns make comparing the same categories in the colleges easier. Displaying percentages along with the numbers is often helpful, but it is particularly important when comparing sets of data that do not have the same totals, such as the total enrollments for both colleges in this example. Notice how much larger the percentage for part-time students at Foothill College is compared to De Anza College.
Table \(1\): Fall Term 2007 (Census day)
De Anza College Foothill College
Number Percent Number Percent
Full-time 9,200 40.9% Full-time 4,059 28.6%
Part-time 13,296 59.1% Part-time 10,124 71.4%
Total 22,496 100% Total 14,183 100%
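The percent columns in a table like this are relative frequencies expressed as percentages. A minimal sketch of that computation, using the De Anza counts from the table:

```python
# Counts of full-time and part-time students at De Anza College (from the table).
counts = {"Full-time": 9200, "Part-time": 13296}
total = sum(counts.values())                     # 22,496 students

for category, n in counts.items():
    print(category, f"{100 * n / total:.1f}%")   # 40.9% and 59.1%
```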
Tables are a good way of organizing and displaying data. But graphs can be even more helpful in understanding the data. There are no strict rules concerning which graphs to use. Two graphs that are used to display qualitative (categorical) data are pie charts and bar graphs.
• In a pie chart, categories of data are represented by wedges in a circle and are proportional in size to the percent of individuals in each category.
• In a bar graph, the length of the bar for each category is proportional to the number or percent of individuals in each category. Bars may be vertical or horizontal.
• A Pareto chart consists of bars that are sorted into order by category size (largest to smallest).
Look at Figure 1.5 and determine which graph (pie or bar) you think displays the comparisons better.
It is a good idea to look at a variety of graphs to see which is the most helpful in displaying the data. We might make different choices of what we think is the “best” graph depending on the data and the context. Our choice also depends on what we are using the data for.
Figure 1.5
Percentages That Add to More (or Less) Than 100%
Sometimes percentages add up to be more than 100% (or less than 100%). In the graph, the percentages add to more than 100% because students can be in more than one category. A bar graph is appropriate to compare the relative size of the categories. A pie chart cannot be used. It also could not be used if the percentages added to less than 100%.
Table \(2\): De Anza College Spring 2010
Characteristic/category Percent
Full-time students 40.9%
Students who intend to transfer to a 4-year educational institution 48.6%
Students under age 25 61.0%
TOTAL 150.5%
Omitting Categories/Missing Data
The table displays Ethnicity of Students but is missing the "Other/Unknown" category. This category contains people who did not feel they fit into any of the ethnicity categories or declined to respond. Notice that the frequencies do not add up to the total number of students. In this situation, create a bar graph and not a pie chart.
Table \(3\): Ethnicity of Students at De Anza College Fall Term 2007 (Census Day)
Frequency Percent
Asian 8,794 36.1%
Black 1,412 5.8%
Filipino 1,298 5.3%
Hispanic 4,180 17.1%
Native American 146 0.6%
Pacific Islander 236 1.0%
White 5,978 24.5%
TOTAL 22,044 out of 24,382 90.4% out of 100%
The following graph is the same as the previous graph but the “Other/Unknown” percent (9.6%) has been included. The “Other/Unknown” category is large compared to some of the other categories (Native American, 0.6%, Pacific Islander 1.0%). This is important to know when we think about what the data are telling us.
This particular bar graph in Figure 1.9 is a Pareto chart. The Pareto chart has the bars sorted from largest to smallest and is easier to read and interpret.
Pie Charts: No Missing Data
The following pie charts have the “Other/Unknown” category included (since the percentages must add to 100%); see Figure 1.10.
Sampling
Gathering information about an entire population often costs too much or is virtually impossible. Instead, we use a sample of the population. A sample should have the same characteristics as the population it is representing. Most statisticians use various methods of random sampling in an attempt to achieve this goal. This section will describe a few of the most common methods. There are several different methods of random sampling. In each form of random sampling, each member of a population initially has an equal chance of being selected for the sample. Each method has pros and cons. The easiest method to describe is called a simple random sample. Any group of \(n\) individuals is as likely to be chosen as any other group of \(n\) individuals if the simple random sampling technique is used. In other words, each sample of the same size has an equal chance of being selected.
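A minimal sketch of drawing a simple random sample in Python, assuming the population is available as a numbered list (the member names below are hypothetical):

```python
import random

# Hypothetical population of 30 numbered members.
population = [f"member_{i}" for i in range(1, 31)]

# Simple random sample of n = 5: every group of 5 members is equally likely to be chosen.
sample = random.sample(population, k=5)
print(sample)
```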
Besides simple random sampling, there are other forms of sampling that involve a chance process for getting the sample. Other well-known random sampling methods are the stratified sample, the cluster sample, and the systematic sample.
To choose a stratified sample, divide the population into groups called strata and then take a proportionate number from each stratum. For example, you could stratify (group) your college population by department and then choose a proportionate simple random sample from each stratum (each department) to get a stratified random sample. To choose a simple random sample from each department, number each member of the first department, number each member of the second department, and do the same for the remaining departments. Then use simple random sampling to choose proportionate numbers from the first department and do the same for each of the remaining departments. Those numbers picked from the first department, picked from the second department, and so on represent the members who make up the stratified sample.
To choose a cluster sample, divide the population into clusters (groups) and then randomly select some of the clusters. All the members from these clusters are in the cluster sample. For example, if you randomly sample four departments from your college population, the four departments make up the cluster sample. Divide your college faculty by department. The departments are the clusters. Number each department, and then choose four different numbers using simple random sampling. All members of the four departments with those numbers are the cluster sample.
To choose a systematic sample, randomly select a starting point and take every \(n^{th}\) piece of data from a listing of the population. For example, suppose you have to do a phone survey. Your phone book contains 20,000 residence listings. You must choose 400 names for the sample. Number the population 1–20,000 and then use a simple random sample to pick a number that represents the first name in the sample. Then choose every fiftieth name thereafter until you have a total of 400 names (you might have to go back to the beginning of your phone list). Systematic sampling is frequently chosen because it is a simple method.
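The three procedures just described can be sketched the same way. The toy population below (three departments of unequal size) is hypothetical; the point is only to show where the random choice enters each method.

```python
import random

# Hypothetical population grouped by department.
departments = {
    "Math": [f"math_{i}" for i in range(1, 41)],      # 40 members
    "English": [f"eng_{i}" for i in range(1, 31)],    # 30 members
    "History": [f"hist_{i}" for i in range(1, 31)],   # 30 members
}

# Stratified sample: a proportionate simple random sample from every stratum.
fraction = 0.10
stratified = []
for members in departments.values():
    stratified.extend(random.sample(members, k=round(fraction * len(members))))

# Cluster sample: randomly choose whole departments and keep every member of them.
chosen = random.sample(list(departments), k=2)
cluster = [m for dept in chosen for m in departments[dept]]

# Systematic sample: random starting point, then every k-th member of one long listing.
listing = [m for members in departments.values() for m in members]
k = 10                            # population size / desired sample size
start = random.randrange(k)       # random starting point between 0 and k - 1
systematic = listing[start::k]

# e.g. 10, 70, 10 (the cluster size depends on which departments are drawn)
print(len(stratified), len(cluster), len(systematic))
```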
A type of sampling that is non-random is convenience sampling. Convenience sampling involves using results that are readily available. For example, a computer software store conducts a marketing study by interviewing potential customers who happen to be in the store browsing through the available software. The results of convenience sampling may be very good in some cases and highly biased (favor certain outcomes) in others.
Sampling data should be done very carefully. Collecting data carelessly can have devastating results. Surveys mailed to households and then returned may be very biased (they may favor a certain group). It is better for the person conducting the survey to select the sample respondents.
True random sampling is done with replacement. That is, once a member is picked, that member goes back into the population and thus may be chosen more than once. However, for practical reasons, in most populations, simple random sampling is done without replacement. Surveys are typically done without replacement. That is, a member of the population may be chosen only once. Most samples are taken from large populations and the sample tends to be small in comparison to the population. Since this is the case, sampling without replacement is approximately the same as sampling with replacement because the chance of picking the same individual more than once with replacement is very low.
In a college population of 10,000 people, suppose you want to pick a sample of 1,000 randomly for a survey. For any particular sample of 1,000, if you are sampling with replacement,
• the chance of picking the first person is 1,000 out of 10,000 (0.1000);
• the chance of picking a different second person for this sample is 999 out of 10,000 (0.0999);
• the chance of picking the same person again is 1 out of 10,000 (very low).
If you are sampling without replacement,
• the chance of picking the first person for any particular sample is 1000 out of 10,000 (0.1000);
• the chance of picking a different second person is 999 out of 9,999 (0.0999);
• you do not replace the first person before picking the next person.
Compare the fractions 999/10,000 and 999/9,999. For accuracy, carry the decimal answers to four decimal places. To four decimal places, these numbers are equivalent (0.0999).
Sampling without replacement instead of sampling with replacement becomes a mathematical issue only when the population is small. For example, if the population is 25 people, the sample is ten, and you are sampling with replacement for any particular sample, then the chance of picking the first person is ten out of 25, and the chance of picking a different second person is nine out of 25 (you replace the first person).
If you sample without replacement, then the chance of picking the first person is ten out of 25, and then the chance of picking the second person (who is different) is nine out of 24 (you do not replace the first person).
Compare the fractions 9/25 and 9/24. To four decimal places, 9/25 = 0.3600 and 9/24 = 0.3750. To four decimal places, these numbers are not equivalent.
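Both comparisons can be reproduced directly; this sketch simply evaluates the four fractions to four decimal places:

```python
# Large population: with vs. without replacement is essentially the same.
print(f"{999 / 10_000:.4f}")   # 0.0999 (with replacement)
print(f"{999 / 9_999:.4f}")    # 0.0999 (without replacement)

# Small population: the difference is noticeable.
print(f"{9 / 25:.4f}")         # 0.3600 (with replacement)
print(f"{9 / 24:.4f}")         # 0.3750 (without replacement)
```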
When you analyze data, it is important to be aware of sampling errors and nonsampling errors. The actual process of sampling causes sampling errors. For example, the sample may not be large enough. Factors not related to the sampling process cause nonsampling errors. A defective counting device can cause a nonsampling error.
In reality, a sample will never be exactly representative of the population so there will always be some sampling error. As a rule, the larger the sample, the smaller the sampling error.
In statistics, a sampling bias is created when a sample is collected from a population and some members of the population are not as likely to be chosen as others (remember, each member of the population should have an equally likely chance of being chosen). When a sampling bias happens, there can be incorrect conclusions drawn about the population that is being studied.
Critical Evaluation
We need to evaluate the statistical studies we read about critically and analyze them before accepting the results of the studies. Common problems to be aware of include
• Problems with samples: A sample must be representative of the population. A sample that is not representative of the population is biased. Biased samples that are not representative of the population give results that are inaccurate and not valid.
• Self-selected samples: Responses only by people who choose to respond, such as call-in surveys, are often unreliable.
• Sample size issues: Samples that are too small may be unreliable. Larger samples are better, if possible. In some situations, having small samples is unavoidable and can still be used to draw conclusions. Examples: crash testing cars or medical testing for rare conditions
• Undue influence: collecting data or asking questions in a way that influences the response
• Non-response or refusal of subject to participate: The collected responses may no longer be representative of the population. Often, people with strong positive or negative opinions may answer surveys, which can affect the results.
• Causality: A relationship between two variables does not mean that one causes the other to occur. They may be related (correlated) because of their relationship through a different variable.
• Self-funded or self-interest studies: A study performed by a person or organization in order to support their claim. Is the study impartial? Read the study carefully to evaluate the work. Do not automatically assume that the study is good, but do not automatically assume the study is bad either. Evaluate it on its merits and the work done.
• Misleading use of data: improperly displayed graphs, incomplete data, or lack of context
• Confounding: When the effects of multiple factors on a response cannot be separated. Confounding makes it difficult or impossible to draw valid conclusions about the effect of each factor.
Example \(7\)
A study is done to determine the average tuition that San Jose State undergraduate students pay per semester. Each student in the following samples is asked how much tuition he or she paid for the Fall semester. What is the type of sampling in each case?
1. A sample of 100 undergraduate San Jose State students is taken by organizing the students’ names by classification (freshman, sophomore, junior, or senior), and then selecting 25 students from each.
2. A random number generator is used to select a student from the alphabetical listing of all undergraduate students in the Fall semester. Starting with that student, every 50th student is chosen until 75 students are included in the sample.
3. A completely random method is used to select 75 students. Each undergraduate student in the fall semester has the same probability of being chosen at any stage of the sampling process.
4. The freshman, sophomore, junior, and senior years are numbered one, two, three, and four, respectively. A random number generator is used to pick two of those years. All students in those two years are in the sample.
5. An administrative assistant is asked to stand in front of the library one Wednesday and to ask the first 100 undergraduate students he encounters what they paid for tuition the Fall semester. Those 100 students are the sample.
Answer
a. stratified; b. systematic; c. simple random; d. cluster; e. convenience
Example \(8\)
Determine the type of sampling used (simple random, stratified, systematic, cluster, or convenience).
1. A soccer coach selects six players from a group of boys aged eight to ten, seven players from a group of boys aged 11 to 12, and three players from a group of boys aged 13 to 14 to form a recreational soccer team.
2. A pollster interviews all human resource personnel in five different high tech companies.
3. A high school educational researcher interviews 50 high school female teachers and 50 high school male teachers.
4. A medical researcher interviews every third cancer patient from a list of cancer patients at a local hospital.
5. A high school counselor uses a computer to generate 50 random numbers and then picks students whose names correspond to the numbers.
6. A student interviews classmates in his algebra class to determine how many pairs of jeans a student owns, on the average.
Answer
a. stratified; b. cluster; c. stratified; d. systematic; e. simple random; f. convenience
If we were to examine two samples representing the same population, even if we used random sampling methods for the samples, they would not be exactly the same. Just as there is variation in data, there is variation in samples. As you become accustomed to sampling, the variability will begin to seem natural.
Example \(8\)
Suppose ABC College has 10,000 part-time students (the population). We are interested in the average amount of money a part-time student spends on books in the fall term. Asking all 10,000 students is an almost impossible task.
Suppose we take two different samples.
First, we use convenience sampling and survey ten students from a first term organic chemistry class. Many of these students are taking first term calculus in addition to the organic chemistry class. The amount of money they spend on books is as follows:
\$128; \$87; \$173; \$116; \$130; \$204; \$147; \$189; \$93; \$153
The second sample is taken using a list of senior citizens who take P.E. classes and taking every fifth senior citizen on the list, for a total of ten senior citizens. They spend:
\$50; \$40; \$36; \$15; \$50; \$100; \$40; \$53; \$22; \$22
It is unlikely that any student is in both samples.
a. Do you think that either of these samples is representative of (or is characteristic of) the entire 10,000 part-time student population?
Answer
a. No. The first sample probably consists of science-oriented students. Besides the chemistry course, some of them are also taking first-term calculus. Books for these classes tend to be expensive. Most of these students are, more than likely, paying more than the average part-time student for their books. The second sample is a group of senior citizens who are, more than likely, taking courses for health and interest. The amount of money they spend on books is probably much less than the average part-time student. Both samples are biased. Also, in both cases, not all students have a chance to be in either sample.
b. Since these samples are not representative of the entire population, is it wise to use the results to describe the entire population?
Answer
Solution 1.13
b. No. For these samples, each member of the population did not have an equally likely chance of being chosen.
Now, suppose we take a third sample. We choose ten different part-time students from the disciplines of chemistry, math, English, psychology, sociology, history, nursing, physical education, art, and early childhood development. (We assume that these are the only disciplines in which part-time students at ABC College are enrolled and that an equal number of part-time students are enrolled in each of the disciplines.) Each student is chosen using simple random sampling. Using a calculator, random numbers are generated and a student from a particular discipline is selected if he or she has a corresponding number. The students spend the following amounts:
\$180; \$50; \$150; \$85; \$260; \$75; \$180; \$200; \$200; \$150
c. Is the sample biased?
Answer
Solution 1.13
c. The sample is unbiased, but a larger sample would be recommended to increase the likelihood that the sample will be close to representative of the population. However, for a biased sampling technique, even a large sample runs the risk of not being representative of the population.
Students often ask if it is "good enough" to take a sample, instead of surveying the entire population. If the survey is done well, the answer is yes.
Exercise \(8\)
A local radio station has a fan base of 20,000 listeners. The station wants to know if its audience would prefer more music or more talk shows. Asking all 20,000 listeners is an almost impossible task.
The station uses convenience sampling and surveys the first 200 people they meet at one of the station’s music concert events. 24 people said they’d prefer more talk shows, and 176 people said they’d prefer more music.
Do you think that this sample is representative of (or is characteristic of) the entire 20,000 listener population?
Variation in Data
Variation is present in any set of data. For example, 16-ounce cans of beverage may contain more or less than 16 ounces of liquid. In one study, eight 16 ounce cans were measured and produced the following amount (in ounces) of beverage:
15.8; 16.1; 15.2; 14.8; 15.8; 15.9; 16.0; 15.5
Measurements of the amount of beverage in a 16-ounce can may vary because different people make the measurements or because the exact amount, 16 ounces of liquid, was not put into the cans. Manufacturers regularly run tests to determine if the amount of beverage in a 16-ounce can falls within the desired range.
Be aware that as you take data, your data may vary somewhat from the data someone else is taking for the same purpose. This is completely natural. However, if two or more of you are taking the same data and get very different results, it is time for you and the others to reevaluate your data-taking methods and your accuracy.
Variation in Samples
It was mentioned previously that two or more samples from the same population, taken randomly, and having close to the same characteristics of the population will likely be different from each other. Suppose Doreen and Jung both decide to study the average amount of time students at their college sleep each night. Doreen and Jung each take samples of 500 students. Doreen uses systematic sampling and Jung uses cluster sampling. Doreen's sample will be different from Jung's sample. Even if Doreen and Jung used the same sampling method, in all likelihood their samples would be different. Neither would be wrong, however.
Think about what contributes to making Doreen’s and Jung’s samples different.
If Doreen and Jung took larger samples (i.e. the number of data values is increased), their sample results (the average amount of time a student sleeps) might be closer to the actual population average. But still, their samples would be, in all likelihood, different from each other. This variability in samples cannot be stressed enough.
Size of a Sample
The size of a sample (often called the number of observations, usually given the symbol n) is important. The examples you have seen in this book so far have been small. Samples of only a few hundred observations, or even smaller, are sufficient for many purposes. In polling, samples that are from 1,200 to 1,500 observations are considered large enough and good enough if the survey is random and is well done. Later we will find that even much smaller sample sizes will give very good results. You will learn why when you study confidence intervals.
Be aware that many large samples are biased. For example, call-in surveys are invariably biased, because people choose to respond or not.
1.03: Levels of Measurement
Once you have a set of data, you will need to organize it so that you can analyze how frequently each datum occurs in the set. However, when calculating the frequency, you may need to round your answers so that they are as precise as possible.
Levels of Measurement
The way a set of data is measured is called its level of measurement. Correct statistical procedures depend on a researcher being familiar with levels of measurement. Not every statistical operation can be used with every set of data. Data can be classified into four levels of measurement. They are (from lowest to highest level):
• Nominal scale level
• Ordinal scale level
• Interval scale level
• Ratio scale level
Data that is measured using a nominal scale is qualitative (categorical). Categories, colors, names, labels and favorite foods along with yes or no responses are examples of nominal level data. Nominal scale data are not ordered. For example, trying to classify people according to their favorite food does not make any sense. Putting pizza first and sushi second is not meaningful.
Smartphone companies are another example of nominal scale data. The data are the names of the companies that make smartphones, but there is no agreed upon order of these brands, even though people may have personal preferences. Nominal scale data cannot be used in calculations.
Data that is measured using an ordinal scale is similar to nominal scale data but there is a big difference. The ordinal scale data can be ordered. An example of ordinal scale data is a list of the top five national parks in the United States. The top five national parks in the United States can be ranked from one to five but we cannot measure differences between the data.
Another example of using the ordinal scale is a cruise survey where the responses to questions about the cruise are “excellent,” “good,” “satisfactory,” and “unsatisfactory.” These responses are ordered from the most desired response to the least desired. But the differences between two pieces of data cannot be measured. Like the nominal scale data, ordinal scale data cannot be used in calculations.
Data that is measured using the interval scale is similar to ordinal level data because it has a definite ordering but there is a difference between data. The differences between interval scale data can be measured though the data does not have a starting point.
Temperature scales like Celsius (C) and Fahrenheit (F) are measured by using the interval scale. In both temperature measurements, 40° is equal to 100° minus 60°. Differences make sense. But 0 degrees is not a true zero because, in both scales, 0 is not the absolute lowest temperature. Temperatures like -10° F and -15° C exist and are colder than 0.
Interval level data can be used in calculations, but one type of comparison cannot be done. 80° C is not four times as hot as 20° C (nor is 80° F four times as hot as 20° F). There is no meaning to the ratio of 80 to 20 (or four to one).
Data that is measured using the ratio scale takes care of the ratio problem and gives you the most information. Ratio scale data is like interval scale data, but it has a 0 point and ratios can be calculated. For example, four multiple choice statistics final exam scores are 80, 68, 20 and 92 (out of a possible 100 points). The exams are machine-graded.
The data can be put in order from lowest to highest: 20, 68, 80, 92.
The differences between the data have meaning. The score 92 is more than the score 68 by 24 points. Ratios can be calculated. The smallest score is 0. So 80 is four times 20. The score of 80 is four times better than the score of 20.
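A quick numerical check makes the contrast concrete: the ratio of two temperatures changes when you merely switch scales, while the ratio of two exam scores does not depend on any such arbitrary choice because the zero is a true zero. The sketch below uses the standard Celsius-to-Fahrenheit conversion formula.

```python
def c_to_f(c):
    # Standard Celsius-to-Fahrenheit conversion.
    return 9 / 5 * c + 32

# Interval scale: the "ratio" depends on which temperature scale you happen to use.
print(80 / 20)                    # 4.0 in Celsius...
print(c_to_f(80) / c_to_f(20))    # ...but about 2.59 in Fahrenheit (176/68), so the ratio is not meaningful

# Ratio scale: exam scores have a true zero, so ratios (and differences) are meaningful.
print(80 / 20)                    # a score of 80 really is four times a score of 20
print(92 - 68)                    # 24-point difference
```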
Frequency
Twenty students were asked how many hours they worked per day. Their responses, in hours, are as follows: 5; 6; 3; 3; 2; 4; 7; 5; 2; 3; 5; 6; 5; 4; 4; 3; 5; 2; 5; 3.
Table $5$ lists the different data values in ascending order and their frequencies.
Data value Frequency
2 3
3 5
4 3
5 6
6 2
7 1
Table 1.5 Frequency Table of Student Work Hours
A frequency is the number of times a value of the data occurs. According to Table $5$, there are three students who work two hours, five students who work three hours, and so on. The sum of the values in the frequency column, 20, represents the total number of students included in the sample.
A relative frequency is the ratio (fraction or proportion) of the number of times a value of the data occurs in the set of all outcomes to the total number of outcomes. To find the relative frequencies, divide each frequency by the total number of students in the sample–in this case, 20. Relative frequencies can be written as fractions, percents, or decimals.
Data value Frequency Relative frequency
2 3 $\frac{3}{20}$ or 0.15
3 5 $\frac{5}{20}$ or 0.25
4 3 $\frac{3}{20}$ or 0.15
5 6 $\frac{6}{20}$ or 0.30
6 2 $\frac{2}{20}$ or 0.10
7 1 $\frac{1}{20}$ or 0.05
Table 1.6 Frequency Table of Student Work Hours with Relative Frequencies
The sum of the values in the relative frequency column of Table $6$ is $\frac{20}{20}$, or 1.
Cumulative relative frequency is the accumulation of the previous relative frequencies. To find the cumulative relative frequencies, add all the previous relative frequencies to the relative frequency for the current row, as shown in Table $7$.
Data value Frequency Relative frequency Cumulative relative frequency
2 3 $\frac{3}{20}$ or 0.15 0.15
3 5 $\frac{5}{20}$ or 0.25 0.15 + 0.25 = 0.40
4 3 $\frac{3}{20}$ or 0.15 0.40 + 0.15 = 0.55
5 6 $\frac{6}{20}$ or 0.30 0.55 + 0.30 = 0.85
6 2 $\frac{2}{20}$ or 0.10 0.85 + 0.10 = 0.95
7 1 $\frac{1}{20}$ or 0.05 0.95 + 0.05 = 1.00
Table 1.7 Frequency Table of Student Work Hours with Relative and Cumulative Relative Frequencies
The last entry of the cumulative relative frequency column is one, indicating that one hundred percent of the data has been accumulated.
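A minimal sketch that rebuilds the three columns of Table 1.7 from the 20 raw responses listed above:

```python
from collections import Counter

hours = [5, 6, 3, 3, 2, 4, 7, 5, 2, 3, 5, 6, 5, 4, 4, 3, 5, 2, 5, 3]
freq = Counter(hours)          # frequency of each data value
n = len(hours)                 # 20 students

cumulative = 0.0
for value in sorted(freq):
    rel = freq[value] / n      # relative frequency
    cumulative += rel          # cumulative relative frequency
    print(value, freq[value], round(rel, 2), round(cumulative, 2))
```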
NOTE
Because of rounding, the relative frequency column may not always sum to one, and the last entry in the cumulative relative frequency column may not be one. However, they each should be close to one.
Table $8$ represents the heights, in inches, of a sample of 100 male semiprofessional soccer players.
Heights (inches) Frequency Relative frequency Cumulative relative frequency
59.95–61.95 5 $\frac{5}{100}$ = 0.05 0.05
61.95–63.95 3 $\frac{3}{100}$ = 0.03 0.05 + 0.03 = 0.08
63.95–65.95 15 $\frac{15}{100}$ = 0.15 0.08 + 0.15 = 0.23
65.95–67.95 40 $\frac{40}{100}$ = 0.40 0.23 + 0.40 = 0.63
67.95–69.95 17 $\frac{17}{100}$ = 0.17 0.63 + 0.17 = 0.80
69.95–71.95 12 $\frac{12}{100}$ = 0.12 0.80 + 0.12 = 0.92
71.95–73.95 7 $\frac{7}{100}$ = 0.07 0.92 + 0.07 = 0.99
73.95–75.95 1 $\frac{1}{100}$ = 0.01 0.99 + 0.01 = 1.00
Total = 100 Total = 1.00
Table 1.8 Frequency Table of Soccer Player Height
The data in this table have been grouped into the following intervals:
• 59.95 to 61.95 inches
• 61.95 to 63.95 inches
• 63.95 to 65.95 inches
• 65.95 to 67.95 inches
• 67.95 to 69.95 inches
• 69.95 to 71.95 inches
• 71.95 to 73.95 inches
• 73.95 to 75.95 inches
In this sample, there are five players whose heights fall within the interval 59.95–61.95 inches, three players whose heights fall within the interval 61.95–63.95 inches, 15 players whose heights fall within the interval 63.95–65.95 inches, 40 players whose heights fall within the interval 65.95–67.95 inches, 17 players whose heights fall within the interval 67.95–69.95 inches, 12 players whose heights fall within the interval 69.95–71.95, seven players whose heights fall within the interval 71.95–73.95, and one player whose height falls within the interval 73.95–75.95. All heights fall between the endpoints of an interval and not at the endpoints.
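Questions about grouped data like the examples that follow amount to adding the relative frequencies of the relevant intervals. A minimal sketch using the grouped frequencies from Table 1.8:

```python
# (upper endpoint of interval, frequency) pairs from Table 1.8.
heights = [(61.95, 5), (63.95, 3), (65.95, 15), (67.95, 40),
           (69.95, 17), (71.95, 12), (73.95, 7), (75.95, 1)]
n = sum(f for _, f in heights)             # 100 players

# Percentage of heights below a chosen endpoint, e.g. 65.95 inches.
below = sum(f for upper, f in heights if upper <= 65.95)
print(f"{100 * below / n:.0f}%")           # 23%
```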
Example $14$
From Table $8$, find the percentage of heights that are less than 65.95 inches.
Answer
Solution 1.14
Add the relative frequencies in the first three rows: $0.05 + 0.03 + 0.15 = 0.23$ or 23%.
Exercise $14$
Table $9$ shows the amount, in inches, of annual rainfall in a sample of towns.
Rainfall (inches) Frequency Relative frequency Cumulative relative frequency
2.95–4.97 6 $\frac{6}{50}$ = 0.12 0.12
4.97–6.99 7 $\frac{7}{50}$ = 0.14 0.12 + 0.14 = 0.26
6.99–9.01 15 $\frac{15}{50}$ = 0.30 0.26 + 0.30 = 0.56
9.01–11.03 8 $\frac{8}{50}$ = 0.16 0.56 + 0.16 = 0.72
11.03–13.05 9 $\frac{9}{50}$ = 0.18 0.72 + 0.18 = 0.90
13.05–15.07 5 $\frac{5}{50}$ = 0.10 0.90 + 0.10 = 1.00
Total = 50 Total = 1.00
Table $9$
From Table $9$, find the percentage of rainfall that is less than 9.01 inches.
Example $15$
From Table $8$, find the percentage of heights that fall between 61.95 and 65.95 inches.
Answer
Solution 1.15
Add the relative frequencies in the second and third rows: $0.03 + 0.15 = 0.18$ or 18%.
Exercise $15$
From Table $9$, find the percentage of rainfall that is between 6.99 and 13.05 inches.
Example $16$
Use the heights of the 100 male semiprofessional soccer players in Table $8$. Fill in the blanks and check your answers.
1. The percentage of heights that are from 67.95 to 71.95 inches is: ____.
2. The percentage of heights that are from 67.95 to 73.95 inches is: ____.
3. The percentage of heights that are more than 65.95 inches is: ____.
4. The number of players in the sample who are between 61.95 and 71.95 inches tall is: ____.
5. What kind of data are the heights?
6. Describe how you could gather this data (the heights) so that the data are characteristic of all male semiprofessional soccer players.
Remember, you count frequencies. To find the relative frequency, divide the frequency by the total number of data values. To find the cumulative relative frequency, add all of the previous relative frequencies to the relative frequency for the current row.
Answer
Solution 1.16
1. 29%
2. 36%
3. 77%
4. 87
5. quantitative continuous
6. get rosters from each team and choose a simple random sample from each
Example $17$
Nineteen people were asked how many miles, to the nearest mile, they commute to work each day. The data are as follows: 2; 5; 7; 3; 2; 10; 18; 15; 20; 7; 10; 18; 5; 12; 13; 12; 4; 5; 10. Table $10$ was produced:
Data Frequency Relative frequency Cumulative relative frequency
3 3 $\frac{3}{19}$ 0.1579
4 1 $\frac{1}{19}$ 0.2105
5 3 $\frac{3}{19}$ 0.1579
7 2 $\frac{2}{19}$ 0.2632
10 3 $\frac{4}{19}$ 0.4737
12 2 $\frac{2}{19}$ 0.7895
13 1 $\frac{1}{19}$ 0.8421
15 1 $\frac{1}{19}$ 0.8948
18 1 $\frac{1}{19}$ 0.9474
20 1 $\frac{1}{19}$ 1.0000
Table $10$ Frequency of Commuting Distances
1. Is the table correct? If it is not correct, what is wrong?
2. True or False: Three percent of the people surveyed commute three miles. If the statement is not correct, what should it be? If the table is incorrect, make the corrections.
3. What fraction of the people surveyed commute five or seven miles?
4. What fraction of the people surveyed commute 12 miles or more? Less than 12 miles? Between five and 13 miles (not including five and 13 miles)?
Answer
Solution 1.17
1. No. The frequency column sums to 18, not 19. Not all cumulative relative frequencies are correct.
2. False. The frequency for three miles should be one; for two miles (left out), two. The cumulative relative frequency column should read: 0.1052, 0.1579, 0.2105, 0.3684, 0.4737, 0.6316, 0.7368, 0.7895, 0.8421, 0.9474, 1.0000.
3. $\frac{5}{19}$
4. $\frac{7}{19}, \frac{12}{19}, \frac{7}{19}$
Exercise $17$
Table $9$ represents the amount, in inches, of annual rainfall in a sample of towns. What fraction of towns surveyed get between 11.03 and 13.05 inches of rainfall each year?
Example $18$
Table $11$ contains the total number of deaths worldwide as a result of earthquakes for the period from 2000 to 2012.
Year Total number of deaths
2000 231
2001 21,357
2002 11,685
2003 33,819
2004 228,802
2005 88,003
2006 6,605
2007 712
2008 88,011
2009 1,790
2010 320,120
2011 21,953
2012 768
Total 823,856
Table 1.11
Answer the following questions.
1. What is the frequency of deaths measured from 2006 through 2009?
2. What percentage of deaths occurred after 2009?
3. What is the relative frequency of deaths that occurred in 2003 or earlier?
4. What is the percentage of deaths that occurred in 2004?
5. What kind of data are the numbers of deaths?
6. The Richter scale is used to quantify the energy produced by an earthquake. Examples of Richter scale numbers are 2.3, 4.0, 6.1, and 7.0. What kind of data are these numbers?
Answer
Solution 1.18
1. 97,118 (11.8%)
2. 41.6%
3. 67,092/823,856 or 0.081 or 8.1%
4. 27.8%
5. Quantitative discrete
6. Quantitative continuous
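Summaries like the ones in this solution can be checked directly from the yearly totals in Table 1.11; a minimal sketch:

```python
# Worldwide earthquake deaths by year, copied from Table 1.11.
deaths = {2000: 231, 2001: 21_357, 2002: 11_685, 2003: 33_819, 2004: 228_802,
          2005: 88_003, 2006: 6_605, 2007: 712, 2008: 88_011, 2009: 1_790,
          2010: 320_120, 2011: 21_953, 2012: 768}
total = sum(deaths.values())                                                    # 823,856

print(sum(v for y, v in deaths.items() if 2006 <= y <= 2009))                   # 97,118 deaths
print(f"{100 * sum(v for y, v in deaths.items() if y > 2009) / total:.1f}%")    # 41.6%
print(f"{100 * sum(v for y, v in deaths.items() if y <= 2003) / total:.1f}%")   # 8.1%
print(f"{100 * deaths[2004] / total:.1f}%")                                     # 27.8%
```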
Exercise $18$
Table $12$ contains the total number of fatal motor vehicle traffic crashes in the United States for the period from 1994 to 2011.
Year Total number of crashes Year Total number of crashes
1994 36,254 2004 38,444
1995 37,241 2005 39,252
1996 37,494 2006 38,648
1997 37,324 2007 37,435
1998 37,107 2008 34,172
1999 37,140 2009 30,862
2000 37,526 2010 30,296
2001 37,862 2011 29,757
2002 38,491 Total 653,782
2003 38,477
Table 1.12
Answer the following questions.
1. What is the frequency of deaths measured from 2000 through 2004?
2. What percentage of deaths occurred after 2006?
3. What is the relative frequency of deaths that occurred in 2000 or before?
4. What is the percentage of deaths that occurred in 2011?
5. What is the cumulative relative frequency for 2006? Explain what this number tells you about the data.
1.04: Experimental Design and Ethics
Does aspirin reduce the risk of heart attacks? Is one brand of fertilizer more effective at growing roses than another? Is fatigue as dangerous to a driver as the influence of alcohol? Questions like these are answered using randomized experiments. In this module, you will learn important aspects of experimental design. Proper study design ensures the production of reliable, accurate data.
The purpose of an experiment is to investigate the relationship between two variables. When one variable causes change in another, we call the first variable the independent variable or explanatory variable. The affected variable is called the dependent variable or response variable: stimulus, response. In a randomized experiment, the researcher manipulates values of the explanatory variable and measures the resulting changes in the response variable. The different values of the explanatory variable are called treatments. An experimental unit is a single object or individual to be measured.
You want to investigate the effectiveness of vitamin E in preventing disease. You recruit a group of subjects and ask them if they regularly take vitamin E. You notice that the subjects who take vitamin E exhibit better health on average than those who do not. Does this prove that vitamin E is effective in disease prevention? It does not. There are many differences between the two groups compared in addition to vitamin E consumption. People who take vitamin E regularly often take other steps to improve their health: exercise, diet, other vitamin supplements, choosing not to smoke. Any one of these factors could be influencing health. As described, this study does not prove that vitamin E is the key to disease prevention.
Additional variables that can cloud a study are called lurking variables. In order to prove that the explanatory variable is causing a change in the response variable, it is necessary to isolate the explanatory variable. The researcher must design her experiment in such a way that there is only one difference between groups being compared: the planned treatments. This is accomplished by the random assignment of experimental units to treatment groups. When subjects are assigned treatments randomly, all of the potential lurking variables are spread equally among the groups. At this point the only difference between groups is the one imposed by the researcher. Different outcomes measured in the response variable, therefore, must be a direct result of the different treatments. In this way, an experiment can prove a cause-and-effect connection between the explanatory and response variables.
The power of suggestion can have an important influence on the outcome of an experiment. Studies have shown that the expectation of the study participant can be as important as the actual medication. In one study of performance-enhancing drugs, researchers noted:
Results showed that believing one had taken the substance resulted in [performance] times almost as fast as those associated with consuming the drug itself. In contrast, taking the drug without knowledge yielded no significant performance increment. (McClung, M. Collins, D. “Because I know it will!”: placebo effects of an ergogenic aid on athletic performance. Journal of Sport & Exercise Psychology. 2007 Jun. 29(3):382-94. Web. April 30, 2013.)
When participation in a study prompts a physical response from a participant, it is difficult to isolate the effects of the explanatory variable. To counter the power of suggestion, researchers set aside one treatment group as a control group. This group is given a placebo treatment–a treatment that cannot influence the response variable. The control group helps researchers balance the effects of being in an experiment with the effects of the active treatments. Of course, if you are participating in a study and you know that you are receiving a pill which contains no actual medication, then the power of suggestion is no longer a factor. Blinding in a randomized experiment preserves the power of suggestion. When a person involved in a research study is blinded, he does not know who is receiving the active treatment(s) and who is receiving the placebo treatment. A double-blind experiment is one in which both the subjects and the researchers involved with the subjects are blinded.
Example \(19\)
The Smell & Taste Treatment and Research Foundation conducted a study to investigate whether smell can affect learning. Subjects completed mazes multiple times while wearing masks. They completed the pencil and paper mazes three times wearing floral-scented masks, and three times with unscented masks. Participants were assigned at random to wear the floral mask during the first three trials or during the last three trials. For each trial, researchers recorded the time it took to complete the maze and the subject’s impression of the mask’s scent: positive, negative, or neutral.
1. Describe the explanatory and response variables in this study.
2. What are the treatments?
3. Identify any lurking variables that could interfere with this study.
4. Is it possible to use blinding in this study?
Answer
Solution 1.19
The explanatory variable is scent, and the response variable is the time it takes to complete the maze. There are two treatments: a floral-scented mask and an unscented mask. All subjects experienced both treatments. The order of treatments was randomly assigned so there were no differences between the treatment groups. Random assignment eliminates the problem of lurking variables. Subjects will clearly know whether they can smell flowers or not, so subjects cannot be blinded in this study. Researchers timing the mazes can be blinded, though. The researcher who is observing a subject will not know which mask is being worn.
1.05: Chapter Key Terms
Average
also called mean or arithmetic mean; a number that describes the central tendency of the data
Blinding
not telling participants which treatment a subject is receiving
Categorical Variable
variables that take on values that are names or labels
Cluster Sampling
a method for selecting a random sample and dividing the population into groups (clusters); use simple random sampling to select a set of clusters. Every individual in the chosen clusters is included in the sample.
Continuous Random Variable
a random variable (RV) whose outcomes are measured; the height of trees in the forest is a continuous RV.
Control Group
a group in a randomized experiment that receives an inactive treatment but is otherwise managed exactly as the other groups
Convenience Sampling
a nonrandom method of selecting a sample; this method selects individuals that are easily accessible and may result in biased data.
Cumulative Relative Frequency
The term applies to an ordered set of observations from smallest to largest. The cumulative relative frequency is the sum of the relative frequencies for all values that are less than or equal to the given value.
Data
a set of observations (a set of possible outcomes); most data can be put into two groups: qualitative (an attribute whose value is indicated by a label) or quantitative (an attribute whose value is indicated by a number). Quantitative data can be separated into two subgroups: discrete and continuous. Data is discrete if it is the result of counting (such as the number of students of a given ethnic group in a class or the number of books on a shelf). Data is continuous if it is the result of measuring (such as distance traveled or weight of luggage)
Discrete Random Variable
a random variable (RV) whose outcomes are counted
Double-blinding
the act of blinding both the subjects of an experiment and the researchers who work with the subjects
Experimental Unit
any individual or object to be measured
Explanatory Variable
the independent variable in an experiment; the value controlled by researchers
Frequency
the number of times a value of the data occurs
Informed Consent
Any human subject in a research study must be cognizant of any risks or costs associated with the study. The subject has the right to know the nature of the treatments included in the study, their potential risks, and their potential benefits. Consent must be given freely by an informed, fit participant.
Institutional Review Board
a committee tasked with oversight of research programs that involve human subjects
Lurking Variable
a variable that has an effect on a study even though it is neither an explanatory variable nor a response variable
Mathematical Models
a description of a phenomenon using mathematical concepts, such as equations, inequalities, distributions, etc.
Nonsampling Error
an issue that affects the reliability of sampling data other than natural variation; it includes a variety of human errors including poor study design, biased sampling methods, inaccurate information provided by study participants, data entry errors, and poor analysis.
Numerical Variable
variables that take on values that are indicated by numbers
Observational Study
a study in which the independent variable is not manipulated by the researcher
Parameter
a number that is used to represent a population characteristic and that generally cannot be determined easily
Placebo
an inactive treatment that has no real effect on the explanatory variable
Population
all individuals, objects, or measurements whose properties are being studied
Probability
a number between zero and one, inclusive, that gives the likelihood that a specific event will occur
Proportion
the number of successes divided by the total number in the sample
Qualitative Data
See Data.
Quantitative Data
See Data.
Random Assignment
the act of organizing experimental units into treatment groups using random methods
Random Sampling
a method of selecting a sample that gives every member of the population an equal chance of being selected.
Relative Frequency
the ratio of the number of times a value of the data occurs in the set of all outcomes to the number of all outcomes to the total number of outcomes
Representative Sample
a subset of the population that has the same characteristics as the population
Response Variable
the dependent variable in an experiment; the value that is measured for change at the end of an experiment
Sample
a subset of the population studied
Sampling Bias
not all members of the population are equally likely to be selected
Sampling Error
the natural variation that results from selecting a sample to represent a larger population; this variation decreases as the sample size increases, so selecting larger samples reduces sampling error.
Sampling with Replacement
Once a member of the population is selected for inclusion in a sample, that member is returned to the population for the selection of the next individual.
Sampling without Replacement
A member of the population may be chosen for inclusion in a sample only once. If chosen, the member is not returned to the population before the next selection.
Simple Random Sampling
a straightforward method for selecting a random sample; give each member of the population a number. Use a random number generator to select a set of labels. These randomly selected labels identify the members of your sample.
Statistic
a numerical characteristic of the sample; a statistic estimates the corresponding population parameter.
Statistical Models
a description of a phenomenon using probability distributions that describe the expected behavior of the phenomenon and the variability in the expected observations.
Stratified Sampling
a method for selecting a random sample used to ensure that subgroups of the population are represented adequately; divide the population into groups (strata). Use simple random sampling to identify a proportionate number of individuals from each stratum.
Survey
a study in which data is collected as reported by individuals.
Systematic Sampling
a method for selecting a random sample; list the members of the population. Use simple random sampling to select a starting point in the population. Let k = (number of individuals in the population)/(number of individuals needed in the sample). Choose every kth individual in the list starting with the one that was randomly selected. If necessary, return to the beginning of the population list to complete your sample.
Treatments
different values or components of the explanatory variable applied in an experiment
Variable
a characteristic of interest for each person or object in a population
1.06: Chapter References
Definitions of Statistics, Probability, and Key Terms
• The Data and Story Library, lib.stat.cmu.edu/DASL/Stories...stDummies.html (accessed May 1, 2013).
Levels of Measurement
• “State & County QuickFacts,” U.S. Census Bureau. quickfacts.census.gov/qfd/download_data.html (accessed May 1, 2013).
• “State & County QuickFacts: Quick, easy access to facts about people, business, and geography,” U.S. Census Bureau. quickfacts.census.gov/qfd/index.html (accessed May 1, 2013).
• “Table 5: Direct hits by mainland United States Hurricanes (1851-2004),” National Hurricane Center, http://www.nhc.noaa.gov/gifs/table5.gif (accessed May 1, 2013).
• “Levels of Measurement,” infinity.cos.edu/faculty/wood...ata_Levels.htm (accessed May 1, 2013).
• Courtney Taylor, “Levels of Measurement,” about.com, http://statistics.about.com/od/Helpa...easurement.htm (accessed May 1, 2013).
• David Lane. “Levels of Measurement,” Connexions, http://cnx.org/content/m10809/latest/ (accessed May 1, 2013).
Experimental Design and Ethics
• “Vitamin E and Health,” Nutrition Source, Harvard School of Public Health, http://www.hsph.harvard.edu/nutritio...rce/vitamin-e/ (accessed May 1, 2013).
• Stan Reents. “Don’t Underestimate the Power of Suggestion,” athleteinme.com, www.athleteinme.com/ArticleView.aspx?id=1053 (accessed May 1, 2013).
• Ankita Mehta. “Daily Dose of Aspirin Helps Reduce Heart Attacks: Study,” International Business Times, July 21, 2011. Also available online at http://www.ibtimes.com/daily-dose-as...s-study-300443 (accessed May 1, 2013).
• The Data and Story Library, lib.stat.cmu.edu/DASL/Stories...dLearning.html (accessed May 1, 2013).
• M.L. Jackson et al., “Cognitive Components of Simulated Driving Performance: Sleep Loss effect and Predictors,” Accident Analysis and Prevention Journal, Jan no. 50 (2013), http://www.ncbi.nlm.nih.gov/pubmed/22721550 (accessed May 1, 2013).
• “Earthquake Information by Year,” U.S. Geological Survey. http://earthquake.usgs.gov/earthquak...archives/year/ (accessed May 1, 2013).
• “Fatality Analysis Report Systems (FARS) Encyclopedia,” National Highway Traffic and Safety Administration. http://www-fars.nhtsa.dot.gov/Main/index.aspx (accessed May 1, 2013).
• Data from www.businessweek.com (accessed May 1, 2013).
• Data from www.forbes.com (accessed May 1, 2013).
• “America’s Best Small Companies,” http://www.forbes.com/best-small-companies/list/ (accessed May 1, 2013).
• U.S. Department of Health and Human Services, Code of Federal Regulations Title 45 Public Welfare Department of Health and Human Services Part 46 Protection of Human Subjects revised January 15, 2009. Section 46.111:Criteria for IRB Approval of Research.
• “April 2013 Air Travel Consumer Report,” U.S. Department of Transportation, April 11 (2013), http://www.dot.gov/airconsumer/april...onsumer-report (accessed May 1, 2013).
• Lori Alden, “Statistics can be Misleading,” econoclass.com, http://www.econoclass.com/misleadingstats.html (accessed May 1, 2013).
• Maria de los A. Medina, “Ethics in Statistics,” Based on “Building an Ethics Module for Business, Science, and Engineering Students” by Jose A. Cruz-Cruz and William Frey, Connexions, http://cnx.org/content/m15555/latest/ (accessed May 1, 2013).
1.1 Definitions of Statistics, Probability, and Key Terms
For each of the following eight exercises, identify: a. the population, b. the sample, c. the parameter, d. the statistic, e. the variable, and f. the data. Give examples where appropriate.
1.
A fitness center is interested in the mean amount of time a client exercises in the center each week.
2.
Ski resorts are interested in the mean age that children take their first ski and snowboard lessons. They need this information to plan their ski classes optimally.
3.
A cardiologist is interested in the mean recovery period of her patients who have had heart attacks.
4.
Insurance companies are interested in the mean health costs each year of their clients, so that they can determine the costs of health insurance.
5.
A politician is interested in the proportion of voters in his district who think he is doing a good job.
6.
A marriage counselor is interested in the proportion of clients she counsels who stay married.
7.
Political pollsters may be interested in the proportion of people who will vote for a particular cause.
8.
A marketing company is interested in the proportion of people who will buy a particular product.
Use the following information to answer the next three exercises: A Lake Tahoe Community College instructor is interested in the mean number of days Lake Tahoe Community College math students are absent from class during a quarter.
9.
What is the population she is interested in?
1. all Lake Tahoe Community College students
2. all Lake Tahoe Community College English students
3. all Lake Tahoe Community College students in her classes
4. all Lake Tahoe Community College math students
10.
Consider the following:
\(X\) = number of days a Lake Tahoe Community College math student is absent
In this case, \(X\) is an example of a:
1. variable.
2. population.
3. statistic.
4. data.
11.
The instructor’s sample produces a mean number of days absent of 3.5 days. This value is an example of a:
1. parameter.
2. data.
3. statistic.
4. variable.
1.2 Data, Sampling, and Variation in Data and Sampling
For the following exercises, identify the type of data that would be used to describe a response (quantitative discrete, quantitative continuous, or qualitative), and give an example of the data.
12.
number of tickets sold to a concert
13.
percent of body fat
14.
favorite baseball team
15.
time in line to buy groceries
16.
number of students enrolled at Evergreen Valley College
17.
most-watched television show
18.
brand of toothpaste
19.
distance to the closest movie theatre
20.
age of executives in Fortune 500 companies
21.
number of competing computer spreadsheet software packages
Use the following information to answer the next two exercises: A study was done to determine the age, number of times per week, and the duration (amount of time) of resident use of a local park in San Jose. The first house in the neighborhood around the park was selected randomly and then every 8th house in the neighborhood around the park was interviewed.
22.
“Number of times per week” is what type of data?
1. qualitative (categorical)
2. quantitative discrete
3. quantitative continuous
23.
“Duration (amount of time)” is what type of data?
1. qualitative (categorical)
2. quantitative discrete
3. quantitative continuous
24.
Airline companies are interested in the consistency of the number of babies on each flight, so that they have adequate safety equipment. Suppose an airline conducts a survey. Over Thanksgiving weekend, it surveys six flights from Boston to Salt Lake City to determine the number of babies on the flights. It determines the amount of safety equipment needed by the result of that study.
1. Using complete sentences, list three things wrong with the way the survey was conducted.
2. Using complete sentences, list three ways that you would improve the survey if it were to be repeated.
25.
Suppose you want to determine the mean number of students per statistics class in your state. Describe a possible sampling method in three to five complete sentences. Make the description detailed.
26.
Suppose you want to determine the mean number of cans of soda drunk each month by students in their twenties at your school. Describe a possible sampling method in three to five complete sentences. Make the description detailed.
27.
List some practical difficulties involved in getting accurate results from a telephone survey.
28.
List some practical difficulties involved in getting accurate results from a mailed survey.
29.
With your classmates, brainstorm some ways you could overcome these problems if you needed to conduct a phone or mail survey.
30.
The instructor takes her sample by gathering data on five randomly selected students from each Lake Tahoe Community College math class. The type of sampling she used is
1. cluster sampling
2. stratified sampling
3. simple random sampling
4. convenience sampling
31.
A study was done to determine the age, number of times per week, and the duration (amount of time) of residents using a local park in San Jose. The first house in the neighborhood around the park was selected randomly and then every eighth house in the neighborhood around the park was interviewed. The sampling method was:
1. simple random
2. systematic
3. stratified
4. cluster
32.
Name the sampling method used in each of the following situations:
1. A woman in the airport is handing out questionnaires to travelers asking them to evaluate the airport’s service. She does not ask travelers who are hurrying through the airport with their hands full of luggage, but instead asks all travelers who are sitting near gates and not taking naps while they wait.
2. A teacher wants to know if her students are doing homework, so she randomly selects rows two and five and then calls on all students in row two and all students in row five to present the solutions to homework problems to the class.
3. The marketing manager for an electronics chain store wants information about the ages of its customers. Over the next two weeks, at each store location, 100 randomly selected customers are given questionnaires to fill out asking for information about age, as well as about other variables of interest.
4. The librarian at a public library wants to determine what proportion of the library users are children. The librarian has a tally sheet on which she marks whether books are checked out by an adult or a child. She records this data for every fourth patron who checks out books.
5. A political party wants to know the reaction of voters to a debate between the candidates. The day after the debate, the party’s polling staff calls 1,200 randomly selected phone numbers. If a registered voter answers the phone or is available to come to the phone, that registered voter is asked whom he or she intends to vote for and whether the debate changed his or her opinion of the candidates.
33.
A “random survey” was conducted of 3,274 people of the “microprocessor generation” (people born since 1971, the year the microprocessor was invented). It was reported that 48% of those individuals surveyed stated that if they had \$2,000 to spend, they would use it for computer equipment. Also, 66% of those surveyed considered themselves relatively savvy computer users.
1. Do you consider the sample size large enough for a study of this type? Why or why not?
2. Based on your “gut feeling,” do you believe the percents accurately reflect the U.S. population for those individuals born since 1971? If not, do you think the percents of the population are actually higher or lower than the sample statistics? Why?
Additional information: The survey, reported by Intel Corporation, was filled out by individuals who visited the Los Angeles Convention Center to see the Smithsonian Institute's road show called “America’s Smithsonian.”
3. With this additional information, do you feel that all demographic and ethnic groups were equally represented at the event? Why or why not?
4. With the additional information, comment on how accurately you think the sample statistics reflect the population parameters.
34.
The Well-Being Index is a survey that follows trends of U.S. residents on a regular basis. There are six areas of health and wellness covered in the survey: Life Evaluation, Emotional Health, Physical Health, Healthy Behavior, Work Environment, and Basic Access. Some of the questions used to measure the Index are listed below.
Identify the type of data obtained from each question used in this survey: qualitative(categorical), quantitative discrete, or quantitative continuous.
1. Do you have any health problems that prevent you from doing any of the things people your age can normally do?
2. During the past 30 days, for about how many days did poor health keep you from doing your usual activities?
3. In the last seven days, on how many days did you exercise for 30 minutes or more?
4. Do you have health insurance coverage?
35.
In advance of the 1936 Presidential Election, a magazine titled Literary Digest released the results of an opinion poll predicting that the Republican candidate Alf Landon would win by a large margin. The magazine sent postcards to approximately 10,000,000 prospective voters. These prospective voters were selected from the subscription list of the magazine, from automobile registration lists, from phone lists, and from club membership lists. Approximately 2,300,000 people returned the postcards.
1. Think about the state of the United States in 1936. Explain why a sample chosen from magazine subscription lists, automobile registration lists, phone books, and club membership lists was not representative of the population of the United States at that time.
2. What effect does the low response rate have on the reliability of the sample?
3. Are these problems examples of sampling error or nonsampling error?
4. During the same year, George Gallup conducted his own poll of 30,000 prospective voters. These researchers used a method they called "quota sampling" to obtain survey answers from specific subsets of the population. Quota sampling is an example of which sampling method described in this module?
36.
Crime-related and demographic statistics for 47 US states in 1960 were collected from government agencies, including the FBI's Uniform Crime Report. One analysis of this data found a strong connection between education and crime, indicating that higher levels of education in a community correspond to higher crime rates.
Which of the potential problems with samples discussed in Example \(4\) could explain this connection?
37.
YouPolls is a website that allows anyone to create and respond to polls. One question posted April 15 asks:
“Do you feel happy paying your taxes when members of the Obama administration are allowed to ignore their tax liabilities?” (lastbaldeagle. 2013. On Tax Day, House to Call for Firing Federal Workers Who Owe Back Taxes. Opinion poll posted online at: http://www.youpolls.com/details.aspx?id=12328 (accessed May 1, 2013).)
As of April 25, 11 people responded to this question. Each participant answered “NO!”
Which of the potential problems with samples discussed in this module could explain this connection?
38.
A scholarly article about response rates begins with the following quote:
“Declining contact and cooperation rates in random digit dial (RDD) national telephone surveys raise serious concerns about the validity of estimates drawn from such research.” (Scott Keeter et al., “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey,” Public Opinion Quarterly 70 no. 5 (2006), http://poq.oxfordjournals.org/content/70/5/759.full (accessed May 1, 2013).)
The Pew Research Center for People and the Press admits:
“The percentage of people we interview – out of all we try to interview – has been declining over the past decade or more.” (Frequently Asked Questions, Pew Research Center for the People & the Press, http://www.people-press.org/methodol...wer-your-polls (accessed May 1, 2013).)
1. What are some reasons for the decline in response rate over the past decade?
2. Explain why researchers are concerned with the impact of the declining response rate on public opinion polls.
1.3 Levels of Measurement
39.
Fifty part-time students were asked how many courses they were taking this term. The (incomplete) results are shown below:
# of courses Frequency Relative frequency Cumulative relative frequency
1 30 0.6
2 15
3
Table 1.13 Part-time Student Course Loads
1. Fill in the blanks in Table \(13\).
2. What percent of students take exactly two courses?
3. What percent of students take one or two courses?
40.
Sixty adults with gum disease were asked the number of times per week they used to floss before their diagnosis. The (incomplete) results are shown in Table \(14\).
# flossing per week Frequency Relative frequency Cumulative relative frequency
0 27 0.4500
1 18
3 0.9333
6 3 0.0500
7 1 0.0167
Table 1.14 Flossing Frequency for Adults with Gum Disease
1. Fill in the blanks in Table \(14\).
2. What percent of adults flossed six times per week?
3. What percent flossed at most three times per week?
41.
Nineteen immigrants to the U.S. were asked how many years, to the nearest year, they have lived in the U.S. The data are as follows: 2; 5; 7; 2; 2; 10; 20; 15; 0; 7; 0; 20; 5; 12; 15; 12; 4; 5; 10.
Table \(15\) was produced.
Data Frequency Relative frequency Cumulative relative frequency
0 2 2/19 0.1053
2 3 3/19 0.2632
4 1 1/19 0.3158
5 3 3/19 0.4737
7 2 2/19 0.5789
10 2 2/19 0.6842
12 2 2/19 0.7895
15 1 1/19 0.8421
20 1 1/19 1.0000
Table \(15\) Frequency of Immigrant Survey Responses
1. Fix the errors in Table \(15\). Also, explain how someone might have arrived at the incorrect number(s).
2. Explain what is wrong with this statement: “47 percent of the people surveyed have lived in the U.S. for 5 years.”
3. Fix the statement in b to make it correct.
4. What fraction of the people surveyed have lived in the U.S. five or seven years?
5. What fraction of the people surveyed have lived in the U.S. at most 12 years?
6. What fraction of the people surveyed have lived in the U.S. fewer than 12 years?
7. What fraction of the people surveyed have lived in the U.S. from five to 20 years, inclusive?
42.
How much time does it take to travel to work? Table \(16\) shows the mean commute time by state for workers at least 16 years old who are not working at home. Find the mean travel time, and round off the answer properly.
24.0 24.3 25.9 18.9 27.5 17.9 21.8 20.9 16.7 27.3
18.2 24.7 20.0 22.6 23.9 18.0 31.4 22.3 24.0 25.5
24.7 24.6 28.1 24.9 22.6 23.6 23.4 25.7 24.8 25.5
21.2 25.7 23.1 23.0 23.9 26.0 16.3 23.1 21.4 21.5
27.0 27.0 18.6 31.7 23.3 30.1 22.9 23.3 21.7 18.6
Table \(16\)
43.
Forbes magazine published data on the best small firms in 2012. These were firms which had been publicly traded for at least a year, have a stock price of at least \$5 per share, and have reported annual revenue between \$5 million and \$1 billion. Table \(17\) shows the ages of the chief executive officers for the first 60 ranked firms.
Age Frequency Relative frequency Cumulative relative frequency
40–44 3
45–49 11
50–54 13
55–59 16
60–64 10
65–69 6
70–74 1
Table \(17\)
1. What is the frequency for CEO ages between 54 and 65?
2. What percentage of CEOs are 65 years or older?
3. What is the relative frequency of ages under 50?
4. What is the cumulative relative frequency for CEOs younger than 55?
5. Which graph shows the relative frequency and which shows the cumulative relative frequency?
Use the following information to answer the next two exercises: Table $18$ contains data on hurricanes that have made direct hits on the U.S. between 1851 and 2004. A hurricane is given a strength category rating based on the minimum wind speed generated by the storm.
Category Number of direct hits Relative frequency Cumulative relative frequency
Total = 273
1 109 0.3993 0.3993
2 72 0.2637 0.6630
3 71 0.2601
4 18 0.9890
5 3 0.0110 1.0000
Table 1.18 Frequency of Hurricane Direct Hits
44.
What is the relative frequency of direct hits that were category 4 hurricanes?
1. 0.0768
2. 0.0659
3. 0.2601
4. Not enough information to calculate
45.
What is the relative frequency of direct hits that were AT MOST a category 3 storm?
1. 0.3480
2. 0.9231
3. 0.2601
4. 0.3370
1.1 Definitions of Statistics, Probability, and Key Terms
The mathematical theory of statistics is easier to learn when you know the language. This module presents important terms that will be used throughout the text.
1.2 Data, Sampling, and Variation in Data and Sampling
Data are individual items of information that come from a population or sample. Data may be classified as qualitative (categorical), quantitative continuous, or quantitative discrete.
Because it is not practical to measure the entire population in a study, researchers use samples to represent the population. A random sample is a representative group from the population chosen by using a method that gives each individual in the population an equal chance of being included in the sample. Random sampling methods include simple random sampling, stratified sampling, cluster sampling, and systematic sampling. Convenience sampling is a nonrandom method of choosing a sample that often produces biased data.
Samples that contain different individuals result in different data. This is true even when the samples are well-chosen and representative of the population. When properly selected, larger samples model the population more closely than smaller samples. There are many different potential problems that can affect the reliability of a sample. Statistical data needs to be critically analyzed, not simply accepted.
1.3 Levels of Measurement
Some calculations generate numbers that are artificially precise. It is not necessary to report a value to eight decimal places when the measures that generated that value were only accurate to the nearest tenth. Round off your final answer to one more decimal place than was present in the original data. This means that if you have data measured to the nearest tenth of a unit, report the final statistic to the nearest hundredth.
In addition to rounding your answers, you can measure your data using the following four levels of measurement.
• Nominal scale level: data that cannot be ordered nor can it be used in calculations
• Ordinal scale level: data that can be ordered; the differences cannot be measured
• Interval scale level: data with a definite ordering but no starting point; the differences can be measured, but there is no such thing as a ratio.
• Ratio scale level: data with a starting point that can be ordered; the differences have meaning and ratios can be calculated.
When organizing data, it is important to know how many times a value appears. How many statistics students study five hours or more for an exam? What percent of families on our block own two pets? Frequency, relative frequency, and cumulative relative frequency are measures that answer questions like these.
1.4 Experimental Design and Ethics
A poorly designed study will not produce reliable data. There are certain key components that must be included in every experiment. To eliminate lurking variables, subjects must be assigned randomly to different treatment groups. One of the groups must act as a control group, demonstrating what happens when the active treatment is not applied. Participants in the control group receive a placebo treatment that looks exactly like the active treatments but cannot influence the response variable. To preserve the integrity of the placebo, both researchers and subjects may be blinded. When a study is designed properly, the only difference between treatment groups is the one imposed by the researcher. Therefore, when groups respond differently to different treatments, the difference must be due to the influence of the explanatory variable.
“An ethics problem arises when you are considering an action that benefits you or some cause you support, hurts or reduces benefits to others, and violates some rule.” (Andrew Gelman, “Open Data and Open Methods,” Ethics and Statistics, http://www.stat.columbia.edu/~gelman...nceEthics1.pdf (accessed May 1, 2013).) Ethical violations in statistics are not always easy to spot. Professional associations and federal agencies post guidelines for proper conduct. It is important that you learn basic statistical procedures so that you can recognize proper data analysis.
2.
1. all children who take ski or snowboard lessons
2. a group of these children
3. the population mean age of children who take their first snowboard lesson
4. the sample mean age of children who take their first snowboard lesson
5. \(X\) = the age of one child who takes his or her first ski or snowboard lesson
6. values for \(X\), such as 3, 7, and so on
4.
1. the clients of the insurance companies
2. a group of the clients
3. the mean health costs of the clients
4. the mean health costs of the sample
5. \(X\) = the health costs of one client
6. values for \(X\), such as 34, 9, 82, and so on
6.
1. all the clients of this counselor
2. a group of clients of this marriage counselor
3. the proportion of all her clients who stay married
4. the proportion of the sample of the counselor’s clients who stay married
5. \(X\) = the number of couples who stay married
6. yes, no
8.
1. all people (maybe in a certain geographic area, such as the United States)
2. a group of the people
3. the proportion of all people who will buy the product
4. the proportion of the sample who will buy the product
5. \(X\) = the number of people who will buy it
6. buy, not buy
10.
a
12.
quantitative discrete, 150
14.
qualitative, Oakland A’s
16.
quantitative discrete, 11,234 students
18.
qualitative, Crest
20.
quantitative continuous, 47.3 years
22.
b
24.
1. The survey was conducted using six similar flights.
The survey would not be a true representation of the entire population of air travelers.
Conducting the survey on a holiday weekend will not produce representative results.
2. Conduct the survey during different times of the year.
Conduct the survey using flights to and from various locations.
Conduct the survey on different days of the week.
26.
Answers will vary. Sample Answer: You could use a systematic sampling method. Stop the tenth person as they leave one of the buildings on campus at 9:50 in the morning. Then stop the tenth person as they leave a different building on campus at 1:50 in the afternoon.
28.
Answers will vary. Sample Answer: Many people will not respond to mail surveys. If they do respond to the surveys, you can’t be sure who is responding. In addition, mailing lists can be incomplete.
30.
b
32.
1. convenience
2. cluster
3. stratified
4. systematic
5. simple random
34.
1. qualitative(categorical)
2. quantitative discrete
3. quantitative discrete
4. qualitative(categorical)
36.
Causality: The fact that two variables are related does not guarantee that one variable is influencing the other. We cannot assume that crime rate impacts education level or that education level impacts crime rate.
Confounding: There are many factors that define a community other than education level and crime rate. Communities with high crime rates and high education levels may have other lurking variables that distinguish them from communities with lower crime rates and lower education levels. Because we cannot isolate these variables of interest, we cannot draw valid conclusions about the connection between education and crime. Possible lurking variables include police expenditures, unemployment levels, region, average age, and size.
38.
1. Possible reasons: increased use of caller id, decreased use of landlines, increased use of private numbers, voice mail, privacy managers, hectic nature of personal schedules, decreased willingness to be interviewed
2. When a large number of people refuse to participate, then the sample may not have the same characteristics of the population. Perhaps the majority of people willing to participate are doing so because they feel strongly about the subject of the survey.
40.
1. # flossing per week Frequency Relative frequency Cumulative relative frequency
0 27 0.4500 0.4500
1 18 0.3000 0.7500
3 11 0.1833 0.9333
6 3 0.0500 0.9833
7 1 0.0167 1
Table 1.19
2. 5.00%
3. 93.33%
42.
The sum of the travel times is 1,173.1. Divide the sum by 50 to calculate the mean value: 23.462. Because each state’s travel time was measured to the nearest tenth, round this calculation to the nearest hundredth: 23.46.
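A quick Python check of this arithmetic, assuming the 50 travel times from the table in exercise 42 have been typed in as a list:

```python
# Mean commute time for exercise 42 (values read row by row from the table).
times = [24.0, 24.3, 25.9, 18.9, 27.5, 17.9, 21.8, 20.9, 16.7, 27.3,
         18.2, 24.7, 20.0, 22.6, 23.9, 18.0, 31.4, 22.3, 24.0, 25.5,
         24.7, 24.6, 28.1, 24.9, 22.6, 23.6, 23.4, 25.7, 24.8, 25.5,
         21.2, 25.7, 23.1, 23.0, 23.9, 26.0, 16.3, 23.1, 21.4, 21.5,
         27.0, 27.0, 18.6, 31.7, 23.3, 30.1, 22.9, 23.3, 21.7, 18.6]

mean = sum(times) / len(times)
print(round(sum(times), 1), round(mean, 2))   # 1173.1 23.46 (data have one decimal place)
```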
44.
b
Once you have collected data, what will you do with it? Data can be described and presented in many different formats. For example, suppose you are interested in buying a house in a particular area. You may have no clue about the house prices, so you might ask your real estate agent to give you a sample data set of prices. Looking at all the prices in the sample often is overwhelming. A better way might be to look at the median price and the variation of prices. The median and variation are just two ways that you will learn to describe data. Your agent might also provide you with a graph of the data.
In this chapter, you will study numerical and graphical ways to describe and display your data. This area of statistics is called "Descriptive Statistics." You will learn how to calculate, and even more importantly, how to interpret these measurements and graphs.
A statistical graph is a tool that helps you learn about the shape or distribution of a sample or a population. A graph can be a more effective way of presenting data than a mass of numbers because we can see where data clusters and where there are only a few data values. Newspapers and the Internet use graphs to show trends and to enable readers to compare facts and figures quickly. Statisticians often graph data first to get a picture of the data. Then, more formal tools may be applied.
Some of the types of graphs that are used to summarize and organize data are the dot plot, the bar graph, the histogram, the stem-and-leaf plot, the frequency polygon (a type of broken line graph), the pie chart, and the box plot. In this chapter, we will briefly look at stem-and-leaf plots, line graphs, and bar graphs, as well as frequency polygons, and time series graphs. Our emphasis will be on histograms and box plots.
2.01: Display Data
Stem-and-Leaf Graphs (Stemplots), Line Graphs, and Bar Graphs
One simple graph, the stem-and-leaf graph or stemplot, comes from the field of exploratory data analysis. It is a good choice when the data sets are small. To create the plot, divide each observation of data into a stem and a leaf. The leaf consists of a final significant digit. For example, 23 has stem two and leaf three. The number 432 has stem 43 and leaf two. Likewise, the number 5,432 has stem 543 and leaf two. The decimal 9.3 has stem nine and leaf three. Write the stems in a vertical line from smallest to largest. Draw a vertical line to the right of the stems. Then write the leaves in increasing order next to their corresponding stem.
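These steps translate directly into a short program. The sketch below is only an illustration in plain Python (no libraries needed); it groups each whole-number observation's final digit under its stem, using the exam scores from Example 2.1 that follows.

```python
# Build a simple stem-and-leaf display for whole-number data:
# the stem is the value with its final digit removed; the leaf is that final digit.
scores = [33, 42, 49, 49, 53, 55, 55, 61, 63, 67, 68, 68, 69, 69,
          72, 73, 74, 78, 80, 83, 88, 88, 88, 90, 92, 94, 94, 94, 94, 96, 100]

stems = {}
for value in sorted(scores):              # sorting keeps the leaves in increasing order
    stem, leaf = divmod(value, 10)        # e.g., 67 -> stem 6, leaf 7
    stems.setdefault(stem, []).append(leaf)

for stem in sorted(stems):
    print(stem, "|", " ".join(str(leaf) for leaf in stems[stem]))
```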
Example $2$.1
For Susan Dean's spring pre-calculus class, scores for the first exam were as follows (smallest to largest):
33; 42; 49; 49; 53; 55; 55; 61; 63; 67; 68; 68; 69; 69; 72; 73; 74; 78; 80; 83; 88; 88; 88; 90; 92; 94; 94; 94; 94; 96; 100
Stem Leaf
3 3
4 2 9 9
5 3 5 5
6 1 3 7 8 8 9 9
7 2 3 4 8
8 0 3 8 8 8
9 0 2 4 4 4 4 6
10 0
Table $2$.1 Stem-and-Leaf Graph
The stemplot shows that most scores fell in the 60s, 70s, 80s, and 90s. Eight out of the 31 scores, or approximately 26% (8/31), were in the 90s or 100, a fairly high number of As.
Exercise $2$.1
For the Park City basketball team, scores for the last 30 games were as follows (smallest to largest):
32; 32; 33; 34; 38; 40; 42; 42; 43; 44; 46; 47; 47; 48; 48; 48; 49; 50; 50; 51; 52; 52; 52; 53; 54; 56; 57; 57; 60; 61
Construct a stem plot for the data.
The stemplot is a quick way to graph data and gives an exact picture of the data. You want to look for an overall pattern and any outliers. An outlier is an observation of data that does not fit the rest of the data. It is sometimes called an extreme value. When you graph an outlier, it will appear not to fit the pattern of the graph. Some outliers are due to mistakes (for example, writing down 50 instead of 500) while others may indicate that something unusual is happening. It takes some background information to explain outliers, so we will cover them in more detail later.
Example $2$.2
The data are the distances (in kilometers) from a home to local supermarkets. Create a stemplot using the data:
1.1; 1.5; 2.3; 2.5; 2.7; 3.2; 3.3; 3.3; 3.5; 3.8; 4.0; 4.2; 4.5; 4.5; 4.7; 4.8; 5.5; 5.6; 6.5; 6.7; 12.3
Do the data seem to have any concentration of values?
NOTE
The leaves are to the right of the decimal.
Answer
The value 12.3 may be an outlier. Values appear to concentrate at three and four kilometers.
Stem Leaf
1 1 5
2 3 5 7
3 2 3 3 5 8
4 0 2 5 5 7 8
5 5 6
6 5 7
7
8
9
10
11
12 3
Table $2$.2
Exercise $2$.2
The following data show the distances (in miles) from the homes of off-campus statistics students to the college. Create a stem plot using the data and identify any outliers:
0.5; 0.7; 1.1; 1.2; 1.2; 1.3; 1.3; 1.5; 1.5; 1.7; 1.7; 1.8; 1.9; 2.0; 2.2; 2.5; 2.6; 2.8; 2.8; 2.8; 3.5; 3.8; 4.4; 4.8; 4.9; 5.2; 5.5; 5.7; 5.8; 8.0
Example $2$.3
A side-by-side stem-and-leaf plot allows a comparison of the two data sets in two columns. In a side-by-side stem-and-leaf plot, two sets of leaves share the same stem. The leaves are to the left and the right of the stems. Table $2$.4 and Table $2$.5 show the ages of presidents at their inauguration and at their death. Construct a side-by-side stem-and-leaf plot using this data.
Answer
Ages at Inauguration | Stem | Ages at Death
9 9 8 7 7 7 6 3 2 | 4 | 6 9
8 7 7 7 7 6 6 6 5 5 5 5 4 4 4 4 4 2 2 1 1 1 1 1 0 | 5 | 3 6 6 7 7 8
9 8 5 4 4 2 1 1 1 0 | 6 | 0 0 3 3 4 4 5 6 7 7 7 8
| 7 | 0 0 1 1 1 4 7 8 8 9
| 8 | 0 1 3 5 8
| 9 | 0 0 3 3
Table $2$.3
President Age President Age President Age
Washington 57 Lincoln 52 Hoover 54
J. Adams 61 A. Johnson 56 F. Roosevelt 51
Jefferson 57 Grant 46 Truman 60
Madison 57 Hayes 54 Eisenhower 62
Monroe 58 Garfield 49 Kennedy 43
J. Q. Adams 57 Arthur 51 L. Johnson 55
Jackson 61 Cleveland 47 Nixon 56
Van Buren 54 B. Harrison 55 Ford 61
W. H. Harrison 68 Cleveland 55 Carter 52
Tyler 51 McKinley 54 Reagan 69
Polk 49 T. Roosevelt 42 G.H.W. Bush 64
Taylor 64 Taft 51 Clinton 47
Fillmore 50 Wilson 56 G. W. Bush 54
Pierce 48 Harding 55 Obama 47
Buchanan 65 Coolidge 51 Trump 70
Table $2$.4 Presidential Ages at Inauguration
President Age President Age President Age
Washington 67 Lincoln 56 Hoover 90
J. Adams 90 A. Johnson 66 F. Roosevelt 63
Jefferson 83 Grant 63 Truman 88
Madison 85 Hayes 70 Eisenhower 78
Monroe 73 Garfield 49 Kennedy 46
J. Q. Adams 80 Arthur 56 L. Johnson 64
Jackson 78 Cleveland 71 Nixon 81
Van Buren 79 B. Harrison 67 Ford 93
W. H. Harrison 68 Cleveland 71 Reagan 93
Tyler 71 McKinley 58
Polk 53 T. Roosevelt 60
Taylor 65 Taft 72
Fillmore 74 Wilson 67
Pierce 64 Harding 57
Buchanan 77 Coolidge 60
Table $2$.5 Presidential Age at Death
Another type of graph that is useful for specific data values is a line graph. In the particular line graph shown in Example $2$.4, the x-axis (horizontal axis) consists of data values and the y-axis (vertical axis) consists of frequency points. The frequency points are connected using line segments.
Example $2$.4
In a survey, 40 mothers were asked how many times per week a teenager must be reminded to do his or her chores. The results are shown in Table $2$.6 and in Figure $2$.2.
Number of times teenager is reminded Frequency
0 2
1 5
2 8
3 14
4 7
5 4
Table $2$.6
Figure 2.2
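A minimal sketch of how this line graph could be drawn in Python, assuming matplotlib is available (the axis labels are illustrative only):

```python
import matplotlib.pyplot as plt

# Line graph for Example 2.4: data values on the x-axis, frequencies on the y-axis.
times_reminded = [0, 1, 2, 3, 4, 5]
frequency = [2, 5, 8, 14, 7, 4]

plt.plot(times_reminded, frequency, marker="o")   # frequency points joined by line segments
plt.xlabel("Number of times teenager is reminded")
plt.ylabel("Frequency")
plt.show()
```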
Exercise $4$
In a survey, 40 people were asked how many times per year they had their car in the shop for repairs. The results are shown in Table $7$. Construct a line graph.
Number of times in shop Frequency
0 7
1 10
2 14
3 9
Table $2$.7
Bar graphs consist of bars that are separated from each other. The bars can be rectangles or they can be rectangular boxes (used in three-dimensional plots), and they can be vertical or horizontal. The bar graph shown in Example $5$ has age groups represented on the x-axis and proportions on the y-axis.
Example $2$.5
By the end of 2011, Facebook had over 146 million users in the United States. Table $2$.8 shows three age groups, the number of users in each age group, and the proportion (%) of users in each age group. Construct a bar graph using this data.
Age groups Number of Facebook users Proportion (%) of Facebook users
13–25 65,082,280 45%
26–44 53,300,200 36%
45–64 27,885,100 19%
Table $2$.8
Solution
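The solution in the original text is a bar graph figure. A minimal matplotlib sketch (assuming a recent matplotlib that accepts category labels directly) of how such a graph could be drawn from Table $2$.8:

```python
import matplotlib.pyplot as plt

# Bar graph for Example 2.5: age groups on the x-axis, proportion of users on the y-axis.
age_groups = ["13–25", "26–44", "45–64"]
proportions = [45, 36, 19]                  # percent of Facebook users in each group

plt.bar(age_groups, proportions)
plt.xlabel("Age groups")
plt.ylabel("Proportion (%) of Facebook users")
plt.show()
```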
Exercise $2$.5
The population in Park City is made up of children, working-age adults, and retirees. Table $9$ shows the three age groups, the number of people in the town from each age group, and the proportion (%) of people in each age group. Construct a bar graph showing the proportions.
Age groups Number of people Proportion of population
Children 67,059 19%
Working-age adults 152,198 43%
Retirees 131,662 38%
Table $2$.9
Example $2$.6
The columns in Table $2$.10 contain: the race or ethnicity of students in U.S. Public Schools for the class of 2011, percentages for the Advanced Placement examinee population for that class, and percentages for the overall student population. Create a bar graph with the student race or ethnicity (qualitative data) on the x-axis, and the Advanced Placement examinee population percentages on the y-axis.
Race/ethnicity AP examinee population Overall student population
1 = Asian, Asian American or Pacific Islander 10.3% 5.7%
2 = Black or African American 9.0% 14.7%
3 = Hispanic or Latino 17.0% 17.6%
4 = American Indian or Alaska Native 0.6% 1.1%
5 = White 57.1% 59.2%
6 = Not reported/other 6.0% 1.7%
Table $2$.10
Answer
Solution 2.6
Exercise $2$.6
Park City is broken down into six voting districts. The table shows the percent of the total registered voter population that lives in each district as well as the percent total of the entire population that lives in each district. Construct a bar graph that shows the registered voter population by district.
District Registered voter population Overall city population
1 15.5% 19.4%
2 12.2% 15.6%
3 9.8% 9.0%
4 17.4% 18.5%
5 22.8% 20.7%
6 22.3% 16.8%
Table $2$.11
Example $2$.7
Below is a two-way table showing the types of pets owned by men and women:
Dogs Cats Fish Total
Men 4 2 2 8
Women 4 6 2 12
Total 8 8 4 20
Table $2$.12
Given these data, calculate the conditional distributions for the subpopulation of men who own each pet type.
Answer
• Men who own dogs = 4/8 = 0.5
• Men who own cats = 2/8 = 0.25
• Men who own fish = 2/8 = 0.25
Note: The sum of all of the conditional distributions must equal one. In this case, 0.5 + 0.25 + 0.25 = 1; therefore, the solution "checks".
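The same calculation in a short Python sketch (the dictionary is just one convenient way to hold the men's row of the two-way table):

```python
# Conditional distribution of pet type among men, from the two-way table above.
men = {"Dogs": 4, "Cats": 2, "Fish": 2}
row_total = sum(men.values())                               # 8 men in the sample

conditional = {pet: count / row_total for pet, count in men.items()}
print(conditional)                # {'Dogs': 0.5, 'Cats': 0.25, 'Fish': 0.25}
print(sum(conditional.values()))  # 1.0, so the distribution "checks"
```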
Histograms, Frequency Polygons, and Time Series Graphs
For most of the work you do in this book, you will use a histogram to display the data. One advantage of a histogram is that it can readily display large data sets. A rule of thumb is to use a histogram when the data set consists of 100 values or more.
A histogram consists of contiguous (adjoining) boxes. It has both a horizontal axis and a vertical axis. The horizontal axis is labeled with what the data represents (for instance, distance from your home to school). The vertical axis is labeled either frequency or relative frequency (or percent frequency or probability). The graph will have the same shape with either label. The histogram (like the stemplot) can give you the shape of the data, the center, and the spread of the data.
The relative frequency is equal to the frequency for an observed value of the data divided by the total number of data values in the sample.(Remember, frequency is defined as the number of times an answer occurs.) If:
• $f$ = frequency
• $n$ = total number of data values (or the sum of the individual frequencies), and
• $RF$ = relative frequency,
then:
$RF=\frac{f}{n}\nonumber$

For example, if three students in Mr. Ahab's English class of 40 students received from 90% to 100%, then, $f = 3$, $n = 40$, and $RF = \frac{f}{n} = \frac{3}{40} = 0.075$. 7.5% of the students received 90–100%. 90–100% are quantitative measures.

To construct a histogram, first decide how many bars or intervals, also called classes, represent the data. Many histograms consist of five to 15 bars or classes for clarity. The number of bars needs to be chosen. Choose a starting point for the first interval to be less than the smallest data value. A convenient starting point is a lower value carried out to one more decimal place than the value with the most decimal places. For example, if the value with the most decimal places is 6.1 and this is the smallest value, a convenient starting point is 6.05 (6.1 – 0.05 = 6.05). We say that 6.05 has more precision. If the value with the most decimal places is 2.23 and the lowest value is 1.5, a convenient starting point is 1.495 (1.5 – 0.005 = 1.495). If the value with the most decimal places is 3.234 and the lowest value is 1.0, a convenient starting point is 0.9995 (1.0 – 0.0005 = 0.9995). If all the data happen to be integers and the smallest value is two, then a convenient starting point is 1.5 (2 – 0.5 = 1.5). Also, when the starting point and other boundaries are carried to one additional decimal place, no data value will fall on a boundary. The next two examples go into detail about how to construct a histogram using continuous data and how to create a histogram using discrete data.

Example $2$.8

The following data are the heights (in inches to the nearest half inch) of 100 male semiprofessional soccer players. The heights are continuous data, since height is measured.

60; 60.5; 61; 61; 61.5
63.5; 63.5; 63.5
64; 64; 64; 64; 64; 64; 64; 64.5; 64.5; 64.5; 64.5; 64.5; 64.5; 64.5; 64.5
66; 66; 66; 66; 66; 66; 66; 66; 66; 66; 66.5; 66.5; 66.5; 66.5; 66.5; 66.5; 66.5; 66.5; 66.5; 66.5; 66.5; 67; 67; 67; 67; 67; 67; 67; 67; 67; 67; 67; 67; 67.5; 67.5; 67.5; 67.5; 67.5; 67.5; 67.5
68; 68; 69; 69; 69; 69; 69; 69; 69; 69; 69; 69; 69.5; 69.5; 69.5; 69.5; 69.5
70; 70; 70; 70; 70; 70; 70.5; 70.5; 70.5; 71; 71; 71
72; 72; 72; 72.5; 72.5; 73; 73.5
74

The smallest data value is 60. Since the data with the most decimal places has one decimal (for instance, 61.5), we want our starting point to have two decimal places. Since the numbers 0.5, 0.05, 0.005, etc. are convenient numbers, use 0.05 and subtract it from 60, the smallest value, for the convenient starting point.

60 – 0.05 = 59.95 which is more precise than, say, 61.5 by one decimal place. The starting point is, then, 59.95. The largest value is 74, so 74 + 0.05 = 74.05 is the ending value.

Next, calculate the width of each bar or class interval. To calculate this width, subtract the starting point from the ending value and divide by the number of bars (you must choose the number of bars you desire). Suppose you choose eight bars.

$\frac{74.05−59.95}{8}=1.76\nonumber$
NOTE
We will round up to two and make each bar or class interval two units wide. Rounding up to two is one way to prevent a value from falling on a boundary. Rounding to the next number is often necessary even if it goes against the standard rules of rounding. For this example, using 1.76 as the width would also work. A guideline that is followed by some for the width of a bar or class interval is to take the square root of the number of data values and then round to the nearest whole number, if necessary. For example, if there are 150 values of data, take the square root of 150 and round to 12 bars or intervals.
The boundaries are:
• 59.95
• 59.95 + 2 = 61.95
• 61.95 + 2 = 63.95
• 63.95 + 2 = 65.95
• 65.95 + 2 = 67.95
• 67.95 + 2 = 69.95
• 69.95 + 2 = 71.95
• 71.95 + 2 = 73.95
• 73.95 + 2 = 75.95
The heights 60 through 61.5 inches are in the interval 59.95–61.95. The heights that are 63.5 are in the interval 61.95–63.95. The heights that are 64 through 64.5 are in the interval 63.95–65.95. The heights 66 through 67.5 are in the interval 65.95–67.95. The heights 68 through 69.5 are in the interval 67.95–69.95. The heights 70 through 71 are in the interval 69.95–71.95. The heights 72 through 73.5 are in the interval 71.95–73.95. The height 74 is in the interval 73.95–75.95.
The following histogram displays the heights on the x-axis and relative frequency on the y-axis.
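The boundary arithmetic above is easy to verify with a few lines of Python; the sketch below only reproduces the calculation from this example:

```python
# Class boundaries for Example 2.8: start just below the smallest height and
# step by the chosen bar width until the largest height (74) is covered.
start, bars, width = 59.95, 8, 2    # (74.05 - 59.95) / 8 = 1.7625, rounded up to 2

boundaries = [round(start + width * i, 2) for i in range(bars + 1)]
print(boundaries)   # [59.95, 61.95, 63.95, 65.95, 67.95, 69.95, 71.95, 73.95, 75.95]
```

With the 100 heights stored in a list named heights, a matplotlib call such as plt.hist(heights, bins=boundaries, weights=[1/len(heights)] * len(heights)) would then draw the relative-frequency histogram described here.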
Exercise $2$.8
The following data are the shoe sizes of 50 male students. The sizes are continuous data since shoe size is measured. Construct a histogram and calculate the width of each bar or class interval. Suppose you choose six bars.
9; 9; 9.5; 9.5; 10; 10; 10; 10; 10; 10; 10.5; 10.5; 10.5; 10.5; 10.5; 10.5; 10.5; 10.5
11; 11; 11; 11; 11; 11; 11; 11; 11; 11; 11; 11; 11; 11.5; 11.5; 11.5; 11.5; 11.5; 11.5; 11.5
12; 12; 12; 12; 12; 12; 12; 12.5; 12.5; 12.5; 12.5; 14
Example $2$.9
Create a histogram for the following data: the number of books bought by 50 part-time college students at ABC College. The number of books is discrete data, since books are counted.
1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1
2; 2; 2; 2; 2; 2; 2; 2; 2; 2
3; 3; 3; 3; 3; 3; 3; 3; 3; 3; 3; 3; 3; 3; 3; 3
4; 4; 4; 4; 4; 4
5; 5; 5; 5; 5
6; 6
Eleven students buy one book. Ten students buy two books. Sixteen students buy three books. Six students buy four books. Five students buy five books. Two students buy six books.
Because the data are integers, subtract 0.5 from 1, the smallest data value and add 0.5 to 6, the largest data value. Then the starting point is 0.5 and the ending value is 6.5.
Next, calculate the width of each bar or class interval. If the data are discrete and there are not too many different values, a width that places the data values in the middle of the bar or class interval is the most convenient. Since the data consist of the numbers 1, 2, 3, 4, 5, 6, and the starting point is 0.5, a width of one places the 1 in the middle of the interval from 0.5 to 1.5, the 2 in the middle of the interval from 1.5 to 2.5, the 3 in the middle of the interval from 2.5 to 3.5, the 4 in the middle of the interval from _______ to _______, the 5 in the middle of the interval from _______ to _______, and the _______ in the middle of the interval from _______ to _______ .
Solution
Calculate the number of bars as follows:
$\frac{6.5−0.5}{\text{number of bars}}=1\nonumber$
where 1 is the width of a bar. Therefore, bars = 6.
The following histogram displays the number of books on the x-axis and the frequency on the y-axis.
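A sketch of this histogram in Python, assuming matplotlib is available; the list of book counts is rebuilt from the frequencies given above:

```python
import matplotlib.pyplot as plt

# Histogram for Example 2.9: each integer value sits in the middle of its class interval.
counts = {1: 11, 2: 10, 3: 16, 4: 6, 5: 5, 6: 2}      # books bought -> number of students
books = [value for value, n in counts.items() for _ in range(n)]

boundaries = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5]
plt.hist(books, bins=boundaries, edgecolor="black")
plt.xlabel("Number of books")
plt.ylabel("Frequency")
plt.show()
```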
Example $2$.10
Using this data set, construct a histogram.
Number of hours my classmates spent playing video games on weekends
9.95 10 2.25 16.75 0
19.5 22.5 7.5 15 12.75
5.5 11 10 20.75 17.5
23 21.9 24 23.75 18
20 15 22.9 18.8 20.5
Table $2$.13
Answer
Solution 2.10
Some values in this data set fall on boundaries for the class intervals. A value is counted in a class interval if it falls on the left boundary, but not if it falls on the right boundary. Different researchers may set up histograms for the same data in different ways. There is more than one correct way to set up a histogram.
Frequency Polygons
Frequency polygons are analogous to line graphs, and just as line graphs make continuous data visually easy to interpret, so too do frequency polygons.
To construct a frequency polygon, first examine the data and decide on the number of intervals, or class intervals, to use on the x-axis and y-axis. After choosing the appropriate ranges, begin plotting the data points. After all the points are plotted, draw line segments to connect them.
Example $2$.11
A frequency polygon was constructed from the frequency table below.
Lower bound Upper bound Frequency Cumulative frequency
49.5 59.5 5 5
59.5 69.5 10 15
69.5 79.5 30 45
79.5 89.5 40 85
89.5 99.5 15 100
Table $2$.14: Frequency distribution for calculus final test scores
The first label on the x-axis is 44.5. This represents an interval extending from 39.5 to 49.5. Since the lowest test score is 54.5, this interval is used only to allow the graph to touch the x-axis. The point labeled 54.5 represents the next interval, or the first “real” interval from the table, and contains five scores. This reasoning is followed for each of the remaining intervals with the point 104.5 representing the interval from 99.5 to 109.5. Again, this interval contains no data and is only used so that the graph will touch the x-axis. Looking at the graph, we say that this distribution is skewed because one side of the graph does not mirror the other side.
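A sketch of this frequency polygon in Python (assuming matplotlib), using the interval midpoints and frequencies from Table $2$.14, padded with the two empty end intervals so the polygon touches the x-axis:

```python
import matplotlib.pyplot as plt

# Frequency polygon for the calculus final test scores in Example 2.11.
midpoints = [44.5, 54.5, 64.5, 74.5, 84.5, 94.5, 104.5]
frequencies = [0, 5, 10, 30, 40, 15, 0]     # empty intervals at both ends

plt.plot(midpoints, frequencies, marker="o")
plt.xlabel("Test score (interval midpoint)")
plt.ylabel("Frequency")
plt.show()
```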
Exercise $2$.11
Construct a frequency polygon of U.S. Presidents’ ages at inauguration shown in Table $15$.
Age at inauguration Frequency
41.5–46.5 4
46.5–51.5 11
51.5–56.5 14
56.5–61.5 9
61.5–66.5 4
66.5–71.5 2
Table $2$.15
Frequency polygons are useful for comparing distributions. This is achieved by overlaying the frequency polygons drawn for different data sets.
Example $2$.12
We will construct an overlay frequency polygon comparing the scores from Example $11$ with the students’ final numeric grade.
Lower bound Upper bound Frequency Cumulative frequency
49.5 59.5 5 5
59.5 69.5 10 15
69.5 79.5 30 45
79.5 89.5 40 85
89.5 99.5 15 100
Table $2$.16: Frequency distribution for calculus final test scores
Lower bound Upper bound Frequency Cumulative frequency
49.5 59.5 10 10
59.5 69.5 10 20
69.5 79.5 30 50
79.5 89.5 45 95
89.5 99.5 5 100
Table $2$.17: Frequency distribution for calculus final grades
Constructing a Time Series Graph
Suppose that we want to study the temperature range of a region for an entire month. Every day at noon we note the temperature and write this down in a log. A variety of statistical studies could be done with these data. We could find the mean or the median temperature for the month. We could construct a histogram displaying the number of days that temperatures reach a certain range of values. However, all of these methods ignore a portion of the data that we have collected.
One feature of the data that we may want to consider is that of time. Since each date is paired with the temperature reading for the day, we don‘t have to think of the data as being random. We can instead use the times given to impose a chronological order on the data. A graph that recognizes this ordering and displays the changing temperature as the month progresses is called a time series graph.
To construct a time series graph, we must look at both pieces of our paired data set. We start with a standard Cartesian coordinate system. The horizontal axis is used to plot the date or time increments, and the vertical axis is used to plot the values of the variable that we are measuring. By doing this, we make each point on the graph correspond to a date and a measured quantity. The points on the graph are typically connected by straight lines in the order in which they occur.
Example $2$.13
The following data shows the Annual Consumer Price Index, each month, for ten years. Construct a time series graph for the Annual Consumer Price Index data only.
Year Jan Feb Mar Apr May Jun Jul
2003 181.7 183.1 184.2 183.8 183.5 183.7 183.9
2004 185.2 186.2 187.4 188.0 189.1 189.7 189.4
2005 190.7 191.8 193.3 194.6 194.4 194.5 195.4
2006 198.3 198.7 199.8 201.5 202.5 202.9 203.5
2007 202.416 203.499 205.352 206.686 207.949 208.352 208.299
2008 211.080 211.693 213.528 214.823 216.632 218.815 219.964
2009 211.143 212.193 212.709 213.240 213.856 215.693 215.351
2010 216.687 216.741 217.631 218.009 218.178 217.965 218.011
2011 220.223 221.309 223.467 224.906 225.964 225.722 225.922
2012 226.665 227.663 229.392 230.085 229.815 229.478 229.104
Table $2$.18
Year Aug Sep Oct Nov Dec Annual
2003 184.6 185.2 185.0 184.5 184.3 184.0
2004 189.5 189.9 190.9 191.0 190.3 188.9
2005 196.4 198.8 199.2 197.6 196.8 195.3
2006 203.9 202.9 201.8 201.5 201.8 201.6
2007 207.917 208.490 208.936 210.177 210.036 207.342
2008 219.086 218.783 216.573 212.425 210.228 215.303
2009 215.834 215.969 216.177 216.330 215.949 214.537
2010 218.312 218.439 218.711 218.803 219.179 218.056
2011 226.545 226.889 226.421 226.230 225.672 224.939
2012 230.379 231.407 231.317 230.221 229.601 229.594
Table $2$.19
Answer
Solution 2.13
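The time series graph itself is a figure in the original text. A minimal matplotlib sketch that plots the Annual column of the table against the year:

```python
import matplotlib.pyplot as plt

# Time series graph of the Annual Consumer Price Index, 2003-2012.
years = list(range(2003, 2013))
annual_cpi = [184.0, 188.9, 195.3, 201.6, 207.342,
              215.303, 214.537, 218.056, 224.939, 229.594]

plt.plot(years, annual_cpi, marker="o")   # points connected in chronological order
plt.xlabel("Year")
plt.ylabel("Annual Consumer Price Index")
plt.show()
```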
Exercise $2$.13
The following table is a portion of a data set from www.worldbank.org. Use the table to construct a time series graph for CO2 emissions for the United States.
Year Ukraine United Kingdom United States
2003 352,259 540,640 5,681,664
2004 343,121 540,409 5,790,761
2005 339,029 541,990 5,826,394
2006 327,797 542,045 5,737,615
2007 328,357 528,631 5,828,697
2008 323,657 522,247 5,656,839
2009 272,176 474,579 5,299,563
Table $2$.20: CO2 emissions
Uses of a Time Series Graph
Time series graphs are important tools in various applications of statistics. When recording values of the same variable over an extended period of time, sometimes it is difficult to discern any trend or pattern. However, once the same data points are displayed graphically, some features jump out. Time series graphs make trends easy to spot.
How NOT to Lie with Statistics
It is important to remember that the very reason we develop a variety of methods to present data is to develop insights into the subject of what the observations represent. We want to get a "sense" of the data. Are the observations all very much alike or are they spread across a wide range of values, are they bunched at one end of the spectrum or are they distributed evenly and so on. We are trying to get a visual picture of the numerical data. Shortly we will develop formal mathematical measures of the data, but our visual graphical presentation can say much. It can, unfortunately, also say much that is distracting, confusing and simply wrong in terms of the impression the visual leaves. Many years ago Darrell Huff wrote the book How to Lie with Statistics. It has been through 25 plus printings and sold more than one and one-half million copies. His perspective was a harsh one and used many actual examples that were designed to mislead. He wanted to make people aware of such deception, but perhaps more importantly to educate so that others do not make the same errors inadvertently.
Again, the goal is to enlighten with visuals that tell the story of the data. Pie charts have a number of common problems when used to convey the message of the data. Too many pieces of the pie overwhelm the reader; no more than perhaps five or six categories ought to be used to give an idea of the relative importance of each piece. This, after all, is the goal of a pie chart: showing which subset matters most relative to the others. If there are more components than this, then perhaps an alternative approach would be better, or perhaps some can be consolidated into an "other" category. Pie charts cannot show changes over time, although we see this attempted all too often. In federal, state, and city finance documents, pie charts are often presented to show the components of revenue available to the governing body for appropriation: income tax, sales tax, motor vehicle taxes, and so on. In and of itself this is interesting information and can be nicely done with a pie chart. The error occurs when two years are set side-by-side. Because the total revenues change year to year, but the size of the pie is fixed, no real information is provided and the relative size of each piece of the pie cannot be meaningfully compared.
Histograms can be very helpful in understanding the data. Properly presented, they can be a quick visual way to present probabilities of different categories by the simple visual of comparing relative areas in each category. Here the error, purposeful or not, is to vary the width of the categories. This of course makes comparison to the other categories impossible. It does embellish the importance of the category with the expanded width because it has a greater area, inappropriately, and thus visually "says" that that category has a higher probability of occurrence.
Time series graphs perhaps are the most abused. A plot of some variable across time should never be presented on axes that change part way across the page either in the vertical or horizontal dimension. Perhaps the time frame is changed from years to months. Perhaps this is to save space or because monthly data was not available for early years. In either case this confounds the presentation and destroys any value of the graph. If this is not done to purposefully confuse the reader, then it certainly is either lazy or sloppy work.
Changing the units of measurement of the axis can smooth out a drop or accentuate one. If you want to show large changes, then measure the variable in small units, pennies rather than thousands of dollars. And of course, to continue the fraud, be sure that the axis does not begin at (0, 0). If it begins at (0, 0), then it becomes apparent that the axis has been manipulated.
Perhaps you have a client that is concerned with the volatility of the portfolio you manage. An easy way to present the data is to use long time periods on the time series graph. Use months or better, quarters rather than daily or weekly data. If that doesn't get the volatility down then spread the time axis relative to the rate of return or portfolio valuation axis. If you want to show "quick" dramatic growth, then shrink the time axis. Any positive growth will show visually "high" growth rates. Do note that if the growth is negative then this trick will show the portfolio is collapsing at a dramatic rate.
Again, the goal of descriptive statistics is to convey meaningful visuals that tell the story of the data. Purposeful manipulation is fraud and unethical at the worst, but even at its best, making these types of errors will lead to confusion on the part of the analyst.
2.02: Measures of the Location of the Data
The common measures of location are quartiles and percentiles.
Quartiles are special percentiles. The first quartile, $Q_1$, is the same as the $25^{th}$ percentile, and the third quartile, $Q_3$, is the same as the $75^{th}$ percentile. The median, M, is called both the second quartile and the 50th percentile.
To calculate quartiles and percentiles, the data must be ordered from smallest to largest. Quartiles divide ordered data into quarters. Percentiles divide ordered data into hundredths. To score in the $90^{th}$ percentile of an exam does not mean, necessarily, that you received 90% on a test. It means that 90% of test scores are the same or less than your score and 10% of the test scores are the same or greater than your test score.
Percentiles are useful for comparing values. For this reason, universities and colleges use percentiles extensively. One instance in which colleges and universities use percentiles is when SAT results are used to determine a minimum testing score that will be used as an acceptance factor. For example, suppose Duke accepts SAT scores at or above the $75^{th}$ percentile. That translates into a score of at least 1220.
Percentiles are mostly used with very large populations. Therefore, if you were to say that 90% of the test scores are less (and not the same or less) than your score, it would be acceptable because removing one particular data value is not significant.
The median is a number that measures the "center" of the data. You can think of the median as the "middle value," but it does not actually have to be one of the observed values. It is a number that separates ordered data into halves. Half the values are the same number or smaller than the median, and half the values are the same number or larger. For example, consider the following data.
$1; 11.5; 6; 7.2; 4; 8; 9; 10; 6.8; 8.3; 2; 2; 10; 1$
Ordered from smallest to largest:
$1; 1; 2; 2; 4; 6; 6.8; 7.2; 8; 8.3; 9; 10; 10; 11.5$
Since there are 14 observations, the median is between the seventh value, 6.8, and the eighth value, 7.2. To find the median, add the two values together and divide by two.
$\frac{6.8+7.2}{2}=7\nonumber$
The median is seven. Half of the values are smaller than seven and half of the values are larger than seven.
Quartiles are numbers that separate the data into quarters. Quartiles may or may not be part of the data. To find the quartiles, first find the median or second quartile. The first quartile, $Q_1$, is the middle value of the lower half of the data, and the third quartile, $Q_3$, is the middle value, or median, of the upper half of the data. To get the idea, consider the same data set:
1; 1; 2; 2; 4; 6; 6.8; 7.2; 8; 8.3; 9; 10; 10; 11.5
The median or second quartile is seven. The lower half of the data are 1, 1, 2, 2, 4, 6, 6.8. The middle value of the lower half is two.
1; 1; 2; 2; 4; 6; 6.8
The number two, which is part of the data, is the first quartile. One-fourth of the entire set of values is the same as or less than two, and three-fourths of the values are more than two.
The upper half of the data is 7.2, 8, 8.3, 9, 10, 10, 11.5. The middle value of the upper half is nine.
The third quartile, $Q_3$, is nine. Three-fourths (75%) of the ordered data set are less than nine. One-fourth (25%) of the ordered data set are greater than nine. The third quartile is part of the data set in this example.
The interquartile range is a number that indicates the spread of the middle half or the middle 50% of the data. It is the difference between the third quartile ($Q_3$) and the first quartile ($Q_1$).
$IQR = Q_3 – Q_1$
The $IQR$ can help to determine potential outliers. A value is suspected to be a potential outlier if it is less than $\bf{(1.5)(IQR)}$ below the first quartile or more than $\bf{(1.5)(IQR)}$ above the third quartile. Potential outliers always require further investigation.
potential outlier
A potential outlier is a data point that is significantly different from the other data points. These special data points may be errors or some kind of abnormality or they may be a key to understanding the data.
Example $14$
For the following 13 real estate prices, calculate the $IQR$ and determine if any prices are potential outliers. Prices are in dollars.
$389,950; 230,500; 158,000; 479,000; 639,000; 114,950; 5,500,000; 387,000; 659,000; 529,000; 575,000; 488,800; 1,095,000$
Answer
Solution 2.14
Order the data from smallest to largest.
$114,950; 158,000; 230,500; 387,000; 389,950; 479,000; 488,800; 529,000; 575,000; 639,000; 659,000; 1,095,000; 5,500,000$
$M = 488,800$
$Q_{1}=\frac{230,500+387,000}{2}=308,750$
$Q_{3}=\frac{639,000+659,000}{2}=649,000$
$IQR = 649,000 – 308,750 = 340,250$
$(1.5)(IQR) = (1.5)(340,250) = 510,375$
$Q_1 – (1.5)(IQR) = 308,750 – 510,375 = –201,625$
$Q_3 + (1.5)(IQR) = 649,000 + 510,375 = 1,159,375$
No house price is less than $–201,625$. However, $5,500,000$ is more than $1,159,375$. Therefore, $5,500,000$ is a potential outlier.
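For readers who want to check this calculation by computer, the short Python sketch below (not part of the original example; the function and variable names are our own) applies the median-of-halves quartile rule used in this section and flags the same potential outlier.

```python
# A sketch (not the textbook's own code) of the IQR and outlier-fence calculation,
# using the median-of-halves rule for quartiles described in this section.
from statistics import median

prices = [114950, 158000, 230500, 387000, 389950, 479000, 488800,
          529000, 575000, 639000, 659000, 1095000, 5500000]

def quartiles(data):
    """Return (Q1, M, Q3): medians of the lower half, the whole set, and the
    upper half, excluding the middle value when the number of values is odd."""
    x = sorted(data)
    n = len(x)
    mid = n // 2
    lower = x[:mid]
    upper = x[mid + 1:] if n % 2 else x[mid:]
    return median(lower), median(x), median(upper)

q1, m, q3 = quartiles(prices)
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr

print(q1, m, q3, iqr)          # 308750.0 488800 649000.0 340250.0
print(low_fence, high_fence)   # -201625.0 1159375.0
print([p for p in prices if p < low_fence or p > high_fence])   # [5500000]
```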
Example $15$
For the two data sets in the test scores example, find the following:
1. The interquartile range. Compare the two interquartile ranges.
2. Any outliers in either set.
Answer
Solution 2.15
The five number summary for the day and night classes is
Minimum $Q_1$ Median $Q_3$ Maximum
Day 32 56 74.5 82.5 99
Night 25.5 78 81 89 98
Table $21$
a. The $IQR$ for the day group is $Q_3 – Q_1 = 82.5 – 56 = 26.5$
The $IQR$ for the night group is $Q_3 – Q_1 = 89 – 78 = 11$
The interquartile range (the spread or variability) for the day class is larger than the night class $IQR$. This suggests more variation will be found in the day class's test scores.
b. Day class outliers are found using the $IQR$ times 1.5 rule. So,
• $Q_1 - IQR(1.5) = 56 – 26.5(1.5) = 16.25$
• $Q_3 + IQR(1.5) = 82.5 + 26.5(1.5) = 122.25$
Since the minimum and maximum values for the day class are greater than $16.25$ and less than $122.25$, there are no outliers.
Night class outliers are calculated as:
• $Q_1 – IQR (1.5) = 78 – 11(1.5) = 61.5$
• $Q_3 + IQR(1.5) = 89 + 11(1.5) = 105.5$
For this class, any test score less than $61.5$ is an outlier. Therefore, the scores of $45$ and $25.5$ are outliers. Since no test score is greater than 105.5, there is no upper end outlier.
Example $16$
Fifty statistics students were asked how much sleep they get per school night (rounded to the nearest hour). The results were:
Amount of sleep per school night (hours) Frequency Relative frequency Cumulative relative frequency
4 2 0.04 0.04
5 5 0.10 0.14
6 7 0.14 0.28
7 12 0.24 0.52
8 14 0.28 0.80
9 7 0.14 0.94
10 3 0.06 1.00
Table $22$
Find the 28th percentile. Notice the 0.28 in the "cumulative relative frequency" column. Twenty-eight percent of 50 data values is 14 values. There are 14 values less than the 28th percentile. They include the two 4s, the five 5s, and the seven 6s. The 28th percentile is between the last six and the first seven. The 28th percentile is 6.5.
Find the median. Look again at the "cumulative relative frequency" column and find 0.52. The median is the 50th percentile or the second quartile. 50% of 50 is 25. There are 25 values less than the median. They include the two 4s, the five 5s, the seven 6s, and eleven of the 7s. The median or 50th percentile is between the 25th, or seven, and 26th, or seven, values. The median is seven.
Find the third quartile. The third quartile is the same as the $75^{th}$ percentile. You can "eyeball" this answer. If you look at the "cumulative relative frequency" column, you find 0.52 and 0.80. When you have all the fours, fives, sixes and sevens, you have 52% of the data. When you include all the 8s, you have 80% of the data. The $\bf{75^{th}}$ percentile, then, must be an eight. Another way to look at the problem is to find 75% of 50, which is 37.5, and round up to 38. The third quartile, $Q_3$, is the 38th value, which is an eight. You can check this answer by counting the values. (There are 37 values below the third quartile and 12 values above.)
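One way to verify these readings of the table is to expand the frequency counts into the 50 individual observations and look up the ordered positions used above. The Python sketch below is our own illustration and is only one of several equally valid approaches.

```python
# Expanding the sleep-hours frequency table into 50 raw observations (a sketch).
table = {4: 2, 5: 5, 6: 7, 7: 12, 8: 14, 9: 7, 10: 3}      # hours: frequency

data = sorted(v for v, f in table.items() for _ in range(f))
print(len(data))                     # 50

p28 = (data[13] + data[14]) / 2      # between the 14th and 15th ordered values
med = (data[24] + data[25]) / 2      # between the 25th and 26th ordered values
q3  = data[37]                       # the 38th ordered value (75% of 50, rounded up)

print(p28, med, q3)                  # 6.5 7.0 8
```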
Exercise $16$
Forty bus drivers were asked how many hours they spend each day running their routes (rounded to the nearest hour). Find the 65th percentile.
Amount of time spent on route (hours) Frequency Relative frequency Cumulative relative frequency
2 12 0.30 0.30
3 14 0.35 0.65
4 10 0.25 0.90
5 4 0.10 1.00
Table $23$
Example $17$
Using Table $22$:
1. Find the $80^{th}$ percentile.
2. Find the $90^{th}$ percentile.
3. Find the first quartile. What is another name for the first quartile?
Answer
Solution 2.17
Using the data from the frequency table, we have:
a. The $80^{th}$ percentile is between the last eight and the first nine in the table (between the $40^{th}$ and $41^{st}$ values). Therefore, we need to take the mean of the $40^{th}$ and $41^{st}$ values. The $80^{th}$ percentile $=\frac{8+9}{2}=8.5$
b. The $90^{th}$ percentile will be the $45^{th}$ data value (location is $0.90(50) = 45$) and the 45th data value is nine.
c. $Q_1$ is also the 25th percentile. The $25^{th}$ percentile location calculation: $P_{25}=0.25(50)=12.5 \approx 13$ the $13^{th}$ data value. Thus, the $25^{th}$ percentile is six.
A Formula for Finding the $k$th Percentile
If you were to do a little research, you would find several formulas for calculating the $k^{th}$ percentile. Here is one of them.
$k =$ the $k^{th}$ percentile. It may or may not be part of the data.
$i =$ the index (ranking or position of a data value)
$n =$ the total number of data points, or observations
• Order the data from smallest to largest.
• Calculate $i=\frac{k}{100}(n+1)$
• If i is an integer, then the $k^{th}$ percentile is the data value in the $i^{th}$ position in the ordered set of data.
• If i is not an integer, then round i up and round i down to the nearest integers. Average the two data values in these two positions in the ordered data set. This is easier to understand in an example.
Example $18$
Listed are 29 ages for Academy Award winning best actors in order from smallest to largest.
$18; 21; 22; 25; 26; 27; 29; 30; 31; 33; 36; 37; 41; 42; 47; 52; 55; 57; 58; 62; 64; 67; 69; 71; 72; 73; 74; 76; 77$
1. Find the $70^{th}$ percentile.
2. Find the $83^{rd}$ percentile.
Answer
Solution 2.18
1.
• $k = 70$
• $i$ = the index
• $n = 29$
$i=\frac{k}{100}(n+1)=\left(\frac{70}{100}\right)(29+1)=21$. Twenty-one is an integer, and the data value in the 21st position in the ordered data set is 64. The 70th percentile is 64 years.
2.
• $k = 83^{rd}$ percentile
• $i$ = the index
• $n = 29$
$i=\frac{k}{100}(n+1)=( \frac{83}{100} )(29+1)=24.9$, which is NOT an integer. Round it down to 24 and up to 25. The age in the $24^{th}$ position is 71 and the age in the $25^{th}$ position is 72. Average 71 and 72. The $83^{rd}$ percentile is 71.5 years.
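The rule above translates directly into a short program. The Python sketch below (the function name is our own) reproduces both parts of this example.

```python
# A sketch of the k-th percentile rule above: i = (k/100)(n + 1), averaging the
# two neighboring values when i is not an integer.
import math

ages = [18, 21, 22, 25, 26, 27, 29, 30, 31, 33, 36, 37, 41, 42, 47,
        52, 55, 57, 58, 62, 64, 67, 69, 71, 72, 73, 74, 76, 77]

def kth_percentile(data, k):
    x = sorted(data)
    i = k * (len(x) + 1) / 100            # the index (a 1-based position)
    if i == int(i):
        return x[int(i) - 1]              # i is an integer: take that position
    lo, hi = math.floor(i), math.ceil(i)  # otherwise round i down and up ...
    return (x[lo - 1] + x[hi - 1]) / 2    # ... and average those two values

print(kth_percentile(ages, 70))           # 64
print(kth_percentile(ages, 83))           # 71.5
```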
Exercise $18$
Listed are 29 ages for Academy Award winning best actors in order from smallest to largest.
$18; 21; 22; 25; 26; 27; 29; 30; 31; 33; 36; 37; 41; 42; 47; 52; 55; 57; 58; 62; 64; 67; 69; 71; 72; 73; 74; 76; 77$
Calculate the 20th percentile and the 55th percentile.
A Formula for Finding the Percentile of a Value in a Data Set
• Order the data from smallest to largest.
• $x$ = the number of data values counting from the bottom of the data list up to but not including the data value for which you want to find the percentile.
• $y$ = the number of data values equal to the data value for which you want to find the percentile.
• $n$ = the total number of data.
• Calculate $\frac{x+0.5 y}{n}(100)$. Then round to the nearest integer.
Example $19$
Listed are 29 ages for Academy Award winning best actors in order from smallest to largest.
$18; 21; 22; 25; 26; 27; 29; 30; 31; 33; 36; 37; 41; 42; 47; 52; 55; 57; 58; 62; 64; 67; 69; 71; 72; 73; 74; 76; 77$
1. Find the percentile for 58.
2. Find the percentile for 25.
Answer
Solution 2.19
1. Counting from the bottom of the list, there are 18 data values less than 58. There is one value of 58.
$x = 18$ and $y = 1$. $\frac{x+0.5 y}{n}(100)=\frac{18+0.5(1)}{29}(100)=63.79$. 58 is the $64^{th}$ percentile.
2. Counting from the bottom of the list, there are three data values less than 25. There is one value of 25.
$x = 3$ and $y = 1$. $\frac{x+0.5 y}{n}(100)=\frac{3+0.5(1)}{29}(100)=12.07$. Twenty-five is the $12^{th}$ percentile.
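This second formula is just as easy to program. The Python sketch below (again, our own illustration) reproduces both percentile ranks found above.

```python
# A sketch of the percentile-of-a-value rule above: (x + 0.5y)/n * 100, rounded.
ages = [18, 21, 22, 25, 26, 27, 29, 30, 31, 33, 36, 37, 41, 42, 47,
        52, 55, 57, 58, 62, 64, 67, 69, 71, 72, 73, 74, 76, 77]

def percentile_of(data, value):
    x = sum(1 for v in data if v < value)   # values strictly below the target
    y = data.count(value)                   # values equal to the target
    return round((x + 0.5 * y) / len(data) * 100)

print(percentile_of(ages, 58))              # 64
print(percentile_of(ages, 25))              # 12
```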
Interpreting Percentiles, Quartiles, and Median
A percentile indicates the relative standing of a data value when data are sorted into numerical order from smallest to largest. The pth percentile is the value such that p percent of the data values are less than or equal to it. For example, 15% of data values are less than or equal to the 15th percentile.
• Low percentiles always correspond to lower data values.
• High percentiles always correspond to higher data values.
A percentile may or may not correspond to a value judgment about whether it is "good" or "bad." The interpretation of whether a certain percentile is "good" or "bad" depends on the context of the situation to which the data applies. In some situations, a low percentile would be considered "good;" in other contexts a high percentile might be considered "good". In many situations, there is no value judgment that applies.
Understanding how to interpret percentiles properly is important not only when describing data, but also when calculating probabilities in later chapters of this text.
NOTE
When writing the interpretation of a percentile in the context of the given data, the sentence should contain the following information.
• information about the context of the situation being considered
• the data value (value of the variable) that represents the percentile
• the percent of individuals or items with data values below the percentile
• the percent of individuals or items with data values above the percentile.
Example $20$
On a timed math test, the first quartile for time it took to finish the exam was 35 minutes. Interpret the first quartile in the context of this situation.
Answer
Solution 2.20
Twenty-five percent of students finished the exam in 35 minutes or less. Seventy-five percent of students finished the exam in 35 minutes or more. A low percentile could be considered good, as finishing more quickly on a timed exam is desirable. (If you take too long, you might not be able to finish.)
Example $21$
On a 20 question math test, the 70th percentile for number of correct answers was 16. Interpret the 70th percentile in the context of this situation.
Answer
Solution 2.21
Seventy percent of students answered 16 or fewer questions correctly. Thirty percent of students answered 16 or more questions correctly. A higher percentile could be considered good, as answering more questions correctly is desirable.
Exercise $21$
On a 60 point written assignment, the $80^{th}$ percentile for the number of points earned was 49. Interpret the $80^{th}$ percentile in the context of this situation.
Example $22$
At a community college, it was found that the $30^{th}$ percentile of credit units that students are enrolled for is seven units. Interpret the $30^{th}$ percentile in the context of this situation.
Answer
Solution 2.22
• Thirty percent of students are enrolled in seven or fewer credit units.
• Seventy percent of students are enrolled in seven or more credit units.
• In this example, there is no "good" or "bad" value judgment associated with a higher or lower percentile. Students attend community college for varied reasons and needs, and their course load varies according to their needs.
Example $23$
Sharpe Middle School is applying for a grant that will be used to add fitness equipment to the gym. The principal surveyed 15 anonymous students to determine how many minutes a day the students spend exercising. The results from the 15 anonymous students are shown.
0 minutes; 40 minutes; 60 minutes; 30 minutes; 60 minutes
10 minutes; 45 minutes; 30 minutes; 300 minutes; 90 minutes;
30 minutes; 120 minutes; 60 minutes; 0 minutes; 20 minutes
Determine the following five values.
• Min = 0
• $Q_1 = 20$
• Med = 40
• $Q_3 = 60$
• Max = 300
If you were the principal, would you be justified in purchasing new fitness equipment? Since 75% of the students exercise for 60 minutes or less daily, and since the $IQR$ is 40 minutes $(60 – 20 = 40)$, we know that half of the students surveyed exercise between 20 minutes and 60 minutes daily. This seems a reasonable amount of time spent exercising, so the principal would be justified in purchasing the new equipment.
However, the principal needs to be careful. The value 300 appears to be a potential outlier.
$Q_3 + 1.5(IQR) = 60 + (1.5)(40) = 120$.
The value 300 is greater than 120 so it is a potential outlier. If we delete it and calculate the five values, we get the following values:
• Min = 0
• $Q_1 = 20$
• Med = 35
• $Q_3 = 60$
• Max = 120
We still have 75% of the students exercising for 60 minutes or less daily and half of the students exercising between 20 and 60 minutes daily.
The "center" of a data set is also a way of describing location. The two most widely used measures of the "center" of the data are the mean(average) and the median. To calculate the mean weight of 50 people, add the 50 weights together and divide by 50. Technically this is the arithmetic mean. We will discuss the geometric mean later. To find the median weight of the 50 people, order the data and find the number that splits the data into two equal parts meaning an equal number of observations on each side. The weight of 25 people are below this weight and 25 people are heavier than this weight. The median is generally a better measure of the center when there are extreme values or outliers because it is not affected by the precise numerical values of the outliers. The mean is the most common measure of the center.
NOTE
The words “mean” and “average” are often used interchangeably. The substitution of one word for the other is common practice. The technical term is “arithmetic mean” and “average” is technically a center location. Formally, the arithmetic mean is called the first moment of the distribution by mathematicians. However, in practice among non-statisticians, “average" is commonly accepted for “arithmetic mean.”
When each value in the data set is not unique, the mean can be calculated by multiplying each distinct value by its frequency and then dividing the sum by the total number of data values. The letter used to represent the sample mean is an x with a bar over it (pronounced “$x$ bar”): $\overline x$.
The Greek letter $\mu$ (pronounced "mew") represents the population mean. One of the requirements for the sample mean to be a good estimate of the population mean is for the sample taken to be truly random.
To see that both ways of calculating the mean are the same, consider the sample:
1; 1; 1; 2; 2; 3; 4; 4; 4; 4; 4
$\overline{x}=\frac{1+1+1+2+2+3+4+4+4+4+4}{11}=2.7\nonumber$
$\overline{x}=\frac{3(1)+2(2)+1(3)+5(4)}{11}=2.7\nonumber$
In the second calculation, the frequencies are 3, 2, 1, and 5.
You can quickly find the location of the median by using the expression $\frac{n+1}{2}$.
The letter $n$ is the total number of data values in the sample. If $n$ is an odd number, the median is the middle value of the ordered data (ordered smallest to largest). If $n$ is an even number, the median is equal to the two middle values added together and divided by two after the data has been ordered. For example, if the total number of data values is 97, then $\frac{n+1}{2}=\frac{97+1}{2}=49$. The median is the 49th value in the ordered data. If the total number of data values is 100, then $\frac{n+1}{2}=\frac{100+1}{2}=50.5$. The median occurs midway between the 50th and 51st values. The location of the median and the value of the median are not the same. The upper case letter $M$ is often used to represent the median. The next example illustrates the location of the median and the value of the median.
Example 2.24
AIDS data indicating the number of months a patient with AIDS lives after taking a new antibody drug are as follows (smallest to largest):
3; 4; 8; 8; 10; 11; 12; 13; 14; 15; 15; 16; 16; 17; 17; 18; 21; 22; 22; 24; 24; 25; 26; 26; 27; 27; 29; 29; 31; 32; 33; 33; 34; 34; 35; 37; 40; 44; 44; 47;
Calculate the mean and the median.
Answer
Solution 2.24
The calculation for the mean is:
$\overline{x}=\frac{[3+4+(8)(2)+10+11+12+13+14+(15)(2)+\ldots+35+37+40+(44)(2)+47]}{40}=23.6$
To find the median, $M$, first use the formula for the location. The location is:
$\frac{n+1}{2}=\frac{40+1}{2}=20.5$
Starting at the smallest value, the median is located between the 20th and 21st values (the two 24s):
$3; 4; 8; 8; 10; 11; 12; 13; 14; 15; 15; 16; 16; 17; 17; 18; 21; 22; 22; 24; 24; 25; 26; 26; 27; 27; 29; 29; 31; 32; 33; 33; 34; 34; 35; 37; 40; 44; 44; 47;$
$M=\frac{24+24}{2}=24$
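If you prefer to verify such calculations by computer, Python's statistics module gives the same results; the sketch below is our own addition, not part of the original example.

```python
# Checking the mean and median of the 40 survival times (a sketch).
from statistics import mean, median

months = [3, 4, 8, 8, 10, 11, 12, 13, 14, 15, 15, 16, 16, 17, 17, 18, 21, 22,
          22, 24, 24, 25, 26, 26, 27, 27, 29, 29, 31, 32, 33, 33, 34, 34, 35,
          37, 40, 44, 44, 47]

print(len(months))       # 40
print(mean(months))      # 23.575, which rounds to 23.6
print(median(months))    # 24.0 (the average of the 20th and 21st ordered values)
```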
Example 2.25
Suppose that in a small town of 50 people, one person earns $5,000,000 per year and the other 49 each earn $30,000. Which is the better measure of the "center": the mean or the median?
Answer
Solution 2.25
$\overline{x}=\frac{5,000,000+49(30,000)}{50}=129,400$
$M = 30,000$
(There are 49 people who earn $30,000 and one person who earns $5,000,000.)
The median is a better measure of the "center" than the mean because 49 of the values are 30,000 and one is 5,000,000. The 5,000,000 is an outlier. The 30,000 gives us a better sense of the middle of the data.
Another measure of the center is the mode. The mode is the most frequent value. There can be more than one mode in a data set as long as those values have the same frequency and that frequency is the highest. A data set with two modes is called bimodal.
Example 2.26
Statistics exam scores for 20 students are as follows:
50; 53; 59; 59; 63; 63; 72; 72; 72; 72; 72; 76; 78; 81; 83; 84; 84; 84; 90; 93
Find the mode.
Answer
Solution 2.26
The most frequent score is 72, which occurs five times. Mode = 72.
Example 2.27
Five real estate exam scores are 430, 430, 480, 480, 495. The data set is bimodal because the scores 430 and 480 each occur twice.
When is the mode the best measure of the "center"? Consider a weight loss program that advertises a mean weight loss of six pounds the first week of the program. The mode might indicate that most people lose two pounds the first week, making the program less appealing.
NOTE
The mode can be calculated for qualitative data as well as for quantitative data. For example, if the data set is: red, red, red, green, green, yellow, purple, black, blue, the mode is red.
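A short program can find the mode as well, including ties and qualitative data. The Python sketch below uses collections.Counter; the function name is our own.

```python
# A sketch of finding the mode(s) with a frequency count; ties are reported,
# and the same function works for qualitative data.
from collections import Counter

def modes(data):
    counts = Counter(data)
    top = max(counts.values())
    return [value for value, c in counts.items() if c == top]

scores = [50, 53, 59, 59, 63, 63, 72, 72, 72, 72, 72,
          76, 78, 81, 83, 84, 84, 84, 90, 93]
print(modes(scores))                              # [72]
print(modes([430, 430, 480, 480, 495]))           # [430, 480]  (bimodal)
print(modes(["red", "red", "red", "green", "green",
             "yellow", "purple", "black", "blue"]))   # ['red']
```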
Calculating the Arithmetic Mean of Grouped Frequency Tables
When only grouped data is available, you do not know the individual data values (we only know intervals and interval frequencies); therefore, you cannot compute an exact mean for the data set. What we must do is estimate the actual mean by calculating the mean of a frequency table. A frequency table is a data representation in which grouped data is displayed along with the corresponding frequencies. To calculate the mean from a grouped frequency table we can apply the basic definition of mean: mean = $\frac{\text { data sum }}{\text { number of data values }}$ We simply need to modify the definition to fit within the restrictions of a frequency table.
Since we do not know the individual data values we can instead find the midpoint of each interval. The midpoint is $\frac{\text { lower boundary+upper boundary}}{2}$. We can now modify the mean definition to be $\textbf{Mean of Frequency Table}=\frac{\sum f m}{\sum f}$ where f = the frequency of the interval and m = the midpoint of the interval.
Example 2.28
A frequency table displaying Professor Blount's last statistics test is shown. Find the best estimate of the class mean.
Grade interval Number of students
50–56.5 1
56.5–62.5 0
62.5–68.5 4
68.5–74.5 4
74.5–80.5 2
80.5–86.5 3
86.5–92.5 4
92.5–98.5 1
Table 2.24
Answer
Solution 2.28
Find the midpoints for all intervals
Grade interval Midpoint
50–56.5 53.25
56.5–62.5 59.5
62.5–68.5 65.5
68.5–74.5 71.5
74.5–80.5 77.5
80.5–86.5 83.5
86.5–92.5 89.5
92.5–98.5 95.5
Table 2.25
• Calculate the sum of the product of each interval frequency and midpoint. $\sum f m$ $53.25(1)+59.5(0)+65.5(4)+71.5(4)+77.5(2)+83.5(3)+89.5(4)+95.5(1)=1460.25$
• $\mu=\frac{\sum f m}{\sum f}=\frac{1460.25}{19}=76.86$
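The grouped-data mean is a simple weighted average, so it is easy to compute by machine. The Python sketch below (our own, with the interval boundaries taken from the table above) reproduces the estimate of 76.86.

```python
# A sketch of the grouped-data mean: interval midpoints weighted by frequencies.
intervals = [(50, 56.5, 1), (56.5, 62.5, 0), (62.5, 68.5, 4), (68.5, 74.5, 4),
             (74.5, 80.5, 2), (80.5, 86.5, 3), (86.5, 92.5, 4), (92.5, 98.5, 1)]

sum_fm = sum(f * (lo + hi) / 2 for lo, hi, f in intervals)   # sum of f * midpoint
sum_f = sum(f for _, _, f in intervals)                      # total frequency

print(sum_fm, sum_f)                # 1460.25 19
print(round(sum_fm / sum_f, 2))     # 76.86
```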
Exercise 2.28
Maris conducted a study on the effect that playing video games has on memory recall. As part of her study, she compiled the following data:
Hours teenagers spend on video games Number of teenagers
0–3.5 3
3.5–7.5 7
7.5–11.5 12
11.5–15.5 7
15.5–19.5 9
Table 2.26
What is the best estimate for the mean number of hours spent playing video games?
2.04: Sigma Notation and Calculating the Arithmetic Mean
Formula for Population Mean
$\boldsymbol{\mu}=\frac{1}{N} \sum_{i=1}^{N} x_{i}\nonumber$
Formula for Sample Mean
$\overline{x}=\frac{1}{n} \sum_{i=1}^{n} x_{i}\nonumber$
This unit is here to remind you of material that you once studied and said at the time “I am sure that I will never need this!”
Here are the formulas for a population mean and the sample mean. The Greek letter $\mu$ is the symbol for the population mean and $\overline{x}$ is the symbol for the sample mean. Both formulas have a mathematical symbol that tells us how to make the calculations. It is called Sigma notation because the symbol is the Greek capital letter sigma: $\Sigma$. Like all mathematical symbols it tells us what to do: just as the plus sign tells us to add and the times sign tells us to multiply. These are called mathematical operators. The $\Sigma$ symbol tells us to add a specific list of numbers.
Let’s say we have a sample of animals from the local animal shelter and we are interested in their average age. If we list each value, or observation, in a column, you can give each one an index number. The first number will be number 1 and the second number 2 and so on.
Animal Age
1 9
2 1
3 8.5
4 10.5
5 10
6 8.5
7 12
8 8
9 1
10 9.5
Table $27$
Each observation represents a particular animal in the sample. Purr is animal number one and is a 9 year old cat, Toto is animal number 2 and is a 1 year old puppy and so on.
To calculate the mean we are told by the formula to add up all these numbers, ages in this case, and then divide the sum by 10, the total number of animals in the sample.
Animal number one, the cat Purr, is designated as $X_1$, animal number 2, Toto, is designated as $X_2$ and so on through Dundee who is animal number 10 and is designated as $X_{10}$.
The i in the formula tells us which of the observations to add together. In this case it is $X_1$ through $X_{10}$ which is all of them. We know which ones to add by the indexing notation, the $i = 1$ and the $n$ or capital $N$ for the population. For this example the indexing notation would be $i = 1$ and because it is a sample we use a small $n$ on the top of the $\Sigma$ which would be 10.
The standard deviation requires the same mathematical operator and so it would be helpful to recall this knowledge from your past.
The sum of the ages is found to be 78 and dividing by 10 gives us the sample mean age as 7.8 years.
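In code, the summation operator is simply a sum over the list of observations. The Python sketch below, our own illustration, reproduces the calculation for the shelter animals.

```python
# The capital-sigma operator in code is simply a sum over the observations (a sketch).
ages = [9, 1, 8.5, 10.5, 10, 8.5, 12, 8, 1, 9.5]   # x_1 through x_10

total = sum(ages)             # the summation from i = 1 to n
x_bar = total / len(ages)     # divide by n to get the sample mean

print(total, x_bar)           # 78.0 7.8
```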
2.05: Geometric Mean
The mean (Arithmetic), median and mode are all measures of the “center” of the data, the “average”. They are all in their own way trying to measure the “common” point within the data, that which is “normal”. In the case of the arithmetic mean this is solved by finding the value from which all points are equal linear distances. We can imagine that all the data values are combined through addition and then distributed back to each data point in equal amounts. The sum of all the values is what is redistributed in equal amounts such that the total sum remains the same.
The geometric mean redistributes not the sum of the values but the product of multiplying all the individual values and then redistributing them in equal portions such that the total product remains the same. This can be seen from the formula for the geometric mean, $\tilde{x}$: (Pronounced $x$-tilde)
$\tilde{x}=\left(\prod_{i=1}^{n} x_{i}\right)^{\frac{1}{n}}=\sqrt[n]{x_{1} \cdot x_{2} \cdots x_{n}}=\left(x_{1} \cdot x_{2} \cdots x_{n}\right)^{\frac{1}{n}}\nonumber$
where $\Pi$ (capital Greek pi) is another mathematical operator, which tells us to multiply all the $x_{i}$ numbers in the same way capital Greek sigma tells us to add all the $x_{i}$ numbers. Remember that a fractional exponent is calling for the nth root of the number; thus an exponent of 1/3 is the cube root of the number.
The geometric mean answers the question, "if all the quantities had the same value, what would that value have to be in order to achieve the same product?" The geometric mean gets its name from the fact that when redistributed in this way the sides form a geometric shape for which all sides have the same length. To see this, take the example of the numbers 10, 51.2 and 8. The geometric mean is found by multiplying these three numbers together (giving 4,096) and taking the cube root, because there are three numbers among which this product is to be distributed. Thus the geometric mean of these three numbers is 16. This describes a cube measuring 16 by 16 by 16, which has a volume of 4,096 cubic units.
The geometric mean is relevant in Economics and Finance for dealing with growth: growth of markets, of investments, of population, and of other variables in whose growth there is an interest. Imagine that our box of 4,096 units (perhaps dollars) is the value of an investment after three years and that the investment returns in percents were the three numbers in our example. The geometric mean will provide us with the answer to the question, what is the average rate of return: 16 percent. The arithmetic mean of these three numbers is 23.1 percent. The reason for this difference, 16 versus 23.1, is that the arithmetic mean is additive and thus does not account for the interest on the interest, the compound interest, embedded in the investment growth process. The same issue arises when asking for the average rate of growth of a population or sales or market penetration, etc., knowing the annual rates of growth. The formula for the geometric mean rate of return, or any other growth rate, is:
$r_{s}=\left(x_{1} \cdot x_{2} \cdots x_{n}\right)^{\frac{1}{n}}-1\nonumber$
Manipulating the formula for the geometric mean can also provide a calculation of the average rate of growth between two periods knowing only the initial value $a_0$, the ending value $a_n$, and the number of periods, $n$. The following formula provides this information:
$\left(\frac{a_{n}}{a_{0}}\right)^{\frac{1}{n}}=\tilde{x}\nonumber$
Finally, we note that the formula for the geometric mean requires that all numbers be positive, greater than zero. The reason of course is that the root of a negative number is undefined for use outside of mathematical theory. There are ways to avoid this problem however. In the case of rates of return and other simple growth problems we can convert the negative values to meaningful positive equivalent values. Imagine that the annual returns for the past three years are +12%, -8%, and +2%. Using the decimal multiplier equivalents of 1.12, 0.92, and 1.02, allows us to compute a geometric mean of 1.0167. Subtracting 1 from this value gives the geometric mean of +1.67% as a net rate of population growth (or financial return). From this example we can see that the geometric mean provides us with this formula for calculating the geometric (mean) rate of return for a series of annual rates of return:
$r_{s}=\tilde{x}-1\nonumber$
where $r_{s}$ is average rate of return and $\tilde{x}$ is the geometric mean of the returns during some number of time periods. Note that the length of each time period must be the same.
As a general rule one should convert the percent values to their decimal equivalent multipliers. It is important to recognize that when dealing with percents, the geometric mean of percent values does not equal the geometric mean of the decimal multiplier equivalents, and it is the decimal multiplier equivalent geometric mean that is relevant.
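The geometric mean calculations in this section are easy to reproduce by computer. The Python sketch below is our own; the starting and ending values in the last part are hypothetical figures used only to illustrate the two-period growth formula, and math.prod assumes Python 3.8 or later.

```python
# A sketch of the geometric mean and the geometric mean rate of return
# (math.prod assumes Python 3.8+; all names are our own).
from math import prod

def geometric_mean(values):
    return prod(values) ** (1 / len(values))

print(round(geometric_mean([10, 51.2, 8]), 4))     # 16.0 (cube root of 4,096)

# Annual returns of +12%, -8%, and +2% expressed as decimal multipliers:
multipliers = [1.12, 0.92, 1.02]
rate = geometric_mean(multipliers) - 1
print(round(rate * 100, 2))                        # 1.67 (percent per period)

# Average growth multiplier from a starting value a0 to an ending value an
# over n periods (hypothetical figures chosen to mirror the example above):
a0, an, n = 1000, 1051.0, 3
print((an / a0) ** (1 / n))                        # about 1.0167 per period
```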
2.06: Skewness and the Mean, Median, and Mode
Consider the following data set.
4; 5; 6; 6; 6; 7; 7; 7; 7; 7; 7; 8; 8; 8; 9; 10
This data set can be represented by the following histogram. Each interval has width one, and each value is located in the middle of an interval.
The histogram displays a symmetrical distribution of data. A distribution is symmetrical if a vertical line can be drawn at some point in the histogram such that the shape to the left and the right of the vertical line are mirror images of each other. The mean, the median, and the mode are each seven for these data. In a perfectly symmetrical distribution, the mean and the median are the same. This example has one mode (unimodal), and the mode is the same as the mean and median. In a symmetrical distribution that has two modes (bimodal), the two modes would be different from the mean and median.
The histogram for the data: 4; 5; 6; 6; 6; 7; 7; 7; 7; 8 is not symmetrical. The right-hand side seems "chopped off" compared to the left side. A distribution of this type is called skewed to the left because it is pulled out to the left. We can formally measure the skewness of a distribution just as we can mathematically measure the center weight of the data or its general "spreadness". The mathematical formula for skewness is:
$a_{3}=\sum \frac{\left(x_{i}-\overline{x}\right)^{3}}{n s^{3}}.\nonumber$
The greater the deviation from zero, the greater the degree of skewness. If the skewness is negative, then the distribution is skewed left, as in Figure $13$.
The mean is 6.3, the median is 6.5, and the mode is seven. Notice that the mean is less than the median, and they are both less than the mode. The mean and the median both reflect the skewing, but the mean reflects it more so.
The histogram for the data: 6; 7; 7; 7; 7; 8; 8; 8; 9; 10, is also not symmetrical. It is skewed to the right.
The mean is 7.7, the median is 7.5, and the mode is seven. Of the three statistics, the mean is the largest, while the mode is the smallest. Again, the mean reflects the skewing the most.
To summarize, generally if the distribution of data is skewed to the left, the mean is less than the median, which is often less than the mode. If the distribution of data is skewed to the right, the mode is often less than the median, which is less than the mean.
As with the mean, median and mode, and as we will see shortly, the variance, there are mathematical formulas that give us precise measures of these characteristics of the distribution of the data. Again looking at the formula for skewness we see that this is a relationship between the mean of the data and the individual observations cubed.
$a_{3}=\sum \frac{\left(x_{i}-\overline{x}\right)^{3}}{n s^{3}}\nonumber$
where $s$ is the sample standard deviation of the data, $x_{i}$, and $\overline{x}$ is the arithmetic mean and $n$ is the sample size.
Formally the arithmetic mean is known as the first moment of the distribution. The second moment we will see is the variance, and skewness is the third moment. The variance measures the squared differences of the data from the mean and skewness measures the cubed differences of the data from the mean. While a variance can never be a negative number, the measure of skewness can, and this is how we determine if the data are skewed right or left. The skewness for a normal distribution is zero, and any symmetric data should have skewness near zero. Negative values for the skewness indicate data that are skewed left and positive values for the skewness indicate data that are skewed right. By skewed left, we mean that the left tail is long relative to the right tail. Similarly, skewed right means that the right tail is long relative to the left tail. The skewness characterizes the degree of asymmetry of a distribution around its mean. While the mean and standard deviation are dimensional quantities (this is why we will take the square root of the variance), that is, have the same units as the measured quantities $\mathrm{X}_{i}$, the skewness is conventionally defined in such a way as to make it nondimensional. It is a pure number that characterizes only the shape of the distribution. A positive value of skewness signifies a distribution with an asymmetric tail extending out towards more positive $X$ and a negative value signifies a distribution whose tail extends out towards more negative $X$. A zero measure of skewness will indicate a symmetrical distribution.
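The skewness formula can be checked against the small data sets used in this section. The Python sketch below is our own illustration; it should return a negative value for the left-skewed data, a positive value for the right-skewed data, and zero for the symmetric data.

```python
# A sketch of the skewness measure a3 applied to the small data sets in this section.
from statistics import mean, stdev

def skewness(data):
    x_bar, s, n = mean(data), stdev(data), len(data)
    return sum((x - x_bar) ** 3 for x in data) / (n * s ** 3)

left_skewed  = [4, 5, 6, 6, 6, 7, 7, 7, 7, 8]
right_skewed = [6, 7, 7, 7, 7, 8, 8, 8, 9, 10]
symmetric    = [4, 5, 6, 6, 6, 7, 7, 7, 7, 7, 7, 8, 8, 8, 9, 10]

print(round(skewness(left_skewed), 2))    # -0.52 (negative: skewed left)
print(round(skewness(right_skewed), 2))   # 0.52 (positive: skewed right)
print(round(skewness(symmetric), 2))      # 0.0 (symmetric)
```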
Skewness and symmetry become important when we discuss probability distributions in later chapters.
2.07: Measures of the Spread of the Data
An important characteristic of any set of data is the variation in the data. In some data sets, the data values are concentrated closely near the mean; in other data sets, the data values are more widely spread out from the mean. The most common measure of variation, or spread, is the standard deviation. The standard deviation is a number that measures how far data values are from their mean.
The standard deviation
• provides a numerical measure of the overall amount of variation in a data set, and
• can be used to determine whether a particular data value is close to or far from the mean.
The standard deviation provides a measure of the overall variation in a data set
The standard deviation is always positive or zero. The standard deviation is small when the data are all concentrated close to the mean, exhibiting little variation or spread. The standard deviation is larger when the data values are more spread out from the mean, exhibiting more variation.
Suppose that we are studying the amount of time customers wait in line at the checkout at supermarket $A$ and supermarket $B$. The average wait time at both supermarkets is five minutes. At supermarket $A$, the standard deviation for the wait time is two minutes; at supermarket $B$, the standard deviation for the wait time is four minutes.
Because supermarket $B$ has a higher standard deviation, we know that there is more variation in the wait times at supermarket $B$. Overall, wait times at supermarket $B$ are more spread out from the average; wait times at supermarket $A$ are more concentrated near the average.
Calculating the Standard Deviation
If $x$ is a number, then the difference "$x$ minus the mean" is called its deviation. In a data set, there are as many deviations as there are items in the data set. The deviations are used to calculate the standard deviation. If the numbers belong to a population, in symbols a deviation is $x – \mu$. For sample data, in symbols a deviation is $x – \overline{x}$.
The procedure to calculate the standard deviation depends on whether the numbers are the entire population or are data from a sample. The calculations are similar, but not identical. Therefore the symbol used to represent the standard deviation depends on whether it is calculated from a population or a sample. The lower case letter s represents the sample standard deviation and the Greek letter $\sigma$ (sigma, lower case) represents the population standard deviation. If the sample has the same characteristics as the population, then s should be a good estimate of $\sigma$.
To calculate the standard deviation, we need to calculate the variance first. The variance is the average of the squares of the deviations (the $x – \overline{x}$ values for a sample, or the $x – \mu$ values for a population). The symbol $\sigma^2$ represents the population variance; the population standard deviation $\sigma$ is the square root of the population variance. The symbol $s^2$ represents the sample variance; the sample standard deviation s is the square root of the sample variance. You can think of the standard deviation as a special average of the deviations. Formally, the variance is the second moment of the distribution or the first moment around the mean. Remember that the mean is the first moment of the distribution.
If the numbers come from a census of the entire population and not a sample, when we calculate the average of the squared deviations to find the variance, we divide by $N$, the number of items in the population. If the data are from a sample rather than a population, when we calculate the average of the squared deviations, we divide by $\bf{n – 1}$, one less than the number of items in the sample.
Formulas for the Sample Standard Deviation
• $s=\sqrt{\frac{\Sigma(x-\overline{x})^{2}}{n-1}} \text { or } s=\sqrt{\frac{\Sigma f(x-\overline{x})^{2}}{n-1}} \text { or } s=\sqrt{\frac{\left(\sum_{i=1}^{n} x_{i}^{2}\right)-n \overline{x}^{2}}{n-1}}$
• For the sample standard deviation, the denominator is $\bf{n – 1}$, that is the sample size minus 1.
Formulas for the Population Standard Deviation
• $\boldsymbol{\sigma}=\sqrt{\frac{\Sigma(x-\mu)^{2}}{N}} \text { or } \sigma=\sqrt{\frac{\Sigma f(x-\mu)^{2}}{N}} \text { or } \sigma=\sqrt{\frac{\sum_{i=1}^{N} x_{i}^{2}}{N}-\mu^{2}}$
• For the population standard deviation, the denominator is $N$, the number of items in the population.
In these formulas, $f$ represents the frequency with which a value appears. For example, if a value appears once, $f$ is one. If a value appears three times in the data set or population, $f$ is three. Two important observations concerning the variance and standard deviation: the deviations are measured from the mean and the deviations are squared. In principle, the deviations could be measured from any point; however, our interest is measurement from the center weight of the data, the "normal" or most usual value of the observation. Later we will be trying to measure the "unusualness" of an observation or a sample mean and thus we need a measure from the mean. The second observation is that the deviations are squared. This does two things: first, it makes the deviations all positive, and second, it changes the units of measurement from that of the mean and the original observations. If the data are weights then the mean is measured in pounds, but the variance is measured in pounds-squared. One reason to use the standard deviation is to return to the original units of measurement by taking the square root of the variance. Further, squaring the deviations magnifies their value. For example, a deviation of 10 from the mean when squared is 100, but a deviation of 100 from the mean is 10,000. This places great weight on outliers when calculating the variance.
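The only computational difference between the two formulas is the divisor, n − 1 versus N, as the Python sketch below (our own, using a small made-up data set) illustrates; the standard library functions stdev and pstdev make the same distinction.

```python
# A sketch contrasting the sample and population standard deviations: the only
# difference is the divisor, n - 1 versus N.
from math import sqrt
from statistics import stdev, pstdev

data = [1, 2, 2, 3, 4, 4, 5]               # a small made-up data set
n = len(data)
x_bar = sum(data) / n
ss = sum((x - x_bar) ** 2 for x in data)   # sum of squared deviations

print(round(sqrt(ss / (n - 1)), 4), round(stdev(data), 4))   # 1.4142 1.4142
print(round(sqrt(ss / n), 4), round(pstdev(data), 4))        # 1.3093 1.3093
```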
Types of Variability in Samples
When trying to study a population, a sample is often used, either for convenience or because it is not possible to access the entire population. Variability is the term used to describe the differences that may occur in these outcomes. Common types of variability include the following:
• Observational or measurement variability
• Natural variability
• Induced variability
• Sample variability
Here are some examples to describe each type of variability.
Example 1: Measurement variability
Measurement variability occurs when there are differences in the instruments used to measure or in the people using those instruments. If we are gathering data on how long it takes for a ball to drop from a height by having students measure the time of the drop with a stopwatch, we may experience measurement variability if the two stopwatches used were made by different manufacturers: For example, one stopwatch measures to the nearest second, whereas the other one measures to the nearest tenth of a second. We also may experience measurement variability because two different people are gathering the data. Their reaction times in pressing the button on the stopwatch may differ; thus, the outcomes will vary accordingly. The differences in outcomes may be affected by measurement variability.
Example 2: Natural variability
Natural variability arises from the differences that naturally occur because members of a population differ from each other. For example, if we have two identical corn plants and we expose both plants to the same amount of water and sunlight, they may still grow at different rates simply because they are two different corn plants. The difference in outcomes may be explained by natural variability.
Example 3: Induced variability
Induced variability is the counterpart to natural variability; this occurs because we have artificially induced an element of variation (that, by definition, was not present naturally): For example, we assign people to two different groups to study memory, and we induce a variable in one group by limiting the amount of sleep they get. The difference in outcomes may be affected by induced variability.
Example 4: Sample variability
Sample variability occurs when multiple random samples are taken from the same population. For example, if I conduct four surveys of 50 people randomly selected from a given population, the differences in outcomes may be affected by sample variability.
Example $29$
In a fifth grade class, the teacher was interested in the average age and the sample standard deviation of the ages of her students. The following data are the ages for a SAMPLE of $n = 20$ fifth grade students. The ages are rounded to the nearest half year:
9; 9.5; 9.5; 10; 10; 10; 10; 10.5; 10.5; 10.5; 10.5; 11; 11; 11; 11; 11; 11; 11.5; 11.5; 11.5;
$\overline{x}=\frac{9+9.5(2)+10(4)+10.5(4)+11(6)+11.5(3)}{20}=10.525\nonumber$
The average age is 10.53 years, rounded to two places.
The variance may be calculated by using a table. Then the standard deviation is calculated by taking the square root of the variance. We will explain the parts of the table after calculating $s$.
Data Freq. Deviations Deviations2 (Freq.)(Deviations2)
$x$ $f$ $(x - \overline{x})$ $(x – \overline{x})^2$ $(f)(x – \overline{x})^2$
9 1 $9 – 10.525 = –1.525$ $(–1.525)^2 = 2.325625$ $1 \times 2.325625 = 2.325625$
9.5 2 $9.5 – 10.525 = –1.025$ $(–1.025)^2 = 1.050625$ $2 \times 1.050625 = 2.101250$
10 4 $10 – 10.525 = –0.525$ $(–0.525)^2 = 0.275625$ $4 \times 0.275625 = 1.1025$
10.5 4 $10.5 – 10.525 = –0.025$ $(–0.025)^2 = 0.000625$ $4 \times 0.000625 = 0.0025$
11 6 $11 – 10.525 = 0.475$ $(0.475)^2 = 0.225625$ $6 \times 0.225625 = 1.35375$
11.5 3 $11.5 – 10.525 = 0.975$ $(0.975)^2 = 0.950625$ $3 \times 0.950625 = 2.851875$
The total is 9.7375
Table $28$
The sample variance, $s^2$, is equal to the sum of the last column (9.7375) divided by the total number of data values minus one $(20 – 1)$:
$s^{2}=\frac{9.7375}{20-1}=0.5125$
The sample standard deviation s is equal to the square root of the sample variance:
$s=\sqrt{0.5125}=0.715891$, which is rounded to two decimal places, $s = 0.72$.
Explanation of the standard deviation calculation shown in the table
The deviations show how spread out the data are about the mean. The data value 11.5 is farther from the mean than is the data value 11 which is indicated by the deviations 0.97 and 0.47. A positive deviation occurs when the data value is greater than the mean, whereas a negative deviation occurs when the data value is less than the mean. The deviation is –1.525 for the data value nine. If you add the deviations, the sum is always zero. (For Example $29$, there are $n = 20$ deviations.) So you cannot simply add the deviations to get the spread of the data. By squaring the deviations, you make them positive numbers, and the sum will also be positive. The variance, then, is the average squared deviation. By squaring the deviations we are placing an extreme penalty on observations that are far from the mean; these observations get greater weight in the calculations of the variance. We will see later on that the variance (standard deviation) plays the critical role in determining our conclusions in inferential statistics. We can begin now by using the standard deviation as a measure of "unusualness." "How did you do on the test?" "Terrific! Two standard deviations above the mean." This, we will see, is an unusually good exam grade.
The variance is a squared measure and does not have the same units as the data. Taking the square root solves the problem. The standard deviation measures the spread in the same units as the data.
Notice that instead of dividing by $n = 20$, the calculation divided by $n – 1 = 20 – 1 = 19$ because the data is a sample. For the sample variance, we divide by the sample size minus one $(n – 1)$. Why not divide by $n$? The answer has to do with the population variance. The sample variance is an estimate of the population variance. This estimate requires us to use an estimate of the population mean rather than the actual population mean. Based on the theoretical mathematics that lies behind these calculations, dividing by $(n – 1)$ gives a better estimate of the population variance.
The standard deviation, $s$ or $\sigma$, is either zero or larger than zero. Describing the data with reference to the spread is called "variability". The variability in data depends upon the method by which the outcomes are obtained; for example, by measuring or by random sampling. When the standard deviation is zero, there is no spread; that is, the all the data values are equal to each other. The standard deviation is small when the data are all concentrated close to the mean, and is larger when the data values show more variation from the mean. When the standard deviation is a lot larger than zero, the data values are very spread out about the mean; outliers can make $s$ or $\sigma$ very large.
Example $30$
Use the following data (first exam scores) from Susan Dean's spring pre-calculus class:
$33; 42; 49; 49; 53; 55; 55; 61; 63; 67; 68; 68; 69; 69; 72; 73; 74; 78; 80; 83; 88; 88; 88; 90; 92; 94; 94; 94; 94; 96; 100$
1. Create a chart containing the data, frequencies, relative frequencies, and cumulative relative frequencies to three decimal places.
2. Calculate the following to one decimal place:
1. The sample mean
2. The sample standard deviation
3. The median
4. The first quartile
5. The third quartile
6. $IQR$
Answer
Solution 2.30
a. See Table $29$
b.
1. The sample mean = 73.5
2. The sample standard deviation = 17.9
3. The median = 73
4. The first quartile = 61
5. The third quartile = 90
6. $IQR = 90 – 61 = 29$
Data Frequency Relative frequency Cumulative relative frequency
33 1 0.032 0.032
42 1 0.032 0.064
49 2 0.065 0.129
53 1 0.032 0.161
55 2 0.065 0.226
61 1 0.032 0.258
63 1 0.032 0.29
67 1 0.032 0.322
68 2 0.065 0.387
69 2 0.065 0.452
72 1 0.032 0.484
73 1 0.032 0.516
74 1 0.032 0.548
78 1 0.032 0.580
80 1 0.032 0.612
83 1 0.032 0.644
88 3 0.097 0.741
90 1 0.032 0.773
92 1 0.032 0.805
94 4 0.129 0.934
96 1 0.032 0.966
100 1 0.032 0.998 (Why isn't this value 1? Answer: Rounding)
Table $29$
Standard deviation of Grouped Frequency Tables
Recall that for grouped data we do not know individual data values, so we cannot describe the typical value of the data with precision. In other words, we cannot find the exact mean, median, or mode. We can, however, determine the best estimate of the measures of center by finding the mean of the grouped data with the formula: $\text{Mean of Frequency Table}=\frac{\sum f m}{\sum f}$
where $f=$ interval frequencies and $m$ = interval midpoints.
Just as we could not find the exact mean, neither can we find the exact standard deviation. Remember that standard deviation describes numerically the expected deviation a data value has from the mean. In simple English, the standard deviation allows us to compare how “unusual” individual data is compared to the mean.
Example $31$
Find the standard deviation for the data in Table $30$.
Class Frequency, $f$ Midpoint, $m$ $f\cdot m$ $f(m−\bar{x})^2$
0–2 1 1 $1\cdot 1=1$ $1(1−6.88)^2=34.57$
3–5 6 4 $6\cdot 4=24$ $6(4−6.88)^2=49.77$
6-8 10 7 $10\cdot 7=70$ $10(7−6.88)^2=0.14$
9-11 7 10 $7\cdot 10=70$ $7(10−6.88)^2=68.14$
12-14 0 13 $0\cdot 13=0$ $0(13−6.88)^2=0$
n = 24 $\bar{x}=\frac{165}{24}=6.88$ $s^2=\frac{152.62}{24-1}=6.64$
Table $30$
For this data set, we have the mean, $\bar{x} = 6.88$ and the standard deviation, $s_x = 2.58$. This means that a randomly selected data value would be expected to be 2.58 units from the mean. If we look at the first class, we see that the class midpoint is equal to one. This is almost three standard deviations from the mean. While the formula for calculating the standard deviation is not complicated,
$s_x=\sqrt{\frac{Σ(m−\bar{x})^2f}{n−1}}\nonumber$
where $s_x =$ sample standard deviation and $\bar{x} =$ sample mean, the calculations are tedious. It is usually best to use technology when performing the calculations.
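For readers following along by computer, the Python sketch below (our own) reproduces the grouped-data standard deviation from the table above; the tiny differences from the table's intermediate column come from its rounding of the mean to 6.88.

```python
# A sketch of the grouped-data standard deviation using the midpoints and
# frequencies from the table above.
from math import sqrt

groups = [(1, 1), (4, 6), (7, 10), (10, 7), (13, 0)]   # (midpoint m, frequency f)

n = sum(f for _, f in groups)
x_bar = sum(m * f for m, f in groups) / n
ss = sum(f * (m - x_bar) ** 2 for m, f in groups)
s = sqrt(ss / (n - 1))

print(n, round(x_bar, 2), round(s, 2))    # 24 6.88 2.58
```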
Comparing Values from Different Data Sets
The standard deviation is useful when comparing data values that come from different data sets. If the data sets have different means and standard deviations, then comparing the data values directly can be misleading.
• For each data value x, calculate how many standard deviations away from its mean the value is.
• Use the formula: x = mean + (#of STDEVs)(standard deviation); solve for #of STDEVs.
• $\# \text { of STDEVs }=\frac{x-\text { mean }}{\text { standard deviation }}$
• Compare the results of this calculation.
#of STDEVs is often called a "z-score"; we can use the symbol $z$. In symbols, the formulas become:
Sample $x=\overline{x}+z s$ $z=\frac{x-\overline{x}}{s}$
Population $x=\mu+z \sigma$ $z=\frac{x-\mu}{\sigma}$
Table $31$
Example $32$
Two students, John and Ali, from different high schools, wanted to find out who had the highest GPA when compared to his school. Which student had the highest GPA when compared to his school?
Student GPA School mean GPA School standard deviation
John 2.85 3.0 0.7
Ali 77 80 10
Table $32$
Answer
Solution 2.32
For each student, determine how many standard deviations (#of STDEVs) his GPA is away from the average, for his school. Pay careful attention to signs when comparing and interpreting the answer.
$z=\# \text { of STDEVs }=\frac{\text { value }-\text { mean }}{\text { standard deviation }}=\frac{x-\mu}{\sigma}$
For John, $z=\# \text { of STDEVs }=\frac{2.85-3.0}{0.7}=-0.21$
For Ali, $z=\# \text { of STDEVs }=\frac{77-80}{10}=-0.3$
John has the better GPA when compared to his school because his GPA is 0.21 standard deviations below his school's mean while Ali's GPA is 0.3 standard deviations below his school's mean.
John's z-score of –0.21 is higher than Ali's z-score of –0.3. For GPA, higher values are better, so we conclude that John has the better GPA when compared to his school.
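The z-score comparison is a one-line calculation, as the Python sketch below (our own illustration) shows.

```python
# A sketch of the z-score comparison: z = (x - mean) / standard deviation.
def z_score(x, mean, sd):
    return (x - mean) / sd

john = z_score(2.85, 3.0, 0.7)
ali = z_score(77, 80, 10)

print(round(john, 2), round(ali, 2))   # -0.21 -0.3
print(john > ali)                      # True: John is less far below his school's mean
```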
Exercise $32$
Two swimmers, Angie and Beth, from different teams, wanted to find out who had the fastest time for the 50 meter freestyle when compared to her team. Which swimmer had the fastest time when compared to her team?
Swimmer Time (seconds) Team mean time Team standard deviation
Angie 26.2 27.2 0.8
Beth 27.3 30.1 1.4
Table $33$
The following lists give a few facts that provide a little more insight into what the standard deviation tells us about the distribution of the data.
For ANY data set, no matter what the distribution of the data is:
• At least 75% of the data is within two standard deviations of the mean.
• At least 89% of the data is within three standard deviations of the mean.
• At least 95% of the data is within 4.5 standard deviations of the mean.
• This is known as Chebyshev's Rule.
For data having a Normal Distribution, which we will examine in great detail later:
• Approximately 68% of the data is within one standard deviation of the mean.
• Approximately 95% of the data is within two standard deviations of the mean.
• More than 99% of the data is within three standard deviations of the mean.
• This is known as the Empirical Rule.
• It is important to note that this rule only applies when the shape of the distribution of the data is bell-shaped and symmetric. We will learn more about this when studying the "Normal" or "Gaussian" probability distribution in later chapters.
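Both rules can be checked empirically by counting the share of observations that fall within k standard deviations of the mean. The Python sketch below is our own illustration and uses simulated, hypothetical bell-shaped data; for such data the shares should come out roughly as the Empirical Rule states, and for any data set they can never fall below the Chebyshev bounds.

```python
# A sketch that counts the share of observations within k standard deviations
# of the mean, using simulated (hypothetical) bell-shaped data.
import random
from statistics import mean, stdev

random.seed(1)
data = [random.gauss(100, 15) for _ in range(1000)]
m, s = mean(data), stdev(data)

for k in (1, 2, 3):
    share = sum(1 for x in data if abs(x - m) <= k * s) / len(data)
    print(k, round(share, 3))    # roughly 0.68, 0.95, and 0.997 for bell-shaped data
```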
Coefficient of Variation
Another useful way to compare distributions besides simple comparisons of means or standard deviations is to adjust for differences in the scale of the data being measured. Quite simply, a large variation in data with a large mean is different than the same variation in data with a small mean. To adjust for the scale of the underlying data the Coefficient of Variation (CV) has been developed. Mathematically:
$C V=\frac{s}{\overline{x}} * 100 \text { conditioned upon } \overline{x} \neq 0, \text { where } s \text { is the standard deviation of the data and } \overline{x} \text { is the mean of the data}\nonumber$
We can see that this measures the variability of the underlying data as a percentage of the mean value, the center weight of the data set. This measure is useful in comparing risk where an adjustment is warranted because of differences in scale of two data sets. In effect, the scale is changed to a common scale, percentage differences, which allows direct comparison of the magnitudes of variation of two or more data sets.
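As a quick illustration (mine, not from the original text, using invented prices), the CV makes a $100 stock and a $10 stock directly comparable even though their dollar spreads differ by a factor of ten:

```python
# Illustrative coefficient of variation: standard deviation as a percent of the mean.
import statistics

stock_a = [98, 102, 101, 99, 100, 103, 97]   # hypothetical prices near $100
stock_b = [x / 10 for x in stock_a]          # the same pattern of prices near $10

def cv(data):
    return statistics.stdev(data) / statistics.mean(data) * 100

print(f"CV of stock A: {cv(stock_a):.2f}%")
print(f"CV of stock B: {cv(stock_b):.2f}%")
# The CVs are identical: the relative variability is the same even though the
# dollar spread of stock A is ten times larger than that of stock B.
```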
119.
Javier and Ercilia are supervisors at a shopping mall. Each was given the task of estimating the mean distance that shoppers live from the mall. They each randomly surveyed 100 shoppers. The samples yielded the following information.
Javier Ercilia
$\overline x$ 6.0 miles 6.0 miles
$s$ 4.0 miles 7.0 miles
Table $81$
1. How can you determine which survey was correct?
2. Explain what the difference in the results of the surveys implies about the data.
3. If the two histograms depict the distribution of values for each supervisor, which one depicts Ercilia's sample? How do you know?
Use the following information to answer the next three exercises: We are interested in the number of years students in a particular elementary statistics class have lived in California. The information in the following table is from the entire section.
Number of years Frequency Number of years Frequency
7 1 22 1
14 3 23 1
15 1 26 1
18 1 40 2
19 4 42 2
20 3
Total = 20
Table $82$
120.
What is the $IQR$?
1. 8
2. 11
3. 15
4. 35
121.
What is the mode?
1. 19
2. 19.5
3. 14 and 20
4. 22.65
122.
Is this a sample or the entire population?
1. sample
2. entire population
3. neither
123.
Twenty-five randomly selected students were asked the number of movies they watched the previous week. The results are as follows:
# of movies Frequency
0 5
1 9
2 6
3 4
4 1
Table $83$
1. Find the sample mean $\overline x$.
2. Find the approximate sample standard deviation, $s$.
124.
Forty randomly selected students were asked the number of pairs of sneakers they owned. Let X = the number of pairs of sneakers owned. The results are as follows:
$X$ Frequency
1 2
2 5
3 8
4 12
5 12
6 0
7 1
Table $84$
1. Find the sample mean $\overline x$
2. Find the sample standard deviation, $s$
3. Construct a histogram of the data.
4. Complete the columns of the chart.
5. Find the first quartile.
6. Find the median.
7. Find the third quartile.
8. What percent of the students owned at least five pairs?
9. Find the 40th percentile.
10. Find the 90th percentile.
11. Construct a line graph of the data
12. Construct a stemplot of the data
125.
Following are the published weights (in pounds) of all of the team members of the San Francisco 49ers from a previous year.
177; 205; 210; 210; 232; 205; 185; 185; 178; 210; 206; 212; 184; 174; 185; 242; 188; 212; 215; 247; 241; 223; 220; 260; 245; 259; 278; 270; 280; 295; 275; 285; 290; 272; 273; 280; 285; 286; 200; 215; 185; 230; 250; 241; 190; 260; 250; 302; 265; 290; 276; 228; 265
1. Organize the data from smallest to largest value.
2. Find the median.
3. Find the first quartile.
4. Find the third quartile.
5. The middle 50% of the weights are from _______ to _______.
6. If our population were all professional football players, would the above data be a sample of weights or the population of weights? Why?
7. If our population included every team member who ever played for the San Francisco 49ers, would the above data be a sample of weights or the population of weights? Why?
8. Assume the population was the San Francisco 49ers. Find:
1. the population mean, $\mu$.
2. the population standard deviation, $\sigma$.
3. the weight that is two standard deviations below the mean.
4. When Steve Young, quarterback, played football, he weighed 205 pounds. How many standard deviations above or below the mean was he?
9. That same year, the mean weight for the Dallas Cowboys was 240.08 pounds with a standard deviation of 44.38 pounds. Emmit Smith weighed in at 209 pounds. With respect to his team, who was lighter, Smith or Young? How did you determine your answer?
126.
One hundred teachers attended a seminar on mathematical problem solving. The attitudes of a representative sample of 12 of the teachers were measured before and after the seminar. A positive number for change in attitude indicates that a teacher's attitude toward math became more positive. The 12 change scores are as follows:
3; 8; –1; 2; 0; 5; –3; 1; –1; 6; 5; –2
1. What is the mean change score?
2. What is the standard deviation for this population?
3. What is the median change score?
4. Find the change score that is 2.2 standard deviations below the mean.
127.
Refer to Figure $25$ to determine which of the following are true and which are false. Explain your solution to each part in complete sentences.
1. The medians for both graphs are the same.
2. We cannot determine if any of the means for both graphs is different.
3. The standard deviation for graph b is larger than the standard deviation for graph a.
4. We cannot determine if any of the third quartiles for both graphs is different.
128.
In a recent issue of the IEEE Spectrum, 84 engineering conferences were announced. Four conferences lasted two days. Thirty-six lasted three days. Eighteen lasted four days. Nineteen lasted five days. Four lasted six days. One lasted seven days. One lasted eight days. One lasted nine days. Let $X$ = the length (in days) of an engineering conference.
1. Organize the data in a chart.
2. Find the median, the first quartile, and the third quartile.
3. Find the 65th percentile.
4. Find the 10th percentile.
5. The middle 50% of the conferences last from _______ days to _______ days.
6. Calculate the sample mean of days of engineering conferences.
7. Calculate the sample standard deviation of days of engineering conferences.
8. Find the mode.
9. If you were planning an engineering conference, which would you choose as the length of the conference: mean; median; or mode? Explain why you made that choice.
10. Give two reasons why you think that three to five days seem to be popular lengths of engineering conferences.
129.
A survey of enrollment at 35 community colleges across the United States yielded the following figures:
6414; 1550; 2109; 9350; 21828; 4300; 5944; 5722; 2825; 2044; 5481; 5200; 5853; 2750; 10012; 6357; 27000; 9414; 7681; 3200; 17500; 9200; 7380; 18314; 6557; 13713; 17768; 7493; 2771; 2861; 1263; 7285; 28165; 5080; 11622
1. Organize the data into a chart with five intervals of equal width. Label the two columns "Enrollment" and "Frequency."
2. Construct a histogram of the data.
3. If you were to build a new community college, which piece of information would be more valuable: the mode or the mean?
4. Calculate the sample mean.
5. Calculate the sample standard deviation.
6. A school with an enrollment of 8000 would be how many standard deviations away from the mean?
Use the following information to answer the next two exercises. $X$ = the number of days per week that 100 clients use a particular exercise facility.
$x$ Frequency
0 3
1 12
2 33
3 28
4 11
5 9
6 4
Table $85$
130.
The 80th percentile is _____
1. 5
2. 80
3. 3
4. 4
131.
The number that is 1.5 standard deviations BELOW the mean is approximately _____
1. 0.7
2. 4.8
3. –2.8
4. Cannot be determined
132.
Suppose that a publisher conducted a survey asking adult consumers the number of fiction paperback books they had purchased in the previous month. The results are summarized in Table $86$.
# of books Freq. Rel. Freq.
0 18
1 24
2 24
3 22
4 15
5 10
7 5
9 1
Table $86$
1. Are there any outliers in the data? Use an appropriate numerical test involving the $IQR$ to identify outliers, if any, and clearly state your conclusion.
2. If a data value is identified as an outlier, what should be done about it?
3. Are any data values further than two standard deviations away from the mean? In some situations, statisticians may use this criterion to identify data values that are unusual, compared to the other data values. (Note that this criterion is most appropriate for data that is mound-shaped and symmetric, rather than for skewed data.)
4. Do parts a and c of this problem give the same answer?
5. Examine the shape of the data. Which part, a or c, of this question gives a more appropriate result for this data?
6. Based on the shape of the data which is the most appropriate measure of center for this data: mean, median or mode?
2.09: Chapter Formula Review
2.2 Measures of the Location of the Data
$i=\left(\frac{k}{100}\right)(n+1)$
where $i$ = the ranking or position of a data value,
$k$ = the $k$th percentile,
$n$ = total number of data.
Expression for finding the percentile of a data value: $\left(\frac{x+0.5 y}{n}\right)(100)$
where $x$ = the number of values counting from the bottom of the data list up to but not including the data value for which you want to find the percentile,
$y$ = the number of data values equal to the data value for which you want to find the percentile,
$n$ = total number of data
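A small sketch (my own, not part of the original review) shows both location formulas in action; the second call reuses the class-rank numbers worked out for Jesse in the homework solutions (143 values below the rank, 1 value equal, 180 total):

```python
# Illustrative sketch of the two percentile formulas from this review section.
def percentile_position(k, n):
    """Position i of the kth percentile in an ordered list of n values: i = (k/100)(n+1)."""
    return (k / 100) * (n + 1)

def percentile_of_value(x_below, y_equal, n):
    """Percentile of a data value: ((x + 0.5y)/n) * 100."""
    return (x_below + 0.5 * y_equal) / n * 100

print(percentile_position(75, 29))        # 22.5: where the third quartile sits among 29 values
print(percentile_of_value(143, 1, 180))   # about 79.7, i.e. roughly the 80th percentile
```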
2.3 Measures of the Center of the Data
$\mu=\frac{\sum f m}{\sum f}$ where $f$ = interval frequencies and $m$ = interval midpoints.
The arithmetic mean for a sample (denoted by $\overline{x}$) is $\overline{x}=\frac{\text { Sum of all values in the sample }}{\text { Number of values in the sample }}$
The arithmetic mean for a population (denoted by μ) is $\boldsymbol{\mu}=\frac{\text { Sum of all values in the population }}{\text { Number of values in the population }}$
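For grouped data, the first formula above is a frequency-weighted average of the interval midpoints. The sketch below (added for illustration) applies it to the grade frequency table used in the chapter practice (intervals 49.5–59.5 through 89.5–99.5 with frequencies 2, 3, 8, 12, 5):

```python
# Illustrative grouped-data mean: mu = (sum of f*m) / (sum of f).
midpoints = [54.5, 64.5, 74.5, 84.5, 94.5]   # interval midpoints m
frequencies = [2, 3, 8, 12, 5]               # interval frequencies f

grouped_mean = sum(f * m for f, m in zip(frequencies, midpoints)) / sum(frequencies)
print(round(grouped_mean, 2))   # 79.5, matching the value used in the solution to exercise 83
```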
2.5 Geometric Mean
The Geometric Mean: $\overline{x}=\left(\prod_{i=1}^{n} x_{i}\right)^{\frac{1}{n}}=\sqrt[n]{x_{1} \cdot x_{2} \cdots x_{n}}=\left(x_{1} \cdot x_{2} \cdots x_{n}\right)^{\frac{1}{n}}$
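In practice the geometric mean is most often applied to growth factors. The sketch below (my own illustration) reproduces the setup of homework exercise 103, where an investment grows at 5% for one year and then 8% for three years:

```python
# Illustrative geometric mean of growth factors.
import math

def geometric_mean(values):
    """nth root of the product of n positive values."""
    return math.prod(values) ** (1 / len(values))

factors = [1.05, 1.08, 1.08, 1.08]          # 5% for one year, then 8% for three years
average_factor = geometric_mean(factors)
print(f"average rate of return: {(average_factor - 1) * 100:.2f}%")   # about 7.24%
```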
2.6 Skewness and the Mean, Median, and Mode
Formula for skewness: $a_{3}=\sum \frac{\left(x_{i}-\overline{x}\right)^{3}}{n s^{3}}$
Formula for Coefficient of Variation:$C V=\frac{s}{\overline{x}} \cdot 100 \text { conditioned upon } \overline{x} \neq 0$
2.7 Measures of the Spread of the Data
$s_{x}=\sqrt{\frac{\sum f m^{2}}{n}-\overline{x}^{2}}$ where $s_{x}$ = sample standard deviation and $\overline{x}$ = sample mean.
Formulas for Sample Standard Deviation $s=\sqrt{\frac{\Sigma(x-\overline{x})^{2}}{n-1}} \text { or } s=\sqrt{\frac{\Sigma f(x-\overline{x})^{2}}{n-1}} \text { or } s=\sqrt{\frac{\left(\sum_{i=1}^{n} x_{i}^{2}\right)-n \overline{x}^{2}}{n-1}}$ For the sample standard deviation, the denominator is $n-1$, that is, the sample size minus 1.
Formulas for Population Standard Deviation $\sigma=\sqrt{\frac{\Sigma(x-\mu)^{2}}{N}} \text { or } \sigma=\sqrt{\frac{\Sigma f(x-\mu)^{2}}{N}} \text { or } \sigma=\sqrt{\frac{\sum_{i=1}^{N} x_{i}^{2}}{N}-\mu^{2}}$ For the population standard deviation, the denominator is $N$, the number of items in the population.
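The three sample formulas above are algebraically equivalent; the sketch below (added for illustration, with made-up data) checks that the definitional form and the computational shortcut give the same value of $s$:

```python
# Illustrative check that the definitional and shortcut formulas for s agree.
import math
import statistics

data = [1, 2, 2, 3, 4, 4, 5, 5, 5, 7]
n = len(data)
x_bar = sum(data) / n

definitional = math.sqrt(sum((x - x_bar) ** 2 for x in data) / (n - 1))
shortcut = math.sqrt((sum(x ** 2 for x in data) - n * x_bar ** 2) / (n - 1))

print(round(definitional, 4), round(shortcut, 4), round(statistics.stdev(data), 4))
# All three agree, since the shortcut is just an expansion of the sum of squared deviations.
```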
2.1 Display Data
84.
Table $63$ contains the 2010 obesity rates in U.S. states and Washington, DC.
State Percent (%) State Percent (%) State Percent (%)
Alabama 32.2 Kentucky 31.3 North Dakota 27.2
Alaska 24.5 Louisiana 31.0 Ohio 29.2
Arizona 24.3 Maine 26.8 Oklahoma 30.4
Arkansas 30.1 Maryland 27.1 Oregon 26.8
California 24.0 Massachusetts 23.0 Pennsylvania 28.6
Colorado 21.0 Michigan 30.9 Rhode Island 25.5
Connecticut 22.5 Minnesota 24.8 South Carolina 31.5
Delaware 28.0 Mississippi 34.0 South Dakota 27.3
Washington, DC 22.2 Missouri 30.5 Tennessee 30.8
Florida 26.6 Montana 23.0 Texas 31.0
Georgia 29.6 Nebraska 26.9 Utah 22.5
Hawaii 22.7 Nevada 22.4 Vermont 23.2
Idaho 26.5 New Hampshire 25.0 Virginia 26.0
Illinois 28.2 New Jersey 23.8 Washington 25.5
Indiana 29.6 New Mexico 25.1 West Virginia 32.5
Iowa 28.4 New York 23.9 Wisconsin 26.3
Kansas 29.4 North Carolina 27.8 Wyoming 25.1
Table $63$
1. Use a random number generator to randomly pick eight states. Construct a bar graph of the obesity rates of those eight states.
2. Construct a bar graph for all the states beginning with the letter "A."
3. Construct a bar graph for all the states beginning with the letter "M."
85.
Suppose that three book publishers were interested in the number of fiction paperbacks adult consumers purchase per month. Each publisher conducted a survey. In the survey, adult consumers were asked the number of fiction paperbacks they had purchased the previous month. The results are as follows:
# of books Freq. Rel. freq.
0 10
1 12
2 16
3 12
4 8
5 6
6 2
8 2
Table $64$ Publisher A
# of books Freq. Rel. freq.
0 18
1 24
2 24
3 22
4 15
5 10
7 5
9 1
Table $65$ Publisher B
# of books Freq. Rel. freq.
0–1 20
2–3 35
4–5 12
6–7 2
8–9 1
Table $66$ Publisher C
1. Find the relative frequencies for each survey. Write them in the charts.
2. Use the frequency column to construct a histogram for each publisher's survey. For Publishers A and B, make bar widths of one. For Publisher C, make bar widths of two.
3. In complete sentences, give two reasons why the graphs for Publishers A and B are not identical.
4. Would you have expected the graph for Publisher C to look like the other two graphs? Why or why not?
5. Make new histograms for Publisher A and Publisher B. This time, make bar widths of two.
6. Now, compare the graph for Publisher C to the new graphs for Publishers A and B. Are the graphs more similar or more different? Explain your answer.
86.
Often, cruise ships conduct all on-board transactions, with the exception of gambling, on a cashless basis. At the end of the cruise, guests pay one bill that covers all onboard transactions. Suppose that 60 single travelers and 70 couples were surveyed as to their on-board bills for a seven-day cruise from Los Angeles to the Mexican Riviera. Following is a summary of the bills for each group.
Amount($) Frequency Rel. frequency
51–100 5
101–150 10
151–200 15
201–250 15
251–300 10
301–350 5
Table $67$ Singles
Amount($) Frequency Rel. frequency
100–150 5
201–250 5
251–300 5
301–350 5
351–400 10
401–450 10
451–500 10
501–550 10
551–600 5
601–650 5
Table $68$ Couples
1. Fill in the relative frequency for each group.
2. Construct a histogram for the singles group. Scale the x-axis by $50 widths. Use relative frequency on the y-axis.
3. Construct a histogram for the couples group. Scale the x-axis by $50 widths. Use relative frequency on the y-axis.
4. Compare the two graphs:
1. List two similarities between the graphs.
2. List two differences between the graphs.
3. Overall, are the graphs more similar or different?
5. Construct a new graph for the couples by hand. Since each couple is paying for two individuals, instead of scaling the x-axis by $50, scale it by $100. Use relative frequency on the y-axis.
6. Compare the graph for the singles with the new graph for the couples:
1. List two similarities between the graphs.
2. Overall, are the graphs more similar or different?
7. How did scaling the couples graph differently change the way you compared it to the singles graph?
8. Based on the graphs, do you think that individuals spend the same amount, more or less, as singles as they do person by person as a couple? Explain why in one or two complete sentences.
87.
Twenty-five randomly selected students were asked the number of movies they watched the previous week. The results are as follows.
# of movies Frequency Relative frequency Cumulative relative frequency
0 5
1 9
2 6
3 4
4 1
Table $69$
1. Construct a histogram of the data.
2. Complete the columns of the chart.
Use the following information to answer the next two exercises: Suppose one hundred eleven people who shopped in a special t-shirt store were asked the number of t-shirts they own costing more than $19 each.
88.
The percentage of people who own at most three t-shirts costing more than $19 each is approximately:
1. 21
2. 59
3. 41
4. Cannot be determined
89.
If the data were collected by asking the first 111 people who entered the store, then the type of sampling is:
1. cluster
2. simple random
3. stratified
4. convenience
90.
Following are the 2010 obesity rates by U.S. states and Washington, DC.
State Percent (%) State Percent (%) State Percent (%)
Alabama 32.2 Kentucky 31.3 North Dakota 27.2
Alaska 24.5 Louisiana 31.0 Ohio 29.2
Arizona 24.3 Maine 26.8 Oklahoma 30.4
Arkansas 30.1 Maryland 27.1 Oregon 26.8
California 24.0 Massachusetts 23.0 Pennsylvania 28.6
Colorado 21.0 Michigan 30.9 Rhode Island 25.5
Connecticut 22.5 Minnesota 24.8 South Carolina 31.5
Delaware 28.0 Mississippi 34.0 South Dakota 27.3
Washington, DC 22.2 Missouri 30.5 Tennessee 30.8
Florida 26.6 Montana 23.0 Texas 31.0
Georgia 29.6 Nebraska 26.9 Utah 22.5
Hawaii 22.7 Nevada 22.4 Vermont 23.2
Idaho 26.5 New Hampshire 25.0 Virginia 26.0
Illinois 28.2 New Jersey 23.8 Washington 25.5
Indiana 29.6 New Mexico 25.1 West Virginia 32.5
Iowa 28.4 New York 23.9 Wisconsin 26.3
Kansas 29.4 North Carolina 27.8 Wyoming 25.1
Table $70$
Construct a bar graph of obesity rates of your state and the four states closest to your state. Hint: Label the x-axis with the states.
2.2 Measures of the Location of the Data
91.
The median age for U.S. blacks currently is 30.9 years; for U.S. whites it is 42.3 years.
1. Based upon this information, give two reasons why the black median age could be lower than the white median age.
2. Does the lower median age for blacks necessarily mean that blacks die younger than whites? Why or why not?
3. How might it be possible for blacks and whites to die at approximately the same age, but for the median age for whites to be higher?
92.
Six hundred adult Americans were asked by telephone poll, "What do you think constitutes a middle-class income?" The results are in Table $71$. Intervals include the left endpoint, but not the right endpoint.
Salary ($) Relative frequency
< 20,000 0.02
20,000–25,000 0.09
25,000–30,000 0.19
30,000–40,000 0.26
40,000–50,000 0.18
50,000–75,000 0.17
75,000–99,999 0.02
100,000+ 0.01
Table $71$
1. What percentage of the survey answered "not sure"?
2. What percentage think that middle-class is from $25,000 to $50,000?
3. Construct a histogram of the data.
1. Should all bars have the same width, based on the data? Why or why not?
2. How should the <20,000 and the 100,000+ intervals be handled? Why?
4. Find the 40th and 80th percentiles.
5. Construct a bar graph of the data.
2.3 Measures of the Center of the Data
93.
The most obese countries in the world have obesity rates that range from 11.4% to 74.6%. This data is summarized in the following table.
Percent of population obese Number of countries
11.4–20.45 29
20.45–29.45 13
29.45–38.45 4
38.45–47.45 0
47.45–56.45 2
56.45–65.45 1
65.45–74.45 0
74.45–83.45 1
Table $72$
1. What is the best estimate of the average obesity percentage for these countries?
2. The United States has an average obesity rate of 33.9%. Is this rate above average or below?
3. How does the United States compare to other countries?
94.
Table $73$ gives the percent of children under five considered to be underweight. What is the best estimate for the mean percentage of underweight children?
Percent of underweight children Number of countries
16–21.45 23
21.45–26.9 4
26.9–32.35 9
32.35–37.8 7
37.8–43.25 6
43.25–48.7 1
Table $73$
2.4 Sigma Notation and Calculating the Arithmetic Mean
95.
A sample of 10 prices is chosen from a population of 100 similar items. The values obtained from the sample, and the values for the population, are given in Table $74$ and Table $75$ respectively.
1. Is the mean of the sample within $1 of the population mean?
2. What is the difference in the sample and population means?
Prices of the sample
$21 $23
$21 $24
$22 $22
$25 $21
$20 $24
Table $74$
Prices of the population Frequency
$20 20
$21 35
$22 15
$23 10
$24 18
$25 2
Table $75$
96.
A standardized test is given to ten people at the beginning of the school year with the results given in Table $76$ below. At the end of the year the same people were again tested.
1. What is the average improvement?
2. Does it matter if the means are subtracted, or if the individual values are subtracted?
Student Beginning score Ending score
1 1100 1120
2 980 1030
3 1200 1208
4 998 1000
5 893 948
6 1015 1030
7 1217 1224
8 1232 1245
9 967 988
10 988 997
Table $76$
97.
A small class of 7 students has a mean grade of 82 on a test. If six of the grades are 80, 82, 86, 90, 90, and 95, what is the other grade?
98.
A class of 20 students has a mean grade of 80 on a test. Nineteen of the students have a mean grade between 79 and 82, inclusive.
1. What is the lowest possible grade of the other student?
2. What is the highest possible grade of the other student?
99.
If the mean of 20 prices is $10.39, and 5 of the items with a mean of $10.99 are sampled, what is the mean of the other 15 prices?
2.5 Geometric Mean
100.
An investment grows from $10,000 to $22,000 in five years. What is the average rate of return?
101.
An initial investment of $20,000 grows at a rate of 9% for five years. What is its final value?
102.
A culture contains 1,300 bacteria. The bacteria grow to 2,000 in 10 hours. What is the rate at which the bacteria grow per hour to the nearest tenth of a percent?
103.
An investment of $3,000 grows at a rate of 5% for one year, then at a rate of 8% for three years. What is the average rate of return to the nearest hundredth of a percent?
104.
An investment of $10,000 goes down to $9,500 in four years. What is the average return per year to the nearest hundredth of a percent?
2.6 Skewness and the Mean, Median, and Mode
105.
The median age of the U.S. population in 1980 was 30.0 years. In 1991, the median age was 33.1 years.
1. What does it mean for the median age to rise?
2. Give two reasons why the median age could rise.
3. For the median age to rise, is the actual number of children less in 1991 than it was in 1980? Why or why not?
2.7 Measures of the Spread of the Data
Use the following information to answer the next nine exercises: The population parameters below describe the full-time equivalent number of students (FTES) each year at Lake Tahoe Community College from 1976–1977 through 2004–2005.
• $\mu = 1000$ FTES
• $\text{median }= 1,014$ FTES
• $\sigma = 474$ FTES
• $\text{first quartile }= 528.5$ FTES
• $\text{third quartile }= 1,447.5$ FTES
• $n = 29$ years
106.
A sample of 11 years is taken. About how many are expected to have a FTES of 1014 or above? Explain how you determined your answer.
107.
75% of all years have an FTES:
1. at or below: _____
2. at or above: _____
108.
The population standard deviation = _____
109.
What percent of the FTES were from 528.5 to 1447.5? How do you know?
110.
What is the $IQR$? What does the $IQR$ represent?
111.
How many standard deviations away from the mean is the median?
Additional Information: The population FTES for 2005–2006 through 2010–2011 was given in an updated report. The data are reported here.
Year 2005–06 2006–07 2007–08 2008–09 2009–10 2010–11
Total FTES 1,585 1,690 1,735 1,935 2,021 1,890
Table $77$
112.
Calculate the mean, median, standard deviation, the first quartile, the third quartile and the $IQR$. Round to one decimal place.
113.
Compare the $IQR$ for the FTES for 1976–77 through 2004–2005 with the $IQR$ for the FTES for 2005-2006 through 2010–2011. Why do you suppose the $IQR$s are so different?
114.
Three students were applying to the same graduate school. They came from schools with different grading systems. Which student had the best GPA when compared to other students at his school? Explain how you determined your answer.
Student GPA School Average GPA School Standard Deviation
Thuy 2.7 3.2 0.8
Vichet 87 75 20
Kamala 8.6 8 0.4
Table $78$
115.
A music school has budgeted to purchase three musical instruments. They plan to purchase a piano costing $3,000, a guitar costing $550, and a drum set costing $600. The mean cost for a piano is $4,000 with a standard deviation of $2,500. The mean cost for a guitar is $500 with a standard deviation of $200. The mean cost for drums is $700 with a standard deviation of $100. Which cost is the lowest, when compared to other instruments of the same type? Which cost is the highest when compared to other instruments of the same type? Justify your answer.
116.
An elementary school class ran one mile with a mean of 11 minutes and a standard deviation of three minutes. Rachel, a student in the class, ran one mile in eight minutes. A junior high school class ran one mile with a mean of nine minutes and a standard deviation of two minutes. Kenji, a student in the class, ran 1 mile in 8.5 minutes. A high school class ran one mile with a mean of seven minutes and a standard deviation of four minutes. Nedda, a student in the class, ran one mile in eight minutes.
1. Why is Kenji considered a better runner than Nedda, even though Nedda ran faster than he?
2. Who is the fastest runner with respect to his or her class? Explain why.
117.
The most obese countries in the world have obesity rates that range from 11.4% to 74.6%. This data is summarized in Table $79$.
Percent of population obese Number of countries
11.4–20.45 29
20.45–29.45 13
29.45–38.45 4
38.45–47.45 0
47.45–56.45 2
56.45–65.45 1
65.45–74.45 0
74.45–83.45 1
Table $79$
What is the best estimate of the average obesity percentage for these countries? What is the standard deviation for the listed obesity rates? The United States has an average obesity rate of 33.9%. Is this rate above average or below? How “unusual” is the United States’ obesity rate compared to the average rate? Explain.
118.
Table $80$ gives the percent of children under five considered to be underweight.
Percent of underweight children Number of countries
16–21.45 23
21.45–26.9 4
26.9–32.35 9
32.35–37.8 7
37.8–43.25 6
43.25–48.7 1
Table $80$
What is the best estimate for the mean percentage of underweight children? What is the standard deviation? Which interval(s) could be considered unusual? Explain.
Frequency
the number of times a value of the data occurs
Frequency Table
a data representation in which grouped data is displayed along with the corresponding frequencies
Histogram
a graphical representation in x-y form of the distribution of data in a data set; x represents the data and y represents the frequency, or relative frequency. The graph consists of contiguous rectangles.
Interquartile Range
or IQR, is the range of the middle 50 percent of the data values; the IQR is found by subtracting the first quartile from the third quartile.
Mean (arithmetic)
a number that measures the central tendency of the data; a common name for mean is 'average.' The term 'mean' is a shortened form of 'arithmetic mean.' By definition, the mean for a sample (denoted by $\overline{x}$) is $\overline{x}=\frac{\text { Sum of all values in the sample }}{\text { Number of values in the sample }}$, and the mean for a population (denoted by μ) is $\boldsymbol{\mu}=\frac{\text { Sum of all values in the population }}{\text { Number of values in the population }}$
Mean (geometric)
a measure of central tendency that provides a measure of average geometric growth over multiple time periods.
Median
a number that separates ordered data into halves; half the values are the same number or smaller than the median and half the values are the same number or larger than the median. The median may or may not be part of the data.
Midpoint
the mean of an interval in a frequency table
Mode
the value that appears most frequently in a set of data
Outlier
an observation that does not fit the rest of the data
Percentile
a number that divides ordered data into hundredths; percentiles may or may not be part of the data. The median of the data is the second quartile and the 50th percentile. The first and third quartiles are the 25th and the 75th percentiles, respectively.
Quartiles
the numbers that separate the data into quarters; quartiles may or may not be part of the data. The second quartile is the median of the data.
Relative Frequency
the ratio of the number of times a value of the data occurs in the set of all outcomes to the number of all outcomes
Standard Deviation
a number that is equal to the square root of the variance and measures how far data values are from their mean; notation: s for sample standard deviation and σ for population standard deviation.
Variance
mean of the squared deviations from the mean, or the square of the standard deviation; for a set of data, a deviation can be represented as x – $\overline{x}$ where x is a value of the data and $\overline{x}$ is the sample mean. The sample variance is equal to the sum of the squares of the deviations divided by the difference of the sample size and one.
2.13: Chapter Homework Solutions
1.
3.
5.
7.
9.
65
11.
The relative frequency shows the proportion of data points that have each value. The frequency tells the number of data points that have each value.
13.
Answers will vary. One possible histogram is shown:
15.
Find the midpoint for each class. These will be graphed on the x-axis. The frequency values will be graphed on the y-axis values.
17.
19.
1. The 40th percentile is 37 years.
2. The 78th percentile is 70 years.
21.
Jesse graduated 37th out of a class of 180 students. There are 180 – 37 = 143 students ranked below Jesse. There is one rank of 37.
$x = 143$ and $y = 1$. $\frac{x+0.5 y}{n}(100)=\frac{143+0.5(1)}{180}(100) = 79.72$. Jesse’s rank of 37 puts him at the 80th percentile.
23.
1. For runners in a race it is more desirable to have a high percentile for speed. A high percentile means a higher speed which is faster.
2. 40% of runners ran at speeds of 7.5 miles per hour or less (slower). 60% of runners ran at speeds of 7.5 miles per hour or more (faster).
25.
When waiting in line at the DMV, the 85th percentile would be a long wait time compared to the other people waiting. 85% of people had shorter wait times than Mina. In this context, Mina would prefer a wait time corresponding to a lower percentile. 85% of people at the DMV waited 32 minutes or less. 15% of people at the DMV waited 32 minutes or longer.
27.
The manufacturer and the consumer would be upset. This is a large repair cost for the damages, compared to the other cars in the sample. INTERPRETATION: 90% of the crash tested cars had damage repair costs of $1700 or less; only 10% had damage repair costs of$1700 or more.
29.
You can afford 34% of houses. 66% of the houses are too expensive for your budget. INTERPRETATION: 34% of houses cost $240,000 or less. 66% of houses cost$240,000 or more.
31.
4
33.
$6 – 4 = 2$
35.
6
37.
Mean: $16 + 17 + 19 + 20 + 20 + 21 + 23 + 24 + 25 + 25 + 25 + 26 + 26 + 27 + 27 + 27 + 28 + 29 + 30 + 32 + 33 + 33 + 34 + 35 + 37 + 39 + 40 = 738$;
$\frac{738}{27} = 27.33$
39.
The most frequent lengths are 25 and 27, which occur three times. Mode = 25, 27
41.
4
44.
39.48 in.
45.
$21,574
46.
15.98 ounces
47.
81.56
48.
4 hours
49.
2.01 inches
50.
18.25
51.
10
52.
14.15
53.
14
54.
14.78
55.
44%
56.
100%
57.
6%
58.
33%
59.
The data are symmetrical. The median is 3 and the mean is 2.85. They are close, and the mode lies close to the middle of the data, so the data are symmetrical.
61.
The data are skewed right. The median is 87.5 and the mean is 88.2. Even though they are close, the mode lies to the left of the middle of the data, and there are many more instances of 87 than any other number, so the data are skewed right.
63.
When the data are symmetrical, the mean and median are close or the same.
65.
The distribution is skewed right because it looks pulled out to the right.
67.
The mean is 4.1 and is slightly greater than the median, which is four.
69.
The mode and the median are the same. In this case, they are both five.
71.
The distribution is skewed left because it looks pulled out to the left.
73.
The mean and the median are both six.
75.
The mode is 12, the median is 12.5, and the mean is 15.1. The mean is the largest.
77.
The mean tends to reflect skewing the most because it is affected the most by outliers.
79.
$s = 34.5$
81.
For Fredo: $z=\frac{0.158-0.166}{0.012} = –0.67$
For Karl: $z=\frac{0.177-0.189}{0.015}=-0.8$
Fredo’s z-score of –0.67 is higher than Karl’s z-score of –0.8. For batting average, higher values are better, so Fredo has a better batting average compared to his team.
83.
1. $s_{x}=\sqrt{\frac{\sum f m^{2}}{n}-\overline{x}^{2}}=\sqrt{\frac{193157.45}{30}-79.5^{2}}=10.88$
2. $s_{x}=\sqrt{\frac{\sum f m^{2}}{n}-\overline{x}^{2}}=\sqrt{\frac{380945.3}{101}-60.94^{2}}=7.62$
3. $s_{x}=\sqrt{\frac{\sum f m^{2}}{n}-\overline{x}^{2}}=\sqrt{\frac{440051.5}{86}-70.66^{2}}=11.14$
84.
1. Example solution for using the random number generator for the TI-84+ to generate a simple random sample of 8 states. Instructions are as follows.
• Number the entries in the table 1–51 (Includes Washington, DC; Numbered vertically)
• Press MATH
• Arrow over to PRB
• Press 5:randInt(
• Enter 51,1,8)
Eight numbers are generated (use the right arrow key to scroll through the numbers). The numbers correspond to the numbered states (for this example: {47 21 9 23 51 13 25 4}. If any numbers are repeated, generate a different number by using 5:randInt(51,1)). Here, the states (and Washington DC) are {Arkansas, Washington DC, Idaho, Maryland, Michigan, Mississippi, Virginia, Wyoming}. Corresponding percents are $\{30.1, 22.2, 26.5, 27.1, 30.9, 34.0, 26.0, 25.1\}$.
86.
Amount($) Frequency Relative frequency
51–100 5 0.08
101–150 10 0.17
151–200 15 0.25
201–250 15 0.25
251–300 10 0.17
301–350 5 0.08
Table $87$ Singles
Amount($) Frequency Relative frequency
100–150 5 0.07
201–250 5 0.07
251–300 5 0.07
301–350 5 0.07
351–400 10 0.14
401–450 10 0.14
451–500 10 0.14
501–550 10 0.14
551–600 5 0.07
601–650 5 0.07
Table $88$ Couples
1. See Table $87$ and Table $88$.
2. In the following histogram data values that fall on the right boundary are counted in the class interval, while values that fall on the left boundary are not counted (with the exception of the first interval where both boundary values are included).
3. In the following histogram, the data values that fall on the right boundary are counted in the class interval, while values that fall on the left boundary are not counted (with the exception of the first interval where values on both boundaries are included).
4. Compare the two graphs:
1. Answers may vary. Possible answers include:
• Both graphs have a single peak.
• Both graphs use class intervals with width equal to $50.
2. Answers may vary. Possible answers include:
• The couples graph has a class interval with no values.
• It takes almost twice as many class intervals to display the data for couples.
3. Answers may vary. Possible answers include: The graphs are more similar than different because the overall patterns for the graphs are the same.
5. Check student's solution.
6. Compare the graph for the Singles with the new graph for the Couples:
• Both graphs have a single peak.
• Both graphs display 6 class intervals.
• Both graphs show the same general pattern.
1. Answers may vary. Possible answers include: Although the width of the class intervals for couples is double that of the class intervals for singles, the graphs are more similar than they are different.
7. Answers may vary. Possible answers include: You are able to compare the graphs interval by interval. It is easier to compare the overall patterns with the new scale on the Couples graph. Because a couple represents two individuals, the new scale leads to a more accurate comparison.
8. Answers may vary. Possible answers include: Based on the histograms, it seems that spending does not vary much from singles to individuals who are part of a couple. The overall patterns are the same. The range of spending for couples is approximately double the range for individuals.
88.
c
90.
Answers will vary.
92.
1. $1 – (0.02+0.09+0.19+0.26+0.18+0.17+0.02+0.01) = 0.06$
2. $0.19+0.26+0.18 = 0.63$
3. Check student’s solution.
4. 40th percentile will fall between 30,000 and 40,000
80th percentile will fall between 50,000 and 75,000
5. Check student’s solution.
94.
The mean percentage, $\overline{x}=\frac{1328.65}{50}=26.57$
95.
1. Yes
2. The sample is 0.5 higher.
96.
1. 20
2. No
97.
51
98.
1. 42
2. 99
99.
$10.19
100.
17%
101.
$30,772.48
102.
4.4%
103.
7.24%
104.
-1.27%
106.
The median value is the middle value in the ordered list of data values. The median value of a set of 11 will be the 6th number in order. Six years will have totals at or below the median.
108.
474 FTES
110.
919
112.
• mean = 1,809.3
• median = 1,812.5
• standard deviation = 151.2
• first quartile = 1,690
• third quartile = 1,935
• $IQR = 245$
113.
Hint: Think about the number of years covered by each time period and what happened to higher education during those periods.
115.
For pianos, the cost of the piano is 0.4 standard deviations BELOW the mean. For guitars, the cost of the guitar is 0.25 standard deviations ABOVE the mean. For drums, the cost of the drum set is 1.0 standard deviations BELOW the mean. Of the three, the drums cost the lowest in comparison to the cost of other instruments of the same type. The guitar costs the most in comparison to the cost of other instruments of the same type.
117.
• $\overline{x}=23.32$
• Using the TI 83/84, we obtain a standard deviation of: $s_{x}=12.95$.
• The obesity rate of the United States is 10.58% higher than the average obesity rate.
• Since the standard deviation is 12.95, we see that $23.32 + 12.95 = 36.27$ is the obesity percentage that is one standard deviation from the mean. The United States obesity rate is slightly less than one standard deviation from the mean. Therefore, we can assume that the United States, while 34% obese, does not have an unusually high percentage of obese people.
120.
a
122.
b
123.
1. 1.48
2. 1.12
125.
1. 174; 177; 178; 184; 185; 185; 185; 185; 188; 190; 200; 205; 205; 206; 210; 210; 210; 212; 212; 215; 215; 220; 223; 228; 230; 232; 241; 241; 242; 245; 247; 250; 250; 259; 260; 260; 265; 265; 270; 272; 273; 275; 276; 278; 280; 280; 285; 285; 286; 290; 290; 295; 302
2. 241
3. 205.5
4. 272.5
5. 205.5, 272.5
6. sample
7. population
1. 236.34
2. 37.50
3. 161.34
4. 0.84 std. dev. below the mean
9. Young
127.
1. True
2. True
3. True
4. False
129.
1. Enrollment Frequency
1000-5000 10
5000-10000 16
10000-15000 3
15000-20000 3
20000-25000 1
25000-30000 2
Table $89$
2. Check student’s solution.
3. mode
4. 8628.74
5. 6943.88
6. –0.09
131.
a
2.1 Display Data
14.
Construct a frequency polygon for the following:
66.
Describe the relationship between the mode and the median of this distribution.
67.
Describe the relationship between the mean and the median of this distribution.
68.
69.
Describe the relationship between the mode and the median of this distribution.
70.
Are the mean and the median the exact same in this distribution? Why or why not?
71.
Describe the shape of this distribution.
72.
Describe the relationship between the mode and the median of this distribution.
73.
Describe the relationship between the mean and the median of this distribution.
74.
The mean and median for the data are the same.
3; 4; 5; 5; 6; 6; 6; 6; 7; 7; 7; 7; 7; 7; 7
Is the data perfectly symmetrical? Why or why not?
75.
Which is the greatest, the mean, the mode, or the median of the data set?
11; 11; 12; 12; 12; 12; 13; 15; 17; 22; 22; 22
76.
Which is the least, the mean, the mode, and the median of the data set?
56; 56; 56; 58; 59; 60; 62; 64; 64; 65; 67
77.
Of the three measures, which tends to reflect skewing the most, the mean, the mode, or the median? Why?
78.
In a perfectly symmetrical distribution, when would the mode be different from the mean and median?
2.7 Measures of the Spread of the Data
Use the following information to answer the next two exercises: The following data are the distances between 20 retail stores and a large distribution center. The distances are in miles.
29; 37; 38; 40; 58; 67; 68; 69; 76; 86; 87; 95; 96; 96; 99; 106; 112; 127; 145; 150
79.
Use a graphing calculator or computer to find the standard deviation and round to the nearest tenth.
80.
Find the value that is one standard deviation below the mean.
81.
Two baseball players, Fredo and Karl, on different teams wanted to find out who had the higher batting average when compared to his team. Which baseball player had the higher batting average when compared to his team?
Baseball player Batting average Team batting average Team standard deviation
Fredo 0.158 0.166 0.012
Karl 0.177 0.189 0.015
Table $59$
82.
Use Table $59$ to find the value that is three standard deviations:
• above the mean
• below the mean
83.
Find the standard deviation for the following frequency tables using the formula. Check the calculations with the TI 83/84.
1. Grade Frequency
49.5–59.5 2
59.5–69.5 3
69.5–79.5 8
79.5–89.5 12
89.5–99.5 5
Table \(60\)
2. Daily low temperature Frequency
49.5–59.5 53
59.5–69.5 32
69.5–79.5 15
79.5–89.5 1
89.5–99.5 0
Table \(61\)
3. Points per game Frequency
49.5–59.5 14
59.5–69.5 32
69.5–79.5 15
79.5–89.5 23
89.5–99.5 2
Table \(62\)
2.R: Descriptive Statistics (Review)
2.1 Display Data
A stem-and-leaf plot is a way to plot data and look at the distribution. In a stem-and-leaf plot, all data values within a class are visible. The advantage in a stem-and-leaf plot is that all values are listed, unlike a histogram, which gives classes of data values. A line graph is often used to represent a set of data values in which a quantity varies with time. These graphs are useful for finding trends. That is, finding a general pattern in data sets including temperature, sales, employment, company profit or cost over a period of time. A bar graph is a chart that uses either horizontal or vertical bars to show comparisons among categories. One axis of the chart shows the specific categories being compared, and the other axis represents a discrete value. Some bar graphs present bars clustered in groups of more than one (grouped bar graphs), and others show the bars divided into subparts to show cumulative effect (stacked bar graphs). Bar graphs are especially useful when categorical data is being used.
A histogram is a graphic version of a frequency distribution. The graph consists of bars of equal width drawn adjacent to each other. The horizontal scale represents classes of quantitative data values and the vertical scale represents frequencies. The heights of the bars correspond to frequency values. Histograms are typically used for large, continuous, quantitative data sets. A frequency polygon can also be used when graphing large data sets with data points that repeat. The data values usually go on the x-axis, with the frequency graphed on the y-axis. Time series graphs can be helpful when looking at large amounts of data for one variable over a period of time.
2.2 Measures of the Location of the Data
The values that divide a rank-ordered set of data into 100 equal parts are called percentiles. Percentiles are used to compare and interpret data. For example, an observation at the 50th percentile would be greater than 50 percent of the other observations in the set. Quartiles divide data into quarters. The first quartile ($Q_1$) is the 25th percentile,the second quartile ($Q_2$ or median) is 50th percentile, and the third quartile ($Q_3$) is the the 75th percentile. The interquartile range, or $IQR$, is the range of the middle 50 percent of the data values. The $IQR$ is found by subtracting $Q_1$ from $Q_3$, and can help determine outliers by using the following two expressions.
• $Q_3 + IQR(1.5)$
• $Q_1 – IQR(1.5)$
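A minimal sketch of this fence rule (my own illustration with invented data; note that textbooks and software differ slightly in how they locate $Q_1$ and $Q_3$) is:

```python
# Illustrative IQR fences for flagging potential outliers.
def median(values):
    v = sorted(values)
    mid = len(v) // 2
    return v[mid] if len(v) % 2 else (v[mid - 1] + v[mid]) / 2

def iqr_fences(data):
    s = sorted(data)
    half = len(s) // 2
    q1 = median(s[:half])                 # median of the lower half
    q3 = median(s[half + len(s) % 2:])    # median of the upper half
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

data = [5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 40]
low, high = iqr_fences(data)
print([x for x in data if x < low or x > high])   # [40] falls outside the fences
```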
2.3 Measures of the Center of the Data
The mean and the median can be calculated to help you find the "center" of a data set. The mean is the best estimate for the actual data set, but the median is the best measurement when a data set contains several outliers or extreme values. The mode will tell you the most frequently occuring datum (or data) in your data set. The mean, median, and mode are extremely helpful when you need to analyze your data, but if your data set consists of ranges which lack specific values, the mean may seem impossible to calculate. However, the mean can be approximated if you add the lower boundary with the upper boundary and divide by two to find the midpoint of each interval. Multiply each midpoint by the number of values found in the corresponding range. Divide the sum of these values by the total number of data values in the set.
2.6 Skewness and the Mean, Median, and Mode
Looking at the distribution of data can reveal a lot about the relationship between the mean, the median, and the mode. There are three types of distributions. A right (or positive) skewed distribution has a shape like Figure $11$, with a tail pulled out toward the larger values; a left (or negative) skewed distribution is the mirror image, with a tail pulled out toward the smaller values; and in a symmetrical distribution the mean, median, and mode lie close together at the center.
2.7 Measures of the Spread of the Data
The standard deviation can help you calculate the spread of data. There are different equations to use if are calculating the standard deviation of a sample or of a population.
• The Standard Deviation allows us to compare individual data or classes to the data set mean numerically.
• $s=\sqrt{\frac{\sum(x-\overline{x})^{2}}{n-1}} \text { or } s=\sqrt{\frac{\sum f(x-\overline{x})^{2}}{n-1}}$ is the formula for calculating the standard deviation of a sample. To calculate the standard deviation of a population, we would use the population mean, μ, and the formula $\sigma=\sqrt{\frac{\sum(x-\mu)^{2}}{N}} \text { or } \sigma=\sqrt{\frac{\sum f(x-\mu)^{2}}{N}}$.
03: Probability Topics
It is often necessary to "guess" about the outcome of an event in order to make a decision. Politicians study polls to guess their likelihood of winning an election. Teachers choose a particular course of study based on what they think students can comprehend. Doctors choose the treatments needed for various diseases based on their assessment of likely results. You may have visited a casino where people play games chosen because of the belief that the likelihood of winning is good. You may have chosen your course of study based on the probable availability of jobs.
You have, more than likely, used probability. In fact, you probably have an intuitive sense of probability. Probability deals with the chance of an event occurring. Whenever you weigh the odds of whether or not to do your homework or to study for an exam, you are using probability. In this chapter, you will learn how to solve probability problems using a systematic approach.
3.01: Probability Terminology
Probability is a measure that is associated with how certain we are of outcomes of a particular experiment or activity. An experiment is a planned operation carried out under controlled conditions. If the result is not predetermined, then the experiment is said to be a chance experiment. Flipping one fair coin twice is an example of an experiment.
A result of an experiment is called an outcome. The sample space of an experiment is the set of all possible outcomes. Three ways to represent a sample space are: to list the possible outcomes, to create a tree diagram, or to create a Venn diagram. The uppercase letter $S$ is used to denote the sample space. For example, if you flip one fair coin, $S = \{H, T\}$ where $H =$ heads and $T =$ tails are the outcomes.
An event is any combination of outcomes. Upper case letters like $A$ and $B$ represent events. For example, if the experiment is to flip one fair coin, event $A$ might be getting at most one head. The probability of an event $A$ is written $P(A)$.
The probability of any outcome is the long-term relative frequency of that outcome. Probabilities are between zero and one, inclusive (that is, zero and one and all numbers between these values). $P(A) = 0$ means the event $A$ can never happen. $P(A) = 1$ means the event $A$ always happens. $P(A) = 0.5$ means the event $A$ is equally likely to occur or not to occur. For example, if you flip one fair coin repeatedly (from 20 to 2,000 to 20,000 times) the relative frequency of heads approaches 0.5 (the probability of heads).
Equally likely means that each outcome of an experiment occurs with equal probability. For example, if you toss a fair, six-sided die, each face (1, 2, 3, 4, 5, or 6) is as likely to occur as any other face. If you toss a fair coin, a Head (H) and a Tail (T) are equally likely to occur. If you randomly guess the answer to a true/false question on an exam, you are equally likely to select a correct answer or an incorrect answer.
To calculate the probability of an event A when all outcomes in the sample space are equally likely, count the number of outcomes for event A and divide by the total number of outcomes in the sample space. For example, if you toss a fair dime and a fair nickel, the sample space is $\{HH, TH, HT, TT\}$ where $T =$ tails and $H =$ heads. The sample space has four outcomes. A = getting one head. There are two outcomes that meet this condition $\{HT, TH\}$, so $P(A) = \frac{2}{4} = 0.5$.
Suppose you roll one fair six-sided die, with the numbers $\{1, 2, 3, 4, 5, 6\}$ on its faces. Let event $E =$ rolling a number that is at least five. There are two outcomes $\{5, 6\}$. $P(E) = \frac{2}{6}$ If you were to roll the die only a few times, you would not be surprised if your observed results did not match the probability. If you were to roll the die a very large number of times, you would expect that, overall, $\frac{2}{6}$ of the rolls would result in an outcome of "at least five". You would not expect exactly $\frac{2}{6}$. The long-term relative frequency of obtaining this result would approach the theoretical probability of $\frac{2}{6}$ as the number of repetitions grows larger and larger.
This important characteristic of probability experiments is known as the law of large numbers which states that as the number of repetitions of an experiment is increased, the relative frequency obtained in the experiment tends to become closer and closer to the theoretical probability. Even though the outcomes do not happen according to any set pattern or order, overall, the long-term observed relative frequency will approach the theoretical probability. (The word empirical is often used instead of the word observed.)
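The law of large numbers is easy to watch in a simulation. The sketch below (an added illustration, not part of the original text) flips a simulated fair coin 20, 2,000, 20,000, and 200,000 times and prints the relative frequency of heads each time:

```python
# Illustrative simulation: relative frequency approaches the theoretical probability 0.5.
import random

random.seed(0)
for flips in (20, 2_000, 20_000, 200_000):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    print(f"{flips:>7} flips: relative frequency of heads = {heads / flips:.4f}")
```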
It is important to realize that in many situations, the outcomes are not equally likely. A coin or die may be unfair, or biased. Two math professors in Europe had their statistics students test the Belgian one Euro coin and discovered that in 250 trials, a head was obtained 56% of the time and a tail was obtained 44% of the time. The data seem to show that the coin is not a fair coin; more repetitions would be helpful to draw a more accurate conclusion about such bias. Some dice may be biased. Look at the dice in a game you have at home; the spots on each face are usually small holes carved out and then painted to make the spots visible. Your dice may or may not be biased; it is possible that the outcomes may be affected by the slight weight differences due to the different numbers of holes in the faces. Gambling casinos make a lot of money depending on outcomes from rolling dice, so casino dice are made differently to eliminate bias. Casino dice have flat faces; the holes are completely filled with paint having the same density as the material that the dice are made out of so that each face is equally likely to occur. Later we will learn techniques to use to work with probabilities for events that are not equally likely.
"$\cup$" Event: The Union
An outcome is in the event $A \cup B$ if the outcome is in A or is in B or is in both A and B. For example, let $A = \{1, 2, 3, 4, 5\}$ and $B = \{4, 5, 6, 7, 8\}$. $A \cup B = \{1, 2, 3, 4, 5, 6, 7, 8\}$. Notice that 4 and 5 are NOT listed twice.
"$\cap$" Event: The Intersection
An outcome is in the event $A \cap B$ if the outcome is in both A and B at the same time. For example, let $A$ and $B$ be $\{1, 2, 3, 4, 5\}$ and $\{4, 5, 6, 7, 8\}$, respectively. Then $A \cap B = \{4, 5\}$.
The complement of event A is denoted A′ (read "A prime"). A′ consists of all outcomes that are NOT in A. Notice that $P(A) + P(A′) = 1$. For example, let $S = \{1, 2, 3, 4, 5, 6\}$ and let $A = \{1, 2, 3, 4\}$. Then, $A′ = \{5, 6\}$. $P(A) = \frac{4}{6}$, $P(A′) = \frac{2}{6}$, and $P(A) + P(A′) = \frac{4}{6}+\frac{2}{6}=1$
The conditional probability of $A$ given $B$ is written $P(A|B)$. $P(A|B)$ is the probability that event $A$ will occur given that the event $B$ has already occurred. A conditional reduces the sample space. We calculate the probability of A from the reduced sample space $B$. The formula to calculate $P(A|B)$ is $P(A | B)=\frac{P(A \cap B)}{P(B)}$ where $P(B)$ is greater than zero.
For example, suppose we toss one fair, six-sided die. The sample space $S = \{1, 2, 3, 4, 5, 6\}$. Let $A =$ face is 2 or 3 and $B =$ face is even $(2, 4, 6)$. To calculate $P(A|B)$, we count the number of outcomes 2 or 3 in the sample space $B = \{2, 4, 6\}$. Then we divide that by the number of outcomes $B$ (rather than $S$).
We get the same result by using the formula. Remember that $S$ has six outcomes.
$P(A|B) = \frac{\frac{(\text { the number of outcomes that are } 2 \text { or } 3 \text { and even in } S)}{6}}{\frac{(\text { the number of outcomes that are even in } S)}{6}}=\frac{\frac{1}{6}}{\frac{3}{6}}=\frac{1}{3}$
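The same die example can be checked by counting outcomes directly, as in this small sketch (added for illustration):

```python
# Illustrative check of P(A|B) for one roll of a fair six-sided die.
sample_space = {1, 2, 3, 4, 5, 6}
A = {2, 3}        # face is 2 or 3
B = {2, 4, 6}     # face is even

p_b = len(B) / len(sample_space)
p_a_and_b = len(A & B) / len(sample_space)

print(p_a_and_b / p_b)        # 0.333..., matching P(A|B) = 1/3 from the formula
print(len(A & B) / len(B))    # the same answer, counting inside the reduced sample space B
```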
Odds
The odds of an event presents the probability as a ratio of success to failure. This is common in various gambling formats. Mathematically, the odds of an event can be defined as:
$\frac{P(A)}{1-P(A)}\nonumber$
where $P(A)$ is the probability of success and of course $1 − P(A)$ is the probability of failure. Odds are always quoted as "numerator to denominator," e.g. 2 to 1. Here the probability of winning is twice that of losing; thus, the probability of winning is 0.66. A probability of winning of 0.60 would generate odds in favor of winning of 3 to 2. While the calculation of odds can be useful in gambling venues in determining payoff amounts, it is not helpful for understanding probability or statistical theory.
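Converting back and forth between probabilities and odds is a one-line calculation, as in this sketch (my own illustration of the definition above):

```python
# Illustrative conversion between probability and odds.
def odds(p):
    """Odds in favor of an event with probability p, as a success-to-failure ratio."""
    return p / (1 - p)

def probability_from_odds(numerator, denominator):
    """Probability implied by odds quoted as 'numerator to denominator'."""
    return numerator / (numerator + denominator)

print(odds(0.60))                   # 1.5, i.e. odds of 3 to 2 in favor
print(probability_from_odds(2, 1))  # 0.666..., the two-thirds chance quoted above
```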
Understanding Terminology and Symbols
It is important to read each problem carefully to think about and understand what the events are. Understanding the wording is the first very important step in solving probability problems. Reread the problem several times if necessary. Clearly identify the event of interest. Determine whether there is a condition stated in the wording that would indicate that the probability is conditional; carefully identify the condition, if any.
Solution 3.3
1. $P(M) = 0.52$
2. $P(F) = 0.48$
3. $P(R) = 0.87$
4. $P(L) = 0.13$
5. $P(M \cap R) = 0.43$
6. $P(F \cap L) = 0.04$
7. $P(M \cup F) = 1$
8. $P(M \cup R) = 0.96$
9. $P(F \cup L) = 0.57$
10. $P(M') = 0.48$
11. $P(R|M) = 0.8269$ (rounded to four decimal places)
12. $P(F|L) = 0.3077$ (rounded to four decimal places)
13. $P(L|F) = 0.0833$
3.02: Independent and Mutually Exclusive Events
Independent and mutually exclusive do not mean the same thing.
Independent Events
Two events are independent if one of the following is true:
• $P(A|B) = P(A)$
• $P(B|A) = P(B)$
• $P(A \cap B) = P(A)P(B)$
Two events A and B are independent if the knowledge that one occurred does not affect the chance the other occurs. For example, the outcomes of two rolls of a fair die are independent events. The outcome of the first roll does not change the probability for the outcome of the second roll. To show two events are independent, you must show only one of the above conditions. If two events are NOT independent, then we say that they are dependent.
Sampling may be done with replacement (once a member is picked, that member goes back into the population and may be chosen again) or without replacement (once a member is picked, that member cannot be chosen again).
• If it is not known whether A and B are independent or dependent, assume they are dependent until you can show otherwise.
1. Compute $P(T)$.
2. Compute $P(T|F)$.
3. Are $T$ and $F$ independent?
4. Are $F$ and $S$ mutually exclusive?
5. Are $F$ and $S$ independent?
3.03: Two Basic Rules of Probability
When calculating probability, there are two rules to consider when determining if two events are independent or dependent and if they are mutually exclusive or not.
The Multiplication Rule
If A and B are two events defined on a sample space, then: $P(A \cap B)=P(B) P(A | B)$. We can think of the intersection symbol as substituting for the word "and".
This rule may also be written as: $P(A | B)=\frac{P(A \cap B)}{P(B)}$
This equation is read as the probability of A given B equals the probability of A and B divided by the probability of B.
If A and B are independent, then $P(A|B)=P(A)$. Then $P(A\cap B)=P(A|B)P(B)$ becomes $P(A\cap B)=P(A)P(B)$, because $P(A|B)=P(A)$ if A and B are independent.
One easy way to remember the multiplication rule is that the word "and" means that the event has to satisfy two conditions. For example, the name drawn from the class roster has to be both a female and a sophomore. It is harder to satisfy two conditions than only one, and of course when we multiply fractions the result is always smaller. This reflects the increasing difficulty of satisfying two conditions.
The Addition Rule
If A and B are defined on a sample space, then: $P(A\cup B)=P(A)+P(B)−P(A\cap B)$. We can think of the union symbol as substituting for the word "or". The reason we subtract the intersection of A and B is to keep from double counting elements that are in both A and B.
If A and B are mutually exclusive, then $P(A\cap B)=0$. Then $P(A\cup B)=P(A)+P(B)−P(A\cap B)$ becomes $P(A\cup B)=P(A)+P(B)$.
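A minimal numeric sketch of the two rules (the probabilities below are made up for illustration and are not the exercise that follows):

```python
# Sketch: combining the multiplication rule and the addition rule with exact fractions.
from fractions import Fraction

p_A = Fraction(3, 10)           # assumed P(A) = 0.30
p_B = Fraction(6, 10)           # assumed P(B) = 0.60
p_A_given_B = Fraction(4, 10)   # assumed P(A|B) = 0.40

p_A_and_B = p_A_given_B * p_B          # multiplication rule: P(A ∩ B)
p_A_or_B = p_A + p_B - p_A_and_B       # addition rule: P(A ∪ B)
print(p_A_and_B, p_A_or_B)             # 6/25 (0.24) and 33/50 (0.66)
```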
A student goes to the library. Let events B = the student checks out a book and D = the student checks out a DVD. Suppose that $P(B) = 0.40$, $P(D) = 0.30$ and $P(D|B) = 0.5$.
1. Find $P(B′)$.
2. Find $P(D \cap B)$.
3. Find $P(B|D)$.
4. Find $P(D \cap B′)$.
5. Find $P(D|B′)$. | textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/03%3A_Probability_Topics/3.00%3A_Introduction_to_Probability.txt |
Contingency Tables
A contingency table provides a way of portraying data that can facilitate calculating probabilities. The table helps in determining conditional probabilities quite easily. The table displays sample values in relation to two different variables that may be dependent or contingent on one another. Later on, we will use contingency tables again, but in another manner.
Example $20$
Suppose a study of speeding violations and drivers who use cell phones produced the following fictional data:
Speeding violation in the last year No speeding violation in the last year Total
Uses cell phone while driving 25 280 305
Does not use cell phone while driving 45 405 450
Total 70 685 755
Table $2$
The total number of people in the sample is 755. The row totals are 305 and 450. The column totals are 70 and 685. Notice that 305 + 450 = 755 and 70 + 685 = 755.
Calculate the following probabilities using the table.
a. Find P(Driver is a cell phone user).
Answer
Solution 3.20
a. $\frac{\text { number of cell phone users }}{\text { total number in study }}=\frac{305}{755}$
b. Find P(Driver had no violation in the last year).
Answer
Solution 3.20
b. $\frac{\text { number that had no violation }}{\text { total number in study }}=\frac{685}{755}$
c. Find P(Driver had no violation in the last year $\cap$ was a cell phone user).
Answer
Solution 3.20
c. $\frac{280}{755}$
d. Find P(Driver is a cell phone user $\cup$ driver had no violation in the last year).
Answer
Solution 3.20
d. $\left(\frac{305}{755}+\frac{685}{755}\right)-\frac{280}{755}=\frac{710}{755}$
e. Find P(Driver is a cell phone user $|$ driver had a violation in the last year).
Answer
Solution 3.20
e. $\frac{25}{70}$ (The sample space is reduced to the number of drivers who had a violation.)
f. Find P(Driver had no violation last year $|$ driver was not a cell phone user)
Answer
Solution 3.20
f. $\frac{405}{450}$ (The sample space is reduced to the number of drivers who were not cell phone users.)
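The same probabilities can be recovered directly from the table's counts. The sketch below is only an illustration (the dictionary layout and names are mine, not from the text):

```python
# Sketch: probabilities from the speeding/cell-phone contingency table above.
from fractions import Fraction

counts = {("cell", "violation"): 25, ("cell", "none"): 280,
          ("no cell", "violation"): 45, ("no cell", "none"): 405}
total = sum(counts.values())                                     # 755

p_cell = Fraction(counts[("cell", "violation")] + counts[("cell", "none")], total)
p_none_and_cell = Fraction(counts[("cell", "none")], total)
violations = counts[("cell", "violation")] + counts[("no cell", "violation")]
p_cell_given_violation = Fraction(counts[("cell", "violation")], violations)

print(p_cell)                  # 305/755, printed in lowest terms as 61/151
print(p_none_and_cell)         # 280/755, printed as 56/151
print(p_cell_given_violation)  # 25/70, printed as 5/14
```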
Exercise $20$
Table $3$ shows the number of athletes who stretch before exercising and how many had injuries within the past year.
Injury in last year No injury in last year Total
Stretches 55 295 350
Does not stretch 231 219 450
Total 286 514 800
Table $3$
1. What is P(athlete stretches before exercising)?
2. What is P(athlete stretches before exercising|no injury in the last year)?
Example $21$
Table $4$ shows a random sample of 100 hikers and the areas of hiking they prefer.
Sex The coastline Near lakes and streams On mountain peaks Total
Female 18 16 ___ 45
Male ___ ___ 14 55
Total ___ 41 ___ ___
Table $4$ Hiking Area Preference
a. Complete the table.
Answer
Solution 3.21
a.
Sex The coastline Near lakes and streams On mountain peaks Total
Female 18 16 11 45
Male 16 25 14 55
Total 34 41 25 100
Table $5$ Hiking Area Preference
b. Are the events "being female" and "preferring the coastline" independent events?
Let F = being female and let C = preferring the coastline.
1. Find $P(F\cap C)$.
2. Find P(F)P(C)
Are these two numbers the same? If they are, then F and C are independent. If they are not, then F and C are not independent.
Answer
Solution 3.21
b.
1. $P(F\cap C)=\frac{18}{100}$ = 0.18
2. P(F)P(C) = $\left(\frac{45}{100}\right)\left(\frac{34}{100}\right)$ = (0.45)(0.34) = 0.153
$P(F\cap C)$ ≠ P(F)P(C), so the events F and C are not independent.
c. Find the probability that a person is male given that the person prefers hiking near lakes and streams. Let M = being male, and let L = prefers hiking near lakes and streams.
1. What word tells you this is a conditional?
2. Fill in the blanks and calculate the probability: P(___|___) = ___.
3. Is the sample space for this problem all 100 hikers? If not, what is it?
Answer
Solution 3.21
c.
1. The word 'given' tells you that this is a conditional.
2. P(M|L) = $\frac{25}{41}$
3. No, the sample space for this problem is the 41 hikers who prefer lakes and streams.
d. Find the probability that a person is female or prefers hiking on mountain peaks. Let F = being female, and let P= prefers mountain peaks.
1. Find P(F).
2. Find P(P).
3. Find $P(F\cap P)$.
4. Find $P(F\cup P)$.
Answer
Solution 3.21
d.
1. P(F) = $\frac{45}{100}$
2. P(P) = $\frac{25}{100}$
3. $P(F\cap P)$= $\frac{11}{100}$
4. $P(F\cup P)$= $\frac{45}{100}+\frac{25}{100}-\frac{11}{100}=\frac{59}{100}$
Exercise $21$
Table $6$ shows a random sample of 200 cyclists and the routes they prefer. Let M = males and H = hilly path.
Gender Lake path Hilly path Wooded path Total
Female 45 38 27 110
Male 26 52 12 90
Total 71 90 39 200
Table $6$
1. Out of the males, what is the probability that the cyclist prefers a hilly path?
2. Are the events “being male” and “preferring the hilly path” independent events?
Example $22$
Muddy Mouse lives in a cage with three doors. If Muddy goes out the first door, the probability that he gets caught by Alissa the cat is $\frac{1}{5}$ and the probability he is not caught is $\frac{4}{5}$. If he goes out the second door, the probability he gets caught by Alissa is $\frac{1}{4}$ and the probability he is not caught is $\frac{3}{4}$. The probability that Alissa catches Muddy coming out of the third door is $\frac{1}{2}$ and the probability she does not catch Muddy is $\frac{1}{2}$. It is equally likely that Muddy will choose any of the three doors, so the probability of choosing each door is $\frac{1}{3}$.
Caught or not Door one Door two Door three Total
Caught $\frac{1}{15}$ $\frac{1}{12}$ $\frac{1}{6}$ ____
Not caught $\frac{4}{15}$ $\frac{3}{12}$ $\frac{1}{6}$ ____
Total ____ ____ ____ 1
Table $7$ Door Choice
• The first entry $\frac{1}{15}=\left(\frac{1}{5}\right)\left(\frac{1}{3}\right)$ is $P(\text{Door One} \cap \text{Caught})$
• The entry $\frac{4}{15}=\left(\frac{4}{5}\right)\left(\frac{1}{3}\right)$ is $P(\text{Door One} \cap \text{Not Caught})$
Verify the remaining entries.
a. Complete the probability contingency table. Calculate the entries for the totals. Verify that the lower-right corner entry is 1.
Answer
Solution 3.22
a.
Caught or not Door one Door two Door three Total
Caught $\frac{1}{15}$ $\frac{1}{12}$ $\frac{1}{6}$ $\frac{19}{60}$
Not caught $\frac{4}{15}$ $\frac{3}{12}$ $\frac{1}{6}$ $\frac{41}{60}$
Total $\frac{5}{15}$ $\frac{4}{12}$ $\frac{2}{6}$ 1
Table $8$ Door Choice
b. What is the probability that Alissa does not catch Muddy?
Answer
Solution 3.22
b. $\frac{41}{60}$
c. What is the probability that Muddy chooses (Door One $\cup$ Door Two) given that Muddy is caught by Alissa?
Answer
Solution 3.22
c. $\frac{9}{19}$
Example $23$
Table $9$ contains the number of crimes per 100,000 inhabitants from 2008 to 2011 in the U.S.
Year Robbery Burglary Rape Vehicle Total
2008 145.7 732.1 29.7 314.7
2009 133.1 717.7 29.1 259.2
2010 119.3 701 27.7 239.1
2011 113.7 702.2 26.8 229.6
Total
Table $9$ United States Crime Index Rates Per 100,000 Inhabitants 2008–2011
TOTAL each column and each row. Total data = 4,520.7
1. Find $P(2009\cap Robbery)$.
2. Find $P(2010\cap Burglary)$.
3. Find $P(2010\cup Burglary)$.
4. Find P(2011|Rape).
5. Find P(Vehicle|2008).
Answer
Solution 3.23
1. 0.0294
2. 0.1551
3. 0.7165
4. 0.2365
5. 0.2575
Exercise $23$
Table $10$ relates the weights and heights of a group of individuals participating in an observational study.
Weight/height Tall Medium Short Totals
Obese 18 28 14
Normal 20 51 28
Underweight 12 25 9
Totals
Table $10$
1. Find the total for each row and column
2. Find the probability that a randomly chosen individual from this group is Tall.
3. Find the probability that a randomly chosen individual from this group is Obese and Tall.
4. Find the probability that a randomly chosen individual from this group is Tall given that the individual is Obese.
5. Find the probability that a randomly chosen individual from this group is Obese given that the individual is Tall.
6. Find the probability a randomly chosen individual from this group is Tall and Underweight.
7. Are the events Obese and Tall independent?
Tree Diagrams
Sometimes, when the probability problems are complex, it can be helpful to graph the situation. Tree diagrams can be used to visualize and solve conditional probabilities.
Tree Diagrams
A tree diagram is a special type of graph used to determine the outcomes of an experiment. It consists of "branches" that are labeled with either frequencies or probabilities. Tree diagrams can make some probability problems easier to visualize and solve. The following example illustrates how to use a tree diagram.
Example $24$
In an urn, there are 11 balls. Three balls are red (R) and eight balls are blue (B). Draw two balls, one at a time, with replacement. "With replacement" means that you put the first ball back in the urn before you select the second ball. The tree diagram using frequencies that show all the possible outcomes follows.
The first set of branches represents the first draw. The second set of branches represents the second draw. Each of the outcomes is distinct. In fact, we can list each red ball as R1, R2, and R3 and each blue ball as B1, B2, B3, B4, B5, B6, B7, and B8. Then the nine RR outcomes can be written as:
R1R1; R1R2; R1R3; R2R1; R2R2; R2R3; R3R1; R3R2; R3R3
The other outcomes are similar.
There are a total of 11 balls in the urn. Draw two balls, one at a time, with replacement. There are 11(11) = 121 outcomes, the size of the sample space.
a. List the 24 BR outcomes: B1R1, B1R2, B1R3, ...
Answer
Solution 3.24
a. B1R1; B1R2; B1R3; B2R1; B2R2; B2R3; B3R1; B3R2; B3R3; B4R1; B4R2; B4R3; B5R1; B5R2; B5R3; B6R1; B6R2; B6R3; B7R1; B7R2; B7R3; B8R1; B8R2; B8R3
b. Using the tree diagram, calculate P(RR).
Answer
Solution 3.24
b. P(RR) = $\left(\frac{3}{11}\right)\left(\frac{3}{11}\right) = \frac{9}{121}$
c. Using the tree diagram, calculate $P(RB \cup BR)$.
Answer
Solution 3.24
c. $P(RB\cup BR)$ = $\left(\frac{3}{11}\right)\left(\frac{8}{11}\right)+\left(\frac{8}{11}\right)\left(\frac{3}{11}\right)=\frac{48}{121}$
d. Using the tree diagram, calculate $P(R \text{ on 1st draw} \cap B \text{ on 2nd draw})$.
Answer
Solution 3.24
d. $P(R \text{ on 1st draw} \cap B \text{ on 2nd draw}) = \left(\frac{3}{11}\right)\left(\frac{8}{11}\right)=\frac{24}{121}$
e. Using the tree diagram, calculate P(R on 2nd draw|B on 1st draw).
Answer
Solution 3.24
e. P(R on 2nd draw|B on 1st draw) = P(R on 2nd|B on 1st) = $\frac{24}{88} = \frac{3}{11}$
This problem is a conditional one. The sample space has been reduced to those outcomes that already have a blue on the first draw. There are 24 + 64 = 88 possible outcomes (24 BR and 64 BB). Twenty-four of the 88 possible outcomes are BR. $\frac{24}{88} = \frac{3}{11}$.
f. Using the tree diagram, calculate P(BB).
Answer
Solution 3.24
f. P(BB) = $\frac{64}{121}$
g. Using the tree diagram, calculate P(B on the 2nd draw|R on the first draw).
Answer
Solution 3.24
g. P(B on 2nd draw|R on 1st draw) = $\frac{8}{11}$
There are 9 + 24 outcomes that have R on the first draw (9 RR and 24 RB). The sample space is then 9 + 24 = 33. 24 of the 33 outcomes have B on the second draw. The probability is then $\frac{24}{33}$.
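As a check on the tree-diagram arithmetic, the sketch below (not part of the original example) simply enumerates all 121 ordered with-replacement draws and counts:

```python
# Sketch: enumerate the with-replacement urn draws and recover the tree probabilities.
from itertools import product
from fractions import Fraction

balls = ["R"] * 3 + ["B"] * 8                  # 3 red, 8 blue
outcomes = list(product(balls, repeat=2))      # 121 ordered pairs, with replacement

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

print(prob(lambda o: o == ("R", "R")))         # 9/121
print(prob(lambda o: o[0] != o[1]))            # P(RB ∪ BR) = 48/121
first_blue = [o for o in outcomes if o[0] == "B"]    # reduced sample space, 88 outcomes
print(Fraction(sum(1 for o in first_blue if o[1] == "R"), len(first_blue)))  # 24/88 = 3/11
```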
Exercise $24$
In a standard deck, there are 52 cards. 12 cards are face cards (event F) and 40 cards are not face cards (event N). Draw two cards, one at a time, with replacement. All possible outcomes are shown in the tree diagram as frequencies. Using the tree diagram, calculate P(FF).
Example $25$
An urn has three red marbles and eight blue marbles in it. Draw two marbles, one at a time, this time without replacement, from the urn. "Without replacement" means that you do not put the first marble back before you select the second marble. Following is a tree diagram for this situation. The branches are labeled with probabilities instead of frequencies. The numbers at the ends of the branches are calculated by multiplying the numbers on the two corresponding branches, for example, $\left(\frac{3}{11}\right)\left(\frac{2}{10}\right)=\frac{6}{110}$.
NOTE
If you draw a red on the first draw from the three red possibilities, there are two red marbles left to draw on the second draw. You do not put back or replace the first marble after you have drawn it. You draw without replacement, so that on the second draw there are ten marbles left in the urn.
Calculate the following probabilities using the tree diagram.
a. P(RR) = ________
Answer
Solution 3.25
a. P(RR) = $\left(\frac{3}{11}\right)\left(\frac{2}{10}\right)=\frac{6}{110}$
b. Fill in the blanks:
$P(RB\cup BR) = \left(\frac{3}{11}\right)\left(\frac{8}{10}\right)$ + (___)(___) = $\frac{48}{110}$
Answer
Solution 3.25
b. $P(RB\cup BR) = \left(\frac{3}{11}\right)\left(\frac{8}{10}\right)+\left(\frac{8}{11}\right)\left(\frac{3}{10}\right)=\frac{48}{110}$
c. P(R on 2nd|B on 1st) =
Answer
Solution 3.25
c. P(R on 2nd|B on 1st) = $\frac{3}{10}$
d. Fill in the blanks.
$P(R \text{ on 1st} \cap B \text{ on 2nd})$ = (___)(___) = $\frac{24}{110}$
Answer
Solution 3.25
d. $P(R \text{ on 1st }\cap B \text{ on 2nd}) = \left(\frac{3}{11}\right)\left(\frac{8}{10}\right)=\frac{24}{110}$
e. Find P(BB).
Answer
Solution 3.25
e. P(BB) = $\left(\frac{8}{11}\right)\left(\frac{7}{10}\right)$
f. Find P(B on 2nd|R on 1st).
Answer
Solution 3.25
f. Using the tree diagram, P(B on 2nd|R on 1st) = P(B|R) = $\frac{8}{10}$.
If we are using probabilities, we can label the tree in the following general way.
• P(R|R) here means P(R on 2nd|R on 1st)
• P(B|R) here means P(B on 2nd|R on 1st)
• P(R|B) here means P(R on 2nd|B on 1st)
• P(B|B) here means P(B on 2nd|B on 1st)
Exercise $25$
In a standard deck, there are 52 cards. Twelve cards are face cards (F) and 40 cards are not face cards (N). Draw two cards, one at a time, without replacement. The tree diagram is labeled with all possible probabilities.
1. Find $P(FN\cup NF)$.
2. Find P(N|F).
3. Find P(at most one face card).
Hint: "At most one face card" means zero or one face card.
4. Find P(at least one face card).
Hint: "At least one face card" means one or two face cards.
Example $26$
A litter of kittens available for adoption at the Humane Society has four tabby kittens and five black kittens. A family comes in and randomly selects two kittens (without replacement) for adoption.
1. What is the probability that both kittens are tabby?
a. $\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)$ b. $\left(\frac{4}{9}\right)\left(\frac{4}{9}\right)$ c. $\left(\frac{4}{9}\right)\left(\frac{3}{8}\right)$ d. $\left(\frac{4}{9}\right)\left(\frac{5}{9}\right)$
2. What is the probability that one kitten of each coloring is selected?
a.$\left(\frac{4}{9}\right)\left(\frac{5}{9}\right)$ b.$\left(\frac{4}{9}\right)\left(\frac{5}{8}\right)$ c.$\left(\frac{4}{9}\right)\left(\frac{5}{9}\right)+\left(\frac{5}{9}\right)\left(\frac{4}{9}\right)$ d.$\left(\frac{4}{9}\right)\left(\frac{5}{8}\right)+\left(\frac{5}{9}\right)\left(\frac{4}{8}\right)$
3. What is the probability that a tabby is chosen as the second kitten when a black kitten was chosen as the first?
4. What is the probability of choosing two kittens of the same color?
Answer
Solution 3.26
a. c, b. d, c. $\frac{4}{8}$, d. $\frac{32}{72}$
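These answers can be verified by brute force. The sketch below (my own illustration, not part of the example) enumerates every ordered pair of kittens drawn without replacement:

```python
# Sketch: enumerate ordered without-replacement draws of two kittens and count.
from itertools import permutations
from fractions import Fraction

kittens = ["T"] * 4 + ["B"] * 5               # 4 tabby, 5 black
draws = list(permutations(kittens, 2))        # 9 * 8 = 72 ordered pairs

def prob(event):
    return Fraction(sum(1 for d in draws if event(d)), len(draws))

print(prob(lambda d: d == ("T", "T")))        # 12/72, printed as 1/6 = (4/9)(3/8)
print(prob(lambda d: d[0] != d[1]))           # one of each color: 40/72, printed as 5/9
print(prob(lambda d: d[0] == d[1]))           # same color: 32/72, printed as 4/9
first_black = [d for d in draws if d[0] == "B"]
print(Fraction(sum(1 for d in first_black if d[1] == "T"), len(first_black)))  # 4/8 = 1/2
```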
Exercise $26$
Suppose there are four red balls and three yellow balls in a box. Two balls are drawn from the box without replacement. What is the probability that one ball of each coloring is selected? | textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/03%3A_Probability_Topics/3.04%3A_Contingency_Tables_and_Probability_Trees.txt |
A Venn diagram is a picture that represents the outcomes of an experiment. It generally consists of a box that represents the sample space S together with circles or ovals. The circles or ovals represent events. Venn diagrams also help us to convert common English words into mathematical terms that help add precision.
Venn diagrams are named for their inventor, John Venn, a mathematics professor at Cambridge and an Anglican minister. His main work was conducted during the late 1870's and gave rise to a whole branch of mathematics and a new way to approach issues of logic. We will develop the probability rules just covered using this powerful way to demonstrate the probability postulates including the Addition Rule, Multiplication Rule, Complement Rule, Independence, and Conditional Probability.
Example 3.27
Suppose an experiment has the outcomes 1, 2, 3, ... , 12 where each outcome has an equal chance of occurring. Let event $A = \{1, 2, 3, 4, 5, 6\}$ and event $B = \{6, 7, 8, 9\}$. Then $A$ intersect $B = A \cap B=\{6\}$ and $A$ union $B = A\cup B=\{1, 2, 3, 4, 5, 6, 7, 8, 9\}$. The Venn diagram is as follows:
Figure 3.6 shows the most basic relationship among these numbers. First, the numbers are in groups called sets: set A and set B. Some numbers are in both sets; we say they are in set A $\cap$ set B. The English word "and" means inclusive, meaning having the characteristics of both A and B, or in this case, being a part of both A and B. This condition is called the INTERSECTION of the two sets. All members that are part of both sets constitute the intersection of the two sets. The intersection is written as $A\cap B$, where $\cap$ is the mathematical symbol for intersection. The statement $A\cap B$ is read as "A intersect B." You can remember this by thinking of the intersection of two streets.
There are also those numbers that form a group in which, for membership, the number must be in either one group or the other. The number does not have to be in BOTH groups, but only in at least one of the two. These numbers are called the UNION of the two sets, and in this case they are the numbers 1-5 (from A exclusively), 7-9 (from set B exclusively), and also 6, which is in both sets A and B. The symbol for the UNION is $\cup$; thus $A\cup B=$ the numbers 1-9, which excludes the numbers 10, 11, and 12. The values 10, 11, and 12 are part of the universe, but are not in either of the two sets.
Translating the English word "AND" into the mathematical symbol $\cap$, intersection, and the word "OR" into the mathematical symbol $\cup$, union, provides a very precise way to discuss the issues of probability and logic. The general terminology for the three areas of the Venn diagram in Figure 3.6 is shown in Figure 3.7.
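Python's built-in sets mirror this language almost exactly. The sketch below (not from the text) reproduces Example 3.27:

```python
# Sketch: set operations for the Venn diagram of Example 3.27.
from fractions import Fraction

S = set(range(1, 13))            # sample space: outcomes 1 through 12
A = {1, 2, 3, 4, 5, 6}
B = {6, 7, 8, 9}

print(A & B)                          # intersection A ∩ B = {6}
print(A | B)                          # union A ∪ B = {1, ..., 9}
print(S - (A | B))                    # in neither set = {10, 11, 12}
print(Fraction(len(A | B), len(S)))   # P(A ∪ B) = 9/12 = 3/4
```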
Exercise 3.27
Suppose an experiment has outcomes black, white, red, orange, yellow, green, blue, and purple, where each outcome has an equal chance of occurring. Let event C = {green, blue, purple} and event P = {red, yellow, blue}. Then $C\cap P=\{blue\}$ and $C \cup P=\{\text { green, blue, purple, red, yellow }\}$. Draw a Venn diagram representing this situation.
Example 3.28
Flip two fair coins. Let A = tails on the first coin. Let B = tails on the second coin. Then A = {TT, TH} and B = {TT, HT}. Therefore, $A\cap B=\{TT\}$. $A\cup B=\{TH, TT, HT\}$.
The sample space when you flip two fair coins is X = {HH, HT, TH, TT}. The outcome HH is in NEITHER A NOR B. The Venn diagram is as follows:
Exercise 3.28
Roll a fair, six-sided die. Let A = a prime number of dots is rolled. Let B = an odd number of dots is rolled. Then A= {2, 3, 5} and B = {1, 3, 5}. Therefore, $A\cap B=\{3, 5\}$. $A\cup B=\{1, 2, 3, 5\}$. The sample space for rolling a fair die is S = {1, 2, 3, 4, 5, 6}. Draw a Venn diagram representing this situation.
Example 3.29
A person with type O blood and a negative Rh factor (Rh-) can donate blood to any person with any blood type. Four percent of African Americans have type O blood and a negative RH factor, 5−10% of African Americans have the Rh- factor, and 51% have type O blood.
The “O” circle represents the African Americans with type O blood. The “Rh-“ oval represents the African Americans with the Rh- factor.
We will take the average of 5% and 10% and use 7.5% as the percent of African Americans who have the Rh- factor. Let O = African American with Type O blood and R = African American with Rh- factor.
1. P(O) = ___________
2. P(R) = ___________
3. $P(O\cap R)=$ ___________
4. $P(O\cup R)=$ ____________
5. In the Venn Diagram, describe the overlapping area using a complete sentence.
6. In the Venn Diagram, describe the area in the rectangle but outside both the circle and the oval using a complete sentence.
Answer
Solution 3.29
a. 0.51; b. 0.075; c. 0.04; d. 0.545; e. The area represents the African Americans that have type O blood and the Rh- factor. f. The area represents the African Americans that have neither type O blood nor the Rh- factor.
Example 3.30
Fifty percent of the workers at a factory work a second job, 25% have a spouse who also works, 5% work a second job and have a spouse who also works. Draw a Venn diagram showing the relationships. Let W = works a second job and S = spouse also works.
Answer
Forty percent of the students at a local college belong to a club and 50% work part time. Five percent of the students work part time and belong to a club. Draw a Venn diagram showing the relationships. Let C = student belongs to a club and PT = student works part time.
If a student is selected at random, find
• the probability that the student belongs to a club. P(C) = 0.40
• the probability that the student works part time. P(PT) = 0.50
• the probability that the student belongs to a club AND works part time. $P(C\cap PT)=0.05$
• the probability that the student belongs to a club given that the student works part time. $P(C | P T)=\frac{P(C \cap P T)}{P(P T)}=\frac{0.05}{0.50}=0.1$
• the probability that the student belongs to a club OR works part time. $P(C \cup P T)=P(C)+P(P T)-P(C \cap P T)=0.40+0.50-0.05=0.85$
In order to solve Example 3.30 we had to draw upon the concept of conditional probability from the previous section. There we used tree diagrams to track the changes in the probabilities, because the sample space changed as we drew without replacement. In short, conditional probability is the chance that something will happen given that some other event has already happened. Put another way, it is the probability that something will happen conditioned on the situation that something else is also true. In Example 3.30 the probability P(C|PT) is the conditional probability that the randomly drawn student is a member of the club, conditioned upon the fact that the student is also working part time. This allows us to see the relationship between Venn diagrams and the probability postulates.
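A quick check of the club/part-time numbers, using the conditional rule and the addition rule directly (the variable names are mine; this is only a sketch):

```python
# Sketch: verify the club / part-time probabilities with exact fractions.
from fractions import Fraction

p_C = Fraction(40, 100)        # P(belongs to a club)
p_PT = Fraction(50, 100)       # P(works part time)
p_C_and_PT = Fraction(5, 100)  # P(club and part time)

print(p_C_and_PT / p_PT)             # P(C|PT) = 1/10 = 0.1
print(p_C + p_PT - p_C_and_PT)       # P(C ∪ PT) = 17/20 = 0.85
```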
Exercise 3.30
In a bookstore, the probability that the customer buys a novel is 0.6, and the probability that the customer buys a non-fiction book is 0.4. Suppose that the probability that the customer buys both is 0.2.
1. Draw a Venn diagram representing the situation.
2. Find the probability that the customer buys either a novel or a non-fiction book.
3. In the Venn diagram, describe the overlapping area using a complete sentence.
4. Suppose that some customers buy only compact disks. Draw an oval in your Venn diagram representing this event.
Example 3.31
A set of 20 German Shepherd dogs is observed. 12 are male, 8 are female, 10 have some brown coloring, and 5 have some white sections of fur. Answer the following using Venn Diagrams.
Draw a Venn diagram simply showing the sets of male and female dogs.
Answer
Solution 3.31
The Venn diagram below demonstrates the situation of mutually exclusive events. Because a dog cannot be both male and female, there is no intersection. Being male precludes being female and being female precludes being male: the two categories of the characteristic gender are therefore mutually exclusive. A Venn diagram shows this as two sets with no intersection. The intersection is the null set, written with the mathematical symbol ∅.
Draw a second Venn diagram illustrating that 10 of the male dogs have brown coloring.
Answer
Solution 3.31
The Venn diagram below shows the overlap between Male and Brown, where the number 10 is placed. This represents $\text{Male} \cap \text{Brown}$: both male and brown. This is the intersection of these two characteristics. To get the union of Male and Brown, we take the two circled areas and subtract the overlap once. In proper terms, $\text{Male} \cup \text{Brown} = \text{Male} + \text{Brown} - \text{Male} \cap \text{Brown}$ gives us the number of dogs in the union of these two sets. If we did not subtract the intersection, we would have double counted some of the dogs.
Now draw a situation depicting a scenario in which the non-shaded region represents "no white fur and female," or $\text{White fur}' \cap \text{Female}$. The prime above "White fur" indicates "not white fur." The prime above a set means not in that set, e.g. $\mathrm{A}^{\prime}$ means not $\mathrm{A}$. Sometimes the notation used is a line above the letter; for example, $\overline{A}=\mathrm{A}^{\prime}$.
Answer
Solution 3.31
The Addition Rule of Probability
We met the addition rule earlier but without the help of Venn diagrams. Venn diagrams help visualize the counting process that is inherent in the calculation of probability. To restate the Addition Rule of Probability:
$P(A \cup B)=P(\mathrm{A})+P(B)-P(A \cap B)\nonumber$
Remember that probability is simply the proportion of the objects we are interested in relative to the total number of objects. This is why we can see the usefulness of the Venn diagrams. Example 3.31 shows how we can use Venn diagrams to count the number of dogs in the union of brown and male by reminding us to subtract the intersection of brown and male. We can see the effect of this directly on probabilities in the addition rule.
Example 3.32
Let's sample 50 students who are in a statistics class. 20 are freshmen and 30 are sophomores. 15 students get a "B" in the course, and 5 students both get a "B" and are freshmen.
Find the probability of selecting a student who either earns a "B" OR is a freshman. We are translating the word OR to the mathematical symbol for the addition rule, which is the union of the two sets.
Answer
Solution 3.32
We know that there are 50 students in our sample, so we know the denominator of our fraction to give us probability. We need only to find the number of students that meet the characteristics we are interested in, i.e. any freshman and any student who earned a grade of "B." With the Addition Rule of probability, we can skip directly to probabilities.
Let "A" = the number of freshmen, and let "B" = the grade of "B." Below we can see the process for using Venn diagrams to solve this.
The $P(A)=\frac{20}{50}=0.40, P(B)=\frac{15}{50}=0.30, \text { and } P(A \cap B)=\frac{5}{50}=0.10$
Therefore, $P(A \cup B)=0.40+0.30-0.10=0.60$
If two events are mutually exclusive, then, like the example where we diagram the male and female dogs, the addition rule is simplified to just $P(A\cup B)=P(A)+P(B)−0$. This is true because, as we saw earlier, the intersection of mutually exclusive events is the null set, ∅, so $P(A\cap B)=0$. The diagrams below demonstrate this.
The Multiplication Rule of Probability
Restating the Multiplication Rule of Probability using the notation of Venn diagrams, we have:
$P(A\cap B)=P(A|B)⋅P(B)\nonumber$
The multiplication rule can be modified with a bit of algebra into the following conditional rule. Venn diagrams can then be used to demonstrate the process.
The conditional rule: $P(A | B)=\frac{P(A \cap B)}{P(B)}$
Using the same facts from Example 3.32 above, find the probability that a student is a freshman, given that the student earned a "B."
$P(A | B)=\frac{0.10}{0.30}=\frac{1}{3}\nonumber$
The multiplication rule must also be altered if the two events are independent. Independent events are defined as a situation where the conditional probability is simply the probability of the event of interest. Formally, independence of events is defined as $P(A|B)=P(A)$ or $P(B|A)=P(B)$. When flipping coins, the outcome of the second flip is independent of the outcome of the first flip; coins do not have memory. The Multiplication Rule of Probability for independent events thus becomes:
$P(A\cap B)=P(A)⋅P(B)\nonumber$
One easy way to remember this is to consider what we mean by the word "and." We see that the Multiplication Rule has translated the word "and" to the Venn notation for intersection. Therefore, the outcome must meet both conditions of being a freshman and earning a grade of "B" in the above example. It is harder, and thus less probable, to meet two conditions than to meet just one. We can see the logic of the Multiplication Rule in the fact that fractions multiplied by each other become smaller.
The development of the Rules of Probability with the use of Venn diagrams also helps as we calculate probabilities from data arranged in a contingency table.
Example 3.33
Table 3.11 is from a sample of 200 people who were asked how much education they completed. The columns represent the highest education they completed, and the rows separate the individuals by male and female.
Less than high school grad High school grad Some college College grad Total
Male 5 15 40 60 120
Female 8 12 30 30 80
Total 13 27 70 90 200
Table 3.11
Now, we can use this table to answer probability questions. The following examples are designed to help understand the format above while connecting the knowledge to both Venn diagrams and the probability rules.
What is the probability that a selected person both finished college and is female?
Answer
Solution 3.33
This is a simple task of finding the value where the two characteristics intersect on the table, and then applying the postulate of probability, which states that the probability of an event is the number of outcomes that match the event of interest divided by the total number of possible outcomes.
$P(\text {College Grad } \cap \text { Female })=\frac{30}{200}=0.15$
What is the probability of selecting either a female or someone who finished college?
Answer
Solution 3.33
This task involves the use of the addition rule to solve for this probability.
$P(\text { College Grad } \cup \text{ Female })=P(F)+P(C G)-P(F \cap C G)$
$P(\text { College Grad } \cup \text{ Female }) =\frac{80}{200}+\frac{90}{200}-\frac{30}{200}=\frac{140}{200}=0.70$
What is the probability of selecting a high school graduate if we only select from the group of males?
Answer
Solution 3.33
Here we must use the conditional probability rule (the modified multiplication rule) to solve for this probability.
$P (\text{HS Grad } | \text { Male })=\frac{P(\mathrm{HS} \text { Grad } \cap \mathrm{Male})}{\mathrm{P}(\mathrm{Male})}=\frac{\left(\frac{15}{200}\right)}{\left(\frac{120}{200}\right)}=\frac{15}{120}=0.125$
Can we conclude that the level of education attained by these 200 people is independent of the gender of the person?
Answer
Solution 3.33
There are two ways to approach this test. The first method tests whether the probability of the intersection of two events equals the product of the probabilities of the events separately, remembering that if two events are independent then $P(A) \cdot P(B)=P(A \cap B)$. For simplicity's sake, we can use calculated values from above.
Does $P(\text { College Grad } \cap \text { Female })=P(C G) \cdot P(F)$?
$\frac{30}{200} \neq \frac{90}{200} \cdot \frac{80}{200}$ because 0.15 ≠ 0.18.
Therefore, gender and education here are not independent.
The second method is to test if the conditional probability of A given B is equal to the probability of A. Again for simplicity, we can use an already calculated value from above.
Does $P(\text{HS Grad} | \text{Male})=P(\text{HS Grad})$?
$\frac{15}{120} \neq \frac{27}{200}$ because 0.125 ≠ 0.135.
Therefore, again gender and education here are not independent.
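The two independence checks can also be run directly on the table's counts. The dictionary layout below is an assumption I made for this sketch; it is not code from the text:

```python
# Sketch: independence checks for the education-by-gender table in Example 3.33.
from fractions import Fraction

table = {"Male":   {"<HS": 5, "HS": 15, "Some college": 40, "College grad": 60},
         "Female": {"<HS": 8, "HS": 12, "Some college": 30, "College grad": 30}}
total = sum(sum(row.values()) for row in table.values())          # 200

p_F = Fraction(sum(table["Female"].values()), total)              # 80/200
p_CG = Fraction(sum(row["College grad"] for row in table.values()), total)  # 90/200
p_F_and_CG = Fraction(table["Female"]["College grad"], total)     # 30/200

print(p_F_and_CG == p_F * p_CG)   # False: 0.15 != 0.18, so not independent
print(p_F_and_CG / p_F)           # P(College grad | Female) = 3/8
```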
3.06: Chapter Formula Review
3.1 Terminology
A and B are events
$P(S) = 1$ where $S$ is the sample space
$0 ≤ P(A) ≤ 1$
$P(A | B)=\frac{P(A \cap B)}{P(B)}$
3.2 Independent and Mutually Exclusive Events
$\text {If } A \text { and } B \text { are independent, } P(A \cap B)=P(A) P(B), P(A | B)=P(A) \text { and } P(B | A)=P(B)$
$\text {If } A \text { and } B \text { are mutually exclusive, } P(A \cup B)=P(A)+P(B) \text { and } P(A \cap B)=0$
3.3 Two Basic Rules of Probability
The multiplication rule: $P(A \cap B) = P(A|B)P(B)$
The addition rule: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$ | textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/03%3A_Probability_Topics/3.05%3A_Venn_Diagrams.txt |
3.1 Terminology
72.
The graph in Figure $17$ displays the sample sizes and percentages of people in different age and gender groups who were polled concerning their approval of Mayor Ford’s actions in office. The total number in the sample of all the age groups is 1,045.
1. Define three events in the graph.
2. Describe in words what the entry 40 means.
3. Describe in words the complement of the entry in question 2.
4. Describe in words what the entry 30 means.
5. Out of the males and females, what percent are males?
6. Out of the females, what percent disapprove of Mayor Ford?
7. Out of all the age groups, what percent approve of Mayor Ford?
8. Find P(Approve|Male).
9. Out of the age groups, what percent are more than 44 years old?
10. Find P(Approve|Age < 35).
73.
Explain what is wrong with the following statements. Use complete sentences.
1. If there is a 60% chance of rain on Saturday and a 70% chance of rain on Sunday, then there is a 130% chance of rain over the weekend.
2. The probability that a baseball player hits a home run is greater than the probability that he gets a successful hit.
3.2 Independent and Mutually Exclusive Events
Use the following information to answer the next 12 exercises. The graph shown is based on more than 170,000 interviews done by Gallup that took place from January through December 2012. The sample consists of employed Americans 18 years of age or older. The Emotional Health Index Scores are the sample space. We randomly sample one Emotional Health Index Score.
74.
Find the probability that an Emotional Health Index Score is 82.7.
75.
Find the probability that an Emotional Health Index Score is 81.0.
76.
Find the probability that an Emotional Health Index Score is more than 81.
77.
Find the probability that an Emotional Health Index Score is between 80.5 and 82.
78.
If we know an Emotional Health Index Score is 81.5 or more, what is the probability that it is 82.7?
79.
What is the probability that an Emotional Health Index Score is 80.7 or 82.7?
80.
What is the probability that an Emotional Health Index Score is less than 80.2 given that it is already less than 81?
81.
What occupation has the highest emotional index score?
82.
What occupation has the lowest emotional index score?
83.
What is the range of the data?
84.
Compute the average EHIS.
85.
If all occupations are equally likely for a certain individual, what is the probability that he or she will have an occupation with lower than average EHIS?
3.3 Two Basic Rules of Probability
86.
On February 28, 2013, a Field Poll Survey reported that 61% of California registered voters approved of allowing two people of the same gender to marry and have regular marriage laws apply to them. Among 18 to 39 year olds (California registered voters), the approval rating was 78%. Six in ten California registered voters said that the upcoming Supreme Court’s ruling about the constitutionality of California’s Proposition 8 was either very or somewhat important to them. Out of those CA registered voters who support same-sex marriage, 75% say the ruling is important to them.
In this problem, let:
• C = California registered voters who support same-sex marriage.
• B = California registered voters who say the Supreme Court’s ruling about the constitutionality of California’s Proposition 8 is very or somewhat important to them
• A = California registered voters who are 18 to 39 years old.
1. Find $P(C)$.
2. Find $P(B)$.
3. Find $P(C|A)$.
4. Find $P(B|C)$.
5. In words, what is $C|A$?
6. In words, what is $B|C$?
7. Find $P(C \cap B)$.
8. In words, what is $C \cap B$?
9. Find $P(C \cup B)$.
10. Are C and B mutually exclusive events? Show why or why not.
87.
After Rob Ford, the mayor of Toronto, announced his plans to cut budget costs in late 2011, the Forum Research polled 1,046 people to measure the mayor’s popularity. Everyone polled expressed either approval or disapproval. These are the results their poll produced:
• In early 2011, 60 percent of the population approved of Mayor Ford’s actions in office.
• In mid-2011, 57 percent of the population approved of his actions.
• In late 2011, the percentage of popular approval was measured at 42 percent.
1. What is the sample size for this study?
2. What proportion in the poll disapproved of Mayor Ford, according to the results from late 2011?
3. How many people polled responded that they approved of Mayor Ford in late 2011?
4. What is the probability that a person supported Mayor Ford, based on the data collected in mid-2011?
5. What is the probability that a person supported Mayor Ford, based on the data collected in early 2011?
Use the following information to answer the next three exercises. The casino game, roulette, allows the gambler to bet on the probability of a ball, which spins in the roulette wheel, landing on a particular color, number, or range of numbers. The table used to place bets contains 38 numbers, and each number is assigned to a color and a range.
88.
1. List the sample space of the 38 possible outcomes in roulette.
2. You bet on red. Find P(red).
3. You bet on the first dozen (1st Dozen). Find P(1st Dozen).
4. You bet on an even number. Find P(even number).
5. Is getting an odd number the complement of getting an even number? Why?
6. Find two mutually exclusive events.
7. Are the events Even and 1st Dozen independent?
89.
Compute the probability of winning the following types of bets:
1. Betting on two lines that touch each other on the table as in 1-2-3-4-5-6
2. Betting on three numbers in a line, as in 1-2-3
3. Betting on one number
4. Betting on four numbers that touch each other to form a square, as in 10-11-13-14
5. Betting on two numbers that touch each other on the table, as in 10-11 or 10-13
6. Betting on 0-00-1-2-3
7. Betting on 0-1-2; or 0-00-2; or 00-2-3
90.
Compute the probability of winning the following types of bets:
1. Betting on a color
2. Betting on one of the dozen groups
3. Betting on the range of numbers from 1 to 18
4. Betting on the range of numbers 19–36
5. Betting on one of the columns
6. Betting on an even or odd number (excluding zero)
91.
Suppose that you have eight cards. Five are green and three are yellow. The five green cards are numbered 1, 2, 3, 4, and 5. The three yellow cards are numbered 1, 2, and 3. The cards are well shuffled. You randomly draw one card.
• G = card drawn is green
• E = card drawn is even-numbered
1. List the sample space.
2. $P(G) =$ _____
3. $P(G|E) =$ _____
4. $P(G \cap E) =$ _____
5. $P(G \cup E) =$ _____
6. Are G and E mutually exclusive? Justify your answer numerically.
92.
Roll two fair dice separately. Each die has six faces.
1. List the sample space.
2. Let A be the event that either a three or four is rolled first, followed by an even number. Find $P(A)$.
3. Let B be the event that the sum of the two rolls is at most seven. Find $P(B)$.
4. In words, explain what “$P(A|B)$” represents. Find $P(A|B)$.
5. Are A and B mutually exclusive events? Explain your answer in one to three complete sentences, including numerical justification.
6. Are A and B independent events? Explain your answer in one to three complete sentences, including numerical justification.
93.
A special deck of cards has ten cards. Four are green, three are blue, and three are red. When a card is picked, its color of it is recorded. An experiment consists of first picking a card and then tossing a coin.
1. List the sample space.
2. Let A be the event that a blue card is picked first, followed by landing a head on the coin toss. Find P(A).
3. Let B be the event that a red or green is picked, followed by landing a head on the coin toss. Are the events A and B mutually exclusive? Explain your answer in one to three complete sentences, including numerical justification.
4. Let C be the event that a red or blue is picked, followed by landing a head on the coin toss. Are the events A and C mutually exclusive? Explain your answer in one to three complete sentences, including numerical justification.
94.
An experiment consists of first rolling a die and then tossing a coin.
1. List the sample space.
2. Let A be the event that either a three or a four is rolled first, followed by landing a head on the coin toss. Find P(A).
3. Let B be the event that the first and second tosses land on heads. Are the events A and B mutually exclusive? Explain your answer in one to three complete sentences, including numerical justification.
95.
An experiment consists of tossing a nickel, a dime, and a quarter. Of interest is the side the coin lands on.
1. List the sample space.
2. Let A be the event that there are at least two tails. Find P(A).
3. Let B be the event that the first and second tosses land on heads. Are the events A and B mutually exclusive? Explain your answer in one to three complete sentences, including justification.
96.
Consider the following scenario:
Let $P(C) = 0.4$.
Let $P(D) = 0.5$.
Let $P(C|D) = 0.6$.
1. Find $P(C \cap D)$.
2. Are C and D mutually exclusive? Why or why not?
3. Are C and D independent events? Why or why not?
4. Find $P(C \cup D)$.
5. Find $P(D|C)$.
97.
Y and Z are independent events.
1. Rewrite the basic Addition Rule $P(Y \cup Z) = P(Y) + P(Z) - P(Y \cap Z)$ using the information that Y and Z are independent events.
2. Use the rewritten rule to find $P(Z)$ if $P(Y \cup Z) = 0.71$ and $P(Y) = 0.42$.
98.
G and H are mutually exclusive events. $P(G) = 0.5$; $P(H) = 0.3$
1. Explain why the following statement MUST be false: $P(H|G) = 0.4$.
2. Find $P(H \cup G)$.
3. Are G and H independent or dependent events? Explain in a complete sentence.
99.
Approximately 281,000,000 people over age five live in the United States. Of these people, 55,000,000 speak a language other than English at home. Of those who speak another language at home, 62.3% speak Spanish.
Let: E = speaks English at home; E′ = speaks another language at home; S = speaks Spanish;
Finish each probability statement by matching the correct answer.
Probability Statements Answers
a. $P(E′) =$ i. 0.8043
b. $P(E) =$ ii. 0.623
c. $P(S \cap E′) =$ iii. 0.1957
d. $P(S|E′) =$ iv. 0.1219
Table $14$
100.
In 1994, the U.S. government held a lottery to issue 55,000 Green Cards (permits for non-citizens to work legally in the U.S.). Renate Deutsch, from Germany, was one of approximately 6.5 million people who entered this lottery. Let G = won green card.
1. What was Renate’s chance of winning a Green Card? Write your answer as a probability statement.
2. In the summer of 1994, Renate received a letter stating she was one of 110,000 finalists chosen. Once the finalists were chosen, assuming that each finalist had an equal chance to win, what was Renate’s chance of winning a Green Card? Write your answer as a conditional probability statement. Let F = was a finalist.
3. Are G and F independent or dependent events? Justify your answer numerically and also explain why.
4. Are G and F mutually exclusive events? Justify your answer numerically and explain why.
101.
Three professors at George Washington University did an experiment to determine if economists are more selfish than other people. They dropped 64 stamped, addressed envelopes with \$10 cash in different classrooms on the George Washington campus. 44% were returned overall. From the economics classes 56% of the envelopes were returned. From the business, psychology, and history classes 31% were returned.
Let: R = money returned; E = economics classes; O = other classes
1. Write a probability statement for the overall percent of money returned.
2. Write a probability statement for the percent of money returned out of the economics classes.
3. Write a probability statement for the percent of money returned out of the other classes.
4. Is money being returned independent of the class? Justify your answer numerically and explain it.
5. Based upon this study, do you think that economists are more selfish than other people? Explain why or why not. Include numbers to justify your answer.
102.
The following table of data obtained from www.baseball-almanac.com shows hit information for four players. Suppose that one hit from the table is randomly selected.
Name Single Double Triple Home run Total hits
Babe Ruth 1,517 506 136 714 2,873
Jackie Robinson 1,054 273 54 137 1,518
Ty Cobb 3,603 174 295 114 4,189
Hank Aaron 2,294 624 98 755 3,771
Total 8,471 1,577 583 1,720 12,351
Table $15$
Are "the hit being made by Hank Aaron" and "the hit being a double" independent events?
1. Yes, because P(hit by Hank Aaron|hit is a double) = P(hit by Hank Aaron)
2. No, because P(hit by Hank Aaron|hit is a double) ≠ P(hit is a double)
3. No, because P(hit is by Hank Aaron|hit is a double) ≠ P(hit by Hank Aaron)
4. Yes, because P(hit is by Hank Aaron|hit is a double) = P(hit is a double)
103.
United Blood Services is a blood bank that serves more than 500 hospitals in 18 states. According to their website, a person with type O blood and a negative Rh factor (Rh-) can donate blood to any person with any blood type. Their data show that 43% of people have type O blood and 15% of people have Rh- factor; 52% of people have type O or Rh- factor.
1. Find the probability that a person has both type O blood and the Rh- factor.
2. Find the probability that a person does NOT have both type O blood and the Rh- factor.
104.
At a college, 72% of courses have final exams and 46% of courses require research papers. Suppose that 32% of courses have a research paper and a final exam. Let F be the event that a course has a final exam. Let R be the event that a course requires a research paper.
1. Find the probability that a course has a final exam or a research project.
2. Find the probability that a course has NEITHER of these two requirements.
105.
In a box of assorted cookies, 36% contain chocolate and 12% contain nuts. Of those, 8% contain both chocolate and nuts. Sean is allergic to both chocolate and nuts.
1. Find the probability that a cookie contains chocolate or nuts (he can't eat it).
2. Find the probability that a cookie does not contain chocolate or nuts (he can eat it).
106.
A college finds that 10% of students have taken a distance learning class and that 40% of students are part time students. Of the part time students, 20% have taken a distance learning class. Let D = event that a student takes a distance learning class and E = event that a student is a part time student.
1. Find $P(D \cap E)$.
2. Find $P(E|D)$.
3. Find $P(D \cup E)$.
4. Using an appropriate test, show whether D and E are independent.
5. Using an appropriate test, show whether D and E are mutually exclusive.
3.5 Venn Diagrams
Use the information in the Table $16$ to answer the next eight exercises. The table shows the political party affiliation of each of 67 members of the US Senate in June 2012, and when they are up for reelection.
Up for reelection: Democratic party Republican party Other Total
November 2014 20 13 0
November 2016 10 24 0
Total
Table $16$
107.
What is the probability that a randomly selected senator has an “Other” affiliation?
108.
What is the probability that a randomly selected senator is up for reelection in November 2016?
109.
What is the probability that a randomly selected senator is a Democrat and up for reelection in November 2016?
110.
What is the probability that a randomly selected senator is a Republican or is up for reelection in November 2014?
111.
Suppose that a member of the US Senate is randomly selected. Given that the randomly selected senator is up for reelection in November 2016, what is the probability that this senator is a Democrat?
112.
Suppose that a member of the US Senate is randomly selected. What is the probability that the senator is up for reelection in November 2014, knowing that this senator is a Republican?
113.
The events “Republican” and “Up for reelection in 2016” are ________
1. mutually exclusive.
2. independent.
3. both mutually exclusive and independent.
4. neither mutually exclusive nor independent.
114.
The events “Other” and “Up for reelection in November 2016” are ________
1. mutually exclusive.
2. independent.
3. both mutually exclusive and independent.
4. neither mutually exclusive nor independent.
115.
Table $17$ gives the number of participants in the recent National Health Interview Survey who had been treated for cancer in the previous 12 months. The results are sorted by age, race (black or white), and sex. We are interested in possible relationships between age, race, and sex. We will let the people treated for cancer be our population.
Race and sex 15–24 25–40 41–65 Over 65 TOTALS
White, male 1,165 2,036 3,703 8,395
White, female 1,076 2,242 4,060 9,129
Black, male 142 194 384 824
Black, female 131 290 486 1,061
All others
TOTALS 2,792 5,279 9,354 21,081
Table $17$
Do not include "all others" for parts f and g.
1. Fill in the column for cancer treatment for individuals over age 65.
2. Fill in the row for all other races.
3. Find the probability that a randomly selected individual was a white male.
4. Find the probability that a randomly selected individual was a black female.
5. Find the probability that a randomly selected individual was black
6. Find the probability that a randomly selected individual was male.
7. Out of the individuals over age 65, find the probability that a randomly selected individual was a black or white male.
Use the following information to answer the next two exercises. The table of data obtained from www.baseball-almanac.com shows hit information for four well known baseball players. Suppose that one hit from the table is randomly selected.
Name Single Double Triple Home run TOTAL HITS
Babe Ruth 1,517 506 136 714 2,873
Jackie Robinson 1,054 273 54 137 1,518
Ty Cobb 3,603 174 295 114 4,189
Hank Aaron 2,294 624 98 755 3,771
TOTAL 8,471 1,577 583 1,720 12,351
Table $18$
116.
Find P(hit was made by Babe Ruth).
1. $\frac{1518}{2873}$
2. $\frac{2873}{12351}$
3. $\frac{583}{12351}$
4. $\frac{4189}{12351}$
117.
Find P(hit was made by Ty Cobb|The hit was a Home Run).
1. $\frac{4189}{12351}$
2. $\frac{114}{1720}$
3. $\frac{1720}{4189}$
4. $\frac{114}{12351}$
118.
Table $19$ identifies a group of children by one of four hair colors, and by type of hair.
Hair type Brown Blond Black Red Totals
Wavy 20 15 3 43
Straight 80 15 12
Totals 20 215
Table $19$
1. Complete the table.
2. What is the probability that a randomly selected child will have wavy hair?
3. What is the probability that a randomly selected child will have either brown or blond hair?
4. What is the probability that a randomly selected child will have wavy brown hair?
5. What is the probability that a randomly selected child will have red hair, given that he or she has straight hair?
6. If B is the event of a child having brown hair, find the probability of the complement of B.
7. In words, what does the complement of B represent?
119.
In a previous year, the weights of the members of the San Francisco 49ers and the Dallas Cowboys were published in the San Jose Mercury News. The factual data were compiled into the following table.
Shirt # ≤ 210 211–250 251–290 > 290
1–33 21 5 0 0
34–66 6 18 7 4
66–99 6 12 22 5
Table $20$
For the following, suppose that you randomly select one player from the 49ers or Cowboys.
1. Find the probability that his shirt number is from 1 to 33.
2. Find the probability that he weighs at most 210 pounds.
3. Find the probability that his shirt number is from 1 to 33 AND he weighs at most 210 pounds.
4. Find the probability that his shirt number is from 1 to 33 OR he weighs at most 210 pounds.
5. Find the probability that his shirt number is from 1 to 33 GIVEN that he weighs at most 210 pounds.
Use the following information to answer the next two exercises. This tree diagram shows the tossing of an unfair coin followed by drawing one bead from a cup containing three red (R), four yellow (Y) and five blue (B) beads. For the coin, P(H) = $\frac{2}{3}$ and P(T) = $\frac{1}{3}$ where H is heads and T is tails.
120.
Find P(tossing a Head on the coin AND a Red bead)
1. $\frac{2}{3}$
2. $\frac{5}{15}$
3. $\frac{6}{36}$
4. $\frac{5}{36}$
121.
Find P(Blue bead).
1. $\frac{15}{36}$
2. $\frac{10}{36}$
3. $\frac{10}{12}$
4. $\frac{6}{36}$
122.
A box of cookies contains three chocolate and seven butter cookies. Miguel randomly selects a cookie and eats it. Then he randomly selects another cookie and eats it. (How many cookies did he take?)
1. Draw the tree that represents the possibilities for the cookie selections. Write the probabilities along each branch of the tree.
2. Are the probabilities for the flavor of the SECOND cookie that Miguel selects independent of his first selection? Explain.
3. For each complete path through the tree, write the event it represents and find the probabilities.
4. Let S be the event that both cookies selected were the same flavor. Find P(S).
5. Let T be the event that the cookies selected were different flavors. Find P(T) by two different methods: by using the complement rule and by using the branches of the tree. Your answers should be the same with both methods.
6. Let U be the event that the second cookie selected is a butter cookie. Find P(U). | textbooks/stats/Applied_Statistics/Introductory_Business_Statistics_(OpenStax)/03%3A_Probability_Topics/3.07%3A_Chapter_Homework.txt |