One of the implicit assumptions up to this point was that the models were being applied to a single homogeneous population. In many cases, we take a sample from a population, but that overall group is likely a combination of individuals from different sub-populations. For example, the SAT study was interested in all students at the university, but that group contains obvious sub-populations based on the gender of the students. It is dangerous to fit MLR models across subpopulations, but we can also use MLR models to address more sophisticated research questions by comparing groups. We will be able to compare the intercepts (mean levels) and the slopes to see if they differ between the groups. For example, does the relationship between the satv and fygpa differ for male and female students? We can add the grouping information to the scatterplot of fygpa vs satv (Figure 8.22) and consider whether there is visual evidence of a difference in the slope and/or intercept between the two groups, with men coded as 1 and women coded as 2. Code below changes this variable to GENDER with more explicit labels, even though they might not be correct and the students were likely forced to choose one or the other. It appears that the slope for females might be larger (steeper) in this relationship than it is for males. So increases in SAT Verbal percentiles for females might have more of an impact on the average first year GPA. We'll handle this sort of situation in Section 8.11, where we will formally consider how to change the slopes for different groups. In this section, we develop new methods needed to begin to handle these situations and explore creating models that assume the same slope coefficient for all groups but allow for different $y$-intercepts. This material ends up resembling what we did for the Two-Way ANOVA additive model.

The results for satv contrast with Figure 8.23 for the relationship between first year college GPA and satm percentile by gender of the students. The lines for the two groups appear to be mostly parallel and just seem to have different $y$-intercepts. In this section, we will learn how we can use our MLR techniques to fit a model to the entire data set that allows for different $y$-intercepts. The real power of this idea is that we can then also test whether the different groups have different $y$-intercepts – whether the shift between the groups is “real”. In this example, the plot suggests that females generally have slightly higher GPAs than males, on average, but that an increase in satm has about the same impact on GPA for both groups. If this difference in $y$-intercepts is not “real”, then there appears to be no difference between the sexes in their relationship between satm and GPA and we can safely continue using a model that does not differentiate the two groups. We could also just subset the data set and do two analyses, but that approach will not allow us to assess whether things are “really” different between the two groups.
# Make 1,2 coded gender into factor GENDER
satgpa <- satgpa %>% mutate(GENDER = factor(gender))

# Make category names clear but note that level names might be wrong
levels(satgpa$GENDER) <- c("MALE", "FEMALE")

satgpa %>% ggplot(mapping = aes(x = satv, y = fygpa, color = GENDER, shape = GENDER)) +
  geom_smooth(method = "lm") +
  geom_point(alpha = 0.7) +
  theme_bw() +
  scale_color_viridis_d(end = 0.8, option = "plasma") +
  labs(title = "Scatterplot of GPA vs satv by gender")

satgpa %>% ggplot(mapping = aes(x = satm, y = fygpa, color = GENDER, shape = GENDER)) +
  geom_smooth(method = "lm") +
  geom_point(alpha = 0.7) +
  theme_bw() +
  scale_color_viridis_d(end = 0.8, option = "inferno") +
  labs(title = "Scatterplot of GPA vs satm by gender")

To fit one model to a data set that contains multiple groups, we need a way of entering categorical variable information in an MLR model. Regression models require quantitative predictor variables for the $x\text{'s}$, so we can't directly enter the text-coded information on the gender of the students into the regression model – it contains “words” and we cannot multiply a word by a slope coefficient. To be able to put in “numbers” as predictors, we create what are called indicator variables that are made up of 0s and 1s, with the 0 reflecting one category and 1 the other, changing depending on the category of the individual in that row of the data set. The lm function does this whenever a factor variable is used as an explanatory variable. It sets up the indicator variables using a baseline category (which gets coded as a 0) and the deviation category for the other level of the variable (which gets coded as a 1). We can see how this works by exploring what happens when we put GENDER into our lm with satm, after first making sure it is categorical using the factor function and making the factor levels explicit instead of 1s and 2s.

SATGENDER1 <- lm(fygpa ~ satm + GENDER, data = satgpa) # Fit lm with satm and GENDER
summary(SATGENDER1)

## 
## Call:
## lm(formula = fygpa ~ satm + GENDER, data = satgpa)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.42124 -0.42363  0.01868  0.46540  1.66397 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept)   0.21589    0.14858   1.453    0.147
## satm          0.03861    0.00258  14.969  < 2e-16
## GENDERFEMALE  0.31322    0.04360   7.184 1.32e-12
## 
## Residual standard error: 0.6667 on 997 degrees of freedom
## Multiple R-squared: 0.1917, Adjusted R-squared: 0.1901
## F-statistic: 118.2 on 2 and 997 DF, p-value: < 2.2e-16

The GENDERFEMALE row shows that the linear model chose MALE as the baseline category and FEMALE as the deviation category, since MALE does not show up in the output.
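The design matrix that lm builds internally makes the baseline choice visible. This is a short side check (a sketch, not part of the original text) that assumes the satgpa data set and the GENDER factor created above:

# The design matrix lm builds: an intercept, satm, and a GENDERFEMALE column
# that is 1 for FEMALE rows and 0 for MALE rows
head(model.matrix(fygpa ~ satm + GENDER, data = satgpa))

# The contrasts show MALE as the baseline level (the all-zero row)
contrasts(satgpa$GENDER)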
To see what lm is doing for us when we give it a two-level categorical variable, we can create our own “numerical” predictor that is 0 for males and 1 for females that we called GENDERINDICATOR, displayed for the first 10 observations: # Convert logical to 1 for female, 0 for male using ifelse function satgpa <- satgpa %>% mutate(GENDERINDICATOR = ifelse(GENDER == "FEMALE", 1, 0)) # Explore first 10 observations on the two versions of GENDER using the head() function satgpa %>% select(GENDER, GENDERINDICATOR) %>% head(10) ## # A tibble: 10 × 2 ## GENDER GENDERINDICATOR ## <fct> <dbl> ## 1 MALE 0 ## 2 FEMALE 1 ## 3 FEMALE 1 ## 4 MALE 0 ## 5 MALE 0 ## 6 FEMALE 1 ## 7 MALE 0 ## 8 MALE 0 ## 9 FEMALE 1 ## 10 MALE 0 We can define the indicator variable more generally by calling it $I_{\text{Female},i}$ to denote that it is an indicator $(I)$ that takes on a value of 1 for observations in the category Female and 0 otherwise (Male) – changing based on the observation ($i$). Indicator variables, once created, are quantitative variables that take on values of 0 or 1 and we can put them directly into linear models with other $x\text{'s}$ (quantitative or categorical). If we replace the categorical GENDER variable with our quantitative GENDERINDICATOR and re-fit the model, we get: SATGENDER2 <- lm(fygpa ~ satm + GENDERINDICATOR, data = satgpa) summary(SATGENDER2) ## ## Call: ## lm(formula = fygpa ~ satm + GENDERINDICATOR, data = satgpa) ## ## Residuals: ## Min 1Q Median 3Q Max ## -2.42124 -0.42363 0.01868 0.46540 1.66397 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.21589 0.14858 1.453 0.147 ## satm 0.03861 0.00258 14.969 < 2e-16 ## GENDERINDICATOR 0.31322 0.04360 7.184 1.32e-12 ## ## Residual standard error: 0.6667 on 997 degrees of freedom ## Multiple R-squared: 0.1917, Adjusted R-squared: 0.1901 ## F-statistic: 118.2 on 2 and 997 DF, p-value: < 2.2e-16 This matches all the previous lm output except that we didn’t get any information on the categories used since lm didn’t know that GENDERINDICATOR was anything different from other quantitative predictors. Now we want to think about what this model means. We can write the estimated model as $\widehat{\text{fygpa}}_i = 0.216 + 0.0386\cdot\text{satm}_i + 0.313 \cdot I_{\text{Female},i}$ When we have a male observation, the indicator takes on a value of 0 so the 0.313 drops out of the model, leaving an SLR just in terms of satm. For a female student, the indicator is 1 and we add 0.313 to the previous $y$-intercept. The following works this out step-by-step, simplifying the MLR into two SLRs: • Simplified model for Males (plug in a 0 for $I_{\text{Female},i}$): • $\widehat{\text{fygpa}}_i = 0.216 + 0.0386\cdot\text{satm}_i + 0.313 \cdot 0 = 0.216 + 0.0386\cdot\text{satm}_i$ • Simplified model for Females (plug in a 1 for $I_{\text{Female},i}$): • $\widehat{\text{fygpa}}_i = 0.216 + 0.0386\cdot\text{satm}_i + 0.313 \cdot 1$ • $= 0.216 + 0.0386\cdot\text{satm}_i + 0.313$ (combine “like” terms to simplify the equation) • $= 0.529 + 0.0386\cdot\text{satm}_i$ In this situation, we then end up with two SLR models that relate satm to GPA, one model for males $(\widehat{\text{fygpa}}_i = 0.216 + 0.0386\cdot\text{satm}_i)$ and one for females $(\widehat{\text{fygpa}}_i = 0.529 + 0.0386\cdot\text{satm}_i)$. The only difference between these two models is in the $y$-intercept, with the female model’s $y$-intercept shifted up from the male $y$-intercept by 0.313. 
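To connect the simplification above to the fitted model object, a small sketch (assuming the SATGENDER2 fit from above is available) rebuilds the two $y$-intercepts and the common slope directly from the estimated coefficients:

b <- coef(SATGENDER2)
male_intercept   <- unname(b["(Intercept)"])                        # about 0.216
female_intercept <- unname(b["(Intercept)"] + b["GENDERINDICATOR"]) # about 0.216 + 0.313 = 0.529
common_slope     <- unname(b["satm"])                               # about 0.0386 for both groups
c(male = male_intercept, female = female_intercept, slope = common_slope)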
And that is what adding indicator variables into models does in general150 – it shifts the intercept up or down from the baseline group (here selected as males) to get a new intercept for the deviation group (here females). To make this visually clearer, Figure 8.24 contains the regression lines that were estimated for each group. For any satm, the difference in the groups is the 0.313 coefficient from the GENDERFEMALE or GENDERINDICATOR row of the model summaries. For example, at satm = 50, the difference in terms of predicted average first year GPAs between males and females is displayed as a difference between 2.15 and 2.46. This model assumes that the slope on satm is the same for both groups except that they are allowed to have different $y$-intercepts, which is reasonable here because we saw approximately parallel relationships for the two groups in Figure 8.23. Remember that lm selects baseline categories typically based on the alphabetical order of the levels of the categorical variable when it is created unless you actively use a function like relevel to change the baseline category. Here, the GENDER variable started with a coding of 1 and 2 and retained that order even with the recoding of levels that we created to give it more explicit names. Because we allow lm to create indicator variables for us, the main thing you need to do is explore the model summary and look for the hint at the baseline level that is not displayed after the name of the categorical variable. We can also work out the impacts of adding an indicator variable to the model in general in the theoretical model with a single quantitative predictor $x_i$ and indicator $I_i$. The model starts as in the equation below. $y_i = \beta_0+\beta_1x_i + \beta_2I_i + \varepsilon_i$ Again, there are two versions: • For any observation $i$ in the baseline category, $I_i = 0$ and the model is $y_i = \beta_0+\beta_1x_i + \varepsilon_i$. • For any observation $i$ in the non-baseline (deviation) category, $I_i = 1$ and the model simplifies to $y_i = (\beta_0+\beta_2)+\beta_1x_i + \varepsilon_i$. • This model has a $y$-intercept of $\beta_0+\beta_2$. The interpretation and inferences for $\beta_1$ resemble the work with any MLR model, noting that these results are “controlled for”, “adjusted for”, or “allowing for differences based on” the categorical variable in the model. The interpretation of $\beta_2$ is as a shift up or down in the $y$-intercept for the model that includes $x_i$. When we make term-plots in a model with a quantitative and additive categorical variable, the two reported model components match with the previous discussion – the same estimated term from the quantitative variable for all observations and a shift to reflect the different $y$-intercepts in the two groups. In Figure 8.25, the females are estimated to be that same 0.313 points higher on first year GPA. The males have a mean GPA slightly above 2.3 which is the predicted GPA for the average satm percentile for a male (remember that we have to hold the other variable at its mean to make each term-plot). When making the satm term-plot, the intercept is generated based on a weighted average of the intercept for the baseline category (male) of $b_0 = 0.216$ and the intercept for the deviation category (female) of $b_0 + b_2 = 0.529$ with weights of $516/1000 = 0.516$ for the estimated male intercept and $484/1000 = 0.484$ for estimated female intercept, $0.516 \cdot 0.216 + 0.484 \cdot 0.529 = 0.368$. 
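As a small arithmetic check on that weighted average (a sketch, not from the original text, assuming the SATGENDER1 fit and the satgpa data from above), the same 0.368 intercept for the satm term-plot can be rebuilt from the coefficients and the group counts; the tally output below shows the same counts.

b <- coef(SATGENDER1)
counts <- table(satgpa$GENDER)   # 516 MALE and 484 FEMALE
w <- counts / sum(counts)        # weights 0.516 and 0.484
# Weighted average of the male and female intercepts, roughly 0.368
unname(w["MALE"] * b["(Intercept)"] + w["FEMALE"] * (b["(Intercept)"] + b["GENDERFEMALE"]))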
tally(GENDER ~ 1, data = satgpa)

##           1
## GENDER      1
##   MALE    516
##   FEMALE  484

plot(allEffects(SATGENDER1))

The model summary and confidence intervals provide some potentially interesting inferences in these models. Again, these are just applications of MLR methods we have already seen, except that the definition of one of the variables is “different” using the indicator coding idea. For the same model, the GENDERFEMALE coefficient can be used to generate inferences for differences in the means of the groups, controlling for their satm scores.

##              Estimate Std. Error t value Pr(>|t|)
## GENDERFEMALE  0.31322    0.04360   7.184 1.32e-12

Testing the null hypothesis that $H_0: \beta_2 = 0$ vs $H_A: \beta_2\ne 0$ using our regular $t$-test provides the opportunity to test for a difference in intercepts between the groups. In this situation, the test statistic is $t = 7.184$ and, based on a $t_{997}$-distribution if the null is true, the p-value is $<0.0001$. We have very strong evidence against the null hypothesis that there is no difference in the true $y$-intercept in a satm model for first year college GPA between males and females, so we would conclude that there is a difference in their true mean GPA levels controlled for satm. The confidence interval is also informative:

confint(SATGENDER1)

##                    2.5 %     97.5 %
## (Intercept)  -0.07566665 0.50744709
## satm          0.03355273 0.04367726
## GENDERFEMALE  0.22766284 0.39877160

We are 95% confident that the true mean GPA for females is between 0.228 and 0.399 points higher than for males, after adjusting for the satm in the population of students. If we had subset the data set by gender and fit two SLRs, we could have obtained the same simplified regression models for each group, but we never could have performed inferences for the differences between the two groups without putting all the observations together in one model and then assessing those differences with targeted coefficients. We also would not be able to get an estimate of their common slope for satm, after adjusting for differences in the intercept for each group.
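To make that last point concrete, here is a quick sketch (not part of the original analysis) that fits the two subset SLRs; it assumes the satgpa data set with the GENDER factor from above. Each subset fit estimates its own intercept and slope, but nothing in either output estimates or tests the male/female difference or provides a single common slope.

# Separate SLRs by group
male_fit   <- lm(fygpa ~ satm, data = subset(satgpa, GENDER == "MALE"))
female_fit <- lm(fygpa ~ satm, data = subset(satgpa, GENDER == "FEMALE"))
coef(male_fit)
coef(female_fit)
# Neither fit gives a standard error, test, or interval for the difference in
# the groups -- that requires the combined indicator-variable model (SATGENDER1).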
The same techniques can be extended to more than two groups. A study was conducted to explore sound tolerances using $n = 98$ subjects with the data available in the Headache data set from the heplots package . Each subject was initially exposed to a tone, stopping when the tone became definitely intolerable (DU) and that decibel level was recorded (variable called du1). Then the subjects were randomly assigned to one of four treatments: T1 (Listened again to the tone at their initial DU level, for the same amount of time they were able to tolerate it before); T2 (Same as T1, with one additional minute of exposure); T3 (Same as T2, but the subjects were explicitly instructed to use the relaxation techniques); and Control (these subjects experienced no further exposure to the noise tone until the final sensitivity measures were taken). Then the DU was measured again (variable called du2). One would expect that there would be a relationship between the upper tolerance levels of the subjects before and after treatment. But maybe the treatments impact that relationship? We can use our indicator approach to see if the treatments provide a shift to higher tolerances after accounting for the relationship between the two measurements151. The scatterplot152 of the results in Figure 8.26 shows some variation in the slopes and the intercepts for the groups although the variation in intercepts seems more prominent than differences in slopes. Note that the fct_relevel function was applied to the treatment variable with an option of "Control" to make the Control category the baseline category as the person who created the data set had set T1 as the baseline in the treatment variable. library(heplots) data(Headache) Headache <- as_tibble(Headache) Headache ## # A tibble: 98 × 6 ## type treatment u1 du1 u2 du2 ## <fct> <fct> <dbl> <dbl> <dbl> <dbl> ## 1 Migrane T3 2.34 5.3 5.8 8.52 ## 2 Migrane T1 2.73 6.85 4.68 6.68 ## 3 Tension T1 0.37 0.53 0.55 0.84 ## 4 Migrane T3 7.5 9.12 5.7 7.88 ## 5 Migrane T3 4.63 7.21 5.63 6.75 ## 6 Migrane T3 3.6 7.3 4.83 7.32 ## 7 Migrane T2 2.45 3.75 2.5 3.18 ## 8 Migrane T1 2.31 3.25 2 3.3 ## 9 Migrane T1 1.38 2.33 2.23 3.98 ## 10 Tension T3 0.85 1.42 1.37 1.89 ## # … with 88 more rows ## # ℹ Use print(n = ...) to see more rows Headache <- Headache %>% mutate(treatment = factor(treatment), treatment = fct_relevel(treatment, "Control") ) # Make treatment a factor and Control the baseline category Headache %>% ggplot(mapping = aes(x = du1, y = du2, color = treatment, shape = treatment)) + geom_smooth(method = "lm", se = F) + geom_point(size = 2.5) + theme_bw() + scale_color_viridis_d(end = 0.85, option = "inferno") + labs(title = "Scatterplot of Maximum DB tolerance before & after treatment (by treatment)") This data set contains a categorical variable with 4 levels. To go beyond two groups, we have to add more than one indicator variable, defining three indicators to turn on (1) or off (0) for three of the levels of the variable with the same reference level used for all the indicators. For this example, the Control group is chosen as the baseline group so it hides in the background while we define indicators for the other three levels. 
The indicators for T1, T2, and T3 treatment levels are: • Indicator for T1: $I_{T1,i} = \left\{\begin{array}{rl} 1 & \text{if Treatment} = T1 \ 0 & \text{else} \end{array}\right.$ • Indicator for T2: $I_{T2,i} = \left\{\begin{array}{rl} 1 & \text{if Treatment} = T2 \ 0 & \text{else} \end{array}\right.$ • Indicator for T3: $I_{T3,i} = \left\{\begin{array}{rl} 1 & \text{if Treatment} = T3 \ 0 & \text{else} \end{array}\right.$ We can see the values of these indicators for a few observations and their original variable (treatment) in the following output. For Control all the indicators stay at 0. Treatment I_T1 I_T2 I_T3 T3 0 0 1 T1 1 0 0 T1 1 0 0 T3 0 0 1 T3 0 0 1 T3 0 0 1 T2 0 1 0 T1 1 0 0 T1 1 0 0 T3 0 0 1 T3 0 0 1 T2 0 1 0 T3 0 0 1 T1 1 0 0 T3 0 0 1 Control 0 0 0 T3 0 0 1 When we fit the additive model of the form y ~ x + group, the lm function takes the $\boldsymbol{J}$ categories and creates $\boldsymbol{J-1}$ indicator variables. The baseline level is always handled in the intercept. The true model will be of the form $y_i = \beta_0 + \beta_1x_i +\beta_2I_{\text{Level}2,i}+\beta_3I_{\text{Level}3,i} +\cdots+\beta_{J}I_{\text{Level}J,i}+\varepsilon_i$ where the $I_{\text{CatName}j,i}\text{'s}$ are the different indicator variables. Note that each indicator variable gets a coefficient associated with it and is “turned on” whenever the $i^{th}$ observation is in that category. At most only one of the $I_{\text{CatName}j,i}\text{'s}$ is a 1 for any observation, so the $y$-intercept will either be $\beta_0$ for the baseline group or $\beta_0+\beta_j$ for $j = 2,\ldots,J$. It is important to remember that this is an “additive” model since the effects just add and there is no interaction between the grouping variable and the quantitative predictor. To be able to trust this model, we need to check that we do not need different slope coefficients for the groups as discussed in the next section. For these types of models, it is always good to start with a plot of the data set with regression lines for each group – assessing whether the lines look relatively parallel or not. In Figure 8.26, there are some differences in slopes – we investigate that further in the next section. For now, we can proceed with fitting the additive model with different intercepts for the four levels of treatment and the quantitative explanatory variable, du1. head1 <- lm(du2 ~ du1 + treatment, data = Headache) summary(head1) ## ## Call: ## lm(formula = du2 ~ du1 + treatment, data = Headache) ## ## Residuals: ## Min 1Q Median 3Q Max ## -6.9085 -0.9551 -0.3118 1.1141 10.5364 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.25165 0.51624 0.487 0.6271 ## du1 0.83705 0.05176 16.172 <2e-16 ## treatmentT1 0.55752 0.61830 0.902 0.3695 ## treatmentT2 0.63444 0.63884 0.993 0.3232 ## treatmentT3 1.36671 0.60608 2.255 0.0265 ## ## Residual standard error: 2.14 on 93 degrees of freedom ## Multiple R-squared: 0.7511, Adjusted R-squared: 0.7404 ## F-statistic: 70.16 on 4 and 93 DF, p-value: < 2.2e-16 The complete estimated regression model is $\widehat{\text{du2}}_i = 0.252+0.837\cdot\text{du1}_i +0.558I_{\text{T1},i}+0.634I_{\text{T2},i}+1.367I_{\text{T3},i}$ For each group, the model simplifies to an SLR as follows: • For Control (baseline): $\begin{array}{rl} \widehat{\text{du2}}_i & = 0.252+0.837\cdot\text{du1}_i +0.558I_{\text{T1},i}+0.634I_{\text{T2},i}+1.367I_{\text{T3},i} \ & = 0.252+0.837\cdot\text{du1}_i+0.558*0+0.634*0+1.367*0 \ & = 0.252+0.837\cdot\text{du1}_i. 
\end{array}$

• For T1: $\begin{array}{rl} \widehat{\text{du2}}_i & = 0.252+0.837\cdot\text{du1}_i +0.558I_{\text{T1},i}+0.634I_{\text{T2},i}+1.367I_{\text{T3},i} \\ & = 0.252+0.837\cdot\text{du1}_i+0.558*1+0.634*0+1.367*0 \\ & = 0.252+0.837\cdot\text{du1}_i + 0.558 \\ & = 0.81+0.837\cdot\text{du1}_i. \end{array}$

• Similarly for T2: $\widehat{\text{du2}}_i = 0.886 + 0.837\cdot\text{du1}_i$

• Finally for T3: $\widehat{\text{du2}}_i = 1.62 + 0.837\cdot\text{du1}_i$

To reinforce what this additive model is doing, Figure 8.27 displays the estimated regression lines for all four groups, showing the shifts in the $y$-intercepts among the groups. The right panel of the term-plot (Figure 8.28) shows how the T3 group seems to have shifted up the most relative to the others and the Control group seems to have a mean that is a bit lower than the others, in the model that otherwise assumes that the same linear relationship holds between du1 and du2 for all the groups. After controlling for the Treatment group, for a 1 decibel increase in initial tolerance, we estimate, on average, a 0.84 decibel increase in the second tolerance measurement. The R2 shows that this is a decent model for the responses, with the model explaining 75.1% of the variation in the second decibel tolerance measure. We should check the diagnostic plots and VIFs for any issues – all the diagnostics and assumptions are as before except that there is no assumption of linearity between the grouping variable and the responses.

plot(allEffects(head1, residuals = T), grid = T)

The diagnostic plots in Figure 8.29 provide some indication of a few observations in the tails that deviate from a normal distribution toward slightly heavier tails, but only one outlier is of real concern and it causes some concern about the normality assumption. There is a small indication of increasing variability as a function of the fitted values, as both the Residuals vs. Fitted and Scale-Location plots show some fanning out for higher values, but this is a minor issue. There are no influential points here since all the Cook's D values are less than 0.5.

par(mfrow = c(2,2), oma = c(0,0,3,0))
plot(head1, pch = 16, sub.caption = "")
title(main="Plot of diagnostics for additive model with du1 and treatment for du2", outer=TRUE)

Additionally, sometimes we need to add group information to diagnostics to see if any patterns in residuals look different in different groups, like linearity or non-constant variance, when we are fitting models that might contain multiple groups. We can use the same scatterplot tools to make our own plot of the residuals (extracted using the residuals function) versus the fitted values (extracted using the fitted function) by groups as in Figure 8.30. This provides an opportunity to introduce faceting, where we can split our plots into panels by a grouping variable, here by the treatment applied to each subject. This can be helpful with multiple groups to be able to see each one more clearly as we avoid overplotting.
The addition of + facet_grid(cols = vars(treatment)) facets the plot based on the treatment variable and puts the facets in different columns because of the cols = part of the code (rows = would instead place the facets in different rows), labeling each panel at the top with the level of the faceting variable being displayed (vars() is needed to help ggplot find the variable). In this example, there are no additional patterns identified by making this plot, although we do see some minor deviations in the fitted lines for each group, but it is a good additional check in these multi-group situations.

Headache <- Headache %>% mutate(resids = residuals(head1), fits = fitted(head1))

Headache %>% ggplot(mapping = aes(x = fits, y = resids, color = treatment, shape = treatment)) +
  geom_smooth(method = "lm", se = F) +
  geom_point(size = 2.5) +
  theme_bw() +
  scale_color_viridis_d(end = 0.85, option = "inferno") +
  labs(title = "Scatterplot of Residuals vs Fitted by Treatment Group") +
  facet_grid(cols = vars(treatment))

The VIFs are different for models with categorical variables than for models with only quantitative predictors in MLR, even though we are still concerned with shared information across the predictors of all kinds. For categorical predictors, the $J$ levels are combined to create a single measure for the predictor all together, called the generalized VIF (GVIF). For GVIFs, interpretations are based on the GVIF raised to the power $1/(2\cdot Df)$, where $Df = J-1$ is the number of indicator variables created for the predictor. For quantitative predictors, $Df = 1$ and the power simplifies to $1/2$, which is our regular square-root scale for inflation of standard errors due to multicollinearity (so the GVIF is just the VIF for quantitative predictors). For a two-level categorical predictor $(J = 2)$, the power is also $1/2$; for categorical predictors with more levels the exponent shrinks, and there are no simple rules of thumb for GVIFs when $J>2$. In the following output, the treatment predictor has four levels, so $J = 4$ and $Df = 3$. When raised to the requisite power, the GVIF interpretation for multi-category categorical predictors is the multiplicative increase in the SEs for the coefficients on all the indicator variables due to multicollinearity with other predictors. In this model, the SE for the quantitative predictor du1 is 1.009 times larger due to multicollinearity with other predictors and the SEs for the indicator variables for the four-level categorical treatment predictor are 1.003 times larger due to multicollinearity, both compared to what they would have been with no shared information in the predictors in the model. Neither is large, so multicollinearity is not a problem in this model.

vif(head1)

##              GVIF Df GVIF^(1/(2*Df))
## du1       1.01786  1        1.008891
## treatment 1.01786  3        1.002955

While there are inferences available in the model output, the tests for the indicator variables are not too informative (at least to start) since they only compare each group to the baseline. In Section 8.12, we see how to use ANOVA F-tests to help us ask general questions about including a categorical predictor in the model. But we can compare adjusted R2 values with and without Treatment to see if including the categorical variable was “worth it”:

head1R <- lm(du2 ~ du1, data = Headache)
summary(head1R)

## 
## Call:
## lm(formula = du2 ~ du1, data = Headache)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -6.9887 -0.8820 -0.2765  1.1529 10.4165 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  0.84744    0.36045   2.351   0.0208
## du1          0.85142    0.05189  16.408   <2e-16
## 
## Residual standard error: 2.165 on 96 degrees of freedom
## Multiple R-squared: 0.7371, Adjusted R-squared: 0.7344
## F-statistic: 269.2 on 1 and 96 DF, p-value: < 2.2e-16

The adjusted R2 in the model with both Treatment and du1 is 0.7404 and the adjusted R2 for this reduced model with just du1 is 0.7344, suggesting the Treatment is useful. The next section provides a technique to work with different slopes on the quantitative predictor for each group. Comparing those results to the results for the additive model allows assessment of the assumption in this section that all the groups had the same slope coefficient for the quantitative variable.
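A small convenience sketch (assuming the head1 and head1R fits from above) pulls those adjusted R-squared values directly from the model summaries instead of reading them off the printed output:

# Adjusted R-squared with and without the treatment indicators
c(with_treatment    = summary(head1)$adj.r.squared,
  without_treatment = summary(head1R)$adj.r.squared)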
Sometimes researchers are specifically interested in whether the slopes vary across groups or the regression lines in the scatterplot for the different groups may not look parallel or it may just be hard to tell visually if there really is a difference in the slopes. Unless you are very sure that there is not an interaction between the grouping variable and the quantitative predictor, you should153 start by fitting a model containing an interaction and then see if you can drop it. It may be the case that you end up with the simpler additive model from the previous sections, but you don’t want to assume the same slope across groups unless you are absolutely sure that is the case. This should remind you a bit of the discussions of the additive and interaction models in the Two-way ANOVA material. The models, concerns, and techniques are very similar, but with the quantitative variable replacing one of the two categorical variables. As always, the scatterplot is a good first step to understanding whether we need the extra complexity that these models require. A new example provides motivation for the consideration of different slopes and intercepts. A study was performed to address whether the relationship between nonverbal IQs and reading accuracy differs between dyslexic and non-dyslexic students. Two groups of students were identified, one group of dyslexic students was identified first (19 students) and then a group of gender and age similar student matches were identified (25 students) for a total sample size of $n = 44$, provided in the dyslexic3 data set from the smdata package . This type of study design is an attempt to “balance” the data from the two groups on some important characteristics to make the comparisons of the groups as fair as possible. The researchers attempted to balance the characteristics of the subjects in the two groups so that if they found different results for the two groups, they could attribute it to the main difference they used to create the groups – dyslexia or not. This design, case-control or case-comparison where each subject with a trait is matched to one or more subjects in the “control” group would hopefully reduce confounding from other factors and then allow stronger conclusions in situations where it is impossible to randomly assign treatments to subjects. We still would avoid using “causal” language but this design is about as good as you can get when you are unable to randomly assign levels to subjects. Using these data, we can explore the relationship between nonverbal IQ scores and reading accuracy, with reading accuracy measured as a proportion correct. The fact that there is an upper limit to the response variable attained by many students will cause complications below, but we can still learn something from our attempts to analyze these data using an MLR model. The scatterplot in Figure 8.31 seems to indicate some clear differences in the IQ vs reading score relationship between the dys = 0 (non-dyslexic) and dys = 1 (dyslexic) students (code below makes these levels more explicit in the data set). Note that the IQ is standardized to have mean 0 and standard deviation of 1 which means that a 1 unit change in IQ score is a 1 SD change and that the y-intercept (for $x = 0$) is right in the center of the plot and actually interesting154. 
library(smdata) data("dyslexic3") dyslexic3 <- dyslexic3 %>% mutate(dys = factor(dys)) levels(dyslexic3\$dys) <- c("no", "yes") dyslexic3 %>% ggplot(mapping = aes(x = ziq, y = score, color = dys, shape = dys)) + geom_smooth(method = "lm") + geom_point(size = 2, alpha = 0.5) + theme_bw() + scale_color_viridis_d(end = 0.7, option = "plasma") + labs(title = "Plot of IQ vs Reading by dyslexia status", x = "Standardized nonverbal IQ scores", y = "Reading score") + facet_grid(cols = vars(dys)) To allow for both different $y$-intercepts and slope coefficients on the quantitative predictor, we need to include a “modification” of the slope coefficient. This is performed using an interaction between the two predictor variables where we allow the impacts of one variable (slopes) to change based on the levels of another variable (grouping variable). The formula notation is y ~ x * group, remembering that this also includes the main effects (the additive variable components) as well as the interaction coefficients; this is similar to what we discussed in the Two-Way ANOVA interaction model. We can start with the general model for a two-level categorical variable with an interaction, which is $y_i = \beta_0 + \beta_1x_i +\beta_2I_{\text{CatName},i} + {\color{red}{\boldsymbol{\beta_3I_{\text{CatName},i}x_i}}}+\varepsilon_i,$ where the new component involves the product of both the indicator and the quantitative predictor variable. The $\color{red}{\boldsymbol{\beta_3}}$ coefficient will be found in a row of output with both variable names in it (with the indicator level name) with a colon between them (something like x:grouplevel). As always, the best way to understand any model involving indicators is to plug in 0s or 1s for the indicator variable(s) and simplify the equations. • For any observation in the baseline group $I_{\text{CatName},i} = 0$, so $y_i = \beta_0+\beta_1x_i+\beta_2I_{\text{CatName},i}+ {\color{red}{\boldsymbol{\beta_3I_{\text{CatName},i}x_i}}}+\varepsilon_i$ simplifies quickly to: $y_i = \beta_0+\beta_1x_i+\varepsilon_i$ • So the baseline group’s model involves the initial intercept and quantitative slope coefficient. • For any observation in the second category $I_{\text{CatName},i} = 1$, so $y_i = \beta_0+\beta_1x_i+\beta_2I_{\text{CatName},i}+ {\color{red}{\boldsymbol{\beta_3I_{\text{CatName},i}x_i}}}+\varepsilon_i$ is $y_i = \beta_0+\beta_1x_i+\beta_2*1+ {\color{red}{\boldsymbol{\beta_3*1*x_i}}}+\varepsilon_i$ which “simplifies” to $y_i = (\beta_0+\beta_2) + (\beta_1+{\color{red}{\boldsymbol{\beta_3}}})x_i +\varepsilon_i,$ by combining like terms. • For the second category, the model contains a modified $y$-intercept, now $\beta_0+\beta_2$, and a modified slope coefficient, now $\beta_1+\color{red}{\boldsymbol{\beta_3}}$. We can make this more concrete by applying this to the dyslexia data with dys as a categorical variable for dyslexia status of subjects (levels of no and yes) and ziq the standardized IQ. The model is estimated as: dys_model <- lm(score ~ ziq * dys, data = dyslexic3) summary(dys_model) ## ## Call: ## lm(formula = score ~ ziq * dys, data = dyslexic3) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.26362 -0.04152 0.01682 0.06790 0.17740 ## ## Coefficients: ## Estimate Std. 
Error t value Pr(>|t|)
## (Intercept)  0.87586    0.02391  36.628  < 2e-16
## ziq          0.05827    0.02535   2.299   0.0268
## dysyes      -0.27951    0.03827  -7.304 7.11e-09
## ziq:dysyes  -0.07285    0.03821  -1.907   0.0638
## 
## Residual standard error: 0.1017 on 40 degrees of freedom
## Multiple R-squared: 0.712, Adjusted R-squared: 0.6904
## F-statistic: 32.96 on 3 and 40 DF, p-value: 6.743e-11

The estimated model can be written as $\widehat{\text{Score}}_i = 0.876+0.058\cdot\text{ZIQ}_i - 0.280I_{\text{yes},i} -{\color{red}{\boldsymbol{0.073}}}I_{\text{yes},i}\cdot\text{ZIQ}_i$ and simplified for the two groups as:

• For the baseline (non-dyslexic, $I_{\text{yes},i} = 0$) students: $\widehat{\text{Score}}_i = 0.876+0.058\cdot\text{ZIQ}_i$

• For the deviation (dyslexic, $I_{\text{yes},i} = 1$) students: $\begin{array}{rl} \widehat{\text{Score}}_i& = 0.876+0.058\cdot\text{ZIQ}_i - 0.280*1- 0.073*1\cdot\text{ZIQ}_i \\ & = (0.876- 0.280) + (0.058-0.073)\cdot\text{ZIQ}_i, \end{array}$ which simplifies finally to: $\widehat{\text{Score}}_i = 0.596-0.015\cdot\text{ZIQ}_i$

• So the slope switched from 0.058 in the non-dyslexic students to -0.015 in the dyslexic students.

The interpretations of these coefficients are outlined below:

• For the non-dyslexic students: For a 1 SD increase in nonverbal IQ score, we estimate, on average, the reading score to go up by 0.058 “points”.

• For the dyslexic students: For a 1 SD increase in nonverbal IQ score, we estimate, on average, the reading score to change by -0.015 “points”.

So, an expected pattern of results emerges for the non-dyslexic students. Those with higher IQs tend to have higher reading accuracy; this does not mean higher IQs cause more accurate reading because random assignment of IQ is not possible. However, for the dyslexic students, the relationship is not what one would expect. It is slightly negative, showing that higher nonverbal IQs are related to lower reading accuracy. What we conclude from this is that we should not expect higher IQs to show higher performance on a test like this. Checking the assumptions is always recommended before getting focused on the inferences in the model. When interactions are present, you should not use VIFs as they are naturally inflated because the same variable is re-used in multiple parts of the model to create the interaction components. Checking the multicollinearity in the related additive model can be performed to understand shared information in the variables used in interactions. When fitting models with multiple groups, it is possible to see “groups” in the fitted values ($x$-axis in Residuals vs Fitted and Scale-Location plots) and that is not a problem – it is a feature of these models. You should look for issues in the residuals for each group, but the residuals should overall still be normally distributed and have the same variability everywhere. It is a bit hard to see issues in Figure 8.32 because of the group differences, but note the line of residuals for the higher fitted values. This is an artifact of the upper threshold in the reading accuracy test used. As with the first year college GPA, these observations were censored – their true score was outside the range of values we could observe – and so we did not really get a measure of how good these students were since a lot of their abilities were higher than the test could detect and they all binned up at the same value of getting all the questions correct.
The relationship in this group might be even stronger if we could really observe differences in the highest level readers. We should treat the results for the non-dyslexic group with caution even though they are clearly scoring on average higher and have a different slope than the results for the dyslexic students. The QQ-plot suggests a slightly long left tail but this deviation is not too far from what might happen if we simulated from a normal distribution, so is not clear evidence of a violation of the normality assumption. The influence diagnostics do not suggest any influential points because no points have Cook’s D over 0.5. par(mfrow = c(2,2), oma = c(0,0,2,0)) plot(dys_model, pch = 16, sub.caption = "") title(main="Plot of diagnostics for Dyslexia Interaction model", outer=TRUE) For these models, we have relaxed an earlier assumption that data were collected from only one group. In fact, we are doing specific research that is focused on questions about the differences between groups. However, these models still make assumptions that, within a specific group, the relationships are linear between the predictor and response variables. They also assume that the variability in the residuals is the same for all observations. Sometimes it can be difficult to check the assumptions by looking at the overall diagnostic plots and it may be easier to go back to the original scatterplot or plot the residuals vs fitted values by group to fully assess the results. Figure 8.33 shows a scatterplot of the residuals vs the quantitative explanatory variable by the groups. The variability in the residuals is a bit larger in the non-dyslexic group, possibly suggesting that variability in the reading test is higher for higher scoring individuals even though we couldn’t observe all of that variability because there were so many perfect scores in this group. dyslexic3 <- dyslexic3 %>% mutate(resids = residuals(dys_model), fits = fitted(dys_model) ) dyslexic3 %>% ggplot(mapping = aes(x = fits, y = resids, color = dys, shape = dys)) + geom_smooth(method = "lm", se = F) + geom_point(size = 2.5) + theme_bw() + scale_color_viridis_d(end = 0.7, option = "plasma") + labs(title = "Scatterplot of Residuals vs Fitted by Group") If we feel comfortable enough with the assumptions to trust the inferences here (this might be dangerous), then we can consider what some of the model inferences provide us in this situation. For example, the test for $H_0: {\color{red}{\boldsymbol{\beta_3}}} = 0$ vs $H_A: {\color{red}{\boldsymbol{\beta_3}}}\ne 0$ provides an interesting comparison. Under the null hypothesis, the two groups would have the same slope so it provides an opportunity to directly consider whether the relationship (via the slope) is different between the groups in their respective populations. We find $t = -1.907$ which, if the assumptions are true, follows a $t(40)$-distribution under the null hypothesis. This test statistic has a corresponding p-value of 0.0638. So it provides some evidence against the null hypothesis of no difference in the slopes between the two groups but it isn’t strong evidence against it. There are serious issues (like getting the wrong idea about directions of relationships) if we ignore a potentially important interaction and some statisticians would recommend retaining interactions even if the evidence is only moderate for its inclusion in the model. 
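The ziq:dysyes coefficient is exactly the estimated difference in slopes that this test examines. As a quick check (a sketch, not part of the original text, that assumes the dys_model fit from above), the two group-specific lines can be rebuilt from the coefficient vector:

b <- coef(dys_model)
# Baseline (non-dyslexic) line and the dyslexic line built from the shifts;
# b["ziq:dysyes"] is the estimated slope difference tested by t = -1.907
nondyslexic <- c(intercept = unname(b["(Intercept)"]), slope = unname(b["ziq"]))
dyslexic    <- c(intercept = unname(b["(Intercept)"] + b["dysyes"]),   # about 0.596
                 slope     = unname(b["ziq"] + b["ziq:dysyes"]))       # about -0.015
rbind(nondyslexic, dyslexic)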
For the original research question of whether the relationships differ for the two groups, we only have marginal evidence to support that result. Possibly with a larger sample size or a reading test that only a few students could get 100% on, the researchers might have detected a more pronounced difference in the slopes for the two groups. In the presence of a categorical by quantitative interaction, term-plots can be generated that plot the results for each group on the same display or on separate facets for each level of the categorical variable. The first version is useful for comparing the different lines and the second version is useful to add the partial residuals and get a final exploration of model assumptions and ranges of values where predictor variables were observed in each group. The term-plots basically provide a plot of the “simplified” SLR models for each group. In Figure 8.34 we can see noticeable differences in the slopes and intercepts. Note that testing for differences in intercepts between groups is not very interesting when there are different slopes because if you change the slope, you have to change the intercept. The plot shows that there are clear differences in the means even though we don’t have a test to directly assess that in this complicated of a model155. Figure 8.35 splits the plots up and adds partial residuals to the plots. The impact on the estimated model for the perfect scores in the non-dyslexic subjects is very prominent as well as the difference in the relationships between the two variables in the two groups. plot(allEffects(dys_model), ci.style = "bands", multiline = T, lty = c(1,2), grid = T) plot(allEffects(dys_model, residuals = T), lty = c(1,2), grid = T) It certainly appears in the plots that IQ has a different impact on the mean score in the two groups (even though the p-value only provided marginal evidence in support of the interaction). To reinforce the potential dangers of forcing the same slope for both groups, consider the additive model for these data. Again, this just shifts one group off the other one, but both have the same slope. The following model summary and term-plots (Figure 8.36) suggest the potentially dangerous conclusion that can come from assuming a common slope when that might not be the case. dys_modelR <- lm(score ~ ziq + dys, data = dyslexic3) summary(dys_modelR) ## ## Call: ## lm(formula = score ~ ziq + dys, data = dyslexic3) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.26062 -0.05565 0.02932 0.07577 0.13217 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.89178 0.02312 38.580 < 2e-16 ## ziq 0.02620 0.01957 1.339 0.188 ## dysyes -0.26879 0.03905 -6.883 2.41e-08 ## ## Residual standard error: 0.1049 on 41 degrees of freedom ## Multiple R-squared: 0.6858, Adjusted R-squared: 0.6705 ## F-statistic: 44.75 on 2 and 41 DF, p-value: 4.917e-11 plot(allEffects(dys_modelR, residuals = T)) This model provides little evidence against the null hypothesis that IQ is not linearly related to reading score for all students ($t_{41} = 1.34$, p-value = 0.188), adjusted for dyslexia status, but strong evidence against the null hypothesis of no difference in the true $y$-intercepts ($t_{41} = -6.88$, p-value $<0.00001$) after adjusting for the verbal IQ score. 
Since the IQ term has a large p-value, we could drop it from the model – leaving a model that only includes the grouping variable: dys_modelR2 <- lm(score ~ dys, data = dyslexic3) summary(dys_modelR2) ## ## Call: ## lm(formula = score ~ dys, data = dyslexic3) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.25818 -0.04510 0.02514 0.09520 0.09694 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.90480 0.02117 42.737 <2e-16 ## dysyes -0.29892 0.03222 -9.278 1e-11 ## ## Residual standard error: 0.1059 on 42 degrees of freedom ## Multiple R-squared: 0.6721, Adjusted R-squared: 0.6643 ## F-statistic: 86.08 on 1 and 42 DF, p-value: 1e-11 plot(allEffects(dys_modelR2, residuals = T), grid = T) These results, including the term-plot in Figure 8.37, suggest a difference in the mean reading scores between the two groups and maybe that is all these data really say… This is the logical outcome if we decide that the interaction is not important in this data set. In general, if the interaction is dropped, the interaction model can be reduced to considering an additive model with the categorical and quantitative predictor variables. Either or both of those variables could also be considered for removal, usually starting with the variable with the larger p-value, leaving a string of ever-simpler models possible if large p-values are continually encountered156. It is useful to note that the last model has returned us to the first model we encountered in Chapter 2 where we were just comparing the means for two groups. However, the researchers probably were not seeking to make the discovery that dyslexic students have a tougher time than non-dyslexic students on a reading test but sometimes that is all that the data support. The key part of this sequence of decisions was how much evidence you think a p-value of 0.06 contains… For more than two categories in a categorical variable, the model contains more indicators to keep track of but uses the same ideas. We have to deal with modifying the intercept and slope coefficients for every deviation group so the task is onerous but relatively repetitive. The general model is: $\begin{array}{rl} y_i = \beta_0 &+ \beta_1x_i +\beta_2I_{\text{Level }2,i}+\beta_3I_{\text{Level }3,i} +\cdots+\beta_JI_{\text{Level }J,i} \ &+\beta_{J+1}I_{\text{Level }2,i}\:x_i+\beta_{J+2}I_{\text{Level }3,i}\:x_i +\cdots+\beta_{2J-1}I_{\text{Level }J,i}\:x_i +\varepsilon_i.\ \end{array}$ Specific to the audible tolerance/headache data that had four groups. The model with an interaction present is $\begin{array}{rl} \text{du2}_i = \beta_0 & + \beta_1\cdot\text{du1}_i + \beta_2I_{T1,i} + \beta_3I_{T2,i} + \beta_4I_{\text{T3},i} \ &+ \beta_5I_{T1,i}\cdot\text{du1}_i + \beta_6I_{T2,i}\cdot\text{du1}_i + \beta_7I_{\text{T3},i}\cdot\text{du1}_i+\varepsilon_i.\ \end{array}$ Based on the following output, the estimated general regression model is $\begin{array}{rl} \widehat{\text{du2}}_i = 0.241 &+ 0.839\cdot\text{du1}_i + 1.091I_{T1,i} + 0.855I_{T2,i} +0.775I_{T3,i} \ & - 0.106I_{T1,i}\cdot\text{du1}_i - 0.040I_{T2,i}\cdot\text{du1}_i + 0.093I_{T3,i}\cdot\text{du1}_i.\ \end{array}$ Then we could work out the specific equation for each group with replacing their indicator variable in two places with 1s and the rest of the indicators with 0. 
For example, for the T1 group: $\begin{array}{rll} \widehat{\text{du2}}_i & = 0.241 &+ 0.839\cdot\text{du1}_i + 1.091\cdot1 + 0.855\cdot0 +0.775\cdot0 \ &&- 0.106\cdot1\cdot\text{du1}_i - 0.040\cdot0\cdot\text{du1}_i + 0.093\cdot0\cdot\text{du1}_i \ \widehat{\text{du2}}_i& = 0.241&+0.839\cdot\text{du1}_i + 1.091 - 0.106\cdot\text{du1}_i \ \widehat{\text{du2}}_i& = 1.332 &+ 0.733\cdot\text{du1}_i.\ \end{array}$ head2 <- lm(du2 ~ du1 * treatment, data = Headache) summary(head2) ## ## Call: ## lm(formula = du2 ~ du1 * treatment, data = Headache) ## ## Residuals: ## Min 1Q Median 3Q Max ## -6.8072 -1.0969 -0.3285 0.8192 10.6039 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.24073 0.68331 0.352 0.725 ## du1 0.83923 0.10289 8.157 1.93e-12 ## treatmentT1 1.09084 0.95020 1.148 0.254 ## treatmentT2 0.85524 1.14770 0.745 0.458 ## treatmentT3 0.77471 0.97370 0.796 0.428 ## du1:treatmentT1 -0.10604 0.14326 -0.740 0.461 ## du1:treatmentT2 -0.03981 0.17658 -0.225 0.822 ## du1:treatmentT3 0.09300 0.13590 0.684 0.496 ## ## Residual standard error: 2.148 on 90 degrees of freedom ## Multiple R-squared: 0.7573, Adjusted R-squared: 0.7384 ## F-statistic: 40.12 on 7 and 90 DF, p-value: < 2.2e-16 Or we can let the term-plots (Figures 8.38 and 8.39) show us all four different simplified models. Here we can see that all the slopes “look” to be pretty similar. When the interaction model is fit and the results “look” like the additive model, there is a good chance that we will be able to avoid all this complication and just use the additive model without missing anything interesting. There are two different options for displaying interaction models. Version 1 (Figure 8.38) has a different panel for each level of the categorical variable and Version 2 (Figure 8.39) puts all the lines on the same plot. In this case, neither version shows much of a difference and Version 2 overlaps so much that you can’t see all the groups. In these situations, it can be useful to make the term-plots twice, once with multiline = T and once multiline = F, and then select the version that captures the results best. plot(allEffects(head2, residuals = T), grid = T) #version 1 plot(allEffects(head2), multiline = T, ci.style = "bands", grid = T, lty = c(1:4), lwd = 2) #version 2 In situations with more than 2 levels, the $t$-tests for the interaction or changing $y$-intercepts are not informative for deciding if you really need different slopes or intercepts for all the groups. They only tell you if a specific group is potentially different from the baseline group and the choice of the baseline is arbitrary. To assess whether we really need to have varying slopes or intercepts with more than two groups we need to develop $F$-tests for the interaction part of the model.
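Before moving to those F-tests, the same plug-in arithmetic for all four groups can be automated. This is a short sketch (not from the original text) that assumes the head2 interaction fit from above and uses the coefficient names shown in its summary:

b <- coef(head2)
# Intercept and slope for each treatment group, built from the baseline
# (Control) coefficients plus that group's shifts; the T1 row matches the
# worked example above (about 1.332 and 0.733)
groups <- c("Control", "T1", "T2", "T3")
ints   <- b["(Intercept)"] + c(0, b["treatmentT1"], b["treatmentT2"], b["treatmentT3"])
slopes <- b["du1"] + c(0, b["du1:treatmentT1"], b["du1:treatmentT2"], b["du1:treatmentT3"])
data.frame(treatment = groups, intercept = ints, slope = slopes)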
For models with multi-category $(J>2)$ categorical variables we need a method for deciding if all the extra complexity present in the additive or interaction models is necessary. We can appeal to model selection methods such as the adjusted R2 that focus on balancing model fit and complexity but interests often move to trying to decide if the differences are more extreme than we would expect by chance if there were no group differences in intercepts or slopes. Because of the multi-degree of freedom aspects of the use of indicator variables ($J-1$ variables for a $J$ level categorical variable), we have to develop tests that combine and assess information across multiple “variables” – even though these indicators all pertain to a single original categorical variable. ANOVA $F$-tests did exactly this sort of thing in the One and Two-Way ANOVA models and can do that for us here. There are two models that we perform tests in – the additive and the interaction models. We start with a discussion of the tests in an interaction setting since that provides us the first test to consider in most situations to assess evidence of whether the extra complexity of varying slopes is really needed. If we don’t “need” the varying slopes or if the plot really does have lines for the groups that look relatively parallel, we can fit the additive model and either assess evidence of the need for different intercepts or for the quantitative predictor – either is a reasonable next step. Basically this establishes a set of nested models (each model is a reduced version of another more complicated model higher in the tree of models and we can move down the tree by setting a set of slope coefficients to 0) displayed in Figure 8.40. This is based on the assumption that we would proceed through the model, dropping terms if the p-values are large (“not significant” in the diagram) to arrive at a final model. If the initial interaction test suggests the interaction is important, then no further refinement should be considered and that model should be explored (this was the same protocol suggested in the 2-WAY ANOVA situation, the other place where we considered interactions). If the interaction is not deemed important based on the test, then the model should be re-fit using both variables in an additive model. In that additive model, both variables can be assessed conditional on the other one. If both have small p-values, then that is the final model and should be explored further. If either the categorical or quantitative variable have large p-values, then they can be dropped from the model and the model re-fit with only one variable in it, usually starting with dropping the component with the largest p-value if both are not “small”. Note that if there is only a categorical variable remaining, then we would call that linear model a One-Way ANOVA (quantitative response and $J$ group categorical explanatory) and if the only remaining variable is quantitative, then a SLR model is being fit. If that final variable has a large p-value in either model, it can be removed and all that is left to describe the responses is a mean-only model. Otherwise the single variable model is the final model. Usually we will not have to delve deeply into this tree of models and might stop earlier in the tree if that fully addresses our research question, but it is good to consider the potential paths that an analysis could involve before it is started if model refinement is being considered. 
To perform the first test (after checking that assumptions are not problematic, of course), we can apply the Anova function from the car package to an interaction model157. It will provide three tests, one for each variable by themselves, which are not too interesting, and then the interaction test. This will result in an $F$-statistic that, if the assumptions are true, will follow an $F(J-1, n-2J)$-distribution under the null hypothesis. This tests the hypotheses: • $\boldsymbol{H_0:}$ The slope for $\boldsymbol{x}$ is the same for all $\boldsymbol{J}$ groups in the population vs • $\boldsymbol{H_A:}$ The slope for $\boldsymbol{x}$ in at least one group differs from the others in the population. This test is also legitimate in the case of a two-level categorical variable $(J = 2)$ and then follows an $F(1, n-4)$-distribution under the null hypothesis. With $J = 2$, the p-value from this test matches the results for the $t$-test $(t_{n-4})$ for the single slope-changing coefficient in the model summary output. The noise tolerance study, introduced in Section 8.10, provides a situation for exploring the results in detail. With the $J = 4$ level categorical variable (Treatment), the model for the second noise tolerance measurement (du2) as a function of the interaction between Treatment and initial noise tolerance (du1) is $\begin{array}{rl} \text{du2}_i = \beta_0 &+ \beta_1\cdot\text{du1}_i + \beta_2I_{T1,i} + \beta_3I_{T2,i} + \beta_4I_{T3,i} \ &+ \beta_5I_{T1,i}\cdot\text{du1}_i + \beta_6I_{T2,i}\cdot\text{du1}_i + \beta_7I_{T3,i}\cdot\text{du1}_i+\varepsilon_i. \end{array}$ We can re-write the previous hypotheses in one of two more specific ways: • $H_0:$ The slope for du1 is the same for all four Treatment groups in the population OR • $H_0: \beta_5 = \beta_6 = \beta_7 = 0$ • This defines a null hypothesis that all the deviation coefficients for getting different slopes for the different treatments are 0 in the population. • $H_A:$ The slope for du1 is NOT the same for all four Treatment groups in the population (at least one group has a different slope) OR • $H_A:$ At least one of $\beta_5,\beta_6,\beta_7$ is different from 0 in the population. • The alternative states that at least one of the deviation coefficients for getting different slopes for the different Treatments is not 0 in the population. In this situation, the results for the test of these hypotheses is in the row labeled du1:treatment in the Anova output. The ANOVA table below shows a test statistic of $F = 0.768$ with the numerator df of 3, coming from $J-1$, and the denominator df of 90, coming from $n-2J = 98-2*4 = 90$ and also provided in the Residuals row in the table, leading to an $F(3, 90)$-distribution for the test statistic under the null hypothesis. The p-value from this distribution is 0.515, showing little to no evidence against the null hypothesis, so does not suggest that the slope coefficient for du1 in explaining du2 is different for at least one of the Treatment groups in the population. Anova(head2) ## Anova Table (Type II tests) ## ## Response: du2 ## Sum Sq Df F value Pr(>F) ## du1 1197.78 1 259.5908 <2e-16 ## treatment 23.90 3 1.7265 0.1672 ## du1:treatment 10.63 3 0.7679 0.5150 ## Residuals 415.27 90 Without evidence to support using an interaction, we should consider both the quantitative and categorical variables in an additive model. The ANOVA table for the additive model contains two interesting tests. One test is for the quantitative variable discussed previously. 
The other is for the categorical variable, assessing whether different $y$-intercepts are needed. The additive model here is

$\text{du2}_i = \beta_0 + \beta_1\cdot\text{du1}_i + \beta_2I_{T1,i} + \beta_3I_{T2,i} + \beta_4I_{T3,i} +\varepsilon_i.$

The hypotheses assessed in the ANOVA test for treatment are:

• $H_0:$ The $y$-intercept for the model with du1 is the same for all four Treatment groups in the population OR
• $H_0: \beta_2 = \beta_3 = \beta_4 = 0$
  • This defines a null hypothesis that all the deviation coefficients for getting different $y$-intercepts for the different Treatments are 0 in the population.
• $H_A:$ The $y$-intercepts for the model with du1 are NOT the same for all four Treatment groups in the population (at least one group has a different $y$-intercept) OR
• $H_A:$ At least one of $\beta_2,\beta_3,\beta_4$ is different from 0 in the population.
  • The alternative states that at least one of the deviation coefficients for getting different $y$-intercepts for the different Treatments is not 0 in the population.

The $F$-test for the categorical variable in an additive model follows an $F(J-1, n-J-1)$-distribution under the null hypothesis. For this example, the test statistic for Treatment follows an $F(3, 93)$-distribution under the null hypothesis. The observed test statistic has a value of 1.74, generating a p-value of 0.164. So we would find weak evidence against the null hypothesis, which does not suggest a difference in $y$-intercepts between the treatment groups, in a model with du1, in the population. We could interpret this in the fashion we used initially in MLR by stating this result as: there is little evidence against the null hypothesis of no difference in the mean du2 for the Treatment groups after controlling for du1, so we would conclude that there is possibly no difference between the groups after controlling for du1.

head1 <- lm(du2 ~ du1 + treatment, data = Headache)
Anova(head1)

## Anova Table (Type II tests)
##
## Response: du2
##            Sum Sq Df  F value Pr(>F)
## du1        1197.8  1 261.5491 <2e-16
## treatment    23.9  3   1.7395 0.1643
## Residuals   425.9 93

In the same ANOVA table, there is a test for the du1 model component. This tests $H_0: \beta_1 = 0$ vs $H_A: \beta_1\ne 0$ in a model with different $y$-intercepts for the different treatment groups. If we remove this term from the model, all we are left with is different $y$-intercepts for the groups. A model just with different $y$-intercepts is typically called a One-Way ANOVA model. Here, it appears that the quantitative variable is needed in the model after controlling for the different $y$-intercepts for different treatments since it has a small p-value ($F$(1,93) = 261.55 or $t$(93) = 16.172, p-value < 0.0001). Note that this interpretation retains the conditional wording regardless of whether the other variable had a small p-value or it did not. If you want an unconditional interpretation for a variable, then you will need to refit the model without the other variable(s) after deciding that they are not important.
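The same interaction test can also be framed as a direct comparison of the nested additive (reduced) and interaction (full) models; a minimal sketch of that comparison is below, and its $F$-statistic and p-value should reproduce the du1:treatment results from above.

anova(head1, head2) #extra sum-of-squares F-test: additive (reduced) vs interaction (full)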
8.13: AICs for model selection
There are a variety of techniques for selecting among a set of potential models or refining an initially fit MLR model. Hypothesis testing can be used (in the case where we have nested models either by adding or deleting a single term at a time) or comparisons of adjusted R2 across different potential models (which is valid for nested or non-nested model comparisons). Diagnostics should play a role in the models considered and in selecting among models that might appear to be similar on a model comparison criterion. In this section, a new model selection method is introduced that has stronger theoretical underpinnings, a slightly more interpretable scale, and, often, better performance in picking an optimal158 model than the adjusted R2. The measure is called the AIC (Akaike's An Information Criterion159, Akaike (1974)). It is extremely popular, but sometimes misused, in some fields such as Ecology, and has been applied in almost every other potential application area where statistical models can be compared. Burnham and Anderson (2002) have been responsible for popularizing the use of AIC for model selection, especially in Ecology.

The AIC is an estimate of the distance (or discrepancy or divergence) between a candidate model and the true model, on a log-scale, based on a measure called the Kullback-Leibler divergence. The models that are closer (have a smaller distance) to the truth are better and we can compare how close two models are to the truth, picking the one that has a smaller distance (smaller AIC) as better. The AIC includes a component that is on the log-scale, so negative values are possible and you should not be disturbed if you are comparing large magnitude negative numbers – just pick the model with the smallest AIC score. The AIC is optimized (smallest) for a model that contains the optimal balance of simplicity of the model with quality of fit to the observations. Scientists are driven to different degrees by what is called the principle of parsimony: that simpler explanations (models) are better if everything else is equal or even close to equal. In this case, it would mean that if two models are similarly good on AIC, then select the simpler of the two models since it is more likely to be correct in general than the more complicated model.

The AIC is calculated as $\text{AIC} = -2\log(\text{Likelihood})+2m$, where the Likelihood provides a measure of fit of the model (we let R calculate it for us) and gets larger for better fitting models, and $m$ = (number of estimated $\beta\text{'s}+1$). The value $m$ is called the model degrees of freedom for AIC calculations and relates to how many total parameters are estimated. Note that it is a different measure of degrees of freedom than used in ANOVA $F$-tests. The main things to understand about the formula for the AIC are that as $m$ increases, the AIC will go up and that as the fit improves, the likelihood will increase (so -2log-likelihood will get smaller)160.

There are some facets of this discussion to keep in mind when comparing models. More complicated models always fit better (we saw this for the R2 measure, as the proportion of variation explained always goes up if more "stuff" is put into the model even if the "stuff" isn't useful). The AIC resembles the adjusted R2 in that it incorporates the count of the number of parameters estimated. This allows the AIC to make sure that enough extra variability is explained in the responses to justify making the model more complicated (increasing $m$).
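To connect the formula to R's output, a small sketch is below; the model formula and the names mod and DATASETNAME are placeholders for whatever model is being considered, but the relationship between the log-likelihood, $m$, and the AIC is exactly what the formula above describes.

mod <- lm(y ~ x1 + x2, data = DATASETNAME) #any fitted lm works here
logLik(mod) #log-likelihood, with the model df (the m in the formula) stored as an attribute
-2*as.numeric(logLik(mod)) + 2*attr(logLik(mod), "df") #-2*log-likelihood + 2m
AIC(mod) #should match the previous line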
The optimal model on AIC has to balance adding complexity and increasing quality of the fit. Since this measure provides an estimate of the distance or discrepancy to the "true model", the model with the smallest value "wins" – it is top-ranked on the AIC. Note that the top-ranked AIC model will often not be the best fitting model since the best fitting model is always the most complicated model considered. The top AIC model is the one that is estimated to be closest to the truth, where the truth is still unknown… To help with interpreting the scale of AICs, they are often reported in a table sorted from smallest to largest values with the AIC and the "delta AIC" or, simply, $\Delta\text{AIC}$ reported. The $\Delta\text{AIC} = \text{AIC}_{\text{model}} - \text{AIC}_{\text{topModel}}$ and so provides a value of 0 for the top-ranked AIC model and a measure of how much worse on the AIC scale the other models are. A rule of thumb is that a 2 unit difference on AICs $(\Delta\text{AIC} = 2)$ is moderate evidence of a difference in the models and more than 4 units $(\Delta\text{AIC}>4)$ is strong evidence of a difference. This is more based on experience than a distinct reason or theoretical result but seems to provide reasonable results in most situations. Often researchers will consider any models within 2 AIC units of the top model $(\Delta\text{AIC}<2)$ as indistinguishable on AICs and so either select the simplest model of the choices or report all the models with similar "support", allowing the reader to explore the suite of similarly supported potential models.

It is important to remember that if you search across too many models, even with the AIC to support your model comparisons, you might find a spurious top model. Individual results that are found by exploring many tests or models have a higher chance of being spurious and results found in this manner are difficult to replicate when someone repeats a similar study. For these reasons, there is a set of general recommendations that have been developed for using AICs:

• Consider a suite of models (often pre-specified and based on prior research in the area of interest) and find the models with the top (in other words, smallest) AIC results.
• The suite of candidate models needs to contain at least some good models. Selecting the best of a set of BAD models only puts you at the top of $%#%-mountain, which is not necessarily a good thing.
• Report a table with the models considered, sorted from smallest to largest AICs ($\Delta\text{AICs}$ from smaller to larger) that includes a count of the number of parameters estimated161, the AICs, and $\Delta\text{AICs}$.
• Remember to incorporate the mean-only model in the model selection results. This allows you to compare the top model to one that does not contain any predictors.
• Interpret the top model or top models if a few are close on the AIC-scale to the top model.
• DO NOT REPORT P-VALUES OR CALL TERMS "SIGNIFICANT" when models were selected using AICs.
• Hypothesis testing and AIC model selection are not compatible philosophies and testing in models selected by AICs invalidates the tests as they have inflated Type I error rates. The AIC results are your "evidence" – you don't need anything else. If you wanted to report p-values, use them to select your model.
• You can describe variables as "important" or "useful" and report confidence intervals to aid in interpretation of the terms in the selected model(s) but need to avoid performing hypothesis tests with the confidence intervals.
• Remember that the selected model is not the "true" model – it is only the best model according to AIC among the set of models you provided.
• AICs assume that the model is specified correctly up to possibly comparing different predictor variables. Perform diagnostic checks on your initial model and the top model and do not trust AICs when assumptions are clearly violated (p-values are similarly not valid in that situation).

When working with AICs, there are two options. The first is to fit the models of interest and then run the AIC function on each model. This can be tedious, especially when we have many possible models to consider. The second is to make it easy to fit all the potential candidate models that are implied by a complicated starting model by using the dredge function from the MuMIn package (Bartoń 2022). The name (dredge) actually speaks to what fitting all possible models really involves – what is called data dredging. The term is meant to refer to considering way too many models for your data set, probably finding something good from the process, but maybe identifying something spurious since you looked at so many models. Note that if you take a hypothesis testing approach where you plan to remove any terms with large p-values in this same situation, you are really considering all possible models as well because you could have removed some or all model components. Methods that consider all possible models are probably best used in exploratory analyses where you do not know if any or all terms should be important. If you have more specific research questions, then you probably should try to focus on comparisons of models that help you directly answer those questions, either with AIC or p-value methods.

The dredge function provides an automated method of assessing all possible simpler models based on an initial (full) model. It generates a table of AIC results, $\Delta\text{AICs}$, and also shows when various predictors are in or out of the model for all reduced models possible from an initial model. For quantitative predictors, the estimated slope is reported when that predictor is in the model. For categorical variables and interactions with them, it just puts a "+" in the table to let you know that the term is in the model. Note that you must run the options(na.action = "na.fail") code to get dredge to work162. To explore the AICs and compare their results to the adjusted R2 that we used before for model selection, we can revisit the Snow Depth data set with related results found in Section 8.4 and Table 8.1. In that situation we were considering a "full" model that included Elevation, Min.Temp, and Max.Temp as potential predictor variables after removing two influential points. And we considered all possible reduced models from that "full"163 model. Note that the dredge output adds one more model that adjusted R2 can't consider – the mean-only model that contains no predictor variables. In the following output it is the last model in the output (worst ranked on AIC). Including the mean-only model in these results helps us "prove" that there is support for having something in the model, but only if there is better support for other models than this simplest possible model. In reading dredge output164 as it is constructed here, the models are sorted from top to bottom by AIC values (smallest AIC to largest). The column delta is for the $\Delta\text{AICs}$ and shows a 0 for the first row, which is the top-ranked AIC model. Here it is for the model with Elevation and Max.Temp but not including Min.Temp.
This was also the top ranked model from adjusted R2, which is reproduced in the adjRsq column. The AIC is calculated using the previous formula based on the df and logLik columns. The df is also a useful column for comparing models as it helps you see how complex each model is. For example, the top model used up 4 model df (three $\beta\text{'s}$ and the residual error variance) and the most complex model that included all three predictor variables used up 5 model df.

library(MuMIn)
options(na.action = "na.fail") #Must run this code once to use dredge
snotel2R <- snotel_s %>% slice(-c(9,22))
m6 <- lm(Snow.Depth ~ Elevation + Min.Temp + Max.Temp, data = snotel2R)
dredge(m6, rank = "AIC",
       extra = c("R^2", adjRsq = function(x) summary(x)$adj.r.squared))

## Global model call: lm(formula = Snow.Depth ~ Elevation + Min.Temp + Max.Temp, data = snotel2R)
## ---
## Model selection table
##     (Int)     Elv Max.Tmp Min.Tmp    R^2 adjRsq df   logLik   AIC delta weight
## 4 -167.50 0.02408  1.2530         0.8495 0.8344  4  -80.855 169.7  0.00  0.568
## 8 -213.30 0.02686  1.2430  0.9843 0.8535 0.8304  5  -80.541 171.1  1.37  0.286
## 2  -80.41 0.01791                 0.8087 0.7996  3  -83.611 173.2  3.51  0.098
## 6 -130.70 0.02098          1.0660 0.8134 0.7948  4  -83.322 174.6  4.93  0.048
## 5  179.60                 -5.0090 0.6283 0.6106  3  -91.249 188.5 18.79  0.000
## 7  178.60         -0.2687 -4.6240 0.6308 0.5939  4  -91.170 190.3 20.63  0.000
## 3  119.50         -2.1800         0.4131 0.3852  3  -96.500 199.0 29.29  0.000
## 1   40.21                         0.0000 0.0000  2 -102.630 209.3 39.55  0.000
## Models ranked by AIC(x)

You can use the table of results from dredge to find information to compare the estimated models. There are two models that are clearly favored over the others, with $\Delta\text{AICs}$ for the model with Elevation and Max.Temp of 0 and for the model with all three predictors of 1.37. The $\Delta\text{AIC}$ for the third ranked model (contains just Elevation) is 3.51, suggesting clear support for the top model over this one because of a difference of 3.51 AIC units in estimated distance to the truth. The difference between the second and third ranked models also provides relatively strong support for the more complex model over the model with just Elevation. And the mean-only model had a $\Delta\text{AIC}$ of nearly 40 – suggesting extremely strong evidence for the top model versus using no predictors. So we have pretty clear support for models that include the Elevation and Max.Temp variables (in both top models) and some support for also including the Min.Temp, but the top model did not require its inclusion. It is also possible to think about the AICs as a result on a number line from "closest to the truth" to "farthest" for the suite of models considered, as shown in Figure 8.41. We could add further explorations of the term-plots and confidence intervals for the slopes from the top or, here, possibly top two models. We would not spend any time with p-values since we already used the AIC to assess evidence related to the model components and they are invalid if we model select prior to reporting them. Because the shared predictors in the two top models are quantitative variables, we can quickly compare their slopes using the output. It is interesting that the Elevation and Max.Temp slopes change little with the inclusion of Min.Temp in moving from the top to second ranked model (0.02408 to 0.02686 and 1.253 to 1.243). This was an observational study and so we can't consider causal inferences here, as discussed previously.
Generally, the use of AICs does not preclude making causal statements but if you have randomized assignment of levels of an explanatory variable, it is more philosophically consistent to use hypothesis testing methods in that setting. If you went to the effort to impose the levels of a treatment on the subjects, it also makes sense to see if the differences created are beyond what you might expect by chance if the treatment didn’t matter.
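As suggested above, the top two AIC models could be explored further with confidence intervals and term-plots. A minimal sketch of doing that by refitting the two models directly is below; the object names top1 and top2 are placeholders, and the term-plot line assumes the effects package used earlier in the chapter.

top1 <- lm(Snow.Depth ~ Elevation + Max.Temp, data = snotel2R) #top AIC model
top2 <- lm(Snow.Depth ~ Elevation + Max.Temp + Min.Temp, data = snotel2R) #second-ranked model
confint(top1)
confint(top2)
plot(allEffects(top1), grid = T) #term-plots for the top model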
8.14: Case study - Forced expiratory volume model selection using AICs
Researchers were interested in studying the effects of smoking by children on their lung development by measuring the forced expiratory volume (FEV, measured in Liters) in a representative sample of children ($n = 654$) between the ages of 3 and 19; this data set is available in the FEV data set in the coneproj package (M. C. Meyer and Liao (2021), Liao and Meyer (2014)). Measurements on the age (in years) and height (in inches) as well as the sex and smoking status of the children were made. We would expect both the age and height to have positive relationships with FEV (lung capacity) and that smoking might decrease the lung capacity but also that older children would be more likely to smoke. So the height and age might be confounded with smoking status and smoking might diminish lung development for older kids – resulting in a potential interaction between age and smoking. The sex of the child might also matter and should be considered or at least controlled for since the response is a size-based measure. This creates the potential for including up to four variables (age, height, sex, and smoking status) and possibly the interaction between age and smoking status. Initial explorations suggested that modeling the log-FEV would be more successful than trying to model the responses on the original scale. Figure 8.42 shows the suggestion of different slopes for the smokers than non-smokers and that there aren’t very many smokers under 9 years old in the data set. So we will start with a model that contains an age by smoking interaction and include height and sex as additive terms. We are not sure if any of these model components will be needed, so the simplest candidate model will be to remove all the predictors and just have a mean-only model (FEV ~ 1). In between the mean-only and most complicated model are many different options where we can drop the interaction or drop the additive terms or drop the terms involved in the interaction if we don’t need the interaction. library(coneproj) data(FEV) FEV <- as_tibble(FEV) FEV <- FEV %>% mutate(sex = factor(sex), #Make sex and smoke factors, log.FEV smoke = factor(smoke), log.FEV = log(FEV)) levels(FEV$sex) <- c("Female","Male") #Make sex labels explicit levels(FEV$smoke) <- c("Nonsmoker","Smoker") #Make smoking status labels explicit p1 <- FEV %>% ggplot(mapping = aes(x = age, y = log.FEV, color = smoke, shape = smoke)) + geom_point(size = 1.5, alpha = 0.5) + geom_smooth(method = "lm") + theme_bw() + scale_color_viridis_d(end = 0.8) + labs(title = "Plot of log(FEV) vs Age of children by smoking status", y = "log(FEV)") p2 <- FEV %>% ggplot(mapping = aes(x = age, y = log.FEV, color = smoke, shape = smoke)) + geom_point(size = 1.5, alpha = 0.5) + geom_smooth(method = "lm") + theme_bw() + scale_color_viridis_d(end = 0.8) + labs(title = "Plot of log(FEV) vs Age of children by smoking status", y = "log(FEV)") + facet_grid(cols = vars(smoke)) grid.arrange(p1, p2, nrow = 2) To get the needed results, start with the full model – the most complicated model you want to consider. It is good to check assumptions before considering reducing the model as they rarely get better in simpler models and the AIC is only appropriate to use if the model assumptions are not clearly violated. As suggested above, our “fullish” model for the log(FEV) values is specified as log(FEV) ~ height + age * smoke + sex. 
fm1 <- lm(log.FEV ~ height + age * smoke + sex, data = FEV)
summary(fm1)

##
## Call:
## lm(formula = log.FEV ~ height + age * smoke + sex, data = FEV)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -0.62926 -0.08783  0.01136  0.09658  0.40751
##
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)
## (Intercept)     -1.919494   0.080571 -23.824  < 2e-16
## height           0.042066   0.001759  23.911  < 2e-16
## age              0.025368   0.003642   6.966 8.03e-12
## smokeSmoker      0.107884   0.113646   0.949  0.34282
## sexMale          0.030871   0.011764   2.624  0.00889
## age:smokeSmoker -0.011666   0.008465  -1.378  0.16863
##
## Residual standard error: 0.1454 on 648 degrees of freedom
## Multiple R-squared: 0.8112, Adjusted R-squared: 0.8097
## F-statistic: 556.8 on 5 and 648 DF, p-value: < 2.2e-16

par(mfrow = c(2,2), oma = c(0,0,2,0))
plot(fm1, pch = 16, sub.caption = "")
title(main = "Diagnostics for full FEV model", outer = TRUE)

The diagnostic plots suggest that there are a few outlying points (Figure 8.43) but they are not influential and there is no indication of violations of the constant variance assumption. There is a slight left skew with a long left tail to cause a very minor concern with the normality assumption but not enough to be concerned about our inferences from this model. If we select a different model (or models), we would want to check its diagnostics and make sure that the results do not look noticeably worse than these do.

plot(allEffects(fm1, residuals = T), grid = T)

Figure 8.44 provides our first term-plot with multiple predictors and an interaction. Each term is interpreted using our "conditional" language for any of the other two panels. So we could explore the impacts of height on log-FEV after controlling for sex as well as age, smoking status, and the age by smoking interaction. This mirrors how we would interpret any of the coefficients or confidence intervals from this full model (we are not doing hypothesis tests here). The AIC function can be used to generate the AIC values for a single candidate model or a set of candidate models. It will also provide the model degrees of freedom used for each model if you run the function on multiple models. For example, suppose that we want to compare fm1 to a model without the interaction term in the model, called fm1R. You need to fit both models and then apply the AIC function to them with commas between the model names:

fm1R <- lm(log.FEV ~ height + age + smoke + sex, data = FEV)
AIC(fm1, fm1R)

##      df       AIC
## fm1   7 -658.5178
## fm1R  6 -658.6037

These results tell us that the fm1R model (the one without the interaction) is better (more negative) on the AIC by 0.09 AIC units. Note that this model does not "fit" as well as the full model; it is just the top AIC model – the AIC results suggest that it is slightly closer to the truth than the more complicated model but with such a small difference there is similar support and little evidence of a difference between the two models. This provides only an assessment of the difference between including or excluding the interaction between age and smoking in a model with two other predictors. We are probably also interested in whether the other terms are needed in the model. The full suite of results from dredge provides model comparisons that help us to assess the presence/absence of each model component including the interaction.
options(na.action = "na.fail") #Must run this code once to use dredge dredge(fm1, rank = "AIC", extra = c("R^2", adjRsq = function(x) summary(x)$adj.r.squared)) ## Global model call: lm(formula = log.FEV ~ height + age * smoke + sex, data = FEV) ## --- ## Model selection table ## (Int) age hgh sex smk age:smk R^2 adjRsq df logLik AIC delta weight ## 16 -1.944000 0.02339 0.04280 + + 0.81060 0.80950 6 335.302 -658.6 0.00 0.414 ## 32 -1.919000 0.02537 0.04207 + + + 0.81120 0.80970 7 336.259 -658.5 0.09 0.397 ## 8 -1.940000 0.02120 0.04299 + 0.80920 0.80830 5 332.865 -655.7 2.87 0.099 ## 12 -1.974000 0.02231 0.04371 + 0.80880 0.80790 5 332.163 -654.3 4.28 0.049 ## 28 -1.955000 0.02388 0.04315 + + 0.80920 0.80800 6 332.802 -653.6 5.00 0.034 ## 4 -1.971000 0.01982 0.04399 0.80710 0.80650 4 329.262 -650.5 8.08 0.007 ## 7 -2.265000 0.05185 + 0.79640 0.79580 4 311.594 -615.2 43.42 0.000 ## 3 -2.271000 0.05212 0.79560 0.79530 3 310.322 -614.6 43.96 0.000 ## 15 -2.267000 0.05190 + + 0.79640 0.79550 5 311.602 -613.2 45.40 0.000 ## 11 -2.277000 0.05222 + 0.79560 0.79500 4 310.378 -612.8 45.85 0.000 ## 30 -0.067780 0.09493 + + + 0.64460 0.64240 6 129.430 -246.9 411.74 0.000 ## 26 -0.026590 0.09596 + + 0.62360 0.62190 5 110.667 -211.3 447.27 0.000 ## 14 -0.015820 0.08963 + + 0.62110 0.61930 5 108.465 -206.9 451.67 0.000 ## 6 0.004991 0.08660 + 0.61750 0.61630 4 105.363 -202.7 455.88 0.000 ## 10 0.022940 0.09077 + 0.60120 0.60000 4 91.790 -175.6 483.02 0.000 ## 2 0.050600 0.08708 0.59580 0.59520 3 87.342 -168.7 489.92 0.000 ## 13 0.822000 + + 0.09535 0.09257 4 -176.092 360.2 1018.79 0.000 ## 9 0.888400 + 0.05975 0.05831 3 -188.712 383.4 1042.03 0.000 ## 5 0.857400 + 0.02878 0.02729 3 -199.310 404.6 1063.22 0.000 ## 1 0.915400 0.00000 0.00000 2 -208.859 421.7 1080.32 0.000 ## Models ranked by AIC(x) There is a lot of information in the output and some of the needed information in the second set of rows, so we will try to point out some useful features to consider. The left columns describe the models being estimated. For example, the first row of results is for a model with an intercept (Int), age (age) , height (hgh), sex (sex), and smoking(smk). For sex and smoking, there are “+”s in the output row when they are included in that model but no coefficient since they are categorical variables. There is no interaction between age and smoking in the top ranked model. The top AIC model has an $\boldsymbol{R}^2 = 0.8106$, adjusted R2 of 0.8095, model df = 6 (from an intercept, four slopes, and the residual variance), log-likelihood (logLik) = 335.302, an AIC = -658.6 and $\Delta\text{AIC}$ of 0.00. The next best model adds the interaction between age and smoking, resulting in increases in the R2, adjusted R2, and model df, but increasing the AIC by 0.09 units $(\Delta\text{AIC} = 0.09)$. This suggests that these two models are essentially equivalent on the AIC because the difference is so small and this comparison was discussed previously. The simpler model is a little bit better on AIC so you could focus on it or on the slightly more complicated model – but you should probably note that the evidence is equivocal for these two models. The comparison to other potential models shows the strength of evidence in support of all the other model components. The intercept-only model is again the last in the list with the least support on AICs with a $\Delta\text{AIC}$ of 1080.32, suggesting it is not worth considering in comparison with the top model. 
Comparing the mean-only model to our favorite model on AICs is a bit like the overall $\boldsymbol{F}$-test we considered in Section 8.7 because it compares a model with no predictors to a complicated model. Each model with just one predictor included is available in the table as well, with the top single predictor model based on height having a $\Delta\text{AIC}$ of 43.96. So we certainly need to pursue something more complicated than an SLR, given such strong evidence for the more complex models versus the single predictor models at over 40 AIC units of difference. Closer to the top model is the third-ranked model that includes age, height, and sex. It has a $\Delta\text{AIC}$ of 2.87 so we would say that these results present marginal support for the top two models over this model. It is the simplest model of the top three but not close enough to be considered in detail. The dredge results also provide the opportunity to compare the model selection results from the adjusted R2 to those from the AIC. The AIC favors the model without an interaction between age and smoking whereas the adjusted R2 favors the most complicated model considered here that included an age and smoking interaction. The AIC provides units that are more interpretable than adjusted R2 even though the scale for the AIC is a bit mysterious, since it measures distances from an unknown true model and those distances can even be negative.

The top AIC model (and possibly the other similar models) can then be explored in more detail. You should not then focus on hypothesis testing in this model. Hypothesis testing so permeates the use of statistics that even after using AICs many researchers are pressured to report p-values for model components. Some of this could be confusion created when people first learned these statistical methods: when we teach you statistics, we show you how to use various methods, one after another, and forget to mention that you should not use every method you were taught in every analysis. Confidence intervals and term-plots are useful for describing the different model components and making inferences for the estimated sizes of differences in the population. These results should not be used for deciding if terms are "significant" when the models (and their components) have already been selected using measures like the AIC or adjusted R2. But you can discuss the estimated model components to go with how you arrived at having them in the model. In this situation, the top model is estimated to be $\log(\widehat{\text{FEV}})_i = -1.94 + 0.043\cdot\text{Height}_i+ 0.0234\cdot\text{Age}_i -0.046I_{\text{Smoker},i}+0.0293I_{\text{Male},i}$ based on the estimated coefficients provided below. Using these results and the term-plots (Figure 8.46) we see that in this model there are positive slopes for Age and Height on log-FEV, a negative coefficient for smoking (Smoker), and a positive coefficient for sex (Males). There is some multicollinearity impacting the estimates for height and age based on having VIFs near 3 but these are not extreme issues. We could go further with interpretations such as for the age term: For a 1 year increase in age, we estimate, on average, a 0.0234 log-liter increase in FEV, after controlling for the height, smoking status, and sex of the children. We can even interpret this on the original scale since this was a log(y) response model using the same techniques as in Section 7.6. If we exponentiate the slope coefficient of the quantitative variable, $\exp(0.0234) = 1.0237$.
This provides the interpretation on the original FEV scale: for a 1 year increase in age, we estimate a 2.4% increase in the median FEV, after controlling for the height, smoking status, and sex of the children. The only difference from Section 7.6 when working with a log(y) model now is that we have to note that the model used to generate the slope coefficient had other components and so this estimate is after adjusting for them.

fm1R$coefficients

## (Intercept)      height         age smokeSmoker     sexMale
## -1.94399818  0.04279579  0.02338721 -0.04606754  0.02931936

vif(fm1R)

##   height      age    smoke      sex
## 2.829728 3.019010 1.209564 1.060228

confint(fm1R)

##                    2.5 %       97.5 %
## (Intercept) -2.098414941 -1.789581413
## height       0.039498923  0.046092655
## age          0.016812109  0.029962319
## smokeSmoker -0.087127344 -0.005007728
## sexMale      0.006308481  0.052330236

plot(allEffects(fm1R), grid = T)

Like any statistical method, the AIC works better with larger sample sizes and when assumptions are not clearly violated. It also will detect important variables in models more easily when the effects of the predictor variables are strong. Along with the AIC results, it is good to report the coefficients for your top estimated model(s), confidence intervals for the coefficients and/or term-plots, and R2. This provides a useful summary of the reasons for selecting the model(s), information on the importance of the terms within the model, and a measure of the variability explained by the model. The R2 is not used to select the model, but after selection can be a nice summary of model quality. For fm1R, the $R^2 = 0.8106$, suggesting that the selected model explains 81% of the variation in log-FEV values.

The AICs are a preferred modeling strategy in some fields such as Ecology. As with this and many other methods discussed in this book, it is sometimes as easy to find journal articles with mistakes in using statistical methods as it is to find papers doing it correctly. After completing this material, you have the potential to have the knowledge and experience of two statistics classes and now are better trained than some researchers who frequently use these methods. This set of tools can be easily mis-applied. Try to make sure that you are thinking carefully through your problem before jumping to the statistical results. Make a graph first, think carefully about your study design and variables collected and what your models of interest might be, what assumptions might be violated based on the data collection story, and then start fitting models. Then check your assumptions and only proceed with any inference if the conditions are not clearly violated. The AIC provides an alternative method for selecting among different potential models and they do not need to be nested (a requirement of hypothesis testing methods used to sequentially simplify models). The automated consideration of all possible models in the dredge function should not be used in all situations but can be useful in a preliminary model exploration study where no clear knowledge exists about useful models to consider. Where some knowledge exists of possible models of interest a priori, fit those models and use the AIC function to get AICs to compare. Reporting the summary of AIC results beyond just reporting the top model(s) that were selected for focused exploration provides the evidence to support that selection – not p-values!
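For reference, the back-transformation described above can be produced directly from the stored model; a small sketch is below (it simply exponentiates the age coefficient and its confidence interval from fm1R, so it should reproduce the 1.0237 value).

exp(coef(fm1R)["age"]) #multiplicative change in median FEV for a 1 year increase in age
exp(confint(fm1R)["age", ]) #95% CI for that multiplicative change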
8.15: Chapter summary
This chapter covered the most complicated models that we are going to explore. MLR models can incorporate features of SLR and ANOVAs. The MLRs used in this chapter highlight the flexibility of the linear modeling framework to move from two-sample mean models to multi-predictor models with interactions of categorical and quantitative variables. It is useful to use the pertinent names for the simpler models, but at this point we could have called everything we are doing fitting linear models. The power of the linear model involves being able to add multiple predictor variables to the model and handle categorical predictors using indicator variables. All this power comes with some responsibility in that you need to know what you are trying to fit and how to interpret the results provided. We introduced each scenario working from simple to the most complicated version of the models, trying to motivate when you would encounter them, and the specific details of the interpretations of each type of model. In Chapter 9, case studies are used to review the different methods discussed with reminders of how to identify and interpret the particular methods used.

When you have to make modeling decisions, you should remember the main priorities in modeling. First, you need to find a model that can address the research question(s) of interest. Second, find a model that is trustworthy by assessing the assumptions in the model relative to your data set. Third, report the logic and evidence that was used to identify and support the model. All too often, researchers present only a final model with little information on how they arrived at it. You should be reporting the reasons for decisions made and the evidence supporting them, whether that is using p-values or some other model selection criterion. For example, if you were considering an interaction model and the interaction was dropped and an additive model is re-fit and interpreted, the evidence related to the interaction test should still be reported. Similarly, if a larger MLR is considered and some variables are removed, the evidence (reason) for those removals should be provided. Because of multicollinearity in models, you should never remove more than one quantitative predictor at a time or else you could remove two variables that are important but were "hiding" when both were included in the model.

8.16: Summary of important R code

There is very little "new" R code in this chapter since all these methods were either used in the ANOVA or SLR chapters. The models are more complicated but are built off of methods from previous chapters. In this code, y is a response variable, x1, x2, …, xK are quantitative explanatory variables, groupfactor is a factor variable, and the data are in DATASETNAME.

• DATASETNAME %>% ggplot(mapping = aes(x = x, y = y)) + geom_point() + geom_smooth(method = "lm")
  • Provides a scatter plot with a regression line.
  • Add + geom_smooth() to add a smoothing line to help detect nonlinear relationships.
  • Add color = groupfactor to the aesthetic to color points and lines based on a grouping variable.
  • Add + facet_grid(cols = vars(groupfactor)) to facet by groups.
• MODELNAME <- lm(y ~ x1 + x2 + … + xK, data = DATASETNAME)
  • Estimates an MLR model using least squares with $K$ quantitative predictors.
• MODELNAME <- lm(y ~ x1 * groupfactor, data = DATASETNAME)
  • Estimates an interaction model between a quantitative and categorical variable, providing different slopes and intercepts for each group.
• MODELNAME <- lm(y ~ x1 + groupfactor, data = DATASETNAME)
  • Estimates an additive model with a quantitative and categorical variable, providing different intercepts for each group.
• summary(MODELNAME)
  • Provides parameter estimates, overall $F$-test, R2, and adjusted R2.
• par(mfrow = c(2, 2)); plot(MODELNAME)
  • Provides four regression diagnostic plots in one plot.
• confint(MODELNAME, level = 0.95)
  • Provides 95% confidence intervals for the regression model coefficients.
  • Change level if you want other confidence levels.
• plot(allEffects(MODELNAME))
  • Requires the effects package.
  • Provides a plot of the estimated regression lines with 95% confidence interval for the mean.
• vif(MODELNAME)
  • Requires the car package.
  • Provides VIFs for an MLR model. Only use in additive models – not meaningful for models with interactions present.
• predict(MODELNAME, se.fit = T)
  • Provides fitted values for all observed $x\text{'s}$ with SEs for the mean.
• predict(MODELNAME, newdata = tibble(x1 = X1_NEW, x2 = X2_NEW, …, xK = XK_NEW), interval = "confidence")
  • Provides fitted value for specific values of the quantitative predictors with CI for the mean.
• predict(MODELNAME, newdata = tibble(x1 = X1_NEW, x2 = X2_NEW, …, xK = XK_NEW), interval = "prediction")
  • Provides fitted value for specific values of the quantitative predictors with PI for a new observation.
• Anova(MODELNAME)
  • Requires the car package.
  • Use to generate ANOVA tables and $F$-tests useful when categorical variables are included in either the additive or interaction models.
• AIC(MODELNAME_1, MODELNAME_2)
  • Use to get AIC results for two candidate models called MODELNAME_1 and MODELNAME_2.
• options(na.action = "na.fail")
  dredge(FULL_MODELNAME, rank = "AIC")
  • Requires the MuMIn package.
  • Provides AIC and delta AIC results for all possible simpler models given a full model called FULL_MODELNAME.
8.17: Practice problems
8.1. Treadmill data analysis The original research goal for the treadmill data set used for practice problems in the last two chapters was to replace the costly treadmill oxygen test with a cheap to find running time measurement but there were actually quite a few variables measured when the run time was found – maybe we can replace the treadmill test result with a combined prediction built using a few variables using the MLR techniques. The following code will get us re-started in this situation. ``````treadmill <- read_csv("http://www.math.montana.edu/courses/s217/documents/treadmill.csv") tm1 <- lm(TreadMillOx ~ RunTime, data = treadmill)`````` 8.1.1. Fit the MLR that also includes the running pulse (`RunPulse`), the resting pulse (`RestPulse`), body weight (`BodyWeight`), and Age (`Age`) of the subjects. Report and interpret the R2 for this model. 8.1.2. Compare the R2 and the adjusted R2 to the results for the SLR model that just had `RunTime` in the model. What do these results suggest? 8.1.3. Interpret the estimated `RunTime` slope coefficients from the SLR model and this MLR model. Explain the differences in the estimates. 8.1.4. Find the VIFs for this model and discuss whether there is an issue with multicollinearity noted in these results. 8.1.5. Report the value for the overall \(F\)-test for the MLR model and interpret the result. 8.1.6. Drop the variable with the largest p-value in the MLR model and re-fit it. Compare the resulting R2 and adjusted R2 values to the others found previously. 8.1.7. Use the `dredge` function as follows to consider some other potential reduced models and report the top two models according to adjusted R2 values. What model had the highest R2? Also discuss and compare the model selection results provided by the delta AICs here. ``````library(MuMIn) options(na.action = "na.fail") #Must run this code once to use dredge dredge(MODELNAMEFORFULLMODEL, rank = "AIC", extra = c("R^2", adjRsq = function(x) summary(x)\$adj.r.squared))`````` 8.1.8. For one of the models, interpret the Age slope coefficient. Remember that only male subjects between 38 and 57 participated in this study. Discuss how this might have impacted the results found as compared to a more general population that could have been sampled from. 8.1.9. The following code creates a new three-level variable grouping the ages into low, middle, and high for those observed. The scatterplot lets you explore whether the relationship between treadmill oxygen and run time might differ across the age groups. ``````treadmill <- treadmill %>% mutate(Ageb = factor(cut(Age, breaks = c(37, 44.5, 50.5, 58)))) summary(treadmill\$Ageb) treadmill %>% ggplot(mapping = aes(x = RunTime, y = TreadMillOx, color = Ageb, shape = Ageb)) + geom_point(size = 1.5, alpha = 0.5) + geom_smooth(method = "lm") + theme_bw() + scale_color_viridis_d(end = 0.8) + facet_grid(rows = vars(Ageb))`````` Based on the plot, do the lines look approximately parallel or not? 8.1.10. Fit the MLR that contains a `RunTime` by `Ageb` interaction – do not include any other variables. Compare the R2 and adjusted R2 results to previous models. 8.1.11. Find and report the results for the \(F\)-test that assesses evidence relative to the need for different slope coefficients. 8.1.12. Write out the overall estimated model. What level was R using as baseline? Write out the simplified model for two of the age levels. Make an effects plot and discuss how it matches the simplified models you generated. 8.1.13. 
Fit the additive model with `RunTime` and predict the mean treadmill oxygen values for subjects with run times of 11 minutes in each of the three `Ageb` groups. 8.1.14. Find the \(F\)-test results for the binned age variable in the additive model. Report and interpret those results. References Akaike, Hirotugu. 1974. “A New Look at the Statistical Model Identification.” IEEE Transactions on Automatic Control 19: 716–23. Bartoń, Kamil. 2022. MuMIn: Multi-Model Inference. https://CRAN.R-project.org/package=MuMIn. Burnham, Kenneth P., and David R. Anderson. 2002. Model Selection and Multimodel Inference. NY: Springer. Çetinkaya-Rundel, Mine, David Diez, Andrew Bray, Albert Y. Kim, Ben Baumer, Chester Ismay, Nick Paterno, and Christopher Barr. 2022. Openintro: Data Sets and Supplemental Functions from OpenIntro Textbooks and Labs. https://CRAN.R-project.org/package=openintro. De Veaux, Richard D., Paul F. Velleman, and David E. Bock. 2011. Stats: Data and Models, 3rd Edition. Pearson. Fox, John. 2003. “Effect Displays in R for Generalised Linear Models.” Journal of Statistical Software 8 (15): 1–27. http://www.jstatsoft.org/v08/i15/. Fox, John, and Michael Friendly. 2021. Heplots: Visualizing Hypothesis Tests in Multivariate Linear Models. http://friendly.github.io/heplots/. ———. 2022b. carData: Companion to Applied Regression Data Sets. https://CRAN.R-project.org/package=carData. Garnier, Simon. 2021. Viridis: Colorblind-Friendly Color Maps for r. https://CRAN.R-project.org/package=viridis. Liao, Xiyue, and Mary C. Meyer. 2014. “Coneproj: An R Package for the Primal or Dual Cone Projections with Routines for Constrained Regression.” Journal of Statistical Software 61 (12): 1–22. http://www.jstatsoft.org/v61/i12/. Merkle, Ed, and Michael Smithson. 2018. Smdata: Data to Accompany Smithson & Merkle, 2013. https://CRAN.R-project.org/package=smdata. Meyer, Mary C., and Xiyue Liao. 2021. Coneproj: Primal or Dual Cone Projections with Routines for Constrained Regression. https://CRAN.R-project.org/package=coneproj. Ramsey, Fred, and Daniel Schafer. 2012. The Statistical Sleuth: A Course in Methods of Data Analysis. Cengage Learning. https://books.google.com/books?id=eSlLjA9TwkUC. 1. If you take advanced applied mathematics courses, you can learn more about the algorithms being used by `lm`. Everyone else only cares about the algorithms when they don’t work – which is usually due to the user’s inputs in these models not the algorithm itself.↩︎ 2. Sometimes the `effects` plots ignores the edge explanatory observations with the default display. Always check the original variable summaries when considering the range of observed values. By turning on the “partial residuals” with SLR models, the plots show the original observations along with the fitted values and 95% confidence interval band. In more complex models, these displays with residuals are more complicated but can be used to assess linearity with each predictor in the model after accounting for other variables.↩︎ 3. We used this same notation in the fitting the additive Two-Way ANOVA and this is also additive in terms of these variables. Interaction models are discussed later in the chapter.↩︎ 4. I have not given you a formula for calculating partial residuals. We will leave that for more advanced material.↩︎ 5. Imagine showing up to a ski area expecting a 40 inch base and there only being 11 inches. I’m sure ski areas are always more accurate than this model in their reporting of amounts of snow on the ground…↩︎ 6. 
The site name is redacted to protect the innocence of the reader. More information on this site, located in Beaverhead County in Montana, is available at www.wcc.nrcs.usda.gov/nwcc/site?sitenum=355&state=mt.↩︎ 7. Term-plots with additive factor variables use the weighted (based on percentage of the responses in each category) average of their predicted mean responses across their levels but we don’t have any factor variables in the MLR models, yet.↩︎ 8. This also applies to the additive two-way ANOVA model.↩︎ 9. The `seq` function has syntax of `seq(from = startingpoint, to = endingpoint, length.out = #ofvalues_between_start_and_end)` and the `rep` function has syntax of `rep(numbertorepeat, #oftimes).`↩︎ 10. Also see Section 8.13 for another method of picking among different models.↩︎ 11. This section was inspired by a similar section from De Veaux, Velleman, and Bock (2011).↩︎ 12. There are some social science models where the model is fit with the mean subtracted from each predictor so all have mean 0 and the precision of the \(y\)-intercept is interesting. In some cases both the response and predictor variables are “standardized” to have means of 0 and standard deviations of 1. The interpretations of coefficients then relates to changes in standard deviations around the means. These coefficients are called “standardized betas”. But even in these models where the \(x\)-values of 0 are of interest, the test for the \(y\)-intercept being 0 is rarely of interest.↩︎ 13. The variables were renamed to better interface with R code and our book formatting using the `rename` function.↩︎ 14. The answer is no – it should be converted to a factor variable prior to plotting so it can be displayed correctly by `ggpairs`, but was intentionally left this way so you could see what happens when numerically coded categorical variables are not carefully handled in R.↩︎ 15. Either someone had a weighted GPA with bonus points, or more likely here, there was a coding error in the data set since only one observation was over 4.0 in the GPA data. Either way, we could remove it and note that our inferences for HSGPA do not extend above 4.0.↩︎ 16. When there are just two predictors, the VIFs have to be the same since the proportion of information shared is the same in both directions. With more than two predictors, each variable can have a different VIF value.↩︎ 17. We are actually making an educated guess about what these codes mean. Other similar data sets used 1 for males but the documentation on these data is a bit sparse. We proceed with a small potential that the conclusions regarding differences in gender are in the wrong direction.↩︎ 18. Some people also call them dummy variables to reflect that they are stand-ins for dealing with the categorical information. But it seems like a harsh anthropomorphism so I prefer “indicators”.↩︎ 19. This is true for additive uses of indicator variables. In Section 8.11, we consider interactions between quantitative and categorical variables which has the effect of changing slopes and intercepts. The simplification ideas to produce estimated equations for each group are used there but we have to account for changing slopes by group too.↩︎ 20. Models like this with a categorical variable and quantitative variable are often called ANCOVA or analysis of covariance models but really are just versions of our linear models we’ve been using throughout this material.↩︎ 21. 
The `scale_color_viridis_d(end = 0.85, option = "inferno")` code makes the plot in a suite of four colors from the `viridis` package that attempt to be color-blind friendly.↩︎ 22. The strength of this recommendation drops when you have many predictors as you can’t do this for every variable, but the concern remains about an assumption of no interaction whenever you fit models without them. In more complex situations, think about variables that are most likely to interact in their impacts on the response based on the situation being studied and try to explore those.↩︎ 23. Standardizing quantitative predictor variables is popular in social sciences, often where the response variable is also standardized. In those situations, they generate what are called “standardized betas” (https://en.Wikipedia.org/wiki/Standardized_coefficient) that estimate the change in SDs in the response for a 1 SD increase in the explanatory variable.↩︎ 24. There is a way to test for a difference in the two lines at a particular \(x\) value but it is beyond the scope of this material.↩︎ 25. This is an example of what is called “step down” testing for model refinement which is a commonly used technique for arriving at a final model to describe response variables. Note that each step in the process should be reported, not just the final model that only has variables with small p-values remaining in it.↩︎ 26. We could also use the `anova` function to do this but using `Anova` throughout this material provides the answers we want in the additive model and it has no impact for the only test of interest in the interaction model since the interaction is the last component in the model.↩︎ 27. In most situations, it would be crazy to assume that the true model for a process has been obtained so we can never pick the “correct” model. In fact, we won’t even know if we are picking a “good” model, but just the best from a set of the candidate models on a criterion. But we can study the general performance of methods using simulations where we know the true model and the AIC has some useful properties in identifying the correct model when it is in the candidate set of models. No such similar theory exists for the adjusted R2.↩︎ 28. Most people now call this Akaike’s (pronounced ah-kah-ee-kay) Information Criterion, but he used the AIC nomenclature to mean An Information Criterion – he was not so vain as to name the method after himself in the original paper that proposed it. But it is now common to use “A” for his last name.↩︎ 29. More details on these components of the methods will be left for more advanced material – we will focus on an introduction to using the AIC measure here.↩︎ 30. Although sometimes excluded, the count of parameters should include counting the residual variance as a parameter.↩︎ 31. It makes it impossible to fit models with any missing values in the data set and this prevents you from making incorrect comparisons of AICs to models with different observations.↩︎ 32. We put quotes on “full” or sometimes call it the “fullish” model because we could always add more to the model, like interactions or other explanatory variables. So we rarely have a completely full model but we do have our “most complicated that we are considering” model.↩︎ 33. The options in `extra = ...` are to get extra information displayed that you do not necessarily need. You can simply run `dredge(m6, rank = "AIC")` to get just the AIC results.↩︎
9.1: Overview of material covered
At the beginning of the text, we provided a schematic of methods that you would learn about that was (probably) gibberish. Hopefully, revisiting that same diagram (Figure 9.1) will bring back memories of each of the chapters. One common theme was that categorical variables create special challenges whether they are explanatory or response variables. Every scenario with a quantitative response variable was handled using linear models. The last material on multiple linear regression modeling tied back to the One-Way and Two-Way ANOVA models as categorical variables were added to the models. As both a review and to emphasize the connections, let's connect some of the different versions of the general linear model that we considered. If we start with the One-Way ANOVA, the reference-coded model was written out as:

$y_{ij} = \alpha + \tau_j + \varepsilon_{ij}.$

We didn't want to introduce indicator variables at that early stage of the material, but we can now write out the same model using our indicator variable approach from Chapter 8 for a $J$-level categorical explanatory variable using $J-1$ indicator variables as:

$y_i = \beta_0 + \beta_1I_{\text{Level }2,i} + \beta_2I_{\text{Level }3,i} + \cdots + \beta_{J-1}I_{\text{Level }J,i} + \varepsilon_i.$

We now know how the indicator variables are either 0 or 1 for each observation and only one takes on the value 1 (is "turned on") at a time for each response. We can then equate the general notation from Chapter 8 with our specific One-Way ANOVA (Chapter 3) notation as follows:

• For the baseline category, the mean is: $\alpha = \beta_0$
  • The mean for the baseline category was modeled using $\alpha$ which is the intercept term in the output that we called $\beta_0$ in the regression models.
• For category $j$, the mean is:
  • From the One-Way ANOVA model: $\alpha + \tau_j$
  • From the regression model where the only indicator variable that is 1 is $I_{\text{Level }j,i}$: $\begin{array}{rl} &\beta_0 + \beta_1I_{\text{Level }2,i} + \beta_2I_{\text{Level }3,i} + \cdots + \beta_{J-1}I_{\text{Level }J,i} \\ & = \beta_0 + \beta_{j-1}\cdot1 \\ & = \beta_0 + \beta_{j-1} \end{array}$
  • So with intercepts being equal, $\beta_{j-1} = \tau_j$.

The ANOVA reference-coding notation was used to focus on the coefficients that were "turned on" and their interpretation without getting bogged down in the full power (and notation) of general linear models. The same equivalence is possible to equate our work in the Two-Way ANOVA interaction model, $y_{ijk} = \alpha + \tau_j + \gamma_k + \omega_{jk} + \varepsilon_{ijk},$ with the regression notation from the MLR model with an interaction:

$\begin{array}{rc} y_i = &\beta_0 + \beta_1x_i +\beta_2I_{\text{Level }2,i}+\beta_3I_{\text{Level }3,i} +\cdots+\beta_JI_{\text{Level }J,i} +\beta_{J+1}I_{\text{Level }2,i}\:x_i \\ &+\beta_{J+2}I_{\text{Level }3,i}\:x_i +\cdots+\beta_{2J-1}I_{\text{Level }J,i}\:x_i +\varepsilon_i \end{array}$

If one of the categorical variables only had two levels, then we could simply replace $x_i$ with the pertinent indicator variable and be able to equate the two versions of the notation. That said, we won't attempt that here. And if both variables have more than 2 levels, the number of coefficients to keep track of grows rapidly.
The great increase in the complexity of the notation that comes from fully writing out the indicator variables in the regression approach when two categorical variables interact is the other reason we explored the Two-Way ANOVA using a "simplified" notation system, even though lm used the indicator approach to estimate the model. The Two-Way ANOVA notation helped us distinguish which coefficients related to main effects and the interaction, something that the regression notation doesn't make clear. In the following four sections, you will have additional opportunities to see applications of the methods considered here to real data. The data sets are taken directly from published research articles, so you can see the potential utility of the methods we've been discussing for handling real problems. They are focused on biological applications because most come from a particular journal (Biology Letters) that encourages authors to share their data sets, making our re-analyses possible. Use these sections to review the methods from earlier in the book and to see some hints about possible extensions of the methods you have learned.
In a 16-year experiment, Gundale, Bach, and Nordin (2013) studied the impacts of Nitrogen (N) additions on the mass of two feather moss species (Pleurozium schreberi (PS) and Hylocomium splendens (HS)) in the Svartberget Experimental Forest in Sweden. They used a randomized block design: here this means that within each of 6 blocks (pre-specified areas that were divided into three experimental units or plots of area 0.1 hectare), each of the three treatments was randomly applied to one of the plots. Randomized block designs involve randomization of levels within blocks or groups as opposed to completely randomized designs where each experimental unit (the subject or plot that will be measured) could be randomly assigned to any treatment. This is done in agricultural studies to control for systematic differences across the fields by making sure each treatment level is used in each area or block of the field. In this example, it resulted in a balanced design with six replicates at each combination of Species and Treatment. The three treatments involved different levels of N applied immediately after snow melt: Control (no additional N – just the naturally deposited amount), 12.5 kg N $\text{ha}^{-1}\text{yr}^{-1}$ (N12.5), and 50 kg N $\text{ha}^{-1}\text{yr}^{-1}$ (N50). The researchers were interested in whether the treatments would have differential impacts on the growth of the two species of moss. They measured a variety of other variables, but here we focus on the estimated biomass per hectare (mg/ha) of the species (PS or HS), both measured for each plot within each block, considering differences across the treatments (Control, N12.5, or N50). The pirate-plot in Figure 9.2 provides some initial information about the responses. Initially there seem to be some differences among the combinations of groups and some differences in variability in the different groups, especially with much more variability in the control treatment level and more variability in the PS responses than for the HS responses.

gdn <- read_csv("http://www.math.montana.edu/courses/s217/documents/gundalebachnordin_2.csv")

gdn <- gdn %>% mutate(Species = factor(Species),
                      Treatment = factor(Treatment)
                      )

library(yarrr)
pirateplot(Massperha ~ Species + Treatment, data = gdn, inf.method = "ci",
           inf.disp = "line", theme = 2, ylab = "Biomass", point.o = 1,
           pal = "southpark")

The Two-Way ANOVA model that contains a species by treatment interaction is of interest (this has a quantitative response variable of biomass and two categorical predictors of species and treatment)165. We can make an interaction plot to focus on the observed patterns of the means across the combinations of levels as provided in Figure 9.3. The interaction plot suggests a relatively additive pattern of differences between PS and HS across the three treatment levels. However, the variability seems to be quite different based on this plot as well.

library(catstats)
#Or directly using:
#source("http://www.math.montana.edu/courses/s217/documents/intplotfunctions_v3.R")
intplotarray(Massperha ~ Species * Treatment, data = gdn, col = viridis(4)[1:3],
             lwd = 2, cex.main = 1)

Based on the initial plots, we are going to be most concerned about the equal variance assumption. We can fit the interaction model and explore the diagnostic plots to verify that we have a problem.
m1 <- lm(Massperha ~ Species * Treatment, data = gdn)
summary(m1)

## 
## Call:
## lm(formula = Massperha ~ Species * Treatment, data = gdn)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -992.6 -252.2  -64.6  308.0 1252.9 
## 
## Coefficients:
##                          Estimate Std. Error t value Pr(>|t|)
## (Intercept)               1694.80     211.86   8.000 6.27e-09
## SpeciesPS                  859.88     299.62   2.870  0.00745
## TreatmentN12.5            -588.26     299.62  -1.963  0.05893
## TreatmentN50             -1182.91     299.62  -3.948  0.00044
## SpeciesPS:TreatmentN12.5   199.42     423.72   0.471  0.64130
## SpeciesPS:TreatmentN50      88.29     423.72   0.208  0.83636
## 
## Residual standard error: 519 on 30 degrees of freedom
## Multiple R-squared: 0.6661, Adjusted R-squared: 0.6104
## F-statistic: 11.97 on 5 and 30 DF, p-value: 2.009e-06

par(mfrow = c(2,2), oma = c(0,0,2,0))
plot(m1, pch = 16, sub.caption = "")
title(main="Initial Massperha 2-WAY model", outer=TRUE)

There is a clear problem with non-constant variance showing up in a fanning shape166 in the Residuals versus Fitted and Scale-Location plots in Figure 9.4. Interestingly, the normality assumption is not an issue as the residuals track the 1-1 line in the QQ-plot quite closely, so hopefully we will not worsen this result by using a transformation to try to address the non-constant variance issue. The independence assumption is violated in two ways for this model because of the study design – the blocks create clusters or groups of observations and the block should be accounted for (they did this in their models by adding block as a categorical variable to their models). Using blocked designs and accounting for the blocks in the model will typically give more precise inferences for the effects of interest, the treatments randomized within the blocks. Additionally, there are two measurements on each plot within each block, one for PS and one for HS, and these might be related (for example, high HS biomass might be associated with high or low PS biomass), so putting both observations into a model violates the independence assumption at a second level. It takes more advanced statistical models (called linear mixed models) to see how to fully deal with this; for now it is important to recognize the issues. The more complicated models provide similar results here and include the treatment by species interaction we are going to explore; they just add to this basic model to account for these other issues. Remember that before using a log-transformation, you always must check that the responses are strictly greater than 0:

summary(gdn$Massperha)

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   319.1  1015.1  1521.8  1582.3  2026.6  3807.6

The minimum is 319.1 so it is safe to apply the natural log-transformation to the response variable (Biomass) and repeat the previous plots:

gdn <- gdn %>% mutate(logMassperha = log(Massperha))

par(mfrow = c(2,1))
pirateplot(logMassperha ~ Species + Treatment, data = gdn, inf.method = "ci",
           inf.disp = "line", theme = 2, ylab = "log-Biomass", point.o = 1,
           pal = "southpark", main = "(a)")
intplot(logMassperha ~ Species * Treatment, data = gdn, col = viridis(4)[1:3],
        lwd = 2, main = "(b)")

The variability in the pirate-plot in Figure 9.5(a) appears to be more consistent across the groups but the lines appear to be a little less parallel in the interaction plot in Figure 9.5(b) for the log-scale response. That is not problematic but suggests that we may now have an interaction present – it is hard to tell visually sometimes.
Again, fitting the interaction model and exploring the diagnostics is the best way to assess the success of the transformation applied. The log(Mass per ha) version of the response variable has little issue with changing variability present in the residuals in Figure 9.6 with much more similar variation in the residuals across the fitted values. The normality assumption is leaning toward a slight violation with too little variability in the right tail and so maybe a little bit of a left skew. This is only a minor issue and fixes the other big issue (clear non-constant variance), so this model is at least closer to giving us trustworthy inferences than the original model. The model presents moderate evidence against the null hypothesis of no Species by Treatment interaction on the log-biomass ($F(2,30) = 4.2$, p-value $= 0.026$). This suggests that the effects on the log-biomass of the treatments differ between the two species. The mean log-biomass is lower for HS than PS with the impacts of increased nitrogen causing HS mean log-biomass to decrease more rapidly than for PS. In other words, increasing nitrogen has more of an impact on the resulting log-biomass for HS than for PS. The highest mean log-biomass rates were observed under the control conditions for both species making nitrogen appear to inhibit growth of these species. m2 <- lm(logMassperha ~ Species * Treatment, data = gdn) summary(m2) ## ## Call: ## lm(formula = logMassperha ~ Species * Treatment, data = gdn) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.51138 -0.16821 -0.02663 0.23925 0.44190 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 7.4108 0.1160 63.902 < 2e-16 ## SpeciesPS 0.3921 0.1640 2.391 0.02329 ## TreatmentN12.5 -0.4228 0.1640 -2.578 0.01510 ## TreatmentN50 -1.1999 0.1640 -7.316 3.79e-08 ## SpeciesPS:TreatmentN12.5 0.2413 0.2319 1.040 0.30645 ## SpeciesPS:TreatmentN50 0.6616 0.2319 2.853 0.00778 ## ## Residual standard error: 0.2841 on 30 degrees of freedom ## Multiple R-squared: 0.7998, Adjusted R-squared: 0.7664 ## F-statistic: 23.96 on 5 and 30 DF, p-value: 1.204e-09 library(car) Anova(m2) ## Anova Table (Type II tests) ## ## Response: logMassperha ## Sum Sq Df F value Pr(>F) ## Species 4.3233 1 53.577 3.755e-08 ## Treatment 4.6725 2 28.952 9.923e-08 ## Species:Treatment 0.6727 2 4.168 0.02528 ## Residuals 2.4208 30 par(mfrow = c(2,2), oma = c(0,0,2,0)) plot(m2, pch = 16, sub.caption = "") title(main="log-Massperha 2-WAY model", outer=TRUE) The researchers actually applied a $\log(y+1)$ transformation to all the variables. This was used because one of their many variables had a value of 0 and so they added 1 to avoid analyzing a $-\infty$ response. This was not needed for most of their variables because most did not attain the value of 0. Adding a small value to observations and then log-transforming is a common but completely arbitrary practice and the choice of the added value can impact the results. Sometimes considering a square-root transformation can accomplish similar benefits as the log-transform and be applied safely to responses that include 0s. Or more complicated statistical models can be used that allow 0s in responses and still account for the violations of the linear model assumptions – see a statistician or continue exploring more advanced statistical methods for ideas in this direction. The term-plot in Figure 9.7 provides another display of the results with some information on the results for each combination of the species and treatments. 
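As a small aside on the log(y + 1) issue just discussed, the following sketch (using made-up values, not the moss data) shows why the added constant is arbitrary: different choices change the spacing of the transformed values, especially for observations near 0, and so can change the results of an analysis based on them.

y <- c(0, 2, 10, 100)  # made-up responses that include a 0
log(y + 1)             # one common, but arbitrary, choice
log(y + 0.1)           # a different arbitrary constant stretches the low end very differently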
Retaining the interaction because of moderate evidence in the interaction test suggests that the treatments caused different results for the different species. And it appears that there are some clear differences among certain combinations such as the mean for PS-Control is clearly larger than for HS-N50. The researchers were probably really interested in whether the N12.5 results differed from Control for HS and whether the species differed at Control sites. As part of performing all pair-wise comparisons, we can assess those sorts of detailed questions. This sort of follow-up could be considered in any Two-Way ANOVA model but will be most interesting in situations where there are important interactions. library(effects) plot(allEffects(m2), multiline = T, lty = c(1,2), ci.style = "bars", grid = T) Follow-up Pairwise Comparisons: Given at least moderate evidence against the null hypothesis of no interaction, many researchers would like more details about the source of the differences. We can re-fit the model with a unique mean for each combination of the two predictor variables, fitting a One-Way ANOVA model (here with six levels) and using Tukey’s HSD to provide safe inferences for differences among pairs of the true means. There are six groups corresponding to all combinations of Species (HS, PS) and treatment levels (Control, N12.5, and N50) provided in the new variable SpTrt by the interaction function with new levels of HS.Control, PS.Control, HS.N12.5, PS.N12.5, HS.N50, and PS.N50. The One-Way ANOVA $F$-test ($F(5,30) = 23.96$, p-value $< 0.0001$) suggests that there is strong evidence against the null hypothesis of no difference in the true mean log-biomass among the six treatment/species combinations and so we would conclude that at least one differs from the others. Note that the One-Way ANOVA table contains the test for at least one of those means being different from the others; the interaction test above was testing a more refined hypothesis – does the effect of treatment differ between the two species? As in any situation with a small p-value from the overall One-Way ANOVA test, the pair-wise comparisons should be of interest. 
# Create new variable:
gdn <- gdn %>% mutate(SpTrt = interaction(Species, Treatment))

levels(gdn$SpTrt)

## [1] "HS.Control" "PS.Control" "HS.N12.5"   "PS.N12.5"   "HS.N50"    
## [6] "PS.N50"

newm2 <- lm(logMassperha ~ SpTrt, data = gdn)
Anova(newm2)

## Anova Table (Type II tests)
## 
## Response: logMassperha
##           Sum Sq Df F value    Pr(>F)
## SpTrt     9.6685  5  23.963 1.204e-09
## Residuals 2.4208 30

library(multcomp)
PWnewm2 <- glht(newm2, linfct = mcp(SpTrt = "Tukey"))
confint(PWnewm2)

## 
##   Simultaneous Confidence Intervals
## 
## Multiple Comparisons of Means: Tukey Contrasts
## 
## 
## Fit: lm(formula = logMassperha ~ SpTrt, data = gdn)
## 
## Quantile = 3.0421
## 95% family-wise confidence level
## 
## 
## Linear Hypotheses:
##                              Estimate lwr      upr     
## PS.Control - HS.Control == 0  0.39210 -0.10682  0.89102
## HS.N12.5 - HS.Control == 0   -0.42277 -0.92169  0.07615
## PS.N12.5 - HS.Control == 0    0.21064 -0.28827  0.70956
## HS.N50 - HS.Control == 0     -1.19994 -1.69886 -0.70102
## PS.N50 - HS.Control == 0     -0.14620 -0.64512  0.35272
## HS.N12.5 - PS.Control == 0   -0.81487 -1.31379 -0.31596
## PS.N12.5 - PS.Control == 0   -0.18146 -0.68037  0.31746
## HS.N50 - PS.Control == 0     -1.59204 -2.09096 -1.09312
## PS.N50 - PS.Control == 0     -0.53830 -1.03722 -0.03938
## PS.N12.5 - HS.N12.5 == 0      0.63342  0.13450  1.13234
## HS.N50 - HS.N12.5 == 0       -0.77717 -1.27608 -0.27825
## PS.N50 - HS.N12.5 == 0        0.27657 -0.22235  0.77549
## HS.N50 - PS.N12.5 == 0       -1.41058 -1.90950 -0.91166
## PS.N50 - PS.N12.5 == 0       -0.35685 -0.85576  0.14207
## PS.N50 - HS.N50 == 0          1.05374  0.55482  1.55266

We can also generate the Compact Letter Display (CLD) to help us group up the results.

cld(PWnewm2)

## HS.Control PS.Control   HS.N12.5   PS.N12.5     HS.N50     PS.N50 
##       "bd"        "d"        "b"       "cd"        "a"       "bc"

And we can add the CLD to an interaction plot to create Figure 9.8. Researchers often use displays like this to simplify the presentation of pair-wise comparisons. Sometimes researchers add bars or stars to provide the same information about pairs that are or are not detectably different. The following code creates the plot of these results using our intplot function and the cld = T option.

intplot(logMassperha ~ Species * Treatment, cld = T, cldshift = 0.16, data = gdn,
        lwd = 2, main = "Interaction Plot with CLD from Tukey's HSD on One-Way ANOVA")

These results suggest that HS-N50 is detectably different from all the other groups (letter "a"). The rest of the story is more complicated since many of the sets contain overlapping groups in terms of detectable differences. Some specific aspects of those results are most interesting. The mean log-biomasses were not detectably different between the species in the control group (they share a "d"). In other words, without treatment, there is little to no evidence against the null hypothesis of no difference in how much of the two species is present at the sites. For the N12.5 and N50 treatments, there are detectable differences between the Species. These comparisons are probably of the most interest initially and suggest that the treatments have a different impact on the two species, remembering that in the control treatments, the results for the two species were not detectably different. Further explorations of the sizes of the differences can be extracted from selected confidence intervals in the Tukey's HSD results printed above.
Because these results are for the log-scale responses, we could exponentiate coefficients for groups that are deviations from the baseline category and interpret those as multiplicative changes in the median relative to the baseline group, but at the end of this amount of material, I thought that might stop you from reading on any further…
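For readers who do want those back-transformed results, here is a minimal sketch (not part of the original discussion) that exponentiates the Tukey HSD intervals from the log-scale model above, turning differences in mean log-biomass into estimated ratios of median biomass between the groups:

# Back-transform the simultaneous CIs from the log-scale pairwise comparisons;
# the results are estimated ratios of median biomass (with 95% family-wise CIs)
exp(confint(PWnewm2)$confint)

Ratios whose intervals exclude 1 correspond to the pairs flagged as detectably different above.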
In Sasaki and Pratt (2013), a set of ant colonies were randomly assigned to one of two treatments to study whether the ants could be "trained" to have a preference for or against certain attributes for potential nest sites. The colonies assigned to each treatment repeatedly experienced a choice between two colony sites that were identical except that one had an inferior light or entrance size attribute (depending on the treatment). Then the ants were allowed to choose between two nests, one that had a large entrance but was dark and the other that had a small entrance but was bright. 54 of the 60 colonies that were randomly assigned to one of the two treatments completed the experiment by making a choice between the two types of sites. The data set and some processing code follow. The first question is what type of analysis is appropriate here. Once we recognize that there are two categorical variables being considered (Treatment group with two levels and the After choice with two levels, SmallBright or LargeDark, for what the colonies selected), this falls within our Chi-square testing framework. The random assignment of colonies (the subjects here) to treatment levels tells us that the Chi-square Homogeneity test is appropriate here and that we can make causal statements about the effects of the Treatment groups.

sasakipratt <- read_csv("http://www.math.montana.edu/courses/s217/documents/sasakipratt.csv")

sasakipratt <- sasakipratt %>% mutate(group = factor(group),
                                      after = factor(after),
                                      before = factor(before)
                                      )

levels(sasakipratt$group) <- c("Light", "Entrance")
levels(sasakipratt$after) <- c("SmallBright", "LargeDark")
levels(sasakipratt$before) <- c("SmallBright", "LargeDark")

plot(after ~ group, data = sasakipratt, col = cividis(2))

library(mosaic)
tally(~ group + after, data = sasakipratt)

##           after
## group      SmallBright LargeDark
##   Light             19         9
##   Entrance           9        17

table1 <- tally(~ group + after, data = sasakipratt, margins = F)

The null hypothesis of interest here is that there is no difference in the distribution of responses on After – the rates of their choice of den types – between the two treatment groups in the population of all ant colonies like those studied. The alternative is that there is some difference in the distributions of After between the groups in the population. To use the Chi-square distribution to find a p-value for the \(X^2\) statistic, we need all the expected cell counts to be larger than 5, so we should check that. Note that in the following, the `correct = F` option is used to keep the function from slightly modifying the statistic, which it otherwise does when overall sample sizes are small.

chisq.test(table1, correct = F)$expected

##           after
## group      SmallBright LargeDark
##   Light       14.51852  13.48148
##   Entrance    13.48148  12.51852

Our expected cell count condition is met, so we can proceed to explore the results of the parametric test:

chisq.test(table1, correct = F)

## 
##  Pearson's Chi-squared test
## 
## data:  table1
## X-squared = 5.9671, df = 1, p-value = 0.01458

The \(X^2\) statistic is 5.97 which, if our assumptions hold, should approximately follow a Chi-square distribution with \((R-1)*(C-1) = 1\) degree of freedom under the null hypothesis.
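As a quick check on where that result comes from (a sketch, not additional analysis from the paper), the p-value reported next is just the upper-tail area of that Chi-square distribution beyond the observed statistic:

# Upper-tail area beyond the observed X-squared statistic for a Chi-square with 1 df
pchisq(5.9671, df = 1, lower.tail = FALSE)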
The p-value is 0.015, suggesting that there is moderate to strong evidence against the null hypothesis and we can conclude that there is a difference in the distribution of the responses between the two treated groups in the population of all ant colonies that could have been treated. Because of the random assignment, we can say that the treatments caused differences in the colony choices. These results cannot be extended to ants beyond those being studied by these researchers because they were not randomly selected. Further exploration of the standardized residuals can provide more insights in some situations, although here they are similar for all the cells:

chisq.test(table1, correct = F)$residuals

##           after
## group      SmallBright LargeDark
##   Light       1.176144 -1.220542
##   Entrance   -1.220542  1.266616

When all the standardized residual contributions are similar, that suggests that there are differences in all the cells from what we would expect if the null hypothesis were true. Basically, that means that what we observed is a bit larger than expected for the Light treatment group in the SmallBright choice and lower than expected in LargeDark – those treated ants preferred the small and bright den. And for the Entrance treated group, they preferred the large entrance, dark den at a higher rate than expected if the null is true and chose the small entrance, bright location at a lower rate than expected. The researchers extended this basic result a little further using a statistical model called logistic regression, which involves using something like a linear model but with a categorical response variable (well – it actually only works for a two-category response variable). They also had measured which of the two types of dens each colony chose before treatment and used this model to control for that choice. So the actual model used in their paper contained two predictor variables – the randomized treatment received that we explored here and the prior choice of den type. The interpretation of their results related to the same treatment effect, but they were able to discuss it after adjusting for the colonies' previous selection. Their conclusions were similar to those found with our simpler analysis. Logistic regression models are a special case of what are called generalized linear models and are a topic for the next level of statistics if you continue exploring.
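To give a flavor of that extension, here is a minimal sketch of a logistic regression that adjusts the treatment comparison for the pre-treatment choice, using the variables created above. This is only an illustration of the type of model described, not necessarily the authors' exact specification:

# Logistic regression (a generalized linear model for a two-category response):
# model the after choice using the randomized group, adjusting for the before choice
glm1 <- glm(after ~ group + before, family = binomial, data = sasakipratt)
summary(glm1)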
Benson and Mannion (2012) published a paleontology study that considered modeling the diversity of Sauropodomorphs across $n = 26$ "stage-level" time bins. Diversity is measured by the count of the number of different species that have been found in a particular level of fossils. Specifically, the counts in the Sauropodomorphs group were obtained for stages between the Carnian and Maastrichtian, with the first three stages in the Triassic, the next ten in the Jurassic, and the last eleven in the Cretaceous. They were concerned about variation in sampling efforts and the ability of paleontologists to find fossils across different stages creating a false impression of the changes in biodiversity (counts of species) over time. They first wanted to see if the species counts were related to factors such as the count of dinosaur-bearing-formations (DBF) and the count of dinosaur-bearing-collections (DBC) that have been identified for each period. The thought is that if there are more formations or collections of fossils from certain stages, the diversity might be better counted (more found of those available to find) and those stages with less information available might be under-counted. They also measured the length of each stage (Duration) but did not consider it in their models since they wanted to reflect the diversity itself and longer stages would likely have higher diversity. Their main goal was to develop a model that would control for the effects of sampling efforts and allow them to perform inferences for whether the diversity was different between the Triassic/Jurassic (grouped together) and the Cretaceous. They considered models that included two different versions of sampling effort variables and one for the comparison of periods (an indicator variable TJK = 0 if the observation is in the Triassic or Jurassic and 1 if in the Cretaceous), which are more explicitly coded below. They log-e transformed all their quantitative variables because the untransformed variables created diagnostic issues including influential points. They explored a model just based on the DBC predictor167 and they analyzed the residuals from that model to see if the biodiversity was different in the Cretaceous or before, finding a "p-value >= 0.0001" (I think they meant < 0.0001168). They were comparing the MLR models you learned to some extended regression models that incorporated a correction for correlation in the responses over time, but we can proceed with fitting some of their MLR models and using an AIC comparison similar to what they used. There are some obvious flaws in their analysis and results that we will avoid169. First, we start with a plot of the log-diversity vs the log-dinosaur-bearing collections by period. We can see fairly strong positive relationships between the log amounts of collections and species found, with potentially similar slopes for the two periods but what look like different intercepts. Especially for TJK level 1 (Cretaceous period) observations, we might need to worry about a curving relationship. Note that a similar plot can also be made using the formations version of the quantitative predictor variable and that the research questions involve whether DBF or DBC is the better predictor variable.
bm <- read_csv("http://www.math.montana.edu/courses/s217/documents/bensonmanion.csv")

bm <- bm %>% mutate(logSpecies = log(Species),
                    logDBCs = log(DBCs),
                    logDBFs = log(DBFs),
                    TJK = factor(TJK)
                    )

levels(bm$TJK) <- c("Trias_Juras","Cretaceous")

bm %>% ggplot(mapping = aes(x = logDBCs, y = logSpecies, color = TJK, shape = TJK)) +
  geom_smooth(method = "lm") +
  geom_smooth(se = F, lty = 2) +
  geom_point(size = 2) +
  theme_bw() +
  scale_color_colorblind()

The following results will allow us to explore models similar to theirs. One "full" model they considered is:

$\log{(\text{count})}_i = \beta_0 + \beta_1\cdot\log{(\text{DBC})}_i + \beta_2I_{\text{TJK},i} + \varepsilon_i$

which was compared to

$\log{(\text{count})}_i = \beta_0 + \beta_1\cdot\log{(\text{DBF})}_i + \beta_2I_{\text{TJK},i} + \varepsilon_i$

as well as the simpler models that each suggests:

$\begin{array}{rl} \log{(\text{count})}_i & = \beta_0 + \beta_1\cdot\log{(\text{DBC})}_i + \varepsilon_i, \\ \log{(\text{count})}_i & = \beta_0 + \beta_1\cdot\log{(\text{DBF})}_i + \varepsilon_i, \\ \log{(\text{count})}_i & = \beta_0 + \beta_2I_{\text{TJK},i} + \varepsilon_i, \text{ and} \\ \log{(\text{count})}_i & = \beta_0 + \varepsilon_i. \end{array}$

Both versions of the models (based on DBF or DBC) start with an MLR model with a quantitative variable and two slopes. We can obtain some of the needed model selection results from the first full model using:

bd1 <- lm(logSpecies ~ logDBCs + TJK, data = bm)

library(MuMIn)
options(na.action = "na.fail")
dredge(bd1, rank = "AIC",
       extra = c("R^2", adjRsq = function(x) summary(x)$adj.r.squared))

## Global model call: lm(formula = logSpecies ~ logDBCs + TJK, data = bm)
## ---
## Model selection table
##   (Intrc)  lgDBC TJK      R^2   adjRsq df  logLik  AIC delta weight
## 4 -1.0890 0.7243   + 0.580900  0.54440  4 -12.652 33.3  0.00  0.987
## 2  0.1988 0.4283     0.369100  0.34280  3 -17.969 41.9  8.63  0.013
## 1  2.5690            0.000000  0.00000  2 -23.956 51.9 18.61  0.000
## 3  2.5300          + 0.004823 -0.03664  3 -23.893 53.8 20.48  0.000
## Models ranked by AIC(x)

And from the second model:

bd2 <- lm(logSpecies ~ logDBFs + TJK, data = bm)
dredge(bd2, rank = "AIC",
       extra = c("R^2", adjRsq = function(x) summary(x)$adj.r.squared))

## Global model call: lm(formula = logSpecies ~ logDBFs + TJK, data = bm)
## ---
## Model selection table
##   (Intrc)  lgDBF TJK      R^2   adjRsq df  logLik  AIC delta weight
## 4 -2.4100 1.3710   + 0.519900  0.47810  4 -14.418 36.8  0.00  0.995
## 2  0.5964 0.4882     0.209800  0.17690  3 -20.895 47.8 10.95  0.004
## 1  2.5690            0.000000  0.00000  2 -23.956 51.9 15.08  0.001
## 3  2.5300          + 0.004823 -0.03664  3 -23.893 53.8 16.95  0.000
## Models ranked by AIC(x)

The top AIC model is $\log{(\text{count})}_i = \beta_0 + \beta_1\cdot\log{(\text{DBC})}_i + \beta_2I_{\text{TJK},i} + \varepsilon_i$ with an AIC of 33.3. The next best ranked model on AICs was $\log{(\text{count})}_i = \beta_0 + \beta_1\cdot\log{(\text{DBF})}_i + \beta_2I_{\text{TJK},i} + \varepsilon_i$ with an AIC of 36.8, so 3.5 AIC units worse than the top model, and so there is clear evidence to support the DBC+TJK model over the best version with DBF and all others. We put these two runs of results together in Table 9.1, re-computing all the AIC differences relative to the overall top model to make this easier to see.

Table 9.1: Model comparison table.
Model  R2  adj R2  df  logLik  AIC  Delta AIC
$\log(\text{count})_i = \beta_0 + \beta_1\cdot\log(\text{DBC})_i + \beta_2I_{\text{TJK},i} + \varepsilon_i$  0.5809  0.5444  4  -12.652  33.3  0
$\log(\text{count})_i = \beta_0 + \beta_1\cdot\log(\text{DBF})_i + \beta_2I_{\text{TJK},i} + \varepsilon_i$  0.5199  0.4781  4  -14.418  36.8  3.5
$\log(\text{count})_i = \beta_0 + \beta_1\cdot\log(\text{DBC})_i + \varepsilon_i$  0.3691  0.3428  3  -17.969  41.9  8.6
$\log(\text{count})_i = \beta_0 + \beta_1\cdot\log(\text{DBF})_i + \varepsilon_i$  0.2098  0.1769  3  -20.895  47.8  14.5
$\log(\text{count})_i = \beta_0 + \varepsilon_i$  0  0  2  -23.956  51.9  18.6
$\log(\text{count})_i = \beta_0 + \beta_2I_{\text{TJK},i} + \varepsilon_i$  0.0048  -0.0366  3  -23.893  53.8  20.5

Table 9.1 suggests some interesting results. By itself, $TJK$ leads to the worst performing model on the AIC measure, ranking below a model with nothing in it (mean-only) and 20.5 AIC units worse than the top model. But the two top models distinctly benefit from the inclusion of TJK. This suggests that after controlling for the sampling effort, either through DBC or DBF, the differences in the stages captured by TJK can be more clearly observed. So the top model in our (correct) results170 suggests that log(DBC) is important as well as different intercepts for the two periods. We can interrogate this model further but we should check the diagnostics (Figure 9.11) and consider our model assumptions first, as AICs are not valid if the model assumptions are clearly violated.

par(mfrow = c(2,2), oma = c(0,0,2,0))
plot(bd1, pch = 16)

The constant variance, linearity, and assessment of influence do not suggest any problems with those assumptions. This is reinforced in the partial residuals in Figure 9.12. The normality assumption is possibly violated but shows lighter tails than expected from a normal distribution and so should cause few problems with inferences (we would be looking for an answer of "yes, there is a violation of the normality assumption, but that problem is minor because the pattern is not the problematic type of violation, with both the upper and lower tails shorter than expected from a normal distribution"). The other assumption that is violated for all our models is that the observations are independent. Between neighboring stages in time, there would likely be some sort of relationship in the biodiversity so we should not assume that the observations are independent (this is another time series of observations). The authors acknowledged this issue but unskillfully attempted to deal with it. Because an interaction was not considered in any of the models, there also is an assumption that the results are parallel enough for the two groups. The scatterplot in Figure 9.10 suggests that using parallel lines for the two groups is probably reasonable, but a full assessment really should also explore whether there is any support for an interaction, which would relate to different impacts of sampling efforts on the response across the levels of TJK. Ignoring the violation of the independence assumption, we are otherwise OK to explore the model more and see what it tells us about the biodiversity of Sauropodomorphs. The top model is estimated to be $\log{(\widehat{\text{count}})}_i = -1.089 + 0.724\cdot\log{(\text{DBC})}_i -0.76I_{\text{TJK},i}$.
This suggests that for the early observations (TJK = Trias_Juras), the model is $\log{(\widehat{\text{count}})}_i = -1.089 + 0.724\cdot\log{(\text{DBC})}_i$ and for the Cretaceous period (TJK = Cretaceous), the model is $\log{(\widehat{\text{count}})}_i = -1.089 - 0.76 + 0.724\cdot\log{(\text{DBC})}_i$, which simplifies to $\log{(\widehat{\text{count}})}_i = -1.85 + 0.724\cdot\log{(\text{DBC})}_i$. This suggests that the sampling efforts have the same estimated impact on all observations and that an increase in logDBCs is associated with increases in the mean log-biodiversity. Specifically, for a 1 log-count increase in the log-DBCs, we estimate, on average, a 0.724 log-count increase in the mean log-biodiversity, after accounting for different intercepts for the two periods considered. We could also translate this to the original count scale but will leave it as is, because their real question of interest involves the differences between the periods. The change in the $y$-intercepts of -0.76 suggests that the Cretaceous has a lower average log-biodiversity by 0.76 log-count, after controlling for the log-sampling effort. This suggests that the Cretaceous had a lower corrected mean log-Sauropodomorph biodiversity $\require{enclose} (t_{23} = -3.41;\enclose{horizontalstrike}{\text{p-value} = 0.0024})$ than the combined results for the Triassic and Jurassic. On the original count scale, this suggests $\exp(-0.76) = 0.47$ times (a 53% drop in) the median biodiversity count per stage for the Cretaceous versus the prior time period, after correcting for log-sampling effort in each stage.

summary(bd1)

## 
## Call:
## lm(formula = logSpecies ~ logDBCs + TJK, data = bm)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.6721 -0.3955  0.1149  0.2999  0.6158 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)
## (Intercept)    -1.0887     0.6533  -1.666   0.1092
## logDBCs         0.7243     0.1288   5.622 1.01e-05
## TJKCretaceous  -0.7598     0.2229  -3.409   0.0024
## 
## Residual standard error: 0.4185 on 23 degrees of freedom
## Multiple R-squared: 0.5809, Adjusted R-squared: 0.5444
## F-statistic: 15.94 on 2 and 23 DF, p-value: 4.54e-05

plot(allEffects(bd1, residuals = T), grid = T)

Their study shows some interesting contrasts between methods. They tried to use AIC-based model selection methods across all the models but then used p-values to really make their final conclusions. This presents a philosophical inconsistency that bothers some more than others but should bother everyone. One question is whether they needed to use AICs at all since they ultimately wanted to use p-values. One reason they might have preferred to use AICs is that it allows the direct comparison of

$\log{(\text{count})}_i = \beta_0 + \beta_1\log{(\text{DBC})}_i + \beta_2I_{\text{TJK},i} + \varepsilon_i$

to

$\log{(\text{count})}_i = \beta_0 + \beta_1\cdot\log{(\text{DBF})}_i + \beta_2I_{\text{TJK},i} + \varepsilon_i,$

exploring whether DBC or DBF is "better" with TJK in the model. There is no hypothesis test to compare these two models because one is not nested in the other – it is not possible to get from one model to the other by setting one or more slope coefficients to 0, so we can't hypothesis-test our way from one model to the other. The AICs suggest strong support for the model with DBC and TJK as compared to the model with DBF and TJK, so that helps us make that decision. After that step, we could rely on $t$-tests or ANOVA $F$-tests to decide whether further refinement is suggested/possible for the model with DBC and TJK.
This would provide the direct inferences that they probably want and were trying to obtain from AICs along with p-values in their paper. Finally, their results would actually be more valid if they had used a set of statistical methods designed for modeling responses that are counts of events or things, especially those whose measurements change as a function of sampling effort; models called Poisson rate models would be ideal for their application, and these are also special cases of the generalized linear models noted in the extensions for modeling categorical responses. The other aspect of the stages that they measured was the duration of each stage. They never incorporated that information, which makes sense given their interest in comparing biodiversity across stages rather than understanding why more or less biodiversity might occur. But other researchers might want to estimate the biodiversity after also controlling for the length of time that the stage lasted and the sampling efforts involved in detecting the biodiversity of each stage, models that are only a few steps away from those considered here. In general, this paper presents some of the pitfalls of attempting to use advanced statistical methods as well as hinting at the benefits. The statistical models are the only way to access the results of interest; inaccurate usage of statistical models can provide inaccurate conclusions. They seemed to mostly get the right answers despite a suite of errors in their work.
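To make the Poisson rate model suggestion slightly more concrete, here is a minimal sketch of that kind of model for these data. This is offered only as an illustration of the direction hinted at above, not as part of the case study's analysis (and it still ignores the time-ordering issues discussed earlier):

# A Poisson regression for the species counts, using log(DBCs) to account for
# sampling effort and TJK for the period comparison (a generalized linear model)
pois1 <- glm(Species ~ logDBCs + TJK, family = poisson, data = bm)
summary(pois1)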
In the practice problems at the end of Chapter 4, a study (Puhan et al. (2006)) related to a pre-post, two group comparison of the sleepiness ratings of subjects was introduced. They obtained $n = 25$ volunteers and they randomized the subjects to either get didgeridoo lessons or be placed on a waiting list for lessons. They constrained the randomization based on the subjects' initial high/low apnoea and high/low Epworth scale results to make sure they balanced the types of subjects going into the treatment and control groups. They measured the subjects' Epworth value (daytime sleepiness, higher is more sleepy) initially and after four months, where only the treated subjects (those who took lessons) had any intervention. We are interested in whether the mean Epworth scale values changed differently over the four months in the group that got didgeridoo lessons than they did in the control group (that got no lessons). Each subject was measured twice (so the total sample size in the data set is 50); the data set is available at http://www.math.montana.edu/courses/s217/documents/epworthdata.csv. The data set was not initially provided by the researchers, but they did provide a plot very similar to Figure 9.13. To make Figure 9.13, geom_line is used to display a line for each subject over the two time points (pre and post) of observation and to indicate which group the subjects were assigned to. This allows us to see the variation at a given time across subjects and the changes over time, which is critical here as this shows clearly why we had a violation of the independence assumption in these data. In the plot, you can see that there are not clear differences between the two groups at the "Pre" time, but the treated group seems to have most of the lines go down to lower sleepiness ratings and this is not happening much for the subjects in the control group. The violation of the independence assumption is diagnosable from the study design (two observations on each subject). The plot allows us to go further and see that many subjects had similar Epworth scores from pre to post (high in pre, generally high in post) once we account for systematic changes in the treated subjects that seemed to drop a bit on average.

epworthdata <- read_csv("http://www.math.montana.edu/courses/s217/documents/epworthdata.csv")

epworthdata <- epworthdata %>% mutate(Time = factor(Time),
                                      Group = factor(Group)
                                      )

levels(epworthdata$Time) <- c("Pre" , "Post")
levels(epworthdata$Group) <- c("Control" , "Didgeridoo")

epworthdata %>% ggplot(mapping = aes(x = Time, y = Epworth, group = Subject, color = Group)) +
  geom_point() +
  geom_line() +
  theme_bw() +
  scale_color_colorblind()

This plot seems to contradict the result from the following Two-Way ANOVA (which is a repeat of what you would have seen had you done the practice problem earlier in the book, along with the related interaction plot) – there is little to no evidence against the null hypothesis of no interaction between Time and Treatment group on Epworth scale ratings ($F(1,46) = 1.37$, p-value $= 0.2484$ as seen in Table 9.2). But this model assumes all the observations are independent and so does not account for the repeated measures on the same subjects. It ends up that if we account for systematic differences in subjects, we can (sometimes) find the differences we are interested in more clearly.
We can see that this model does not really seem to capture the full structure of the real data by comparing simulated data to the original one, as in Figure 9.14. The real data set had fairly strong relationships between the pre and post scores but this connection seems to disappear in responses simulated from the estimated Two-Way ANOVA model (that assumes all observations are independent).

library(car)
lm_int <- lm(Epworth ~ Time * Group, data = epworthdata)
Anova(lm_int)

Table 9.2: ANOVA table from Two-Way ANOVA interaction model.

             Sum Sq  Df  F value  Pr(>F)
Time        120.746   1    5.653   0.022
Group         8.651   1    0.405   0.528
Time:Group   29.265   1    1.370   0.248
Residuals   982.540  46

If the issue is failing to account for differences in subjects, then why not add "Subject" to the model? There are two things to consider. First, we would need to make sure that "Subject" is a factor variable as the "Subject" variable is initially numerical from 1 to 25. Second, we have to deal with having a factor variable with 25 levels (so 24 indicator variables!). This is a big number and would make writing out the model and interpreting the term-plot for Subject extremely challenging. Fortunately, we are not too concerned about how much higher or lower an individual is than a baseline subject, but we do need to account for it in the model. This sort of "repeated measures" modeling is more often handled by a more complex set of extended regression models that are called linear mixed models and are designed to handle this sort of grouping variable with many levels. But if we put the Subject factor variable into the previous model, we can use Type II ANOVA tests to test for an interaction between Time and Group (our primary research question) after controlling for subject-to-subject variation. There is a warning message about aliasing that occurs when you do this, which means that it is not possible to estimate all the $\beta$s in this model (and is why mixed models are more typically used to do this sort of thing). Despite this, the test for Time:Group in Table 9.3 is correct and now accounts for the repeated measures on the subjects. It provides $F(1,23) = 5.43$ with a p-value of 0.029, suggesting that there is moderate evidence against the null hypothesis of no interaction of time and group once we account for subject. This is a notably different result from what we observed in the Two-Way ANOVA interaction model that didn't account for repeated measures on the subjects and matches the results in the original paper closely.

epworthdata <- epworthdata %>% mutate(Subject = factor(Subject))

lm_int_wsub <- lm(Epworth ~ Time * Group + Subject, data = epworthdata)
Anova(lm_int_wsub)

Table 9.3: ANOVA table from the Two-Way ANOVA interaction model that also includes Subject.

             Sum Sq  Df  F value  Pr(>F)
Time        120.746   1   22.410   0.000
Group                 0
Subject     858.615  23    6.929   0.000
Time:Group   29.265   1    5.431   0.029
Residuals   123.924  23

With this result, we would usually explore the term-plots from this model to get a sense of the pattern of the changes over time in the treatment and control groups. That aliasing issue means that the "effects" function also has some issues. To see the effects plots, we need to use a linear mixed model from the nlme package. This model is beyond the scope of this material, but it provides the same $F$-statistic for the interaction ($F(1,23) = 5.43$) and the term-plots can now be produced (Figure 9.15).
In that plot, we again see that the didgeridoo group mean for “Post” is noticeably lower than in the “Pre” and that the changes in the control group were minimal over the four months. This difference in the changes over time was present in the initial graphical exploration but we needed to account for variation in subjects to be able to detect this difference. While these results rely on more complex models than we have time to discuss here, hopefully the similarity of the results of interest should resonate with the methods we have been exploring while hinting at more possibilities if you learn more statistical methods. library(nlme) lme_int <- lme(Epworth ~ Time * Group, random = ~1|Subject, data = epworthdata) anova(lme_int) ## numDF denDF F-value p-value ## (Intercept) 1 23 132.81354 <.0001 ## Time 1 23 22.41014 0.0001 ## Group 1 23 0.23175 0.6348 ## Time:Group 1 23 5.43151 0.0289 plot(allEffects(lme_int), multiline = T, lty = c(1,2), ci.style = "bars", grid = T)
As we wrap up, it is important to remember that these tools are limited by the quality of the data collected. If you are ever involved in applying these statistical models, whether in a research or industrial setting, make sure that the research questions are discussed before data collection. And before data collection is started, make sure that the methods will provide results that can address the research questions. And, finally, make sure someone involved in the project knows how to perform the appropriate graphical and statistical analysis. One way to make sure you know how to analyze a data set and, often, clarify the research questions and data collection needs, is to make a simulated data set that resembles the one you want to collect and analyze it. This can highlight the sorts of questions the research can address and potentially expose issues before the study starts. With this sort of preparation, many issues can be avoided. Remember to think about reasons why assumptions of your proposed method might be violated.

You are now armed and a bit dangerous with statistical methods. If you go to use them, remember the fundamentals and find the story in the data. After deciding on any research questions of interest, graph the data and make sure that the statistical methods will give you results that make some sense based on the graphical results. In the MLR results, it is possible that graphs will not be able to completely tell you the story, but all the other methods should follow the pictures you see. Even when (or especially when) you use sophisticated statistical methods, graphical presentations are critical to helping others understand the results. We have discussed examples that involve displaying categorical and quantitative variables and even some displays that bridge both types of variables. We hope you have enjoyed this material and been able to continue to develop your interests in statistics. You will see it in many future situations both in courses in your area of study and outside of academia to try to address problems that need answers. You are also prepared to take more advanced statistics courses.

References

Benson, Roger B. J., and Philip D. Mannion. 2012. "Multi-Variate Models Are Essential for Understanding Vertebrate Diversification in Deep Time." Biology Letters 8: 127–30. https://doi.org/10.1098/rsbl.2011.0460.

Gundale, Michael J., Lisbet H. Bach, and Annika Nordin. 2013. "The Impact of Simulated Chronic Nitrogen Deposition on the Biomass and N2-Fixation Activity of Two Boreal Feather Moss–Cyanobacteria Associations." Biology Letters 9 (6). https://doi.org/10.1098/rsbl.2013.0797.

Pinheiro, José, Douglas Bates, and R Core Team. 2022. Nlme: Linear and Nonlinear Mixed Effects Models. https://svn.r-project.org/R-packages/trunk/nlme/.

Puhan, Milo A, Alex Suarez, Christian Lo Cascio, Alfred Zahn, Markus Heitz, and Otto Braendli. 2006. "Didgeridoo Playing as Alternative Treatment for Obstructive Sleep Apnoea Syndrome: Randomised Controlled Trial." BMJ 332 (7536): 266–70. https://doi.org/10.1136/bmj.38705.470590.55.

Sasaki, Takao, and Stephen C. Pratt. 2013. "Ants Learn to Rely on More Informative Attributes During Decision-Making." Biology Letters 9 (6). https://doi.org/10.1098/rsbl.2013.0667.

1. The researchers did not do this analysis so never directly addressed this research question although they did discuss it in general ways.↩︎

2. Instructors often get asked what a problem with non-constant variance actually looks like – this is a perfect example of it!↩︎
3. This was not even close to their top AIC model so they made an odd choice.↩︎

4. I had students read this paper in a class and one decided that this was a reasonable way to report small p-values – it is WRONG. We are interested in how small a p-value might be and saying it is over a value is never useful, especially if you say it is larger than a tiny number.↩︎

5. All too often, I read journal articles that have under-utilized, under-reported, mis-applied, or mis-interpreted statistical methods and results. One of the reasons that I wanted to write this book was to help more people move from basic statistical knowledge to correct use of intermediate statistical methods and beginning to see the potential in more advanced statistical methods. It took me many years of being a statistician (after getting a PhD) just to feel armed for battle when confronted with new applications, and two stat courses are not enough to get you there, but you have to start somewhere. You are only maybe two or three hundred hours into your 10,000 hours required for mastery. This book is intended to get you some solid fundamentals to build on or a few intermediate tools to use if this is your last statistics training experience.↩︎

6. They also had an error in their AIC results that is difficult to explain here but was due to an un-careful usage of the results from the more advanced models that account for autocorrelation, which seem to provide the proper ranking of models (that they ignored) but did not provide the correct differences among models.↩︎
The first chapter explains the basic notions and highlights some of the objectives of time series analysis. Section 1.1 gives several important examples, discusses their characteristic features and deduces a general approach to the data analysis. In Section 1.2, stationary processes are identified as a reasonably broad class of random variables which are able to capture the main features extracted from the examples. Finally, it is discussed how to treat deterministic trends and seasonal components in Sections 1.3 and 1.4, and how to assess the residuals in Section 1.5. Section 1.6 concludes.

1: Basic Concepts in Time Series

The first definition clarifies the notion time series analysis.

Definition 1.1.1: Time Series

Let $T \neq \emptyset$ be an index set, conveniently being thought of as "time". A family $(X_t\colon t\in T)$ of random variables (random functions) is called a stochastic process. A realization of $(X_t\colon t\in T)$ is called a time series. We will use the notation $(x_t\colon t\in T)$ in the discourse.

The most common choices for the index set T include the integers $\mathbb{Z}=\{0,\pm 1,\pm 2,\ldots\}$, the positive integers $\mathbb{N}=\{1,2,\ldots\}$, the nonnegative integers $\mathbb{N}_0=\{0,1,2,\ldots\}$, the real numbers $\mathbb{R}=(-\infty,\infty)$ and the positive halfline $\mathbb{R}_+=[0,\infty)$. This class is mainly concerned with the first three cases which are subsumed under the notion discrete time series analysis. Oftentimes the stochastic process $(X_t\colon t\in T)$ is itself referred to as a time series, in the sense that a realization is identified with the probabilistic generating mechanism. The objective of time series analysis is to gain knowledge of this underlying random phenomenon through examining one (and typically only one) realization. This separates time series analysis from, say, regression analysis for independent data.

In the following a number of examples are given emphasizing the multitude of possible applications of time series analysis in various scientific fields.

Example 1.1.1 (Wölfer's sunspot numbers). In Figure 1.1, the number of sunspots (that is, dark spots visible on the surface of the sun) observed annually are plotted against time. The horizontal axis labels time in years, while the vertical axis represents the observed values $x_t$ of the random variable $X_t=\#\mbox{ of sunspots at time t},\qquad t=1700,\ldots,1994.$ The figure is called a time series plot. It is a useful device for a preliminary analysis. Sunspot numbers are used to explain magnetic oscillations on the sun surface.

Figure 1.1: Wölfer's sunspot numbers from 1700 to 1994.

To reproduce a version of the time series plot in Figure 1.1 using the free software package R (downloads are available at http://cran.r-project.org), download the file sunspots.dat from the course webpage and type the following commands:

> spots = read.table("sunspots.dat")
> spots = ts(spots, start=1700, frequency=1)
> plot(spots, xlab="time", ylab="", main="Number of Sun spots")

In the first line, the file sunspots.dat is read into the object spots, which is then in the second line transformed into a time series object using the function ts(). Using start sets the starting value for the x-axis to a prespecified number, while frequency presets the number of observations for one unit of time. (Here: one annual observation.)
Finally, plot is the standard plotting command in R, where xlab and ylab determine the labels for the x-axis and y-axis, respectively, and main gives the headline. Example 1.1.2 (Canadian lynx data). The time series plot in Figure 1.2 comes from a biological data set. It contains the annual returns of lynx at auction in London by the Hudson Bay Company from 1821--1934 (on a $log_{10}$ scale). These are viewed as observations of the stochastic process $X_t=\log_{10}(\mbox{number of lynx trapped at time 1820+t}), \qquad t=1,\ldots,114. \nonumber$ Figure 1.2: Number of lynx trapped in the MacKenzie River district between 1821 and 1934. The data is used as an estimate for the number of all lynx trapped along the MacKenzie River in Canada. This estimate, in turn, is often taken as a proxy for the true population size of the lynx. A similar time series plot could be obtained for the snowshoe rabbit, the primary food source of the Canadian lynx, hinting at an intricate predator-prey relationship. Assuming that the data is stored in the file lynx.dat, the corresponding R commands leading to the time series plot in Figure 1.2 are: > lynx = read.table("lynx.dat") > lynx = ts(log10(lynx), start=1821, frequency=1) > plot(lynx, xlab="", ylab="", main="Number of trapped lynx") Example 1.1.3 (Treasury bills). Another important field of application for time series analysis lies in the area of finance. To hedge the risks of portfolios, investors commonly use short-term risk-free interest rates such as the yields of three-month, six-month, and twelve-month Treasury bills plotted in Figure 1.3. The (multivariate) data displayed consists of 2,386 weekly observations from July 17, 1959, to December 31, 1999. Here, $X_t=(X_{t,1},X_{t,2},X_{t,3}), \qquad t=1,\ldots,2386, \nonumber$ where $X_{t,1}$, $X_{t,2}$ and $X_{t,3}$ denote the three-month, six-month, and twelve-month yields at time t, respectively. It can be seen from the graph that all three Treasury bills are moving very similarly over time, implying a high correlation between the components of $X_t$. Figure 1.3: Yields of Treasury bills from July 17, 1959, to December 31, 1999. Figure 1.4: S&P 500 from January 3, 1972, to December 31, 1999. To produce the three-variate time series plot in Figure 1.3, use the R code > bills03 = read.table("bills03.dat"); > bills06 = read.table("bills06.dat"); > bills12 = read.table("bills12.dat"); > par(mfrow=c(3,1)) > plot.ts(bills03, xlab="(a)", ylab="", main="Yields of 3-month Treasury Bills") > plot.ts(bills06, xlab="(b)", ylab="", main="Yields of 6-month Treasury Bills") > plot.ts(bills12, xlab="(c)", ylab="", main="Yields of 12-month Treasury Bills") It is again assumed that the data can be found in the corresponding files bills03.dat, bills06.dat and bills12.dat. The command line par(mfrow=c(3,1)) is used to set up the graphics. It enables you to save three different plots in the same file. Example 1.1.4 (S&P 500). The Standard and Poor's 500 index (S&P 500) is a value-weighted index based on the prices of 500 stocks that account for approximately 70% of the U.S. equity market capitalization. It is a leading economic indicator and is also used to hedge market portfolios. Figure 1.4 contains the 7,076 daily S&P 500 closing prices from January 3, 1972, to December 31, 1999, on a natural logarithm scale. It is consequently the time series plot of the process $X_t=\ln(\mbox{closing price of S&P 500 at time t}), \qquad t=1,\ldots,7076. 
\nonumber$ Note that the logarithm transform has been applied to make the returns directly comparable to the percentage of investment return. The time series plot can be reproduced in R using the file sp500.dat There are countless other examples from all areas of science. To develop a theory capable of handling broad applications, the statistician needs to rely on a mathematical framework that can explain phenomena such as • trends (apparent in Example 1.1.4); • seasonal or cyclical effects (apparent in Examples 1.1.1 and 1.1.2); • random fluctuations (all Examples); • dependence (all Examples?). The classical approach taken in time series analysis is to postulate that the stochastic process $(X_t\colon t\in T)$ under investigation can be divided into deterministic trend and seasonal components plus a centered random component, giving rise to the model $X_t=m_t+s_t+Y_t, \qquad t\in T \tag{1.1.1}$ where $(m_t\colon t\in T)$ denotes the trend function ("mean component''), $(s_t\colon t\in T)$ the seasonal effects and $(Y_t\colon t\in T)$ a (zero mean) stochastic process. After an appropriate model has been chosen, the statistician may aim at • estimating the model parameters for a better understanding of the time series; • predicting future values, for example, to develop investing strategies; • checking the goodness of fit to the data to confirm that the chosen model is appropriate. Estimation procedures and prediction techniques are dealt with in detail in later chapters of the notes. The rest of this chapter will be devoted to introducing the classes of strictly and weakly stationary stochastic processes (in Section 1.2) and to providing tools to eliminate trends and seasonal components from a given time series (in Sections 1.3 and 1.4), while some goodness of fit tests will be presented in Section 1.5.
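As a quick illustration of the decomposition (1.1.1), the following sketch (not part of the original examples) simulates a monthly series with a made-up linear trend and sinusoidal seasonal component and lets the base R function decompose() split it into estimated trend, seasonal and random parts; all parameter values are chosen for illustration only.

> set.seed(1)
> t = 1:144                                       # twelve "years" of monthly observations
> xt = ts(0.05*t + 2*sin(2*pi*t/12) + rnorm(144), frequency=12)
> dec = decompose(xt)                             # moving-average based estimates of m_t, s_t, Y_t
> plot(dec)                                       # plots the observed series and the three components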
Fitting solely independent and identically distributed random variables to data is too narrow a concept. While, on one hand, they allow for a somewhat nice and easy mathematical treatment, their use is, on the other hand, often hard to justify in applications. Our goal is therefore to introduce a concept that keeps some of the desirable properties of independent and identically distributed random variables ("regularity''), but that also considerably enlarges the class of stochastic processes to choose from by allowing dependence as well as varying distributions.

Dependence between two random variables $X$ and $Y$ is usually measured in terms of the covariance

$Cov(X,Y)=E\big[(X-E[X])(Y-E[Y])\big] \nonumber$

and the correlation

$Corr(X,Y)=\frac{Cov(X,Y)}{\sqrt{Var(X)Var(Y)}}. \nonumber$

With these notations at hand, the classes of strictly and weakly stationary stochastic processes can be introduced.

Definition 1.2.1 (Strict Stationarity). A stochastic process $(X_t\colon t\in T)$ is called strictly stationary if, for all $t_1,\ldots,t_n \in T$ and $h$ such that $t_1+h,\ldots,t_n+h\in T$, it holds that

$( X_{t_1},\ldots,X_{t_n}) \stackrel{\cal D}{=} (X_{t_1+h},\ldots,X_{t_n+h}). \nonumber$

That is, the so-called finite-dimensional distributions of the process are invariant under time shifts. Here $\stackrel{\cal D}{=}$ indicates equality in distribution.

The definition in terms of the finite-dimensional distributions can be reformulated equivalently in terms of the cumulative joint distribution function equalities

$P(X_{t_1}\leq x_1,\ldots,X_{t_n}\leq x_n)=P(X_{t_1+h}\leq x_1,\ldots,X_{t_n+h}\leq x_n) \nonumber$

holding true for all $x_1,\ldots,x_n\in\mathbb{R}$, $t_1,\ldots,t_n\in T$ and $h$ such that $t_1+h,\ldots,t_n+h\in T$. This can be quite difficult to check for a given time series, especially if the generating mechanism of a time series is far from simple, since too many model parameters have to be estimated from the available data, rendering concise statistical statements impossible. A possible exception is provided by the case of independent and identically distributed random variables.

To get around these difficulties, a time series analyst will commonly only specify the first- and second-order moments of the joint distributions. Doing so then leads to the notion of weak stationarity.

Definition 1.2.2 (Weak Stationarity). A stochastic process $(X_t\colon t\in T)$ is called weakly stationary if

• the second moments are finite: $E[X_t^2]<\infty$ for all $t\in T$;
• the means are constant: $E[X_t]=m$ for all $t\in T$;
• the covariance of $X_t$ and $X_{t+h}$ depends on $h$ only:

$\gamma(h)=\gamma_X(h)=Cov(X_t,X_{t+h}), \qquad h\in T \mbox{ such that } t+h\in T, \nonumber$

is independent of $t\in T$ and is called the autocovariance function (ACVF). Moreover,

$\rho(h)=\rho_X(h)=\frac{\gamma(h)}{\gamma(0)},\qquad h\in T, \nonumber$

is called the autocorrelation function (ACF).

Remark 1.2.1. If $(X_t\colon t\in T)$ is a strictly stationary stochastic process with finite second moments, then it is also weakly stationary. The converse is not necessarily true. If $(X_t\colon t\in T)$, however, is weakly stationary and Gaussian, then it is also strictly stationary. Recall that a stochastic process is called Gaussian if, for any $t_1,\ldots,t_n\in T$, the random vector $(X_{t_1},\ldots,X_{t_n})$ is multivariate normally distributed.

This section is concluded with examples of stationary and nonstationary stochastic processes.
• Figure 1.5: 100 simulated values of the cyclical time series (left panel), the stochastic amplitude (middle panel), and the sine part (right panel).

Example 1.2.1 (White Noise). Let $(Z_t\colon t\in\mathbb{Z})$ be a sequence of real-valued, pairwise uncorrelated random variables with $E[Z_t]=0$ and $0<Var(Z_t)=\sigma^2<\infty$ for all $t\in\mathbb{Z}$. Then $(Z_t\colon t\in\mathbb{Z})$ is called white noise, abbreviated by $(Z_t\colon t\in\mathbb{Z})\sim{\rm WN}(0,\sigma^2)$. It defines a centered, weakly stationary process with ACVF and ACF given by

$\gamma(h)=\left\{\begin{array}{r@{\quad\;}l} \sigma^2, & h=0, \\ 0, & h\not=0,\end{array}\right. \qquad\mbox{and}\qquad \rho(h)=\left\{\begin{array}{r@{\quad\;}l} 1, & h=0, \\ 0, & h\not=0,\end{array}\right. \nonumber$

respectively. If the $(Z_t\colon t\in\mathbb{Z})$ are moreover independent and identically distributed, they are called iid noise, shortly $(Z_t\colon t\in\mathbb{Z})\sim{\rm IID}(0,\sigma^2)$. The left panel of Figure 1.6 displays 1000 observations of an iid noise sequence $(Z_t\colon t\in\mathbb{Z})$ based on standard normal random variables. The corresponding R commands to produce the plot are

> z = rnorm(1000,0,1)
> plot.ts(z, xlab="", ylab="", main="")

The command rnorm simulates here 1000 normal random variables with mean 0 and variance 1. There are various built-in random variable generators in R such as the functions runif(n,a,b) and rbinom(n,m,p) which simulate $n$ values of a uniform distribution on the interval $(a,b)$ and a binomial distribution with repetition parameter $m$ and success probability $p$, respectively.

Figure 1.6: 1000 simulated values of iid N(0, 1) noise (left panel) and a random walk with iid N(0, 1) innovations (right panel).

Example 1.2.2 (Cyclical Time Series). Let $A$ and $B$ be uncorrelated random variables with zero mean and variances $Var(A)=Var(B)=\sigma^2$, and let $\lambda\in\mathbb{R}$ be a frequency parameter. Define

$X_t=A\cos(\lambda t)+B\sin(\lambda t),\qquad t\in\mathbb{R}. \nonumber$

The resulting stochastic process $(X_t\colon t\in\mathbb{R})$ is then weakly stationary. Since $\sin(\lambda t+\varphi)=\sin(\varphi)\cos(\lambda t)+\cos(\varphi)\sin(\lambda t)$, the process can be represented as

$X_t=R\sin(\lambda t+\varphi), \qquad t\in\mathbb{R}, \nonumber$

so that $R$ is the stochastic amplitude and $\varphi\in[-\pi,\pi]$ the stochastic phase of a sinusoid. Some computations show that one must have $A=R\sin(\varphi)$ and $B=R\cos(\varphi)$. In the left panel of Figure 1.5, 100 observed values of a series $(X_t)_{t\in \mathbb{Z}}$ are displayed. Therein, $\lambda=\pi/25$ was used, while $R$ and $\varphi$ were random variables uniformly distributed on the interval $(-.5,1)$ and $(0,1)$, respectively. The middle panel shows the realization of $R$, the right panel the realization of $\sin(\lambda t+\varphi)$. Using cyclical time series bears great advantages when seasonal effects, such as annually recurrent phenomena, have to be modeled. The following R commands can be applied:

> t = 1:100; R = runif(100,-.5,1); phi = runif(100,0,1); lambda = pi/25
> cyc = R*sin(lambda*t+phi)
> plot.ts(cyc, xlab="", ylab="")

This produces the left panel of Figure 1.5. The middle and right panels follow in a similar fashion.

Example 1.2.3 (Random Walk). Let $(Z_t\colon t\in\mathbb{N})\sim{\rm WN}(0,\sigma^2)$. Let $S_0=0$ and

$S_t=Z_1+\ldots+Z_t,\qquad t\in\mathbb{N}.
\nonumber$

The resulting stochastic process $(S_t\colon t\in\mathbb{N}_0)$ is called a random walk and is the most important nonstationary time series. Indeed, it holds here that, for $h>0$,

$Cov(S_t,S_{t+h})=Cov(S_t,S_t+R_{t,h})=t\sigma^2, \nonumber$

where $R_{t,h}=Z_{t+1}+\ldots+Z_{t+h}$, and the ACVF obviously depends on $t$. In R, one may construct a random walk, for example, with the following simple command that utilizes the 1000 normal observations stored in the array z of Example 1.2.1.

> rw = cumsum(z)

The function cumsum takes as input an array and returns as output an array of the same length that contains as its jth entry the sum of the first j input entries. The resulting time series plot is shown in the right panel of Figure 1.6.

Chapter 3 discusses in detail so-called autoregressive moving average processes which have become a central building block in time series analysis. They are constructed from white noise sequences by an application of a set of stochastic difference equations similar to the ones defining the random walk $(S_t\colon t\in\mathbb{N}_0)$ of Example 1.2.3.

In general, the true parameters of a stationary stochastic process $(X_t\colon t\in T)$ are unknown to the statistician. Therefore, they have to be estimated from a realization $x_1,\ldots,x_n$. The following set of estimators will be used here. The sample mean of $x_1,\ldots,x_n$ is defined as

$\bar{x}=\frac 1n\sum_{t=1}^nx_t. \nonumber$

The sample autocovariance function (sample ACVF) is given by

$\hat{\gamma}(h)= \frac 1n\sum_{t=1}^{n-h}(x_{t+h}-\bar{x})(x_t-\bar{x}), \qquad h=0,1,\ldots,n-1. \tag{1.2.1}$

Finally, the sample autocorrelation function (sample ACF) is

$\hat{\rho}(h)=\frac{\hat{\gamma}(h)}{\hat{\gamma}(0)}, \qquad h=0,1,\ldots,n-1. \nonumber$

Example 1.2.4. Let $(Z_t\colon t\in\mathbb{Z})$ be a sequence of independent standard normally distributed random variables (see the left panel of Figure 1.6 for a typical realization of size n = 1,000). Then, clearly, $\gamma(0)=\rho(0)=1$ and $\gamma(h)=\rho(h)=0$ whenever $h\not=0$. Table 1.1 gives the corresponding estimated values $\hat{\gamma}(h)$ and $\hat{\rho}(h)$ for $h=0,1,\ldots,5$. The estimated values are all very close to the true ones, indicating that the estimators work reasonably well for n = 1,000. Indeed it can be shown that they are asymptotically unbiased and consistent. Moreover, the sample autocorrelations $\hat{\rho}(h)$ are approximately normal with zero mean and variance $1/1000$. See also Theorem 1.2.1 below. In R, the function acf can be used to compute the sample ACF.

Theorem 1.2.1. Let $(Z_t\colon t\in\mathbb{Z})\sim{\rm WN}(0,\sigma^2)$ and let $h\not=0$. Under a general set of conditions, it holds that the sample ACF at lag $h$, $\hat{\rho}(h)$, is for large $n$ approximately normally distributed with zero mean and variance $1/n$.

Theorem 1.2.1 and Example 1.2.4 suggest a first method to assess whether or not a given data set can be modeled conveniently by a white noise sequence: for a white noise sequence, approximately 95% of the sample ACFs should be within the confidence bounds $\pm 2/\sqrt{n}$. Using the data files on the course webpage, one can compute with R the corresponding sample ACFs to check for whiteness of the underlying time series. The properties of the sample ACF are revisited in Chapter 2.

• Figure 1.7: Annual water levels of Lake Huron (left panel) and the residual plot obtained from fitting a linear trend to the data (right panel).
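A minimal R sketch of this whiteness check, assuming the iid sequence z from Example 1.2.1 is still in the workspace: acf draws the sample ACF together with approximate 95% bounds, and the second command counts how many of the first 20 sample autocorrelations fall outside $\pm 1.96/\sqrt{n}$ (for white noise, only about one in twenty should).

> rho = acf(z, lag.max=20, main="Sample ACF of iid noise")   # plot with 95% bounds
> sum(abs(rho$acf[-1]) > 1.96/sqrt(length(z)))               # exceedances among lags 1,...,20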
In this section three different methods are developed to estimate the trend of a time series model. It is assumed that it makes sense to postulate the model (1.1.1) with $s_t=0$ for all $t\in T$, that is,

$X_t=m_t+Y_t, \qquad t\in T \tag{1.3.1}$

where (without loss of generality) $E[Y_t]=0$. In particular, three different methods are discussed, (1) the least squares estimation of $m_t$, (2) smoothing by means of moving averages and (3) differencing.

Method 1 (Least squares estimation) It is often useful to assume that a trend component can be modeled appropriately by a polynomial,

$m_t=b_0+b_1t+\ldots+b_pt^p, \qquad p\in\mathbb{N}_0. \nonumber$

In this case, the unknown parameters $b_0,\ldots,b_p$ can be estimated by the least squares method. Combined, they yield the estimated polynomial trend

$\hat{m}_t=\hat{b}_0+\hat{b}_1t+\ldots+\hat{b}_pt^p, \qquad t\in T, \nonumber$

where $\hat{b}_0,\ldots,\hat{b}_p$ denote the corresponding least squares estimates. Note that the order $p$ is not estimated. It has to be selected by the statistician---for example, by inspecting the time series plot. The residuals $\hat{Y}_t$ can be obtained as

$\hat{Y}_t=X_t-\hat{m}_t=X_t-\hat{b}_0-\hat{b}_1t-\ldots-\hat{b}_pt^p, \qquad t\in T. \nonumber$

How to assess the goodness of fit of the fitted trend will be the subject of Section 1.5 below.

Example 1.3.1 (Level of Lake Huron). The left panel of Figure 1.7 contains the time series of the annual average water levels in feet (reduced by 570) of Lake Huron from 1875 to 1972. It is a realization of the process

$X_t=\mbox{(Average water level of Lake Huron in the year 1874+t)}-570, \qquad t=1,\ldots,98. \nonumber$

There seems to be a linear decline in the water level and it is therefore reasonable to fit a polynomial of order one to the data. Evaluating the least squares estimators provides us with the values

$\hat{b}_0=10.202 \qquad\mbox{and}\qquad \hat{b}_1=-0.0242 \nonumber$

for the intercept and the slope, respectively. The resulting observed residuals $\hat{y}_t=\hat{Y}_t(\omega)$ are plotted against time in the right panel of Figure 1.7. There is no apparent trend left in the data. On the other hand, the plot does not strongly support the stationarity of the residuals. Additionally, there is evidence of dependence in the data.

To reproduce the analysis in R, assume that the data is stored in the file lake.dat. Then use the following commands.

> lake = read.table("lake.dat")
> lake = ts(lake, start=1875)
> t = 1:length(lake)
> lsfit = lm(lake~t)
> plot(t, lake, xlab="", ylab="", main="")
> lines(lsfit$fit)

The function lm fits a linear model or regression line to the Lake Huron data. To plot both the original data set and the fitted regression line into the same graph, you can first plot the water levels and then use the lines function to superimpose the fit. The residuals corresponding to the linear model fit can be accessed with the command lsfit$resid.

Method 2 (Smoothing with Moving Averages) Let $(X_t\colon t\in\mathbb{Z})$ be a stochastic process following model (1.3.1). Choose $q\in\mathbb{N}_0$ and define the two-sided moving average

$W_t=\frac{1}{2q+1}\sum_{j=-q}^qX_{t+j}, \qquad t\in\mathbb{Z}. \tag{1.3.2}$

The random variables $W_t$ can be utilized to estimate the trend component $m_t$ in the following way.
First note that

$W_t=\frac{1}{2q+1}\sum_{j=-q}^qm_{t+j}+\frac{1}{2q+1}\sum_{j=-q}^qY_{t+j} \approx m_t, \nonumber$

assuming that the trend is locally approximately linear and that the average of the $Y_t$ over the interval $[t-q,t+q]$ is close to zero. Therefore, $m_t$ can be estimated by

$\hat{m}_t=W_t,\qquad t=q+1,\ldots,n-q. \nonumber$

Notice that there is no possibility of estimating the first $q$ and last $q$ drift terms due to the two-sided nature of the moving averages. In contrast, one can also define one-sided moving averages by letting

$\hat{m}_1=X_1,\qquad \hat{m}_t=aX_t+(1-a)\hat{m}_{t-1},\quad t=2,\ldots,n. \nonumber$

• Figure 1.8: The two-sided moving average filters Wt for the Lake Huron data (upper panel) and their residuals (lower panel) with bandwidth q = 2 (left), q = 10 (middle) and q = 35 (right).

Figure 1.8 contains estimators $\hat{m}_t$ based on the two-sided moving averages for the Lake Huron data of Example 1.3.1 for selected choices of $q$ (upper panel) and the corresponding estimated residuals (lower panel). The moving average filters for this example can be produced in R in the following way:

> t = 1:length(lake)
> ma2 = filter(lake, sides=2, rep(1,5)/5)
> ma10 = filter(lake, sides=2, rep(1,21)/21)
> ma35 = filter(lake, sides=2, rep(1,71)/71)
> plot(t, ma2, xlab="", ylab="", type="l")
> lines(t,ma10); lines(t,ma35)

Therein, sides determines if a one- or two-sided filter is going to be used. The phrase rep(1,5) creates a vector of length 5 with each entry being equal to 1.

More general versions of the moving average smoothers can be obtained in the following way. Observe that in the case of the two-sided version $W_t$ each variable $X_{t-q},\ldots,X_{t+q}$ obtains a "weight" $a_j=(2q+1)^{-1}$. The sum of all weights thus equals one. The same is true for the one-sided moving averages with weights $a$ and $1-a$. Generally, one can hence define a smoother by letting

$\hat{m}_t=\sum_{j=-q}^qa_jX_{t+j}, \qquad t=q+1,\ldots,n-q, \tag{1.3.3}$

where $a_{-q}+\ldots+a_q=1$. These general moving averages (two-sided and one-sided) are commonly referred to as linear filters. There are countless choices for the weights. The one here, $a_j=(2q+1)^{-1}$, has the advantage that linear trends pass undistorted. In the next example, a filter is introduced which passes cubic trends without distortion.

Example 1.3.2 (Spencer's 15-point moving average). Suppose that the filter in display (1.3.3) is defined by weights satisfying $a_j=0$ if $|j|>7$, $a_j=a_{-j}$ and

$(a_0,a_1,\ldots,a_7)=\frac{1}{320}(74,67,46,21,3,-5,-6,-3). \nonumber$

Then, the corresponding filter passes cubic trends $m_t=b_0+b_1t+b_2t^2+b_3t^3$ undistorted. To see this, observe that

$\sum_{j=-7}^7a_j=1\qquad\mbox{and}\qquad \sum_{j=-7}^7j^ra_j=0,\qquad r=1,2,3. \nonumber$

Now apply Proposition 1.3.1 below to arrive at the conclusion. Assuming that the observations are in data, use the R commands

> a = c(-3, -6, -5, 3, 21, 46, 67, 74, 67, 46, 21, 3, -5, -6, -3)/320
> s15 = filter(data, sides=2, a)

to apply Spencer's 15-point moving average filter. This example also explains how to specify a general tailor-made filter for a given data set.

Proposition 1.3.1. A linear filter (1.3.3) passes a polynomial of degree $p$ if and only if

$\sum_{j}a_j=1\qquad\mbox{and}\qquad \sum_{j}j^ra_j=0,\qquad r=1,\ldots,p. \nonumber$

Proof. It suffices to show that $\sum_ja_j(t+j)^r=t^r$ for $r=0,\ldots,p$.
Using the binomial theorem, write

\begin{align*} \sum_ja_j(t+j)^r &=\sum_ja_j\sum_{k=0}^r{r \choose k}t^kj^{r-k}\\[.2cm] &=\sum_{k=0}^r{r \choose k}t^k\left(\sum_ja_jj^{r-k}\right)\\[.2cm] &=t^r \end{align*}

for any $r=0,\ldots,p$ if and only if the above conditions hold.

• Figure 1.9: Time series plots of the observed sequences $(\nabla x_t)$ (left panel) and $(\nabla^2 x_t)$ (right panel) obtained by differencing the Lake Huron data described in Example 1.3.1.

Method 3 (Differencing) A third possibility to remove drift terms from a given time series is differencing. To this end, introduce the difference operator $\nabla$ as

$\nabla X_t=X_t-X_{t-1}=(1-B)X_t, \qquad t\in T, \nonumber$

where $B$ denotes the backshift operator $BX_t=X_{t-1}$. Repeated application of $\nabla$ is defined in the intuitive way:

$\nabla^2X_t=\nabla(\nabla X_t)=\nabla(X_t-X_{t-1})=X_t-2X_{t-1}+X_{t-2} \nonumber$

and, recursively, the representations follow also for higher powers of $\nabla$. Suppose that the difference operator is applied to the linear trend $m_t=b_0+b_1t$, then

$\nabla m_t=m_t-m_{t-1}=b_0+b_1t-b_0-b_1(t-1)=b_1 \nonumber$

which is a constant. Inductively, this leads to the conclusion that for a polynomial drift of degree $p$, namely $m_t=\sum_{j=0}^pb_jt^j$, $\nabla^pm_t=p!b_p$ and thus constant. Applying this technique to a stochastic process of the form (1.3.1) with a polynomial drift $m_t$, yields then

$\nabla^pX_t=p!b_p+\nabla^p Y_t,\qquad t\in T. \nonumber$

This is a stationary process with mean $p!b_p$. The plots in Figure 1.9 contain the first and second differences for the Lake Huron data. In R, they may be obtained from the commands

> d1 = diff(lake)
> d2 = diff(d1)
> par(mfrow=c(1,2))
> plot.ts(d1, xlab="", ylab="")
> plot.ts(d2, xlab="", ylab="")

The next example shows that the difference operator can also be applied to a random walk to create stationary data.

Example 1.3.3. Let $(S_t\colon t\in\mathbb{N}_0)$ be the random walk of Example 1.2.3. If the difference operator $\nabla$ is applied to this stochastic process, then

$\nabla S_t=S_t-S_{t-1}=Z_t, \qquad t\in\mathbb{N}. \nonumber$

In other words, $\nabla$ does nothing else but recover the original white noise sequence that was used to build the random walk.
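A quick numerical check of Example 1.3.3, assuming the objects z and rw = cumsum(z) from Examples 1.2.1 and 1.2.3 are still available: differencing the random walk returns the white noise sequence, up to the first value, which is lost by differencing.

> all.equal(as.numeric(diff(rw)), z[-1])    # TRUE: nabla S_t = Z_t for t = 2,...,n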
Recall the classical decomposition (1.1.1), $X_t=m_t+s_t+Y_t, \qquad t\in T, \nonumber$ with $E[Y_t]=0$. In this section, three methods are discussed that aim at estimating both the trend and seasonal components in the data. As additional requirement on $(s_t\colon t\in T)$, it is assumed that $s_{t+d}=s_t,\qquad \sum_{j=1}^ds_j=0, \nonumber$ where $d$ denotes the period of the seasonal component. (If dealing with yearly data sampled monthly, then obviously $d=12$.) It is convenient to relabel the observations $x_1,\ldots,x_n$ in terms of the seasonal period $d$ as $x_{j,k}=x_{k+d(j-1)}. \nonumber$ In the case of yearly data, observation $x_{j,k}$ thus represents the data point observed for the $k$th month of the $j$th year. For convenience the data is always referred to in this fashion even if the actual period is something other than 12. Method 1 (Small trend method) If the changes in the drift term appear to be small, then it is reasonable to assume that the drift in year $j$, say, $m_j$ is constant. As a natural estimator one can therefore apply $\hat{m}_j=\frac{1}{d}\sum_{k=1}^dx_{j,k}. \nonumber$ To estimate the seasonality in the data, one can in a second step utilize the quantities $\hat{s}_k=\frac 1N\sum_{j=1}^N(x_{j,k}-\hat{m}_j), \nonumber$ where $N$ is determined by the equation $n=Nd$, provided that data has been collected over $N$ full cycles. Direct calculations show that these estimators possess the property $\hat{s}_1+\ldots+\hat{s}_d=0$ (as in the case of the true seasonal components $s_t$). To further assess the quality of the fit, one needs to analyze the observed residuals $\hat{y}_{j,k}=x_{j,k}-\hat{m}_j-\hat{s}_k. \nonumber$ Note that due to the relabeling of the observations and the assumption of a slowly changing trend, the drift component is solely described by the "annual'' subscript $j$, while the seasonal component only contains the "monthly'' subscript $k$. • Figure 1.10: Time series plots of the red wine sales in Australia from January 1980 to October 1991 (left) and its log transformation with yearly mean estimates (right). Example 1.4.1 (Australian Wine Sales). The left panel of Figure 1.10 shows the monthly sales of red wine (in kiloliters) in Australia from January 1980 to October 1991. Since there is an apparent increase in the fluctuations over time, the right panel of the same figure shows the natural logarithm transform of the data. There is clear evidence of both trend and seasonality. In the following, the log transformed data is studied. Using the small trend method as described above, the annual means are estimated first. They are already incorporated in the right time series plot of Figure 1.10. Note that there are only ten months of data available for the year 1991, so that the estimation has to be adjusted accordingly. The detrended data is shown in the left panel of Figure 1.11. The middle plot in the same figure shows the estimated seasonal component, while the right panel displays the residuals. Even though the assumption of small changes in the drift is somewhat questionable, the residuals appear to look quite nice. They indicate that there is dependence in the data (see Section 1.5 below for more on this subject). • Figure 1.11: The detrended log series (left), the estimated seasonal component (center) and the corresponding residuals series (right) of the Australian red wine sales data. Method 2 (Moving average estimation) This method is to be preferred over the first one whenever the underlying trend component cannot be assumed constant. 
Three steps are to be applied to the data.

1st Step: Trend estimation. At first, focus on the removal of the trend component with the linear filters discussed in the previous section. If the period $d$ is odd, then one can directly use $\hat{m}_t=W_t$ as in (1.3.2) with $q$ specified by the equation $d=2q+1$. If the period $d=2q$ is even, then slightly modify $W_t$ and use

$\hat{m}_t=\frac 1d(.5x_{t-q}+x_{t-q+1}+\ldots+x_{t+q-1}+.5x_{t+q}), \qquad t=q+1,\ldots,n-q. \nonumber$

2nd Step: Seasonality estimation. To estimate the seasonal component, let

\begin{align*} \mu_k&=\frac 1{N-1}\sum_{j=2}^N(x_{k+d(j-1)}-\hat{m}_{k+d(j-1)}), \qquad k=1,\ldots,q,\\[.2cm] \mu_k&=\frac 1{N-1}\sum_{j=1}^{N-1}(x_{k+d(j-1)}-\hat{m}_{k+d(j-1)}), \qquad k=q+1,\ldots,d. \end{align*}

Define now

$\hat{s}_k=\mu_k-\frac 1d\sum_{\ell=1}^d\mu_\ell,\qquad k=1,\ldots,d, \nonumber$

and set $\hat{s}_{k}=\hat{s}_{k-d}$ whenever $k>d$. This will provide us with deseasonalized data which can be examined further. In the final step, any remaining trend can be removed from the data.

3rd Step: Trend Reestimation. Apply any of the methods from Section 1.3.

Method 3 (Differencing at lag d) Introducing the lag-d difference operator $\nabla_d$, defined by letting

$\nabla_dX_t=X_t-X_{t-d}=(1-B^d)X_t,\qquad t=d+1,\ldots,n, \nonumber$

and assuming model (1.1.1), one arrives at the transformed random variables

$\nabla_dX_t=m_t-m_{t-d}+Y_t-Y_{t-d},\qquad t=d+1,\ldots,n. \nonumber$

Note that the seasonality is removed, since $s_t=s_{t-d}$. The remaining noise variables $Y_t-Y_{t-d}$ are stationary and have zero mean. The new trend component $m_t-m_{t-d}$ can be eliminated using any of the methods developed in Section 1.3.

• Figure 1.12: The differenced observed series $\nabla_{12}x_t$ (left), $\nabla x_t$ (middle) and $\nabla\nabla_{12}x_t=\nabla_{12}\nabla x_t$ (right) for the Australian red wine sales data.

Example 1.4.2 (Australian wine sales). Revisit the Australian red wine sales data of Example 1.4.1 and apply the differencing techniques just established. The left plot of Figure 1.12 shows the data after an application of the operator $\nabla_{12}$. If the remaining trend in the data is estimated with the differencing method from Section 1.3, the residual plot given in the right panel of Figure 1.12 is obtained. Note that the order of application does not change the residuals, that is, $\nabla\nabla_{12}x_t=\nabla_{12}\nabla x_t$. The middle panel of Figure 1.12 displays the differenced data which still contains the seasonal component.
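No R code accompanies Example 1.4.2 above. A minimal sketch, assuming the log-transformed monthly sales are stored in a time series object wine with frequency 12 (the object name is chosen here for illustration only), reproduces the three panels of Figure 1.12: the lag argument of diff gives $\nabla_{12}$, and differencing once more gives $\nabla\nabla_{12}$.

> d12 = diff(wine, lag=12)        # seasonal differencing: removes s_t
> d1 = diff(wine)                 # ordinary differencing
> d1d12 = diff(d12)               # removes the remaining trend as well
> par(mfrow=c(1,3))
> plot.ts(d12, xlab="", ylab=""); plot.ts(d1, xlab="", ylab=""); plot.ts(d1d12, xlab="", ylab="")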
In this subsection, several goodness-of-fit tests are introduced to further analyze the residuals obtained after the elimination of trend and seasonal components. The main objective is to determine whether or not these residuals can be regarded as obtained from a sequence of independent, identically distributed random variables or if there is dependence in the data. Throughout $Y_1,\ldots,Y_n$ denote the residuals and $y_1,\ldots,y_n$ a typical realization. Method 1 (The sample ACF) It could be seen in Example 1.2.4 that, for $j\not=0$, the estimators $\hat{\rho}(j)$ of the ACF $\rho(j)$ are asymptotically independent and normally distributed with mean zero and variance $n^{-1}$, provided the underlying residuals are independent and identically distributed with a finite variance. Therefore, plotting the sample ACF for a certain number of lags, say $h$, it is expected that approximately 95% of these values are within the bounds $\pm 1.96/\sqrt{n}$. The R function acf helps to perform this analysis. (See Theorem 1.2.1) Method 2 (The Portmanteau test) The Portmanteau test is based on the test statistic $Q=n\sum_{j=1}^h\hat{\rho}^2(j). \nonumber$ Using the fact that the variables $\sqrt{n}\hat{\rho}(j)$ are asymptotically standard normal, it becomes apparent that $Q$ itself can be approximated with a chi-squared distribution possessing $h$ degrees of freedom. The hypothesis of independent and identically distributed residuals is rejected at the level $\alpha$ if $Q>\chi_{1-\alpha}^2(h)$, where $\chi_{1-\alpha}^2(h)$ is the $1-\alpha$ quantile of the chi-squared distribution with $h$ degrees of freedom. Several refinements of the original Portmanteau test have been established in the literature. We refer here only to the papers Ljung and Box (1978), and McLeod and Li (1983) for further information. Method 3 (The rank test) This test is very useful for finding linear trends. Denote by $\Pi=\#\{(i,j):Y_i>Y_j,\,i>j,\,i=2,\ldots,n\} \nonumber$ the random number of pairs $(i,j)$ satisfying the conditions $Y_i>Y_j$ and $i>j$. There are ${n \choose 2}=\frac 12n(n-1)$ pairs $(i,j)$ such that $i>j$. If $Y_1,\ldots,Y_n$ are independent and identically distributed, then $P(Y_i>Y_j)=1/2$ (assuming a continuous distribution). Now it follows that $\mu_\Pi=E[\Pi]=\frac 14n(n-1)$ and, similarly, $\sigma_\Pi^2=\mbox{Var}(\Pi)=\frac{1}{72}n(n-1)(2n+5)$. Moreover, for large enough sample sizes $n$, $\Pi$ has an approximate normal distribution with mean $\mu_\Pi$ and variance $\sigma_\Pi^2$. Consequently, the hypothesis of independent, identically distributed data would be rejected at the level $\alpha$ if $P=\frac{|\Pi-\mu_\Pi|}{\sigma_\Pi}>z_{1-\alpha/2}, \nonumber$ where $z_{1-\alpha/2}$ denotes the $1-\alpha/2$ quantile of the standard normal distribution. Method 4 (Tests for normality) If there is evidence that the data are generated by Gaussian random variables, one can create the qq plot to check for normality. It is based on a visual inspection of the data. To this end, denote by $Y_{(1)}<\ldots<Y_{(n)}$ the order statistics of the residuals $Y_1,\ldots,Y_n$ which are normally distributed with expected value $\mu$ and variance $\sigma^2$. It holds that $\label{eq:1.5.1} E[Y_{(j)}]=\mu+\sigma E[X_{(j)}], \tag{1.5.1}$ where $X_{(1)}<\ldots<X_{(n)}$ are the order statistics of a standard normal distribution. The qq plot is defined as the graph of the pairs $(E[X_{(1)}],Y_{(1)}),\ldots,(E[X_{(n)}],Y_{(n)})$. 
According to display (1.5.1), the resulting graph will be approximately linear with the squared correlation $R^2$ of the points being close to 1. The assumption of normality will thus be rejected if $R^2$ is "too'' small. It is common to approximate $E[X_{(j)}]\approx\Phi_j=\Phi^{-1}((j-.5)/n)$ ($\Phi$ being the distribution function of the standard normal distribution). The previous statement is made precise by letting $R^2=\frac{\left[\sum_{j=1}^n(Y_{(j)}-\bar{Y})\Phi_j\right]^2}{\sum_{j=1}^n(Y_{(j)}-\bar{Y})^2\sum_{j=1}^n\Phi_j^2}, \nonumber$ where $\bar{Y}=\frac 1n(Y_1+\ldots+Y_n)$. The critical values for $R^2$ are tabulated and can be found, for example in Shapiro and Francia (1972). The corresponding R function is qqnorm. 1.6: Summary In this chapter, the classical decomposition (1.1.1) of a time series into a drift component, a seasonal component and a sequence of residuals was introduced. Methods to estimate the drift and the seasonality were provided. Moreover, the class of stationary processes was identified as a reasonably broad class of random variables. Several ways were introduced to check whether or not the resulting residuals can be considered to be independent, identically distributed. In Chapter 3, the class of autoregressive moving average (ARMA) processes is discussed in depth, a parametric class of random variables that are at the center of linear time series analysis because they are able to capture a wide range of dependence structures and allow for a thorough mathematical treatment. Before, properties of the sample mean, sample ACVF and ACF are considered in the next chapter.
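To close the chapter, here is a compact R sketch of the residual checks of Section 1.5, assuming the residuals are stored in a numeric vector y (for instance y = lsfit$resid from Example 1.3.1); Box.test implements the Ljung-Box refinement of the Portmanteau statistic mentioned above.

> acf(y, main="Sample ACF of the residuals")    # Method 1: bars should stay within the bounds
> Box.test(y, lag=20, type="Ljung-Box")         # Method 2: Portmanteau-type test with h = 20
> qqnorm(y); qqline(y)                          # Method 4: visual normality check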
In this brief second chapter, some results concerning asymptotic properties of the sample mean and the sample ACVF are collected. Throughout, $(X_t\colon t\in\mathbb{Z})$ denotes a weakly stationary stochastic process with mean $\mu$ and ACVF $\gamma$. In Section 1.2 it was shown that such a process is completely characterized by these two quantities. The mean $\mu$ was estimated by the sample mean $\bar{x}$, and the ACVF $\gamma$ by the sample ACVF $\hat{\gamma}$ defined in (1.2.1). In the following, some properties of these estimators are discussed in more detail.

2: The Estimation of Mean and Covariances

Assume that an appropriate guess for the unknown mean $\mu$ of some weakly stationary stochastic process $(X_t\colon t\in\mathbb{Z})$ has to be found. The sample mean $\bar{x}$, easily computed as the average of $n$ observations $x_1,\ldots,x_n$ of the process, has been identified as suitable in Section 1.2. To investigate its theoretical properties, one needs to analyze the random variable associated with it, that is,

$\bar{X}_n=\frac 1n(X_1+\ldots+X_n). \nonumber$

Two facts can be quickly established.

• $\bar{X}_n$ is an unbiased estimator for $\mu$, since

$E[\bar{X}_n]=E\left[\frac 1n\sum_{t=1}^nX_t\right]=\frac 1n\sum_{t=1}^nE[X_t]=\frac 1n n\mu=\mu. \nonumber$

This means that "on average'', the true but unknown $\mu$ is correctly estimated. Notice that there is no difference in the computations between the standard case of independent and identically distributed random variables and the more general weakly stationary process considered here.

• If $\gamma(n)\to 0$ as $n\to\infty$, then $\bar{X}_n$ is a consistent estimator for $\mu$, since

\begin{align*} \mathrm{Var}(\bar{X}_n)&=\mathrm{Cov}\left(\frac 1n\sum_{s=1}^nX_s,\frac 1n\sum_{t=1}^nX_t\right) =\frac{1}{n^2}\sum_{s=1}^n\sum_{t=1}^n\mathrm{Cov}(X_s,X_t)\\[.2cm] &=\frac{1}{n^2}\sum_{s-t=-n}^n(n-|s-t|)\gamma(s-t) =\frac 1n\sum_{h=-n}^n\left(1-\frac{|h|}{n}\right)\gamma(h). \end{align*}

Now, the quantity on the right-hand side converges to zero as $n\to\infty$ because $\gamma(n)\to 0$ as $n\to\infty$ by assumption. The first equality sign in the latter equation array follows from the fact that $\mathrm{Var}(X)=\mathrm{Cov}(X,X)$ for any random variable $X$, the second equality sign uses that the covariance function is linear in both arguments. For the third equality, one can use that $\mathrm{Cov}(X_s,X_t)=\gamma(s-t)$ and that each $\gamma(s-t)$ appears exactly $n-|s-t|$ times in the double summation. Finally, the right-hand side is obtained by replacing $s-t$ with $h$ and pulling one $n^{-1}$ inside the summation.

In the standard case of independent and identically distributed random variables, $n\mathrm{Var}(\bar{X}_n)=\sigma^2$ and the condition $\gamma(n)\to 0$ is automatically satisfied. However, in the general case of weakly stationary processes, it cannot be omitted. More can be proved using an appropriate set of assumptions. The results are formulated as a theorem without giving the proofs.

Theorem 2.1.1

Let $(X_t\colon t\in\mathbb{Z})$ be a weakly stationary stochastic process with mean $\mu$ and ACVF $\gamma$. Then, the following statements hold true as $n\to\infty$.

1. If $\sum_{h=-\infty}^\infty|\gamma(h)|<\infty$, then

$n\mathrm{Var}(\bar{X}_n)\to \sum_{h=-\infty}^\infty\gamma(h)=\tau^2; \nonumber$

2. If the process is "close to Gaussianity'', then

$\sqrt{n}(\bar{X}_n-\mu)\sim AN(0,\tau_n^2), \qquad \tau_n^2=\sum_{h=-n}^n\left(1-\frac{|h|}{n}\right)\gamma(h).
\nonumber$

Here, $\sim AN(0,\tau_n^2)$ stands for approximately normally distributed with mean zero and variance $\tau_n^2$.

Theorem 2.1.1 can be utilized to construct confidence intervals for the unknown mean parameter $\mu$. To do so, one must, however, estimate the unknown variance parameter $\tau_n$. For a large class of stochastic processes, it holds that $\tau_n^2$ converges to $\tau^2$ as $n\to\infty$. Therefore, we can use $\tau^2$ as an approximation for $\tau_n^2$. Moreover, $\tau^2$ can be estimated by

$\hat{\tau}_n^2=\sum_{h=-\sqrt{n}}^{\sqrt{n}}\left(1-\frac{|h|}{n}\right)\hat{\gamma}(h), \nonumber$

where $\hat{\gamma}(h)$ denotes the ACVF estimator defined in (1.2.1). An approximate 95% confidence interval for $\mu$ can now be constructed as

$\left(\bar{X}_n-1.96\frac{\hat{\tau}_n}{\sqrt{n}},\bar{X}_n+1.96\frac{\hat{\tau}_n}{\sqrt{n}}\right). \nonumber$

Example 2.1.1 (Autoregressive Processes)

Let $(X_t\colon t\in\mathbb{Z})$ be given by the equations

$X_t-\mu=\phi(X_{t-1}-\mu)+Z_t,\qquad t\in\mathbb{Z}, \tag{2.1.1}$

where $(Z_t\colon t\in\mathbb{Z})\sim\mathrm{WN}(0,\sigma^2)$ and $|\phi|<1$. It will be shown in Chapter 3 that $(X_t\colon t\in\mathbb{Z})$ defines a weakly stationary process. Utilizing the stochastic difference equation (2.1.1), both mean and autocovariances can be determined. It holds that $E[X_t]=\phi E[X_{t-1}]+\mu(1-\phi)$. Since, by stationarity, $E[X_{t-1}]$ can be substituted with $E[X_t]$, it follows that

$E[X_t]=\mu,\qquad t\in\mathbb{Z}. \nonumber$

In the following we shall work with the process $(X_t^c\colon t\in\mathbb{Z})$ given by letting $X_t^c=X_t-\mu$. Clearly, $E[X_t^c]=0$. From the definition, it follows also that the covariances of $(X_t\colon t\in\mathbb{Z})$ and $(X_t^c\colon t\in\mathbb{Z})$ coincide. First computing the second moment of $X_t^c$ gives

$E[\{X_t^c\}^2]=E\big[(\phi X_{t-1}^c+Z_t)^2\big]=\phi^2E[\{X_{t-1}^c\}^2]+\sigma^2 \nonumber$

and consequently, since $E[\{X_{t-1}^c\}^2]=E[\{X_t^c\}^2]$ by weak stationarity of $(X_t^c\colon t\in\mathbb{Z})$,

$E[\{X_t^c\}^2]=\frac{\sigma^2}{1-\phi^2},\qquad t\in\mathbb{Z}. \nonumber$

It becomes apparent from the latter equation why the condition $|\phi|<1$ was needed in display (2.1.1). In the next step, the autocovariance function is computed. For $h>0$, it holds that

$\gamma(h)=E[X_{t+h}^cX_t^c]=E\big[(\phi X_{t+h-1}^c+Z_{t+h})X_t^c\big]=\phi E[X_{t+h-1}^cX_t^c]=\phi\gamma(h-1)=\phi^{h}\gamma(0) \nonumber$

after $h$ iterations. But since $\gamma(0)=E[\{X_t^c\}^2]$, by symmetry of the ACVF, it follows that

$\gamma(h)=\frac{\sigma^2\phi^{|h|}}{1-\phi^2},\qquad h\in\mathbb{Z}. \nonumber$

After these theoretical considerations, a 95% (asymptotic) confidence interval for the mean parameter $\mu$ can be constructed. To check if Theorem 2.1.1 is applicable here, one needs to check if the autocovariances are absolutely summable:

\begin{align*} \tau^2&=\sum_{h=-\infty}^\infty\gamma(h)=\frac{\sigma^2}{1-\phi^2}\left(1+2\sum_{h=1}^\infty\phi^h\right) =\frac{\sigma^2}{1-\phi^2}\left(1+\frac{2}{1-\phi}-2\right)\\[.2cm] &=\frac{\sigma^2}{1-\phi^2}\,\frac{1+\phi}{1-\phi}=\frac{\sigma^2}{(1-\phi)^2}<\infty. \end{align*}

Therefore, a 95% confidence interval for $\mu$ which is based on the observed values $x_1,\ldots,x_n$ is given by

$\left(\bar{x}-1.96\frac{\sigma}{\sqrt{n}(1-\phi)},\bar{x}+1.96\frac{\sigma}{\sqrt{n}(1-\phi)}\right). \nonumber$

Therein, the parameters $\sigma$ and $\phi$ have to be replaced with appropriate estimators. These will be introduced in Chapter 3.
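A minimal sketch of this confidence interval in R, using only arima.sim and acf: an AR(1) series with made-up parameters phi = 0.5 and n = 200 is simulated, $\hat{\tau}_n^2$ is computed from the sample ACVF as in the display above, and the resulting interval can be compared with the true mean 0.

> set.seed(42)
> n = 200; phi = 0.5
> x = arima.sim(list(order=c(1,0,0), ar=phi), n=n)          # AR(1) with mean zero
> H = floor(sqrt(n))
> g = acf(x, lag.max=H, type="covariance", plot=FALSE)$acf  # sample ACVF at lags 0,...,H
> tau2.hat = g[1] + 2*sum((1 - (1:H)/n)*g[-1])              # hat tau_n^2, using gamma(-h) = gamma(h)
> mean(x) + c(-1,1)*1.96*sqrt(tau2.hat/n)                   # approximate 95% CI for mu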
2.2: Estimation of the Autocovariance Function

This section deals with the estimation of the ACVF and ACF at lag $h$. Recall from equation (1.2.1) that the estimator

$\hat{\gamma}(h)=\frac 1n\sum_{t=1}^{n-|h|}(X_{t+|h|}-\bar{X}_n)(X_t-\bar{X}_n), \qquad h=0,\pm 1,\ldots, \pm(n-1), \nonumber$

may be utilized as a proxy for the unknown $\gamma(h)$. As estimator for the ACF $\rho(h)$,

$\hat{\rho}(h)=\frac{\hat{\gamma}(h)}{\hat\gamma(0)},\qquad h=0,\pm 1,\ldots,\pm(n-1), \nonumber$

was identified. Some of the theoretical properties of $\hat{\rho}(h)$ are briefly collected in the following. They are not as obvious to derive as in the case of the sample mean, and all proofs are omitted. Note also that similar statements hold for $\hat{\gamma}(h)$ as well.

• The estimator $\hat{\rho}(h)$ is generally biased, that is, $E[\hat{\rho}(h)]\not=\rho(h)$. It holds, however, under non-restrictive assumptions that

$E[\hat{\rho}(h)]\to\rho(h)\qquad (n\to\infty). \nonumber$

This property is called asymptotic unbiasedness.

• The estimator $\hat{\rho}(h)$ is consistent for $\rho(h)$ under an appropriate set of assumptions, that is, $\mathrm{Var}(\hat{\rho}(h)-\rho(h))\to 0$ as $n\to\infty$.

It was already established in Section 1.5 how the sample ACF $\hat{\rho}$ can be used to test if residuals consist of white noise variables. For more general statistical inference, one needs to know the sampling distribution of $\hat{\rho}$. Since the estimation of $\rho(h)$ is based on only a few observations for $h$ close to the sample size $n$, estimates tend to be unreliable. As a rule of thumb, given by Box and Jenkins (1976), $n$ should at least be 50 and $h$ less than or equal to $n/4$.

Theorem 2.2.1

For $m\geq 1$, let $\mathbf{\rho}_m=(\rho(1),\ldots,\rho(m))^T$ and $\mathbf{\hat{\rho}}_m=(\hat{\rho}(1),\ldots,\hat{\rho}(m))^T$, where $^T$ denotes the transpose of a vector. Under a set of suitable assumptions, it holds that

$\sqrt{n}(\mathbf{\hat{\rho}}_m-\mathbf{\rho}_m)\sim AN(\mathbf{0},\Sigma)\qquad (n\to\infty), \nonumber$

where $\sim AN(\mathbf{0},\Sigma)$ stands for approximately normally distributed with mean vector $\mathbf{0}$ and covariance matrix $\Sigma=(\sigma_{ij})$ given by Bartlett's formula

$\sigma_{ij}=\sum_{k=1}^\infty\big[\rho(k+i)+\rho(k-i)-2\rho(i)\rho(k)\big]\big[\rho(k+j)+\rho(k-j)-2\rho(j)\rho(k)\big]. \nonumber$

The section is concluded with two examples. The first one recollects the results already known for independent, identically distributed random variables, the second deals with the autoregressive process of Example 2.1.1.

Example 2.2.1

Let $(X_t\colon t\in\mathbb{Z})\sim\mathrm{IID}(0,\sigma^2)$. Then, $\rho(0)=1$ and $\rho(h)=0$ for all $h\not=0$. The covariance matrix $\Sigma$ is therefore given by

$\sigma_{ij}=1\quad\mbox{if } i=j \qquad\mbox{and}\qquad \sigma_{ij}=0\quad\mbox{if } i\not=j. \nonumber$

This means that $\Sigma$ is a diagonal matrix. In view of Theorem 2.2.1 it holds thus that the estimators $\hat{\rho}(1),\ldots,\hat{\rho}(k)$ are approximately independent and identically distributed normal random variables with mean 0 and variance $1/n$. This was the basis for Methods 1 and 2 in Section 1.5 (see also Theorem 1.2.1).

Example 2.2.2

Reconsider the autoregressive process $(X_t\colon t\in\mathbb{Z})$ from Example 2.1.1 with $\mu=0$. Dividing $\gamma(h)$ by $\gamma(0)$ yields that

$\rho(h)=\phi^{|h|},\qquad h\in\mathbb{Z}.
\nonumber$

Now the diagonal entries of $\Sigma$ are computed as

\begin{align*} \sigma_{ii}&=\sum_{k=1}^\infty\big[\rho(k+i)+\rho(k-i)-2\rho(i)\rho(k)\big]^2\\[.2cm] &=\sum_{k=1}^i\phi^{2i}(\phi^{-k}-\phi^k)^2+\sum_{k=i+1}^\infty\phi^{2k}(\phi^{-i}-\phi^i)^2\\[.2cm] &=(1-\phi^{2i})(1+\phi^2)(1-\phi^2)^{-1}-2i\phi^{2i}. \end{align*}
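The following sketch, included here only for illustration, evaluates this closed form at a made-up value phi = 0.5 and lag i = 3 and compares it with a direct truncation of Bartlett's series from Theorem 2.2.1; the two numbers should agree up to truncation error.

> phi = 0.5; i = 3
> rho = function(h) phi^abs(h)                 # ACF of the AR(1) process
> k = 1:5000                                   # truncate the infinite sum
> series = sum((rho(k+i) + rho(k-i) - 2*rho(i)*rho(k))^2)
> closed = (1 - phi^(2*i))*(1 + phi^2)/(1 - phi^2) - 2*i*phi^(2*i)
> c(series, closed)                            # both values agree (approximately)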
3: ARMA Processes

In this chapter autoregressive moving average processes are discussed. They play a crucial role in specifying time series models for applications. As solutions of stochastic difference equations with constant coefficients, these processes possess a linear structure.

Definition 3.1.1: ARMA processes

(a) A weakly stationary process $(X_t\colon t\in\mathbb{Z})$ is called an autoregressive moving average time series of order $p,q$, abbreviated by $ARMA(p,q)$, if it satisfies the difference equations

$X_t=\phi_1X_{t-1}+\ldots+\phi_pX_{t-p}+Z_t+\theta_1Z_{t-1}+\ldots+\theta_qZ_{t-q}, \qquad t\in\mathbb{Z}, \tag{3.1.1}$

where $\phi_1,\ldots,\phi_p$ and $\theta_1,\ldots,\theta_q$ are real constants, $\phi_p\not=0\not=\theta_q$, and $(Z_t\colon t\in\mathbb{Z})\sim{\rm WN}(0,\sigma^2)$.

(b) A weakly stationary stochastic process $(X_t\colon t\in\mathbb{Z})$ is called an $ARMA(p,q)$ time series with mean $\mu$ if the process $(X_t-\mu\colon t\in\mathbb{Z})$ satisfies the equation system (3.1.1).

A more concise representation of equation (3.1.1) can be obtained with the use of the backshift operator $B$. To this end, define the autoregressive polynomial and the moving average polynomial by

$\phi(z)=1-\phi_1z-\phi_2z^2-\ldots-\phi_pz^p,\qquad z\in\mathbb{C}, \nonumber$

and

$\theta(z)=1+\theta_1z+\theta_2z^2+\ldots+\theta_qz^q,\qquad z\in\mathbb{C}, \nonumber$

respectively, where $\mathbb{C}$ denotes the set of complex numbers. Inserting the backshift operator into these polynomials, the equations in (3.1.1) become

$\phi(B)X_t=\theta(B)Z_t,\qquad t\in\mathbb{Z}. \tag{3.1.2}$

Example 3.1.1

Figure 3.1 displays realizations of three different autoregressive moving average time series based on independent, standard normally distributed $(Z_t\colon t\in\mathbb{Z})$. The left panel is an ARMA(2,2) process with parameter specifications $\phi_1=.2$, $\phi_2=-.3$, $\theta_1=-.5$ and $\theta_2=.3$. The middle plot is obtained from an ARMA(1,4) process with parameters $\phi_1=.3$, $\theta_1=-.2$, $\theta_2=-.3$, $\theta_3=.5$, and $\theta_4=.2$, while the right plot is from an ARMA(4,1) with parameters $\phi_1=-.2$, $\phi_2=-.3$, $\phi_3=.5$ and $\phi_4=.2$ and $\theta_1=.6$. The plots indicate that ARMA models can provide a flexible tool for modeling diverse residual sequences. It will turn out in the next section that all three realizations here come from (strictly) stationary processes. Similar time series plots can be produced in R using the commands

> arima22 = arima.sim(list(order=c(2,0,2), ar=c(.2,-.3), ma=c(-.5,.3)), n=100)
> arima14 = arima.sim(list(order=c(1,0,4), ar=.3, ma=c(-.2,-.3,.5,.2)), n=100)
> arima41 = arima.sim(list(order=c(4,0,1), ar=c(-.2,-.3,.5,.2), ma=.6), n=100)

Some special cases covered in the following two examples have particular relevance in time series analysis.

Example 3.1.2 (AR Processes)

If the moving average polynomial in (3.1.2) is equal to one, that is, if $\theta(z)\equiv 1$, then the resulting $(X_t\colon t\in\mathbb{Z})$ is referred to as autoregressive process of order $p$, AR$(p)$.
These time series interpret the value of the current variable $X_t$ as a linear combination of $p$ previous variables $X_{t-1},\ldots,X_{t-p}$ plus an additional distortion by the white noise $Z_t$. Figure 3.2 displays two AR(1) processes with respective parameters $\phi_1=-.9$ (left) and $\phi_1=.8$ (middle) as well as an AR(2) process with parameters $\phi_1=-.5$ and $\phi_2=.3$. The corresponding R commands are

> ar1neg = arima.sim(list(order=c(1,0,0), ar=-.9), n=100)
> ar1pos = arima.sim(list(order=c(1,0,0), ar=.8), n=100)
> ar2 = arima.sim(list(order=c(2,0,0), ar=c(-.5,.3)), n=100)

Figure 3.3: Realizations of three moving average processes.

Example 3.1.3 (MA Processes)

If the autoregressive polynomial in (3.1.2) is equal to one, that is, if $\phi(z)\equiv 1$, then the resulting $(X_t\colon t\in\mathbb{Z})$ is referred to as moving average process of order $q$, MA($q$). Here the present variable $X_t$ is obtained as superposition of $q$ white noise terms $Z_t,\ldots,Z_{t-q}$. Figure 3.3 shows two MA(1) processes with respective parameters $\theta_1=.5$ (left) and $\theta_1=-.8$ (middle). The right plot is observed from an MA(2) process with parameters $\theta_1=-.5$ and $\theta_2=.3$. In R one may use

> ma1pos = arima.sim(list(order=c(0,0,1), ma=.5), n=100)
> ma1neg = arima.sim(list(order=c(0,0,1), ma=-.8), n=100)
> ma2 = arima.sim(list(order=c(0,0,2), ma=c(-.5,.3)), n=100)

For the analysis upcoming in the next chapters, we now introduce moving average processes of infinite order $(q=\infty)$. They are an important tool for determining stationary solutions to the difference equations (3.1.1).

Definition 3.1.2: Linear processes

A stochastic process $(X_t\colon t\in\mathbb{Z})$ is called linear process or MA$(\infty)$ time series if there is a sequence $(\psi_j\colon j\in\mathbb{N}_0)$ with $\sum_{j=0}^\infty|\psi_j|<\infty$ such that

$X_t=\sum_{j=0}^\infty\psi_jZ_{t-j},\qquad t\in\mathbb{Z}, \tag{3.1.3}$

where $(Z_t\colon t\in\mathbb{Z})\sim{\rm WN}(0,\sigma^2)$.

Moving average time series of any order $q$ are special cases of linear processes. Just pick $\psi_0=1$, $\psi_j=\theta_j$ for $j=1,\ldots,q$ and set $\psi_j=0$ if $j>q$. It is common to introduce the power series

$\psi(z)=\sum_{j=0}^\infty\psi_jz^j, \qquad z\in\mathbb{C}, \nonumber$

to express a linear process in terms of the backshift operator. Display (3.1.3) can now be rewritten in the compact form

$X_t=\psi(B)Z_t,\qquad t\in\mathbb{Z}. \nonumber$

With the definitions of this section at hand, properties of ARMA processes, such as stationarity and invertibility, are investigated in the next section. The current section is closed by giving meaning to the notation $X_t=\psi(B)Z_t$. Note that one is possibly dealing with an infinite sum of random variables. For completeness and later use, in the following example the mean and ACVF of a linear process are derived.

Example 3.1.4 (Mean and ACVF of a linear process)

Let $(X_t\colon t\in\mathbb{Z})$ be a linear process according to Definition 3.1.2. Then, it holds that

$E[X_t] =E\left[\sum_{j=0}^\infty\psi_jZ_{t-j}\right] =\sum_{j=0}^\infty\psi_jE[Z_{t-j}]=0, \qquad t\in\mathbb{Z}. \nonumber$

Next observe also that

\begin{align*} \gamma(h) &=\mathrm{Cov}(X_{t+h},X_t)\\[.2cm] &=E\left[\sum_{j=0}^\infty\psi_jZ_{t+h-j}\sum_{k=0}^\infty\psi_kZ_{t-k}\right]\\[.2cm] &=\sigma^2\sum_{k=0}^\infty\psi_{k+h}\psi_k<\infty \end{align*}

by assumption on the sequence $(\psi_j\colon j\in\mathbb{N}_0)$.
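As a small numerical illustration of Example 3.1.4, the sketch below simulates an MA(2) process with arbitrarily chosen parameters and compares its sample ACVF at lags 0, 1, 2 with the theoretical values $\gamma(h)=\sigma^2\sum_{k\geq 0}\psi_{k+h}\psi_k$, where $\psi_0=1$, $\psi_1=\theta_1$, $\psi_2=\theta_2$.

> set.seed(7)
> theta = c(-.5, .3); sigma2 = 1
> x = arima.sim(list(order=c(0,0,2), ma=theta), n=5000, sd=sqrt(sigma2))
> psi = c(1, theta)                                           # psi-weights of the MA(2)
> gamma.theo = sigma2*c(sum(psi*psi), sum(psi[1:2]*psi[2:3]), psi[1]*psi[3])
> gamma.hat = acf(x, lag.max=2, type="covariance", plot=FALSE)$acf[1:3]
> rbind(gamma.theo, gamma.hat)                                # theoretical vs. sample ACVF at lags 0, 1, 2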
While a moving average process of order $q$ will always be stationary without conditions on the coefficients $\theta_1,\ldots,\theta_q$, some deeper thoughts are required in the case of AR($p$) and ARMA($p,q$) processes. For simplicity, we start by investigating the autoregressive process of order one, which is given by the equations $X_t=\phi X_{t-1}+Z_t$ (writing $\phi=\phi_1$). Repeated iterations yield that

$X_t =\phi X_{t-1}+Z_t =\phi^2X_{t-2}+Z_t+\phi Z_{t-1}=\ldots =\phi^NX_{t-N}+\sum_{j=0}^{N-1}\phi^jZ_{t-j}. \nonumber$

Letting $N\to\infty$, it could now be shown that, with probability one,

$X_t=\sum_{j=0}^\infty\phi^jZ_{t-j} \nonumber$

is the weakly stationary solution to the AR(1) equations, provided that $|\phi|<1$. These calculations would indicate, moreover, that an autoregressive process of order one can be represented as a linear process with coefficients $\psi_j=\phi^j$.

Example 3.2.1 (Mean and ACVF of an AR(1) process)

Since an autoregressive process of order one has been identified as an example of a linear process, one can easily determine its expected value as

$E[X_t]=\sum_{j=0}^\infty\phi^jE[Z_{t-j}]=0, \qquad t\in\mathbb{Z}. \nonumber$

For the ACVF, it is obtained that

\begin{align*} \gamma(h) &={\rm Cov}(X_{t+h},X_t)\\[.2cm] &=E\left[\sum_{j=0}^\infty\phi^jZ_{t+h-j}\sum_{k=0}^\infty\phi^kZ_{t-k}\right]\\[.2cm] &=\sigma^2\sum_{k=0}^\infty\phi^{k+h}\phi^{k} =\sigma^2\phi^h\sum_{k=0}^\infty\phi^{2k} =\frac{\sigma^2\phi^h}{1-\phi^2}, \end{align*}

where $h\geq 0$. This determines the ACVF for all $h$ using that $\gamma(-h)=\gamma(h)$. It is also immediate that the ACF satisfies $\rho(h)=\phi^h$. See also Example 2.1.1 for comparison.

Example 3.2.2 (Nonstationary AR(1) processes)

In Example 1.2.3 we have introduced the random walk as a nonstationary time series. It can also be viewed as a nonstationary AR(1) process with parameter $\phi=1$. In general, autoregressive processes of order one with coefficients $|\phi|>1$ are called explosive, for they do not admit a weakly stationary solution that could be expressed as a linear process. However, one may proceed as follows. Rewrite the defining equations of an AR(1) process as

$X_t=-\phi^{-1}Z_{t+1}+\phi^{-1}X_{t+1}, \qquad t\in\mathbb{Z}. \nonumber$

Apply now the same iterations as before to arrive at

$X_t=\phi^{-N}X_{t+N}-\sum_{j=1}^N\phi^{-j}Z_{t+j},\qquad t\in\mathbb{Z}. \nonumber$

Note that in the weakly stationary case, the present observation has been described in terms of past innovations. The representation in the last equation however contains only future observations with time lags larger than the present time $t$. From a statistical point of view this does not make much sense, even though by identical arguments as above we may obtain

$X_t=-\sum_{j=1}^\infty\phi^{-j}Z_{t+j}, \qquad t\in\mathbb{Z}, \nonumber$

as the weakly stationary solution in the explosive case.

The result of the previous example leads to the notion of causality, which means that the process $(X_t\colon t\in\mathbb{Z})$ has a representation in terms of the white noise $(Z_s\colon s\leq t)$ and that it is hence uncorrelated with the future as given by $(Z_s\colon s>t)$. We give the definition for the general ARMA case.

Definition: Causality

An ARMA($p,q$) process given by (3.1.1) is causal if there is a sequence $(\psi_j\colon j\in\mathbb{N}_0)$ such that $\sum_{j=0}^\infty|\psi_j|<\infty$ and

$X_t=\sum_{j=0}^\infty\psi_jZ_{t-j}, \qquad t\in\mathbb{Z}. \nonumber$

Causality means that an ARMA time series can be represented as a linear process.
It was seen earlier in this section how an AR(1) process whose coefficient satisfies the condition $|\phi|<1$ can be converted into a linear process. It was also shown that this is impossible if $|\phi|>1$. The conditions on the autoregressive parameter $\phi$ can be restated in terms of the corresponding autoregressive polynomial $\phi(z)=1-\phi z$ as follows. It holds that

$|\phi|<1$ if and only if $\phi(z)\not=0$ for all $|z|\leq 1$,

$|\phi|>1$ if and only if $\phi(z)\not=0$ for all $|z|\geq 1$.

It turns out that the characterization in terms of the zeroes of the autoregressive polynomials carries over from the AR(1) case to the general ARMA($p,q$) case. Moreover, the $\psi$-weights of the resulting linear process have an easy representation in terms of the polynomials $\phi(z)$ and $\theta(z)$. The result is summarized in the next theorem.

Theorem 3.2.1

Let $(X_t\colon t\in\mathbb{Z})$ be an ARMA($p,q$) process such that the polynomials $\phi(z)$ and $\theta(z)$ have no common zeroes. Then $(X_t\colon t\in\mathbb{Z})$ is causal if and only if $\phi(z)\not=0$ for all $z\in\mathbb{C}$ with $|z|\leq 1$. The coefficients $(\psi_j\colon j\in\mathbb{N}_0)$ are determined by the power series expansion

$\psi(z)=\sum_{j=0}^\infty\psi_jz^j=\frac{\theta(z)}{\phi(z)}, \qquad |z|\leq 1. \nonumber$

A concept closely related to causality is invertibility. This notion is motivated with the following example that studies properties of a moving average time series of order 1.

Example 3.2.3

Let $(X_t\colon t\in\mathbb{Z})$ be an MA(1) process with parameter $\theta=\theta_1$. It is an easy exercise to compute the ACVF and the ACF as

$\gamma(h)=\left\{ \begin{array}{l@{\quad}l} (1+\theta^2)\sigma^2, & h=0, \\ \theta\sigma^2, & h=1, \\ 0, & h>1, \end{array}\right. \qquad \rho(h)=\left\{ \begin{array}{l@{\quad}l} 1, & h=0, \\ \theta(1+\theta^2)^{-1}, & h=1, \\ 0, & h>1. \end{array}\right. \nonumber$

These results lead to the conclusion that $\rho(h)$ does not change if the parameter $\theta$ is replaced with $\theta^{-1}$. Moreover, there exist pairs $(\theta,\sigma^2)$ that lead to the same ACVF, for example $(5,1)$ and $(1/5,25)$. Consequently, we arrive at the fact that the two MA(1) models

$X_t=Z_t+\frac 15Z_{t-1},\qquad t\in\mathbb{Z}, \qquad (Z_t\colon t\in\mathbb{Z})\sim\mbox{iid }{\cal N}(0,25), \nonumber$

and

$X_t=\tilde{Z}_t+5\tilde{Z}_{t-1},\qquad t\in\mathbb{Z}, \qquad (\tilde{Z}_t\colon t\in\mathbb{Z})\sim\mbox{iid }{\cal N}(0,1), \nonumber$

are indistinguishable because we only observe $X_t$ but not the noise variables $Z_t$ and $\tilde{Z}_t$. For convenience, the statistician will pick the model which satisfies the invertibility criterion which is to be defined next. It specifies that the noise sequence can be represented as a linear process in the observations.

Definition: Invertibility

An ARMA($p,q$) process given by (3.1.1) is invertible if there is a sequence $(\pi_j\colon j\in\mathbb{N}_0)$ such that $\sum_{j=0}^\infty|\pi_j|<\infty$ and

$Z_t=\sum_{j=0}^\infty\pi_jX_{t-j},\qquad t\in\mathbb{Z}. \nonumber$

Theorem 3.2.2

Let $(X_t\colon t\in\mathbb{Z})$ be an ARMA($p,q$) process such that the polynomials $\phi(z)$ and $\theta(z)$ have no common zeroes. Then $(X_t\colon t\in\mathbb{Z})$ is invertible if and only if $\theta(z)\not=0$ for all $z \in\mathbb{C}$ with $|z|\leq 1$. The coefficients $(\pi_j)_{j\in\mathbb{N}_0}$ are determined by the power series expansion

$\pi(z)=\sum_{j=0}^\infty\pi_jz^j=\frac{\phi(z)}{\theta(z)}, \qquad |z|\leq 1.
From now on it is assumed that all ARMA sequences specified in the sequel are causal and invertible unless explicitly stated otherwise. The final example of this section highlights the usefulness of the established theory. It deals with parameter redundancy and the calculation of the causality and invertibility sequences $(\psi_j\colon j\in\mathbb{N}_0)$ and $(\pi_j\colon j\in\mathbb{N}_0)$.

Example $4$: Parameter redundancy

Consider the ARMA equations $X_t=.4X_{t-1}+.21X_{t-2}+Z_t+.6Z_{t-1}+.09Z_{t-2}, \nonumber$ which seem to generate an ARMA(2,2) sequence. However, the autoregressive and moving average polynomials have a common zero: \begin{align*} \tilde{\phi}(z)&=1-.4z-.21z^2=(1-.7z)(1+.3z), \\[.2cm] \tilde{\theta}(z)&=1+.6z+.09z^2=(1+.3z)^2. \end{align*} Therefore, one can reduce the ARMA equations to a sequence of order (1,1) and obtain $X_t=.7X_{t-1}+Z_t+.3Z_{t-1}. \nonumber$ Now, the corresponding polynomials have no common roots. Note that the roots of $\phi(z)=1-.7z$ and $\theta(z)=1+.3z$ are $10/7>1$ and $-10/3<-1$, respectively. Thus Theorems 3.2.1 and 3.2.2 imply that causal and invertible solutions exist. In the following, the corresponding coefficients in the expansions $X_t=\sum_{j=0}^\infty\psi_jZ_{t-j} \qquad\mbox{and}\qquad Z_t=\sum_{j=0}^\infty\pi_jX_{t-j}, \qquad t\in\mathbb{Z}, \nonumber$ are calculated, starting with the causality sequence $(\psi_j: j\in\mathbb{N}_0)$. Writing, for $|z|\leq 1$, $\sum_{j=0}^\infty\psi_jz^j =\psi(z) =\frac{\theta(z)}{\phi(z)} =\frac{1+.3z}{1-.7z} =(1+.3z)\sum_{j=0}^\infty(.7z)^j, \nonumber$ it can be obtained from a comparison of coefficients that $\psi_0=1 \qquad\mbox{and}\qquad \psi_j=(.7+.3)(.7)^{j-1}=(.7)^{j-1}, \qquad j\in\mathbb{N}. \nonumber$ Similarly one computes the invertibility coefficients $(\pi_j: j\in\mathbb{N}_0)$ from the equation $\sum_{j=0}^\infty\pi_jz^j =\pi(z) =\frac{\phi(z)}{\theta(z)} =\frac{1-.7z}{1+.3z} =(1-.7z)\sum_{j=0}^\infty(-.3z)^j \nonumber$ ($|z|\leq 1$) as $\pi_0=1 \qquad\mbox{and}\qquad \pi_j=(-1)^j(.3+.7)(.3)^{j-1}=(-1)^j(.3)^{j-1}, \qquad j\in\mathbb{N}. \nonumber$ Together, the previous calculations yield the explicit representations $X_t=Z_t+\sum_{j=1}^\infty(.7)^{j-1}Z_{t-j} \qquad\mbox{and}\qquad Z_t=X_t+\sum_{j=1}^\infty(-1)^j(.3)^{j-1}X_{t-j}. \nonumber$

In the remainder of this section, a general way is provided to determine the weights $(\psi_j\colon j\geq 1)$ for a causal ARMA($p,q$) process given by $\phi(B)X_t=\theta(B)Z_t$, where $\phi(z)\not=0$ for all $z\in\mathbb{C}$ such that $|z|\leq 1$. Since $\psi(z)=\theta(z)/\phi(z)$ for these $z$, the weights $\psi_j$ can be computed by matching the corresponding coefficients in the equation $\psi(z)\phi(z)=\theta(z)$, that is, $(\psi_0+\psi_1z+\psi_2z^2+\ldots)(1-\phi_1z-\ldots-\phi_pz^p) = 1+\theta_1z+\ldots+\theta_qz^q. \nonumber$ Recursively solving for $\psi_0,\psi_1,\psi_2,\ldots$ gives \begin{align*} \psi_0&=1, \\ \psi_1-\phi_1\psi_0&=\theta_1, \\ \psi_2-\phi_1\psi_1-\phi_2\psi_0&=\theta_2, \end{align*} and so on as long as $j<\max\{p,q+1\}$. The general solution can be stated as $\psi_j-\sum_{k=1}^j\phi_k\psi_{j-k}=\theta_j, \qquad 0\leq j<\max\{p,q+1\}, \tag{3.2.1}$ $\psi_j-\sum_{k=1}^p\phi_k\psi_{j-k}=0, \qquad \phantom{0\leq} j\geq\max\{p,q+1\},\tag{3.2.2}$ if we define $\phi_j=0$ if $j>p$ and $\theta_j=0$ if $j>q$. To obtain the coefficients $\psi_j$ one therefore has to solve the homogeneous linear difference equation (3.2.2) subject to the initial conditions specified by (3.2.1).
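The recursions (3.2.1) and (3.2.2) are straightforward to code. The helper function below is our own sketch (its name and interface are not part of the text); it computes $\psi_0,\ldots,\psi_J$ for a causal ARMA($p,q$) process and, for the ARMA(1,1) model of Example 4, reproduces the closed form $\psi_j=(.7)^{j-1}$.

psi.weights <- function(phi, theta, J) {
  # phi, theta: AR and MA coefficient vectors; J: number of weights psi_1, ..., psi_J
  p <- length(phi); q <- length(theta)
  psi <- numeric(J + 1); psi[1] <- 1            # psi[j+1] stores psi_j, with psi_0 = 1
  for (j in 1:J) {
    th <- if (j <= q) theta[j] else 0           # theta_j = 0 for j > q
    ar.part <- sum(phi[1:min(j, p)] * psi[j - (1:min(j, p)) + 1])
    psi[j + 1] <- th + ar.part                  # psi_j = theta_j + sum_k phi_k psi_{j-k}
  }
  psi
}
psi.weights(phi = .7, theta = .3, J = 5)        # 1.0 1.0 0.7 0.49 0.343 0.2401
round(ARMAtoMA(ar = .7, ma = .3, 5), 4)         # same values, without the leading psi_0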
For more on this subject, see Section 3.6 of Brockwell and Davis (1991) and Section 3.3 of Shumway and Stoffer (2006).

R calculations

In R, these computations can be performed using the command ARMAtoMA. For example, one can use the commands

>ARMAtoMA(ar=.7, ma=.3, 25)
>plot(ARMAtoMA(ar=.7, ma=.3, 25))

which will produce the output displayed in Figure 3.4. The plot shows nicely the exponential decay of the $\psi$-weights which is typical for ARMA processes. The output below lists, row-wise, the weights $\psi_1,\ldots,\psi_{25}$ (ARMAtoMA does not return $\psi_0=1$); 25 weights are computed because of the choice of 25 in the third argument of ARMAtoMA.

1.0000000000 0.7000000000 0.4900000000 0.3430000000 0.2401000000
0.1680700000 0.1176490000 0.0823543000 0.0576480100 0.0403536070
0.0282475249 0.0197732674 0.0138412872 0.0096889010 0.0067822307
0.0047475615 0.0033232931 0.0023263051 0.0016284136 0.0011398895
0.0007979227 0.0005585459 0.0003909821 0.0002736875 0.0001915812
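There is no separate built-in routine for the invertibility weights, but because $\pi(z)=\phi(z)/\theta(z)$ has the same form as $\psi(z)$ with the two polynomials interchanged, the same function can be reused with negated and swapped coefficients. This is a small trick of ours, not documented $\pi$-weight functionality of ARMAtoMA, so the result should be checked against the closed form.

pi.w <- ARMAtoMA(ar = -.3, ma = -.7, 10)   # expands pi(z) = (1 - .7z)/(1 + .3z)
round(pi.w, 5)                             # pi_1, ..., pi_10
round((-1)^(1:10) * .3^(0:9), 5)           # closed form (-1)^j (.3)^(j-1) from Example 4

Both lines print the same numbers, confirming the expansion computed by hand above.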
In this section, the partial autocorrelation function (PACF) is introduced to further assess the dependence structure of stationary processes in general and causal ARMA processes in particular. To start with, let us compute the ACVF of a moving average process of order $q$.

Example $1$: The ACVF of an MA($q$) process

Let $(X_t\colon t\in\mathbb{Z})$ be an MA($q$) process specified by the polynomial $\theta(z)=1+\theta_1z+\ldots+\theta_qz^q$. Then, letting $\theta_0=1$, it holds that $E[X_t]=\sum_{j=0}^q\theta_jE[Z_{t-j}]=0. \nonumber$

Solution

To compute the ACVF, suppose that $h\geq 0$ and write \begin{align*} \gamma(h)&={\rm Cov}(X_{t+h},X_{t})=E[X_{t+h}X_{t}]\\[.2cm] &=E\left[\left(\sum_{j=0}^q\theta_jZ_{t+h-j}\right) \left(\sum_{k=0}^q\theta_kZ_{t-k}\right)\right]\\[.2cm] &=\sum_{j=0}^q\sum_{k=0}^q\theta_j\theta_kE[Z_{t+h-j}Z_{t-k}]\\[.2cm] &=\left\{\begin{array}{l@{\qquad}r} \displaystyle\sigma^2\sum_{k=0}^{q-h}\theta_{k+h}\theta_k, & 0\leq h\leq q,\\[.2cm] 0, & h>q. \end{array}\right. \end{align*} The result here is a generalization of the MA(1) case, which was treated in Example 3.2.3. It is also a special case of the linear process in Example 3.1.4.

The structure of the ACVF for MA processes indicates a possible strategy to determine in practice the unknown order $q$: plot the sample ACF and select as order $q$ the largest lag such that $\rho(h)$ is significantly different from zero.

While the sample ACF can potentially reveal the true order of an MA process, the same is not true anymore in the case of AR processes. Even for the AR(1) time series it has been shown in Example 3.2.1 that its ACF $\rho(h)=\phi^{|h|}$ is nonzero for all lags. As further motivation, however, we discuss the following example.

Example 3.3.2

Let $(X_t\colon t\in\mathbb{Z})$ be a causal AR(1) process with parameter $|\phi|<1$. It holds that $\gamma(2)={\rm Cov}(X_2,X_{0}) ={\rm Cov}(\phi^2X_{0}+\phi Z_{1}+Z_2,X_{0}) =\phi^2\gamma(0)\not=0. \nonumber$ To break the linear dependence between $X_0$ and $X_2$, subtract $\phi X_1$ from both variables. Calculating the resulting covariance yields ${\rm Cov}(X_2-\phi X_{1},X_0-\phi X_1)= {\rm Cov}(Z_2,X_0-\phi X_1)=0, \nonumber$ since, due to the causality of this AR(1) process, $X_0-\phi X_1$ is a function of $Z_{1},Z_0,Z_{-1},\ldots$ and therefore uncorrelated with $X_2-\phi X_1=Z_2$.

The previous example motivates the following general definition.

Definition 3.3.1 Partial autocorrelation function

Let $(X_t\colon t\in\mathbb{Z})$ be a weakly stationary stochastic process with zero mean. Then, the sequence $(\phi_{hh}\colon h\in\mathbb{N})$ given by \begin{align*} \phi_{11}&=\rho(1)={\rm Corr}(X_1,X_0), \\[.2cm] \phi_{hh}&={\rm Corr}(X_h-X_h^{h-1},X_0-X_0^{h-1}), \qquad h\geq 2, \end{align*} is called the partial autocorrelation function (PACF) of $(X_t\colon t\in\mathbb{Z})$. Therein, \begin{align*} X_h^{h-1}&=\mbox{regression of }X_h\mbox{ on }(X_{h-1},\ldots,X_1)=\beta_1X_{h-1}+\beta_2X_{h-2}+\ldots+\beta_{h-1}X_1, \\[.2cm] X_0^{h-1}&=\mbox{regression of }X_0\mbox{ on }(X_1,\ldots,X_{h-1})=\beta_1X_1+\beta_2X_2+\ldots+\beta_{h-1}X_{h-1}. \end{align*} Notice that there is no intercept coefficient $\beta_0$ in the regression parameters, since it is assumed that $E[X_t]=0$. The following example demonstrates how to calculate the regression parameters in the case of an AR(1) process.

Example 3.3.3: PACF of an AR(1) process

If $(X_t\colon t\in\mathbb{Z})$ is a causal AR(1) process, then $\phi_{11}=\rho(1)=\phi$. To calculate $\phi_{22}$, calculate first $X_2^1=\beta X_1$, that is, the coefficient $\beta$.
This coefficient is determined by minimizing the mean-squared error between $X_2$ and $\beta X_1$: $E[X_2-\beta X_1]^2=\gamma(0)-2\beta\gamma(1)+\beta^2\gamma(0), \nonumber$ which is minimized by $\beta=\rho(1)=\phi$. (This follows easily by taking the derivative with respect to $\beta$ and setting it to zero.) Therefore $X_2^1=\phi X_1$. Similarly, one computes $X_0^1=\phi X_1$ and it follows from Example 3.3.2 that $\phi_{22}=0$. Indeed all lags $h\geq 2$ of the PACF are zero.

More generally, consider briefly a causal AR($p$) process given by $\phi(B)X_t=Z_t$ with $\phi(z)=1-\phi_1z-\ldots-\phi_pz^p$. Then, for $h>p$, $X_h^{h-1}=\sum_{j=1}^p\phi_jX_{h-j} \nonumber$ and consequently $\phi_{hh}={\rm Corr}(X_h-X_h^{h-1},X_0-X_0^{h-1}) = {\rm Corr}(Z_h,X_0-X_0^{h-1})=0 \nonumber$ if $h>p$ by causality (the same argument used in Example 3.3.2 applies here as well). Observe, however, that $\phi_{hh}$ is not necessarily zero if $h\leq p$. The foregoing suggests that the sample version of the PACF can be utilized to identify the order of an autoregressive process from data: use as $p$ the largest lag $h$ such that $\phi_{hh}$ is significantly different from zero.

On the other hand, for an invertible MA($q$) process, one can write $Z_t=\pi(B)X_t$ or, equivalently, $X_t=-\sum_{j=1}^\infty\pi_jX_{t-j}+Z_t, \nonumber$ which shows that the PACF of an MA($q$) process will be nonzero for all lags, since for a "perfect" regression one would have to use all past variables $(X_s\colon s<t)$ instead of only the quantity $X_t^{t-1}$ given in Definition 3.3.1.

In summary, the PACF reverses the behavior of the ACVF for autoregressive and moving average processes. While the latter have an ACVF that vanishes after lag $q$ and a PACF that is nonzero (though decaying) for all lags, AR processes have an ACVF that is nonzero (though decaying) for all lags but a PACF that vanishes after lag $p$. ACVF (ACF) and PACF hence provide useful tools in assessing the dependence of given ARMA processes. If the estimated ACVF (the estimated PACF) is essentially zero after some time lag, then the underlying time series can be conveniently modeled with an MA (AR) process, and no general ARMA sequence has to be fitted. These conclusions are summarized in Table 3.1.

Table 3.1: The behavior of ACF and PACF for AR, MA, and ARMA processes.

              AR(p)                    MA(q)                    ARMA(p,q)
ACF           tails off                cuts off after lag q     tails off
PACF          cuts off after lag p     tails off                tails off

Example 3.3.4

Figure 3.5 collects the ACFs and PACFs of three ARMA processes. The upper panel is taken from the AR(2) process with parameters $\phi_1=1.5$ and $\phi_2=-.75$. It can be seen that the ACF tails off and displays cyclical behavior (note that the corresponding autoregressive polynomial has complex roots). The PACF, however, cuts off after lag 2. Thus, inspecting ACF and PACF, we would correctly specify the order of the AR process. The middle panel shows the ACF and PACF of the MA(3) process given by the parameters $\theta_1=1.5$, $\theta_2=-.75$ and $\theta_3=3$. The plots confirm that $q=3$ because the ACF cuts off after lag 3 and the PACF tails off. Finally, the lower panel displays the ACF and PACF of the ARMA(1,1) process of Example 3.2.4. Here, the assessment is much harder. While the ACF tails off as predicted (see Table 3.1), the PACF basically cuts off after lag 4 or 5. This could lead to the wrong conclusion that the underlying process is actually an AR process of order 4 or 5. (The reason for this behavior lies in the fact that the dependence in this particular ARMA(1,1) process can be well approximated by that of an AR(4) or AR(5) time series.)
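Before reproducing the exact theoretical curves of Figure 3.5 (see the commands below), it is instructive to see the cut-off behavior of Table 3.1 on simulated data. The following sketch is our own illustration (seed and sample size are arbitrary) and uses the standard functions arima.sim, acf and pacf.

set.seed(42)
x <- arima.sim(model = list(ar = c(1.5, -.75)), n = 200)  # a path from the causal AR(2) above
acf(x, lag.max = 25)    # tails off in a damped, cyclical fashion
pacf(x, lag.max = 25)   # essentially zero beyond lag 2, up to sampling variability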
To reproduce the graphs in R, you can use the commands

>ar2.acf = ARMAacf(ar=c(1.5,-.75), ma=0, 25)
>ar2.pacf = ARMAacf(ar=c(1.5,-.75), ma=0, 25, pacf=T)

for the AR(2) process. The other two cases follow from straightforward adaptations of this code.

Example 3.3.5: Recruitment Series

The data considered in this example consists of 453 months of observed recruitment (number of new fish) in a certain part of the Pacific Ocean collected over the years 1950--1987. The corresponding time series plot is given in the left panel of Figure 3.6. The ACF and PACF displayed in the middle and right panel of the same figure recommend fitting an AR process of order $p=2$ to the recruitment data. Assuming that the data is stored in rec, the R code to reproduce Figure 3.6 is

> rec = ts(rec, start=1950, frequency=12)
> plot(rec, xlab="", ylab="")
> acf(rec, lag=48)
> pacf(rec, lag=48)

This assertion is also consistent with the scatterplots that relate current recruitment to past recruitment at several time lags, namely $h=1,\ldots,12$. For lags 1 and 2, there seems to be a strong linear relationship, while this is not the case anymore for $h\geq 3$. The corresponding R command is

> lag.plot(rec, lags=12, layout=c(3,4), diag=F)

Denote by $X_t$ the recruitment at time $t$. To estimate the AR(2) parameters, run a regression on the observed data triplets $\{(x_t,x_{t-1},x_{t-2})\colon t=3,\ldots,453\}$ to fit a model of the form $X_t=\phi_0+\phi_1X_{t-1}+\phi_2X_{t-2}+Z_t, \qquad t=3,\ldots,453, \nonumber$ where $(Z_t)\sim\mathrm{WN}(0,\sigma^2)$. This task can be performed in R as follows.

> fit.rec = ar.ols(rec, aic=F, order.max=2, demean=F, intercept=T)

The estimates can be inspected with the command fit.rec and the corresponding standard errors with fit.rec$asy.se.coef. Here the parameter estimates $\hat{\phi}_0=6.737(1.111)$, $\hat{\phi}_1=1.3541(.042)$, $\hat{\phi}_2=-.4632(.0412)$ and $\hat{\sigma}^2=89.72$ are obtained. The standard errors are given in parentheses.
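As a cross-check (not part of the original example), the same AR(2) model can also be fit with the general arima function; this is a sketch that anticipates the likelihood methods of Section 3.5.

fit.ml <- arima(rec, order = c(2, 0, 0))   # Gaussian ML fit of an AR(2) with mean
fit.ml                                      # coefficient estimates, standard errors and sigma^2

Note that arima labels the estimated process mean as "intercept", which is not the same quantity as the regression intercept $\phi_0$ reported by ar.ols (the two are related via $\phi_0=\mu(1-\phi_1-\phi_2)$), so only the autoregressive coefficients are directly comparable.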
Suppose that the variables $X_1,\ldots,X_n$ of a weakly stationary time series $(X_t\colon t\in\mathbb{Z})$ have been observed with the goal to predict or forecast the future values of $X_{n+1},X_{n+2},\ldots$. The focus is here on so-called one-step best linear predictors (BLP). These are, by definition, linear combinations $\hat{X}_{n+1}=\phi_{n0}+\phi_{n1}X_n+\ldots+\phi_{nn}X_1 \label{3.4.1}$ of the observed variables $X_1,\ldots,X_n$ that minimize the mean-squared error $E\left[\{X_{n+1}-g(X_1,\ldots,X_n)\}^2\right] \nonumber$ for functions g of $X_1,\ldots,X_n$. Straightforward generalizations yield definitions for the m-step best linear predictors $\hat{X}_{n+m}$ of $X_{n+m}$ for arbitrary $m\in\mathbb{N}$ in the same fashion. Using Hilbert space theory, one can prove the following theorem which will be the starting point for our considerations. Theorem $1$: Best linear prediction (BLP) Let $(X_t\colon t\in\mathbb{Z})$ be a weakly stationary stochastic process of which $X_1,\ldots,X_n$ are observed. Then, the one-step BLP $\hat{X}_{n+1}$ of $X_{n+1}$ is determined by the equations $E\left[(X_{n+1}-\hat{X}_{n+1})X_{n+1-j}\right]=0 \nonumber$ for all $j=1,\ldots,n+1$, where $X_0=1$. The equations specified in Theorem $1$ can be used to calculate the coefficients $\phi_{n0},\ldots,\phi_{nn}$ in Equation \ref{3.4.1}. It suffices to focus on mean zero processes $(X_t\colon t\in\mathbb{Z})$ and thus to set $\phi_{n0}=0$ as the following calculations show. Assume that $E[X_t]=\mu$ for all $t\in\mathbb{Z}$. Then, Theorem $1$ gives that $E[\hat{X}_{n+1}]=E[X_{n+1}]=\mu$ (using the equation with $j=n+1$. Consequently, it holds that $\mu=E[\hat{X}_{n+1}] =E\left[\phi_{n0}+\sum_{\ell=1}^n\phi_{n\ell}X_{n+1-\ell}\right] =\phi_{n0}+\sum_{\ell=1}^n\phi_{n\ell}\mu. \nonumber$ Using now that $\phi_{n0}=\mu(1-\phi_{n1}-\ldots-\phi_{nn})$, Equation \ref{3.4.1} can be rewritten as $\hat{Y}_{n+1}=\phi_{n1}Y_n+\ldots+\phi_{nn}Y_1, \nonumber$ where $\hat{Y}_{n+1}=\hat{X}_{n+1}-\mu$ has mean zero. With the ACVF $\gamma$ of $(X_t\colon t\in\mathbb{Z})$, the equations in Theorem $1$ can be expressed as $\sum_{\ell=1}^n\phi_{n\ell}\gamma(j-\ell)=\gamma(j),\qquad j=1,\ldots,n. \label{3.4.2}$ Note that due to the convention $\phi_{n0}=0$, the last equation in Theorem $1$ (for which $j=n+1$) is omitted. More conveniently, this is restated in matrix notation. To this end, let $\Gamma_n=(\gamma(j-\ell))_{j,\ell=1,\ldots,n}$, $\phi_n=(\phi_{n1},\ldots,\phi_{nn})^T$ and $\gamma_n=(\gamma(1),\ldots,\gamma(n))^T$, where $^T$ denotes the transpose. With these notations, (3.4.2.) becomes $\Gamma_n\phi_n=\gamma_n \qquad\Longleftrightarrow\qquad \phi_n=\Gamma_n^{-1}\gamma_n, \label{3.4.3}$ provided that $\Gamma_n$ is nonsingular. The determination of the coefficients $\phi_{n\ell}$ has thus been reduced to solving a linear equation system and depends only on second-order properties of $(X_t\colon t\in\mathbb{Z})$ which are given by the ACVF $\gamma$. Let $X_n=(X_n,X_{n-1},\ldots,X_1)^T$. Then, $\hat{X}_{n+1}=\phi_n^TX_n$. 
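The linear system in Equation \ref{3.4.3} can be solved directly with a few lines of R. The sketch below is our own illustration of the mechanics (the parameter values are arbitrary); it uses the MA(1) ACVF $\gamma(0)=(1+\theta^2)\sigma^2$, $\gamma(1)=\theta\sigma^2$ and $\gamma(h)=0$ for $h\geq 2$.

n <- 5; theta <- .5; sigma2 <- 1
gam <- c((1 + theta^2) * sigma2, theta * sigma2, rep(0, n - 1))  # gamma(0), ..., gamma(n)
Gam.n <- toeplitz(gam[1:n])            # Gamma_n = (gamma(j-l))_{j,l=1,...,n}
gam.n <- gam[2:(n + 1)]                # gamma_n = (gamma(1), ..., gamma(n))'
phi.n <- solve(Gam.n, gam.n)           # one-step BLP coefficients phi_n1, ..., phi_nn
P.n1 <- gam[1] - sum(gam.n * phi.n)    # prediction error gamma(0) - gamma_n' Gamma_n^{-1} gamma_n,
                                       # see the derivation that follows

For moderate $n$ this direct inversion is perfectly feasible; the recursive algorithms discussed below become attractive when $n$ is large or when the predictors are needed for every sample size in turn.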
To assess the quality of the prediction, one computes the mean-squared error with the help of Equation \ref{3.4.3} as follows: \begin{align} P_{n+1} &=E\left[(X_{n+1}-\hat{X}_{n+1})^2\right] \nonumber \[5pt] &=E\left[(X_{n+1}-\phi_n^T X_n)^2\right] \nonumber \[5pt] &=E\left[(X_{n+1}-\gamma_n^T\Gamma_n^{-1} X_n)^2\right]\nonumber \[5pt] &=E\left[X_{n+1}^2-2\gamma_n^T\Gamma_n^{-1} X_nX_{n+1} +\gamma_n^T\Gamma_n^{-1} X_n X_n^{T}\Gamma_n^{-1}\gamma_n\right]\nonumber \[5pt] &=\gamma(0)-2\gamma_n^T\Gamma_n^{-1}\gamma_n +\gamma_n^T\Gamma_n^{-1}\Gamma_n\Gamma_n^{-1}\gamma_n\nonumber \[5pt] &=\gamma(0)-\gamma_n^T\Gamma_n^{-1}\gamma_n. \label{3.4.4} \end{align} As an initial example, we explain the prediction procedure for an autoregressive process of order 2. Example $1$: Prediction of an AR(2) Process Let $(X_t\colon t\in\mathbb{Z})$ be the causal AR(2) process $X_t=\phi_1X_{t-1}+\phi_2X_{t-2}+Z_t$. Suppose that only an observation of $X_1$ is available to forecast the value of $X_2$. In this simplified case, the single prediction Equation \ref{3.4.2} is $\phi_{11}\gamma(0)=\gamma(1), \nonumber$ so that $\phi_{11}=\rho(1)$ and $\hat{X}_{1+1}=\rho(1)X_1$. In the next step, assume that observed values of $X_1$ and $X_2$ are at hand to forecast the value of $X_3$. Then, one similarly obtains from (3.4.2.) that the predictor can be computed from \begin{align*} \hat{X}_{2+1} &=\phi_{21}X_{2}+\phi_{22}X_1 =\phi_2^T X_2=(\Gamma_2^{-1}\gamma_2)^T X_2 \[5pt] &=(\gamma(1),\gamma(2))\left(\begin{array}{c@{\quad}c} \gamma(0) & \gamma(1) \ \gamma(1) & \gamma(0) \end{array}\right)^{-1} \left(\begin{array}{c} X_2 \ X_1 \end{array}\right). \end{align*} \nonumber However, applying the arguments leading to the definition of the PAC in Section 3.3.3., one finds that $E\left[\{X_3-(\phi_1X_2+\phi_2X_1)\}X_1\right]=E[Z_3X_1]=0, \nonumber$ $E\left[\{X_3-(\phi_1X_2+\phi_2X_1)\}X_2\right]=E[Z_3X_2]=0. \nonumber$ Hence, $\hat{X}_{2+1}=\phi_1X_2+\phi_2X_1$ and even $\hat{X}_{n+1}=\phi_1X_n+\phi_2X_{n-1}$ for all $n\geq 2$, exploiting the particular autoregressive structure. Since similar results can be proved for general causal AR(p) processes, the one-step predictors have the form $\hat{X}_{n+1}=\phi_1X_n+\ldots+\phi_pX_{n-p+1} \nonumber$ whenever the number of observed variables n is at least p. The major drawback of this approach is immediately apparent from the previous example: For larger sample sizes n, the prediction procedure requires the calculation of the inverse matrix $\Gamma_n^{-1}$ which is computationally expensive. In the remainder of this section, two recursive prediction methods are introduced that bypass the inversion altogether. They are known as Durbin-Levinson algorithm and innovations algorithm. Finally, predictors based on the infinite past are introduced which are often easily applicable for the class of causal and invertible ARMA processes. Method 1: The Durbin-Levinson algorithm If $(X_t\colon t\in\mathbb{Z})$ is a zero mean weakly stationary process with ACVF $\gamma$ such that $\gamma(0)>0$ and $\gamma(h)\to 0$ as $h\to\infty$, then the coefficients $\phi_{n\ell}$ in (3.4.2.) and the mean squared errors $P_n$ in (3.4.4.) 
satisfy the recursions $\phi_{11}=\frac{\gamma(1)}{\gamma(0)},\qquad P_0=\gamma(0), \nonumber$ and, for $n\geq 1$, $\phi_{nn}=\frac{1}{P_{n-1}} \left(\gamma(n)-\sum_{\ell=1}^{n-1}\phi_{n-1,\ell}\gamma(n-\ell)\right), \nonumber$ $\left(\begin{array}{l}\phi_{n1} \ {~}\vdots \ \phi_{n,n-1}\end{array}\right) =\left(\begin{array}{l} \phi_{n-1,1} \ {~}\vdots \ \phi_{n-1,n-1}\end{array}\right) -\phi_{nn}\left(\begin{array}{l} \phi_{n-1,n-1} \ {~}\vdots \ \phi_{n-1,1}\end{array}\right) \nonumber$ and $P_{n}=P_{n-1}(1-\phi_{nn}^2). \nonumber$ It can be shown that under the assumptions made on the process $(X_t\colon t\in\mathbb{Z})$, it holds indeed that $\phi_{nn}$ is equal to the value of the PACF of $(X_t\colon t\in\mathbb{Z})$ at lag n. The result is formulated as Corollary 5.2.1 in Brockwell and Davis (1991). This fact is highlighted in an example. The PACF of an AR(2) process Let $(X_t\colon t\in\mathbb{Z})$ be a causal AR(2) process. Then, $\rho(1)=\phi_1/(1-\phi_2)$ and all other values can be computed recursively from $\rho(h)-\phi_1\rho(h-1)-\phi_2\rho(h-2)=0,\qquad h\geq 2. \nonumber$ Note that the ACVF $\gamma$ satisfies a difference equation with the same coefficients, which is seen by multiplying the latter equation with $\gamma(0)$. Applying the Durbin-Levinson algorithm gives first that $\phi_{11}=\frac{\gamma(1)}{\gamma(0)}=\rho(1) \qquad\mbox{and}\qquad P_1=P_0(1-\phi_{11}^2)=\gamma(0)(1-\rho(1)^2). \nonumber$ Ignoring the recursion for the error terms $P_n$ in the following, the next $\phi_{n\ell}$ values are obtained a $\phi_{22} =\frac{1}{P_1}\left[\gamma(2)-\phi_{11}\gamma(1)\right] =\frac{1}{1-\rho(1)^2}\left[\rho(2)-\rho(1)^2\right] \nonumber$ $=\frac{\phi_1^2(1-\phi_2)^{-1}+\phi_2-[\phi_1(1-\phi_2)^{-1}]^2} {1-[\phi_1(1-\phi_2)^{-1}]^2}=\phi_2, \nonumber$ $\phi_{21} =\phi_{11}-\phi_{22}\phi_{11}=\rho(1)(1-\phi_2)=\phi_1, \nonumber$ $\phi_{33} =\frac{1}{P_2}\left[\gamma(3)-\phi_{21}\gamma(2)-\phi_{22}\gamma(1)\right] =\frac{1}{P_2}\left[\gamma(3)-\phi_1\gamma(2)-\phi_2\gamma(2)\right]=0. \nonumber$ Now, referring to the remarks after Example 3.3.7., no further computations are necessary to determine the PACF because $\phi_{nn}=0$ for all $n>p=2$. Method 2: The innovations algorithm In contrast to the Durbin-Levinson algorithm, this method can also be applied to nonstationary processes. It should thus, in general, be preferred over Method 1. The innovations algorithm gets its name from the fact that one directly uses the form of the prediction equations in Theorem 3.4.1. which are stated in terms of the innovations $(X_{t+1}-\hat{X}_{t+1})_{t\in\mathbb{Z}}$. Observe that the sequence consists of uncorrelated random variables. The one-step predictors $\hat{X}_{n+1}$ can be calculated from the recursions $\hat{X}_{0+1}=0,\qquad P_1=\gamma(0) \nonumber$ and, for $n\geq 1$, $\hat{X}_{n+1} =\sum_{\ell=1}^n\theta_{n\ell}(X_{n+1-\ell}-\hat{X}_{n+1-\ell}) \nonumber$ $P_{n+1} =\gamma(0)-\sum_{\ell=0}^{n-1}\theta_{n,n-\ell}^2P_{\ell+1}, \nonumber$ where the coefficients are obtained from the equations $\theta_{n,n-\ell}=\frac{1}{P_{\ell+1}} \left[\gamma(n-\ell)-\sum_{i=0}^{\ell-1}\theta_{\ell,\ell-i}\theta_{n,n-i}P_{i+1}\right], \qquad\ell=0,1,\ldots,n-1. \nonumber$ As example we show how the innovations algorithm is applied to a moving average time series of order 1. Example $3$: Prediction of an MA(1) Process Let $(X_t\colon t\in\mathbb{Z})$ be the MA(1) process $X_t=Z_t+\theta Z_{t-1}$. 
Note that $\gamma(0)=(1+\theta^2)\sigma^2,\qquad\gamma(1)=\theta\sigma^2 \qquad\mbox{and}\qquad\gamma(h)=0\quad(h\geq 2). \nonumber$ Using the innovations algorithm, one can compute the one-step predictor from the values \begin{align*} \theta_{n1}=\frac{\theta\sigma^2}{P_n},\qquad \theta_{n\ell}=0 \quad(\ell=2,\ldots,n-1), \end{align*} and \begin{align*} P_1 &=(1+\theta^2)\sigma^2,\[5pt] P_{n+1}&=(1+\theta^2-\theta\theta_{n1})\sigma^2 \end{align*} \nonumber as $\hat{X}_{n+1}=\frac{\theta\sigma^2}{P_n}(X_n-\hat{X}_{n}). \nonumber$ Method 3: Prediction based on the infinite past Suppose that a causal and invertible ARMA(p,q) process is analyzed. Assume further that (unrealistically) the complete history of the process can be stored and that thus all past variables $(X_t\colon t\leq n)$ can be accessed. Define then $\tilde{X}_{n+m}=E[X_{n+m}|X_n,X_{n-1},\ldots], \nonumber$ as the m-step ahead predictor based on the infinite past. It can be shown that, for large sample sizes n, the difference between the values of $\hat{X}_{n+m}$ and $\tilde{X}_{n+m}$ vanishes at an exponential rate. Exploiting causality and invertibility of the ARMA process, one can transform the predictor $\tilde{X}_{n+m}$ so that it is in a computationally more feasible form. To do so, note that by causality \begin{align} \tilde{X}_{n+m} &=E[X_{n+m}|X_n,X_{n-1},\ldots]\nonumber \[5pt] &=E\left[\sum_{j=0}^\infty\psi_jZ_{n+m-j}\Big|X_n,X_{n-1},\ldots\right]\nonumber \[5pt] &=\sum_{j=m}^\infty\psi_jZ_{n+m-j} \label{3.4.5} \end{align} because $E[Z_t|X_n,X_{n-1},\ldots]$ equals zero if t>n and equals Z_t if $t\leq n$ (due to invertibility!). The representation in (3.4.5.) can be used to compute the mean squared prediction error $\tilde{P}_{n+m}$. It follows from causality that $\tilde{P}_{n+m}=E[(X_{n+m}-\tilde{X}_{n+m})^2] =E\left[\left(\sum_{j=0}^{m-1}\psi_jZ_{n+m-j}\right)^2\right] =\sigma^2\sum_{j=0}^{m-1}\psi_j^2. \label{3.4.6}$ On the other hand, Equation \ref{3.4.5} does not allow to directly calculate the forecasts because $\tilde{X}_{n+m}$ is given in terms of the noise variables $Z_{n+m-j}$. Instead invertibility will be utilized. Observe first that $E[X_{n+m-j}|X_n,X_{n-1},\ldots]=\left\{\begin{array}{c@{\quad}l} \tilde{X}_{n+m-j}, & j<m.\[.2cm] X_{n+m-j}, & j\geq m. \end{array}\right. \nonumber$ By invertibility (the 0='' part follows again from causality), \begin{align}0=E[Z_{n+m}|X_n,X_{n-1},\ldots] & \[5pt] &=E\left[\sum_{j=0}^\infty\pi_jX_{n+m-j}\Big|X_n,X_{n-1},\ldots\right] \[5pt] & =\sum_{j=0}^\infty\pi_jE[X_{n+m-j}|X_n,X_{n-1},\ldots].\end{align} \nonumber Combining the previous two statements, yields $\tilde{X}_{n+m}=-\sum_{j=1}^{m-1}\pi_j\tilde{X}_{n+m-j} -\sum_{j=m}^\infty\pi_jX_{n+m-j}. \label{3.4.7}$ The equations can now be solved recursively for $m=1,2,\ldots$ Note, however, that for any $m\geq 1$ the sequence $(X_{n+m+t}-\tilde{X}_{n+m+t}\colon t\in\mathbb{Z})$ does not consist of uncorrelated random variables. In fact, if $h\in\mathbb{N}_0$, it holds that \begin{align} E[(X_{n+m}-\tilde{X}_{n+m})(X_{n+m+h}-\tilde{X}_{n+m+h})] &\[5pt] &=E\left[\sum_{j=0}^{m-1}\psi_jZ_{n+m-j}\sum_{i=0}^{m+h-1}\psi_iZ_{n+m+h-i}\right] \[5pt] & =\sigma^2\sum_{j=0}^{m-1}\psi_j\psi_{j+h}. \end{align} \nonumber Finally, for practical purposes the given forecast needs to be truncated. This is accomplished by setting $\sum_{j=n+m}^\infty\pi_jX_{n+m-j}=0. 
\nonumber$ The resulting equations (see Equation \ref{3.4.7} for comparison) yield recursively the truncated m-step predictors $X_{n+m}^*$: $X_{n+m}^*=-\sum_{j=1}^{m-1}\pi_jX_{n+m-j}^*-\sum_{j=m}^{n+m-1}\pi_jX_{n+m-j}. \label{3.4.8}$
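Before turning to parameter estimation, here is a short implementation sketch of the Durbin-Levinson recursions from Method 1. The helper function is our own and not code from the text; it takes ACVF values $\gamma(0),\ldots,\gamma(n)$ (supplying the ACF instead leaves the coefficients unchanged and merely expresses the $P_h$ in units of $\gamma(0)$) and returns the one-step prediction coefficients, the prediction errors and, as a by-product, the PACF values $\phi_{hh}$.

durbin.levinson <- function(gam) {
  # gam: vector of ACVF values gamma(0), ..., gamma(n); requires length(gam) >= 3
  n <- length(gam) - 1
  phi <- matrix(0, n, n)                       # phi[h, l] stores phi_{hl}
  P <- numeric(n + 1); P[1] <- gam[1]          # P[h+1] stores P_h, with P_0 = gamma(0)
  phi[1, 1] <- gam[2] / gam[1]
  P[2] <- P[1] * (1 - phi[1, 1]^2)
  for (h in 2:n) {
    phi[h, h] <- (gam[h + 1] - sum(phi[h - 1, 1:(h - 1)] * gam[h:2])) / P[h]
    phi[h, 1:(h - 1)] <- phi[h - 1, 1:(h - 1)] - phi[h, h] * phi[h - 1, (h - 1):1]
    P[h + 1] <- P[h] * (1 - phi[h, h]^2)
  }
  list(phi = phi, P = P[-1], pacf = diag(phi))
}
rho <- ARMAacf(ar = c(1.5, -.75), lag.max = 10)          # ACF of the AR(2) from Example 3.3.4
round(durbin.levinson(rho)$pacf, 4)                      # phi_hh values, zero beyond lag 2
round(ARMAacf(ar = c(1.5, -.75), lag.max = 10, pacf = TRUE), 4)   # built-in PACF for comparison

The last two lines print identical values, illustrating the remark that $\phi_{hh}$ coincides with the PACF at lag $h$.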
Let $(X_t\colon t\in\mathbb{Z})$ be a causal and invertible ARMA(p,q) process with known orders p and q, possibly with mean $\mu$. This section is concerned with estimation procedures for the unknown parameter vector $\beta=(\mu,\phi_1,\ldots,\phi_p,\theta_1,\ldots,\theta_q,\sigma^2)^T. \tag{3.5.1}$ To simplify the estimation procedure, it is assumed that the data has already been adjusted by subtraction of the mean and the discussion is therefore restricted to zero mean ARMA models. In the following, three estimation methods are introduced. The method of moments works best in case of pure AR processes, while it does not lead to optimal estimation procedures for general ARMA processes. For the latter, more efficient estimators are provided by the maximum likelihood and least squares methods which will be discussed subsequently. Method 1 (Method of Moments) Since this method is only efficient in their case, the presentation here is restricted to AR(p) processes $X_t=\phi_1X_{t-1}+\ldots+\phi_pX_{t-p}+Z_t, t\in\mathbb{Z}, \nonumber$ where $(Z_t\colon t\in\mathbb{Z})\sim\mbox{WN}(0,\sigma^2)$. The parameter vector $\beta$ consequently reduces to $(\phi,\sigma^2)^T$ with $\phi=(\phi_1,\ldots,\phi_p)^T$ and can be estimated using the Yule-Walker equations $\Gamma_p\phi=\gamma_p \qquad\mbox{and}\ \sigma^2=\gamma(0)-\phi^T\gamma_p, \nonumber$ where $\Gamma_p=(\gamma(k-j))_{k,j=1,\ldots,p}$ and $\gamma_p=(\gamma(1),\ldots,\gamma(p))^T$. Observe that the equations are obtained by the same arguments applied to derive the Durbin-Levinson algorithm in the previous section. The method of moments suggests to replace every quantity in the Yule-Walker equations with their estimated counterparts, which yields the Yule-Walker estimators $\widehat{\phi}=\hat{\Gamma}_p^{-1} \hat{\gamma}_p=\hat{R}_p^{-1}\hat{\rho}_p \tag{3.5.2}$ $\hat{\sigma}^2 =\hat{\gamma}(0)-\hat{\gamma}^T_p\hat{\Gamma}_p^{-1}\hat{\gamma}_p =\hat{\gamma}(0)\left[1-\hat{\rho}_p^T\hat{R}_p^{-1}\hat{\rho}_p\right ]. \tag{3.5.3}$ Therein, $\hat{R}_p=\hat{\gamma}(0)^{-1}\hat{\Gamma}_p$ and $\hat{\rho}_p=\hat{\gamma}(0)^{-1}\hat{\gamma}_p$ with $\hat{\gamma}(h)$ defined as in (1.2.1). Using $\hat{\gamma}(h)$ as estimator for the ACVF at lag $h$, a dependence on the sample size $n$ is obtained in an implicit way. This dependence is suppressed in the notation used here. The following theorem contains the limit behavior of the Yule-Walker estimators as n tends to infinity. Theorem 3.5.1. If $(X_t\colon t\in\mathbb{Z})$ is a causal AR(p) process, then $\sqrt{n}(\widehat{\phi}-\phi)\stackrel{\cal D}{\longrightarrow} N(\mbox{0},\sigma^2\Gamma_p^{-1})\qquad\mbox{and}\qquad \hat{\sigma}^2\stackrel{P} {\longrightarrow}\sigma^2 \nonumber$ as $n\to\infty$, where $\to^P$ indicates convergence in probability. A proof of this result is given in Section 8.10 of Brockwell and Davis (1991). Since equations (3.5.2) and (3.5.3) have the same structure as the corresponding equations (3.4.3) and (3.4.4), the Durbin-Levinson algorithm can be used to solve recursively for the estimators $\widehat{\phi}_h=(\widehat{\phi}_{h1},\ldots,\widehat{\phi}_{hh})$. Moreover, since $\phi_{hh}$ is equal to the value of the PACF of $(X_t\colon t\in\mathbb{Z})$ at lag h, the estimator $\widehat{\phi}_{hh}$ can be used as its proxy. Since it is already known that, in the case of AR(p) processes, $\phi_{hh}=0$ if h>p, Theorem (3.5.1) implies immediately the following corollary. 
Corollary 3.5.1 If $(X_t\colon t\in\mathbb{Z})$ is a causal AR(p) process, then $\sqrt{n}\widehat{\phi}_{hh}\stackrel{\cal D}{\longrightarrow}Z \qquad(n\to\infty) \nonumber$ for all h>p, where Z stands for a standard normal random variable. Example 3.5.1. (Yule-Walker estimates for AR(2) processes). Suppose that $n=144$ values of the autoregressive process $X_t=1.5X_{t-1}-.75X_{t-2}+Z_t$ have been observed, where $(Z_t\colon t\in\mathbb{Z})$ is a sequence of independent standard normal variates. Assume further that $\hat{\gamma}(0)=8.434$, $\hat{\rho}(1)=0.834$ and $\hat{\rho}(2)=0.476$ have been calculated from the data. The Yule-Walker estimators for the parameters are then given by $\widehat{\phi}=\left(\begin{array}{c} \widehat{\phi}_1 \[.1cm] \widehat{\phi}_2 \end{array}\right) =\left(\begin{array}{rr} 1.000 & 0.834 \[.1cm] 0.834 & 1.000 \end{array}\right)^{-1} \left(\begin{array}{c} 0.834 \[.1cm] 0.476 \end{array}\right)= \left(\begin{array}{r} 1.439 \[.1cm] -0.725\end{array}\right) \nonumber$ and $\hat{\sigma}^2=8.434\left[1-(0.834,0.476) \left(\begin{array}{r} 1.439 \[.1cm] -0.725 \end{array}\right)\right]=1.215. \nonumber$ To construct asymptotic confidence intervals using Theorem 3.5.1, the unknown limiting covariance matrix $\sigma^2\Gamma_p^{-1}$ needs to be estimated. This can be done using the estimator $\frac{\hat{\sigma}^2\hat{\Gamma}_p^{-1}}{n}= \frac{1}{144}\frac{1.215}{8.434} \left(\begin{array}{rr} 1.000 & 0.834 \[.1cm] 0.834 & 1.000 \end{array}\right)^{-1}= \left(\begin{array}{rr} 0.057^2 & -0.003 \[.1cm] -0.003 & 0.057^2 \end{array}\right). \nonumber$ Then, the $1-\alpha$ level confidence interval for the parameters $\phi_1$ and $\phi_2$ are computed as $1.439\pm 0.057z_{1-\alpha/2} \qquad\mbox{and}\qquad -0.725\pm 0.057z_{1-\alpha/2}, \nonumber$ respectively, where $z_{1-\alpha/2}$ is the corresponding normal quantile. Example 3.5.2 (Recruitment Series). Let us reconsider the recruitment series of Example 3.3.5. There, an AR(2) model was first established as appropriate for the data and the model parameters were then estimated using an ordinary least squares approach. Here, the coefficients will instead be estimated with the Yule-Walker procedure. The R command is > rec.yw = ar.yw(rec, order=2)} The mean estimate can be obtained from rec.yw$x.mean as $\hat{\mu}=62.26$, while the autoregressive parameter estimates and their standard errors are accessed with the commands rec.yw$ar and sqrt(rec.yw$asy.var.coef as $\hat{\phi}_1=1.3316(.0422)$ and $\hat{\phi}_2=-.4445(.0422)$. Finally, the variance estimate is obtained from rec.yw$var.pred as $\hat{\sigma}^2=94.7991$. All values are close to their counterparts in Example 3.3.5. Example 3.5.3. Consider the invertible MA(1) process $X_t=Z_t+\theta Z_{t-1}$, where $|\theta|<1$. Using invertibility, each $X_t$ has an infinite autoregressive representation $X_t=\sum_{j=1}^\infty(-\theta)^jX_{t-j}+Z_t \nonumber$ that is nonlinear in the unknown parameter $\theta$ to be estimated. The method of moments is here based on solving $\hat{\rho}(1)=\frac{\hat{\gamma}(1)}{\hat{\gamma}(0)} =\frac{\hat{\theta}}{1+\hat{\theta}^2}. \nonumber$ for $\hat{\theta}$. The foregoing quadratic equation has the two solutions $\hat{\theta} =\frac{1\pm\sqrt{1-4\hat{\rho}(1)^2}}{2\hat{\rho}(1)}, \nonumber$ of which we pick the invertible one. Note moreover, that $|\hat{\rho}(1)|$ is not necessarily less or equal to 1/2 which is required for the existence of real solutions. 
(The theoretical value $|\rho(1)|$, however, is always less than 1/2 for any MA(1) process, as an easy computation shows). Hence, $\theta$ can not always be estimated from given data samples. Method 2 (Maximum Likelihood Estimation) The innovations algorithm of the previous section applied to a causal ARMA(p,q) process $(X_t\colon t\in\mathbb{Z})$ gives $\hat{X}_{i+1}=\sum_{j=1}^i\theta_{ij}(X_{i+1-j}-\hat{X}_{i+1-j}), \phantom{\sum_{j=1}^p\phi_jX_{i+1-j}+} 1\leq i< \max\{p,q\}, \nonumber$ $\hat{X}_{i+1}= \sum_{j=1}^p\phi_jX_{i+1-j}+\sum_{j=1}^q\theta_{ij}(X_{i+1-j}-\hat{X}_{i+1-j}), \phantom{1\leq} i\geq \max\{p,q\}, \nonumber$ with prediction error $P_{i+1}=\sigma^2R_{i+1}. \nonumber$ In the last expression, $\sigma^2$ has been factored out due to reasons that will become apparent from the form of the likelihood function to be discussed below. Recall that the sequence $(X_{i+1}-\hat{X}_{i+1}\colon i\in\mathbb{Z})$ consists of uncorrelated random variables if the parameters are known. Assuming normality for the errors, we moreover obtain even independence. This can be exploited to define the Gaussian maximum likelihood estimation(MLE) procedure. Throughout, it is assumed that $(X_t\colon t\in\mathbb{Z})$ has zero mean ($\mu=0$). The parameters of interest are collected in the vectors $\beta=(\phi,\theta,\sigma^2)^T$ and $\beta'=(\phi,\theta)^T$, where $\phi=(\phi_1,\ldots,\phi_p)^T$ and $\theta=(\theta_1,\ldots,\theta_q)^T$. Assume finally that we have observed the variables $X_1,\ldots,X_n$. Then, the Gaussian likelihood function for the innovations is $L(\beta)=\frac{1}{(2\pi\sigma^2)^{n/2}}\left(\prod_{i=1}^nR_i^{1/2}\right) \exp\left(-\frac{1}{2\sigma^2}\sum_{j=1}^n\frac{(X_j-\hat{X}_j)^2}{R_j}\right). \tag{3.5.4}$ Taking the partial derivative of $\ln L(\beta)$ with respect to the variable $\sigma^2$ reveals that the MLE for $\sigma^2$ can be calculated from $\hat{\sigma}^2=\frac{S(\hat{\phi},\hat{\theta})}{n},\qquad S(\hat{\phi},\hat{\theta})=\sum_{j=1}^n\frac{(X_j-\hat{X}_j)^2}{R_j}. \nonumber$ Therein, $\hat{\phi}$ and $\hat{\theta}$ denote the MLEs of $\phi$ and $\theta$ obtained from minimizing the profile likelihood or reduced likelihood $\ell(\phi,\theta)=\ln\left(\frac{S(\phi,\theta)}{n}\right) +\frac 1n\sum_{j=1}^n\ln(R_j). \nonumber$ Observe that the profile likelihood $\ell(\phi,\theta)$ can be computed using the innovations algorithm. The speed of these computations depends heavily on the quality of initial estimates. These are often provided by the non-optimal Yule-Walker procedure. For numerical methods, such as the Newton-Raphson and scoring algorithms, see Section 3.6 in Shumway and Stoffer (2006). The limit distribution of the MLE procedure is given as the following theorem. Its proof can be found in Section 8.8 of Brockwell and Davis (1991). Theorem 3.5.2. Let $(X_t\colon t\in\mathbb{Z})$ be a causal and invertible ARMA(p,q) process defined with an iid sequence $(Z_t\colon t\in\mathbb{Z}) satisfying E[Z_t]=0$ and $E[Z_t^2]=\sigma^2$. Consider the MLE $\hat{\beta}'$ of $\beta'$ that is initialized with the moment estimators of Method 1. Then, $\sqrt{n}(\hat{\beta}'-\beta')\stackrel{\cal D}{\longrightarrow} N(\mbox{0},\sigma^2\Gamma_{p,q}^{-1}) \qquad(n\to\infty). \nonumber$ The result is optimal. The covariance matrix $\Gamma_{p,q}$ is in block form and can be evaluated in terms of covariances of various autoregressive processes. Example 3.5.4 (Recruitment Series). 
The MLE estimation procedure for the recruitment series can be applied in R as follows:

>rec.mle = ar.mle(rec, order=2)

The mean estimate can be obtained from rec.mle$x.mean as $\hat{\mu}=62.26$, while the autoregressive parameter estimates and their standard errors are accessed with the commands rec.mle$ar and sqrt(rec.mle$asy.var.coef) as $\hat{\phi}_1=1.3513(.0410)$ and $\hat{\phi}_2=-.4099(.0410)$. Finally, the variance estimate is obtained from rec.mle$var.pred as $\hat{\sigma}^2=89.3360$. All values are very close to their counterparts in Example 3.3.5.

Method 3 (Least Squares Estimation)

An alternative to the method of moments and the MLE is provided by the least squares estimation (LSE). For causal and invertible ARMA($p,q$) processes, it is based on minimizing the weighted sum of squares $S(\phi,\theta)=\sum_{j=1}^n\frac{(X_j-\hat{X}_j)^2}{R_j} \tag{3.5.5}$ with respect to $\phi$ and $\theta$, respectively. Assuming that $\tilde{\phi}$ and $\tilde{\theta}$ denote these LSEs, the LSE for $\sigma^2$ is computed as $\tilde{\sigma}^2=\frac{S(\tilde{\phi},\tilde{\theta})}{n-p-q}. \nonumber$ The least squares procedure has the same asymptotics as the MLE.

Theorem 3.5.3.

The result of Theorem 3.5.2 holds also if $\hat{\beta}'$ is replaced with $\tilde{\beta}'$.

Example 3.5.5 (Recruitment Series)

The least squares estimation has already been discussed in Example 3.3.5, including the R commands.
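As a closing illustration for this section, the Yule-Walker computation of Example 3.5.1 can be reproduced with a few lines of base R; this is a sketch of ours, and the numbers differ from the reported ones only through rounding of the quoted sample quantities.

gamma0 <- 8.434; rho <- c(.834, .476)                    # sample values from Example 3.5.1
R <- toeplitz(c(1, rho[1]))                              # hat R_2
phi.hat <- solve(R, rho)                                 # Yule-Walker estimates, cf. (3.5.2)
sigma2.hat <- gamma0 * (1 - sum(rho * phi.hat))          # cf. (3.5.3)
se <- sqrt(diag(sigma2.hat * solve(gamma0 * R) / 144))   # asymptotic standard errors, Theorem 3.5.1
round(c(phi.hat, sigma2 = sigma2.hat, se), 3)

The output is close to $\widehat{\phi}\approx(1.44,-0.72)^T$, $\hat{\sigma}^2\approx 1.2$ and standard errors of about $0.057$, in line with the worked example.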
In this section, a rough guide for going about the data analysis will be provided. It consists of several parts, most of which have been discussed previously. The main focus is on the selection of $p$ and $q$ in the likely case that these parameters are unknown.

Step 1. Plot the data and check whether or not the variability remains reasonably stable throughout the observation period. If that is not the case, use preliminary transformations to stabilize the variance. One popular class is given by the Box-Cox transformations (Box and Cox, 1964) $f_\lambda(U_t)=\left\{\begin{array}{l@{\qquad}l} \lambda^{-1}(U_t^\lambda-1), & U_t\geq 0,\;\lambda>0, \\[.2cm] \ln U_t, & U_t>0,\;\lambda=0. \end{array}\right. \nonumber$ In practice $f_0$ or $f_{1/2}$ are often adequate choices. (Recall, for instance, the Australian wine sales data of Example 1.4.1.)

Step 2. Remove, if present, trend and seasonal components from the data. Chapter 1 introduced a number of tools to do so, based on the classical decomposition of a time series $Y_t=m_t+s_t+X_t \nonumber$ into a trend, a seasonality and a residual component. Note that differencing works also without the specific representation in the last display. If the data appears stationary, move on to the next step. Else apply, for example, another set of difference operations.

Step 3. Suppose now that Steps 1 and 2 have provided us with observations that are well described by a stationary sequence $(X_t\colon t\in\mathbb{Z})$. The goal is then to find the most appropriate ARMA($p,q$) model to describe the process. In the unlikely case that $p$ and $q$ can be assumed known, utilize the estimation procedures of Section 3.5 directly. Otherwise, choose them according to one of the following criteria.

(a) The standard criterion that is typically implemented in software packages is a modification of Akaike's information criterion, see Akaike (1969), which was given by Hurvich and Tsai (1989). In this paper it is suggested that the ARMA model parameters be chosen to minimize the objective function ${\rm AIC}_C(\phi,\theta,p,q) =-2\ln L(\phi,\theta,S(\phi,\theta)/n) +\frac{2(p+q+1)n}{n-p-q-2}. \tag{3.6.1}$ Here, $L(\phi,\theta,\sigma^2)$ denotes the Gaussian likelihood defined in (3.5.4) and $S(\phi,\theta)$ is the weighted sum of squares in (3.5.5). It can be seen from the definition that the ${\rm AIC}_C$ does not attempt to minimize the log-likelihood function directly. The introduction of the penalty term on the right-hand side of (3.6.1) reduces the risk of overfitting.

(b) For pure autoregressive processes, Akaike (1969) introduced a criterion that is based on a minimization of the final prediction error. Here, the order $p$ is chosen as the minimizer of the objective function ${\rm FPE}=\hat{\sigma}^2\frac{n+p}{n-p}, \nonumber$ where $\hat{\sigma}^2$ denotes the MLE of the unknown noise variance $\sigma^2$. For more on this topic and other procedures that help fit a model, we refer here to Section 9.3 of Brockwell and Davis (1991).

Step 4. The last step in the analysis is concerned with diagnostic checking by applying the goodness of fit tests of Section 1.5.

3.7: Summary

The class of autoregressive moving average processes has been introduced to model stationary stochastic processes. Theoretical properties such as causality and invertibility have been examined, which depend on the zeroes of the autoregressive and moving average polynomials, respectively.
It has been shown how the causal representation of an ARMA process can be utilized to compute its covariance function which contains all information about the dependence structure. Assuming known parameter values, several forecasting procedures have been discussed. The Durbin- Levinson algorithm works well for pure AR processes, while the innovations algorithm is particularly useful for pure MA processes. Predictions using an infinite past work well for causal and invertible ARMA processes. For practical purposes, however, a truncated version is more relevant. Since the exact parameter values are in general unknown, several estimation procedures were introduced. The Yule-Walker procedure is only optimal in the AR case but provides useful initial estimates that can be used for the numerical derivation of maximum likelihood or least squares estimates. Finally, a framework has been provided that may potentially be useful when facing the problem of analyzing a data set in practice.
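As a computational companion to Step 3 of the model-selection guide in Section 3.6, candidate orders can be compared by refitting the model over a small grid and ranking the resulting information criteria. The sketch below is our own illustration for the recruitment series; it ranks models by the plain AIC reported by arima rather than the bias-corrected ${\rm AIC}_C$ of (3.6.1), which would require a small additional adjustment.

orders <- expand.grid(p = 0:3, q = 0:3)     # candidate ARMA(p,q) orders
aic.val <- apply(orders, 1, function(o)
  tryCatch(AIC(arima(rec, order = c(o[1], 0, o[2]), method = "ML")),
           error = function(e) NA))          # NA if a particular fit fails to converge
cbind(orders, AIC = aic.val)[order(aic.val), ][1:5, ]   # five best-ranked models

Low-order AR specifications should appear near the top of the ranking for this series, consistent with the AR(2) choice made in Examples 3.3.5 and 3.5.2.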
In this chapter, a general method is discussed to deal with the periodic components of a time series. 4: Spectral Analysis Many of the time series discussed in the previous chapters displayed strong periodic components: The sunspot numbers of Example 1.1.1, the number of trapped lynx of Example 1.1.2 and the Australian wine sales data of Example 1.4.1. Often, there is an obvious choice for the period $d$ of this cyclical part such as an annual pattern in the wine sales. Given $d$, one could then proceed by removing the seasonal effects as in Section 1.4. In the first two examples it is, however, somewhat harder to determine the precise value of $d$. In this chapter, a general method is therefore discussed to deal with the periodic components of a time series. To complicate matters, it is usually the case that several cyclical patterns are simultaneously present in a time series. As an example recall the southern oscillation index (SOI) data which exhibits both an annual pattern and a so-called El Ni$\tilde{n}$o pattern. The sine and cosine functions are the prototypes of periodic functions. They are going to be utilized here to describe cyclical behavior in time series. Before doing so, a cycle is defined to be one complete period of a sine or cosine function over a time interval of length $2\pi$. Define also the frequency $\omega=\dfrac 1d \nonumber$ as the number of cycles per observation, where $d$ denotes the period of a time series (that is, the number of observations in a cycle). For monthly observations with an annual period, $d=12$ and hence $\omega=1/12=0.083$ cycles per observation. Now reconsider the process $X_t=R\sin(2\pi\omega t+\varphi) \nonumber$ as introduced in Example 1.2.2, using the convention $\lambda=2\pi\omega$. To include randomness in this process, choose the amplitude $R$ and the phase $\varphi$ to be random variables. An equivalent representation of this process is given by $X_t=A\cos(2\pi\omega t)+B\sin(2\pi\omega t), \nonumber$ with $A=R\sin(\varphi)$ and $B=R\cos(\varphi)$ usually being independent standard normal variates. Then, $R^2=A^2+B^2$ is a $\chi$-squared random variable with 2 degrees of freedom and $\varphi=\tan^{-1}(B/A)$ is uniformly distributed on $(-\pi,\pi]$. Moreover, $R$ and $\varphi$ are independent. Choosing now the value of $\omega$ one particular periodicity can be described. To accommodate more than one, it seems natural to consider mixtures of these periodic series with multiple frequencies and amplitudes: $X_t=\sum_{j=1}^m \big[A_j\cos(2\pi\omega_jt)+B_j\sin(2\pi\omega_jt)\big], \qquad t\in\mathbb{Z}, \nonumber$ where $A_1,\ldots,A_m$ and $B_1,\ldots,B_m$ are independent random variables with zero mean and variances $\sigma_1^2,\ldots,\sigma_m^2$, and $\omega_1,\ldots,\omega_m$ are distinct frequencies. It can be shown that $(X_t\colon t\in\mathbb{Z})$ is a weakly stationary process with lag-h ACVF $\gamma(h)=\sum_{j=1}^m\sigma_j^2\cos(2\pi\omega_j h),\qquad h\in\mathbb{Z}. \nonumber$ The latter result yields in particular that $\gamma(0)=\sigma_1^2+\ldots+\sigma_m^2$. The variance of $X_t$ is consequently the sum of the component variances. Example 4.1.1. Let $m=2$ and choose $A_1=B_1=1$, $A_2=B_2=4$ to be constant as well as $\omega_1=1/12$ and $\omega_2=1/6$. This means that $X_t=X_t^{(1)}+X_t^{(2)} =\big[\cos(2\pi t/12)+\sin(2\pi t/12)\big]+\big[4\cos(2\pi t/6)+4\sin(2\pi t/6)\big] \nonumber$ is the sum of two periodic components of which one exhibits an annual cycle and the other a cycle of six months. 
For all processes involved, realizations of $n=48$ observations (4 years of data) are displayed in Figure 4.1. Also shown is a fourth time series plot which contains the $X_t$ distorted by standard normal independent noise, $\tilde X_t$. The corresponding R code is:

>t=1:48
>x1=cos(2*pi*t/12)+sin(2*pi*t/12)
>x2=4*cos(2*pi*t/6)+4*sin(2*pi*t/6)
>x=x1+x2
>tildex=x+rnorm(48)

Note that the squared amplitude of $X_t^{(1)}$ is $1^2+1^2=2$. The maximum and minimum values of $X_t^{(1)}$ are therefore $\pm\sqrt{2}$. Similarly, we obtain $\pm\sqrt{32}$ for the second component.

For a statistician it is now important to develop tools to recover the periodicities from the data. The branch of statistics concerned with this problem is called spectral analysis. The standard method in this area is based on the periodogram which is introduced now. Suppose for the moment that the frequency parameter $\omega_1=1/12$ in Example 4.1.1 is known. To obtain estimates of $A_1$ and $B_1$, one could try to run a regression using the explanatory variables $Y_{t,1}=\cos(2\pi t/12)$ and $Y_{t,2}=\sin(2\pi t/12)$ to compute the least squares estimators \begin{align*} \hat A_1&=\dfrac{\sum_{t=1}^nX_tY_{t,1}}{\sum_{t=1}^nY_{t,1}^2}=\dfrac 2n\sum_{t=1}^nX_t\cos(2\pi t/12), \\[.2cm] \hat B_1&=\dfrac{\sum_{t=1}^nX_tY_{t,2}}{\sum_{t=1}^nY_{t,2}^2}=\dfrac 2n\sum_{t=1}^nX_t\sin(2\pi t/12). \end{align*} Since, in general, the frequencies involved will not be known to the statistician prior to the data analysis, the foregoing suggests to pick a number of potential $\omega$'s, say $j/n$ for $j=1,\ldots,n/2$, and to run a long regression of the form $X_t=\sum_{j=0}^{n/2}\big[A_j\cos(2\pi jt/n)+B_j\sin(2\pi jt/n)\big]. \tag{4.1.1}$ This leads to least squares estimates $\hat A_j$ and $\hat B_j$ of which the significant ones should be selected. Note that the regression in (4.1.1) is a perfect one because there are as many unknowns as variables! Note also that $P(j/n)=\hat A_j^2+\hat B_j^2 \nonumber$ is essentially (up to a normalization) an estimator for the correlation between the time series $X_t$ and the corresponding sum of the periodic cosine and sine functions at frequency $j/n$. The collection of all $P(j/n)$, $j=1,\ldots,n/2$, is called the scaled periodogram. It can be computed quickly via an algorithm known as the fast Fourier transform (FFT) which in turn is based on the discrete Fourier transform (DFT) $d(j/n)=\dfrac{1}{\sqrt{n}}\sum_{t=1}^nX_t\exp(-2\pi ijt/n). \nonumber$ The frequencies $j/n$ are called the Fourier or fundamental frequencies. Since $\exp(-ix)=\cos(x)-i\sin(x)$ and $|z|^2=z\bar{z}=(a+ib)(a-ib)=a^2+b^2$ for any complex number $z=a+ib$, it follows that $I(j/n)=|d(j/n)|^2=\dfrac 1n\left(\sum_{t=1}^nX_t\cos(2\pi jt/n)\right)^2+\dfrac 1n\left(\sum_{t=1}^nX_t\sin(2\pi jt/n)\right)^2. \nonumber$ The quantity $I(j/n)$ is referred to as the periodogram. It follows immediately that the periodogram and the scaled periodogram are related via the identity $4I(j/n)=nP(j/n)$.

Example 4.1.2

Using the expressions and notations of Example 4.1.1, the periodogram and the scaled periodogram are computed in R as follows:

>t=1:48
>I=abs(fft(x)/sqrt(48))^2
>P=4*I/48
>f=0:24/48
>plot(f, P[1:25], type="l")
>abline(v=1/12)
>abline(v=1/6)

The corresponding (scaled) periodogram for $(\tilde X_t)$ can be obtained in a similar fashion. The scaled periodograms are shown in the left and middle panel of Figure 4.2.
The right panel displays the scaled periodogram of another version of $(\tilde X_t)$ in which the standard normal noise has been replaced with normal noise with variance 9. From these plots it can be seen that the six months periodicity is clearly visible in the graphs (see the dashed vertical lines at x=1/6. The less pronounced annual cycle (vertical line at x=1/12 is still visible in the first two scaled periodograms but is lost if the noise variance is increased as in the right plot. Note, however, that the y-scale is different for all three plots. In the ideal situation that we observe the periodic component without additional contamination by noise, we can furthermore see why the periodogram may be useful in uncovering the variance decomposition from above. We have shown in the lines preceding Example 4.1.1 that the squared amplitudes of $X_{t}^{(1)}$ and $X_t^{(2)}$ are 2 and 32, respectively. These values are readily read from the scaled periodogram in the left panel of Figure 4.2. The contamination with noise alters these values. In the next section, it is established that the time domain approach (based on properties of the ACVF, that is, regression on past values of the time series) and the frequency domain approach (using a periodic function approach via fundamental frequencies, that is, regression on sine and cosine functions) are equivalent. Some details are given on the spectral density (the population counterpart of the periodogram) and on properties of the periodogram itself.
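The built-in routine spec.pgram offers a quick alternative to the hand-rolled fft computation of Example 4.1.2. The call below is only a sketch; spec.pgram applies its own scaling and, by default, tapering and detrending conventions, so its output agrees with the periodogram defined above only up to such normalizations.

spec.pgram(x, taper = 0, detrend = FALSE, log = "no")  # raw periodogram of the series x
abline(v = 1/12); abline(v = 1/6)                      # the two frequencies built into x

The peaks again appear at the annual and six-month frequencies, mirroring the left panel of Figure 4.2.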
The fundamental technical result which is at the core of spectral analysis states that any (weakly) stationary time series can be viewed (approximately) as a random superposition of sine and cosine functions varying at various frequencies. In other words, the regression in (4.1.1) is approximately true for all weakly stationary time series. In Chapters 1-3, it is shown how the characteristics of a stationary stochastic process can be described in terms of its ACVF $\gamma(h)$. The first goal in this section is to introduce the quantity corresponding to $\gamma(h)$ in the frequency domain. Definition 4.2.1 (Spectral Density) If the ACVF $\gamma(h)$ of a stationary time series (Xt)t\in\mathbb{Z} \nonumber \] satisfies the condition $\sum_{h=-\infty}^\infty|\gamma(h)|<\infty, \nonumber$ then there exists a function f defined on (-1/2,1/2] such that $\gamma(h)=\int_{-1/2}^{1/2}\exp(2\pi i\omega h)f(\omega)d\omega,\qquad h\in\mathbb{Z}, \nonumber$ and $f(\omega)=\sum_{h=-\infty}^\infty\gamma(h)\exp(-2\pi i\omega h),\qquad\omega\in(-1/2,1/2]. \nonumber$ The function f is called the spectral density of the process $X_t\colon t\in\mathbb{Z})$. Definition 4.2.1 (which contains a theorem part as well) establishes that each weakly stationary process can be equivalently described in terms of its ACVF or its spectral density. It also provides the formulas to compute one from the other. Time series analysis can consequently be performed either in the time domain (using $\gamma(h)$) or in the frequency domain (using f$(\omega))$. Which approach is the more suitable one cannot be decided in a general fashion but has to be reevaluated for every application of interest. In the following, several basic properties of the spectral density are collected and evaluated f for several important examples. That the spectral density is analogous to a probability density function is established in the next proposition. Proposition 4.2.1 If f($\omega$) is the spectral density of a weakly stationary process $(X_t\colon t\in\mathbb{Z})$, then the following statements hold: 1. f($\omega$) $\geq$ 0 for all $\omega$. This follows from the positive definiteness of $\gamma(h)$ 2. f($\omega$)=f(-$\omega$) and f($\omega+1$)=f($\omega$) 3. The variance of ($X_t\colon t\in\mathbb{Z})$ is given by $\gamma(0)=\int_{-1/2}^{1/2}f(\omega)d\omega. \nonumber$ Part (c) of the proposition states that the variance of a weakly stationary process is equal to the integrated spectral density over all frequencies. This property is revisited below, when a spectral analysis of variance (spectral ANOVA) will be discussed. In the following three examples are presented. Example 4.2.1 (White Noise) If $(Z_t\colon t\in\mathbb{Z})\sim\mbox{WN}(0,\sigma^2)$, then its ACVF is nonzero only for h=0, in which case $\gamma_Z(h)=\sigma^2$. Plugging this result into the defining equation in Definition4.2.1 yields that $f_Z(\omega)=\gamma_Z(0)\exp(-2\pi i\omega 0)=\sigma^2. \nonumber$ The spectral density of a white noise sequence is therefore constant for all $\omega\in(-1/2,1/2]$, which means that every frequency $\omega$ contributes equally to the overall spectrum. This explains the term white'' noise (in analogy to white'' light). Example 4.2.2 (Moving Average) Let $(Z_t\colon t\in\mathbb{Z})\sim\mbox{WN}(0,\sigma^2)$ and define the time series $(X_t\colon t\in\mathbb{Z})$ by $X_t=\tfrac 12\left(Z_t+Z_{t-1}\right),\qquad t\in\mathbb{Z}. 
\nonumber$ It can be shown that $\gamma_X(h)=\frac{\sigma^2}4\left(2-|h|\right),\qquad h=0,\pm 1 \nonumber$ and that $\gamma$_X=0 otherwise. Therefore, $f_X(\omega)=\sum_{h=-1}^1\gamma_X(h)\exp(2\pi i \omega h) \nonumber$ $= \frac{\sigma^2}4 (\exp(-2\pi i \omega (-1)))+2\exp(-2\pi i \omega 0)+\exp(-2\pi i \omega 1) \nonumber$ $= \frac{\sigma^2}2 (1+\cos(2\pi\omega)) \nonumber$ using that $\exp(ix)=\cos(x)+i\sin(x)$, $\cos(x)=\cos(-x)$ and $\sin(x)=-\sin(-x)$. It can be seen from the two time series plots in Figure 4.3 that the application of the two-point moving average to the white noise sequence smoothes the sample path. This is due to an attenuation of the higher frequencies which is visible in the form of the spectral density in the right panel of Figure 4.3. All plots have been obtained using Gaussian white noise with $\sigma^2=1$. Example 4.2.3 (AR(2) Process). Let $(X_t\colon t\in\mathbb{Z})$ be an AR(2) process which can be written in the form $Z_t=X_t-\phi_1X_{t-1}-\phi_2X_{t-2},\qquad t\in\mathbb{Z} \nonumber$ In this representation, it can be seen that the ACVF $\gamma_Z$ of the white noise sequence can be obtained as $\gamma_Z(h) = E [(X_t-\phi_1X_{t-1}-\phi_2X_{t-2}) (X_{t+h}-\phi_1X_{t+h-1}-\phi_2X_{t+h-2})] \nonumber$ $=(1+\phi_1^2+\phi_2^2)\gamma_X(h)+(\phi_1\phi_2-\phi_1)[\gamma_X(h+1)+\gamma_X(h-1)] \nonumber$ $\qquad - \phi_2[\gamma_X(h+2)+\gamma_X(h-2)] \nonumber$ Now it is known from Definition 4.2.1 that $\gamma_X(h)=\int_{-1/2}^{1/2}\exp(2\pi i\omega h)f_X(\omega)d\omega \nonumber$ and $\gamma_Z(h)=\int_{-1/2}^{1/2}\exp(2\pi i\omega h)f_Z(\omega)d\omega, \nonumber$ where $f_X(\omega)$ and $f_Z(\omega)$ denote the respective spectral densities. Consequently, $\gamma_Z(h)=\int_{-1/2}^{1/2}\exp(2\pi i\omega h)f_Z(\omega)d\omega \[.2cm] \nonumber$ $=(1+\phi_1^2+\phi_2^2)\gamma_X(h)+(\phi_1\phi_2-\phi_1)[\gamma_X(h+1)+\gamma_X(h-1)]-\phi_2[\gamma_X(h+2)+\gamma_X(h-2)]\[.2cm] \nonumber$ $=\int_{-1/2}^{1/2}\left[(1+\phi_1^2+\phi_2^2)+(\phi_1\phi_2-\phi_1)(\exp(2\pi i\omega)+\exp(-2\pi i\omega))\right. \[.2cm] \nonumber$ $\qquad\qquad\left. -\phi_2(\exp(4\pi i \omega)+\exp(-4\pi i \omega)) \right]\exp(2\pi i\omega h)f_X(\omega)d\omega \[.2cm] \nonumber$ $=\int_{-1/2}^{1/2}\left[(1+\phi_1^2+\phi_2^2)+2(\phi_1\phi_2-\phi_1)\cos(2\pi\omega)-2\phi_2\cos(4\pi\omega)\right]\exp(2\pi i\omega h)f_X(\omega)d\omega. \nonumber$ The foregoing implies together with $f_Z(\omega)=\sigma^2$ that $\sigma^2=\left[(1+\phi_1^2+\phi_2^2)+2(\phi_1\phi_2-\phi_1)\cos(2\pi\omega)-2\phi_2\cos(4\pi\omega)\right]f_X(\omega). \nonumber$ Hence, the spectral density of an AR(2) process has the form $f_X(\omega)=\sigma^2\left[(1+\phi_1^2+\phi_2^2)+2(\phi_1\phi_2-\phi_1)\cos(2\pi\omega)-2\phi_2\cos(4\pi\omega)\right]^{-1}. \nonumber$ Figure 4.4 displays the time series plot of an AR(2) process with parameters $\phi_1=1.35$, $\phi_2=-.41$ and $\sigma^2=89.34$. These values are very similar to the ones obtained for the recruitment series in Section 3.5. The same figure also shows the corresponding spectral density using the formula just derived. With the contents of this Section, it has so far been established that the spectral density $f(\omega)$ is a population quantity describing the impact of the various periodic components. Next, it is verified that the periodogram $I(\omega_j)$ introduced in Section \ref{sec:4.1} is the sample counterpart of the spectral density. Proposition 4.2.2. Let $\omega_j=j/n$ denote the Fourier frequencies. 
If $I(\omega_j)=|d(\omega_j)|^2$ is the periodogram based on observations $X_1,\ldots,X_n$ of a weakly stationary process $(X_t\colon t\in\mathbb{Z})$, then $I(\omega_j)=\sum_{h=-n+1}^{n-1}\hat{\gamma}_n(h)\exp(-2\pi i \omega_j h),\qquad j\not=0. \nonumber$ If $j=0$, then $I(\omega_0)=I(0)=n\bar X_n^2$.

Proof. Let first $j\not= 0$. Using that $\sum_{t=1}^n\exp(-2\pi i\omega_jt)=0$, it follows that $I(\omega_j) = \frac 1n\sum_{t=1}^n\sum_{s=1}^n(X_t-\bar X_n)(X_s-\bar X_n)\exp(-2\pi i\omega_j(t-s)) \nonumber$ $=\frac 1n \sum_{h=-n+1}^{n-1}\sum_{t=1}^{n-|h|}(X_{t+|h|}-\bar X_n)(X_t-\bar X_n)\exp(-2\pi i\omega_jh) \nonumber$ $=\sum_{h=-n+1}^{n-1}\hat\gamma_n(h)\exp(-2\pi i\omega_jh), \nonumber$ which proves the first claim of the proposition. If $j=0$, the relations $\cos(0)=1$ and $\sin(0)=0$ imply that $I(0)=n\bar X_n^2$. This completes the proof.

More can be said about the periodogram. In fact, one can interpret spectral analysis as a spectral analysis of variance (ANOVA). To see this, let first $d_c(\omega_j) = \mathrm{Re}(d(\omega_j))=\frac{1}{\sqrt{n}}\sum_{t=1}^nX_t\cos(2\pi\omega_jt), \nonumber$ $d_s(\omega_j) = \mathrm{Im}(d(\omega_j))=\frac{1}{\sqrt{n}}\sum_{t=1}^nX_t\sin(2\pi\omega_jt). \nonumber$ Then, $I(\omega_j)=d_c^2(\omega_j)+d_s^2(\omega_j)$. Let us now go back to the introductory example and study the process $X_t=A_0+\sum_{j=1}^m\big[A_j\cos(2\pi\omega_j t)+B_j\sin(2\pi\omega_jt)\big], \nonumber$ where $m=(n-1)/2$ and $n$ odd. Suppose $X_1,\ldots,X_n$ have been observed. Then, using regression techniques as before, it can be seen that $A_0=\bar{X}_n$ and $A_j = \frac 2n\sum_{t=1}^nX_t\cos(2\pi\omega_jt)=\frac{2}{\sqrt{n}}d_c(\omega_j), \nonumber$ $B_j = \frac 2n\sum_{t=1}^nX_t\sin(2\pi\omega_jt)=\frac{2}{\sqrt{n}}d_s(\omega_j). \nonumber$ Therefore, $\sum_{t=1}^n(X_t-\bar{X}_n)^2=2\sum_{j=1}^m\big[d_c^2(\omega_j)+d_s^2(\omega_j)\big]=2\sum_{j=1}^mI(\omega_j), \nonumber$ and the following ANOVA table is obtained, in which each Fourier frequency $\omega_j$ contributes a component with 2 degrees of freedom and sum of squares $2I(\omega_j)$:

Source        df      SS
$\omega_1$    2       $2I(\omega_1)$
$\omega_2$    2       $2I(\omega_2)$
...           ...     ...
$\omega_m$    2       $2I(\omega_m)$
Total         n-1     $\sum_{t=1}^n(X_t-\bar{X}_n)^2$

If the underlying stochastic process exhibits a strong periodic pattern at a certain frequency, then the periodogram will most likely pick it up.

Example 4.2.4 Consider the $n=5$ data points $X_1=2$, $X_2=4$, $X_3=6$, $X_4=4$ and $X_5=2$, which display a cyclical but nonsinusoidal pattern. This suggests that $\omega=1/5$ is significant and $\omega=2/5$ is not. In R, the spectral ANOVA can be produced as follows.

> x = c(2,4,6,4,2)
> t = 1:5
> cos1 = cos(2*pi*t*1/5)
> sin1 = sin(2*pi*t*1/5)
> cos2 = cos(2*pi*t*2/5)
> sin2 = sin(2*pi*t*2/5)

This generates the data and the independent cosine and sine variables. Now run a regression and check the ANOVA output.

> reg = lm(x ~ cos1 + sin1 + cos2 + sin2)
> anova(reg)

This leads to the following output.

Response: x
          Df Sum Sq Mean Sq F value Pr(>F)
cos1       1 7.1777  7.1777
cos2       1 0.0223  0.0223
sin1       1 3.7889  3.7889
sin2       1 0.2111  0.2111
Residuals  0 0.0000

According to previous reasoning (check the ANOVA table above!), the periodogram at frequency $\omega_1=1/5$ is given as half the sum of the cos1 and sin1 sums of squares, that is, $I(1/5)=d_c^2(1/5)+d_s^2(1/5)=(7.1777+3.7889)/2=5.4833$. Similarly, $I(2/5)=d_c^2(2/5)+d_s^2(2/5)=(0.0223+0.2111)/2=0.1167$. Note, however, that the mean squared error is computed differently in R. We can compare these values with the periodogram:

> abs(fft(x))^2/5
[1] 64.8000000  5.4832816  0.1167184  0.1167184  5.4832816

The first value here is $I(0)=n\bar{X}_n^2=5*(18/5)^2=64.8$.
The second and third values are $I(1/5)$ and $I(2/5)$, respectively, while $I(3/5)=I(2/5)$ and $I(4/5)=I(1/5)$ complete the list. In the next section, some large sample properties of the periodogram are discussed to get a better understanding of spectral analysis.
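The closed-form spectral densities derived in Examples 4.2.2 and 4.2.3 are easy to evaluate numerically. The following R sketch (added here for illustration; the object names and the frequency grid are arbitrary choices) plots the two curves that appear in the right panels of Figures 4.3 and 4.4, using $\sigma^2=1$ for the moving average and the AR(2) parameters $\phi_1=1.35$, $\phi_2=-.41$, $\sigma^2=89.34$ quoted above.

# Evaluate the spectral densities from Examples 4.2.2 and 4.2.3 on a grid of frequencies
omega = seq(0, .5, length.out = 500)
f.ma  = (1/2)*(1 + cos(2*pi*omega))                      # two-point MA with sigma^2 = 1
f.ar2 = 89.34/((1 + 1.35^2 + (-.41)^2) +
               2*(1.35*(-.41) - 1.35)*cos(2*pi*omega) -
               2*(-.41)*cos(4*pi*omega))                 # AR(2) formula derived above
par(mfrow = c(1, 2))
plot(omega, f.ma,  type = "l", xlab = "frequency", ylab = "spectrum", main = "Two-point MA")
plot(omega, f.ar2, type = "l", xlab = "frequency", ylab = "spectrum", main = "AR(2)")

The moving average density decays to zero at $\omega=1/2$, reflecting the attenuation of high frequencies, while the AR(2) density concentrates its mass at low frequencies.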
Let $(X_t\colon t\in\mathbb{Z})$ be a weakly stationary time series with mean $\mu$, absolutely summable ACVF $\gamma(h)$ and spectral density $f(\omega)$. Proceeding as in the proof of Proposition 4.2.2, one obtains $I(\omega_j)=\frac 1n\sum_{h={-n+1}}^{n-1}\sum_{t=1}^{n-|h|}(X_{t+|h|}-\mu)(X_t-\mu)\exp(-2\pi i\omega_jh), \label{Eq1}$ provided $\omega_j\not=0$. Using this representation, the limiting behavior of the periodogram can be established.

Proposition 4.3.1 Let $I(\cdot)$ be the periodogram based on observations $X_1,\ldots,X_n$ of a weakly stationary process $(X_t\colon t\in\mathbb{Z})$. Then, for any $\omega\not=0$, $E[I(\omega_{j:n})]\to f(\omega)\qquad(n\to\infty), \nonumber$ where $\omega_{j:n}=j_n/n$ with $(j_n)_{n\in\mathbb{N}}$ chosen such that $\omega_{j:n}\to\omega$ as $n\to\infty$. If $\omega=0$, then $E[I(0)]-n\mu^2\to f(0)\qquad(n\to\infty). \nonumber$

Proof. There are two limits involved in the computation of the periodogram mean. First, take the limit as $n\to\infty$. This, however, requires secondly that for each $n$ we have to work with a different set of Fourier frequencies. To adjust for this, we have introduced the notation $\omega_{j:n}$. If $\omega_j\not=0$ is a Fourier frequency ($n$ fixed!), then $E[I(\omega_j)]=\sum_{h=-n+1}^{n-1}\left(\frac{n-|h|}{n}\right)\gamma(h)\exp(-2\pi i\omega_jh). \nonumber$ Therefore ($n\to\infty$!), $E[I(\omega_{j:n})]\to\sum_{h=-\infty}^\infty\gamma(h)\exp(-2\pi i\omega h)=f(\omega), \nonumber$ thus proving the first claim. The second follows from $I(0)=n\bar{X}_n^2$ (see Proposition 4.2.2), so that $E[I(0)]-n\mu^2=n(E[\bar{X}_n^2]-\mu^2)=n\mbox{Var}(\bar{X}_n)\to f(0)$ as $n\to\infty$, as in Chapter 2. The proof is complete.

Proposition 4.3.1 shows that the periodogram $I(\omega)$ is asymptotically unbiased for $f(\omega)$. It is, however, inconsistent. This is implied by the following proposition, which is given without proof. It is not surprising considering that each value $I(\omega_j)$ is the sum of squares of only two random variables irrespective of the sample size.

Proposition 4.3.2 If $(X_t\colon t\in\mathbb{Z})$ is a (causal or noncausal) weakly stationary time series such that $X_t=\sum_{j=-\infty}^\infty\psi_jZ_{t-j},\qquad t\in\mathbb{Z}, \nonumber$ with $\sum_{j=-\infty}^\infty|\psi_j|<\infty$ and $(Z_t)_{t\in\mathbb{Z}}\sim\mbox{WN}(0,\sigma^2)$, then $\left(\frac{2I(\omega_{1:n})}{f(\omega_1)},\ldots,\frac{2I(\omega_{m:n})}{f(\omega_m)}\right) \stackrel{\cal D}{\to}(\xi_1,\ldots,\xi_m), \nonumber$ where $\omega_1,\ldots,\omega_m$ are $m$ distinct frequencies with $\omega_{j:n}\to\omega_j$ and $f(\omega_j)>0$. The variables $\xi_1,\ldots,\xi_m$ are independent and identically chi-squared distributed with two degrees of freedom.

The result of this proposition can be used to construct confidence intervals for the value of the spectral density at frequency $\omega$. To this end, denote by $\chi_2^2(\alpha)$ the $\alpha$ quantile of the chi-squared variable $\xi_j$, that is, $P(\xi_j\leq\chi_2^2(\alpha))=\alpha. \nonumber$ Then, Proposition 4.3.2 implies that an approximate confidence interval with level $1-\alpha$ is given by $\frac{2I(\omega_{j:n})}{\chi_2^2(1-\alpha/2)}\leq f(\omega)\leq \frac{2I(\omega_{j:n})}{\chi_2^2(\alpha/2)}. \nonumber$ Proposition 4.3.2 also suggests that confidence intervals can be derived simultaneously for several frequency components.
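Propositions 4.3.1 and 4.3.2 can be illustrated by a small simulation. The sketch below (added here; not part of the original example set) computes raw periodogram ordinates of Gaussian white noise, whose spectral density is the constant $f(\omega)=1$: the ordinates average roughly 1 for every sample size, consistent with asymptotic unbiasedness, but their spread does not shrink as $n$ grows, which is the inconsistency asserted above.

# Raw periodogram of simulated WN(0,1) at the Fourier frequencies j/n
set.seed(1)
for (n in c(128, 1024, 8192)) {
  z   = rnorm(n)                     # white noise, so f(omega) = 1
  per = Mod(fft(z))^2/n              # I(omega_j), cf. the fft-based check in Example 4.2.4
  ord = per[2:floor(n/2)]            # drop j = 0 and the redundant upper half
  cat("n =", n, " mean =", round(mean(ord), 2), " sd =", round(sd(ord), 2), "\n")
}

Since $2I(\omega_j)/f(\omega_j)$ is approximately $\chi_2^2$, each ordinate keeps a standard deviation of about $f(\omega_j)$ no matter how large $n$ is; this is what motivates the smoothing discussed next.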
Before confidence intervals are computed for the dominant frequency of the recruitment data, return for a moment to the computation of the FFT, which is the basis for computing the periodogram. To ensure a quick computation time, highly composite integers $n^\prime$ have to be used. To achieve this in general, the length of the time series is adjusted by padding the original (detrended) data with zeroes. In R, spectral analysis is performed with the function spec.pgram. To find out which $n^\prime$ is used for your particular data, type nextn(length(x)), assuming that your series is in x.

Example 4.3.1 Figure 4.5 displays the periodogram of the recruitment data which has been discussed in Example 3.3.5. It shows a strong annual frequency component at $\omega=1/12$ as well as several spikes in the neighborhood of the El Niño frequency $\omega=1/48$. Higher frequency components with $\omega>.3$ are virtually absent. Even though an AR(2) model was fitted to this data in Chapter 3 to produce future values based on this fit, it is seen that the periodogram here does not validate this fit, as the spectral density of an AR(2) process (as computed in Example 4.2.3) is qualitatively different. In R, the following commands can be used (nextn(length(rec)) gives $n^\prime=480$ here if the recruitment data is stored in rec as before).

> rec.pgram = spec.pgram(rec, taper=0, log="no")
> abline(v=1/12, lty=2)
> abline(v=1/48, lty=2)

The function spec.pgram allows you to fine-tune the spectral analysis. For our purposes, we always use the specifications given above for the raw periodogram (taper controls the proportion of data that is tapered at each end, with taper=0 giving the raw periodogram; log allows you to plot the log-periodogram, which is the R default). To compute the confidence intervals for the two dominating frequencies $1/12$ and $1/48$, you can use the following R code, noting that $1/12=40/480$ and $1/48=10/480$.

> rec.pgram$spec[40]
[1] 21332.94
> rec.pgram$spec[10]
[1] 14368.42
> u = qchisq(.025, 2); l = qchisq(.975, 2)
> 2*rec.pgram$spec[40]/l
> 2*rec.pgram$spec[40]/u
> 2*rec.pgram$spec[10]/l
> 2*rec.pgram$spec[10]/u

Using the numerical values of this analysis, the following confidence intervals are obtained at the level $\alpha=.05$: $f(1/12)\in(5783.041,842606.2)\qquad\mbox{and}\qquad f(1/48)\in(3895.065, 567522.5). \nonumber$ These are much too wide and alternatives to the raw periodogram are needed. These are provided, for example, by a smoothing approach which uses an averaging procedure over a band of neighboring frequencies. This can be done as follows.

> k = kernel("daniell", 4)
> rec.ave = spec.pgram(rec, k, taper=0, log="no")
> abline(v=1/12, lty=2)
> abline(v=1/48, lty=2)
> rec.ave$bandwidth
[1] 0.005412659

The resulting smoothed periodogram is shown in Figure 4.6. It is less noisy, as is expected from taking averages. More precisely, a two-sided Daniell filter with $m=4$ was used here with $L=2m+1$ neighboring frequencies $\omega_k=\omega_j+\frac kn,\qquad k=-m,\ldots,m, \nonumber$ to compute the periodogram at $\omega_j=j/n$. The resulting plot in Figure 4.6 shows, on the other hand, that the sharp annual peak has been flattened considerably. The bandwidth reported in R can be computed as $b=L/(\sqrt{12}n)$. To compute confidence intervals, one has to adjust the previously derived formula.
This is done by changing the degrees of freedom from 2 to $df=2Ln/n^\prime$ (if zeroes were appended to the data) and leads to $\frac{df}{\chi^2_{df}(1-\alpha/2)}\,\hat{f}(\omega_j)\leq f(\omega)\leq \frac{df}{\chi^2_{df}(\alpha/2)}\,\hat{f}(\omega_j) \nonumber$ for $\omega\approx\omega_j$, where $\hat{f}(\omega_j)=\frac 1L\sum_{k=-m}^mI\Big(\omega_j+\frac kn\Big) \nonumber$ denotes the smoothed periodogram. For the recruitment data the following R code can be used:

> df = ceiling(rec.ave$df)
> u = qchisq(.025, df); l = qchisq(.975, df)
> df*rec.ave$spec[40]/l
> df*rec.ave$spec[40]/u
> df*rec.ave$spec[10]/l
> df*rec.ave$spec[10]/u

to get the confidence intervals $f(1/12)\in(1482.427, 5916.823)\qquad\mbox{and}\qquad f(1/48)\in(4452.583, 17771.64). \nonumber$ The compromise between the noisy raw periodogram and further smoothing as described here (with $L=9$) reverses the magnitude of the $1/12$ annual frequency and the $1/48$ El Niño component. This is due to the fact that the annual peak is a very sharp one, with neighboring frequencies being basically zero, while for the $1/48$ component there is a whole band of neighboring frequencies which also contribute (the El Niño phenomenon is irregular and appears only on average every four years). Moreover, the annual cycle is now distributed over a whole range. One way around this issue is provided by the use of other kernels such as the modified Daniell kernel given in R as kernel("modified.daniell", c(3,3)). This leads to the spectral density estimate in Figure 4.7.

4.4: Linear Filtering

A linear filter uses specified coefficients $(\psi_s\colon s\in\mathbb{Z})$, called the impulse response function, to transform a weakly stationary input series $(X_t\colon t\in\mathbb{Z})$ into an output series $(Y_t\colon t\in\mathbb{Z})$ via $Y_t=\sum_{s=-\infty}^\infty\psi_sX_{t-s}, \qquad t\in\mathbb{Z}, \nonumber$ where $\sum_{s=-\infty}^\infty|\psi_s|<\infty$. Then, the frequency response function $\Psi(\omega)=\sum_{s=-\infty}^\infty\psi_s\exp(-2\pi i\omega s) \nonumber$ is well defined. Note that the two-point moving average of Example 4.2.2 and the differenced sequence $\nabla X_t$ are examples of linear filters. On the other hand, any causal ARMA process can be identified as a linear filter applied to a white noise sequence. Implicitly this concept was already used to compute the spectral densities in Examples 4.2.2 and 4.2.3. To investigate this in further detail, let $\gamma_X(h)$ and $\gamma_Y(h)$ denote the ACVF of the input process $(X_t\colon t\in\mathbb{Z})$ and the output process $(Y_t\colon t\in\mathbb{Z})$, respectively, and denote by $f_X(\omega)$ and $f_Y(\omega)$ the corresponding spectral densities. The following is the main result in this section.

Theorem 4.4.1 Under the assumptions made in this section, it holds that $f_Y(\omega)=|\Psi(\omega)|^2f_X(\omega)$.

Proof. First note that $\gamma_Y(h)=E\big[(Y_{t+h}-\mu_Y)(Y_t-\mu_Y)\big] \nonumber$ $=\sum_{r=-\infty}^\infty\sum_{s=-\infty}^\infty\psi_r\psi_s\gamma_X(h-r+s) \nonumber$ $=\sum_{r=-\infty}^\infty\sum_{s=-\infty}^\infty\psi_r\psi_s\int_{-1/2}^{1/2}\exp(2\pi i\omega(h-r+s))f_X(\omega)d\omega \nonumber$ $=\int_{-1/2}^{1/2}\Big(\sum_{r=-\infty}^\infty\psi_r\exp(-2\pi i\omega r)\Big)\Big(\sum_{s=-\infty}^\infty\psi_s\exp(2\pi i\omega s)\Big)\exp(2\pi i\omega h)f_X(\omega)d\omega \nonumber$ $=\int_{-1/2}^{1/2}\exp(2\pi i\omega h)|\Psi(\omega)|^2f_X(\omega)d\omega. \nonumber$ Now identify $f_Y(\omega)=|\Psi(\omega)|^2f_X(\omega)$, which is the assertion of the theorem.
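Theorem 4.4.1 can be verified numerically for the two-point moving average of Example 4.2.2, whose impulse response function is $\psi_0=\psi_1=1/2$. The short sketch below (an illustration added here, with an arbitrary frequency grid) compares $|\Psi(\omega)|^2f_Z(\omega)$ for $\sigma^2=1$ with the density $\tfrac12(1+\cos(2\pi\omega))$ obtained earlier; the two columns agree.

# Frequency response of the filter psi_0 = psi_1 = 1/2 applied to WN(0,1)
omega = seq(0, .5, by = .1)
Psi   = (1 + exp(-2i*pi*omega))/2               # Psi(omega) = sum_s psi_s exp(-2 pi i omega s)
cbind(via.filter = Mod(Psi)^2*1,                # |Psi(omega)|^2 * f_Z(omega)
      via.ex422  = (1 + cos(2*pi*omega))/2)     # formula from Example 4.2.2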
Theorem 4.4.1 suggests a way to compute the spectral density of a causal ARMA process. To this end, let $(Y_t\colon t\in\mathbb{Z})$ be such a causal ARMA(p,q) process satisfying $Y_t=\psi(B)Z_t$, where $(Z_t\colon t\in\mathbb{Z})\sim\mbox{WN}(0,\sigma^2)$ and $\psi(z)=\dfrac{\theta(z)}{\phi(z)}=\sum_{s=0}^\infty\psi_sz^s,\qquad |z|\leq 1, \nonumber$ with $\theta(z)$ and $\phi(z)$ being the moving average and autoregressive polynomial, respectively. Note that the $(\psi_s\colon s\in\mathbb{N}_0)$ can be viewed as a special impulse response function.

Corollary 4.4.1 Let $(Y_t\colon t\in\mathbb{Z})$ be a causal ARMA(p,q) process. Then, its spectral density is given by $f_Y(\omega)=\sigma^2\dfrac{|\theta(e^{-2\pi i\omega})|^2}{|\phi(e^{-2\pi i\omega})|^2}. \nonumber$

Proof. Apply Theorem 4.4.1 with input sequence $(Z_t\colon t\in\mathbb{Z})$. Then $f_Z(\omega)=\sigma^2$, and moreover the frequency response function is $\Psi(\omega)=\sum_{s=0}^\infty\psi_s\exp(-2\pi i\omega s)=\psi(e^{-2\pi i\omega})=\dfrac{\theta(e^{-2\pi i\omega})}{\phi(e^{-2\pi i\omega})}. \nonumber$ Since $f_Y(\omega)=|\Psi(\omega)|^2f_Z(\omega)$, the proof is complete.

Corollary 4.4.1 gives an easy approach to define parametric spectral density estimates for causal ARMA(p,q) processes by simply replacing the population quantities by appropriate sample counterparts. This gives the spectral density estimator $\hat f(\omega)=\hat\sigma_n^2\dfrac{|\hat\theta(e^{-2\pi i\omega})|^2}{|\hat\phi(e^{-2\pi i\omega})|^2}. \nonumber$ Now any of the estimation techniques discussed in Section 3.5 may be applied when computing $\hat f(\omega)$.

4.5: Summary

In this chapter, the basic methods for frequency domain time series analysis were introduced. These are based on a regression of the given data on cosine and sine functions varying at the Fourier frequencies. On the population side, spectral densities were identified as the frequency domain counterparts of absolutely summable autocovariance functions. These are obtained from one another by the application of (inverse) Fourier transforms. On the sample side, the periodogram has been shown to be an estimator for the unknown spectral density. Since it is an inconsistent estimator, various techniques have been discussed to overcome this fact. Finally, linear filters were introduced which can, for example, be used to compute spectral densities of causal ARMA processes and to derive parametric spectral density estimators other than the periodogram.
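As a closing illustration of Corollary 4.4.1 and the parametric estimator $\hat f(\omega)$, the following sketch fits an AR(2) model and plugs the estimates into the corollary's formula with $\theta(z)=1$. The data are simulated with the Example 4.2.3 parameters only so that the code is self-contained; any observed series, such as the recruitment data, could be used instead, and the Yule-Walker step could be replaced by any of the estimation methods from Section 3.5.

# Parametric spectral density estimate for an AR(2) fit (Corollary 4.4.1 with theta(z) = 1)
set.seed(42)
x     = arima.sim(model = list(ar = c(1.35, -.41)), n = 500, sd = sqrt(89.34))
fit   = ar(x, order.max = 2, aic = FALSE)        # Yule-Walker estimates of phi and sigma^2
omega = seq(0, .5, length.out = 500)
phi.z = 1 - fit$ar[1]*exp(-2i*pi*omega) - fit$ar[2]*exp(-4i*pi*omega)   # phi-hat(e^{-2 pi i omega})
f.hat = fit$var.pred/Mod(phi.z)^2                # sigma^2-hat / |phi-hat(e^{-2 pi i omega})|^2
plot(omega, f.hat, type = "l", xlab = "frequency", ylab = "spectrum",
     main = "Parametric AR(2) spectral estimate")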
Statistics include numerical facts and figures. For instance: • The largest earthquake measured \(9.2\) on the Richter scale.  • Men are at least \(10\) times more likely than women to commit murder.  • One in every \(8\) South Africans is HIV positive.  • By the year 2020, there will be \(15\) people aged \(65\) and over for every new baby born. The study of statistics involves math and relies upon calculations of numbers. But it also relies heavily on how the numbers are chosen and how the statistics are interpreted. For example, consider the following three scenarios and the interpretations based upon the presented statistics. You will find that the numbers may be right, but the interpretation may be wrong. Try to identify a major flaw with each interpretation before we describe it. 1. A new advertisement for Ben and Jerry's ice cream introduced in late May of last year resulted in a 30% increase in ice cream sales for the following three months. Thus, the advertisement was effective. A major flaw is that ice cream consumption generally increases in the months of June, July, and August regardless of advertisements. This effect is called a history effect and leads people to interpret outcomes as the result of one variable when another variable (in this case, one having to do with the passage of time) is actually responsible. 2. The more churches in a city, the more crime there is. Thus, churches lead to crime. A major flaw is that both increased churches and increased crime rates can be explained by larger populations. In bigger cities, there are both more churches and more crime. This problem, which we will discuss in more detail in Chapter 6, refers to the third-variable problem. Namely, a third variable can cause both situations; however, people erroneously believe that there is a causal relationship between the two primary variables rather than recognize that a third variable can cause both. 3. \(75\%\) more interracial marriages are occurring this year than \(25\) years ago. Thus, our society accepts interracial marriages. A major flaw is that we don't have the information that we need. What is the rate at which marriages are occurring? Suppose only \(1\%\) of marriages \(25\) years ago were interracial and so now \(1.75\%\) of marriages are interracial (\(1.75\) is \(75\%\) higher than \(1\)). But this latter number is hardly evidence suggesting the acceptability of interracial marriages. In addition, the statistic provided does not rule out the possibility that the number of interracial marriages has seen dramatic fluctuations over the years and this year is not the highest. Again, there is simply not enough information to understand fully the impact of the statistics. As a whole, these examples show that statistics are not only facts and figures; they are something more than that. In the broadest sense, “statistics” refers to a range of techniques and procedures for analyzing, interpreting, displaying, and making decisions based on data. Statistics is the language of science and data. The ability to understand and communicate using statistics enables researchers from different labs, different languages, and different fields articulate to one another exactly what they have found in their work. It is an objective, precise, and powerful tool in science and in everyday life. What statistics are not Many psychology students dread the idea of taking a statistics course, and more than a few have changed majors upon learning that it is a requirement. 
That is because many students view statistics as a math class, which is actually not true. While many of you will not believe this or agree with it, statistics isn’t math. Although math is a central component of it, statistics is a broader way of organizing, interpreting, and communicating information in an objective manner. Indeed, great care has been taken to eliminate as much math from this course as possible (students who do not believe this are welcome to ask the professor what matrix algebra is). Statistics is a way of viewing reality as it exists around us in a way that we otherwise could not.
Virtually every student of the behavioral sciences takes some form of statistics class. This is because statistics is how we communicate in science. It serves as the link between a research idea and usable conclusions. Without statistics, we would be unable to interpret the massive amounts of information contained in data. Even small datasets contain hundreds – if not thousands – of numbers, each representing a specific observation we made. Without a way to organize these numbers into a more interpretable form, we would be lost, having wasted the time and money of our participants, ourselves, and the communities we serve. Beyond its use in science, however, there is a more personal reason to study statistics. Like most people, you probably feel that it is important to “take control of your life.” But what does this mean? Partly, it means being able to properly evaluate the data and claims that bombard you every day. If you cannot distinguish good from faulty reasoning, then you are vulnerable to manipulation and to decisions that are not in your best interest. Statistics provides tools that you need in order to react intelligently to information you hear or read. In this sense, statistics is one of the most important things that you can study. To be more specific, here are some claims that we have heard on several occasions. (We are not saying that each one of these claims is true!) • \(4\) out of \(5\) dentists recommend Dentine. • Almost \(85\%\) of lung cancers in men and \(45\%\) in women are tobacco-related. • Condoms are effective \(94\%\) of the time. • People tend to be more persuasive when they look others directly in the eye and speak loudly and quickly. • Women make \(75\) cents to every dollar a man makes when they work the same job. • A surprising new study shows that eating egg whites can increase one's life span. • People predict that it is very unlikely there will ever be another baseball player with a batting average over \(400\). • There is an \(80\%\) chance that in a room full of \(30\) people that at least two people will share the same birthday. • \(79.48\%\) of all statistics are made up on the spot. All of these claims are statistical in character. We suspect that some of them sound familiar; if not, we bet that you have heard other claims like them. Notice how diverse the examples are. They come from psychology, health, law, sports, business, etc. Indeed, data and data interpretation show up in discourse from virtually every facet of contemporary life. Statistics are often presented in an effort to add credibility to an argument or advice. You can see this by paying attention to television advertisements. Many of the numbers thrown about in this way do not represent careful statistical analysis. They can be misleading and push you into decisions that you might find cause to regret. For these reasons, learning about statistics is a long step towards taking control of your life. (It is not, of course, the only step needed for this purpose.) The purpose of this course, beyond preparing you for a career in psychology, is to help you learn statistical essentials. It will make you into an intelligent consumer of statistical claims. You can take the first step right away. To be an intelligent consumer of statistics, your first reflex must be to question the statistics that you encounter. 
The British Prime Minister Benjamin Disraeli is quoted by Mark Twain as having said, “There are three kinds of lies -- lies, damned lies, and statistics.” This quote reminds us why it is so important to understand statistics. So let us invite you to reform your statistical habits from now on. No longer will you blindly accept numbers or findings. Instead, you will begin to think about the numbers, their sources, and most importantly, the procedures used to generate them. The above section puts an emphasis on defending ourselves against fraudulent claims wrapped up as statistics, but let us look at a more positive note. Just as important as detecting the deceptive use of statistics is the appreciation of the proper use of statistics. You must also learn to recognize statistical evidence that supports a stated conclusion. Statistics are all around you, sometimes used well, sometimes not. We must learn how to distinguish the two cases. In doing so, statistics will likely be the course you use most in your day to day life, even if you do not ever run a formal analysis again.
In order to use statistics, we need data to analyze. Data come in an amazingly diverse range of formats, and each type gives us a unique type of information. In virtually any form, data represent the measured value of variables. A variable is simply a characteristic or feature of the thing we are interested in understanding. In psychology, we are interested in people, so we might get a group of people together and measure their levels of stress (one variable), anxiety (a second variable), and their physical health (a third variable). Once we have data on these three variables, we can use statistics to understand if and how they are related. Before we do so, we need to understand the nature of our data: what they represent and where they came from. Types of Variables When conducting research, experimenters often manipulate variables. For example, an experimenter might compare the effectiveness of four types of antidepressants. In this case, the variable is “type of antidepressant.” When a variable is manipulated by an experimenter, it is called an independent variable. The experiment seeks to determine the effect of the independent variable on relief from depression. In this example, relief from depression is called a dependent variable. In general, the independent variable is manipulated by the experimenter and its effects on the dependent variable are measured. Example \(1\) Can blueberries slow down aging? A study indicates that antioxidants found in blueberries may slow down the process of aging. In this study, \(19\)-month-old rats (equivalent to \(60\)-year-old humans) were fed either their standard diet or a diet supplemented by either blueberry, strawberry, or spinach powder. After eight weeks, the rats were given memory and motor skills tests. Although all supplemented rats showed improvement, those supplemented with blueberry powder showed the most notable improvement. 1. What is the independent variable? (dietary supplement: none, blueberry, strawberry, and spinach) 2. What are the dependent variables? (memory test and motor skills test) Example \(2\) Does beta-carotene protect against cancer? Beta-carotene supplements have been thought to protect against cancer. However, a study published in the Journal of the National Cancer Institute suggests this is false. The study was conducted with 39,000 women aged 45 and up. These women were randomly assigned to receive a beta-carotene supplement or a placebo, and their health was studied over their lifetime. Cancer rates for women taking the betacarotene supplement did not differ systematically from the cancer rates of those women taking the placebo. 1. What is the independent variable? (supplements: beta-carotene or placebo) 2. What is the dependent variable? (occurrence of cancer) Example \(3\) How bright is right? An automobile manufacturer wants to know how bright brake lights should be in order to minimize the time required for the driver of a following car to realize that the car in front is stopping and to hit the brakes. 1. What is the independent variable? (brightness of brake lights) 2. What is the dependent variable? (time to hit brakes) Levels of an Independent Variable If an experiment compares an experimental treatment with a control treatment, then the independent variable (type of treatment) has two levels: experimental and control. If an experiment were comparing five types of diets, then the independent variable (type of diet) would have \(5\) levels. 
In general, the number of levels of an independent variable is the number of experimental conditions. Qualitative and Quantitative Variables An important distinction between variables is between qualitative variables and quantitative variables. Qualitative variables are those that express a qualitative attribute such as hair color, eye color, religion, favorite movie, gender, and so on. The values of a qualitative variable do not imply a numerical ordering. Values of the variable “religion” differ qualitatively; no ordering of religions is implied. Qualitative variables are sometimes referred to as categorical variables. Quantitative variables are those variables that are measured in terms of numbers. Some examples of quantitative variables are height, weight, and shoe size. In the study on the effect of diet discussed previously, the independent variable was type of supplement: none, strawberry, blueberry, and spinach. The variable “type of supplement” is a qualitative variable; there is nothing quantitative about it. In contrast, the dependent variable “memory test” is a quantitative variable since memory performance was measured on a quantitative scale (number correct). Discrete and Continuous Variables Variables such as number of children in a household are called discrete variables since the possible scores are discrete points on the scale. For example, a household could have three children or six children, but not \(4.53\) children. Other variables such as “time to respond to a question” are continuous variables since the scale is continuous and not made up of discrete steps. The response time could be \(1.64\) seconds, or it could be \(1.64237123922121\) seconds. Of course, the practicalities of measurement preclude most measured variables from being truly continuous. Levels of Measurement Before we can conduct a statistical analysis, we need to measure our dependent variable. Exactly how the measurement is carried out depends on the type of variable involved in the analysis. Different types are measured differently. To measure the time taken to respond to a stimulus, you might use a stop watch. Stop watches are of no use, of course, when it comes to measuring someone's attitude towards a political candidate. A rating scale is more appropriate in this case (with labels like “very favorable,” “somewhat favorable,” etc.). For a dependent variable such as “favorite color,” you can simply note the color-word (like “red”) that the subject offers. Although procedures for measurement differ in many ways, they can be classified using a few fundamental categories. In a given category, all of the procedures share some properties that are important for you to know about. The categories are called “scale types,” or just “scales,” and are described in this section. Nominal scales When measuring using a nominal scale, one simply names or categorizes responses. Gender, handedness, favorite color, and religion are examples of variables measured on a nominal scale. The essential point about nominal scales is that they do not imply any ordering among the responses. For example, when classifying people according to their favorite color, there is no sense in which green is placed “ahead of” blue. Responses are merely categorized. Nominal scales embody the lowest level of measurement. 
Ordinal scales A researcher wishing to measure consumers' satisfaction with their microwave ovens might ask them to specify their feelings as either “very dissatisfied,” “somewhat dissatisfied,” “somewhat satisfied,” or “very satisfied.” The items in this scale are ordered, ranging from least to most satisfied. This is what distinguishes ordinal from nominal scales. Unlike nominal scales, ordinal scales allow comparisons of the degree to which two subjects possess the dependent variable. For example, our satisfaction ordering makes it meaningful to assert that one person is more satisfied than another with their microwave ovens. Such an assertion reflects the first person's use of a verbal label that comes later in the list than the label chosen by the second person. On the other hand, ordinal scales fail to capture important information that will be present in the other scales we examine. In particular, the difference between two levels of an ordinal scale cannot be assumed to be the same as the difference between two other levels. In our satisfaction scale, for example, the difference between the responses “very dissatisfied” and “somewhat dissatisfied” is probably not equivalent to the difference between “somewhat dissatisfied” and “somewhat satisfied.” Nothing in our measurement procedure allows us to determine whether the two differences reflect the same difference in psychological satisfaction. Statisticians express this point by saying that the differences between adjacent scale values do not necessarily represent equal intervals on the underlying scale giving rise to the measurements. (In our case, the underlying scale is the true feeling of satisfaction, which we are trying to measure.) What if the researcher had measured satisfaction by asking consumers to indicate their level of satisfaction by choosing a number from one to four? Would the difference between the responses of one and two necessarily reflect the same difference in satisfaction as the difference between the responses two and three? The answer is No. Changing the response format to numbers does not change the meaning of the scale. We still are in no position to assert that the mental step from \(1\) to \(2\) (for example) is the same as the mental step from \(3\) to \(4\). Interval scales Interval scales are numerical scales in which intervals have the same interpretation throughout. As an example, consider the Fahrenheit scale of temperature. The difference between \(30\) degrees and \(40\) degrees represents the same temperature difference as the difference between \(80\) degrees and \(90\) degrees. This is because each \(10\)-degree interval has the same physical meaning (in terms of the kinetic energy of molecules). Interval scales are not perfect, however. In particular, they do not have a true zero point even if one of the scaled values happens to carry the name “zero.” The Fahrenheit scale illustrates the issue. Zero degrees Fahrenheit does not represent the complete absence of temperature (the absence of any molecular kinetic energy). In reality, the label “zero” is applied to its temperature for quite accidental reasons connected to the history of temperature measurement. Since an interval scale has no true zero point, it does not make sense to compute ratios of temperatures. For example, there is no sense in which the ratio of \(40\) to \(20\) degrees Fahrenheit is the same as the ratio of \(100\) to \(50\) degrees; no interesting physical property is preserved across the two ratios. 
After all, if the “zero” label were applied at the temperature that Fahrenheit happens to label as \(10\) degrees, the two ratios would instead be \(30\) to \(10\) and \(90\) to \(40\), no longer the same! For this reason, it does not make sense to say that \(80\) degrees is “twice as hot” as \(40\) degrees. Such a claim would depend on an arbitrary decision about where to “start” the temperature scale, namely, what temperature to call zero (whereas the claim is intended to make a more fundamental assertion about the underlying physical reality). Ratio scales The ratio scale of measurement is the most informative scale. It is an interval scale with the additional property that its zero position indicates the absence of the quantity being measured. You can think of a ratio scale as the three earlier scales rolled up in one. Like a nominal scale, it provides a name or category for each object (the numbers serve as labels). Like an ordinal scale, the objects are ordered (in terms of the ordering of the numbers). Like an interval scale, the same difference at two places on the scale has the same meaning. And in addition, the same ratio at two places on the scale also carries the same meaning. The Fahrenheit scale for temperature has an arbitrary zero point and is therefore not a ratio scale. However, zero on the Kelvin scale is absolute zero. This makes the Kelvin scale a ratio scale. For example, if one temperature is twice as high as another as measured on the Kelvin scale, then it has twice the kinetic energy of the other temperature. Another example of a ratio scale is the amount of money you have in your pocket right now (25 cents, 55 cents, etc.). Money is measured on a ratio scale because, in addition to having the properties of an interval scale, it has a true zero point: if you have zero money, this implies the absence of money. Since money has a true zero point, it makes sense to say that someone with 50 cents has twice as much money as someone with 25 cents (or that Bill Gates has a million times more money than you do). What level of measurement is used for psychological variables? Rating scales are used frequently in psychological research. For example, experimental subjects may be asked to rate their level of pain, how much they like a consumer product, their attitudes about capital punishment, their confidence in an answer to a test question. Typically these ratings are made on a 5-point or a 7-point scale. These scales are ordinal scales since there is no assurance that a given difference represents the same thing across the range of the scale. For example, there is no way to be sure that a treatment that reduces pain from a rated pain level of 3 to a rated pain level of 2 represents the same level of relief as a treatment that reduces pain from a rated pain level of 7 to a rated pain level of 6. In memory experiments, the dependent variable is often the number of items correctly recalled. What scale of measurement is this? You could reasonably argue that it is a ratio scale. First, there is a true zero point; some subjects may get no items correct at all. Moreover, a difference of one represents a difference of one item recalled across the entire scale. It is certainly valid to say that someone who recalled 12 items recalled twice as many items as someone who recalled only 6 items. But number-of-items recalled is a more complicated case than it appears at first. Consider the following example in which subjects are asked to remember as many items as possible from a list of 10. 
Assume that (a) there are 5 easy items and 5 difficult items, (b) half of the subjects are able to recall all the easy items and different numbers of difficult items, while (c) the other half of the subjects are unable to recall any of the difficult items but they do remember different numbers of easy items. Some sample data are shown below.

Table \(1\): Sample Data
Subject   Easy Items    Difficult Items   Score
A         0 0 1 1 0     0 0 0 0 0         2
B         1 0 1 1 0     0 0 0 0 0         3
C         1 1 1 1 1     1 1 0 0 0         7
D         1 1 1 1 1     0 1 1 0 1         8

Let's compare (i) the difference between Subject A's score of 2 and Subject B's score of 3 and (ii) the difference between Subject C's score of 7 and Subject D's score of 8. The former difference is a difference of one easy item; the latter difference is a difference of one difficult item. Do these two differences necessarily signify the same difference in memory? We are inclined to respond "No" to this question, since only a little more memory may be needed to retain the additional easy item whereas a lot more memory may be needed to retain the additional hard item. The general point is that it is often inappropriate to consider psychological measurement scales as either interval or ratio.

Consequences of level of measurement

Why are we so interested in the type of scale that measures a dependent variable? The crux of the matter is the relationship between the variable's level of measurement and the statistics that can be meaningfully computed with that variable. For example, consider a hypothetical study in which 5 children are asked to choose their favorite color from blue, red, yellow, green, and purple. The researcher codes the results as follows:

Table \(2\): Favorite color data code
Color     Code
Blue      1
Red       2
Yellow    3
Green     4
Purple    5

This means that if a child said her favorite color was "Red," then the choice was coded as "2," if the child said her favorite color was "Purple," then the response was coded as 5, and so forth. Consider the following hypothetical data:

Table \(3\): Favorite color data
Subject   Color    Code
1         Blue     1
2         Blue     1
3         Green    4
4         Green    4
5         Purple   5

Each code is a number, so nothing prevents us from computing the average code assigned to the children. The average happens to be 3, but you can see that it would be senseless to conclude that the average favorite color is yellow (the color with a code of 3). Such nonsense arises because favorite color is a nominal scale, and taking the average of its numerical labels is like counting the number of letters in the name of a snake to see how long the beast is.

Does it make sense to compute the mean of numbers measured on an ordinal scale? This is a difficult question, one that statisticians have debated for decades. The prevailing (but by no means unanimous) opinion of statisticians is that for almost all practical situations, the mean of an ordinally-measured variable is a meaningful statistic. However, there are extreme situations in which computing the mean of an ordinally-measured variable can be very misleading.
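The favorite-color example can be reproduced in a couple of lines of R; the snippet below is only an illustrative sketch added here (the object name is arbitrary), showing that software will happily average nominal codes even though the result is meaningless, and that a frequency table is the sensible summary instead.

# The mean of nominal codes is computable but uninterpretable
codes = c(Blue = 1, Blue = 1, Green = 4, Green = 4, Purple = 5)
mean(codes)           # 3, the code for Yellow, a color nobody chose
table(names(codes))   # counts per color: the appropriate summary for a nominal variable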
We are usually interested in understanding a specific group of people. This group is known as the population of interest, or simply the population. The population is the collection of all people who have some characteristic in common; it can be as broad as “all people” if we have a very general research question about human psychology, or it can be extremely narrow, such as “all freshmen psychology majors at Midwestern public universities” if we have a specific group in mind. Populations and samples In statistics, we often rely on a sample --- that is, a small subset of a larger set of data --- to draw inferences about the larger set. The larger set is known as the population from which the sample is drawn. Example \(1\) You have been hired by the National Election Commission to examine how the American people feel about the fairness of the voting procedures in the U.S. Who will you ask? Solution It is not practical to ask every single American how he or she feels about the fairness of the voting procedures. Instead, we query a relatively small number of Americans, and draw inferences about the entire country from their responses. The Americans actually queried constitute our sample of the larger population of all Americans. A sample is typically a small subset of the population. In the case of voting attitudes, we would sample a few thousand Americans drawn from the hundreds of millions that make up the country. In choosing a sample, it is therefore crucial that it not over-represent one kind of citizen at the expense of others. For example, something would be wrong with our sample if it happened to be made up entirely of Florida residents. If the sample held only Floridians, it could not be used to infer the attitudes of other Americans. The same problem would arise if the sample were comprised only of Republicans. Inferences from statistics are based on the assumption that sampling is representative of the population. If the sample is not representative, then the possibility of sampling bias occurs. Sampling bias means that our conclusions apply only to our sample and are not generalizable to the full population. Example \(2\) We are interested in examining how many math classes have been taken on average by current graduating seniors at American colleges and universities during their four years in school. Solution Whereas our population in the last example included all US citizens, now it involves just the graduating seniors throughout the country. This is still a large set since there are thousands of colleges and universities, each enrolling many students. (New York University, for example, enrolls 48,000 students.) It would be prohibitively costly to examine the transcript of every college senior. We therefore take a sample of college seniors and then make inferences to the entire population based on what we find. To make the sample, we might first choose some public and private colleges and universities across the United States. Then we might sample 50 students from each of these institutions. Suppose that the average number of math classes taken by the people in our sample were 3.2. Then we might speculate that 3.2 approximates the number we would find if we had the resources to examine every senior in the entire population. But we must be careful about the possibility that our sample is non-representative of the population. Perhaps we chose an overabundance of math majors, or chose too many technical institutions that have heavy math requirements. 
Such bad sampling makes our sample unrepresentative of the population of all seniors. To solidify your understanding of sampling bias, consider the following example. Try to identify the population and the sample, and then reflect on whether the sample is likely to yield the information desired. Example \(3\) A substitute teacher wants to know how students in the class did on their last test. The teacher asks the 10 students sitting in the front row to state their latest test score. He concludes from their report that the class did extremely well. What is the sample? What is the population? Can you identify any problems with choosing the sample in the way that the teacher did? Solution The population consists of all students in the class. The sample is made up of just the 10 students sitting in the front row. The sample is not likely to be representative of the population. Those who sit in the front row tend to be more interested in the class and tend to perform higher on tests. Hence, the sample may perform at a higher level than the population. Example \(4\) A coach is interested in how many cartwheels the average college freshmen at his university can do. Eight volunteers from the freshman class step forward. After observing their performance, the coach concludes that college freshmen can do an average of 16 cartwheels in a row without stopping. Solution The population is the class of all freshmen at the coach's university. The sample is composed of the 8 volunteers. The sample is poorly chosen because volunteers are more likely to be able to do cartwheels than the average freshman; people who can't do cartwheels probably did not volunteer! In the example, we are also not told of the gender of the volunteers. Were they all women, for example? That might affect the outcome, contributing to the non-representative nature of the sample (if the school is co-ed). Simple Random Sampling Researchers adopt a variety of sampling strategies. The most straightforward is simple random sampling. Such sampling requires every member of the population to have an equal chance of being selected into the sample. In addition, the selection of one member must be independent of the selection of every other member. That is, picking one member from the population must not increase or decrease the probability of picking any other member (relative to the others). In this sense, we can say that simple random sampling chooses a sample by pure chance. To check your understanding of simple random sampling, consider the following example. What is the population? What is the sample? Was the sample picked by simple random sampling? Is it biased? Example \(5\) A research scientist is interested in studying the experiences of twins raised together versus those raised apart. She obtains a list of twins from the National Twin Registry, and selects two subsets of individuals for her study. First, she chooses all those in the registry whose last name begins with Z. Then she turns to all those whose last name begins with B. Because there are so many names that start with B, however, our researcher decides to incorporate only every other name into her sample. Finally, she mails out a survey and compares characteristics of twins raised apart versus together. Solution The population consists of all twins recorded in the National Twin Registry. It is important that the researcher only make statistical generalizations to the twins on this list, not to all twins in the nation or world. 
That is, the National Twin Registry may not be representative of all twins. Even if inferences are limited to the Registry, a number of problems affect the sampling procedure we described. For instance, choosing only twins whose last names begin with Z does not give every individual an equal chance of being selected into the sample. Moreover, such a procedure risks over-representing ethnic groups with many surnames that begin with Z. There are other reasons why choosing just the Z's may bias the sample. Perhaps such people are more patient than average because they often find themselves at the end of the line! The same problem occurs with choosing twins whose last name begins with B. An additional problem for the B's is that the “every-other-one” procedure disallowed adjacent names on the B part of the list from being both selected. Just this defect alone means the sample was not formed through simple random sampling. Sample size matters Recall that the definition of a random sample is a sample in which every member of the population has an equal chance of being selected. This means that the sampling procedure rather than the results of the procedure define what it means for a sample to be random. Random samples, especially if the sample size is small, are not necessarily representative of the entire population. For example, if a random sample of 20 subjects were taken from a population with an equal number of males and females, there would be a nontrivial probability (0.06) that 70% or more of the sample would be female. Such a sample would not be representative, although it would be drawn randomly. Only a large sample size makes it likely that our sample is close to representative of the population. For this reason, inferential statistics take into account the sample size when generalizing results from samples to populations. In later chapters, you'll see what kinds of mathematical techniques ensure this sensitivity to sample size. More complex sampling Sometimes it is not feasible to build a sample using simple random sampling. To see the problem, consider the fact that both Dallas and Houston are competing to be hosts of the 2012 Olympics. Imagine that you are hired to assess whether most Texans prefer Houston to Dallas as the host, or the reverse. Given the impracticality of obtaining the opinion of every single Texan, you must construct a sample of the Texas population. But now notice how difficult it would be to proceed by simple random sampling. For example, how will you contact those individuals who don’t vote and don’t have a phone? Even among people you find in the telephone book, how can you identify those who have just relocated to California (and had no reason to inform you of their move)? What do you do about the fact that since the beginning of the study, an additional 4,212 people took up residence in the state of Texas? As you can see, it is sometimes very difficult to develop a truly random procedure. For this reason, other kinds of sampling techniques have been devised. We now discuss two of them. Stratified Sampling Since simple random sampling often does not ensure a representative sample, a sampling method called stratified random sampling is sometimes used to make the sample more representative of the population. This method can be used if the population has a number of distinct “strata” or groups. In stratified sampling, you first identify members of your sample who belong to each group. 
Then you randomly sample from each of those subgroups in such a way that the sizes of the subgroups in the sample are proportional to their sizes in the population. Let's take an example: Suppose you were interested in views of capital punishment at an urban university. You have the time and resources to interview 200 students. The student body is diverse with respect to age; many older people work during the day and enroll in night courses (average age is 39), while younger students generally enroll in day classes (average age of 19). It is possible that night students have different views about capital punishment than day students. If 70% of the students were day students, it makes sense to ensure that 70% of the sample consisted of day students. Thus, your sample of 200 students would consist of 140 day students and 60 night students. The proportion of day students in the sample and in the population (the entire university) would be the same. Inferences to the entire population of students at the university would therefore be more secure. Convenience Sampling Not all sampling methods are perfect, and sometimes that’s okay. For example, if we are beginning research into a completely unstudied area, we may sometimes take some shortcuts to quickly gather data and get a general idea of how things work before fully investing a lot of time and money into well-designed research projects with proper sampling. This is known as convenience sampling, named for its ease of use. In limited cases, such as the one just described, convenience sampling is okay because we intend to follow up with a representative sample. Unfortunately, sometimes convenience sampling is used due only to its convenience without the intent of improving on it in future work.
Research studies come in many forms, and, just like with the different types of data we have, different types of studies tell us different things. The choice of research design is determined by the research question and the logistics involved. Though a complete understanding of different research designs is the subject for at least one full class, if not more, a basic understanding of the principles is useful here. There are three types of research designs we will discuss: experimental, quasi-experimental, and non-experimental.

Experimental Designs

If we want to know if a change in one variable causes a change in another variable, we must use a true experiment. An experiment is defined by the use of random assignment to treatment conditions and manipulation of the independent variable. To understand what this means, let's look at an example:

A clinical researcher wants to know if a newly developed drug is effective in treating the flu. Working with collaborators at several local hospitals, she randomly samples 40 flu patients and randomly assigns each one to one of two conditions: Group A receives the new drug and Group B receives a placebo. She measures the symptoms of all participants after 1 week to see if there is a difference in symptoms between the groups.

In the example, the independent variable is the drug treatment; we manipulate it into 2 levels: new drug or placebo. Without the researcher administering the drug (i.e. manipulating the independent variable), there would be no difference between the groups. Each person, after being randomly sampled to be in the research, was then randomly assigned to one of the 2 groups. That is, random sampling and random assignment are not the same thing and cannot be used interchangeably. For research to be a true experiment, random assignment must be used. For research to be representative of the population, random sampling must be used. The use of both techniques helps ensure that there are no systematic differences between the groups, thus eliminating the potential for sampling bias. The dependent variable in the example is flu symptoms. Barring any other intervention, we would assume that people in both groups, on average, get better at roughly the same rate. Because there are no systematic differences between the 2 groups, if the researcher does find a difference in symptoms, she can confidently attribute it to the effectiveness of the new drug.

Quasi-Experimental Designs

Quasi-experimental research involves getting as close as possible to the conditions of a true experiment when we cannot meet all requirements. Specifically, a quasi-experiment involves manipulating the independent variable but not randomly assigning people to groups. There are several reasons this might be used. First, it may be unethical to deny potential treatment to someone if there is good reason to believe it will be effective and that the person would unduly suffer if they did not receive it. Alternatively, it may be impossible to randomly assign people to groups. Consider the following example:

A professor wants to test out a new teaching method to see if it improves student learning. Because he is teaching two sections of the same course, he decides to teach one section the traditional way and the other section using the new method. At the end of the semester, he compares the grades on the final for each class to see if there is a difference.
In this example, the professor has manipulated his teaching method, which is the independent variable, hoping to find a difference in student performance, the dependent variable. However, because students choose which section of the course to enroll in, he cannot randomly assign the students to a particular group, thus precluding the use of a true experiment to answer his research question. Because of this, we cannot know for sure that there are no systematic differences between the classes other than teaching style and therefore cannot determine causality.

Non-Experimental Designs

Finally, non-experimental research (sometimes called correlational research) involves observing things as they occur naturally and recording our observations as data. Consider this example: A data scientist wants to know if there is a relation between how conscientious a person is and whether that person is a good employee. She hopes to use this information to predict the job performance of future employees by measuring their personality when they are still job applicants. She randomly samples volunteer employees from several different companies, measuring their conscientiousness and having their bosses rate their performance on the job. She analyzes these data to find a relation.

Here, it is not possible to manipulate conscientiousness, so the researcher must gather data from employees as they are in order to find a relation between her variables. Although this technique cannot establish causality, it can still be quite useful. If the relation between conscientiousness and job performance is consistent, then it doesn't necessarily matter whether conscientiousness causes good performance or whether they are both caused by something else; she can still measure conscientiousness to predict future performance. Additionally, these studies have the benefit of reflecting reality as it actually exists, since we as researchers do not change anything.
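To make the distinction between random sampling and random assignment concrete, here is a minimal R sketch. The patient labels, the sampling frame of 2,000 patients, and the seed are invented for illustration; only the 40-patient, two-condition design comes from the flu example above.

set.seed(1)                                    # for reproducibility
population <- paste0("patient", 1:2000)        # hypothetical sampling frame

# Random sampling: draw 40 patients so the sample represents the population
flu_sample <- sample(population, size = 40)

# Random assignment: shuffle the two treatment labels across those 40 patients
condition <- sample(rep(c("new drug", "placebo"), each = 20))
data.frame(patient = flu_sample, condition = condition)

The same sample() function does both jobs here: drawing the 40 patients is random sampling, while shuffling the 20 "new drug" and 20 "placebo" labels across those patients is random assignment.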
Now that we understand the nature of our data, let's turn to the types of statistics we can use to interpret them. There are 2 types of statistics: descriptive and inferential.

Descriptive Statistics

Descriptive statistics are numbers that are used to summarize and describe data. The word "data" refers to the information that has been collected from an experiment, a survey, an historical record, etc. (By the way, "data" is plural. One piece of information is called a "datum.") If we are analyzing birth certificates, for example, a descriptive statistic might be the percentage of certificates issued in New York State, or the average age of the mother. Any other number we choose to compute also counts as a descriptive statistic for the data from which the statistic is computed. Several descriptive statistics are often used at one time to give a full picture of the data. Descriptive statistics are just descriptive. They do not involve generalizing beyond the data at hand. Generalizing from our data to another set of cases is the business of inferential statistics, which you'll be studying in another section. Here we focus on (mere) descriptive statistics. Some descriptive statistics are shown in Table \(1\). The table shows the average salaries for various occupations in the United States in 1999.

Table \(1\): Average salaries for various occupations in 1999.
\$112,760 pediatricians
\$106,130 dentists
\$100,090 podiatrists
\$76,140 physicists
\$53,410 architects
\$49,720 school, clinical, and counseling psychologists
\$47,910 flight attendants
\$39,560 elementary school teachers
\$38,710 police officers
\$18,980 floral designers

Descriptive statistics like these offer insight into American society. It is interesting to note, for example, that we pay the people who educate our children and who protect our citizens a great deal less than we pay people who take care of our feet or our teeth. For more descriptive statistics, consider Table \(2\). It shows the number of unmarried men per 100 unmarried women in U.S. Metro Areas in 1990. From this table we see that men outnumber women most in Jacksonville, NC, and women outnumber men most in Sarasota, FL. You can see that descriptive statistics can be useful if we are looking for an opposite-sex partner! (These data come from the Information Please Almanac.)

Table \(2\): Number of unmarried men per 100 unmarried women in U.S. Metro Areas in 1990.
Cities with mostly men (men per 100 women):
1. Jacksonville, NC: 224
2. Killeen-Temple, TX: 123
3. Fayetteville, NC: 118
4. Brazoria, TX: 117
5. Lawton, OK: 116
6. State College, PA: 113
7. Clarksville-Hopkinsville, TN-KY: 113
8. Anchorage, Alaska: 112
9. Salinas-Seaside-Monterey, CA: 112
10. Bryan-College Station, TX: 111
Cities with mostly women (men per 100 women):
1. Sarasota, FL: 66
2. Bradenton, FL: 68
3. Altoona, PA: 69
4. Springfield, IL: 70
5. Jacksonville, TN: 70
6. Gadsden, AL: 70
7. Wheeling, WV: 70
8. Charleston, WV: 71
9. St. Joseph, MO: 71
10. Lynchburg, VA: 71
NOTE: Unmarried includes never-married, widowed, and divorced persons, 15 years or older.

These descriptive statistics may make us ponder why the numbers are so disparate in these cities. One potential explanation, for instance, as to why there are more women in Florida than men may involve the fact that elderly individuals tend to move down to the Sarasota region and that women tend to outlive men. Thus, more women might live in Sarasota than men. However, in the absence of proper data, this is only speculation.
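As a quick illustration of computing descriptive statistics in software, the following R sketch enters the Table \(1\) salaries and summarizes them. The object name and the particular statistics chosen are ours, not part of the table.

# Salaries from Table 1, stored as a named vector
salaries <- c(pediatricians = 112760, dentists = 106130, podiatrists = 100090,
              physicists = 76140, architects = 53410, psychologists = 49720,
              flight_attendants = 47910, teachers = 39560,
              police_officers = 38710, floral_designers = 18980)

mean(salaries)     # average of the ten listed salaries (about 64,341)
median(salaries)   # middle value when the salaries are ordered (about 51,565)
range(salaries)    # lowest and highest salaries in the table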
You probably know that descriptive statistics are central to the world of sports. Every sporting event produces numerous statistics such as the shooting percentage of players on a basketball team. For the Olympic marathon (a foot race of 26.2 miles), we possess data that cover more than a century of competition. (The first modern Olympics took place in 1896.) The following table shows the winning times for both men and women (the latter have only been allowed to compete since 1984). Table \(3\): Winning Olympic marathon times. Year Winner Country Time Women 1984 Joan Benoit USA 2:24:52 1988 Rosa Mota POR 2:25:40 1992 Valentina Yegorova UT 2:32:41 1996 Fatuma Roba ETH 2:26:05 2000 Naoko Takahashi JPN 2:23:14 2004 Mizuki Noguchi JPN 2:26:20 Men 1896 Spiridon Louis GRE 2:58:50 1900 Michel Theato FRA 2:59:45 1904 Thomas Hicks USA 3:28:53 1906 Billy Sherring CAN 2:51:23 1908 Johnny Hayes USA 2:55:18 1912 Kenneth McArthur S. Afr. 2:36:54 1920 Hannes Kolehmainen FIN 2:32:35 1924 Albin Stenroos FIN 2:41:22 1928 Boughra El Ouafi FRA 2:32:57 1932 Juan Carlos Zabala ARG 2:31:36 1936 Sohn Kee-Chung JPN 2:29:19 1948 Delfo Cabrera ARG 2:34:51 1952 Emil Ztopek CZE 2:23:03 1956 Alain Mimoun FRA 2:25:00 1960 Abebe Bikila ETH 2:15:16 1964 Abebe Bikila ETH 2:12:11 1968 Mamo Wolde ETH 2:20:26 1972 Frank Shorter USA 2:12:19 1976 Waldemar Cierpinski E.Ger 2:09:55 1980 Waldemar Cierpinski E.Ger 2:11:03 1984 Carlos Lopes POR 2:09:21 1988 Gelindo Bordin ITA 2:10:32 1992 Hwang Young-Cho S. Kor 2:13:23 1996 Josia Thugwane S. Afr. 2:12:36 2000 Gezahenge Abera ETH 2:10.10 2004 Stefano Baldini ITA 2:10:55 There are many descriptive statistics that we can compute from the data in the table. To gain insight into the improvement in speed over the years, let us divide the men's times into two pieces, namely, the first 13 races (up to 1952) and the second 13 (starting from 1956). The mean winning time for the first 13 races is 2 hours, 44 minutes, and 22 seconds (written 2:44:22). The mean winning time for the second 13 races is 2:13:18. This is quite a difference (over half an hour). Does this prove that the fastest men are running faster? Or is the difference just due to chance, no more than what often emerges from chance differences in performance from year to year? We can't answer this question with descriptive statistics alone. All we can affirm is that the two means are “suggestive.” Examining Table \(3\) leads to many other questions. We note that Takahashi (the lead female runner in 2000) would have beaten the male runner in 1956 and all male runners in the first 12 marathons. This fact leads us to ask whether the gender gap will close or remain constant. When we look at the times within each gender, we also wonder how far they will decrease (if at all) in the next century of the Olympics. Might we one day witness a sub-2 hour marathon? The study of statistics can help you make reasonable guesses about the answers to these questions. It is also important to differentiate what we use to describe populations vs what we use to describe samples. A population is described by a parameter; the parameter is the true value of the descriptive in the population, but one that we can never know for sure. For example, the Bureau of Labor Statistics reports that the average hourly wage of chefs is \$23.87. However, even if this number was computed using information from every single chef in the United States (making it a parameter), it would quickly become slightly off as one chef retires and a new chef enters the job market. 
Additionally, as noted above, there is virtually no way to collect data from every single person in a population. In order to understand a variable, we estimate the population parameter using a sample statistic. Here, the term “statistic” refers to the specific number we compute from the data (e.g. the average), not the field of statistics. A sample statistic is an estimate of the true population parameter, and if our sample is representative of the population, then the statistic is considered to be a good estimator of the parameter. Even the best sample will be somewhat off from the full population, earlier referred to as sampling bias, and as a result, there will always be a tiny discrepancy between the parameter and the statistic we use to estimate it. This difference is known as sampling error, and, as we will see throughout the course, understanding sampling error is the key to understanding statistics. Every observation we make about a variable, be it a full research study or observing an individual’s behavior, is incapable of being completely representative of all possibilities for that variable. Knowing where to draw the line between an unusual observation and a true difference is what statistics is all about. Inferential Statistics Descriptive statistics are wonderful at telling us what our data look like. However, what we often want to understand is how our data behave. What variables are related to other variables? Under what conditions will the value of a variable change? Are two groups different from each other, and if so, are people within each group different or similar? These are the questions answered by inferential statistics, and inferential statistics are how we generalize from our sample back up to our population. Units 2 and 3 are all about inferential statistics, the formal analyses and tests we run to make conclusions about our data. For example, we will learn how to use a t statistic to determine whether people change over time when enrolled in an intervention. We will also use an F statistic to determine if we can predict future values on a variable based on current known values of a variable. There are many types of inferential statistics, each allowing us insight into a different behavior of the data we collect. This course will only touch on a small subset (or a sample) of them, but the principles we learn along the way will make it easier to learn new tests, as most inferential statistics follow the same structure and format.
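The idea that a sample statistic scatters around the population parameter purely by chance can be illustrated with a small simulation. The R sketch below is only a toy example: the normally distributed population, its standard deviation, and the sample size are invented, and the mean of \$23.87 is borrowed from the chef-wage figure mentioned above.

set.seed(42)
population <- rnorm(100000, mean = 23.87, sd = 5)   # hypothetical hourly wages
mu <- mean(population)                              # the population parameter

# Draw 1000 random samples of 50 wages each and record the sample means
sample_means <- replicate(1000, mean(sample(population, size = 50)))

mu                      # the parameter we are trying to estimate
summary(sample_means)   # sample means scatter around mu
hist(sample_means, main = "Sample means around the population parameter")

Every sample mean misses the parameter by a little; that discrepancy is sampling error, and its typical size shrinks as the sample size grows.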
As noted above, statistics is not math. It does, however, use math as a tool. Many statistical formulas involve summing numbers. Fortunately there is a convenient notation for expressing summation. This section covers the basics of this summation notation. Let's say we have a variable $\mathrm{X}$ that represents the weights (in grams) of 4 grapes:

Table $1$: Weights (in grams) of 4 grapes.
Grape 1: 4.6
Grape 2: 5.1
Grape 3: 4.9
Grape 4: 4.4

We label Grape 1's weight $\mathrm{X}_{1}$, Grape 2's weight $\mathrm{X}_{2}$, etc. The following formula means to sum up the weights of the four grapes: $\sum_{i=1}^{4} X_{i}$ The Greek letter $\Sigma$ indicates summation. The "i = 1" at the bottom indicates that the summation is to start with $\mathrm{X}_{1}$ and the 4 at the top indicates that the summation will end with $\mathrm{X}_{4}$. The "$\mathrm{X}_{i}$" indicates that $\mathrm{X}$ is the variable to be summed as i goes from 1 to 4. Therefore, $\sum_{i=1}^{4} X_{i}=X_{1}+X_{2}+X_{3}+X_{4}=4.6+5.1+4.9+4.4=19 \nonumber$

The symbol $\sum_{i=1}^{3} X_{i} \nonumber$ indicates that only the first 3 scores are to be summed. The index variable i goes from 1 to 3. When all the scores of a variable (such as $\mathrm{X}$) are to be summed, it is often convenient to use the following abbreviated notation: $\sum \mathrm{X} \nonumber$ Thus, when no values of i are shown, it means to sum all the values of $\mathrm{X}$.

Many formulas involve squaring numbers before they are summed. This is indicated as $\sum \mathrm{X}^{2} = 4.6^{2}+5.1^{2}+4.9^{2}+4.4^{2} = 21.16+26.01+24.01+19.36 = 90.54 \nonumber$ Notice that $\left(\sum \mathrm{X} \right)^{2} \neq \sum \mathrm{X}^{2}$ because the expression on the left means to sum up all the values of $\mathrm{X}$ and then square the sum ($19^{2} = 361$), whereas the expression on the right means to square the numbers and then sum the squares (90.54, as shown).

Some formulas involve the sum of cross products. Below are the data for variables $\mathrm{X}$ and $\mathrm{Y}$. The cross products ($\mathrm{XY}$) are shown in the third column. The sum of the cross products is 3 + 4 + 21 = 28.

Table $2$
$\mathrm{X}$ = 1, $\mathrm{Y}$ = 3, $\mathrm{XY}$ = 3
$\mathrm{X}$ = 2, $\mathrm{Y}$ = 2, $\mathrm{XY}$ = 4
$\mathrm{X}$ = 3, $\mathrm{Y}$ = 7, $\mathrm{XY}$ = 21

In summation notation, this is written as: $\sum \mathrm{XY} = 28 \nonumber$

1.08: Exercises

1. In your own words, describe why we study statistics.
Answer: Your answer could take many forms but should include information about objectively interpreting information and/or communicating results and research conclusions.
2. For each of the following, determine if the variable is continuous or discrete:
   1. Time taken to read a book chapter
   2. Favorite food
   3. Cognitive ability
   4. Temperature
   5. Letter grade received in a class
3. For each of the following, determine the level of measurement:
   1. T-shirt size
   2. Time taken to run 100 meter race
   3. First, second, and third place in 100 meter race
   4. Birthplace
   5. Temperature in Celsius
Answer: 1. Ordinal 2. Ratio 3. Ordinal 4. Nominal 5. Interval
4. What is the difference between a population and a sample? Which is described by a parameter and which is described by a statistic?
5. What is sampling bias? What is sampling error?
Answer: Sampling bias is the difference in demographic characteristics between a sample and the population it should represent. Sampling error is the difference between a population parameter and a sample statistic that arises from random chance, even when there is no sampling bias.
6. What is the difference between a simple random sample and a stratified random sample?
7. What are the two key characteristics of a true experimental design?
Answer: Random assignment to treatment conditions and manipulation of the independent variable.
8. When would we use a quasi-experimental design?
9. Use the following dataset for the computations below:
   $\mathrm{X}$ = 2, $\mathrm{Y}$ = 8
   $\mathrm{X}$ = 3, $\mathrm{Y}$ = 8
   $\mathrm{X}$ = 7, $\mathrm{Y}$ = 4
   $\mathrm{X}$ = 5, $\mathrm{Y}$ = 1
   $\mathrm{X}$ = 9, $\mathrm{Y}$ = 4
   1. $\sum \mathrm{X}$
   2. $\sum \mathrm{Y}^2$
   3. $\sum \mathrm{XY}$
   4. $(\sum \mathrm{Y})^2$
Answer: 1. 26 2. 161 3. 109 4. 625
10. What are the most common measures of central tendency and spread?
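Summation notation maps directly onto the sum() function in R. The short sketch below is not part of the exercises; it simply checks the answers to exercise 9.

X <- c(2, 3, 7, 5, 9)
Y <- c(8, 8, 4, 1, 4)

sum(X)       # sum of X: 26
sum(Y^2)     # sum of the squared Y values: 161
sum(X * Y)   # sum of the cross products XY: 109
sum(Y)^2     # square of the sum of Y: 25^2 = 625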
When Apple Computer introduced the iMac computer in August 1998, the company wanted to learn whether the iMac was expanding Apple’s market share. Was the iMac just attracting previous Macintosh owners? Or was it purchased by newcomers to the computer market and by previous Windows users who were switching over? To find out, 500 iMac customers were interviewed. Each customer was categorized as a previous Macintosh owner, a previous Windows owner, or a new computer purchaser. This section examines graphical methods for displaying the results of the interviews. We’ll learn some general lessons about how to graph data that fall into a small number of categories. A later section will consider how to graph numerical data in which each observation is represented by a number in some range. The key point about the qualitative data that occupy us in the present section is that they do not come with a pre-established ordering (the way numbers are ordered). For example, there is no natural sense in which the category of previous Windows users comes before or after the category of previous Macintosh users. This situation may be contrasted with quantitative data, such as a person’s weight. People of one weight are naturally ordered with respect to people of a different weight. Frequency Tables All of the graphical methods shown in this section are derived from frequency tables. Table 1 shows a frequency table for the results of the iMac study; it shows the frequencies of the various response categories. It also shows the relative frequencies, which are the proportion of responses in each category. For example, the relative frequency for “none” of 0.17 = 85/500. Table \(1\): Frequency Table for the iMac Data. Previous Ownership Frequency Relative Frequency None 85 0.17 Windows 60 0.12 Macintosh 355 0.71 Total 500 1 Pie Charts The pie chart in Figure \(1\) shows the results of the iMac study. In a pie chart, each category is represented by a slice of the pie. The area of the slice is proportional to the percentage of responses in the category. This is simply the relative frequency multiplied by 100. Although most iMac purchasers were Macintosh owners, Apple was encouraged by the 12% of purchasers who were former Windows users, and by the 17% of purchasers who were buying a computer for the first time. Pie charts are effective for displaying the relative frequencies of a small number of categories. They are not recommended, however, when you have a large number of categories. Pie charts can also be confusing when they are used to compare the outcomes of two different surveys or experiments. In an influential book on the use of graphs, Edward Tufte asserted “The only worse design than a pie chart is several of them.” Here is another important point about pie charts. If they are based on a small number of observations, it can be misleading to label the pie slices with percentages. For example, if just 5 people had been interviewed by Apple Computers, and 3 were former Windows users, it would be misleading to display a pie chart with the Windows slice showing 60%. With so few people interviewed, such a large percentage of Windows users might easily have occurred since chance can cause large errors with small samples. In this case, it is better to alert the user of the pie chart to the actual numbers involved. The slices should therefore be labeled with the actual frequencies observed (e.g., 3) instead of with percentages. Bar charts Bar charts can also be used to represent frequencies of different categories. 
A bar chart of the iMac purchases is shown in Figure \(2\). Frequencies are shown on the Yaxis and the type of computer previously owned is shown on the X-axis. Typically, the Y-axis shows the number of observations in each category rather than the percentage of observations in each category as is typical in pie charts. Comparing Distributions Often we need to compare the results of different surveys, or of different conditions within the same overall survey. In this case, we are comparing the “distributions” of responses between the surveys or conditions. Bar charts are often excellent for illustrating differences between two distributions. Figure \(3\) shows the number of people playing card games at the Yahoo web site on a Sunday and on a Wednesday in the spring of 2001. We see that there were more players overall on Wednesday compared to Sunday. The number of people playing Pinochle was nonetheless the same on these two days. In contrast, there were about twice as many people playing hearts on Wednesday as on Sunday. Facts like these emerge clearly from a well-designed bar chart. The bars in Figure \(3\) are oriented horizontally rather than vertically. The horizontal format is useful when you have many categories because there is more room for the category labels. We’ll have more to say about bar charts when we consider numerical quantities later in this chapter. Some graphical mistakes to avoid Don’t get fancy! People sometimes add features to graphs that don’t help to convey their information. For example, 3-dimensional bar charts such as the one shown in Figure \(4\) are usually not as effective as their two-dimensional counterparts. Here is another way that fanciness can lead to trouble. Instead of plain bars, it is tempting to substitute meaningful images. For example, Figure \(5\) presents the iMac data using pictures of computers. The heights of the pictures accurately represent the number of buyers, yet Figure \(5\) is misleading because the viewer's attention will be captured by areas. The areas can exaggerate the size differences between the groups. In terms of percentages, the ratio of previous Macintosh owners to previous Windows owners is about 6 to 1. But the ratio of the two areas in Figure \(5\) is about 35 to 1. A biased person wishing to hide the fact that many Windows owners purchased iMacs would be tempted to use Figure \(5\) instead of Figure \(2\)! Edward Tufte coined the term “lie factor” to refer to the ratio of the size of the effect shown in a graph to the size of the effect shown in the data. He suggests that lie factors greater than 1.05 or less than 0.95 produce unacceptable distortion. Another distortion in bar charts results from setting the baseline to a value other than zero. The baseline is the bottom of the Y-axis, representing the least number of cases that could have occurred in a category. Normally, but not always, this number should be zero. Figure 6 shows the iMac data with a baseline of 50. Once again, the differences in areas suggests a different story than the true differences in percentages. The number of Windows-switchers seems minuscule compared to its true value of 12%. Finally, we note that it is a serious mistake to use a line graph when the X-axis contains merely qualitative variables. A line graph is essentially a bar graph with the tops of the bars represented by points joined by lines (the rest of the bar is suppressed). Figure \(7\) inappropriately shows a line graph of the card game data from Yahoo. 
The drawback to Figure \(7\) is that it gives the false impression that the games are naturally ordered in a numerical way when, in fact, they are ordered alphabetically.

Summary

Pie charts and bar charts can both be effective methods of portraying qualitative data. Bar charts are better when there are more than just a few categories and for comparing two or more distributions. Be careful to avoid creating misleading graphs.
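As a brief software aside, the frequency table and a bar chart for the iMac data can be produced with a few lines of base R. The object name is ours; the frequencies are the ones from Table \(1\).

# Frequencies of previous computer ownership among the 500 iMac purchasers
imac_counts <- c(None = 85, Windows = 60, Macintosh = 355)

imac_counts / sum(imac_counts)   # relative frequencies: 0.17, 0.12, 0.71

# A plain two-dimensional bar chart: frequencies on the Y-axis,
# previous ownership on the X-axis, baseline at zero
barplot(imac_counts,
        ylab = "Frequency",
        xlab = "Previous ownership",
        main = "iMac purchasers by previous computer ownership")

Keeping the chart two-dimensional, with the baseline at zero and frequencies on the Y-axis, avoids the distortions described above.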
As discussed in the section on variables in Chapter 1, quantitative variables are variables measured on a numeric scale. Height, weight, response time, subjective rating of pain, temperature, and score on an exam are all examples of quantitative variables. Quantitative variables are distinguished from categorical (sometimes called qualitative) variables such as favorite color, religion, city of birth, favorite sport in which there is no ordering or measuring involved. There are many types of graphs that can be used to portray distributions of quantitative variables. The upcoming sections cover the following types of graphs: 1. stem and leaf displays 2. histograms 3. frequency polygons 4. box plots 5. bar charts 6. line graphs 7. dot plots 8. scatter plots (discussed in a different chapter) Some graph types such as stem and leaf displays are best-suited for small to moderate amounts of data, whereas others such as histograms are best suited for large amounts of data. Graph types such as box plots are good at depicting differences between distributions. Scatter plots are used to show the relationship between two variables. Stem and Leaf Displays A stem and leaf display is a graphical method of displaying data. It is particularly useful when your data are not too numerous. In this section, we will explain how to construct and interpret this kind of graph. As usual, we will start with an example. Consider Figure \(1\) that shows the number of touchdown passes (TD passes) thrown by each of the 31 teams in the National Football League in the 2000 season. A stem and leaf display of the data is shown in Figure \(2\). The left portion of Figure \(2\) contains the stems. They are the numbers 3, 2, 1, and 0, arranged as a column to the left of the bars. Think of these numbers as 10’s digits. A stem of 3, for example, can be used to represent the 10’s digit in any of the numbers from 30 to 39. The numbers to the right of the bar are leaves, and they represent the 1’s digits. Every leaf in the graph therefore stands for the result of adding the leaf to 10 times its stem. To make this clear, let us examine Figure \(2\) more closely. In the top row, the four leaves to the right of stem 3 are 2, 3, 3, and 7. Combined with the stem, these leaves represent the numbers 32, 33, 33, and 37, which are the numbers of TD passes for the first four teams in Figure \(1\). The next row has a stem of 2 and 12 leaves. Together, they represent 12 data points, namely, two occurrences of 20 TD passes, three occurrences of 21 TD passes, three occurrences of 22 TD passes, one occurrence of 23 TD passes, two occurrences of 28 TD passes, and one occurrence of 29 TD passes. We leave it to you to figure out what the third row represents. The fourth row has a stem of 0 and two leaves. It stands for the last two entries in Figure \(1\), namely 9 TD passes and 6 TD passes. (The latter two numbers may be thought of as 09 and 06.) One purpose of a stem and leaf display is to clarify the shape of the distribution. You can see many facts about TD passes more easily in Figure \(7\) than in Figure \(1\). For example, by looking at the stems and the shape of the plot, you can tell that most of the teams had between 10 and 29 passing TD's, with a few having more and a few having less. The precise numbers of TD passes can be determined by examining the leaves. We can make our figure even more revealing by splitting each stem into two parts. Figure \(3\) shows how to do this. 
The top row is reserved for numbers from 35 to 39 and holds only the 37 TD passes made by the first team in Figure \(1\). The second row is reserved for the numbers from 30 to 34 and holds the 32, 33, and 33 TD passes made by the next three teams in the table. You can see for yourself what the other rows represent. Figure \(3\) is more revealing than Figure \(2\) because the latter figure lumps too many values into a single row. Whether you should split stems in a display depends on the exact form of your data. If rows get too long with single stems, you might try splitting them into two or more parts. There is a variation of stem and leaf displays that is useful for comparing distributions. The two distributions are placed back to back along a common column of stems. The result is a “back-to-back stem and leaf display.” Figure \(4\) shows such a graph. It compares the numbers of TD passes in the 1998 and 2000 seasons. The stems are in the middle, the leaves to the left are for the 1998 data, and the leaves to the right are for the 2000 data. For example, the second-to-last row shows that in 1998 there were teams with 11, 12, and 13 TD passes, and in 2000 there were two teams with 12 and three teams with 14 TD passes. Figure \(4\) helps us see that the two seasons were similar, but that only in 1998 did any teams throw more than 40 TD passes. There are two things about the football data that make them easy to graph with stems and leaves. First, the data are limited to whole numbers that can be represented with a one-digit stem and a one-digit leaf. Second, all the numbers are positive. If the data include numbers with three or more digits, or contain decimals, they can be rounded to two-digit accuracy. Negative values are also easily handled. Let us look at another example. Figure \(5\) shows data from the case study Weapons and Aggression. Each value is the mean difference over a series of trials between the times it took an experimental subject to name aggressive words (like “punch”) under two conditions. In one condition, the words were preceded by a non-weapon word such as “bug.” In the second condition, the same words were preceded by a weapon word such as “gun” or “knife.” The issue addressed by the experiment was whether a preceding weapon word would speed up (or prime) pronunciation of the aggressive word compared to a non-weapon priming word. A positive difference implies greater priming of the aggressive word by the weapon word. Negative differences imply that the priming by the weapon word was less than for a neutral word. You see that the numbers range from 43.2 to -27.4. The first value indicates that one subject was 43.2 milliseconds faster pronouncing aggressive words when they were preceded by weapon words than when preceded by neutral words. The value 27.4 indicates that another subject was 27.4 milliseconds slower pronouncing aggressive words when they were preceded by weapon words. The data are displayed with stems and leaves in Figure \(6\). Since stem and leaf displays can only portray two whole digits (one for the stem and one for the leaf) the numbers are first rounded. Thus, the value 43.2 is rounded to 43 and represented with a stem of 4 and a leaf of 3. Similarly, 42.9 is rounded to 43. To represent negative numbers, we simply use negative stems. For example, the bottom row of the figure represents the number –27. The second-to-last row represents the numbers -10, -10, -15, etc. Once again, we have rounded the original values from Figure \(5\). 
Observe that the figure contains a row headed by "0" and another headed by "-0." The stem of 0 is for numbers between 0 and 9, whereas the stem of -0 is for numbers between 0 and -9. For example, the fifth row of the table holds the numbers 1, 2, 4, 5, 5, 8, 9 and the sixth row holds 0, -6, -7, and -9. Values that are exactly 0 before rounding should be split as evenly as possible between the "0" and "-0" rows. In Figure \(5\), none of the values are 0 before rounding. The "0" that appears in the "-0" row comes from the original value of -0.2 in the table.

Although stem and leaf displays are unwieldy for large data sets, they are often useful for data sets with up to 200 observations. Figure \(7\) portrays the distribution of populations of 185 US cities in 1998. To be included, a city had to have between 100,000 and 500,000 residents. Since a stem and leaf plot shows only two-place accuracy, we had to round the numbers to the nearest 10,000. For example, the largest number (493,559) was rounded to 490,000 and then plotted with a stem of 4 and a leaf of 9. The fourth highest number (463,201) was rounded to 460,000 and plotted with a stem of 4 and a leaf of 6. Thus, the stems represent units of 100,000 and the leaves represent units of 10,000. Notice that each stem value is split into five parts: 0-1, 2-3, 4-5, 6-7, and 8-9. Whether your data can be suitably represented by a stem and leaf display depends on whether they can be rounded without loss of important information. Also, their extreme values must fit into two successive digits, as the data in Figure \(7\) fit into the 10,000 and 100,000 places (for leaves and stems, respectively). Deciding what kind of graph is best suited to displaying your data thus requires good judgment. Statistics is not just recipes!

Histograms

A histogram is a graphical method for displaying the shape of a distribution. It is particularly useful when there are a large number of observations. We begin with an example consisting of the scores of 642 students on a psychology test. The test consists of 197 items each graded as "correct" or "incorrect." The students' scores ranged from 46 to 167. The first step is to create a frequency table. Unfortunately, a simple frequency table would be too big, containing over 100 rows. To simplify the table, we group scores together as shown in Table \(1\).

Table \(1\): Grouped Frequency Distribution of Psychology Test Scores (interval lower limit to upper limit: class frequency)
39.5 to 49.5: 3
49.5 to 59.5: 10
59.5 to 69.5: 53
69.5 to 79.5: 107
79.5 to 89.5: 147
89.5 to 99.5: 130
99.5 to 109.5: 78
109.5 to 119.5: 59
119.5 to 129.5: 36
129.5 to 139.5: 11
139.5 to 149.5: 6
149.5 to 159.5: 1
159.5 to 169.5: 1

To create this table, the range of scores was broken into intervals, called class intervals. The first interval is from 39.5 to 49.5, the second from 49.5 to 59.5, etc. Next, the number of scores falling into each interval was counted to obtain the class frequencies. There are three scores in the first interval, 10 in the second, etc. Class intervals of width 10 provide enough detail about the distribution to be revealing without making the graph too "choppy." More information on choosing the widths of class intervals is presented later in this section. Placing the limits of the class intervals midway between two numbers (e.g., 49.5) ensures that every score will fall in an interval rather than on the boundary between intervals. In a histogram, the class frequencies are represented by bars. The height of each bar corresponds to its class frequency.
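In software, grouping scores into class intervals and drawing the histogram are usually a single step. The raw 642 test scores are not listed here, so the R sketch below substitutes made-up scores (uniform between the stated minimum of 46 and maximum of 167) purely to show the mechanism; the class boundaries are the ones from Table \(1\).

set.seed(7)
scores <- round(runif(642, min = 46, max = 167))   # stand-in for the real scores

breaks <- seq(39.5, 169.5, by = 10)                # class interval limits
table(cut(scores, breaks = breaks))                # grouped frequency table

hist(scores, breaks = breaks,
     xlab = "Test score", ylab = "Frequency",
     main = "Histogram with class intervals of width 10")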
A histogram of these data is shown in Figure \(8\). The histogram makes it plain that most of the scores are in the middle of the distribution, with fewer scores in the extremes. You can also see that the distribution is not symmetric: the scores extend to the right farther than they do to the left. The distribution is therefore said to be skewed. (We'll have more to say about shapes of distributions in Chapter 3.) In our example, the observations are whole numbers. Histograms can also be used when the scores are measured on a more continuous scale such as the length of time (in milliseconds) required to perform a task. In this case, there is no need to worry about fence sitters since they are improbable. (It would be quite a coincidence for a task to require exactly 7 seconds, measured to the nearest thousandth of a second.) We are therefore free to choose whole numbers as boundaries for our class intervals, for example, 4000, 5000, etc. The class frequency is then the number of observations that are greater than or equal to the lower bound, and strictly less than the upper bound. For example, one interval might hold times from 4000 to 4999 milliseconds. Using whole numbers as boundaries avoids a cluttered appearance, and is the practice of many computer programs that create histograms. Note also that some computer programs label the middle of each interval rather than the end points. Histograms can be based on relative frequencies instead of actual frequencies. Histograms based on relative frequencies show the proportion of scores in each interval rather than the number of scores. In this case, the Y-axis runs from 0 to 1 (or somewhere in between if there are no extreme proportions). You can change a histogram based on frequencies to one based on relative frequencies by (a) dividing each class frequency by the total number of observations, and then (b) plotting the quotients on the Y-axis (labeled as proportion). There is more to be said about the widths of the class intervals, sometimes called bin widths. Your choice of bin width determines the number of class intervals. This decision, along with the choice of starting point for the first interval, affects the shape of the histogram. The best advice is to experiment with different choices of width, and to choose a histogram according to how well it communicates the shape of the distribution. Frequency Polygons Frequency polygons are a graphical device for understanding the shapes of distributions. They serve the same purpose as histograms, but are especially helpful for comparing sets of data. Frequency polygons are also a good choice for displaying cumulative frequency distributions. To create a frequency polygon, start just as for histograms, by choosing a class interval. Then draw an X-axis representing the values of the scores in your data. Mark the middle of each class interval with a tick mark, and label it with the middle value represented by the class. Draw the Y-axis to indicate the frequency of each class. Place a point in the middle of each class interval at the height corresponding to its frequency. Finally, connect the points. You should include one class interval below the lowest value in your data and one above the highest value. The graph will then touch the X-axis on both sides. A frequency polygon for 642 psychology test scores shown in Figure \(8\) was constructed from the frequency table shown in Table \(2\). 
Table \(2\): Frequency Distribution of Psychology Test Scores Lower Limit Upper Limit Count Cumulative Count 29.5 39.5 0 0 39.5 49.5 3 3 49.5 59.5 10 13 59.5 69.5 53 66 69.5 79.5 107 173 79.5 89.5 147 320 89.5 99.5 130 450 99.5 109.5 78 528 109.5 119.5 59 587 119.5 129.5 36 623 129.5 139.5 11 634 139.5 149.5 6 640 149.5 159.5 1 641 159.5 169.5 1 642 169.5 170.5 0 642 The first label on the X-axis is 35. This represents an interval extending from 29.5 to 39.5. Since the lowest test score is 46, this interval has a frequency of 0. The point labeled 45 represents the interval from 39.5 to 49.5. There are three scores in this interval. There are 147 scores in the interval that surrounds 85. You can easily discern the shape of the distribution from Figure \(9\). Most of the scores are between 65 and 115. It is clear that the distribution is not symmetric inasmuch as good scores (to the right) trail off more gradually than poor scores (to the left). In the terminology of Chapter 3 (where we will study shapes of distributions more systematically), the distribution is skewed. A cumulative frequency polygon for the same test scores is shown in Figure \(10\). The graph is the same as before except that the Y value for each point is the number of students in the corresponding class interval plus all numbers in lower intervals. For example, there are no scores in the interval labeled “35,” three in the interval “45,” and 10 in the interval “55.” Therefore, the Y value corresponding to “55” is 13. Since 642 students took the test, the cumulative frequency for the last interval is 642. Frequency polygons are useful for comparing distributions. This is achieved by overlaying the frequency polygons drawn for different data sets. Figure 2.1.3 provides an example. The data come from a task in which the goal is to move a computer cursor to a target on the screen as fast as possible. On 20 of the trials, the target was a small rectangle; on the other 20, the target was a large rectangle. Time to reach the target was recorded on each trial. The two distributions (one for each target) are plotted together in Figure \(11\). The figure shows that, although there is some overlap in times, it generally took longer to move the cursor to the small target than to the large one. It is also possible to plot two cumulative frequency distributions in the same graph. This is illustrated in Figure \(12\) using the same data from the cursor task. The difference in distributions for the two targets is again evident. Box Plots We have already discussed techniques for visually representing data (see histograms and frequency polygons). In this section we present another important graph, called a box plot. Box plots are useful for identifying outliers and for comparing distributions. We will explain box plots with the help of data from an in-class experiment. Students in Introductory Statistics were presented with a page containing 30 colored rectangles. Their task was to name the colors as quickly as possible. Their times (in seconds) were recorded. We'll compare the scores for the 16 men and 31 women who participated in the experiment by making separate box plots for each gender. Such a display is said to involve parallel box plots. There are several steps in constructing a box plot. The first relies on the 25th, 50th, and 75th percentiles in the distribution of scores. Figure \(14\) shows how these three statistics are used. For each gender we draw a box extending from the 25th percentile to the 75th percentile. 
The 50th percentile is drawn inside the box. Therefore, the bottom of each box is the 25th percentile, the top is the 75th percentile, and the line in the middle is the 50th percentile. The data for the women in our sample are shown in Figure \(13\). For these data, the 25th percentile is 17, the 50th percentile is 19, and the 75th percentile is 20. For the men (whose data are not shown), the 25th percentile is 19, the 50th percentile is 22.5, and the 75th percentile is 25.5. Before proceeding, the terminology in Table \(3\) is helpful. Table \(3\): Box plot terms and values for women's times. Name Formula Value Upper Hinge 75th Percentile 20 Lower Hinge 25th Percentile 17 H-Spread Upper Hinge - Lower Hinge 3 Step 1.5 x H-Spread 4.5 Upper Inner Fence Upper Hinge + 1 Step 24.5 Lower Inner Fence Lower Hinge - 1 Step 12.5 Upper Outer Fence Upper Hinge + 2 Steps 29 Lower Outer Fence Lower Hinge - 2 Steps 8 Upper Adjacent Largest value below Upper Inner Fence 24 Lower Adjacent Smallest value above Lower Inner Fence 14 Outside Value A value beyond an Inner Fence but not beyond an Outer Fence 29 Far Out Value A value beyond an Outer Fence None Continuing with the box plots, we put “whiskers” above and below each box to give additional information about the spread of data. Whiskers are vertical lines that end in a horizontal stroke. Whiskers are drawn from the upper and lower hinges to the upper and lower adjacent values (24 and 14 for the women's data), as shown in Figure \(15\). Although we don't draw whiskers all the way to outside or far out values, we still wish to represent them in our box plots. This is achieved by adding additional marks beyond the whiskers. Specifically, outside values are indicated by small “o's” and far out values are indicated by asterisks (*). In our data, there are no farout values and just one outside value. This outside value of 29 is for the women and is shown in Figure \(16\). There is one more mark to include in box plots (although sometimes it is omitted). We indicate the mean score for a group by inserting a plus sign. Figure \(17\) shows the result of adding means to our box plots. Figure \(17\) provides a revealing summary of the data. Since half the scores in a distribution are between the hinges (recall that the hinges are the 25th and 75th percentiles), we see that half the women's times are between 17 and 20 seconds whereas half the men's times are between 19 and 25.5 seconds. We also see that women generally named the colors faster than the men did, although one woman was slower than almost all of the men. Figure \(18\) shows the box plot for the women's data with detailed labels. Box plots provide basic information about a distribution. For example, a distribution with a positive skew would have a longer whisker in the positive direction than in the negative direction. A larger mean than median would also indicate a positive skew. Box plots are good at portraying extreme values and are especially good at showing differences between distributions. However, many of the details of a distribution are not revealed in a box plot and to examine these details one should use create a histogram and/or a stem and leaf display. Bar Charts In the section on qualitative variables, we saw how bar charts could be used to illustrate the frequencies of different categories. For example, the bar chart shown in Figure \(19\) shows how many purchasers of iMac computers were previous Macintosh users, previous Windows users, and new computer purchasers. 
In this section we show how bar charts can be used to present other kinds of quantitative information, not just frequency counts. The bar chart in Figure \(20\) shows the percent increases in the Dow Jones, Standard and Poor 500 (S & P), and Nasdaq stock indexes from May 24th 2000 to May 24th 2001. Notice that both the S & P and the Nasdaq had “negative increases” which means that they decreased in value. In this bar chart, the Y-axis is not frequency but rather the signed quantity percentage increase. Bar charts are particularly effective for showing change over time. Figure \(21\), for example, shows the percent increase in the Consumer Price Index (CPI) over four three-month periods. The fluctuation in inflation is apparent in the graph. Bar charts are often used to compare the means of different experimental conditions. Figure 2.1.4 shows the mean time it took one of us (DL) to move the cursor to either a small target or a large target. On average, more time was required for small targets than for large ones. Although bar charts can display means, we do not recommend them for this purpose. Box plots should be used instead since they provide more information than bar charts without taking up more space. For example, a box plot of the cursor-movement data is shown in Figure \(23\). You can see that Figure \(23\) reveals more about the distribution of movement times than does Figure \(22\). The section on qualitative variables presented earlier in this chapter discussed the use of bar charts for comparing distributions. Some common graphical mistakes were also noted. The earlier discussion applies equally well to the use of bar charts to display quantitative variables. Line Graphs A line graph is a bar graph with the tops of the bars represented by points joined by lines (the rest of the bar is suppressed). For example, Figure \(24\) was presented in the section on bar charts and shows changes in the Consumer Price Index (CPI) over time. A line graph of these same data is shown in Figure \(25\). Although the figures are similar, the line graph emphasizes the change from period to period. Line graphs are appropriate only when both the X- and Y-axes display ordered (rather than qualitative) variables. Although bar charts can also be used in this situation, line graphs are generally better at comparing changes over time. Figure \(26\), for example, shows percent increases and decreases in five components of the CPI. The figure makes it easy to see that medical costs had a steadier progression than the other components. Although you could create an analogous bar chart, its interpretation would not be as easy. Let us stress that it is misleading to use a line graph when the X-axis contains merely qualitative variables. Figure \(27\) inappropriately shows a line graph of the card game data from Yahoo, discussed in the section on qualitative variables. The defect in Figure \(27\) is that it gives the false impression that the games are naturally ordered in a numerical way. The Shape of Distribution Finally, it is useful to present discussion on how we describe the shapes of distributions, which we will revisit in the next chapter to learn how different shapes affect our numerical descriptors of data and distributions. The primary characteristic we are concerned about when assessing the shape of a distribution is whether the distribution is symmetrical or skewed. A symmetrical distribution, as the name suggests, can be cut down the center to form 2 mirror images. 
Although in practice we will never get a perfectly symmetrical distribution, we would like our data to be as close to symmetrical as possible for reasons we delve into in Chapter 3. Many types of distributions are symmetrical, but by far the most common and pertinent distribution at this point is the normal distribution, shown in Figure \(28\). Notice that although the symmetry is not perfect (for instance, the bar just to the right of the center is taller than the one just to the left), the two sides are roughly the same shape. The normal distribution has a single peak, known as the center, and two tails that extend out equally, forming what is known as a bell shape or bell curve.

Symmetrical distributions can also have multiple peaks. Figure \(29\) shows a bimodal distribution, named for the two peaks that lie roughly symmetrically on either side of the center point. As we will see in the next chapter, this is not a particularly desirable characteristic of our data, and, worse, this is a relatively difficult characteristic to detect numerically. Thus, it is important to visualize your data before moving ahead with any formal analyses.

Distributions that are not symmetrical also come in many forms, more than can be described here. The most common asymmetry to be encountered is referred to as skew, in which one of the two tails of the distribution is disproportionately longer than the other. This property can affect the value of the averages we use in our analyses and make them an inaccurate representation of our data, which causes many problems. Skew can either be positive or negative (also known as right or left, respectively), based on which tail is longer. It is very easy to get the two confused at first; many students want to describe the skew by where the bulk of the data (the larger portion of the histogram, known as the body) is placed, but the correct determination is based on which tail is longer. You can think of the tail as an arrow: whichever direction the arrow is pointing is the direction of the skew. Figures \(30\) and \(31\) show positive (right) and negative (left) skew, respectively.

2.03: Exercises

1. Name some ways to graph quantitative variables and some ways to graph qualitative variables.
Answer: Qualitative variables are displayed using pie charts and bar charts. Quantitative variables are displayed as box plots, histograms, etc.
2. Given the following data, construct a pie chart and a bar chart. Which do you think is the more appropriate or useful way to display the data?
   Favorite Movie Genre (frequency): Comedy (14), Horror (9), Romance (8), Action (12)
3. Pretend you are constructing a histogram for describing the distribution of salaries for individuals who are 40 years or older, but are not yet retired.
   1. What is on the Y-axis? Explain.
   2. What is on the X-axis? Explain.
   3. What would be the probable shape of the salary distribution? Explain why.
Answer: [You do not need to draw the histogram, only describe it below]
   1. The Y-axis would have the frequency or proportion because this is always the case in histograms.
   2. The X-axis would have income, because this is our quantitative variable of interest.
   3. Because most income data are positively skewed, this histogram would likely be skewed positively too.
4. A graph appears below showing the number of adults and children who prefer each type of soda. There were 130 adults and kids surveyed. Discuss some ways in which the graph below could be improved.
5. Which of the box plots on the graph has a large positive skew? Which has a large negative skew?
Answer: Chart B has the positive skew because the outliers (dots and asterisks) are on the upper (higher) end; chart C has the negative skew because the outliers are on the lower end.
6. Create a histogram of the following data representing how many shows children said they watch each day:
   Number of TV shows (frequency): 0 (2), 1 (18), 2 (36), 3 (7), 4 (3)
7. Explain the differences between bar charts and histograms. When would each be used?
Answer: In bar charts, the bars do not touch; in histograms, the bars do touch. Bar charts are appropriate for qualitative variables, whereas histograms are better for quantitative variables.
8. Draw a histogram of a distribution that is
   1. Negatively skewed
   2. Symmetrical
   3. Positively skewed
9. Based on the pie chart below, which was made from a sample of 300 students, construct a frequency table of college majors.
Answer: Major (frequency): Psychology (144), Biology (120), Chemistry (24), Physics (12)
10. Create a histogram of the following data. Label the tails and body and determine if it is skewed (and direction, if so) or symmetrical.
   Hours worked per week (proportion): 0-10 (4), 10-20 (8), 20-30 (11), 30-40 (51), 40-50 (12), 50-60 (9), 60+ (5)
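For the TV-show exercise above (number 6), the raw scores can be reconstructed from the frequency table with rep() and then plotted. This R sketch is one possible approach, not the only one.

# Rebuild the raw scores from the frequency table, then draw the histogram
shows <- rep(c(0, 1, 2, 3, 4), times = c(2, 18, 36, 7, 3))

table(shows)   # recovers the frequency table: 2, 18, 36, 7, 3

hist(shows,
     breaks = seq(-0.5, 4.5, by = 1),   # one bar per whole number of shows
     xlab = "Number of TV shows per day", ylab = "Frequency",
     main = "TV shows watched per day")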
What is “central tendency,” and why do we want to know the central tendency of a group of scores? Let us first try to answer these questions intuitively. Then we will proceed to a more formal discussion. Imagine this situation: You are in a class with just four other students, and the five of you took a 5-point pop quiz. Today your instructor is walking around the room, handing back the quizzes. She stops at your desk and hands you your paper. Written in bold black ink on the front is “3/5.” How do you react? Are you happy with your score of 3 or disappointed? How do you decide? You might calculate your percentage correct, realize it is 60%, and be appalled. But it is more likely that when deciding how to react to your performance, you will want additional information. What additional information would you like? If you are like most students, you will immediately ask your neighbors, “Whad'ja get?” and then ask the instructor, “How did the class do?” In other words, the additional information you want is how your quiz score compares to other students' scores. You therefore understand the importance of comparing your score to the class distribution of scores. Should your score of 3 turn out to be among the higher scores, then you'll be pleased after all. On the other hand, if 3 is among the lower scores in the class, you won't be quite so happy. This idea of comparing individual scores to a distribution of scores is fundamental to statistics. So let's explore it further, using the same example (the pop quiz you took with your four classmates). Three possible outcomes are shown in Table \(1\). They are labeled “Dataset A,” “Dataset B,” and “Dataset C.” Which of the three datasets would make you happiest? In other words, in comparing your score with your fellow students' scores, in which dataset would your score of 3 be the most impressive? Table \(1\): Three possible datasets for the 5-point make-up quiz. Student Dataset A Dataset B Dataset C You 3 3 3 John's 3 4 2 Maria's 3 4 2 Shareecia's 3 4 2 Luther's 3 5 1 In Dataset A, everyone's score is 3. This puts your score at the exact center of the distribution. You can draw satisfaction from the fact that you did as well as everyone else. But of course it cuts both ways: everyone else did just as well as you. Now consider the possibility that the scores are described as in Dataset B. This is a depressing outcome even though your score is no different than the one in Dataset A. The problem is that the other four students had higher grades, putting yours below the center of the distribution. Finally, let's look at Dataset C. This is more like it! All of your classmates score lower than you so your score is above the center of the distribution. Now let's change the example in order to develop more insight into the center of a distribution. Figure \(1\) shows the results of an experiment on memory for chess positions. Subjects were shown a chess position and then asked to reconstruct it on an empty chess board. The number of pieces correctly placed was recorded. This was repeated for two more chess positions. The scores represent the total number of chess pieces correctly placed for the three chess positions. The maximum possible score was 89. Two groups are compared. On the left are people who don't play chess. On the right are people who play a great deal (tournament players). It is clear that the location of the center of the distribution for the non-players is much lower than the center of the distribution for the tournament players. 
We're sure you get the idea now about the center of a distribution. It is time to move beyond intuition. We need a formal definition of the center of a distribution. In fact, we'll offer you three definitions! This is not just generosity on our part. There turn out to be (at least) three different ways of thinking about the center of a distribution, all of them useful in various contexts. In the remainder of this section we attempt to communicate the idea behind each concept. In the succeeding sections we will give statistical measures for these concepts of central tendency. Definitions of Center Now we explain the three different ways of defining the center of a distribution. All three are called measures of central tendency. Balance Scale One definition of central tendency is the point at which the distribution is in balance. Figure \(2\) shows the distribution of the five numbers 2, 3, 4, 9, 16 placed upon a balance scale. If each number weighs one pound, and is placed at its position along the number line, then it would be possible to balance them by placing a fulcrum at 6.8. For another example, consider the distribution shown in Figure \(3\). It is balanced by placing the fulcrum in the geometric middle. Figure \(4\) illustrates that the same distribution can't be balanced by placing the fulcrum to the left of center. Figure \(5\) shows an asymmetric distribution. To balance it, we cannot put the fulcrum halfway between the lowest and highest values (as we did in Figure \(3\)). Placing the fulcrum at the “half way” point would cause it to tip towards the left. Smallest Absolute Deviation Another way to define the center of a distribution is based on the concept of the sum of the absolute deviations (differences). Consider the distribution made up of the five numbers 2, 3, 4, 9, 16. Let's see how far the distribution is from 10 (picking a number arbitrarily). Table \(2\) shows the sum of the absolute deviations of these numbers from the number 10. Table \(2\): An example of the sum of absolute deviations Values Absolute Deviations from 10 2 3 4 9 16 8 7 6 1 6 Sum 28 The first row of the table shows that the absolute value of the difference between 2 and 10 is 8; the second row shows that the absolute difference between 3 and 10 is 7, and similarly for the other rows. When we add up the five absolute deviations, we get 28. So, the sum of the absolute deviations from 10 is 28. Likewise, the sum of the absolute deviations from 5 equals 3 + 2 + 1 + 4 + 11 = 21. So, the sum of the absolute deviations from 5 is smaller than the sum of the absolute deviations from 10. In this sense, 5 is closer, overall, to the other numbers than is 10. We are now in a position to define a second measure of central tendency, this time in terms of absolute deviations. Specifically, according to our second definition, the center of a distribution is the number for which the sum of the absolute deviations is smallest. As we just saw, the sum of the absolute deviations from 10 is 28 and the sum of the absolute deviations from 5 is 21. Is there a value for which the sum of the absolute deviations is even smaller than 21? Yes. For these data, there is a value for which the sum of absolute deviations is only 20. See if you can find it. Smallest Squared Deviation We shall discuss one more way to define the center of a distribution. It is based on the concept of the sum of squared deviations (differences). Again, consider the distribution of the five numbers 2, 3, 4, 9, 16. 
Table \(3\) shows the sum of the squared deviations of these numbers from the number 10.

Table \(3\): An example of the sum of squared deviations

Values   Squared Deviations from 10
  2                 64
  3                 49
  4                 36
  9                  1
 16                 36
Sum                186

The first row in the table shows that the squared value of the difference between 2 and 10 is 64; the second row shows that the squared difference between 3 and 10 is 49, and so forth. When we add up all these squared deviations, we get 186.

Changing the target from 10 to 5, we calculate the sum of the squared deviations from 5 as 9 + 4 + 1 + 16 + 121 = 151. So, the sum of the squared deviations from 5 is smaller than the sum of the squared deviations from 10. Is there a value for which the sum of the squared deviations is even smaller than 151? Yes, it is possible to reach 134.8. Can you find the target number for which the sum of squared deviations is 134.8?

The target that minimizes the sum of squared deviations provides another useful definition of central tendency (the last one to be discussed in this section). It can be challenging to find the value that minimizes this sum.
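If you would like to check these sums (and your answers to the two questions above), a minimal sketch in base R is shown below; the five values are the ones used throughout this section, and the two helper functions are simply the sums described in the text.

# The five numbers used in this section
x <- c(2, 3, 4, 9, 16)

# Helper functions: sum of absolute and sum of squared deviations from a target
sum_abs_dev <- function(target) sum(abs(x - target))
sum_sq_dev  <- function(target) sum((x - target)^2)

sum_abs_dev(10)        # 28, matching Table 2
sum_abs_dev(5)         # 21
sum_abs_dev(median(x)) # 20 -- the median (4) gives the smallest possible sum

sum_sq_dev(10)         # 186, matching Table 3
sum_sq_dev(5)          # 151
sum_sq_dev(mean(x))    # 134.8 -- the mean (6.8) gives the smallest possible sum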
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/03%3A_Measures_of_Central_Tendency_and_Spread/3.01%3A_What_is_Central_Tendency.txt
In the previous section we saw that there are several ways to define central tendency. This section defines the three most common measures of central tendency: the mean, the median, and the mode. The relationships among these measures of central tendency and the definitions given in the previous section will probably not be obvious to you. This section gives only the basic definitions of the mean, median and mode. A further discussion of the relative merits and proper applications of these statistics is presented in a later section. Arithmetic Mean The arithmetic mean is the most common measure of central tendency. It is simply the sum of the numbers divided by the number of numbers. The symbol “$\mu$” (pronounced “mew”) is used for the mean of a population. The symbol “$\overline{\mathrm{X}}$” (pronounced “X-bar”) is used for the mean of a sample. The formula for $\mu$ is shown below: $\mu=\dfrac{\sum \mathrm{X}}{N}$ where $\Sigma \mathbf{X}$ is the sum of all the numbers in the population and $N$ is the number of numbers in the population. The formula for $\overline{\mathrm{X}}$ is essentially identical: $\overline{\mathrm{X}}=\dfrac{\sum \mathrm{X}}{N}$ where $\Sigma \mathbf{X}$ is the sum of all the numbers in the sample and $N$ is the number of numbers in the sample. The only distinction between these two equations is whether we are referring to the population (in which case we use the parameter $\mu$) or a sample of that population (in which case we use the statistic $\overline{\mathrm{X}}$). As an example, the mean of the numbers 1, 2, 3, 6, 8 is 20/5 = 4 regardless of whether the numbers constitute the entire population or just a sample from the population. Figure $1$ shows the number of touchdown (TD) passes thrown by each of the 31 teams in the National Football League in the 2000 season. The mean number of touchdown passes thrown is 20.45 as shown below. $\mu=\dfrac{\sum X}{N}=\dfrac{634}{31}=20.45 \nonumber$ Although the arithmetic mean is not the only “mean” (there is also a geometric mean, a harmonic mean, and many others that are all beyond the scope of this course), it is by far the most commonly used. Therefore, if the term “mean” is used without specifying whether it is the arithmetic mean, the geometric mean, or some other mean, it is assumed to refer to the arithmetic mean. Median The median is also a frequently used measure of central tendency. The median is the midpoint of a distribution: the same number of scores is above the median as below it. For the data in Figure $1$, there are 31 scores. The 16th highest score (which equals 20) is the median because there are 15 scores below the 16th score and 15 scores above the 16th score. The median can also be thought of as the 50th percentile. When there is an odd number of numbers, the median is simply the middle number. For example, the median of 2, 4, and 7 is 4. When there is an even number of numbers, the median is the mean of the two middle numbers. Thus, the median of the numbers 2, 4, 7, 12 is: $\dfrac{4+7}{2}=5.5 \nonumber$ When there are numbers with the same values, each appearance of that value gets counted. For example, in the set of numbers 1, 3, 4, 4, 5, 8, and 9, the median is 4 because there are three numbers (1, 3, and 4) below it and three numbers (5, 8, and 9) above it. If we only counted 4 once, the median would incorrectly be calculated at 4.5 (4+5 divided by 2). 
When in doubt, writing out all of the numbers in order and marking them off one at a time from the top and bottom will always lead you to the correct answer.

Mode

The mode is the most frequently occurring value in the dataset. For the data in Figure $1$, the mode is 18 since more teams (4) had 18 touchdown passes than any other number of touchdown passes. With continuous data, such as response time measured to many decimals, the frequency of each value is one since no two scores will be exactly the same (see discussion of continuous variables). Therefore the mode of continuous data is normally computed from a grouped frequency distribution. Table $1$ shows a grouped frequency distribution for the target response time data. Since the interval with the highest frequency is 600-700, the mode is the middle of that interval (650). Though the mode is not frequently used for continuous data, it is nevertheless an important measure of central tendency as it is the only measure we can use on qualitative or categorical data.

Table $1$: Grouped frequency distribution

Range          Frequency
500 - 600          3
600 - 700          6
700 - 800          5
800 - 900          5
900 - 1000         0
1000 - 1100        1

More on the Mean and Median

In the section “What is central tendency,” we saw that the center of a distribution could be defined three ways:

1. the point on which a distribution would balance
2. the value whose average absolute deviation from all the other values is minimized
3. the value whose squared difference from all the other values is minimized.

The mean is the point on which a distribution would balance, the median is the value that minimizes the sum of absolute deviations, and the mean is the value that minimizes the sum of the squared deviations. Table $2$ shows the absolute and squared deviations of the numbers 2, 3, 4, 9, and 16 from their median of 4 and their mean of 6.8. You can see that the sum of absolute deviations from the median (20) is smaller than the sum of absolute deviations from the mean (22.8). On the other hand, the sum of squared deviations from the median (174) is larger than the sum of squared deviations from the mean (134.8).

Table $2$: Absolute & squared deviations from the median of 4 and the mean of 6.8.

Value   Absolute Deviation   Absolute Deviation   Squared Deviation   Squared Deviation
        from Median          from Mean            from Median         from Mean
2              2                  4.8                    4                23.04
3              1                  3.8                    1                14.44
4              0                  2.8                    0                 7.84
9              5                  2.2                   25                 4.84
16            12                  9.2                  144                84.64
Total         20                 22.8                  174               134.8

Figure $2$ shows that the distribution balances at the mean of 6.8 and not at the median of 4. The relative advantages and disadvantages of the mean and median are discussed in the section “Comparing Measures” later in this chapter.

When a distribution is symmetric, then the mean and the median are the same. Consider the following distribution: 1, 3, 4, 5, 6, 7, 9. The mean and median are both 5. The mean, median, and mode are identical in the bell-shaped normal distribution.

Comparing Measures of Central Tendency

How do the various measures of central tendency compare with each other? For symmetric distributions, the mean and median are equal, as is the mode except in bimodal distributions. Differences among the measures occur with skewed distributions. Figure $3$ shows the distribution of 642 scores on an introductory psychology test. Notice this distribution has a slight positive skew. Measures of central tendency are shown in Table $3$. Notice they do not differ greatly, with the exception that the mode is considerably lower than the other measures.
When distributions have a positive skew, the mean is typically higher than the median, although it may not be in bimodal distributions. For these data, the mean of 91.58 is higher than the median of 90. This pattern holds true for any skew: the mode will remain at the highest point in the distribution, the median will be pulled slightly out into the skewed tail (the longer end of the distribution), and the mean will be pulled the farthest out. Thus, the mean is more sensitive to skew than the median or mode, and in cases of extreme skew, the mean may no longer be appropriate to use.

Table $3$: Measures of central tendency for the test scores.

Measure   Value
Mode      84.00
Median    90.00
Mean      91.58

The distribution of baseball salaries (in 1994) shown in Figure $4$ has a much more pronounced skew than the distribution in Figure $3$. Table $4$ shows the measures of central tendency for these data. The large skew results in very different values for these measures. No single measure of central tendency is sufficient for data such as these. If you were asked the very general question: “So, what do baseball players make?” and answered with the mean of \$1,183,000, you would not have told the whole story since only about one third of baseball players make that much. If you answered with the mode of \$250,000 or the median of \$500,000, you would not be giving any indication that some players make many millions of dollars. Fortunately, there is no need to summarize a distribution with a single number. When the various measures differ, our opinion is that you should report the mean and median. Sometimes it is worth reporting the mode as well. In the media, the median is usually reported to summarize the center of skewed distributions. You will hear about median salaries and median prices of houses sold, etc. This is better than reporting only the mean, but it would be informative to hear more statistics.

Table $4$: Measures of central tendency for baseball salaries (in thousands of dollars).

Measure   Value
Mode        250
Median      500
Mean      1,183
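As a quick illustration of these three measures (and of how much more an extreme score pulls the mean than the median), here is a short sketch in base R. The seven scores are the small set used earlier in the median discussion; base R has no built-in function for the mode of a data set, so a small helper is defined.

scores <- c(1, 3, 4, 4, 5, 8, 9)   # small example set from the median discussion

mean(scores)     # arithmetic mean: the sum of the numbers divided by how many there are
median(scores)   # middle score: 4

# Mode: the most frequently occurring value(s)
stat_mode <- function(x) {
  counts <- table(x)
  as.numeric(names(counts)[counts == max(counts)])
}
stat_mode(scores)   # 4, which occurs twice

# Adding one extreme score pulls the mean far more than the median
mean(c(scores, 60))     # jumps to 11.75
median(c(scores, 60))   # only moves to 4.5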
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/03%3A_Measures_of_Central_Tendency_and_Spread/3.02%3A_Measures_of_Central_Tendency.txt
Variability refers to how “spread out” a group of scores is. To see what we mean by spread out, consider graphs in Figure $1$. These graphs represent the scores on two quizzes. The mean score for each quiz is 7.0. Despite the equality of means, you can see that the distributions are quite different. Specifically, the scores on Quiz 1 are more densely packed and those on Quiz 2 are more spread out. The differences among students were much greater on Quiz 2 than on Quiz 1. The terms variability, spread, and dispersion are synonyms, and refer to how spread out a distribution is. Just as in the section on central tendency where we discussed measures of the center of a distribution of scores, in this chapter we will discuss measures of the variability of a distribution. There are three frequently used measures of variability: range, variance, and standard deviation. In the next few paragraphs, we will look at each of these measures of variability in more detail. Range The range is the simplest measure of variability to calculate, and one you have probably encountered many times in your life. The range is simply the highest score minus the lowest score. Let’s take a few examples. What is the range of the following group of numbers: 10, 2, 5, 6, 7, 3, 4? Well, the highest number is 10, and the lowest number is 2, so 10 - 2 = 8. The range is 8. Let’s take another example. Here’s a dataset with 10 numbers: 99, 45, 23, 67, 45, 91, 82, 78, 62, 51. What is the range? The highest number is 99 and the lowest number is 23, so 99 - 23 equals 76; the range is 76. Now consider the two quizzes shown in Figure $1$ and Figure $2$. On Quiz 1, the lowest score is 5 and the highest score is 9. Therefore, the range is 4. The range on Quiz 2 was larger: the lowest score was 4 and the highest score was 10. Therefore the range is 6. The problem with using range is that it is extremely sensitive to outliers, and one number far away from the rest of the data will greatly alter the value of the range. For example, in the set of numbers 1, 3, 4, 4, 5, 8, and 9, the range is 8 (9 – 1). However, if we add a single person whose score is nowhere close to the rest of the scores, say, 20, the range more than doubles from 8 to 19. Interquartile Range The interquartile range (IQR) is the range of the middle 50% of the scores in a distribution and is sometimes used to communicate where the bulk of the data in the distribution are located. It is computed as follows: $\text {IQR} = 75\text {th percentile }- 25\text {th percentile}$ For Quiz 1, the 75th percentile is 8 and the 25th percentile is 6. The interquartile range is therefore 2. For Quiz 2, which has greater spread, the 75th percentile is 9, the 25th percentile is 5, and the interquartile range is 4. Recall that in the discussion of box plots, the 75th percentile was called the upper hinge and the 25th percentile was called the lower hinge. Using this terminology, the interquartile range is referred to as the H-spread. Sum of Squares Variability can also be defined in terms of how close the scores in the distribution are to the middle of the distribution. Using the mean as the measure of the middle of the distribution, we can see how far, on average, each data point is from the center. The data from Quiz 1 are shown in Table $1$. The mean score is 7.0 ($\Sigma \mathrm{X} / \mathrm{N}= 140/20 = 7$). Therefore, the column “$X-\overline {X}$” contains deviations (how far each score deviates from the mean), here calculated as the score minus 7. 
The column “$(X-\overline {X})^{2}$” has the “Squared Deviations” and is simply the previous column squared.

There are a few things to note about how Table $1$ is formatted, as this is the format you will use to calculate variance (and, soon, standard deviation). The raw data scores ($\mathrm{X}$) are always placed in the left-most column. This column is then summed at the bottom to facilitate calculating the mean (simply divide this number by the number of scores in the table). Once you have the mean, you can easily work your way down the middle column calculating the deviation scores. This column is also summed and has a very important property: it will always sum to 0 (or close to zero if you have rounding error due to many decimal places). This step is used as a check on your math to make sure you haven’t made a mistake. If this column sums to 0, you can move on to filling in the third column of squared deviations. This column is summed as well and has its own name: the Sum of Squares (abbreviated as $SS$ and given the formula $∑(X-\overline {X})^{2}$). As we will see, the Sum of Squares appears again and again in different formulas – it is a very important value, and this table makes it simple to calculate without error.

Table $1$: Calculation of Variance for Quiz 1 scores.

$\mathrm{X}$     $X-\overline{X}$     $(X-\overline {X})^{2}$
9                       2                     4
9                       2                     4
9                       2                     4
8                       1                     1
8                       1                     1
8                       1                     1
8                       1                     1
7                       0                     0
7                       0                     0
7                       0                     0
7                       0                     0
7                       0                     0
6                      -1                     1
6                      -1                     1
6                      -1                     1
6                      -1                     1
6                      -1                     1
6                      -1                     1
5                      -2                     4
5                      -2                     4
$\Sigma = 140$   $\Sigma = 0$          $\Sigma = 30$

Variance

Now that we have the Sum of Squares calculated, we can use it to compute our formal measure of average distance from the mean, the variance. The variance is defined as the average squared difference of the scores from the mean. We square the deviation scores because, as we saw in the Sum of Squares table, the sum of raw deviations is always 0, and there’s nothing we can do mathematically without changing that. The population parameter for variance is $σ^2$ (“sigma-squared”) and is calculated as:

$\sigma^{2}=\dfrac{\sum(X-\mu)^{2}}{N}$

Notice that the numerator of that formula is identical to the formula for Sum of Squares presented above with $\overline {X}$ replaced by $μ$. Thus, we can use the Sum of Squares table to easily calculate the numerator then simply divide that value by $N$ to get variance. If we assume that the values in Table $1$ represent the full population, then we can take our value of Sum of Squares and divide it by $N$ to get our population variance:

$\sigma^{2}=\dfrac{30}{20}=1.5 \nonumber$

So, on average, scores in this population are 1.5 squared units away from the mean. This measure of spread is much more robust to outliers (a term used by statisticians to mean resilient or resistant) than the range, so it is a much more useful value to compute. Additionally, as we will see in future chapters, variance plays a central role in inferential statistics.

The sample statistic used to estimate the variance is $s^2$ (“s-squared”):

$s^{2}=\dfrac{\sum(X-\overline{X})^{2}}{N-1}$

This formula is very similar to the formula for the population variance with one change: we now divide by $N – 1$ instead of $N$. The value $N – 1$ has a special name: the degrees of freedom (abbreviated as $df$).
You don’t need to understand in depth what degrees of freedom are (essentially they account for the fact that we have to use a sample statistic to estimate the mean ($\overline {X}$) before we estimate the variance) in order to calculate variance, but knowing that the denominator is called $df$ provides a nice shorthand for the variance formula: $SS/df$.

Going back to the values in Table $1$ and treating those scores as a sample, we can estimate the sample variance as:

$s^{2}=\dfrac{30}{20-1}=1.58$

Notice that this value is slightly larger than the one we calculated when we assumed these scores were the full population. This is because our value in the denominator is slightly smaller, making the final value larger. In general, as your sample size $N$ gets bigger, the effect of subtracting 1 becomes less and less. Comparing a sample size of 10 to a sample size of 1000: 10 – 1 = 9, or 90% of the original value, whereas 1000 – 1 = 999, or 99.9% of the original value. Thus, larger sample sizes will bring the estimate of the sample variance closer to that of the population variance. This is a key idea and principle in statistics that we will see over and over again: larger sample sizes better reflect the population.

Standard Deviation

The standard deviation is simply the square root of the variance. This is a useful and interpretable statistic because taking the square root of the variance (recalling that variance is the average squared difference) puts the standard deviation back into the original units of the measure we used. Thus, when reporting descriptive statistics in a study, scientists virtually always report mean and standard deviation. Standard deviation is therefore the most commonly used measure of spread for our purposes.

The population parameter for standard deviation is $σ$ (“sigma”), which, intuitively, is the square root of the variance parameter $σ^2$ (on occasion, the symbols work out nicely that way). The formula is simply the formula for variance under a square root sign:

$\sigma=\sqrt{\dfrac{\sum(X-\mu)^{2}}{N}}$

Back to our earlier example from Table $1$:

$\sigma=\sqrt{\dfrac{30}{20}}=\sqrt{1.5}=1.22 \nonumber$

The sample statistic follows the same conventions and is given as $s$:

$s=\sqrt{\dfrac{\sum(X-\overline {X})^{2}}{N-1}}=\sqrt{\dfrac{S S}{d f}}$

The sample standard deviation from Table $1$ is:

$s=\sqrt{\dfrac{30}{20-1}}=\sqrt{1.58}=1.26 \nonumber$

The standard deviation is an especially useful measure of variability when the distribution is normal or approximately normal because the proportion of the distribution within a given number of standard deviations from the mean can be calculated. For example, 68% of the distribution is within one standard deviation (above and below) of the mean and approximately 95% of the distribution is within two standard deviations of the mean. Therefore, if you had a normal distribution with a mean of 50 and a standard deviation of 10, then 68% of the distribution would be between 50 - 10 = 40 and 50 + 10 = 60. Similarly, about 95% of the distribution would be between 50 - 2 x 10 = 30 and 50 + 2 x 10 = 70.

Figure $4$ shows two normal distributions. The red distribution has a mean of 40 and a standard deviation of 5; the blue distribution has a mean of 60 and a standard deviation of 10. For the red distribution, 68% of the distribution is between 35 and 45; for the blue distribution, 68% is between 50 and 70.
Notice that as the standard deviation gets smaller, the distribution becomes much narrower, regardless of where the center of the distribution (mean) is. Figure $5$ presents several more examples of this effect.
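To tie the formulas in this section together, here is a minimal base R sketch using the 20 Quiz 1 scores from Table $1$ above; it reproduces the Sum of Squares, the population and sample variances, and both standard deviations.

quiz1 <- c(9, 9, 9, 8, 8, 8, 8, 7, 7, 7, 7, 7, 6, 6, 6, 6, 6, 6, 5, 5)

n    <- length(quiz1)            # 20 scores
xbar <- mean(quiz1)              # 7.0
SS   <- sum((quiz1 - xbar)^2)    # Sum of Squares = 30

SS / n              # population variance: 1.5
sqrt(SS / n)        # population standard deviation: about 1.22

SS / (n - 1)        # sample variance (SS/df): about 1.58
sqrt(SS / (n - 1))  # sample standard deviation: about 1.26

# Base R's var() and sd() divide by n - 1, so they give the sample versions
var(quiz1)
sd(quiz1)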
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/03%3A_Measures_of_Central_Tendency_and_Spread/3.03%3A_Spread_and_Variability.txt
1. If the mean time to respond to a stimulus is much higher than the median time to respond, what can you say about the shape of the distribution of response times?

Answer: If the mean is higher, that means it is farther out into the right-hand tail of the distribution. Therefore, we know this distribution is positively skewed.

2. Compare the mean, median, and mode in terms of their sensitivity to extreme scores.

3. Your younger brother comes home one day after taking a science test. He says that someone at school told him that “60% of the students in the class scored above the median test grade.” What is wrong with this statement? What if he had said “60% of the students scored above the mean?”

Answer: The median is defined as the value with 50% of scores above it and 50% of scores below it; therefore, 60% of scores cannot fall above the median. If 60% of scores fall above the mean, that would indicate that the mean has been pulled down below the value of the median, which means that the distribution is negatively skewed.

4. Make up three data sets with 5 numbers each that have:
   1. the same mean but different standard deviations.
   2. the same mean but different medians.
   3. the same median but different means.

5. Compute the population mean and population standard deviation for the following scores (remember to use the Sum of Squares table): 5, 7, 8, 3, 4, 4, 2, 7, 1, 6

Answer: $\mu = 4.7$, $\sigma^{2} = 4.81$, $\sigma = 2.19$

6. For the following problem, use the following scores: 5, 8, 8, 8, 7, 8, 9, 12, 8, 9, 8, 10, 7, 9, 7, 6, 9, 10, 11, 8
   1. Create a histogram of these data. What is the shape of this histogram?
   2. How do you think the three measures of central tendency will compare to each other in this dataset?
   3. Compute the sample mean, the median, and the mode.
   4. Draw and label lines on your histogram for each of the above values. Do your results match your predictions?

7. Compute the range, sample variance, and sample standard deviation for the following scores: 25, 36, 41, 28, 29, 32, 39, 37, 34, 34, 37, 35, 30, 36, 31, 31

Answer: range = 16, $s^2 = 18.40$, $s = 4.29$

8. Using the same values from problem 7, calculate the range, sample variance, and sample standard deviation, but this time include 65 in the list of values. How did each of the three values change?

9. Two normal distributions have exactly the same mean, but one has a standard deviation of 20 and the other has a standard deviation of 10. How would the shapes of the two distributions compare?

Answer: If both distributions are normal, then they are both symmetrical, and having the same mean causes them to overlap with one another. The distribution with the standard deviation of 10 will be narrower than the other distribution.

10. Compute the sample mean and sample standard deviation for the following scores: -8, -4, -7, -6, -8, -5, -7, -9, -2, 0
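If you want to check your hand calculations on exercises like these, base R will do the arithmetic for you; the short sketch below verifies the answer given for the range, sample variance, and sample standard deviation item.

scores <- c(25, 36, 41, 28, 29, 32, 39, 37, 34, 34, 37, 35, 30, 36, 31, 31)

max(scores) - min(scores)   # range = 16
var(scores)                 # sample variance, about 18.40 (divides SS by n - 1)
sd(scores)                  # sample standard deviation, about 4.29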
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/03%3A_Measures_of_Central_Tendency_and_Spread/3.04%3A_Exercises.txt
The normal distribution is the most important and most widely used distribution in statistics. It is sometimes called the “bell curve,” although the tonal qualities of such a bell would be less than pleasing. It is also called the “Gaussian curve” or Gaussian distribution after the mathematician Karl Friedrich Gauss. Strictly speaking, it is not correct to talk about “the normal distribution” since there are many normal distributions. Normal distributions can differ in their means and in their standard deviations. Figure 1 shows three normal distributions. The green (left-most) distribution has a mean of -3 and a standard deviation of 0.5, the distribution in red (the middle distribution) has a mean of 0 and a standard deviation of 1, and the distribution in black (right-most) has a mean of 2 and a standard deviation of 3. These as well as all other normal distributions are symmetric with relatively more values at the center of the distribution and relatively few in the tails. What is consistent about all normal distributions is the shape and the proportion of scores within a given distance along the x-axis. We will focus on the Standard Normal Distribution (also known as the Unit Normal Distribution), which has a mean of 0 and a standard deviation of 1 (i.e. the red distribution in Figure \(1\)).

Seven features of normal distributions are listed below.

1. Normal distributions are symmetric around their mean.
2. The mean, median, and mode of a normal distribution are equal.
3. The area under the normal curve is equal to 1.0.
4. Normal distributions are denser in the center and less dense in the tails.
5. Normal distributions are defined by two parameters, the mean (\(μ\)) and the standard deviation (\(σ\)).
6. 68% of the area of a normal distribution is within one standard deviation of the mean.
7. Approximately 95% of the area of a normal distribution is within two standard deviations of the mean.

These properties enable us to use the normal distribution to understand how scores relate to one another within and across a distribution. But first, we need to learn how to calculate the standardized scores that make up a standard normal distribution.

4.02: Z-scores

A $z$-score is a standardized version of a raw score ($x$) that gives information about the relative location of that score within its distribution. The formula for converting a raw score into a $z$-score is:

$z=\dfrac{x-\mu}{\sigma}$

for values from a population and for values from a sample:

$z=\dfrac{x-\overline{X}}{s}$

As you can see, $z$-scores combine information about where the distribution is located (the mean/center) with how wide the distribution is (the standard deviation/spread) to interpret a raw score ($x$). Specifically, $z$-scores will tell us how far the score is away from the mean in units of standard deviations and in what direction. The value of a $z$-score has two parts: the sign (positive or negative) and the magnitude (the actual number). The sign of the $z$-score tells you in which half of the distribution the z-score falls: a positive sign (or no sign) indicates that the score is above the mean and on the right hand-side or upper end of the distribution, and a negative sign tells you the score is below the mean and on the left-hand side or lower end of the distribution. The magnitude of the number tells you, in units of standard deviations, how far away the score is from the center or mean.
The magnitude can take on any value between negative and positive infinity, but for reasons we will see soon, they generally fall between -3 and 3. Let’s look at some examples. A $z$-score value of -1.0 tells us that this z-score is 1 standard deviation (because of the magnitude 1.0) below (because of the negative sign) the mean. Similarly, a $z$-score value of 1.0 tells us that this $z$-score is 1 standard deviation above the mean. Thus, these two scores are the same distance away from the mean but in opposite directions. A $z$-score of -2.5 is two-and-a-half standard deviations below the mean and is therefore farther from the center than both of the previous scores, and a $z$-score of 0.25 is closer than all of the ones before. In Unit 2, we will learn to formalize the distinction between what we consider “close” to the center or “far” from the center. For now, we will use a rough cut-off of 1.5 standard deviations in either direction as the difference between close scores (those within 1.5 standard deviations or between $z$ = -1.5 and $z$ = 1.5) and extreme scores (those farther than 1.5 standard deviations – below $z$ = -1.5 or above $z$ = 1.5). We can also convert raw scores into $z$-scores to get a better idea of where in the distribution those scores fall. Let’s say we get a score of 68 on an exam. We may be disappointed to have scored so low, but perhaps it was just a very hard exam. Having information about the distribution of all scores in the class would be helpful to put some perspective on ours. We find out that the class got an average score of 54 with a standard deviation of 8. To find out our relative location within this distribution, we simply convert our test score into a $z$-score. $z=\dfrac{X-\mu}{\sigma}=\frac{68-54}{8}=1.75 \nonumber$ We find that we are 1.75 standard deviations above the average, above our rough cut off for close and far. Suddenly our 68 is looking pretty good! Figure $1$ shows both the raw score and the $z$-score on their respective distributions. Notice that the red line indicating where each score lies is in the same relative spot for both. This is because transforming a raw score into a $z$-score does not change its relative location, it only makes it easier to know precisely where it is. $Z$-scores are also useful for comparing scores from different distributions. Let’s say we take the SAT and score 501 on both the math and critical reading sections. Does that mean we did equally well on both? Scores on the math portion are distributed normally with a mean of 511 and standard deviation of 120, so our $z$-score on the math section is $z_{math}=\dfrac{501-511}{120}=-0.08 \nonumber$ which is just slightly below average (note that use of “math” as a subscript; subscripts are used when presenting multiple versions of the same statistic in order to know which one is which and have no bearing on the actual calculation). The critical reading section has a mean of 495 and standard deviation of 116, so $z_{C R}=\frac{501-495}{116}=0.05 \nonumber$ So even though we were almost exactly average on both tests, we did a little bit better on the critical reading portion relative to other people. Finally, $z$-scores are incredibly useful if we need to combine information from different measures that are on different scales. Let’s say we give a set of employees a series of tests on things like job knowledge, personality, and leadership. 
We may want to combine these into a single score we can use to rate employees for development or promotion, but look what happens when we take the average of raw scores from different scales, as shown in Table $1$:

Table $1$: Raw test scores on different scales (ranges in parentheses).

Raw Scores    Job Knowledge (0 – 100)   Personality (1 – 5)   Leadership (1 – 5)   Average
Employee 1             98                      4.2                   1.1            34.43
Employee 2             96                      3.1                   4.5            34.53
Employee 3             97                      2.9                   3.6            34.50

Because the job knowledge scores were so big and the scores were so similar, they overpowered the other scores and removed almost all variability in the average. However, if we standardize these scores into $z$-scores, our averages retain more variability and it is easier to assess differences between employees, as shown in Table $2$.

Table $2$: Standardized scores.

$z$-Scores    Job Knowledge (0 – 100)   Personality (1 – 5)   Leadership (1 – 5)   Average
Employee 1            1.00                     1.14                 -1.12            0.34
Employee 2           -1.00                    -0.43                  0.81           -0.20
Employee 3            0.00                    -0.71                  0.30           -0.14

Setting the scale of a distribution

Another convenient characteristic of $z$-scores is that they can be converted into any “scale” that we would like. Here, the term scale means how far apart the scores are (their spread) and where they are located (their central tendency). This can be very useful if we don’t want to work with negative numbers or if we have a specific range we would like to present. The formulas for transforming $z$ to $x$ are:

$x=z \sigma+\mu$

for a population and

$x=z s+\overline{X}$

for a sample. Notice that these are just simple rearrangements of the original formulas for calculating $z$ from raw scores.

Let’s say we create a new measure of intelligence, and initial calibration finds that our scores have a mean of 40 and standard deviation of 7. Three people who have scores of 52, 43, and 34 want to know how well they did on the measure. We can convert their raw scores into $z$-scores:

\begin{aligned} z &=\dfrac{52-40}{7}=1.71 \\ z &=\dfrac{43-40}{7}=0.43 \\ z &=\dfrac{34-40}{7}=-0.86 \end{aligned} \nonumber

A problem is that these new $z$-scores aren’t exactly intuitive for many people. We can give people information about their relative location in the distribution (for instance, the first person scored well above average), or we can translate these $z$ scores into the more familiar metric of IQ scores, which have a mean of 100 and standard deviation of 16:

$\begin{array}{l}{\mathrm{IQ}=1.71 * 16+100=127.36} \\ {\mathrm{IQ}=0.43 * 16+100=106.88} \\ {\mathrm{IQ}=-0.86 * 16+100=86.24}\end{array} \nonumber$

We would also likely round these values to 127, 107, and 86, respectively, for convenience.
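The whole process above can be written in a few lines of base R. The sketch below uses the intelligence-measure example (mean 40, standard deviation 7) and the IQ scale (mean 100, standard deviation 16); small rounding differences from the worked values are expected because the z-scores are not rounded before rescaling.

raw   <- c(52, 43, 34)   # the three observed scores
mu    <- 40              # calibration mean of the new measure
sigma <- 7               # calibration standard deviation

z <- (raw - mu) / sigma      # z-scores: about 1.71, 0.43, -0.86
z

iq <- z * 16 + 100           # transform z onto the IQ scale (mean 100, sd 16)
round(iq)                    # approximately 127, 107, 86

# scale() standardizes a set of scores using its own sample mean and sd,
# which is how the employee z-scores in Table 2 were computed
scale(c(98, 96, 97))         # Job Knowledge column: 1, -1, 0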
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/04%3A_z-scores_and_the_Standard_Normal_Distribution/4.01%3A_Normal_Distributions.txt
\(Z\)-scores and the standard normal distribution go hand-in-hand. A \(z\)-score will tell you exactly where in the standard normal distribution a value is located, and any normal distribution can be converted into a standard normal distribution by converting all of the scores in the distribution into \(z\)-scores, a process known as standardization. We saw in the previous chapter that standard deviations can be used to divide the normal distribution: 68% of the distribution falls within 1 standard deviation of the mean, 95% within (roughly) 2 standard deviations, and 99.7% within 3 standard deviations. Because \(z\)-scores are in units of standard deviations, this means that 68% of scores fall between \(z\) = -1.0 and \(z\) = 1.0 and so on. We call this 68% (or any percentage we have based on our \(z\)-scores) the proportion of the area under the curve. Any area under the curve is bounded by (defined by, delineated by, etc.) by a single \(z\)-score or pair of \(z\)-scores. An important property to point out here is that, by virtue of the fact that the total area under the curve of a distribution is always equal to 1.0 (see section on Normal Distributions at the beginning of this chapter), these areas under the curve can be added together or subtracted from 1 to find the proportion in other areas. For example, we know that the area between \(z\) = -1.0 and \(z\) = 1.0 (i.e. within one standard deviation of the mean) contains 68% of the area under the curve, which can be represented in decimal form at 0.6800 (to change a percentage to a decimal, simply move the decimal point 2 places to the left). Because the total area under the curve is equal to 1.0, that means that the proportion of the area outside \(z\)= -1.0 and \(z\) = 1.0 is equal to 1.0 – 0.6800 = 0.3200 or 32% (see Figure \(1\) below). This area is called the area in the tails of the distribution. Because this area is split between two tails and because the normal distribution is symmetrical, each tail has exactly one-half, or 16%, of the area under the curve. We will have much more to say about this concept in the coming chapters. As it turns out, this is a quite powerful idea that enables us to make statements about how likely an outcome is and what that means for research questions we would like to answer and hypotheses we would like to test. But first, we need to make a brief foray into some ideas about probability. 4.04: Exercises 1. What are the two pieces of information contained in a $z$-score? Answer: The location above or below the mean (from the sign of the number) and the distance in standard deviations away from the mean (from the magnitude of the number). 1. A $z$-score takes a raw score and standardizes it into units of ________. 2. Assume the following 5 scores represent a sample: 2, 3, 5, 5, 6. Transform these scores into $z$-scores. Answer: $\overline{\mathrm{X}}$= 4.2, $s$ = 1.64; $z$ = -1.34, -0.73, 0.49, 0.49, 1.10 1. True or false: 1. All normal distributions are symmetrical 2. All normal distributions have a mean of 1.0 3. All normal distributions have a standard deviation of 1.0 4. The total area under the curve of all normal distributions is equal to 1 2. Interpret the location, direction, and distance (near or far) of the following $z$-scores: 1. -2.00 2. 1.25 3. 3.50 4. -0.34 Answer: 1. 2 standard deviations below the mean, far 2. 1.25 standard deviations above the mean, near 3. 3.5 standard deviations above the mean, far 4. 0.34 standard deviations below the mean, near 1. 
Transform the following $z$-scores into a distribution with a mean of 10 and standard deviation of 2: -1.75, 2.20, 1.65, -0.95

2. Calculate $z$-scores for the following raw scores taken from a population with a mean of 100 and standard deviation of 16: 112, 109, 56, 88, 135, 99

Answer: $z$ = 0.75, 0.56, -2.75, -0.75, 2.19, -0.06

1. What does a $z$-score of 0.00 represent?

2. For a distribution with a standard deviation of 20, find $z$-scores that correspond to:
   1. One-half of a standard deviation below the mean
   2. 5 points above the mean
   3. Three standard deviations above the mean
   4. 22 points below the mean

Answer: 1. -0.50 2. 0.25 3. 3.00 4. -1.10

1. Calculate the raw score for the following $z$-scores from a distribution with a mean of 15 and standard deviation of 3: 1. 4.0 2. 2.2 3. -1.3 4. 0.46
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/04%3A_z-scores_and_the_Standard_Normal_Distribution/4.03%3A_Z-scores_and_the_Area_under_the_Curve.txt
• 5.1: What is Probability • 5.2: Probability in Graphs and Distributions We will see shortly that the normal distribution is the key to how probability works for our purposes. To understand exactly how, let’s first look at a simple, intuitive example using pie charts. • 5.3: The Bigger Picture The concepts and ideas presented in this chapter are likely not intuitive at first. Probability is a tough topic for everyone, but the tools it gives us are incredibly powerful and enable us to do amazing things with data analysis. They are the heart of how inferential statistics work. • 5.E: Probability (Exercises) 05: Probability When we speak of the probability of something happening, we are talking how likely it is that “thing” will happen based on the conditions present. For instance, what is the probability that it will rain? That is, how likely do we think it is that it will rain today under the circumstances or conditions today? To define or understand the conditions that might affect how likely it is to rain, we might look out the window and say, “it’s sunny outside, so it’s not very likely that it will rain today.” Stated using probability language: given that it is sunny outside, the probability of rain is low. “Given” is the word we use to state what the conditions are. As the conditions change, so does the probability. Thus, if it were cloudy and windy outside, we might say, “given the current weather conditions, there is a high probability that it is going to rain.” In these examples, we spoke about whether or not it is going to rain. Raining is an example of an event, which is the catch-all term we use to talk about any specific thing happening; it is a generic term that we specified to mean “rain” in exactly the same way that “conditions” is a generic term that we specified to mean “sunny” or “cloudy and windy.” It should also be noted that the terms “low” and “high” are relative and vague, and they will likely be interpreted different by different people (in other words: given how vague the terminology was, the probability of different interpretations is high). Most of the time we try to use more precise language or, even better, numbers to represent the probability of our event. Regardless, the basic structure and logic of our statements are consistent with how we speak about probability using numbers and formulas. Let’s look at a slightly deeper example. Say we have a regular, six-sided die (note that “die” is singular and “dice” is plural, a distinction that Dr. Foster has yet to get correct on his first try) and want to know how likely it is that we will roll a 1. That is, what is the probability of rolling a 1, given that the die is not weighted (which would introduce what we call a bias, though that is beyond the scope of this chapter). We could roll the die and see if it is a 1 or not, but that won’t tell us about the probability, it will only tell us a single result. We could also roll the die hundreds or thousands of times, recording each outcome and seeing what the final list looks like, but this is time consuming, and rolling a die that many times may lead down a dark path to gambling or, worse, playing Dungeons & Dragons. What we need is a simple equation that represents what we are looking for and what is possible. 
To calculate the probability of an event, which here is defined as rolling a 1 on an unbiased die, we need to know two things: how many outcomes satisfy the criteria of our event (stated differently, how many outcomes would count as what we are looking for) and the total number of outcomes possible. In our example, only a single outcome, rolling a 1, will satisfy our criteria, and there are a total of six possible outcomes (rolling a 1, rolling a 2, rolling a 3, rolling a 4, rolling a 5, and rolling a 6). Thus, the probability of rolling a 1 on an unbiased die is 1 in 6 or 1/6. Put into an equation using generic terms, we get:

$\text { Probability of an event }=\dfrac{\text { number of outcomes that satisfy our criteria }}{\text { total number of possible outcomes }}$

We can also use P() as shorthand for probability and A as shorthand for an event:

$P(A)=\dfrac{\text { number of outcomes that count as } A}{\text { total number of possible outcomes }}$

Using this equation, let’s now calculate the probability of rolling an even number on this die:

$P(\text {Even Number})=\dfrac{2,4, \text {or } 6}{1,2,3,4,5, \text {or } 6}=\dfrac{3}{6}=\dfrac{1}{2} \nonumber$

So we have a 50% chance of rolling an even number on this die. The principles laid out here operate under a certain set of conditions and can be elaborated into ideas that are complex yet powerful and elegant. However, such extensions are not necessary for a basic understanding of statistics, so we will end our discussion on the math of probability here. Now, let’s turn back to more familiar topics.
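Here is a tiny sketch of this formula in base R, along with a simulation of many rolls; the simulation is only an illustration (the seed is arbitrary) of how the long-run proportion settles near the calculated probability.

outcomes <- 1:6    # all possible outcomes for a fair six-sided die

sum(outcomes == 1) / length(outcomes)        # P(rolling a 1) = 1/6, about 0.167
sum(outcomes %% 2 == 0) / length(outcomes)   # P(even number) = 3/6 = 0.5

# Simulate many rolls and look at the proportion of 1s
set.seed(1)                                  # arbitrary seed for reproducibility
rolls <- sample(outcomes, size = 10000, replace = TRUE)
mean(rolls == 1)                             # settles close to 1/6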
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/05%3A_Probability/5.01%3A_What_is_probability.txt
We will see shortly that the normal distribution is the key to how probability works for our purposes. To understand exactly how, let’s first look at a simple, intuitive example using pie charts. Probability in Pie Charts Recall that a pie chart represents how frequently a category was observed and that all slices of the pie chart add up to 100%, or 1. This means that if we randomly select an observation from the data used to create the pie chart, the probability of it taking on a specific value is exactly equal to the size of that category’s slice in the pie chart. Take, for example, the pie chart in Figure \(1\) representing the favorite sports of 100 people. If you put this pie chart on a dart board and aimed blindly (assuming you are guaranteed to hit the board), the likelihood of hitting the slice for any given sport would be equal to the size of that slice. So, the probability of hitting the baseball slice is the highest at 36%. The probability is equal to the proportion of the chart taken up by that section. We can also add slices together. For instance, maybe we want to know the probability to finding someone whose favorite sport is usually played on grass. The outcomes that satisfy this criteria are baseball, football, and soccer. To get the probability, we simply add their slices together to see what proportion of the area of the pie chart is in that region: 36% + 25% + 20% = 81%. We can also add sections together even if they do not touch. If we want to know the likelihood that someone’s favorite sport is not called football somewhere in the world (i.e. baseball and hockey), we can add those slices even though they aren’t adjacent or continuous in the chart itself: 36% + 20% = 56%. We are able to do all of this because 1) the size of the slice corresponds to the area of the chart taken up by that slice, 2) the percentage for a specific category can be represented as a decimal (this step was skipped for ease of explanation above), and 3) the total area of the chart is equal to 100% or 1.0, which makes the size of the slices interpretable. Probability in Normal Distributions If the language at the end of the last section sounded familiar, that’s because its exactly the language used in the last chapter to describe the normal distribution. Recall that the normal distribution has an area under its curve that is equal to 1 and that it can be split into sections by drawing a line through it that corresponds to a given \(z\)-score. Because of this, we can interpret areas under the normal curve as probabilities that correspond to \(z\)-scores. First, let’s look back at the area between \(z\) = -1.00 and \(z\) = 1.00 presented in Figure \(2\). We were told earlier that this region contains 68% of the area under the curve. Thus, if we randomly chose a \(z\)-score from all possible z-scores, there is a 68% chance that it will be between \(z\) = -1.00 and \(z\) = 1.00 because those are the \(z\)-scores that satisfy our criteria. Just like a pie chart is broken up into slices by drawing lines through it, we can also draw a line through the normal distribution to split it into sections. Take a look at the normal distribution in Figure \(3\) which has a line drawn through it as \(z\) = 1.25. This line creates two sections of the distribution: the smaller section called the tail and the larger section called the body. Differentiating between the body and the tail does not depend on which side of the distribution the line is drawn. 
All that matters is the relative size of the pieces: bigger is always body. As you can see, we can break up the normal distribution into 3 pieces (lower tail, body, and upper tail) as in Figure \(2\) or into 2 pieces (body and tail) as in Figure \(3\). We can then find the proportion of the area in the body and tail based on where the line was drawn (i.e. at what \(z\)-score). Mathematically this is done using calculus. Fortunately, the exact values are given you to you in the Standard Normal Distribution Table, also known at the \(z\)-table. Using the values in this table, we can find the area under the normal curve in any body, tail, or combination of tails no matter which \(z\)-scores are used to define them. The \(z\)-table presents the values for the area under the curve to the left of the positive \(z\)-scores from 0.00-3.00 (technically 3.09), as indicated by the shaded region of the distribution at the top of the table. To find the appropriate value, we first find the row corresponding to our \(z\)-score then follow it over until we get to the column that corresponds to the number in the hundredths place of our \(z\)-score. For example, suppose we want to find the area in the body for a \(z\)-score of 1.62. We would first find the row for 1.60 then follow it across to the column labeled 0.02 (1.60 + 0.02 = 1.62) and find 0.9474 (see Figure \(4\)). Thus, the odds of randomly selecting someone with a \(z\)-score less than (to the left of) \(z\) = 1.62 is 94.74% because that is the proportion of the area taken up by values that satisfy our criteria. The \(z\)-table only presents the area in the body for positive \(z\)-scores because the normal distribution is symmetrical. Thus, the area in the body of \(z\) = 1.62 is equal to the area in the body for \(z\) = -1.62, though now the body will be the shaded area to the right of \(z\) (because the body is always larger). When in doubt, drawing out your distribution and shading the area you need to find will always help. The table also only presents the area in the body because the total area under the normal curve is always equal to 1.00, so if we need to find the area in the tail for \(z\) = 1.62, we simply find the area in the body and subtract it from 1.00 (1.00 – 0.9474 = 0.0526). Let’s look at another example. This time, let’s find the area corresponding to \(z\)-scores more extreme than \(z\) = -1.96 and \(z\) = 1.96. That is, let’s find the area in the tails of the distribution for values less than \(z\) = -1.96 (farther negative and therefore more extreme) and greater than \(z\) = 1.96 (farther positive and therefore more extreme). This region is illustrated in Figure \(5\). Let’s start with the tail for \(z\) = 1.96. If we go to the \(z\)-table we will find that the body to the left of \(z\) = 1.96 is equal to 0.9750. To find the area in the tail, we subtract that from 1.00 to get 0.0250. Because the normal distribution is symmetrical, the area in the tail for \(z\) = -1.96 is the exact same value, 0.0250. Finally, to get the total area in the shaded region, we simply add the areas together to get 0.0500. Thus, there is a 5% chance of randomly getting a value more extreme than \(z\) = -1.96 or \(z\) = 1.96 (this particular value and region will become incredibly important in Unit 2). Finally, we can find the area between two \(z\)-scores by shading and subtracting. Figure \(6\) shows the area between \(z\) = 0.50 and \(z\) = 1.50. 
Because this is a subsection of a body (rather than just a body or a tail), we must first find the larger of the two bodies, in this case the body for \(z\) = 1.50, and subtract the smaller of the two bodies, or the body for \(z\) = 0.50. Aligning the distributions vertically, as in Figure 6, makes this clearer. From the z-table, the area in the body for \(z\) = 1.50 is 0.9332 and the area in the body for \(z\) = 0.50 is 0.6915. Subtracting these gives us 0.9332 – 0.6915 = 0.2417.
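All of the z-table look-ups in this section can be reproduced with base R's pnorm() function, which returns the area under the standard normal curve to the left of a given z-score; a brief sketch:

pnorm(1.62)               # area in the body for z = 1.62, about 0.9474
1 - pnorm(1.62)           # area in the tail, about 0.0526

2 * (1 - pnorm(1.96))     # combined area beyond z = -1.96 and z = 1.96, about 0.05

pnorm(1.50) - pnorm(0.50) # area between z = 0.50 and z = 1.50, about 0.2417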
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/05%3A_Probability/5.02%3A_Probability_in_Graphs_and_Distributions.txt
The concepts and ideas presented in this chapter are likely not intuitive at first. Probability is a tough topic for everyone, but the tools it gives us are incredibly powerful and enable us to do amazing things with data analysis. They are the heart of how inferential statistics work. To summarize, the probability that an event happens is the number of outcomes that qualify as that event (i.e. the number of ways the event could happen) compared to the total number of outcomes (i.e. how many things are possible). This extends to graphs like a pie chart, where the biggest slices take up more of the area and are therefore more likely to be chosen at random. This idea then brings us back around to our normal distribution, which can also be broken up into regions or areas, each of which are bounded by one or two \(z\)-scores and correspond to all \(z\)-scores in that region. The probability of randomly getting one of those \(z\)-scores in the specified region can then be found on the Standard Normal Distribution Table. Thus, the larger the region, the more likely an event is, and vice versa. Because the tails of the distribution are, by definition, smaller and we go farther out into the tail, the likelihood or probability of finding a result out in the extremes becomes small. 5.04: Exercises 1. In your own words, what is probability? Answer: Your answer should include information about an event happening under certain conditions given certain criteria. You could also discuss the relation between probability and the area under the curve or the proportion of the area in a chart. 1. There is a bag with 5 red blocks, 2 yellow blocks, and 4 blue blocks. If you reach in and grab one block without looking, what is the probability it is red? 2. Under a normal distribution, which of the following is more likely? (Note: this question can be answered without any calculations if you draw out the distributions and shade properly) • Getting a \(z\)-score greater than \(z\) = 2.75 • Getting a \(z\)-score less than \(z\) = -1.50 Answer: Getting a \(z\)-score less than \(z\) = -1.50 is more likely. \(z\) = 2.75 is farther out into the right tail than \(z\) = -1.50 is into the left tail, therefore there are fewer more extreme scores beyond 2.75 than -1.50, regardless of the direction 1. The heights of women in the United States are normally distributed with a mean of 63.7 inches and a standard deviation of 2.7 inches. If you randomly select a woman in the United States, what is the probability that she will be between 65 and 67 inches tall? 2. The heights of men in the United States are normally distributed with a mean of 69.1 inches and a standard deviation of 2.9 inches. What proportion of men are taller than 6 feet (72 inches)? Answer: 15.87% or 0.1587 1. You know you need to score at least 82 points on the final exam to pass your class. After the final, you find out that the average score on the exam was 78 with a standard deviation of 7. How likely is it that you pass the class? 2. What proportion of the area under the normal curve is greater than \(z\) = 1.65? Answer: 4.95% or 0.0495 1. Find the \(z\)-score that bounds 25% of the lower tail of the distribution. 2. Find the \(z\)-score that bounds the top 9% of the distribution. Answer: \(z\) = 1.34 (the top 9% means 9% of the area is in the upper tail and 91% is in the body to the left; finding the value in the normal table closest to .9100 is .9099, which corresponds to \(z\) = 1.34) 1. 
In a distribution with a mean of 70 and standard deviation of 12, what proportion of scores are lower than 55?
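For problems like the height questions above, pnorm() can also work directly from a raw score once you know the mean and standard deviation; the sketch below simply reproduces the men's-height answer already given (mean 69.1 inches, standard deviation 2.9 inches).

mu    <- 69.1   # mean height of men, in inches
sigma <- 2.9    # standard deviation

z <- (72 - mu) / sigma            # z-score for 72 inches, exactly 1.0
1 - pnorm(z)                      # proportion taller than 72 inches, about 0.1587

# pnorm() can also take the mean and sd directly, skipping the z-score step
1 - pnorm(72, mean = mu, sd = sigma)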
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/05%3A_Probability/5.03%3A_The_Bigger_Picture.txt
Most of what we have dealt with so far has concerned individual scores grouped into samples, with those samples being drawn from and, hopefully, representative of a population. We saw how we can understand the location of individual scores within a sample’s distribution via \(z\)-scores, and how we can extend that to understand how likely it is to observe scores higher or lower than an individual score via probability.

Inherent in this work is the notion that an individual score will differ from the mean, which we quantify as a \(z\)-score. All of the individual scores will differ from the mean in different amounts and different directions, which is natural and expected. We quantify these differences as variance and standard deviation.

Measures of spread and the idea of variability in observations is a key principle in inferential statistics. We know that any observation, whether it is a single score, a set of scores, or a particular descriptive statistic will differ from the center of whatever distribution it belongs in. This is equally true of things outside of statistics and formal data collection and analysis. Some days you hear your alarm and wake up easily, other days you need to hit snooze a few [dozen] times. Some days traffic is light, other days it is very heavy. In some classes you are able to focus, pay attention, and take good notes, but in others you find yourself zoning out the entire time. Each individual observation is an insight but is not, by itself, the entire story, and it takes an extreme deviation from what we expect for us to think that something strange is going on. Being a little sleepy is normal, but being completely unable to get out of bed might indicate that we are sick. Light traffic is a good thing, but almost no cars on the road might make us think we forgot it is Saturday. Zoning out occasionally is fine, but if we cannot focus at all, we might be in a stats class rather than a fun one.

All of these principles carry forward from scores within samples to samples within populations. Just like an individual score will differ from its mean, an individual sample mean will differ from the true population mean. We encountered this principle in earlier chapters: sampling error. As mentioned way back in chapter 1, sampling error is an incredibly important principle. We know ahead of time that if we collect data and compute a sample mean, the observed value of that sample mean will be at least slightly off from what we expect it to be based on our supposed population mean; this is natural and expected. However, if our sample mean is extremely different from what we expect based on the population mean, there may be something going on.

6.02: The Sampling Distribution of Sample Means

To see how we use sampling error, we will learn about a new, theoretical distribution known as the sampling distribution. In the same way that we can gather a lot of individual scores and put them together to form a distribution with a center and spread, if we were to take many samples, all of the same size, and calculate the mean of each of those, we could put those means together to form a distribution. This new distribution is, intuitively, known as the distribution of sample means. It is one example of what we call a sampling distribution, which can be formed from any statistic, such as a mean, a test statistic, or a correlation coefficient (more on the latter two in Units 2 and 3).
For our purposes, understanding the distribution of sample means will be enough to see how all other sampling distributions work to enable and inform our inferential analyses, so these two terms will be used interchangeably from here on out. Let’s take a deeper look at some of its characteristics.

The sampling distribution of sample means can be described by its shape, center, and spread, just like any of the other distributions we have worked with. The shape of our sampling distribution is normal: a bell-shaped curve with a single peak and two tails extending symmetrically in either direction, just like what we saw in previous chapters. The center of the sampling distribution of sample means – which is, itself, the mean or average of the means – is the true population mean, $\mu$. This will sometimes be written as $\mu_{\overline{X}}$ to denote it as the mean of the sample means. The spread of the sampling distribution is called the standard error, the quantification of sampling error, denoted $\sigma_{\overline{X}}$. The formula for standard error is:

$\sigma_{\overline{X}}=\dfrac{\sigma}{\sqrt{n}}$

Notice that the sample size is in this equation. As stated above, the sampling distribution refers to samples of a specific size. That is, all sample means must be calculated from samples of the same size $n$, such as $n$ = 10, $n$ = 30, or $n$ = 100. This sample size refers to how many people or observations are in each individual sample, not how many samples are used to form the sampling distribution. This is because the sampling distribution is a theoretical distribution, not one we will ever actually calculate or observe. Figure $1$ displays the principles stated here in graphical form.

Two Important Axioms

We just learned that the sampling distribution is theoretical: we never actually see it. If that is true, then how can we know it works? How can we use something that we don’t see? The answer lies in two very important mathematical facts: the central limit theorem and the law of large numbers. We will not go into the math behind how these statements were derived, but knowing what they are and what they mean is important to understanding why inferential statistics work and how we can draw conclusions about a population based on information gained from a single sample.

Central Limit Theorem

The central limit theorem states:

Theorem $1$

For samples of a single size $n$, drawn from a population with a given mean $\mu$ and variance $\sigma^2$, the sampling distribution of sample means will have a mean $\mu_{\overline{X}}=\mu$ and variance $\sigma_{\overline{X}}^{2}=\dfrac{\sigma^{2}}{n}$. This distribution will approach normality as $n$ increases.

From this, we are able to find the standard deviation of our sampling distribution, the standard error. As you can see, just like any other standard deviation, the standard error is simply the square root of the variance of the distribution. The last sentence of the central limit theorem states that the sampling distribution will approach normality as the size of the samples used to create it increases. What this means is that bigger samples will create a more normal distribution, so we are better able to use the techniques we developed for normal distributions and probabilities. So how large is large enough? In general, a sampling distribution will be normal if either of two characteristics is true:

1. the population from which the samples are drawn is normally distributed, or
2. the sample size is equal to or greater than 30.
This second criterion is very important because it enables us to use methods developed for normal distributions even if the true population distribution is skewed.

Law of Large Numbers

The law of large numbers simply states that as our sample size increases, the probability that our sample mean is an accurate representation of the true population mean also increases. It is the formal mathematical way to state that larger samples are more accurate. The law of large numbers is related to the central limit theorem, specifically the formulas for variance and standard error. Notice that the sample size appears in the denominators of those formulas. A larger denominator in any fraction means that the overall value of the fraction gets smaller (i.e., 1/2 = 0.50, 1/3 = 0.33, 1/4 = 0.25, and so on). Thus, larger sample sizes will create smaller standard errors. We already know that standard error is the spread of the sampling distribution and that a smaller spread creates a narrower distribution. Therefore, larger sample sizes create narrower sampling distributions, which increases the probability that a sample mean will be close to the center and decreases the probability that it will be in the tails. This is illustrated in Figures $2$ and $3$.
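If you would like to see the central limit theorem and the standard error formula at work, a short simulation makes them concrete. The sketch below is not part of the original text; it assumes you have access to R and simply draws many samples of size 30 from a population with mean 50 and standard deviation 10, then checks that the mean and spread of the resulting sample means match $\mu$ and $\sigma / \sqrt{n}$.

# Minimal R sketch (illustrative only): simulate a sampling distribution of means
set.seed(1)                     # make the simulation reproducible
mu <- 50
sigma <- 10
n <- 30                         # size of each individual sample
sample_means <- replicate(10000, mean(rnorm(n, mean = mu, sd = sigma)))
mean(sample_means)              # close to mu = 50
sd(sample_means)                # close to the standard error
sigma / sqrt(n)                 # theoretical standard error, about 1.83

A histogram of these simulated means (e.g., hist(sample_means)) is approximately normal and much narrower than the population distribution, which is exactly what the central limit theorem and the law of large numbers predict.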
We saw in an earlier chapter that we can use $z$-scores to split up a normal distribution and calculate the proportion of the area under the curve in one of the new regions, giving us the probability of randomly selecting a $z$-score in that range. We can follow the exact same process for sample means, converting them into $z$-scores and calculating probabilities. The only difference is that instead of dividing a raw score’s deviation by the standard deviation, we divide the sample mean’s deviation by the standard error.

$z=\dfrac{\overline{X}-\mu}{\sigma_{\overline{X}}}=\dfrac{\overline{X}-\mu}{\sigma / \sqrt{n}}$

Let’s say we are drawing samples from a population with a mean of 50 and standard deviation of 10 (the same values used in Figure 2). What is the probability that we get a random sample of size 10 with a mean greater than or equal to 55? That is, for $n$ = 10, what is the probability that $\overline{X}$ ≥ 55? First, we need to convert this sample mean into a $z$-score:

$z=\dfrac{55-50}{\frac{10}{\sqrt{10}}}=\dfrac{5}{3.16}=1.58 \nonumber$

Now we need to shade the area under the normal curve corresponding to scores greater than $z$ = 1.58 as in Figure $1$. Next we go to our $z$-table and find that the area to the left of $z$ = 1.58 is 0.9429. Finally, because we need the area to the right (per our shaded diagram), we simply subtract this from 1 to get 1.00 – 0.9429 = 0.0571. So, the probability of randomly drawing a sample of 10 people from a population with a mean of 50 and standard deviation of 10 whose sample mean is 55 or more is $p$ = .0571, or 5.71%. Notice that we are talking about means that are 55 or more. That is because, strictly speaking, it’s impossible to calculate the probability of a score taking on exactly one value, since the “shaded region” would just be a line with no area to calculate.

Now let’s do the same thing, but assume that instead of only having a sample of 10 people we took a sample of 50 people. First, we find $z$:

$z=\dfrac{55-50}{\frac{10}{\sqrt{50}}}=\dfrac{5}{1.41}=3.55$

Then we shade the appropriate region of the normal distribution. Notice that no region of Figure $2$ appears to be shaded. That is because the area under the curve that far out into the tail is so small that it can’t even be seen (the red line has been added to show exactly where the region starts). Thus, we already know that the probability must be smaller for $n$ = 50 than for $n$ = 10 because the size of the area (the proportion) is much smaller. We run into a similar issue when we try to find $z$ = 3.55 on our Standard Normal Distribution Table. The table only goes up to 3.09 because everything beyond that is almost 0 and changes so little that it’s not worth printing values. The closest we can get is subtracting the largest value, 0.9990, from 1 to get 0.001. We know that, technically, the actual probability is smaller than this (since 3.55 is farther into the tail than 3.09), so we say that the probability is $p$ < 0.001, or less than 0.1%.

This example shows what an impact sample size can have. From the same population, looking for exactly the same thing, changing only the sample size took us from roughly a 5% chance (or about 1/20 odds) to a less than 0.1% chance (or less than 1 in 1000). As the sample size $n$ increased, the standard error decreased, which in turn caused the value of $z$ to increase, which finally caused the $p$-value (a term for probability we will use a lot in Unit 2) to decrease.
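These tail areas can also be checked with software instead of a $z$-table. The following lines are an illustrative sketch in R and are not part of the original example; pnorm() returns the area to the left of a value, so subtracting from 1 gives the upper tail.

# Illustrative R check of the two probabilities above
mu <- 50
sigma <- 10
z10 <- (55 - mu) / (sigma / sqrt(10))   # about 1.58
z50 <- (55 - mu) / (sigma / sqrt(50))   # about 3.54 (the text rounds to 3.55)
1 - pnorm(z10)                          # about 0.057, matching p = .0571
1 - pnorm(z50)                          # about 0.0002, i.e., p < .001

Small differences from the hand calculations come only from rounding the standard error to two decimal places.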
You can think of this relation like gears: turning the first gear (sample size) clockwise causes the next gear (standard error) to turn counterclockwise, which causes the third gear ($z$) to turn clockwise, which finally causes the last gear (probability) to turn counterclockwise. All of these pieces fit together, and the relations will always be the same:

$\mathrm{n} \uparrow \sigma_{\overline{X}} \downarrow \mathrm{z} \uparrow \mathrm{p} \downarrow$

Let’s look at this one more way. For the same population with mean 50 and standard deviation 10, what proportion of sample means fall between 47 and 53 if they are of sample size 10 and sample size 50? We’ll start again with $n$ = 10. Converting 47 and 53 into $z$-scores, we get $z$ = -0.95 and $z$ = 0.95, respectively. From our $z$-table, we find that the proportion between these two scores is 0.6578 (the process here is left for the student to practice converting $\overline{X}$ to $z$ and $z$ to proportions). So, 65.78% of sample means of sample size 10 will fall between 47 and 53. For $n$ = 50, our $z$-scores for 47 and 53 are ±2.13, which gives us a proportion of 0.9668, almost 97%! The shaded regions for each of these sampling distributions are displayed in Figure $3$. The sampling distributions are shown on the original scale, rather than as $z$-scores, so you can see how much of each distribution’s body falls into the range, which is marked off with dotted lines.

6.04: Sampling Distribution Probability and Inference

We’ve seen how we can use the standard error to determine probability based on our normal curve. We can think of the standard error as how much we would naturally expect our statistic (be it a mean or some other statistic) to vary. In our formula for $z$ based on a sample mean, the numerator ($\overline{X}-\mu$) is what we call an observed effect. That is, it is the difference between what we observe in our sample mean and what we expected based on the population from which that sample mean was calculated. Because the sample mean will naturally move around due to sampling error, our observed effect will also change naturally. In the context of our formula for $z$, then, our standard error is how much we would naturally expect the observed effect to change. Changing by a little is completely normal, but changing by a lot might indicate something is going on. This is the basis of inferential statistics and the logic behind hypothesis testing, the subject of Unit 2.

6.05: Exercises

1. What is a sampling distribution?

Answer: The sampling distribution (or sampling distribution of the sample means) is the distribution formed by combining many sample means taken from the same population and of a single, consistent sample size.

2. What are the two mathematical facts that describe how sampling distributions work?

3. What is the difference between a sampling distribution and a regular distribution?

Answer: A sampling distribution is made of statistics (e.g. the mean) whereas a regular distribution is made of individual scores.

4. What effect does sample size have on the shape of a sampling distribution?

5. What is standard error?

Answer: Standard error is the spread of the sampling distribution and is the quantification of sampling error. It is how much we expect the sample mean to naturally change based on random chance.

6. For a population with a mean of 75 and a standard deviation of 12, what proportion of sample means of size $n$ = 16 fall above 82?
7. For a population with a mean of 100 and standard deviation of 16, what is the probability that a random sample of size 4 will have a mean between 110 and 130?

Answer: 10.46% or 0.1046

8. Find the $z$-score for the following means taken from a population with mean 10 and standard deviation 2:
1. $\overline{X}$ = 8, $n$ = 12
2. $\overline{X}$ = 8, $n$ = 30
3. $\overline{X}$ = 20, $n$ = 4
4. $\overline{X}$ = 20, $n$ = 16

9. As the sample size increases, what happens to the $p$-value associated with a given sample mean?

Answer: As sample size increases, the $p$-value will decrease.

10. For a population with a mean of 35 and standard deviation of 7, find the sample mean of size $n$ = 20 that cuts off the top 5% of the sampling distribution.
The statistician R. Fisher explained the concept of hypothesis testing with a story of a lady tasting tea. Here we will present an example based on James Bond who insisted that martinis should be shaken rather than stirred. Let's consider a hypothetical experiment to determine whether Mr. Bond can tell the difference between a shaken and a stirred martini. Suppose we gave Mr. Bond a series of 16 taste tests. In each test, we flipped a fair coin to determine whether to stir or shake the martini. Then we presented the martini to Mr. Bond and asked him to decide whether it was shaken or stirred. Let's say Mr. Bond was correct on 13 of the 16 taste tests. Does this prove that Mr. Bond has at least some ability to tell whether the martini was shaken or stirred? This result does not prove that he does; it could be he was just lucky and guessed right 13 out of 16 times. But how plausible is the explanation that he was just lucky? To assess its plausibility, we determine the probability that someone who was just guessing would be correct 13/16 times or more. This probability can be computed to be 0.0106. This is a pretty low probability, and therefore someone would have to be very lucky to be correct 13 or more times out of 16 if they were just guessing. So either Mr. Bond was very lucky, or he can tell whether the drink was shaken or stirred. The hypothesis that he was guessing is not proven false, but considerable doubt is cast on it. Therefore, there is strong evidence that Mr. Bond can tell whether a drink was shaken or stirred. Let's consider another example. The case study Physicians' Reactions sought to determine whether physicians spend less time with obese patients. Physicians were sampled randomly and each was shown a chart of a patient complaining of a migraine headache. They were then asked to estimate how long they would spend with the patient. The charts were identical except that for half the charts, the patient was obese and for the other half, the patient was of average weight. The chart a particular physician viewed was determined randomly. Thirty-three physicians viewed charts of average-weight patients and 38 physicians viewed charts of obese patients. The mean time physicians reported that they would spend with obese patients was 24.7 minutes as compared to a mean of 31.4 minutes for normal-weight patients. How might this difference between means have occurred? One possibility is that physicians were influenced by the weight of the patients. On the other hand, perhaps by chance, the physicians who viewed charts of the obese patients tend to see patients for less time than the other physicians. Random assignment of charts does not ensure that the groups will be equal in all respects other than the chart they viewed. In fact, it is certain the groups differed in many ways by chance. The two groups could not have exactly the same mean age (if measured precisely enough such as in days). Perhaps a physician's age affects how long physicians see patients. There are innumerable differences between the groups that could affect how long they view patients. With this in mind, is it plausible that these chance differences are responsible for the difference in times? To assess the plausibility of the hypothesis that the difference in mean times is due to chance, we compute the probability of getting a difference as large or larger than the observed difference (31.4 - 24.7 = 6.7 minutes) if the difference were, in fact, due solely to chance. 
Using methods presented in later chapters, this probability can be computed to be 0.0057. Since this is such a low probability, we have confidence that the difference in times is due to the patient's weight and is not due to chance. 7.02: The Probability Value It is very important to understand precisely what the probability values mean. In the James Bond example, the computed probability of 0.0106 is the probability he would be correct on 13 or more taste tests (out of 16) if he were just guessing. It is easy to mistake this probability of 0.0106 as the probability he cannot tell the difference. This is not at all what it means. The probability of 0.0106 is the probability of a certain outcome (13 or more out of 16) assuming a certain state of the world (James Bond was only guessing). It is not the probability that a state of the world is true. Although this might seem like a distinction without a difference, consider the following example. An animal trainer claims that a trained bird can determine whether or not numbers are evenly divisible by 7. In an experiment assessing this claim, the bird is given a series of 16 test trials. On each trial, a number is displayed on a screen and the bird pecks at one of two keys to indicate its choice. The numbers are chosen in such a way that the probability of any number being evenly divisible by 7 is 0.50. The bird is correct on 9/16 choices. We can compute that the probability of being correct nine or more times out of 16 if one is only guessing is 0.40. Since a bird who is only guessing would do this well 40% of the time, these data do not provide convincing evidence that the bird can tell the difference between the two types of numbers. As a scientist, you would be very skeptical that the bird had this ability. Would you conclude that there is a 0.40 probability that the bird can tell the difference? Certainly not! You would think the probability is much lower than 0.0001. To reiterate, the probability value is the probability of an outcome (9/16 or better) and not the probability of a particular state of the world (the bird was only guessing). In statistics, it is conventional to refer to possible states of the world as hypotheses since they are hypothesized states of the world. Using this terminology, the probability value is the probability of an outcome given the hypothesis. It is not the probability of the hypothesis given the outcome. This is not to say that we ignore the probability of the hypothesis. If the probability of the outcome given the hypothesis is sufficiently low, we have evidence that the hypothesis is false. However, we do not compute the probability that the hypothesis is false. In the James Bond example, the hypothesis is that he cannot tell the difference between shaken and stirred martinis. The probability value is low (0.0106), thus providing evidence that he can tell the difference. However, we have not computed the probability that he can tell the difference.
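The two binomial probabilities quoted above (0.0106 for Mr. Bond and roughly 0.40 for the bird) can be verified directly. The lines below are an illustrative R sketch and are not part of the original case studies; pbinom() gives the probability of a count at or below a value, so subtracting from 1 gives the probability of "that many or more" correct guesses.

# Probability of 13 or more correct out of 16 when just guessing (p = 0.5)
1 - pbinom(12, size = 16, prob = 0.5)   # about 0.0106
# Probability of 9 or more correct out of 16 when just guessing
1 - pbinom(8, size = 16, prob = 0.5)    # about 0.40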
The hypothesis that an apparent effect is due to chance is called the null hypothesis, written $H_0$ (“H-naught”). In the Physicians' Reactions example, the null hypothesis is that in the population of physicians, the mean time expected to be spent with obese patients is equal to the mean time expected to be spent with average-weight patients. This null hypothesis can be written as:

$\mathrm{H}_{0}: \mu_{\mathrm{obese}}-\mu_{\mathrm{average}}=0$

The null hypothesis in a correlational study of the relationship between high school grades and college grades would typically be that the population correlation is 0. This can be written as

$\mathrm{H}_{0}: \rho=0$

where $ρ$ is the population correlation, which we will cover in chapter 12.

Although the null hypothesis is usually that the value of a parameter is 0, there are occasions in which the null hypothesis is a value other than 0. For example, if we are working with mothers in the U.S. whose children are at risk of low birth weight, we can use 7.47 pounds, the average birthweight in the US, as our null value and test for differences against that. For now, we will focus on testing a value of a single mean against what we expect from the population. Using birthweight as an example, our null hypothesis takes the form:

$\mathrm{H}_{0}: \mu=7.47 \nonumber$

The number on the right-hand side is our null hypothesis value, which is informed by our research question. Notice that we are testing the value for $μ$, the population parameter, NOT the sample statistic $\overline{\mathrm{X}}$. This is for two reasons: 1) once we collect data, we know what the value of $\overline{\mathrm{X}}$ is – it’s not a mystery or a question, it is observed directly rather than hypothesized about; and 2) we are interested in understanding the population, not just our sample.

Keep in mind that the null hypothesis is typically the opposite of the researcher's hypothesis. In the Physicians' Reactions study, the researchers hypothesized that physicians would expect to spend less time with obese patients. The null hypothesis that the two types of patients are treated identically is put forward with the hope that it can be discredited and therefore rejected. If the null hypothesis were true, a difference as large or larger than the sample difference of 6.7 minutes would be very unlikely to occur. Therefore, the researchers rejected the null hypothesis of no difference and concluded that in the population, physicians intend to spend less time with obese patients.

In general, the null hypothesis is the idea that nothing is going on: there is no effect of our treatment, no relation between our variables, and no difference in our sample mean from what we expected about the population mean. This is always our baseline starting assumption, and it is what we seek to reject. If we are trying to treat depression, we want to find a difference in average symptoms between our treatment and control groups. If we are trying to predict job performance, we want to find a relation between conscientiousness and evaluation scores. However, until we have evidence against it, we must use the null hypothesis as our starting point.

7.04: The Alternative Hypothesis

If the null hypothesis is rejected, then we will need some other explanation, which we call the alternative hypothesis, $H_A$ or $H_1$. The alternative hypothesis is simply the reverse of the null hypothesis, and there are three options, depending on where we expect the difference to lie.
Thus, our alternative hypothesis is the mathematical way of stating our research question. If we expect our obtained sample mean to be above or below the null hypothesis value, which we call a directional hypothesis, then our alternative hypothesis takes the form: $\mathrm{H}_{\mathrm{A}}: \mu>7.47 \quad \text { or } \quad \mathrm{H}_{\mathrm{A}}: \mu<7.47 \nonumber$ based on the research question itself. We should only use a directional hypothesis if we have good reason, based on prior observations or research, to suspect a particular direction. When we do not know the direction, such as when we are entering a new area of research, we use a non-directional alternative: $\mathrm{H}_{\mathrm{A}}: \mu \neq 7.47 \nonumber$ We will set different criteria for rejecting the null hypothesis based on the directionality (greater than, less than, or not equal to) of the alternative. To understand why, we need to see where our criteria come from and how they relate to $z$-scores and distributions.
A low probability value casts doubt on the null hypothesis. How low must the probability value be in order to conclude that the null hypothesis is false? Although there is clearly no right or wrong answer to this question, it is conventional to conclude the null hypothesis is false if the probability value is less than 0.05. More conservative researchers conclude the null hypothesis is false only if the probability value is less than 0.01. When a researcher concludes that the null hypothesis is false, the researcher is said to have rejected the null hypothesis. The probability value below which the null hypothesis is rejected is called the $α$ level or simply $α$ (“alpha”). It is also called the significance level. If $α$ is not explicitly specified, assume that $α$ = 0.05.

The significance level is a threshold we set before collecting data in order to determine whether or not we should reject the null hypothesis. We set this value beforehand to avoid biasing ourselves by viewing our results and then determining what criteria we should use. If our data produce values that meet or exceed this threshold, then we have sufficient evidence to reject the null hypothesis; if not, we fail to reject the null (we never “accept” the null). There are two criteria we use to assess whether our data meet the thresholds established by our chosen significance level, and they both have to do with our discussions of probability and distributions. Recall that probability refers to the likelihood of an event, given some situation or set of conditions. In hypothesis testing, that situation is the assumption that the null hypothesis value is the correct value, or that there is no effect. The value laid out in $H_0$ is our condition under which we interpret our results. To reject this assumption, and thereby reject the null hypothesis, we need results that would be very unlikely if the null was true.

Now recall that values of $z$ which fall in the tails of the standard normal distribution represent unlikely values. That is, the proportion of the area under the curve as or more extreme than $z$ is very small as we get into the tails of the distribution. Our significance level corresponds to the area in the tail that is exactly equal to $α$: if we use our normal criterion of $α$ = .05, then 5% of the area under the curve becomes what we call the rejection region (also called the critical region) of the distribution. This is illustrated in Figure $1$. The shaded rejection region takes up 5% of the area under the curve. Any result which falls in that region is sufficient evidence to reject the null hypothesis.

The rejection region is bounded by a specific $z$-value, as is any area under the curve. In hypothesis testing, the value corresponding to a specific rejection region is called the critical value, $z_{crit}$ (“$z$-crit”) or $z*$ (hence the other name “critical region”). Finding the critical value works exactly the same as finding the $z$-score corresponding to any area under the curve, like we did in Unit 1. If we go to the normal table, we will find that the $z$-score corresponding to 5% of the area in the tail is 1.645 ($z$ = 1.64 leaves 0.0505 in the tail and $z$ = 1.65 leaves 0.0495, so .05 is exactly in between them) if we go to the right and -1.645 if we go to the left. The direction must be determined by your alternative hypothesis, and drawing then shading the distribution is helpful for keeping directionality straight. Suppose, however, that we want to do a non-directional test.
We need to put the critical region in both tails, but we don’t want to increase the overall size of the rejection region (for reasons we will see later). To do this, we simply split it in half so that an equal proportion of the area under the curve falls in each tail’s rejection region. For $α$ = .05, this means 2.5% of the area is in each tail, which, based on the $z$-table, corresponds to critical values of $z*$ = ±1.96. This is shown in Figure $2$. Thus, any $z$-score falling outside ±1.96 (greater than 1.96 in absolute value) falls in the rejection region. When we use $z$-scores in this way, the obtained value of $z$ (sometimes called $z$-obtained) is something known as a test statistic, which is simply an inferential statistic used to test a null hypothesis. The formula for our $z$-statistic has not changed:

$z=\dfrac{\overline{\mathrm{X}}-\mu}{\sigma / \sqrt{\mathrm{n}}}$

To formally test our hypothesis, we compare our obtained $z$-statistic to our critical $z$-value. If $\mathrm{Z}_{\mathrm{obt}}>\mathrm{Z}_{\mathrm{crit}}$, that means it falls in the rejection region (to see why, draw a line for $z$ = 2.5 on Figure $1$ or Figure $2$) and so we reject $H_0$. If $\mathrm{Z}_{\mathrm{obt}}<\mathrm{Z}_{\mathrm{crit}}$, we fail to reject. Remember that as $z$ gets larger, the corresponding area under the curve beyond $z$ gets smaller. Thus, the proportion, or $p$-value, will be smaller than the area for $α$, and if the area is smaller, the probability gets smaller. Specifically, the probability of obtaining that result, or a more extreme result, under the condition that the null hypothesis is true gets smaller.

The $z$-statistic is very useful when we are doing our calculations by hand. However, when we use computer software, it will report to us a $p$-value, which is simply the proportion of the area under the curve in the tails beyond our obtained $z$-statistic. We can directly compare this $p$-value to $α$ to test our null hypothesis: if $p < α$, we reject $H_0$, but if $p > α$, we fail to reject. Note also that the reverse is always true: if we use critical values to test our hypothesis, we will always know if $p$ is greater than or less than $α$. If we reject, we know that $p < α$ because the obtained $z$-statistic falls farther out into the tail than the critical $z$-value that corresponds to $α$, so the proportion ($p$-value) for that $z$-statistic will be smaller. Conversely, if we fail to reject, we know that the proportion will be larger than $α$ because the $z$-statistic will not be as far into the tail. This is illustrated for a one-tailed test in Figure $3$.

When the null hypothesis is rejected, the effect is said to be statistically significant. For example, in the Physicians' Reactions case study, the probability value is 0.0057. Therefore, the effect of obesity is statistically significant and the null hypothesis that obesity makes no difference is rejected. It is very important to keep in mind that statistical significance means only that the null hypothesis of exactly no effect is rejected; it does not mean that the effect is important, which is what “significant” usually means. When an effect is significant, you can have confidence the effect is not exactly zero. Finding that an effect is significant does not tell you about how large or important the effect is. Do not confuse statistical significance with practical significance. A small effect can be highly significant if the sample size is large enough.
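Critical values and $p$-values like the ones above are easy to confirm in software. The lines below are an illustrative R sketch, not part of the original text; qnorm() returns the $z$-score that cuts off a given area to its left, and pnorm() returns the area to the left of a given $z$-score.

# One-tailed critical value at alpha = .05 (upper tail)
qnorm(0.95)                 # about 1.645
# Two-tailed critical values at alpha = .05 (2.5% in each tail)
qnorm(c(0.025, 0.975))      # about -1.96 and 1.96
# p-value for an obtained z of 2.5 in a one-tailed (upper) test
1 - pnorm(2.5)              # about 0.006, which is less than alpha = .05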
Why does the word “significant” in the phrase “statistically significant” mean something so different from other uses of the word? Interestingly, this is because the meaning of “significant” in everyday language has changed. It turns out that when the procedures for hypothesis testing were developed, something was “significant” if it signified something. Thus, finding that an effect is statistically significant signifies that the effect is real and not due to chance. Over the years, the meaning of “significant” changed, leading to the potential misinterpretation.
The process of testing hypotheses follows a simple four-step procedure. This process will be what we use for the remainder of the textbook and course, and though the hypotheses and statistics we use will change, this process will not.

Step 1: State the Hypotheses

Your hypotheses are the first thing you need to lay out. Otherwise, there is nothing to test! You have to state the null hypothesis (which is what we test) and the alternative hypothesis (which is what we expect). These should be stated mathematically as they were presented above AND in words, explaining in normal English what each one means in terms of the research question.

Step 2: Find the Critical Values

Next, we formally lay out the criteria we will use to test our hypotheses. There are two pieces of information that inform our critical values: \(α\), which determines how much of the area under the curve composes our rejection region, and the directionality of the test, which determines where the region will be.

Step 3: Compute the Test Statistic

Once we have our hypotheses and the standards we use to test them, we can collect data and calculate our test statistic, in this case \(z\). This step is where the vast majority of differences in future chapters will arise: different tests used for different data are calculated in different ways, but the way we use and interpret them remains the same.

Step 4: Make the Decision

Finally, once we have our obtained test statistic, we can compare it to our critical value and decide whether we should reject or fail to reject the null hypothesis. When we do this, we must interpret the decision in relation to our research question, stating what we concluded, what we based our conclusion on, and the specific statistics we obtained.

7.07: Movie Popcorn

Let’s see how hypothesis testing works in action by working through an example. Say that a movie theater owner likes to keep a very close eye on how much popcorn goes into each bag sold, so he knows that the average bag has 8 cups of popcorn and that this varies a little bit, about half a cup. That is, the known population mean is $μ$ = 8.00 cups and the known population standard deviation is $σ$ = 0.50 cups. The owner wants to make sure that the newest employee is filling bags correctly, so over the course of a week he randomly assesses 25 bags filled by the employee to test for a difference ($N$ = 25). He doesn’t want bags overfilled or underfilled, so he looks for differences in both directions. This scenario has all of the information we need to begin our hypothesis testing procedure.

Step 1: State the Hypotheses

Our manager is looking for a difference in the mean amount of popcorn per bag compared to the population mean of 8 cups. We will need both a null and an alternative hypothesis written both mathematically and in words. We’ll always start with the null hypothesis:

$H_0$: There is no difference in the amount of popcorn in bags from this employee

$H_0$: $\mu = 8.00$

Notice that we phrase the hypothesis in terms of the population parameter $μ$, which in this case would be the true average amount in bags filled by the new employee. Our assumption of no difference, the null hypothesis, is that this mean is exactly the same as the known population mean value we want it to match, 8.00. Now let’s do the alternative:

$H_A$: There is a difference in the amount of popcorn in bags from this employee

$H_A$: $μ ≠ 8.00$

In this case, we don’t know if the bags will be too full or not full enough, so we do a two-tailed alternative hypothesis that there is a difference.
Step 2: Find the Critical Values

Our critical values are based on two things: the directionality of the test and the level of significance. We decided in step 1 that a two-tailed test is the appropriate directionality. We were given no information about the level of significance, so we assume that $α$ = 0.05 is what we will use. As stated earlier in the chapter, the critical values for a two-tailed $z$-test at $α$ = 0.05 are $z*$ = ±1.96. These will be the criteria we use to test our hypothesis. We can now draw out our distribution so we can visualize the rejection region and make sure it makes sense.

Step 3: Calculate the Test Statistic

Now we come to our formal calculations. Let’s say that the manager collects data and finds that the average amount in this employee’s popcorn bags is $\overline{\mathrm{X}}$ = 7.75 cups. We can now plug this value, along with the values presented in the original problem, into our equation for $z$:

$z=\dfrac{7.75-8.00}{0.50 / \sqrt{25}}=\dfrac{-0.25}{0.10}=-2.50 \nonumber$

So our test statistic is $z$ = -2.50, which we can draw onto our rejection region distribution:

Step 4: Make the Decision

Looking at Figure $2$, we can see that our obtained $z$-statistic falls in the rejection region. We can also directly compare it to our critical value: in absolute value, 2.50 > 1.96, so we reject the null hypothesis. We can now write our conclusion:

Reject $H_0$. Based on the sample of 25 bags, we can conclude that the average popcorn bag from this employee is smaller ($\overline{\mathrm{X}}$ = 7.75 cups) than the average bag at this movie theater, $z$ = -2.50, $p$ < 0.05.

When we write our conclusion, we write out the words to communicate what it actually means, but we also include the sample mean we calculated (the exact location doesn’t matter, just somewhere that flows naturally and makes sense) and the $z$-statistic and $p$-value. We don’t know the exact $p$-value, but we do know that because we rejected the null, it must be less than $α$.
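As a check on the arithmetic, the whole popcorn test can be run in a few lines of R. This is an illustrative sketch, not part of the original example; it assumes the same values given in the problem ($\mu$ = 8.00, $\sigma$ = 0.50, $n$ = 25, $\overline{X}$ = 7.75).

# Two-tailed z-test for the popcorn example
mu <- 8.00; sigma <- 0.50; n <- 25; xbar <- 7.75
z <- (xbar - mu) / (sigma / sqrt(n))   # -2.5
qnorm(c(0.025, 0.975))                 # critical values: -1.96 and 1.96
abs(z) > qnorm(0.975)                  # TRUE, so we reject the null
2 * pnorm(-abs(z))                     # two-tailed p-value, about 0.012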
When we reject the null hypothesis, we are stating that the difference we found was statistically significant, but we have mentioned several times that this tells us nothing about practical significance. To get an idea of the actual size of what we found, we can compute a new statistic called an effect size. Effect sizes give us an idea of how large, important, or meaningful a statistically significant effect is. For mean differences like we calculated here, our effect size is Cohen’s $d$:

$d=\dfrac{\overline{X}-\mu}{\sigma}$

This is very similar to our formula for $z$, but we no longer take into account the sample size (since overly large samples can make it too easy to reject the null). Cohen’s $d$ is interpreted in units of standard deviations, just like $z$. For our example:

$d=\dfrac{7.75-8.00}{0.50}=\dfrac{-0.25}{0.50}=-0.50 \nonumber$

The magnitude of Cohen’s $d$ is interpreted as small, moderate, or large. Specifically, $d$ = 0.20 is small, $d$ = 0.50 is moderate, and $d$ = 0.80 is large. Obviously values can fall in between these guidelines, so we should use our best judgment and the context of the problem to make our final interpretation of size. Our effect size happened to be exactly equal to one of these (0.50 in absolute value), so we say that there was a moderate effect.

Effect sizes are incredibly useful and provide important information and clarification that overcomes some of the weakness of hypothesis testing. Whenever you find a significant result, you should always calculate an effect size.

7.09: Office Temperature

Let’s do another example to solidify our understanding. Let’s say that the office building you work in is supposed to be kept at 74 degrees Fahrenheit but is allowed to vary by 1 degree in either direction. You suspect that, as a cost-saving measure, the temperature was secretly set higher. You set up a formal way to test your hypothesis.

Step 1: State the Hypotheses

You start by laying out the null hypothesis:

$H_0$: There is no difference in the average building temperature

$H_0: \mu = 74$

Next you state the alternative hypothesis. You have reason to suspect a specific direction of change, so you make a one-tailed test:

$H_A$: The average building temperature is higher than claimed

$\mathrm{H}_{\mathrm{A}}: \mu>74$

Step 2: Find the Critical Values

You know that the most common level of significance is $α$ = 0.05, so you keep that the same and know that the critical value for a one-tailed $z$-test is $z*$ = 1.645. To keep track of the directionality of the test and rejection region, you draw out your distribution:

Step 3: Calculate the Test Statistic

Now that you have everything set up, you spend one week collecting temperature data:

Table $1$: Temperature data for a week

Day          Temperature
Monday       77
Tuesday      76
Wednesday    74
Thursday     78
Friday       78

You calculate the average of these scores to be $\overline{\mathrm{X}}$ = 76.6 degrees. You use this to calculate the test statistic, using $μ$ = 74 (the supposed average temperature), $σ$ = 1.00 (how much the temperature should vary), and $n$ = 5 (how many data points you collected):

$z=\dfrac{76.60-74.00}{1.00 / \sqrt{5}}=\dfrac{2.60}{0.45}=5.78 \nonumber$

This value falls so far into the tail that it cannot even be plotted on the distribution!

Step 4: Make the Decision

You compare your obtained $z$-statistic, $z$ = 5.78, to the critical value, $z*$ = 1.645, and find that $z > z*$.
Therefore you reject the null hypothesis, concluding:

Based on 5 observations, the average temperature ($\overline{\mathrm{X}}$ = 76.6 degrees) is statistically significantly higher than it is supposed to be, $z$ = 5.78, $p$ < .05.

Because the result is significant, you also calculate an effect size:

$d=\dfrac{76.60-74.00}{1.00}=\dfrac{2.60}{1.00}=2.60 \nonumber$

The effect size you calculate is definitely large, meaning someone has some explaining to do!

7.10: Different Significance Level

Finally, let’s take a look at an example phrased in generic terms, rather than in the context of a specific research question, to see the individual pieces one more time. This time, however, we will use a stricter significance level, $α$ = 0.01, to test the hypothesis.

Step 1: State the Hypotheses

We will use 60 as an arbitrary null hypothesis value:

$H_0$: The average score does not differ from the population

$H_0: \mu = 60$

We will assume a two-tailed test:

$H_A$: The average score does differ

$H_A: μ ≠ 60$

Step 2: Find the Critical Values

We have seen the critical values for $z$-tests at the $α$ = 0.05 level of significance several times. To find the values for $α$ = 0.01, we will go to the standard normal table and find the $z$-score cutting off 0.005 (0.01 divided by 2 for a two-tailed test) of the area in the tail, which is $z*$ = ±2.575. Notice that this cutoff is much higher than it was for $α$ = 0.05. This is because we need much less of the area in the tail, so we need to go very far out to find the cutoff. As a result, this will require a much larger effect or much larger sample size in order to reject the null hypothesis.

Step 3: Calculate the Test Statistic

We can now calculate our test statistic. We will use $σ$ = 10 as our known population standard deviation and the following data to calculate our sample mean:

61 62 65 61 58 59 54 61 60 63

The average of these scores is $\overline{\mathrm{X}}$ = 60.40. From this we calculate our $z$-statistic as:

$z=\dfrac{60.40-60.00}{10.00 / \sqrt{10}}=\dfrac{0.40}{3.16}=0.13 \nonumber$

Step 4: Make the Decision

Our obtained $z$-statistic, $z$ = 0.13, is very small. It is much less than our critical value of 2.575. Thus, this time, we fail to reject the null hypothesis. Our conclusion would look something like:

Based on the sample of 10 scores, we cannot conclude that the mean ($\overline{\mathrm{X}}$ = 60.40) is statistically significantly different from 60.00, $z$ = 0.13, $p$ > 0.01.

Notice two things about the end of the conclusion. First, we wrote that $p$ is greater than instead of $p$ is less than, like we did in the previous two examples. This is because we failed to reject the null hypothesis. We don’t know exactly what the $p$-value is, but we know it must be larger than the $α$ level we used to test our hypothesis. Second, we used 0.01 instead of the usual 0.05, because this time we tested at a different level. The number you compare to the $p$-value should always be the significance level you test at. Finally, because we did not detect a statistically significant effect, we do not need to calculate an effect size.
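If you want to verify the temperature example with software, a few lines of R will do it. This sketch is illustrative and not part of the original text; note that keeping the full precision of the standard error gives $z$ of about 5.81 rather than the 5.78 obtained above by rounding the standard error to 0.45, a difference that has no effect on the decision.

# One-tailed z-test and effect size for the office temperature example
temps <- c(77, 76, 74, 78, 78)
mu <- 74; sigma <- 1.00
xbar <- mean(temps)                                # 76.6
z <- (xbar - mu) / (sigma / sqrt(length(temps)))   # about 5.81
z > qnorm(0.95)                                    # TRUE, so we reject the null
(xbar - mu) / sigma                                # Cohen's d, 2.6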
There are several other considerations we need to keep in mind when performing hypothesis testing. Errors in Hypothesis Testing In the Physicians' Reactions case study, the probability value associated with the significance test is 0.0057. Therefore, the null hypothesis was rejected, and it was concluded that physicians intend to spend less time with obese patients. Despite the low probability value, it is possible that the null hypothesis of no true difference between obese and average-weight patients is true and that the large difference between sample means occurred by chance. If this is the case, then the conclusion that physicians intend to spend less time with obese patients is in error. This type of error is called a Type I error. More generally, a Type I error occurs when a significance test results in the rejection of a true null hypothesis. By one common convention, if the probability value is below 0.05 then the null hypothesis is rejected. Another convention, although slightly less common, is to reject the null hypothesis if the probability value is below 0.01. The threshold for rejecting the null hypothesis is called the \(α\) level or simply \(α\). It is also called the significance level. As discussed in the introduction to hypothesis testing, it is better to interpret the probability value as an indication of the weight of evidence against the null hypothesis than as part of a decision rule for making a reject or do-not-reject decision. Therefore, keep in mind that rejecting the null hypothesis is not an all-or-nothing decision. The Type I error rate is affected by the \(α\) level: the lower the \(α\) level the lower the Type I error rate. It might seem that \(α\) is the probability of a Type I error. However, this is not correct. Instead, \(α\) is the probability of a Type I error given that the null hypothesis is true. If the null hypothesis is false, then it is impossible to make a Type I error. The second type of error that can be made in significance testing is failing to reject a false null hypothesis. This kind of error is called a Type II error. Unlike a Type I error, a Type II error is not really an error. When a statistical test is not significant, it means that the data do not provide strong evidence that the null hypothesis is false. Lack of significance does not support the conclusion that the null hypothesis is true. Therefore, a researcher should not make the mistake of incorrectly concluding that the null hypothesis is true when a statistical test was not significant. Instead, the researcher should consider the test inconclusive. Contrast this with a Type I error in which the researcher erroneously concludes that the null hypothesis is false when, in fact, it is true. A Type II error can only occur if the null hypothesis is false. If the null hypothesis is false, then the probability of a Type II error is called \(β\) (beta). The probability of correctly rejecting a false null hypothesis equals 1- \(β\) and is called power. Power is simply our ability to correctly detect an effect that exists. It is influenced by the size of the effect (larger effects are easier to detect), the significance level we set (making it easier to reject the null makes it easier to detect an effect, but increases the likelihood of a Type I Error), and the sample size used (larger samples make it easier to reject the null). Misconceptions in Hypothesis Testing Misconceptions about significance testing are common. This section lists three important ones. 1. 
Misconception: The probability value is the probability that the null hypothesis is false. • Proper interpretation: The probability value is the probability of a result as extreme or more extreme given that the null hypothesis is true. It is the probability of the data given the null hypothesis. It is not the probability that the null hypothesis is false. 2. Misconception: A low probability value indicates a large effect. • Proper interpretation: A low probability value indicates that the sample outcome (or one more extreme) would be very unlikely if the null hypothesis were true. A low probability value can occur with small effect sizes, particularly if the sample size is large. 3. Misconception: A non-significant outcome means that the null hypothesis is probably true. • Proper interpretation: A non-significant outcome means that the data do not conclusively demonstrate that the null hypothesis is false.
1. In your own words, explain what the null hypothesis is.

Answer: Your answer should include mention of the baseline assumption of no difference between the sample and the population.

2. What are Type I and Type II Errors?

3. What is $α$?

Answer: Alpha is the significance level. It is the criterion we use when deciding to reject or fail to reject the null hypothesis, corresponding to a given proportion of the area under the normal distribution and a probability of finding extreme scores assuming the null hypothesis is true.

4. Why do we phrase null and alternative hypotheses with population parameters and not sample means?

5. If our null hypothesis is “$H_0: μ = 40$”, what are the three possible alternative hypotheses?

Answer: $H_A: μ ≠ 40$, $H_A: μ > 40$, $H_A: μ < 40$

6. Why do we state our hypotheses and decision criteria before we collect our data?

7. When and why do you calculate an effect size?

Answer: We calculate an effect size when we find a statistically significant result to see if our result is practically meaningful or important.

8. Determine whether you would reject or fail to reject the null hypothesis in the following situations:
1. $z$ = 1.99, two-tailed test at $α$ = 0.05
2. $z$ = 0.34, $z*$ = 1.645
3. $p$ = 0.03, $α$ = 0.05
4. $p$ = 0.015, $α$ = 0.01

9. You are part of a trivia team and have tracked your team’s performance since you started playing, so you know that your scores are normally distributed with $μ$ = 78 and $σ$ = 12. Recently, a new person joined the team, and you think the scores have gotten better. Use hypothesis testing to see if the average score has improved based on the following 8 weeks’ worth of score data: 82, 74, 62, 68, 79, 94, 90, 81, 80.

Answer: Step 1: $H_0: μ = 78$ “The average score is not different after the new person joined”, $H_A: μ > 78$ “The average score has gone up since the new person joined.” Step 2: One-tailed test to the right, assuming $α$ = 0.05, $z*$ = 1.645. Step 3: $\overline{\mathrm{X}}$ = 88.75, $\sigma _{\overline{\mathrm{X}}}$ = 4.24, $z$ = 2.54. Step 4: $z > z*$, Reject $H_0$. Based on 8 weeks of games, we can conclude that our average score ($\overline{\mathrm{X}}$ = 88.75) is higher now that the new person is on the team, $z$ = 2.54, $p$ < .05. Since the result is significant, we need an effect size: Cohen’s $d$ = 0.90, which is a large effect.

10. You get hired as a server at a local restaurant, and the manager tells you that servers’ tips are \$42 on average but vary by about \$12 ($μ$ = 42, $σ$ = 12). You decide to track your tips to see if you make a different amount, but because this is your first job as a server, you don’t know if you will make more or less in tips. After working 16 shifts, you find that your average nightly amount is \$44.50 in tips. Test for a difference between this value and the population mean at the $α$ = 0.05 level of significance.
• 8.1: The t-statistic
The z-statistic was a useful way to link the material and ease us into the new way of looking at data, but it isn’t a very common test because it relies on knowing the population’s standard deviation, σ, which is rarely going to be the case. Instead, we will estimate that parameter σ using the sample statistic s in the same way that we estimate μ using $\overline{X}$. Our new statistic is called t, and for testing one population mean using a single sample (called a 1-sample t-test)
• 8.2: Hypothesis Testing with t
Hypothesis testing with the t-statistic works exactly the same way as z-tests did, following the four-step process of (1) Stating the Hypothesis, (2) Finding the Critical Values, (3) Computing the Test Statistic, and (4) Making the Decision.
• 8.3: Confidence Intervals
• 8.E: Introduction to t-tests (Exercises)

08: Introduction to t-tests

Last chapter, we were introduced to hypothesis testing using the $z$-statistic for sample means that we learned in Unit 1. This was a useful way to link the material and ease us into the new way of looking at data, but it isn’t a very common test because it relies on knowing the population’s standard deviation, $σ$, which is rarely going to be the case. Instead, we will estimate that parameter $σ$ using the sample statistic $s$ in the same way that we estimate $μ$ using $\overline{\mathrm{X}}$ ($μ$ will still appear in our formulas because we suspect something about its value and that is what we are testing). Our new statistic is called $t$, and for testing one population mean using a single sample (called a 1-sample $t$-test) it takes the form:

$t=\dfrac{\bar{X}-\mu}{s_{\bar{X}}}=\dfrac{\bar{X}-\mu}{s / \sqrt{n}}$

Notice that $t$ looks almost identical to $z$; this is because they test the exact same thing: the value of a sample mean compared to what we expect of the population. The only difference is that the standard error is now denoted $s_{\overline{\mathrm{X}}}$ to indicate that we use the sample statistic for standard deviation, $s$, instead of the population parameter $σ$. The process of using and interpreting the standard error and the full test statistic remain exactly the same.

In chapter 3 we learned that the formulae for sample standard deviation and population standard deviation differ by one key factor: the denominator for the parameter is $N$ but the denominator for the statistic is $N – 1$, also known as degrees of freedom, $df$. Because we are using a new measure of spread, we can no longer use the standard normal distribution and the $z$-table to find our critical values. For $t$-tests, we will use the $t$-distribution and $t$-table to find these values.

The $t$-distribution, like the standard normal distribution, is symmetric and bell-shaped with a mean of 0. However, because the calculation of standard error uses degrees of freedom, there will be a different $t$-distribution for every degree of freedom. Luckily, they all work exactly the same, so in practice this difference is minor.

Figure $1$ shows four curves: a normal distribution curve labeled $z$, and three $t$-distribution curves for 2, 10, and 30 degrees of freedom. Two things should stand out: First, for lower degrees of freedom (e.g. 2), the tails of the distribution are much fatter, meaning that a larger proportion of the area under the curve falls in the tails.
This means that we will have to go farther out into the tail to cut off the portion corresponding to 5% or $α$ = 0.05, which will in turn lead to higher critical values. Second, as the degrees of freedom increase, we get closer and closer to the $z$ curve. Even the distribution with $df$ = 30, corresponding to a sample size of just 31 people, is nearly indistinguishable from $z$. In fact, a $t$-distribution with infinite degrees of freedom (theoretically, of course) is exactly the standard normal distribution. Because of this, the bottom row of the $t$-table also includes the critical values for $z$-tests at the specific significance levels. Even though these curves are very close, it is still important to use the correct table and critical values, because small differences can add up quickly. The $t$-distribution table lists critical values for one- and two-tailed tests at several levels of significance arranged into columns. The rows of the $t$-table list degrees of freedom up to $df$ = 100 in order to use the appropriate distribution curve. It does not, however, list all possible degrees of freedom in this range, because that would take too many rows. Above $df$ = 40, the rows jump in increments of 10. If a problem requires you to find critical values and the exact degrees of freedom is not listed, you always round down to the next smallest number. For example, if you have 48 people in your sample, the degrees of freedom are $N$ – 1 = 48 – 1 = 47; however, 47 doesn’t appear on our table, so we round down and use the critical values for $df$ = 40, even though 50 is closer. We do this because it avoids inflating Type I Error (false positives, see chapter 7) by using criteria that are too lax.
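The pattern in Figure 1, and the critical values in the $t$-table, can also be pulled from software. The lines below are an illustrative R sketch and are not part of the text; qt() is the $t$-distribution analogue of qnorm() and takes the degrees of freedom as its second argument.

# One-tailed critical values at alpha = .05 for several degrees of freedom
qt(0.95, df = 2)     # about 2.920 -- fat tails push the cutoff far out
qt(0.95, df = 10)    # about 1.812
qt(0.95, df = 30)    # about 1.697
qnorm(0.95)          # about 1.645 -- the value t approaches as df grows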
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/08%3A_Introduction_to_t-tests/8.01%3A_The_t-statistic.txt
Hypothesis testing with the $t$-statistic works exactly the same way as $z$-tests did, following the four-step process of 1. Stating the Hypothesis 2. Finding the Critical Values 3. Computing the Test Statistic 4. Making the Decision. We will work through an example: let's say that you move to a new city and find an auto shop to change your oil. Your old mechanic did the job in about 30 minutes (though you never paid close enough attention to know how much that varied), and you suspect that your new shop takes much longer. After 4 oil changes, you think you have enough evidence to demonstrate this.

Step 1: State the Hypotheses Our hypotheses for 1-sample $t$-tests are identical to those we used for $z$-tests. We still state the null and alternative hypotheses mathematically in terms of the population parameter and written out in readable English. For our example: $H_0$: There is no difference in the average time to change a car's oil $H_0: μ = 30$ $H_A$: This shop takes longer to change oil than your old mechanic $H_A: μ > 30$

Step 2: Find the Critical Values As noted above, our critical values still delineate the area in the tails under the curve corresponding to our chosen level of significance. Because we have no reason to change significance levels, we will use $α$ = 0.05, and because we suspect a direction of effect, we have a one-tailed test. To find our critical values for $t$, we need to add one more piece of information: the degrees of freedom. For this example: $df = N – 1 = 4 – 1 = 3 \nonumber$ Going to our $t$-table, we find the column corresponding to our one-tailed significance level and find where it intersects with the row for 3 degrees of freedom. As shown in Figure $1$, our critical value is $t^*$ = 2.353. We can then shade this region on our $t$-distribution to visualize our rejection region.

Step 3: Compute the Test Statistic The four wait times you experienced for your oil changes at the new shop were 46 minutes, 58 minutes, 40 minutes, and 71 minutes. We will use these to calculate $\overline{\mathrm{X}}$ and $s$ by first filling in the sum of squares table in Table $1$:

Table $1$: Sum of Squares Table
$\mathrm{X}$	$\mathrm{X}-\overline{\mathrm{X}}$	$(\mathrm{X}-\overline{\mathrm{X}})^{2}$
46	-7.75	60.06
58	4.25	18.06
40	-13.75	189.06
71	17.25	297.56
$\Sigma$ = 215	$\Sigma$ = 0	$\Sigma$ = 564.74

After filling in the first column to get $\Sigma$ = 215, we find that the mean is $\overline{\mathrm{X}}$ = 53.75 (215 divided by sample size 4), which allows us to fill in the rest of the table to get our sum of squares $SS$ = 564.74, which we then plug in to the formula for standard deviation from chapter 3: $s=\sqrt{\dfrac{\sum(X-\overline{X})^{2}}{N-1}}=\sqrt{\dfrac{S S}{d f}}=\sqrt{\dfrac{564.74}{3}}=13.72 \nonumber$ Next, we take this value and plug it in to the formula for standard error: $s_{\overline{X}}=\dfrac{s}{\sqrt{n}}=\dfrac{13.72}{2}=6.86 \nonumber$ And, finally, we put the standard error, sample mean, and null hypothesis value into the formula for our test statistic $t$: $t=\dfrac{\overline{\mathrm{X}}-\mu}{s_{\overline{\mathrm{X}}}}=\dfrac{53.75-30}{6.86}=\dfrac{23.75}{6.86}=3.46 \nonumber$ This may seem like a lot of steps, but it is really just taking our raw data to calculate one value at a time and carrying that value forward into the next equation: data → sample size/degrees of freedom → mean → sum of squares → standard deviation → standard error → test statistic.
At each step, we simply match the symbols of what we just calculated to where they appear in the next formula to make sure we are plugging everything in correctly. Step 4: Make the Decision Now that we have our critical value and test statistic, we can make our decision using the same criteria we used for a $z$-test. Our obtained $t$-statistic was $t$ = 3.46 and our critical value was $t* = 2.353: t > t*$, so we reject the null hypothesis and conclude: Based on our four oil changes, the new mechanic takes longer on average ($\overline{\mathrm{X}}$ = 53.75) to change oil than our old mechanic, $t(3)$ = 3.46, $p$ < .05. Notice that we also include the degrees of freedom in parentheses next to $t$. And because we found a significant result, we need to calculate an effect size, which is still Cohen’s $d$, but now we use $s$ in place of $σ$: $d=\dfrac{\overline{X}-\mu}{s}=\dfrac{53.75-30.00}{13.72}=1.73 \nonumber$ This is a large effect. It should also be noted that for some things, like the minutes in our current example, we can also interpret the magnitude of the difference we observed (23 minutes and 45 seconds) as an indicator of importance since time is a familiar metric.
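The entire oil-change example can also be checked in R. This is only an optional sketch for readers who want to verify the hand calculations (the text itself relies on the $t$-table); it uses base R functions and the built-in t.test():

# Oil-change example: 1-sample t-test by hand and with t.test()
times <- c(46, 58, 40, 71)    # wait times in minutes at the new shop
mu0   <- 30                   # null hypothesis value (old mechanic's average)

xbar  <- mean(times)                 # 53.75
s     <- sd(times)                   # about 13.72 (divides SS by n - 1)
se    <- s / sqrt(length(times))     # about 6.86
t_obt <- (xbar - mu0) / se           # about 3.46

# The built-in function reports the same t and df, plus a one-tailed p-value
t.test(times, mu = 30, alternative = "greater")

# Cohen's d for the effect size
d <- (xbar - mu0) / s                # about 1.73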
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/08%3A_Introduction_to_t-tests/8.02%3A_Hypothesis_Testing_with_t.txt
Up to this point, we have learned how to estimate the population parameter for the mean using sample data and a sample statistic. From one point of view, this makes sense: we have one value for our parameter so we use a single value (called a point estimate) to estimate it. However, we have seen that all statistics have sampling error and that the value we find for the sample mean will bounce around based on the people in our sample, simply due to random chance. Thinking about estimation from this perspective, it would make more sense to take that error into account rather than relying just on our point estimate. To do this, we calculate what is known as a confidence interval. A confidence interval starts with our point estimate and then creates a range of scores considered plausible based on our standard deviation, our sample size, and the level of confidence with which we would like to estimate the parameter. The distance the interval extends in each direction away from the point estimate is called the margin of error. We calculate the margin of error by multiplying our two-tailed critical value by our standard error: $\text {Margin of Error }=t^{*}(s / \sqrt{n})$ One important consideration when calculating the margin of error is that it can only be calculated using the critical value for a two-tailed test. This is because the margin of error moves away from the point estimate in both directions, so a one-tailed value does not make sense. The critical value we use will be based on a chosen level of confidence, which is equal to 1 – $α$. Thus, a 95% level of confidence corresponds to $α$ = 0.05, so at the 0.05 level of significance we create a 95% Confidence Interval. How to interpret that is discussed further on. Once we have our margin of error calculated, we add it to our point estimate for the mean to get an upper bound to the confidence interval and subtract it from the point estimate for the mean to get a lower bound for the confidence interval: $\begin{array}{l}{\text {Upper Bound}=\bar{X}+\text {Margin of Error}} \\ {\text {Lower Bound }=\bar{X}-\text {Margin of Error}}\end{array}$ Or simply: $\text { Confidence Interval }=\overline{X} \pm t^{*}(s / \sqrt{n})$ To write out a confidence interval, we always use soft brackets and put the lower bound, a comma, and the upper bound: $\text { Confidence Interval }=\text { (Lower Bound, Upper Bound) }$ Let's see what this looks like with some actual numbers by taking our oil change data and using it to create a 95% confidence interval estimating the average length of time it takes at the new mechanic. We already found that our average was $\overline{X}$= 53.75 and our standard error was $s_{\overline{X}}$ = 6.86. We also found a critical value to test our hypothesis, but remember that we were testing a one-tailed hypothesis, so that critical value won't work. To see why that is, look at the column headers on the $t$-table. The column for one-tailed $α$ = 0.05 is the same as a two-tailed $α$ = 0.10. If we used the old critical value, we'd actually be creating a 90% confidence interval (1.00 – 0.10 = 0.90, or 90%). To find the correct value, we use the column for two-tailed $α$ = 0.05 and, again, the row for 3 degrees of freedom, to find $t^*$ = 3.182.
Now we have all the pieces we need to construct our confidence interval: $95 \% C I=53.75 \pm 3.182(6.86) \nonumber$ $\begin{aligned} \text {Upper Bound} &=53.75+3.182(6.86) \\ U B &= 53.75+21.83 \\ U B &=75.58 \end{aligned} \nonumber$ $\begin{aligned} \text {Lower Bound} &=53.75-3.182(6.86) \\ L B &=53.75-21.83 \\ L B &=31.92 \end{aligned} \nonumber$ $95 \% C I=(31.92,75.58) \nonumber$ So we find that our 95% confidence interval runs from 31.92 minutes to 75.58 minutes, but what does that actually mean? The range (31.92, 75.58) represents values of the mean that we consider reasonable or plausible based on our observed data. It includes our point estimate of the mean, $\overline{X}$= 53.75, in the center, but it also has a range of values that could also have been the case based on what we know about how much these scores vary (i.e. our standard error). It is very tempting to also interpret this interval by saying that we are 95% confident that the true population mean falls within the range (31.92, 75.58), but this is not true. The reason it is not true is that phrasing our interpretation this way suggests that the interval is fixed and that the population mean is the thing that moves around and does or does not land inside it. However, the population mean is an absolute that does not change; it is our interval that will vary from data collection to data collection, even taking into account our standard error. The correct interpretation, then, is that we are 95% confident that the range (31.92, 75.58) brackets the true population mean. This is a very subtle difference, but it is an important one.

Hypothesis Testing with Confidence Intervals As a function of how they are constructed, we can also use confidence intervals to test hypotheses. However, we are limited to testing two-tailed hypotheses only, because of how the intervals work, as discussed above. Once a confidence interval has been constructed, using it to test a hypothesis is simple. If the range of the confidence interval brackets (or contains, or is around) the null hypothesis value, we fail to reject the null hypothesis. If it does not bracket the null hypothesis value (i.e. if the entire range is above the null hypothesis value or below it), we reject the null hypothesis. The reason for this is clear if we think about what a confidence interval represents. Remember: a confidence interval is a range of values that we consider reasonable or plausible based on our data. Thus, if the null hypothesis value is in that range, then it is a value that is plausible based on our observations. If the null hypothesis is plausible, then we have no reason to reject it. Thus, if our confidence interval brackets the null hypothesis value, thereby making it a reasonable or plausible value based on our observed data, then we have no evidence against the null hypothesis and fail to reject it. However, if we build a confidence interval of reasonable values based on our observations and it does not contain the null hypothesis value, then we have no empirical (observed) reason to believe the null hypothesis value and therefore reject the null hypothesis. Let's see an example. You hear that the national average on a measure of friendliness is 38 points. You want to know if people in your community are more or less friendly than people nationwide, so you collect data from 30 random people in town to look for a difference. We'll follow the same four step hypothesis testing procedure as before.
Step 1: State the Hypotheses We will start by laying out our null and alternative hypotheses: $H_0$: There is no difference in how friendly the local community is compared to the national average $H_0: μ = 38$ $H_A$: There is a difference in how friendly the local community is compared to the national average $H_A: μ ≠ 38$

Step 2: Find the Critical Values We need our critical values in order to determine the width of our margin of error. We will assume a significance level of $α$ = 0.05 (which will give us a 95% CI). From the $t$-table, a two-tailed critical value at $α$ = 0.05 with 29 degrees of freedom ($N$ – 1 = 30 – 1 = 29) is $t^*$ = 2.045.

Step 3: Calculations Now we can construct our confidence interval. After we collect our data, we find that the average person in our community scored 39.85, or $\overline{X}$= 39.85, and our standard deviation was $s$ = 5.61. First, we need to use this standard deviation, plus our sample size of $N$ = 30, to calculate our standard error: $s_{\overline{X}}=\dfrac{s}{\sqrt{n}}=\dfrac{5.61}{5.48}=1.02 \nonumber$ Now we can put that value, our point estimate for the sample mean, and our critical value from step 2 into the formula for a confidence interval: $95 \% C I=39.85 \pm 2.045(1.02) \nonumber$ $\begin{aligned} \text {Upper Bound} &=39.85+2.045(1.02) \\ U B &=39.85+2.09 \\ U B &=41.94 \end{aligned} \nonumber$ $\begin{aligned} \text {Lower Bound} &=39.85-2.045(1.02) \\ L B &=39.85-2.09 \\ L B &=37.76 \end{aligned} \nonumber$ $95 \% C I=(37.76,41.94) \nonumber$

Step 4: Make the Decision Finally, we can compare our confidence interval to our null hypothesis value. The null value of 38 is higher than our lower bound of 37.76 and lower than our upper bound of 41.94. Thus, the confidence interval brackets our null hypothesis value, and we fail to reject the null hypothesis: Fail to Reject $H_0$. Based on our sample of 30 people, our community is not different in average friendliness ($\overline{X}$= 39.85) from the nation as a whole, 95% CI = (37.76, 41.94). Note that we don't report a test statistic or $p$-value because that is not how we tested the hypothesis, but we do report the value we found for our confidence interval. An important characteristic of hypothesis testing is that both methods (the test statistic and the confidence interval) will always give you the same result, because both are based on the standard error and critical values in their calculations. To check this, we can calculate a $t$-statistic for the example above and find it to be $t$ = 1.81, which is smaller than our critical value of 2.045, so we again fail to reject the null hypothesis.

Confidence Intervals using $z$ Confidence intervals can also be constructed using $z$-score criteria, if one knows the population standard deviation. The format, calculations, and interpretation are all exactly the same, only replacing $t^*$ with $z^*$ and $s_{\overline{X}}$ with $\sigma_{\overline{X}}$.
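For readers who want to check the confidence interval calculations, a small optional R sketch (base R only) reproduces the 95% interval for the oil-change data using the two-tailed critical value from qt():

# 95% confidence interval for the oil-change data
times  <- c(46, 58, 40, 71)
xbar   <- mean(times)                         # 53.75
se     <- sd(times) / sqrt(length(times))     # about 6.86
t_star <- qt(0.975, df = length(times) - 1)   # 3.182 for df = 3 (two-tailed alpha = .05)

c(lower = xbar - t_star * se, upper = xbar + t_star * se)   # about (31.9, 75.6)

# t.test() reports the same interval alongside the test statistic
t.test(times, mu = 30)$conf.int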
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/08%3A_Introduction_to_t-tests/8.03%3A_Confidence_Intervals.txt
1. What is the difference between a $z$-test and a 1-sample $t$-test?
Answer: A $z$-test uses population standard deviation for calculating standard error and gets critical values based on the standard normal distribution. A $t$-test uses sample standard deviation as an estimate when calculating standard error and gets critical values from the $t$-distribution based on degrees of freedom.
2. What does a confidence interval represent?
3. What is the relationship between a chosen level of confidence for a confidence interval and how wide that interval is? For instance, if you move from a 95% CI to a 90% CI, what happens? Hint: look at the $t$-table to see how critical values change when you change levels of significance.
Answer: As the level of confidence gets higher, the interval gets wider. In order to speak with more confidence about having found the population mean, you need to cast a wider net. This happens because critical values for higher confidence levels are larger, which creates a wider margin of error.
4. Construct a confidence interval around the sample mean $\overline{X}$ = 25 for the following conditions:
   a. $N$ = 25, $s$ = 15, 95% confidence level
   b. $N$ = 25, $s$ = 15, 90% confidence level
   c. $s_{\overline{X}}$ = 4.5, $α$ = 0.05, $df$ = 20
   d. $s$ = 12, $df$ = 16 (yes, that is all the information you need)
5. True or False: a confidence interval represents the most likely location of the true population mean.
Answer: False: a confidence interval is a range of plausible scores that may or may not bracket the true population mean.
6. You hear that college campuses may differ from the general population in terms of political affiliation, and you want to use hypothesis testing to see if this is true and, if so, how big the difference is. You know that the average political affiliation in the nation is $μ$ = 4.00 on a scale of 1.00 to 7.00, so you gather data from 150 college students across the nation to see if there is a difference. You find that the average score is 3.76 with a standard deviation of 1.52. Use a 1-sample $t$-test to see if there is a difference at the $α$ = 0.05 level.
7. You hear a lot of talk about increasing global temperature, so you decide to see for yourself if there has been an actual change in recent years. You know that the average land temperature from 1951-1980 was 8.79 degrees Celsius. You find annual average temperature data from 1981-2017 and decide to construct a 99% confidence interval (because you want to be as sure as possible and look for differences in both directions, not just one) using this data to test for a difference from the previous average.
Answer: $\overline{X}$ = 9.44, $s$ = 0.35, $s_{\overline{X}}$ = 0.06, $df$ = 36, $t^*$ = 2.719, 99% CI = (9.28, 9.60); CI does not bracket $μ$, reject null hypothesis. $d$ = 1.83
8. Determine whether you would reject or fail to reject the null hypothesis in the following situations:
   a. $t$ = 2.58, $N$ = 21, two-tailed test at $α$ = 0.05
   b. $t$ = 1.99, $N$ = 49, one-tailed test at $α$ = 0.01
   c. $μ$ = 47.82, 99% CI = (48.71, 49.28)
   d. $μ$ = 0, 95% CI = (-0.15, 0.20)
9. You are curious about how people feel about craft beer, so you gather data from 55 people in the city on whether or not they like it. You code your data so that 0 is neutral, positive scores indicate liking craft beer, and negative scores indicate disliking craft beer. You find that the average opinion was $\overline{X}$ = 1.10 and the spread was $s$ = 0.40, and you test for a difference from 0 at the $α$ = 0.05 level.
Answer: Step 1: $H_0: μ = 0$ "The average person has a neutral opinion towards craft beer", $H_A: μ ≠ 0$ "Overall people will have an opinion about craft beer, either good or bad." Step 2: Two-tailed test, $df$ = 54, $t^*$ = 2.009. Step 3: $\overline{X}$ = 1.10, $s_{\overline{X}}$ = 0.05, $t$ = 22.00. Step 4: $t > t^*$, Reject $H_0$. Based on opinions from 55 people, we can conclude that the average opinion of craft beer ($\overline{X}$ = 1.10) is positive, $t(54)$ = 22.00, $p$ < .05. Since the result is significant, we need an effect size: Cohen's $d$ = 2.75, which is a large effect.
10. You want to know if college students have more stress in their daily lives than the general population ($μ$ = 12), so you gather data from 25 people to test your hypothesis. Your sample has an average stress score of $\overline{X}$ = 13.11 and a standard deviation of $s$ = 3.89. Use a 1-sample $t$-test to see if there is a difference.
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/08%3A_Introduction_to_t-tests/8.04%3A_Exercises.txt
Researchers are often interested in change over time. Sometimes we want to see if change occurs naturally, and other times we are hoping for change in response to some manipulation. In each of these cases, we measure a single variable at different times, and what we are looking for is whether or not we get the same score at time 2 as we did at time 1. The absolute value of our measurements does not matter – all that matters is the change. Let's look at an example:

Table $1$: Raw and difference scores before and after training.
Before	After	Improvement
6	9	3
7	7	0
4	10	6
1	3	2
8	10	2

Table $1$ shows scores on a quiz that five employees received before they took a training course and after they took the course. The difference between these scores (i.e. the score after minus the score before) represents improvement in the employees' ability. This third column is what we look at when assessing whether or not our training was effective. We want to see positive scores, which indicate that the employees' performance went up. What we are not interested in is how good they were before they took the training or after the training. Notice that the lowest scoring employee before the training (with a score of 1) improved just as much as the highest scoring employee before the training (with a score of 8), regardless of how far apart they were to begin with. There's also one improvement score of 0, meaning that the training did not help this employee. An important factor in this is that the participants received the same assessment at both time points. To calculate improvement or any other difference score, we must measure only a single variable. When looking at change scores like the ones in Table $1$, we calculate our difference scores by taking the time 2 score and subtracting the time 1 score. That is: $\mathrm{X}_{\mathrm{d}}=\mathrm{X}_{\mathrm{T} 2}-\mathrm{X}_{\mathrm{T} 1}$ Where $\mathrm{X}_{\mathrm{d}}$ is the difference score, $\mathrm{X}_{\mathrm{T} 1}$ is the score on the variable at time 1, and $\mathrm{X}_{\mathrm{T} 2}$ is the score on the variable at time 2. The difference score, $\mathrm{X}_{\mathrm{d}}$, will be the data we use to test for improvement or change. We subtract time 2 minus time 1 for ease of interpretation; if scores get better, then the difference score will be positive. Similarly, if we're measuring something like reaction time or depression symptoms that we are trying to reduce, then better outcomes (lower scores) will yield negative difference scores. We can also test to see if people who are matched or paired in some way agree on a specific topic. For example, we can see if a parent and a child agree on the quality of home life, or we can see if two romantic partners agree on how serious and committed their relationship is. In these situations, we also subtract one score from the other to get a difference score. This time, however, it doesn't matter which score we subtract from the other because what we are concerned with is the agreement. In both of these types of data, what we have are multiple scores on a single variable. That is, a single observation or data point is comprised of two measurements that are put together into one difference score. This is what makes the analysis of change unique – our ability to link these measurements in a meaningful way. This type of analysis would not work if we had two separate samples of people that weren't related at the individual level, such as samples of people from different states that we gathered independently.
Such datasets and analyses are the subject of the following chapter. A rose by any other name… It is important to point out that this form of t-test has been called many different things by many different people over the years: "matched pairs", "paired samples", "repeated measures", "dependent measures", "dependent samples", and many others. What all of these names have in common is that they describe the analysis of two scores that are related in a systematic way within people or within pairs, which is what all of the datasets usable in this analysis have in common. As such, all of these names are equally appropriate, and the choice of which one to use comes down to preference. In this text, we will refer to paired samples, though the appearance of any of the other names throughout this chapter should not be taken to refer to a different analysis: they are all the same thing. Now that we have an understanding of what difference scores are and know how to calculate them, we can use them to test hypotheses. As we will see, this works exactly the same way as testing hypotheses about one sample mean with a $t$-statistic. The only difference is in the format of the null and alternative hypotheses.
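Computing difference scores is just a subtraction, and the same subtraction carries over directly if you happen to use software. A minimal R sketch for the training data in Table $1$ (purely illustrative; the variable names are our own) is:

# Difference (improvement) scores for the training example: time 2 minus time 1
before <- c(6, 7, 4, 1, 8)
after  <- c(9, 7, 10, 3, 10)

improvement <- after - before   # 3 0 6 2 2 -- these become the data for the analysis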
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/09%3A_Repeated_Measures/9.01%3A_Change_and_Differences.txt
When we work with difference scores, our research questions have to do with change. Did scores improve? Did symptoms get better? Did prevalence go up or down? Our hypotheses will reflect this. Remember that the null hypothesis is the idea that there is nothing interesting, notable, or impactful represented in our dataset. In a paired samples $t$-test, that takes the form of 'no change'. There is no improvement in scores or decrease in symptoms. Thus, our null hypothesis is: $H_0$: There is no change or difference $H_0: \mu_D = 0$ As with our other null hypotheses, we express the null hypothesis for paired samples $t$-tests in both words and mathematical notation. The exact wording of the written-out version should be changed to match whatever research question we are addressing (e.g. "There is no change in ability scores after training"). However, the mathematical version of the null hypothesis is always exactly the same: the average change score is equal to zero. Our population parameter for the average is still $μ$, but it now has a subscript $D$ to denote the fact that it is the average change score and not the average raw observation before or after our manipulation. Obviously individual difference scores can go up or down, but the null hypothesis states that these positive or negative change values are just random chance and that the true average change score across all people is 0. Our alternative hypotheses will also follow the same format that they did before: they can be directional if we suspect a change or difference in a specific direction, or we can use an inequality sign to test for any change: $H_A$: There is a change or difference $H_A: \mu_D ≠ 0$ $H_A$: The average score increases $H_A: \mu_D > 0$ $H_A$: The average score decreases $H_A: \mu_D < 0$ As before, your choice of which alternative hypothesis to use should be specified before you collect data based on your research question and any evidence you might have that would indicate a specific directional (or non-directional) change. Critical Values and Decision Criteria As before, once we have our hypotheses laid out, we need to find our critical values that will serve as our decision criteria. This step has not changed at all from the last chapter. Our critical values are based on our level of significance (still usually $α$ = 0.05), the directionality of our test (one-tailed or two-tailed), and the degrees of freedom, which are still calculated as $df = n – 1$. Because this is a $t$-test like in the last chapter, we will find our critical values on the same $t$-table using the same process of identifying the correct column based on our significance level and directionality and the correct row based on our degrees of freedom or the next lowest value if our exact degrees of freedom are not presented. After we calculate our test statistic, our decision criteria are the same as well: $p < α$ or $t_{obt} > t^*$. Test Statistic Our test statistic for our change scores follows exactly the same format as it did for our 1-sample $t$-test. In fact, the only difference is in the data that we use. For our change test, we first calculate a difference score as shown above. Then, we use those scores as the raw data in the same mean calculation, standard error formula, and $t$-statistic. Let's look at each of these. The mean difference score is calculated in the same way as any other mean: sum each of the individual difference scores and divide by the sample size.
$\overline{X_{D}}=\dfrac{\Sigma X_{D}}{n}$ Here we are using the subscript $D$ to keep track of the fact that these are difference scores instead of raw scores; it has no actual effect on our calculation. Using this, we calculate the standard deviation of the difference scores the same way as well: $s_{D}=\sqrt{\dfrac{\sum\left(X_{D}-\overline{X_{D}}\right)^{2}}{n-1}}=\sqrt{\dfrac{S S}{d f}}$ We will find the numerator, the Sum of Squares, using the same table format that we learned in chapter 3. Once we have our standard deviation, we can find the standard error: $s_{\overline{X}_{D}}=\dfrac{s_{D}}{\sqrt{n}}$ Finally, our test statistic $t$ has the same structure as well: $t=\dfrac{\overline{X_{D}}-\mu_{D}}{s_{\overline{X}_{D}}}$ As we can see, once we calculate our difference scores from our raw measurements, everything else is exactly the same. Let's see an example.
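To make the sequence of formulas concrete, here is a small R sketch that strings them together for any set of difference scores. The helper name paired_t_from_diffs is our own invention for illustration, not a built-in function:

# Paired-samples t-test computed from difference scores,
# following the formulas above: mean difference, s_D, standard error, then t
paired_t_from_diffs <- function(xd, mu_d = 0) {
  n    <- length(xd)
  m    <- mean(xd)            # mean difference score
  sd_d <- sd(xd)              # standard deviation of the difference scores
  se   <- sd_d / sqrt(n)      # standard error of the mean difference
  t    <- (m - mu_d) / se     # test statistic with df = n - 1
  c(mean_diff = m, s_D = sd_d, se = se, t = t, df = n - 1)
}

# Example: the improvement scores from the training data in the previous section
paired_t_from_diffs(c(3, 0, 6, 2, 2))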
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/09%3A_Repeated_Measures/9.02%3A_Hypotheses_of_Change_and_Differences.txt
Workers at a local company have been complaining that working conditions have gotten very poor, hours are too long, and they don't feel supported by the management. The company hires a consultant to come in and help fix the situation before it gets so bad that the employees start to quit. The consultant first assesses the job satisfaction of 40 of the employees as part of focus groups used to identify specific changes that might help. The company institutes some of these changes, and six months later the consultant returns to measure job satisfaction again. Because the changes were based on the employees' own recommendations, the consultant expects them to help and tests for an increase in average job satisfaction at the $α$ = 0.05 level of significance.

Step 1: State the Hypotheses First, we state our null and alternative hypotheses: $H_0$: There is no change in average job satisfaction $H_0: \mu_D = 0$ $H_A$: There is an increase in average job satisfaction $H_A: \mu_D > 0$ In this case, we are hoping that the changes we made will improve employee satisfaction, and, because we based the changes on employee recommendations, we have good reason to believe that they will. Thus, we will use a one-directional alternative hypothesis.

Step 2: Find the Critical Values Our critical values will once again be based on our level of significance, which we know is $α$ = 0.05, the directionality of our test, which is one-tailed to the right, and our degrees of freedom. For our dependent-samples $t$-test, the degrees of freedom are still given as $df = n – 1$. For this problem, we have 40 people, so our degrees of freedom are 39. Going to our $t$-table, we find that the critical value is $t^*$ = 1.685 as shown in Figure $1$.

Step 3: Calculate the Test Statistic Now that the criteria are set, it is time to calculate the test statistic. The data obtained by the consultant found that the difference scores from time 1 to time 2 had a mean of $\overline{\mathrm{X}_{\mathrm{D}}}$ = 2.96 and a standard deviation of $s_D$ = 2.85. Using this information, plus the size of the sample ($N$ = 40), we first calculate the standard error: $s_{\overline{X}_{D}}=\dfrac{s_{D}}{\sqrt{n}}=\dfrac{2.85}{\sqrt{40}}=\dfrac{2.85}{6.32}=0.45 \nonumber$ Now, we can put that value, along with our sample mean and null hypothesis value, into the formula for $t$ and calculate the test statistic: $t=\dfrac{\overline{X_{D}}-\mu_{D}}{s_{\overline{X}_{D}}}=\dfrac{2.96-0}{0.45}=6.58 \nonumber$ Notice that, because the null hypothesis value of a dependent samples $t$-test is always 0, we can simply divide our obtained sample mean by the standard error.

Step 4: Make the Decision We have obtained a test statistic of $t$ = 6.58 that we can compare to our previously established critical value of $t^*$ = 1.685. 6.58 is larger than 1.685, so $t > t^*$ and we reject the null hypothesis: Reject $H_0$. Based on the sample data from 40 workers, we can say that the intervention statistically significantly improved job satisfaction ($\overline{\mathrm{X}_{\mathrm{D}}}$= 2.96) among the workers, $t(39) = 6.58$, $p < 0.05$. Because this result was statistically significant, we will want to calculate Cohen's $d$ as an effect size using the same format as we did for the last $t$-test: $d=\dfrac{\overline{X_{D}}}{s_{D}}=\dfrac{2.96}{2.85}=1.04 \nonumber$ This is a large effect size. Notice again that we can omit the null hypothesis value here because it is always equal to 0.
Hopefully the above example made it clear that running a dependent samples $t$-test to look for differences before and after some treatment works exactly the same way as a regular 1-sample $t$-test does, which was just a small change in how $z$-tests were performed in chapter 7. At this point, this process should feel familiar, and we will continue to make small adjustments to this familiar process as we encounter new types of data to test new types of research questions.
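Because the consultant's example gives only summary statistics, the whole test can be re-created from those three numbers. A short optional R check (base R only) looks like this:

# Job-satisfaction example from summary statistics
n      <- 40
mean_d <- 2.96      # mean difference score
sd_d   <- 2.85      # standard deviation of the difference scores

se     <- sd_d / sqrt(n)        # about 0.45
t_obt  <- mean_d / se           # about 6.6
t_star <- qt(0.95, df = n - 1)  # about 1.685, one-tailed at alpha = .05

t_obt > t_star                  # TRUE, so we reject the null hypothesis
mean_d / sd_d                   # Cohen's d, about 1.04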
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/09%3A_Repeated_Measures/9.03%3A_Increasing_Satisfaction_at_Work.txt
Let's say that a bank wants to make sure that their new commercial will make them look good to the public, so they recruit 7 people to view the commercial as a focus group. The focus group members fill out a short questionnaire about how they view the company, then watch the commercial and fill out the same questionnaire a second time. The bank really wants to find significant results, so they test for a change at $α$ = 0.10. However, they use a 2-tailed test since they know that past commercials have not gone over well with the public, and they want to make sure the new one does not backfire. They decide to test their hypothesis using a confidence interval to see just how spread out the opinions are. As we will see, confidence intervals work the same way as they did before, just like with the test statistic.

Step 1: State the Hypotheses As always, we start with hypotheses: $H_0$: There is no change in how people view the bank $H_0: \mu_D = 0$ $H_A$: There is a change in how people view the bank $H_A: \mu_D ≠ 0$

Step 2: Find the Critical Values Just like with our regular hypothesis testing procedure, we will need critical values from the appropriate level of significance and degrees of freedom in order to form our confidence interval. Because we have 7 participants, our degrees of freedom are $df$ = 6. From our $t$-table, we find that the critical value corresponding to this $df$ at this level of significance is $t^*$ = 1.943.

Step 3: Calculate the Confidence Interval The data collected before (time 1) and after (time 2) the participants viewed the commercial is presented in Table $1$. In order to build our confidence interval, we will first have to calculate the mean and standard deviation of the difference scores, which are also in Table $1$. As a reminder, the difference scores are calculated as Time 2 – Time 1.

Table $1$: Opinions of the bank
Time 1	Time 2	$X_{D}$
3	2	-1
3	6	3
5	3	-2
8	4	-4
3	9	6
1	2	1
4	5	1

The mean of the difference scores is: $\overline{X_{D}}=\dfrac{\sum X_{D}}{n}=\dfrac{4}{7}=0.57 \nonumber$ The standard deviation will be solved by first using the Sum of Squares Table:

Table $2$: Sum of Squares
$X_{D}$	$X_{D}-\overline{X_{D}}$	$(X_{D}-\overline{X_{D}})^2$
-1	-1.57	2.46
3	2.43	5.90
-2	-2.57	6.60
-4	-4.57	20.88
6	5.43	29.48
1	0.43	0.18
1	0.43	0.18
$\Sigma=4$	$\Sigma=0$	$\Sigma=65.68$

$s_{D}=\sqrt{\dfrac{S S}{d f}}=\sqrt{\dfrac{65.68}{6}}=\sqrt{10.95}=3.31 \nonumber$ Finally, we find the standard error: $s_{\overline{X_{D}}}=\dfrac{s_{D}}{\sqrt{n}}=\dfrac{3.31}{\sqrt{7}}=1.25 \nonumber$ We now have all the pieces needed to compute our confidence interval. Because we used $α$ = 0.10 with a two-tailed critical value, the result is a 90% confidence interval: $\begin{array}{c}{90 \% C I=\overline{X_{D}} \pm t^{*}\left(s_{\bar{X}_{D}}\right)} \\ {90 \% C I=0.57 \pm 1.943(1.25)}\end{array} \nonumber$ $\begin{aligned} \text {Upper Bound} &=0.57+1.943(1.25) \\ U B &=0.57+2.43 \\ U B &=3.00 \end{aligned} \nonumber$ $\begin{aligned} \text {Lower Bound} &=0.57-1.943(1.25) \\ L B &= 0.57-2.43 \\ L B &=-1.86 \end{aligned} \nonumber$ $90 \% C I=(-1.86,3.00) \nonumber$

Step 4: Make the Decision Remember that the confidence interval represents a range of values that seem plausible or reasonable based on our observed data. The interval spans -1.86 to 3.00, which includes 0, our null hypothesis value. Because the null hypothesis value is in the interval, it is considered a reasonable value, and because it is a reasonable value, we have no evidence against it. We fail to reject the null hypothesis. Fail to Reject $H_0$.
Based on our focus group of 7 people, we cannot say that the average change in opinion ($\overline{X_{D}}$ = 0.57) was any better or worse after viewing the commercial, 90% CI = (-1.86, 3.00). As before, we only report the confidence interval to indicate how we performed the test.
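The same interval can be checked in R with the raw scores from Table $1$. The sketch below (optional; base R) builds the 90% interval by hand and then with t.test() using paired = TRUE and conf.level = 0.90:

# Bank-commercial example: 90% confidence interval for the mean difference
time1 <- c(3, 3, 5, 8, 3, 1, 4)
time2 <- c(2, 6, 3, 4, 9, 2, 5)
xd    <- time2 - time1                    # -1  3 -2 -4  6  1  1

m      <- mean(xd)                        # about 0.57
se     <- sd(xd) / sqrt(length(xd))       # about 1.25
t_star <- qt(0.95, df = length(xd) - 1)   # 1.943 (two-tailed alpha = .10)

c(lower = m - t_star * se, upper = m + t_star * se)   # about (-1.86, 3.00)

# Equivalent built-in version
t.test(time2, time1, paired = TRUE, conf.level = 0.90)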
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/09%3A_Repeated_Measures/9.04%3A_Bad_Press.txt
1. What is the difference between a 1-sample $t$-test and a dependent-samples $t$-test? How are they alike?
Answer: A 1-sample $t$-test uses raw scores to compare an average to a specific value. A dependent samples $t$-test uses two raw scores from each person to calculate difference scores and test for an average difference score that is equal to zero. The calculations, steps, and interpretation are exactly the same for each.
2. Name 3 research questions that could be addressed using a dependent samples $t$-test.
3. What are difference scores and why do we calculate them?
Answer: Difference scores indicate change or discrepancy relative to a single person or pair of people. We calculate them to eliminate individual differences in our study of change or agreement.
4. Why is the null hypothesis for a dependent-samples $t$-test always $\mu_D = 0$?
5. A researcher is interested in testing whether explaining the processes of statistics helps increase trust in computer algorithms. He wants to test for a difference at the $α$ = 0.05 level and knows that some people may trust the algorithms less after the training, so he uses a two-tailed test. He gathers pre-post data from 35 people and finds that the average difference score is $\overline{X_{D}}$ = 12.10 with a standard deviation of $s_{D}$ = 17.39. Conduct a hypothesis test to answer the research question.
Answer: Step 1: $H_0: \mu_D = 0$ "The average change in trust of algorithms is 0", $H_A: \mu_D ≠ 0$ "People's opinions of how much they trust algorithms changes." Step 2: Two-tailed test, $df$ = 34, $t^*$ = 2.032. Step 3: $\overline{X_{D}}$ = 12.10, $s_{\overline{X_{D}}}$ = 2.94, $t$ = 4.12. Step 4: $t > t^*$, Reject $H_0$. Based on opinions from 35 people, we can conclude that people trust algorithms more ($\overline{X_{D}}$ = 12.10) after learning statistics, $t(34) = 4.12, p < .05$. Since the result is significant, we need an effect size: Cohen's $d$ = 0.70, which is a moderate to large effect.
6. Decide whether you would reject or fail to reject the null hypothesis in the following situations:
   a. $\overline{X_{D}}$ = 3.50, $s_{D}$ = 1.10, $n$ = 12, $α$ = 0.05, two-tailed test
   b. 95% CI = (0.20, 1.85)
   c. $t$ = 2.98, $t^*$ = -2.36, one-tailed test to the left
   d. 90% CI = (-1.12, 4.36)
7. Calculate difference scores for the following data:
   Time 1	Time 2	$X_D$
   61	83
   75	89
   91	98
   83	92
   74	80
   82	88
   98	98
   82	77
   69	88
   76	79
   91	91
   70	80
Answer:
   Time 1	Time 2	$X_D$
   61	83	22
   75	89	14
   91	98	7
   83	92	9
   74	80	6
   82	88	6
   98	98	0
   82	77	-5
   69	88	19
   76	79	3
   91	91	0
   70	80	10
8. You want to know if an employee's opinion about an organization is the same as the opinion of that employee's boss. You collect data from 18 employee-supervisor pairs and code the difference scores so that positive scores indicate that the employee has a higher opinion and negative scores indicate that the boss has a higher opinion (meaning that difference scores of 0 indicate no difference and complete agreement). You find that the mean difference score is $\overline{X_{D}}$ = -3.15 with a standard deviation of $s_D$ = 1.97. Test this hypothesis at the $α$ = 0.01 level.
9. Construct confidence intervals from a mean of $\overline{X_{D}}$ = 1.25, standard error of $s_{\overline{X_{D}}}$ = 0.45, and $df$ = 10 at the 90%, 95%, and 99% confidence level. Describe what happens as confidence changes and whether to reject $H_0$.
Answer: At the 90% confidence level, $t^*$ = 1.812 and CI = (0.43, 2.07) so we reject $H_0$. At the 95% confidence level, $t^*$ = 2.228 and CI = (0.25, 2.25) so we reject $H_0$. At the 99% confidence level, $t^*$ = 3.169 and CI = (-0.18, 2.68) so we fail to reject $H_0$. As the confidence level goes up, our interval gets wider (which is why we have higher confidence), and eventually we do not reject the null hypothesis because the interval is so wide that it contains 0.
10. A professor wants to see how much students learn over the course of a semester. A pre-test is given before the class begins to see what students know ahead of time, and the same test is given at the end of the semester to see what students know at the end. The data are below. Test for an improvement at the $α$ = 0.05 level. Did scores increase? How much did scores increase?
   Pretest	Posttest	$X_D$
   90	89
   60	66
   95	99
   93	91
   95	100
   67	64
   89	91
   90	95
   94	95
   83	89
   75	82
   87	92
   82	83
   82	85
   88	93
   66	69
   90	90
   93	100
   86	95
   91	96
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/09%3A_Repeated_Measures/9.05%3A_Exercises.txt
Last chapter, we learned about mean differences, that is, the average value of difference scores. Those difference scores came from ONE group and TWO time points (or two perspectives). Now, we will deal with the difference of the means, that is, the average values of separate groups that are represented by separate descriptive statistics. This analysis involves TWO groups and ONE time point. As with all of our other tests as well, both of these analyses are concerned with a single variable. It is very important to keep these two tests separate and understand the distinctions between them because they assess very different questions and require different approaches to the data. When in doubt, think about how the data were collected and where they came from. If they came from two time points with the same people (sometimes referred to as "longitudinal" data), you know you are working with repeated measures data (the measurement literally was repeated) and will use a repeated measures/dependent samples \(t\)-test. If it came from a single time point that used separate groups, you need to look at the nature of those groups and if they are related. Can individuals in one group be meaningfully matched up with one and only one individual from the other group? For example, are they a romantic couple? If so, we call those data matched and we use a matched pairs/dependent samples \(t\)-test. However, if there's no logical or meaningful way to link individuals across groups, or if there is no overlap between the groups, then we say the groups are independent and use the independent samples \(t\)-test, the subject of this chapter.

10.02: Research Questions about Independent Means

Many research ideas in the behavioral sciences and other areas of research are concerned with whether or not two means are the same or different. Logically, we therefore say that these research questions are concerned with group mean differences. That is, on average, do we expect a person from Group A to be higher or lower on some variable than a person from Group B? In any type of research design looking at group mean differences, there are some key criteria we must consider: the groups must be mutually exclusive (i.e. you can only be part of one group at any given time) and the groups have to be measured on the same variable (i.e. you can't compare personality in one group to reaction time in another group since those values would not be comparable). Let's look at one of the most common and logical examples: testing a new medication. When a new medication is developed, the researchers who created it need to demonstrate that it effectively treats the symptoms they are trying to alleviate. The simplest design that will answer this question involves two groups: one group that receives the new medication (the "treatment" group) and one group that receives a placebo (the "control" group). Participants are randomly assigned to one of the two groups (remember that random assignment is the hallmark of a true experiment), and the researchers test the symptoms in each person in each group after they received either the medication or the placebo. They then calculate the average symptoms in each group and compare them to see if the treatment group did better (i.e. had fewer or less severe symptoms) than the control group. In this example, we had two groups: treatment and control. Membership in these two groups was mutually exclusive: each individual participant received either the experimental medication or the placebo.
No one in the experiment received both, so there was no overlap between the two groups. Additionally, each group could be measured on the same variable: symptoms related to the disease or ailment being treated. Because each group was measured on the same variable, the average scores in each group could be meaningfully compared. If the treatment was ineffective, we would expect that the average symptoms of someone receiving the treatment would be the same as the average symptoms of someone receiving the placebo (i.e. there is no difference between the groups). However, if the treatment WAS effective, we would expect fewer symptoms from the treatment group, leading to a lower group average. Now let’s look at an example using groups that already exist. A common, and perhaps salient, question is how students feel about their job prospects after graduation. Suppose that we have narrowed our potential choice of college down to two universities and, in the course of trying to decide between the two, we come across a survey that has data from each university on how students at those universities feel about their future job prospects. As with our last example, we have two groups: University A and University B, and each participant is in only one of the two groups (assuming there are no transfer students who were somehow able to rate both universities). Because students at each university completed the same survey, they are measuring the same thing, so we can use a \(t\)-test to compare the average perceptions of students at each university to see if they are the same. If they are the same, then we should continue looking for other things about each university to help us decide on where to go. But, if they are different, we can use that information in favor of the university with higher job prospects. As we can see, the grouping variable we use for an independent samples \(t\)-test can be a set of groups we create (as in the experimental medication example) or groups that already exist naturally (as in the university example). There are countless other examples of research questions relating to two group means, making the independent samples \(t\)-test one of the most widely used analyses around.
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/10%3A__Independent_Samples/10.01%3A_Difference_of_Means.txt
The process of testing hypotheses using an independent samples $t$-test is the same as it was in the last three chapters, and it starts with stating our hypotheses and laying out the criteria we will use to test them. Our null hypothesis for an independent samples $t$-test is the same as all others: there is no difference. The means of the two groups are the same under the null hypothesis, no matter how those groups were formed. Mathematically, this takes on two equivalent forms: $H_{0}: \mu_{1}=\mu_{2} \nonumber$ or $H_{0}: \mu_{1}-\mu_{2}=0 \nonumber$ Both of these formulations of the null hypothesis tell us exactly the same thing: that the numerical value of the means is the same in both groups. This is more clear in the first formulation, but the second formulation also makes sense (any number minus itself is always zero) and helps us out a little when we get to the math of the test statistic. Either one is acceptable and you only need to report one. The English interpretation of both of them is also the same: $H_{0}: \text { There is no difference between the means of the two groups } \nonumber$ Our alternative hypotheses are also unchanged: we simply replace the equal sign (=) with one of the three inequalities (>, <, ≠): $\begin{array}{l}{H_{A}: \mu_{1}>\mu_{2}} \\ {H_{A}: \mu_{1}<\mu_{2}} \\ {H_{A}: \mu_{1} \neq \mu_{2}}\end{array} \nonumber$ Or $\begin{array}{l}{H_{A}: \mu_{1}-\mu_{2}>0} \\ {H_{A}: \mu_{1}-\mu_{2}<0} \\ {H_{A}: \mu_{1}-\mu_{2} \neq 0}\end{array} \nonumber$ Whichever formulation you choose for the null hypothesis should be the one you use for the alternative hypothesis (be consistent), and the interpretation of them is always the same: $H_{A}: \text {There is a difference between the means of the two groups} \nonumber$ Notice that we are now dealing with two means instead of just one, so it will be very important to keep track of which mean goes with which population and, by extension, which dataset and sample data. We use subscripts to differentiate between the populations, so make sure to keep track of which is which. If it is helpful, you can also use more descriptive subscripts. To use the experimental medication example: $H_0$: There is no difference between the means of the treatment and control groups $H_{0}: \mu_{\text {treatment}}=\mu_{\text {control}}$ $H_A$: There is a difference between the means of the treatment and control groups $H_{A}: \mu_{\text {treatment}} \neq \mu_{\text {control}}$ Once we have our hypotheses laid out, we can set our criteria to test them using the same three pieces of information as before: significance level ($α$), directionality (left, right, or two-tailed), and degrees of freedom, which for an independent samples $t$-test are: $d f=n_{1}+n_{2}-2 \nonumber$ This looks different than before, but it is just adding the individual degrees of freedom from each group ($n – 1$) together. Notice that the sample sizes, $n$, also get subscripts so we can tell them apart. For an independent samples $t$-test, it is often the case that our two groups will have slightly different sample sizes, either due to chance or some characteristic of the groups themselves. Generally, this is not an issue, so long as one group is not massively larger than the other group. What is of greater concern is keeping track of which is which using the subscripts.
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/10%3A__Independent_Samples/10.03%3A_Hypotheses_and_Decision_Criteria.txt
The test statistic for our independent samples $t$-test takes on the same logical structure and format as our other $t$-tests: our observed effect minus our null hypothesis value, all divided by the standard error: $t=\dfrac{(\overline{X_{1}}-\overline{X_{2}})-\left(\mu_{1}-\mu_{2}\right)}{s_{\overline{X}_{1}-\overline{X_{2}}}}$ This looks like more work to calculate, but remember that our null hypothesis states that the quantity $\mu_{1}-\mu_{2}=0$, so we can drop that out of the equation and are left with: $t=\dfrac{(\overline{X_{1}}-\overline{X_{2}})}{s_{\overline{X}_{1}-\overline{X_{2}}}}$ Our standard error in the denominator is still denoted $s$ with a subscript indicating what it is the standard error of. Because we are dealing with the difference between two separate means, rather than a single mean or single mean of difference scores, we put both means in the subscript. Calculating our standard error, as we will see next, is where the biggest differences between this $t$-test and other $t$-tests appear. However, once we do calculate it and use it in our test statistic, everything else goes back to normal. Our decision criterion is still comparing our obtained test statistic to our critical value, and our interpretation based on whether or not we reject the null hypothesis is unchanged as well.

10.05: Standard Error and Pooled Variance

Recall that the standard error is the average distance between any given sample mean and the center of its corresponding sampling distribution, and it is a function of the standard deviation of the population (either given or estimated) and the sample size. This definition and interpretation hold true for our independent samples $t$-test as well, but because we are working with two samples drawn from two populations, we have to first combine their estimates of standard deviation – or, more accurately, their estimates of variance – into a single value that we can then use to calculate our standard error. The combined estimate of variance using the information from each sample is called the pooled variance and is denoted $s_{p}^{2}$; the subscript $p$ serves as a reminder indicating that it is the pooled variance. The term "pooled variance" is a literal name because we are simply pooling or combining the information on variance – the Sum of Squares and Degrees of Freedom – from both of our samples into a single number. The result is a weighted average of the observed sample variances, the weight for each being determined by the sample size, and will always fall between the two observed variances. The computational formula for the pooled variance is: $s_{p}^{2}=\dfrac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{n_{1}+n_{2}-2}$ This formula can look daunting at first, but it is in fact just a weighted average. Even more conveniently, some simple algebra can be employed to greatly reduce the complexity of the calculation. The simpler and more appropriate formula to use when calculating pooled variance is: $s_{p}^{2}=\dfrac{S S_{1}+S S_{2}}{d f_{1}+d f_{2}}$ Using this formula, it's very simple to see that we are just adding together the same pieces of information we have been calculating since chapter 3. Thus, when we use this formula, the pooled variance is not nearly as intimidating as it might have originally seemed.
Once we have our pooled variance calculated, we can drop it into the equation for our standard error: $S_{\overline{X_{1}}-\overline{X_{2}}}=\sqrt{\dfrac{S_{p}^{2}}{n_{1}}+\dfrac{S_{p}^{2}}{n_{2}}}$ Once again, although this formula may seem different than it was before, in reality it is just a different way of writing the same thing. An alternative but mathematically equivalent way of writing our old standard error is: $s_{\overline{X}}=\dfrac{s}{\sqrt{n}}=\sqrt{\dfrac{s^{2}}{n}}$ Looking at that, we can now see that, once again, we are simply adding together two pieces of information: no new logic or interpretation required. Once the standard error is calculated, it goes in the denominator of our test statistic, as shown above and as was the case in all previous chapters. Thus, the only additional step to calculating an independent samples t-statistic is computing the pooled variance. Let’s see an example in action.
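Since the pooled variance is just the two Sums of Squares over the two degrees of freedom, the pooled variance and standard error can be wrapped into a few lines of R. The helper below is only a sketch of the formulas above; pooled_se is our own illustrative name, and the numbers in the example call are made up:

# Pooled variance and standard error for two independent samples,
# computed from the Sums of Squares and sample sizes as in the text
pooled_se <- function(ss1, ss2, n1, n2) {
  sp2 <- (ss1 + ss2) / ((n1 - 1) + (n2 - 1))   # pooled variance
  sqrt(sp2 / n1 + sp2 / n2)                    # standard error of the difference in means
}

# Hypothetical example values, just to show the call
pooled_se(ss1 = 120, ss2 = 90, n1 = 12, n2 = 10)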
textbooks/stats/Applied_Statistics/An_Introduction_to_Psychological_Statistics_(Foster_et_al.)/10%3A__Independent_Samples/10.04%3A_Independent_Samples_t-statistic.txt
We are interested in whether the type of movie someone sees at the theater affects their mood when they leave. We decide to ask people about their mood as they leave one of two movies: a comedy (group 1, $n$ = 35) or a horror film (group 2, $n$ = 29). Our data are coded so that higher scores indicate a more positive mood. We have good reason to believe that people leaving the comedy will be in a better mood, so we use a one-tailed test at $α$ = 0.05 to test our hypothesis.

Step 1: State the Hypotheses As always, we start with hypotheses: $H_0$: There is no difference in average mood between the two movie types $\mathrm{H}_{0}: \mu_{1}-\mu_{2}=0$ or $\mathrm{H}_{0}: \mu_{1}=\mu_{2}$ $H_A$: The comedy film will give a better average mood than the horror film $\mathrm{H}_{\mathrm{A}}: \mu_{1}-\mu_{2}>0$ or $\mathrm{H}_{\mathrm{A}}: \mu_{1}>\mu_{2}$ Notice that in the first formulation of the alternative hypothesis we say that the first mean minus the second mean will be greater than zero. This is based on how we code the data (higher is better), so we suspect that the mean of the first group will be higher. Thus, we will have a larger number minus a smaller number, which will be greater than zero. Be sure to pay attention to which group is which and how your data are coded (higher is almost always used as better outcomes) to make sure your hypothesis makes sense!

Step 2: Find the Critical Values Just like before, we will need critical values, which come from our $t$-table. In this example, we have a one-tailed test at $α$ = 0.05 and expect a positive answer (because we expect the difference between the means to be greater than zero). Our degrees of freedom for our independent samples $t$-test are just the degrees of freedom from each group added together: $df = n_1 + n_2 – 2$ = 35 + 29 – 2 = 62. From our $t$-table, we find that our critical value is $t^*$ = 1.671. Note that because 62 does not appear on the table, we use the next lowest value, which in this case is 60.

Step 3: Compute the Test Statistic The data from our two groups are presented in the tables below. Table $1$ shows the values for the Comedy group, and Table $2$ shows the values for the Horror group. Values for both have already been placed in the Sum of Squares tables since we will need to use them for our further calculations. As always, the column on the left is our raw data.
Table $1$: Raw scores and Sum of Squares for Group 1 $X$ $(X-\overline{X})$ $(X-\overline{X})^{2}$ Group 1: Comedy Film 39.10 15.10 228.01 38.00 14.00 196.00 14.90 -9.10 82.81 20.70 -3.30 10.89 19.50 -4.50 20.25 32.20 8.20 67.24 11.00 -13.00 169.00 20.70 -3.30 10.89 26.40 2.40 5.76 35.70 11.70 136.89 26.40 2.40 5.76 28.80 4.80 23.04 33.40 9.40 88.36 13.70 -10.30 106.09 46.10 22.10 488.41 13.70 -10.30 106.09 23.00 -1.00 1.00 20.70 -3.30 10.89 19.50 -4.50 20.25 11.40 -12.60 158.76 24.10 0.10 0.01 17.20 -6.80 46.24 38.00 14.00 196.00 10.30 -13.70 187.69 35.70 11.70 136.89 41.50 17.50 306.25 18.40 -5.60 31.36 36.80 12.80 163.84 54.10 30.10 906.01 11.40 -12.60 158.76 8.70 -15.30 234.09 23.00 -1.00 1.00 14.30 -9.70 94.09 5.30 -18.70 349.69 6.30 -17.70 313.29 $\Sigma = 840$ $\Sigma = 0$ $\Sigma = 5061.60$ Table $2$: Raw scores and Sum of Squares for Group 2 $X$ $(X-\overline{X})$ $(X-\overline{X})^{2}$ Group 2: Horror Film 24.00 7.50 56.25 17.00 0.50 0.25 35.80 19.30 372.49 18.00 1.50 2.25 -1.70 -18.20 331.24 11.10 -5.40 29.16 10.10 -6.40 40.96 16.10 -0.40 0.16 -0.70 -17.20 295.84 14.10 -2.40 5.76 25.90 9.40 88.36 23.00 6.50 42.25 20.00 3.50 12.25 14.10 -2.40 5.76 -1.70 -18.20 331.24 19.00 2.50 6.25 20.00 3.50 12.25 30.90 14.40 207.36 30.90 14.40 207.36 22.00 5.50 30.25 6.20 -10.30 106.09 27.90 11.40 129.96 14.10 -2.40 5.76 33.80 17.30 299.29 26.90 10.40 108.16 5.20 -11.30 127.69 13.10 -3.40 11.56 19.00 2.50 6.25 -15.50 -32.00 1024.00 $\Sigma = 478.6$ $\Sigma = 0.10$ $\Sigma = 3896.45$ Using the sum of the first column for each table, we can calculate the mean for each group: $\overline{X_{1}}=\dfrac{840}{35}=24.00 \nonumber$ And $\overline{X_{2}}=\dfrac{478.60}{29}=16.50 \nonumber$ These values were used to calculate the middle rows of each table, which sum to zero as they should (the middle column for group 2 sums to a very small value instead of zero due to rounding error – the exact mean is 16.50344827586207, but that’s far more than we need for our purposes). Squaring each of the deviation scores in the middle columns gives us the values in the third columns, which sum to our next important value: the Sum of Squares for each group: $SS_1$ = 5061.60 and $SS_2$ = 3896.45. These values have all been calculated and take on the same interpretation as they have since chapter 3 – no new computations yet. Before we move on to the pooled variance that will allow us to calculate standard error, let’s compute our standard deviation for each group; even though we will not use them in our calculation of the test statistic, they are still important descriptors of our data: $s_{1}=\sqrt{\dfrac{5061.60}{34}}=12.20 \nonumber$ And $s_{2}=\sqrt{\dfrac{3896.45}{28}}=11.80 \nonumber$ Now we can move on to our new calculation, the pooled variance, which is just the Sums of Squares that we calculated from our table and the degrees of freedom, which is just $n – 1$ for each group: $s_{p}^{2}=\dfrac{S S_{1}+S S_{2}}{d f_{1}+d f_{2}}=\dfrac{5061.60+3896.45}{34+28}=\dfrac{8958.05}{62}=144.48 \nonumber$ As you can see, if you follow the regular process of calculating standard deviation using the Sum of Squares table, finding the pooled variance is very easy. 
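If you prefer to let software do the table arithmetic, the scores typed from Tables 1 and 2 above reproduce the same means, Sums of Squares, standard deviations, and pooled variance; a short R sketch (values in the comments are rounded):

comedy <- c(39.1, 38.0, 14.9, 20.7, 19.5, 32.2, 11.0, 20.7, 26.4, 35.7,
            26.4, 28.8, 33.4, 13.7, 46.1, 13.7, 23.0, 20.7, 19.5, 11.4,
            24.1, 17.2, 38.0, 10.3, 35.7, 41.5, 18.4, 36.8, 54.1, 11.4,
            8.7, 23.0, 14.3, 5.3, 6.3)                    # Group 1, n = 35
horror <- c(24.0, 17.0, 35.8, 18.0, -1.7, 11.1, 10.1, 16.1, -0.7, 14.1,
            25.9, 23.0, 20.0, 14.1, -1.7, 19.0, 20.0, 30.9, 30.9, 22.0,
            6.2, 27.9, 14.1, 33.8, 26.9, 5.2, 13.1, 19.0, -15.5)  # Group 2, n = 29

mean(comedy); mean(horror)               # about 24.00 and 16.50
SS1 <- sum((comedy - mean(comedy))^2)    # about 5061.60
SS2 <- sum((horror - mean(horror))^2)    # about 3896.45
sd(comedy); sd(horror)                   # about 12.20 and 11.80
(SS1 + SS2) / (34 + 28)                  # pooled variance, about 144.48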
Now we can use that value to calculate our standard error, the last step before we can find our test statistic: $s_{\overline{X_{1}}-\overline{X_{2}}}=\sqrt{\dfrac{s_{p}^{2}}{n_{1}}+\dfrac{s_{p}^{2}}{n_{2}}}=\sqrt{\dfrac{144.48}{35}+\dfrac{144.48}{29}}=\sqrt{4.13+4.98}=\sqrt{9.11}=3.02 \nonumber$ Finally, we can use our standard error and the means we calculated earlier to compute our test statistic. Because the null hypothesis value of $μ_1 – μ_2$ is 0.00, we will leave that portion out of the equation for simplicity: $t=\dfrac{(\overline{X_{1}}-\overline{X_{2}})}{s_{\overline{X}_{1}-\overline{X_{2}}}}=\dfrac{24.00-16.50}{3.02}=2.48 \nonumber$ The process of calculating our obtained test statistic $t$ = 2.48 followed the same sequence of steps as before: use raw data to compute the mean and sum of squares (this time for two groups instead of one), use the sum of squares and degrees of freedom to calculate standard error (this time using pooled variance instead of standard deviation), and use that standard error and the observed means to get t. Now we can move on to the final step of the hypothesis testing procedure. Step 4: Make the Decision Our test statistic has a value of $t$ = 2.48, and in step 2 we found that the critical value is $t*$ = 1.671. 2.48 > 1.671, so we reject the null hypothesis: Reject $H_0$. Based on our sample data from people who watched different kinds of movies, we can say that the average mood after a comedy movie ($\overline{X_{1}}=24.00$) is better than the average mood after a horror movie ($\overline{X_{2}}=16.50$), $t(62) = 2.48, p < .05$.
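The remaining steps can be checked the same way from the summary values alone; a brief R sketch (qt() gives the exact one-tailed critical value, which is slightly smaller than the 1.671 read from the table at $df$ = 60):

sp2 <- 144.48; n1 <- 35; n2 <- 29
se <- sqrt(sp2 / n1 + sp2 / n2)          # standard error, about 3.02
t_obt <- (24.00 - 16.50) / se            # obtained t, about 2.48
t_crit <- qt(0.95, df = n1 + n2 - 2)     # one-tailed critical value, about 1.67
t_obt > t_crit                           # TRUE, so we reject the null hypothesis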
We have seen in previous chapters that even a statistically significant effect needs to be interpreted along with an effect size to see if it is practically meaningful. We have also seen that our sample means, as a point estimate, are not perfect and would be better represented by a range of values that we call a confidence interval. As with all other topics, this is also true of our independent samples $t$-tests. Our effect size for the independent samples $t$-test is still Cohen’s $d$, and it is still just our observed effect divided by the standard deviation. Remember that standard deviation is just the square root of the variance, and because we work with pooled variance in our test statistic, we will use the square root of the pooled variance as our denominator in the formula for Cohen’s $d$. This gives us: $d=\dfrac{\overline{X_{1}}-\overline{X_{2}}}{\sqrt{s_{p}^{2}}}$ For our example above, we can calculate the effect size to be: $d=\dfrac{24.00-16.50}{\sqrt{144.48}}=\dfrac{7.50}{12.02}=0.62 \nonumber$ We interpret this using the same guidelines as before, so we would consider this a moderate or moderately large effect. Our confidence intervals also take on the same form and interpretation as they have in the past. The value we are interested in is the difference between the two means, so our point estimate is the value of one mean minus the other, or xbar1 minus xbar2. Just like before, this is our observed effect and is the same value as the one we place in the numerator of our test statistic. We calculate this value then place the margin of error – still our critical value times our standard error – above and below it. That is: $\text { Confidence Interval }=(\overline{X_{1}}-\overline{X_{2}}) \pm t^{*}\left(s_{\overline{X_{1}}-\overline{X_{2}}}\right)$ Because our hypothesis testing example used a one-tailed test, it would be inappropriate to calculate a confidence interval on those data (remember that we can only calculate a confidence interval for a two-tailed test because the interval extends in both directions). Let’s say we find summary statistics on the average life satisfaction of people from two different towns and want to create a confidence interval to see if the difference between the two might actually be zero. Our sample data are $\overline{X_{1}}=28.65\; \mathrm{s}_{1}=12.40\; \mathrm{n}_{1}=40$ and $\overline{X_{2}}=25.40 \mathrm{s}_{2}=15.68 \mathrm{n}_{2}=42$. At face value, it looks like the people from the first town have higher life satisfaction (28.65 vs. 25.40), but it will take a confidence interval (or complete hypothesis testing process) to see if that is true or just due to random chance. First, we want to calculate the difference between our sample means, which is 28.65 – 25.40 = 3.25. Next, we need a critical value from our $t$-table. If we want to test at the normal 95% level of confidence, then our sample sizes will yield degrees of freedom equal to 40 + 42 – 2 = 80. From our table, that gives us a critical value of $t*$ = 1.990. Finally, we need our standard error. Recall that our standard error for an independent samples $t$-test uses pooled variance, which requires the Sum of Squares and degrees of freedom. Up to this point, we have calculated the Sum of Squares using raw data, but in this situation, we do not have access to it. So, what are we to do? If we have summary data like standard deviation and sample size, it is very easy to calculate the pooled variance, and the key lies in rearranging the formulas to work backwards through them. 
We need the Sum of Squares and degrees of freedom to calculate our pooled variance. Degrees of freedom is very simple: we just take the sample size minus 1 for each group. Getting the Sum of Squares is also easy: remember that variance is the standard deviation squared, and it equals the Sum of Squares divided by the degrees of freedom. That is:

$s^{2}=\dfrac{SS}{df}$

To get the Sum of Squares, we just multiply both sides of the above equation by $df$ to get:

$SS=s^{2} * df$

That is, the squared standard deviation multiplied by the degrees of freedom ($n-1$) equals the Sum of Squares. Using our example data:

$SS_{1}=\left(s_{1}\right)^{2} * df_{1}=(12.40)^{2} *(40-1)=5996.64 \nonumber$

$SS_{2}=\left(s_{2}\right)^{2} * df_{2}=(15.68)^{2} *(42-1)=10080.36 \nonumber$

And thus our pooled variance equals:

$s_{p}^{2}=\dfrac{SS_{1}+SS_{2}}{df_{1}+df_{2}}=\dfrac{5996.64+10080.36}{39+41}=\dfrac{16077}{80}=200.96 \nonumber$

And our standard error equals:

$s_{\overline{X_{1}}-\overline{X_{2}}}=\sqrt{\dfrac{s_{p}^{2}}{n_{1}}+\dfrac{s_{p}^{2}}{n_{2}}}=\sqrt{\dfrac{200.96}{40}+\dfrac{200.96}{42}}=\sqrt{5.02+4.78}=\sqrt{9.80}=3.13 \nonumber$

All of these steps are just slightly different ways of using the same formulae, numbers, and ideas we have worked with up to this point. Once we get our standard error, it's time to build our confidence interval.

$95 \% \, CI=3.25 \pm 1.990(3.13) \nonumber$

$\text {Upper Bound}=3.25+1.990(3.13)=3.25+6.23=9.48 \nonumber$

$\text {Lower Bound}=3.25-1.990(3.13)=3.25-6.23=-2.98 \nonumber$

$95 \% \, CI=(-2.98, 9.48) \nonumber$

Our confidence interval, as always, represents a range of values that would be considered reasonable or plausible based on our observed data. In this instance, our interval (-2.98, 9.48) does contain zero. Thus, even though the means look a little bit different, it may very well be the case that the life satisfaction in both of these towns is the same. Proving otherwise would require more data.
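Both the effect size and the confidence interval can be reproduced from summary statistics alone; a short R sketch covering the movie-example Cohen's $d$ and the two-towns interval (results match the hand calculations up to rounding):

d <- (24.00 - 16.50) / sqrt(144.48)   # Cohen's d for the movie example, about 0.62

x1 <- 28.65; s1 <- 12.40; n1 <- 40    # summary data for town 1
x2 <- 25.40; s2 <- 15.68; n2 <- 42    # summary data for town 2
SS1 <- s1^2 * (n1 - 1)                # about 5996.64
SS2 <- s2^2 * (n2 - 1)                # about 10080.36
sp2 <- (SS1 + SS2) / (n1 + n2 - 2)    # pooled variance, about 200.96
se  <- sqrt(sp2 / n1 + sp2 / n2)      # standard error, about 3.13
moe <- 1.990 * se                     # margin of error using the table value t* = 1.990
(x1 - x2) + c(-moe, moe)              # roughly (-2.98, 9.48); the interval contains zero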
Before wrapping up the coverage of independent samples t-tests, there is one other important topic to cover. Using the pooled variance to calculate the test statistic relies on an assumption known as homogeneity of variance. In statistics, an assumption is some characteristic that we assume is true about our data, and our ability to use our inferential statistics accurately and correctly relies on these assumptions being true. If these assumptions are not true, then our analyses are at best ineffective (e.g. low power to detect effects) and at worst inappropriate (e.g. too many Type I errors). A detailed coverage of assumptions is beyond the scope of this course, but it is important to know that they exist for all analyses. For the current analysis, one important assumption is homogeneity of variance. This is fancy statistical talk for the idea that the true population variance for each group is the same and any difference in the observed sample variances is due to random chance (if this sounds eerily similar to the idea of testing the null hypothesis that the true population means are equal, that’s because it is exactly the same!) This notion allows us to compute a single pooled variance that uses our easily calculated degrees of freedom. If the assumption is shown to not be true, then we have to use a very complicated formula to estimate the proper degrees of freedom. There are formal tests to assess whether or not this assumption is met, but we will not discuss them here. Many statistical programs incorporate the test of homogeneity of variance automatically and can report the results of the analysis assuming it is true or assuming it has been violated. You can easily tell which is which by the degrees of freedom: the corrected degrees of freedom (which is used when the assumption of homogeneity of variance is violated) will have decimal places. Fortunately, the independent samples \(t\)-test is very robust to violations of this assumption (an analysis is “robust” if it works well even when its assumptions are not met), which is why we do not bother going through the tedious work of testing and estimating new degrees of freedom by hand. 10.09: Exercises 1. What is meant by “the difference of the means” when talking about an independent samples $t$-test? How does it differ from the “mean of the differences” in a repeated measures $t$-test? Answer: The difference of the means is one mean, calculated from a set of scores, compared to another mean which is calculated from a different set of scores; the independent samples $t$-test looks for whether the two separate values are different from one another. This is different than the “mean of the differences” because the latter is a single mean computed on a single set of difference scores that come from one data collection of matched pairs. So, the difference of the means deals with two numbers but the mean of the differences is only one number. 1. Describe three research questions that could be tested using an independent samples $t$-test. 2. Calculate pooled variance from the following raw data: Group 1 Group 2 16 4 11 10 9 15 7 13 5 12 4 9 12 8 Answer: $\mathrm{SS}_{1}=106.86, \mathrm{SS}_{2}=78.86, s_{p}^{2}=15.48$ 1. Calculate the standard error from the following descriptive statistics 1. $s_1$ = 24, $s_2$ = 21, $n_1$ = 36, $n_2$ = 49 2. $s_1$ = 15.40, $s_2$ = 14.80, $n_1$ = 20, $n_2$ = 23 3. $s_1$ = 12, $s_2$ = 10, $n_1$ = 25, $n_2$ = 25 2. Determine whether to reject or fail to reject the null hypothesis in the following situations: 1. 
$t(40) = 2.49$, $α = 0.01$, one-tailed test to the right 2. $\overline{X_{1}}=64, \overline{X_{2}}=54, n_{1}=14, n_{2}=12, s_{\overline{X_{1}}-\overline{X_{2}}}=9.75, \alpha=0.05$, two-tailed test 3. 95% Confidence Interval: (0.50, 2.10)

Answer: 1. Reject 2. Fail to Reject 3. Reject

1. A professor is interested in whether or not the type of software program used in a statistics lab affects how well students learn the material. The professor teaches the same lecture material to two classes but has one class use a point-and-click software program in lab and has the other class use a basic programming language. The professor tests for a difference between the two classes on their final exam scores.

Point-and-Click: 83, 83, 63, 77, 86, 84, 78, 61, 65, 75, 100, 60, 90, 66, 54
Programming: 86, 79, 100, 74, 70, 67, 83, 85, 74, 86, 87, 61, 76, 100

1. A researcher wants to know if there is a difference in how busy someone is based on whether that person identifies as an early bird or a night owl. The researcher gathers data from people in each group, coding the data so that higher scores represent higher levels of being busy, and tests for a difference between the two at the .05 level of significance.

Early Bird: 23, 28, 27, 33, 26, 30, 22, 25, 26
Night Owl: 26, 10, 20, 19, 26, 18, 12, 25

Answer: Step 1: $H_0: μ_1 – μ_2 = 0$ "There is no difference in the average busyness of early birds versus night owls", $H_A: μ_1 – μ_2 ≠ 0$ "There is a difference in the average busyness of early birds versus night owls." Step 2: Two-tailed test, $df$ = 15, $t*$ = 2.131. Step 3: $\overline{X_{1}}=26.67, \overline{X_{2}}=19.50, s_{p}^{2}=23.73, s_{\overline{X_{1}}-\overline{X_{2}}}=2.37$ Step 4: $t > t*$, Reject $H_0$. Based on our data of early birds and night owls, we can conclude that early birds are busier ($\overline{X_{1}}=26.67$) than night owls ($\overline{X_{2}}=19.50$), $t(15) = 3.03$, $p < .05$. Since the result is significant, we need an effect size: Cohen's $d = 1.47$, which is a large effect.

1. Lots of people claim that having a pet helps lower their stress level. Use the following summary data to test the claim that there is a lower average stress level among pet owners (group 1) than among non-owners (group 2) at the .05 level of significance. $\overline{X_{1}}=16.25, \overline{X_{2}}=20.95, s_{1}=4.00, s_{2}=5.10, n_{1}=29, n_{2}=25 \nonumber$

2. Administrators at a university want to know if students in different majors are more or less extroverted than others. They provide you with descriptive statistics they have for English majors (coded as 1) and History majors (coded as 2) and ask you to create a confidence interval of the difference between them. Does this confidence interval suggest that the students from the majors differ? $\overline{X_{1}}=3.78, \overline{X_{2}}=2.23, s_{1}=2.60, s_{2}=1.15, n_{1}=45, n_{2}=40 \nonumber$

Answer: $\overline{X_{1}}-\overline{X_{2}}=1.55, \mathrm{t}^{*}=1.990, s_{\overline{X_{1}}-\overline{X_{2}}}=0.45$, CI = (0.66, 2.44). This confidence interval does not contain zero, so it does suggest that there is a difference between the extroversion of English majors and History majors.

1. Researchers want to know if people's awareness of environmental issues varies as a function of where they live. The researchers have the following summary data from two states, Alaska and Hawaii, that they want to use to test for a difference. $\overline{\mathrm{X}_{H}}=47.50, \overline{\mathrm{X}_{A}}=45.70, \mathrm{s}_{\mathrm{H}}=14.65, \mathrm{s}_{\mathrm{A}}=13.20, \mathrm{n}_{\mathrm{H}}=139, \mathrm{n}_{\mathrm{A}}=150 \nonumber$
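Answers to the pooled-variance exercise above can be checked in a few lines of R. The two vectors below read the flattened two-column table as alternating Group 1 / Group 2 values (an assumption about its layout), and they reproduce the printed answer:

g1 <- c(16, 11, 9, 7, 5, 4, 12)            # Group 1 scores
g2 <- c(4, 10, 15, 13, 12, 9, 8)           # Group 2 scores
SS1 <- sum((g1 - mean(g1))^2)              # about 106.86
SS2 <- sum((g2 - mean(g2))^2)              # about 78.86
(SS1 + SS2) / ((length(g1) - 1) + (length(g2) - 1))   # pooled variance, about 15.48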
We have seen time and again that scores, be they individual data or group means, will differ naturally. Sometimes this is due to random chance, and other times it is due to actual differences. Our job as scientists, researchers, and data analysts is to determine if the observed differences are systematic and meaningful (via a hypothesis test) and, if so, what is causing those differences. Through this, it becomes clear that, although we are usually interested in the mean or average score, it is the variability in the scores that is key. Take a look at Figure \(1\), which shows scores for many people on a test of skill used as part of a job application. The \(x\)-axis has each individual person, in no particular order, and the \(y\)-axis contains the score each person received on the test. As we can see, the job applicants differed quite a bit in their performance, and understanding why that is the case would be extremely useful information. However, there’s no interpretable pattern in the data, especially because we only have information on the test, not on any other variable (remember that the x-axis here only shows individual people and is not ordered or interpretable). Our goal is to explain this variability that we are seeing in the dataset. Let’s assume that as part of the job application procedure we also collected data on the highest degree each applicant earned. With knowledge of what the job requires, we could sort our applicants into three groups: those applicants who have a college degree related to the job, those applicants who have a college degree that is not related to the job, and those applicants who did not earn a college degree. This is a common way that job applicants are sorted, and we can use ANOVA to test if these groups are actually different. Figure \(2\) presents the same job applicant scores, but now they are color coded by group membership (i.e. which group they belong in). Now that we can differentiate between applicants this way, a pattern starts to emerge: those applicants with a relevant degree (coded red) tend to be near the top, those applicants with no college degree (coded black) tend to be near the bottom, and the applicants with an unrelated degree (coded green) tend to fall into the middle. However, even within these groups, there is still some variability, as shown in Figure \(2\). This pattern is even easier to see when the applicants are sorted and organized into their respective groups, as shown in Figure \(3\). Now that we have our data visualized into an easily interpretable format, we can clearly see that our applicants’ scores differ largely along group lines. Those applicants who do not have a college degree received the lowest scores, those who had a degree relevant to the job received the highest scores, and those who did have a degree but one that is not related to the job tended to fall somewhere in the middle. Thus, we have systematic variance between our groups. We can also clearly see that within each group, our applicants’ scores differed from one another. Those applicants without a degree tended to score very similarly, since the scores are clustered close together. Our group of applicants with relevant degrees varied a little but more than that, and our group of applicants with unrelated degrees varied quite a bit. It may be that there are other factors that cause the observed score differences within each group, or they could just be due to random chance. 
Because we do not have any other explanatory data in our dataset, the variability we observe within our groups is considered random error, with any deviations between a person and that person’s group mean caused only by chance. Thus, we have unsystematic (random) variance within our groups. The process and analyses used in ANOVA will take these two sources of variance (systematic variance between groups and random error within groups, or how much groups differ from each other and how much people differ within each group) and compare them to one another to determine if the groups have any explanatory value in our outcome variable. By doing this, we will test for statistically significant differences between the group means, just like we did for \(t\)-tests. We will go step by step to break down the math to see how ANOVA actually works.
ANOVA is all about looking at the different sources of variance (i.e. the reasons that scores differ from one another) in a dataset. Fortunately, the way we calculate these sources of variance takes a very familiar form: the Sum of Squares. Before we get into the calculations themselves, we must first lay out some important terminology and notation. In ANOVA, we are working with two variables, a grouping or explanatory variable and a continuous outcome variable. The grouping variable is our predictor (it predicts or explains the values in the outcome variable) or, in experimental terms, our independent variable, and it made up of $k$ groups, with $k$ being any whole number 2 or greater. That is, ANOVA requires two or more groups to work, and it is usually conducted with three or more. In ANOVA, we refer to groups as “levels”, so the number of levels is just the number of groups, which again is $k$. In the above example, our grouping variable was education, which had 3 levels, so $k$ = 3. When we report any descriptive value (e.g. mean, sample size, standard deviation) for a specific group, we will use a subscript 1…$k$ to denote which group it refers to. For example, if we have three groups and want to report the standard deviation $s$ for each group, we would report them as $s_1$, $s_2$, and $s_3$. Our second variable is our outcome variable. This is the variable on which people differ, and we are trying to explain or account for those differences based on group membership. In the example above, our outcome was the score each person earned on the test. Our outcome variable will still use $X$ for scores as before. When describing the outcome variable using means, we will use subscripts to refer to specific group means. So if we have $k$ = 3 groups, our means will be $\overline{X_{1}}$, $\overline{X_{2}}$, and $\overline{X_{3}}$. We will also have a single mean representing the average of all participants across all groups. This is known as the grand mean, and we use the symbol $\overline{X_{G}}$. These different means – the individual group means and the overall grand mean – will be how we calculate our sums of squares. Finally, we now have to differentiate between several different sample sizes. Our data will now have sample sizes for each group, and we will denote these with a lower case “$n$” and a subscript, just like with our other descriptive statistics: $n_1$, $n_2$, and $n_3$. We also have the overall sample size in our dataset, and we will denote this with a capital $N$. The total sample size is just the group sample sizes added together. Between Groups Sum of Squares One source of variability we can identified in 11.1.3 of the above example was differences or variability between the groups. That is, the groups clearly had different average levels. The variability arising from these differences is known as the between groups variability, and it is quantified using Between Groups Sum of Squares. Our calculations for sums of squares in ANOVA will take on the same form as it did for regular calculations of variance. Each observation, in this case the group means, is compared to the overall mean, in this case the grand mean, to calculate a deviation score. These deviation scores are squared so that they do not cancel each other out and sum to zero. The squared deviations are then added up, or summed. There is, however, one small difference. 
Because each group mean represents a group composed of multiple people, before we sum the deviation scores we must multiply them by the number of people within that group. Incorporating this, we find our equation for Between Groups Sum of Squares to be:

$SS_{B}=\sum n_{j}\left(\overline{X_{j}}-\overline{X_{G}}\right)^{2}$

The subscript $j$ refers to the "$j^{th}$" group where $j$ = 1…$k$ to keep track of which group mean and sample size we are working with. As you can see, the only difference between this equation and the familiar sum of squares for variance is that we are adding in the sample size. Everything else logically fits together in the same way.

Within Groups Sum of Squares

The other source of variability in the figures comes from differences that occur within each group. That is, each individual deviates a little bit from their respective group mean, just like the group means differed from the grand mean. We therefore label this source the Within Groups Sum of Squares. Because we are trying to account for variance based on group-level means, any deviation from the group means indicates an inaccuracy or error. Thus, our within groups variability represents our error in ANOVA. The formula for this sum of squares is again going to take on the same form and logic. What we are looking for is the distance between each individual person and the mean of the group to which they belong. We calculate these deviation scores, square them so that they can be added together, then sum all of them into one overall value:

$SS_{W}=\sum\left(X_{ij}-\overline{X_{j}}\right)^{2}$

In this instance, because we are calculating this deviation score for each individual person, there is no need to multiply by how many people we have. The subscript $j$ again represents a group and the subscript $i$ refers to a specific person. So, $X_{ij}$ is read as "the $i^{th}$ person of the $j^{th}$ group." It is important to remember that the deviation score for each person is only calculated relative to their group mean: do not calculate these scores relative to the other group means.

Total Sum of Squares

The Between Groups and Within Groups Sums of Squares represent all variability in our dataset. We also refer to the total variability as the Total Sum of Squares, representing the overall variability with a single number. The calculation for this score is exactly the same as it would be if we were calculating the overall variance in the dataset (because that's what we are interested in explaining) without worrying about or even knowing about the groups into which our scores fall:

$SS_{T}=\sum\left(X_{i}-\overline{X_{G}}\right)^{2}$

We can see that our Total Sum of Squares is based on each individual score minus the grand mean. As with our Within Groups Sum of Squares, we are calculating a deviation score for each individual person, so we do not need to multiply anything by the sample size; that is only done for Between Groups Sum of Squares.

An important feature of the sums of squares in ANOVA is that they all fit together. We could work through the algebra to demonstrate that if we added together the formulas for $SS_B$ and $SS_W$, we would end up with the formula for $SS_T$. That is:

$SS_{T}=SS_{B}+SS_{W}$

This will prove to be very convenient, because if we know the values of any two of our sums of squares, it is very quick and easy to find the value of the third.
It is also a good way to check calculations: if you calculate each $SS$ by hand, you can make sure that they all fit together as shown above, and if not, you know that you made a math mistake somewhere. We can see from the above formulas that calculating an ANOVA by hand from raw data can take a very, very long time. For this reason, you will not be required to calculate the SS values by hand, but you should still take the time to understand how they fit together and what each one represents to ensure you understand the analysis itself.
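A small R sketch may make the three sums of squares and their additive relationship concrete; the scores below are made-up numbers for three tiny groups, not the job-applicant data:

scores <- c(8, 9, 7, 5, 6, 4, 2, 3, 1)            # hypothetical outcome scores
group  <- rep(c("A", "B", "C"), each = 3)         # k = 3 groups of 3 people

grand_mean  <- mean(scores)
group_means <- tapply(scores, group, mean)        # one mean per group
n_j         <- tapply(scores, group, length)      # one sample size per group

SSB <- sum(n_j * (group_means - grand_mean)^2)    # Between Groups Sum of Squares
SSW <- sum((scores - group_means[group])^2)       # Within Groups Sum of Squares
SST <- sum((scores - grand_mean)^2)               # Total Sum of Squares
all.equal(SST, SSB + SSW)                         # TRUE: the pieces add up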
All of our sources of variability fit together in meaningful, interpretable ways as we saw above, and the easiest way to do this is to organize them into a table. The ANOVA table, shown in Table $1$, is how we calculate our test statistic. Table $1$: ANOVA Table Source $SS$ $df$ $MS$ $F$ Between $S S_{B}$ $k-1$ $\frac{S S_{B}}{d f_{B}}$ $\frac{MS_{B}}{MS_{W}}$ Within $S S_{W}$ $N-k$ $\frac{S S_{W}}{d f_{W}}$ Total $S S_{T}$ $N-1$ The first column of the ANOVA table, labeled “Source”, indicates which of our sources of variability we are using: between groups, within groups, or total. The second column, labeled “SS”, contains our values for the sums of squares that we learned to calculate above. As noted previously, calculating these by hand takes too long, and so the formulas are not presented in Table $1$. However, remember that the Total is the sum of the other two, in case you are only given two $SS$ values and need to calculate the third. The next column, labeled “$df$”, is our degrees of freedom. As with the sums of squares, there is a different $df$ for each group, and the formulas are presented in the table. Notice that the total degrees of freedom, $N – 1$, is the same as it was for our regular variance. This matches the $SS_T$ formulation to again indicate that we are simply taking our familiar variance term and breaking it up into difference sources. Also remember that the capital $N$ in the $df$ calculations refers to the overall sample size, not a specific group sample size. Notice that the total row for degrees of freedom, just like for sums of squares, is just the Between and Within rows added together. If you take $N – k + k – 1$, then the “$– k$” and “$+ k$” portions will cancel out, and you are left with $N – 1$. This is a convenient way to quickly check your calculations. The third column, labeled “$MS$”, is our Mean Squares for each source of variance. A “mean square” is just another way to say variability. Each mean square is calculated by dividing the sum of squares by its corresponding degrees of freedom. Notice that we do this for the Between row and the Within row, but not for the Total row. There are two reasons for this. First, our Total Mean Square would just be the variance in the full dataset (put together the formulas to see this for yourself), so it would not be new information. Second, the Mean Square values for Between and Within would not add up to equal the Mean Square Total because they are divided by different denominators. This is in contrast to the first two columns, where the Total row was both the conceptual total (i.e. the overall variance and degrees of freedom) and the literal total of the other two rows. The final column in the ANOVA table, labeled “$F$”, is our test statistic for ANOVA. The $F$ statistic, just like a $t$- or $z$-statistic, is compared to a critical value to see whether we can reject for fail to reject a null hypothesis. Thus, although the calculations look different for ANOVA, we are still doing the same thing that we did in all of Unit 2. We are simply using a new type of data to test our hypotheses. We will see what these hypotheses look like shortly, but first, we must take a moment to address why we are doing our calculations this way.
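Filling in the table is mechanical once the sums of squares, the number of groups, and the total sample size are known; a short R sketch using the same made-up numbers as the previous sketch:

SSB <- 54; SSW <- 6                     # hypothetical sums of squares
k <- 3; N <- 9                          # number of groups and total sample size
dfB <- k - 1; dfW <- N - k
MSB <- SSB / dfB; MSW <- SSW / dfW
data.frame(Source = c("Between", "Within", "Total"),
           SS = c(SSB, SSW, SSB + SSW),
           df = c(dfB, dfW, N - 1),
           MS = c(MSB, MSW, NA),
           F  = c(MSB / MSW, NA, NA))   # F is only computed for the Between row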
You may be wondering why we do not just use another \(t\)-test to test our hypotheses about three or more groups the way we did in Unit 2. After all, we are still just looking at group mean differences. The reason is that our \(t\)-statistic formula can only handle up to two groups, one minus the other. With only two groups, we can move our population parameters for the group means around in our null hypothesis and still get the same interpretation: the means are equal, which can also be concluded if one mean minus the other mean is equal to zero. However, if we tried adding a third mean, we would no longer be able to do this. So, in order to use \(t\)-tests to compare three or more means, we would have to run a series of individual group comparisons. For only three groups, we would have three \(t\)-tests: group 1 vs group 2, group 1 vs group 3, and group 2 vs group 3. This may not sound like a lot, especially with the advances in technology that have made running an analysis very fast, but it quickly scales up. With just one additional group, bringing our total to four, we would have six comparisons: group 1 vs group 2, group 1 vs group 3, group 1 vs group 4, group 2 vs group 3, group 2 vs group 4, and group 3 vs group 4. This makes for a logistical and computation nightmare for five or more groups. A bigger issue, however, is our probability of committing a Type I Error. Remember that a Type I error is a false positive, and the chance of committing a Type I error is equal to our significance level, \(α\). This is true if we are only running a single analysis (such as a \(t\)-test with only two groups) on a single dataset. However, when we start running multiple analyses on the same dataset, our Type I error rate increases, raising the probability that we are capitalizing on random chance and rejecting a null hypothesis when we should not. ANOVA, by comparing all groups simultaneously with a single analysis, averts this issue and keeps our error rate at the \(α\) we set. 11.05: Hypotheses in ANOVA So far we have seen what ANOVA is used for, why we use it, and how we use it. Now we can turn to the formal hypotheses we will be testing. As with before, we have a null and an alternative hypothesis to lay out. Our null hypothesis is still the idea of “no difference” in our data. Because we have multiple group means, we simply list them out as equal to each other: $\begin{array}{c}{\mathrm{H}_{0}: \text { There is no difference in the group means }} \ {\mathrm{H}_{0}: \mu_{1}=\mu_{2}=\mu_{3}}\end{array} \nonumber$ We list as many $μ$ parameters as groups we have. In the example above, we have three groups to test, so we have three parameters in our null hypothesis. If we had more groups, say, four, we would simply add another $μ$ to the list and give it the appropriate subscript, giving us: $\begin{array}{c}\mathrm{H}_{0}: \text { There is no difference in the group means }\ \mathrm{H}_{0}: \mu_{1}=\mu_{2}=\mu_{3}=\mu_{4} \end{array} \nonumber$ Notice that we do not say that the means are all equal to zero, we only say that they are equal to one another; it does not matter what the actual value is, so long as it holds for all groups equally. Our alternative hypothesis for ANOVA is a little bit different. Let’s take a look at it and then dive deeper into what it means: $\mathrm{H}_{A}: \text { At least one mean is different } \nonumber$ The first difference in obvious: there is no mathematical statement of the alternative hypothesis in ANOVA. 
This is due to the second difference: we are not saying which group is going to be different, only that at least one will be. Because we do not hypothesize about which mean will be different, there is no way to write it mathematically. Related to this, we do not have directional hypotheses (greater than or less than) like we did in Unit 2. Due to this, our alternative hypothesis is always exactly the same: at least one mean is different. In Unit 2, we saw that, if we reject the null hypothesis, we can adopt the alternative, and this made it easy to understand what the differences looked like. In ANOVA, we will still adopt the alternative hypothesis as the best explanation of our data if we reject the null hypothesis. However, when we look at the alternative hypothesis, we can see that it does not give us much information. We will know that a difference exists somewhere, but we will not know where that difference is. Is only group 1 different but groups 2 and 3 the same? Is it only group 2? Are all three of them different? Based on just our alternative hypothesis, there is no way to be sure. We will come back to this issue later and see how to find out specific differences. For now, just remember that we are testing for any difference in group means, and it does not matter where that difference occurs. Now that we have our hypotheses for ANOVA, let’s work through an example. We will continue to use the data from Figures 11.1.1 through 11.1.3 for continuity.
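Before the worked example, a rough illustration of the error-rate problem described above: if we ran every pairwise $t$-test at $α$ = .05 and, as a simplification, treated the comparisons as independent, the chance of at least one false positive climbs quickly as groups are added.

alpha <- 0.05
m <- choose(3:6, 2)                 # number of pairwise comparisons for 3, 4, 5, and 6 groups
familywise <- 1 - (1 - alpha)^m     # chance of at least one Type I error across all comparisons
round(familywise, 3)                # roughly .14, .26, .40, .54, all well above .05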
Our data come from three groups of 10 people each, all of whom applied for a single job opening: those with no college degree, those with a college degree that is not related to the job opening, and those with a college degree from a relevant field. We want to know if we can use this group membership to account for our observed variability and, by doing so, test if there is a difference between our three group means. We will start, as always, with our hypotheses. Step 1: State the Hypotheses Our hypotheses are concerned with the means of groups based on education level, so: $\begin{array}{c}{\mathrm{H}_{0}: \text { There is no difference between the means of the education groups }} \ {\mathrm{H}_{0}: \mu_{1}=\mu_{2}=\mu_{3}}\end{array} \nonumber$ $\mathrm{H}_{A}: \text { At least one mean is different } \nonumber$ Again, we phrase our null hypothesis in terms of what we are actually looking for, and we use a number of population parameters equal to our number of groups. Our alternative hypothesis is always exactly the same. Step 2: Find the Critical Values Our test statistic for ANOVA, as we saw above, is $F$. Because we are using a new test statistic, we will get a new table: the $F$ distribution table, the top of which is shown in Figure $1$: The $F$ table only displays critical values for $α$ = 0.05. This is because other significance levels are uncommon and so it is not worth it to use up the space to present them. There are now two degrees of freedom we must use to find our critical value: Numerator and Denominator. These correspond to the numerator and denominator of our test statistic, which, if you look at the ANOVA table presented earlier, are our Between Groups and Within Groups rows, respectively. The $df_B$ is the “Degrees of Freedom: Numerator” because it is the degrees of freedom value used to calculate the Mean Square Between, which in turn was the numerator of our $F$ statistic. Likewise, the $df_W$ is the “$df$ denom.” (short for denominator) because it is the degrees of freedom value used to calculate the Mean Square Within, which was our denominator for $F$. The formula for $df_B$ is $k – 1$, and remember that k is the number of groups we are assessing. In this example, $k = 3$ so our $df_B$ = 2. This tells us that we will use the second column, the one labeled 2, to find our critical value. To find the proper row, we simply calculate the $df_W$, which was $N – k$. The original prompt told us that we have “three groups of 10 people each,” so our total sample size is 30. This makes our value for $df_W$ = 27. If we follow the second column down to the row for 27, we find that our critical value is 3.35. We use this critical value the same way as we did before: it is our criterion against which we will compare our obtained test statistic to determine statistical significance. Step 3: Calculate the Test Statistic Now that we have our hypotheses and the criterion we will use to test them, we can calculate our test statistic. To do this, we will fill in the ANOVA table. When we do so, we will work our way from left to right, filling in each cell to get our final answer. We will assume that we are given the $SS$ values as shown below: Table $1$: ANOVA Table Source $SS$ $df$ $MS$ $F$ Between 8246 Within 3020 Total These may seem like random numbers, but remember that they are based on the distances between the groups themselves and within each group. Figure $2$ shows the plot of the data with the group means and grand mean included. 
If we wanted to, we could use this information, combined with our earlier information that each group has 10 people, to calculate the Between Groups Sum of Squares by hand. However, doing so would take some time, and without the specific values of the data points, we would not be able to calculate our Within Groups Sum of Squares, so we will trust that these values are the correct ones. We were given the sums of squares values for our first two rows, so we can use those to calculate the Total Sum of Squares. Table $2$: Total Sum of Squares Source $SS$ $df$ $MS$ $F$ Between 8246 Within 3020 Total 11266 We also calculated our degrees of freedom earlier, so we can fill in those values. Additionally, we know that the total degrees of freedom is $N – 1$, which is 29. This value of 29 is also the sum of the other two degrees of freedom, so everything checks out. Table $3$: Total Sum of Squares Source $SS$ $df$ $MS$ $F$ Between 8246 2 Within 3020 27 Total 11266 29 Now we have everything we need to calculate our mean squares. Our $MS$ values for each row are just the $SS$ divided by the $df$ for that row, giving us: Table $4$: Total Sum of Squares Source $SS$ $df$ $MS$ $F$ Between 8246 2 4123 Within 3020 27 111.85 Total 11266 29 Remember that we do not calculate a Total Mean Square, so we leave that cell blank. Finally, we have the information we need to calculate our test statistic. $F$ is our $MS_B$ divided by $MS_W$. Table $5$: Total Sum of Squares Source $SS$ $df$ $MS$ $F$ Between 8246 2 4123 36.86 Within 3020 27 111.85 Total 11266 29 So, working our way through the table given only two $SS$ values and the sample size and group size given before, we calculate our test statistic to be $F_{obt} = 36.86$, which we will compare to the critical value in step 4. Step 4: Make the Decision Our obtained test statistic was calculated to be $F_{obt} = 36.86$ and our critical value was found to be $F^* = 3.35$. Our obtained statistic is larger than our critical value, so we can reject the null hypothesis. Reject $H_0$. Based on our 3 groups of 10 people, we can conclude that job test scores are statistically significantly different based on education level, $F(2,27) = 36.86, p < .05$. Notice that when we report $F$, we include both degrees of freedom. We always report the numerator then the denominator, separated by a comma. We must also note that, because we were only testing for any difference, we cannot yet conclude which groups are different from the others. We will do so shortly, but first, because we found a statistically significant result, we need to calculate an effect size to see how big of an effect we found.
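For readers who want to verify the table by computer, here is a short R sketch using the sums of squares given above; qf() supplies the exact critical value that the printed $F$ table approximates.

SSB <- 8246; SSW <- 3020
k <- 3; N <- 30
MSB <- SSB / (k - 1)                          # 4123
MSW <- SSW / (N - k)                          # about 111.85
F_obt <- MSB / MSW                            # about 36.86
F_crit <- qf(0.95, df1 = k - 1, df2 = N - k)  # about 3.35
F_obt > F_crit                                # TRUE, so we reject the null hypothesis
pf(F_obt, k - 1, N - k, lower.tail = FALSE)   # exact p-value, far below .05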
Recall that the purpose of ANOVA is to take observed variability and see if we can explain those differences based on group membership. To that end, our effect size will be just that: the variance explained. You can think of variance explained as the proportion or percent of the differences we are able to account for based on our groups. We know that the overall observed differences are quantified as the Total Sum of Squares, and that our observed effect of group membership is the Between Groups Sum of Squares. Our effect size, therefore, is the ratio of these to sums of squares. Specifically: $\eta^{2}=\dfrac{S S_{B}}{S S_{T}}$ The effect size $\eta^{2}$ is called “eta-squared” and represents variance explained. For our example, our values give an effect size of: $\eta^{2}=\dfrac{8246}{11266}=0.73 \nonumber$ So, we are able to explain 73% of the variance in job test scores based on education. This is, in fact, a huge effect size, and most of the time we will not explain nearly that much variance. Our guidelines for the size of our effects are: Table $1$: Guidelines for the size of our effects $\eta^{2}$ Size 0.01 Small 0.09 Medium 0.25 Large So, we found that not only do we have a statistically significant result, but that our observed effect was very large! However, we still do not know specifically which groups are different from each other. It could be that they are all different, or that only those who have a relevant degree are different from the others, or that only those who have no degree are different from the others. To find out which is true, we need to do a special analysis called a post hoc test. 11.08: Post Hoc Tests A post hoc test is used only after we find a statistically significant result and need to determine where our differences truly came from. The term “post hoc” comes from the Latin for “after the event”. There are many different post hoc tests that have been developed, and most of them will give us similar answers. We will only focus here on the most commonly used ones. We will also only discuss the concepts behind each and will not worry about calculations. Bonferroni Test A Bonferroni test is perhaps the simplest post hoc analysis. A Bonferroni test is a series of \(t\)-tests performed on each pair of groups. As we discussed earlier, the number of groups quickly grows the number of comparisons, which inflates Type I error rates. To avoid this, a Bonferroni test divides our significance level \(α\) by the number of comparisons we are making so that when they are all run, they sum back up to our original Type I error rate. Once we have our new significance level, we simply run independent samples \(t\)-tests to look for difference between our pairs of groups. This adjustment is sometimes called a Bonferroni Correction, and it is easy to do by hand if we want to compare obtained \(p\)-values to our new corrected α level, but it is more difficult to do when using critical values like we do for our analyses so we will leave our discussion of it to that. Tukey’s Honest Significant Difference Tukey’s Honest Significant Difference (HSD) is a very popular post hoc analysis. This analysis, like Bonferroni’s, makes adjustments based on the number of comparisons, but it makes adjustments to the test statistic when running the comparisons of two groups. These comparisons give us an estimate of the difference between the groups and a confidence interval for the estimate. 
We use this confidence interval in the same way that we use a confidence interval for a regular independent samples \(t\)-test: if it contains 0.00, the groups are not different, but if it does not contain 0.00 then the groups are different. Below are the differences between the group means and the Tukey's HSD confidence intervals for the differences:

Table \(1\): Differences between the group means and the Tukey's HSD confidence intervals
Comparison   Difference   Tukey's HSD CI
None vs Relevant   40.60   (28.87, 52.33)
None vs Unrelated   19.50   (7.77, 31.23)
Relevant vs Unrelated   21.10   (9.37, 32.83)

As we can see, none of these intervals contain 0.00, so we can conclude that all three groups are different from one another.

Scheffe's Test

Another common post hoc test is Scheffe's Test. Like Tukey's HSD, Scheffe's test adjusts the test statistic for how many comparisons are made, but it does so in a slightly different way. The result is a test that is "conservative," which means that it is less likely to commit a Type I Error, but this comes at the cost of less power to detect effects. We can see this by looking at the confidence intervals that Scheffe's test gives us:

Table \(2\): Confidence intervals given by Scheffe's test
Comparison   Difference   Scheffe's Test CI
None vs Relevant   40.60   (28.35, 52.85)
None vs Unrelated   19.50   (7.25, 31.75)
Relevant vs Unrelated   21.10   (8.85, 33.35)

As we can see, these are slightly wider than the intervals we got from Tukey's HSD. This means that, all other things being equal, they are more likely to contain zero. In our case, however, the results are the same, and we again conclude that all three groups differ from one another.

There are many more post hoc tests than just these three, and they all approach the task in different ways, with some being more conservative and others being more powerful. In general, though, they will give highly similar answers. What is important here is to be able to interpret a post hoc analysis. If you are given post hoc analysis confidence intervals, like the ones seen above, read them the same way we read confidence intervals in chapter 10: if they contain zero, there is no difference; if they do not contain zero, there is a difference.
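Both the effect size introduced above and the Bonferroni correction are one-line computations; a brief R sketch using this chapter's numbers (the per-comparison $α$ is shown only for illustration, since the chapter did not actually run a Bonferroni test on these data):

eta_sq <- 8246 / 11266              # eta-squared, about 0.73
alpha <- 0.05
n_comparisons <- choose(3, 2)       # 3 pairwise comparisons among k = 3 groups
alpha / n_comparisons               # Bonferroni-corrected alpha, about 0.0167 per comparison

When the raw scores are available, R's built-in aov() and TukeyHSD() functions are the usual route to intervals like those in Table 1.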
We have only just scratched the surface on ANOVA in this chapter. There are many other variations available for the one-way ANOVA presented here. There are also other types of ANOVAs that you are likely to encounter. The first is called a factorial ANOVA. Factorial ANOVAs use multiple grouping variables, not just one, to look for group mean differences. Just as there is no limit to the number of groups in a one-way ANOVA, there is no limit to the number of grouping variables in a Factorial ANOVA, but it becomes very difficult to find and interpret significant results with many factors, so usually they are limited to two or three grouping variables with only a small number of groups in each. Another ANOVA is called a Repeated Measures ANOVA. This is an extension of a repeated measures or matched pairs \(t\)-test, but in this case we are measuring each person three or more times to look for a change. We can even combine both of these advanced ANOVAs into mixed designs to test very specific and valuable questions. These topics are far beyond the scope of this text, but you should know about their existence. Our treatment of ANOVA here is a small first step into a much larger world! 11.1E: Analysis of Variance (Exercises) 1. What are the three pieces of variance analyzed in ANOVA? Answer: Variance between groups ($SSB$), variance within groups ($SSW$) and total variance ($SST$). 1. What does rejecting the null hypothesis in ANOVA tell us? What does it not tell us? 2. What is the purpose of post hoc tests? Answer: Post hoc tests are run if we reject the null hypothesis in ANOVA; they tell us which specific group differences are significant. 1. Based on the ANOVA table below, do you reject or fail to reject the null hypothesis? What is the effect size? Source $SS$ $df$ $MS$ $F$ Between 60.72 3 20.24 3.88 Within 213.61 41 5.21 Total 274.33 44 1. Finish filling out the following ANOVA tables: 1. $K = 4$ Source $SS$ $df$ $MS$ $F$ Between 87.40 Within Total 199.22 33 1. $N=14$ Source $SS$ $df$ $MS$ $F$ Between   2 14.10 Within Total 64.65 Source $SS$ $df$ $MS$ $F$ Between   2   42.36 Within   54 2.48 Total Answer: 1. $K=4$ Source $SS$ $df$ $MS$ $F$ Between 87.40 3 29.13 7.81 Within 111.82 30 3.73 Total 199.22 33 1. $N=14$ Source $SS$ $df$ $MS$ $F$ Between 28.20 2 14.10 4.26 Within 36.45 11 3.31 Total 64.65 13 Source $SS$ $df$ $MS$ $F$ Between 210.10 2 105.05 42.36 Within 133.92 54 2.48 Total 344.02 1. You know that stores tend to charge different prices for similar or identical products, and you want to test whether or not these differences are, on average, statistically significantly different. You go online and collect data from 3 different stores, gathering information on 15 products at each store. You find that the average prices at each store are: Store 1 xbar = $27.82, Store 2 xbar =$38.96, and Store 3 xbar = \$24.53. Based on the overall variability in the products and the variability within each store, you find the following values for the Sums of Squares: SST = 683.22, SSW = 441.19. Complete the ANOVA table and use the 4 step hypothesis testing procedure to see if there are systematic price differences between the stores. 2. You and your friend are debating which type of candy is the best. You find data on the average rating for hard candy (e.g. jolly ranchers, $\overline{\mathrm{X}}$= 3.60), chewable candy (e.g. starburst, $\overline{\mathrm{X}}$ = 4.20), and chocolate (e.g. snickers, $\overline{\mathrm{X}}$= 4.40); each type of candy was rated by 30 people. 
Test for differences in average candy rating using SSB = 16.18 and SSW = 28.74.

Answer: Step 1: $H_0: μ_1 = μ_2 = μ_3$ "There is no difference in average rating of candy quality", $H_A$: "At least one mean is different." Step 2: 3 groups and 90 total observations yields $df_{num} = 2$ and $df_{den} = 87$, $α = 0.05$, $F^* = 3.11$. Step 3: The completed ANOVA table, based on the given $SSB$ and $SSW$ and the $df$ computed in step 2, is:

Source   $SS$   $df$   $MS$   $F$
Between   16.18   2   8.09   24.52
Within   28.74   87   0.33
Total   44.92   89

Step 4: $F > F^*$, reject $H_0$. Based on the data in our 3 groups, we can say that there is a statistically significant difference in the quality of different types of candy, $F(2,87) = 24.52, p < .05$. Since the result is significant, we need an effect size: $\eta^{2}=16.18 / 44.92=.36$, which is a large effect.

1. Administrators at a university want to know if students in different majors are more or less extroverted than others. They provide you with data they have for English majors ($\overline{\mathrm{X}}$ = 3.78, $n$ = 45), History majors ($\overline{\mathrm{X}}$ = 2.23, $n$ = 40), Psychology majors ($\overline{\mathrm{X}}$ = 4.41, $n$ = 51), and Math majors ($\overline{\mathrm{X}}$ = 1.15, $n$ = 28). You find the $SSB = 75.80$ and $SSW = 47.40$ and test at $α = 0.05$.

2. You are assigned to run a study comparing a new medication ($\overline{\mathrm{X}}$ = 17.47, $n$ = 19), an existing medication ($\overline{\mathrm{X}}$ = 17.94, $n$ = 18), and a placebo ($\overline{\mathrm{X}}$ = 13.70, $n$ = 20), with higher scores reflecting better outcomes. Use $SSB = 210.10$ and $SSW = 133.90$ to test for differences.

Answer: Step 1: $H_0: μ_1 = μ_2 = μ_3$ "There is no difference in average outcome based on treatment", $H_A$: "At least one mean is different." Step 2: 3 groups and 57 total participants yields $df_{num} = 2$ and $df_{den} = 54$, $α = 0.05, F^* = 3.18$. Step 3: The completed ANOVA table, based on the given $SSB$ and $SSW$ and the $df$ computed in step 2, is:

Source   $SS$   $df$   $MS$   $F$
Between   210.10   2   105.05   42.36
Within   133.90   54   2.48
Total   344.00   56

Step 4: $F > F^*$, reject $H_0$. Based on the data in our 3 groups, we can say that there is a statistically significant difference in the effectiveness of the treatments, $F(2,54) = 42.36, p < .05$. Since the result is significant, we need an effect size: $\eta^{2}=210.10 / 344.00=.61$, which is a large effect.

1. You are in charge of assessing different training methods for effectiveness. You have data on 4 methods: Method 1 ($\overline{\mathrm{X}}$ = 87, $n$ = 12), Method 2 ($\overline{\mathrm{X}}$ = 92, $n$ = 14), Method 3 ($\overline{\mathrm{X}}$ = 88, $n$ = 15), and Method 4 ($\overline{\mathrm{X}}$ = 75, $n$ = 11). Test for differences among these means, assuming $SSB = 64.81$ and $SST = 399.45$.
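Answers like the ones above can be checked by rebuilding the ANOVA table from its summary values; a short R sketch for the medication exercise (qf() will differ slightly from the printed critical value, which comes from the nearest row of the table):

SSB <- 210.10; SSW <- 133.90
k <- 3; N <- 19 + 18 + 20             # 57 total participants
MSB <- SSB / (k - 1)                  # about 105.05
MSW <- SSW / (N - k)                  # about 2.48
MSB / MSW                             # F, about 42.36
qf(0.95, df1 = k - 1, df2 = N - k)    # about 3.17 (table value: 3.18)
SSB / (SSB + SSW)                     # eta-squared, about 0.61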
A common theme throughout statistics is the notion that individuals will differ on different characteristics and traits, which we call variance. In inferential statistics and hypothesis testing, our goal is to find systematic reasons for differences and rule out random chance as the cause. By doing this, we are using information on a different variable – which so far has been group membership like in ANOVA – to explain this variance. In correlations, we will instead use a continuous variable to account for the variance.

• 12.1: Variability and Covariance Because we have two continuous variables, we will have two characteristics or scores on which people will vary. What we want to know is whether people vary on the scores together. That is, as one score changes, does the other score also change in a predictable or consistent way? This notion of variables differing together is called covariance (the prefix “co” meaning “together”).
• 12.2: Visualizing Relations Visualizing data remains an important first step in understanding and describing our data before we move into inferential statistics. Nowhere is this more important than in correlation. Correlations are visualized by a scatterplot, where our X variable values are plotted on the X-axis, the Y variable values are plotted on the Y-axis, and each point or marker in the plot represents a single person’s score on X and Y.
• 12.3: Three Characteristics When we talk about correlations, there are three traits that we need to know in order to truly understand the relation (or lack of relation) between X and Y: form, direction, and magnitude. We will discuss each of them in turn.
• 12.4: Pearson’s r There are several different types of correlation coefficients, but we will only focus on the most common: Pearson’s r. r is a very popular correlation coefficient for assessing linear relations, and it serves as both a descriptive statistic and as a test statistic. It is descriptive because it describes what is happening in the scatterplot; r will have both a sign (+/–) for the direction and a number (0 – 1 in absolute value) for the magnitude.
• 12.5: Anxiety and Depression Anxiety and depression are often reported to be highly linked (or “comorbid”). Our hypothesis testing procedure follows the same four-step process as before, starting with our null and alternative hypotheses. We will look for a positive relation between our variables among a group of 10 people because that is what we would expect based on them being comorbid.
• 12.6: Effect Size
• 12.7: Correlation versus Causation
• 12.8: Final Considerations
• 12.E: Correlations (Exercises)

Thumbnail: Correlation shown when the two variables’ ranges are unrestricted, and when the range of one variable is restricted to the interval (0,1). (CC BY 3.0 Unported; Skbkekas via Wikipedia)

12: Correlations

Because we have two continuous variables, we will have two characteristics or scores on which people will vary. What we want to know is whether people vary on the scores together. That is, as one score changes, does the other score also change in a predictable or consistent way? This notion of variables differing together is called covariance (the prefix “co” meaning “together”). Let’s look at our formula for variance on a single variable: $s^{2}=\dfrac{\sum(X-\overline{X})^{2}}{N-1}$ We use $X$ to represent a person’s score on the variable at hand, and $\overline{X}$ to represent the mean of that variable. The numerator of this formula is the Sum of Squares, which we have seen several times for various uses.
Recall that squaring a value is just multiplying that value by itself. Thus, we can write the same equation as: $s^{2}=\dfrac{\sum((X-\overline{X})(X-\overline{X}))}{N-1}$ This is the same formula and works the same way as before, where we multiply the deviation score by itself (we square it) and then sum across squared deviations. Now, let’s look at the formula for covariance. In this formula, we will still use $X$ to represent the score on one variable, and we will now use $Y$ to represent the score on the second variable. We will still use bars to represent averages of the scores. The formula for covariance ($cov_{X Y}$ with the subscript $XY$ to indicate covariance across the $X$ and $Y$ variables) is: $\operatorname{cov}_{X Y}=\dfrac{\sum((X-\overline{X})(Y-\overline{Y}))}{N-1}$ As we can see, this is the exact same structure as the previous formula. Now, instead of multiplying the deviation score by itself on one variable, we take the deviation scores from a single person on each variable and multiply them together. We do this for each person (exactly the same as we did for variance) and then sum them to get our numerator. The numerator of this formula is called the Sum of Products.

$S P=\sum((X-\overline{X})(Y-\overline{Y}))$

We will calculate the sum of products using the same table we used to calculate the sum of squares. In fact, the table for sum of products is simply a sum of squares table for $X$, plus a sum of squares table for $Y$, with a final column of products, as shown below.

Table $1$: Sum of Products table

$X$ | $(X-\overline{X})$ | $(X-\overline{X})^2$ | $Y$ | $(Y-\overline{Y})$ | $(Y-\overline{Y})^2$ | $(X-\overline{X})(Y-\overline{Y})$

This table works the same way that it did before (remember that the column headers tell you exactly what to do in that column). We list our raw data for the $X$ and $Y$ variables in the $X$ and $Y$ columns, respectively, then add them up so we can calculate the mean of each variable. We then take those means and subtract them from the appropriate raw score to get our deviation scores for each person on each variable, and the columns of deviation scores will both add up to zero. We will square our deviation scores for each variable to get the sum of squares for $X$ and $Y$ so that we can compute the variance and standard deviation of each (we will use the standard deviation in our equation below). Finally, we take the deviation score from each variable and multiply them together to get our product score. Summing this column will give us our sum of products. It is very important that you multiply the raw deviation scores from each variable, NOT the squared deviation scores.

Our sum of products will go into the numerator of our formula for covariance, and then we only have to divide by $N – 1$ to get our covariance. Unlike the sum of squares, both our sum of products and our covariance can be positive, negative, or zero, and their signs will always match (e.g. if our sum of products is positive, our covariance will also be positive). A positive sum of products and covariance indicates that the two variables are related and move in the same direction. That is, as one variable goes up, the other will also go up, and vice versa. A negative sum of products and covariance means that the variables are related but move in opposite directions when they change, which is called an inverse relation. In an inverse relation, as one variable goes up, the other variable goes down.
If the sum of products and covariance are zero, then that means that the variables are not related. As one variable goes up or down, the other variable does not change in a consistent or predictable way. The previous paragraph brings us to an important definition about relations between variables. What we are looking for in a relation is a consistent or predictable pattern. That is, the variables change together, either in the same direction or opposite directions, in the same way each time. It doesn’t matter if this relation is positive or negative, only that it is not zero. If there is no consistency in how the variables change within a person, then the relation is zero and does not exist. We will revisit this notion of direction vs zero relation later on.
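To make the sum of products concrete, here is a minimal R sketch with a handful of made-up scores; the by-hand formula and the built-in cov() function give the same answer.

# Sum of products and covariance for two small sets of made-up scores
X <- c(2, 4, 5, 7, 9)
Y <- c(1, 3, 4, 8, 9)
SP <- sum((X - mean(X)) * (Y - mean(Y)))   # sum of products (the numerator)
SP / (length(X) - 1)                       # covariance by the formula above
cov(X, Y)                                  # built-in covariance; matches the hand calculation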
Chapter 2 covered many different forms of data visualization, and visualizing data remains an important first step in understanding and describing our data before we move into inferential statistics. Nowhere is this more important than in correlation. Correlations are visualized by a scatterplot, where our \(X\) variable values are plotted on the \(X\)-axis, the \(Y\) variable values are plotted on the \(Y\)-axis, and each point or marker in the plot represents a single person’s score on \(X\) and \(Y\). Figure \(1\) shows a scatterplot for hypothetical scores on job satisfaction (\(X\)) and worker well-being (\(Y\)). We can see from the axes that each of these variables is measured on a 10-point scale, with 10 being the highest on both variables (high satisfaction and good health and well-being) and 1 being the lowest (dissatisfaction and poor health). When we look at this plot, we can see that the variables do seem to be related. The higher scores on job satisfaction tend to also be the higher scores on well-being, and the same is true of the lower scores. Figure \(1\) demonstrates a positive relation. As scores on \(X\) increase, scores on \(Y\) also tend to increase. Although this is not a perfect relation (if it were, the points would form a single straight line), it is nonetheless very clearly positive. This is one of the key benefits to scatterplots: they make it very easy to see the direction of the relation. As another example, Figure \(2\) shows a negative relation between job satisfaction (\(X\)) and burnout (\(Y\)). As we can see from this plot, higher scores on job satisfaction tend to correspond to lower scores on burnout, which is how stressed, un-energetic, and unhappy someone is at their job. As with Figure \(1\), this is not a perfect relation, but it is still a clear one. As these figures show, points in a positive relation move from the bottom left of the plot to the top right, and points in a negative relation move from the top left to the bottom right. Scatterplots can also indicate that there is no relation between the two variables. In these scatterplots (an example is shown below in Figure \(3\) plotting job satisfaction and job performance) there is no interpretable shape or line in the scatterplot. The points appear randomly throughout the plot. If we tried to draw a straight line through these points, it would basically be flat. The low scores on job satisfaction have roughly the same scores on job performance as do the high scores on job satisfaction. Scores in the middle or average range of job satisfaction have some scores on job performance that are about equal to the high and low levels and some scores on job performance that are a little higher, but the overall picture is one of inconsistency. As we can see, scatterplots are very useful for giving us an approximate idea of whether or not there is a relation between the two variables and, if there is, if that relation is positive or negative. They are also useful for another reason: they are the only way to determine one of the characteristics of correlations that are discussed next: form.
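If you are working in R, a scatterplot like the ones described here can be drawn with a few lines; the data frame and the satisfaction and well-being scores below are made up purely for illustration.

# Scatterplot of two hypothetical 10-point ratings
library(ggplot2)
dat <- data.frame(satisfaction = c(2, 3, 4, 5, 6, 7, 8, 9),
                  wellbeing = c(3, 2, 5, 4, 6, 8, 7, 9))
ggplot(dat, mapping = aes(x = satisfaction, y = wellbeing)) +
  geom_point() +          # one point per person
  theme_bw() +
  labs(title = "Scatterplot of well-being vs job satisfaction")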
When we talk about correlations, there are three traits that we need to know in order to truly understand the relation (or lack of relation) between \(X\) and \(Y\): form, direction, and magnitude. We will discuss each of them in turn.

Form

The first characteristic of relations between variables is their form. The form of a relation is the shape it takes in a scatterplot, and a scatterplot is the only way it is possible to assess the form of a relation. There are three forms we look for: linear, curvilinear, or no relation. A linear relation is what we saw in Figures 12.2.1, 12.2.2, and 12.2.3. If we drew a line through the middle of the points in any of those scatterplots, the line that fit them best would be a straight line. The term “linear” comes from the word “line”. A linear relation is what we will always assume when we calculate correlations. All of the correlations presented here are only valid for linear relations. Thus, it is important to plot our data to make sure we meet this assumption. The relation between two variables can also be curvilinear. As the name suggests, a curvilinear relation is one in which a line through the middle of the points in a scatterplot will be curved rather than straight. Two examples are presented in Figures \(1\) and \(2\). Curvilinear relations can take many shapes, and the two examples above are only a small sample of the possibilities. What they have in common is that they both have a very clear pattern but that pattern is not a straight line. If we try to draw a straight line through them, we would get a result similar to what is shown in Figure \(3\). Although that line is the closest it can be to all points at the same time, it clearly does a very poor job of representing the relation we see. Additionally, the line itself is flat, suggesting there is no relation between the two variables even though the data show that there is one. This is important to keep in mind, because the math behind our calculations of correlation coefficients will only ever produce a straight line – we cannot create a curved line with the techniques discussed here. Finally, sometimes when we create a scatterplot, we end up with no interpretable relation at all. An example of this is shown below in Figure \(4\). The points in this plot show no consistency in relation, and a line through the middle would once again be a straight, flat line. Sometimes when we look at scatterplots, it is tempting to get biased by a few points that fall far away from the rest of the points and seem to imply that there may be some sort of relation. These points are called outliers, and we will discuss them in more detail later in the chapter. These can be common, so it is important to formally test for a relation between our variables, not just rely on visualization. This is the point of hypothesis testing with correlations, and we will go in depth on it soon. First, however, we need to describe the other two characteristics of relations: direction and magnitude.

Direction

The direction of the relation between two variables tells us whether the variables change in the same way at the same time or in opposite ways at the same time. We saw this concept earlier when first discussing scatterplots, and we used the terms positive and negative. A positive relation is one in which \(X\) and \(Y\) change in the same direction: as \(X\) goes up, \(Y\) goes up, and as \(X\) goes down, \(Y\) also goes down.
A negative relation is just the opposite: \(X\) and \(Y\) change together in opposite directions: as \(X\) goes up, \(Y\) goes down, and vice versa. As we will see soon, when we calculate a correlation coefficient, we are quantifying the relation demonstrated in a scatterplot. That is, we are putting a number to it. That number will be either positive, negative, or zero, and we interpret the sign of the number as our direction. If the number is positive, it is a positive relation, and if it is negative, it is a negative relation. If it is zero, then there is no relation. The direction of the relation corresponds directly to the slope of the hypothetical line we draw through scatterplots when assessing the form of the relation. If the line has a positive slope that moves from bottom left to top right, it is positive, and vice versa for negative. If the line is flat, that means it has no slope, and there is no relation, which will in turn yield a zero for our correlation coefficient.

Magnitude

The number we calculate for our correlation coefficient, which we will describe in detail below, corresponds to the magnitude of the relation between the two variables. The magnitude is how strong or how consistent the relation between the variables is. Higher numbers mean greater magnitude, which means a stronger relation. Our correlation coefficients will take on any value between -1.00 and 1.00, with 0.00 in the middle, which again represents no relation. A correlation of -1.00 is a perfect negative relation; as \(X\) goes up by some amount, \(Y\) goes down by the same amount, consistently. Likewise, a correlation of 1.00 indicates a perfect positive relation; as \(X\) goes up by some amount, \(Y\) also goes up by the same amount. Finally, a correlation of 0.00, which indicates no relation, means that as \(X\) goes up by some amount, \(Y\) may or may not change by any amount, and it does so inconsistently. The vast majority of correlations do not reach -1.00 or positive 1.00. Instead, they fall in between, and we use rough cut offs for how strong the relation is based on this number. Importantly, the sign of the number (the direction of the relation) has no bearing on how strong the relation is. The only thing that matters is the magnitude, or the absolute value of the correlation coefficient. A correlation of -1 is just as strong as a correlation of 1. We generally use values of 0.10, 0.30, and 0.50 as indicating weak, moderate, and strong relations, respectively. The strength of a relation, just like the form and direction, can also be inferred from a scatterplot, though this is much more difficult to do. Some examples of weak and strong relations are shown in Figures \(5\) and \(6\), respectively. Weak correlations still have an interpretable form and direction, but it is much harder to see. Strong correlations have a very clear pattern, and the points tend to form a line. The examples show two different directions, but remember that the direction does not matter for the strength, only the consistency of the relation and the size of the number, which we will see next.
There are several different types of correlation coefficients, but we will only focus on the most common: Pearson’s $r$. $r$ is a very popular correlation coefficient for assessing linear relations, and it serves as both a descriptive statistic (like $\overline{X}$) and as a test statistic (like $t$). It is descriptive because it describes what is happening in the scatterplot; $r$ will have both a sign (+/–) for the direction and a number (0 – 1 in absolute value) for the magnitude. As noted above, $r$ assumes a linear relation, so nothing about $r$ will suggest what the form is – it will only tell what the direction and magnitude would be if the form is linear (Remember: always make a scatterplot first!). $r$ also works as a test statistic because the magnitude of $r$ will correspond directly to a $t$ value at the specific degrees of freedom, which can then be compared to a critical value. Luckily, we do not need to do this conversion by hand. Instead, we will have a table of $r$ critical values that looks very similar to our $t$ table, and we can compare our $r$ directly to those. The formula for $r$ is very simple: it is just the covariance (defined above) divided by the standard deviations of $X$ and $Y$:

$r=\dfrac{\operatorname{cov}_{X Y}}{s_{X} s_{Y}}=\dfrac{S P}{\sqrt{S S X * S S Y}}$

The first formula gives a direct sense of what a correlation is: a covariance standardized onto the scale of $X$ and $Y$; the second formula is computationally simpler and faster. Both of these equations will give the same value, and as we saw at the beginning of the chapter, all of these values are easily computed by using the sum of products table. When we do this calculation, we will find that our answer is always between -1.00 and 1.00 (if it’s not, check the math again), which gives us a standard, interpretable metric, similar to what $z$-scores did. It was stated earlier that $r$ is a descriptive statistic like $\overline{X}$, and just like $\overline{X}$, it corresponds to a population parameter. For correlations, the population parameter is the lowercase Greek letter $ρ$ (“rho”); be careful not to confuse $ρ$ with a $p$-value – they look quite similar. $r$ is an estimate of $ρ$ just like $\overline{X}$ is an estimate of $μ$. Thus, we will test our observed value of $r$ that we calculate from the data and compare it to a value of $ρ$ specified by our null hypothesis to see if the relation between our variables is significant, as we will see in our example next.

12.05: Anxiety and Depression

Anxiety and depression are often reported to be highly linked (or “comorbid”). Our hypothesis testing procedure follows the same four-step process as before, starting with our null and alternative hypotheses. We will look for a positive relation between our variables among a group of 10 people because that is what we would expect based on them being comorbid.

Step 1: State the Hypotheses

Our hypotheses for correlations start with a baseline assumption of no relation, and our alternative will be directional if we expect to find a specific type of relation.
For this example, we expect a positive relation:

$\begin{array}{c}{\mathrm{H}_{0}: \text { There is no relation between anxiety and depression}} \\ {\mathrm{H}_{0}: \rho=0}\end{array} \nonumber$

$\begin{array}{c}{\mathrm{H}_{\mathrm{A}}: \text {There is a positive relation between anxiety and depression}} \\ {\mathrm{H}_{\mathrm{A}}: \rho>0}\end{array} \nonumber$

Remember that $ρ$ (“rho”) is our population parameter for the correlation that we estimate with $r$, just like $\overline{X}$ and $μ$ for means. Remember also that if there is no relation between variables, the magnitude will be 0, which is where we get the null and alternative hypothesis values.

Step 2: Find the Critical Values

The critical values for correlations come from the correlation table, which looks very similar to the $t$-table (see Figure $1$). Just like our $t$-table, the column of critical values is based on our significance level ($α$) and the directionality of our test. The row is determined by our degrees of freedom. For correlations, we have $N – 2$ degrees of freedom, rather than $N – 1$ (why this is the case is not important). For our example, we have 10 people, so our degrees of freedom = 10 – 2 = 8. We were not given any information about the level of significance at which we should test our hypothesis, so we will assume $α = 0.05$ as always. From our table, we can see that a 1-tailed test (because we expect only a positive relation) at the $α = 0.05$ level has a critical value of $r^* = 0.549$. Thus, if our observed correlation is greater than 0.549, it will be statistically significant. This is a rather high bar (remember, the guideline for a strong relation is $r$ = 0.50); this is because we have so few people. Larger samples make it easier to find significant relations.

Step 3: Calculate the Test Statistic

We have laid out our hypotheses and the criteria we will use to assess them, so now we can move on to our test statistic. Before we do that, we must first create a scatterplot of the data to make sure that the most likely form of our relation is in fact linear. Figure $2$ below shows our data plotted out, and it looks like they are, in fact, linearly related, so Pearson’s $r$ is appropriate. The data we gather from our participants is as follows:

Table $1$: Data from participants

Dep | 2.81 | 1.96 | 3.43 | 3.40 | 4.71 | 1.80 | 4.27 | 3.68 | 2.44 | 3.13
Anx | 3.54 | 3.05 | 3.81 | 3.43 | 4.03 | 3.59 | 4.17 | 3.46 | 3.19 | 4.12

We will need to put these values into our Sum of Products table to calculate the standard deviation and covariance of our variables. We will use $X$ for depression and $Y$ for anxiety to keep track of our data, but be aware that this choice is arbitrary and the math will work out the same if we decided to do the opposite. Our table is thus:

Table $2$: Sum of Products table

$X$ | $(X-\overline{X})$ | $(X-\overline{X})^2$ | $Y$ | $(Y-\overline{Y})$ | $(Y-\overline{Y})^2$ | $(X-\overline{X})(Y-\overline{Y})$
2.81 | -0.35 | 0.12 | 3.54 | -0.10 | 0.01 | 0.04
1.96 | -1.20 | 1.44 | 3.05 | -0.59 | 0.35 | 0.71
3.43 | 0.27 | 0.07 | 3.81 | 0.17 | 0.03 | 0.05
3.40 | 0.24 | 0.06 | 3.43 | -0.21 | 0.04 | -0.05
4.71 | 1.55 | 2.40 | 4.03 | 0.39 | 0.15 | 0.60
1.80 | -1.36 | 1.85 | 3.59 | -0.05 | 0.00 | 0.07
4.27 | 1.11 | 1.23 | 4.17 | 0.53 | 0.28 | 0.59
3.68 | 0.52 | 0.27 | 3.46 | -0.18 | 0.03 | -0.09
2.44 | -0.72 | 0.52 | 3.19 | -0.45 | 0.20 | 0.32
3.13 | -0.03 | 0.00 | 4.12 | 0.48 | 0.23 | -0.01
31.63 | 0.03 | 7.97 | 36.39 | -0.01 | 1.33 | 2.22

The bottom row is the sum of each column. We can see from this that the sum of the $X$ observations is 31.63, which makes the mean of the $X$ variable $\overline{X}=3.16$.
The deviation scores for $X$ sum to 0.03, which is very close to 0, given rounding error, so everything looks right so far. The next column is the squared deviations for $X$, so we can see that the sum of squares for $X$ is $SS_X = 7.97$. The same is true of the $Y$ columns, with an average of $\overline{Y} = 3.64$, deviations that sum to zero within rounding error, and a sum of squares of $SS_Y = 1.33$. The final column is the product of our deviation scores (NOT of our squared deviations), which gives us a sum of products of $SP = 2.22$. There are now three pieces of information we need to calculate before we compute our correlation coefficient: the covariance of $X$ and $Y$ and the standard deviation of each. The covariance of two variables, remember, is the sum of products divided by $N – 1$. For our data:

$\operatorname{cov}_{X Y}=\dfrac{S P}{N-1}=\dfrac{2.22}{9}=0.25 \nonumber$

The formulas for the standard deviations are the same as before. Using subscripts $X$ and $Y$ to denote depression and anxiety:

$s_{X}=\sqrt{\dfrac{\sum(X-\overline{X})^{2}}{N-1}}=\sqrt{\dfrac{7.97}{9}}=0.94 \nonumber$

$s_{Y}=\sqrt{\dfrac{\sum(Y-\overline{Y})^{2}}{N-1}}=\sqrt{\dfrac{1.33}{9}}=0.38 \nonumber$

Now we have all of the information we need to calculate $r$:

$r=\dfrac{\operatorname{cov}_{X Y}}{s_{X} s_{Y}}=\dfrac{0.25}{0.94 * 0.38}=0.70 \nonumber$

We can verify this using our other formula, which is computationally shorter:

$r=\dfrac{S P}{\sqrt{S S X * S S Y}}=\dfrac{2.22}{\sqrt{7.97 * 1.33}}=.70 \nonumber$

So our observed correlation between anxiety and depression is $r = 0.70$, which, based on sign and magnitude, is a strong, positive correlation. Now we need to compare it to our critical value to see if it is also statistically significant.

Step 4: Make a Decision

Our critical value was $r^* = 0.549$ and our obtained value was $r = 0.70$. Our obtained value was larger than our critical value, so we can reject the null hypothesis. Reject $H_0$. Based on our sample of 10 people, there is a statistically significant, strong, positive relation between anxiety and depression, $r(8) = 0.70, p < .05$. Notice in our interpretation that, because we already know the magnitude and direction of our correlation, we can interpret that. We also report the degrees of freedom, just like with $t$, and we know that $p < α$ because we rejected the null hypothesis. As we can see, even though we are dealing with a very different type of data, our process of hypothesis testing has remained unchanged.
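For readers following along in R, the same test can be run with cor.test(); the scores below are the depression (X) and anxiety (Y) values from Table 1, and the one-sided alternative matches our directional hypothesis. Small rounding differences from the hand calculation are expected.

# Depression (X) and anxiety (Y) scores from the example above
dep <- c(2.81, 1.96, 3.43, 3.40, 4.71, 1.80, 4.27, 3.68, 2.44, 3.13)
anx <- c(3.54, 3.05, 3.81, 3.43, 4.03, 3.59, 4.17, 3.46, 3.19, 4.12)
cor(dep, anx)                                  # Pearson's r; the rounded hand calculation gave 0.70
cor.test(dep, anx, alternative = "greater")    # one-tailed test with df = 8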
Pearson’s \(r\) is an incredibly flexible and useful statistic. Not only is it both descriptive and inferential, as we saw above, but because it is on a standardized metric (always between -1.00 and 1.00), it can also serve as its own effect size. In general, we use \(r\) = 0.10, \(r\) = 0.30, and \(r\) = 0.50 as our guidelines for small, medium, and large effects. Just like with Cohen’s \(d\), these guidelines are not absolutes, but they do serve as useful indicators in most situations. Notice as well that these are the same guidelines we used earlier to interpret the magnitude of the relation based on the correlation coefficient. In addition to \(r\) being its own effect size, there is an additional effect size we can calculate for our results. This effect size is \(r^2\), and it is exactly what it looks like – it is the squared value of our correlation coefficient. Just like \(η^2\) in ANOVA, \(r^2\) is interpreted as the amount of variance explained in the outcome variable, and the cut scores are the same as well: 0.01, 0.09, and 0.25 for small, medium, and large, respectively. Notice here that these are the same cutoffs we used for regular \(r\) effect sizes, but squared (\(0.10^2\) = 0.01, \(0.30^2\) = 0.09, \(0.50^2\) = 0.25) because, again, the \(r^2\) effect size is just the squared correlation, so its interpretation should be, and is, the same. The reason we use \(r^2\) as an effect size is because our ability to explain variance is often important to us. The similarities between \(η^2\) and \(r^2\) in interpretation and magnitude should clue you in to the fact that they are similar analyses, even if they look nothing alike. That is because, behind the scenes, they actually are! In the next chapter, we will learn a technique called Linear Regression, which will formally link the two analyses together.

12.07: Correlation versus Causation

We cover a great deal of material in introductory statistics and, as mentioned in chapter 1, many of the principles underlying what we do in statistics can be used in your day to day life to help you interpret information objectively and make better decisions. We now come to what may be the most important lesson in introductory statistics: the difference between correlation and causation. It is very, very tempting to look at variables that are correlated and assume that this means they are causally related; that is, it gives the impression that \(X\) is causing \(Y\). However, in reality, correlations do not – and cannot – do this. Correlations DO NOT prove causation. No matter how logical or how obvious or how convenient it may seem, no correlational analysis can demonstrate causality. The ONLY way to demonstrate a causal relation is with a properly designed and controlled experiment. Many times, we have good reason for assessing the correlation between two variables, and often that reason will be that we suspect that one causes the other. Thus, when we run our analyses and find strong, statistically significant results, it is very tempting to say that we found the causal relation that we are looking for. The reason we cannot do this is that, without an experimental design that includes random assignment and control variables, the relation we observe between the two variables may be caused by something else that we failed to measure. These “third variables” are lurking variables or confound variables, and they are impossible to detect and control for without an experiment.
Confound variables, which we will represent with \(Z\), can cause two variables \(X\) and \(Y\) to appear related when in fact they are not. They do this by being the hidden – or lurking – cause of each variable independently. That is, if \(Z\) causes \(X\) and \(Z\) causes \(Y\), then \(X\) and \(Y\) will appear to be related. However, if we control for the effect of \(Z\) (the method for doing this is beyond the scope of this text), then the relation between \(X\) and \(Y\) will disappear. A popular example for this effect is the correlation between ice cream sales and deaths by drowning. These variables are known to correlate very strongly over time. However, this does not prove that one causes the other. The lurking variable in this case is the weather – people enjoy swimming and enjoy eating ice cream more during hot weather as a way to cool off. As another example, consider shoe size and spelling ability in elementary school children. Although there should clearly be no causal relation here, the variables are nonetheless consistently correlated. The confound in this case? Age. Older children spell better than younger children and are also bigger, so they have larger shoes. When there is the possibility of confounding variables being the hidden cause of our observed correlation, we will often collect data on \(Z\) as well and control for it in our analysis. This is good practice and a wise thing for researchers to do. Thus, it would seem that it is easy to demonstrate causation with a correlation that controls for \(Z\). However, the number of variables that could potentially cause a correlation between \(X\) and \(Y\) is functionally limitless, so it would be impossible to control for everything. That is why we use experimental designs; by randomly assigning people to groups and manipulating variables in those groups, we can balance out individual differences in any variable that may be our cause. It is not always possible to do an experiment, however, so there are certain situations in which we will have to be satisfied with our observed relation and do the best we can to control for known confounds. However, in these situations, even if we do an excellent job of controlling for many extraneous (a statistical and research term for “outside”) variables, we must be very careful not to use causal language. That is because, even after controls, sometimes variables are related just by chance. Sometimes, variables will end up being related simply due to random chance, and we call these correlations spurious. Spurious just means random, so what we are seeing is random correlations because, given enough time, enough variables, and enough data, sampling error will eventually cause some variables to be related when they should not be. Sometimes, this even results in incredibly strong, but completely nonsensical, correlations. This becomes more and more of a problem as our ability to collect massive datasets and dig through them improves, so it is very important to think critically about any relation you encounter.
Correlations, although simple to calculate, can be very complex, and there are many additional issues we should consider. We will look at two of the most common issues that affect our correlations, as well as discuss some other correlations and reporting methods you may encounter.

Range Restriction

The strength of a correlation depends on how much variability is in each of the variables \(X\) and \(Y\). This is evident in the formula for Pearson’s \(r\), which uses both covariance (based on the sum of products, which comes from deviation scores) and the standard deviation of both variables (which are based on the sums of squares, which also come from deviation scores). Thus, if we reduce the amount of variability in one or both variables, our correlation will go down. Failure to capture the full variability of a variable is called range restriction. Take a look at Figures \(1\) and \(2\) below. The first shows a strong relation (\(r\) = 0.67) between two variables. An oval is overlain on top of it to make the relation even more distinct. The second shows the same data, but the bottom half of the \(X\) variable (all scores below 5) have been removed, which causes our relation (again represented by a red oval) to become much weaker (\(r\) = 0.38). Thus, range restriction has truncated (made smaller) our observed correlation. Sometimes range restriction happens by design. For example, we rarely hire people who do poorly on job applications, so we would not have the lower range of those predictor variables. Other times, we inadvertently cause range restriction by not properly sampling our population. Although there are ways to correct for range restriction, they are complicated and require much information that may not be known, so it is best to be very careful during the data collection process to avoid it.

Outliers

Another issue that can cause the observed size of our correlation to be inappropriately large or small is the presence of outliers. An outlier is a data point that falls far away from the rest of the observations in the dataset. Sometimes outliers are the result of incorrect data entry, poor or intentionally misleading responses, or simple random chance. Other times, however, they represent real people with meaningful values on our variables. The distinction between meaningful and accidental outliers is a difficult one that is based on the expert judgment of the researcher. Sometimes, we will remove the outlier (if we think it is an accident) or we may decide to keep it (if we find the scores to still be meaningful even though they are different). The plots below in Figure \(3\) show the effects that an outlier can have on data. In the first, we have our raw dataset. You can see in the upper right corner that there is an outlier observation that is very far from the rest of our observations on both the \(X\) and \(Y\) variables. In the middle, we see the correlation computed when we include the outlier, along with a straight line representing the relation; here, it is a positive relation. In the third image, we see the correlation after removing the outlier, along with a line showing the direction once again. Not only did the correlation get stronger, it completely changed direction!
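A quick simulation, with made-up numbers, shows how strongly a single extreme point can pull a correlation around; the values and the random seed below are arbitrary, and the result typically (though not always) reverses the sign.

# Effect of a single extreme point on r (simulated data)
set.seed(42)
x <- rnorm(20)
y <- -0.5 * x + rnorm(20, sd = 0.5)   # the bulk of the data has a negative relation
cor(x, y)                             # negative correlation without the outlier
cor(c(x, 10), c(y, 10))               # adding one extreme point (10, 10) typically flips it positive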
In general, there are three effects that an outlier can have on a correlation: it can change the magnitude (make it stronger or weaker), it can change the significance (make a non-significant correlation significant or vice versa), and/or it can change the direction (make a positive relation negative or vice versa). Outliers are a big issue in small datasets where a single observation can have a strong weight compared to the rest. However, as our sample sizes get very large (into the hundreds), the effects of outliers diminish because they are outweighed by the rest of the data. Nevertheless, no matter how large a dataset you have, it is always a good idea to screen for outliers, both statistically (using analyses that we do not cover here) and/or visually (using scatterplots).

Other Correlation Coefficients

In this chapter we have focused on Pearson’s \(r\) as our correlation coefficient because it is very common and very useful. There are, however, many other correlations out there, each of which is designed for a different type of data. The most common of these is Spearman’s rho (\(ρ\)), which is designed to be used on ordinal data rather than continuous data. This is a very useful analysis if we have ranked data or our data do not conform to the normal distribution. There are even more correlations for ordered categories, but they are much less common and beyond the scope of this chapter. Additionally, the principles of correlations underlie many other advanced analyses. In the next chapter, we will learn about regression, which is a formal way of running and analyzing a correlation that can be extended to more than two variables. Regression is a very powerful technique that serves as the basis for even our most advanced statistical models, so what we have learned in this chapter will open the door to an entire world of possibilities in data analysis.

Correlation Matrices

Many research studies look at the relation between more than two continuous variables. In such situations, we could simply list out all of our correlations, but that would take up a lot of space and make it difficult to quickly find the relation we are looking for. Instead, we create correlation matrices so that we can quickly and simply display our results. A matrix is like a grid that contains our values. There is one row and one column for each of our variables, and the intersections of the rows and columns for different variables contain the correlation for those two variables. At the beginning of the chapter, we saw scatterplots presenting data for correlations between job satisfaction, well-being, burnout, and job performance. We can create a correlation matrix to quickly display the numerical values of each. Such a matrix is shown below.

Table \(1\): Correlation matrix to display the numerical values

 | Satisfaction | Well-Being | Burnout | Performance
Satisfaction | 1.00 | | |
Well-Being | 0.41 | 1.00 | |
Burnout | -0.54 | -0.87 | 1.00 |
Performance | 0.08 | 0.21 | -0.33 | 1.00

Notice that there are values of 1.00 where each row and column of the same variable intersect. This is because a variable correlates perfectly with itself, so the value is always exactly 1.00. Also notice that the upper cells are left blank and only the cells below the diagonal of 1s are filled in. This is because correlation matrices are symmetrical: they have the same values above the diagonal as below it. Filling in both sides would provide redundant information and make it a bit harder to read the matrix, so we leave the upper triangle blank.
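In R, a correlation matrix like Table \(1\) can be produced in a single call to cor(); the data frame and its scores below are hypothetical stand-ins for real survey variables.

# Correlation matrix for several hypothetical continuous variables
dat <- data.frame(satisfaction = c(4, 6, 7, 5, 8, 3, 9, 6),
                  wellbeing = c(5, 6, 8, 5, 7, 4, 9, 7),
                  burnout = c(7, 5, 3, 6, 4, 8, 2, 4))
round(cor(dat), 2)   # full symmetric matrix; 1.00s run down the diagonal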
Correlation matrices are a very condensed way of presenting many results quickly, so they appear in almost all research studies that use continuous variables. Many matrices also include columns that show the variable means and standard deviations, as well as asterisks showing whether or not each correlation is statistically significant.

12.09: Exercises

1. What does a correlation assess?

Answer: Correlations assess the linear relation between two continuous variables.

2. What are the three characteristics of a correlation coefficient?

3. What is the difference between covariance and correlation?

Answer: Covariance is an unstandardized measure of how related two continuous variables are. Correlations are standardized versions of covariance that fall between negative 1 and positive 1.

4. Why is it important to visualize correlational data in a scatterplot before performing analyses?

5. What sort of relation is displayed in the scatterplot below?

Answer: Strong, positive, linear relation.

6. What is the direction and magnitude of the following correlation coefficients?

a. -0.81
b. 0.40
c. 0.15
d. -0.08
e. 0.29

7. Create a scatterplot from the following data:

Hours Studying | Overall Class Performance
0.62 | 2.02
1.50 | 4.62
0.34 | 2.60
0.97 | 1.59
3.54 | 4.67
0.69 | 2.52
1.53 | 2.28
0.32 | 1.68
1.94 | 2.50
1.25 | 4.04
1.42 | 2.63
3.07 | 3.53
3.99 | 3.90
1.73 | 2.75
1.9 | 2.95

Answer: Your scatterplot should look similar to this:

8. In the following correlation matrix, what is the relation (number, direction, and magnitude) between…

a. Pay and Satisfaction
b. Stress and Health

Workplace | Pay | Satisfaction | Stress | Health
Pay | 1.00 | | |
Satisfaction | 0.68 | 1.00 | |
Stress | 0.02 | -0.23 | 1.00 |
Health | 0.05 | 0.15 | -0.48 | 1.00

9. Using the data from problem 7, test for a statistically significant relation between the variables.

Answer: Step 1: $H_0: ρ = 0$, “There is no relation between time spent studying and overall performance in class”; $H_A: ρ > 0$, “There is a positive relation between time spent studying and overall performance in class.” Step 2: $df = 15 – 2 = 13, α = 0.05$, 1-tailed test, $r^* = 0.441$. Step 3: Using the Sum of Products table, you should find: $\overline{X} = 1.61, SS_X = 17.44, \overline{Y}= 2.95, SS_Y = 13.60, SP = 10.06, r = 0.65$. Step 4: Obtained statistic is greater than critical value, reject $H_0$. There is a statistically significant, strong, positive relation between time spent studying and performance in class, $r(13) = 0.65, p < .05$.

10. A researcher collects data from 100 people to assess whether there is any relation between level of education and levels of civic engagement. The researcher finds the following descriptive values: $\overline{X}= 4.02, s_x = 1.15, \overline{Y}= 15.92, s_y = 5.01, SS_X = 130.93, SS_Y = 2484.91, SP = 159.39$. Test for a significant relation using the four step hypothesis testing procedure.
• 13.1: Line of Best Fit In correlations, we referred to a linear trend in the data. That is, we assumed that there was a straight line we could draw through the middle of our scatterplot that would represent the relation between our two variables, X and Y . Regression involves solving for the equation of that line, which is called the Line of Best Fit. • 13.2: Prediction In regression, we most frequently talk about prediction, specifically predicting our outcome variable Y from our explanatory variable X, and we use the line of best fit to make our predictions. • 13.3: ANOVA Table Our ANOVA table in regression follows the exact same format as it did for ANOVA (hence the name). Our top row is our observed effect, our middle row is our error, and our bottom row is our total. The columns take on the same interpretations as well: from left to right, we have our sums of squares, our degrees of freedom, our mean squares, and our F statistic. • 13.4: Hypothesis Testing in Regression Regression, like all other analyses, will test a null hypothesis in our data. In regression, we are interested in predicting Y scores and explaining variance using a line, the slope of which is what allows us to get closer to our observed scores than the mean of Y can. Thus, our hypotheses concern the slope of the line, which is estimated in the prediction equation by b . • 13.5: Happiness and Well-Being Researchers are interested in explaining differences in how happy people are based on how healthy people are. They gather data on each of these variables from 18 people and fit a linear regression model to explain the variance. We will follow the four-step hypothesis testing procedure to see if there is a relation between these variables that is statistically significant. • 13.6: Multiple Regression and Other Extensions The next step in regression is to study multiple regression, which uses multiple X variables as predictors for a single Y variable at the same time. The math of multiple regression is very complex but the logic is the same: we are trying to use variables that are statistically significantly related to our outcome to explain the variance we observe in that outcome. Other forms of regression include curvilinear models that can explain curves in the data rather than straight lines. • 13.E: Linear Regression (Exercises) 13: Linear Regression In correlations, we referred to a linear trend in the data. That is, we assumed that there was a straight line we could draw through the middle of our scatterplot that would represent the relation between our two variables, \(X\) and \(Y\). Regression involves solving for the equation of that line, which is called the Line of Best Fit. The line of best fit can be thought of as the central tendency of our scatterplot. The term “best fit” means that the line is as close to all points (with each point representing both variables for a single person) in the scatterplot as possible, with a balance of scores above and below the line. This is the same idea as the mean, which has an equal weighting of scores above and below it and is the best singular descriptor of all our data points for a single variable. We have already seen many scatterplots in chapter 2 and chapter 12, so we know by now that no scatterplot has points that form a perfectly straight line. Because of this, when we put a straight line through a scatterplot, it will not touch all of the points, and it may not even touch any! 
This will result in some distance between the line and each of the points it is supposed to represent, just like a mean has some distance between it and all of the individual scores in the dataset. The distances between the line of best fit and each individual data point go by two different names that mean the same thing: errors and residuals. The term “error” in regression is closely aligned with the meaning of error in statistics (think standard error or sampling error); it does not mean that we did anything wrong, it simply means that there was some discrepancy or difference between what our analysis produced and the true value we are trying to get at. The term “residual” is new to our study of statistics, and it takes on a very similar meaning in regression to what it means in everyday parlance: there is something left over. In regression, what is “left over” – that is, what makes up the residual – is an imperfection in our ability to predict values of the \(Y\) variable using our line. This definition brings us to one of the primary purposes of regression and the line of best fit: predicting scores.
The goal of regression is the same as the goal of ANOVA: to take what we know about one variable ($X$) and use it to explain our observed differences in another variable ($Y$). In ANOVA, we talked about – and tested for – group mean differences, but in regression we do not have groups for our explanatory variable; we have a continuous variable, like in correlation. Because of this, our vocabulary will be a little bit different, but the process, logic, and end result are all the same. In regression, we most frequently talk about prediction, specifically predicting our outcome variable $Y$ from our explanatory variable $X$, and we use the line of best fit to make our predictions. Let’s take a look at the equation for the line, which is quite simple: $\widehat{\mathrm{Y}}=\mathrm{a}+\mathrm{bX}$ The terms in the equation are defined as: • $\widehat{\mathrm{Y}}$: the predicted value of $Y$ for an individual person • $a$: the intercept of the line • $b$: the slope of the line • $X$: the observed value of $X$ for an individual person What this shows us is that we will use our known value of $X$ for each person to predict the value of $Y$ for that person. The predicted value, $\widehat{\mathrm{Y}}$, is called “$y$-hat” and is our best guess for what a person’s score on the outcome is. Notice also that the form of the equation is very similar to very simple linear equations that you have likely encountered before and has only two parameter estimates: an intercept (where the line crosses the Y-axis) and a slope (how steep – and the direction, positive or negative – the line is). These are parameter estimates because, like everything else in statistics, we are interested in approximating the true value of the relation in the population but can only ever estimate it using sample data. We will soon see that one of these parameters, the slope, is the focus of our hypothesis tests (the intercept is only there to make the math work out properly and is rarely interpretable). The formulae for these parameter estimates use very familiar values: $\mathrm{a}=\overline{\mathrm{Y}}-\mathrm{b} \overline{\mathrm{X}}$ $\mathrm{b}=\dfrac{\operatorname{cov}_{X Y}}{s_{X}^{2}}=\dfrac{S P}{S S X}=r\left(\dfrac{S_{y}}{s_{x}}\right)$ We have seen each of these before. $\overline{Y}$ and $\overline{X}$ are the means of $Y$ and $X$, respectively; $\operatorname{cov}_{X Y}$ is the covariance of $X$ and $Y$ we learned about with correlations; and $s_{X}^{2}$ is the variance of $X$. The formula for the slope is very similar to the formula for a Pearson correlation coefficient; the only difference is that we are dividing by the variance of $X$ instead of the product of the standard deviations of $X$ and $Y$. Because of this, our slope is scaled to the same scale as our $X$ variable and is no longer constrained to be between 0 and 1 in absolute value. This formula provides a clear definition of the slope of the line of best fit, and just like with correlation, this definitional formula can be simplified into a short computational formula for easier calculations. In this case, we are simply taking the sum of products and dividing by the sum of squares for $X$. Notice that there is a third formula for the slope of the line that involves the correlation between $X$ and $Y$. This is because regression and correlation look for the same thing: a straight line through the middle of the data. The only difference between a regression coefficient in simple linear regression and a Pearson correlation coefficient is the scale. 
So, if you lack raw data but have summary information on the correlation and standard deviations for variables, you can still compute a slope, and therefore an intercept, for a line of best fit. It is very important to point out that the $Y$ values in the equations for $a$ and $b$ are our observed $Y$ values in the dataset, NOT the predicted $Y$ values ($\widehat{\mathrm{Y}}$) from our equation for the line of best fit. Thus, we will have 3 values for each person: the observed value of $X (X)$, the observed value of $Y (Y)$, and the predicted value of $Y (\widehat{\mathrm{Y}}$). You may be asking why we would try to predict $Y$ if we have an observed value of $Y$, and that is a very reasonable question. The answer has two explanations: first, we need to use known values of $Y$ to calculate the parameter estimates in our equation, and we use the difference between our observed values and predicted values ($Y – \widehat{\mathrm{Y}}$) to see how accurate our equation is; second, we often use regression to create a predictive model that we can then use to predict values of $Y$ for other people for whom we only have information on $X$. Let’s look at this from an applied example. Businesses often have more applicants for a job than they have openings available, so they want to know who among the applicants is most likely to be the best employee. There are many criteria that can be used, but one is a personality test for conscientiousness, with the belief being that more conscientious (more responsible) employees are better than less conscientious employees. A business might give their employees a personality inventory to assess conscientiousness and pair it with existing performance data to look for a relation. In this example, we have known values of the predictor ($X$, conscientiousness) and outcome ($Y$, job performance), so we can estimate an equation for a line of best fit and see how accurately conscientiousness predicts job performance, then use this equation to predict future job performance of applicants based only on their known values of conscientiousness from personality inventories given during the application process. The key to assessing whether a linear regression works well is the difference between our observed and known $Y$ values and our predicted $\widehat{\mathrm{Y}}$ values. As mentioned in passing above, we use subtraction to find the difference between them ($Y – \widehat{\mathrm{Y}}$) in the same way we use subtraction for deviation scores and sums of squares. The value ($Y – \widehat{\mathrm{Y}}$) is our residual, which, as defined above, is how close our line of best fit is to our actual values. We can visualize residuals to get a better sense of what they are by creating a scatterplot and overlaying a line of best fit on it, as shown in Figure $1$. In Figure $1$, the triangular dots represent observations from each person on both $X$ and $Y$ and the dashed bright red line is the line of best fit estimated by the equation $\widehat{\mathrm{Y}}= a + bX$. For every person in the dataset, the line represents their predicted score. The dark red bracket between the triangular dots and the predicted scores on the line of best fit are our residuals (they are only drawn for four observations for ease of viewing, but in reality there is one for every observation); you can see that some residuals are positive and some are negative, and that some are very large and some are very small.
This means that some predictions are very accurate and some are very inaccurate, and that some predictions overestimated values while others underestimated them. Across the entire dataset, the line of best fit is the one that minimizes the total (sum) value of all residuals. That is, although predictions at an individual level might be somewhat inaccurate, across our full sample and (theoretically) in future samples our total amount of error is as small as possible. We call this property of the line of best fit the Least Squares Error Solution. This term means that the solution – or equation – of the line is the one that provides the smallest possible value of the squared errors (squared so that they can be summed, just like in standard deviation) relative to any other straight line we could draw through the data.

Predicting Scores and Explaining Variance

We have now seen that the purpose of regression is twofold: we want to predict scores based on our line and, as stated earlier, explain variance in our observed $Y$ variable just like in ANOVA. These two purposes go hand in hand, and our ability to predict scores is literally our ability to explain variance. That is, if we cannot account for the variance in $Y$ based on $X$, then we have no reason to use $X$ to predict future values of $Y$. We know that the overall variance in $Y$ is a function of each score deviating from the mean of $Y$ (as in our calculation of variance and standard deviation). So, just like the red brackets in Figure $1$ representing residuals, given as ($Y – \widehat{\mathrm{Y}}$), we can visualize the overall variance as each score’s distance from the overall mean of $Y$, given as ($Y – \overline{Y}$), our normal deviation score. This is shown in Figure $2$. In Figure $2$, the solid blue line is the mean of $Y$, and the blue brackets are the deviation scores between our observed values of $Y$ and the mean of $Y$. This represents the overall variance that we are trying to explain. Thus, the residuals and the deviation scores are the same type of idea: the distance between an observed score and a given line, either the line of best fit that gives predictions or the line representing the mean that serves as a baseline. The difference between these two values, which is the distance between the lines themselves, is our model’s ability to predict scores above and beyond the baseline mean; that is, it is our model’s ability to explain the variance we observe in $Y$ based on values of $X$. If we have no ability to explain variance, then our line will be flat (the slope will be 0.00) and will be the same as the line representing the mean, and the distance between the lines will be 0.00 as well. We now have three pieces of information: the distance from the observed score to the mean, the distance from the observed score to the prediction line, and the distance from the prediction line to the mean. These are our three pieces of information needed to test our hypotheses about regression and to calculate effect sizes. They are our three Sums of Squares, just like in ANOVA. Our distance from the observed score to the mean is the Sum of Squares Total, which we are trying to explain. Our distance from the observed score to the prediction line is our Sum of Squares Error, or residual, which we are trying to minimize. Our distance from the prediction line to the mean is our Sum of Squares Model, which is our observed effect and our ability to explain variance. Each of these will go into the ANOVA table to calculate our test statistic.
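As a sketch of how these three pieces fit together, the following R code computes the three sums of squares from a line fit to a small set of made-up scores; the variable names are only for illustration.

# The three regression sums of squares for a small hypothetical dataset
x <- c(1, 2, 3, 4, 5, 6, 7, 8)
y <- c(2, 3, 3, 5, 4, 7, 6, 8)
fit <- lm(y ~ x)                       # line of best fit
y_hat <- fitted(fit)                   # predicted scores
SS_total <- sum((y - mean(y))^2)       # observed scores to the mean of Y
SS_error <- sum((y - y_hat)^2)         # observed scores to the prediction line (residuals)
SS_model <- sum((y_hat - mean(y))^2)   # prediction line to the mean of Y
c(SS_model + SS_error, SS_total)       # the model and error pieces add up to the total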
Our ANOVA table in regression follows the exact same format as it did for ANOVA (hence the name). Our top row is our observed effect, our middle row is our error, and our bottom row is our total. The columns take on the same interpretations as well: from left to right, we have our sums of squares, our degrees of freedom, our mean squares, and our $F$ statistic.

Table $1$: ANOVA table in regression

| Source | $SS$ | $df$ | $MS$ | $F$ |
|---|---|---|---|---|
| Model | $\sum(\widehat{Y}-\overline{Y})^{2}$ | 1 | $SS_M / df_M$ | $MS_M / MS_E$ |
| Error | $\sum(Y-\widehat{Y})^{2}$ | $N-2$ | $SS_E / df_E$ | |
| Total | $\sum(Y-\overline{Y})^{2}$ | $N-1$ | | |

As with ANOVA, getting the values for the $SS$ column is a straightforward but somewhat arduous process. First, you take the raw scores of $X$ and $Y$ and calculate the means, variances, and covariance using the sum of products table introduced in our chapter on correlations. Next, you use the variance of $X$ and the covariance of $X$ and $Y$ to calculate the slope of the line, $b$, the formula for which is given above. After that, you use the means and the slope to find the intercept, $a$, which is given alongside $b$. After that, you use the full prediction equation for the line of best fit to get predicted $Y$ scores ($\widehat{Y}$) for each person. Finally, you use the observed $Y$ scores, predicted $Y$ scores, and mean of $Y$ to find the appropriate deviation scores for each person for each sum of squares source in the table and sum them to get the Sum of Squares Model, Sum of Squares Error, and Sum of Squares Total. As with ANOVA, you won’t be required to compute the $SS$ values by hand, but you will need to know what they represent and how they fit together.

The other columns in the ANOVA table are all familiar. The degrees of freedom column still has $N – 1$ for our total, but now we have $N – 2$ for our error degrees of freedom and 1 for our model degrees of freedom; this is because simple linear regression only has one predictor, so our degrees of freedom for the model is always 1 and does not change. The total degrees of freedom must still be the sum of the other two, so our degrees of freedom error will always be $N – 2$ for simple linear regression. The mean square columns are still the $SS$ column divided by the $df$ column, and the test statistic $F$ is still the ratio of the mean squares. Based on this, it is now explicitly clear that not only do regression and ANOVA have the same goal but they are, in fact, the same analysis entirely. The only difference is the type of data we feed into the predictor side of the equations: continuous for regression and categorical for ANOVA.

13.04: Hypothesis Testing in Regression

Regression, like all other analyses, will test a null hypothesis in our data. In regression, we are interested in predicting $Y$ scores and explaining variance using a line, the slope of which is what allows us to get closer to our observed scores than the mean of $Y$ can. Thus, our hypotheses concern the slope of the line, which is estimated in the prediction equation by $b$.
Specifically, we want to test that the slope is not zero:

$\begin{array}{c}{\mathrm{H}_{0}: \text { There is no explanatory relation between our variables }} \\ {\mathrm{H}_{0}: \beta=0}\end{array} \nonumber$

$\begin{array}{c}{\mathrm{H}_{\mathrm{A}}: \text { There is an explanatory relation between our variables }} \\ {\mathrm{H}_{\mathrm{A}}: \beta>0} \\ {\mathrm{H}_{\mathrm{A}}: \beta<0} \\ {\mathrm{H}_{\mathrm{A}}: \beta \neq 0}\end{array} \nonumber$

A non-zero slope indicates that we can explain values in $Y$ based on $X$ and therefore predict future values of $Y$ based on $X$. Our alternative hypotheses are analogous to those in correlation: positive relations have values above zero, negative relations have values below zero, and two-tailed tests are possible. Just like ANOVA, we will test the significance of this relation using the $F$ statistic calculated in our ANOVA table compared to a critical value from the $F$ distribution table. Let’s take a look at an example and regression in action.
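When raw data are available in R, the same ANOVA table and the $F$ test of the slope come directly out of a fitted model. The sketch below uses simulated data purely for illustration; the variable names are hypothetical.

```r
# Hypothetical illustration: simulate N = 20 pairs with a positive slope
set.seed(42)
X <- rnorm(20, mean = 50, sd = 10)
Y <- 2 + 0.5 * X + rnorm(20, sd = 5)

fit <- lm(Y ~ X)   # estimates a (intercept) and b (slope)

anova(fit)    # regression ANOVA table: the X row is the Model line and the
              # Residuals row is the Error line (R does not print a Total row)
summary(fit)  # reports the slope estimate, its test, and the same F(1, N - 2)
```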
Researchers are interested in explaining differences in how happy people are based on how healthy people are. They gather data on each of these variables from 18 people and fit a linear regression model to explain the variance. We will follow the four-step hypothesis testing procedure to see if there is a relation between these variables that is statistically significant.

Step 1: State the Hypotheses

The null hypothesis in regression states that there is no relation between our variables. The alternative states that there is a relation, but because our research description did not explicitly state a direction of the relation, we will use a nondirectional hypothesis.

$\begin{array}{c}{\mathrm{H}_{0}: \text { There is no explanatory relation between health and happiness }} \\ {\qquad \mathrm{H}_{0}: \beta=0}\end{array} \nonumber$

$\begin{array}{c}{\mathrm{H}_{\mathrm{A}}: \text { There is an explanatory relation between health and happiness }} \\ {\qquad \mathrm{H}_{\mathrm{A}}: \beta \neq 0}\end{array} \nonumber$

Step 2: Find the Critical Value

Because regression and ANOVA are the same analysis, our critical value for regression will come from the same place: the $F$ distribution table, which uses two types of degrees of freedom. We saw above that the degrees of freedom for our numerator – the Model line – is always 1 in simple linear regression, and that the denominator degrees of freedom – from the Error line – is $N – 2$. In this instance, we have 18 people so our degrees of freedom for the denominator is 16. Going to our $F$ table, we find that the appropriate critical value for 1 and 16 degrees of freedom is $F^* = 4.49$, shown below in Figure $1$.

Step 3: Calculate the Test Statistic

The process of calculating the test statistic for regression first involves computing the parameter estimates for the line of best fit. To do this, we first calculate the means, standard deviations, and sum of products for our $X$ and $Y$ variables, as shown below.

Table $1$: Sum of Products table

| $X$ | $(X-\overline{X})$ | $(X-\overline{X})^2$ | $Y$ | $(Y-\overline{Y})$ | $(Y-\overline{Y})^2$ | $(X-\overline{X})(Y-\overline{Y})$ |
|---|---|---|---|---|---|---|
| 17.65 | -2.13 | 4.53 | 10.36 | -7.10 | 50.37 | 15.10 |
| 16.99 | -2.79 | 7.80 | 16.38 | -1.08 | 1.16 | 3.01 |
| 18.30 | -1.48 | 2.18 | 15.23 | -2.23 | 4.97 | 3.29 |
| 18.28 | -1.50 | 2.25 | 14.26 | -3.19 | 10.18 | 4.79 |
| 21.89 | 2.11 | 4.47 | 17.71 | 0.26 | 0.07 | 0.55 |
| 22.61 | 2.83 | 8.01 | 16.47 | -0.98 | 0.97 | -2.79 |
| 17.42 | -2.36 | 5.57 | 16.89 | -0.56 | 0.32 | 1.33 |
| 20.35 | 0.57 | 0.32 | 18.74 | 1.29 | 1.66 | 0.73 |
| 18.89 | -0.89 | 0.79 | 21.96 | 4.50 | 20.26 | -4.00 |
| 18.63 | -1.15 | 1.32 | 17.57 | 0.11 | 0.01 | -0.13 |
| 19.67 | -0.11 | 0.01 | 18.12 | 0.66 | 0.44 | -0.08 |
| 18.39 | -1.39 | 1.94 | 12.08 | -5.37 | 28.87 | 7.48 |
| 22.48 | 2.71 | 7.32 | 17.11 | -0.34 | 0.12 | -0.93 |
| 23.25 | 3.47 | 12.07 | 21.66 | 4.21 | 17.73 | 14.63 |
| 19.91 | 0.13 | 0.02 | 17.86 | 0.40 | 0.16 | 0.05 |
| 18.21 | -1.57 | 2.45 | 18.49 | 1.03 | 1.07 | -1.62 |
| 23.65 | 3.87 | 14.99 | 22.13 | 4.67 | 21.82 | 18.08 |
| 19.45 | -0.33 | 0.11 | 21.17 | 3.72 | 13.82 | -1.22 |
| 356.02 | 0.00 | 76.14 | 314.18 | 0.00 | 173.99 | 58.29 |

From the raw data in our $X$ and $Y$ columns, we find that the means are $\overline{\mathrm{X}}=19.78$ and $\overline{\mathrm{Y}}=17.45$. The deviation scores for each variable sum to zero, so all is well there. The sums of squares for $X$ and $Y$ ultimately lead us to standard deviations of $s_{X}=2.12$ and $s_{Y}=3.20$. Finally, our sum of products is 58.29, which gives us a covariance of $\operatorname{cov}_{X Y}=3.43$, so we know our relation will be positive. This is all the information we need for our equations for the line of best fit.
First, we must calculate the slope of the line:

$\mathrm{b}=\dfrac{S P}{S S X}=\dfrac{58.29}{76.14}=0.77 \nonumber$

This means that as $X$ changes by 1 unit, $Y$ will change by 0.77. In terms of our problem, as health increases by 1, happiness goes up by 0.77, which is a positive relation. Next, we use the slope, along with the means of each variable, to compute the intercept:

$\begin{aligned} a &=\overline{Y}-b\,\overline{X} \\ a &= 17.45-0.77 * 19.78 \\ a &= 17.45-15.23=2.22 \end{aligned} \nonumber$

For this particular problem (and most regressions), the intercept is not an important or interpretable value, so we will not read into it further. Now that we have all of our parameters estimated, we can give the full equation for our line of best fit:

$\widehat{\mathrm{Y}}=2.22+0.77 \mathrm{X} \nonumber$

We can plot this relation in a scatterplot and overlay our line onto it, as shown in Figure $2$. We can use the line equation to find predicted values for each observation and use them to calculate our sums of squares model and error, but this is tedious to do by hand, so we will let the computer software do the heavy lifting in that column of our ANOVA table:

Table $2$: ANOVA Table

| Source | $SS$ | $df$ | $MS$ | $F$ |
|---|---|---|---|---|
| Model | 44.62 | | | |
| Error | 129.37 | | | |
| Total | | | | |

Now that we have these, we can fill in the rest of the ANOVA table. We already found our degrees of freedom in Step 2:

Table $3$: ANOVA Table

| Source | $SS$ | $df$ | $MS$ | $F$ |
|---|---|---|---|---|
| Model | 44.62 | 1 | | |
| Error | 129.37 | 16 | | |
| Total | | | | |

Our total line is always the sum of the other two lines, giving us:

Table $4$: ANOVA Table

| Source | $SS$ | $df$ | $MS$ | $F$ |
|---|---|---|---|---|
| Model | 44.62 | 1 | | |
| Error | 129.37 | 16 | | |
| Total | 173.99 | 17 | | |

Our mean squares column is only calculated for the model and error lines and is always our $SS$ divided by our $df$, which is:

Table $5$: ANOVA Table

| Source | $SS$ | $df$ | $MS$ | $F$ |
|---|---|---|---|---|
| Model | 44.62 | 1 | 44.62 | |
| Error | 129.37 | 16 | 8.09 | |
| Total | 173.99 | 17 | | |

Finally, our $F$ statistic is the ratio of the mean squares:

Table $6$: ANOVA Table

| Source | $SS$ | $df$ | $MS$ | $F$ |
|---|---|---|---|---|
| Model | 44.62 | 1 | 44.62 | 5.52 |
| Error | 129.37 | 16 | 8.09 | |
| Total | 173.99 | 17 | | |

This gives us an obtained $F$ statistic of 5.52, which we will now use to test our hypothesis.

Step 4: Make the Decision

We now have everything we need to make our final decision. Our obtained test statistic was $F = 5.52$ and our critical value was $F^* = 4.49$. Since our obtained test statistic is greater than our critical value, we can reject the null hypothesis.

Reject $H_0$. Based on our sample of 18 people, we can predict levels of happiness based on how healthy someone is, $F(1,16) = 5.52, p < .05$.

Effect Size

We know that, because we rejected the null hypothesis, we should calculate an effect size. In regression, our effect size is variance explained, just like it was in ANOVA. Instead of using $η^2$ to represent this, we instead use $R^2$, as we saw in correlation (yet more evidence that all of these are the same analysis). Variance explained is still the ratio of $SS_M$ to $SS_T$:

$R^{2}=\dfrac{S S_{M}}{S S_{T}}=\dfrac{44.62}{173.99}=0.26 \nonumber$

We are explaining 26% of the variance in happiness based on health, which is a large effect size ($R^2$ uses the same effect size cutoffs as $η^2$).

Accuracy in Prediction

We found a large, statistically significant relation between our variables, which is what we hoped for. However, if we want to use our estimated line of best fit for future prediction, we will also want to know how precise or accurate our predicted values are.
What we want to know is the average distance from our predictions to our actual observed values, or the average size of the residual ($Y − \widehat{Y}$). The average size of the residual is known by a specific name: the standard error of the estimate ($s_{(Y-\widehat{Y})}$), which is given by the formula

$s_{(Y-\widehat{Y})}=\sqrt{\dfrac{\sum(Y-\widehat{Y})^{2}}{N-2}} \nonumber$

This formula is almost identical to our standard deviation formula, and it follows the same logic. We square our residuals, add them up, divide by the degrees of freedom, and take the square root. Although this sounds like a long process, we already have the sum of the squared residuals in our ANOVA table! In fact, the value under the square root sign is just the $SS_E$ divided by the $df_E$, which we know is called the mean squared error, or $MS_E$:

$s_{(Y-\widehat{Y})}=\sqrt{\dfrac{\sum(Y-\widehat{Y})^{2}}{N-2}}=\sqrt{MS_E} \nonumber$

For our example:

$s_{(Y-\widehat{Y})}=\sqrt{\dfrac{129.37}{16}}=\sqrt{8.09}=2.84 \nonumber$

So on average, our predictions are just under 3 points away from our actual values. There are no specific cutoffs or guidelines for how big our standard error of the estimate can or should be; it is highly dependent on both our sample size and the scale of our original $Y$ variable, so expert judgment should be used. In this case, the estimate is not that far off and can be considered reasonably precise.
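As a check on the hand calculations, the whole example can be reproduced in R from the summary quantities in the sum of products table ($SP$, $SS_X$, $SS_Y$, the means, and $N$). Because R keeps full precision instead of rounding at each step, the intercept comes out slightly different from the hand-rounded value, but the $F$ statistic, effect size, and standard error of the estimate match. With raw data and a fitted lm() object, the same residual standard error is reported by summary() and returned by sigma().

```r
# Summary quantities taken from the sum of products table above
SP   <- 58.29    # sum of products
SSX  <- 76.14    # sum of squares for X (health)
SST  <- 173.99   # sum of squares for Y (happiness) = SS Total
Xbar <- 19.78; Ybar <- 17.45; N <- 18

b <- SP / SSX            # slope, about 0.77
a <- Ybar - b * Xbar     # intercept, about 2.3 at full precision

SSM   <- SP^2 / SSX                     # SS Model, about 44.62
SSE   <- SST - SSM                      # SS Error, about 129.37
Fobt  <- (SSM / 1) / (SSE / (N - 2))    # F = MS_M / MS_E, about 5.52
Fcrit <- qf(0.95, df1 = 1, df2 = N - 2) # critical value, about 4.49
R2    <- SSM / SST                      # effect size, about 0.26
see   <- sqrt(SSE / (N - 2))            # standard error of the estimate, about 2.84

c(b = b, a = a, F = Fobt, F.crit = Fcrit, R2 = R2, SEE = see)
```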
Simple linear regression as presented here is only a stepping stone towards an entire field of research and application. Regression is an incredibly flexible and powerful tool, and the extensions and variations on it are far beyond the scope of this chapter (indeed, even entire books struggle to accommodate all possible applications of the simple principles laid out here). The next step in regression is to study multiple regression, which uses multiple \(X\) variables as predictors for a single \(Y\) variable at the same time. The math of multiple regression is very complex but the logic is the same: we are trying to use variables that are statistically significantly related to our outcome to explain the variance we observe in that outcome. Other forms of regression include curvilinear models that can explain curves in the data rather than the straight lines used here, as well as moderation models that change the relation between two variables based on levels of a third. The possibilities are truly endless and offer a lifetime of discovery. 13.07: Exercises 1. How are ANOVA and linear regression similar? How are they different? Answer: ANOVA and simple linear regression both take the total observed variance and partition it into pieces that we can explain and cannot explain and use the ratio of those pieces to test for significant relations. They are different in that ANOVA uses a categorical variable as a predictor whereas linear regression uses a continuous variable. 1. What is a residual? 2. How are correlation and regression similar? How are they different? Answer: Correlation and regression both involve taking two continuous variables and finding a linear relation between them. Correlations find a standardized value describing the direction and magnitude of the relation whereas regression finds the line of best fit and uses it to partition and explain variance. 1. What are the two parameters of the line of best fit, and what do they represent? 2. What is our criteria for finding the line of best fit? Answer: Least Squares Error Solution; the line that minimizes the total amount of residual error in the dataset. 1. Fill out the rest of the ANOVA tables below for simple linear regressions: 1. Source $SS$ $df$ $MS$ $F$ Model 34.21 Error Total 66.12 54 2. Source $SS$ $df$ $MS$ $F$ Model     6.03 Error   16 Total 19.98 2. In chapter 12, we found a statistically significant correlation between overall performance in class and how much time someone studied. Use the summary statistics calculated in that problem (provided here) to compute a line of best fit predicting success from study times: $\overline{X}= 1.61, s_X = 1.12, \overline{Y} = 2.95, s_Y = 0.99, r = 0.65$. Answer: $b = r^*(s_y/s_x) = 0.65*(0.99/1.12) = 0.72; a = \overline{Y} - b\overline{X} = 2.95 – (0.72*1.61) = 1.79; \widehat{Y}= 1.79 + 0.72X$ 1. Using the line of best fit equation created in problem 7, predict the scores for how successful people will be based on how much they study: 1. $X = 1.20$ 2. $X = 3.33$ 3. $X = 0.71$ 4. $X = 4.00$ 2. You have become suspicious that the draft rankings of your fantasy football league have no predictive value for how teams place at the end of the season. You go back to historical league data and find rankings of teams after the draft and at the end of the season (below) to test for a statistically significant predictive relation. 
Assume $SSM = 2.65$ and $SSE = 337.35$ Draft Projection Final Rankings 1 14 2 6 3 8 4 13 5 2 6 15 7 4 8 10 9 11 10 16 11 9 12 7 13 14 14 12 15 1 16 5 Answer: Step 1: $H_0: β = 0$ “There is no predictive relation between draft rankings and final rankings in fantasy football,” $H_A: β ≠ 0$, “There is a predictive relation between draft rankings and final rankings in fantasy football.” Step 2: Our model will have 1 (based on the number of predictors) and 14 (based on how many observations we have) degrees of freedom, giving us a critical value of $F^* = 4.60$. Step 3: Using the sum of products table, we find : $\overline{X}= 8.50, \overline{Y} = 8.50, SSX = 339.86, SP = 29.99$, giving us a line of best fit of: b = 29.99/339.86 = 0.09; a = 8.50 – 0.09*8.50 = 7.74; $\widehat{Y} = 7.74 + 0.09X$. Our given $SS$ values and our df from step 2 allow us to fill in the ANOVA table: Source $SS$ $df$ $MS$ $F$ Model 2.65 1 2.65 0.11 Error 337.35 14 24.10 Total 339.86 15 Step 4: Our obtained value was smaller than our critical value, so we fail to reject the null hypothesis. There is no evidence to suggest that draft rankings have any predictive value for final fantasy football rankings, $F(1,14) = 0.11, p > .05$ 1. You have summary data for two variables: how extroverted some is ($X$) and how often someone volunteers ($Y$). Using these values, calculate the line of best fit predicting volunteering from extroversion then test for a statistically significant relation using the hypothesis testing procedure: $\overline{X}= 12.58, s_X = 4.65, \overline{Y} = 7.44, s_Y = 2.12, r = 0.34, N = 67, SSM = 19.79, SSE = 215.77$.
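For problems like these that give only summary statistics, the slope-from-correlation formula used in the answer to problem 7 can be wrapped in a small R helper. The numbers in the example call below are made up for illustration and are not taken from any exercise.

```r
# Line of best fit from summary statistics: b = r * (sY / sX), a = Ybar - b * Xbar
line_from_summary <- function(r, sX, sY, Xbar, Ybar) {
  b <- r * (sY / sX)
  a <- Ybar - b * Xbar
  c(intercept = a, slope = b)
}

line_from_summary(r = 0.50, sX = 2.00, sY = 4.00, Xbar = 10, Ybar = 20)
# intercept = 10, slope = 1, so Y-hat = 10 + 1.00 * X
```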
• 14.1: Categories and Frequency Tables Our data for the χ2 test are categorical, specifically nominal, variables. Recall that nominal variables have no specified order and can only be described by their names and the frequencies with which they occur in the dataset. Thus, unlike our other variables that we have tested, we cannot describe our data for the χ2 test using means and standard deviations. Instead, we will use frequencies tables. • 14.2: Goodness-of-Fit The first of our two χ² tests assesses one categorical variable against a null hypothesis of equally sized frequencies. Equal frequency distributions are what we would expect to get if categorization was completely random. We could, in theory, also test against a specific distribution of category sizes if we have a good reason to (e.g. we have a solid foundation of how the regular population is distributed), but this is less common, so we will not deal with it in this text. • 14.3: χ² Statistic The calculations for our test statistic in χ² tests combine our information from our observed frequencies ( O ) and our expected frequencies ( E ) for each level of our categorical variable. For each cell (category) we find the difference between the observed and expected values, square them, and divide by the expected values. We then sum this value across cells for our test statistic. • 14.4: Pineapple on Pizza There is a very passionate and on-going debate on whether or not pineapple should go on pizza. Being the objective, rational data analysts that we are, we will collect empirical data to see if we can settle this debate once and for all. We gather data from a group of adults asking for a simple Yes/No answer. • 14.5: Contingency Tables for Two Variables The goodness-of-fit test is a useful tool for assessing a single categorical variable. However, what is more common is wanting to know if two categorical variables are related to one another. This type of analysis is similar to a correlation, the only difference being that we are working with nominal data, which violates the assumptions of traditional correlation coefficients. This is where the χ² test for independence comes in handy. • 14.6: Test for Independence The χ² test performed on contingency tables is known as the test for independence. In this analysis, we are looking to see if the values of each categorical variable (that is, the frequency of their levels) is related to or independent of the values of the other categorical variable. Because we are still doing a χ² test, which is nonparametric, we still do not have mathematical versions of our hypotheses. The actual interpretations of the hypotheses are quite simple. • 14.7: College Sports • 14.E: Chi-square (Exercises) 14: Chi-square Our data for the $\chi^{2}$ test are categorical, specifically nominal, variables. Recall from unit 1 that nominal variables have no specified order and can only be described by their names and the frequencies with which they occur in the dataset. Thus, unlike our other variables that we have tested, we cannot describe our data for the $\chi^{2}$ test using means and standard deviations. Instead, we will use frequencies tables. Table $1$: Pet Preferences Cat Dog Other Total Observed 14 17 5 36 Expected 12 12 12 36 Table $1$ gives an example of a frequency table used for a $\chi^{2}$ test. The columns represent the different categories within our single variable, which in this example is pet preference. 
The $\chi^{2}$ test can assess as few as two categories, and there is no technical upper limit on how many categories can be included in our variable, although, as with ANOVA, having too many categories makes our computations long and our interpretation difficult. The final column in the table is the total number of observations, or $N$. The $\chi^{2}$ test assumes that each observation comes from only one person and that each person will provide only one observation, so our total observations will always equal our sample size.

There are two rows in this table. The first row gives the observed frequencies of each category from our dataset; in this example, 14 people reported preferring cats as pets, 17 people reported preferring dogs, and 5 people reported preferring a different animal. The second row gives expected values; expected values are what would be found if each category had equal representation. The calculation for an expected value is:

$E=\dfrac{N}{C}$

Where $N$ is the total number of people in our sample and $C$ is the number of categories in our variable (also the number of columns in our table). The expected values correspond to the null hypothesis for $\chi^{2}$ tests: equal representation of categories. Our first of two $\chi^{2}$ tests, the Goodness-of-Fit test, will assess how well our data line up with, or deviate from, this assumption.
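As a quick illustration in R, the expected frequencies for the pet preference table come straight from this formula.

```r
observed <- c(Cat = 14, Dog = 17, Other = 5)
N <- sum(observed)          # 36 total observations
C <- length(observed)       # 3 categories
expected <- rep(N / C, C)   # 12 expected in every category
expected
```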
The first of our two $\chi^{2}$ tests assesses one categorical variable against a null hypothesis of equally sized frequencies. Equal frequency distributions are what we would expect to get if categorization was completely random. We could, in theory, also test against a specific distribution of category sizes if we have a good reason to (e.g. we have a solid foundation of how the regular population is distributed), but this is less common, so we will not deal with it in this text. Hypotheses All $\chi^{2}$ tests, including the goodness-of-fit test, are non-parametric. This means that there is no population parameter we are estimating or testing against; we are working only with our sample data. Because of this, there are no mathematical statements for $\chi^{2}$ hypotheses. This should make sense because the mathematical hypothesis statements were always about population parameters (e.g. $μ$), so if we are non-parametric, we have no parameters and therefore no mathematical statements. We do, however, still state our hypotheses verbally. For goodness-of-fit $\chi^{2}$ tests, our null hypothesis is that there is an equal number of observations in each category. That is, there is no difference between the categories in how prevalent they are. Our alternative hypothesis says that the categories do differ in their frequency. We do not have specific directions or one-tailed tests for $\chi^{2}$, matching our lack of mathematical statement. Degrees of Freedom and the $\chi^{2}$ table Our degrees of freedom for the $\chi^{2}$ test are based on the number of categories we have in our variable, not on the number of people or observations like it was for our other tests. Luckily, they are still as simple to calculate: $d f=C-1$ So for our pet preference example, we have 3 categories, so we have 2 degrees of freedom. Our degrees of freedom, along with our significance level (still defaulted to $α = 0.05$) are used to find our critical values in the $\chi^{2}$ table, which is shown in figure 1. Because we do not have directional hypotheses for $\chi^{2}$ tests, we do not need to differentiate between critical values for 1- or 2-tailed tests. In fact, just like our $F$ tests for regression and ANOVA, all $\chi^{2}$ tests are 1-tailed tests. 14.03: Statistic The calculations for our test statistic in $\chi^{2}$ tests combine our information from our observed frequencies ($O$) and our expected frequencies ($E$) for each level of our categorical variable. For each cell (category) we find the difference between the observed and expected values, square them, and divide by the expected values. We then sum this value across cells for our test statistic. This is shown in the formula: $\chi^{2}=\sum \dfrac{(\mathrm{O}-\mathrm{E})^{2}}{\mathrm{E}}$ For our pet preference data, we would have: $\chi^{2}=\dfrac{(14-12)^{2}}{12}+\dfrac{(17-12)^{2}}{12}+\dfrac{(5-12)^{2}}{12}=0.33+2.08+4.08=6.49 \nonumber$ Notice that, for each cell’s calculation, the expected value in the numerator and the expected value in the denominator are the same value. Let’s now take a look at an example from start to finish. 14.04: Pineapple on Pizza There is a very passionate and on-going debate on whether or not pineapple should go on pizza. Being the objective, rational data analysts that we are, we will collect empirical data to see if we can settle this debate once and for all. We gather data from a group of adults asking for a simple Yes/No answer. Step 1: State the Hypotheses We start, as always, with our hypotheses. 
Our null hypothesis of no difference will state that an equal number of people will say they do or do not like pineapple on pizza, and our alternative will be that one side wins out over the other: $\mathrm{H}_{0}: \text { An equal number of people do and do not like pineapple on pizza } \nonumber$ $\mathrm{H}_{A}: A \text { significant majority of people will agree one way or the other} \nonumber$ Step 2: Find the Critical Value To avoid any potential bias in this crucial analysis, we will leave $α$ at its typical level. We have two options in our data (Yes or No), which will give us two categories. Based on this, we will have 1 degree of freedom. From our $\chi^{2}$ table, we find a critical value of 3.84. Step 3: Calculate the Test Statistic The results of the data collection are presented in Table $1$. We had data from 45 people in all and 2 categories, so our expected values are $E = 45/2 = 22.50$. Table $1$: Results of Data collection Yes No Total Observed 26 19 45 Expected 22.50 22.50 45 We can use these to calculate our $\chi^{2}$ statistic: $\chi^{2}=\dfrac{(26-22.50)^{2}}{22.50}+\dfrac{(19-22.50)^{2}}{22.50}=0.54+0.54=1.08 \nonumber$ Step 4: Make the Decision Our observed test statistic had a value of 1.08 and our critical value was 3.84. Our test statistic was smaller than our critical value, so we fail to reject the null hypothesis, and the debate rages on.
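Both goodness-of-fit examples can be checked in R with chisq.test(), which defaults to the same null hypothesis of equal expected frequencies; the results differ from the hand calculations only by rounding.

```r
chisq.test(c(14, 17, 5))   # pet preference: X-squared = 6.5, df = 2
chisq.test(c(26, 19))      # pineapple on pizza: X-squared approx. 1.09, df = 1
```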
The goodness-of-fit test is a useful tool for assessing a single categorical variable. However, what is more common is wanting to know if two categorical variables are related to one another. This type of analysis is similar to a correlation, the only difference being that we are working with nominal data, which violates the assumptions of traditional correlation coefficients. This is where the $\chi^2$ test for independence comes in handy.

As noted above, our only description for nominal data is frequency, so we will again present our observations in a frequency table. When we have two categorical variables, our frequency table is crossed. That is, each combination of levels from each categorical variable is presented. This type of frequency table is called a contingency table because it shows the frequency of each category in one variable, contingent upon the specific level of the other variable. An example contingency table is shown in Table $1$, which displays whether or not 168 college students watched college sports growing up (Yes/No) and whether the students’ final choice of which college to attend was influenced by the college’s sports teams (Yes – Primary, Yes – Somewhat, No):

Table $1$: Contingency table of college sports and decision making

| | Affected Decision: Primary | Affected Decision: Somewhat | Affected Decision: No | Total |
|---|---|---|---|---|
| Watched: Yes | 47 | 26 | 14 | *87* |
| Watched: No | 21 | 23 | 37 | *81* |
| Total | *68* | *49* | *51* | *168* |

In contrast to the frequency table for our goodness-of-fit test, our contingency table does not contain expected values, only observed data. Within our table, wherever our rows and columns cross, we have a cell. A cell contains the frequency of observing its corresponding specific levels of each variable at the same time. The top left cell in Table $1$ shows us that 47 people in our study watched college sports as a child AND had college sports as their primary deciding factor in which college to attend.

Cells are numbered based on which row they are in (rows are numbered top to bottom) and which column they are in (columns are numbered left to right). We always name the cell using (R,C), with the row first and the column second. A quick and easy way to remember the order is that R/C Cola exists but C/R Cola does not. Based on this convention, the top left cell containing our 47 participants who watched college sports as a child and had sports as a primary criterion is cell (1,1). Next to it, which has 26 people who watched college sports as a child but had sports only somewhat affect their decision, is cell (1,2), and so on. We only number the cells where our categories cross. We do not number our total cells, which have their own special name: marginal values.

Marginal values are the total values for a single category of one variable, added up across levels of the other variable. In Table $1$, these marginal values have been italicized for ease of explanation, though this is not normally the case. We can see that, in total, 87 of our participants (47+26+14) watched college sports growing up and 81 (21+23+37) did not. The total of these two marginal values is 168, the total number of people in our study. Likewise, 68 people used sports as a primary criterion for deciding which college to attend, 49 considered it somewhat, and 51 did not use it as a criterion at all. The total of these marginal values is also 168, our total number of people. The marginal values for rows and columns will always both add up to the total number of participants, $N$, in the study.
If they do not, then a calculation error was made and you must go back and check your work. Expected Values of Contingency Tables Our expected values for contingency tables are based on the same logic as they were for frequency tables, but now we must incorporate information about how frequently each row and column was observed (the marginal values) and how many people were in the sample overall ($N$) to find what random chance would have made the frequencies out to be. Specifically: $E_{i j}=\dfrac{R_{i} C_{j}}{N}$ The subscripts $i$ and $j$ indicate which row and column, respectively, correspond to the cell we are calculating the expected frequency for, and the $R_i$ and $C_j$ are the row and column marginal values, respectively. $N$ is still the total sample size. Using the data from Table $1$, we can calculate the expected frequency for cell (1,1), the college sport watchers who used sports at their primary criteria, to be: $E_{1,1}=\frac{87 * 68}{168}=35.21 \nonumber$ We can follow the same math to find all the expected values for this table: Table $2$: Contingency table of college sports and decision making College Sports Affected Decision Primary Somewhat No Total Watched Yes 35.21 25.38 26.41 87 No 32.79 23.62 24.59 81 Total 68 49 51 Notice that the marginal values still add up to the same totals as before. This is because the expected frequencies are just row and column averages simultaneously. Our total $N$ will also add up to the same value. The observed and expected frequencies can be used to calculate the same $\chi^{2}$ statistic as we did for the goodness-of-fit test. Before we get there, though, we should look at the hypotheses and degrees of freedom used for contingency tables.
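In R, the expected frequencies for this contingency table can be generated directly from the marginal totals; chisq.test() also returns them as part of its output. The row and column labels below are just descriptive names for the example.

```r
# Observed frequencies from the college sports example
obs <- matrix(c(47, 26, 14,
                21, 23, 37),
              nrow = 2, byrow = TRUE,
              dimnames = list(Watched  = c("Yes", "No"),
                              Decision = c("Primary", "Somewhat", "No")))

N <- sum(obs)                                      # 168
expected <- outer(rowSums(obs), colSums(obs)) / N  # E_ij = (R_i * C_j) / N
round(expected, 2)                                 # matches Table 2 above

chisq.test(obs)$expected   # the same expected counts, computed by chisq.test()
```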
The $\chi^{2}$ test performed on contingency tables is known as the test for independence. In this analysis, we are looking to see if the values of each categorical variable (that is, the frequencies of their levels) are related to or independent of the values of the other categorical variable. Because we are still doing a $\chi^{2}$ test, which is nonparametric, we still do not have mathematical versions of our hypotheses. The actual interpretations of the hypotheses are quite simple: the null hypothesis says that the variables are independent or not related, and the alternative says that they are not independent or that they are related. Using this setup and the data provided in Table 14.5.2, let’s formally test whether or not watching college sports as a child is related to using sports as a criterion for selecting a college to attend.

14.07: College Sports

We will follow the same 4-step procedure as we have since chapter 7.

Step 1: State the Hypotheses

Our null hypothesis of no difference will state that there is no relation between our variables, and our alternative will state that our variables are related:

$\mathrm{H}_{0}: \text { College choice criteria is independent of college sports viewership as a child } \nonumber$

$\mathrm{H}_{\mathrm{A}}: \text { College choice criteria is related to college sports viewership as a child } \nonumber$

Step 2: Find the Critical Value

Our critical value will come from the same table that we used for the goodness-of-fit test, but our degrees of freedom will change. Because we now have rows and columns (instead of just columns) our new degrees of freedom use information on both:

$d f=(R-1)(C-1)$

In our example:

$d f=(2-1)(3-1)=1 * 2=2 \nonumber$

Based on our 2 degrees of freedom, our critical value from our table is 5.991.

Step 3: Calculate the Test Statistic

The same formula for $\chi^{2}$ is used once again:

$\chi^{2}=\sum \dfrac{(\mathrm{O}-\mathrm{E})^{2}}{\mathrm{E}}$

$\begin{aligned} \chi^{2} &=\dfrac{(47-35.21)^{2}}{35.21}+\dfrac{(26-25.38)^{2}}{25.38}+\dfrac{(14-26.41)^{2}}{26.41} \\ &+\dfrac{(21-32.79)^{2}}{32.79}+\dfrac{(23-23.62)^{2}}{23.62}+\dfrac{(37-24.59)^{2}}{24.59}=20.31 \end{aligned} \nonumber$

Step 4: Make the Decision

The final decision for our test of independence is still based on our observed value (20.31) and our critical value (5.991). Because our observed value is greater than our critical value, we can reject the null hypothesis.

Reject $H_0$. Based on our data from 168 people, we can say that there is a statistically significant relation between whether or not someone watches college sports growing up and how much a college’s sports teams factor into that person’s decision on which college to attend, $\chi^{2}(2)=20.31, p<0.05$.

Effect Size for $\chi^{2}$

Like all other significance tests, $\chi^{2}$ tests – both goodness-of-fit and tests for independence – have effect sizes that can and should be calculated for statistically significant results. There are many options for which effect size to use, and the ultimate decision is based on the type of data, the structure of your frequency or contingency table, and the types of conclusions you would like to draw. For the purpose of our introductory course, we will focus only on a single effect size that is simple and flexible: Cramer’s $V$.

Cramer’s $V$ is a type of correlation coefficient that can be computed on categorical data. Like any other correlation coefficient (e.g. Pearson’s $r$), the cutoffs for small, medium, and large effect sizes of Cramer’s $V$ are 0.10, 0.30, and 0.50, respectively.
The calculation of Cramer’s $V$ is very simple:

$V=\sqrt{\dfrac{\chi^{2}}{N(k-1)}}$

For this calculation, $k$ is the smaller value of either $R$ (the number of rows) or $C$ (the number of columns). The numerator is simply the test statistic we calculated during step 3 of the hypothesis testing procedure. For our example, we had 2 rows and 3 columns, so $k = 2$:

$V=\sqrt{\dfrac{\chi^{2}}{N(k-1)}}=\sqrt{\dfrac{20.31}{168(2-1)}}=\sqrt{0.12}=0.35 \nonumber$

So the statistically significant relation between our variables was moderately strong.
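The full test for independence and the effect size can be reproduced in R: chisq.test() gives the test statistic, and Cramer’s $V$ takes one extra line. This is a sketch using the observed counts from the college sports example.

```r
# Observed frequencies (rows: watched Yes/No; columns: Primary, Somewhat, No)
obs <- matrix(c(47, 26, 14,
                21, 23, 37), nrow = 2, byrow = TRUE)

test <- chisq.test(obs)   # test for independence: X-squared approx. 20.3, df = 2
test

N <- sum(obs)
k <- min(dim(obs))        # the smaller of the number of rows and columns
V <- sqrt(as.numeric(test$statistic) / (N * (k - 1)))
V                         # approx. 0.35, a moderate effect
```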
1. What does a frequency table display? What does a contingency table display? Answer: Frequency tables display observed category frequencies and (sometimes) expected category frequencies for a single categorical variable. Contingency tables display the frequency of observing people in crossed category levels for two categorical variables, and (sometimes) the marginal totals of each variable level. 1. What does a goodness-of-fit test assess? 2. How do expected frequencies relate to the null hypothesis? Answer: Expected values are what we would observe if the proportion of categories was completely random (i.e. no consistent difference other than chance), which is the same was what the null hypothesis predicts to be true. 1. What does a test-for-independence assess? 2. Compute the expected frequencies for the following contingency table: Category A Category B Category C 22 38 Category D 16 14 Answer: Observed Category A Category B Total Category C 22 38 60 Category D 16 14 30 Total 38 52 90 Expected Category A Category B Total Category C $((60 * 38) / 90)=25.33$ $((60^{*} 52) / 90)=34.67$ 60 Category D $((30 * 38) / 90)=12.67$ $((30 * 52) / 90)=17.33$ 30 Total 38 52 90 1. Test significance and find effect sizes (if significant) for the following tests: 1. $N = 19, R = 3, C = 2, χ2 (2) = 7.89, α = .05$ 2. $N = 12, R = 2, C = 2, χ2 (1) = 3.12, α = .05$ 3. $N = 74, R = 3, C = 3, χ2 (4) = 28.41, α = .01$ 2. You hear a lot of people claim that The Empire Strikes Back is the best movie in the original Star Wars trilogy, and you decide to collect some data to demonstrate this empirically (pun intended). You ask 48 people which of the original movies they liked best; 8 said A New Hope was their favorite, 23 said The Empire Strikes Back was their favorite, and 17 said Return of the Jedi was their favorite. Perform a chi-square test on these data at the .05 level of significance. Answer: Step 1: $H_0$: “There is no difference in preference for one movie”, $H_A$: “There is a difference in how many people prefer one movie over the others.” Step 2: 3 categories (columns) gives $df = 2$, $\chi_{\text {crit }}^{2}=5.991$. Step 3: Based on the given frequencies: New Hope Empire Jedi Total Observed 8 23 17 48 Expected 16 16 16 $\chi^{2}=7.13$. Step 4: Our obtained statistic is greater than our critical value, reject $H_0$. Based on our sample of 48 people, there is a statistically significant difference in the proportion of people who prefer one Star Wars movie over the others, $\chi^{2}(2)=7.13$, $p < .05$. Since this is a statistically significant result, we should calculate an effect size: Cramer’s $V=\sqrt{\dfrac{7.13}{48(3-1)}}=0.27$, which is a moderate effect size. 1. A pizza company wants to know if people order the same number of different toppings. They look at how many pepperoni, sausage, and cheese pizzas were ordered in the last week; fill out the rest of the frequency table and test for a difference. Pepperoni Sausage Cheese Total Observed 320 275 251 Expected 1. A university administrator wants to know if there is a difference in proportions of students who go on to grad school across different majors. Use the data below to test whether there is a relation between college major and going to grad school. Major Psychology Business Math Graduate School Yes 32 8 36 No 15 41 12 Answer: Step 1: $H_0$: “There is no relation between college major and going to grad school”, $H_A$: “Going to grad school is related to college major.” Step 2: $df = 2$, $\chi_{\text {crit }}^{2}=5.991$. 
Step 3: Based on the given frequencies: Expected Values Major Psychology Business Math Graduate School Yes 24.81 25.86 25.33 No 22.19 23.14 22.67 $\chi^{2}=2.09+12.34+4.49+2.33+13.79+5.02=40.05$. Step 4: Obtained statistic is greater than the critical value, reject $H_0$. Based on our data, there is a statistically significant relation between college major and going to grad school, $\chi^{2}(2)=40.05, \mathrm{p}<.05$, Cramer’s $V = 0.53$, which is a large effect 1. A company you work for wants to make sure that they are not discriminating against anyone in their promotion process. You have been asked to look across gender to see if there are differences in promotion rate (i.e. if gender and promotion rate are independent or not). The following data should be assessed at the normal level of significance: Promoted in last two years? Yes No Gender Women 8 5 Men 9 7
To call in statisticians after the experiment is done may be no more than asking them to perform a post-mortem examination: They may be able to say what the experiment died of. — Sir Ronald Fisher 01: Why Statistics Note Adapted nearly verbatim from Chapters 1 and 2 in Navarro, D. “Learning Statistics with R.” compcogscisydney.org/learning-statistics-with-r/ To the surprise of many students, statistics is a fairly significant part of a psychological education. To the surprise of no-one, statistics is very rarely the favorite part of one’s psychological education. After all, if you really loved the idea of doing statistics, you’d probably be enrolled in a statistics class right now, not a psychology class. So, not surprisingly, there’s a pretty large proportion of the student base that isn’t happy about the fact that psychology has so much statistics in it. In view of this, I thought that the right place to start might be to answer some of the more common questions that people have about stats… A big part of this issue at hand relates to the very idea of statistics. What is it? What’s it there for? And why are scientists so bloody obsessed with it? These are all good questions, when you think about it. So let’s start with the last one. As a group, scientists seem to be bizarrely fixated on running statistical tests on everything. In fact, we use statistics so often that we sometimes forget to explain to people why we do. It’s a kind of article of faith among scientists – and especially social scientists – that your findings can’t be trusted until you’ve done some stats. Undergraduate students might be forgiven for thinking that we’re all completely mad, because no-one takes the time to answer one very simple question: Why do you do statistics? Why don’t scientists just use common sense? It’s a naive question in some ways, but most good questions are. There’s a lot of good answers to it, but for my money, the best answer is a really simple one: we don’t trust ourselves enough. We worry that we’re human, and susceptible to all of the biases, temptations and frailties that humans suffer from. Much of statistics is basically a safeguard. Using “common sense” to evaluate evidence means trusting gut instincts, relying on verbal arguments and on using the raw power of human reason to come up with the right answer. Most scientists don’t think this approach is likely to work. In fact, come to think of it, this sounds a lot like a psychological question to me, and since I do work in a psychology department, it seems like a good idea to dig a little deeper here. Is it really plausible to think that this “common sense” approach is very trustworthy? Verbal arguments have to be constructed in language, and all languages have biases – some things are harder to say than others, and not necessarily because they’re false (e.g., quantum electrodynamics is a good theory, but hard to explain in words). The instincts of our “gut” aren’t designed to solve scientific problems, they’re designed to handle day to day inferences – and given that biological evolution is slower than cultural change, we should say that they’re designed to solve the day to day problems for a different world than the one we live in. Most fundamentally, reasoning sensibly requires people to engage in “induction”, making wise guesses and going beyond the immediate evidence of the senses to make generalisations about the world. 
If you think that you can do that without being influenced by various distractors, well, I have a bridge in Brooklyn I’d like to sell you. Heck, as the next section shows, we can’t even solve “deductive” problems (ones where no guessing is required) without being influenced by our pre-existing biases. The curse of belief bias People are mostly pretty smart. We’re certainly smarter than the other species that we share the planet with (though many people might disagree). Our minds are quite amazing things, and we seem to be capable of the most incredible feats of thought and reason. That doesn’t make us perfect though. And among the many things that psychologists have shown over the years is that we really do find it hard to be neutral, to evaluate evidence impartially and without being swayed by pre-existing biases. A good example of this is the belief bias effect in logical reasoning: if you ask people to decide whether a particular argument is logically valid (i.e., conclusion would be true if the premises were true), we tend to be influenced by the believability of the conclusion, even when we shouldn’t. For instance, here’s a valid argument where the conclusion is believable: • No cigarettes are inexpensive (Premise 1) • Some addictive things are inexpensive (Premise 2) • Therefore, some addictive things are not cigarettes (Conclusion) And here’s a valid argument where the conclusion is not believable: • No addictive things are inexpensive (Premise 1) • Some cigarettes are inexpensive (Premise 2) • Therefore, some cigarettes are not addictive (Conclusion) The logical structure of argument #2 is identical to the structure of argument #1, and they’re both valid. However, in the second argument, there are good reasons to think that premise 1 is incorrect, and as a result it’s probably the case that the conclusion is also incorrect. But that’s entirely irrelevant to the topic at hand: an argument is deductively valid if the conclusion is a logical consequence of the premises. That is, a valid argument doesn’t have to involve true statements. On the other hand, here’s an invalid argument that has a believable conclusion: • No addictive things are inexpensive (Premise 1) • Some cigarettes are inexpensive (Premise 2) • Therefore, some addictive things are not cigarettes (Conclusion) And finally, an invalid argument with an unbelievable conclusion: • No cigarettes are inexpensive (Premise 1) • Some addictive things are inexpensive (Premise 2) • Therefore, some cigarettes are not addictive (Conclusion) Now, suppose that people really are perfectly able to set aside their pre-existing biases about what is true and what isn’t, and purely evaluate an argument on its logical merits. We’d expect 100% of people to say that the valid arguments are valid, and 0% of people to say that the invalid arguments are valid. So if you ran an experiment looking at this, you’d expect to see data like this: conlusion feels true conclusion feels false argument is valid 100% say “valid” 100% say “valid” argument is invalid 0% say “valid” 0% say “valid” If the psychological data looked like this (or even a good approximation to this), we might feel safe in just trusting our gut instincts. That is, it’d be perfectly okay just to let scientists evaluate data based on their common sense, and not bother with all this murky statistics stuff. However, you guys have taken psych classes, and by now you probably know where this is going. 
In a classic study, Evans, Barston, and Pollard (1983) ran an experiment looking at exactly this. What they found is that when pre-existing biases (i.e., beliefs) were in agreement with the structure of the data, everything went the way you’d hope: conlusion feels true conclusion feels false argument is valid 92% say “valid” argument is invalid 8% say “valid” Not perfect, but that’s pretty good. But look what happens when our intuitive feelings about the truth of the conclusion run against the logical structure of the argument: conlusion feels true conclusion feels false argument is valid 92% say “valid” 46% say “valid” argument is invalid 92% say “valid” 8% say “valid” Oh dear, that’s not as good. Apparently, when people are presented with a strong argument that contradicts our pre-existing beliefs, we find it pretty hard to even perceive it to be a strong argument (people only did so 46% of the time). Even worse, when people are presented with a weak argument that agrees with our pre-existing biases, almost no-one can see that the argument is weak (people got that one wrong 92% of the time!) If you think about it, it’s not as if these data are horribly damning. Overall, people did do better than chance at compensating for their prior biases, since about 60% of people’s judgements were correct (you’d expect 50% by chance). Even so, if you were a professional “evaluator of evidence”, and someone came along and offered you a magic tool that improves your chances of making the right decision from 60% to (say) 95%, you’d probably jump at it, right? Of course you would. Thankfully, we actually do have a tool that can do this. But it’s not magic, it’s statistics. So that’s reason #1 why scientists love statistics. It’s just too easy for us to “believe what we want to believe”; so if we want to “believe in the data” instead, we’re going to need a bit of help to keep our personal biases under control. That’s what statistics does: it helps keep us honest.
The following is a true story (I think…). In 1973, the University of California, Berkeley had some worries about the admissions of students into their postgraduate courses. Specifically, the thing that caused the problem was that the gender breakdown of their admissions, which looked like this: Number of applicants Percent admitted Males 8442 44% Females 4321 35% and they were worried about being sued. Given that there were nearly 13,000 applicants, a difference of 9% in admission rates between males and females is just way too big to be a coincidence. Pretty compelling data, right? And if I were to say to you that these data actually reflect a weak bias in favour of women (sort of!), you’d probably think that I was either crazy or sexist. Note Earlier versions of these notes incorrectly suggested that they actually were sued – apparently that’s not true. There’s a nice commentary on this here: https://www.refsmmat.com/posts/2016-05-08-simpsons-paradox-berkeley.html. A big thank you to Wilfried Van Hirtum for pointing this out to me! When people started looking more carefully at the admissions data (Bickel, Hammel, and O’Connell 1975) they told a rather different story. Specifically, when they looked at it on a department by department basis, it turned out that most of the departments actually had a slightly higher success rate for female applicants than for male applicants. The table below shows the admission figures for the six largest departments (with the names of the departments removed for privacy reasons): Department Applicants Percent admitted Applicants Percent admitted A 825 62% 108 82% B 560 63% 25 68% C 325 37% 593 34% D 417 33% 375 35% E 191 28% 393 24% F 272 6% 341 7% Remarkably, most departments had a higher rate of admissions for females than for males! Yet the overall rate of admission across the university for females was lower than for males. How can this be? How can both of these statements be true at the same time? Here’s what’s going on. Firstly, notice that the departments are not equal to one another in terms of their admission percentages: some departments (e.g., engineering, chemistry) tended to admit a high percentage of the qualified applicants, whereas others (e.g., English) tended to reject most of the candidates, even if they were high quality. So, among the six departments shown above, notice that department A is the most generous, followed by B, C, D, E and F in that order. Next, notice that males and females tended to apply to different departments. If we rank the departments in terms of the total number of male applicants, we get \( \textbf{A} \gt \textbf{B} \gt D \gt C \gt F \gt E \) (the “easy” departments are in bold). On the whole, males tended to apply to the departments that had high admission rates. Now compare this to how the female applicants distributed themselves. Ranking the departments in terms of the total number of female applicants produces a quite different ordering \( C \gt E \gt D \gt F \gt \textbf{A} \gt \textbf{B} \). In other words, what these data seem to be suggesting is that the female applicants tended to apply to “harder” departments. And in fact, if we look at all Figure \(1\) we see that this trend is systematic, and quite striking. This effect is known as Simpson’s paradox. It’s not common, but it does happen in real life, and most people are very surprised by it when they first encounter it, and many people refuse to even believe that it’s real. It is very real. 
And while there are lots of very subtle statistical lessons buried in there, I want to use it to make a much more important point …doing research is hard, and there are lots of subtle, counterintuitive traps lying in wait for the unwary. That’s reason #2 why scientists love statistics, and why we teach research methods. Because science is hard, and the truth is sometimes cunningly hidden in the nooks and crannies of complicated data. Before leaving this topic entirely, I want to point out something else really critical that is often overlooked in a research methods class. Statistics only solves part of the problem. Remember that we started all this with the concern that Berkeley’s admissions processes might be unfairly biased against female applicants. When we looked at the “aggregated” data, it did seem like the university was discriminating against women, but when we “disaggregate” and looked at the individual behaviour of all the departments, it turned out that the actual departments were, if anything, slightly biased in favour of women. The gender bias in total admissions was caused by the fact that women tended to self-select for harder departments. From a legal perspective, that would probably put the university in the clear. Postgraduate admissions are determined at the level of the individual department (and there are good reasons to do that), and at the level of individual departments, the decisions are more or less unbiased (the weak bias in favour of females at that level is small, and not consistent across departments). Since the university can’t dictate which departments people choose to apply to, and the decision making takes place at the level of the department it can hardly be held accountable for any biases that those choices produce. That was the basis for my somewhat glib remarks earlier, but that’s not exactly the whole story, is it? After all, if we’re interested in this from a more sociological and psychological perspective, we might want to ask why there are such strong gender differences in applications. Why do males tend to apply to engineering more often than females, and why is this reversed for the English department? And why is it it the case that the departments that tend to have a female-application bias tend to have lower overall admission rates than those departments that have a male-application bias? Might this not still reflect a gender bias, even though every single department is itself unbiased? It might. Suppose, hypothetically, that males preferred to apply to “hard sciences” and females prefer “humanities”. And suppose further that the reason for why the humanities departments have low admission rates is because the government doesn’t want to fund the humanities (spots in Ph.D. programs, for instance, are often tied to government funded research projects). Does that constitute a gender bias? Or just an unenlightened view of the value of the humanities? What if someone at a high level in the government cut the humanities funds because they felt that the humanities are “useless chick stuff”. That seems pretty blatantly gender biased. None of this falls within the purview of statistics, but it matters to the research project. If you’re interested in the overall structural effects of subtle gender biases, then you probably want to look at both the aggregated and disaggregated data. If you’re interested in the decision making process at Berkeley itself then you’re probably only interested in the disaggregated data. 
In short, there are a lot of critical questions that you can’t answer with statistics, but the answers to those questions will have a huge impact on how you analyse and interpret data. And this is the reason why you should always think of statistics as a tool to help you learn about your data, no more and no less. It’s a powerful tool to that end, but there’s no substitute for careful thought.
I hope that the discussion above helped explain why science in general is so focused on statistics. But I’m guessing that you have a lot more questions about what role statistics plays in psychology, and specifically why psychology classes always devote so many lectures to stats. So here’s my attempt to answer a few of them…

• Why does psychology have so much statistics? To be perfectly honest, there are a few different reasons, some of which are better than others. The most important reason is that psychology is a statistical science. What I mean by that is that the “things” that we study are people. Real, complicated, gloriously messy, infuriatingly perverse people. The “things” of physics include objects like electrons, and while there are all sorts of complexities that arise in physics, electrons don’t have minds of their own. They don’t have opinions, they don’t differ from each other in weird and arbitrary ways, they don’t get bored in the middle of an experiment, and they don’t get angry at the experimenter and then deliberately try to sabotage the data set. At a fundamental level psychology is harder than physics. Basically, we teach statistics to you as psychologists because you need to be better at stats than physicists. There’s actually a saying used sometimes in physics, to the effect that “if your experiment needs statistics, you should have done a better experiment”. They have the luxury of being able to say that because their objects of study are pathetically simple in comparison to the vast mess that confronts social scientists. It’s not just psychology, really: most social sciences are desperately reliant on statistics. Not because we’re bad experimenters, but because we’ve picked a harder problem to solve. We teach you stats because you really, really need it.

• Can’t someone else do the statistics? To some extent, but not completely. It’s true that you don’t need to become a fully trained statistician just to do psychology, but you do need to reach a certain level of statistical competence. In my view, there are three reasons that every psychological researcher ought to be able to do basic statistics:
1. There’s the fundamental reason: statistics is deeply intertwined with research design. If you want to be good at designing psychological studies, you need to at least understand the basics of stats.
2. If you want to be good at the psychological side of the research, then you need to be able to understand the psychological literature, right? But almost every paper in the psychological literature reports the results of statistical analyses. So if you really want to understand the psychology, you need to be able to understand what other people did with their data. And that means understanding a certain amount of statistics.
3. There’s a big practical problem with being dependent on other people to do all your statistics: statistical analysis is expensive. In almost any real life situation where you want to do psychological research, the cruel facts will be that you don’t have enough money to afford a statistician. So the economics of the situation mean that you have to be pretty self-sufficient.
Note that a lot of these reasons generalize beyond researchers. If you want to be a practicing psychologist and stay on top of the field, it helps to be able to read the scientific literature, which relies pretty heavily on statistics.

• I don’t care about jobs, research, or clinical work. Do I need statistics? Okay, now you’re just messing with me.
Still, I think it should matter to you too. Statistics should matter to you in the same way that statistics should matter to everyone: we live in the 21st century, and data are everywhere. Frankly, given the world in which we live these days, a basic knowledge of statistics is pretty damn close to a survival tool! Which is the topic of the next section…
“We are drowning in information, but we are starved for knowledge” – Various authors, original probably John Naisbitt

When I started writing up my lecture notes I took the 20 most recent news articles posted to the ABC news website. Of those 20 articles, it turned out that 8 of them involved a discussion of something that I would call a statistical topic; 6 of those made a mistake. The most common error, if you’re curious, was failing to report baseline data (e.g., the article mentions that 5% of people in situation X have some characteristic Y, but doesn’t say how common the characteristic is for everyone else!). The point I’m trying to make here isn’t that journalists are bad at statistics (though they almost always are), it’s that a basic knowledge of statistics is very helpful for trying to figure out when someone else is either making a mistake or even lying to you. Perhaps one of the biggest things that a knowledge of statistics does to you is cause you to get angry at the newspaper or the internet on a far more frequent basis :).

1.05: There’s More to Research Methods than Statistics

So far, most of what I’ve talked about is statistics, and so you’d be forgiven for thinking that statistics is all I care about in life. To be fair, you wouldn’t be far wrong, but research methodology is a broader concept than statistics. So most research methods courses will cover a lot of topics that relate much more to the pragmatics of research design, and in particular the issues that you encounter when trying to do research with humans. However, about 99% of student fears relate to the statistics part of the course, so I’ve focused on the stats in this discussion, and hopefully I’ve convinced you that statistics matters, and more importantly, that it’s not to be feared.

That being said, it’s pretty typical for introductory research methods classes to be very stats-heavy. This is not (usually) because the lecturers are evil people. Quite the contrary, in fact. Introductory classes focus a lot on the statistics because you almost always find yourself needing statistics before you need the other research methods training. Why? Because almost all of your assignments in other classes will rely on statistical training, to a much greater extent than they rely on other methodological tools. It’s not common for undergraduate assignments to require you to design your own study from the ground up (in which case you would need to know a lot about research design), but it is common for assignments to ask you to analyse and interpret data that were collected in a study that someone else designed (in which case you need statistics). In that sense, from the perspective of allowing you to do well in all your other classes, the statistics is more urgent. But note that “urgent” is different from “important” – they both matter.

I really do want to stress that research design is just as important as data analysis, and this book does spend a fair amount of time on it. However, while statistics has a kind of universality, and provides a set of core tools that are useful for most types of psychological research, the research methods side isn’t quite so universal. There are some general principles that everyone should think about, but a lot of research design is very idiosyncratic, and is specific to the area of research that you want to engage in. To the extent that it’s the details that matter, those details don’t usually show up in an introductory stats and research methods class.
1.06: A Brief Introduction to Research Design

In this chapter, we’re going to start thinking about the basic ideas that go into designing a study, collecting data, checking whether your data collection works, and so on. It won’t give you enough information to allow you to design studies of your own, but it will give you a lot of the basic tools that you need to assess the studies done by other people. However, since the focus of this book is much more on data analysis than on data collection, I’m only giving a very brief overview. Note that this chapter is “special” in two ways. Firstly, it’s much more psychology-specific than the later chapters. Secondly, it focuses much more heavily on the scientific problem of research methodology, and much less on the statistical problem of data analysis. Nevertheless, the two problems are related to one another, so it’s traditional for stats textbooks to discuss the problem in a little detail. This chapter relies heavily on Campbell and Stanley (1963) for the discussion of study design, and Stevens (1946) for the discussion of scales of measurement. Later versions will attempt to be more precise in the citations.
The first thing to understand is that data collection can be thought of as a kind of measurement. That is, what we’re trying to do here is measure something about human behaviour or the human mind. What do I mean by “measurement”?

Some thoughts about psychological measurement

Measurement itself is a subtle concept, but basically it comes down to finding some way of assigning numbers, or labels, or some other kind of well-defined descriptions to “stuff”. So, any of the following would count as a psychological measurement:
• My age is 33 years.
• I do not like anchovies.
• My chromosomal gender is male.
• My self-identified gender is male.

In each item on the short list above, one part names the thing to be measured (my age, whether I like anchovies, my chromosomal gender, my self-identified gender), and the other part is the measurement itself (33 years, no, male, and male, respectively). In fact, we can expand on this a little bit, by thinking about the set of possible measurements that could have arisen in each case:
• My age (in years) could have been 0, 1, 2, 3 …, etc. The upper bound on what my age could possibly be is a bit fuzzy, but in practice you’d be safe in saying that the largest possible age is 150, since no human has ever lived that long.
• When asked if I like anchovies, I might have said that I do, or I do not, or I have no opinion, or I sometimes do.
• My chromosomal gender is almost certainly going to be male (XY) or female (XX), but there are a few other possibilities. I could also have Klinefelter’s syndrome (XXY), which is more similar to male than to female. And I imagine there are other possibilities too.
• My self-identified gender is also very likely to be male or female, but it doesn’t have to agree with my chromosomal gender. I may also choose to identify with neither, or to explicitly call myself transgender.

As you can see, for some things (like age) it seems fairly obvious what the set of possible measurements should be, whereas for other things it gets a bit tricky. But I want to point out that even in the case of someone’s age, it’s much more subtle than this. For instance, in the example above, I assumed that it was okay to measure age in years. But if you’re a developmental psychologist, that’s way too crude, and so you often measure age in years and months (if a child is 2 years and 11 months, this is usually written as “2;11”). If you’re interested in newborns, you might want to measure age in days since birth, maybe even hours since birth. In other words, the way in which you specify the allowable measurement values is important.

Looking at this a bit more closely, you might also realise that the concept of “age” isn’t actually all that precise. In general, when we say “age” we implicitly mean “the length of time since birth”. But that’s not always the right way to do it. Suppose you’re interested in how newborn babies control their eye movements. If you’re interested in kids that young, you might also start to worry that “birth” is not the only meaningful point in time to care about. If Baby Alice is born 3 weeks premature and Baby Bianca is born 1 week late, would it really make sense to say that they are the “same age” if we encountered them “2 hours after birth”? In one sense, yes: by social convention, we use birth as our reference point for talking about age in everyday life, since it defines the amount of time the person has been operating as an independent entity in the world, but from a scientific perspective that’s not the only thing we care about.
When we think about the biology of human beings, it’s often useful to think of ourselves as organisms that have been growing and maturing since conception, and from that perspective Alice and Bianca aren’t the same age at all. So you might want to define the concept of “age” in two different ways: the length of time since conception, and the length of time since birth. When dealing with adults, it won’t make much difference, but when dealing with newborns it might.

Moving beyond these issues, there’s the question of methodology. What specific “measurement method” are you going to use to find out someone’s age? As before, there are lots of different possibilities:
• You could just ask people “how old are you?” The method of self-report is fast, cheap and easy, but it only works with people old enough to understand the question, and some people lie about their age.
• You could ask an authority (e.g., a parent) “how old is your child?” This method is fast, and when dealing with kids it’s not all that hard since the parent is almost always around. It doesn’t work as well if you want to know “age since conception”, since a lot of parents can’t say for sure when conception took place. For that, you might need a different authority (e.g., an obstetrician).
• You could look up official records, like birth certificates. This is time consuming and annoying, but it has its uses (e.g., if the person is now dead).

Operationalization: defining your measurement

All of the ideas discussed in the previous section relate to the concept of operationalization. To be a bit more precise about the idea, operationalization is the process by which we take a meaningful but somewhat vague concept and turn it into a precise measurement. The process of operationalization can involve several different things:
• Being precise about what you are trying to measure: For instance, does “age” mean “time since birth” or “time since conception” in the context of your research?
• Determining what method you will use to measure it: Will you use self-report to measure age, ask a parent, or look up an official record? If you’re using self-report, how will you phrase the question?
• Defining the set of allowable values that the measurement can take: Note that these values don’t always have to be numerical, though they often are. When measuring age, the values are numerical, but we still need to think carefully about what numbers are allowed. Do we want age in years, years and months, days, hours? Etc. For other types of measurements (e.g., gender), the values aren’t numerical. But, just as before, we need to think about what values are allowed. If we’re asking people to self-report their gender, what options do we allow them to choose between? Is it enough to allow only “male” or “female”? Do you need an “other” option? Or should we not give people any specific options, and let them answer in their own words? And if you open up the set of possible values to include all verbal responses, how will you interpret their answers?

Operationalization is a tricky business, and there’s no “one, true way” to do it. The way in which you choose to operationalize the informal concept of “age” or “gender” into a formal measurement depends on what you need to use the measurement for. Often you’ll find that the community of scientists who work in your area have some fairly well-established ideas for how to go about it. Even so, operationalization needs to be thought through on a case-by-case basis.
Nevertheless, while there are a lot of issues that are specific to each individual research project, there are some aspects to it that are pretty general. Before moving on, I want to take a moment to clear up our terminology, and in the process introduce one more term. Here are four different things that are closely related to each other:
• A theoretical construct. This is the thing that you’re trying to take a measurement of, like “age”, “gender” or an “opinion”. A theoretical construct can’t be directly observed, and it’s often actually a bit vague.
• A measure. The measure refers to the method or the tool that you use to make your observations. A question in a survey, a behavioural observation or a brain scan could all count as a measure.
• An operationalization. The term “operationalization” refers to the logical connection between the measure and the theoretical construct, or to the process by which we try to derive a measure from a theoretical construct.
• A variable. Finally, a new term. A variable is what we end up with when we apply our measure to something in the world. That is, variables are the actual “data” that we end up with in our data sets.

In practice, even scientists tend to blur the distinction between these things, but it’s very helpful to try to understand the differences.
As the previous section indicates, the outcome of a psychological measurement is called a variable. But not all variables are of the same qualitative type, and it’s very useful to understand what types there are. A very useful concept for distinguishing between different types of variables is what’s known as scales of measurement.

Nominal scale

A nominal scale variable (also referred to as a categorical variable) is one in which there is no particular relationship between the different possibilities: for these kinds of variables it doesn’t make any sense to say that one of them is “bigger” or “better” than any other one, and it absolutely doesn’t make any sense to average them. The classic example for this is “eye colour”. Eyes can be blue, green and brown, among other possibilities, but none of them is any “better” than any other one. As a result, it would feel really weird to talk about an “average eye colour”. Similarly, gender is nominal too: male isn’t better or worse than female, nor does it make sense to try to talk about an “average gender”. In short, nominal scale variables are those for which the only thing you can say about the different possibilities is that they are different. That’s it.

Let’s take a slightly closer look at this. Suppose I was doing research on how people commute to and from work. One variable I would have to measure would be what kind of transportation people use to get to work. This “transport type” variable could have quite a few possible values, including: “train”, “bus”, “car”, “bicycle”, etc. For now, let’s suppose that these four are the only possibilities. Suppose I ask 100 people how they got to work today, and I get this:

Transportation   Number of people
(1) Train        12
(2) Bus          30
(3) Car          48
(4) Bicycle      10

So, what’s the average transportation type? Obviously, the answer here is that there isn’t one. It’s a silly question to ask. You can say that travel by car is the most popular method, and travel by train is the least popular method, but that’s about all. Similarly, notice that the order in which I list the options isn’t very interesting. I could have chosen to display the data like this and nothing really changes:

Transportation   Number of people
(3) Car          48
(1) Train        12
(4) Bicycle      10
(2) Bus          30

Ordinal scale

Ordinal scale variables have a bit more structure than nominal scale variables, but not by a lot. An ordinal scale variable is one in which there is a natural, meaningful way to order the different possibilities, but you can’t do anything else. The usual example given of an ordinal variable is “finishing position in a race”. You can say that the person who finished first was faster than the person who finished second, but you don’t know how much faster. As a consequence we know that 1st \(>\) 2nd, and we know that 2nd \(>\) 3rd, but the difference between 1st and 2nd might be much larger than the difference between 2nd and 3rd.

Here’s a more psychologically interesting example. Suppose I’m interested in people’s attitudes to climate change, and I ask them to pick one of these four statements that most closely matches their beliefs:
1. Temperatures are rising, because of human activity
2. Temperatures are rising, but we don’t know why
3. Temperatures are rising, but not because of humans
4. Temperatures are not rising

Notice that these four statements actually do have a natural ordering, in terms of “the extent to which they agree with the current science”.
Statement 1 is a close match, statement 2 is a reasonable match, statement 3 isn’t a very good match, and statement 4 is in strong opposition to the science. So, in terms of the thing I’m interested in (the extent to which people endorse the science), I can order the items as \(1 > 2 > 3 > 4\). Since this ordering exists, it would be very weird to list the options like this…
1. Temperatures are rising, but not because of humans
2. Temperatures are rising, because of human activity
3. Temperatures are not rising
4. Temperatures are rising, but we don’t know why

…because it seems to violate the natural “structure” to the question. So, let’s suppose I asked 100 people these questions, and got the following answers:

Response                                                 Number
(1) Temperatures are rising, because of human activity   51
(2) Temperatures are rising, but we don’t know why       20
(3) Temperatures are rising, but not because of humans   10
(4) Temperatures are not rising                          19

When analysing these data, it seems quite reasonable to try to group (1), (2) and (3) together, and say that 81 of 100 people were willing to at least partially endorse the science. And it’s also quite reasonable to group (2), (3) and (4) together and say that 49 of 100 people registered at least some disagreement with the dominant scientific view. However, it would be entirely bizarre to try to group (1), (2) and (4) together and say that 90 of 100 people said…what? There’s nothing sensible that allows you to group those responses together at all. That said, notice that while we can use the natural ordering of these items to construct sensible groupings, what we can’t do is average them. For instance, in my simple example here, the “average” response to the question is 1.97. If you can tell me what that means, I’d love to know. Because that sounds like gibberish to me!

Interval scale

In contrast to nominal and ordinal scale variables, interval scale and ratio scale variables are variables for which the numerical value is genuinely meaningful. In the case of interval scale variables, the differences between the numbers are interpretable, but the variable doesn’t have a “natural” zero value. A good example of an interval scale variable is measuring temperature in degrees celsius. For instance, if it was \( 15^{\circ} \) yesterday and \( 18^{\circ} \) today, then the \( 3^{\circ} \) difference between the two is genuinely meaningful. Moreover, that \( 3^{\circ} \) difference is exactly the same as the \( 3^{\circ} \) difference between \( 7^{\circ} \) and \( 10^{\circ} \). In short, addition and subtraction are meaningful for interval scale variables. However, notice that \( 0^{\circ} \) does not mean “no temperature at all”: it actually means “the temperature at which water freezes”, which is pretty arbitrary. As a consequence, it becomes pointless to try to multiply and divide temperatures. It is wrong to say that \( 20^{\circ} \) is twice as hot as \( 10^{\circ} \), just as it is weird and meaningless to try to claim that \( 20^{\circ} \) is negative two times as hot as \( -10^{\circ} \).

Again, let’s look at a more psychological example. Suppose I’m interested in looking at how the attitudes of first-year university students have changed over time. Obviously, I’m going to want to record the year in which each student started. This is an interval scale variable. A student who started in 2003 did arrive 5 years before a student who started in 2008.
However, it would be completely insane for me to divide 2008 by 2003 and say that the second student started “1.0024 times later” than the first one. That doesn’t make any sense at all.

Ratio scale

The fourth and final type of variable to consider is a ratio scale variable, in which zero really means zero, and it’s okay to multiply and divide. A good psychological example of a ratio scale variable is response time (RT). In a lot of tasks it’s very common to record the amount of time somebody takes to solve a problem or answer a question, because it’s an indicator of how difficult the task is. Suppose that Alan takes 2.3 seconds to respond to a question, whereas Ben takes 3.1 seconds. As with an interval scale variable, addition and subtraction are both meaningful here. Ben really did take \( 3.1 - 2.3 = 0.8 \) seconds longer than Alan did. However, notice that multiplication and division also make sense here: Ben took \( 3.1 / 2.3 = 1.35 \) times as long as Alan did to answer the question. And the reason why you can do this is that, for a ratio scale variable such as RT, “zero seconds” really does mean “no time at all”.

Continuous versus discrete variables

There’s a second kind of distinction that you need to be aware of, regarding what types of variables you can run into. This is the distinction between continuous variables and discrete variables. The difference between these is as follows:
• A continuous variable is one in which, for any two values that you can think of, it’s always logically possible to have another value in between.
• A discrete variable is, in effect, a variable that isn’t continuous. For a discrete variable, it’s sometimes the case that there’s nothing in the middle.

These definitions probably seem a bit abstract, but they’re pretty simple once you see some examples. For instance, response time is continuous. If Alan takes 2.3 seconds and Ben takes 3.1 seconds to respond to a question, then it’s possible for Cameron’s response time to lie in between, at 3.0 seconds. And of course it would also be possible for David to take 3.031 seconds to respond, meaning that his RT would lie in between Cameron’s and Ben’s. And while in practice it might be impossible to measure RT that precisely, it’s certainly possible in principle. Because we can always find a new value for RT in between any two other ones, we say that RT is continuous.

Discrete variables occur when this rule is violated. For example, nominal scale variables are always discrete: there isn’t a type of transportation that falls “in between” trains and bicycles, not in the strict mathematical way that 2.3 falls in between 2 and 3. So transportation type is discrete. Similarly, ordinal scale variables are always discrete: although “2nd place” does fall between “1st place” and “3rd place”, there’s nothing that can logically fall in between “1st place” and “2nd place”. Interval scale and ratio scale variables can go either way. As we saw above, response time (a ratio scale variable) is continuous. Temperature in degrees celsius (an interval scale variable) is also continuous. However, the year you went to school (an interval scale variable) is discrete. There’s no year in between 2002 and 2003. The number of questions you get right on a true-or-false test (a ratio scale variable) is also discrete: since a true-or-false question doesn’t allow you to be “partially correct”, there’s nothing in between 5/10 and 6/10.
The table below summarizes the relationship between the scales of measurement and the continuous/discrete distinction. Cells marked with an x correspond to combinations that are possible. I’m trying to hammer this point home, because (a) some textbooks get this wrong, and (b) people very often say things like “discrete variable” when they mean “nominal scale variable”. It’s very unfortunate.

            continuous   discrete
nominal                  x
ordinal                  x
interval    x            x
ratio       x            x

Some complexities

Okay, I know you’re going to be shocked to hear this, but …the real world is much messier than this little classification scheme suggests. Very few variables in real life actually fall into these nice neat categories, so you need to be kind of careful not to treat the scales of measurement as if they were hard and fast rules. It doesn’t work like that: they’re guidelines, intended to help you think about the situations in which you should treat different variables differently. Nothing more.

So let’s take a classic example, maybe the classic example, of a psychological measurement tool: the Likert scale. The humble Likert scale is the bread and butter tool of all survey design. You have filled out hundreds, maybe thousands, of them, and odds are you’ve even used one yourself. Suppose we have a survey question that looks like this:

Which of the following best describes your opinion of the statement that “all pirates are freaking awesome”?

and then the options presented to the participant are these:
(1) Strongly disagree
(2) Disagree
(3) Neither agree nor disagree
(4) Agree
(5) Strongly agree

This set of items is an example of a 5-point Likert scale: people are asked to choose among one of several (in this case 5) clearly ordered possibilities, generally with a verbal descriptor given in each case. However, it’s not necessary that all items be explicitly described. This is a perfectly good example of a 5-point Likert scale too:
(1) Strongly disagree
(2)
(3)
(4)
(5) Strongly agree

Likert scales are very handy, if somewhat limited, tools. The question is, what kind of variable are they? They’re obviously discrete, since you can’t give a response of 2.5. They’re obviously not nominal scale, since the items are ordered; and they’re not ratio scale either, since there’s no natural zero. But are they ordinal scale or interval scale? One argument says that we can’t really prove that the difference between “strongly agree” and “agree” is of the same size as the difference between “agree” and “neither agree nor disagree”. In fact, in everyday life it’s pretty obvious that they’re not the same at all. So this suggests that we ought to treat Likert scales as ordinal variables. On the other hand, in practice most participants do seem to take the whole “on a scale from 1 to 5” part fairly seriously, and they tend to act as if the differences between the five response options were fairly similar to one another. As a consequence, a lot of researchers treat Likert scale data as if it were interval scale. It’s not interval scale, but in practice it’s close enough that we usually think of it as being quasi-interval scale.
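If you happen to be working in R, the nominal/ordinal distinction maps quite naturally onto unordered versus ordered factors. The sketch below is only an illustration (the data and variable names are invented for this example); it shows that averaging a nominal variable is meaningless, while an ordered factor preserves the ranking of Likert-style responses without pretending they are interval scale numbers.

# Nominal scale: transport type. An unordered factor; only counts make sense.
transport <- factor(c("Train", "Bus", "Car", "Car", "Bicycle", "Bus", "Car"))
table(transport)                 # a frequency table is fine
# mean(transport)                # returns NA with a warning: averaging is meaningless

# Ordinal scale: a 5-point Likert response, stored as an *ordered* factor.
likert_levels <- c("Strongly disagree", "Disagree",
                   "Neither agree nor disagree", "Agree", "Strongly agree")
opinion <- factor(c("Agree", "Strongly agree", "Disagree", "Agree", "Agree"),
                  levels = likert_levels, ordered = TRUE)
opinion >= "Agree"               # comparisons respect the level ordering
table(opinion)                   # counts are listed in their natural order

# Treating the codes 1 to 5 as interval scale, e.g. mean(as.numeric(opinion)),
# is a modelling choice rather than something the data automatically justify;
# that is the "quasi-interval" issue discussed above.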