One fairly major problem that remains is the problem of “model selection”. That is, if we have a data set that contains several variables, which ones should we include as predictors, and which ones should we not include? In other words, we have a problem of variable selection. In general, model selection is a complex business, but it’s made somewhat simpler if we restrict ourselves to the problem of choosing a subset of the variables that ought to be included in the model. Nevertheless, I’m not going to try covering even this reduced topic in a lot of detail. Instead, I’ll talk about two broad principles that you need to think about; and then discuss one concrete tool that R provides to help you select a subset of variables to include in your model. Firstly, the two principles:
• It’s nice to have an actual substantive basis for your choices. That is, in a lot of situations you the researcher have good reasons to pick out a smallish number of possible regression models that are of theoretical interest; these models will have a sensible interpretation in the context of your field. Never discount the importance of this. Statistics serves the scientific process, not the other way around.
• To the extent that your choices rely on statistical inference, there is a trade off between simplicity and goodness of fit. As you add more predictors to the model, you make it more complex; each predictor adds a new free parameter (i.e., a new regression coefficient), and each new parameter increases the model’s capacity to “absorb” random variations. So the goodness of fit (e.g., R2) continues to rise as you add more predictors no matter what. If you want your model to be able to generalise well to new observations, you need to avoid throwing in too many variables.
This latter principle is often referred to as Ockham’s razor, and is often summarised in terms of the following pithy saying: do not multiply entities beyond necessity. In this context, it means: don’t chuck in a bunch of largely irrelevant predictors just to boost your R2. Hm. Yeah, the original was better.
In any case, what we need is an actual mathematical criterion that will implement the qualitative principle behind Ockham’s razor in the context of selecting a regression model. As it turns out there are several possibilities. The one that I’ll talk about is the Akaike information criterion (AIC; Akaike 1974) simply because it’s the default one used in the R function step(). In the context of a linear regression model (and ignoring terms that don’t depend on the model in any way!), the AIC for a model that has K predictor variables plus an intercept is:227
$\mathrm{AIC}=\dfrac{\mathrm{SS}_{res}}{\hat{\sigma}^{2}}+2 K$
The smaller the AIC value, the better the model performance is. If we ignore the low level details, it’s fairly obvious what the AIC does: on the left we have a term that increases as the model predictions get worse; on the right we have a term that increases as the model complexity increases. The best model is the one that fits the data well (low residuals; left hand side) using as few predictors as possible (low K; right hand side). In short, this is a simple implementation of Ockham’s razor.
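Just to make the formula concrete, here's a minimal hand-rolled sketch of my own (not from the original text). I'm treating σ̂2 as a single fixed estimate of the residual variance, shared across all the models being compared (for instance, taken from the largest model), since only the differences between AIC values computed this way are meaningful:
simpleAIC <- function( model, sigma.sq ) {   # sigma.sq: a fixed residual variance estimate
  ss.res <- sum( residuals( model )^2 )      # residual sum of squares for this model
  K <- length( coef( model ) ) - 1           # number of predictors, not counting the intercept
  ss.res / sigma.sq + 2*K                    # the simplified AIC from the formula above
}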
Backward elimination
Okay, let’s have a look at the step() function at work. In this example I’ll keep it simple and use only the basic backward elimination approach. That is, start with the complete regression model, including all possible predictors. Then, at each “step” we try all possible ways of removing one of the variables, and whichever of these is best (in terms of lowest AIC value) is accepted. This becomes our new regression model; and we then try all possible deletions from the new model, again choosing the option with lowest AIC. This process continues until we end up with a model that has a lower AIC value than any of the other possible models that you could produce by deleting one of its predictors. Let’s see this in action. First, I need to define the model from which the process starts.
full.model <- lm( formula = dan.grump ~ dan.sleep + baby.sleep + day,
data = parenthood
)
That’s nothing terribly new: yet another regression. Booooring. Still, we do need to do it: the object argument to the step() function will be this regression model. With this in mind, I would call the step() function using the following command:
step( object = full.model, # start at the full model
direction = "backward" # allow it remove predictors but not add them
)
## Start: AIC=299.08
## dan.grump ~ dan.sleep + baby.sleep + day
##
## Df Sum of Sq RSS AIC
## - baby.sleep 1 0.1 1837.2 297.08
## - day 1 1.6 1838.7 297.16
## <none> 1837.1 299.08
## - dan.sleep 1 4909.0 6746.1 427.15
##
## Step: AIC=297.08
## dan.grump ~ dan.sleep + day
##
## Df Sum of Sq RSS AIC
## - day 1 1.6 1838.7 295.17
## <none> 1837.2 297.08
## - dan.sleep 1 8103.0 9940.1 463.92
##
## Step: AIC=295.17
## dan.grump ~ dan.sleep
##
## Df Sum of Sq RSS AIC
## <none> 1838.7 295.17
## - dan.sleep 1 8159.9 9998.6 462.50
##
## Call:
## lm(formula = dan.grump ~ dan.sleep, data = parenthood)
##
## Coefficients:
## (Intercept) dan.sleep
## 125.956 -8.937
although in practice I didn’t need to specify direction because "backward" is the default. The output is somewhat lengthy, so I’ll go through it slowly. Firstly, the output reports the AIC value for the current best model:
Start: AIC=299.08
dan.grump ~ dan.sleep + baby.sleep + day
That’s our starting point. Since small AIC values are good, we want to see if we can get a value smaller than 299.08 by deleting one of those three predictors. So what R does is try all three possibilities, calculate the AIC values for each one, and then print out a short table with the results:
Df Sum of Sq RSS AIC
- baby.sleep 1 0.1 1837.2 297.08
- day 1 1.6 1838.7 297.16
<none> 1837.1 299.08
- dan.sleep 1 4909.0 6746.1 427.15
To read this table, it helps to note that the text in the left hand column is telling you what change R made to the regression model. So the line that reads <none> is the actual model we started with, and you can see on the right hand side that this still corresponds to an AIC value of 299.08 (obviously). The other three rows in the table correspond to the other three models that it looked at: it tried removing the baby.sleep variable, which is indicated by - baby.sleep, and this produced an AIC value of 297.08. That was the best of the three moves, so it’s at the top of the table. So, this move is accepted, and now we start again. There are two predictors left in the model, dan.sleep and day, so it tries deleting those:
Step: AIC=297.08
dan.grump ~ dan.sleep + day
Df Sum of Sq RSS AIC
- day 1 1.6 1838.7 295.17
<none> 1837.2 297.08
- dan.sleep 1 8103.0 9940.1 463.92
Okay, so what we can see is that removing the day variable lowers the AIC value from 297.08 to 295.17. So R decides to keep that change too, and moves on:
Step: AIC=295.17
dan.grump ~ dan.sleep
Df Sum of Sq RSS AIC
<none> 1838.7 295.17
- dan.sleep 1 8159.9 9998.6 462.50
This time around, there’s no further deletions that can actually improve the AIC value. So the step() function stops, and prints out the result of the best regression model it could find:
Call:
lm(formula = dan.grump ~ dan.sleep, data = parenthood)
Coefficients:
(Intercept) dan.sleep
125.956 -8.937
which is (perhaps not all that surprisingly) the regression.1 model that we started with at the beginning of the chapter.
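As an aside, if you just want the AIC value for a single candidate model without running the whole search, you can call extractAIC(), which (as far as I can tell) is what step() uses internally. A quick sketch, with the values I'd expect shown as comments:
extractAIC( full.model )                                 # equivalent df of 4, AIC of roughly 299.08
extractAIC( lm( dan.grump ~ dan.sleep, parenthood ) )    # equivalent df of 2, AIC of roughly 295.17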
Forward selection
As an alternative, you can also try forward selection. This time around we start with the smallest possible model as our start point, and only consider the possible additions to the model. However, there’s one complication: you also need to tell step() what the largest possible model you’re willing to entertain is, using the scope argument. The simplest usage is like this:
null.model <- lm( dan.grump ~ 1, parenthood ) # intercept only.
step( object = null.model, # start with null.model
direction = "forward", # only consider "addition" moves
scope = dan.grump ~ dan.sleep + baby.sleep + day # largest model allowed
)
## Start: AIC=462.5
## dan.grump ~ 1
##
## Df Sum of Sq RSS AIC
## + dan.sleep 1 8159.9 1838.7 295.17
## + baby.sleep 1 3202.7 6795.9 425.89
## <none> 9998.6 462.50
## + day 1 58.5 9940.1 463.92
##
## Step: AIC=295.17
## dan.grump ~ dan.sleep
##
## Df Sum of Sq RSS AIC
## <none> 1838.7 295.17
## + day 1 1.55760 1837.2 297.08
## + baby.sleep 1 0.02858 1838.7 297.16
##
## Call:
## lm(formula = dan.grump ~ dan.sleep, data = parenthood)
##
## Coefficients:
## (Intercept) dan.sleep
## 125.956 -8.937
If I do this, the output takes on a similar form, but now it only considers addition (+) moves rather than deletion (-) moves:
Start: AIC=462.5
dan.grump ~ 1
Df Sum of Sq RSS AIC
+ dan.sleep 1 8159.9 1838.7 295.17
+ baby.sleep 1 3202.7 6795.9 425.89
<none> 9998.6 462.50
+ day 1 58.5 9940.1 463.92
Step: AIC=295.17
dan.grump ~ dan.sleep
Df Sum of Sq RSS AIC
<none> 1838.7 295.17
+ day 1 1.55760 1837.2 297.08
+ baby.sleep 1 0.02858 1838.7 297.16
Call:
lm(formula = dan.grump ~ dan.sleep, data = parenthood)
Coefficients:
(Intercept) dan.sleep
125.956 -8.937
As you can see, it’s found the same model. In general though, forward and backward selection don’t always have to end up in the same place.
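If the possibility of forward and backward selection disagreeing bothers you, it's worth knowing that step() also accepts direction = "both", in which case it considers both additions and deletions at every step. Here's a sketch of my own showing how that call would look:
step( object = null.model,                                # start with the intercept-only model
      direction = "both",                                 # consider additions and deletions
      scope = dan.grump ~ dan.sleep + baby.sleep + day    # largest model allowed
)
For these data I'd expect it to settle on the same dan.sleep-only model, but in general the three search strategies can end up in different places.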
A caveat
Automated variable selection methods are seductive things, especially when they’re bundled up in (fairly) simple functions like step(). They provide an element of objectivity to your model selection, and that’s kind of nice. Unfortunately, they’re sometimes used as an excuse for thoughtlessness. No longer do you have to think carefully about which predictors to add to the model and what the theoretical basis for their inclusion might be… everything is solved by the magic of AIC. And if we start throwing around phrases like Ockham’s razor, well, it sounds like everything is wrapped up in a nice neat little package that no-one can argue with.
Or, perhaps not. Firstly, there’s very little agreement on what counts as an appropriate model selection criterion. When I was taught backward elimination as an undergraduate, we used F-tests to do it, because that was the default method used by the software. The default in the step() function is AIC, and since this is an introductory text that’s the only method I’ve described, but the AIC is hardly the Word of the Gods of Statistics. It’s an approximation, derived under certain assumptions, and it’s guaranteed to work only for large samples when those assumptions are met. Alter those assumptions and you get a different criterion, like the BIC for instance. Take a different approach again and you get the NML criterion. Decide that you’re a Bayesian and you get model selection based on posterior odds ratios. Then there are a bunch of regression specific tools that I haven’t mentioned. And so on. All of these different methods have strengths and weaknesses, and some are easier to calculate than others (AIC is probably the easiest of the lot, which might account for its popularity). Almost all of them produce the same answers when the answer is “obvious” but there’s a fair amount of disagreement when the model selection problem becomes hard.
What does this mean in practice? Well, you could go and spend several years teaching yourself the theory of model selection, learning all the ins and outs of it; so that you could finally decide on what you personally think the right thing to do is. Speaking as someone who actually did that, I wouldn’t recommend it: you’ll probably come out the other side even more confused than when you started. A better strategy is to show a bit of common sense… if you’re staring at the results of a step() procedure, and the model that makes sense is close to having the smallest AIC, but is narrowly defeated by a model that doesn’t make any sense… trust your instincts. Statistical model selection is an inexact tool, and as I said at the beginning, interpretability matters.
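One small practical suggestion of my own: if you want to see how sensitive your conclusions are to the choice of criterion, the step() function has a k argument that sets the penalty per parameter (the default, k = 2, corresponds to AIC). Setting it to the log of the sample size gives a BIC-flavoured search:
step( object = full.model,
      direction = "backward",
      k = log( nrow(parenthood) )    # penalise each parameter by log(N) rather than 2
)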
Comparing two regression models
An alternative to using automated model selection procedures is for the researcher to explicitly select two or more regression models to compare to each other. You can do this in a few different ways, depending on what research question you're trying to answer. Suppose we want to know whether or not the amount of sleep that my son got has any relationship to my grumpiness, over and above what we might expect from the amount of sleep that I got. We also want to make sure that the day on which we took the measurement has no influence on the relationship. That is, we're interested in the relationship between baby.sleep and dan.grump, and from that perspective dan.sleep and day are nuisance variables or covariates that we want to control for. In this situation, what we would like to know is whether dan.grump ~ dan.sleep + day + baby.sleep (which I'll call Model 1, or M1) is a better regression model for these data than dan.grump ~ dan.sleep + day (which I'll call Model 0, or M0). There are two different ways we can compare these two models, one based on a model selection criterion like AIC, and the other based on an explicit hypothesis test. I'll show you the AIC based approach first because it's simpler, and follows naturally from the step() function that we saw in the last section. The first thing I need to do is actually run the regressions:
M0 <- lm( dan.grump ~ dan.sleep + day, parenthood )
M1 <- lm( dan.grump ~ dan.sleep + day + baby.sleep, parenthood )
Now that I have my regression models, I could use the summary() function to run various hypothesis tests and other useful statistics, just as we have discussed throughout this chapter. However, since the current focus is on model comparison, I'll skip this step and go straight to the AIC calculations. Conveniently, the AIC() function in R lets you input several regression models, and it will spit out the AIC values for each of them:228
AIC( M0, M1 )
## df AIC
## M0 4 582.8681
## M1 5 584.8646
Since Model 0 has the smaller AIC value, it is judged to be the better model for these data.
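Keep in mind (as the earlier footnote on AIC points out) that it's the difference between the two AIC values, not their absolute size, that carries the information. A quick check of that difference:
AIC( M1 ) - AIC( M0 )    # roughly 2, in favour of the simpler model M0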
A somewhat different approach to the problem comes out of the hypothesis testing framework. Suppose you have two regression models, where one of them (Model 0) contains a subset of the predictors from the other one (Model 1). That is, Model 1 contains all of the predictors included in Model 0, plus one or more additional predictors. When this happens we say that Model 0 is nested within Model 1, or possibly that Model 0 is a submodel of Model 1. Regardless of the terminology what this means is that we can think of Model 0 as a null hypothesis and Model 1 as an alternative hypothesis. And in fact we can construct an F test for this in a fairly straightforward fashion. We can fit both models to the data and obtain a residual sum of squares for both models. I’ll denote these as $\ SS_{res}^{(0)}$ and $\ SS_{res}^{(1)}$ respectively. The superscripting here just indicates which model we’re talking about. Then our F statistic is
$F=\dfrac{\left(\mathrm{SS}_{r e s}^{(0)}-\mathrm{SS}_{r e s}^{(1)}\right) / k}{\left(\mathrm{SS}_{r e s}^{(1)}\right) /(N-p-1)}$
where N is the number of observations, p is the number of predictors in the full model (not including the intercept), and k is the difference in the number of parameters between the two models.229 The degrees of freedom here are k and N−p−1. Note that it’s often more convenient to think about the difference between those two SS values as a sum of squares in its own right. That is:
$\mathrm{SS}_{\Delta}=\mathrm{SS}_{r e s}^{(0)}-\mathrm{SS}_{r e s}^{(1)}$
The reason why this is helpful is that we can express SSΔ as a measure of the extent to which the two models make different predictions about the outcome variable. Specifically:
$\ SS_ {\Delta} = \sum_i (\hat{y_i}^{(1)} - \hat{y_i}^{(0)})^2$
where $\ \hat{y_i}^{(0)}$ is the fitted value for yi according to model M0 and $\ \hat{y_i}^{(1)}$ is the fitted value for yi according to model M1.
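If you'd like to convince yourself that these two ways of writing SSΔ really are the same number, it's easy to check by hand. Here's a small sketch of my own using the M0 and M1 objects defined above; both of the last two lines should come out at about 0.064, matching the Sum of Sq entry in the anova() output shown below:
ss.0 <- sum( residuals( M0 )^2 )           # residual sum of squares for M0
ss.1 <- sum( residuals( M1 )^2 )           # residual sum of squares for M1
ss.0 - ss.1                                # SS-delta, as a difference of residual SS
sum( (fitted( M1 ) - fitted( M0 ))^2 )     # the same quantity, via the fitted values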
Okay, so that’s the hypothesis test that we use to compare two regression models to one another. Now, how do we do it in R? The answer is to use the anova() function. All we have to do is input the two models that we want to compare (null model first):
anova( M0, M1 )
## Analysis of Variance Table
##
## Model 1: dan.grump ~ dan.sleep + day
## Model 2: dan.grump ~ dan.sleep + day + baby.sleep
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 97 1837.2
## 2 96 1837.1 1 0.063688 0.0033 0.9541
Note that, just like we saw with the output from the step() function, R has used the acronym RSS to refer to the residual sum of squares from each model. That is, RSS in this output corresponds to SSres in the formula above. Since we have p>.05 we retain the null hypothesis (M0). This approach to regression, in which we add all of our covariates into a null model, then add the variables of interest into an alternative model, and then compare the two models in a hypothesis testing framework, is often referred to as hierarchical regression.
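For what it's worth, you can also reproduce that F statistic directly from the formula given earlier. A sketch of my own, reusing the ss.0 and ss.1 values from the check above, with approximate values in the comments:
k <- 1                                               # M1 has one more parameter than M0
df.res <- df.residual( M1 )                          # N - p - 1, which is 96 here
F.stat <- ((ss.0 - ss.1)/k) / (ss.1/df.res)          # roughly 0.0033
pf( F.stat, df1=k, df2=df.res, lower.tail=FALSE )    # roughly 0.95, matching the anova() output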
Summary
• Basic ideas in linear regression and how regression models are estimated (Sections 15.1 and 15.2).
• Multiple linear regression (Section 15.3).
• Measuring the overall performance of a regression model using R2 (Section 15.4)
• Hypothesis tests for regression models (Section 15.5)
• Calculating confidence intervals for regression coefficients, and standardised coefficients (Section 15.7)
• The assumptions of regression (Section 15.8) and how to check them (Section 15.9)
• Selecting a regression model (Section 15.10)
References
Akaike, H. 1974. “A New Look at the Statistical Model Identification.” IEEE Transactions on Automatic Control 19: 716–23.
Cook, R. D., and S. Weisberg. 1983. “Diagnostics for Heteroscedasticity in Regression.” Biometrika 70: 1–10.
Fox, J., and S. Weisberg. 2011. An R Companion to Applied Regression. 2nd ed. Los Angeles: Sage.
Long, J. S., and L. H. Ervin. 2000. “Using Heteroscedasticity Consistent Standard Errors in the Linear Regression Model.” The American Statistician 54: 217–24.
1. The ϵ symbol is the Greek letter epsilon. It’s traditional to use ϵi or ei to denote a residual.
2. Or at least, I’m assuming that it doesn’t help most people. But on the off chance that someone reading this is a proper kung fu master of linear algebra (and to be fair, I always have a few of these people in my intro stats class), it will help you to know that the solution to the estimation problem turns out to be $\ \hat{b} = (X^TX)^{-1}X^Ty$, where $\ \hat{b}$ is a vector containing the estimated regression coefficients, X is the “design matrix” that contains the predictor variables (plus an additional column containing all ones; strictly X is a matrix of the regressors, but I haven’t discussed the distinction yet), and y is a vector containing the outcome variable. For everyone else, this isn’t exactly helpful, and can be downright scary. However, since quite a few things in linear regression can be written in linear algebra terms, you’ll see a bunch of footnotes like this one in this chapter. If you can follow the maths in them, great. If not, ignore it.
3. And by “sometimes” I mean “almost never”. In practice everyone just calls it “R-squared”.
4. Note that, although R has done multiple tests here, it hasn’t done a Bonferroni correction or anything. These are standard one-sample t-tests with a two-sided alternative. If you want to make corrections for multiple tests, you need to do that yourself.
5. You can change the kind of correction it applies by specifying the p.adjust.method argument.
6. Strictly, you standardise all the regressors: that is, every “thing” that has a regression coefficient associated with it in the model. For the regression models that I’ve talked about so far, each predictor variable maps onto exactly one regressor, and vice versa. However, that’s not actually true in general: we’ll see some examples of this in Chapter 16. But for now, we don’t need to care too much about this distinction.
7. Or have no hope, as the case may be.
8. Again, for the linear algebra fanatics: the “hat matrix” is defined to be that matrix H that converts the vector of observed values y into a vector of fitted values $\ \hat{y}$, such that $\ \hat{y}$=Hy. The name comes from the fact that this is the matrix that “puts a hat on y”. The hat value of the i-th observation is the i-th diagonal element of this matrix (so technically I should be writing it as hii rather than hi). Oh, and in case you care, here’s how it’s calculated: $\ H = X(X^TX)^{-1}X^T$. Pretty, isn’t it?
9. Though special mention should be made of the influenceIndexPlot() and influencePlot() functions in the car package. These produce somewhat more detailed pictures than the default plots that I’ve shown here. There’s also an outlierTest() function that tests to see if any of the Studentised residuals are significantly larger than would be expected by chance.
10. An alternative is to run a “robust regression”; I’ll discuss robust regression in a later version of this book.
11. And, if you take the time to check the residualPlots() for regression.1, it’s pretty clear that this isn’t some wacky distortion being caused by the fact that baby.sleep is a useless predictor variable. It’s an actual nonlinearity in the relationship between dan.sleep and dan.grump.
12. Note that the underlying mechanics of the test aren’t the same as the ones I’ve described for regressions; the goodness of fit is assessed using what’s known as a score-test not an F-test, and the test statistic is (approximately) χ2 distributed if there’s no relationship
13. Again, a footnote that should be read only by the two readers of this book that love linear algebra (mmmm… I love the smell of matrix computations in the morning; smells like… nerd). In these estimators, the covariance matrix for b is given by $\ (X^TX)^{-1} X^T \Sigma X (X^TX)^{-1}$. See, it’s a “sandwich”? Assuming you think that $\ (X^TX)^{-1}$ = "bread" and $\ X^T \Sigma X$ = "filling", that is. Which of course everyone does, right? In any case, the usual estimator is what you get when you set $\ \Sigma = \hat{\sigma}^2 I$. The corrected version that I learned originally uses $\ diag (\epsilon_i^2)$ (White 1980). However, the version that Fox and Weisberg (2011) have implemented as the default in their hccm() function uses a slightly different one, proposed by Long and Ervin (2000).
14. Note, however, that the step() function computes the full version of AIC, including the irrelevant constants that I’ve dropped here. As a consequence this equation won’t correctly describe the AIC values that you see in the outputs here. However, if you calculate the AIC values using my formula for two different regression models and take the difference between them, this will be the same as the differences between AIC values that step() reports. In practice, this is all you care about: the actual value of an AIC statistic isn’t very informative, but the differences between two AIC values are useful, since these provide a measure of the extent to which one model outperforms another.
15. While I’m on this topic I should point out that there is also a function called BIC() which computes the Bayesian information criterion (BIC) for the models. So you could type BIC(M0,M1) and get a very similar output. In fact, while I’m not particularly impressed with either AIC or BIC as model selection methods, if you do find yourself using one of these two, the empirical evidence suggests that BIC is the better criterion of the two. In most simulation studies that I’ve seen, BIC does a much better job of selecting the correct model.
16. It’s worth noting in passing that this same F statistic can be used to test a much broader range of hypotheses than those that I’m mentioning here. Very briefly: notice that the nested model M0 corresponds to the full model M1 when we constrain some of the regression coefficients to zero. It is sometimes useful to construct submodels by placing other kinds of constraints on the regression coefficients. For instance, maybe two different coefficients might have to sum to zero, or something like that. You can construct hypothesis tests for those kinds of constraints too, but it is somewhat more complicated and the sampling distribution for F can end up being something known as the non-central F distribution, which is waaaaay beyond the scope of this book! All I want to do is alert you to this possibility.
16: Factorial ANOVA
Over the course of the last few chapters you can probably detect a general trend. We started out looking at tools that you can use to compare two groups to one another, most notably the t-test (Chapter 13). Then, we introduced analysis of variance (ANOVA) as a method for comparing more than two groups (Chapter 14). The chapter on regression (Chapter 15) covered a somewhat different topic, but in doing so it introduced a powerful new idea: building statistical models that have multiple predictor variables used to explain a single outcome variable. For instance, a regression model could be used to predict the number of errors a student makes in a reading comprehension test based on the number of hours they studied for the test, and their score on a standardised IQ test. The goal in this chapter is to import this idea into the ANOVA framework. For instance, suppose we were interested in using the reading comprehension test to measure student achievements in three different schools, and we suspect that girls and boys are developing at different rates (and so would be expected to have different performance on average). Each student is classified in two different ways: on the basis of their gender, and on the basis of their school. What we’d like to do is analyse the reading comprehension scores in terms of both of these grouping variables. The tool for doing so is generically referred to as factorial ANOVA. However, since we have two grouping variables, we sometimes refer to the analysis as a two-way ANOVA, in contrast to the one-way ANOVAs that we ran in Chapter 14.
When we discussed analysis of variance in Chapter 14, we assumed a fairly simple experimental design: each person falls into one of several groups, and we want to know whether these groups have different means on some outcome variable. In this section, I’ll discuss a broader class of experimental designs, known as factorial designs, in which we have more than one grouping variable. I gave one example of how this kind of design might arise above. Another example appears in Chapter 14, in which we were looking at the effect of different drugs on the mood.gain experienced by each person. In that chapter we did find a significant effect of drug, but at the end of the chapter we also ran an analysis to see if there was an effect of therapy. We didn’t find one, but there’s something a bit worrying about trying to run two separate analyses trying to predict the same outcome. Maybe there actually is an effect of therapy on mood gain, but we couldn’t find it because it was being “hidden” by the effect of drug? In other words, we’re going to want to run a single analysis that includes both drug and therapy as predictors. For this analysis each person is cross-classified by the drug they were given (a factor with 3 levels) and what therapy they received (a factor with 2 levels). We refer to this as a 3×2 factorial design. If we cross-tabulate drug by therapy, using the xtabs() function (see Section 7.1), we get the following table:230
load("./rbook-master/data/clinicaltrial.rdata")
xtabs( ~ drug + therapy, clin.trial )
## therapy
## drug no.therapy CBT
## placebo 3 3
## anxifree 3 3
## joyzepam 3 3
As you can see, not only do we have participants corresponding to all possible combinations of the two factors, indicating that our design is completely crossed, it turns out that there are an equal number of people in each group. In other words, we have a balanced design. In this section I’ll talk about how to analyse data from balanced designs, since this is the simplest case. The story for unbalanced designs is quite tedious, so we’ll put it to one side for the moment.
What hypotheses are we testing?
Like one-way ANOVA, factorial ANOVA is a tool for testing certain types of hypotheses about population means. So a sensible place to start would be to be explicit about what our hypotheses actually are. However, before we can even get to that point, it’s really useful to have some clean and simple notation to describe the population means. Because of the fact that observations are cross-classified in terms of two different factors, there are quite a lot of different means that one might be interested in. To see this, let’s start by thinking about all the different sample means that we can calculate for this kind of design. Firstly, there’s the obvious idea that we might be interested in this table of group means:
aggregate( mood.gain ~ drug + therapy, clin.trial, mean )
## drug therapy mood.gain
## 1 placebo no.therapy 0.300000
## 2 anxifree no.therapy 0.400000
## 3 joyzepam no.therapy 1.466667
## 4 placebo CBT 0.600000
## 5 anxifree CBT 1.033333
## 6 joyzepam CBT 1.500000
Now, this output shows a cross-tabulation of the group means for all possible combinations of the two factors (e.g., people who received the placebo and no therapy, people who received the placebo while getting CBT, etc). However, we can also construct tables that ignore one of the two factors. That gives us output that looks like this:
aggregate( mood.gain ~ drug, clin.trial, mean )
## drug mood.gain
## 1 placebo 0.4500000
## 2 anxifree 0.7166667
## 3 joyzepam 1.4833333
aggregate( mood.gain ~ therapy, clin.trial, mean )
## therapy mood.gain
## 1 no.therapy 0.7222222
## 2 CBT 1.0444444
But of course, if we can ignore one factor we can certainly ignore both. That is, we might also be interested in calculating the average mood gain across all 18 participants, regardless of what drug or psychological therapy they were given:
mean( clin.trial$mood.gain )
## [1] 0.8833333
At this point we have 12 different sample means to keep track of! It is helpful to organise all these numbers into a single table, which would look like this:
              no therapy    CBT     total
  placebo        0.30       0.60     0.45
  anxifree       0.40       1.03     0.72
  joyzepam       1.47       1.50     1.48
  total          0.72       1.04     0.88
Now, each of these different means is of course a sample statistic: it’s a quantity that pertains to the specific observations that we’ve made during our study. What we want to make inferences about are the corresponding population parameters: that is, the true means as they exist within some broader population. Those population means can also be organised into a similar table, but we’ll need a little mathematical notation to do so. As usual, I’ll use the symbol μ to denote a population mean. However, because there are lots of different means, I’ll need to use subscripts to distinguish between them. Here’s how the notation works. Our table is defined in terms of two factors: each row corresponds to a different level of Factor A (in this case drug), and each column corresponds to a different level of Factor B (in this case therapy). If we let R denote the number of rows in the table, and C denote the number of columns, we can refer to this as an R×C factorial ANOVA. In this case R=3 and C=2. We’ll use lowercase letters to refer to specific rows and columns, so μrc refers to the population mean associated with the rth level of Factor A (i.e. row number r) and the cth level of Factor B (column number c).231 So the population means are now written like this:
              no therapy    CBT     total
  placebo        μ11        μ12
  anxifree       μ21        μ22
  joyzepam       μ31        μ32
  total
Okay, what about the remaining entries? For instance, how should we describe the average mood gain across the entire (hypothetical) population of people who might be given Joyzepam in an experiment like this, regardless of whether they were in CBT? We use the “dot” notation to express this. In the case of Joyzepam, notice that we’re talking about the mean associated with the third row in the table. That is, we’re averaging across two cell means (i.e., μ31 and μ32). The result of this averaging is referred to as a marginal mean, and would be denoted μ3. in this case. The marginal mean for CBT corresponds to the population mean associated with the second column in the table, so we use the notation μ.2 to describe it. The grand mean is denoted μ.. because it is the mean obtained by averaging (marginalising232) over both. So our full table of population means can be written down like this:
              no therapy    CBT     total
  placebo        μ11        μ12      μ1.
  anxifree       μ21        μ22      μ2.
  joyzepam       μ31        μ32      μ3.
  total          μ.1        μ.2      μ..
Now that we have this notation, it is straightforward to formulate and express some hypotheses. Let’s suppose that the goal is to find out two things: firstly, does the choice of drug have any effect on mood, and secondly, does CBT have any effect on mood? These aren’t the only hypotheses that we could formulate of course, and we’ll see a really important example of a different kind of hypothesis in Section 16.2, but these are the two simplest hypotheses to test, and so we’ll start there. Consider the first test. If drug has no effect, then we would expect all of the row means to be identical, right? So that’s our null hypothesis. On the other hand, if the drug does matter then we should expect these row means to be different. Formally, we write down our null and alternative hypotheses in terms of the equality of marginal means:
Null hypothesis H0: row means are the same, i.e., μ1. = μ2. = μ3.
Alternative hypothesis H1: at least one row mean is different.
It’s worth noting that these are exactly the same statistical hypotheses that we formed when we ran a one-way ANOVA on these data back in Chapter 14. Back then I used the notation μP to refer to the mean mood gain for the placebo group, with μA and μJ corresponding to the group means for the two drugs, and the null hypothesis was μP = μA = μJ. So we’re actually talking about the same hypothesis: it’s just that the more complicated ANOVA requires more careful notation due to the presence of multiple grouping variables, so we’re now referring to this hypothesis as μ1. = μ2. = μ3. However, as we’ll see shortly, although the hypothesis is identical, the test of that hypothesis is subtly different due to the fact that we’re now acknowledging the existence of the second grouping variable. Speaking of the other grouping variable, you won’t be surprised to discover that our second hypothesis test is formulated the same way. However, since we’re talking about the psychological therapy rather than drugs, our null hypothesis now corresponds to the equality of the column means:
Null hypothesis H0: column means are the same, i.e., μ.1 = μ.2
Alternative hypothesis H1: column means are different, i.e., μ.1 ≠ μ.2
Running the analysis in R
The null and alternative hypotheses that I described in the last section should seem awfully familiar: they’re basically the same as the hypotheses that we were testing in our simpler one-way ANOVAs in Chapter 14. So you’re probably expecting that the hypothesis tests that are used in factorial ANOVA will be essentially the same as the F-test from Chapter 14. You’re expecting to see references to sums of squares (SS), mean squares (MS), degrees of freedom (df), and finally an F-statistic that we can convert into a p-value, right? Well, you’re absolutely and completely right. So much so that I’m going to depart from my usual approach. Throughout this book, I’ve generally taken the approach of describing the logic (and to an extent the mathematics) that underpins a particular analysis first; and only then introducing the R commands that you’d use to produce the analysis. This time I’m going to do it the other way around, and show you the R commands first. The reason for doing this is that I want to highlight the similarities between the simple one-way ANOVA tool that we discussed in Chapter 14, and the more complicated tools that we’re going to use in this chapter.
If the data you’re trying to analyse correspond to a balanced factorial design, then running your analysis of variance is easy. To see how easy it is, let’s start by reproducing the original analysis from Chapter 14. In case you’ve forgotten, for that analysis we were using only a single factor (i.e., drug) to predict our outcome variable (i.e., mood.gain), and so this was what we did:
model.1 <- aov( mood.gain ~ drug, clin.trial )
summary( model.1 )
##             Df Sum Sq Mean Sq F value   Pr(>F)
## drug         2  3.453  1.7267   18.61 8.65e-05 ***
## Residuals   15  1.392  0.0928
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Note that this time around I’ve used the name model.1 as the label for my aov object, since I’m planning on creating quite a few other models too. To start with, suppose I’m also curious to find out if therapy has a relationship to mood.gain. In light of what we’ve seen from our discussion of multiple regression in Chapter 15, you probably won’t be surprised that all we have to do is extend the formula: in other words, if we specify mood.gain ~ drug + therapy as our model, we’ll probably get what we’re after:
model.2 <- aov( mood.gain ~ drug + therapy, clin.trial )
summary( model.2 )
##             Df Sum Sq Mean Sq F value   Pr(>F)
## drug         2  3.453  1.7267   26.15 1.87e-05 ***
## therapy      1  0.467  0.4672    7.08   0.0187 *
## Residuals   14  0.924  0.0660
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
This output is pretty simple to read too: the first row of the table reports a between-group sum of squares (SS) value associated with the drug factor, along with a corresponding between-group df value. It also calculates a mean square value (MS), an F-statistic and a p-value. There is also a row corresponding to the therapy factor, and a row corresponding to the residuals (i.e., the within groups variation). Not only are all of the individual quantities pretty familiar, the relationships between these different quantities have remained unchanged: just like we saw with the original one-way ANOVA, note that the mean square value is calculated by dividing SS by the corresponding df. That is, it’s still true that $\ MS = {SS \over df}$ regardless of whether we’re talking about drug, therapy or the residuals. To see this, let’s not worry about how the sums of squares values are calculated: instead, let’s take it on faith that R has calculated the SS values correctly, and try to verify that all the rest of the numbers make sense.
First, note that for the drug factor, we divide 3.45 by 2, and end up with a mean square value of 1.73. For the therapy factor, there’s only 1 degree of freedom, so our calculations are even simpler: dividing 0.47 (the SS value) by 1 gives us an answer of 0.47 (the MS value). Turning to the F statistics and the p values, notice that we have two of each: one corresponding to the drug factor and the other corresponding to the therapy factor. Regardless of which one we’re talking about, the F statistic is calculated by dividing the mean square value associated with the factor by the mean square value associated with the residuals. If we use “A” as shorthand notation to refer to the first factor (factor A; in this case drug) and “R” as shorthand notation to refer to the residuals, then the F statistic associated with factor A is denoted FA, and is calculated as follows:
$\ F_A={MS_A \over MS_R}$
and an equivalent formula exists for factor B (i.e., therapy). Note that this use of “R” to refer to residuals is a bit awkward, since we also used the letter R to refer to the number of rows in the table, but I’m only going to use “R” to mean residuals in the context of SSR and MSR, so hopefully this shouldn’t be confusing. Anyway, to apply this formula to the drugs factor, we take the mean square of 1.73 and divide it by the residual mean square value of 0.07, which gives us an F-statistic of 26.15. The corresponding calculation for the therapy variable would be to divide 0.47 by 0.07, which gives 7.08 as the F-statistic. Not surprisingly, of course, these are the same values that R has reported in the ANOVA table above.
The last part of the ANOVA table is the calculation of the p values. Once again, there is nothing new here: for each of our two factors, what we’re trying to do is test the null hypothesis that there is no relationship between the factor and the outcome variable (I’ll be a bit more precise about this later on). To that end, we’ve (apparently) followed a similar strategy to the one we used in the one way ANOVA, and have calculated an F-statistic for each of these hypotheses. To convert these to p values, all we need to do is note that the sampling distribution for the F statistic under the null hypothesis (that the factor in question is irrelevant) is an F distribution, and that the two degrees of freedom values are those corresponding to the factor, and those corresponding to the residuals. For the drug factor we’re talking about an F distribution with 2 and 14 degrees of freedom (I’ll discuss degrees of freedom in more detail later). In contrast, for the therapy factor the sampling distribution is F with 1 and 14 degrees of freedom. If we really wanted to, we could calculate the p value ourselves using the pf() function (see Section 9.6). Just to prove that there’s nothing funny going on, here’s what that would look like for the drug variable:
pf( q=26.15, df1=2, df2=14, lower.tail=FALSE )
## [1] 1.871981e-05
And as you can see, this is indeed the p value reported in the ANOVA table above. At this point, I hope you can see that the ANOVA table for this more complicated analysis corresponding to model.2 should be read in much the same way as the ANOVA table for the simpler analysis for model.1. In short, it’s telling us that the factorial ANOVA for our 3×2 design found a significant effect of drug (F2,14=26.15, p<.001) as well as a significant effect of therapy (F1,14=7.08, p=.02).
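And just to complete the picture (my own addition), the same check works for the therapy factor:
pf( q=7.08, df1=1, df2=14, lower.tail=FALSE )    # roughly 0.019, i.e. the p=.02 reported above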
Or, to use the more technically correct terminology, we would say that there are two main effects of drug and therapy. At the moment, it probably seems a bit redundant to refer to these as “main” effects: but it actually does make sense. Later on, we’re going to want to talk about the possibility of “interactions” between the two factors, and so we generally make a distinction between main effects and interaction effects.
How are the sums of squares calculated?
In the previous section I had two goals: firstly, to show you that the R commands needed to do factorial ANOVA are pretty much the same ones that we used for a one way ANOVA. The only difference is the formula argument to the aov() function. Secondly, I wanted to show you what the ANOVA table looks like in this case, so that you can see from the outset that the basic logic and structure behind factorial ANOVA is the same as that which underpins one way ANOVA. Try to hold onto that feeling. It’s genuinely true, insofar as factorial ANOVA is built in more or less the same way as the simpler one-way ANOVA model. It’s just that this feeling of familiarity starts to evaporate once you start digging into the details. Traditionally, this comforting sensation is replaced by an urge to murder the authors of statistics textbooks.
Okay, let’s start looking at some of those details. The explanation that I gave in the last section illustrates the fact that the hypothesis tests for the main effects (of drug and therapy in this case) are F-tests, but what it doesn’t do is show you how the sum of squares (SS) values are calculated. Nor does it tell you explicitly how to calculate degrees of freedom (df values), though that’s a simple thing by comparison. Let’s assume for now that we have only two predictor variables, Factor A and Factor B. If we use Y to refer to the outcome variable, then we would use Yrci to refer to the outcome associated with the i-th member of group rc (i.e., level/row r for Factor A and level/column c for Factor B). Thus, if we use $\ \bar{Y}$ to refer to a sample mean, we can use the same notation as before to refer to group means, marginal means and grand means: that is, $\ \bar{Y_{rc}}$ is the sample mean associated with the rth level of Factor A and the cth level of Factor B, $\ \bar{Y_{r.}}$ would be the marginal mean for the rth level of Factor A, $\ \bar{Y_{.c}}$ would be the marginal mean for the cth level of Factor B, and $\ \bar{Y_{..}}$ is the grand mean. In other words, our sample means can be organised into the same table as the population means. For our clinical trial data, that table looks like this:
              no therapy          CBT                 total
  placebo     $\ \bar{Y_{11}}$    $\ \bar{Y_{12}}$    $\ \bar{Y_{1.}}$
  anxifree    $\ \bar{Y_{21}}$    $\ \bar{Y_{22}}$    $\ \bar{Y_{2.}}$
  joyzepam    $\ \bar{Y_{31}}$    $\ \bar{Y_{32}}$    $\ \bar{Y_{3.}}$
  total       $\ \bar{Y_{.1}}$    $\ \bar{Y_{.2}}$    $\ \bar{Y_{..}}$
And if we look at the sample means that I showed earlier, we have $\ \bar{Y_{11}}$ = 0.30, $\ \bar{Y_{12}}$ = 0.60 etc. In our clinical trial example, the drugs factor has 3 levels and the therapy factor has 2 levels, and so what we’re trying to run is a 3×2 factorial ANOVA.
However, we’ll be a little more general and say that Factor A (the row factor) has R levels and Factor B (the column factor) has C levels, and so what we’re running here is an R×C factorial ANOVA.
Now that we’ve got our notation straight, we can compute the sum of squares values for each of the two factors in a relatively familiar way. For Factor A, our between group sum of squares is calculated by assessing the extent to which the (row) marginal means $\ \bar{Y_{1.}}$, $\ \bar{Y_{2.}}$ etc, are different from the grand mean $\ \bar{Y_{..}}$. We do this in the same way that we did for one-way ANOVA: calculate the sum of squared difference between the $\ \bar{Y_{r.}}$ values and the $\ \bar{Y_{..}}$ values. Specifically, if there are N people in each group, then we calculate this:
$\ SS_A=(N \times C) \sum_{r=1}^R(\bar{Y_{r.}} - \bar{Y_{..}})^2$
As with one-way ANOVA, the most interesting233 part of this formula is the $\ (\bar{Y_{r.}} - \bar{Y_{..}})^2$ bit, which corresponds to the squared deviation associated with level r. All that this formula does is calculate this squared deviation for all R levels of the factor, add them up, and then multiply the result by N×C. The reason for this last part is that there are multiple cells in our design that have level r on Factor A: in fact, there are C of them, one corresponding to each possible level of Factor B! For instance, in our toy example, there are two different cells in the design corresponding to the anxifree drug: one for people with no.therapy, and one for the CBT group. Not only that, within each of these cells there are N observations. So, if we want to convert our SS value into a quantity that calculates the between-groups sum of squares on a “per observation” basis, we have to multiply by N×C. The formula for factor B is of course the same thing, just with some subscripts shuffled around:
$\ SS_B = (N \times R)\sum_{c=1}^C(\bar{Y_{.c}} - \bar{Y_{..}})^2$
Now that we have these formulas, we can check them against the R output from the earlier section. First, notice that we calculated all the marginal means (i.e., row marginal means $\ \bar{Y_{r.}}$ and column marginal means $\ \bar{Y_{.c}}$) earlier using aggregate(), and we also calculated the grand mean. Let’s repeat those calculations, but this time we’ll save the results to variables so that we can use them in subsequent calculations:
drug.means <- aggregate( mood.gain ~ drug, clin.trial, mean )[,2]
therapy.means <- aggregate( mood.gain ~ therapy, clin.trial, mean )[,2]
grand.mean <- mean( clin.trial$mood.gain )
Okay, now let’s calculate the sum of squares associated with the main effect of drug. There are a total of N=3 people in each group, and C=2 different types of therapy. Or, to put it another way, there are 3×2=6 people who received any particular drug. So our calculations are:
SS.drug <- (3*2) * sum( (drug.means - grand.mean)^2 )
SS.drug
## [1] 3.453333
Not surprisingly, this is the same number that you get when you look up the SS value for the drugs factor in the ANOVA table that I presented earlier. We can repeat the same kind of calculation for the effect of therapy. Again there are N=3 people in each group, but since there are R=3 different drugs, this time around we note that there are 3×3=9 people who received CBT, and an additional 9 people who received no therapy. So our calculation is now:
SS.therapy <- (3*3) * sum( (therapy.means - grand.mean)^2 )
SS.therapy
## [1] 0.4672222
and we are, once again, unsurprised to see that our calculations are identical to the ANOVA output.
So that’s how you calculate the SS values for the two main effects. These SS values are analogous to the between-group sum of squares values that we calculated when doing one-way ANOVA in Chapter 14. However, it’s not a good idea to think of them as between-groups SS values anymore, just because we have two different grouping variables and it’s easy to get confused. In order to construct an F test, however, we also need to calculate the within-groups sum of squares. In keeping with the terminology that we used in the regression chapter (Chapter 15) and the terminology that R uses when printing out the ANOVA table, I’ll start referring to the within-groups SS value as the residual sum of squares SSR.
The easiest way to think about the residual SS values in this context, I think, is to think of it as the leftover variation in the outcome variable after you take into account the differences in the marginal means (i.e., after you remove SSA and SSB). What I mean by that is we can start by calculating the total sum of squares, which I’ll label SST. The formula for this is pretty much the same as it was for one-way ANOVA: we take the difference between each observation Yrci and the grand mean $\ \bar{Y_{..}}$, square the differences, and add them all up
$\ SS_T=\sum_{r=1}^R\sum_{c=1}^C\sum_{i=1}^N(Y_{rci} - \bar{Y_{..}})^2$
The “triple summation” here looks more complicated than it is. In the first two summations, we’re summing across all levels of Factor A (i.e., over all possible rows r in our table), across all levels of Factor B (i.e., all possible columns c). Each rc combination corresponds to a single group, and each group contains N people: so we have to sum across all those people (i.e., all i values) too. In other words, all we’re doing here is summing across all observations in the data set (i.e., all possible rci combinations).
At this point, we know the total variability of the outcome variable SST, and we know how much of that variability can be attributed to Factor A (SSA) and how much of it can be attributed to Factor B (SSB). The residual sum of squares is thus defined to be the variability in Y that can’t be attributed to either of our two factors. In other words:
$\ SS_R = SS_T - (SS_A + SS_B)$
Of course, there is a formula that you can use to calculate the residual SS directly, but I think that it makes more conceptual sense to think of it like this. The whole point of calling it a residual is that it’s the leftover variation, and the formula above makes that clear. I should also note that, in keeping with the terminology used in the regression chapter, it is commonplace to refer to SSA+SSB as the variance attributable to the “ANOVA model”, denoted SSM, and so we often say that the total sum of squares is equal to the model sum of squares plus the residual sum of squares. Later on in this chapter we’ll see that this isn’t just a surface similarity: ANOVA and regression are actually the same thing under the hood.
In any case, it’s probably worth taking a moment to check that we can calculate SSR using this formula, and verify that we do obtain the same answer that R produces in its ANOVA table. The calculations are pretty straightforward. First we calculate the total sum of squares:
SS.tot <- sum( (clin.trial\$mood.gain - grand.mean)^2 )
SS.tot
## [1] 4.845
and then we use it to calculate the residual sum of squares:
SS.res <- SS.tot - (SS.drug + SS.therapy)
SS.res
## [1] 0.9244444
Yet again, we get the same answer.
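Having recovered all three SS values, we can also rebuild the rest of the ANOVA table ourselves. A small sketch of my own, with approximate values in the comments:
MS.drug <- SS.drug / 2          # mean square for drug, about 1.73
MS.therapy <- SS.therapy / 1    # mean square for therapy, about 0.47
MS.res <- SS.res / 14           # residual mean square, about 0.066
MS.drug / MS.res                # F statistic for drug, about 26.15
MS.therapy / MS.res             # F statistic for therapy, about 7.08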
What are our degrees of freedom?
The degrees of freedom are calculated in much the same way as for one-way ANOVA. For any given factor, the degrees of freedom is equal to the number of levels minus 1 (i.e., R−1 for the row variable, Factor A, and C−1 for the column variable, Factor B). So, for the drugs factor we obtain df=2, and for the therapy factor we obtain df=1. Later on, when we discuss the interpretation of ANOVA as a regression model (see Section 16.6) I’ll give a clearer statement of how we arrive at this number, but for the moment we can use the simple definition of degrees of freedom, namely that the degrees of freedom equals the number of quantities that are observed, minus the number of constraints. So, for the drugs factor, we observe 3 separate group means, but these are constrained by 1 grand mean; and therefore the degrees of freedom is 2. For the residuals, the logic is similar, but not quite the same. The total number of observations in our experiment is 18. The constraints correspond to the 1 grand mean, the 2 additional group means that the drug factor introduces, and the 1 additional group mean that the therapy factor introduces, and so our degrees of freedom is 14. As a formula, this is N−1−(R−1)−(C−1), which simplifies to N−R−C+1.
Factorial ANOVA versus one-way ANOVAs
Now that we’ve seen how a factorial ANOVA works, it’s worth taking a moment to compare it to the results of the one way analyses, because this will give us a really good sense of why it’s a good idea to run the factorial ANOVA. In Chapter 14 I ran a one-way ANOVA that looked to see if there are any differences between drugs, and a second one-way ANOVA to see if there were any differences between therapies. As we saw in Section 16.1.1, the null and alternative hypotheses tested by the one-way ANOVAs are in fact identical to the hypotheses tested by the factorial ANOVA. Looking even more carefully at the ANOVA tables, we can see that the sums of squares associated with the factors are identical in the two different analyses (3.45 for drug and 0.47 for therapy), as are the degrees of freedom (2 for drug, 1 for therapy). But they don’t give the same answers! Most notably, when we ran the one-way ANOVA for therapy in Section 14.11 we didn’t find a significant effect (the p-value was 0.21). However, when we look at the main effect of therapy within the context of the two-way ANOVA, we do get a significant effect (p=.019). The two analyses are clearly not the same.
Why does that happen? The answer lies in understanding how the residuals are calculated. Recall that the whole idea behind an F-test is to compare the variability that can be attributed to a particular factor with the variability that cannot be accounted for (the residuals). If you run a one-way ANOVA for therapy, and therefore ignore the effect of drug, the ANOVA will end up dumping all of the drug-induced variability into the residuals! This has the effect of making the data look more noisy than they really are, and the effect of therapy which is correctly found to be significant in the two-way ANOVA now becomes non-significant. If we ignore something that actually matters (e.g., drug) when trying to assess the contribution of something else (e.g., therapy) then our analysis will be distorted. Of course, it’s perfectly okay to ignore variables that are genuinely irrelevant to the phenomenon of interest: if we had recorded the colour of the walls, and that turned out to be non-significant in a three-way ANOVA (i.e. mood.gain ~ drug + therapy + wall.colour), it would be perfectly okay to disregard it and just report the simpler two-way ANOVA that doesn’t include this irrelevant factor. What you shouldn’t do is drop variables that actually make a difference!
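If you want to see this happen with your own eyes, you can rerun the one-way therapy analysis yourself; here's a sketch of my own of what I'd expect:
model.therapy.only <- aov( mood.gain ~ therapy, clin.trial )
summary( model.therapy.only )
The therapy sum of squares is still about 0.47, but the residual sum of squares balloons to about 4.38 because the 3.45 attributable to drug is now sitting in the residuals; the F statistic drops to about 1.7 and the p-value comes out at about 0.21, just as in Chapter 14.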
What kinds of outcomes does this analysis capture?
The ANOVA model that we’ve been talking about so far covers a range of different patterns that we might observe in our data. For instance, in a two-way ANOVA design, there are four possibilities: (a) only Factor A matters, (b) only Factor B matters, (c) both A and B matter, and (d) neither A nor B matters. An example of each of these four possibilities is plotted in Figure ??. | textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/16%3A_Factorial_ANOVA/16.01%3A__Factorial_ANOVA_1-_Balanced_Designs_No_Interactions.txt |
The four patterns of data shown in Figure ?? are all quite realistic: there are a great many data sets that produce exactly those patterns. However, they are not the whole story, and the ANOVA model that we have been talking about up to this point is not sufficient to fully account for a table of group means. Why not? Well, so far we have the ability to talk about the idea that drugs can influence mood, and therapy can influence mood, but no way of talking about the possibility of an interaction between the two. An interaction between A and B is said to occur whenever the effect of Factor A is different, depending on which level of Factor B we’re talking about. Several examples of an interaction effect within the context of a 2 x 2 ANOVA are shown in Figure ??. To give a more concrete example, suppose that the operation of Anxifree and Joyzepam is governed by quite different physiological mechanisms, and one consequence of this is that while Joyzepam has more or less the same effect on mood regardless of whether one is in therapy, Anxifree is actually much more effective when administered in conjunction with CBT. The ANOVA that we developed in the previous section does not capture this idea. To get some idea of whether an interaction is actually happening here, it helps to plot the various group means. There are quite a few different ways to draw these plots in R. One easy way is to use the interaction.plot() function, but this function won’t draw error bars for you. A fairly simple function that will include error bars for you is the lineplot.CI() function in the sciplot package (see Section 10.5.4). The command
library(sciplot)
library(lsr)
lineplot.CI( x.factor = clin.trial$drug, response = clin.trial$mood.gain,
group = clin.trial$therapy,
ci.fun = ciMean,
xlab = "drug",
ylab = "mood gain" )
produces the output shown in Figure 16.9 (don’t forget that the ciMean function is in the lsr package, so you need to have lsr loaded!). Our main concern relates to the fact that the two lines aren’t parallel. The effect of CBT (difference between solid line and dotted line) when the drug is Joyzepam (right side) appears to be near zero, even smaller than the effect of CBT when a placebo is used (left side). However, when Anxifree is administered, the effect of CBT is larger than it is for the placebo (middle). Is this effect real, or is this just random variation due to chance? Our original ANOVA cannot answer this question, because it makes no allowance for the idea that interactions even exist! In this section, we’ll fix this problem.
What exactly is an interaction effect?
The key idea that we’re going to introduce in this section is that of an interaction effect. What that means for our R formulas is that we’ll write down models like mood.gain ~ drug + therapy + drug:therapy. So although there are only two factors involved in our model (i.e., drug and therapy), there are three distinct terms (i.e., drug, therapy and drug:therapy). That is, in addition to the main effects of drug and therapy, we have a new component to the model, which is our interaction term drug:therapy. Intuitively, the idea behind an interaction effect is fairly simple: it just means that the effect of Factor A is different, depending on which level of Factor B we’re talking about. But what does that actually mean in terms of our data? Figure ?? depicts several different patterns that, although quite different to each other, would all count as an interaction effect. So it’s not entirely straightforward to translate this qualitative idea into something mathematical that a statistician can work with. As a consequence, the way that the idea of an interaction effect is formalised in terms of null and alternative hypotheses is slightly difficult, and I’m guessing that a lot of readers of this book probably won’t be all that interested. Even so, I’ll try to give the basic idea here.
To start with, we need to be a little more explicit about our main effects. Consider the main effect of Factor A (drug in our running example). We originally formulated this in terms of the null hypothesis that the row marginal means μr. are all equal to each other. Obviously, if all of these are equal to each other, then they must also be equal to the grand mean μ.. as well, right? So what we can do is define the effect of Factor A at level r to be equal to the difference between the marginal mean μr. and the grand mean μ...
Let’s denote this effect by αr, and note that
$\alpha_r = \mu_{r.} - \mu_{..}$
Now, by definition all of the αr values must sum to zero, for the same reason that the average of the marginal means μr. must be the grand mean μ... We can similarly define the effect of Factor B at level c to be the difference between the column marginal mean μ.c and the grand mean μ..
$\beta_c = \mu_{.c} - \mu_{..}$
and once again, these βc values must sum to zero. The reason that statisticians sometimes like to talk about the main effects in terms of these αr and βc values is that it allows them to be precise about what it means to say that there is no interaction effect. If there is no interaction at all, then these αr and βc values will perfectly describe the group means μrc. Specifically, it means that
$\mu_{rc} = \mu_{..} + \alpha_r + \beta_c$
That is, there’s nothing special about the group means that you couldn’t predict perfectly by knowing all the marginal means. And that’s our null hypothesis, right there. The alternative hypothesis is that
$\mu_{rc} \neq \mu_{..} + \alpha_r + \beta_c$
for at least one group rc in our table. However, statisticians often like to write this slightly differently. They’ll usually define the specific interaction associated with group rc to be some number, awkwardly referred to as (αβ)rc, and then they will say that the alternative hypothesis is that
$\mu_{rc} = \mu_{..} + \alpha_r + \beta_c + (\alpha\beta)_{rc}$
where (αβ)rc is non-zero for at least one group. This notation is kind of ugly to look at, but it is handy as we’ll see in the next section when discussing how to calculate the sum of squares.
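To make the “no interaction” idea tangible, here’s a quick sketch that estimates the αr and βc terms from the clin.trial data and builds the purely additive predictions μ.. + αr + βc for every cell (in a balanced design like this one, averaging over all the observations at each level gives the marginal means directly):

grand.mean <- mean( clin.trial$mood.gain )
alpha <- tapply( clin.trial$mood.gain, clin.trial$drug, mean ) - grand.mean     # drug effects
beta  <- tapply( clin.trial$mood.gain, clin.trial$therapy, mean ) - grand.mean  # therapy effects
grand.mean + outer( alpha, beta, "+" )   # additive predictions for all six cells

Comparing these additive predictions with the actual cell means gives you a feel for how big (or small) the interaction is.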
Calculating sums of squares for the interaction
How should we calculate the sum of squares for the interaction terms, SSA:B? Well, first off, it helps to notice how the previous section defined the interaction effect in terms of the extent to which the actual group means differ from what you’d expect by just looking at the marginal means. Of course, all of those formulas refer to population parameters rather than sample statistics, so we don’t actually know what they are. However, we can estimate them by using sample means in place of population means. So for Factor A, a good way to estimate the main effect at level r is as the difference between the sample marginal mean $\ \bar{Y_{r.}}$ and the sample grand mean $\ \bar{Y_{..}}$. That is, we would use this as our estimate of the effect:
$\ \hat{\alpha_r} = \bar{Y_{r.}} - \bar{Y _{..}}$
Similarly, our estimate of the main effect of Factor B at level c can be defined as follows:
$\ \hat{\beta_c} = \bar{Y_{.c}} - \bar{Y_{..}}$
Now, if you go back to the formulas that I used to describe the SS values for the two main effects, you’ll notice that these effect terms are exactly the quantities that we were squaring and summing! So what’s the analog of this for interaction terms? The answer to this can be found by first rearranging the formula for the group means μrc under the alternative hypothesis, so that we get this:
\begin{aligned}(\alpha \beta)_{r c} &=\mu_{r c}-\mu_{..}-\alpha_{r}-\beta_{c} \\ &=\mu_{r c}-\mu_{..}-\left(\mu_{r .}-\mu_{..}\right)-\left(\mu_{. c}-\mu_{..}\right) \\ &=\mu_{r c}-\mu_{r .}-\mu_{. c}+\mu_{..} \end{aligned}
So, once again, if we substitute our sample statistics in place of the population means, we get the following as our estimate of the interaction effect for group rc, which is
$\ \hat{(\alpha\beta)_{rc}} = \bar{Y_{rc}} - \bar{Y_{r.}} - \bar{Y_{.c}} + \bar{Y_{..}}$
Now all we have to do is square each of these estimates and sum them across all R levels of Factor A and all C levels of Factor B, and we obtain the following formula for the sum of squares associated with the interaction as a whole:
$\mathrm{SS}_{A: B}=N \sum_{r=1}^{R} \sum_{c=1}^{C}\left(\bar{Y}_{r c}-\bar{Y}_{r .}-\bar{Y}_{. c}+\bar{Y}_{. .}\right)^{2}$
where we multiply by N because there are N observations in each of the groups, and we want our SS values to reflect the variation among observations accounted for by the interaction, not the variation among groups.
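As a sanity check, here’s a sketch that carries out this calculation by hand for the clinical trial data (each cell contains 3 observations, so the multiplier is 3); the result should match the drug:therapy sum of squares of about 0.27 in the ANOVA table we’ll see shortly:

group.means <- tapply( clin.trial$mood.gain, list( clin.trial$drug, clin.trial$therapy ), mean )
grand.mean <- mean( group.means )   # fine here because the design is balanced
ab <- sweep( sweep( group.means, 1, rowMeans(group.means) ), 2, colMeans(group.means) ) + grand.mean
3 * sum( ab^2 )                     # sum of squares for the interaction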
Now that we have a formula for calculating SSA:B, it’s important to recognise that the interaction term is part of the model (of course), so the total sum of squares associated with the model, SSM is now equal to the sum of the three relevant SS values, SSA+SSB+SSA:B. The residual sum of squares SSR is still defined as the leftover variation, namely SST−SSM, but now that we have the interaction term this becomes
SSR=SST−(SSA+SSB+SSA:B)
As a consequence, the residual sum of squares SSR will be smaller than in our original ANOVA that didn’t include interactions.
Degrees of freedom for the interaction
Calculating the degrees of freedom for the interaction is, once again, slightly trickier than the corresponding calculation for the main effects. To start with, let’s think about the ANOVA model as a whole. Once we include interaction effects in the model, we’re allowing every single group to have a unique mean, μrc. For an R×C factorial ANOVA, this means that there are R×C quantities of interest in the model, and only the one constraint: all of the group means need to average out to the grand mean. So the model as a whole needs to have (R×C)−1 degrees of freedom. But the main effect of Factor A has R−1 degrees of freedom, and the main effect of Factor B has C−1 degrees of freedom. Which means that the degrees of freedom associated with the interaction is
\begin{aligned} \mathrm{df}_{A:B} &=(R \times C-1)-(R-1)-(C-1) \\ &=RC-R-C+1 \\ &=(R-1)(C-1) \end{aligned}
which is just the product of the degrees of freedom associated with the row factor and the column factor.
What about the residual degrees of freedom? Because we’ve added interaction terms, which absorb some degrees of freedom, there are fewer residual degrees of freedom left over. Specifically, note that the model with the interaction has a total of (R×C)−1 degrees of freedom, and there are N observations in your data set that are constrained to satisfy 1 grand mean; so your residual degrees of freedom now become N−1−((R×C)−1), or just N−(R×C).
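To make this concrete for the clinical trial (3 drugs, 2 therapies, 18 people), the arithmetic works out like this; these values match the Df column of the ANOVA table in the next section:

$\mathrm{df}_{A:B} = (3-1)(2-1) = 2, \qquad \mathrm{df}_{R} = 18 - (3 \times 2) = 12$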
Running the ANOVA in R
Adding interaction terms to the ANOVA model in R is straightforward. Returning to our running example of the clinical trial, in addition to the main effect terms of drug and therapy, we include the interaction term drug:therapy. So the R command to create the ANOVA model now looks like this:
model.3 <- aov( mood.gain ~ drug + therapy + drug:therapy, clin.trial )
However, R allows a convenient shorthand. Instead of typing out all three terms, you can shorten the right hand side of the formula to drug*therapy. The * operator inside the formula is taken to indicate that you want both main effects and the interaction. So we can also run our ANOVA like this, and get the same answer:
model.3 <- aov( mood.gain ~ drug * therapy, clin.trial )
summary( model.3 )
## Df Sum Sq Mean Sq F value Pr(>F)
## drug 2 3.453 1.7267 31.714 1.62e-05 ***
## therapy 1 0.467 0.4672 8.582 0.0126 *
## drug:therapy 2 0.271 0.1356 2.490 0.1246
## Residuals 12 0.653 0.0544
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
As it turns out, while we do have a significant main effect of drug (F2,12=31.7,p<.001) and therapy type (F1,12=8.6,p=.013), there is no significant interaction between the two (F2,12=2.5,p=0.125).
Interpreting the results
There’s a couple of very important things to consider when interpreting the results of factorial ANOVA. Firstly, there’s the same issue that we had with one-way ANOVA, which is that if you obtain a significant main effect of (say) drug, it doesn’t tell you anything about which drugs are different to one another. To find that out, you need to run additional analyses. We’ll talk about some analyses that you can run in Sections 16.7 and ??. The same is true for interaction effects: knowing that there’s a significant interaction doesn’t tell you anything about what kind of interaction exists. Again, you’ll need to run additional analyses.
Secondly, there’s a very peculiar interpretation issue that arises when you obtain a significant interaction effect but no corresponding main effect. This happens sometimes. For instance, in the crossover interaction shown in Figure ??, this is exactly what you’d find: in this case, neither of the main effects would be significant, but the interaction effect would be. This is a difficult situation to interpret, and people often get a bit confused about it. The general advice that statisticians like to give in this situation is that you shouldn’t pay much attention to the main effects when an interaction is present. The reason they say this is that, although the tests of the main effects are perfectly valid from a mathematical point of view, when there is a significant interaction effect the main effects rarely test interesting hypotheses. Recall from Section 16.1.1 that the null hypothesis for a main effect is that the marginal means are equal to each other, and that a marginal mean is formed by averaging across several different groups. But if you have a significant interaction effect, then you know that the groups that comprise the marginal mean aren’t homogeneous, so it’s not really obvious why you would even care about those marginal means.
Here’s what I mean. Again, let’s stick with a clinical example. Suppose that we had a 2×2 design comparing two different treatments for phobias (e.g., systematic desensitisation vs flooding), and two different anxiety reducing drugs (e.g., Anxifree vs Joyzepam). Now suppose what we found was that Anxifree had no effect when desensitisation was the treatment, and Joyzepam had no effect when flooding was the treatment. But both were pretty effective for the other treatment. This is a classic crossover interaction, and what we’d find when running the ANOVA is that there is no main effect of drug, but a significant interaction. Now, what does it actually mean to say that there’s no main effect? Well, it means that, if we average over the two different psychological treatments, then the average effect of Anxifree and Joyzepam is the same. But why would anyone care about that? When treating someone for phobias, it is never the case that a person can be treated using an “average” of flooding and desensitisation: that doesn’t make a lot of sense. You either get one or the other. For one treatment, one drug is effective; and for the other treatment, the other drug is effective. The interaction is the important thing; the main effect is kind of irrelevant.
This sort of thing happens a lot: the main effects are tests of marginal means, and when an interaction is present we often find ourselves not being terribly interested in marginal means, because they imply averaging over things that the interaction tells us shouldn’t be averaged! Of course, it’s not always the case that a main effect is meaningless when an interaction is present. Often you can get a big main effect and a very small interaction, in which case you can still say things like “drug A is generally more effective than drug B” (because there was a big effect of drug), but you’d need to modify it a bit by adding that “the difference in effectiveness was different for different psychological treatments”. In any case, the main point here is that whenever you get a significant interaction you should stop and think about what the main effect actually means in this context. Don’t automatically assume that the main effect is interesting.
In this section I’ll discuss a few additional quantities that you might find yourself wanting to calculate for a factorial ANOVA. The main thing you will probably want to calculate is the effect size for each term in your model, but you may also want R to give you some estimates for the group means and associated confidence intervals.
Effect sizes
The effect size calculations for a factorial ANOVA are pretty similar to those used in one way ANOVA (see Section 14.4). Specifically, we can use η2 (eta-squared) as a simple way to measure how big the overall effect is for any particular term. As before, η2 is defined by dividing the sum of squares associated with that term by the total sum of squares. For instance, to determine the size of the main effect of Factor A, we would use the following formula
$\eta_{A}^{2}=\dfrac{\mathrm{SS}_{A}}{\mathrm{SS}_{T}}$
As before, this can be interpreted in much the same way as R2 in regression.234 It tells you the proportion of variance in the outcome variable that can be accounted for by the main effect of Factor A. It is therefore a number that ranges from 0 (no effect at all) to 1 (accounts for all of the variability in the outcome). Moreover, the sum of all the η2 values, taken across all the terms in the model, will sum to the total R2 for the ANOVA model. If, for instance, the ANOVA model fits perfectly (i.e., there is no within-groups variability at all!), the η2 values will sum to 1. Of course, that rarely if ever happens in real life.
However, when doing a factorial ANOVA, there is a second measure of effect size that people like to report, known as partial η2. The idea behind partial η2 (which is sometimes denoted ${}_p\eta^2$ or $\eta_p^2$) is that, when measuring the effect size for a particular term (say, the main effect of Factor A), you want to deliberately ignore the other effects in the model (e.g., the main effect of Factor B). That is, you would pretend that the effect of all these other terms is zero, and then calculate what the η2 value would have been. This is actually pretty easy to calculate. All you have to do is remove the sum of squares associated with the other terms from the denominator. In other words, if you want the partial η2 for the main effect of Factor A, the denominator is just the sum of the SS values for Factor A and the residuals:
partial $\eta_{A}^{2}=\dfrac{\mathrm{SS}_{A}}{\mathrm{SS}_{A}+\mathrm{SS}_{R}}$
This will always give you a larger number than η2, which the cynic in me suspects accounts for the popularity of partial η2. And once again you get a number between 0 and 1, where 0 represents no effect. However, it’s slightly trickier to interpret what a large partial η2 value means. In particular, you can’t actually compare the partial η2 values across terms! Suppose, for instance, there is no within-groups variability at all: if so, SSR=0. What that means is that every term has a partial η2 value of 1. But that doesn’t mean that all terms in your model are equally important, or indeed that they are equally large. All it means is that all terms in your model have effect sizes that are large relative to the residual variation. It is not comparable across terms.
To see what I mean by this, it’s useful to see a concrete example. Once again, we’ll use the etaSquared() function from the lsr package. As before, we input the aov object for which we want the η2 calculations performed, and R outputs a matrix showing the effect sizes for each term in the model. First, let’s have a look at the effect sizes for the original ANOVA without the interaction term:
etaSquared( model.2 )
## eta.sq eta.sq.part
## drug 0.7127623 0.7888325
## therapy 0.0964339 0.3357285
Looking at the η2 values first, we see that drug accounts for 71.3% of the variance (i.e. η2=0.713) in mood.gain, whereas therapy only accounts for 9.6%. This leaves a total of 19.1% of the variation unaccounted for (i.e., the residuals constitute 19.1% of the variation in the outcome). Overall, this implies that we have a very large effect235 of drug and a modest effect of therapy.
Now let’s look at the partial η2 values. Because the effect of therapy isn’t all that large, controlling for it doesn’t make much of a difference, so the partial η2 for drug doesn’t increase very much, and we obtain a value of $\eta_p^2$ = 0.789. In contrast, because the effect of drug was very large, controlling for it makes a big difference, and so when we calculate the partial η2 for therapy you can see that it rises to $\eta_p^2$ = 0.336. The question that we have to ask ourselves is, what do these partial η2 values actually mean? The way I generally interpret the partial η2 for the main effect of Factor A is to interpret it as a statement about a hypothetical experiment in which only Factor A was being varied. So, even though in this experiment we varied both A and B, we can easily imagine an experiment in which only Factor A was varied: the partial η2 statistic tells you how much of the variance in the outcome variable you would expect to see accounted for in that experiment. However, it should be noted that this interpretation – like many things associated with main effects – doesn’t make a lot of sense when there is a large and significant interaction effect.
Speaking of interaction effects, here’s what we get when we calculate the effect sizes for the model that includes the interaction term. As you can see, the η2 values for the main effects don’t change, but the partial η2 values do:
etaSquared( model.3 )
## eta.sq eta.sq.part
## drug 0.71276230 0.8409091
## therapy 0.09643390 0.4169559
## drug:therapy 0.05595689 0.2932692
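If you want to see where these numbers come from, you can reproduce them (up to rounding) by hand from the Sum Sq column of the model.3 ANOVA table shown earlier:

ss.drug <- 3.453; ss.therapy <- 0.467; ss.int <- 0.271; ss.res <- 0.653
ss.tot <- ss.drug + ss.therapy + ss.int + ss.res
ss.drug / ss.tot               # eta-squared for drug, roughly .713
ss.drug / (ss.drug + ss.res)   # partial eta-squared for drug, roughly .841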
Estimated group means
In many situations you will find yourself wanting to report estimates of all the group means based on the results of your ANOVA, as well as confidence intervals associated with them. You can use the effect() function in the effects package to do this (don’t forget to install the package if you don’t have it already!). If the ANOVA that you have run is a saturated model (i.e., contains all possible main effects and all possible interaction effects) then the estimates of the group means are actually identical to the sample means, though the confidence intervals will use a pooled estimate of the standard errors, rather than use a separate one for each group. To illustrate this, let’s apply the effect() function to our saturated model (i.e., model.3) for the clinical trial data. The effect() function contains two arguments we care about: the term argument specifies what terms in the model we want the means to be calculated for, and the mod argument specifies the model:
library(effects)
eff <- effect( term = "drug*therapy", mod = model.3 )
eff
##
## drug*therapy effect
## therapy
## drug no.therapy CBT
## placebo 0.300000 0.600000
## anxifree 0.400000 1.033333
## joyzepam 1.466667 1.500000
Notice that these are actually the same numbers we got when computing the sample means earlier (i.e., the group.means variable that we computed using aggregate()). One useful thing that we can do using the effect variable eff, however, is extract the confidence intervals using the summary() function:
summary(eff)
##
## drug*therapy effect
## therapy
## drug no.therapy CBT
## placebo 0.300000 0.600000
## anxifree 0.400000 1.033333
## joyzepam 1.466667 1.500000
##
## Lower 95 Percent Confidence Limits
## therapy
## drug no.therapy CBT
## placebo 0.006481093 0.3064811
## anxifree 0.106481093 0.7398144
## joyzepam 1.173147759 1.2064811
##
## Upper 95 Percent Confidence Limits
## therapy
## drug no.therapy CBT
## placebo 0.5935189 0.8935189
## anxifree 0.6935189 1.3268522
## joyzepam 1.7601856 1.7935189
In this output, we see that the estimated mean mood gain for the placebo group with no therapy was 0.300, with a 95% confidence interval from 0.006 to 0.594. Note that these are not the same confidence intervals that you would get if you calculated them separately for each group, because of the fact that the ANOVA model assumes homogeneity of variance and therefore uses a pooled estimate of the standard deviation.
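If you’re curious what the non-pooled version looks like, you could compute a confidence interval for any single cell directly with the ciMean() function from the lsr package; because it only uses the three observations in that cell, the interval it reports will generally differ from the pooled one above:

library(lsr)
with( clin.trial, ciMean( mood.gain[ drug == "placebo" & therapy == "no.therapy" ] ) )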
When the model doesn’t contain the interaction term, then the estimated group means will be different from the sample means. Instead of reporting the sample mean, the effect() function will calculate the value of the group means that would be expected on the basis of the marginal means (i.e., assuming no interaction). Using the notation we developed earlier, the estimate reported for μrc, the mean for level r on the (row) Factor A and level c on the (column) Factor B, would be μ.. + αr + βc. If there are genuinely no interactions between the two factors, this is actually a better estimate of the population mean than the raw sample mean would be. The command to obtain these estimates is actually identical to the last one, except that we use model.2. When you do this, R will give you a warning message:
eff <- effect( "drug*therapy", model.2 )
## NOTE: drug:therapy does not appear in the model
but this isn’t anything to worry about. This is R being polite, and letting you know that the estimates it is constructing are based on the assumption that no interactions exist. It kind of makes sense that it would do this: when we use "drug*therapy" as our input, we’re telling R that we want it to output the estimated group means (rather than marginal means), but the actual input "drug*therapy" might mean that you want interactions included or you might not. There’s no actual ambiguity here, because the model itself either does or doesn’t have interactions, but the authors of the function thought it sensible to include a warning just to make sure that you’ve specified the actual model you care about. But, assuming that we genuinely don’t believe that there are any interactions, model.2 is the right model to use, so we can ignore this warning.236 In any case, when we inspect the output, we get the following table of estimated group means:
eff
##
## drug*therapy effect
## therapy
## drug no.therapy CBT
## placebo 0.2888889 0.6111111
## anxifree 0.5555556 0.8777778
## joyzepam 1.3222222 1.6444444
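Incidentally, the effect() function isn’t the only way to get these additive estimates. Since model.2 is just a linear model under the hood, one quick alternative (a sketch, not the book’s official approach) is to ask predict() for the fitted values at every combination of factor levels:

predict( model.2, newdata = expand.grid( drug = levels(clin.trial$drug),
                                         therapy = levels(clin.trial$therapy) ) )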
As before, we can obtain confidence intervals using the following command:
summary( eff )
##
## drug*therapy effect
## therapy
## drug no.therapy CBT
## placebo 0.2888889 0.6111111
## anxifree 0.5555556 0.8777778
## joyzepam 1.3222222 1.6444444
##
## Lower 95 Percent Confidence Limits
## therapy
## drug no.therapy CBT
## placebo 0.02907986 0.3513021
## anxifree 0.29574653 0.6179687
## joyzepam 1.06241319 1.3846354
##
## Upper 95 Percent Confidence Limits
## therapy
## drug no.therapy CBT
## placebo 0.5486979 0.8709201
## anxifree 0.8153646 1.1375868
## joyzepam 1.5820313 1.9042535
The output has exactly the same structure as before, so I won’t walk through it in detail: the only thing that changes is that the intervals are now centred on these “no interaction” estimates rather than on the raw sample means.
As with one-way ANOVA, the key assumptions of factorial ANOVA are homogeneity of variance (all groups have the same standard deviation), normality of the residuals, and independence of the observations. The first two are things we can test for. The third is something that you need to assess yourself by asking if there are any special relationships between different observations. Additionally, if you aren’t using a saturated model (e.g., if you’ve omitted the interaction terms) then you’re also assuming that the omitted terms aren’t important. Of course, you can check this last one by running an ANOVA with the omitted terms included and see if they’re significant, so that’s pretty easy. What about homogeneity of variance and normality of the residuals? As it turns out, these are pretty easy to check: it’s no different to the checks we did for a one-way ANOVA.
Levene test for homogeneity of variance
To test whether the groups have the same variance, we can use the Levene test. The theory behind the Levene test was discussed in Section 14.7, so I won’t discuss it again. Once again, you can use the leveneTest() function in the car package to do this. This function expects that you have a saturated model (i.e., included all of the relevant terms), because the test is primarily concerned with the within-group variance, and it doesn’t really make a lot of sense to calculate this any way other than with respect to the full model. So if we try either of the following commands, both of which leave out the interaction term:
leveneTest( model.2 )
leveneTest( mood.gain ~ drug + therapy, clin.trial )
R will spit out the following error:
Error in leveneTest.formula(formula(y), data = model.frame(y), ...) :
  Model must be completely crossed formula only.
Instead, if you want to run the Levene test, you need to specify a saturated model. Either of the following two commands would work:237
library(car)
leveneTest( model.3 )
## Levene's Test for Homogeneity of Variance (center = median)
##       Df F value Pr(>F)
## group  5  0.0955 0.9912
##       12
leveneTest( mood.gain ~ drug * therapy, clin.trial )
## Levene's Test for Homogeneity of Variance (center = median)
##       Df F value Pr(>F)
## group  5  0.0955 0.9912
##       12
The fact that the Levene test is non-significant means that we can safely assume that the homogeneity of variance assumption is not violated.
Normality of residuals
As with one-way ANOVA, we can test for the normality of residuals in a straightforward fashion (see Section 14.9). First, we use the residuals() function to extract the residuals from the model itself, and then we can examine those residuals in a few different ways. It’s generally a good idea to examine them graphically, by drawing histograms (i.e., the hist() function) and QQ plots (i.e., the qqnorm() function). If you want a formal test for the normality of the residuals, then we can run the Shapiro-Wilk test (i.e., shapiro.test()). If we wanted to check the residuals with respect to model.2 (i.e., the model with both main effects but no interactions) then we could do the following:
resid <- residuals( model.2 )   # pull the residuals
hist( resid )                   # draw a histogram
qqnorm( resid )                 # draw a normal QQ plot
shapiro.test( resid )           # run the Shapiro-Wilk test
##
## Shapiro-Wilk normality test
##
## data: resid
## W = 0.95635, p-value = 0.5329
I haven’t included the plots (you can draw them yourself if you want to see them), but you can see from the non-significance of the Shapiro-Wilk test that normality isn’t violated here. | textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/16%3A_Factorial_ANOVA/16.04%3A_Assumption_Checking.txt |
At this point, I want to talk in a little more detail about what the F-tests in an ANOVA are actually doing. In the context of ANOVA, I’ve been referring to the F-test as a way of testing whether a particular term in the model (e.g., main effect of Factor A) is significant. This interpretation is perfectly valid, but it’s not necessarily the most useful way to think about the test. In fact, it’s actually a fairly limiting way of thinking about what the F-test does. Consider the clinical trial data we’ve been working with in this chapter. Suppose I want to see if there are any effects of any kind that involve therapy. I’m not fussy: I don’t care if it’s a main effect or an interaction effect.238 One thing I could do is look at the output for model.3 earlier: in this model we did see a main effect of therapy (p=.013) but we did not see an interaction effect (p=.125). That’s kind of telling us what we want to know, but it’s not quite the same thing. What we really want is a single test that jointly checks the main effect of therapy and the interaction effect.
Given the way that I’ve been describing the ANOVA F-test up to this point, you’d be tempted to think that this isn’t possible. On the other hand, if you recall the chapter on regression (in Section 15.10), we were able to use F-tests to make comparisons between a wide variety of regression models. Perhaps something of that sort is possible with ANOVA? And of course, the answer here is yes. The thing that you really need to understand is that the F-test, as it is used in both ANOVA and regression, is really a comparison of two statistical models. One of these models is the full model (alternative hypothesis), and the other model is a simpler model that is missing one or more of the terms that the full model includes (null hypothesis). The null model cannot contain any terms that are not in the full model. In the example I gave above, the full model is model.3, and it contains a main effect for therapy, a main effect for drug, and the drug by therapy interaction term. The null model would be model.1 since it contains only the main effect of drug.
The F test comparing two models
Let’s frame this in a slightly more abstract way. We’ll say that our full model can be written as an R formula that contains several different terms, say Y ~ A + B + C + D. Our null model only contains some subset of these terms, say Y ~ A + B. Some of these terms might be main effect terms, others might be interaction terms. It really doesn’t matter. The only thing that matters here is that we want to treat some of these terms as the “starting point” (i.e. the terms in the null model, A and B), and we want to see if including the other terms (i.e., C and D) leads to a significant improvement in model performance, over and above what could be achieved by a model that includes only A and B. In essence, we have null and alternative hypotheses that look like this:
Hypothesis Correct model? R formula for correct model
Null M0 Y ~ A + B
Alternative M1 Y ~ A + B + C + D
Is there a way of making this comparison directly?
To answer this, let’s go back to fundamentals. As we saw in Chapter 14, the F-test is constructed from two kinds of quantity: sums of squares (SS) and degrees of freedom (df). These two things define a mean square value (MS = SS/df), and we obtain our F statistic by contrasting the MS value associated with “the thing we’re interested in” (the model) with the MS value associated with “everything else” (the residuals). What we want to do is figure out how to talk about the SS value that is associated with the difference between two models. It’s actually not all that hard to do.
Let’s start with the fundamental rule that we used throughout the chapter on regression:
$\mathrm{SS}_{T}=\mathrm{SS}_{M}+\mathrm{SS}_{R}$
That is, the total sums of squares (i.e., the overall variability of the outcome variable) can be decomposed into two parts: the variability associated with the model SSM, and the residual or leftover variability, SSR. However, it’s kind of useful to rearrange this equation slightly, and say that the SS value associated with a model is defined like this…
SSM=SST−SSR
Now, in our scenario, we have two models: the null model (M0) and the full model (M1):
SSM0=SST−SSR0
SSM1=SST−SSR1
Next, let’s think about what it is we actually care about here. What we’re interested in is the difference between the full model and the null model. So, if we want to preserve the idea that what we’re doing is an “analysis of the variance” (ANOVA) in the outcome variable, what we should do is define the SS associated with the difference to be equal to the difference in the SS:
\begin{aligned} \mathrm{SS}_{\Delta} &=\mathrm{SS}_{M 1}-\mathrm{SS}_{M 0} \\ &=\left(\mathrm{SS}_{T}-\mathrm{SS}_{R 1}\right)-\left(\mathrm{SS}_{T}-\mathrm{SS}_{R 0}\right) \\ &=\mathrm{SS}_{R 0}-\mathrm{SS}_{R 1} \end{aligned}
A similar story applies to the degrees of freedom: the degrees of freedom associated with the difference is just the difference between the residual degrees of freedom of the two models, $\mathrm{df}_{\Delta} = \mathrm{df}_{R0} - \mathrm{df}_{R1}$ (equivalently, it’s the number of extra parameters that the full model contains). Now that we have our degrees of freedom, we can calculate mean squares and F values in the usual way. Specifically, we’re interested in the mean square for the difference between models, and the mean square for the residuals associated with the full model (M1), which are given by
\begin{aligned} \mathrm{MS}_{\Delta} &=\dfrac{\mathrm{SS}_{\Delta}}{\mathrm{df}_{\Delta}} \\ \mathrm{MS}_{R 1} &=\dfrac{\mathrm{SS}_{R 1}}{\mathrm{df}_{R 1}} \end{aligned}
Finally, taking the ratio of these two gives us our F statistic:
$\ F=\dfrac{MS_{\Delta}}{MS_{R1}}$
Running the test in R
At this point, it may help to go back to our concrete example. The null model here is model.1, which stipulates that there is a main effect of drug, but no other effects exist. We expressed this via the model formula mood.gain ~ drug. The alternative model here is model.3, which stipulates that there is a main effect of drug, a main effect of therapy, and an interaction. If we express this in the “long” format, this model corresponds to the formula mood.gain ~ drug + therapy + drug:therapy, though we often express this using the * shorthand. The key thing here is that if we compare model.1 to model.3, we’re lumping the main effect of therapy and the interaction term together. Running this test in R is straightforward: we just input both models to the anova() function, and it will run the exact F-test that I outlined above.
anova( model.1, model.3 )
## Analysis of Variance Table
##
## Model 1: mood.gain ~ drug
## Model 2: mood.gain ~ drug * therapy
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 15 1.39167
## 2 12 0.65333 3 0.73833 4.5204 0.02424 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Let’s see if we can reproduce this F-test ourselves. Firstly, if you go back and look at the ANOVA tables that we printed out for model.1 and model.3 you can reassure yourself that the RSS values printed in this table really do correspond to the residual sum of squares associated with these two models. So let’s type them in as variables:
ss.res.null <- 1.392
ss.res.full <- 0.653
Now, following the procedure that I described above, we will say that the “between model” sum of squares is the difference between these two residual sum of squares values. So, if we do the subtraction, we discover that the sum of squares associated with those terms that appear in the full model but not the null model is:
ss.diff <- ss.res.null - ss.res.full
ss.diff
## [1] 0.739
Right. Next, as always we need to convert these SS values into MS (mean square) values, which we do by dividing by the degrees of freedom. The degrees of freedom associated with the full-model residuals hasn’t changed from our original ANOVA for model.3: it’s the total sample size N, minus the total number of groups G that are relevant to the model. We have 18 people in the trial and 6 possible groups (i.e., 2 therapies × 3 drugs), so the degrees of freedom here is 12. The degrees of freedom for the null model are calculated similarly. The only difference here is that there are only 3 relevant groups (i.e., 3 drugs), so the degrees of freedom here is 15. And, because the degrees of freedom associated with the difference is equal to the difference in the two degrees of freedom, we arrive at the conclusion that we have 15−12=3 degrees of freedom. Now that we know the degrees of freedom, we can calculate our MS values:
ms.res <- ss.res.full / 12
ms.diff <- ss.diff / 3
Okay, now that we have our two MS values, we can divide one by the other, and obtain an F-statistic …
F.stat <- ms.diff / ms.res
F.stat
## [1] 4.526799
… and, just as we had hoped, this turns out to be essentially identical to the F-statistic that the anova() function produced earlier: the tiny discrepancy arises only because we typed the residual sums of squares in to three decimal places.
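If you want to complete the picture and recover the p-value as well, a quick sketch-level addition is to look up the upper tail probability of the F distribution with 3 and 12 degrees of freedom:

pf( F.stat, df1 = 3, df2 = 12, lower.tail = FALSE )   # about .024, matching the anova() output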
One of the most important things to understand about ANOVA and regression is that they’re basically the same thing. On the surface of it, you wouldn’t think that this is true: after all, the way that I’ve described them so far suggests that ANOVA is primarily concerned with testing for group differences, and regression is primarily concerned with understanding the correlations between variables. And as far as it goes, that’s perfectly true. But when you look under the hood, so to speak, the underlying mechanics of ANOVA and regression are awfully similar. In fact, if you think about it, you’ve already seen evidence of this. ANOVA and regression both rely heavily on sums of squares (SS), both make use of F tests, and so on. Looking back, it’s hard to escape the feeling that Chapters 14 and 15 were a bit repetitive.
The reason for this is that ANOVA and regression are both kinds of linear models. In the case of regression, this is kind of obvious. The regression equation that we use to define the relationship between predictors and outcomes is the equation for a straight line, so it’s quite obviously a linear model. And if that wasn’t a big enough clue, the simple fact that the command to run a regression is lm() is kind of a hint too. When we use an R formula like outcome ~ predictor1 + predictor2 what we’re really working with is the somewhat uglier linear model:
$Y_{p}=b_{1} X_{1 p}+b_{2} X_{2 p}+b_{0}+\epsilon_{p}$
where Yp is the outcome value for the p-th observation (e.g., p-th person), X1p is the value of the first predictor for the p-th observation, X2p is the value of the second predictor for the p-th observation, the b1, b2 and b0 terms are our regression coefficients, and ϵp is the p-th residual. If we ignore the residuals ϵp and just focus on the regression line itself, we get the following formula:
$\hat{Y}_{p}=b_{1} X_{1 p}+b_{2} X_{2 p}+b_{0}$
where $\ \hat{Y_p}$ is the value of Y that the regression line predicts for person p, as opposed to the actually-observed value Yp. The thing that isn’t immediately obvious is that we can write ANOVA as a linear model as well. However, it’s actually pretty straightforward to do this. Let’s start with a really simple example: rewriting a 2×2 factorial ANOVA as a linear model.
Some data
To make things concrete, let’s suppose that our outcome variable is the grade that a student receives in my class, a ratio-scale variable corresponding to a mark from 0% to 100%. There are two predictor variables of interest: whether or not the student turned up to lectures (the attend variable), and whether or not the student actually read the textbook (the reading variable). We’ll say that attend = 1 if the student attended class, and attend = 0 if they did not. Similarly, we’ll say that reading = 1 if the student read the textbook, and reading = 0 if they did not.
Okay, so far that’s simple enough. The next thing we need to do is to wrap some maths around this (sorry!). For the purposes of this example, let Yp denote the grade of the p-th student in the class. This is not quite the same notation that we used earlier in this chapter: previously, we’ve used the notation Yrci to refer to the i-th person in the r-th group for predictor 1 (the row factor) and the c-th group for predictor 2 (the column factor). This extended notation was really handy for describing how the SS values are calculated, but it’s a pain in the current context, so I’ll switch notation here. Now, the Yp notation is visually simpler than Yrci, but it has the shortcoming that it doesn’t actually keep track of the group memberships! That is, if I told you that Y0,0,3=35, you’d immediately know that we’re talking about a student (the 3rd such student, in fact) who didn’t attend the lectures (i.e., attend = 0) and didn’t read the textbook (i.e. reading = 0), and who ended up failing the class (grade = 35). But if I tell you that Yp=35 all you know is that the p-th student didn’t get a good grade. We’ve lost some key information here. Of course, it doesn’t take a lot of thought to figure out how to fix this: what we’ll do instead is introduce two new variables X1p and X2p that keep track of this information. In the case of our hypothetical student, we know that X1p=0 (i.e., attend = 0) and X2p=0 (i.e., reading = 0). So the data might look like this:
person p grade Yp attendance X1p reading X2p
5 35 0 0
6 50 0 0
4 60 1 0
7 65 1 0
8 70 0 1
3 75 0 1
2 87 1 1
1 90 1 1
This isn’t anything particularly special, of course: it’s exactly the format in which we expect to see our data! In other words, if your data have been stored as a data frame in R then you’re probably expecting to see something that looks like the rtfm.1 data frame:
load("./rbook-master/data/rtfm.rdata")
rtfm.1
## grade attend reading
## 1 90 1 1
## 2 87 1 1
## 3 75 0 1
## 4 60 1 0
## 5 35 0 0
## 6 50 0 0
## 7 65 1 0
## 8 70 0 1
Well, sort of. I suspect that a few readers are probably frowning a little at this point. Earlier on in the book I emphasised the importance of converting nominal scale variables such as attend and reading to factors, rather than encoding them as numeric variables. The rtfm.1 data frame doesn’t do this, but the rtfm.2 data frame does, and so you might instead be expecting to see data like this:
rtfm.2
## grade attend reading
## 1 90 yes yes
## 2 87 yes yes
## 3 75 no yes
## 4 60 yes no
## 5 35 no no
## 6 50 no no
## 7 65 yes no
## 8 70 no yes
However, for the purposes of this section it’s important that we be able to switch back and forth between these two different ways of thinking about the data. After all, our goal in this section is to look at some of the mathematics that underpins ANOVA, and if we want to do that we need to be able to see the numerical representation of the data (in rtfm.1) as well as the more meaningful factor representation (in rtfm.2). In any case, we can use the xtabs() function to confirm that this data set corresponds to a balanced design
xtabs( ~ attend + reading, rtfm.2 )
## reading
## attend no yes
## no 2 2
## yes 2 2
For each possible combination of the attend and reading variables, we have exactly two students. If we’re interested in calculating the mean grade for each of these cells, we can use the aggregate() function:
aggregate( grade ~ attend + reading, rtfm.2, mean )
## attend reading grade
## 1 no no 42.5
## 2 yes no 62.5
## 3 no yes 72.5
## 4 yes yes 88.5
Looking at this table, one gets the strong impression that reading the text and attending the class both matter a lot.
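As a quick aside, it’s also worth glancing at the marginal means. Notice that the two differences (18 points for attendance, 28 points for reading) are exactly the regression coefficients that will show up later in this section; that’s no accident in a balanced design like this one:

aggregate( grade ~ attend, rtfm.2, mean )    # 57.5 vs 75.5: attending is worth 18 points
aggregate( grade ~ reading, rtfm.2, mean )   # 52.5 vs 80.5: reading is worth 28 points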
ANOVA with binary factors as a regression model
Okay, let’s get back to talking about the mathematics. We now have our data expressed in terms of three numeric variables: the continuous variable Y, and the two binary variables X1 and X2. What I want you to recognise is that our 2×2 factorial ANOVA is exactly equivalent to the regression model
$Y_{p}=b_{1} X_{1 p}+b_{2} X_{2 p}+b_{0}+\epsilon_{p}$
This is, of course, the exact same equation that I used earlier to describe a two-predictor regression model! The only difference is that X1 and X2 are now binary variables (i.e., values can only be 0 or 1), whereas in a regression analysis we expect that X1 and X2 will be continuous. There’s a couple of ways I could try to convince you of this. One possibility would be to do a lengthy mathematical exercise, proving that the two are identical. However, I’m going to go out on a limb and guess that most of the readership of this book will find that to be annoying rather than helpful. Instead, I’ll explain the basic ideas, and then rely on R to show that ANOVA analyses and regression analyses aren’t just similar, they’re identical for all intents and purposes.239 Let’s start by running this as an ANOVA. To do this, we’ll use the rtfm.2 data frame, since that’s the one in which I did the proper thing of coding attend and reading as factors, and I’ll use the aov() function to do the analysis. Here’s what we get…
anova.model <- aov( grade ~ attend + reading, data = rtfm.2 )
summary( anova.model )
## Df Sum Sq Mean Sq F value Pr(>F)
## attend 1 648 648 21.60 0.00559 **
## reading 1 1568 1568 52.27 0.00079 ***
## Residuals 5 150 30
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
So, by reading the key numbers off the ANOVA table and the table of means that we presented earlier, we can see that the students obtained a higher grade if they attended class (F1,5=21.6,p=.0056) and if they read the textbook (F1,5=52.3,p=.0008). Let’s make a note of those p-values and those F statistics.
library(effects)
Effect( c("attend","reading"), anova.model )
##
## attend*reading effect
## reading
## attend no yes
## no 43.5 71.5
## yes 61.5 89.5
Now let’s think about the same analysis from a linear regression perspective. In the rtfm.1 data set, we have encoded attend and reading as if they were numeric predictors. In this case, this is perfectly acceptable. There really is a sense in which a student who turns up to class (i.e. attend = 1) has in fact done “more attendance” than a student who does not (i.e. attend = 0). So it’s not at all unreasonable to include it as a predictor in a regression model. It’s a little unusual, because the predictor only takes on two possible values, but it doesn’t violate any of the assumptions of linear regression. And it’s easy to interpret. If the regression coefficient for attend is greater than 0, it means that students that attend lectures get higher grades; if it’s less than zero, then students attending lectures get lower grades. The same is true for our reading variable.
Wait a second… why is this true? It’s something that is intuitively obvious to everyone who has taken a few stats classes and is comfortable with the maths, but it isn’t clear to everyone else at first pass. To see why this is true, it helps to look closely at a few specific students. Let’s start by considering the 6th and 7th students in our data set (i.e. p=6 and p=7). Neither one has read the textbook, so in both cases we can set reading = 0. Or, to say the same thing in our mathematical notation, we observe X2,6=0 and X2,7=0. However, student number 7 did turn up to lectures (i.e., attend = 1, X1,7=1) whereas student number 6 did not (i.e., attend = 0, X1,6=0). Now let’s look at what happens when we insert these numbers into the general formula for our regression line. For student number 6, the regression predicts that
\begin{aligned} Y_6 &= b_1X_{1,6} + b_2X_{2,6} + b_0 \\ &= (b_1 \times 0) + (b_2 \times 0) + b_0 \\ &= b_0 \end{aligned}
So we’re expecting that this student will obtain a grade corresponding to the value of the intercept term b0. What about student 7? This time, when we insert the numbers into the formula for the regression line, we obtain the following:
\begin{aligned} Y_7 &= b_1X_{1,7} + b_2X_{2,7} + b_0 \\ &= (b_1 \times 1) + (b_2 \times 0) + b_0 \\ &= b_1 + b_0 \end{aligned}
Because this student attended class, the predicted grade is equal to the intercept term b0 plus the coefficient associated with the attend variable, b1. So, if b1 is greater than zero, we’re expecting that the students who turn up to lectures will get higher grades than those students who don’t. If this coefficient is negative, we’re expecting the opposite: students who turn up at class end up performing much worse. In fact, we can push this a little bit further. What about student number 1, who turned up to class (X1,1=1) and read the textbook (X2,1=1)? If we plug these numbers into the regression, we get
\begin{aligned} Y_1 &= b_1X_{1,1} + b_2X_{2,1} + b_0 \\ &= (b_1 \times 1) + (b_2 \times 1) + b_0 \\ &= b_1 + b_2 + b_0 \end{aligned}
So if we assume that attending class helps you get a good grade (i.e., b1>0) and if we assume that reading the textbook also helps you get a good grade (i.e., b2>0), then our expectation is that student 1 will get a grade that is higher than student 6 and student 7.
And at this point, you won’t be at all surprised to learn that the regression model predicts that student 3, who read the book but didn’t attend lectures, will obtain a grade of b2+b0. I won’t bore you with yet another regression formula. Instead, what I’ll do is show you the following table of expected grades:
read textbook? no read textbook? yes
attended - no b0 b0+b2
attended - yes b0+b1 b0+b1+b2
As you can see, the intercept term b0 acts like a kind of “baseline” grade that you would expect from those students who don’t take the time to attend class or read the textbook. Similarly, b1 represents the boost that you’re expected to get if you come to class, and b2 represents the boost that comes from reading the textbook. In fact, if this were an ANOVA you might very well want to characterise b1 as the main effect of attendance, and b2 as the main effect of reading! In fact, for a simple 2×2 ANOVA that’s exactly how it plays out.
Okay, now that we’re really starting to see why ANOVA and regression are basically the same thing, let’s actually run our regression using the rtfm.1 data and the lm() function to convince ourselves that this is really true. Running the regression in the usual way gives us the following output:240
regression.model <- lm( grade ~ attend + reading, data = rtfm.1 )
summary( regression.model )
##
## Call:
## lm(formula = grade ~ attend + reading, data = rtfm.1)
##
## Residuals:
## 1 2 3 4 5 6 7 8
## 0.5 -2.5 3.5 -1.5 -8.5 6.5 3.5 -1.5
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 43.500 3.354 12.969 4.86e-05 ***
## attend 18.000 3.873 4.648 0.00559 **
## reading 28.000 3.873 7.230 0.00079 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.477 on 5 degrees of freedom
## Multiple R-squared: 0.9366, Adjusted R-squared: 0.9112
## F-statistic: 36.93 on 2 and 5 DF, p-value: 0.001012
There’s a few interesting things to note here. Firstly, notice that the intercept term is 43.5, which is close to the “group” mean of 42.5 observed for those two students who didn’t read the text or attend class. Moreover, it’s identical to the predicted group mean that we pulled out of our ANOVA using the Effect() function! Secondly, notice that we have the regression coefficient of b1=18.0 for the attendance variable, suggesting that those students that attended class scored 18% higher than those who didn’t. So our expectation would be that those students who turned up to class but didn’t read the textbook would obtain a grade of b0+b1, which is equal to 43.5+18.0=61.5. Again, this is similar to the observed group mean of 62.5, and identical to the expected group mean that we pulled from our ANOVA. You can verify for yourself that the same thing happens when we look at the students that read the textbook.
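If you’d like to reconstruct the whole table of expected grades from the fitted model, here’s a quick check that uses the coefficient names lm() produced above:

b <- coef( regression.model )
b["(Intercept)"]                               # 43.5: didn't attend, didn't read
b["(Intercept)"] + b["attend"]                 # 61.5: attended only
b["(Intercept)"] + b["reading"]                # 71.5: read the book only
b["(Intercept)"] + b["attend"] + b["reading"]  # 89.5: did both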
Actually, we can push a little further in establishing the equivalence of our ANOVA and our regression. Look at the p-values associated with the attend variable and the reading variable in the regression output. They’re identical to the ones we encountered earlier when running the ANOVA. This might seem a little surprising, since the test used when running our regression model calculates a t-statistic and the ANOVA calculates an F-statistic. However, if you can remember all the way back to Chapter 9, I mentioned that there’s a relationship between the t-distribution and the F-distribution: if you have some quantity that is distributed according to a t-distribution with k degrees of freedom and you square it, then this new squared quantity follows an F-distribution whose degrees of freedom are 1 and k. We can check this with respect to the t statistics in our regression model. For the attend variable we get a t value of 4.648. If we square this number we end up with 21.604, which is identical to the corresponding F statistic in our ANOVA.
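You can verify this relationship directly in R if you like; a one-line sketch, using the standard column names of the lm summary table, is:

coef( summary( regression.model ) )[ "attend", "t value" ]^2   # 4.648 squared is 21.6, the ANOVA F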
Finally, one last thing you should know. Because R understands the fact that ANOVA and regression are both examples of linear models, it lets you extract the classic ANOVA table from your regression model using the anova() function. All you have to do is this:
anova( regression.model )
## Analysis of Variance Table
##
## Response: grade
## Df Sum Sq Mean Sq F value Pr(>F)
## attend 1 648 648 21.600 0.0055943 **
## reading 1 1568 1568 52.267 0.0007899 ***
## Residuals 5 150 30
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Changing the baseline category
At this point, you’re probably convinced that the ANOVA and the regression are actually identical to each other. So there’s one last thing I should show you. What happens if I use the data from rtfm.2 to run the regression? In rtfm.2, we coded the attend and reading variables as factors rather than as numeric variables. Does this matter? It turns out that it doesn’t. The only differences are superficial:
regression.model.2 <- lm( grade ~ attend + reading, data = rtfm.2 )
summary( regression.model.2 )
##
## Call:
## lm(formula = grade ~ attend + reading, data = rtfm.2)
##
## Residuals:
## 1 2 3 4 5 6 7 8
## 0.5 -2.5 3.5 -1.5 -8.5 6.5 3.5 -1.5
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 43.500 3.354 12.969 4.86e-05 ***
## attendyes 18.000 3.873 4.648 0.00559 **
## readingyes 28.000 3.873 7.230 0.00079 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.477 on 5 degrees of freedom
## Multiple R-squared: 0.9366, Adjusted R-squared: 0.9112
## F-statistic: 36.93 on 2 and 5 DF, p-value: 0.001012
The only thing that is different is that R labels the two variables differently: the output now refers to attendyes and readingyes. You can probably guess what this means. When R refers to readingyes it’s trying to indicate that it is assuming that “yes = 1” and “no = 0”. This is important. Suppose we wanted to say that “yes = 0” and “no = 1”. We could still run this as a regression model, but now all of our coefficients will go in the opposite direction, because the effect of readingno would be referring to the consequences of not reading the textbook. To show you how this works, we can use the relevel() function in R to change which level of the reading variable is set to “0”. Here’s how it works. First, let’s get R to print out the reading factor as it currently stands:
rtfm.2$reading
## [1] yes yes yes no no no no yes
## Levels: no yes
Notice that the order in which R prints out the levels is “no” and then “yes”. Now let’s apply the relevel() function:
relevel( x = rtfm.2$reading, ref = "yes" )
## [1] yes yes yes no no no no yes
## Levels: yes no
R now lists “yes” before “no”. This means that R will now treat “yes” as the “reference” level (sometimes called the baseline level) when you include it in an ANOVA. So let’s now create a new data frame with our factors recoded…
rtfm.3 <- rtfm.2 # copy the old data frame
rtfm.3$reading <- relevel( rtfm.2$reading, ref="yes" ) # re-level the reading factor
rtfm.3$attend <- relevel( rtfm.2$attend, ref="yes" ) # re-level the attend factor
Finally, let’s re-run our regression, this time using the re-coded data:
regression.model.3 <- lm( grade ~ attend + reading, data = rtfm.3 )
summary( regression.model.3 )
##
## Call:
## lm(formula = grade ~ attend + reading, data = rtfm.3)
##
## Residuals:
## 1 2 3 4 5 6 7 8
## 0.5 -2.5 3.5 -1.5 -8.5 6.5 3.5 -1.5
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 89.500 3.354 26.684 1.38e-06 ***
## attendno -18.000 3.873 -4.648 0.00559 **
## readingno -28.000 3.873 -7.230 0.00079 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.477 on 5 degrees of freedom
## Multiple R-squared: 0.9366, Adjusted R-squared: 0.9112
## F-statistic: 36.93 on 2 and 5 DF, p-value: 0.001012
As you can see, there are now a few changes. Most obviously, the attendno and readingno effects are both negative, though they’re the same magnitude as before: if you don’t read the textbook, for instance, you should expect your grade to drop by 28% relative to someone who did. The t-statistics have reversed sign too. The p-values remain the same, of course. The intercept has changed too. In our original regression, the baseline corresponded to a student who didn’t attend class and didn’t read the textbook, so we got a value of 43.5 as the expected baseline grade. However, now that we’ve recoded our variables, the baseline corresponds to a student who attended class and read the textbook, and for that student we would expect a grade of 89.5.
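As a quick sanity check on the new coding (just arithmetic on the coefficients shown above), we can recover the original baseline grade by removing both “boosts” from the new intercept:
89.5 - 18.0 - 28.0   # predicted grade for a student who skips both class and the textbook
## [1] 43.5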
How to encode non-binary factors as contrasts
At this point, I’ve shown you how we can recast a 2×2 ANOVA as a linear model. And it’s pretty easy to see how this generalises to a 2×2×2 ANOVA or a 2×2×2×2 ANOVA… it’s the same thing, really: you just add a new binary variable for each of your factors. Where it begins to get trickier is when we consider factors that have more than two levels. Consider, for instance, the 3×2 ANOVA that we ran earlier in this chapter using the clin.trial data. How can we convert the three-level drug factor into a numerical form that is appropriate for a regression?
The answer to this question is pretty simple, actually. All we have to do is realise that a three-level factor can be redescribed as two binary variables. Suppose, for instance, I were to create a new binary variable called druganxifree. Whenever the drug variable is equal to "anxifree" we set druganxifree = 1. Otherwise, we set druganxifree = 0. This variable sets up a contrast, in this case between anxifree and the other two drugs. By itself, of course, the druganxifree contrast isn’t enough to fully capture all of the information in our drug variable. We need a second contrast, one that allows us to distinguish between joyzepam and the placebo. To do this, we can create a second binary contrast, called drugjoyzepam, which equals 1 if the drug is joyzepam, and 0 if it is not. Taken together, these two contrasts allow us to perfectly discriminate between all three possible drugs. The table below illustrates this:
knitr::kable(tibble::tribble(
~V1, ~V2, ~V3,
"placebo", "0", "0",
"anxifree", "1", "0",
"joyzepam", "0", "1"
), col.names = c( "drug", "druganxifree", "drugjoyzepam"))
drug druganxifree drugjoyzepam
placebo 0 0
anxifree 1 0
joyzepam 0 1
If the drug administered to a patient is a placebo, then both of the two contrast variables will equal 0. If the drug is Anxifree, then the druganxifree variable will equal 1, and drugjoyzepam will be 0. The reverse is true for Joyzepam: drugjoyzepam is 1, and druganxifree is 0.
Creating contrast variables manually is not too difficult to do using basic R commands. For example, here’s how we would create the druganxifree variable:
druganxifree <- as.numeric( clin.trial$drug == "anxifree" )
druganxifree
## [1] 0 0 0 1 1 1 0 0 0 0 0 0 1 1 1 0 0 0
The clin.trial$drug == "anxifree" part of the command returns a logical vector that has a value of TRUE if the drug is Anxifree, and a value of FALSE if the drug is Joyzepam or the placebo. The as.numeric() function just converts TRUE to 1 and FALSE to 0. Obviously, this command creates the druganxifree variable inside the workspace. If you wanted to add it to the clin.trial data frame, you’d use a command like this instead:
clin.trial$druganxifree <- as.numeric( clin.trial$drug == "anxifree" )
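For what it’s worth, the matching contrast for Joyzepam would be built in exactly the same way. Here’s a sketch (shown as a standalone variable rather than being added to the data frame, so the output below is unaffected):
drugjoyzepam <- as.numeric( clin.trial$drug == "joyzepam" )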
You could then repeat this for the other contrasts that you wanted to use. However, it’s kind of tedious to do this over and over again for every single contrast that you want to create. To make it a little easier, the lsr package contains a simple function called expandFactors() that will convert every factor in a data frame into a set of contrast variables.241 We can use it to create a new data frame, clin.trial.2 that contains the same data as clin.trial, but with the two factors represented in terms of the contrast variables:
clin.trial.2 <- expandFactors( clin.trial )
## (Intercept) druganxifree drugjoyzepam therapyCBT mood.gain druganxifree
## 1 1 0 0 0 0.5 0
## 2 1 0 0 0 0.3 0
## 3 1 0 0 0 0.1 0
## 4 1 1 0 0 0.6 1
## 5 1 1 0 0 0.4 1
## 6 1 1 0 0 0.2 1
## 7 1 0 1 0 1.4 0
## 8 1 0 1 0 1.7 0
## 9 1 0 1 0 1.3 0
## 10 1 0 0 1 0.6 0
## 11 1 0 0 1 0.9 0
## 12 1 0 0 1 0.3 0
## 13 1 1 0 1 1.1 1
## 14 1 1 0 1 0.8 1
## 15 1 1 0 1 1.2 1
## 16 1 0 1 1 1.8 0
## 17 1 0 1 1 1.3 0
## 18 1 0 1 1 1.4 0
## attr(,"assign")
## [1] 0 1 1 2 3 4
## attr(,"contrasts")
## attr(,"contrasts")$drug
## [1] "contr.treatment"
##
## attr(,"contrasts")$therapy
## [1] "contr.treatment"
clin.trial.2
## druganxifree drugjoyzepam therapyCBT mood.gain druganxifree
## 1 0 0 0 0.5 0
## 2 0 0 0 0.3 0
## 3 0 0 0 0.1 0
## 4 1 0 0 0.6 1
## 5 1 0 0 0.4 1
## 6 1 0 0 0.2 1
## 7 0 1 0 1.4 0
## 8 0 1 0 1.7 0
## 9 0 1 0 1.3 0
## 10 0 0 1 0.6 0
## 11 0 0 1 0.9 0
## 12 0 0 1 0.3 0
## 13 1 0 1 1.1 1
## 14 1 0 1 0.8 1
## 15 1 0 1 1.2 1
## 16 0 1 1 1.8 0
## 17 0 1 1 1.3 0
## 18 0 1 1 1.4 0
It’s not as pretty as the original clin.trial data, but it’s definitely saying the same thing. We have now recoded our three-level factor in terms of two binary variables, and we’ve already seen that ANOVA and regression behave the same way for binary variables. However, there are some additional complexities that arise in this case, which we’ll discuss in the next section.
The equivalence between ANOVA and regression for non-binary factors
Now we have two different versions of the same data set: our original data frame clin.trial in which the drug variable is expressed as a single three-level factor, and the expanded data set clin.trial.2 in which it is expanded into two binary contrasts. Once again, the thing that we want to demonstrate is that our original 3×2 factorial ANOVA is equivalent to a regression model applied to the contrast variables. Let’s start by re-running the ANOVA:
drug.anova <- aov( mood.gain ~ drug + therapy, clin.trial )
summary( drug.anova )
## Df Sum Sq Mean Sq F value Pr(>F)
## drug 2 3.453 1.7267 26.149 1.87e-05 ***
## therapy 1 0.467 0.4672 7.076 0.0187 *
## Residuals 14 0.924 0.0660
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Obviously, there’s no surprises here. That’s the exact same ANOVA that we ran earlier, except for the fact that I’ve arbitrarily decided to rename the output variable as drug.anova for some stupid reason.242 Next, let’s run a regression, using druganxifree, drugjoyzepam and therapyCBT as the predictors. Here’s what we get:
drug.regression <- lm( mood.gain ~ druganxifree + drugjoyzepam + therapyCBT, clin.trial.2 )
summary( drug.regression )
##
## Call:
## lm(formula = mood.gain ~ druganxifree + drugjoyzepam + therapyCBT,
## data = clin.trial.2)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.3556 -0.1806 0.0000 0.1972 0.3778
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.2889 0.1211 2.385 0.0318 *
## druganxifree 0.2667 0.1484 1.797 0.0939 .
## drugjoyzepam 1.0333 0.1484 6.965 6.6e-06 ***
## therapyCBT 0.3222 0.1211 2.660 0.0187 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.257 on 14 degrees of freedom
## Multiple R-squared: 0.8092, Adjusted R-squared: 0.7683
## F-statistic: 19.79 on 3 and 14 DF, p-value: 2.64e-05
Hm. This isn’t the same output that we got last time. Not surprisingly, the regression output prints out the results for each of the three predictors separately, just like it did every other time we used lm(). On the one hand, we can see that the p-value for the therapyCBT variable is exactly the same as the one for the therapy factor in our original ANOVA, so we can be reassured that the regression model is doing the same thing as the ANOVA did. On the other hand, this regression model is testing the druganxifree contrast and the drugjoyzepam contrast separately, as if they were two completely unrelated variables. It’s not surprising of course, because the poor lm() function has no way of knowing that drugjoyzepam and druganxifree are actually the two different contrasts that we used to encode our three-level drug factor. As far as it knows, drugjoyzepam and druganxifree are no more related to one another than drugjoyzepam and therapyCBT. However, you and I know better. At this stage we’re not at all interested in determining whether these two contrasts are individually significant. We just want to know if there’s an “overall” effect of drug. That is, what we want R to do is to run some kind of “omnibus” test, one in which the two “drug-related” contrasts are lumped together for the purpose of the test. Sound familiar? This is exactly the situation that we discussed in Section 16.5, and it is precisely this situation that the F-test is built to handle. All we need to do is specify our null model, which in this case would include the therapyCBT predictor, and omit both of the drug-related variables, and then run it through the anova() function:
nodrug.regression <- lm( mood.gain ~ therapyCBT, clin.trial.2 )
anova( nodrug.regression, drug.regression )
## Analysis of Variance Table
##
## Model 1: mood.gain ~ therapyCBT
## Model 2: mood.gain ~ druganxifree + drugjoyzepam + therapyCBT
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 16 4.3778
## 2 14 0.9244 2 3.4533 26.149 1.872e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Ah, that’s better. Our F-statistic is 26.1, the degrees of freedom are 2 and 14, and the p-value is 0.000019. The numbers are identical to the ones we obtained for the main effect of drug in our original ANOVA. Once again, we see that ANOVA and regression are essentially the same: they are both linear models, and the underlying statistical machinery for ANOVA is identical to the machinery used in regression. The importance of this fact should not be underestimated. Throughout the rest of this chapter we’re going to rely heavily on this idea.
Degrees of freedom as parameter counting!
At long last, I can finally give a definition of degrees of freedom that I am happy with. Degrees of freedom are defined in terms of the number of parameters that have to be estimated in a model. For a regression model or an ANOVA, the number of parameters corresponds to the number of regression coefficients (i.e. b-values), including the intercept. Keeping in mind that any F-test is always a comparison between two models, the first df is the difference in the number of parameters. For example, in the model comparison above, the null model (mood.gain ~ therapyCBT) has two parameters: there’s one regression coefficient for the therapyCBT variable, and a second one for the intercept. The alternative model (mood.gain ~ druganxifree + drugjoyzepam + therapyCBT) has four parameters: one regression coefficient for each of the three contrasts, and one more for the intercept. So the degrees of freedom associated with the difference between these two models is df1=4−2=2.
What about the case when there doesn’t seem to be a null model? For instance, you might be thinking of the F-test that appears at the very bottom of the regression output. I originally described that as a test of the regression model as a whole. However, that is still a comparison between two models. The null model is the trivial model that only includes an intercept, which is written as outcome ~ 1 in R, and the alternative model is the full regression model. The null model in this case contains 1 regression coefficient, for the intercept term. The alternative model contains K+1 regression coefficients, one for each of the K predictor variables and one more for the intercept. So the df value that you see in this F test is equal to df1=K+1−1=K.
What about the second df value that appears in the F-test? This always refers to the degrees of freedom associated with the residuals. It is possible to think of this in terms of parameters too, but in a slightly counterintuitive way. Think of it like this: suppose that the total number of observations across the study as a whole is N. If you wanted to perfectly describe each of these N values, you need to do so using, well… N numbers. When you build a regression model, what you’re really doing is specifying some of the numbers needed to perfectly describe the data. If your model has K predictors and an intercept, then you’ve specified K+1 numbers. So, without bothering to figure out exactly how this would be done, how many more numbers do you think are going to be needed to transform a K+1 parameter regression model into a perfect redescription of the raw data? If you found yourself thinking that (K+1)+(N−K−1)=N, and so the answer would have to be N−K−1, well done! That’s exactly right: in principle you can imagine an absurdly complicated regression model that includes a parameter for every single data point, and it would of course provide a perfect description of the data. This model would contain N parameters in total, but we’re interested in the difference between the number of parameters required to describe this full model (i.e. N) and the number of parameters used by the simpler regression model that you’re actually interested in (i.e., K+1), and so the second degrees of freedom in the F test is df2=N−K−1, where K is the number of predictors (in a regression model) or the number of contrasts (in an ANOVA). In the example I gave above, there are N=18 observations in the data set, and K+1=4 regression coefficients associated with the ANOVA model, so the degrees of freedom for the residuals is df2=18−4=14.
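If it helps to see that parameter counting written out explicitly, here’s a tiny sketch; the numbers are just the ones quoted in the paragraphs above:
N <- 18        # observations in the clin.trial data
K <- 3         # predictors in the full model: druganxifree, drugjoyzepam and therapyCBT
(K + 1) - 2    # df1: parameters in the full model minus parameters in the null model
## [1] 2
N - K - 1      # df2: residual degrees of freedom
## [1] 14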
Postscript
There’s one last thing I want to mention in this section. In the previous example, I used the aov() function to run an ANOVA using the clin.trial data, which codes the drug variable as a single factor. I also used the lm() function to run a regression using the clin.trial.2 data, in which we have two separate contrasts describing the drug. However, it’s also possible to use the lm() function on the original data. That is, you could use a command like this:
drug.lm <- lm( mood.gain ~ drug + therapy, clin.trial )
The fact that drug is a three-level factor does not matter. As long as the drug variable has been declared to be a factor, R will automatically translate it into two binary contrast variables, and will perform the appropriate analysis. After all, as I’ve been saying throughout this section, ANOVA and regression are both linear models, and lm() is the function that handles linear models. In fact, the aov() function doesn’t actually do very much of the work when you run an ANOVA using it: internally, R just passes all the hard work straight to lm(). However, I want to emphasise again that it is critical that your factor variables are declared as such. If drug were declared to be a numeric variable, then R would be happy to treat it as one. After all, it might be that drug refers to the number of drugs that one has taken in the past, or something that is genuinely numeric. R won’t second guess you here. It assumes your factors are factors and your numbers are numbers. Don’t make the mistake of encoding your factors as numbers, or R will run the wrong analysis. This is not a flaw in R: it is your responsibility as the analyst to make sure you’re specifying the right model for your data. Software really can’t be trusted with this sort of thing.
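A cheap way to protect yourself is to check the variable’s type before you fit anything. A minimal sketch:
is.factor( clin.trial$drug )    # TRUE for the clin.trial data, so lm() will build contrasts for it
# if it weren't, you could convert it first, e.g. clin.trial$drug <- as.factor( clin.trial$drug )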
Okay, warnings aside, it’s actually kind of neat to run your ANOVA using the lm() function in the way I did above. Because you’ve called the lm() function, the summary() that R pulls out is formatted like a regression. You can easily verify this by typing
summary( drug.lm )
##
## Call:
## lm(formula = mood.gain ~ drug + therapy, data = clin.trial)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.3556 -0.1806 0.0000 0.1972 0.3778
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.2889 0.1211 2.385 0.0318 *
## druganxifree 0.2667 0.1484 1.797 0.0939 .
## drugjoyzepam 1.0333 0.1484 6.965 6.6e-06 ***
## therapyCBT 0.3222 0.1211 2.660 0.0187 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.257 on 14 degrees of freedom
## Multiple R-squared: 0.8092, Adjusted R-squared: 0.7683
## F-statistic: 19.79 on 3 and 14 DF, p-value: 2.64e-05
However, because the drug and therapy variables were both factors, the anova() function actually knows which contrasts to group together for the purposes of running the F-tests, so you can extract the classic ANOVA table. The output, shown below, is identical to the ANOVA table I showed at the start of the section, but it’s worth trying the following command
anova( drug.lm )
## Analysis of Variance Table
##
## Response: mood.gain
## Df Sum Sq Mean Sq F value Pr(>F)
## drug 2 3.4533 1.72667 26.1490 1.872e-05 ***
## therapy 1 0.4672 0.46722 7.0757 0.01866 *
## Residuals 14 0.9244 0.06603
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
just to see for yourself. However, this behaviour of the anova() function only occurs when the predictor variables are factors. If we try a command like anova( drug.regression ), the output will continue to treat druganxifree and drugjoyzepam as if they were two distinct binary variables. This is because in the drug.regression model we included all the contrasts as “raw” variables, so R had no idea which ones belonged together. However, when we ran the drug.lm model, we gave R the original factor variables, so it does know which contrasts go together. The behaviour of the anova() function reflects that.
In the previous section, I showed you a method for converting a factor into a collection of contrasts. In that method, we specify a set of binary variables that define a table like this one:
``````knitr::kable(tibble::tribble(
~V1, ~V2, ~V3,
"\"placebo\"", "0", "0",
"\"anxifree\"", "1", "0",
"\"joyzepam\"", "0", "1"
), col.names = c("drug", "druganxifree", "drugjoyzepam"))``````
drug druganxifree drugjoyzepam
“placebo” 0 0
“anxifree” 1 0
“joyzepam” 0 1
Each row in the table corresponds to one of the factor levels, and each column corresponds to one of the contrasts. This table, which always has one more row than columns, has a special name: it is called a contrast matrix. However, there are lots of different ways to specify a contrast matrix. In this section I discuss a few of the standard contrast matrices that statisticians use, and how you can use them in R. If you’re planning to read the section on unbalanced ANOVA later on (Section 16.10) it’s worth reading this section carefully. If not, you can get away with skimming it, because the choice of contrasts doesn’t matter much for balanced designs.
Treatment contrasts
In the particular kind of contrasts that I’ve described above, one level of the factor is special, and acts as a kind of “baseline” category (i.e., `placebo` in our example), against which the other two are defined. The name for these kinds of contrast is treatment contrasts. The name reflects the fact that these contrasts are quite natural and sensible when one of the categories in your factor really is special because it actually does represent a baseline. That makes sense in our clinical trial example: the `placebo` condition corresponds to the situation where you don’t give people any real drugs, and so it’s special. The other two conditions are defined in relation to the placebo: in one case you replace the placebo with Anxifree, and in the other case you replace it with Joyzepam.
R comes with a variety of functions that can generate different kinds of contrast matrices. For example, the table shown above is a matrix of treatment contrasts for a factor that has 3 levels. But suppose I want a matrix of treatment contrasts for a factor with 5 levels? The `contr.treatment()` function will do this:
``contr.treatment( n=5 )``
``````## 2 3 4 5
## 1 0 0 0 0
## 2 1 0 0 0
## 3 0 1 0 0
## 4 0 0 1 0
## 5 0 0 0 1``````
Notice that, by default, the first level of the factor is always treated as the baseline category (i.e., it’s the one that has all zeros, and doesn’t have an explicit contrast associated with it). In Section 16.6.3 I mentioned that you can use the `relevel()` function to change which category is the first level of the factor.243 There’s also a special function in R called `contr.SAS()` that generates a treatment contrast matrix in which the last category is treated as the baseline:
``contr.SAS( n=5 )``
``````## 1 2 3 4
## 1 1 0 0 0
## 2 0 1 0 0
## 3 0 0 1 0
## 4 0 0 0 1
## 5 0 0 0 0``````
However, you can actually select any category you like as the baseline within the `contr.treatment()` function, by specifying the `base` argument in that function. See the help documentation for more details.
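For instance, here’s a minimal sketch of what that looks like if you wanted the third category to act as the baseline:
``contr.treatment( n=5, base=3 )``
The resulting matrix has its row of zeros in the third row rather than the first.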
Helmert contrasts
Treatment contrasts are useful for a lot of situations, and they’re the default in R. However, they make most sense in the situation when there really is a baseline category, and you want to assess all the other groups in relation to that one. In other situations, however, no such baseline category exists, and it may make more sense to compare each group to the mean of the other groups. This is where Helmert contrasts, generated by the `contr.helmert()` function, can be useful. The idea behind Helmert contrasts is to compare each group to the mean of the “previous” ones. That is, the first contrast represents the difference between group 2 and group 1, the second contrast represents the difference between group 3 and the mean of groups 1 and 2, and so on. This translates to a contrast matrix that looks like this:
`` contr.helmert( n=5 )``
``````## [,1] [,2] [,3] [,4]
## 1 -1 -1 -1 -1
## 2 1 -1 -1 -1
## 3 0 2 -1 -1
## 4 0 0 3 -1
## 5 0 0 0 4``````
One useful thing about Helmert contrasts is that every contrast sums to zero (i.e., all the columns sum to zero). This has the consequence that, when we interpret the ANOVA as a regression, the intercept term corresponds to the grand mean (μ..) if we are using Helmert contrasts. Compare this to treatment contrasts, in which the intercept term corresponds to the group mean for the baseline category. This property can be very useful in some situations. It doesn’t matter very much if you have a balanced design, which we’ve been assuming so far, but it will turn out to be important later when we consider unbalanced designs in Section 16.10. In fact, the main reason why I’ve even bothered to include this section on specifying contrasts is that they become important if you want to understand unbalanced ANOVA.
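If you want to verify the sum-to-zero property for yourself, it only takes one line:
``colSums( contr.helmert( n=5 ) )``
``## [1] 0 0 0 0``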
Sum to zero contrasts
The third option that I should briefly mention is “sum to zero” contrasts, which are used to construct pairwise comparisons between groups. Specifically, each contrast encodes the difference between one of the groups and a baseline category, which in this case corresponds to the last group:
``contr.sum( n=5 )``
``````## [,1] [,2] [,3] [,4]
## 1 1 0 0 0
## 2 0 1 0 0
## 3 0 0 1 0
## 4 0 0 0 1
## 5 -1 -1 -1 -1``````
Much like Helmert contrasts, we see that each column sums to zero, which means that the intercept term corresponds to the grand mean when ANOVA is treated as a regression model. When interpreting these contrasts, the thing to recognise is that each of these contrasts is a pairwise comparison between group 5 and one of the other four groups. Specifically, contrast 1 corresponds to a “group 1 minus group 5” comparison, contrast 2 corresponds to a “group 2 minus group 5” comparison, and so on.
Viewing and setting the default contrasts in R
Every factor variable in R is associated with a contrast matrix. It has to be, otherwise R wouldn’t be able to run ANOVAs properly! If you don’t specify one explicitly, R will implicitly specify one for you. Here’s what I mean. When I created the `clin.trial` data, I didn’t specify any contrast matrix for either of the factors. You can see this by using the `attr()` function to print out the “contrasts” attribute of the factors. For example:
``attr( clin.trial\$drug, "contrasts" )``
``## NULL``
The `NULL` output here means that R is telling you that the `drug` factor doesn’t have any attribute called “contrasts” for which it has any data. There is no contrast matrix stored anywhere explicitly for this factor. However, if we now ask R to tell us what contrasts are set up for this factor, it gives us this:
``contrasts( clin.trial\$drug )``
``````## anxifree joyzepam
## placebo 0 0
## anxifree 1 0
## joyzepam 0 1``````
These are the same treatment contrasts that we set up manually in Section 16.6. How did R know to set up treatment contrasts, even though I never actually told it anything about what contrasts I wanted? The answer is that R has a hidden list of default “options” that it looks up to resolve situations like this. You can print out all of the options by typing `options()` at the command prompt, but it’s not a very enlightening read. There are a lot of options, and we’re only interested in contrasts right now. Instead of printing out all of the options, we can ask for just one, like this:
``options( "contrasts" )``
``````## \$contrasts
## unordered ordered
## "contr.treatment" "contr.poly"``````
What this is telling us is that the default contrasts for unordered factors (i.e., nominal scale variables) are treatment contrasts, and the default for ordered factors (i.e., ordinal scale variables) is “polynomial” contrasts. I don’t discuss ordered factors much in this book, and so I won’t go into what polynomial contrasts are all about. The key thing is that the `options()` function also allows you to reset these defaults (though only for the current session: they’ll revert to the original settings once you close R). Here’s the command:
`` options(contrasts = c("contr.helmert", "contr.poly"))``
Once we’ve done this, we can inspect the contrast settings again:
``options( "contrasts" )``
``````## \$contrasts
## [1] "contr.helmert" "contr.poly"``````
Now we see that the default contrasts for unordered factors have changed. So if I now ask R to tell me what contrasts are associated with the `drug` factor, it gives a different answer because I changed the default:
``contrasts( clin.trial\$drug )``
``````## [,1] [,2]
## placebo -1 -1
## anxifree 1 -1
## joyzepam 0 2``````
Those are Helmert contrasts. In general, if you’re changing the default settings for something in R, it’s a good idea to reset them to their original values once you’re done. So let’s do that:
``options(contrasts = c("contr.treatment", "contr.poly"))``
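An alternative pattern worth knowing about: `options()` quietly returns the old settings whenever you change them, so you can capture and restore them rather than typing the defaults back in by hand. A sketch:
``````old.opts <- options( contrasts = c("contr.helmert", "contr.poly") )  # keep the previous settings
options( old.opts )                                                   # put them back when done``````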
Setting the contrasts for a single factor
In the previous section, I showed you how to alter the default contrasts. However, suppose that all you really want to do is change the contrasts associated with a single factor, and leave the defaults as they are. To do this, what you need to do is specifically assign the contrast matrix as an “attribute” of the factor. This is easy to do via the `contrasts()` function. For instance, suppose I wanted to use sum to zero contrasts for the `drug` factor, but keep the default treatment contrasts for everything else. I could do that like so:
``contrasts( clin.trial\$drug ) <- contr.sum(3)``
And if I now inspect the contrasts, I get the following
``contrasts( clin.trial\$drug)``
``````## [,1] [,2]
## placebo 1 0
## anxifree 0 1
## joyzepam -1 -1``````
However, the contrasts for everything else will still be the defaults. You can check that we have actually made a specific change to the factor itself by checking to see if it now has an attribute, using the command `attr( clin.trial\$drug, "contrasts" )`. This will print out the same output shown above, because the contrast has in fact been attached to the `drug` factor, and does not rely on the defaults. If you want to wipe the attribute and revert to the defaults, use a command like this:
``contrasts( clin.trial\$drug ) <- NULL``
Setting the contrasts for a single analysis
One last way of changing contrasts. You might find yourself wanting to change the contrasts only for one specific analysis. That’s allowed too, because the `aov()` and `lm()` functions have a `contrasts` argument that you can use. To change contrasts for one specific analysis, we first set up a list variable that names244 the contrast types that you want to use for each of the factors:
``my.contrasts <- list( drug = contr.helmert, therapy = contr.helmert )``
Next, fit the ANOVA model in the usual way, but this time we’ll specify the `contrasts` argument:
``mod <- aov( mood.gain ~ drug*therapy, clin.trial, contrasts = my.contrasts )``
If you try a command like `summary(mod)` you won’t see any difference in the output because the choice of contrasts does not affect the outcome when you have a balanced design (this won’t always be true later on). However, if you want to check that it has actually worked, you can inspect the value of `mod\$contrasts`:
``mod\$contrasts``
``````## \$drug
## [,1] [,2]
## placebo -1 -1
## anxifree 1 -1
## joyzepam 0 2
##
## \$therapy
## [,1]
## no.therapy -1
## CBT 1``````
As you can see, for the purposes of this one particular ANOVA, R has used Helmert contrasts for both variables. If I had omitted the part of the command that specified the `contrasts` argument, you’d be looking at treatment contrasts here because it would have reverted to whatever values the `contrasts()` function prints out for each of the factors. | textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/16%3A_Factorial_ANOVA/16.07%3A_Different_Ways_to_Specify_Contrasts.txt |
Time to switch to a different topic. Let’s suppose you’ve done your ANOVA, and it turns out that you obtained some significant effects. Because of the fact that the F-tests are “omnibus” tests that only really test the null hypothesis that there are no differences among groups, obtaining a significant effect doesn’t tell you which groups are different to which other ones. We discussed this issue back in Chapter 14, and in that chapter our solution was to run t-tests for all possible pairs of groups, making corrections for multiple comparisons (e.g., Bonferroni, Holm) to control the Type I error rate across all comparisons. The methods that we used back in Chapter 14 have the advantage of being relatively simple, and being the kind of tools that you can use in a lot of different situations where you’re testing multiple hypotheses, but they’re not necessarily the best choices if you’re interested in doing efficient post hoc testing in an ANOVA context. There are actually quite a lot of different methods for performing multiple comparisons in the statistics literature (Hsu 1996), and it would be beyond the scope of an introductory text like this one to discuss all of them in any detail.
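Just to connect this back to the approach from Chapter 14, here’s a sketch of what those corrected pairwise t-tests might look like for the `drug` factor in the `clin.trial` data, using the base R `pairwise.t.test()` function:
``pairwise.t.test( clin.trial\$mood.gain, clin.trial\$drug, p.adjust.method = "holm" )``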
That being said, there’s one tool that I do want to draw your attention to, namely Tukey’s “Honestly Significant Difference”, or Tukey’s HSD for short. For once, I’ll spare you the formulas, and just stick to the qualitative ideas. The basic idea in Tukey’s HSD is to examine all relevant pairwise comparisons between groups, and it’s only really appropriate to use Tukey’s HSD if it is pairwise differences that you’re interested in.245 For instance, in `model.2`, where we specified a main effect for drug and a main effect of therapy, we would be interested in the following four comparisons:
• The difference in mood gain for people given Anxifree versus people given the placebo.
• The difference in mood gain for people given Joyzepam versus people given the placebo.
• The difference in mood gain for people given Anxifree versus people given Joyzepam.
• The difference in mood gain for people treated with CBT and people given no therapy.
For any one of these comparisons, we’re interested in the true difference between (population) group means. Tukey’s HSD constructs simultaneous confidence intervals for all four of these comparisons. What we mean by 95% “simultaneous” confidence interval is that there is a 95% probability that all of these confidence intervals contain the relevant true value. Moreover, we can use these confidence intervals to calculate an adjusted p value for any specific comparison.
The `TukeyHSD()` function in R is pretty easy to use: you simply input the model that you want to run the post hoc tests for. For example, if we were looking to run post hoc tests for `model.2`, here’s the command we would use:
``TukeyHSD( model.2 )``
``````## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = mood.gain ~ drug + therapy, data = clin.trial)
##
## \$drug
## diff lwr upr p adj
## anxifree-placebo 0.2666667 -0.1216321 0.6549655 0.2062942
## joyzepam-placebo 1.0333333 0.6450345 1.4216321 0.0000186
## joyzepam-anxifree 0.7666667 0.3783679 1.1549655 0.0003934
##
## \$therapy
## diff lwr upr p adj
## CBT-no.therapy 0.3222222 0.0624132 0.5820312 0.0186602``````
The output here is (I hope) pretty straightforward. The first comparison, for example, is the Anxifree versus placebo difference, and the first part of the output indicates that the observed difference in group means is .27. The next two numbers indicate that the 95% (simultaneous) confidence interval for this comparison runs from −.12 to .65. Because the confidence interval for the difference includes 0, we cannot reject the null hypothesis that the two group means are identical, and so we’re not all that surprised to see that the adjusted p-value is .21. In contrast, if you look at the next line, we see that the observed difference between Joyzepam and the placebo is 1.03, and the 95% confidence interval runs from .64 to 1.42. Because the interval excludes 0, we see that the result is significant (p<.001).
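As an aside, the object returned by `TukeyHSD()` also has a `plot()` method that draws these simultaneous confidence intervals, which some people find easier to read than the table:
``plot( TukeyHSD( model.2 ) )``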
So far, so good. What about the situation where your model includes interaction terms? For instance, in `model.3` we allowed for the possibility that there is an interaction between drug and therapy. If that’s the case, the number of pairwise comparisons that we need to consider starts to increase. As before, we need to consider the three comparisons that are relevant to the main effect of `drug` and the one comparison that is relevant to the main effect of `therapy`. But, if we want to consider the possibility of a significant interaction (and try to find the group differences that underpin that significant interaction), we need to include comparisons such as the following:
• The difference in mood gain for people given Anxifree and treated with CBT, versus people given the placebo and treated with CBT
• The difference in mood gain for people given Anxifree and given no therapy, versus people given the placebo and given no therapy.
• etc
There are quite a lot of these comparisons that you need to consider. So, when we run the `TukeyHSD()` command for `model.3` we see that it has made a lot of pairwise comparisons (19 in total). Here’s the output:
``TukeyHSD( model.3 )``
``````## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = mood.gain ~ drug * therapy, data = clin.trial)
##
## \$drug
## diff lwr upr p adj
## anxifree-placebo 0.2666667 -0.09273475 0.6260681 0.1597148
## joyzepam-placebo 1.0333333 0.67393191 1.3927348 0.0000160
## joyzepam-anxifree 0.7666667 0.40726525 1.1260681 0.0002740
##
## \$therapy
## diff lwr upr p adj
## CBT-no.therapy 0.3222222 0.08256504 0.5618794 0.012617
##
## \$`drug:therapy`
## diff lwr
## anxifree:no.therapy-placebo:no.therapy 0.10000000 -0.539927728
## joyzepam:no.therapy-placebo:no.therapy 1.16666667 0.526738939
## placebo:CBT-placebo:no.therapy 0.30000000 -0.339927728
## anxifree:CBT-placebo:no.therapy 0.73333333 0.093405606
## joyzepam:CBT-placebo:no.therapy 1.20000000 0.560072272
## joyzepam:no.therapy-anxifree:no.therapy 1.06666667 0.426738939
## placebo:CBT-anxifree:no.therapy 0.20000000 -0.439927728
## anxifree:CBT-anxifree:no.therapy 0.63333333 -0.006594394
## joyzepam:CBT-anxifree:no.therapy 1.10000000 0.460072272
## placebo:CBT-joyzepam:no.therapy -0.86666667 -1.506594394
## anxifree:CBT-joyzepam:no.therapy -0.43333333 -1.073261061
## joyzepam:CBT-joyzepam:no.therapy 0.03333333 -0.606594394
## anxifree:CBT-placebo:CBT 0.43333333 -0.206594394
## joyzepam:CBT-placebo:CBT 0.90000000 0.260072272
## joyzepam:CBT-anxifree:CBT 0.46666667 -0.173261061
## upr p adj
## anxifree:no.therapy-placebo:no.therapy 0.7399277 0.9940083
## joyzepam:no.therapy-placebo:no.therapy 1.8065944 0.0005667
## placebo:CBT-placebo:no.therapy 0.9399277 0.6280049
## anxifree:CBT-placebo:no.therapy 1.3732611 0.0218746
## joyzepam:CBT-placebo:no.therapy 1.8399277 0.0004380
## joyzepam:no.therapy-anxifree:no.therapy 1.7065944 0.0012553
## placebo:CBT-anxifree:no.therapy 0.8399277 0.8917157
## anxifree:CBT-anxifree:no.therapy 1.2732611 0.0529812
## joyzepam:CBT-anxifree:no.therapy 1.7399277 0.0009595
## placebo:CBT-joyzepam:no.therapy -0.2267389 0.0067639
## anxifree:CBT-joyzepam:no.therapy 0.2065944 0.2750590
## joyzepam:CBT-joyzepam:no.therapy 0.6732611 0.9999703
## anxifree:CBT-placebo:CBT 1.0732611 0.2750590
## joyzepam:CBT-placebo:CBT 1.5399277 0.0050693
## joyzepam:CBT-anxifree:CBT 1.1065944 0.2139229``````
It looks pretty similar to before, but with a lot more comparisons made.
16.09: The Method of Planned Comparisons
Okay, I have a confession to make. I haven’t had time to write this section, but I think the method of planned comparisons is important enough to deserve a quick discussion. In our discussions of multiple comparisons, in the previous section and back in Chapter 14, I’ve been assuming that the tests you want to run are genuinely post hoc. For instance, in our drugs example above, maybe you thought that the drugs would all have different effects on mood (i.e., you hypothesised a main effect of drug), but you didn’t have any specific hypothesis about how they would be different, nor did you have any real idea about which pairwise comparisons would be worth looking at. If that is the case, then you really have to resort to something like Tukey’s HSD to do your pairwise comparisons.
The situation is rather different, however, if you genuinely did have real, specific hypotheses about which comparisons are of interest, and you never ever have any intention to look at any other comparisons besides the ones that you specified ahead of time. When this is true, and if you honestly and rigourously stick to your noble intentions to not run any other comparisons (even when the data look like they’re showing you deliciously significant effects for stuff you didn’t have a hypothesis test for), then it doesn’t really make a lot of sense to run something like Tukey’s HSD, because it makes corrections for a whole bunch of comparisons that you never cared about and never had any intention of looking at. Under those circumstances, you can safely run a (limited) number of hypothesis tests without making an adjustment for multiple testing. This situation is known as the method of planned comparisons, and it is sometimes used in clinical trials. In a later version of this book, I would like to talk a lot more about planned comparisons. | textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/16%3A_Factorial_ANOVA/16.08%3A_Post_Hoc_Tests.txt |
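To give you a flavour of what this looks like in practice, here’s a sketch of a single planned comparison for the `clin.trial` data: one uncorrected t-test comparing Anxifree to the placebo, run only because (we’re pretending) it was the comparison specified in advance. This ignores the factorial structure entirely and just compares the two groups directly:
``````anxifree.gain <- clin.trial\$mood.gain[ clin.trial\$drug == "anxifree" ]
placebo.gain <- clin.trial\$mood.gain[ clin.trial\$drug == "placebo" ]
t.test( anxifree.gain, placebo.gain, var.equal = TRUE )   # no correction for multiple testing``````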
Factorial ANOVA is a very handy thing to know about. It’s been one of the standard tools used to analyse experimental data for many decades, and you’ll find that you can’t read more than two or three papers in psychology without running into an ANOVA in there somewhere. However, there’s one huge difference between the ANOVAs that you’ll see in a lot of real scientific articles and the ANOVA that I’ve just described: in real life, we’re rarely lucky enough to have perfectly balanced designs. For one reason or another, it’s typical to end up with more observations in some cells than in others. Or, to put it another way, we have an unbalanced design.
Unbalanced designs need to be treated with a lot more care than balanced designs, and the statistical theory that underpins them is a lot messier. It might be a consequence of this messiness, or it might be a shortage of time, but my experience has been that undergraduate research methods classes in psychology have a nasty tendency to ignore this issue completely. A lot of stats textbooks tend to gloss over it too. The net result of this, I think, is that a lot of active researchers in the field don’t actually know that there’s several different “types” of unbalanced ANOVAs, and they produce quite different answers. In fact, reading the psychological literature, I’m kind of amazed at the fact that most people who report the results of an unbalanced factorial ANOVA don’t actually give you enough details to reproduce the analysis: I secretly suspect that most people don’t even realise that their statistical software package is making a whole lot of substantive data analysis decisions on their behalf. It’s actually a little terrifying, when you think about it. So, if you want to avoid handing control of your data analysis to stupid software, read on…
The coffee data
As usual, it will help us to work with some data. The `coffee.Rdata` file contains a hypothetical data set (the `coffee` data frame) that produces an unbalanced 3×2 ANOVA. Suppose we were interested in finding out whether or not the tendency of people to `babble` when they have too much coffee is purely an effect of the coffee itself, or whether there’s some effect of the `milk` and `sugar` that people add to the coffee. Suppose we took 18 people, and gave them some coffee to drink. The amount of coffee / caffeine was held constant, and we varied whether or not milk was added: so `milk` is a binary factor with two levels, `"yes"` and `"no"`. We also varied the kind of sugar involved. The coffee might contain `"real"` sugar, or it might contain `"fake"` sugar (i.e., artificial sweetener), or it might contain `"none"` at all, so the `sugar` variable is a three level factor. Our outcome variable is a continuous variable that presumably refers to some psychologically sensible measure of the extent to which someone is “babbling”. The details don’t really matter for our purpose. To get a sense of what the data look like, we use the `some()` function in the `car` package. The `some()` function randomly picks a few of the observations in the data frame to print out, which is often very handy:
``````load("./rbook-master/data/coffee.rdata")
some( coffee )``````
``````## milk sugar babble
## 1 yes real 4.6
## 5 yes real 5.1
## 6 no real 5.5
## 7 yes none 3.9
## 9 yes none 3.7
## 10 no fake 5.6
## 11 no fake 4.7
## 13 no real 6.0
## 14 no real 5.4
## 17 no none 5.3``````
If we use the `aggregate()` function to quickly produce a table of means, we get a strong impression that there are differences between the groups:
`` aggregate( babble ~ milk + sugar, coffee, mean )``
``````## milk sugar babble
## 1 yes none 3.700
## 2 no none 5.550
## 3 yes fake 5.800
## 4 no fake 4.650
## 5 yes real 5.100
## 6 no real 5.875``````
This is especially true when we compare these means to the standard deviations for the `babble` variable, which you can calculate using `aggregate()` in much the same way. Across groups, this standard deviation varies from .14 to .71, which is fairly small relative to the differences in group means.246 So far, it’s looking like a straightforward factorial ANOVA, just like we did earlier. The problem arises when we check to see how many observations we have in each group:
``xtabs( ~ milk + sugar, coffee )``
``````## sugar
## milk none fake real
## yes 3 2 3
## no 2 4 4``````
This violates one of our original assumptions, namely that the number of people in each group is the same. We haven’t really discussed how to handle this situation.
“Standard ANOVA” does not exist for unbalanced designs
Unbalanced designs lead us to the somewhat unsettling discovery that there isn’t really any one thing that we might refer to as a standard ANOVA. In fact, it turns out that there are three fundamentally different ways247 in which you might want to run an ANOVA in an unbalanced design. If you have a balanced design, all three versions produce identical results, with the sums of squares, F-values etc all conforming to the formulas that I gave at the start of the chapter. However, when your design is unbalanced they don’t give the same answers. Furthermore, they are not all equally appropriate to every situation: some methods will be more appropriate to your situation than others. Given all this, it’s important to understand what the different types of ANOVA are and how they differ from one another.
The first kind of ANOVA is conventionally referred to as Type I sum of squares. I’m sure you can guess what the other two are called. The “sum of squares” part of the name was introduced by the SAS statistical software package, and has become standard nomenclature, but it’s a bit misleading in some ways. I think the logic for referring to them as different types of sum of squares is that, when you look at the ANOVA tables that they produce, the key difference in the numbers is the SS values. The degrees of freedom don’t change, the MS values are still defined as SS divided by df, etc. However, what the terminology gets wrong is that it hides the reason why the SS values are different from one another. To that end, it’s a lot more helpful to think of the three different kinds of ANOVA as three different hypothesis testing strategies. These different strategies lead to different SS values, to be sure, but it’s the strategy that is the important thing here, not the SS values themselves. Recall from Section 16.5 and 16.6 that any particular F-test is best thought of as a comparison between two linear models. So when you’re looking at an ANOVA table, it helps to remember that each of those F-tests corresponds to a pair of models that are being compared. Of course, this leads naturally to the question of which pair of models is being compared. This is the fundamental difference between ANOVA Types I, II and III: each one corresponds to a different way of choosing the model pairs for the tests.
Type I sum of squares
The Type I method is sometimes referred to as the “sequential” sum of squares, because it involves a process of adding terms to the model one at a time. Consider the coffee data, for instance. Suppose we want to run the full 3×2 factorial ANOVA, including interaction terms. The full model, as we’ve discussed earlier, is expressed by the R formula `babble ~ sugar + milk + sugar:milk`, though we often shorten it by using the `sugar * milk` notation. The Type I strategy builds this model up sequentially, starting from the simplest possible model and gradually adding terms.
The simplest possible model for the data would be one in which neither milk nor sugar is assumed to have any effect on babbling. The only term that would be included in such a model is the intercept, and in R formula notation we would write it as `babble ~ 1`. This is our initial null hypothesis. The next simplest model for the data would be one in which only one of the two main effects is included. In the coffee data, there are two different possible choices here, because we could choose to add milk first or to add sugar first (pardon the pun). The order actually turns out to matter, as we’ll see later, but for now let’s just make a choice arbitrarily, and pick sugar. So the second model in our sequence of models is `babble ~ sugar`, and it forms the alternative hypothesis for our first test. We now have our first hypothesis test:
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`babble ~ 1`",
"Alternative model:", "`babble ~ sugar`"
), col.names= c("", ""))``````
Null model: `babble ~ 1`
Alternative model: `babble ~ sugar`
This comparison forms our hypothesis test of the main effect of `sugar`. The next step in our model building exercise is to add the other main effect term, so the next model in our sequence is `babble ~ sugar + milk`. The second hypothesis test is then formed by comparing the following pair of models:
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`babble ~ sugar`",
"Alternative model:", "`babble ~ sugar + milk`"
), col.names= c("", ""))``````
Null model: `babble ~ sugar`
Alternative model: `babble ~ sugar + milk`
This comparison forms our hypothesis test of the main effect of `milk`. In one sense, this approach is very elegant: the alternative hypothesis from the first test forms the null hypothesis for the second one. It is in this sense that the Type I method is strictly sequential. Every test builds directly on the results of the last one. However, in another sense it’s very inelegant, because there’s a strong asymmetry between the two tests. The test of the main effect of `sugar` (the first test) completely ignores `milk`, whereas the test of the main effect of `milk` (the second test) does take `sugar` into account. In any case, the fourth model in our sequence is now the full model, `babble ~ sugar + milk + sugar:milk`, and the corresponding hypothesis test is
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`babble ~ sugar + milk`",
"Alternative model:", "`babble ~ sugar + milk + sugar:milk`"
), col.names= c("", ""))``````
Null model: `babble ~ sugar + milk`
Alternative model: `babble ~ sugar + milk + sugar:milk`
Type I sum of squares is the default hypothesis testing method used by the `anova()` function, so it’s easy to produce the results from a Type I analysis. We just type in the same commands that we always did. Since we’ve now reached the point that we don’t need to hide the fact that ANOVA and regression are both linear models, I’ll use the `lm()` function to run the analyses:
`````` mod <- lm( babble ~ sugar + milk + sugar:milk, coffee )
anova( mod )``````
``````## Analysis of Variance Table
##
## Response: babble
## Df Sum Sq Mean Sq F value Pr(>F)
## sugar 2 3.5575 1.77876 6.7495 0.010863 *
## milk 1 0.9561 0.95611 3.6279 0.081061 .
## sugar:milk 2 5.9439 2.97193 11.2769 0.001754 **
## Residuals 12 3.1625 0.26354
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
Leaving aside for one moment the question of how this result should be interpreted, let’s take note of the fact that our three p-values are .0109, .0811 and .0018 respectively. Next, let’s see if we can replicate the analysis using tools that we’re a little more familiar with. First, let’s fit all four models:
``````mod.1 <- lm( babble ~ 1, coffee )
mod.2 <- lm( babble ~ sugar, coffee )
mod.3 <- lm( babble ~ sugar + milk, coffee )
mod.4 <- lm( babble ~ sugar + milk + sugar:milk, coffee )``````
To run the first hypothesis test comparing `mod.1` to `mod.2` we can use the command `anova(mod.1, mod.2)` in much the same way that we did in Section 16.5. Similarly, we can use the commands `anova(mod.2, mod.3)` and `anova(mod.3, mod.4)` and to run the second and third hypothesis tests. However, rather than run each of those commands separately, we can enter the full sequence of models like this:
``anova( mod.1, mod.2, mod.3, mod.4 )``
``````## Analysis of Variance Table
##
## Model 1: babble ~ 1
## Model 2: babble ~ sugar
## Model 3: babble ~ sugar + milk
## Model 4: babble ~ sugar + milk + sugar:milk
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 17 13.6200
## 2 15 10.0625 2 3.5575 6.7495 0.010863 *
## 3 14 9.1064 1 0.9561 3.6279 0.081061 .
## 4 12 3.1625 2 5.9439 11.2769 0.001754 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
This output is rather more verbose than the last one, but it’s telling essentially the same story.248
The big problem with using Type I sum of squares is the fact that it really does depend on the order in which you enter the variables. Yet, in many situations the researcher has no reason to prefer one ordering over another. This is presumably the case for our milk and sugar problem. Should we add milk first, or sugar first? It feels exactly as arbitrary as a data analysis question as it does as a coffee-making question. There may in fact be some people with firm opinions about ordering, but it’s hard to imagine a principled answer to the question. Yet, look what happens when we change the ordering:
``````mod <- lm( babble ~ milk + sugar + sugar:milk, coffee )
anova( mod )``````
``````## Analysis of Variance Table
##
## Response: babble
## Df Sum Sq Mean Sq F value Pr(>F)
## milk 1 1.4440 1.44400 5.4792 0.037333 *
## sugar 2 3.0696 1.53482 5.8238 0.017075 *
## milk:sugar 2 5.9439 2.97193 11.2769 0.001754 **
## Residuals 12 3.1625 0.26354
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
The p-values for both main effect terms have changed, and fairly dramatically. Among other things, the effect of `milk` has become significant (though one should avoid drawing any strong conclusions about this, as I’ve mentioned previously). Which of these two ANOVAs should one report? It’s not immediately obvious.
When you look at the hypothesis tests that are used to define the "first" main effect and the "second" one, it's clear that they're qualitatively different from one another. In our initial example, we saw that the test for the main effect of `sugar` completely ignores `milk`, whereas the test of the main effect of `milk` does take `sugar` into account. As such, the Type I testing strategy really does treat the first main effect as if it had a kind of theoretical primacy over the second one. In my experience there is very rarely if ever any theoretical primacy of this kind that would justify treating any two main effects asymmetrically.
The consequence of all this is that Type I tests are very rarely of much interest, and so we should move on to discuss Type II tests and Type III tests. However, for the sake of completeness – on the off chance that you ever find yourself needing to run Type I tests – I'll comment briefly on how R determines the ordering of terms in a Type I test. The key principle in Type I sum of squares is that the hypothesis testing is sequential, with terms added one at a time. In R, the terms are always entered with main effects first (e.g., factors `A`, `B`, `C` and so on), followed by first order interaction terms (e.g., terms like `A:B` and `B:C`), then second order interactions (e.g., `A:B:C`) and so on. Within each "block" you can specify whatever order you like. So, for instance, if we specified our model using a command like this,
``mod <- lm( outcome ~ A + B + C + B:C + A:B + A:C )``
and then used `anova(mod)` to produce sequential hypothesis tests, what we’d see is that the main effect terms would be entered `A` then `B` and then `C`, but then the interactions would be entered in the order `B:C` first, then `A:B` and then finally `A:C`. Reordering the terms within each group will change the ordering, as we saw earlier. However, changing the order of terms across blocks has no effect. For instance, if we tried to move the interaction term `B:C` to the front, like this,
``mod <- lm( outcome ~ B:C + A + B + C + A:B + A:C )``
it would have no effect. R would still enter the terms in the same order as last time. If for some reason you really, really need an interaction term to be entered first, then you have to do it the long way, creating each model manually using a separate `lm()` command and then using a command like `anova(mod.1, mod.2, mod.3, mod.4)` to force R to enter them in the order that you want.
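For example, here's a sketch of what that might look like for the `B:C`-first case (the data frame `dat` and the variables `outcome`, `A`, `B` and `C` are hypothetical, just to illustrate the mechanics):
``````mod.1 <- lm( outcome ~ 1, dat )           # intercept only
mod.2 <- lm( outcome ~ B:C, dat )         # force the B:C interaction in first
mod.3 <- lm( outcome ~ B:C + A, dat )     # then add the remaining terms one at a time
mod.4 <- lm( outcome ~ B:C + A + B, dat )
anova( mod.1, mod.2, mod.3, mod.4 )       # sequential tests in exactly this order
# note: how R codes B:C when its main effects are absent is a subtlety we ignore here``````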
Type III sum of squares
Having just finished talking about Type I tests, you might think that the natural thing to do next would be to talk about Type II tests. However, I think it’s actually a bit more natural to discuss Type III tests (which are simple) before talking about Type II tests (which are trickier). The basic idea behind Type III tests is extremely simple: regardless of which term you’re trying to evaluate, run the F-test in which the alternative hypothesis corresponds to the full ANOVA model as specified by the user, and the null model just deletes that one term that you’re testing. For instance, in the coffee example, in which our full model was `babble ~ sugar + milk + sugar:milk`, the test for a main effect of `sugar` would correspond to a comparison between the following two models:
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`babble ~ milk + sugar:milk`",
"Alternative model:", "`babble ~ sugar + milk + sugar:milk`"
), col.names= c("", ""))``````
Null model: `babble ~ milk + sugar:milk`
Alternative model: `babble ~ sugar + milk + sugar:milk`
Similarly the main effect of `milk` is evaluated by testing the full model against a null model that removes the `milk` term, like so:
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`babble ~ sugar + sugar:milk`",
"Alternative model:", "`babble ~ sugar + milk + sugar:milk`"
), col.names= c("", ""))``````
Null model: `babble ~ sugar + sugar:milk`
Alternative model: `babble ~ sugar + milk + sugar:milk`
Finally, the interaction term `sugar:milk` is evaluated in exactly the same way. Once again, we test the full model against a null model that removes the `sugar:milk` interaction term, like so:
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`babble ~ sugar + milk`",
"Alternative model:", "`babble ~ sugar + milk + sugar:milk`"
), col.names= c("", ""))``````
Null model: `babble ~ sugar + milk`
Alternative model: `babble ~ sugar + milk + sugar:milk`
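Notice that this last comparison is one we have already run: the null model here is `mod.3` and the alternative is `mod.4` from the start of this section, so you can reproduce the interaction test by hand if you like:
``````anova( lm( babble ~ sugar + milk, coffee ),
       lm( babble ~ sugar + milk + sugar:milk, coffee ) )   # F = 11.28, p = .0018``````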
The basic idea generalises to higher order ANOVAs. For instance, suppose that we were trying to run an ANOVA with three factors, `A`, `B` and `C`, and we wanted to consider all possible main effects and all possible interactions, including the three way interaction `A:B:C`. The table below shows you what the Type III tests look like for this situation:
``````knitr::kable(tibble::tribble(
~V1, ~V2, ~V3,
"`A`", "`B + C + A:B + A:C + B:C + A:B:C`", "`A + B + C + A:B + A:C + B:C + A:B:C`",
"`B`", "`A + C + A:B + A:C + B:C + A:B:C`", "`A + B + C + A:B + A:C + B:C + A:B:C`",
"`C`", "`A + B + A:B + A:C + B:C + A:B:C`", "`A + B + C + A:B + A:C + B:C + A:B:C`",
"`A:B`", "`A + B + C + A:C + B:C + A:B:C`", "`A + B + C + A:B + A:C + B:C + A:B:C`",
"`A:C`", "`A + B + C + A:B + B:C + A:B:C`", "`A + B + C + A:B + A:C + B:C + A:B:C`",
"`B:C`", "`A + B + C + A:B + A:C + A:B:C`", "`A + B + C + A:B + A:C + B:C + A:B:C`",
"`A:B:C`", "`A + B + C + A:B + A:C + B:C`", "`A + B + C + A:B + A:C + B:C + A:B:C`"
), col.names = c( "Term being tested is", "Null model is `outcome ~ ...`", "Alternative model is `outcome ~ ...`"))``````
Term being tested is Null model is `outcome ~ ...` Alternative model is `outcome ~ ...`
`A` `B + C + A:B + A:C + B:C + A:B:C` `A + B + C + A:B + A:C + B:C + A:B:C`
`B` `A + C + A:B + A:C + B:C + A:B:C` `A + B + C + A:B + A:C + B:C + A:B:C`
`C` `A + B + A:B + A:C + B:C + A:B:C` `A + B + C + A:B + A:C + B:C + A:B:C`
`A:B` `A + B + C + A:C + B:C + A:B:C` `A + B + C + A:B + A:C + B:C + A:B:C`
`A:C` `A + B + C + A:B + B:C + A:B:C` `A + B + C + A:B + A:C + B:C + A:B:C`
`B:C` `A + B + C + A:B + A:C + A:B:C` `A + B + C + A:B + A:C + B:C + A:B:C`
`A:B:C` `A + B + C + A:B + A:C + B:C` `A + B + C + A:B + A:C + B:C + A:B:C`
As ugly as that table looks, it's pretty simple. In all cases, the alternative hypothesis corresponds to the full model, which contains three main effect terms (e.g. `A`), three first order interactions (e.g. `A:B`) and one second order interaction (i.e., `A:B:C`). The null model always contains 6 of these 7 terms, and the missing one is the one whose significance we're trying to test.
At first pass, Type III tests seem like a nice idea. Firstly, we’ve removed the asymmetry that caused us to have problems when running Type I tests. And because we’re now treating all terms the same way, the results of the hypothesis tests do not depend on the order in which we specify them. This is definitely a good thing. However, there is a big problem when interpreting the results of the tests, especially for main effect terms. Consider the coffee data. Suppose it turns out that the main effect of `milk` is not significant according to the Type III tests. What this is telling us is that `babble ~ sugar + sugar:milk` is a better model for the data than the full model. But what does that even mean? If the interaction term `sugar:milk` was also non-significant, we’d be tempted to conclude that the data are telling us that the only thing that matters is `sugar`. But suppose we have a significant interaction term, but a non-significant main effect of `milk`. In this case, are we to assume that there really is an “effect of sugar”, an “interaction between milk and sugar”, but no “effect of milk”? That seems crazy. The right answer simply must be that it’s meaningless249 to talk about the main effect if the interaction is significant. In general, this seems to be what most statisticians advise us to do, and I think that’s the right advice. But if it really is meaningless to talk about non-significant main effects in the presence of a significant interaction, then it’s not at all obvious why Type III tests should allow the null hypothesis to rely on a model that includes the interaction but omits one of the main effects that make it up. When characterised in this fashion, the null hypotheses really don’t make much sense at all.
Later on, we’ll see that Type III tests can be redeemed in some contexts, but I’d better show you how to actually compute a Type III ANOVA first. The `anova()` function in R does not directly support Type II tests or Type III tests. Technically, you can do it by creating the various models that form the null and alternative hypotheses for each test, and then using `anova()` to compare the models to one another. I outlined the gist of how that would be done when talking about Type I tests, but speaking from first hand experience250 I can tell you that it’s very tedious. In practice, the `anova()` function is only used to produce Type I tests or to compare specific models of particular interest (see Section 16.5). If you want Type II or Type III tests you need to use the `Anova()` function in the `car` package. It’s pretty easy to use, since there’s a `type` argument that you specify. So, to return to our coffee example, our Type III tests are run as follows:
``````mod <- lm( babble ~ sugar * milk, coffee )
Anova( mod, type=3 )``````
``````## Anova Table (Type III tests)
##
## Response: babble
## Sum Sq Df F value Pr(>F)
## (Intercept) 41.070 1 155.839 3.11e-08 ***
## sugar 5.880 2 11.156 0.001830 **
## milk 4.107 1 15.584 0.001936 **
## sugar:milk 5.944 2 11.277 0.001754 **
## Residuals 3.162 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
As you can see, I got lazy this time and used `sugar * milk` as a shorthand way of referring to `sugar + milk + sugar:milk`. The important point here is that this is just a regular ANOVA table, and we can see that our Type III tests are significant for all terms, even the intercept.
Except, as usual, it's not that simple. One of the perverse features of the Type III testing strategy is that the results turn out to depend on the contrasts that you use to encode your factors (see Section 16.7 if you've forgotten what the different types of contrasts are). The results that I presented in the ANOVA table above are based on the R default, which is treatment contrasts; and as we'll see later, this is usually a very poor choice if you want to run Type III tests. So let's see what happens if we switch to Helmert contrasts:
`````` my.contrasts <- list( milk = "contr.helmert", sugar = "contr.helmert" )
mod.H <- lm( babble ~ sugar * milk, coffee, contrasts = my.contrasts )
Anova( mod.H, type=3 )``````
``````## Anova Table (Type III tests)
##
## Response: babble
## Sum Sq Df F value Pr(>F)
## (Intercept) 434.29 1 1647.8882 3.231e-14 ***
## sugar 2.13 2 4.0446 0.045426 *
## milk 1.00 1 3.8102 0.074672 .
## sugar:milk 5.94 2 11.2769 0.001754 **
## Residuals 3.16 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
Oh, that’s not good at all. In the case of `milk` in particular, the p-value has changed from .002 to .07. This is a pretty substantial difference, and hopefully it gives you a sense of how important it is that you take care when using Type III tests.
Okay, so if the p-values that come out of Type III analyses are so sensitive to the choice of contrasts, does that mean that Type III tests are essentially arbitrary and not to be trusted? To some extent that's true, and when we turn to a discussion of Type II tests we'll see that Type II analyses avoid this arbitrariness entirely, but I think that's too strong a conclusion. Firstly, it's important to recognise that some choices of contrasts will always produce the same answers. Of particular importance is the fact that if the columns of our contrast matrix are all constrained to sum to zero, then the Type III analysis will always give the same answers. This means that you'll get the same answers if you use `contr.helmert` or `contr.sum` or `contr.poly`, but different answers for `contr.treatment` or `contr.SAS`. To see this in action, let's construct a completely arbitrary contrast matrix whose columns are constrained to sum to zero, and check that the Type III analysis still gives the same answers:
``````random.contrasts <- matrix( rnorm(6), 3, 2 ) # create a random matrix
random.contrasts[, 1] <- random.contrasts[, 1] - mean( random.contrasts[, 1] ) # contrast 1 sums to 0
random.contrasts[, 2] <- random.contrasts[, 2] - mean( random.contrasts[, 2] ) # contrast 2 sums to 0
random.contrasts # print it to check that we really have an arbitrary contrast matrix...``````
``````## [,1] [,2]
## [1,] -1.523759 0.6891740
## [2,] -0.334936 0.9999209
## [3,] 1.858695 -1.6890949``````
`````` contrasts( coffee$sugar ) <- random.contrasts # random contrasts for sugar
contrasts( coffee$milk ) <- contr.helmert(2) # Helmert contrasts for the milk factor
mod.R <- lm( babble ~ sugar * milk, coffee ) # R will use the contrasts that we assigned
Anova( mod.R, type = 3 )``````
``````## Anova Table (Type III tests)
##
## Response: babble
## Sum Sq Df F value Pr(>F)
## (Intercept) 434.29 1 1647.8882 3.231e-14 ***
## sugar 2.13 2 4.0446 0.045426 *
## milk 1.00 1 3.8102 0.074672 .
## sugar:milk 5.94 2 11.2769 0.001754 **
## Residuals 3.16 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
Yep, same answers.
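If you want to convince yourself of the sum-to-zero claim more generally, swapping in `contr.sum` should reproduce exactly the same Type III table (a quick sketch, not shown here):
``````mod.S <- lm( babble ~ sugar * milk, coffee,
             contrasts = list( sugar = "contr.sum", milk = "contr.sum" ) )
Anova( mod.S, type = 3 )   # same Type III table as the Helmert version above``````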
Type II sum of squares
Okay, so we’ve seen Type I and III tests now, and both are pretty straightforward: Type I tests are performed by gradually adding terms one at a time, whereas Type III tests are performed by taking the full model and looking to see what happens when you remove each term. However, both have some serious flaws: Type I tests are dependent on the order in which you enter the terms, and Type III tests are dependent on how you code up your contrasts. Because of these flaws, neither one is easy to interpret. Type II tests are a little harder to describe, but they avoid both of these problems, and as a result they are a little easier to interpret.
Type II tests are broadly similar to Type III tests: start with a "full" model, and test a particular term by deleting it from that model. However, Type II tests are based on the marginality principle, which states that you should not omit a lower order term from your model if there are any higher order ones that depend on it. So, for instance, if your model contains the interaction `A:B` (a higher order term), then it really ought to contain the main effects `A` and `B` (the lower order terms it depends on). Similarly, if it contains a three way interaction term `A:B:C`, then the model must also include the main effects `A`, `B` and `C` as well as the simpler interactions `A:B`, `A:C` and `B:C`. Type III tests routinely violate the marginality principle. For instance, consider the test of the main effect of `A` in the context of a three-way ANOVA that includes all possible interaction terms. According to Type III tests, our null and alternative models are:
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`outcome ~ B + C + A:B + A:C + B:C + A:B:C`",
"Alternative model:", "`outcome ~ A + B + C + A:B + A:C + B:C + A:B:C`"
), col.names = c("", ""))``````
Null model: `outcome ~ B + C + A:B + A:C + B:C + A:B:C`
Alternative model: `outcome ~ A + B + C + A:B + A:C + B:C + A:B:C`
Notice that the null hypothesis omits `A`, but includes `A:B`, `A:C` and `A:B:C` as part of the model. This, according to the Type II tests, is not a good choice of null hypothesis. What we should do instead, if we want to test the null hypothesis that `A` is not relevant to our `outcome`, is to specify the null hypothesis that is the most complicated model that does not rely on `A` in any form, even as an interaction. The alternative hypothesis corresponds to this null model plus a main effect term of `A`. This is a lot closer to what most people would intuitively think of as a “main effect of `A`”, and it yields the following as our Type II test of the main effect of `A`.251
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`outcome ~ B + C + B:C`",
"Alternative model:", "`outcome ~ A + B + C + B:C`"
), col.names = c("", ""))``````
Null model: `outcome ~ B + C + B:C`
Alternative model: `outcome ~ A + B + C + B:C`
Anyway, just to give you a sense of how the Type II tests play out, here’s the full table of tests that would be applied in a three-way factorial ANOVA:
``````knitr::kable(tibble::tribble(
~V1, ~V2, ~V3,
"`A`", "`B + C + B:C`", "`A + B + C + B:C`",
"`B`", "`A + C + A:C`", "`A + B + C + A:C`",
"`C`", "`A + B + A:B`", "`A + B + C + A:B`",
"`A:B`", "`A + B + C + A:C + B:C`", "`A + B + C + A:B + A:C + B:C`",
"`A:C`", "`A + B + C + A:B + B:C`", "`A + B + C + A:B + A:C + B:C`",
"`B:C`", "`A + B + C + A:B + A:C`", "`A + B + C + A:B + A:C + B:C`",
"`A:B:C`", "`A + B + C + A:B + A:C + B:C`", "`A + B + C + A:B + A:C + B:C + A:B:C`"
), col.names = c( "Term being tested is", "Null model is `outcome ~ ...`", "Alternative model is `outcome ~ ...`"))``````
Term being tested is Null model is `outcome ~ ...` Alternative model is `outcome ~ ...`
`A` `B + C + B:C` `A + B + C + B:C`
`B` `A + C + A:C` `A + B + C + A:C`
`C` `A + B + A:B` `A + B + C + A:B`
`A:B` `A + B + C + A:C + B:C` `A + B + C + A:B + A:C + B:C`
`A:C` `A + B + C + A:B + B:C` `A + B + C + A:B + A:C + B:C`
`B:C` `A + B + C + A:B + A:C` `A + B + C + A:B + A:C + B:C`
`A:B:C` `A + B + C + A:B + A:C + B:C` `A + B + C + A:B + A:C + B:C + A:B:C`
In the context of the two way ANOVA that we’ve been using in the coffee data, the hypothesis tests are even simpler. The main effect of `sugar` corresponds to an F-test comparing these two models:
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`babble ~ milk`",
"Alternative model:", "`babble ~ sugar + milk`"
), col.names = c("", ""))``````
Null model: `babble ~ milk`
Alternative model: `babble ~ sugar + milk`
The test for the main effect of `milk` is
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`babble ~ sugar`",
"Alternative model:", "`babble ~ sugar + milk`"
), col.names = c("", ""))``````
Null model: `babble ~ sugar`
Alternative model: `babble ~ sugar + milk`
Finally, the test for the interaction `sugar:milk` is:
``````knitr::kable(tibble::tribble(
~V1, ~V2,
"Null model:", "`babble ~ sugar + milk`",
"Alternative model:", "`babble ~ sugar + milk + sugar:milk`"
), col.names = c("", ""))``````
Null model: `babble ~ sugar + milk`
Alternative model: `babble ~ sugar + milk + sugar:milk`
Running the tests is again straightforward. We use the `Anova()` function, specifying `type=2`:
``````mod <- lm( babble ~ sugar*milk, coffee )
Anova( mod, type = 2 )``````
``````## Anova Table (Type II tests)
##
## Response: babble
## Sum Sq Df F value Pr(>F)
## sugar 3.0696 2 5.8238 0.017075 *
## milk 0.9561 1 3.6279 0.081061 .
## sugar:milk 5.9439 2 11.2769 0.001754 **
## Residuals 3.1625 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
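In case you're wondering where these numbers come from, the Type II sum of squares for `sugar` is just the drop in residual sum of squares when `sugar` is added to a model that already contains `milk` (but not the interaction), and the F statistic in the `Anova()` table then divides that (per degree of freedom) by the residual mean square of the full model. Here's a quick sketch of the arithmetic (the object name `ss.drop` is just one I've made up):
``````ss.drop <- deviance( lm( babble ~ milk, coffee ) ) -         # residual SS without sugar
           deviance( lm( babble ~ sugar + milk, coffee ) )   # residual SS with sugar added
ss.drop                        # 3.0696, the same value as the sugar row above
( ss.drop / 2 ) / 0.26354      # divide by the full model residual MS: F = 5.82``````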
Type II tests have some clear advantages over Type I and Type III tests. They don’t depend on the order in which you specify factors (unlike Type I), and they don’t depend on the contrasts that you use to specify your factors (unlike Type III). And although opinions may differ on this last point, and it will definitely depend on what you’re trying to do with your data, I do think that the hypothesis tests that they specify are more likely to correspond to something that you actually care about. As a consequence, I find that it’s usually easier to interpret the results of a Type II test than the results of a Type I or Type III test. For this reason, my tentative advice is that, if you can’t think of any obvious model comparisons that directly map onto your research questions but you still want to run an ANOVA in an unbalanced design, Type II tests are probably a better choice than Type I or Type III.252
Effect sizes (and non-additive sums of squares)
The `etaSquared()` function in the `lsr` package computes η2 and partial η2 values for unbalanced designs and for different Types of tests. It’s pretty straightforward. All you have to do is indicate which `type` of tests you’re doing,
``etaSquared( mod, type=2 )``
``````## eta.sq eta.sq.part
## sugar 0.22537682 0.4925493
## milk 0.07019886 0.2321436
## sugar:milk 0.43640732 0.6527155``````
and out pops the η2 and partial η2 values, as requested. However, when you’ve got an unbalanced design, there’s a bit of extra complexity involved. To see why, let’s expand the output from the `etaSquared()` function so that it displays the full ANOVA table:
`````` es <- etaSquared( mod, type=2, anova=TRUE )
es``````
``````## eta.sq eta.sq.part SS df MS F
## sugar 0.22537682 0.4925493 3.0696323 2 1.5348161 5.823808
## milk 0.07019886 0.2321436 0.9561085 1 0.9561085 3.627921
## sugar:milk 0.43640732 0.6527155 5.9438677 2 2.9719339 11.276903
## Residuals 0.23219530 NA 3.1625000 12 0.2635417 NA
## p
## sugar 0.017075099
## milk 0.081060698
## sugar:milk 0.001754333
## Residuals NA``````
Okay, if you remember back to our very early discussions of ANOVA, one of the key ideas behind the sums of squares calculations is that if we add up all the SS terms associated with the effects in the model, and add that to the residual SS, they're supposed to add up to the total sum of squares. And, on top of that, the whole idea behind η2 is that – because you're dividing one of the SS terms by the total SS value – an η2 value can be interpreted as the proportion of variance accounted for by a particular term.
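For example, you can reconstruct the `sugar` row of that table by hand from the sums of squares shown earlier (the total SS of 13.62 is computed from the raw data a little further below):
``````3.0696 / 13.62                 # eta-squared for sugar: SS divided by the total SS
3.0696 / ( 3.0696 + 3.1625 )   # partial eta-squared: SS divided by (SS + residual SS)``````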
Now take a look at the output above. Because I’ve included the η2 value associated with the residuals (i.e., proportion of variance in the outcome attributed to the residuals, rather than to one of the effects), you’d expect all the η2 values to sum to 1. Because, the whole idea here was that the variance in the outcome variable can be divided up into the variability attributable to the model, and the variability in the residuals. Right? Right? And yet when we add up the η2 values for our model…
``sum( es[,"eta.sq"] )``
``## [1] 0.9641783``
… we discover that for Type II and Type III tests they generally don’t sum to 1. Some of the variability has gone “missing”. It’s not being attributed to the model, and it’s not being attributed to the residuals either. What’s going on here?
Before giving you the answer, I want to push this idea a little further. From a mathematical perspective, it's easy enough to see that the missing variance is a consequence of the fact that in Types II and III, the individual SS values are not obliged to add up to the total sum of squares, and will only do so if you have balanced data. I'll explain why this happens and what it means in a second, but first let's verify that this is the case using the ANOVA tables. First, we can calculate the total sum of squares directly from the raw data:
``````ss.tot <- sum( (coffee$babble - mean(coffee$babble))^2 )
ss.tot``````
``## [1] 13.62``
Next, we can read off all the SS values from one of our Type I ANOVA tables, and add them up. As you can see, this gives us the same answer, just like it’s supposed to:
`````` type.I.sum <- 3.5575 + 0.9561 + 5.9439 + 3.1625
type.I.sum``````
``## [1] 13.62``
However, when we do the same thing for the Type II ANOVA table, it turns out that the SS values in the table add up to slightly less than the total SS value:
``````type.II.sum <- 0.9561 + 3.0696 + 5.9439 + 3.1625
type.II.sum``````
``## [1] 13.1321``
So, once again, we can see that there’s a little bit of variance that has “disappeared” somewhere.
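Just to put a number on it, the gap between the two is easy to compute:
``ss.tot - type.II.sum   # 0.4879 units of variance that aren't attributed to anything``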
Okay, time to explain what’s happened. The reason why this happens is that, when you have unbalanced designs, your factors become correlated with one another, and it becomes difficult to tell the difference between the effect of Factor A and the effect of Factor B. In the extreme case, suppose that we’d run a 2×2 design in which the number of participants in each group had been as follows:
``````knitr::kable(tibble::tribble(
~V1, ~V2, ~V3,
"milk", "100", "0",
"no milk", "0", "100"
), col.names = c( "", "sugar", "no sugar"))``````
sugar no sugar
milk 100 0
no milk 0 100
Here we have a spectacularly unbalanced design: 100 people have milk and sugar, 100 people have no milk and no sugar, and that’s all. There are 0 people with milk and no sugar, and 0 people with sugar but no milk. Now suppose that, when we collected the data, it turned out there is a large (and statistically significant) difference between the “milk and sugar” group and the “no-milk and no-sugar” group. Is this a main effect of sugar? A main effect of milk? Or an interaction? It’s impossible to tell, because the presence of sugar has a perfect association with the presence of milk. Now suppose the design had been a little more balanced:
``````knitr::kable(tibble::tribble(
~V1, ~V2, ~V3,
"milk", "100", "5",
"no milk", "5", "100"
), col.names = c( "", "sugar", "no sugar"))``````
sugar no sugar
milk 100 5
no milk 5 100
This time around, it’s technically possible to distinguish between the effect of milk and the effect of sugar, because we have a few people that have one but not the other. However, it will still be pretty difficult to do so, because the association between sugar and milk is still extremely strong, and there are so few observations in two of the groups. Again, we’re very likely to be in the situation where we know that the predictor variables (milk and sugar) are related to the outcome (babbling), but we don’t know if the nature of that relationship is a main effect of one predictor, or the other predictor or the interaction.
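As an aside, if you ever want to check how unbalanced your own design is, a quick cross-tabulation of the factors is all it takes; for the coffee data it would be something like this:
``xtabs( ~ sugar + milk, coffee )``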
This uncertainty is the reason for the missing variance. The "missing" variance corresponds to variation in the outcome variable that is clearly attributable to the predictors, but we don't know which of the effects in the model is responsible. When you calculate Type I sum of squares, no variance ever goes missing: the sequential nature of Type I sum of squares means that the ANOVA automatically attributes this variance to whichever effects are entered first. However, the Type II and Type III tests are more conservative. Variance that cannot be clearly attributed to a specific effect doesn't get attributed to any of them, and it goes missing.
Summary
• Factorial ANOVA with balanced designs, without interactions (Section 16.1) and with interactions included (Section 16.2)
• Effect size, estimated means, and confidence intervals in a factorial ANOVA (Section 16.3)
• Understanding the linear model underlying ANOVA (Sections 16.5, 16.6 and 16.7)
• Post hoc testing using Tukey’s HSD (Section 16.8), and a brief commentary on planned comparisons (Section 16.9)
• Factorial ANOVA with unbalanced designs (Section 16.10)
1. The R command we need is `xtabs(~ drug+gender, clin.trial)`.
2. The nice thing about the subscript notation is that it generalises nicely: if our experiment had involved a third factor, then we could just add a third subscript. In principle, the notation extends to as many factors as you might care to include, but in this book we'll rarely consider analyses involving more than two factors, and never more than three.
3. Technically, marginalising isn’t quite identical to a regular mean: it’s a weighted average, where you take into account the frequency of the different events that you’re averaging over. However, in a balanced design, all of our cell frequencies are equal by definition, so the two are equivalent. We’ll discuss unbalanced designs later, and when we do so you’ll see that all of our calculations become a real headache. But let’s ignore this for now.
4. English translation: “least tedious”.
5. This chapter seems to be setting a new record for the number of different things that the letter R can stand for: so far we have R referring to the software package, the number of rows in our table of means, the residuals in the model, and now the correlation coefficient in a regression. Sorry: we clearly don’t have enough letters in the alphabet. However, I’ve tried pretty hard to be clear on which thing R is referring to in each case.
6. Implausibly large, I would think: the artificiality of this data set is really starting to show!
7. In fact, there’s a function `Effect()` within the `effects` package that has slightly different arguments, but computes the same things, and won’t give you this warning message.
8. Due to the way that the `leveneTest()` function is implemented, however, if you use a formula like `mood.gain ~ drug + therapy + drug:therapy`, or input an ANOVA object based on a formula like this, you actually get the error message. That shouldn't happen, because this actually is a fully crossed model. However, there's a quirky shortcut in the way that the `leveneTest()` function checks whether your model is fully crossed that means that it doesn't recognise this as a fully crossed model. Essentially what the function is doing is checking that you used `*` (which ensures that the model is fully crossed), and not `+` or `:` in your model formula. So if you've manually typed out all of the relevant terms for a fully crossed model, the `leveneTest()` function doesn't detect it. I think this is a bug.
9. There could be all sorts of reasons for doing this, I would imagine.
10. This is cheating in some respects: because ANOVA and regression are provably the same thing, R is lazy: if you read the help documentation closely, you’ll notice that the `aov()` function is actually just the `lm()` function in disguise! But we shan’t let such things get in the way of our story, shall we?
11. In the example given above, I've typed `summary( regression.model )` to get the hypothesis tests. However, the `summary()` function does produce a lot of output, which is why I've used the `BLAH BLAH BLAH` text to hide the unnecessary parts of the output. But in fact, you can use the `coef()` function to do the same job. If you use the command `coef( summary( regression.model ))` you'll get exactly the same output that I've shown above (minus the `BLAH BLAH BLAH`). Compare and contrast this to the output of `coef( regression.model )`.
12. Advanced users may want to look into the `model.matrix()` function, which produces similar output. Alternatively, you can use a command like `contr.treatment(3)[clin.trial$drug,]`. I'll talk about the `contr.treatment()` function later.
13. Future versions of this book will try to be a bit more consistent with the naming scheme for variables. One of the many problems with having to write a lengthy text very quickly to meet a teaching deadline is that you lose some internal consistency.
14. The `lsr` package contains a more general function called `permuteLevels()` that can shuffle them in any way you like.
15. Technically, this list actually stores the functions themselves. R allows lists to contain functions, which is really neat for advanced purposes, but not something that matters for this book.
16. If, for instance, you actually would find yourself interested to know if Group A is significantly different from the mean of Group B and Group C, then you need to use a different tool (e.g., Scheffe's method, which is more conservative, and beyond the scope of this book). However, in most cases you probably are interested in pairwise group differences, so Tukey's HSD is a pretty useful thing to know about.
17. This discrepancy in standard deviations might (and should) make you wonder if we have a violation of the homogeneity of variance assumption. I’ll leave it as an exercise for the reader to check this using the `leveneTest()` function.
18. Actually, this is a bit of a lie. ANOVAs can vary in other ways besides the ones I’ve discussed in this book. For instance, I’ve completely ignored the difference between fixed-effect models, in which the levels of a factor are “fixed” by the experimenter or the world, and random-effect models, in which the levels are random samples from a larger population of possible levels (this book only covers fixed-effect models). Don’t make the mistake of thinking that this book – or any other one – will tell you “everything you need to know” about statistics, any more than a single book could possibly tell you everything you need to know about psychology, physics or philosophy. Life is too complicated for that to ever be true. This isn’t a cause for despair, though. Most researchers get by with a basic working knowledge of ANOVA that doesn’t go any further than this book does. I just want you to keep in mind that this book is only the beginning of a very long story, not the whole story.
19. The one thing that might seem a little opaque to some people is why the residual degrees of freedom in this output look different from one another (i.e., ranging from 12 to 17) whereas in the original one the residual degrees of freedom are fixed at 12. It's actually the case that R uses a residual df of 12 in all cases (that's why the p-values are the same in the two outputs): it's enough to verify that `pf(6.7495, 2, 12, lower.tail=FALSE)` gives the correct answer of p=.010863, for instance, whereas `pf(6.7495, 2, 15, lower.tail=FALSE)` would have given a p-value of about .00812. It's the residual degrees of freedom in the full model (i.e., the last one) that matters here.
20. Or, at the very least, rarely of interest.
21. Yes, I’m actually a big enough nerd that I’ve written my own functions implementing Type II tests and Type III tests. I only did it to convince myself that I knew how the different Types of test worked, but it did turn out to be a handy exercise: the `etaSquared()` function in the `lsr` package relies on it. There’s actually even an argument in the `etaSquared()` function called `anova`. By default, `anova=FALSE` and the function just prints out the effect sizes. However, if you set `anova=TRUE` it will spit out the full ANOVA table as well. This works for Types I, II and III. Just set the `types` argument to select which type of test you want.
22. Note, of course, that this does depend on the model that the user specified. If the original ANOVA model doesn't contain an interaction term for `B:C`, then obviously it won't appear in either the null or the alternative. But that's true for Types I, II and III. They never include any terms that you didn't include, but they make different choices about how to construct tests for the ones that you did include.
23. I find it amusing to note that the default in R is Type I and the default in SPSS is Type III (with Helmert contrasts). Neither of these appeals to me all that much. Relatedly, I find it depressing that almost nobody in the psychological literature ever bothers to report which Type of tests they ran, much less the order of variables (for Type I) or the contrasts used (for Type III). Often they don't report what software they used either. The only way I can ever make any sense of what people typically report is to try to guess from auxiliary cues which software they were using, and to assume that they never changed the default settings. Please don't do this… now that you know about these issues, make sure you indicate what software you used, and if you're reporting ANOVA results for unbalanced data, then specify what Type of tests you ran, specify order information if you've done Type I tests and specify contrasts if you've done Type III tests. Or, even better, do hypothesis tests that correspond to things you really care about, and then report those!
17: Bayesian Statistics
In our reasonings concerning matter of fact, there are all imaginable degrees of assurance, from the highest certainty to the lowest species of moral evidence. A wise man, therefore, proportions his belief to the evidence. – David Hume253.
The ideas I’ve presented to you in this book describe inferential statistics from the frequentist perspective. I’m not alone in doing this. In fact, almost every textbook given to undergraduate psychology students presents the opinions of the frequentist statistician as the theory of inferential statistics, the one true way to do things. I have taught this way for practical reasons. The frequentist view of statistics dominated the academic field of statistics for most of the 20th century, and this dominance is even more extreme among applied scientists. It was and is current practice among psychologists to use frequentist methods. Because frequentist methods are ubiquitous in scientific papers, every student of statistics needs to understand those methods, otherwise they will be unable to make sense of what those papers are saying! Unfortunately – in my opinion at least – the current practice in psychology is often misguided, and the reliance on frequentist methods is partly to blame. In this chapter I explain why I think this, and provide an introduction to Bayesian statistics, an approach that I think is generally superior to the orthodox approach.
This chapter comes in two parts. In Sections 17.1 through 17.3 I talk about what Bayesian statistics are all about, covering the basic mathematical rules for how it works as well as an explanation for why I think the Bayesian approach is so useful. Afterwards, I provide a brief overview of how you can do Bayesian versions of chi-square tests (Section 17.6), t-tests (Section 17.7), regression (Section 17.8) and ANOVA (Section 17.9).
Probabilistic reasoning by rational agents
From a Bayesian perspective, statistical inference is all about belief revision. I start out with a set of candidate hypotheses h about the world. I don't know which of these hypotheses is true, but I do have some beliefs about which hypotheses are plausible and which are not. When I observe the data d, I have to revise those beliefs. If the data are consistent with a hypothesis, my belief in that hypothesis is strengthened. If the data are inconsistent with the hypothesis, my belief in that hypothesis is weakened. That's it! At the end of this section I'll give a precise description of how Bayesian reasoning works, but first I want to work through a simple example in order to introduce the key ideas. Consider the following reasoning problem:
I’m carrying an umbrella. Do you think it will rain?
In this problem, I have presented you with a single piece of data (d= I’m carrying the umbrella), and I’m asking you to tell me your beliefs about whether it’s raining. You have two possible hypotheses, h: either it rains today or it does not. How should you solve this problem?
Priors: what you believed before
The first thing you need to do is ignore what I told you about the umbrella, and write down your pre-existing beliefs about rain. This is important: if you want to be honest about how your beliefs have been revised in the light of new evidence, then you must say something about what you believed before those data appeared! So, what might you believe about whether it will rain today? You probably know that I live in Australia, and that much of Australia is hot and dry. And in fact you're right: the city of Adelaide where I live has a Mediterranean climate, very similar to southern California, southern Europe or northern Africa. I'm writing this in January, and so you can assume it's the middle of summer. In fact, you might have decided to take a quick look on Wikipedia254 and discovered that Adelaide gets an average of 4.4 days of rain across the 31 days of January. Without knowing anything else, you might conclude that the probability of January rain in Adelaide is about 15%, and the probability of a dry day is 85%. If this is really what you believe about Adelaide rainfall (and now that I've told it to you, I'm betting that this really is what you believe) then what I have written here is your prior distribution, written P(h):
Hypothesis Degree of Belief
Rainy day 0.15
Dry day 0.85
Likelihoods: theories about the data
To solve the reasoning problem, you need a theory about my behaviour. When does Dan carry an umbrella? You might guess that I’m not a complete idiot,255 and I try to carry umbrellas only on rainy days. On the other hand, you also know that I have young kids, and you wouldn’t be all that surprised to know that I’m pretty forgetful about this sort of thing. Let’s suppose that on rainy days I remember my umbrella about 30% of the time (I really am awful at this). But let’s say that on dry days I’m only about 5% likely to be carrying an umbrella. So you might write out a little table like this:
Hypothesis Umbrella No umbrella
Rainy day 0.30 0.70
Dry day 0.05 0.95
It’s important to remember that each cell in this table describes your beliefs about what data d will be observed, given the truth of a particular hypothesis h. This “conditional probability” is written P(d|h), which you can read as “the probability of d given h”. In Bayesian statistics, this is referred to as likelihood of data d given hypothesis h.256
The joint probability of data and hypothesis
At this point, all the elements are in place. Having written down the priors and the likelihood, you have all the information you need to do Bayesian reasoning. The question now becomes, how do we use this information? As it turns out, there’s a very simple equation that we can use here, but it’s important that you understand why we use it, so I’m going to try to build it up from more basic ideas.
Let’s start out with one of the rules of probability theory. I listed it way back in Table 9.1, but I didn’t make a big deal out of it at the time and you probably ignored it. The rule in question is the one that talks about the probability that two things are true. In our example, you might want to calculate the probability that today is rainy (i.e., hypothesis h is true) and I’m carrying an umbrella (i.e., data d is observed). The joint probability of the hypothesis and the data is written P(d,h), and you can calculate it by multiplying the prior P(h) by the likelihood P(d|h). Mathematically, we say that:
$P(d,h)=P(d|h)P(h)$
So, what is the probability that today is a rainy day and I remember to carry an umbrella? As we discussed earlier, the prior tells us that the probability of a rainy day is 15%, and the likelihood tells us that the probability of me remembering my umbrella on a rainy day is 30%. So the probability that both of these things are true is calculated by multiplying the two:
$\begin{aligned} P(\text{rainy, umbrella}) &= P(\text{umbrella} | \text{rainy}) \times P(\text{rainy}) \\ &= 0.30 \times 0.15 \\ &= 0.045 \end{aligned}$
In other words, before being told anything about what actually happened, you think that there is a 4.5% probability that today will be a rainy day and that I will remember an umbrella. However, there are of course four possible things that could happen, right? So let’s repeat the exercise for all four. If we do that, we end up with the following table:
Umbrella No-umbrella
Rainy 0.045 0.105
Dry 0.0425 0.8075
This table captures all the information about which of the four possibilities are likely. To really get the full picture, though, it helps to add the row totals and column totals. That gives us this table:
Umbrella No-umbrella Total
Rainy 0.0450 0.1050 0.15
Dry 0.0425 0.8075 0.85
Total 0.0875 0.9125 1
This is a very useful table, so it’s worth taking a moment to think about what all these numbers are telling us. First, notice that the row sums aren’t telling us anything new at all. For example, the first row tells us that if we ignore all this umbrella business, the chance that today will be a rainy day is 15%. That’s not surprising, of course: that’s our prior. The important thing isn’t the number itself: rather, the important thing is that it gives us some confidence that our calculations are sensible! Now take a look at the column sums, and notice that they tell us something that we haven’t explicitly stated yet. In the same way that the row sums tell us the probability of rain, the column sums tell us the probability of me carrying an umbrella. Specifically, the first column tells us that on average (i.e., ignoring whether it’s a rainy day or not), the probability of me carrying an umbrella is 8.75%. Finally, notice that when we sum across all four logically-possible events, everything adds up to 1. In other words, what we have written down is a proper probability distribution defined over all possible combinations of data and hypothesis.
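If you'd like to verify these calculations yourself, here's a minimal R sketch (the object names `prior`, `likelihood` and `joint` are just ones I've made up):
``````prior <- c( rainy = 0.15, dry = 0.85 )
likelihood <- rbind( rainy = c( umbrella = 0.30, no.umbrella = 0.70 ),
                     dry   = c( umbrella = 0.05, no.umbrella = 0.95 ) )
joint <- likelihood * prior   # R recycles the prior down the rows, so each row is scaled by its prior
joint                         # reproduces the joint probability table above
rowSums( joint )              # the priors: 0.15 and 0.85
colSums( joint )              # P(umbrella) = 0.0875, P(no umbrella) = 0.9125``````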
Now, because this table is so useful, I want to make sure you understand what all the elements correspond to, and how they are written:
Umbrella No-umbrella Total
Rainy P(Umbrella, Rainy) P(No-umbrella, Rainy) P(Rainy)
Dry P(Umbrella, Dry) P(No-umbrella, Dry) P(Dry)
Total P(Umbrella) P(No-umbrella)
Finally, let’s use “proper” statistical notation. In the rainy day problem, the data corresponds to the observation that I do or do not have an umbrella. So we’ll let d1 refer to the possibility that you observe me carrying an umbrella, and d2 refers to you observing me not carrying one. Similarly, h1 is your hypothesis that today is rainy, and h2 is the hypothesis that it is not. Using this notation, the table looks like this:
Updating beliefs using Bayes’ rule
The table we laid out in the last section is a very powerful tool for solving the rainy day problem, because it considers all four logical possibilities and states exactly how confident you are in each of them before being given any data. It’s now time to consider what happens to our beliefs when we are actually given the data. In the rainy day problem, you are told that I really am carrying an umbrella. This is something of a surprising event: according to our table, the probability of me carrying an umbrella is only 8.75%. But that makes sense, right? A guy carrying an umbrella on a summer day in a hot dry city is pretty unusual, and so you really weren’t expecting that. Nevertheless, the problem tells you that it is true. No matter how unlikely you thought it was, you must now adjust your beliefs to accommodate the fact that you now know that I have an umbrella.257 To reflect this new knowledge, our revised table must have the following numbers:
Umbrella No-umbrella
Rainy ? 0
Dry ? 0
Total 1 0
In other words, the facts have eliminated any possibility of “no umbrella”, so we have to put zeros into any cell in the table that implies that I’m not carrying an umbrella. Also, you know for a fact that I am carrying an umbrella, so the column sum on the left must be 1 to correctly describe the fact that P(umbrella)=1.
What two numbers should we put in the empty cells? Again, let’s not worry about the maths, and instead think about our intuitions. When we wrote out our table the first time, it turned out that those two cells had almost identical numbers, right? We worked out that the joint probability of “rain and umbrella” was 4.5%, and the joint probability of “dry and umbrella” was 4.25%. In other words, before I told you that I am in fact carrying an umbrella, you’d have said that these two events were almost identical in probability, yes? But notice that both of these possibilities are consistent with the fact that I actually am carrying an umbrella. From the perspective of these two possibilities, very little has changed. I hope you’d agree that it’s still true that these two possibilities are equally plausible. So what we expect to see in our final table is some numbers that preserve the fact that “rain and umbrella” is slightly more plausible than “dry and umbrella”, while still ensuring that numbers in the table add up. Something like this, perhaps?
Umbrella No-umbrella
Rainy 0.514 0
Dry 0.486 0
Total 1 0
What this table is telling you is that, after being told that I’m carrying an umbrella, you believe that there’s a 51.4% chance that today will be a rainy day, and a 48.6% chance that it won’t. That’s the answer to our problem! The posterior probability of rain P(h|d) given that I am carrying an umbrella is 51.4%
How did I calculate these numbers? You can probably guess. To work out that there was a 0.514 probability of “rain”, all I did was take the 0.045 probability of “rain and umbrella” and divide it by the 0.0875 chance of “umbrella”. This produces a table that satisfies our need to have everything sum to 1, and our need not to interfere with the relative plausibility of the two events that are actually consistent with the data. To say the same thing using fancy statistical jargon, what I’ve done here is divide the joint probability of the hypothesis and the data P(d,h) by the marginal probability of the data P(d), and this is what gives us the posterior probability of the hypothesis given that we know the data have been observed. To write this as an equation:258
$P(h | d)=\dfrac{P(d, h)}{P(d)}$
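In terms of the little R sketch from earlier, this is just the "umbrella" column of the joint table divided by its own sum:
``````posterior <- joint[ , "umbrella" ] / sum( joint[ , "umbrella" ] )
posterior   # rainy: 0.514, dry: 0.486``````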
However, remember what I said at the start of the last section, namely that the joint probability P(d,h) is calculated by multiplying the prior P(h) by the likelihood P(d|h). In real life, the things we actually know how to write down are the priors and the likelihood, so let’s substitute those back into the equation. This gives us the following formula for the posterior probability:
$\ P(h | d)=\dfrac{P(d | h)P(h)}{P(d)}$
And this formula, folks, is known as Bayes' rule. It describes how a learner starts out with prior beliefs about the plausibility of different hypotheses, and tells you how those beliefs should be revised in the face of data. In the Bayesian paradigm, all statistical inference flows from this one simple rule.
Bayesian hypothesis tests
In Chapter 11 I described the orthodox approach to hypothesis testing. It took an entire chapter to describe, because null hypothesis testing is a very elaborate contraption that people find very hard to make sense of. In contrast, the Bayesian approach to hypothesis testing is incredibly simple. Let's pick a setting that is closely analogous to the orthodox scenario. There are two hypotheses that we want to compare, a null hypothesis h0 and an alternative hypothesis h1. Prior to running the experiment we have some beliefs P(h) about which hypotheses are true. We run an experiment and obtain data d. Unlike frequentist statistics, Bayesian statistics does allow us to talk about the probability that the null hypothesis is true. Better yet, it allows us to calculate the posterior probability of the null hypothesis, using Bayes' rule:
$\ P(h_0 | d) = \dfrac{P(d | h_0)P(h_0)}{P(d)}$
This formula tells us exactly how much belief we should have in the null hypothesis after having observed the data d. Similarly, we can work out how much belief to place in the alternative hypothesis using essentially the same equation. All we do is change the subscript:
$\ P(h_1 | d) = \dfrac{P(d | h_1)P(h_1)}{P(d)}$
It’s all so simple that I feel like an idiot even bothering to write these equations down, since all I’m doing is copying Bayes rule from the previous section.259
Bayes factor
In practice, most Bayesian data analysts tend not to talk in terms of the raw posterior probabilities P(h0|d) and P(h1|d). Instead, we tend to talk in terms of the posterior odds ratio. Think of it like betting. Suppose, for instance, the posterior probability of the null hypothesis is 25%, and the posterior probability of the alternative is 75%. The alternative hypothesis is three times as probable as the null, so we say that the odds are 3:1 in favour of the alternative. Mathematically, all we have to do to calculate the posterior odds is divide one posterior probability by the other:
$\ \dfrac{P(h_1 | d)}{P(h_0 | d)}=\dfrac{0.75}{0.25}=3$
Or, to write the same thing in terms of the equations above:
$\ \dfrac{P(h_1 | d)}{P(h_0 | d)} = \dfrac{P(d | h_1)}{P(d | h_0)} \times \dfrac{P(h_1)}{P(h_0)}$
Actually, this equation is worth expanding on. There are three different terms here that you should know. On the left hand side, we have the posterior odds, which tells you what you believe about the relative plausibility of the null hypothesis and the alternative hypothesis after seeing the data. On the right hand side, we have the prior odds, which indicates what you thought before seeing the data. In the middle, we have the Bayes factor, which describes the amount of evidence provided by the data:
$\ \underbrace{\dfrac{P(h_1 | d)}{P(h_0 | d)}}_{\text{Posterior odds}} = \underbrace{\dfrac{P(d | h_1)}{P(d | h_0)}}_{\text{Bayes factor}} \times \underbrace{\dfrac{P(h_1)}{P(h_0)}}_{\text{Prior odds}}$
The Bayes factor (sometimes abbreviated as BF) has a special place in the Bayesian hypothesis testing, because it serves a similar role to the p-value in orthodox hypothesis testing: it quantifies the strength of evidence provided by the data, and as such it is the Bayes factor that people tend to report when running a Bayesian hypothesis test. The reason for reporting Bayes factors rather than posterior odds is that different researchers will have different priors. Some people might have a strong bias to believe the null hypothesis is true, others might have a strong bias to believe it is false. Because of this, the polite thing for an applied researcher to do is report the Bayes factor. That way, anyone reading the paper can multiply the Bayes factor by their own personal prior odds, and they can work out for themselves what the posterior odds would be. In any case, by convention we like to pretend that we give equal consideration to both the null hypothesis and the alternative, in which case the prior odds equals 1, and the posterior odds becomes the same as the Bayes factor.
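To make this concrete with the made-up numbers from above: a Bayes factor of 3 and prior odds of 1 give posterior odds of 3, corresponding to a 75% posterior probability for the alternative:
``````bf <- 3                                   # strength of evidence from the data
prior.odds <- 1                           # equal prior belief in both hypotheses
posterior.odds <- bf * prior.odds         # 3 to 1 in favour of the alternative
posterior.odds / ( 1 + posterior.odds )   # posterior probability of h1: 0.75``````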
Interpreting Bayes factors
One of the really nice things about the Bayes factor is the numbers are inherently meaningful. If you run an experiment and you compute a Bayes factor of 4, it means that the evidence provided by your data corresponds to betting odds of 4:1 in favour of the alternative. However, there have been some attempts to quantify the standards of evidence that would be considered meaningful in a scientific context. The two most widely used are from Jeffreys (1961) and Kass and Raftery (1995). Of the two, I tend to prefer the Kass and Raftery (1995) table because it’s a bit more conservative. So here it is:
Bayes factor Interpretation
1 - 3 Negligible evidence
3 - 20 Positive evidence
20 - 150 Strong evidence
$>$150 Very strong evidence
And to be perfectly honest, I think that even the Kass and Raftery standards are being a bit charitable. If it were up to me, I’d have called the “positive evidence” category “weak evidence”. To me, anything in the range 3:1 to 20:1 is “weak” or “modest” evidence at best. But there are no hard and fast rules here: what counts as strong or weak evidence depends entirely on how conservative you are, and upon the standards that your community insists upon before it is willing to label a finding as “true”.
In any case, note that all the numbers listed above make sense if the Bayes factor is greater than 1 (i.e., the evidence favours the alternative hypothesis). However, one big practical advantage of the Bayesian approach relative to the orthodox approach is that it also allows you to quantify evidence for the null. When that happens, the Bayes factor will be less than 1. You can choose to report a Bayes factor less than 1, but to be honest I find it confusing. For example, suppose that the likelihood of the data under the null hypothesis P(d|h0) is equal to 0.2, and the corresponding likelihood P(d|h1) under the alternative hypothesis is 0.1. Using the equations given above, the Bayes factor here would be:
$\ BF=\dfrac{P(d | h_1)}{P(d | h_0)}=\dfrac{0.1}{0.2}=0.5$
Read literally, this result tells us that the evidence in favour of the alternative is 0.5 to 1. I find this hard to understand. To me, it makes a lot more sense to turn the equation "upside down", and report the amount of evidence in favour of the null. In other words, what we calculate is this:
$\ BF^{\prime} = \dfrac{P(d | h_0)}{P(d | h_1)}=\dfrac{0.2}{0.1}=2$
And what we would report is a Bayes factor of 2:1 in favour of the null. Much easier to understand, and you can interpret this using the table above.
Up to this point I’ve focused exclusively on the logic underpinning Bayesian statistics. We’ve talked about the idea of “probability as a degree of belief”, and what it implies about how a rational agent should reason about the world. The question that you have to answer for yourself is this: how do you want to do your statistics? Do you want to be an orthodox statistician, relying on sampling distributions and p-values to guide your decisions? Or do you want to be a Bayesian, relying on Bayes factors and the rules for rational belief revision? And to be perfectly honest, I can’t answer this question for you. Ultimately it depends on what you think is right. It’s your call, and your call alone. That being said, I can talk a little about why I prefer the Bayesian approach.
Statistics that mean what you think they mean
You keep using that word. I do not think it means what you think it means
– Inigo Montoya, The Princess Bride260
To me, one of the biggest advantages to the Bayesian approach is that it answers the right questions. Within the Bayesian framework, it is perfectly sensible and allowable to refer to “the probability that a hypothesis is true”. You can even try to calculate this probability. Ultimately, isn’t that what you want your statistical tests to tell you? To an actual human being, this would seem to be the whole point of doing statistics: to determine what is true and what isn’t. Any time that you aren’t exactly sure about what the truth is, you should use the language of probability theory to say things like “there is an 80% chance that Theory A is true, but a 20% chance that Theory B is true instead”.
This seems so obvious to a human, yet it is explicitly forbidden within the orthodox framework. To a frequentist, such statements are nonsense because "the theory is true" is not a repeatable event. A theory is true or it is not, and no probabilistic statements are allowed, no matter how much you might want to make them. There's a reason why, back in Section 11.5, I repeatedly warned you not to interpret the p-value as the probability that the null hypothesis is true. There's a reason why almost every textbook on statistics is forced to repeat that warning. It's because people desperately want that to be the correct interpretation. Frequentist dogma notwithstanding, a lifetime of experience of teaching undergraduates and of doing data analysis on a daily basis suggests to me that most actual humans think that "the probability that the hypothesis is true" is not only meaningful, it's the thing we care most about. It's such an appealing idea that even trained statisticians fall prey to the mistake of trying to interpret a p-value this way. For example, here is a quote from an official Newspoll report in 2013, explaining how to interpret their (frequentist) data analysis:261
Throughout the report, where relevant, statistically significant changes have been noted. All significance tests have been based on the 95 percent level of confidence. This means that if a change is noted as being statistically significant, there is a 95 percent probability that a real change has occurred, and is not simply due to chance variation. (emphasis added)
Nope! That’s not what p<.05 means. That’s not what 95% confidence means to a frequentist statistician. The bolded section is just plain wrong. Orthodox methods cannot tell you that “there is a 95% chance that a real change has occurred”, because this is not the kind of event to which frequentist probabilities may be assigned. To an ideological frequentist, this sentence should be meaningless. Even if you’re a more pragmatic frequentist, it’s still the wrong definition of a p-value. It is simply not an allowed or correct thing to say if you want to rely on orthodox statistical tools.
On the other hand, let's suppose you are a Bayesian. Although the bolded passage is the wrong definition of a p-value, it's pretty much exactly what a Bayesian means when they say that the posterior probability of the alternative hypothesis is greater than 95%. And here's the thing. If the Bayesian posterior is actually the thing you want to report, why are you even trying to use orthodox methods? If you want to make Bayesian claims, all you have to do is be a Bayesian and use Bayesian tools.
Speaking for myself, I found this to be the most liberating thing about switching to the Bayesian view. Once you've made the jump, you no longer have to wrap your head around counterintuitive definitions of p-values. You don't have to bother remembering why you can't say that you're 95% confident that the true mean lies within some interval. All you have to do is be honest about what you believed before you ran the study, and then report what you learned from doing it. Sounds nice, doesn't it? To me, this is the big promise of the Bayesian approach: you do the analysis you really want to do, and express what you really believe the data are telling you.
17.04: Evidentiary Standards You Can Believe
If [p] is below .02 it is strongly indicated that the [null] hypothesis fails to account for the whole of the facts. We shall not often be astray if we draw a conventional line at .05 and consider that [smaller values of p] indicate a real discrepancy.
– Sir Ronald Fisher (1925)
Consider the quote above by Sir Ronald Fisher, one of the founders of what has become the orthodox approach to statistics. If anyone has ever been entitled to express an opinion about the intended function of p-values, it’s Fisher. In this passage, taken from his classic guide Statistical Methods for Research Workers, he’s pretty clear about what it means to reject a null hypothesis at p<.05. In his opinion, if we take p<.05 to mean there is “a real effect”, then “we shall not often be astray”. This view is hardly unusual: in my experience, most practitioners express views very similar to Fisher’s. In essence, the p<.05 convention is assumed to represent a fairly stringent evidentiary standard.
Well, how true is that? One way to approach this question is to try to convert p-values to Bayes factors, and see how the two compare. It’s not an easy thing to do because a p-value is a fundamentally different kind of calculation to a Bayes factor, and they don’t measure the same thing. However, there have been some attempts to work out the relationship between the two, and it’s somewhat surprising. For example, Johnson (2013) presents a pretty compelling case that (for t-tests at least) the p<.05 threshold corresponds roughly to a Bayes factor of somewhere between 3:1 and 5:1 in favour of the alternative. If that’s right, then Fisher’s claim is a bit of a stretch. Let’s suppose that the null hypothesis is true about half the time (i.e., the prior probability of H0 is 0.5), and we use those numbers to work out the posterior probability of the null hypothesis given that it has been rejected at p<.05. Using the data from Johnson (2013), we see that if you reject the null at p<.05, you’ll be correct about 80% of the time. I don’t know about you, but in my opinion an evidentiary standard that ensures you’ll be wrong on 20% of your decisions isn’t good enough. The fact remains that, quite contrary to Fisher’s claim, if you reject at p<.05 you shall quite often go astray. It’s not a very stringent evidentiary threshold at all. | textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/17%3A_Bayesian_Statistics/17.03%3A_Why_Be_a_Bayesian.txt |
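If you want to check that arithmetic for yourself, here is a quick sketch in R. The 3:1 to 5:1 range for the Bayes factor is the one attributed to Johnson (2013) above, and the 50:50 prior is the assumption stated in the text; the rest is just Bayes' rule.

``````bf <- c(3, 5)       # rough Bayes factors corresponding to "just significant" at p = .05
prior.odds <- 1     # 50:50 prior: the null and the alternative are equally likely
posterior.odds <- prior.odds * bf
posterior.odds / (1 + posterior.odds)   # chance you're right when you reject: 0.75 to 0.83``````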
The cake is a lie.
The cake is a lie.
The cake is a lie.
The cake is a lie.
– Portal262
Okay, at this point you might be thinking that the real problem is not with orthodox statistics, just the p<.05 standard. In one sense, that’s true. The recommendation that Johnson (2013) gives is not that “everyone must be a Bayesian now”. Instead, the suggestion is that it would be wiser to shift the conventional standard to something like a p<.01 level. That’s not an unreasonable view to take, but in my view the problem is a little more severe than that. In my opinion, there’s a fairly big problem built into the way most (but not all) orthodox hypothesis tests are constructed. They are grossly naive about how humans actually do research, and because of this most p-values are wrong.
Sounds like an absurd claim, right? Well, consider the following scenario. You’ve come up with a really exciting research hypothesis and you design a study to test it. You’re very diligent, so you run a power analysis to work out what your sample size should be, and you run the study. You run your hypothesis test and out pops a p-value of 0.072. Really bloody annoying, right?
What should you do? Here are some possibilities:
1. You conclude that there is no effect, and try to publish it as a null result
2. You guess that there might be an effect, and try to publish it as a “borderline significant” result
3. You give up and try a new study
4. You collect some more data to see if the p value goes up or (preferably!) drops below the “magic” criterion of p<.05
Which would you choose? Before reading any further, I urge you to take some time to think about it. Be honest with yourself. But don’t stress about it too much, because you’re screwed no matter what you choose. Based on my own experiences as an author, reviewer and editor, as well as stories I’ve heard from others, here’s what will happen in each case:
• Let’s start with option 1. If you try to publish it as a null result, the paper will struggle to be published. Some reviewers will think that p=.072 is not really a null result. They’ll argue it’s borderline significant. Other reviewers will agree it’s a null result, but will claim that even though some null results are publishable, yours isn’t. One or two reviewers might even be on your side, but you’ll be fighting an uphill battle to get it through.
• Okay, let’s think about option number 2. Suppose you try to publish it as a borderline significant result. Some reviewers will claim that it’s a null result and should not be published. Others will claim that the evidence is ambiguous, and that you should collect more data until you get a clear significant result. Again, the publication process does not favour you.
• Given the difficulties in publishing an "ambiguous" result like p=.072, option number 3 might seem tempting: give up and do something else. But that's a recipe for career suicide. If you give up and try a new project every time you find yourself faced with ambiguity, your work will never be published. And if you're in academia without a publication record you can lose your job. So that option is out.
• It looks like you’re stuck with option 4. You don’t have conclusive results, so you decide to collect some more data and re-run the analysis. Seems sensible, but unfortunately for you, if you do this all of your p-values are now incorrect. All of them. Not just the p-values that you calculated for this study. All of them. All the p-values you calculated in the past and all the p-values you will calculate in the future. Fortunately, no-one will notice. You’ll get published, and you’ll have lied.
Wait, what? How can that last part be true? I mean, it sounds like a perfectly reasonable strategy doesn't it? You collected some data, the results weren't conclusive, so now what you want to do is collect more data until the results are conclusive. What's wrong with that?
Honestly, there’s nothing wrong with it. It’s a reasonable, sensible and rational thing to do. In real life, this is exactly what every researcher does. Unfortunately, the theory of null hypothesis testing as I described it in Chapter 11 forbids you from doing this.263 The reason is that the theory assumes that the experiment is finished and all the data are in. And because it assumes the experiment is over, it only considers two possible decisions. If you’re using the conventional p<.05 threshold, those decisions are:
| Outcome            | Action          |
|--------------------|-----------------|
| p less than .05    | Reject the null |
| p greater than .05 | Retain the null |
What you’re doing is adding a third possible action to the decision making problem. Specifically, what you’re doing is using the p-value itself as a reason to justify continuing the experiment. And as a consequence you’ve transformed the decision-making procedure into one that looks more like this:
| Outcome               | Action                                   |
|-----------------------|------------------------------------------|
| p less than .05       | Stop the experiment and reject the null  |
| p between .05 and .1  | Continue the experiment                  |
| p greater than .1     | Stop the experiment and retain the null  |
The “basic” theory of null hypothesis testing isn’t built to handle this sort of thing, not in the form I described back in Chapter 11. If you’re the kind of person who would choose to “collect more data” in real life, it implies that you are not making decisions in accordance with the rules of null hypothesis testing. Even if you happen to arrive at the same decision as the hypothesis test, you aren’t following the decision process it implies, and it’s this failure to follow the process that is causing the problem.264 Your p-values are a lie.
Worse yet, they're a lie in a dangerous way, because they're all too small. To give you a sense of just how bad it can be, consider the following (worst case) scenario. Imagine you're a really super-enthusiastic researcher on a tight budget who didn't pay any attention to my warnings above. You design a study comparing two groups. You desperately want to see a significant result at the p<.05 level, but you really don't want to collect any more data than you have to (because it's expensive). In order to cut costs, you start collecting data, but every time a new observation arrives you run a t-test on your data. If the t-test says p<.05 then you stop the experiment and report a significant result. If not, you keep collecting data. You keep doing this until you reach your pre-defined spending limit for this experiment. Let's say that limit kicks in at N=1000 observations. As it turns out, the truth of the matter is that there is no real effect to be found: the null hypothesis is true. So, what's the chance that you'll make it to the end of the experiment and (correctly) conclude that there is no effect? In an ideal world, the answer here should be 95%. After all, the whole point of the p<.05 criterion is to control the Type I error rate at 5%, so what we'd hope is that there's only a 5% chance of falsely rejecting the null hypothesis in this situation. However, there's no guarantee that will be true. You're breaking the rules: you're running tests repeatedly, "peeking" at your data to see if you've gotten a significant result, and all bets are off.
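If you'd like to see this for yourself, the simulation is easy to sketch in R. To be clear, this is not the code behind Figure 17.1: the per-group ceiling and the number of replications are arbitrary choices I've made to keep the example small and quick, so your exact number will differ from the figures reported for Figure 17.1 below.

``````# Simulate one "trigger happy" experiment: run a t-test every time a new
# observation arrives in each group, stop as soon as p < .05, and give up
# at n.max observations per group. The null is true, so any rejection is
# a Type I error.
peek.experiment <- function(n.max = 100, alpha = .05) {
  x <- rnorm(n.max)     # group 1: no real effect
  y <- rnorm(n.max)     # group 2: no real effect
  for (n in 3:n.max) {  # need a few observations per group before testing
    p <- t.test(x[1:n], y[1:n])$p.value
    if (p < alpha) return(TRUE)   # "significant"! stop and reject the null
  }
  FALSE                           # reached the budget: retain the null
}

set.seed(1)
mean(replicate(1000, peek.experiment()))   # estimated Type I error rate: well above .05``````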
Figure 17.1: How badly can things go wrong if you re-run your tests every time new data arrive? If you are a frequentist, the answer is “very wrong”.
So how bad is it? The answer is shown as the solid black line in Figure 17.1, and it's astoundingly bad. If you peek at your data after every single observation, there is a 49% chance that you will make a Type I error. That's, um, quite a bit bigger than the 5% that it's supposed to be. By way of comparison, imagine that you had used the following strategy. Start collecting data. Every single time an observation arrives, run a Bayesian t-test (Section 17.7) and look at the Bayes factor. I'll assume that Johnson (2013) is right, and I'll treat a Bayes factor of 3:1 as roughly equivalent to a p-value of .05.265 This time around, our trigger happy researcher uses the following procedure: if the Bayes factor is 3:1 or more in favour of the null, stop the experiment and retain the null. If it is 3:1 or more in favour of the alternative, stop the experiment and reject the null. Otherwise continue testing. Now, just like last time, let's assume that the null hypothesis is true. What happens? As it happens, I ran the simulations for this scenario too, and the results are shown as the dashed line in Figure 17.1. It turns out that the Type I error rate is much much lower than the 49% rate that we were getting by using the orthodox t-test.
In some ways, this is remarkable. The entire point of orthodox null hypothesis testing is to control the Type I error rate. Bayesian methods aren’t actually designed to do this at all. Yet, as it turns out, when faced with a “trigger happy” researcher who keeps running hypothesis tests as the data come in, the Bayesian approach is much more effective. Even the 3:1 standard, which most Bayesians would consider unacceptably lax, is much safer than the p<.05 rule.
Is it really this bad?
The example I gave in the previous section is a pretty extreme situation. In real life, people don't run hypothesis tests every time a new observation arrives. So it's not fair to say that the p<.05 threshold "really" corresponds to a 49% Type I error rate (i.e., p=.49). But the fact remains that if you want your p-values to be honest, then you either have to switch to a completely different way of doing hypothesis tests, or you must enforce a strict rule: no peeking. You are not allowed to use the data to decide when to terminate the experiment. You are not allowed to look at a "borderline" p-value and decide to collect more data. You aren't even allowed to change your data analysis strategy after looking at data. You are strictly required to follow these rules, otherwise the p-values you calculate will be nonsense.
And yes, these rules are surprisingly strict. As a class exercise a couple of years back, I asked students to think about this scenario. Suppose you started running your study with the intention of collecting N=80 people. When the study starts out you follow the rules, refusing to look at the data or run any tests. But when you reach N=50 your willpower gives in… and you take a peek. Guess what? You’ve got a significant result! Now, sure, you know you said that you’d keep running the study out to a sample size of N=80, but it seems sort of pointless now, right? The result is significant with a sample size of N=50, so wouldn’t it be wasteful and inefficient to keep collecting data? Aren’t you tempted to stop? Just a little? Well, keep in mind that if you do, your Type I error rate at p<.05 just ballooned out to 8%. When you report p<.05 in your paper, what you’re really saying is p<.08. That’s how bad the consequences of “just one peek” can be.
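That 8% figure is also easy to check with a small simulation. Treat this as a sketch under my own assumptions (a one-sample design where the true mean really is zero, with a single peek at 50 observations and a final test at 80), not the exact calculation behind the number quoted above.

``````one.peek <- function(n.interim = 50, n.final = 80, alpha = .05) {
  x <- rnorm(n.final)           # the full sample; the true mean really is zero
  # take a single peek at the interim sample size...
  if (t.test(x[1:n.interim])$p.value < alpha) return(TRUE)
  # ...otherwise run the study to completion and test once more
  t.test(x)$p.value < alpha
}
set.seed(1)
mean(replicate(10000, one.peek()))   # comes out close to .08 rather than the nominal .05``````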
Now consider this … the scientific literature is filled with t-tests, ANOVAs, regressions and chi-square tests. When I wrote this book I didn’t pick these tests arbitrarily. The reason why these four tools appear in most introductory statistics texts is that these are the bread and butter tools of science. None of these tools include a correction to deal with “data peeking”: they all assume that you’re not doing it. But how realistic is that assumption? In real life, how many people do you think have “peeked” at their data before the experiment was finished and adapted their subsequent behaviour after seeing what the data looked like? Except when the sampling procedure is fixed by an external constraint, I’m guessing the answer is “most people have done it”. If that has happened, you can infer that the reported p-values are wrong. Worse yet, because we don’t know what decision process they actually followed, we have no way to know what the p-values should have been. You can’t compute a p-value when you don’t know the decision making procedure that the researcher used. And so the reported p-value remains a lie.
Given all of the above, what is the take home message? It’s not that Bayesian methods are foolproof. If a researcher is determined to cheat, they can always do so. Bayes’ rule cannot stop people from lying, nor can it stop them from rigging an experiment. That’s not my point here. My point is the same one I made at the very beginning of the book in Section 1.1: the reason why we run statistical tests is to protect us from ourselves. And the reason why “data peeking” is such a concern is that it’s so tempting, even for honest researchers. A theory for statistical inference has to acknowledge this. Yes, you might try to defend p-values by saying that it’s the fault of the researcher for not using them properly. But to my mind that misses the point. A theory of statistical inference that is so completely naive about humans that it doesn’t even consider the possibility that the researcher might look at their own data isn’t a theory worth having. In essence, my point is this:
Good laws have their origins in bad morals.
– Ambrosius Macrobius266
Good rules for statistical testing have to acknowledge human frailty. None of us are without sin. None of us are beyond temptation. A good system for statistical inference should still work even when it is used by actual humans. Orthodox null hypothesis testing does not.267 | textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/17%3A_Bayesian_Statistics/17.05%3A_The_p-value_Is_a_Lie..txt |
Time to change gears. Up to this point I’ve been talking about what Bayesian inference is and why you might consider using it. I now want to briefly describe how to do Bayesian versions of various statistical tests. The discussions in the next few sections are not as detailed as I’d like, but I hope they’re enough to help you get started. So let’s begin.
The first kind of statistical inference problem I discussed in this book appeared in Chapter 12, in which we discussed categorical data analysis problems. In that chapter I talked about several different statistical problems that you might be interested in, but the one that appears most often in real life is the analysis of contingency tables. In this kind of data analysis situation, we have a cross-tabulation of one variable against another one, and the goal is to find out if there is some association between these variables. The data set I used to illustrate this problem is found in the `chapek9.Rdata` file, and it contains a single data frame `chapek9`.
``````load("./rbook-master/data/chapek9.Rdata")
head(chapek9)``````
``````## species choice
## 1 robot flower
## 2 human data
## 3 human data
## 4 human data
## 5 robot data
## 6 human flower``````
In this data set, we supposedly sampled 180 beings and measured two things. First, we checked whether they were humans or robots, as captured by the `species` variable. Second, we asked them to nominate whether they most preferred flowers, puppies, or data. When we produce the cross-tabulation, we get this as the results:
``````crosstab <- xtabs( ~ species + choice, chapek9 )
crosstab``````
``````## choice
## species puppy flower data
## robot 13 30 44
## human 15 13 65``````
Surprisingly, the humans seemed to show a much stronger preference for data than the robots did. At the time we speculated that this might have been because the questioner was a large robot carrying a gun, and the humans might have been scared.
The orthodox test
Just to refresh your memory, here's how we analysed these data back in Chapter 12. Because we want to determine if there is some association between `species` and `choice`, we used the `associationTest()` function in the `lsr` package to run a chi-square test of association. The results looked like this:
``library(lsr)``
``## Warning: package 'lsr' was built under R version 3.5.2``
``associationTest( ~species + choice, chapek9 )``
``````##
## Chi-square test of categorical association
##
## Variables: species, choice
##
## Hypotheses:
## null: variables are independent of one another
## alternative: some contingency exists between variables
##
## Observed contingency table:
## choice
## species puppy flower data
## robot 13 30 44
## human 15 13 65
##
## Expected contingency table under the null hypothesis:
## choice
## species puppy flower data
## robot 13.5 20.8 52.7
## human 14.5 22.2 56.3
##
## Test results:
## X-squared statistic: 10.722
## degrees of freedom: 2
## p-value: 0.005
##
## Other information:
## estimated effect size (Cramer's v): 0.244``````
Because we found a small p value (in this case p<.01), we concluded that the data are inconsistent with the null hypothesis of no association, and we rejected it.
The Bayesian test
How do we run an equivalent test as a Bayesian? Well, like every other bloody thing in statistics, there’s a lot of different ways you could do it. However, for the sake of everyone’s sanity, throughout this chapter I’ve decided to rely on one R package to do the work. Specifically, I’m going to use the `BayesFactor` package written by Jeff Rouder and Rich Morey, which as of this writing is in version 0.9.10.
For the analysis of contingency tables, the `BayesFactor` package contains a function called `contingencyTableBF()`. The data that you need to give to this function is the contingency table itself (i.e., the `crosstab` variable above), so you might be expecting to use a command like this:
``````library( BayesFactor ) # ...because we have to load the package
contingencyTableBF( crosstab ) # ...because that makes sense, right?``````
However, if you try this you'll get an error message. This is because the `contingencyTableBF()` function needs one other piece of information from you: it needs to know what sampling plan you used to run your experiment. You can specify the sampling plan using the `sampleType` argument. So I should probably tell you what your options are! The `contingencyTableBF()` function distinguishes between four different types of experiment:
• Fixed sample size. Suppose that in our `chapek9` example, our experiment was designed like this: we deliberately set out to test 180 people, but we didn’t try to control the number of humans or robots, nor did we try to control the choices they made. In this design, the total number of observations N is fixed, but everything else is random. This is referred to as “joint multinomial” sampling, and if that’s what you did you should specify `sampleType = "jointMulti"`. In the case of the `chapek9` data, that’s actually what I had in mind when I invented the data set.
• Fixed row (or column) totals. A different kind of design might work like this. We decide ahead of time that we want 180 people, but we try to be a little more systematic about it. Specifically, the experimenter constrains it so that we get a predetermined number of humans and robots (e.g., 90 of each). In this design, either the row totals or the column totals are fixed, but not both. This is referred to as “independent multinomial” sampling, and if that’s what you did you should specify `sampleType = "indepMulti"`.
• Both row and column totals fixed. Another logical possibility is that you designed the experiment so that both the row totals and the column totals are fixed. This doesn't make any sense at all in the `chapek9` example, but there are other designs that can work this way. Suppose that I show you a collection of 20 toys, and give you 10 stickers that say `boy` and another 10 that say `girl`. I also give you 10 `blue` stickers and 10 `pink` stickers. I then ask you to put the stickers on the 20 toys such that every toy has a colour and every toy has a gender. No matter how you assign the stickers, the total number of pink and blue toys will be 10, as will the number of boys and girls. In this design both the rows and columns of the contingency table are fixed. This is referred to as "hypergeometric" sampling, and if that's what you've done you should specify `sampleType = "hypergeom"`.
• Nothing is fixed. Finally, it might be the case that nothing is fixed. Not the row totals, not the column totals, and not the total sample size either. For instance, in the `chapek9` scenario, suppose what I'd done is run the study for a fixed length of time. By chance, it turned out that I got 180 people to turn up to the study, but it could easily have been something else. This is referred to as "Poisson" sampling, and if that's what you've done you should specify `sampleType="poisson"`.
Okay, so now we have enough knowledge to actually run a test. For the `chapek9` data, I implied that we designed the study such that the total sample size N was fixed, so we should set `sampleType = "jointMulti"`. The command that we need is,
``library( BayesFactor )``
``## Warning: package 'BayesFactor' was built under R version 3.5.2``
``## Loading required package: coda``
``## Warning: package 'coda' was built under R version 3.5.2``
``## Loading required package: Matrix``
``````## ************
## Welcome to BayesFactor 0.9.12-4.2. If you have questions, please contact Richard Morey ([email protected]).
##
## Type BFManual() to open the manual.
## ************``````
``contingencyTableBF( crosstab, sampleType = "jointMulti" )``
``````## Bayes factor analysis
## --------------
## [1] Non-indep. (a=1) : 15.92684 ±0%
##
## Against denominator:
## Null, independence, a = 1
## ---
## Bayes factor type: BFcontingencyTable, joint multinomial``````
As with most R commands, the output initially looks suspiciously similar to utter gibberish. Fortunately, it’s actually pretty simple once you get past the initial impression. Firstly, note that the stuff at the top and bottom are irrelevant fluff. You already know that you’re doing a Bayes factor analysis. You already know that you’re analysing a contingency table, and you already know that you specified a joint multinomial sampling plan. So let’s strip that out and take a look at what’s left over:
``````[1] Non-indep. (a=1) : 15.92684 ±0%
Against denominator:
Null, independence, a = 1 ``````
Let’s also ignore those two `a=1` bits, since they’re technical details that you don’t need to know about at this stage.268 The rest of the output is actually pretty straightforward. At the bottom, the output defines the null hypothesis for you: in this case, the null hypothesis is that there is no relationship between `species` and `choice`. Or, to put it another way, the null hypothesis is that these two variables are independent. Now if you look at the line above it, you might (correctly) guess that the `Non-indep.` part refers to the alternative hypothesis. In this case, the alternative is that there is a relationship between `species` and `choice`: that is, they are not independent. So the only thing left in the output is the bit that reads
``15.92684 ±0%``
The 15.9 part is the Bayes factor, and it’s telling you that the odds for the alternative hypothesis against the null are about 16:1. The ±0% part is not very interesting: essentially, all it’s telling you is that R has calculated an exact Bayes factor, so the uncertainty about the Bayes factor is 0%.269 In any case, the data are telling us that we have moderate evidence for the alternative hypothesis.
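As an aside, if you ever need the Bayes factor as an ordinary number — to round it, tabulate it, or plot it — my understanding is that the `BayesFactor` package provides an `extractBF()` function that returns the results as a data frame with a `bf` column. Treat the exact column names here as an assumption on my part and check the help page, but the idea looks like this:

``````crosstabBF <- contingencyTableBF( crosstab, sampleType = "jointMulti" )
extractBF( crosstabBF )$bf    # the Bayes factor as a plain number, about 15.93``````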
Writing up the results
When writing up the results, my experience has been that there aren’t quite so many “rules” for how you “should” report Bayesian hypothesis tests. That might change in the future if Bayesian methods become standard and some task force starts writing up style guides, but in the meantime I would suggest using some common sense. For example, I would avoid writing this:
A Bayesian test of association found a significant result (BF=15.92)
To my mind, this write up is unclear. Even assuming that you’ve already reported the relevant descriptive statistics, there are a number of things I am unhappy with. First, the concept of “statistical significance” is pretty closely tied with p-values, so it reads slightly strangely. Second, the “BF=15.92” part will only make sense to people who already understand Bayesian methods, and not everyone does. Third, it is somewhat unclear exactly which test was run and what software was used to do so.
On the other hand, unless precision is extremely important, I think that this is taking things a step too far:
We ran a Bayesian test of association using version 0.9.10-1 of the BayesFactor package using default priors and a joint multinomial sampling plan. The resulting Bayes factor of 15.92 to 1 in favour of the alternative hypothesis indicates that there is moderately strong evidence for the non-independence of species and choice.
Everything about that passage is correct, of course. Morey and Rouder (2015) built their Bayesian tests of association using the paper by Gunel and Dickey (1974), the specific test we used assumes that the experiment relied on a joint multinomial sampling plan, and indeed the Bayes factor of 15.92 is moderately strong evidence. It’s just far too wordy.
In most situations you just don’t need that much information. My preference is usually to go for something a little briefer. First, if you’re reporting multiple Bayes factor analyses in your write up, then somewhere you only need to cite the software once, at the beginning of the results section. So you might have one sentence like this:
All analyses were conducted using the BayesFactor package in R (Morey and Rouder 2015), and unless otherwise stated, default parameter values were used.
Notice that I don’t bother including the version number? That’s because the citation itself includes that information (go check my reference list if you don’t believe me). There’s no need to clutter up your results with redundant information that almost no-one will actually need. When you get to the actual test you can get away with this:
A test of association produced a Bayes factor of 16:1 in favour of a relationship between species and choice.
Short and sweet. I’ve rounded 15.92 to 16, because there’s not really any important difference between 15.92:1 and 16:1. I spelled out “Bayes factor” rather than truncating it to “BF” because not everyone knows the abbreviation. I indicated exactly what the effect is (i.e., “a relationship between species and choice”) and how strong the evidence was. I didn’t bother indicating whether this was “moderate” evidence or “strong” evidence, because the odds themselves tell you! There’s nothing stopping you from including that information, and I’ve done so myself on occasions, but you don’t strictly need it. Similarly, I didn’t bother to indicate that I ran the “joint multinomial” sampling plan, because I’m assuming that the method section of my write up would make clear how the experiment was designed. (I might change my mind about that if the method section was ambiguous.) Neither did I bother indicating that this was a Bayesian test of association: if your reader can’t work that out from the fact that you’re reporting a Bayes factor and the fact that you’re citing the `BayesFactor` package for all your analyses, then there’s no chance they’ll understand anything you’ve written. Besides, if you keep writing the word “Bayes” over and over again it starts to look stupid. Bayes Bayes Bayes Bayes Bayes. See?
Other sampling plans
Up to this point all I’ve shown you is how to use the `contingencyTableBF()` function for the joint multinomial sampling plan (i.e., when the total sample size N is fixed, but nothing else is). For the Poisson sampling plan (i.e., nothing fixed), the command you need is identical except for the `sampleType` argument:
``contingencyTableBF(crosstab, sampleType = "poisson" )``
``````## Bayes factor analysis
## --------------
## [1] Non-indep. (a=1) : 28.20757 ±0%
##
## Against denominator:
## Null, independence, a = 1
## ---
## Bayes factor type: BFcontingencyTable, poisson``````
Notice that the Bayes factor of 28:1 here is not identical to the Bayes factor of 16:1 that we obtained from the last test. The sampling plan actually does matter.
What about the design in which the row totals (or column totals) are fixed? As I mentioned earlier, this corresponds to the "independent multinomial" sampling plan. Again, you need to specify the `sampleType` argument, but this time you need to specify whether you fixed the rows or the columns. For example, suppose I deliberately sampled 93 humans and 87 robots, then I would need to indicate that the `fixedMargin` of the contingency table is the `"rows"`. So the command I would use is:
``contingencyTableBF(crosstab, sampleType = "indepMulti", fixedMargin="rows")``
``````## Bayes factor analysis
## --------------
## [1] Non-indep. (a=1) : 8.605897 ±0%
##
## Against denominator:
## Null, independence, a = 1
## ---
## Bayes factor type: BFcontingencyTable, independent multinomial``````
Again, the Bayes factor is different, with the evidence for the alternative dropping to a mere 9:1. As you might expect, the answers would be different again if it were the columns of the contingency table that the experimental design fixed.
Finally, if we turn to hypergeometric sampling in which everything is fixed, we get…
``````contingencyTableBF(crosstab, sampleType = "hypergeom")
#Error in contingencyHypergeometric(as.matrix(data2), a) :
# hypergeometric contingency tables restricted to 2 x 2 tables; see help for contingencyTableBF()``````
… an error message. Okay, some quick reading through the help files hints that support for larger contingency tables is coming, but it’s not been implemented yet. In the meantime, let’s imagine we have data from the “toy labelling” experiment I described earlier in this section. Specifically, let’s say our data look like this:
``````toys <- data.frame(stringsAsFactors=FALSE,
gender = c("girl", "boy"),
pink = c(8, 2),
blue = c(2, 8)
)``````
The Bayesian test with hypergeometric sampling gives us this:
``````contingencyTableBF(toys, sampleType = "hypergeom")
#Bayes factor analysis
#--------------
#[1] Non-indep. (a=1) : 8.294321 ±0%
#
#Against denominator:
# Null, independence, a = 1
#---
#Bayes factor type: BFcontingencyTable, hypergeometric``````
The Bayes factor of 8:1 provides modest evidence that the labels were being assigned in a way that correlates gender with colour, but it’s not conclusive. | textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/17%3A_Bayesian_Statistics/17.06%3A_Bayesian_Analysis_of_Contingency_Tables.txt |
The second type of statistical inference problem discussed in this book is the comparison between two means, discussed in some detail in the chapter on t-tests (Chapter 13). If you can remember back that far, you'll recall that there are several versions of the t-test. The `BayesFactor` package contains a function called `ttestBF()` that is flexible enough to run several different versions of the t-test. I'll talk a little about Bayesian versions of the independent samples t-test and the paired samples t-test in this section.
Independent samples t-test
The most common type of t-test is the independent samples t-test, and it arises when you have data that look something like this:
``````load( "./rbook-master/data/harpo.Rdata" )
head(harpo)``````
``````## grade tutor
## 1 65 Anastasia
## 2 72 Bernadette
## 3 66 Bernadette
## 4 74 Anastasia
## 5 73 Anastasia
## 6 71 Bernadette``````
In this data set, we have two groups of students, those who received lessons from Anastasia and those who took their classes with Bernadette. The question we want to answer is whether there's any difference in the grades received by these two groups of students. Back in Chapter 13 I suggested you could analyse this kind of data using the `independentSamplesTTest()` function in the `lsr` package. For example, if you want to run a Student's t-test, you'd use a command like this:
``````independentSamplesTTest(
formula = grade ~ tutor,
data = harpo,
var.equal = TRUE
)``````
``````##
## Student's independent samples t-test
##
## Outcome variable: grade
## Grouping variable: tutor
##
## Descriptive statistics:
## Anastasia Bernadette
## mean 74.533 69.056
## std dev. 8.999 5.775
##
## Hypotheses:
## null: population means equal for both groups
## alternative: different population means in each group
##
## Test results:
## t-statistic: 2.115
## degrees of freedom: 31
## p-value: 0.043
##
## Other information:
## two-sided 95% confidence interval: [0.197, 10.759]
## estimated effect size (Cohen's d): 0.74``````
Like most of the functions that I wrote for this book, the `independentSamplesTTest()` is very wordy. It prints out a bunch of descriptive statistics and a reminder of what the null and alternative hypotheses are, before finally getting to the test results. I wrote it that way deliberately, in order to help make things a little clearer for people who are new to statistics.
Again, we obtain a p-value less than 0.05, so we reject the null hypothesis.
What does the Bayesian version of the t-test look like? Using the `ttestBF()` function, we can obtain a Bayesian analog of Student’s independent samples t-test using the following command:
``ttestBF( formula = grade ~ tutor, data = harpo )``
``````## Bayes factor analysis
## --------------
## [1] Alt., r=0.707 : 1.754927 ±0%
##
## Against denominator:
## Null, mu1-mu2 = 0
## ---
## Bayes factor type: BFindepSample, JZS``````
Notice that the format of this command is pretty standard. As usual we have a `formula` argument in which we specify the outcome variable on the left hand side and the grouping variable on the right. The `data` argument is used to specify the data frame containing the variables. However, notice that there's no analog of the `var.equal` argument. This is because the `BayesFactor` package does not include an analog of the Welch test, only the Student test.270 In any case, the output shown above is what you get when you run this command.
So what does all this mean? Just as we saw with the `contingencyTableBF()` function, the output is pretty dense. But, just like last time, there’s not a lot of information here that you actually need to process. Firstly, let’s examine the bottom line. The `BFindepSample` part just tells you that you ran an independent samples t-test, and the `JZS` part is technical information that is a little beyond the scope of this book.271 Clearly, there’s nothing to worry about in that part. In the line above, the text `Null, mu1-mu2 = 0` is just telling you that the null hypothesis is that there are no differences between means. But you already knew that. So the only part that really matters is this line here:
``[1] Alt., r=0.707 : 1.754927 ±0%``
Ignore the `r=0.707` part: it refers to a technical detail that we won’t worry about in this chapter.272 Instead, you should focus on the part that reads `1.754927`. This is the Bayes factor: the evidence provided by these data are about 1.8:1 in favour of the alternative.
Before moving on, it’s worth highlighting the difference between the orthodox test results and the Bayesian one. According to the orthodox test, we obtained a significant result, though only barely. Nevertheless, many people would happily accept p=.043 as reasonably strong evidence for an effect. In contrast, notice that the Bayesian test doesn’t even reach 2:1 odds in favour of an effect, and would be considered very weak evidence at best. In my experience that’s a pretty typical outcome. Bayesian methods usually require more evidence before rejecting the null.
Paired samples t-test
Back in Section 13.5 I discussed the `chico` data frame in which students grades were measured on two tests, and we were interested in finding out whether grades went up from test 1 to test 2. Because every student did both tests, the tool we used to analyse the data was a paired samples t-test. To remind you of what the data look like, here’s the first few cases:
``````load("./rbook-master/data/chico.rdata")
head(chico)``````
``````## id grade_test1 grade_test2
## 1 student1 42.9 44.6
## 2 student2 51.8 54.0
## 3 student3 71.7 72.3
## 4 student4 51.6 53.4
## 5 student5 63.5 63.8
## 6 student6 58.0 59.3``````
We originally analysed the data using the `pairedSamplesTTest()` function in the `lsr` package, but this time we’ll use the `ttestBF()` function from the `BayesFactor` package to do the same thing. The easiest way to do it with this data set is to use the `x` argument to specify one variable and the `y` argument to specify the other. All we need to do then is specify `paired=TRUE` to tell R that this is a paired samples test. So here’s our command:
``````ttestBF(
     x = chico$grade_test1,
     y = chico$grade_test2,
     paired = TRUE
)``````
``````## Bayes factor analysis
## --------------
## [1] Alt., r=0.707 : 5992.05 ±0%
##
## Against denominator:
## Null, mu = 0
## ---
## Bayes factor type: BFoneSample, JZS``````
At this point, I hope you can read this output without any difficulty. The data provide evidence of about 6000:1 in favour of the alternative. We could probably reject the null with some confidence! | textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/17%3A_Bayesian_Statistics/17.07%3A_Bayesian_t-tests.txt |
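Incidentally, the `BFoneSample` label in the output is a hint about what's happening under the hood: a paired test is just a one-sample test applied to the difference scores. If you prefer, you can compute the differences yourself and you should get essentially the same Bayes factor:

``````improvement <- chico$grade_test2 - chico$grade_test1   # within-student change
ttestBF( x = improvement )    # one-sample test that the mean change is zero``````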
Okay, so now we’ve seen Bayesian equivalents to orthodox chi-square tests and t-tests. What’s next? If I were to follow the same progression that I used when developing the orthodox tests you’d expect to see ANOVA next, but I think it’s a little clearer if we start with regression.
A quick refresher
In Chapter 15 I used the `parenthood` data to illustrate the basic ideas behind regression. To remind you of what that data set looks like, here’s the first six observations:
``````load("./rbook-master/data/parenthood.Rdata")
head(parenthood)``````
``````## dan.sleep baby.sleep dan.grump day
## 1 7.59 10.18 56 1
## 2 7.91 11.66 60 2
## 3 5.14 7.92 82 3
## 4 7.71 9.61 55 4
## 5 6.68 9.75 67 5
## 6 5.99 5.04 72 6``````
Back in Chapter 15 I proposed a theory in which my grumpiness (`dan.grump`) on any given day is related to the amount of sleep I got the night before (`dan.sleep`), and possibly to the amount of sleep our baby got (`baby.sleep`), though probably not to the `day` on which we took the measurement. We tested this using a regression model. In order to estimate the regression model we used the `lm()` function, like so:
``````model <- lm(
formula = dan.grump ~ dan.sleep + day + baby.sleep,
data = parenthood
)``````
The hypothesis tests for each of the terms in the regression model were extracted using the `summary()` function as shown below:
``summary(model)``
``````##
## Call:
## lm(formula = dan.grump ~ dan.sleep + day + baby.sleep, data = parenthood)
##
## Residuals:
## Min 1Q Median 3Q Max
## -10.906 -2.284 -0.295 2.652 11.880
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 126.278707 3.242492 38.945 <2e-16 ***
## dan.sleep -8.969319 0.560007 -16.016 <2e-16 ***
## day -0.004403 0.015262 -0.288 0.774
## baby.sleep 0.015747 0.272955 0.058 0.954
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.375 on 96 degrees of freedom
## Multiple R-squared: 0.8163, Adjusted R-squared: 0.8105
## F-statistic: 142.2 on 3 and 96 DF, p-value: < 2.2e-16``````
When interpreting the results, each row in this table corresponds to one of the possible predictors. The `(Intercept)` term isn’t usually interesting, though it is highly significant. The important thing for our purposes is the fact that `dan.sleep` is significant at p<.001 and neither of the other variables are.
Bayesian version
Okay, so how do we do the same thing using the `BayesFactor` package? The easiest way is to use the `regressionBF()` function instead of `lm()`. As before, we use `formula` to indicate what the full regression model looks like, and the `data` argument to specify the data frame. So the command is:
``````regressionBF(
formula = dan.grump ~ dan.sleep + day + baby.sleep,
data = parenthood
)``````
``````## Bayes factor analysis
## --------------
## [1] dan.sleep : 1.622545e+34 ±0.01%
## [2] day : 0.2724027 ±0%
## [3] baby.sleep : 10018411 ±0%
## [4] dan.sleep + day : 1.016576e+33 ±0%
## [5] dan.sleep + baby.sleep : 9.77022e+32 ±0%
## [6] day + baby.sleep : 2340755 ±0%
## [7] dan.sleep + day + baby.sleep : 7.835625e+31 ±0%
##
## Against denominator:
## Intercept only
## ---
## Bayes factor type: BFlinearModel, JZS``````
So that's pretty straightforward: it's exactly what we've been doing throughout the book. The output, however, is a little different from what you get from `lm()`. The format of this is pretty familiar. At the bottom we have some technical rubbish, and at the top we have some information about the Bayes factors. What's new is the fact that we seem to have lots of Bayes factors here. What's all this about?
The trick to understanding this output is to recognise that if we're interested in working out which of the 3 predictor variables are related to `dan.grump`, there are actually 8 possible regression models that could be considered. One possibility is the intercept only model, in which none of the three variables have an effect. At the other end of the spectrum is the full model in which all three variables matter. So what `regressionBF()` does is treat the intercept only model as the null hypothesis, and print out the Bayes factors for all other models when compared against that null. For example, if we look at line 4 in the table, we see that the evidence is about $10^{33}$ to 1 in favour of the claim that a model that includes both `dan.sleep` and `day` is better than the intercept only model. Or if we look at line 1, we can see that the odds are about $1.6 \times 10^{34}$ that a model containing the `dan.sleep` variable (but no others) is better than the intercept only model.
Finding the best model
In practice, this isn’t super helpful. In most situations the intercept only model is one that you don’t really care about at all. What I find helpful is to start out by working out which model is the best one, and then seeing how well all the alternatives compare to it. Here’s how you do that. In this case, it’s easy enough to see that the best model is actually the one that contains `dan.sleep` only (line 1), because it has the largest Bayes factor. However, if you’ve got a lot of possible models in the output, it’s handy to know that you can use the `head()` function to pick out the best few models. First, we have to go back and save the Bayes factor information to a variable:
``````models <- regressionBF(
formula = dan.grump ~ dan.sleep + day + baby.sleep,
data = parenthood
)``````
Let’s say I want to see the best three models. To do this, I use the `head()` function specifying `n=3`, and here’s what I get as the result:
``head( models, n = 3)``
``````## Bayes factor analysis
## --------------
## [1] dan.sleep : 1.622545e+34 ±0.01%
## [2] dan.sleep + day : 1.016576e+33 ±0%
## [3] dan.sleep + baby.sleep : 9.77022e+32 ±0%
##
## Against denominator:
## Intercept only
## ---
## Bayes factor type: BFlinearModel, JZS``````
This is telling us that the model in line 1 (i.e., `dan.grump ~ dan.sleep`) is the best one. That’s almost what I’m looking for, but it’s still comparing all the models against the intercept only model. That seems silly. What I’d like to know is how big the difference is between the best model and the other good models. For that, there’s this trick:
``head( models/max(models), n = 3)``
``````## Bayes factor analysis
## --------------
## [1] dan.sleep : 1 ±0%
## [2] dan.sleep + day : 0.0626532 ±0.01%
## [3] dan.sleep + baby.sleep : 0.0602154 ±0.01%
##
## Against denominator:
## dan.grump ~ dan.sleep
## ---
## Bayes factor type: BFlinearModel, JZS``````
Notice the bit at the bottom showing that the "denominator" has changed. What that means is that the Bayes factors are now comparing each of those 3 models listed against the `dan.grump ~ dan.sleep` model. Obviously, the Bayes factor in the first line is exactly 1, since that's just comparing the best model to itself. More to the point, the other two Bayes factors are both less than 1, indicating that they're both worse than that model. The Bayes factors of 0.06 to 1 imply that the odds for the best model over the second best model are about 16:1. You can work this out by simple arithmetic (i.e., 1/0.06 ≈ 16), but the other way to do it is to directly compare the models. To see what I mean, here's the original output:
``models``
``````## Bayes factor analysis
## --------------
## [1] dan.sleep : 1.622545e+34 ±0.01%
## [2] day : 0.2724027 ±0%
## [3] baby.sleep : 10018411 ±0%
## [4] dan.sleep + day : 1.016576e+33 ±0%
## [5] dan.sleep + baby.sleep : 9.77022e+32 ±0%
## [6] day + baby.sleep : 2340755 ±0%
## [7] dan.sleep + day + baby.sleep : 7.835625e+31 ±0%
##
## Against denominator:
## Intercept only
## ---
## Bayes factor type: BFlinearModel, JZS``````
The best model corresponds to row 1 in this table, and the second best model corresponds to row 4. All you have to do to compare these two models is this:
``models[1] / models[4]``
``````## Bayes factor analysis
## --------------
## [1] dan.sleep : 15.96088 ±0.01%
##
## Against denominator:
## dan.grump ~ dan.sleep + day
## ---
## Bayes factor type: BFlinearModel, JZS``````
And there you have it. You’ve found the regression model with the highest Bayes factor (i.e., `dan.grump ~ dan.sleep`), and you know that the evidence for that model over the next best alternative (i.e., `dan.grump ~ dan.sleep + day`) is about 16:1.
Extracting Bayes factors for all included terms
Okay, let’s say you’ve settled on a specific regression model. What Bayes factors should you report? In this example, I’m going to pretend that you decided that `dan.grump ~ dan.sleep + baby.sleep` is the model you think is best. Sometimes it’s sensible to do this, even when it’s not the one with the highest Bayes factor. Usually this happens because you have a substantive theoretical reason to prefer one model over the other. However, in this case I’m doing it because I want to use a model with more than one predictor as my example!
Having figured out which model you prefer, it can be really useful to call the `regressionBF()` function and specifying `whichModels="top"`. You use your “preferred” model as the `formula` argument, and then the output will show you the Bayes factors that result when you try to drop predictors from this model:
``````regressionBF(
formula = dan.grump ~ dan.sleep + baby.sleep,
data = parenthood,
whichModels = "top"
)``````
``````## Bayes factor top-down analysis
## --------------
## When effect is omitted from dan.sleep + baby.sleep , BF is...
## [1] Omit baby.sleep : 16.60705 ±0.01%
## [2] Omit dan.sleep : 1.025403e-26 ±0.01%
##
## Against denominator:
## dan.grump ~ dan.sleep + baby.sleep
## ---
## Bayes factor type: BFlinearModel, JZS``````
Okay, so now you can see the results a bit more clearly. The Bayes factor when you try to drop the `dan.sleep` predictor is about $10^{-26}$, which is very strong evidence that you shouldn't drop it. On the other hand, the Bayes factor actually goes up to 17 if you drop `baby.sleep`, so you'd usually say that's pretty strong evidence for dropping that one.
As you can tell, the `BayesFactor` package is pretty flexible, and it can do Bayesian versions of pretty much everything in this book. In fact, it can do a few other neat things that I haven’t covered in the book at all. However, I have to stop somewhere, and so there’s only one other topic I want to cover: Bayesian ANOVA.
A quick refresher
As with the other examples, I think it’s useful to start with a reminder of how I discussed ANOVA earlier in the book. First, let’s remind ourselves of what the data were. The example I used originally is the `clin.trial` data frame, which looks like this
``````load("./rbook-master/data/clinicaltrial.Rdata")
head(clin.trial)``````
``````## drug therapy mood.gain
## 1 placebo no.therapy 0.5
## 2 placebo no.therapy 0.3
## 3 placebo no.therapy 0.1
## 4 anxifree no.therapy 0.6
## 5 anxifree no.therapy 0.4
## 6 anxifree no.therapy 0.2``````
To run our orthodox analysis in earlier chapters we used the `aov()` function to do all the heavy lifting. In Chapter 16 I recommended using the `Anova()` function from the `car` package to produce the ANOVA table, because it uses Type II tests by default. If you’ve forgotten what “Type II tests” are, it might be a good idea to re-read Section 16.10, because it will become relevant again in a moment. In any case, here’s what our analysis looked like:
``library(car)``
``## Loading required package: carData``
``````model <- aov( mood.gain ~ drug * therapy, data = clin.trial )
Anova(model) ``````
``````## Anova Table (Type II tests)
##
## Response: mood.gain
## Sum Sq Df F value Pr(>F)
## drug 3.4533 2 31.7143 1.621e-05 ***
## therapy 0.4672 1 8.5816 0.01262 *
## drug:therapy 0.2711 2 2.4898 0.12460
## Residuals 0.6533 12
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1``````
That’s pretty clearly showing us evidence for a main effect of `drug` at p<.001, an effect of `therapy` at p<.05 and no interaction.
Bayesian version
How do we do the same thing using Bayesian methods? The `BayesFactor` package contains a function called `anovaBF()` that does this for you. It uses a pretty standard `formula` and `data` structure, so the command should look really familiar. Just like we did with regression, it will be useful to save the output to a variable:
``````models <- anovaBF(
formula = mood.gain ~ drug * therapy,
data = clin.trial
)``````
The output is quite different to the traditional ANOVA, but it’s not too bad once you understand what you’re looking for. Let’s take a look:
``models``
This looks very similar to the output we obtained from the `regressionBF()` function, and with good reason. Remember what I said back in Section 16.6: under the hood, ANOVA is no different to regression, and both are just different examples of a linear model. Because of this, the `anovaBF()` function reports the output in much the same way. For instance, if we want to identify the best model we could use the same commands that we used in the last section. One variant that I find quite useful is this:
``models/max(models)``
``````## Bayes factor analysis
## --------------
## [1] drug : 0.3521042 ±0.94%
## [2] therapy : 0.001047568 ±0.94%
## [3] drug + therapy : 1 ±0%
## [4] drug + therapy + drug:therapy : 0.978514 ±1.29%
##
## Against denominator:
## mood.gain ~ drug + therapy
## ---
## Bayes factor type: BFlinearModel, JZS``````
By "dividing" the `models` output by the best model (i.e., `max(models)`), what R is doing is using the best model (which in this case is `drug + therapy`) as the denominator, which gives you a pretty good sense of how close the competitors are. For instance, the model that contains the interaction term is almost as good as the model without the interaction, since the Bayes factor is 0.98. In other words, the data do not clearly indicate whether there is or is not an interaction.
Constructing Bayesian Type II tests
Okay, that’s all well and good, you might be thinking, but what do I report as the alternative to the p-value? In the classical ANOVA table, you get a single p-value for every predictor in the model, so you can talk about the significance of each effect. What’s the Bayesian analog of this?
It’s a good question, but the answer is tricky. Remember what I said in Section 16.10 about ANOVA being complicated. Even in the classical version of ANOVA there are several different “things” that ANOVA might correspond to. Specifically, I discussed how you get different p-values depending on whether you use Type I tests, Type II tests or Type III tests. To work out which Bayes factor is analogous to “the” p-value in a classical ANOVA, you need to work out which version of ANOVA you want an analog for. For the purposes of this section, I’ll assume you want Type II tests, because those are the ones I think are most sensible in general. As I discussed back in Section 16.10, Type II tests for a two-way ANOVA are reasonably straightforward, but if you have forgotten that section it wouldn’t be a bad idea to read it again before continuing.
Assuming you’ve had a refresher on Type II tests, let’s have a look at how to pull them from the Bayes factor table. Suppose we want to test the main effect of `drug`. The null hypothesis for this test corresponds to a model that includes an effect of `therapy`, but no effect of `drug`. The alternative hypothesis is the model that includes both. In other words, what we want is the Bayes factor corresponding to this comparison:
Null model: `mood.gain ~ therapy`
Alternative model: `mood.gain ~ therapy + drug`
As it happens, we can read the answer to this straight off the table because it corresponds to a comparison between the model in line 2 of the table and the model in line 3: the Bayes factor in this case represents evidence for the null of 0.001 to 1. Or, more helpfully, the odds are about 1000 to 1 against the null.
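If you'd rather not read the answer off the table, you can compute the same comparison directly by indexing the `models` object and dividing one element by the other. This is just a quick sketch, and it assumes that the four models are stored in the same order in which they appear in the output above:
``````# Bayes factor for the main effect of drug: both main effects (line 3
# of the output) versus therapy only (line 2 of the output)
models[3] / models[2]``````
Up to the small simulation error that the `BayesFactor` package reports, the number you get should agree with the one you just read off the table.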
The main effect of `therapy` can be calculated in much the same way. In this case, the null model is the one that contains only an effect of `drug`, and the alternative is the model that contains both. So the relevant comparison is between lines 1 and 3 in the table. The odds in favour of the null here are only 0.35 to 1. Again, I find it useful to frame things the other way around, so I'd refer to this as evidence of about 3 to 1 in favour of an effect of `therapy`.
Finally, in order to test an interaction effect, the null model here is one that contains both main effects but no interaction. The alternative model adds the interaction. That is:
Null model: `mood.gain ~ drug + therapy`
Alternative model: `mood.gain ~ drug + therapy + drug:therapy`
If we look those two models up in the table, we see that this comparison is between the models on lines 3 and 4 of the table. The odds of 0.98 to 1 imply that these two models are fairly evenly matched.
You might be thinking that this is all pretty laborious, and I’ll concede that’s true. At some stage I might consider adding a function to the `lsr` package that would automate this process and construct something like a “Bayesian Type II ANOVA table” from the output of the `anovaBF()` function. However, I haven’t had time to do this yet, nor have I made up my mind about whether it’s really a good idea to do this. In the meantime, I thought I should show you the trick for how I do this in practice. The command that I use when I want to grab the right Bayes factors for a Type II ANOVA is this one:
``max(models)/models``
``````## denominator
## numerator drug therapy drug + therapy
## drug + therapy 2.840068 954.5918 1
## denominator
## numerator drug + therapy + drug:therapy
## drug + therapy 1.021958``````
The output isn't quite so pretty as the last one, but the nice thing is that you can read off everything you need. The best model is `drug + therapy`, so all the other models are being compared to that. What's the Bayes factor for the main effect of `drug`? The relevant null hypothesis is the one that contains only `therapy`, and the Bayes factor in question is 954:1. The main effect of `therapy` is weaker, and the evidence here is only 2.8:1. Finally, the evidence against an interaction is very weak, at about 1.02:1.
Reading the results off this table is sort of counterintuitive, because you have to read off the answers from the “wrong” part of the table. For instance, the evidence for an effect of `drug` can be read from the column labelled `therapy`, which is pretty damned weird. To be fair to the authors of the package, I don’t think they ever intended for the `anovaBF()` function to be used this way. My understanding273 is that their view is simply that you should find the best model and report that model: there’s no inherent reason why a Bayesian ANOVA should try to follow the exact same design as an orthodox ANOVA.274
In any case, if you know what you’re looking for, you can look at this table and then report the results of the Bayesian analysis in a way that is pretty closely analogous to how you’d report a regular Type II ANOVA. As I mentioned earlier, there’s still no convention on how to do that, but I usually go for something like this:
A Bayesian Type II ANOVA found evidence for main effects of drug (Bayes factor: 954:1) and therapy (Bayes factor: 3:1), but no clear evidence for or against an interaction (Bayes factor: 1:1).
The first half of this chapter was focused primarily on the theoretical underpinnings of Bayesian statistics. I introduced the mathematics for how Bayesian inference works (Section 17.1), and gave a very basic overview of how Bayesian hypothesis testing is typically done (Section 17.2). Finally, I devoted some space to talking about why I think Bayesian methods are worth using (Section 17.3).
The second half of the chapter was a lot more practical, and focused on tools provided by the `BayesFactor` package. Specifically, I talked about using the `contingencyTableBF()` function to do Bayesian analogs of chi-square tests (Section 17.6), the `ttestBF()` function to do Bayesian t-tests (Section 17.7), the `regressionBF()` function to do Bayesian regressions, and finally the `anovaBF()` function for Bayesian ANOVA.
If you’re interested in learning more about the Bayesian approach, there are many good books you could look into. John Kruschke’s book Doing Bayesian Data Analysis is a pretty good place to start (Kruschke 2011), and is a nice mix of theory and practice. His approach is a little different to the “Bayes factor” approach that I’ve discussed here, so you won’t be covering the same ground. If you’re a cognitive psychologist, you might want to check out Michael Lee and E.J. Wagenmakers’ book Bayesian Cognitive Modeling (Lee and Wagenmakers 2014). I picked these two because I think they’re especially useful for people in my discipline, but there’s a lot of good books out there, so look around!
References
Jeffreys, Harold. 1961. The Theory of Probability. 3rd ed. Oxford.
Kass, Robert E., and Adrian E. Raftery. 1995. “Bayes Factors.” Journal of the American Statistical Association 90: 773–95.
Fisher, R. 1925. Statistical Methods for Research Workers. Edinburgh, UK: Oliver & Boyd.
Johnson, Valen E. 2013. “Revised Standards for Statistical Evidence.” Proceedings of the National Academy of Sciences, no. 48: 19313–7.
Morey, Richard D., and Jeffrey N. Rouder. 2015. BayesFactor: Computation of Bayes Factors for Common Designs. http://CRAN.R-project.org/package=BayesFactor.
Gunel, Erdogan, and James Dickey. 1974. “Bayes Factors for Independence in Contingency Tables.” Biometrika, 545–57.
Kruschke, J. K. 2011. Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Burlington, MA: Academic Press.
Lee, Michael D, and Eric-Jan Wagenmakers. 2014. Bayesian Cognitive Modeling: A Practical Course. Cambridge University Press.
1. http://en.wikiquote.org/wiki/David_Hume
2. http://en.Wikipedia.org/wiki/Climate_of_Adelaide
3. It’s a leap of faith, I know, but let’s run with it okay?
4. Um. I hate to bring this up, but some statisticians would object to me using the word "likelihood" here. The problem is that the word "likelihood" has a very specific meaning in frequentist statistics, and it's not quite the same as what it means in Bayesian statistics. As far as I can tell, Bayesians didn't originally have any agreed upon name for the likelihood, and so it became common practice for people to use the frequentist terminology. This wouldn't have been a problem, except for the fact that the way that Bayesians use the word turns out to be quite different to the way frequentists do. This isn't the place for yet another lengthy history lesson, but to put it crudely: when a Bayesian says "a likelihood function" they're usually referring to one of the rows of the table. When a frequentist says the same thing, they're referring to the same table, but to them "a likelihood function" almost always refers to one of the columns. This distinction matters in some contexts, but it's not important for our purposes.
5. If we were being a bit more sophisticated, we could extend the example to accommodate the possibility that I’m lying about the umbrella. But let’s keep things simple, shall we?
6. You might notice that this equation is actually a restatement of the same basic rule I listed at the start of the last section. If you multiply both sides of the equation by P(d), then you get P(d)P(h|d)=P(d,h), which is the rule for how joint probabilities are calculated. So I’m not actually introducing any “new” rules here, I’m just using the same rule in a different way.
7. Obviously, this is a highly simplified story. All the complexity of real life Bayesian hypothesis testing comes down to how you calculate the likelihood P(d|h) when the hypothesis h is a complex and vague thing. I’m not going to talk about those complexities in this book, but I do want to highlight that although this simple story is true as far as it goes, real life is messier than I’m able to cover in an introductory stats textbook.
8. http://www.imdb.com/title/tt0093779/quotes. I should note in passing that I’m not the first person to use this quote to complain about frequentist methods. Rich Morey and colleagues had the idea first. I’m shamelessly stealing it because it’s such an awesome pull quote to use in this context and I refuse to miss any opportunity to quote The Princess Bride.
9. http://about.abc.net.au/reports-publications/appreciation-survey-summary-report-2013/
10. http://knowyourmeme.com/memes/the-cake-is-a-lie
11. In the interests of being completely honest, I should acknowledge that not all orthodox statistical tests rely on this silly assumption. There are a number of sequential analysis tools that are sometimes used in clinical trials and the like. These methods are built on the assumption that data are analysed as they arrive, and these tests aren't horribly broken in the way I'm complaining about here. However, sequential analysis methods are constructed in a very different fashion to the "standard" version of null hypothesis testing. They don't make it into any introductory textbooks, and they're not very widely used in the psychological literature. The concern I'm raising here is valid for every single orthodox test I've presented so far, and for almost every test I've seen reported in the papers I read.
12. A related problem: http://xkcd.com/1478/
13. Some readers might wonder why I picked 3:1 rather than 5:1, given that Johnson (2013) suggests that p=.05 lies somewhere in that range. I did so in order to be charitable to the p-value. If I’d chosen a 5:1 Bayes factor instead, the results would look even better for the Bayesian approach.
14. http://www.quotationspage.com/quotes/Ambrosius_Macrobius/
15. Okay, I just know that some knowledgeable frequentists will read this and start complaining about this section. Look, I'm not dumb. I absolutely know that if you adopt a sequential analysis perspective you can avoid these errors within the orthodox framework. I also know that you can explicitly design studies with interim analyses in mind. So yes, in one sense I'm attacking a "straw man" version of orthodox methods. However, the straw man that I'm attacking is the one that is used by almost every single practitioner. If it ever reaches the point where sequential methods become the norm among experimental psychologists and I'm no longer forced to read 20 extremely dubious ANOVAs a day, I promise I'll rewrite this section and dial down the vitriol. But until that day arrives, I stand by my claim that default Bayes factor methods are much more robust in the face of data analysis practices as they exist in the real world. Default orthodox methods suck, and we all know it.
16. If you’re desperate to know, you can find all the gory details in Gunel and Dickey (1974). However, that’s a pretty technical paper. The help documentation to the `contingencyTableBF()` gives this explanation: “the argument `priorConcentration` indexes the expected deviation from the null hypothesis under the alternative, and corresponds to Gunel and Dickey’s (1974) a parameter.” As I write this I’m about halfway through the Gunel and Dickey paper, and I agree that setting a=1 is a pretty sensible default choice, since it corresponds to an assumption that you have very little a priori knowledge about the contingency table.
17. In some of the later examples, you’ll see that this number is not always 0%. This is because the `BayesFactor` package often has to run some simulations to compute approximate Bayes factors. So the answers you get won’t always be identical when you run the command a second time. That’s why the output of these functions tells you what the margin for error is.
18. Apparently this omission is deliberate. I have this vague recollection that I spoke to Jeff Rouder about this once, and his opinion was that when homogeneity of variance is violated the results of a t-test are uninterpretable. I can see the argument for this, but I’ve never really held a strong opinion myself. (Jeff, if you never said that, I’m sorry)
19. Just in case you’re interested: the “JZS” part of the output relates to how the Bayesian test expresses the prior uncertainty about the variance σ2, and it’s short for the names of three people: “Jeffreys Zellner Siow”. See Rouder et al. (2009) for details.
20. Again, in case you care … the null hypothesis here specifies an effect size of 0, since the two means are identical. The alternative hypothesis states that there is an effect, but it doesn’t specify exactly how big the effect will be. The r value here relates to how big the effect is expected to be according to the alternative. You can type `?ttestBF` to get more details.
21. Again, guys, sorry if I’ve misread you.
22. I don't even disagree with them: it's not at all obvious why a Bayesian ANOVA should reproduce (say) the same set of model comparisons that the Type II testing strategy uses. It's precisely because of the fact that I haven't really come to any strong conclusions that I haven't added anything to the `lsr` package to make Bayesian Type II tests easier to produce.
“Begin at the beginning”, the King said, very gravely, “and go on till you come to the end: then stop”
– Lewis Carroll
It feels somewhat strange to be writing this chapter, and more than a little inappropriate. An epilogue is what you write when a book is finished, and this book really isn't finished. There are a lot of things still missing from this book. It doesn't have an index yet. A lot of references are missing. There are no "do it yourself exercises". And in general, I feel that there are a lot of things that are wrong with the presentation, organisation and content of this book. Given all that, I don't want to try to write a "proper" epilogue. I haven't finished writing the substantive content yet, so it doesn't make sense to try to bring it all together. But this version of the book is going to go online for students to use, and you will be able to purchase a hard copy too, so I want to give it at least a veneer of closure. So let's give it a go, shall we?
18: Epilogue
First, I'm going to talk a bit about some of the content that I wish I'd had the chance to cram into this version of the book, just so that you can get a sense of what other ideas are out there in the world of statistics. I think this would be important even if this book were getting close to a final product: one thing that students often fail to realise is that their introductory statistics classes are just that: an introduction. If you want to go out into the wider world and do real data analysis, you have to learn a whole lot of new tools that extend the content of your undergraduate lectures in all sorts of different ways. Don't assume that something can't be done just because it wasn't covered in undergrad. Don't assume that something is the right thing to do just because it was covered in an undergrad class. To stop you from falling victim to that trap, I think it's useful to give a bit of an overview of some of the other ideas out there.
Omissions within the topics covered
Even within the topics that I have covered in the book, there are a lot of omissions that I'd like to redress in future versions of the book. Just sticking to things that are purely about statistics (rather than things associated with R), the following is a representative but not exhaustive list of topics that I'd like to expand on in later versions:
• Other types of correlations In Chapter 5 I talked about two types of correlation: Pearson and Spearman. Both of these methods of assessing correlation are applicable to the case where you have two continuous variables and want to assess the relationship between them. What about the case where your variables are both nominal scale? Or when one is nominal scale and the other is continuous? There are actually methods for computing correlations in such cases (e.g., polychoric correlation), but I just haven’t had time to write about them yet.
• More detail on effect sizes In general, I think the treatment of effect sizes throughout the book is a little more cursory than it should be. In almost every instance, I’ve tended just to pick one measure of effect size (usually the most popular one) and describe that. However, for almost all tests and models there are multiple ways of thinking about effect size, and I’d like to go into more detail in the future.
• Dealing with violated assumptions In a number of places in the book I've talked about some things you can do when you find that the assumptions of your test (or model) are violated, but I think that I ought to say more about this. In particular, I think it would have been nice to talk in a lot more detail about how you can transform variables to fix problems. I talked a bit about this in Sections 7.2, 7.3 and 15.9.4, but the discussion isn't detailed enough I think.
• Interaction terms for regression In Chapter 16 I talked about the fact that you can have interaction terms in an ANOVA, and I also pointed out that ANOVA can be interpreted as a kind of linear regression model. Yet, when talking about regression in Chapter 15 I made no mention of interactions at all. However, there’s nothing stopping you from including interaction terms in a regression model. It’s just a little more complicated to figure out what an “interaction” actually means when you’re talking about the interaction between two continuous predictors, and it can be done in more than one way. Even so, I would have liked to talk a little about this.
• Method of planned comparison As I mentioned in Chapter 16, it's not always appropriate to be using a post hoc correction like Tukey's HSD when doing an ANOVA, especially when you had a very clear (and limited) set of comparisons that you cared about ahead of time. I would like to talk more about this in a future version of the book.
• Multiple comparison methods Even within the context of talking about post hoc tests and multiple comparisons, I would have liked to talk about the methods in more detail, and talk about what other methods exist besides the few options I mentioned.
Statistics is a huge field. The core tools that I’ve described in this book (chi-square tests, t-tests, ANOVA and regression) are basic tools that are widely used in everyday data analysis, and they form the core of most introductory stats books. However, there are a lot of other tools out there. There are so very many data analysis situations that these tools don’t cover, and in future versions of this book I want to talk about them. To give you a sense of just how much more there is, and how much more work I want to do to finish this thing, the following is a list of statistical modelling tools that I would have liked to talk about. Some of these will definitely make it into future versions of the book.
• Analysis of covariance In Chapter 16 I spent a bit of time discussing the connection between ANOVA and regression, pointing out that any ANOVA model can be recast as a kind of regression model. More generally, both are examples of linear models, and it’s quite possible to consider linear models that are more general than either. The classic example of this is “analysis of covariance” (ANCOVA), and it refers to the situation where some of your predictors are continuous (like in a regression model) and others are categorical (like in an ANOVA).
• Nonlinear regression When discussing regression in Chapter 15, we saw that regression assumes that the relationship between predictors and outcomes is linear. On the other hand, when we talked about the simpler problem of correlation in Chapter 5, we saw that there exist tools (e.g., Spearman correlations) that are able to assess non-linear relationships between variables. There are a number of tools in statistics that can be used to do non-linear regression. For instance, some non-linear regression models assume that the relationship between predictors and outcomes is monotonic (e.g., isotonic regression), while others assume that it is smooth but not necessarily monotonic (e.g., Lowess regression), while others assume that the relationship is of a known form that happens to be nonlinear (e.g., polynomial regression).
• Logistic regression Yet another variation on regression occurs when the outcome variable is binary valued, but the predictors are continuous. For instance, suppose you're investigating social media, and you want to know if it's possible to predict whether or not someone is on Twitter as a function of their income, their age, and a range of other variables. This is basically a regression model, but you can't use regular linear regression because the outcome variable is binary (you're either on Twitter or you're not), and so there's no way that the residuals could possibly be normally distributed. There are a number of tools that statisticians can apply to this situation, the most prominent of which is logistic regression (there's a tiny sketch of what this looks like in R just after this list).
• The Generalised Linear Model (GLM) The GLM is actually a family of models that includes logistic regression, linear regression, (some) nonlinear regression, ANOVA and many others. The basic idea in the GLM is essentially the same idea that underpins linear models, but it allows for the idea that your data might not be normally distributed, and allows for nonlinear relationships between predictors and outcomes. There are a lot of very handy analyses that you can run that fall within the GLM, so it's a very useful thing to know about.
• Survival analysis In Chapter 2 I talked about “differential attrition”, the tendency for people to leave the study in a non-random fashion. Back then, I was talking about it as a potential methodological concern, but there are a lot of situations in which differential attrition is actually the thing you’re interested in. Suppose, for instance, you’re interested in finding out how long people play different kinds of computer games in a single session. Do people tend to play RTS (real time strategy) games for longer stretches than FPS (first person shooter) games? You might design your study like this. People come into the lab, and they can play for as long or as little as they like. Once they’re finished, you record the time they spent playing. However, due to ethical restrictions, let’s suppose that you cannot allow them to keep playing longer than two hours. A lot of people will stop playing before the two hour limit, so you know exactly how long they played. But some people will run into the two hour limit, and so you don’t know how long they would have kept playing if you’d been able to continue the study. As a consequence, your data are systematically censored: you’re missing all of the very long times. How do you analyse this data sensibly? This is the problem that survival analysis solves. It is specifically designed to handle this situation, where you’re systematically missing one “side” of the data because the study ended. It’s very widely used in health research, and in that context it is often literally used to analyse survival. For instance, you may be tracking people with a particular type of cancer, some who have received treatment A and others who have received treatment B, but you only have funding to track them for 5 years. At the end of the study period some people are alive, others are not. In this context, survival analysis is useful for determining which treatment is more effective, and telling you about the risk of death that people face over time.
• Repeated measures ANOVA When talking about reshaping data in Chapter 7, I introduced some data sets in which each participant was measured in multiple conditions (e.g., in the drugs data set, the working memory capacity (WMC) of each person was measured under the influence of alcohol and caffeine). It is quite common to design studies that have this kind of repeated measures structure. A regular ANOVA doesn't make sense for these studies, because the repeated measurements mean that independence is violated (i.e., observations from the same participant are more closely related to one another than to observations from other participants). Repeated measures ANOVA is a tool that can be applied to data that have this structure. The basic idea behind RM-ANOVA is to take into account the fact that participants can have different overall levels of performance. For instance, Amy might have a WMC of 7 normally, which falls to 5 under the influence of caffeine, whereas Borat might have a WMC of 6 normally, which falls to 4 under the influence of caffeine. Because this is a repeated measures design, we recognise that – although Amy has a higher WMC than Borat – the effect of caffeine is identical for these two people. In other words, a repeated measures design means that we can attribute some of the variation in our WMC measurement to individual differences (i.e., some of it is just that Amy has higher WMC than Borat), which allows us to draw stronger conclusions about the effect of caffeine.
• Mixed models Repeated measures ANOVA is used in situations where you have observations clustered within experimental units. In the example I gave above, we have multiple WMC measures for each participant (i.e., one for each condition). However, there are a lot of other ways in which you can end up with multiple observations per participant, and for most of those situations the repeated measures ANOVA framework is insufficient. A good example of this is when you track individual people across multiple time points. Let’s say you’re tracking happiness over time, for two people. Aaron’s happiness starts at 10, then drops to 8, and then to 6. Belinda’s happiness starts at 6, then rises to 8 and then to 10. Both of these two people have the same “overall” level of happiness (the average across the three time points is 8), so a repeated measures ANOVA analysis would treat Aaron and Belinda the same way. But that’s clearly wrong. Aaron’s happiness is decreasing, whereas Belinda’s is increasing. If you want to optimally analyse data from an experiment where people can change over time, then you need a more powerful tool than repeated measures ANOVA. The tools that people use to solve this problem are called “mixed” models, because they are designed to learn about individual experimental units (e.g. happiness of individual people over time) as well as overall effects (e.g. the effect of money on happiness over time). Repeated measures ANOVA is perhaps the simplest example of a mixed model, but there’s a lot you can do with mixed models that you can’t do with repeated measures ANOVA.
• Reliability analysis Back in Chapter 2 I talked about reliability as one of the desirable characteristics of a measurement. One of the different types of reliability I mentioned was inter-item reliability. For example, when designing a survey used to measure some aspect of someone's personality (e.g., extraversion), one generally attempts to include several different questions that all ask the same basic question in lots of different ways. When you do this, you tend to expect that all of these questions will tend to be correlated with one another, because they're all measuring the same latent construct. There are a number of tools (e.g., Cronbach's α) that you can use to check whether this is actually true for your study.
• Factor analysis One big shortcoming with reliability measures like Cronbach's α is that they assume that your observed variables are all measuring a single latent construct. But that's not true in general. If you look at most personality questionnaires, or IQ tests, or almost anything where you're taking lots of measurements, it's probably the case that you're actually measuring several things at once. For example, all the different tests used when measuring IQ do tend to correlate with one another, but the pattern of correlations that you see across tests suggests that there are multiple different "things" going on in the data. Factor analysis (and related tools like principal components analysis and independent components analysis) is a tool that you can use to help you figure out what these things are. Broadly speaking, what you do with these tools is take a big correlation matrix that describes all pairwise correlations between your variables, and attempt to express this pattern of correlations using only a small number of latent variables. Factor analysis is a very useful tool – it's a great way of trying to see how your variables are related to one another – but it can be tricky to use well. A lot of people make the mistake of thinking that when factor analysis uncovers a latent variable (e.g., extraversion pops out as a latent variable when you factor analyse most personality questionnaires), it must actually correspond to a real "thing". That's not necessarily true. Even so, factor analysis is a very useful thing to know about (especially for psychologists), and I do want to talk about it in a later version of the book.
• Multidimensional scaling Factor analysis is an example of an “unsupervised learning” model. What this means is that, unlike most of the “supervised learning” tools I’ve mentioned, you can’t divide up your variables in to predictors and outcomes. Regression is supervised learning; factor analysis is unsupervised learning. It’s not the only type of unsupervised learning model however. For example, in factor analysis one is concerned with the analysis of correlations between variables. However, there are many situations where you’re actually interested in analysing similarities or dissimilarities between objects, items or people. There are a number of tools that you can use in this situation, the best known of which is multidimensional scaling (MDS). In MDS, the idea is to find a “geometric” representation of your items. Each item is “plotted” as a point in some space, and the distance between two points is a measure of how dissimilar those items are.
• Clustering Another example of an unsupervised learning model is clustering (also referred to as classification), in which you want to organise all of your items into meaningful groups, such that similar items are assigned to the same groups. A lot of clustering is unsupervised, meaning that you don’t know anything about what the groups are, you just have to guess. There are other “supervised clustering” situations where you need to predict group memberships on the basis of other variables, and those group memberships are actually observables: logistic regression is a good example of a tool that works this way. However, when you don’t actually know the group memberships, you have to use different tools (e.g., k-means clustering). There’s even situations where you want to do something called “semi-supervised clustering”, in which you know the group memberships for some items but not others. As you can probably guess, clustering is a pretty big topic, and a pretty useful thing to know about.
• Causal models One thing that I haven’t talked about much in this book is how you can use statistical modeling to learn about the causal relationships between variables. For instance, consider the following three variables which might be of interest when thinking about how someone died in a firing squad. We might want to measure whether or not an execution order was given (variable A), whether or not a marksman fired their gun (variable B), and whether or not the person got hit with a bullet (variable C). These three variables are all correlated with one another (e.g., there is a correlation between guns being fired and people getting hit with bullets), but we actually want to make stronger statements about them than merely talking about correlations. We want to talk about causation. We want to be able to say that the execution order (A) causes the marksman to fire (B) which causes someone to get shot (C). We can express this by a directed arrow notation: we write it as A→B→C. This “causal chain” is a fundamentally different explanation for events than one in which the marksman fires first, which causes the shooting B→C, and then causes the executioner to “retroactively” issue the execution order, B→A. This “common effect” model says that A and C are both caused by B. You can see why these are different. In the first causal model, if we had managed to stop the executioner from issuing the order (intervening to change A), then no shooting would have happened. In the second model, the shooting would have happened any way because the marksman was not following the execution order. There is a big literature in statistics on trying to understand the causal relationships between variables, and a number of different tools exist to help you test different causal stories about your data. The most widely used of these tools (in psychology at least) is structural equations modelling (SEM), and at some point I’d like to extend the book to talk about it.
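Just to give a flavour of how little extra machinery some of these tools require, here's a very rough sketch of a logistic regression in R, using the base `glm()` function and some simulated data. The variable names and numbers here are entirely made up for illustration, so treat this as a pointer rather than a recipe:
``````# simulate a binary outcome ("is this person on Twitter?") that depends
# on a continuous predictor (income, in thousands of dollars)
set.seed(42)
income <- rnorm(200, mean = 50, sd = 15)
on.twitter <- rbinom(200, size = 1, prob = plogis(-2 + .04 * income))

# fit the logistic regression: same formula interface as lm(), but with
# a binomial "family" so the binary outcome is handled sensibly
fit <- glm(on.twitter ~ income, family = binomial)
summary(fit)``````
Notice that the formula interface is exactly the same one you've been using with `lm()` and `aov()` all along; the main thing that changes is the `family` argument.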
Of course, even this listing is incomplete. I haven't mentioned time series analysis, item response theory, market basket analysis, classification and regression trees, or any of a huge range of other topics. However, the list that I've given above is essentially my wish list for this book. Sure, it would double the length of the book, but it would mean that the scope has become broad enough to cover most things that applied researchers in psychology would need to use.
Okay, that was… long. And even that listing is massively incomplete. There really are a lot of big ideas in statistics that I haven't covered in this book. It can seem pretty depressing to finish a 600-page textbook only to be told that this is only the beginning, especially when you start to suspect that half of the stuff you've been taught is wrong. For instance, there are a lot of people in the field who would strongly argue against the use of the classical ANOVA model, yet I've devoted two whole chapters to it! Standard ANOVA can be attacked from a Bayesian perspective, or from a robust statistics perspective, or even from an "it's just plain wrong" perspective (people very frequently use ANOVA when they should actually be using mixed models). So why learn it at all?
As I see it, there are two key arguments. Firstly, there’s the pure pragmatism argument. Rightly or wrongly, ANOVA is widely used. If you want to understand the scientific literature, you need to understand ANOVA. And secondly, there’s the “incremental knowledge” argument. In the same way that it was handy to have seen one-way ANOVA before trying to learn factorial ANOVA, understanding ANOVA is helpful for understanding more advanced tools, because a lot of those tools extend on or modify the basic ANOVA setup in some way. For instance, although mixed models are way more useful than ANOVA and regression, I’ve never heard of anyone learning how mixed models work without first having worked through ANOVA and regression. You have to learn to crawl before you can climb a mountain.
Actually, I want to push this point a bit further. One thing that I’ve done a lot of in this book is talk about fundamentals. I spent a lot of time on probability theory. I talked about the theory of estimation and hypothesis tests in more detail than I needed to. When talking about R, I spent a lot of time talking about how the language works, and talking about things like writing your own scripts, functions and programs. I didn’t just teach you how to draw a histogram using `hist()`, I tried to give a basic overview of how the graphics system works. Why did I do all this? Looking back, you might ask whether I really needed to spend all that time talking about what a probability distribution is, or why there was even a section on probability density. If the goal of the book was to teach you how to run a t-test or an ANOVA, was all that really necessary? Or, come to think of it, why bother with R at all? There are lots of free alternatives out there: PSPP, for instance, is an SPSS-like clone that is totally free, has simple “point and click” menus, and can (I think) do every single analysis that I’ve talked about in this book. And you can learn PSPP in about 5 minutes. Was this all just a huge waste of everyone’s time???
The answer, I hope you'll agree, is no. The goal of an introductory stats class is not to teach ANOVA. It's not to teach t-tests, or regressions, or histograms, or p-values. The goal is to start you on the path towards becoming a skilled data analyst. And in order for you to become a skilled data analyst, you need to be able to do more than ANOVA, more than t-tests, regressions and histograms. You need to be able to think properly about data. You need to be able to learn the more advanced statistical models that I talked about in the last section, and to understand the theory upon which they are based. And you need to have access to software that will let you use those advanced tools. And this is where – in my opinion at least – all that extra time I've spent on the fundamentals pays off. If you understand the graphics system in R, then you can draw the plots that you want, not just the canned plots that someone else has built into R for you. If you understand probability theory, you'll find it much easier to switch from frequentist analyses to Bayesian ones. If you understand the core mechanics of R, you'll find it much easier to generalise from linear regressions using `lm()` to using generalised linear models with `glm()` or linear mixed effects models using `lme()` and `lmer()`. You'll even find that a basic knowledge of R will go a long way towards teaching you how to use other statistical programming languages that are based on it. Bayesians frequently rely on tools like WinBUGS and JAGS, which have a number of similarities to R, and can in fact be called from within R. In fact, because R is the "lingua franca of statistics", what you'll find is that most ideas in the statistics literature have been implemented somewhere as a package that you can download from CRAN. The same cannot be said for PSPP, or even SPSS.
In short, I think that the big payoff for learning statistics this way is extensibility. For a book that only covers the very basics of data analysis, this book has a massive overhead in terms of learning R, probability theory and so on. There's a whole lot of other things that it pushes you to learn besides the specific analyses that the book covers. So if your goal had been to learn how to run an ANOVA in the minimum possible time, well, this book wasn't a good choice. But as I say, I don't think that is your goal. I think you want to learn how to do data analysis. And if that really is your goal, you want to make sure that the skills you learn in your introductory stats class are naturally and cleanly extensible to the more complicated models that you need in real world data analysis. You want to make sure that you learn to use the same tools that real data analysts use, so that you can learn to do what they do. And so yeah, okay, you're a beginner right now (or you were when you started this book), but that doesn't mean you should be given a dumbed-down story, a story in which I don't tell you about probability density, or a story where I don't tell you about the nightmare that is factorial ANOVA with unbalanced designs. And it doesn't mean that you should be given baby toys instead of proper data analysis tools. Beginners aren't dumb; they just lack knowledge. What you need is not to have the complexities of real world data analysis hidden from you. What you need are the skills and tools that will let you handle those complexities when they inevitably ambush you in the real world.
And what I hope is that this book – or the finished book that this will one day turn into – is able to help you with that.
References
Adair, G. 1984. “The Hawthorne Effect: A Reconsideration of the Methodological Artifact.” Journal of Applied Psychology 69: 334–45.
Agresti, A. 1996. An Introduction to Categorical Data Analysis. Hoboken, NJ: Wiley.
———. 2002. Categorical Data Analysis. 2nd ed. Hoboken, NJ: Wiley.
Akaike, H. 1974. “A New Look at the Statistical Model Identification.” IEEE Transactions on Automatic Control 19: 716–23.
Bickel, P. J., E. A. Hammel, and J. W. O’Connell. 1975. “Sex Bias in Graduate Admissions: Data from Berkeley.” Science 187: 398–404.
Box, J. F. 1987. “Guinness, Gosset, Fisher, and Small Samples.” Statistical Science 2: 45–52.
Braun, John, and Duncan J Murdoch. 2007. A First Course in Statistical Programming with R. Cambridge University Press Cambridge.
Brown, M. B., and A. B. Forsythe. 1974. “Robust Tests for Equality of Variances.” Journal of the American Statistical Association 69: 364–67.
Campbell, D. T., and J. C. Stanley. 1963. Experimental and Quasi-Experimental Designs for Research. Boston, MA: Houghton Mifflin.
Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Lawrence Erlbaum.
Cook, R. D., and S. Weisberg. 1983. “Diagnostics for Heteroscedasticity in Regression.” Biometrika 70: 1–10.
Cramér, H. 1946. Mathematical Methods of Statistics. Princeton: Princeton University Press.
Dunn, O.J. 1961. “Multiple Comparisons Among Means.” Journal of the American Statistical Association 56: 52–64.
Ellis, P. D. 2010. The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results. Cambridge, UK: Cambridge University Press.
Ellman, Michael. 2002. “Soviet Repression Statistics: Some Comments.” Europe-Asia Studies 54 (7). Taylor & Francis: 1151–72.
Evans, J. St. B. T., J. L. Barston, and P. Pollard. 1983. “On the Conflict Between Logic and Belief in Syllogistic Reasoning.” Memory and Cognition 11: 295–306.
Evans, M., N. Hastings, and B. Peacock. 2011. Statistical Distributions (3rd Ed). Wiley.
Fisher, R. A. 1922a. “On the Interpretation of χ2 from Contingency Tables, and the Calculation of p.” Journal of the Royal Statistical Society 84: 87–94.
———. 1922b. “On the Mathematical Foundation of Theoretical Statistics.” Philosophical Transactions of the Royal Society A 222: 309–68.
———. 1925. Statistical Methods for Research Workers. Edinburgh, UK: Oliver & Boyd.
Fox, J., and S. Weisberg. 2011. An R Companion to Applied Regression. 2nd ed. Los Angeles: Sage.
Gelman, A., and H. Stern. 2006. “The Difference Between ‘Significant’ and ‘Not Significant’ Is Not Itself Statistically Significant.” The American Statistician 60: 328–31.
Gunel, Erdogan, and James Dickey. 1974. “Bayes Factors for Independence in Contingency Tables.” Biometrika, 545–57.
Hays, W. L. 1994. Statistics. 5th ed. Fort Worth, TX: Harcourt Brace.
Hedges, L. V. 1981. “Distribution Theory for Glass’s Estimator of Effect Size and Related Estimators.” Journal of Educational Statistics 6: 107–28.
Hedges, L. V., and I. Olkin. 1985. Statistical Methods for Meta-Analysis. New York: Academic Press.
Hogg, R. V., J. V. McKean, and A. T. Craig. 2005. Introduction to Mathematical Statistics. 6th ed. Upper Saddle River, NJ: Pearson.
Holm, S. 1979. “A Simple Sequentially Rejective Multiple Test Procedure.” Scandinavian Journal of Statistics 6: 65–70.
Hothersall, D. 2004. History of Psychology. McGraw-Hill.
Hsu, J. C. 1996. Multiple Comparisons: Theory and Methods. London, UK: Chapman; Hall.
Ioannidis, John P. A. 2005. “Why Most Published Research Findings Are False.” PLoS Med 2 (8). Public Library of Science: 697–701.
Jeffreys, Harold. 1961. The Theory of Probability. 3rd ed. Oxford.
Johnson, Valen E. 2013. “Revised Standards for Statistical Evidence.” Proceedings of the National Academy of Sciences, no. 48: 19313–7.
Kahneman, D., and A. Tversky. 1973. “On the Psychology of Prediction.” Psychological Review 80: 237–51.
Kass, Robert E., and Adrian E. Raftery. 1995. “Bayes Factors.” Journal of the American Statistical Association 90: 773–95.
Keynes, John Maynard. 1923. A Tract on Monetary Reform. London: Macmillan & Company.
Kruschke, J. K. 2011. Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Burlington, MA: Academic Press.
Kruskal, W. H., and W. A. Wallis. 1952. “Use of Ranks in One-Criterion Variance Analysis.” Journal of the American Statistical Association 47: 583–621.
Kühberger, A, A Fritz, and T. Scherndl. 2014. “Publication Bias in Psychology: A Diagnosis Based on the Correlation Between Effect Size and Sample Size.” Public Library of Science One 9: 1–8.
Lee, Michael D, and Eric-Jan Wagenmakers. 2014. Bayesian Cognitive Modeling: A Practical Course. Cambridge University Press.
Lehmann, Erich L. 2011. Fisher, Neyman, and the Creation of Classical Statistics. Springer.
Levene, H. 1960. “Robust Tests for Equality of Variances.” In Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling, edited by I. Olkin et al, 278–92. Palo Alto, CA: Stanford University Press.
Long, J.S., and L.H. Ervin. 2000. “Using Heteroscedasticity Consistent Standard Errors in the Linear Regression Model.” The American Statistician 54: 217–24.
Matloff, Norman. 2011. The Art of R Programming: A Tour of Statistical Software Design. No Starch Press.
McGrath, R. E., and G. J. Meyer. 2006. “When Effect Sizes Disagree: The Case of r and d.” Psychological Methods 11: 386–401.
McNemar, Q. 1947. “Note on the Sampling Error of the Difference Between Correlated Proportions or Percentages.” Psychometrika 12: 153–57.
Meehl, P. H. 1967. “Theory Testing in Psychology and Physics: A Methodological Paradox.” Philosophy of Science 34: 103–15.
Morey, Richard D., and Jeffrey N. Rouder. 2015. BayesFactor: Computation of Bayes Factors for Common Designs. http://CRAN.R-project.org/package=BayesFactor.
Pearson, K. 1900. “On the Criterion That a Given System of Deviations from the Probable in the Case of a Correlated System of Variables Is Such That It Can Be Reasonably Supposed to Have Arisen from Random Sampling.” Philosophical Magazine 50: 157–75.
Pfungst, O. 1911. Clever Hans (the Horse of Mr. von Osten): A Contribution to Experimental Animal and Human Psychology. Translated by C. L. Rahn. New York: Henry Holt.
R Core Team. 2013. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.
Rosenthal, R. 1966. Experimenter Effects in Behavioral Research. New York: Appleton.
Rouder, J. N., P. L. Speckman, D. Sun, R. D. Morey, and G. Iverson. 2009. “Bayesian T-Tests for Accepting and Rejecting the Null Hypothesis.” Psychonomic Bulletin & Review 16: 225–37.
Sahai, H., and M. I. Ageel. 2000. The Analysis of Variance: Fixed, Random and Mixed Models. Boston: Birkhauser.
Shaffer, J. P. 1995. “Multiple Hypothesis Testing.” Annual Review of Psychology 46: 561–84.
Shapiro, S. S., and M. B. Wilk. 1965. “An Analysis of Variance Test for Normality (Complete Samples).” Biometrika 52: 591–611.
Spector, P. 2008. Data Manipulation with R. New York, NY: Springer.
Stevens, S. S. 1946. “On the Theory of Scales of Measurement.” Science 103: 677–80.
Stigler, S. M. 1986. The History of Statistics. Cambridge, MA: Harvard University Press.
Student, A. 1908. “The Probable Error of a Mean.” Biometrika 6: 1–2.
Teetor, P. 2011. R Cookbook. Sebastopol, CA: O’Reilly.
Welch, B. L. 1947. “The Generalization of ‘Student’s’ Problem When Several Different Population Variances Are Involved.” Biometrika 34: 28–35.
———. 1951. “On the Comparison of Several Mean Values: An Alternative Approach.” Biometrika 38: 330–36.
White, H. 1980. “A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity.” Econometrica 48: 817–38.
Yates, F. 1934. “Contingency Tables Involving Small Numbers and the χ2 Test.” Supplement to the Journal of the Royal Statistical Society 1: 217–35. | textbooks/stats/Applied_Statistics/Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/18%3A_Epilogue/18.03%3A_Learning_the_Basics_and_Learning_Them_in_R.txt |
A different sense in which this book is incomplete is that it focuses pretty heavily on a very narrow and old-fashioned view of how inferential statistics should be done. In Chapter 10 I talked a little bit about the idea of unbiased estimators, sampling distributions and so on. In Chapter 11 I talked about the theory of null hypothesis significance testing and p-values. These ideas have been around since the early 20th century, and the tools that I’ve talked about in the book rely very heavily on the theoretical ideas from that time. I’ve felt obligated to stick to those topics because the vast majority of data analysis in science is also reliant on those ideas. However, the theory of statistics is not restricted to those topics, and – while everyone should know about them because of their practical importance – in many respects those ideas do not represent best practice for contemporary data analysis. One of the things that I’m especially happy with is that I’ve been able to go a little beyond this. Chapter 17 now presents the Bayesian perspective in a reasonable amount of detail, but the book overall is still pretty heavily weighted towards the frequentist orthodoxy. Additionally, there are a number of other approaches to inference that are worth mentioning:
• Bootstrapping Throughout the book, whenever I've introduced a hypothesis test, I've had a strong tendency just to make assertions like "the sampling distribution for BLAH is a t-distribution" or something like that. In some cases, I've actually attempted to justify this assertion. For example, when talking about χ2 tests in Chapter 12, I made reference to the known relationship between normal distributions and χ2 distributions (see Chapter 9) to explain how we end up assuming that the sampling distribution of the goodness of fit statistic is χ2. However, it's also the case that a lot of these sampling distributions are, well, wrong. The χ2 test is a good example: it is based on an assumption about the distribution of your data, an assumption which is known to be wrong for small sample sizes! Back in the early 20th century, there wasn't much you could do about this situation: statisticians had developed mathematical results that said that "under assumptions BLAH about the data, the sampling distribution is approximately BLAH", and that was about the best you could do. A lot of times they didn't even have that: there are lots of data analysis situations for which no-one has found a mathematical solution for the sampling distributions that you need. And so up until the late 20th century, the corresponding tests didn't exist or didn't work. However, computers have changed all that now. There are lots of fancy tricks, and some not-so-fancy, that you can use to get around it. The simplest of these is bootstrapping, and in its simplest form it's incredibly simple. Here it is: simulate the results of your experiment lots and lots of times, under the twin assumptions that (a) the null hypothesis is true and (b) the unknown population distribution actually looks pretty similar to your raw data. In other words, instead of assuming that the data are (for instance) normally distributed, just assume that the population looks the same as your sample, and then use computers to simulate the sampling distribution for your test statistic if that assumption holds. Despite relying on a somewhat dubious assumption (i.e., the population distribution is the same as the sample!) bootstrapping is a quick and easy method that works remarkably well in practice for lots of data analysis problems (there's a small sketch of the resampling idea in R just after this list).
• Cross validation One question that pops up in my stats classes every now and then, usually by a student trying to be provocative, is “Why do we care about inferential statistics at all? Why not just describe your sample?” The answer to the question is usually something like this: “Because our true interest as scientists is not the specific sample that we have observed in the past, we want to make predictions about data we might observe in the future”. A lot of the issues in statistical inference arise because of the fact that we always expect the future to be similar to but a bit different from the past. Or, more generally, new data won’t be quite the same as old data. What we do, in a lot of situations, is try to derive mathematical rules that help us to draw the inferences that are most likely to be correct for new data, rather than to pick the statements that best describe old data. For instance, given two models A and B, and a data set X you collected today, try to pick the model that will best describe a new data set Y that you’re going to collect tomorrow. Sometimes it’s convenient to simulate the process, and that’s what cross-validation does. What you do is divide your data set into two subsets, X1 and X2. Use the subset X1 to train the model (e.g., estimate regression coefficients, let’s say), but then assess the model performance on the other one X2. This gives you a measure of how well the model generalises from an old data set to a new one, and is often a better measure of how good your model is than if you just fit it to the full data set X.
• Robust statistics Life is messy, and nothing really works the way it’s supposed to. This is just as true for statistics as it is for anything else, and when trying to analyse data we’re often stuck with all sorts of problems in which the data are just messier than they’re supposed to be. Variables that are supposed to be normally distributed are not actually normally distributed, relationships that are supposed to be linear are not actually linear, and some of the observations in your data set are almost certainly junk (i.e., not measuring what they’re supposed to). All of this messiness is ignored in most of the statistical theory I developed in this book. However, ignoring a problem doesn’t always solve it. Sometimes, it’s actually okay to ignore the mess, because some types of statistical tools are “robust”: if the data don’t satisfy your theoretical assumptions, they still work pretty well. Other types of statistical tools are not robust: even minor deviations from the theoretical assumptions cause them to break. Robust statistics is a branch of stats concerned with this question, and they talk about things like the “breakdown point” of a statistic: that is, how messy does your data have to be before the statistic cannot be trusted? I’ve touched on this in a few places. The mean is not a robust estimator of the central tendency of a variable; the median is. For instance, suppose I told you that the ages of my five best friends are 34, 39, 31, 43 and 4003 years. How old do you think they are on average? That is, what is the true population mean here? If you use the sample mean as your estimator of the population mean, you get an answer of 830 years. If you use the sample median as the estimator of the population mean, you get an answer of 39 years. Notice that, even though you’re “technically” doing the wrong thing in the second case (using the median to estimate the mean!) you’re actually getting a better answer. The problem here is that one of the observations is clearly, obviously a lie. I don’t have a friend aged 4003 years. It’s probably a typo: I probably meant to type 43. But what if I had typed 53 instead of 43, or 34 instead of 43? Could you be sure if this was a typo? Sometimes the errors in the data are subtle, so you can’t detect them just by eyeballing the sample, but they’re still errors that contaminate your data, and they still affect your conclusions. Robust statistics is concerned with how you can make safe inferences even when faced with contamination that you don’t know about. It’s pretty cool stuff.
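If you want to see this for yourself, the whole comparison is a couple of lines of R:

```r
ages <- c(34, 39, 31, 43, 4003)
mean(ages)    # 830: dragged way off by the one absurd value
median(ages)  # 39: barely notices it
```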
Miscellaneous topics
• Missing data Suppose you’re doing a survey, and you’re interested in exercise and weight. You send the survey to five people. Adam says he exercises a lot and is not overweight. Briony says she exercises a lot and is not overweight. Carol says she does not exercise and is overweight. Dan says he does not exercise and refuses to answer the question about his weight. Elaine does not return the survey. You now have a missing data problem. There is one entire survey missing, and one question missing from another one. What do you do about it? I’ve only barely touched on this question in this book, in Section 5.8, and in that section all I did was tell you about some R commands you can use to ignore the missing data. But ignoring missing data is not, in general, a safe thing to do. Let’s think about Dan’s survey here. Firstly, notice that, on the basis of my other responses, I appear to be more similar to Carol (neither of us exercise) than to Adam or Briony. So if you were forced to guess my weight, you’d guess that I’m closer to her than to them. Maybe you’d make some correction for the fact that Adam and I are males and Briony and Carol are females. The statistical name for this kind of guessing is “imputation”. Doing imputation safely is hard, but important, especially when the missing data are missing in a systematic way. Because of the fact that people who are overweight are often pressured to feel poorly about their weight (often thanks to public health campaigns), we actually have reason to suspect that the people who are not responding are more likely to be overweight than the people who do respond. Imputing a weight to Dan means that the number of overweight people in the sample will probably rise from 1 out of 3 (if we ignore Dan), to 2 out of 4 (if we impute Dan’s weight). Clearly this matters. But doing it sensibly is more complicated than it sounds. Earlier, I suggested you should treat me like Carol, since we gave the same answer to the exercise question. But that’s not quite right: there is a systematic difference between us. She answered the question, and I didn’t. Given the social pressures faced by overweight people, isn’t it likely that I’m more overweight than Carol? And of course this is still ignoring the fact that it’s not sensible to impute a single weight to me, as if you actually knew my weight. Instead, what you need to do is impute a range of plausible guesses (referred to as multiple imputation), in order to capture the fact that you’re more uncertain about my weight than you are about Carol’s. And let’s not get started on the problem posed by the fact that Elaine didn’t send in the survey. As you can probably guess, dealing with missing data is an increasingly important topic. In fact, I’ve been told that a lot of journals in some fields will not accept studies that have missing data unless some kind of sensible multiple imputation scheme is followed.
• Power analysis In Chapter 11 I discussed the concept of power (i.e., how likely are you to be able to detect an effect if it actually exists), and referred to power analysis, a collection of tools that are useful for assessing how much power your study has. Power analysis can be useful for planning a study (e.g., figuring out how large a sample you’re likely to need), but it also serves a useful role in analysing data that you already collected. For instance, suppose you get a significant result, and you have an estimate of your effect size. You can use this information to estimate how much power your study actually had. This is kind of useful, especially if your effect size is not large. For instance, suppose you reject the null hypothesis at p<.05, but you use power analysis to figure out that your estimated power was only .08. The significant result means that, if the null hypothesis was in fact true, there was a 5% chance of getting data like this. But the low power means that, even if the null hypothesis is false and the effect size really is as small as it looks, there was only an 8% chance of getting data like the data you actually did. This suggests that you need to be pretty cautious, because luck seems to have played a big part in your results, one way or the other!
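If you want to play around with these ideas, base R has a power.t.test() function that does the calculations for simple t-tests. The numbers below are made up purely for illustration:

```r
# power of a two-sample t-test with 10 people per group and a true effect
# that is only a quarter of a standard deviation
power.t.test(n = 10, delta = 0.25, sd = 1, sig.level = 0.05)
# the estimated power comes out very low, well below the conventional .8
# target, which is exactly the kind of situation described above
```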
• Data analysis using theory-inspired models In a few places in this book I’ve mentioned response time (RT) data, where you record how long it takes someone to do something (e.g., make a simple decision). I’ve mentioned that RT data are almost invariably non-normal, and positively skewed. Additionally, there’s a thing known as the speed-accuracy tradeoff: if you try to make decisions too quickly (low RT), you’re likely to make poorer decisions (lower accuracy). So if you measure both the accuracy of a participant’s decisions and their RT, you’ll probably find that speed and accuracy are related. There’s more to the story than this, of course, because some people make better decisions than others regardless of how fast they’re going. Moreover, speed depends both on cognitive processes (i.e., time spent thinking) and on physiological ones (e.g., how fast you can move your muscles). It’s starting to sound like analysing this data will be a complicated process. And indeed it is, but one of the things that you find when you dig into the psychological literature is that there already exist mathematical models (called “sequential sampling models”) that describe how people make simple decisions, and these models take into account a lot of the factors I mentioned above. You won’t find any of these theoretically-inspired models in a standard statistics textbook. Standard stats textbooks describe standard tools, tools that could meaningfully be applied in lots of different disciplines, not just psychology. ANOVA is an example of a standard tool: it is just as applicable to psychology as to pharmacology. Sequential sampling models are not: they are psychology-specific, more or less. This doesn’t make them less powerful tools: in fact, if you’re analysing data where people have to make choices quickly, you should really be using sequential sampling models to analyse the data. Using ANOVA or regression or whatever won’t work as well, because the theoretical assumptions that underpin them are not well-matched to your data. In contrast, sequential sampling models were explicitly designed to analyse this specific type of data, and their theoretical assumptions are extremely well-matched to the data. Obviously, it’s impossible to cover this sort of thing properly, because there are thousands of context-specific models in every field of science. Even so, one thing that I’d like to do in later versions of the book is to give some case studies that are of particular relevance to psychologists, just to give a sense for how psychological theory can be used to do better statistical analysis of psychological data. So, in later versions of the book I’ll probably talk about how to analyse response time data, among other things.
A population is the group to be studied, and population data is a collection of all elements in the population. For example:

• All the fish in Long Lake.
• All the lakes in the Adirondack Park.
• All the grizzly bears in Yellowstone National Park.

A sample is a subset of data drawn from the population of interest. For example:

• 100 fish randomly sampled from Long Lake.
• 25 lakes randomly selected from the Adirondack Park.
• 60 grizzly bears with a home range in Yellowstone National Park.

Figure 1. Using sample statistics to estimate population parameters.

Populations are characterized by descriptive measures called parameters. Inferences about parameters are based on sample statistics. For example, the population mean ($μ$) is estimated by the sample mean ($\bar x$). The population variance ($\sigma ^2$) is estimated by the sample variance ($s^2$).

Variables are the characteristics we are interested in. For example:

• The length of fish in Long Lake.
• The pH of lakes in the Adirondack Park.
• The weight of grizzly bears in Yellowstone National Park.

Variables are divided into two major groups: qualitative and quantitative. Qualitative variables have values that are attributes or categories. Mathematical operations cannot be applied to qualitative variables. Examples of qualitative variables are gender, race, and petal color. Quantitative variables have values that are typically numeric, such as measurements. Mathematical operations can be applied to these data. Examples of quantitative variables are age, height, and length. Quantitative variables can be broken down further into two more categories: discrete and continuous variables. Discrete variables have a finite or countable number of possible values. Think of discrete variables as “hens”. Hens can lay 1 egg, or 2 eggs, or 13 eggs… There are a limited, definable number of values that the variable could take on.

Continuous variables have an infinite number of possible values. Think of continuous variables as “cows”. Cows can give 4.6713245 gallons of milk, or 7.0918754 gallons of milk, or 13.272698 gallons of milk… There are an almost infinite number of values that a continuous variable could take on.

Example $1$:

Is the variable qualitative or quantitative?

1. Species
2. Weight
3. Diameter
4. Zip Code

Solution

(qualitative, quantitative, quantitative, qualitative)

Descriptive Measures

Descriptive measures of populations are called parameters and are typically written using Greek letters. The population mean is $\mu$ (mu). The population variance is $\sigma ^2$ (sigma squared) and population standard deviation is $\sigma$ (sigma). Descriptive measures of samples are called statistics and are typically written using Roman letters. The sample mean is $\bar x$ (x-bar). The sample variance is $s^2$ and the sample standard deviation is $s$. Sample statistics are used to estimate unknown population parameters. In this section, we will examine descriptive statistics in terms of measures of center and measures of dispersion.
These descriptive statistics help us to identify the center and spread of the data.

Measures of Center

Mean

The arithmetic mean of a variable, often called the average, is computed by adding up all the values and dividing by the total number of values. The population mean is represented by the Greek letter $\mu$ (mu). The sample mean is represented by $\bar x$ (x-bar). The sample mean is usually the best, unbiased estimate of the population mean. However, the mean is influenced by extreme values (outliers) and may not be the best measure of center with strongly skewed data. The following equations compute the population mean and sample mean.

$\mu = \frac {\sum x_i}{N}$

$\bar x = \frac {\sum x_i}{n}$

where $x_i$ is an element in the data set, $N$ is the number of elements in the population, and $n$ is the number of elements in the sample data set.

Example $2$: mean

Find the mean for the following sample data set:

6.4, 5.2, 7.9, 3.4

Solution

$\bar x = \frac {6.4+5.2+7.9+3.4} {4} = 5.725 \nonumber$

Median

The median of a variable is the middle value of the data set when the data are sorted in order from least to greatest. It splits the data into two equal halves with 50% of the data below the median and 50% above the median. The median is resistant to the influence of outliers, and may be a better measure of center with strongly skewed data.

The calculation of the median depends on the number of observations in the data set.

To calculate the median with an odd number of values (n is odd), first sort the data from smallest to largest.

Example $3$: Calculating Median with Odd number of values

Find the median for the following sample data set:

$23, 27, 29, 31, 35, 39, 40, 42, 44, 47, 51\nonumber$

Solution

The median is 39. It is the middle value that separates the lower 50% of the data from the upper 50% of the data.

To calculate the median with an even number of values (n is even), first sort the data from smallest to largest and take the average of the two middle values.

Example $4$: Calculating Median with even number of values

Find the median for the following sample data set:

$23, 27, 29, 31, 35, 39, 40, 42, 44, 47\nonumber$

Solution

$M = \frac {35+39}{2} = 37\nonumber$

Mode

The mode is the most frequently occurring value and is commonly used with qualitative data as the values are categorical. Categorical data cannot be added, subtracted, multiplied or divided, so the mean and median cannot be computed. The mode is less commonly used with quantitative data as a measure of center. Sometimes each value occurs only once and the mode will not be meaningful.

Understanding the relationship between the mean and median is important. It gives us insight into the distribution of the variable. For example, if the distribution is skewed right (positively skewed), the mean will increase to account for the few larger observations that pull the distribution to the right. The median will be less affected by these extreme large values, so in this situation, the mean will be larger than the median. In a symmetric distribution, the mean, median, and mode will all be similar in value. If the distribution is skewed left (negatively skewed), the mean will decrease to account for the few smaller observations that pull the distribution to the left. Again, the median will be less affected by these extreme small observations, and in this situation, the mean will be less than the median.
Figure 2. Illustration of skewed and symmetric distributions.

Measures of Dispersion

Measures of center look at the average or middle values of a data set. Measures of dispersion look at the spread or variation of the data. Variation refers to the amount that the values vary among themselves. Values in a data set that are relatively close to each other have lower measures of variation. Values that are spread farther apart have higher measures of variation.

Examine the two histograms below. Both groups have the same mean weight, but the values of Group A are more spread out compared to the values in Group B. Both groups have an average weight of 267 lb. but the weights of Group A are more variable.

Figure 3. Histograms of Group A and Group B.

This section will examine five measures of dispersion: range, variance, standard deviation, standard error, and coefficient of variation.

Range

The range of a variable is the largest value minus the smallest value. It is the simplest measure and uses only these two values in a quantitative data set.

Example $5$: Computing Range

Find the range for the given data set.

$12, 29, 32, 34, 38, 49, 57\nonumber$

$Range = 57 - 12 = 45\nonumber$

Variance

The variance uses the difference between each value and its arithmetic mean. The differences are squared to deal with positive and negative differences. The sample variance ($s^2$) is an unbiased estimator of the population variance ($\sigma ^2$), with n-1 degrees of freedom.

Degrees of freedom: In general, the degrees of freedom for an estimate is equal to the number of values minus the number of parameters estimated en route to the estimate in question.

The sample variance is unbiased due to the difference in the denominator. If we used “n” in the denominator instead of “n - 1”, we would consistently underestimate the true population variance. To correct this bias, the denominator is modified to “n - 1”.

Definition: population variance

$\sigma ^2 = \frac {\sum (x_i-\mu)^2} {N}$

Definition: sample variance

$s^2 = \frac {\sum (x_i- \bar x)^2}{n-1} = \frac {\sum x_i^2 - \frac {(\sum x_i)^2}{n}}{n-1} \label{samplevar}$

Example $6$: Computing Variance

Compute the variance of the sample data: 3, 5, 7.

Solution

The sample mean ($\bar x$) is 5. Then use Equation \ref{samplevar}

$s^2 = \frac {(3-5)^2 +(5-5)^2 + (7-5) ^2} {3-1} = 4\nonumber$

Standard Deviation

The standard deviation is the square root of the variance (both population and sample). While the sample variance is the positive, unbiased estimator for the population variance, the units for the variance are squared. The standard deviation is a common method for numerically describing the distribution of a variable. The population standard deviation is σ (sigma) and sample standard deviation is s.

Definition: SAMPLE STANDARD DEVIATION

$s = \sqrt {s^2}$

Definition: POPULATION STANDARD DEVIATION

$\sigma = \sqrt {\sigma ^2}$

Example $7$:

Compute the standard deviation of the sample data: 3, 5, 7 with a sample mean of 5.

Solution

The sample mean ($\bar x$) is 5, using the definition of standard deviation

$s = \sqrt {\frac {(3-5)^2+(5-5)^2+(7-5)^2} {3-1}} = \sqrt {4} = 2\nonumber$

Standard Error of Mean

Commonly, we use the sample mean to estimate the population mean μ.
For example, if we want to estimate the heights of eighty-year-old cherry trees, we can proceed as follows:

• Randomly select 100 trees
• Compute the sample mean of the 100 heights
• Use that as our estimate

We want to use this sample mean to estimate the true but unknown population mean. But our sample of 100 trees is just one of many possible samples (of the same size) that could have been randomly selected. Imagine if we take a series of different random samples from the same population and all the same size:

• Sample 1: we compute sample mean $\bar x$
• Sample 2: we compute sample mean $\bar x$
• Sample 3: we compute sample mean $\bar x$
• Etc.

Each time we sample, we may get a different result as we are using a different subset of data to compute the sample mean. This shows us that the sample mean is a random variable!

The sample mean ($\bar x$) is a random variable with its own probability distribution called the sampling distribution of the sample mean. The distribution of the sample mean will have a mean equal to µ and a standard deviation equal to $\frac {s} {\sqrt {n}}$

Note

The standard error $\frac {s} {\sqrt {n}}$ is the standard deviation of all possible sample means.

In reality, we would only take one sample, but we need to understand and quantify the sample to sample variability that occurs in the sampling process.

The standard error is the standard deviation of the sample means and can be expressed in different ways.

$s_{\bar x}=\sqrt {\frac {s^2}{n}}=\frac {s}{\sqrt {n}}$

Note

$s^2$ is the sample variance and s is the sample standard deviation

Example $8$:

Describe the distribution of the sample mean.

A population of fish has weights that are normally distributed with µ = 8 lb. and s = 2.6 lb. If you take a sample of size n=6, the sample mean will have a normal distribution with a mean of 8 and a standard deviation (standard error) of $\frac {2.6}{\sqrt {6}}$ = 1.061 lb.

If you increase the sample size to 10, the sample mean will be normally distributed with a mean of 8 lb. and a standard deviation (standard error) of $\frac {2.6}{\sqrt {10}}$ = 0.822 lb.

Notice how the standard error decreases as the sample size increases.

The Central Limit Theorem (CLT) states that the sampling distribution of the sample means will approach a normal distribution as the sample size increases. If we do not have a normal distribution, or know nothing about the distribution of our random variable, the CLT tells us that the distribution of the sample means ($\bar x$’s) will become normal as n increases. How large does n have to be? A general rule of thumb tells us that n ≥ 30.

Note

The Central Limit Theorem tells us that regardless of the shape of our population, the sampling distribution of the sample mean will be normal as the sample size increases.

Coefficient of Variation

To compare standard deviations between different populations or samples is difficult because the standard deviation depends on units of measure. The coefficient of variation expresses the standard deviation as a percentage of the sample or population mean. It is a unitless measure.

Definition: CV of Population

$CV=\frac {\sigma}{\mu} \times 100$

Definition: CV of sample

$CV=\frac {s}{\bar x} \times 100$

Example $9$:

Fisheries biologists were studying the length and weight of Pacific salmon. They took a random sample and computed the mean and standard deviation for length and weight (given below).
While the standard deviations are similar, the differences in units between lengths and weights make it difficult to compare the variability. Computing the coefficient of variation for each variable allows the biologists to determine which variable has the greater variation relative to its mean.

          Sample mean   Sample standard deviation
Length    63 cm         19.97 cm
Weight    37.6 kg       19.39 kg

For length, $CV = \frac{19.97}{63} \times 100 = 31.7\%$; for weight, $CV = \frac{19.39}{37.6} \times 100 = 51.6\%$.

There is greater variability in Pacific salmon weight compared to length.

Variability

Variability is described in many different ways. Standard deviation measures point to point variability within a sample, i.e., variation among individual sampling units. Coefficient of variation also measures point to point variability but on a relative basis (relative to the mean), and is not influenced by measurement units. Standard error measures the sample to sample variability, i.e. variation among repeated samples in the sampling process. Typically, we only have one sample and standard error allows us to quantify the uncertainty in our sampling process.

Basic Statistics Example using Excel and Minitab Software

Consider the following tally from 11 sample plots on Heiburg Forest, where Xi is the number of downed logs per acre. Compute basic statistics for the sample plots.

Table 1. Sample data on number of downed logs per acre from Heiburg Forest.

ID     $X_i$   $X_i^2$   $X_i-\bar{X}$   $(X_i-\bar{X})^2$   Order
1      25      625       -7.27           52.8529             4
2      35      1225      2.73            7.4529              6
3      20      400       -12.27          150.5529            3
4      55      3025      22.73           516.6529            10
5      15      225       -17.27          298.2529            2
6      40      1600      7.73            59.7529             8
7      25      625       -7.27           52.8529             5
8      55      3025      22.73           516.6529            11
9      35      1225      2.73            7.4529              7
10     45      2025      12.73           162.0529            9
11     5       25        -27.27          743.6529            1
Sum    355     14025     ≈ 0             2568.1519

(1) Sample mean:
$\bar{X}=\frac{\sum_{i=1}^n X_i}{n}=\frac{355}{11}=32.27$

(2) Median $=35$

(3) Variance:
$\begin{aligned} S^2 &= \frac{\sum_{i=1}^n\left(X_i-\bar{X}\right)^2}{n-1}=\frac{2568.1519}{11-1}=256.82 \\ &= \frac{\sum_{i=1}^n X_i^2-\frac{\left(\sum_{i=1}^n X_i\right)^2}{n}}{n-1}=\frac{14025-\frac{(355)^2}{11}}{11-1}=256.82 \end{aligned}$

(4) Standard deviation: $S=\sqrt{S^2}=\sqrt{256.82}=16.0256$

(5) Range: $55-5=50$

(6) Coefficient of variation:
$CV=\frac{S}{\bar{X}} \cdot 100=\frac{16.0256}{32.27} \cdot 100=49.66 \%$

(7) Standard error of the mean:
$\begin{aligned} S_{\bar{X}} &= \sqrt{\frac{S^2}{n}}=\sqrt{\frac{256.82}{11}}=4.8319 \\ &= \frac{S}{\sqrt{n}}=\frac{16.0256}{\sqrt{11}}=4.8319 \end{aligned}$

Software Solutions

Minitab

Open Minitab and enter data in the spreadsheet. Select STAT>Descriptive stats and check all statistics required.

Variable   N    N*   Mean    SE Mean   StDev   Variance   CoefVar   Minimum   Q1
Data       11   0    32.27   4.83      16.03   256.82     49.66     5.00      20.00

Variable   Median   Q3      Maximum   IQR
Data       35.00    45.00   55.00     25.00

Excel

Open up Excel and enter the data in the first column of the spreadsheet. Select DATA>Data Analysis>Descriptive Statistics.
For the Input Range, select data in column A. Check “Labels in First Row” and “Summary Statistics”. Also check “Output Range” and select location for output.

Mean                        32.27273
Standard Error              4.831884
Median                      35
Mode                        25
Standard Deviation          16.02555
Sample Variance             256.8182
Kurtosis                    -0.73643
Skewness                    -0.05982
Range                       50
Minimum                     5
Maximum                     55
Sum                         355
Count                       11

Graphic Representation

Data organization and summarization can be done graphically, as well as numerically. Tables and graphs allow for a quick overview of the information collected and support the presentation of the data used in the project. While there are a multitude of available graphics, this chapter will focus on a specific few commonly used tools.

Pie Charts

Pie charts are a good visual tool allowing the reader to quickly see the relationship between categories. It is important to clearly label each category, and adding the frequency or relative frequency is often helpful. However, too many categories can be confusing. Be careful of putting too much information in a pie chart. The first pie chart gives a clear idea of the representation of fish types relative to the whole sample. The second pie chart is more difficult to interpret, with too many categories. It is important to select the best graphic when presenting the information to the reader.

Bar Charts and Histograms

Bar charts graphically describe the distribution of a qualitative variable (fish type), while histograms describe the distribution of a quantitative variable, whether discrete or continuous (bear weight).

In both cases, the bars have equal width and the y-axis is clearly defined. With qualitative data, each category is represented by a specific bar. With continuous data, lower and upper class limits must be defined with equal class widths. There should be no gaps between classes and each observation should fall into one, and only one, class.

Boxplots

Boxplots use the 5-number summary (minimum and maximum values with the three quartiles) to illustrate the center, spread, and distribution of your data. When paired with histograms, they give an excellent description, both numerically and graphically, of the data.

With symmetric data, the distribution is bell-shaped and somewhat symmetric. In the boxplot, we see that Q1 and Q3 are approximately equidistant from the median, as are the minimum and maximum values. Also, both whiskers (lines extending from the boxes) are approximately equal in length.

With skewed left distributions, we see that the histogram looks “pulled” to the left. In the boxplot, Q1 is farther away from the median, as is the minimum value, and the left whisker is longer than the right whisker.

With skewed right distributions, we see that the histogram looks “pulled” to the right. In the boxplot, Q3 is farther away from the median, as is the maximum value, and the right whisker is longer than the left whisker.
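The summary statistics for the downed-logs data above can also be reproduced with a few lines of R (shown here purely as an illustrative cross-check of the hand calculations; the standard error and coefficient of variation are not built-in functions, so they are computed directly from their formulas):

```r
# number of downed logs per acre for the 11 sample plots
logs <- c(25, 35, 20, 55, 15, 40, 25, 55, 35, 45, 5)

mean(logs)                     # 32.27
median(logs)                   # 35
var(logs)                      # 256.82 (sample variance, n - 1 in the denominator)
sd(logs)                       # 16.03
max(logs) - min(logs)          # range = 50
sd(logs) / sqrt(length(logs))  # standard error of the mean = 4.83
100 * sd(logs) / mean(logs)    # coefficient of variation = 49.66%
```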
Once we have organized and summarized your sample data, the next step is to identify the underlying distribution of our random variable. Computing probabilities for continuous random variables are complicated by the fact that there are an infinite number of possible values that our random variable can take on, so the probability of observing a particular value for a random variable is zero. Therefore, to find the probabilities associated with a continuous random variable, we use a probability density function (PDF).
A PDF is an equation used to find probabilities for continuous random variables. The PDF must satisfy the following two rules:
1. The area under the curve must equal one (over all possible values of the random variable).
2. The probabilities must be equal to or greater than zero for all possible values of the random variable.
The area under the curve of the probability density function over some interval represents the probability of observing those values of the random variable in that interval.
The Normal Distribution
Many continuous random variables have a bell-shaped or somewhat symmetric distribution. This is a normal distribution. In other words, the probability distribution of its relative frequency histogram follows a normal curve. The curve is bell-shaped, symmetric about the mean, and defined by µ and σ (the mean and standard deviation).
There are normal curves for every combination of µ and σ. The mean (µ) shifts the curve to the left or right. The standard deviation (σ) alters the spread of the curve. The first pair of curves have different means but the same standard deviation. The second pair of curves share the same mean (µ) but have different standard deviations. The pink curve has a smaller standard deviation. It is narrower and taller, and the probability is spread over a smaller range of values. The blue curve has a larger standard deviation. The curve is flatter and the tails are thicker. The probability is spread over a larger range of values.
Properties of the normal curve:
• The mean is the center of this distribution and the highest point.
• The curve is symmetric about the mean. (The area to the left of the mean equals the area to the right of the mean.)
• The total area under the curve is equal to one.
• As x increases or decreases, the curve approaches zero but never touches the horizontal axis.
• The PDF of a normal curve is $y= \frac {1}{\sqrt {2\pi} \sigma} e^{\frac {-(x-\mu)^2}{2\sigma^2}}$
• A normal curve can be used to estimate probabilities.
• A normal curve can be used to estimate proportions of a population that have certain x-values.
The Standard Normal Distribution
There are millions of possible combinations of means and standard deviations for continuous random variables. Finding probabilities associated with these variables would require us to integrate the PDF over the range of values we are interested in. To avoid this, we can rely on the standard normal distribution. The standard normal distribution is a special normal distribution with a µ = 0 and σ = 1. We can use the Z-score to standardize any normal random variable, converting the x-values to Z-scores, thus allowing us to use probabilities from the standard normal table. So how do we find area under the curve associated with a Z-score?
Standard Normal Table
• The standard normal table gives probabilities associated with specific Z-scores.
• The table we use is cumulative from the left.
• The negative side is for all Z-scores less than zero (all values less than the mean).
• The positive side is for all Z-scores greater than zero (all values greater than the mean).
• Not all standard normal tables work the same way.
Example $1$:
What is the area associated with the Z-score 1.62?
z     .00      .01      .02      .03      .04      .05      .06      .07      .08      .09
0.0   0.5000   0.5040   0.5080   0.5120   0.5160   0.5199   0.5239   0.5279   0.5319   0.5359
0.1   0.5398   0.5438   0.5478   0.5517   0.5557   0.5596   0.5636   0.5675   0.5714   0.5753
0.2   0.5793   0.5832   0.5871   0.5910   0.5948   0.5987   0.6026   0.6064   0.6103   0.6141
...
1.5   0.9332   0.9345   0.9357   0.9370   0.9382   0.9394   0.9406   0.9418   0.9429   0.9441
1.6   0.9452   0.9463   0.9474   0.9484   0.9495   0.9505   0.9515   0.9525   0.9535   0.9545
1.7   0.9554   0.9564   0.9573   0.9582   0.9591   0.9599   0.9608   0.9616   0.9625   0.9633
Answer
The area is 0.9474.
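The same area can also be obtained from software rather than a printed table. In R, for example (shown only as a quick check), the cumulative probability function for the standard normal distribution gives the identical value:

```r
pnorm(1.62)  # 0.9474, the area under the standard normal curve to the left of z = 1.62
```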
Reading the Standard Normal Table
• Read down the Z-column to get the first part of the Z-score (1.6).
• Read across the top row to get the second decimal place in the Z-score (0.02).
• The intersection of this row and column gives the area under the curve to the left of the Z-score.
Finding Z-scores for a Given Area
• What if we have an area and we want to find the Z-score associated with that area?
• Instead of Z-score → area, we want area → Z-score.
• We can use the standard normal table to find the area in the body of values and read backwards to find the associated Z-score.
• Using the table, search the probabilities to find an area that is closest to the probability you are interested in.
Example $2$:
To find a Z-score for which the area to the right is 5%:
Since the table is cumulative from the left, you must use the complement of 5%.
$1.000 - 0.05 = 0.9500 \nonumber$
• Find the Z-score for the area of 0.9500.
• Look at the probabilities and find a value as close to 0.9500 as possible.
The standard normal table
z     .00      .01      .02      .03      .04      .05      .06      .07      .08      .09
0.0   0.5000   0.5040   0.5080   0.5120   0.5160   0.5199   0.5239   0.5279   0.5319   0.5359
0.1   0.5398   0.5438   0.5478   0.5517   0.5557   0.5596   0.5636   0.5675   0.5714   0.5753
0.2   0.5793   0.5832   0.5871   0.5910   0.5948   0.5987   0.6026   0.6064   0.6103   0.6141
...
1.5   0.9332   0.9345   0.9357   0.9370   0.9382   0.9394   0.9406   0.9418   0.9429   0.9441
1.6   0.9452   0.9463   0.9474   0.9484   0.9495   0.9505   0.9515   0.9525   0.9535   0.9545
1.7   0.9554   0.9564   0.9573   0.9582   0.9591   0.9599   0.9608   0.9616   0.9625   0.9633
Answer
The Z-score for the 95th percentile is 1.64.
Area in between Two Z-scores
Example $3$
To find Z-scores that limit the middle 95%:
• The middle 95% has 2.5% on the right and 2.5% on the left.
• Use the symmetry of the curve.
Solution
• Look at your standard normal table. Since the table is cumulative from the left, it is easier to find the area to the left first.
• Find the area of 0.025 on the negative side of the table.
• The Z-score for the area to the left is -1.96.
• Since the curve is symmetric, the Z-score for the area to the right is 1.96.
Common Z-scores
There are many commonly used Z-scores:
• $Z_{.05}$ = 1.645 and the area between -1.645 and 1.645 is 90%
• $Z_{.025}$ = 1.96 and the area between -1.96 and 1.96 is 95%
• $Z_{.005}$ = 2.575 and the area between -2.575 and 2.575 is 99%
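These common critical values can be verified with the inverse of the standard normal cumulative distribution; in R, for example:

```r
qnorm(c(0.95, 0.975, 0.995))  # 1.645, 1.960, 2.576 (the last is often rounded to 2.575)
```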
Applications of the Normal Distribution
Typically, our normally distributed data do not have μ = 0 and σ = 1, but we can relate any normal distribution to the standard normal distributions using the Z-score. We can transform values of x to values of z.
$z=\frac {x-\mu}{\sigma}$
For example, if a normally distributed random variable has a μ = 6 and σ = 2, then a value of x = 7 corresponds to a Z-score of 0.5.
$Z=\frac{7-6}{2}=0.5$
This tells you that 7 is one-half a standard deviation above its mean. We can use this relationship to find probabilities for any normal random variable.
To find the area for values of X, a normal random variable, draw a picture of the area of interest, convert the x-values to Z-scores using the Z-score and then use the standard normal table to find areas to the left, to the right, or in between.
$z=\frac {x-\mu}{\sigma}$
Example $4$:
Adult deer population weights are normally distributed with µ = 110 lb. and σ = 29.7 lb. As a biologist you determine that a weight less than 82 lb. is unhealthy and you want to know what proportion of your population is unhealthy.
Convert 82 to a Z-score
$z=\frac{82-110}{29.7} = -0.94 \nonumber$
The x value of 82 is 0.94 standard deviations below the mean.
Go to the standard normal table (negative side) and find the area associated with a Z-score of -0.94.
This is an “area to the left” problem so you can read directly from the table to get the probability.
$P(x<82) = 0.1736 \nonumber$
Approximately 17.36% of the population of adult deer is underweight, OR one deer chosen at random will have a 17.36% chance of weighing less than 82 lb.
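The same probability can be computed directly from a normal distribution with mean 110 and standard deviation 29.7; for example, in R:

```r
pnorm(82, mean = 110, sd = 29.7)  # about 0.173; the table gave 0.1736 because z was rounded to -0.94
```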
Example $5$:
Statistics from the Midwest Regional Climate Center indicate that Jones City, which has a large wildlife refuge, gets an average of 36.7 in. of rain each year with a standard deviation of 5.1 in. The amount of rain is normally distributed. During what percent of the years does Jones City get more than 40 in. of rain?
$P(x > 40) \nonumber$
Solution
$z=\frac {40-36.7}{5.1}=0.65 \nonumber$
$P(x>40) = (1-0.7422) = 0.2578 \nonumber$
For approximately 25.78% of the years, Jones City will get more than 40 in. of rain.
Assessing Normality
If the distribution is unknown and the sample size is not greater than 30 (Central Limit Theorem), we have to assess the assumption of normality. Our primary method is the normal probability plot. This plot graphs the observed data, ranked in ascending order, against the “expected” Z-score of that rank. If the sample data were taken from a normally distributed random variable, then the plot would be approximately linear.
Examine the following probability plot. The center line is the relationship we would expect to see if the data were drawn from a perfectly normal distribution. Notice how the observed data (red dots) loosely follow this linear relationship. Minitab also computes an Anderson-Darling test to assess normality. The null hypothesis for this test is that the sample data have been drawn from a normally distributed population. A p-value greater than 0.05 supports the assumption of normality.
Compare the histogram and the normal probability plot in this next example. The histogram indicates a skewed right distribution.
The observed data do not follow a linear pattern and the p-value for the A-D test is less than 0.005 indicating a non-normal population distribution.
Normality cannot be assumed. You must always verify this assumption. Remember, the probabilities we are finding come from the standard NORMAL table. If our data are NOT normally distributed, then these probabilities DO NOT APPLY.
• Do you know if the population is normally distributed?
• Do you have a large enough sample size (n≥30)? Remember the Central Limit Theorem?
• Did you construct a normal probability plot?
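A normal probability plot can be produced in most statistical software. The R sketch below is an illustration only: the vector x stands in for whatever sample is being checked, and the Anderson-Darling test itself is not in base R (it is available in add-on packages such as nortest).

```r
x <- rnorm(25, mean = 10, sd = 2)  # placeholder data; substitute your own sample

qqnorm(x)  # observed values plotted against their expected normal quantiles
qqline(x)  # reference line; an approximately linear pattern supports normality
```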
• 2.1: Sampling Distribution of the Sample Mean
Because our inferences about the population mean rely on the sample mean, we focus on the distribution of the sample mean. Is it normal? What if our population is not normally distributed or we don’t know anything about the distribution of our population? The Central Limit Theorem states that the sampling distribution of the sample means will approach a normal distribution as the sample size increases.
• 2.2: Confidence Intervals
In the preceding chapter we learned that populations are characterized by descriptive measures called parameters. Inferences about parameters are based on sample statistics. We now want to estimate population parameters and assess the reliability of our estimates based on our knowledge of the sampling distributions of these statistics.
02: Sampling Distributions and Confidence Intervals
Inferential testing uses the sample mean ($\bar{x}$) to estimate the population mean ($μ$). Typically, we use the data from a single sample, but there are many possible samples of the same size that could be drawn from that population. As we saw in the previous chapter, the sample mean ($\bar{x}$) is a random variable with its own distribution.
• The distribution of the sample mean will have a mean equal to µ.
• It will have a standard deviation (standard error) equal to $\frac{\sigma}{\sqrt {n}}$
Because our inferences about the population mean rely on the sample mean, we focus on the distribution of the sample mean. Is it normal? What if our population is not normally distributed or we don’t know anything about the distribution of our population?
The Central Limit Theorem (CLT)
The Central Limit Theorem states that the sampling distribution of the sample means will approach a normal distribution as the sample size increases.
So if we do not have a normal distribution, or know nothing about our distribution, the CLT tells us that the distribution of the sample means ($\bar{x}$’s) will become normally distributed as n (the sample size) increases. How large does n have to be? A general rule of thumb tells us that n ≥ 30.
The Central Limit Theorem tells us that regardless of the shape of our population, the sampling distribution of the sample mean will be normal as the sample size increases.
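The theorem is easy to see by simulation. The short R sketch below (an illustration only) draws repeated samples of size n = 30 from a strongly right-skewed exponential population; the resulting sample means are nevertheless approximately normally distributed.

```r
# 10,000 samples of size 30 from a right-skewed population with mean 1 and sd 1
sample.means <- replicate(10000, mean(rexp(30, rate = 1)))

hist(sample.means)  # roughly bell-shaped and centered near the population mean of 1
sd(sample.means)    # close to sigma / sqrt(n) = 1 / sqrt(30) = 0.18
```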
Sampling Distribution of the Sample Proportion
The population proportion ($p$) is a parameter that is as commonly estimated as the mean. It is just as important to understand the distribution of the sample proportion as it is to understand the distribution of the sample mean. With proportions, the element either has the characteristic you are interested in or the element does not have the characteristic. The sample proportion ($\hat {p}$) is calculated by
$\hat {p} = \frac{x}{n} \label{sampleproption}$
where $x$ is the number of elements in your population with the characteristic and n is the sample size.
Example $1$: sample proportion
You are studying the number of cavity trees in the Monongahela National Forest for wildlife habitat. You have a sample size of n = 950 trees and, of those trees, x = 238 trees with cavities. Calculate the sample proportion.
A naturally formed tree hollow at the base of the tree. (CC BY 2.0; Lauren "Lolly" Weinhold).
Solution
This is a simple application of Equation \ref{sampleproption}:
$\hat {p} = \frac {238}{950} =0.25 \nonumber$
The distribution of the sample proportion has a mean of $\mu_\hat{p} = p$
and has a standard deviation of $\sigma_{\hat {p}} = \sqrt {\frac {p(1-p)}{n}}.$
The sample proportion is normally distributed if $n$ is very large and $\hat{p}$ is not close to 0 or 1. We can also use the following relationship to assess normality when the parameter being estimated is p, the population proportion:
$n\hat {p} (1- \hat {p}) \ge 10$
In the preceding chapter we learned that populations are characterized by descriptive measures called parameters. Inferences about parameters are based on sample statistics. We now want to estimate population parameters and assess the reliability of our estimates based on our knowledge of the sampling distributions of these statistics.
Point Estimates
We start with a point estimate. This is a single value computed from the sample data that is used to estimate the population parameter of interest.
• The sample mean ($\bar {x}$) is a point estimate of the population mean ($\mu$).
• The sample proportion ($\hat {p}$) is the point estimate of the population proportion (p).
We use point estimates to construct confidence intervals for unknown parameters.
• A confidence interval is an interval of values instead of a single point estimate.
• The level of confidence corresponds to the expected proportion of intervals that will contain the parameter if many confidence intervals are constructed of the same sample size from the same population.
• Our uncertainty is about whether our particular confidence interval is one of those that truly contains the true value of the parameter.
Example $1$: bear weight
We are 95% confident that our interval contains the population mean bear weight.
If we created 100 confidence intervals of the same size from the same population, we would expect 95 of them to contain the true parameter (the population mean weight). We also expect five of the intervals would not contain the parameter.
Figure $1$: Confidence intervals from twenty-five different samples.
In this example, twenty-five samples from the same population gave these 95% confidence intervals. In the long term, 95% of all samples give an interval that contains µ, the true (but unknown) population mean.
Level of confidence is expressed as a percent.
• The complement of the level of confidence is α (alpha), the level of significance.
• The level of confidence is described as $(1- \alpha) \times 100\%$.
What does this really mean?
• We use a point estimate (e.g., sample mean) to estimate the population mean.
• We attach a level of confidence to this interval to describe how certain we are that this interval actually contains the unknown population parameter.
• We want to estimate the population parameter, such as the mean (μ) or proportion (p).
$\bar {x}-E < \mu < \bar {x}+E$
or
$\hat {p}-E < p <\hat {p}+E$
where $E$ is the margin of error.
The confidence is based on area under a normal curve. So the assumption of normality must be met (Chapter 1).
Confidence Intervals about the Mean (μ) when the Population Standard Deviation (σ) is Known
A confidence interval takes the form of: point estimate $\pm$ margin of error.
The point estimate
• The point estimate comes from the sample data.
• To estimate the population mean ($μ$), use the sample mean ($\bar{x}$) as the point estimate.
The margin of error
• Depends on the level of confidence, the sample size and the population standard deviation.
• It is computed as $E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}$, where $Z_{\frac {\alpha}{2}}$ is the critical value from the standard normal table associated with α (the level of significance).
The critical value $Z_{\frac {\alpha}{2}}$
• This is a Z-score that bounds the level of confidence.
• Confidence intervals are ALWAYS two-sided and the Z-scores are the limits of the area associated with the level of confidence.
• The level of significance (α) is divided into halves because we are looking at the middle 95% of the area under the curve.
• Go to your standard normal table and find the area of 0.025 in the body of values.
• What is the Z-score for that area?
Table $1$: Common critical values (Z-scores).
Confidence Level   Level of Significance (α)   $Z_{\alpha / 2}$
99%                1%                          2.575
95%                5%                          1.96
90%                10%                         1.645
Steps
Construction of a confidence interval about $μ$ when $σ$ is known:
1. $Z_{\frac {\alpha}{2}}$ (critical value)
2. $E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}$ (margin of error)
3. $\bar {x} \pm E$ (point estimate ± margin of error)
Example $3$: Construct a confidence interval about the population mean
Researchers have been studying p-loading in Jones Lake for many years. It is known that mean water clarity (using a Secchi disk) is normally distributed with a population standard deviation of σ = 15.4 in. A random sample of 22 measurements was taken at various points on the lake with a sample mean of $\bar{x}$ = 57.8 in. The researchers want you to construct a 95% confidence interval for μ, the mean water clarity.
A Secchi disk to measure turbidity of water. (CC SA; publiclab.org)
Solution
1) $Z_{\frac {\alpha}{2}}$ = 1.96
2) $E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}$ = $1.96 \times \frac {15.4}{\sqrt {22}}$ = 6.435
3) $\bar {x} \pm E$ = 57.8 ± 6.435
95% confidence interval for the mean water clarity is (51.36, 64.24).
We can be 95% confident that this interval contains the population mean water clarity for Jones Lake.
Now construct a 99% confidence interval for μ, the mean water clarity, and interpret.
1) $Z_{\frac {\alpha}{2}}$= 2.575
2) $E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}$ = $2.575 \times \frac {15.4}{\sqrt {22}}$ = 8.454
3) $\bar {x} \pm E$ = 57.8 ± 8.454
99% confidence interval for the mean water clarity is (49.35, 66.25).
We can be 99% confident that this interval contains the population mean water clarity for Jones Lake.
As the level of confidence increased from 95% to 99%, the width of the interval increased. As the probability (area under the normal curve) increased, the critical value increased resulting in a wider interval.
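The hand calculations can also be checked directly from the formula. The short R sketch below is an illustration only; qnorm() supplies the critical values 1.96 and 2.575.

```r
xbar <- 57.8; sigma <- 15.4; n <- 22

xbar + c(-1, 1) * qnorm(0.975) * sigma / sqrt(n)  # 95% interval: (51.36, 64.24)
xbar + c(-1, 1) * qnorm(0.995) * sigma / sqrt(n)  # 99% interval: matches (49.35, 66.25) to within rounding
```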
Software Solutions
Minitab
You can use Minitab to construct this 95% confidence interval (Excel does not construct confidence intervals about the mean when the population standard deviation is known). Select Basic Statistics>1-sample Z. Enter the known population standard deviation and select the required level of confidence.
Minitab screen shots for constructing a confidence interval.
One-Sample Z: depth
The assumed standard deviation = 15.4
Variable   N    Mean    StDev   SE Mean   95% CI
depth      22   57.80   11.60   3.28      (51.36, 64.24)
Confidence Intervals about the Mean (μ) when the Population Standard Deviation (σ) is Unknown
Typically, in real life we often don’t know the population standard deviation (σ). We can use the sample standard deviation (s) in place of σ. However, because of this change, we can’t use the standard normal distribution to find the critical values necessary for constructing a confidence interval.
The Student’s t-distribution was created for situations when σ was unknown. William Gosset, who published under the pseudonym “Student”, worked as a quality control engineer for Guinness Brewery in Dublin. He found errors in his testing and he knew it was due to the use of s instead of σ. He created this distribution to deal with the problem of an unknown population standard deviation and small sample sizes. A portion of the t-table is shown below.
Table $2$: Portion of the student’s t-table.
df    0.10    0.05    0.025    0.02     0.01     0.005
1     3.078   6.314   12.706   15.894   31.821   63.657
2     1.886   2.920   4.303    4.849    6.965    9.925
3     1.638   2.353   3.182    3.482    4.541    5.841
4     1.533   2.132   2.776    2.999    3.747    4.604
5     1.476   2.015   2.571    2.757    3.365    4.032
Example $4$
Find the critical value $t_{\frac {\alpha}{2}}$ for a 95% confidence interval with a sample size of n=13.
Solution
• Degrees of freedom (down the left-hand column) is equal to n-1 = 12
• α = 0.05 and α/2 = 0.025
• Go down the 0.025 column to 12 df
• $t_{\frac {\alpha}{2}}$= 2.179
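The same critical value can be obtained from software; in R, for example, the quantile function of the t-distribution gives:

```r
qt(0.975, df = 12)  # 2.179
```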
The critical values from the Student’s t-distribution approach the critical values from the standard normal distribution as the sample size (n) increases.
Table $3$: Critical values from the student’s t-table.
n      Degrees of Freedom   $t_{0.025}$
11     10                   2.228
51     50                   2.009
101    100                  1.984
1001   1000                 1.962
Using the standard normal curve, the critical value for a 95% confidence interval is 1.96. You can see how different samples sizes will change the critical value and thus the confidence interval, especially when the sample size is small.
Construction of a Confidence Interval
When σ is Unknown
1. $t_{\frac {\alpha}{2}}$ critical value with n-1 df
2. $E = t_{\frac {\alpha}{2}} \times \frac{s}{\sqrt {n}}$
3. $\bar {x} \pm E$
Example $5$:
Researchers studying the effects of acid rain in the Adirondack Mountains collected water samples from 22 lakes. They measured the pH (acidity) of the water and want to construct a 99% confidence interval about the mean lake pH for this region. The sample mean is 6.4438 with a sample standard deviation of 0.7120. They do not know anything about the distribution of the pH of this population, and the sample is small (n<30), so they look at a normal probability plot.
Figure 4. Normal probability plot.
Solution
The data is normally distributed. Now construct the 99% confidence interval about the mean pH.
1) $t_{\frac {\alpha}{2}}$ = 2.831
2) $E = t_{\frac {\alpha}{2}} \times \frac{s}{\sqrt {n}}$ = $2.831 \times \frac {0.7120}{\sqrt {22}}$= 0.4297
3) $\bar {x} \pm E$ = 6.443 ± 0.4297
The 99% confidence interval about the mean pH is (6.013, 6.873).
We are 99% confident that this interval contains the mean lake pH for this lake population.
Now construct a 90% confidence interval about the mean pH for these lakes.
1) $t_{\frac {\alpha}{2}}$ = 1.721
2) $E = t_{\frac {\alpha}{2}} \times \frac{s}{\sqrt {n}}$ = $1.721 \times \frac {0.7120}{\sqrt {22}}$ = 0.2612
3) $\bar {x} \pm E$ = 6.443 ± 0.2612
The 90% confidence interval about the mean pH is (6.182, 6.704).
We are 90% confident that this interval contains the mean lake pH for this lake population.
Notice how the width of the interval decreased as the level of confidence decreased from 99 to 90%.
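As before, the arithmetic can be verified directly from the formula. The R sketch below is an illustration only; qt() supplies the critical values, and with the raw pH measurements the t.test() function would report the same intervals.

```r
xbar <- 6.4438; s <- 0.7120; n <- 22

xbar + c(-1, 1) * qt(0.995, df = n - 1) * s / sqrt(n)  # 99% interval, about (6.01, 6.87)
xbar + c(-1, 1) * qt(0.95,  df = n - 1) * s / sqrt(n)  # 90% interval, about (6.18, 6.70)
```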
Construct a 90% confidence interval about the mean lake pH using Excel and Minitab.
Software Solutions
Minitab
For Minitab, enter the data in the spreadsheet and select Basic statistics and 1-sample t-test.
One-Sample T: pH
Variable N Mean StDev SE Mean 90% CI
pH   22   6.443   0.712   0.152   (6.182, 6.704)
Excel
For Excel, enter the data in the spreadsheet and select descriptive statistics. Check Summary Statistics and select the level and confidence.
Mean                       6.442909
Standard Error             0.151801
Median                     6.4925
Mode                       #N/A
Standard Deviation         0.712008
Sample Variance            0.506956
Kurtosis                   -0.5007
Skewness                   -0.60591
Range                      2.338
Minimum                    5.113
Maximum                    7.451
Sum                        141.744
Count                      22
Confidence Level(90.0%)    0.26121
Excel gives you the sample mean in the first line (6.442909) and the margin of error in the last line (0.26121). You must complete the computation yourself to obtain the interval (6.442909±0.26121).
Confidence Intervals about the Population Proportion (p)
Frequently, we are interested in estimating the population proportion (p), instead of the population mean (µ). For example, you may need to estimate the proportion of trees infected with beech bark disease, or the proportion of people who support “green” products. The parameter p can be estimated in the same ways as we estimated µ, the population mean.
The Sample Proportion
• The sample proportion is the best point estimate for the true population proportion.
• Sample proportion $\hat {p} = \frac {x}{n}$, where x is the number of elements in the sample with the characteristic you are interested in, and n is the sample size.
The Assumption of Normality when Estimating Proportions
• The assumption of a normally distributed population is still important, even though the parameter has changed.
• Normality can be verified if: $n \times \hat {p} \times (1- \hat {p}) \ge 10$
Constructing a Confidence Interval about the Population Proportion
Constructing a confidence interval about the proportion follows the same three steps we have used in previous examples.
1. $Z_{\frac {\alpha}{2}}$(critical value from the standard normal table)
2. $E = Z_{\frac {\alpha}{2}} \times \sqrt {\frac{\hat {p}(1-\hat {p})}{n}}$ (margin of error)
3. $\hat {p} \pm E$(point estimate ± margin of error)
Example $6$:
A botanist has produced a new variety of hybrid soybean that is better able to withstand drought. She wants to construct a 95% confidence interval about the germination rate (percent germination). She randomly selected 500 seeds and found that 421 have germinated.
Solution
First, compute the point estimate
$\hat {p} = \frac {x}{n} =\frac {421}{500}=0.842$
Check normality:
$n \times \hat {p} \times (1-\hat {p}) = 500 \times 0.842 \times (1-0.842) = 66.5 \ge 10 \nonumber$
You can assume a normal distribution.
Now construct the confidence interval:
1) $Z_{\frac {\alpha}{2}}$ = 1.96
2) $E = Z_{\frac {\alpha}{2}} \times \sqrt {\frac{\hat {p}(1-\hat {p})}{n}}$ =$1.96 \times \sqrt {\frac {0.842(1-0.842)}{500}}$ = 0.032
3) $\hat {p} \pm E =0.842 \pm 0.032$
The 95% confidence interval for the germination rate is (81.0%, 87.4%).
We can be 95% confident that this interval contains the true germination rate for this population.
Software Solutions
Minitab
You can use Minitab to compute the confidence interval. Select STAT>Basic stats>1-proportion. Select summarized data and enter the number of events (421) and the number of trials (500). Click Options and select the correct confidence level. Check “test and interval based on normal distribution” if the assumption of normality has been verified.
Test and CI for One Proportion
Sample X N Sample p 95% CI
1 421 500 0.842000 (0.810030, 0.873970)
Using the normal approximation.
Excel
Excel does not compute confidence intervals for estimating the population proportion.
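Since Excel does not provide this interval directly, a short Python sketch (assuming SciPy is installed; variable names are illustrative only) can serve as a check on the hand calculation for the germination example.

```python
from math import sqrt
from scipy.stats import norm

x, n, conf = 421, 500, 0.95
p_hat = x / n                                   # point estimate, 0.842

assert n * p_hat * (1 - p_hat) >= 10            # normality check from the text

z_crit = norm.ppf(1 - (1 - conf) / 2)           # 1.96 for 95% confidence
E = z_crit * sqrt(p_hat * (1 - p_hat) / n)      # margin of error, about 0.032
print(f"95% CI: ({p_hat - E:.3f}, {p_hat + E:.3f})")   # about (0.810, 0.874)
```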
Confidence Interval Summary
Which method do I use?
The first question to ask yourself is: Which parameter are you trying to estimate? If it is the mean (µ), then ask yourself: Is the population standard deviation (σ) known? If yes, then follow the next 3 steps:
Confidence Interval about the Population Mean (µ) when σ is Known
1. $Z_{\frac {\alpha}{2}}$ critical value (from the standard normal table)
2. $E=Z_{\frac {\alpha}{2}} \times \frac {\sigma}{\sqrt {n}}$
3. $\bar {x} \pm E$
If no, follow these 3 steps:
Confidence Interval about the Population Mean (µ) when σ is Unknown
1. $t_{\frac {\alpha}{2}}$ critical value with n-1 df from the student t-distribution
2. $E=t_{\frac {\alpha}{2}} \times \frac {s}{\sqrt {n}}$
3. $\bar {x} \pm E$
If you want to construct a confidence interval about the population proportion, follow these 3 steps:
Confidence Interval about the Proportion
1. $Z_{\frac {\alpha}{2}}$ critical value from the standard normal table
2. $E = Z_{\frac {\alpha}{2}} \times \sqrt {\frac{\hat {p}(1-\hat {p})}{n}}$
3. $\hat {p} \pm E$
Remember that the assumption of normality must be verified. | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/02%3A_Sampling_Distributions_and_Confidence_Intervals/2.02%3A_Confidence_Intervals.txt |
The previous two chapters introduced methods for organizing and summarizing sample data, and using sample statistics to estimate population parameters. This chapter introduces the next major topic of inferential statistics: hypothesis testing.
Note
A hypothesis is a statement or claim about a property of a population.
The Fundamentals of Hypothesis Testing
When conducting scientific research, typically there is some known information, perhaps from some past work or from a long accepted idea. We want to test whether this claim is believable. This is the basic idea behind a hypothesis test:
• State what we think is true.
• Quantify how confident we are about our claim.
• Use sample statistics to make inferences about population parameters.
For example, past research tells us that the average life span for a hummingbird is about four years. You have been studying the hummingbirds in the southeastern United States and find a sample mean lifespan of 4.8 years. Should you reject the known or accepted information in favor of your results? How confident are you in your estimate? At what point would you say that there is enough evidence to reject the known information and support your alternative claim? How far from the known mean of four years can the sample mean be before we reject the idea that the average lifespan of a hummingbird is four years?
Definition: hypothesis testing
Hypothesis testing is a procedure, based on sample evidence and probability, used to test claims regarding a characteristic of a population.
A hypothesis is a claim or statement about a characteristic of a population of interest to us. A hypothesis test is a way for us to use our sample statistics to test a specific claim.
Example $1$:
The population mean weight is known to be 157 lb. We want to test the claim that the mean weight has increased.
Example $2$:
Two years ago, the proportion of infected plants was 37%. We believe that a treatment has helped, and we want to test the claim that there has been a reduction in the proportion of infected plants.
Components of a Formal Hypothesis Test
The null hypothesis is a statement about the value of a population parameter, such as the population mean (µ) or the population proportion (p). It contains the condition of equality and is denoted as H0 (H-naught).
H0 : µ = 157 or H0 : p = 0.37
The alternative hypothesis is the claim to be tested, the opposite of the null hypothesis. It contains the value of the parameter that we consider plausible and is denoted as H1 .
H1 : µ > 157 or H1 : p ≠ 0.37
The test statistic is a value computed from the sample data that is used in making a decision about the rejection of the null hypothesis. The test statistic converts the sample mean ($\bar {x}$) or sample proportion ($\hat {p}$) to a Z- or t-score under the assumption that the null hypothesis is true. It is used to decide whether the difference between the sample statistic and the hypothesized claim is significant.
The p-value is the area under the curve to the left or right of the test statistic. It is compared to the level of significance (α).
The critical value is the value that defines the rejection zone (the test statistic values that would lead to rejection of the null hypothesis). It is defined by the level of significance.
The level of significance (α) is the probability that the test statistic will fall into the critical region when the null hypothesis is true. This level is set by the researcher.
The conclusion is the final decision of the hypothesis test. The conclusion must always be clearly stated, communicating the decision based on the components of the test. It is important to realize that we never prove or accept the null hypothesis. We are merely saying that the sample evidence is not strong enough to warrant the rejection of the null hypothesis. The conclusion is made up of two parts:
1) Reject or fail to reject the null hypothesis, and 2) there is or is not enough evidence to support the alternative claim.
Option 1) Reject the null hypothesis (H0). This means that you have enough statistical evidence to support the alternative claim (H1).
Option 2) Fail to reject the null hypothesis (H0). This means that you do NOT have enough evidence to support the alternative claim (H1).
Another way to think about hypothesis testing is to compare it to the US justice system. A defendant is innocent until proven guilty (Null hypothesis—innocent). The prosecuting attorney tries to prove that the defendant is guilty (Alternative hypothesis—guilty). There are two possible conclusions that the jury can reach. First, the defendant is guilty (Reject the null hypothesis). Second, the defendant is not guilty (Fail to reject the null hypothesis). This is NOT the same thing as saying the defendant is innocent! In the first case, the prosecutor had enough evidence to reject the null hypothesis (innocent) and support the alternative claim (guilty). In the second case, the prosecutor did NOT have enough evidence to reject the null hypothesis (innocent) and support the alternative claim of guilty.
The Null and Alternative Hypotheses
There are three different pairs of null and alternative hypotheses:
Table $\PageIndex{1}$: The three pairs of null and alternative hypotheses.
Two-sided   Left-sided   Right-sided
$\mathrm{H}_{0}: \mu = c$   $\mathrm{H}_{0}: \mu = c$   $\mathrm{H}_{0}: \mu = c$
$\mathrm{H}_{1}: \mu \neq c$   $\mathrm{H}_{1}: \mu < c$   $\mathrm{H}_{1}: \mu > c$
where c is some known value.
A Two-sided Test
This tests whether the population parameter is equal to, versus not equal to, some specific value.
Ho: μ = 12 vs. H1: μ ≠ 12
The critical region is divided equally into the two tails and the critical values are ± values that define the rejection zones.
Example $3$:
A forester studying diameter growth of red pine believes that the mean diameter growth will be different if a fertilization treatment is applied to the stand.
• Ho: μ = 1.2 in./ year
• H1: μ ≠ 1.2 in./ year
This is a two-sided question, as the forester doesn’t state whether population mean diameter growth will increase or decrease.
A Right-sided Test
This tests whether the population parameter is equal to, versus greater than, some specific value.
Ho: μ = 12 vs. H1: μ > 12
The critical region is in the right tail and the critical value is a positive value that defines the rejection zone.
Example $4$:
A biologist believes that there has been an increase in the mean number of lakes infected with milfoil, an invasive species, since the last study five years ago.
• Ho: μ = 15 lakes
• H1: μ >15 lakes
This is a right-sided question, as the biologist believes that there has been an increase in population mean number of infected lakes.
A Left-sided Test
This tests whether the population parameter is equal to, versus less than, some specific value.
Ho: μ = 12 vs. H1: μ < 12
The critical region is in the left tail and the critical value is a negative value that defines the rejection zone.
Example $5$:
A scientist’s research indicates that there has been a change in the proportion of people who support certain environmental policies. He wants to test the claim that there has been a reduction in the proportion of people who support these policies.
• Ho: p = 0.57
• H1: p < 0.57
This is a left-sided question, as the scientist believes that there has been a reduction in the true population proportion.
Statistically Significant
When the observed results (the sample statistics) are unlikely (a low probability) under the assumption that the null hypothesis is true, we say that the result is statistically significant, and we reject the null hypothesis. This result depends on the level of significance, the sample statistic, sample size, and whether it is a one- or two-sided alternative hypothesis.
Types of Errors
When testing, we arrive at a conclusion of rejecting the null hypothesis or failing to reject the null hypothesis. Such conclusions are sometimes correct and sometimes incorrect (even when we have followed all the correct procedures). We use incomplete sample data to reach a conclusion and there is always the possibility of reaching the wrong conclusion. There are four possible conclusions to reach from hypothesis testing. Of the four possible outcomes, two are correct and two are NOT correct.
Table $2$. Possible outcomes from a hypothesis test.
                     H0 is True            H1 is True
Do Not Reject H0     Correct Conclusion    Type II Error
Reject H0            Type I Error          Correct Conclusion
A Type I error is when we reject the null hypothesis when it is true. The symbol α (alpha) is used to represent Type I errors. This is the same alpha we use as the level of significance. By setting alpha as low as reasonably possible, we try to control the Type I error through the level of significance.
A Type II error is when we fail to reject the null hypothesis when it is false. The symbol β (beta) is used to represent Type II errors.
In general, Type I errors are considered more serious. One step in the hypothesis test procedure involves selecting the significance level (α), which is the probability of rejecting the null hypothesis when it is correct. So the researcher can select the level of significance that minimizes Type I errors. However, there is a mathematical relationship between α, β, and n (sample size).
• As α increases, β decreases
• As α decreases, β increases
• As sample size increases (n), both α and β decrease
The natural inclination is to select the smallest possible value for α, thinking to minimize the possibility of causing a Type I error. Unfortunately, this forces an increase in Type II errors. By making the rejection zone too small, you may fail to reject the null hypothesis, when, in fact, it is false. Typically, we select the best sample size and level of significance, automatically setting β.
Power of the Test
A Type II error (β) is the probability of failing to reject a false null hypothesis. It follows that 1-β is the probability of rejecting a false null hypothesis. This probability is identified as the power of the test, and is often used to gauge the test’s effectiveness in recognizing that a null hypothesis is false.
Definition: power of the test
The probability that a significance test at a fixed level α will reject H0 when a particular alternative value of the parameter is true is called the power of the test.
Power is also directly linked to sample size. For example, suppose the null hypothesis is that the mean fish weight is 8.7 lb. Given sample data, a level of significance of 5%, and an alternative weight of 9.2 lb., we can compute the power of the test to reject μ = 8.7 lb. If we have a small sample size, the power will be low. However, increasing the sample size will increase the power of the test. Increasing the level of significance will also increase power. A 5% test of significance will have a greater chance of rejecting the null hypothesis than a 1% test because the strength of evidence required for the rejection is less. Decreasing the standard deviation has the same effect as increasing the sample size: there is more information about μ. | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/03%3A_Hypothesis_Testing/3.01%3A_The_Fundamentals_of_Hypothesis_Testing.txt |
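To make the relationship between power, sample size, and the level of significance concrete, the sketch below computes the power of a right-sided Z-test for the fish-weight example. The population standard deviation and sample size are not given in the text, so the values sigma = 1.2 lb and n = 40 are assumptions chosen purely for illustration.

```python
from math import sqrt
from scipy.stats import norm

mu0, mu_alt = 8.7, 9.2           # null and alternative mean fish weight (lb), from the text
sigma, n, alpha = 1.2, 40, 0.05  # sigma and n are assumed values used only to illustrate the idea

se = sigma / sqrt(n)
cutoff = mu0 + norm.ppf(1 - alpha) * se   # smallest sample mean that would reject H0
power = norm.sf((cutoff - mu_alt) / se)   # P(reject H0 | mu = mu_alt) = 1 - beta
print(f"power = {power:.3f}")
```

Re-running the sketch with a larger n, a larger α, or a smaller standard deviation increases the printed power, in line with the relationships described above.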
Hypothesis Test about the Population Mean (μ) when the Population Standard Deviation (σ) is Known
We are going to examine two equivalent ways to perform a hypothesis test: the classical approach and the p-value approach. The classical approach is based on standard deviations. This method compares the test statistic (Z-score) to a critical value (Z-score) from the standard normal table. If the test statistic falls in the rejection zone, you reject the null hypothesis. The p-value approach is based on area under the normal curve. This method compares the area associated with the test statistic to alpha (α), the level of significance (which is also area under the normal curve). If the p-value is less than alpha, you would reject the null hypothesis.
As a past student poetically said: If the p-value is a wee value, Reject Ho
Both methods must have:
• Data from a random sample.
• Verification of the assumption of normality.
• A null and alternative hypothesis.
• A criterion that determines if we reject or fail to reject the null hypothesis.
• A conclusion that answers the question.
There are four steps required for a hypothesis test:
1. State the null and alternative hypotheses.
2. State the level of significance and the critical value.
3. Compute the test statistic.
4. State a conclusion.
The Classical Method for Testing a Claim about the Population Mean (μ) when the Population Standard Deviation (σ) is Known
Example $1$: A Two-sided Test
A forester studying diameter growth of red pine believes that the mean diameter growth will be different from the known mean growth of 1.35 inches/year if a fertilization treatment is applied to the stand. He conducts his experiment, collects data from a sample of 32 plots, and gets a sample mean diameter growth of 1.6 in./year. The population standard deviation for this stand is known to be 0.46 in./year. Does he have enough evidence to support his claim?
Solution
Step 1) State the null and alternative hypotheses.
• Ho: μ = 1.35 in./year
• H1: μ ≠ 1.35 in./year
Step 2) State the level of significance and the critical value.
• We will choose a level of significance of 5% (α = 0.05).
• For a two-sided question, we need a two-sided critical value – Z α/2 and + Z α/2.
• The level of significance is divided by 2 (since we are only testing “not equal”). We must have two rejection zones that can deal with either a greater than or less than outcome (to the right (+) or to the left (-)).
• We need to find the Z-score associated with the area of 0.025. The two tail areas are each equal to α/2 = 0.05/2 = 0.025, or 2.5% of the area under the normal curve.
• Go into the body of values and find the negative Z-score associated with the area 0.025.
• The negative critical value is -1.96. Since the curve is symmetric, we know that the positive critical value is 1.96.
• ±1.96 are the critical values. These values set up the rejection zones. If the test statistic falls within these rejection zones, we reject the null hypothesis.
Step 3) Compute the test statistic.
• The test statistic is the number of standard deviations the sample mean is from the known mean. It is also a Z-score, just like the critical value.
• $z = \frac {\bar {x} -\mu}{\frac {\sigma}{\sqrt {n}}}$
• For this problem, the test statistic is
$z = \frac {1.6-1.35}{\frac {0.46}{\sqrt {32}}} =3.07 \nonumber$
Step 4) State a conclusion.
• Compare the test statistic to the critical value. If the test statistic falls into the rejection zones, reject the null hypothesis. In other words, if the test statistic is greater than +1.96 or less than -1.96, reject the null hypothesis.
In this problem, the test statistic falls in the rejection zone. The test statistic of 3.07 is greater than the critical value of 1.96. We will reject the null hypothesis. We have enough evidence to support the claim that the mean diameter growth is different from (not equal to) 1.35 in./year.
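For readers who prefer to script the classical method, a minimal Python sketch of this two-sided Z-test (assuming SciPy; variable names are illustrative only) is:

```python
from math import sqrt
from scipy.stats import norm

mu0, xbar, sigma, n, alpha = 1.35, 1.6, 0.46, 32, 0.05

z = (xbar - mu0) / (sigma / sqrt(n))   # test statistic, about 3.07
z_crit = norm.ppf(1 - alpha / 2)       # two-sided critical value, about 1.96
print(f"z = {z:.2f}, critical value = ±{z_crit:.2f}")
print("reject H0" if abs(z) > z_crit else "fail to reject H0")
```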
Example $2$: A Right-sided Test
A researcher believes that there has been an increase in the average farm size in his state since the last study five years ago. The previous study reported a mean size of 450 acres with a population standard deviation (σ) of 167 acres. He samples 45 farms and gets a sample mean of 485.8 acres. Is there enough information to support his claim?
Solution
Step 1) State the null and alternative hypotheses.
• Ho: μ = 450 acres
• H1: μ >450 acres
Step 2) State the level of significance and the critical value.
• We will choose a level of significance of 5% (α = 0.05).
• For a one-sided question, we need a one-sided positive critical value Zα.
• The level of significance is all in the right side (the rejection zone is just on the right side).
• We need to find the Z-score associated with the 5% area in the right tail.
• Go into the body of values in the standard normal table and find the Z-score that separates the lower 95% from the upper 5%.
• The critical value is 1.645. This value sets up the rejection zone.
Step 3) Compute the test statistic.
• The test statistic is the number of standard deviations the sample mean is from the known mean. It is also a Z-score, just like the critical value.
• $z = \frac {\bar {x} -\mu}{\frac {\sigma}{\sqrt {n}}}$
• For this problem, the test statistic is
$z = \frac {485.8-450}{\frac {167}{\sqrt {45}}} =1.44 \nonumber$
Step 4) State a conclusion.
• Compare the test statistic to the critical value.
• The test statistic does not fall in the rejection zone. It is less than the critical value.
We fail to reject the null hypothesis. We do not have enough evidence to support the claim that the mean farm size has increased from 450 acres.
Example $3$: A Left-sided Test
A researcher believes that there has been a reduction in the mean number of hours that college students spend preparing for final exams. A national study stated that students at a 4-year college spend an average of 23 hours preparing for 5 final exams each semester with a population standard deviation of 7.3 hours. The researcher sampled 227 students and found a sample mean study time of 19.6 hours. Does this indicate that the average study time for final exams has decreased? Use a 1% level of significance to test this claim.
Solution
Step 1) State the null and alternative hypotheses.
• Ho: μ = 23 hours
• H1: μ < 23 hours
Step 2) State the level of significance and the critical value.
• This is a left-sided test so alpha (0.01) is all in the left tail.
• Go into the body of values in the standard normal table and find the Z-score that defines the lower 1% of the area.
• The critical value is -2.33. This value sets up the rejection zone.
Step 3) Compute the test statistic.
• The test statistic is the number of standard deviations the sample mean is from the known mean. It is also a Z-score, just like the critical value.
• $z = \frac {\bar {x} - \mu}{\frac {\sigma} {\sqrt {n}}}$
• For this problem, the test statistic is
$z= \frac {19.6-23}{\frac {7.3}{\sqrt {227}}} = -7.02 \nonumber$
Step 4) State a conclusion.
• Compare the test statistic to the critical value.
• The test statistic falls in the rejection zone. The test statistic of -7.02 is less than the critical value of -2.33.
We reject the null hypothesis. We have sufficient evidence to support the claim that the mean final exam study time has decreased below 23 hours.
Testing a Hypothesis using P-values
The p-value is the probability of observing our sample mean given that the null hypothesis is true. It is the area under the curve to the left or right of the test statistic. If the probability of observing such a sample mean is very small (less than the level of significance), we would reject the null hypothesis. Computations for the p-value depend on whether it is a one- or two-sided test.
Steps for a hypothesis test using p-values:
• State the null and alternative hypotheses.
• State the level of significance.
• Compute the test statistic and find the area associated with it (this is the p-value).
• Compare the p-value to alpha (α) and state a conclusion.
Instead of comparing Z-score test statistic to Z-score critical value, as in the classical method, we compare area of the test statistic to area of the level of significance.
Note:The Decision Rule
If the p-value is less than alpha, we reject the null hypothesis.
Computing P-values
If it is a two-sided test (the alternative claim is ≠), the p-value is equal to two times the probability of the absolute value of the test statistic. If the test is a left-sided test (the alternative claim is “<”), then the p-value is equal to the area to the left of the test statistic. If the test is a right-sided test (the alternative claim is “>”), then the p-value is equal to the area to the right of the test statistic.
Let’s look at Example 1 again.
A forester studying diameter growth of red pine believes that the mean diameter growth will be different from the known mean growth of 1.35 in./year if a fertilization treatment is applied to the stand. He conducts his experiment, collects data from a sample of 32 plots, and gets a sample mean diameter growth of 1.6 in./year. The population standard deviation for this stand is known to be 0.46 in./year. Does he have enough evidence to support his claim?
Step 1) State the null and alternative hypotheses.
• Ho: μ = 1.35 in./year
• H1: μ ≠ 1.35 in./year
Step 2) State the level of significance.
• We will choose a level of significance of 5% (α = 0.05).
Step 3) Compute the test statistic.
• For this problem, the test statistic is:
$z=\frac{1.6-1.35}{\frac{0.46}{\sqrt {32}}}=3.07 \nonumber$
The p-value is two times the area of the absolute value of the test statistic (because the alternative claim is “not equal”).
• Look up the area for the Z-score 3.07 in the standard normal table. The area (probability) is equal to 1 – 0.9989 = 0.0011.
• Multiply this by 2 to get the p-value = 2 * 0.0011 = 0.0022.
Step 4) Compare the p-value to alpha and state a conclusion.
• Use the Decision Rule (if the p-value is less than α, reject H0).
• In this problem, the p-value (0.0022) is less than alpha (0.05).
• We reject the H0. We have enough evidence to support the claim that the mean diameter growth is different from 1.35 inches/year.
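The same p-value can be obtained without a printed table. A minimal sketch (assuming SciPy) for this two-sided case is:

```python
from math import sqrt
from scipy.stats import norm

z = (1.6 - 1.35) / (0.46 / sqrt(32))   # test statistic from the example above, about 3.07
p_value = 2 * norm.sf(abs(z))          # two-sided: twice the upper-tail area
print(f"p-value = {p_value:.4f}")      # about 0.002; the table-based value above rounds to 0.0022
```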
Let’s look at Example 2 again.
A researcher believes that there has been an increase in the average farm size in his state since the last study five years ago. The previous study reported a mean size of 450 acres with a population standard deviation (σ) of 167 acres. He samples 45 farms and gets a sample mean of 485.8 acres. Is there enough information to support his claim?
Step 1) State the null and alternative hypotheses.
• Ho: μ = 450 acres
• H1: μ >450 acres
Step 2) State the level of significance.
• We will choose a level of significance of 5% (α = 0.05).
Step 3) Compute the test statistic.
• For this problem, the test statistic is
$z= \frac {485.8-450}{\frac {167}{\sqrt {45}}}=1.44 \nonumber$
The p-value is the area under the curve to the right of the Z-score 1.44.
• This is equal to 1 – 0.9251 = 0.0749.
• The p-value is 0.0749.
Step 4) Compare the p-value to alpha and state a conclusion.
• Use the Decision Rule.
• In this problem, the p-value (0.0749) is greater than alpha (0.05), so we Fail to Reject the H0.
• The area of the test statistic is greater than the area of alpha (α).
We fail to reject the null hypothesis. We do not have enough evidence to support the claim that the mean farm size has increased.
Let’s look at Example 3 again.
A researcher believes that there has been a reduction in the mean number of hours that college students spend preparing for final exams. A national study stated that students at a 4-year college spend an average of 23 hours preparing for 5 final exams each semester with a population standard deviation of 7.3 hours. The researcher sampled 227 students and found a sample mean study time of 19.6 hours. Does this indicate that the average study time for final exams has decreased? Use a 1% level of significance to test this claim.
Step 1) State the null and alternative hypotheses.
• H0: μ = 23 hours
• H1: μ < 23 hours
Step 2) State the level of significance.
• This is a left-sided test so alpha (0.01) is all in the left tail.
Step 3) Compute the test statistic.
• For this problem, the test statistic is
$z=\frac {19.6-23}{\frac {7.3}{\sqrt {227}}}=-7.02 \nonumber$
The p-value is the area under the curve to the left of the test statistic, -7.02. The Z-score of -7.02 is not on the standard normal table. The smallest probability on the table is 0.0002. We know that the area for the Z-score -7.02 is smaller than this area (probability). Therefore, the p-value is <0.0002.
Step 4) Compare the p-value to alpha and state a conclusion.
• Use the Decision Rule.
• In this problem, the p-value (p<0.0002) is less than alpha (0.01), so we Reject the H0.
• The area of the test statistic is much less than the area of alpha (α).
We reject the null hypothesis. We have enough evidence to support the claim that the mean final exam study time has decreased below 23 hours.
Both the classical method and p-value method for testing a hypothesis will arrive at the same conclusion. In the classical method, the critical Z-score is the number on the z-axis that defines the level of significance (α). The test statistic converts the sample mean to units of standard deviation (a Z-score). If the test statistic falls in the rejection zone defined by the critical value, we will reject the null hypothesis. In this approach, two Z-scores, which are numbers on the z-axis, are compared. In the p-value approach, the p-value is the area associated with the test statistic. In this method, we compare α (which is also area under the curve) to the p-value. If the p-value is less than α, we reject the null hypothesis. The p-value is the probability of observing such a sample mean when the null hypothesis is true. If the probability is too small (less than the level of significance), then we believe we have enough statistical evidence to reject the null hypothesis and support the alternative claim.
Software Solutions
Minitab
(referring to Ex. 3)
One-Sample Z
Test of mu = 23 vs. < 23
The assumed standard deviation = 7.3
99% Upper
N Mean SE Mean Bound Z P
227 19.600 0.485 20.727 -7.02 0.000
Excel
Excel does not offer 1-sample hypothesis testing. | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/03%3A_Hypothesis_Testing/3.02%3A_Hypothesis_Test_about_the_Population_Mean_when_the_Population_Standard_Deviation_is_Known.txt |
Hypothesis Test about the Population Mean (μ) when the Population Standard Deviation (σ) is Unknown
Frequently, the population standard deviation (σ) is not known. We can estimate the population standard deviation (σ) with the sample standard deviation (s). However, the test statistic will no longer follow the standard normal distribution. We must rely on the student’s t-distribution with n-1 degrees of freedom. Because we use the sample standard deviation (s), the test statistic will change from a Z-score to a t-score.
$z=\frac {\bar {x}-\mu}{\frac {\sigma}{\sqrt {n}}} \longrightarrow t = \frac {\bar {x} - \mu}{\frac {s}{\sqrt {n}}}$
Steps for a hypothesis test are the same that we covered in Section 2.
• State the null and alternative hypotheses.
• State the level of significance and the critical value.
• Compute the test statistic.
• State a conclusion.
Just as with the hypothesis test from the previous section, the data for this test must be from a random sample and requires either that the population from which the sample was drawn be normal or that the sample size is sufficiently large (n≥30). A t-test is robust, so small departures from normality will not adversely affect the results of the test. That being said, if the sample size is smaller than 30, it is always good to verify the assumption of normality through a normal probability plot.
We will still have the same three pairs of null and alternative hypotheses and we can still use either the classical approach or the p-value approach.
Table $\PageIndex{1}$: The three pairs of null and alternative hypotheses.
Two-sided   Left-sided   Right-sided
$\mathrm{H}_{0}: \mu = c$   $\mathrm{H}_{0}: \mu = c$   $\mathrm{H}_{0}: \mu = c$
$\mathrm{H}_{1}: \mu \neq c$   $\mathrm{H}_{1}: \mu < c$   $\mathrm{H}_{1}: \mu > c$
Selecting the correct critical value from the student’s t-distribution table depends on three factors: the type of test (one-sided or two-sided alternative hypothesis), the sample size, and the level of significance.
For a two-sided test (“not equal” alternative hypothesis), the critical value (tα/2), is determined by alpha (α), the level of significance, divided by two, to deal with the possibility that the result could be less than OR greater than the known value.
• If your level of significance was 0.05, you would use the 0.025 column to find the correct critical value (0.05/2 = 0.025).
• If your level of significance was 0.01, you would use the 0.005 column to find the correct critical value (0.01/2 = 0.005).
For a one-sided test (“a less than” or “greater than” alternative hypothesis), the critical value (tα) , is determined by alpha (α), the level of significance, being all in the one side.
• If your level of significance was 0.05, you would use the 0.05 column to find the correct critical value for either a left- or right-sided question. If you are asking a “less than” (left-sided) question, your critical value will be negative. If you are asking a “greater than” (right-sided) question, your critical value will be positive.
Example $1$
Find the critical value you would use to test the claim that μ ≠ 112 with a sample size of 18 and a 5% level of significance.
Solution
In this case, the critical value ($t_{α/2}$) would be 2.110. This is a two-sided question (≠) so you would divide alpha by 2 (0.05/2 = 0.025) and go down the 0.025 column to 17 degrees of freedom.
Example $2$
What would the critical value be if you wanted to test that μ < 112 for the same data?
Solution
In this case, the critical value would be -1.740. This is a one-sided question (<) so alpha would be divided by 1 (0.05/1 = 0.05). You would go down the 0.05 column with 17 degrees of freedom to get 1.740; because the rejection zone is in the left tail, the critical value is negative.
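Critical values can also be pulled from software rather than the printed t-table. A short sketch (assuming SciPy) that reproduces the two examples above:

```python
from scipy.stats import t

df, alpha = 17, 0.05                   # n = 18 observations gives 17 degrees of freedom

two_sided = t.ppf(1 - alpha / 2, df)   # about 2.110, as in Example 1
left_sided = t.ppf(alpha, df)          # about -1.740, as in Example 2 (left tail, so negative)
print(f"two-sided: ±{two_sided:.3f}, left-sided: {left_sided:.3f}")
```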
Example $3$: A Two-sided Test
In 2005, the mean pH level of rain in a county in northern New York was 5.41. A biologist believes that the rain acidity has changed. He takes a random sample of 11 rain dates in 2010 and obtains the following data. Use a 1% level of significance to test his claim.
4.70, 5.63, 5.02, 5.78, 4.99, 5.91, 5.76, 5.54, 5.25, 5.18, 5.01
The sample size is small and we don’t know anything about the distribution of the population, so we examine a normal probability plot. The distribution looks normal so we will continue with our test.
The sample mean is 5.343 with a sample standard deviation of 0.397.
Solution
Step 1) State the null and alternative hypotheses.
• Ho: μ = 5.41
• H1: μ ≠ 5.41
Step 2) State the level of significance and the critical value.
• This is a two-sided question so alpha is divided by two.
• t α/2 is found by going down the 0.005 column with 10 degrees of freedom (n – 1 = 11 – 1 = 10).
• t α/2 = ±3.169.
Step 3) Compute the test statistic.
• The test statistic is a t-score.
$t=\frac {\bar {x}-\mu}{\frac {s}{\sqrt {n}}}$
• For this problem, the test statistic is
$t=\frac {5.343-5.41}{\frac {0.397}{\sqrt {11}}} = -0.560 \nonumber$
Step 4) State a conclusion.
• Compare the test statistic to the critical value.
• The test statistic does not fall in the rejection zone.
We will fail to reject the null hypothesis. We do not have enough evidence to support the claim that the mean rain pH has changed.
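Because the raw pH measurements are listed above, this test is also easy to verify with a one-sample t-test in software. A minimal sketch (assuming SciPy; the two-sided test is the default):

```python
from scipy import stats

ph = [4.70, 5.63, 5.02, 5.78, 4.99, 5.91, 5.76, 5.54, 5.25, 5.18, 5.01]

t_stat, p_value = stats.ttest_1samp(ph, popmean=5.41)   # two-sided one-sample t-test
print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")      # t is about -0.56
print("reject H0" if p_value < 0.01 else "fail to reject H0")   # at the 1% level
```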
Example $4$: A One-sided Test
Cadmium, a heavy metal, is toxic to animals. Mushrooms, however, are able to absorb and accumulate cadmium at high concentrations. The government has set safety limits for cadmium in dry vegetables at 0.5 ppm. Biologists believe that the mean level of cadmium in mushrooms growing near strip mines is greater than the recommended limit of 0.5 ppm, negatively impacting the animals that live in this ecosystem. A random sample of 51 mushrooms gave a sample mean of 0.59 ppm with a sample standard deviation of 0.29 ppm. Use a 5% level of significance to test the claim that the mean cadmium level is greater than the acceptable limit of 0.5 ppm.
The sample size is greater than 30 so we are assured of a normal distribution of the means.
Solution
Step 1) State the null and alternative hypotheses.
• Ho: μ = 0.5 ppm
• H1: μ > 0.5 ppm
Step 2) State the level of significance and the critical value.
• This is a right-sided question so alpha is all in the right tail.
• t α is found by going down the 0.05 column with 50 degrees of freedom.
• t α = 1.676
Step 3) Compute the test statistic.
• The test statistic is a t-score.
$t=\frac {\bar {x}-\mu}{\frac {s}{\sqrt {n}}}$
• For this problem, the test statistic is
$t=\frac {0.59-0.50}{\frac {0.29}{\sqrt {51}}}=2.216 \nonumber$
Step 4) State a Conclusion.
• Compare the test statistic to the critical value.
The test statistic falls in the rejection zone. We will reject the null hypothesis. We have enough evidence to support the claim that the mean cadmium level is greater than the acceptable safe limit.
BUT, what happens if the significance level changes to 1%?
The critical value is now found by going down the 0.01 column with 50 degrees of freedom. The critical value is 2.403. The test statistic is now LESS THAN the critical value. The test statistic does not fall in the rejection zone. The conclusion will change. We do NOT have enough evidence to support the claim that the mean cadmium level is greater than the acceptable safe limit of 0.5 ppm.
Note
The level of significance is the probability that you, as the researcher, set to decide if there is enough statistical evidence to support the alternative claim. It should be set before the experiment begins.
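The effect of the chosen level of significance on the cadmium example can be seen directly in a short sketch (assuming SciPy), which compares the test statistic with the critical value at both 5% and 1%:

```python
from scipy.stats import t

t_stat, df = 2.216, 50                      # test statistic and degrees of freedom from above

for alpha in (0.05, 0.01):
    t_crit = t.ppf(1 - alpha, df)           # right-sided critical value (1.676, then 2.403)
    decision = "reject H0" if t_stat > t_crit else "fail to reject H0"
    print(f"alpha = {alpha}: critical value = {t_crit:.3f}, {decision}")
```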
P-value Approach
We can also use the p-value approach for a hypothesis test about the mean when the population standard deviation (σ) is unknown. However, when using a student’s t-table, we can only estimate the range of the p-value, not a specific value as when using the standard normal table. The student’s t-table has area (probability) across the top row in the table, with t-scores in the body of the table.
• To find the p-value (the area associated with the test statistic), you would go to the row with the number of degrees of freedom.
• Go across that row until you find the two values that your test statistic is between, then go up those columns to find the estimated range for the p-value.
Example $5$
Estimating P-value from a Student’s T-table
Table $\PageIndex{2}$. Portion of the student’s t-table: t-distribution, area in right tail.
df    .05      .025      .02       .01       .005
1     6.314    12.706    15.894    31.821    63.657
2     2.920    4.303     4.849     6.965     9.925
3     2.353    3.182     3.482     4.541     5.841
4     2.132    2.776     2.999     3.747     4.604
5     2.015    2.571     2.757     3.365     4.032
Solution
If your test statistic is 3.789 with 3 degrees of freedom, you would go across the 3 df row. The value 3.789 falls between the values 3.482 and 4.541 in that row. Therefore, the p-value is between 0.02 and 0.01. The p-value will be greater than 0.01 but less than 0.02 (0.01<p<0.02).
Conclusion
If your level of significance is 5%, you would reject the null hypothesis as the p-value (0.01-0.02) is less than alpha (α) of 0.05.
If your level of significance is 1%, you would fail to reject the null hypothesis as the p-value (0.01-0.02) is greater than alpha (α) of 0.01.
Software packages typically output p-values. It is easy to use the Decision Rule to answer your research question by the p-value method.
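For example, the exact p-value for the test statistic in the exercise above can be computed directly (a sketch assuming SciPy):

```python
from scipy.stats import t

p_value = t.sf(3.789, df=3)          # exact right-tail area for t = 3.789 with 3 df
print(f"p-value = {p_value:.4f}")    # falls between 0.01 and 0.02, as the table indicated
```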
Software Solutions
Minitab
(referring to Ex. 4)
One-Sample T
Test of mu = 0.5 vs. > 0.5
                                      95% Lower
N     Mean     StDev    SE Mean   Bound     T      P
51    0.5900   0.2900   0.0406    0.5219    2.22   0.016
Additional example: www.youtube.com/watch?v=WwdSjO4VUsg.
Excel
Excel does not offer 1-sample hypothesis testing. | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/03%3A_Hypothesis_Testing/3.03%3A_Hypothesis_Test_about_the_Population_Mean_when_the_Population_Standard_Deviation_is_Unknown.txt |
Hypothesis Test for a Population Proportion (p)
Frequently, the parameter we are testing is the population proportion.
• We are studying the proportion of trees with cavities for wildlife habitat.
• We need to know if the proportion of people who support green building materials has changed.
• Has the proportion of wolves that died last year in Yellowstone increased from the year before?
Recall that the best point estimate of p, the population proportion, is given by
$\hat {p} = \dfrac {x}{n}$
where x is the number of individuals in the sample with the characteristic studied and n is the sample size. The sampling distribution of $\hat {p}$ is approximately normal with a mean $\mu_{\hat {p}} = p$ and a standard deviation
$\sigma_{\hat {p}} = \sqrt {\dfrac {p(1-p)}{n}}$
when np(1 – p)≥10. We can use both the classical approach and the p-value approach for testing.
The steps for a hypothesis test are the same that we covered in Section 2.
• State the null and alternative hypotheses.
• State the level of significance and the critical value.
• Compute the test statistic.
• State a conclusion.
The test statistic follows the standard normal distribution. Notice that the standard error (the denominator) uses p instead of $\hat {p}$, which was used when constructing a confidence interval about the population proportion. In a hypothesis test, the null hypothesis is assumed to be true, so the known proportion is used.
$z= \dfrac {\hat {p} - p} {\sqrt {\dfrac {p(1-p)}{n}}}$
• The critical value comes from the standard normal table, just as in Section 2. We will still use the same three pairs of null and alternative hypotheses as we used in the previous sections, but the parameter is now p instead of μ:
Table $\PageIndex{1}$: The three pairs of null and alternative hypotheses for a test about p.
Two-sided   Left-sided   Right-sided
$\mathrm{H}_{0}: p = c$   $\mathrm{H}_{0}: p = c$   $\mathrm{H}_{0}: p = c$
$\mathrm{H}_{1}: p \neq c$   $\mathrm{H}_{1}: p < c$   $\mathrm{H}_{1}: p > c$
• For a two-sided test, alpha will be divided by 2 giving a ± Zα/2 critical value.
• For a left-sided test, alpha will be all in the left tail giving a – Zα critical value.
• For a right-sided test, alpha will be all in the right tail giving a Zα critical value.
Example $1$
A botanist has produced a new variety of hybrid soy plant that is better able to withstand drought than other varieties. The botanist knows the seed germination for the parent plants is 75%, but does not know the seed germination for the new hybrid. He tests the claim that it is different from the parent plants. To test this claim, 450 seeds from the hybrid plant are tested and 321 have germinated. Use a 5% level of significance to test this claim that the germination rate is different from 75%.
Solution
Step 1) State the null and alternative hypotheses.
• Ho: p = 0.75
• H1: p ≠ 0.75
Step 2) State the level of significance and the critical value.
This is a two-sided question so alpha is divided by 2.
• Alpha is 0.05 so the critical values are ± Zα/2 = ± Z.025.
• Look on the negative side of the standard normal table, in the body of values for 0.025.
• The critical values are ± 1.96.
Step 3) Compute the test statistic.
• The test statistic is the number of standard deviations the sample mean is from the known mean. It is also a Z-score, just like the critical value.
$z= \dfrac {\hat {p} - p} {\sqrt {\dfrac {p(1-p)}{n}}}$
• For this problem, the test statistic is
$z=\dfrac {0.713-0.75}{\sqrt {\dfrac {0.75(1-0.75)}{450}}} = -1.81 \nonumber$
Step 4) State a conclusion.
• Compare the test statistic to the critical value.
The test statistic does not fall in the rejection zone. We fail to reject the null hypothesis. We do not have enough evidence to support the claim that the germination rate of the hybrid plant is different from the parent plants.
Let’s answer this question using the p-value approach. Remember, for a two-sided alternative hypothesis (“not equal”), the p-value is two times the area of the test statistic. The test statistic is -1.81 and we want to find the area to the left of -1.81 from the standard normal table.
• On the negative page, find the Z-score -1.81. Find the area associated with this Z-score.
• The area = 0.0351.
• This is a two-sided test so multiply the area times 2 to get the p-value = 0.0351 x 2 = 0.0702.
Now compare the p-value to alpha. The Decision Rule states that if the p-value is less than alpha, reject the H0. In this case, the p-value (0.0702) is greater than alpha (0.05) so we will fail to reject H0. We do not have enough evidence to support the claim that the germination rate of the hybrid plant is different from the parent plants.
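A minimal Python sketch of this one-proportion Z-test (assuming SciPy; it works from the raw counts, so small rounding differences from the worked example are expected):

```python
from math import sqrt
from scipy.stats import norm

x, n, p0 = 321, 450, 0.75
p_hat = x / n                                  # 0.7133...

z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)     # about -1.80 (the example rounds p_hat and gets -1.81)
p_value = 2 * norm.sf(abs(z))                  # two-sided p-value, about 0.07
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```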
Example $2$:
You are a biologist studying the wildlife habitat in the Monongahela National Forest. Cavities in older trees provide excellent habitat for a variety of birds and small mammals. A study five years ago stated that 32% of the trees in this forest had suitable cavities for this type of wildlife. You believe that the proportion of cavity trees has increased. You sample 196 trees and find that 79 trees have cavities. Does this evidence support your claim that there has been an increase in the proportion of cavity trees?
Use a 10% level of significance to test this claim.
Solution
Step 1) State the null and alternative hypotheses.
• Ho: p = 0.32
• H1: p > 0.32
Step 2) State the level of significance and the critical value.
This is a one-sided question so alpha is divided by 1.
• Alpha is 0.10 so the critical value is Zα = Z .10
• Look on the positive side of the standard normal table, in the body of values for 0.90.
• The critical value is 1.28.
Step 3) Compute the test statistic.
• The test statistic is the number of standard deviations the sample proportion is from the known proportion. It is also a Z-score, just like the critical value.
$z= \dfrac {\hat {p} - p} {\sqrt {\dfrac {p(1-p)}{n}}}$
• For this problem, the test statistic is:
$z= \frac {0.403-0.32}{\sqrt {\frac {0.32(1-0.32)}{196}}}=2.49 \nonumber$
Step 4) State a conclusion.
• Compare the test statistic to the critical value.
The test statistic is larger than the critical value (it falls in the rejection zone). We will reject the null hypothesis. We have enough evidence to support the claim that there has been an increase in the proportion of cavity trees.
Now use the p-value approach to answer the question. This is a right-sided question (“greater than”), so the p-value is equal to the area to the right of the test statistic. Go to the positive side of the standard normal table and find the area associated with the Z-score of 2.49. The area is 0.9936. Remember that this table is cumulative from the left. To find the area to the right of 2.49, we subtract from one.
p-value = (1 – 0.9936) = 0.0064
The p-value is less than the level of significance (0.10), so we reject the null hypothesis. We have enough evidence to support the claim that the proportion of cavity trees has increased.
Software Solutions
Minitab
(referring to Ex. 2)
Test and CI for One Proportion
Test of p = 0.32 vs. p > 0.32
90% Lower
Sample X N Sample p Bound Z-Value p-Value
1 79 196 0.403061 0.358160 2.49 0.006
Using the normal approximation.
Excel
Excel does not offer 1-sample hypothesis testing. | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/03%3A_Hypothesis_Testing/3.04%3A_Hypothesis_Test_for_a_Population_Proportion.txt |
Hypothesis Test about a Variance
When people think of statistical inference, they usually think of inferences involving population means or proportions. However, the particular population parameter needed to answer an experimenter’s practical questions varies from one situation to another, and sometimes a population’s variability is more important than its mean. Thus, product quality is often defined in terms of low variability.
Sample variance $s^2$ can be used for inferences concerning a population variance $\sigma^2$. For a random sample of n measurements drawn from a normal population with mean μ and variance $\sigma^2$, the value $s^2$ provides a point estimate for $\sigma^2$. In addition, the quantity $\frac {(n-1)s^2}{\sigma^2}$ follows a Chi-square($\chi^{2}$) distribution, with $df = n – 1$.
The properties of Chi-square ($\chi^{2}$) distribution are:
• Unlike Z and t distributions, the values in a chi-square distribution are all positive.
• The chi-square distribution is asymmetric, unlike the Z and t distributions.
• There are many chi-square distributions. We obtain a particular one by specifying the degrees of freedom $(df = n – 1)$ associated with the sample variances $s^2$.
One-sample ($\chi^{2}$) test for testing the hypotheses:
Null hypothesis: $H_0: \sigma^{2} = \sigma^{2}_{0}$(constant)
Alternative hypothesis:
• $H_a: \sigma^2 > \sigma_{0}^{2}$ (one-tailed), reject $H_0$ if the observed $\chi^2 > \chi_{U}^{2}$ (upper-tail value at α).
• $H_a: \sigma^2 < \sigma_{0}^{2}$ (one-tailed), reject $H_0$ if the observed $\chi^2 < \chi_{L}^{2}$ (lower-tail value at α).
• $H_a: \sigma^2 \neq \sigma_{0}^{2}$ (two-tailed), reject $H_0$ if the observed $\chi^2 > \chi_{U}^{2}$ or $\chi^{2} < \chi_{L}^{2}$ at α/2.
where the $\chi^2$ critical value in the rejection region is based on degrees of freedom $df = n – 1$ and a specified significance level of α.
Test statistic: $\chi^2 = \frac{(n-1)S^2}{\sigma_{0}^{2}}$
As with previous sections, if the test statistic falls in the rejection zone set by the critical value, you will reject the null hypothesis.
Example $1$:
A forester wants to control a dense understory of striped maple that is interfering with desirable hardwood regeneration using a mist blower to apply an herbicide treatment. She wants to make sure that treatment has a consistent application rate, in other words, low variability not exceeding 0.25 gal./acre (0.06 gal.2). She collects sample data (n = 11) on this type of mist blower and gets a sample variance of 0.064 gal.2 Using a 5% level of significance, test the claim that the variance is significantly greater than 0.06 gal.2
$H_0: \sigma^{2} = 0.06$
$H_1: \sigma^{2} >0.06$
The critical value is 18.307. Any test statistic greater than this value will cause you to reject the null hypothesis.
The test statistic is
$\chi^2 = \frac {(n-1)S^2}{\sigma_{0}^{2}}=\frac {(11-1)0.064}{0.06}=10.667 \nonumber$
We fail to reject the null hypothesis. The forester does NOT have enough evidence to support the claim that the variance is greater than 0.06 gal.2 You can also estimate the p-value using the same method as for the student t-table. Go across the row for degrees of freedom until you find the two values that your test statistic falls between. In this case going across the row 10, the two table values are 4.865 and 15.987. Now go up those two columns to the top row to estimate the p-value (0.1-0.9). The p-value is greater than 0.1 and less than 0.9. Both are greater than the level of significance (0.05) causing us to fail to reject the null hypothesis.
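A short sketch of this one-sample chi-square test for a variance (assuming SciPy; variable names are illustrative only):

```python
from scipy.stats import chi2

n, s2, sigma0_sq, alpha = 11, 0.064, 0.06, 0.05

chi_sq = (n - 1) * s2 / sigma0_sq          # test statistic, about 10.667
crit = chi2.ppf(1 - alpha, df=n - 1)       # upper-tail critical value, 18.307
p_value = chi2.sf(chi_sq, df=n - 1)        # right-tail area, about 0.38
print(f"chi-square = {chi_sq:.3f}, critical value = {crit:.3f}, p-value = {p_value:.3f}")
print("reject H0" if chi_sq > crit else "fail to reject H0")
```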
Software Solutions
Minitab
(referring to Ex. $1$)
Test and CI for One Variance
Method
Null hypothesis Sigma-squared = 0.06
Alternative hypothesis Sigma-squared > 0.06
The chi-square method is only for the normal distribution.
Tests
Test
Method Statistic DF P-Value
Chi-Square 10.67 10 0.384
Excel
Excel does not offer 1-sample $\chi^2$ testing.
3.06: Putting it all Together Using the Classical Method
Putting it all Together Using the Classical Method
To Test a Claim about μ when σ is Known
• Write the null and alternative hypotheses.
• State the level of significance and get the critical value from the standard normal table.
• Compute the test statistic.
$z=\frac {\bar {x}-\mu}{\frac {\sigma}{\sqrt {n}}}$
• Compare the test statistic to the critical value (Z-score) and write the conclusion.
To Test a Claim about μ When σ is Unknown
• Write the null and alternative hypotheses.
• State the level of significance and get the critical value from the student’s t-table with n-1 degrees of freedom.
• Compute the test statistic.
$t=\frac {\bar {x}-\mu}{\frac {s}{\sqrt {n}}}$
• Compare the test statistic to the critical value (t-score) and write the conclusion.
To Test a Claim about p
• Write the null and alternative hypotheses.
• State the level of significance and get the critical value from the standard normal distribution.
• Compute the test statistic.
$z=\frac {\hat {p}-p}{\sqrt {\frac {p(1-p)}{n}}}$
• Compare the test statistic to the critical value (Z-score) and write the conclusion.
Table $\PageIndex{1}$. A summary table for critical Z-scores.
Alpha (α)   Two-sided Test $Z_{\frac{\alpha}{2}}$   One-sided Test $Z_{\alpha}$
0.01        2.575                                    2.33
0.05        1.96                                     1.645
0.10        1.645                                    1.28
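The entries in this table come straight from the standard normal distribution, so they can be regenerated with a few lines of code (a sketch assuming SciPy; the printed table rounds 2.576 to 2.575):

```python
from scipy.stats import norm

for alpha in (0.01, 0.05, 0.10):
    two_sided = norm.ppf(1 - alpha / 2)    # Z_(alpha/2) for a two-sided test
    one_sided = norm.ppf(1 - alpha)        # Z_alpha for a one-sided test
    print(f"alpha = {alpha:.2f}: two-sided ±{two_sided:.3f}, one-sided {one_sided:.3f}")
```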
To Test a Claim about Variance
• Write the null and alternative hypotheses.
• State the level of significance and get the critical value from the chi-square table using n-1 degrees of freedom.
• Compute the test statistic.
$\chi^2 = \frac {(n-1)S^2}{\sigma^{2}_{0}}$
• Compare the test statistic to the critical value and write the conclusion. | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/03%3A_Hypothesis_Testing/3.05%3A_Hypothesis_Test_about_a_Variance.txt |
Up to this point, we have discussed inferences regarding a single population parameter (e.g., μ, p, $\sigma^2$). We have used sample data to construct confidence intervals to estimate the population mean or proportion and to test hypotheses about the population mean and proportion. In both of these chapters, all the examples involved the use of one sample to form an inference about one population. Frequently, we need to compare two sets of data, and make inferences about two populations. This chapter deals with inferences about two means, proportions, or variances. For example:
• You are studying turkey habitat and want to see if the mean number of brood hens is different in New York compared to Pennsylvania.
• You want to determine if the treatment used in Skaneateles Lake has reduced the number of milfoil plants over the last three years.
• Is the proportion of people who support alternative energy in California greater compared to New York?
• Is the variability in application different between two mist blowers?
These questions can be answered by comparing the differences of:
• Mean number of hens in NY to the mean number of hens in PA.
• Number of plants in 2007 to the number of plants in 2010.
• Proportion of people in CA to the proportion of people in NY.
• Variances between the mist blowers.
This chapter is comprised of five sections. The first and second sections examine inferences about two means with two independent samples. The third section examines inferences about means with two dependent samples, the fourth section examines inferences about two proportions, and the fifth section examines inferences between two variances.
Inferences about Two Means with Independent Samples (Assuming Unequal Variances)
Using independent samples means that there is no relationship between the groups. The values in one sample have no association with the values in the other sample. For example, we want to see if the mean life span for hummingbirds in South Carolina is different from the mean life span in North Carolina. These populations are not related, and the samples are independent. We look at the difference of the independent means.
In Chapter 3, we did a one-sample t-test where we compared the sample mean ($\bar {x}$) to the hypothesized mean (μ). We expect that $\bar {x}$ would be close to μ. We use the sample mean, the sample standard deviation, and the sample size for the one-sample test.
With a two-sample t-test, we compare the population means to each other and again look at the difference. We expect that $\bar {x_1}-\bar {x_2}$ would be close to $\mu_{1} – \mu_{2}$. The test statistic will use both sample means, sample standard deviations, and sample sizes for the test.
• For a one-sample t-test we used $\frac {s}{\sqrt{n}}$as a measure of the standard deviation (the standard error).
• We can rewrite $\frac {s}{\sqrt{n}} \rightarrow \sqrt {\frac {s^2}{n}}$
• The numerator of the test statistic will be $(\bar {x_1} - \bar{x_2})-(\mu_{1} - \mu_{2})$
• This has a standard deviation of $\sqrt {\frac {s^2_1}{n_1}+\frac {s^2_2}{n_2}}$.
A two-sample t-test follows the same four steps we saw in Chapter 3.
• Write the null and alternative hypotheses.
• State the level of significance and find the critical value. The critical value, from the student’s t-distribution, has the lesser of n1-1 and n2 -1 degrees of freedom.
• Compute the test statistic.
• Compare the test statistic to the critical value and state a conclusion.
The assumptions we saw in Chapter 3 still must be met. Both samples come from independent random samples. The populations must be normally distributed, or both have large enough sample sizes (n1 and n2 ≥ 30). We will also use the same three pairs of null and alternative hypotheses.
Table $1$. Null and alternative hypotheses.
Two-sided Left-sided Right-sided
$\mathrm{H}_{\mathrm{0}}: \mu_1=\mu_2$ $\mathrm{H}_{\mathrm{0}}: \mu_1=\mu_2$ $\mathrm{H}_{\mathrm{0}}: \mu_1=\mu_2$
$\mathrm{H}_1: \mu_1 \neq \mu_2$ $\mathrm{H}_1: \mu_1<\mu_2$ $\mathrm{H}_1: \mu_1>\mu_2$
Rewriting the null hypothesis of μ1 = μ2 to μ1 – μ2 = 0, simplifies the numerator. The test statistic is Welch’s approximation (Satterthwaite Adjustment) under the assumption that the independent population variances are not equal.
$t=\frac {(\bar {x_1}-\bar {x_2})-(\mu_{1}-\mu_{2})}{\sqrt {\frac {s^2_1}{n_1}+\frac {s^2_2}{n_2}}}$
This test statistic follows the student’s t-distribution with the degrees of freedom adjusted by
$df=\frac {(\frac {S^2_1}{n_1} + \frac {S^2_2}{n_2})^2}{\frac {1}{n_1-1}(\frac {S^2_1}{n_1})^2+\frac {1}{n_2-1}(\frac {S^2_2}{n_2})^2}$
A simpler alternative to determining degrees of freedom when working a problem long-hand is to use the lesser of n1-1 or n2-1 as the degrees of freedom. This method results in a smaller value for degrees of freedom and therefore a larger critical value. This makes the test more conservative, requiring more evidence to reject the null hypothesis.
Example $1$:
A forester is studying the number of cavity trees in old growth stands in Adirondack Park in northern New York. He wants to know if there is a significant difference between the mean number of cavity trees in the Adirondack Park and the old growth stands in the Monongahela National Forest. He collects two independent random samples from each forest. Use a 5% level of significance to test this claim.
Adirondack Park: $n_1$ = 51 stands, $\bar {x_1}$ = 39.6, $s_1$ = 9.4
Monongahela Forest: $n_2$ = 56 stands, $\bar {x_2}$ = 43.9, $s_2$ = 10.7
1) $H_0: \mu_1 = \mu_2$ or $\mu_1 - \mu_2 = 0$. There is no difference between the two population means.
$H_1: \mu_1 \neq \mu_2$. There is a difference between the two population means.
2) The level of significance is 5%. This is a two-sided test so alpha is split into two sides. Computing degrees of freedom using the equation above gives 105 degrees of freedom.
$df = \frac {(\frac {9.4^2}{51}+\frac {10.7^2}{56})^2}{\frac {1}{51-1}(\frac {9.4^2}{51})^2+\frac {1}{56-1}(\frac {10.7^2}{56})^2}=104.9 \nonumber$
The critical value ($t_{\frac {\alpha}{2}}$), based on 100 degrees of freedom (the closest value in the t-table), is ±1.984. Using 50 degrees of freedom, the critical value is ±2.009.
3) The test statistic is
$t=\frac {(\bar {x_1} - \bar {x_2}) - (\mu _1 - \mu_2)}{\sqrt {\frac {s_1^2}{n_1}+\frac {s_2^2}{n_2}}} =\frac {(39.6-43.9)-(0)}{\sqrt{\frac {9.4^2}{51}+\frac {10.7^2}{56}}} = -2.213 \nonumber$
4) The test statistic falls in the rejection zone.
We reject the null hypothesis. We have enough evidence to support the claim that there is a difference in the mean number of cavity trees between the Adirondack Park and the Monongahela National Forest.
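As a quick numerical check of this example (a sketch, not part of the original text; it assumes Python with SciPy is available), the Welch test statistic and Satterthwaite degrees of freedom can be computed directly from the summary statistics:
```python
# Sketch: Welch's two-sample t-test from summary statistics (cavity-tree example).
from scipy import stats

def welch_t(x1bar, s1, n1, x2bar, s2, n2):
    se = (s1**2 / n1 + s2**2 / n2) ** 0.5            # standard error of the difference
    t = (x1bar - x2bar) / se                         # numerator uses mu1 - mu2 = 0 under H0
    df = (s1**2 / n1 + s2**2 / n2) ** 2 / (          # Satterthwaite-adjusted degrees of freedom
        (s1**2 / n1) ** 2 / (n1 - 1) + (s2**2 / n2) ** 2 / (n2 - 1))
    return t, df

t, df = welch_t(39.6, 9.4, 51, 43.9, 10.7, 56)       # Adirondack vs. Monongahela
p_two_sided = 2 * stats.t.sf(abs(t), df)
print(round(t, 3), round(df, 1), round(p_two_sided, 3))   # ≈ -2.213, 104.9, 0.029
```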
Construct and Interpret a Confidence Interval about the Difference of Two Independent Means
A hypothesis test will answer the question about the difference of the means. BUT, we can answer the same question by constructing a confidence interval about the difference of the means. This process is just like the confidence intervals from Chapter 2.
1. Find the critical value.
2. Compute the margin of error.
3. Point estimate ± margin of error.
Because we are working with two samples, we must modify the components of the confidence interval to incorporate the information from the two populations.
• The point estimate is $\bar {x_1} -\bar {x_2}$.
• The standard error comes from the test statistic $\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}}$
• The critical value $t_{\frac {\alpha}{2}}$comes from the student’s t-table.
The confidence interval takes the form of the point estimate plus or minus the standard error of the differences.
$\bar {x_1} -\bar {x_2} \pm t_{\frac {\alpha}{2}}\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}}$
We will use the same three steps to construct a confidence interval about the difference of the means.
1. critical value $t_{\frac {\alpha}{2}}$
2. $E = t_{\frac {\alpha}{2}}\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}}$
3. $\bar {x_1} -\bar {x_2} \pm E$
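As an illustration of these three steps, the sketch below (not part of the text; it assumes Python with SciPy) builds the interval from summary statistics. With the cavity-tree data and the conservative df = 50, it reproduces the interval found in the next example.
```python
# Sketch: confidence interval for the difference of two independent means.
from scipy import stats

def two_mean_ci(x1bar, s1, n1, x2bar, s2, n2, conf=0.95, df=None):
    if df is None:
        df = min(n1, n2) - 1                         # conservative degrees of freedom
    t_crit = stats.t.ppf(1 - (1 - conf) / 2, df)     # step 1: critical value
    E = t_crit * (s1**2 / n1 + s2**2 / n2) ** 0.5    # step 2: margin of error
    diff = x1bar - x2bar                             # step 3: point estimate +/- E
    return diff - E, diff + E

print(two_mean_ci(39.6, 9.4, 51, 43.9, 10.7, 56))    # ≈ (-8.20, -0.40)
```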
Example $2$:
Let’s look at the mean number of cavity trees in old growth stands again. The forester wants to know if there is a difference between the mean number of cavity trees in old growth stands in the Adirondack forests and in the Monongahela Forest. We can answer this question by constructing a confidence interval about the difference of the means.
1) $t_{\frac {\alpha}{2}}$ = 2.009
2) $E = t_{\frac {\alpha}{2}}\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}} = 2.009 \sqrt {\frac {9.4^2}{51}+\frac {10.7^2}{56}}=3.904$
3) $\bar {x_1} -\bar {x_2} \pm 3.904$
The 95% confidence interval for the difference of the means is (-8.204, -0.396).
We can be 95% confident that this interval contains the mean difference in number of cavity trees between the two locations. BUT, this doesn’t answer the question the forester asked. Is there a difference in the mean number of cavity trees between the Adirondack and Monongahela forests? To answer this, we must look at the confidence interval interpretations.
Confidence Interval Interpretations
• If the confidence interval contains all positive values, we find a significant difference between the groups, AND we can conclude that the mean of the first group is significantly greater than the mean of the second group.
• If the confidence interval contains all negative values, we find a significant difference between the groups, AND we can conclude that the mean of the first group is significantly less than the mean of the second group.
• If the confidence interval contains zero (it goes from negative to positive values), we find NO significant difference between the groups.
In this problem, the confidence interval is (-8.204, -0.396). We have all negative values, so we can conclude that there is a significant difference in the mean number of cavity trees AND that the mean number of cavity trees in the Adirondack forests is significantly less than the mean number of cavity trees in the Monongahela Forest. The confidence interval gives an estimate of the mean difference in number of cavity trees between the two forests. There are, on average, 0.396 to 8.204 fewer cavity trees in the Adirondack Park than the Monongahela Forest.
P-value Approach
We can also use the p-value approach to answer the question. Remember, the p-value is the area under the curve (here, the student’s t-distribution) associated with the test statistic. This example is a two-sided test (H1: μ1 ≠ μ2 ) so the p-value, when computed by hand, will be multiplied by two.
The test statistic equals -2.213, so the p-value is two times the area to the left of -2.213. We can only estimate the p-value using the student’s t-table. Using the lesser of n1– 1 or n2– 1 as the degrees of freedom, we have 50 degrees of freedom. Go across the 50 row in the student’s t-table until you find the absolute value of the test statistic. In this case, 2.213 falls between 2.109 and 2.403. Going up to the top of each of those columns gives you the estimate of the p-value (between 0.02 and 0.01).
Table $2$. Student t-Distribution Area in Right Tail
df    .05     .025    .02     .01     .005
39    1.686   2.024   2.127   2.429   2.712
40    1.684   2.021   2.123   2.423   2.704
50    1.676   2.009   2.109   2.403   2.678
60    1.671   2.000   2.099   2.390   2.660
70    1.667   1.994   2.093   2.381   2.648
The one-tail area is between 0.01 and 0.02, so the two-sided p-value is between 2 × 0.01 and 2 × 0.02; that is, 0.02 < p < 0.04. This is less than the level of significance (0.05), so we reject the null hypothesis. There is enough evidence to support the claim that there is a significant difference in the mean number of cavity trees between the areas.
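If software is available, the p-value can be computed exactly instead of being bracketed from the table. A minimal sketch (assuming SciPy; not part of the text):
```python
# Sketch: exact p-value for the cavity-tree test statistic.
from scipy import stats

t_stat = -2.213
print(2 * stats.t.cdf(t_stat, 50))      # ≈ 0.031 with the conservative df = 50
print(2 * stats.t.cdf(t_stat, 104.9))   # ≈ 0.029 with the Satterthwaite df
```
Both values fall inside the 0.02 to 0.04 range estimated from the table.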
Example $3$:
Researchers are studying the relationship between logging activities in the northern forests and amphibian habitats. They were comparing moisture levels between old-growth and post-harvest habitats. The researchers believe that post-harvest habitat has a lower moisture level. They collected data on moisture levels from two independent random samples. Test their claim using a 5% level of significance.
Old Growth: $n_1$ = 26, $\bar {x_1}$ = 0.62 g/cm³, $s_1$ = 0.12 g/cm³
Post Harvest: $n_2$ = 31, $\bar {x_2}$ = 0.56 g/cm³, $s_2$ = 0.17 g/cm³
H0: μ1 = μ2 or μ1 – μ2 = 0. There is no difference between the two population means.
H1: μ1 > μ2. Mean moisture level in old growth forests is greater than post-harvest levels.
We will use the critical value based on the lesser of n1– 1 or n2– 1 degrees of freedom. In this problem, there are 25 degrees of freedom and the critical value is 1.708. Now compute the test statistic.
$t=\frac {(0.62-0.56)-0}{\sqrt {\frac {0.12^2}{26}+\frac {0.17^2}{31}}} = 1.556$
The test statistic does not fall in the rejection zone. We fail to reject the null hypothesis. There is not enough evidence to support the claim that the moisture level is significantly lower in the post-harvest habitat.
Now answer this question by constructing a 90% confidence interval about the difference of the means.
1) $t_{\frac {\alpha}{2}}$ = 1.708
2) E = $t_{\frac {\alpha}{2}}$$\sqrt {\frac {s_1^2}{n_1}+\frac {s^2_2}{n_2}}=1.708\sqrt {\frac {0.12^2}{26}+\frac {0.17^2}{31}}=0.0658$
3) $\bar {x_1} -\bar {x_2} \pm E= (0.62-0.56) ±0.0658$
The 90% confidence interval for the difference of the means is (-0.0058, 0.1258). The values in the confidence interval run from negative to positive, indicating that there is no significant difference in the mean moisture levels between old growth and post-harvest stands.
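A short sketch (not part of the text; SciPy assumed) that reproduces both the test and the 90% interval for this example from the summary statistics:
```python
# Sketch: moisture-level example, test statistic and 90% confidence interval.
from scipy import stats

n1, x1, s1 = 26, 0.62, 0.12      # old growth
n2, x2, s2 = 31, 0.56, 0.17      # post harvest

se = (s1**2 / n1 + s2**2 / n2) ** 0.5
t = (x1 - x2) / se
df = min(n1, n2) - 1                                  # conservative degrees of freedom
print(round(t, 3), round(stats.t.ppf(0.95, df), 3))   # ≈ 1.556 vs. critical value 1.708

E = stats.t.ppf(0.95, df) * se                        # 90% CI uses t_{alpha/2} = t_{0.05}
print(round(x1 - x2 - E, 4), round(x1 - x2 + E, 4))   # ≈ (-0.0058, 0.1258)
```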
Software Solutions
Minitab
Two-Sample T-Test and CI: for old vs. post
        N    Mean    StDev   SE Mean
old     26   0.620   0.121   0.024
post    31   0.559   0.172   0.031
Difference = $\mu_{(old)} – \mu_{(post)}$
Estimate for difference: 0.0603
95% lower bound for difference: -0.0049
T-Test of difference = 0 (vs >): T-Value = 1.55 p-Value = 0.064 DF = 53
The p-value (0.064) is greater than the level of significance, so we fail to reject the null hypothesis.
Additional example: www.youtube.com/watch?v=7pIb-GVixFo.
Excel
t-Test: Two-Sample Assuming Unequal Variances
                                  Variable 1   Variable 2
Mean                              0.619615     0.559355
Variance                          0.014708     0.02948
Observations                      26           31
Hypothesized Mean Difference      0
df                                54
t Stat                            1.557361
$P(T\le t)$ one-tail              0.063809
t Critical one-tail               1.673565
$P(T\le t)$ two-tail              0.127617
t Critical two-tail               2.004879
The one-tail p-value (0.063809) is greater than the level of significance; therefore, we fail to reject the null hypothesis.
Pooled Two-sampled t-test (Assuming Equal Variances)
In the previous section, we made the assumption of unequal variances between our two populations. Welch’s t-test statistic does not assume that the population variances are equal and can be used whether the population variances are equal or not. The test that assumes equal population variances is referred to as the pooled t-test. Pooling refers to finding a weighted average of the two independent sample variances.
The pooled test statistic uses a weighted average of the two sample variances.
$S_p^2 = \frac {(n_1-1)S_1^2+(n_2-1)S_2^2}{n_1+n_2-2} = (\frac {n_1-1}{n_1+n_2-2})S_1^2 + (\frac {n_2-1}{n_1+n_2-2})S_2^2$
If $n_1= n_2$, then $S^2_p = \frac {1}{2}s^2_1 + \frac {1}{2}s^2_2$, the average of the two sample variances. But whenever $n_1 \ne n_2$, the $s^2$ based on the larger sample size will receive more weight than the other $s^2$.
The advantage of this test statistic is that it exactly follows the student’s t-distribution with n1+ n2– 2 degrees of freedom.
$t=\frac {\bar {X_1} - \bar {X_2}}{\sqrt {S_p^2(\frac {1}{n_1}+\frac {1}{n_2})}}=\frac {\bar {X_1}- \bar {X_2}}{S_p\sqrt {\frac {1}{n_1} +\frac {1}{n_2}}}$
The hypothesis test procedure will follow the same steps as the previous section.
It may be difficult to verify that two population variances might be equal based on sample data. The F-test is commonly used to test variances but is not robust. Small departures from normality greatly impact the outcome making the results of the F-test unreliable. It can be difficult to decide if a significant outcome from an F-test is due to the differences in variances or non-normality. Because of this, many researchers rely on Welch’s t when comparing two means.
Example $1$:
Growth of pine seedlings in two different substrates was measured. We want to know if growth was better in substrate 2. Growth (in cm/yr) was measured and included in the table below. α = 0.05
Substrate 1: 3.2, 4.5, 3.8, 4.0, 3.7, 3.2, 4.1
Substrate 2: 4.5, 6.2, 5.8, 6.0, 7.1, 6.8, 7.2
Solution
$H_0: \mu_1 = \mu_2$
$H_1: \mu_1 < \mu_2$
$S^2_p=\frac {(7-1)0.474^2 +(7-1)0.936^2}{7+7-2} = 0.55$
$t=\frac {3.79-6.23}{\sqrt{0.55(\frac {1}{7}+ \frac {1}{7})}}=\frac {-2.44}{0.396}=-6.16$
This is a one-sided test with $n_1 + n_2 – 2 = 12$ degrees of freedom. The critical value is -1.782. The test statistic is less than the critical value so we will reject the null hypothesis.
There is enough evidence to support the claim that the mean growth is less in substrate 1. Growth in substrate 2 is greater.
The confidence interval approach also uses the pooled variance and takes the form:
$(\bar {x_1}-\bar {x_2}) \pm t_{\frac {\alpha}{2}}\sqrt {s^2_p(\frac {1}{n_1}+\frac {1}{n_2})}$
using $n_1 + n_2 – 2$ degrees of freedom. So let’s answer the same question with a 90% confidence interval.
$(3.79-6.23)\pm 1.782 \sqrt{0.55(\frac {1}{7}+\frac {1}{7})} = (-2.44 \pm 0.7064)=(-3.146,-1.734)$
All negative values tell you that there is a significant difference between the mean growth for the two substrates and that the growth in substrate 1 is significantly lower than the growth in substrate 2 with reduction in growth ranging from 1.734 to 3.146 cm/yr.
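For comparison with the hand calculation, here is a minimal sketch (assuming SciPy is installed; not part of the text) that runs the pooled test on the raw growth data:
```python
# Sketch: pooled two-sample t-test on the seedling growth data.
from scipy import stats

substrate1 = [3.2, 4.5, 3.8, 4.0, 3.7, 3.2, 4.1]
substrate2 = [4.5, 6.2, 5.8, 6.0, 7.1, 6.8, 7.2]

t, p_two_sided = stats.ttest_ind(substrate1, substrate2, equal_var=True)  # pooled variance
p_one_sided = p_two_sided / 2        # halve because H1: mu1 < mu2 and t is negative
print(round(t, 2), p_one_sided)      # ≈ -6.16, ~2.4e-05
```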
Software Solutions
Minitab
Two-Sample T-Test and CI: Substrate1, Substrate2
Two-sample T for Substrate1 vs. Substrate2
             N   Mean    StDev   SE Mean
Substrate1   7   3.786   0.474   0.18
Substrate2   7   6.229   0.936   0.35
Difference = mu (Substrate1) – mu (Substrate2)
Estimate for difference: -2.443
95% upper bound for difference: -1.736
T-Test of difference = 0 (vs <): T-Value = -6.16 p-value = 0.000 DF = 12
Both use Pooled StDev = 0.7418
The p-value (0.000) is less than the level of significance (0.05). We will reject the null hypothesis.
Excel
t-Test: Two-Sample Assuming Equal Variances
                                  Variable 1   Variable 2
Mean                              3.785714     6.228571
Variance                          0.224762     0.875714
Observations                      7            7
Pooled Variance                   0.550238
Hypothesized Mean Difference      0
df                                12
t Stat                            -6.16108
$P(T \le t)$ one-tail             2.43E-05
t Critical one-tail               1.782288
$P(T \le t)$ two-tail             4.86E-05
t Critical two-tail               2.178813
This is a one-sided test ($H_1: \mu_1 < \mu_2$), so use the $P(T \le t)$ one-tail value 2.43E-05. The p-value (0.0000243) is less than the level of significance (0.05). We will reject the null hypothesis.
Inferences about Two Means with Dependent Samples—Matched Pairs
Dependent samples occur when there is a relationship between the samples. The data consists of matched pairs from random samples. A sampling method is dependent when the values selected for one sample are used to determine the values in the second sample. Before and after measurements on a population, such as people, lakes, or animals are an example of dependent samples. The objects in your sample are measured twice; measurements are taken at a certain point in time, and then re-taken at a later date. Dependency also occurs when the objects are related, such as eyes or tires on a car. Pairing isn’t a problem; it’s an opportunity to use the information that occurs with both measurements.
Before you begin your work, you must decide if your samples are dependent. If they are, you can take advantage of this fact. You can use this matching to better answer your research questions. Pairing data reduces measurement variability, which increases the accuracy of our statistical conclusions.
We use the difference (the subtraction) of the pairs of data in our analysis. For each pair, we subtract the values:
• $d_1$ = before1 – after 1
• $d_2$ = before 2 – after 2
• $d_3$ = before 3 – after 3
We are creating a new random variable d (differences), and it is important to keep the sign, whether positive or negative. We can compute $\bar d$, the sample mean of the differences, and $s_d$, the sample standard deviation of the differences, as follows:
$\bar {d} = \frac {\sum d_i}{n}$
$s_d = \sqrt {\frac {\sum (d-\bar{d})^2}{n-1}}$
Just as we used the sample mean and the sample standard deviation in a one-sample t-test, we will use the sample mean and sample standard deviation of the differences to test for matched pairs. The assumption of normality must still be verified. The differences must be normally distributed or the sample size must be large enough (n ≥ 30).
We can do a hypothesis test using matched pairs data following the same methods we used in the previous chapter.
• Write the null and alternative hypotheses.
• State the level of significance and find the critical value.
• Compute a test statistic.
• Compare the test statistic to the critical value and state a conclusion.
Since we are using the differences between the pairs of data, we identify this in our null and alternative hypotheses: $H_0: \mu_d = 0$. The mean of the differences is equal to zero; there is no difference in “before and after” values.
We’ll use the same three pairs of null and alternative hypotheses we used in the previous chapter.
Table $1$. Null and alternative hypotheses.
Two-sided: $\mathrm{H}_{\mathrm{0}}: \mu_d=c$, $\mathrm{H}_1: \mu_d \neq c$
Left-sided: $\mathrm{H}_{\mathrm{0}}: \mu_d=c$, $\mathrm{H}_1: \mu_d<c$
Right-sided: $\mathrm{H}_{\mathrm{0}}: \mu_d=c$, $\mathrm{H}_1: \mu_d>c$
The critical value comes from the student’s t-distribution table with n – 1 degrees of freedom, where n = number of matched pairs. The test statistic follows the student’s t-distribution
$t=\frac {\bar {d}-\mu_d}{(s_d/\sqrt {n})}$
The conclusion must always answer the question you are asking in the alternative hypothesis.
• Reject the $H_0$. There is enough evidence to support the alternative claim.
• Fail to reject the $H_0$. There is not enough evidence to support the alternative claim.
Example $1$:
An environmental biologist wants to know if the water clarity in Owasco Lake is improving. Using a Secchi disk, she takes measurements in specific locations at specific dates during the course of the year. She then repeats the measurements in the same locations and on the same dates five years later. She obtains the following results:
Date    Initial Depth   5-year Depth   Difference
5/11    38              52             -14
6/7     58              60             -2
6/24    65              72             -7
7/8     74              72             2
7/27    56              54             2
8/31    36              48             -12
9/30    56              58             -2
10/12   52              60             -8
Using a 5% level of significance, test the biologist’s claim that water clarity is improving.
Solution
The data are paired by date with two measurements taken at each point five years apart. We will use the differences (right column) to see if there has been a significant improvement in water clarity. Using your calculator, Minitab, or Excel, compute the descriptive statistics on the differences to get the sample mean and the sample standard deviation of the differences.
$\bar d = -5.125$
$s_d = 6.081$
1) The null and alternative hypotheses:
$H_0: \mu_d = 0$ (The mean of the differences is equal to zero- no difference in water clarity over time.)
$H_1: \mu_d < 0$ (The water clarity is improving.)
We test “less than” because of how we computed the differences and the question we are asking.
In this case, we hope to see greater depth (better water clarity) at the five-year measurements. By calculating Initial – 5-year we hope to see negative values, values less than zero, indicating greater depth and clarity at the 5-year mark. Think of it like this:
Initial Depth < 5-year depth
This gives you the direction of the test!
2) The critical value tα.
The critical value comes from the student’s t-distribution table with n – 1 degrees of freedom. In this problem, we have eight pairs of data (n = 8) with 7 degrees of freedom. This is a one-sided test (less than), so alpha is all in the left tail. Go down the 0.05 column with 7 df to find the correct critical value (tα) of -1.895.
3) The test statistic: $t=\frac {\bar {d}-\mu_{d}}{s_{d}/\sqrt {n}} = \frac {-5.125-0}{6.081/\sqrt{8}} = -2.38$
We subtract zero from d-bar because of our null hypothesis. Our null hypothesis is that the difference of the before and after values are statistically equal to zero. In other words, there has been no change in water clarity.
4) Compare the test statistic to the critical value and state a conclusion.
The test statistic (-2.38) is less than the critical value (-1.895). It falls in the rejection zone.
We reject the null hypothesis. We have enough evidence to support the claim that the mean water clarity has improved.
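A numerical check of this example (a sketch, not part of the text; it assumes NumPy and SciPy) that forms the differences and the test statistic directly:
```python
# Sketch: matched-pairs test on the Secchi-disk data (differences = initial - 5-year).
import numpy as np
from scipy import stats

initial = np.array([38, 58, 65, 74, 56, 36, 56, 52])
five_yr = np.array([52, 60, 72, 72, 54, 48, 58, 60])
d = initial - five_yr

d_bar, s_d, n = d.mean(), d.std(ddof=1), len(d)
t = (d_bar - 0) / (s_d / np.sqrt(n))
print(round(d_bar, 3), round(s_d, 3), round(t, 2))   # ≈ -5.125, 6.081, -2.38

t2, p2 = stats.ttest_rel(initial, five_yr)           # same t; p2 is two-sided, so halve it
print(round(t2, 2), p2 / 2)
```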
P-value Approach
We can also use the p-value approach to answer the question. To estimate p-value using the student’s t-table, go across the row for 7 degrees of freedom until you find the two values that the absolute value of your test statistic falls between.
Table 4. Student t-Distribution.
df   .05     .025    .02     .01     .005
5    2.015   2.571   2.757   3.365   4.032
6    1.943   2.447   2.612   3.143   3.707
7    1.895   2.365   2.517   2.998   3.499
8    1.860   2.306   2.449   2.896   3.355
9    1.833   2.262   2.398   2.821   3.250
The p-value for this test statistic is greater than 0.02 and just less than 0.025. Compare this to the level of significance (alpha). The Decision Rule says that if the p-value is less than α, reject the null hypothesis. In this case, the p-value estimate (0.02 – 0.025) is less than the level of significance (0.05). Reject the null hypothesis. We have enough evidence to support the claim that the mean water clarity has improved.
BUT, what if you used a 1% level of significance? In this case, the p-value is NOT less than the level of significance ((0.02 – 0.025)>0.01). We would fail to reject the null hypothesis. There is NOT enough evidence to support the claim that the water clarity has improved. It is important to set the level of significance at the start of your research and report the p-value. Another researcher may interpret your findings differently, based on your reported p-value and their own selected level of significance.
Construct and Interpret a Confidence Interval about the Differences of the Data for Matched Pairs
A hypothesis test for matched pairs data is very similar to a one-sample t-test. BUT, we can answer the same question by constructing a confidence interval about the mean of the differences. This process is just like the confidence intervals from Chapter 2.
1. Find the critical value.
2. Compute the margin of error.
3. Point estimate ± margin of error.
For matched pairs data, the critical value comes from the student’s t-distribution with n – 1 degrees of freedom. The margin of error uses the sample standard deviation of the differences (sd) and the point estimate is $\bar {d}$, the mean of the differences.
For a (1 – α)*100% confidence interval for the mean of the differences
$\bar d\pm t_{\frac{\alpha}{2}}(\frac {s_d}{\sqrt {n}})$
Where $t_{\frac {\alpha}{2}}$ is used because confidence intervals are always two-sided.
Example $2$:
Let’s look at the biologist studying water clarity in Owasco Lake again. She wants to test the claim that water clarity has improved. We can answer this question by constructing a confidence interval about the mean of the differences.
$\bar d$ = -5.125
$s_d$ = 6.081
α = 0.05
n = 8
Solution
1) $t_{\frac {\alpha}{2}} = 2.365$
2) $E=t_{\frac {\alpha}{2}}(\frac {s_d}{\sqrt {n}})=2.365(\frac {6.081}{\sqrt {8}}) = 5.085$
3) $\bar {d} \pm E = -5.125 \pm 5.085$
The 95% confidence interval about the mean of the differences is
$(-10.21, -0.04)$
$(-10.21\le \mu_d \le -0.04)$
We can be 95% confident that this interval contains the true mean of the differences in water clarity between the two time periods. BUT, this doesn’t directly answer the question about improved water clarity. To do this, we use the interpretations given below.
Confidence Interval Interpretations
1. If the confidence interval contains all positive values, we find a significant difference between the groups, AND we can conclude that the mean of the first group is significantly greater than the mean of the second group.
2. If the confidence interval contains all negative values, we find a significant difference between the groups, AND we can conclude that the mean of the first group is significantly less than the mean of the second group.
3. If the confidence interval contains zero (it goes from negative to positive values), we find NO significant difference between the groups.
In this problem, the confidence interval is (-10.21, -0.04). We have all negative values, so we can conclude that there is a significant difference in the mean water clarity between the years AND…
• The mean water clarity for the initial time was significantly less than at the five-year re-measurement.
• Water clarity has improved during the five-year period. The confidence interval estimates the mean improvement.
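The interval in Example 2 can be verified with a few lines of code (a sketch, not part of the text; SciPy assumed), reusing $\bar d$ and $s_d$ from the water clarity data:
```python
# Sketch: matched-pairs confidence interval for the water clarity differences.
from scipy import stats

d_bar, s_d, n, alpha = -5.125, 6.081, 8, 0.05
t_crit = stats.t.ppf(1 - alpha / 2, n - 1)        # step 1: critical value, df = n - 1
E = t_crit * s_d / n**0.5                         # step 2: margin of error
print(round(d_bar - E, 2), round(d_bar + E, 2))   # step 3: ≈ (-10.21, -0.04)
```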
Example $3$:
Biologists are studying elk migration in the western US and want to know if the four-lane interstate that was built ten years ago has disturbed elk migration to the winter feeding area. A random sample was gathered from nine wilderness districts in the winter feeding areas. These data were compared to a random sample collected from the same nine areas before the highway was built. Use a 1% level of significance to test this claim.
District   1      2      3      4      5      6      7      8      9
Before     11.6   18.7   15.9   20.6   10.1   17.4   7.2    12.2   11.7
After      10.0   21.6   13.9   22.8   11.5   16.2   8.1    10.8   9.6
d          1.6    -2.9   2.0    -2.2   -1.4   1.2    -0.9   1.4    2.1
$\bar {d} = 0.100$
$s_d = 1.946$
$H_0: \mu_d = 0$
$H_1: \mu_d \ne 0$
Determine the critical values: This is a two-sided question (alternative ≠) so the critical values are ±3.355.
Compute the test statistic:
$t=\frac {\bar d -\mu_d}{s_d/\sqrt {n}} = \frac {0.100-0}{1.946/\sqrt {9}}=0.1542$
Now compare the critical value to the test statistic and state a conclusion. The test statistic is NOT greater than 3.355 or less than -3.355 (it doesn’t fall in the rejection zones). We fail to reject the null hypothesis. There is not enough evidence to support the claim that the highway has interfered with the elk migration (no difference before or after the highway).
Now construct a 99% confidence interval and answer the question.
1) $t_{\frac {\alpha}{2}}$ = 3.355
2) $E=t_{\frac {\alpha}{2}}(\frac {s_d}{\sqrt{n}}) =3.355(1.946/\sqrt{9})=2.176$
3) $\bar d \pm E = 0.100\pm 2.176$
The 99% confidence interval about the difference of the means is: (-2.076, 2.276)
This confidence interval contains zero. The null hypothesis is that there is zero difference before and after the highway was created. Therefore, we fail to reject the null hypothesis. There is not enough evidence to support the claim that the highway has interfered with the elk migration (no difference before or after the highway).
Software Solutions
Minitab
Paired T-Test and CI: Before, After
             N   Mean    StDev   SE Mean
Before       9   13.93   4.42    1.47
After        9   13.83   5.32    1.77
Difference   9   0.100   1.946   0.649
99% CI for mean difference: (-2.077, 2.277)
T-Test of mean difference = 0 (vs not = 0): T-Value = 0.15 p-value = 0.881
Minitab gives the test statistic of 0.15 and the p-value of 0.881. It also gives a 99% confidence interval for the difference of the means (-2.077, 2.277). All results support failing to reject the null hypothesis.
Excel
t-Test: Paired Two Sample for Means
                                  Before     After
Mean                              13.93333   13.83333333
Variance                          19.565     28.3075
Observations                      9          9
Pearson Correlation               0.936635
Hypothesized Mean Difference      0
df                                8
t Stat                            0.15415
$P(T\le t)$ one-tail              0.440654
t Critical one-tail               2.896459
$P(T\le t)$ two-tail              0.881309
t Critical two-tail               3.355387
The test statistic is 0.15415. This is a two-sided question so we can use $P(T\le t)$ two-tail = 0.881309. The p-value is NOT less than the 1% level of significance so we will fail to reject the null hypothesis.
Inferences about Two Population Proportions
We can apply the same methods we just learned with means to our two-sample proportion problems. We have two populations with two samples and we want to compare the population proportions.
• Is the proportion of lakes in New York with invasive species different from the proportion of lakes in Michigan with invasive species?
• Is the proportion of construction companies using certified lumber greater in the northeast than in the southeast?
A test of two population proportions is very similar to a test of two means, except that the parameter of interest is now “p” instead of “µ”. With a one-sample proportion test, we used $\hat p =\frac {x}{n}$ as the point estimate of p. We expect that $\hat p$ would be close to p. With a test of two proportions, we will have two $\hat p$’s, and we expect that ($\hat {p_1} - \hat {p_2}$) will be close to ($p_1 - p_2$). The test statistic accounts for both samples.
• With a one-sample proportion test, the test statistic is
$z = \frac {\hat p - p}{\sqrt {\frac {p(1-p)}{n}}}$
and it has an approximate standard normal distribution.
• For a two-sample proportion test, we would expect the test statistic to be
$z=\frac {(\hat {p_1} -\hat {p_2})-(p_1-p_2)}{\sqrt {\frac {p_1(1-p_1)}{n_1}+\frac {p_2(1-p_2)}{n_2}}}$
HOWEVER, the null hypothesis will be that $p_1 = p_2$. Because $H_0$ is assumed to be true, the test assumes that $p_1$ and $p_2$ are both equal to a common population proportion p. We must compute a pooled estimate of p (it is unknown) using our sample data.
$\bar p = \frac {x_1+x_2}{n_1+n_2}$
The test statistic then takes the form of
$z=\frac {(\hat {p_1} -\hat {p_2})-(p_1-p_2)}{\sqrt {\frac {\bar p(1-\bar p)}{n_1}+\frac {\bar p(1-\bar p)}{n_2}}}$
The hypothesis test follows the same steps that we have seen in previous sections:
• State the null and alternative hypotheses
• State the level of significance and determine the critical value
• Compute the test statistic
• Compare the critical value and the test statistic and state a conclusion
The assumptions that we set for a one-sample proportion test still hold true for both samples. Both must be random samples satisfying the following statements:
• $n\hat p(1-\hat p) \ge 10$ for each sample
• Each sample size is no more than 5% of the population size.
We can again use the same three pairs of null and alternative hypotheses. Notice that we are working with population proportions so the parameter is p.
Table $1$. Null and alternative hypotheses.
Two-sided: $\mathrm{H}_{\mathrm{0}}: p_1=p_2$, $\mathrm{H}_1: p_1 \neq p_2$
Left-sided: $\mathrm{H}_{\mathrm{0}}: p_1=p_2$, $\mathrm{H}_1: p_1 < p_2$
Right-sided: $\mathrm{H}_{\mathrm{0}}: p_1=p_2$, $\mathrm{H}_1: p_1 > p_2$
The critical value comes from the standard normal table and depends on the alternative hypothesis (is the question one- or two-sided?). As usual, you must state a conclusion. You must always answer the question that is asked in the alternative hypothesis.
Example $1$:
A researcher believes that a greater proportion of construction companies in the northeast are using certified lumber in home construction projects compared to companies in the southeast. She collected a random sample of 173 companies in the southeast and found that 86 used at least 30% certified lumber. She collected another random sample of 115 companies from the northeast and found that 68 used at least 30% certified lumber. Test the researcher’s claim that a greater proportion of companies in the northeast use at least 30% certified lumber compared to the southeast. α = 0.05.
Southeast Northeast
$n_1 = 173$ $n_2 = 115$
$x_1 = 86$ $x_2 = 68$
Solution
Write the null and alternative hypotheses:
$H_0: p_1 = p_2$ or $p_1 – p_2 = 0$
$H_1: p_1 < p_2$
The critical value comes from the standard normal table. It is a one-sided test, so alpha is all in the left tail. The critical value is -1.645.
Compute the point estimates
$\hat {p_1} = \frac {86}{173}=0.497$
$\hat {p_2} = \frac {68}{115} = 0.591$
Now compute
$\bar p = \frac {x_1+x_2}{n_1+n_2} = \frac {86+68}{173+115} = 0.535$
The test statistic is
$z=\frac {(\hat {p_1} -\hat {p_2})-(p_1-p_2)}{\sqrt {\frac {\bar p(1-\bar p)}{n_1}+\frac {\bar p(1-\bar p)}{n_2}}} = \frac {(0.497-0.591)-0}{\sqrt {\frac {0.535(1-0.535)}{173}+\frac {0.535(1-0.535)}{115}}} = -1.57$
Now compare the critical value to the test statistic and state a conclusion.
We fail to reject the null hypothesis. There is not enough evidence to support the claim that a greater proportion of companies in the northeast use at least 30% certified lumber compared to companies in the southeast.
Using the P-Value Approach
We can also answer this question using the p-value approach. The p-value is the area associated with the test statistic. This is a left-tailed problem with a test statistic of -1.57 so the p-value is the area to the left of -1.57. Look up the area associated with the Z-score -1.57 in the standard normal table.
The p-value is 0.0582.
The p-value is greater than the 5% level of significance. We fail to reject the null hypothesis. There is not enough statistical evidence to support the claim that a greater proportion of companies in the northeast use at least 30% certified lumber compared to companies in the southeast.
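A minimal sketch (not part of the text; SciPy assumed) of the same two-proportion z-test, using the pooled estimate of p:
```python
# Sketch: two-proportion z-test for the certified-lumber example.
from scipy import stats

x1, n1 = 86, 173     # southeast
x2, n2 = 68, 115     # northeast

p1_hat, p2_hat = x1 / n1, x2 / n2
p_bar = (x1 + x2) / (n1 + n2)                            # pooled proportion
se = (p_bar * (1 - p_bar) * (1 / n1 + 1 / n2)) ** 0.5
z = (p1_hat - p2_hat) / se
print(round(z, 2), round(stats.norm.cdf(z), 4))          # left-tailed: ≈ -1.57, p ≈ 0.058
```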
Construct and Interpret a Confidence Interval about the Difference of Two Proportions
Just like a two-sample t-test about the means, we can answer this question by constructing a confidence interval about the difference of the proportions. The point estimate is $\hat {p_1} - \hat {p_2}$. The standard error is $\sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}$and the critical value $z_{\alpha/2}$comes from the standard normal table.
The confidence interval takes the form of the point estimate ± the margin of error.
$(\hat {p_1}- \hat {p_2}) \pm z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1} + \frac {\hat {p_2}(1-\hat {p_2})}{n_2}}$
We will use the same three steps to construct a confidence interval about the difference of the proportions. Notice the estimate of the standard error of the differences. We do not rely on the pooled estimate of p when constructing confidence intervals to estimate the difference in proportions. This is because we are not making any assumptions regarding the equality of p1 and p2, as we did in the hypothesis test.
1) critical value $z_{\alpha/2}$
2) $E = z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}$
3) $(\hat {p_1}-\hat {p_2}) \pm E$
Let’s revisit Example 1 again, but this time we will construct a confidence interval about the difference between the two proportions.
Example $2$:
The researcher claims that a greater proportion of companies in the northeast use at least 30% certified lumber compared to companies in the southeast. We can test this claim by constructing a 90% confidence interval about the difference of the proportions.
1) critical value $z_{\alpha/2}= 1.645$
2) $E = z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}=1.645\sqrt {\frac {0.497(1-0.497)}{173}+\frac {0.591(1-0.591)}{115}}=0.098$
3) $(\hat {p_1}-\hat {p_2}) \pm E= (0.497-0.591) ± 0.098$
The 90% confidence interval about the difference of the proportions is (-0.192, 0.004).
BUT, this doesn’t answer the question the researcher asked. We must use one of the three interpretations seen in the previous section. In this problem, the confidence interval contains zero. Therefore we can conclude that there is no significant difference between the proportions of companies using certified lumber in the northeast and in the southeast.
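The same interval can be reproduced with a short sketch (not part of the text; SciPy assumed); note that the unpooled standard error is used for the interval:
```python
# Sketch: 90% confidence interval for the difference of two proportions.
from scipy import stats

p1_hat, n1 = 86 / 173, 173
p2_hat, n2 = 68 / 115, 115

z_crit = stats.norm.ppf(0.95)                            # z_{alpha/2} for a 90% interval
E = z_crit * (p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2) ** 0.5
diff = p1_hat - p2_hat
print(round(diff - E, 3), round(diff + E, 3))            # ≈ (-0.192, 0.004)
```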
Example $3$:
A hydrologist is studying the use of Best Management Plans (BMP) in managed forest stands to protect riparian zones. He collects information from 62 stands that had a management plan by a forester and finds that 47 stands had correctly implemented BMPs to protect the riparian zones. He collected information from 58 stands that had no management plan and found that 26 of them had correctly implemented BMPs for riparian zones. Do these data suggest that there is a significant difference in the proportion of stands with and without management plans that had correct BMPs for riparian zones? α = 0.05.
Plan No Plan
$x_1 = 47$ $x_2 = 26$
$n_1 = 62$ $n_2 = 58$
Let’s answer this question both ways by first using a hypothesis test and then by constructing a confidence interval about the difference of the proportions.
$H_0: p_1 = p_2$ or $p_1 – p_2 = 0$
$H_1: p_1 \ne p_2$
Critical value: ±1.96
Test statistic:
$z=\frac {(\hat {p_1}-\hat {p_2})-(p_1 - p_2)}{\sqrt {\frac {\bar p (1- \bar p)}{n_1}+\frac {\bar p(1-\bar p)}{n_2}}}= \frac {(0.758-0.448)-0}{\sqrt {\frac {0.608(1-0.608)}{62}+\frac {0.608(1-0.608)}{58}}}=3.48 \nonumber$
The test statistic is greater than 1.96 and falls in the rejection zone. There is enough evidence to support the claim that there is a significant difference in the proportion of correctly implemented BMPs with and without management plans.
Now compute the p-value and compare it to the level of significance. The p-value is two times the area under the curve to the right of 3.48. Look for the area (in the standard normal table) associated with a Z-score of 3.48. The area to the right of 3.48 is 1 – 0.9997 = 0.0003. The p-value is 2 x 0.0003 = 0.0006.
The p-value is less than 0.05. We will reject the null hypothesis and support the claim that the proportions are different.
Now, answer this question using a confidence interval.
1) critical value $z_{\alpha/2}= 1.96$
2) $E = z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}=1.96\sqrt {\frac {0.758(1-0.758)}{62}+\frac {0.448(1-0.448)}{58}}=0.1666$
3) $\hat {p_1}-\hat {p_2} \pm E = (0.758-0.448) \pm 0.1666$
The 95% confidence interval about the difference of the proportions is (0.143, 0.477). The confidence interval contains all positive values, telling you that there is a significant difference between the proportions AND the first group (BMPs used with management plans) is significantly greater than the second group (BMPs with no plans). This confidence interval estimates the difference in proportions. For this problem, we can say that correctly implemented BMPs with a plan occur in a greater proportion (14.3% to 47.7%) compared to those implemented without a management plan.
Software Solutions
Minitab
Test and CI for Two Proportions
Sample   X    N    Sample p
1        47   62   0.758065
2        26   58   0.448276
Difference = p (1) – p (2)
Estimate for difference: 0.309789
95% CI for difference: (0.143223, 0.476355)
Test for difference = 0 (vs. not = 0): Z = 3.47 p-value = 0.001
Fisher’s exact test: p-value = 0.001
The p-value equals 0.001 which tells us to reject the null hypothesis. There is a significant difference in the proportion of correctly implemented BMPs with and without management plans. The confidence interval for the difference in proportions is also given (0.143223, 0.476355) which allows us to estimate the difference.
Excel
Excel does not analyze data from proportions.
F-Test for Comparing Two Population Variances
One major application of a test for the equality of two population variances is for checking the validity of the equal variance assumption $(\sigma_1^2=\sigma_2^2)$ for a two-sample t-test. First we hypothesize two populations of measurements that are normally distributed. We label these populations as 1 and 2, respectively. We are interested in comparing the variance of population 1 $(\sigma_1^2)$ to the variance of population 2 $(\sigma_2^2)$.
When independent random samples have been drawn from the respective populations, the ratio
$\frac {S^2_1/S_2^2}{\sigma_1^2/\sigma ^2_2}$

possesses a probability distribution in repeated sampling that is referred to as an F distribution, and its properties are:
• Unlike Z and t, but like $\chi^2$, F can assume only positive values.
• The F distribution, unlike the Z and t distributions, but like the $\chi^2$ distribution, is non-symmetrical.
• There are many F distributions, and each one has a different shape. We specify a particular one by designating the degrees of freedom associated with $S_1^2$ and $S_2^2$. We denote these quantities by $df_1$ and $df_2$, respectively.
Note: A statistical test of the null hypothesis $\sigma_1^2 = \sigma_2^2$ utilizes the test statistic $S_1^2/S_2^2$. It may require either an upper-tail or lower-tail rejection region, depending on which sample variance is larger. To alleviate this situation, we are at liberty to designate the population with the larger sample variance as population 1 (i.e., used as the numerator of the ratio $S_1^2/S_2^2$). By this convention, the rejection region is located only in the upper tail of the F distribution.
Null hypothesis: $H_0:\sigma_1^2 = \sigma_2^2$
Alternative hypothesis:
• $H_a: \sigma_1^2 > \sigma_2^2$ (one-tailed), reject $H_0$ if the observed $F > F_{\alpha}$
• $H_a: \sigma_1^2 \ne \sigma_2^2$ (two-tailed), reject $H_0$ if the observed $F > F_{\alpha/2}$
Test statistic: $F = \frac {S_1^2}{S^2_2}$ assuming $S_1^2>S_2^2$, where the F critical value in the rejection region is based on two degrees of freedom: $df_1 = n_1 - 1$ (associated with the numerator $S_1^2$) and $df_2 = n_2 - 1$ (associated with the denominator $S_2^2$).
Example $1$:
A forester wants to compare two different mist blowers for consistent application. She wants to use the mist blower with the smaller variance, which means more consistent application. She wants to test that the variance of Type A (0.087 gal.²) is significantly greater than the variance of Type B (0.073 gal.²) using α = 0.05.
Type A: $S_1^2$ = 0.087, $n_1$ = 16
Type B: $S^2_2$ = 0.073, $n_2$ = 21
Solution
$H_0: \sigma_1^2 = \sigma_2^2$
$H_1:\sigma_1^2 > \sigma_2^2$
The critical value ($df_1 = 15$ and $df_2 = 20$) is 2.20. The test statistic is:
$F = \frac {S_1^2}{S_2^2} = \frac {0.087}{0.073}=1.192$
The test statistic is not larger than the critical value (it does not fall in the rejection zone) so we fail to reject the null hypothesis. While the variance of Type B is mathematically smaller than the variance of Type A, it is not statistically smaller. There is not enough statistical evidence to support the claim that the variance of Type A is significantly greater than the variance of Type B. Both mist blowers will deliver the chemical with equal consistency.
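A sketch (not part of the text; SciPy assumed) that reproduces the variance-ratio test, drawing the critical value and p-value from the F distribution:
```python
# Sketch: F-test comparing the two mist-blower variances.
from scipy import stats

s1_sq, n1 = 0.087, 16     # Type A (larger sample variance goes in the numerator)
s2_sq, n2 = 0.073, 21     # Type B

F = s1_sq / s2_sq
df1, df2 = n1 - 1, n2 - 1
F_crit = stats.f.ppf(0.95, df1, df2)                     # upper-tail critical value, alpha = 0.05
p_value = stats.f.sf(F, df1, df2)
print(round(F, 3), round(F_crit, 2), round(p_value, 3))  # ≈ 1.192, 2.20, 0.351
```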
Software Solutions
Minitab
Test and CI for Two Variances - Methods
Null hypothesis: Variance(1) / Variance(2) = 1
Alternative hypothesis: Variance(1) / Variance(2) > 1
Significance level: Alpha = 0.05
Statistics
Sample   N    StDev   Variance
1        16   0.295   0.087
2        21   0.270   0.073
Ratio of standard deviations = 1.092
Ratio of variances = 1.192
Tests
Method            DF1   DF2   Test Statistic   p-value
F Test (normal)   15    20    1.19             0.351
Excel
F-Test Two-Sample for Variances
                        Type A     Type B
Mean                    11.07188   11.10595
Variance                0.08699    0.073379
Observations            16         21
df                      15         20
F                       1.185483
$P(F\le f)$ one-tail    0.355098
F Critical one-tail     2.203274
4.06: Summary
Questions about the differences between two samples can be answered in several ways: hypothesis test, p-value approach, or confidence interval approach. In all cases, you must clearly state your question, the selected level of significance and the conclusion.
If you choose the hypothesis test approach, you need to compare the critical value to the test statistic. If the test statistic falls in the rejection zone set by the critical value, then you will reject the null hypothesis and support the alternative claim.
If you use the p-value approach, you must compute the test statistic and find the area associated with that value. For a two-sided test, the p-value is two times the area of the absolute value of the test statistic. For a one-sided test, the p-value is the area to the left or right of the test statistic. The decision rule states: If the p-value is less than α(level of significance), reject the null hypothesis and support the alternative claim.
The confidence interval approach constructs an interval about the difference of the means or proportions. If the interval contains zero, then you can conclude that there is no difference between the two groups. If the interval contains all positive values, you can conclude that group 1 is significantly greater than group 2. If the interval contains all negative numbers, you can conclude that group 2 is significantly greater than group 1.
In all approaches, a clear and concise conclusion is required. You MUST answer the question being asked by stating the results of your approach.
Variance Analysis
Previously, we have tested hypotheses about two population means. This chapter examines methods for comparing more than two means. Analysis of variance (ANOVA) is an inferential method used to test the equality of three or more population means.
$H_0: \mu_1= \mu_2= \mu_3= \cdots =\mu_k$
This method is also referred to as single-factor ANOVA because we use a single property, or characteristic, for categorizing the populations. This characteristic is sometimes referred to as a treatment or factor.
Note
A treatment (or factor) is a property, or characteristic, that allows us to distinguish the different populations from one another.
The objectives of ANOVA are (1) to estimate treatment means, and the differences of treatment means, and (2) to test hypotheses for statistical significance of comparisons of treatment means, where “treatment” or “factor” is the characteristic that distinguishes the populations.
For example, a biologist might compare the effect that three different herbicides may have on seed production of an invasive species in a forest environment. The biologist would want to estimate the mean annual seed production under the three different treatments, while also testing to see which treatment results in the lowest annual seed production. The null and alternative hypotheses are:
$H_0: \mu_1= \mu_2= \mu_3$ $H_1$: at least one of the means is significantly different from the others
It would be tempting to test this null hypothesis $H_0: \mu_1= \mu_2= \mu_3$ by comparing the population means two at a time. If we continue this way, we would need to test three different pairs of hypotheses:
$H_0: \mu_1= \mu_2$ AND $H_0: \mu_1= \mu_3$ AND $H_0: \mu_2= \mu_3$
$H_1: \mu_1 \ne \mu_2$ $H_1: \mu_1 \ne \mu_3$ $H_1: \mu_2 \ne \mu_3$
If we used a 5% level of significance, each test would have a probability of a Type I error (rejecting the null hypothesis when it is true) of α = 0.05. Each test would have a 95% probability of correctly not rejecting the null hypothesis. The probability that all three tests correctly do not reject the null hypothesis is $0.95^3 = 0.86$. There is a $1 - 0.95^3 = 0.14$ (14%) probability that at least one test will lead to an incorrect rejection of the null hypothesis. A 14% probability of a Type I error is much higher than the desired alpha of 5% (remember: α is the same as Type I error). As the number of populations increases, the probability of making a Type I error using multiple t-tests also increases. Analysis of variance allows us to test the null hypothesis (all means are equal) against the alternative hypothesis (at least one mean is different) with a specified value of α.
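A quick sketch (not part of the text) of how this family-wise error rate grows as more groups, and therefore more pairwise tests, are added:
```python
# Sketch: family-wise Type I error when each pairwise t-test uses alpha = 0.05.
from math import comb

alpha = 0.05
for k in (3, 4, 5, 6):                          # number of treatment groups
    m = comb(k, 2)                              # number of pairwise comparisons
    print(k, m, round(1 - (1 - alpha) ** m, 2))  # k = 3 gives 3 tests and a 0.14 error rate
```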
The assumptions for ANOVA are (1) observations in each treatment group represent a random sample from that population; (2) each of the populations is normally distributed; (3) population variances for each treatment group are homogeneous (i.e., $\sigma_1^2=\sigma_2^2= \cdots =\sigma_k^2$). We can easily test the normality of the samples by creating a normal probability plot; however, verifying homogeneous variances can be more difficult. A general rule of thumb is as follows: One-way ANOVA may be used if the largest sample standard deviation is no more than twice the smallest sample standard deviation.
In the previous chapter, we used a two-sample t-test to compare the means from two independent samples with a common variance. The sample data are used to compute the test statistic:
$t=\dfrac {\bar {x_1}-\bar {x_2}}{s_p\sqrt {\dfrac {1}{n_1}+\dfrac {1}{n_2}}}$ where $S_p^2 = \dfrac {(n_1-1)S_1^2 + (n_2-1)S_2^2}{n_1+n_2-2}$
is the pooled estimate of the common population variance σ2. To test more than two populations, we must extend this idea of pooled variance to include all samples as shown below:
$s^2_w= \frac {(n_1-1)s_1^2 + (n_2-1)s_2^2 + ...+(n_k - 1)s_k^2}{n_1+n_2+...+n_k-k}$

where $s_w^2$ represents the pooled estimate of the common variance $\sigma^2$, and it measures the variability of the observations within the different populations whether or not $H_0$ is true. This is often referred to as the variance within samples (variation due to error).
If the null hypothesis IS true (all the means are equal), then all the populations are the same, with a common mean $\mu$ and variance $\sigma^2$. Instead of randomly selecting different samples from different populations, we are actually drawing k different samples from one population. We know that the sampling distribution for k means based on n observations will have mean $\mu_{\bar x}$ and variance $\frac {\sigma^2}{n}$ (squared standard error). Since we have drawn k samples of n observations each, we can estimate the variance of the k sample means ($\frac {\sigma^2}{n}$) by
$\dfrac {\sum(\bar {x_i} - \mu_{\bar x} )^2}{k-1} = \dfrac {\sum \bar {x_i}^2 - \dfrac {[\sum \bar {x_i}]^2}{k}}{k-1} = \frac {\sigma^2}{n}$
Consequently, n times the sample variance of the means estimates $\sigma^2$. We designate this quantity as $S_B^2$ such that
$S_B^2 = n\cdot\dfrac {\sum (\bar {x_i}-\mu_{\bar x})^2}{k-1}=n\cdot\dfrac {\sum \bar {x_i}^2 -\dfrac {[\sum \bar {x_i}]^2}{k}}{k-1}$

where $S_B^2$ is also an unbiased estimate of the common variance $\sigma^2$, IF $H_0$ IS TRUE. This is often referred to as the variance between samples (variation due to treatment).
Under the null hypothesis that all k populations are identical, we have two estimates of $\sigma^2$ ($S_W^2$ and $S_B^2$). We can use the ratio of $S_B^2/ S_W^2$ as a test statistic to test the null hypothesis that $H_0: \mu_1= \mu_2= \mu_3= \cdots = \mu_k$, which follows an F-distribution with degrees of freedom $df_1= k - 1$ and $df_2= N - k$ (where k is the number of populations and N is the total number of observations, $N = n_1 + n_2+…+ n_k$). The numerator of the test statistic measures the variation between sample means. The estimate of the variance in the denominator depends only on the sample variances and is not affected by the differences among the sample means.
When the null hypothesis is true, the ratio of $S_B^2$ and $S_W^2$ will be close to 1. When the null hypothesis is false, $S_B^2$ will tend to be larger than $S_W^2$ due to the differences among the populations. We will reject the null hypothesis if the F test statistic is larger than the F critical value at a given level of significance (or if the p-value is less than the level of significance).
Tables are a convenient format for summarizing the key results in ANOVA calculations. The following one-way ANOVA table illustrates the required computations and the relationships between the various ANOVA table elements.
Table 1. One-way ANOVA table.
Source of Variation   df    Sum of Squares (SS)   Mean Sum of Squares (MSS)   F-Test        p-value
Treatment             k-1   SSTr                  MSTr = SSTr/(k-1)           MSTr/MSE
Error                 N-k   SSE                   MSE = SSE/(N-k)
Total                 N-1   SSTo
The sum of squares for the ANOVA table has the relationship of SSTo = SSTr + SSE where:
$SSTo = \sum_{i=1}^k \sum_{j=1}^n (x_{ij} - \bar {\bar{x}})^2$
$SSTr = \sum_{i=1}^k n_i(\bar {x_i} -\bar {\bar{x}})^2$
$SSE = \sum_{i=1}^k \sum^n_{j=1} (x_{ij}-\bar {x_i})^2$
Total variation (SSTo) = explained variation (SSTr) + unexplained variation (SSE)
The degrees of freedom also have a similar relationship: df(SSTo) = df(SSTr) + df(SSE)
The Mean Sum of Squares for the treatment and error are found by dividing the Sums of Squares by the degrees of freedom for each. While the Sums of Squares are additive, the Mean Sums of Squares are not. The F-statistic is then found by dividing the Mean Sum of Squares for the treatment (MSTr) by the Mean Sum of Squares for the error(MSE). The MSTr is the $S_B^2$ and the MSE is the $S_W^2$.
$F=\dfrac {S_B^2}{S_W^2}=\dfrac {MSTr}{MSE}$

Example $1$:
An environmentalist wanted to determine if the mean acidity of rain differed among Alaska, Florida, and Texas. He randomly selected six rain dates at each site and obtained the following data:
Table 2. Data for Alaska, Florida, and Texas.
Alaska   Florida   Texas
5.11     4.87      5.46
5.01     4.18      6.29
4.90     4.40      5.57
5.14     4.67      5.15
4.80     4.89      5.45
5.24     4.09      5.30
Solution
$H_0: \mu_A = \mu_F = \mu_T$
$H_1$: at least one of the means is different
Table 3. Summary Table.
State     Sample size   Sample total   Sample mean   Sample variance
Alaska    $n_1$ = 6     30.2           5.033         0.0265
Florida   $n_2$ = 6     27.1           4.517         0.1193
Texas     $n_3$ = 6     33.22          5.537         0.1575
Notice that there are differences among the sample means. Are the differences small enough to be explained solely by sampling variability? Or are they of sufficient magnitude so that a more reasonable explanation is that the μ’s are not all equal? The conclusion depends on how much variation among the sample means (based on their deviations from the grand mean) compares to the variation within the three samples.
The grand mean is equal to the sum of all observations divided by the total sample size: $\bar {\bar{x}}$ = grand total/N = 90.52/18 = 5.0289
$SSTo = (5.11-5.0289)^2 + (5.01-5.0289)^2 +…+(5.24-5.0289)^2+ (4.87-5.0289)^2 + (4.18-5.0289)^2 +…+(4.09-5.0289)^2 + (5.46-5.0289)^2 + (6.29-5.0289)^2 +…+(5.30-5.0289)^2 = 4.6384$
$SSTr = 6(5.033-5.0289)^2 + 6(4.517-5.0289)^2 + 6(5.537-5.0289)^2 = 3.1214$
$SSE = SSTo - SSTr = 4.6384 - 3.1214 = 1.5170$
Table 4. One-way ANOVA Table.
Source of Variation   df     Sum of Squares (SS)   Mean Sum of Squares (MSS)   F-Test
Treatment             3-1    3.1214                3.1214/2 = 1.5607           1.5607/0.1011 = 15.4372
Error                 18-3   1.5170                1.5170/15 = 0.1011
Total                 18-1   4.6384
This test is based on $df_1 = k – 1 = 2$ and $df_2 = N – k = 15$. For α = 0.05, the F critical value is 3.68. Since the observed F = 15.4372 is greater than the F critical value of 3.68, we reject the null hypothesis. There is enough evidence to state that at least one of the means is different.
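The same ANOVA can be run in one line with SciPy (a sketch, not part of the text), which reproduces the F statistic and p-value reported in the software output below:
```python
# Sketch: one-way ANOVA on the rain acidity data.
from scipy import stats

alaska  = [5.11, 5.01, 4.90, 5.14, 4.80, 5.24]
florida = [4.87, 4.18, 4.40, 4.67, 4.89, 4.09]
texas   = [5.46, 6.29, 5.57, 5.15, 5.45, 5.30]

F, p = stats.f_oneway(alaska, florida, texas)
print(round(F, 2), round(p, 6))     # ≈ 15.43, 0.000229
```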
Software Solutions
Minitab
One-way ANOVA: pH vs. State
Source   DF   SS      MS      F       P
State    2    3.121   1.561   15.43   0.000
Error    15   1.517   0.101
Total    17   4.638
S = 0.3180   R-Sq = 67.29%   R-Sq(adj) = 62.93%
Individual 95% CIs For Mean Based on Pooled StDev
Level     N   Mean     StDev
Alaska    6   5.0333   0.1629
Florida   6   4.5167   0.3455
Texas     6   5.5367   0.3969
Pooled StDev = 0.3180
The p-value (0.000) is less than the level of significance (0.05) so we will reject the null hypothesis.
Excel
ANOVA: Single Factor
SUMMARY
Groups     Count   Sum     Average    Variance
Column 1   6       30.2    5.033333   0.026547
Column 2   6       27.1    4.516667   0.119347
Column 3   6       33.22   5.536667   0.157507
ANOVA
Source of Variation   SS         df   MS         F          p-value    F crit
Between Groups        3.121378   2    1.560689   15.43199   0.000229   3.68232
Within Groups         1.517      15   0.101133
Total                 4.638378   17
The p-value (0.000229) is less than alpha (0.05) so we reject the null hypothesis. There is enough evidence to support the claim that at least one of the means is different.
Once we have rejected the null hypothesis and found that at least one of the treatment means is different, the next step is to identify those differences. There are two approaches that can be used to answer this type of question: contrasts and multiple comparisons.
Contrasts can be used only when there are clear expectations BEFORE starting an experiment, and these are reflected in the experimental design. Contrasts are planned comparisons. For example, mule deer are treated with drug A, drug B, or a placebo to treat an infection. The three treatments are not symmetrical. The placebo is meant to provide a baseline against which the other drugs can be compared. Contrasts are more powerful than multiple comparisons because they are more specific. They are more able to pick up a significant difference. Contrasts are not always readily available in statistical software packages (when they are, you often need to assign the coefficients), or may be limited to comparing each sample to a control.
Multiple comparisons should be used when there are no justified expectations. They are a posteriori, pair-wise tests of significance. For example, we compare the gas mileage for six brands of all-terrain vehicles. We have no prior knowledge to expect any vehicle to perform differently from the rest. Pair-wise comparisons should be performed here, but only if an ANOVA test on all six vehicles rejected the null hypothesis first.
It is NOT appropriate to use a contrast test when suggested comparisons appear only after the data have been collected. We are going to focus on multiple comparisons instead of planned contrasts.
When the null hypothesis is rejected by the F-test, we believe that there are significant differences among the k population means. So, which ones are different? Multiple comparison method is the way to identify which of the means are different while controlling the experiment-wise error (the accumulated risk associated with a family of comparisons). There are many multiple comparison methods available.
In The Least Significant Difference Test, each individual hypothesis is tested with the student t-statistic. When the Type I error probability is set at some value and the variance $s^2$ has v degrees of freedom, the null hypothesis is rejected for any observed value such that $|t_o|>t_{\alpha/2, v}$. It is an abbreviated version of conducting all possible pair-wise t-tests. This method has a weak experiment-wise error rate. Fisher’s Protected LSD is somewhat better at controlling this problem.
The Bonferroni inequality provides a conservative alternative when software is not available. When conducting n comparisons, $\alpha_e \le n\alpha_c$; therefore $\alpha_c = \alpha_e/n$. In other words, divide the experiment-wise level of significance by the number of multiple comparisons to get the comparison-wise level of significance. The Bonferroni procedure is based on computing confidence intervals for the differences between each possible pair of μ’s. The critical value for the confidence intervals comes from a table with (N – k) degrees of freedom and k(k – 1)/2 number of intervals. If a particular interval does not contain zero, the two means are declared to be significantly different from one another. An interval that contains zero indicates that the two means are NOT significantly different.
Dunnett’s procedure was created for studies where one of the treatments acts as a control treatment for some or all of the remaining treatments. It is primarily used if the interest of the study is determining whether the mean responses for the treatments differ from that of the control. Like Bonferroni, confidence intervals are created to estimate the difference between two treatment means with a specific table of critical values used to control the experiment-wise error rate. With r replications per treatment, the standard error of the difference is $\sqrt{2\,MSE/r}$.
Scheffe’s test is also a conservative method for all possible simultaneous comparisons suggested by the data. This test equates the F-statistic of ANOVA with the t-test statistic. Since $t^2 = F$, then $t = \sqrt{F}$, and we can substitute $\sqrt{F_{(\alpha_e, v_1, v_2)}}$ for $t_{(\alpha_e, v_2)}$ for Scheffe’s statistic.
Tukey’s test provides a strong sense of experiment-wise error rate for all pair-wise comparison of treatment means. This test is also known as the Honestly Significant Difference. This test orders the treatments from smallest to largest and uses the studentized range statistic
$q=\dfrac {\bar {y}(largest)-\bar y (smallest)}{\sqrt {MSE/r}}$
The absolute difference of the two means is used because the location of the two means in the calculated difference is arbitrary, with the sign of the difference depending on which mean is used first. For unequal replications, the Tukey-Kramer approximation is used instead.
Student-Newman-Keuls (SNK) test is a multiple range test based on the studentized range statistic like Tukey’s. The critical value is based on a particular pair of means being tested within the entire set of ordered means. Two or more ranges among means are used for test criteria. While it is similar to Tukey’s in terms of a test statistic, it provides only weak control of the experiment-wise error rate.
Bonferroni, Dunnett’s, and Scheffe’s tests are the most conservative, meaning that the difference between the two means must be greater before concluding a significant difference. The LSD and SNK tests are the least conservative. Tukey’s test is in the middle. Robert Kuehl, author of Design of Experiments: Statistical Principles of Research Design and Analysis (2000), states that the Tukey method provides the best protection against decision errors, along with a strong inference about magnitude and direction of differences.
Let’s go back to our question on mean rain acidity in Alaska, Florida, and Texas. The null and alternative hypotheses were as follows:
H0: μA = μF = μT
H1: at least one of the means is different
The p-value for the F-test was 0.000229, which is less than our 5% level of significance. We rejected the null hypothesis and had enough evidence to support the claim that at least one of the means was significantly different from another. We will use Bonferroni and Tukey’s methods for multiple comparisons in order to determine which mean(s) is different.
Bonferroni Multiple Comparison Method
A Bonferroni confidence interval is computed for each pair-wise comparison. For k populations, there will be k(k-1)/2 multiple comparisons. The confidence interval takes the form of:
$For \ \mu_1 - \mu_2 : (\bar {x}_1-\bar {x}_2) \pm (Bonferroni \ t \ critical \ value) \sqrt{\dfrac {MSE}{n_1} +\dfrac {MSE}{n_2}}$
$For \ \mu_{k-1} - \mu_k: (\bar {x}_{k-1} - \bar {x}_k) \pm (Bonferroni \ t \ critical \ value) \sqrt {\dfrac {MSE}{n_{k-1}}+\dfrac{MSE}{n_k}}$
Where MSE is from the analysis of variance table and the Bonferroni t critical value comes from the Bonferroni table given below. The Bonferroni t critical value, rather than the student t critical value, is used together with the MSE to achieve a simultaneous confidence level of at least 95% for all intervals computed. The two means are judged to be significantly different if the corresponding interval does not include zero.
Table 5. Bonferroni t-critical values.
            Number of intervals
df        2       3       4       5       6       10
2       6.21    7.65    8.86    9.92   10.89   14.09
3       4.18    4.86    5.39    5.84    6.23    7.45
4       3.50    3.96    4.31    4.60    4.85    5.60
5       3.16    3.53    3.81    4.03    4.22    4.77
6       2.97    3.29    3.52    3.71    3.86    4.32
7       2.84    3.13    3.34    3.50    3.64    4.03
8       2.75    3.02    3.21    3.36    3.48    3.83
9       2.69    2.93    3.11    3.25    3.36    3.69
10      2.63    2.87    3.04    3.17    3.28    3.58
11      2.59    2.82    2.98    3.11    3.21    3.50
12      2.56    2.78    2.93    3.05    3.15    3.43
13      2.53    2.75    2.90    3.01    3.11    3.37
14      2.51    2.72    2.86    2.98    3.07    3.33
15      2.49    2.69    2.84    2.95    3.04    3.29
16      2.47    2.67    2.81    2.92    3.01    3.25
17      2.46    2.66    2.79    2.90    2.98    3.22
18      2.45    2.64    2.77    2.88    2.96    3.20
19      2.43    2.63    2.76    2.86    2.94    3.17
20      2.42    2.61    2.74    2.85    2.93    3.15
21      2.41    2.60    2.73    2.83    2.91    3.14
22      2.41    2.59    2.72    2.82    2.90    3.12
23      2.40    2.58    2.71    2.81    2.89    3.10
24      2.39    2.57    2.70    2.80    2.88    3.09
25      2.38    2.57    2.69    2.79    2.86    3.08
26      2.38    2.56    2.68    2.78    2.86    3.07
27      2.37    2.55    2.68    2.77    2.85    3.06
28      2.37    2.55    2.67    2.76    2.84    3.05
29      2.36    2.54    2.66    2.76    2.83    3.04
30      2.36    2.54    2.66    2.75    2.82    3.03
40      2.33    2.50    2.62    2.70    2.78    2.97
60      2.30    2.46    2.58    2.66    2.73    2.91
120     2.27    2.43    2.54    2.62    2.68    2.86
For this problem, k = 3 so there are k(k – 1)/2= 3(3 – 1)/2 = 3 multiple comparisons. The degrees of freedom are equal to N – k = 18 – 3 = 15. The Bonferroni critical value is 2.69.
$For \mu_A -\mu_F : (5.033-4.517) \pm (2.69) \sqrt {\dfrac {0.1011}{6} +\dfrac {0.1011}{6}} = (0.0222, 1.0098)$
$For \mu_A - \mu_T : (5.033-5.537) \pm (2.69)\sqrt {\dfrac {0.1011}{6} +\dfrac {0.1011}{6}} = (-0.9978, -0.0102)$
$For \mu_F - \mu_T : (4.517-5.537) \pm (2.69)\sqrt {\dfrac {0.1011}{6} +\dfrac {0.1011}{6}} = (-1.5138, -0.5262)$
The first confidence interval contains all positive values. This tells you that there is a significant difference between the two means and that the mean rain pH for Alaska is significantly greater than the mean rain pH for Florida.
The second confidence interval contains all negative values. This tells you that there is a significant difference between the two means and that the mean rain pH of Alaska is significantly lower than the mean rain pH of Texas.
The third confidence interval also contains all negative values. This tells you that there is a significant difference between the two means and that the mean rain pH of Florida is significantly lower than the mean rain pH of Texas.
All three states have significantly different levels of rain pH. Texas has the highest rain pH, then Alaska followed by Florida, which has the lowest mean rain pH level. You can use the confidence intervals to estimate the mean difference between the states. For example, the average rain pH in Texas ranges from 0.5262 to 1.5138 higher than the average rain pH in Florida.
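The arithmetic behind these Bonferroni intervals is easy to script. The following is a minimal Python sketch (not part of the original Minitab/Excel workflow) that reproduces the three intervals from the summary values reported above; all variable names are ours.

```python
import math

# Summary values from the ANOVA output above
mse, n, t_bonf = 0.1011, 6, 2.69      # MSE, replicates per state, Bonferroni t (df = 15, 3 intervals)
means = {"Alaska": 5.033, "Florida": 4.517, "Texas": 5.537}

# With equal sample sizes the margin of error is the same for every pair
margin = t_bonf * math.sqrt(mse / n + mse / n)

for a, b in [("Alaska", "Florida"), ("Alaska", "Texas"), ("Florida", "Texas")]:
    diff = means[a] - means[b]
    print(f"mu_{a} - mu_{b}: ({diff - margin:.4f}, {diff + margin:.4f})")
```

Running this reproduces the intervals above, including the all-negative interval for Florida versus Texas.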
Now let’s use the Tukey method for multiple comparisons. We are going to let software compute the values for us. Excel doesn’t do multiple comparisons so we are going to rely on Minitab output.
One-way ANOVA: pH vs. state
Source    DF      SS      MS       F       P
state      2    3.121   1.561   15.4    0.000
Error     15    1.517   0.101
Total     17    4.638

S = 0.3180   R-Sq = 67.29%   R-Sq(adj) = 62.93%
We have seen this part of the output before. We now want to focus on the Grouping Information Using Tukey Method. All three states have different letters indicating that the mean rain pH for each state is significantly different. They are also listed from highest to lowest. It is easy to see that Texas has the highest mean rain pH while Florida has the lowest.
Grouping Information Using Tukey Method
state      N    Mean     Grouping
Texas      6    5.5367   A
Alaska     6    5.0333     B
Florida    6    4.516        C
Means that do not share a letter are significantly different.
This next set of confidence intervals is similar to the Bonferroni confidence intervals. They estimate the difference of each pair of means. The individual confidence interval level is set at 97.97% instead of 95% thus controlling the experiment-wise error rate.
Tukey 95% Simultaneous Confidence Intervals
All Pairwise Comparisons among Levels of state
Individual confidence level = 97.97%
state = Alaska subtracted from:
state      Lower     Center     Upper
Florida   -0.9931   -0.5167   -0.0402
Texas      0.0269    0.5033    0.9798

state = Florida subtracted from:

state      Lower     Center     Upper
Texas      0.5435    1.0200    1.4965
The first pairing is Florida – Alaska, which results in an interval of (-0.9931, -0.0402). The interval has all negative values indicating that Florida is significantly lower than Alaska. The second pairing is Texas – Alaska, which results in an interval of (0.0269, 0.9798). The interval has all positive values indicating that Texas is greater than Alaska. The third pairing is Texas – Florida, which results in an interval from (0.5435, 1.4965). All positive values indicate that Texas is greater than Florida.
The intervals are similar to the Bonferroni intervals with differences in width due to methods used. In both cases, the same conclusions are reached.
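The Tukey intervals can also be reproduced outside of Minitab from the summary statistics alone, using the studentized range distribution. The Python sketch below assumes SciPy 1.7 or later (which provides scipy.stats.studentized_range); the means, MSE, and error degrees of freedom are the values reported in the output above.

```python
import math
from itertools import combinations
from scipy.stats import studentized_range   # requires SciPy >= 1.7

mse, n, k, df_error = 0.1011, 6, 3, 15
means = {"Alaska": 5.0333, "Florida": 4.5167, "Texas": 5.5367}

q_crit = studentized_range.ppf(0.95, k, df_error)            # studentized range critical value
margin = (q_crit / math.sqrt(2)) * math.sqrt(mse / n + mse / n)

for a, b in combinations(means, 2):
    diff = means[b] - means[a]                                # "a subtracted from b", as in Minitab
    print(f"{b} - {a}: ({diff - margin:.4f}, {diff + margin:.4f})")
```

Up to rounding, the three intervals match the simultaneous confidence intervals shown above.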
When we use one-way ANOVA and conclude that the differences among the means are significant, we can’t be absolutely sure that the given factor is responsible for the differences. It is possible that the variation of some other unknown factor is responsible. One way to reduce the effect of extraneous factors is to design an experiment so that it has a completely randomized design. This means that each element has an equal probability of receiving any treatment or belonging to any different group. In general good results require that the experiment be carefully designed and executed.
Additional Example:
https://youtu.be/BMyYXc8cWHs
In the previous chapter we used one-way ANOVA to analyze data from three or more populations using the null hypothesis that all means were the same (no treatment effect). For example, a biologist wants to compare mean growth for three different levels of fertilizer. A one-way ANOVA tests to see if at least one of the treatment means is significantly different from the others. If the null hypothesis is rejected, a multiple comparison method, such as Tukey’s, can be used to identify which means are different, and the confidence interval can be used to estimate the difference between the different means.
Suppose the biologist wants to ask this same question but with two different species of plants while still testing the three different levels of fertilizer. The biologist needs to investigate not only the average growth between the two species (main effect A) and the average growth for the three levels of fertilizer (main effect B), but also the interaction or relationship between the two factors of species and fertilizer. Two-way analysis of variance allows the biologist to answer the question about growth affected by species and levels of fertilizer, and to account for the variation due to both factors simultaneously.
Our examination of one-way ANOVA was done in the context of a completely randomized design where the treatments are assigned randomly to each subject (or experimental unit). We now consider analysis in which two factors can explain variability in the response variable. Remember that we can deal with factors by controlling them, by fixing them at specific levels, and randomly applying the treatments so the effect of uncontrolled variables on the response variable is minimized. With two factors, we need a factorial experiment.
Table 1. Observed data for two species at three levels of fertilizer.
This is an example of a factorial experiment in which there are a total of 2 x 3 = 6 possible combinations of the levels for the two different factors (species and level of fertilizer). These six combinations are referred to as treatments and the experiment is called a 2 x 3 factorial experiment. We use this type of experiment to investigate the effect of multiple factors on a response and the interaction between the factors. Each of the n observations of the response variable for the different levels of the factors exists within a cell. In this example, there are six cells and each cell corresponds to a specific treatment.
When you compare treatment means for a factorial experiment (or for any other experiment), multiple observations are required for each treatment. These are called replicates. For example, if you have four observations for each of the six treatments, you have four replications of the experiment. Replication demonstrates the results to be reproducible and provides the means to estimate experimental error variance. Replication also provides the capacity to increase the precision for estimates of treatment means. Increasing replication decreases $s_{\bar y}^2 = \frac {s^2}{r}$, thereby increasing the precision of $\bar y$.
Notation
• k = number of levels of factor A
• l = number of levels of factor B
• kl = number of treatments (each one a combination of a factor A level and a factor B level)
• m = number of observations on each treatment
Main Effects and Interaction Effect
Main effects deal with each factor separately. In the previous example we have two factors, A and B. The main effect of Factor A (species) is the difference between the mean growth for Species 1 and Species 2, averaged across the three levels of fertilizer. The main effect of Factor B (fertilizer) is the difference in mean growth for levels 1, 2, and 3 averaged across the two species. The interaction is the simultaneous changes in the levels of both factors. If the changes in the level of Factor A result in different changes in the value of the response variable for the different levels of Factor B, we say that there is an interaction effect between the factors. Consider the following example to help clarify this idea of interaction.
Example $1$:
Factor A has two levels and Factor B has two levels. In the left box, when Factor A is at level 1, Factor B changes by 3 units. When Factor A is at level 2, Factor B again changes by 3 units. Similarly, when Factor B is at level 1, Factor A changes by 2 units. When Factor B is at level 2, Factor A again changes by 2 units. There is no interaction. The change in the true average response when the level of either factor changes from 1 to 2 is the same for each level of the other factor. In this case, changes in levels of the two factors affect the true average response separately, or in an additive manner.
Figure 1. Illustration of interaction effect.
Solution
The right box illustrates the idea of interaction. When Factor A is at level 1, Factor B changes by 3 units but when Factor A is at level 2, Factor B changes by 6 units. When Factor B is at level 1, Factor A changes by 2 units but when Factor B is at level 2, Factor A changes by 5 units. The change in the true average response when the levels of both factors change simultaneously from level 1 to level 2 is 8 units, which is much larger than the separate changes suggest. In this case, there is an interaction between the two factors, so the effect of simultaneous changes cannot be determined from the individual effects of the separate changes. Change in the true average response when the level of one factor changes depends on the level of the other factor. You cannot determine the separate effect of Factor A or Factor B on the response because of the interaction.
Assumptions
Note: Basic Assumption
The observations on any particular treatment are independently selected from a normal distribution with variance σ2 (the same variance for each treatment), and samples from different treatments are independent of one another.
We can use normal probability plots to satisfy the assumption of normality for each treatment. The requirement for equal variances is more difficult to confirm, but we can generally check by making sure that the largest sample standard deviation is no more than twice the smallest sample standard deviation.
Although not a requirement for two-way ANOVA, having an equal number of observations in each treatment, referred to as a balanced design, increases the power of the test. However, unequal replication (an unbalanced design) is very common. Some statistical software packages (such as Excel) will only work with balanced designs. Minitab will provide the correct analysis for both balanced and unbalanced designs in the General Linear Model component under ANOVA statistical analysis. However, for the sake of simplicity, we will focus on balanced designs in this chapter.
Sums of Squares and the ANOVA Table
In the previous chapter, the idea of sums of squares was introduced to partition the variation due to treatment and random variation. The relationship is as follows:
$SSTo = SSTr + SSE$
We now partition the variation even more to reflect the main effects (Factor A and Factor B) and the interaction term:
$SSTo = SSA + SSB +SSAB +SSE$
where
1. SSTo is the total sums of squares, with the associated degrees of freedom klm – 1
2. SSA is the factor A main effect sums of squares, with associated degrees of freedom k – 1
3. SSB is the factor B main effect sums of squares, with associated degrees of freedom l – 1
4. SSAB is the interaction sum of squares, with associated degrees of freedom (k – 1)(l – 1)
5. SSE is the error sum of squares, with associated degrees of freedom kl(m – 1)
As we saw in the previous chapter, the magnitude of the SSE is related entirely to the amount of underlying variability in the distributions being sampled. It has nothing to do with values of the various true average responses. SSAB reflects in part underlying variability, but its value is also affected by whether or not there is an interaction between the factors; the greater the interaction, the greater the value of SSAB.
The following ANOVA table illustrates the relationship between the sums of squares for each component and the resulting F-statistic for testing the three null and alternative hypotheses for a two-way ANOVA.
1. $H_0$: There is no interaction between factors
$H_1$: There is a significant interaction between factors
2. $H_0$: There is no effect of Factor A on the response variable
$H_1$: There is an effect of Factor A on the response variable
3. $H_0$: There is no effect of Factor B on the response variable
$H_1$: There is an effect of Factor B on the response variable
If there is a significant interaction, then ignore the following two sets of hypotheses for the main effects. A significant interaction tells you that the change in the true average response for a level of Factor A depends on the level of Factor B. The effect of simultaneous changes cannot be determined by examining the main effects separately. If there is NOT a significant interaction, then proceed to test the main effects. The Factor A sums of squares will reflect random variation and any differences between the true average responses for different levels of Factor A. Similarly, Factor B sums of squares will reflect random variation and the true average responses for the different levels of Factor B.
Table 2. Two-way ANOVA table.
Each of the five sources of variation, when divided by the appropriate degrees of freedom (df), provides an estimate of the variation in the experiment. The estimates are called mean squares and are displayed along with their respective sums of squares and df in the analysis of variance table. In one-way ANOVA, the mean square error (MSE) is the best estimate of $\sigma^2$ (the population variance) and is the denominator in the F-statistic. In a two-way ANOVA, it is still the best estimate of $\sigma^2$. Notice that in each case, the MSE is the denominator in the test statistic and the numerator is the mean sum of squares for each main factor and interaction term. The F-statistic is found in the final column of this table and is used to answer the three alternative hypotheses. Typically, the p-values associated with each F-statistic are also presented in an ANOVA table. You will use the Decision Rule to determine the outcome for each of the three pairs of hypotheses.
If the p-value is smaller than α (level of significance), you will reject the null hypothesis.
When we conduct a two-way ANOVA, we always first test the hypothesis regarding the interaction effect. If the null hypothesis of no interaction is rejected, we do NOT interpret the results of the hypotheses involving the main effects. If the interaction term is NOT significant, then we examine the two main effects separately. Let’s look at an example.
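As a sketch of how this workflow looks in software other than Minitab or Excel, the Python code below fits a two-way model with an interaction term using statsmodels. The file name growth.csv and the column names growth, species, and fertilizer are hypothetical placeholders for data arranged in long format (one row per observation).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per observation with columns
# 'growth' (response), 'species' (factor A), and 'fertilizer' (factor B).
df = pd.read_csv("growth.csv")

# C() marks the factors as categorical; '*' includes both main effects and the interaction.
model = ols("growth ~ C(species) * C(fertilizer)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # rows for each main effect, the interaction, and the residual (error)
```

Check the interaction row of the resulting table first, exactly as described above.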
Example $2$:
An experiment was carried out to assess the effects of soy plant variety (factor A, with k = 3 levels) and planting density (factor B, with l = 4 levels – 5, 10, 15, and 20 thousand plants per hectare) on yield. Each of the 12 treatments (k * l) was randomly applied to m = 3 plots (klm = 36 total observations). Use a two-way ANOVA to assess the effects at a 5% level of significance.
Table 3. Observed data for three varieties of soy plants at four densities.
It is always important to look at the sample average yields for each treatment, each level of factor A, and each level of factor B.
Table 4. Summary table.
                          Density
Variety          5        10        15        20     Sample average yield for each level of factor A
1               9.17     12.40     12.90     10.80    11.32
2               8.90     12.67     14.50     12.77    12.21
3              16.30     18.10     19.87     18.20    18.12
Sample average
for factor B   11.46     14.39     15.77     13.92    13.88
For example, 11.32 is the average yield for variety #1 over all levels of planting densities. The value 11.46 is the average yield for plots planted with 5,000 plants across all varieties. The grand mean is 13.88. The ANOVA table is presented next.
Table 5. Two-way ANOVA table.
Source             DF      SS        MSS       F        P
variety             2    327.774   163.887   100.48   <0.001
density             3     86.908    28.969    17.76   <0.001
variety*density     6      8.068     1.345     0.82    0.562
error              24     39.147     1.631
total              35
You begin with the following null and alternative hypotheses:
• $H_0$: There is no interaction between factors
• $H_1$: There is a significant interaction between factors
The F-statistic:
$F_{AB} = \dfrac {MSAB}{MSE} = \dfrac {1.345}{1.631} = 0.82$
The p-value for the test for a significant interaction between factors is 0.562. This p-value is greater than 5% (α), therefore we fail to reject the null hypothesis. There is no evidence of a significant interaction between variety and density. So it is appropriate to carry out further tests concerning the presence of the main effects.
$H_0$: There is no effect of Factor A (variety) on the response variable
$H_1$: There is an effect of Factor A on the response variable
The F-statistic:
$F_{A} = \dfrac {MSA}{MSE} = \dfrac {163.887}{1.631} = 100.48$
The p-value (<0.001) is less than 0.05 so we will reject the null hypothesis. There is a significant difference in yield between the three varieties.
• $H_0$: There is no effect of Factor B (density) on the response variable
• $H_1$: There is an effect of Factor B on the response variable
The F-statistic:
$F_B = \dfrac {MSB}{MSE} = \dfrac {28.969}{1.631} = 17.76$
The p-value (<0.001) is less than 0.05 so we will reject the null hypothesis. There is a significant difference in yield between the four planting densities.
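The p-values in this table can be checked directly against the F distribution. The short Python sketch below uses the F statistics and degrees of freedom reported above; scipy's f.sf gives the upper-tail area, and it should return roughly 0.562 for the interaction and values well below 0.001 for the two main effects.

```python
from scipy.stats import f

# (F statistic, numerator df, denominator df) for each test in the ANOVA table
tests = {
    "variety x density": (0.82, 6, 24),
    "variety": (100.48, 2, 24),
    "density": (17.76, 3, 24),
}

for name, (f_stat, df_num, df_den) in tests.items():
    p_value = f.sf(f_stat, df_num, df_den)   # P(F > observed F)
    print(f"{name}: p = {p_value:.4f}")
```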
The next step is to examine the multiple comparisons for each main effect to determine the differences. We will proceed as we did with one-way ANOVA multiple comparisons by examining the Tukey’s Grouping for each main effect. For factor A, variety, the sample means, and grouping letters are presented to identify those varieties that are significantly different from other varieties. Varieties 1 and 2 are not significantly different from each other, both producing similar yields. Variety 3 produced significantly greater yields than both variety 1 and 2.
Grouping Information Using Tukey Method and 95.0% Confidence
variety    N     Mean     Grouping
3         12    18.117    A
2         12    12.208      B
1         12    11.317      B
Means that do not share a letter are significantly different.
Some of the densities are also significantly different. We will follow the same procedure to determine the differences.
Grouping Information Using Tukey Method and 95.0% Confidence
density    N     Mean     Grouping
15         9    15.756    A
10         9    14.389    A B
20         9    13.922      B
5          9    11.456        C
Means that do not share a letter are significantly different.
The Grouping Information shows us that a planting density of 15,000 plants/plot results in the greatest yield. However, there is no significant difference in yield between 10,000 and 15,000 plants/plot or between 10,000 and 20,000 plants/plot. The plots with 5,000 plants/plot result in the lowest yields and these yields are significantly lower than all other densities tested.
The main effects plots also illustrate the differences in yield across the three varieties and four densities.
But what happens if there is a significant interaction between the main effects? This next example will demonstrate how a significant interaction alters the interpretation of a 2-way ANOVA.
Example $1$:
A researcher was interested in the effects of four levels of fertilization (control, 100 lb., 150 lb., and 200 lb.) and four levels of irrigation (A, B, C, and D) on biomass yield. The sixteen possible treatment combinations were randomly assigned to 80 plots (5 plots for each treatment). The total biomass yields for each treatment are listed below.
                                                 Fertilizer
Irrigation   Control                         100 lb.                         150 lb.                         200 lb.
A            2700, 2801, 2720, 2390, 2890    3250, 3151, 3170, 3300, 3290    3300, 3235, 3025, 3165, 3120    3500, 3455, 3100, 3600, 3250
B            3101, 3035, 3205, 3007, 3100    2700, 2935, 2250, 2495, 2850    3050, 3110, 3033, 3195, 4250    3100, 3235, 3005, 3095, 3050
C            101, 97, 106, 142, 99           400, 302, 296, 315, 390         630, 624, 595, 675, 595         400, 325, 200, 375, 390
D            121, 174, 88, 100, 76           100, 125, 91, 222, 219          60, 28, 112, 89, 67             201, 223, 195, 120, 180
Table 6. Observed data for four irrigation levels and four fertilizer levels.
Factor A (irrigation level) has k = 4 levels and factor B (fertilizer) has l = 4 levels. There are m = 5 replicates and 80 total observations. This is a balanced design as the number of replicates is equal. The ANOVA table is presented next.
Two-way ANOVA table.
Source             DF        SS           MSS          F         P
fertilizer          3      1128272       376091       12.76    <0.001
irrigation          3    161776127     53925376     1830.16    <0.001
fert*irrigation     9      2088667       232074        7.88    <0.001
error              64      1885746        29465
total              79    166878812
We again begin with testing the interaction term. Remember, if the interaction term is significant, we ignore the main effects.
$H_0$: There is no interaction between factors
$H_1$: There is a significant interaction between factors
The F-statistic:
$F_{AB} = \dfrac {MSAB}{MSE} = \dfrac {232074}{29465} = 7.88 \nonumber$
The p-value for the test for a significant interaction between factors is <0.001. This p-value is less than 5%, therefore we reject the null hypothesis. There is evidence of a significant interaction between fertilizer and irrigation. Since the interaction term is significant, we do not investigate the presence of the main effects. We must now examine multiple comparisons for all 16 treatments (each combination of fertilizer and irrigation level) to determine the differences in yield, aided by the factor plot.
Grouping Information Using Tukey Method and 95.0% Confidence
fert    irrigation    N     Mean      Grouping
200     A             5    3381.00    A
150     B             5    3327.60    A
100     A             5    3232.20    A
150     A             5    3169.00    A
200     B             5    3097.00    A
C       B             5    3089.60    A
C       A             5    2700.20      B
100     B             5    2646.00      B
150     C             5     623.80        C
100     C             5     340.60        C D
200     C             5     338.00        C D
200     D             5     183.80          D
100     D             5     151.40          D
C       D             5     111.80          D
C       C             5     109.00          D
150     D             5      71.20          D
Means that do not share a letter are significantly different.
The factor plot allows you to visualize the differences between the 16 treatments. Factor plots can present the information two ways, each with a different factor on the x-axis. In the first plot, fertilizer level is on the x-axis. There is a clear distinction in average yields for the different treatments. Irrigation levels A and B appear to be producing greater yields across all levels of fertilizers compared to irrigation levels C and D. In the second plot, irrigation level is on the x-axis. All levels of fertilizer seem to result in greater yields for irrigation levels A and B compared to C and D.
The next step is to use the multiple comparison output to determine where there are SIGNIFICANT differences. Let’s focus on the first factor plot to do this.
The Grouping Information tells us that while irrigation levels A and B look similar across all levels of fertilizer, only treatments A-100, A-150, A-200, B-control, B-150, and B-200 are statistically similar (upper circle). Treatment B-100 and A-control also result in similar yields (middle circle) and both have significantly lower yields than the first group.
Irrigation levels C and D result in the lowest yields across the fertilizer levels. We again refer to the Grouping Information to identify the differences. There is no significant difference in yield for irrigation level D over any level of fertilizer. Yields for D are also similar to yields for irrigation level C at 100, 200, and control levels for fertilizer (lowest circle). Irrigation level C at 150 level fertilizer results in significantly higher yields than any yield from irrigation level D for any fertilizer level, however, this yield is still significantly smaller than the first group using irrigation levels A and B.
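A factor (interaction) plot like the ones discussed here can be drawn with statsmodels. The sketch below is only a template: it assumes the 80 observations sit in a long-format DataFrame read from a hypothetical file biomass.csv with columns fert, irrigation, and biomass.

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.factorplots import interaction_plot

# Hypothetical long-format data with columns 'fert', 'irrigation', and 'biomass'
df = pd.read_csv("biomass.csv")

fig, ax = plt.subplots(figsize=(7, 4))
interaction_plot(x=df["fert"], trace=df["irrigation"],
                 response=df["biomass"], ax=ax)   # one line (trace) per irrigation level
ax.set_xlabel("Fertilizer level")
ax.set_ylabel("Mean biomass yield")
plt.show()
```

Swapping x and trace produces the second version of the plot, with irrigation level on the x-axis.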
Interpreting Factor Plots
When the interaction term is significant the analysis focuses solely on the treatments, not the main effects. The factor plot and grouping information allow the researcher to identify similarities and differences, along with any trends or patterns. The following series of factor plots illustrate some true average responses in terms of interactions and main effects.
This first plot clearly shows a significant interaction between the factors. The change in response when the level of factor B changes depends on the level of factor A.
Figure 5.
The second plot shows no significant interaction. The change in response for the level of factor A is the same for each level of factor B.
The third plot shows no significant interaction and shows that the average response does not depend on the level of factor A.
This fourth plot again shows no significant interaction and shows that the average response does not depend on the level of factor B.
This final plot illustrates no interaction and neither factor has any effect on the response.
Summary
Two-way analysis of variance allows you to examine the effect of two factors simultaneously on the average response. The interaction of these two factors is always the starting point for two-way ANOVA. If the interaction term is significant, then you will ignore the main effects and focus solely on the unique treatments (combinations of the different levels of the two factors). If the interaction term is not significant, then it is appropriate to investigate the main effects of each factor on the response variable separately.
Software Solutions
Minitab
General Linear Model: yield vs. fert, irrigation
Factor        Type     Levels    Values
fert          fixed    4         100, 150, 200, C
irrigation    fixed    4         A, B, C, D
Analysis of Variance for Yield, using Adjusted SS for Tests
Source             DF     Seq SS        Adj SS        Adj MS        F          P
fert                3     1128272       1128272       376091        12.76    0.000
irrigation          3   161776127     161776127     53925376      1830.16    0.000
fert*irrigation     9     2088667       2088667       232074         7.88    0.000
Error              64     1885746       1885746        29465
Total              79   166878812
S = 171.653 R-Sq = 98.87% R-Sq(adj) = 98.61%
Unusual Observations for yield
Obs    yield      Fit        SE Fit    Residual    St Resid
  4    2390.00    2700.20    76.77     -310.20     -2.02 R
 28    2250.00    2646.00    76.77     -396.00     -2.58 R
 35    4250.00    3327.60    76.77      922.40      6.01 R
R denotes an observation with a large standardized residual.
Grouping Information Using Tukey Method and 95.0% Confidence
irrigation    N     Mean       Grouping
A            20    3120.60     A
B            20    3040.05     A
C            20     352.85       B
D            20     129.55         C
Means that do not share a letter are significantly different.
Grouping Information Using Tukey Method and 95.0% Confidence
fert    N     Mean       Grouping
150    20    1797.90     A
200    20    1749.95     A
100    20    1592.55       B
C      20    1502.65       B
Means that do not share a letter are significantly different.
Grouping Information Using Tukey Method and 95.0% Confidence
fert    irrigation    N     Mean      Grouping
200     A             5    3381.00    A
150     B             5    3327.60    A
100     A             5    3232.20    A
150     A             5    3169.00    A
200     B             5    3097.00    A
C       B             5    3089.60    A
C       A             5    2700.20      B
100     B             5    2646.00      B
150     C             5     623.80        C
100     C             5     340.60        C D
200     C             5     338.00        C D
200     D             5     183.80          D
100     D             5     151.40          D
C       D             5     111.80          D
C       C             5     109.00          D
150     D             5      71.20          D
Means that do not share a letter are significantly different.
Excel
Anova: Two-Factor With Replication
SUMMARY

            Bcontrol     B100        B150        B200        Total
AA
Count            5           5           5           5          20
Sum          13501       16161       15845       16905       62412
Average     2700.2      3232.2        3169        3381      3120.6
Variance   35700.2      4679.2     11167.5       40930    87716.57

AB
Count            5           5           5           5          20
Sum          15448       13230       16638       15485       60801
Average     3089.6        2646      3327.6        3097     3040.05
Variance    5839.8     76917.5    269901.3      7432.5    139929.4

AC
Count            5           5           5           5          20
Sum            545        1703        3119        1690        7057
Average        109       340.6       623.8         338      352.85
Variance     351.5      2525.8      1079.7      6782.5    37326.03

AD
Count            5           5           5           5          20
Sum            559         757         356         919        2591
Average      111.8       151.4        71.2       183.8      129.55
Variance    1485.2      4135.3       997.7      1510.7    3590.366

Total
Count           20          20          20          20
Sum          30053       31851       35958       34999
Average    1502.65     1592.55      1797.9     1749.95
Variance   2069464     1977134     2317478     2359637

ANOVA

Source of Variation       SS         df        MS           F           p-value      F crit
Sample                 1.62E+08       3     53925376     1830.164      5.98E-62     2.748191
Columns                 1128272       3     376090.7     12.76408      1.23E-06     2.748191
Interaction             2088667       9     232074.2     7.876325      1.02E-07     2.029792
Within                  1885746      64     29464.78
Total                  1.67E+08      79
• 7.1: Correlation
In many studies, we measure more than one variable for each individual. We collect pairs of data and instead of examining each variable separately (univariate data), we want to find ways to describe bivariate data, in which two variables are measured on each subject in our sample. Given such data, we begin by determining if there is a relationship between these two variables. As the values of one variable change, do we see corresponding changes in the other variable?
• 7.2: Simple Linear Regression
Once we have identified two variables that are correlated, we would like to model this relationship. We want to use one variable as a predictor or explanatory variable to explain the other variable, the response or dependent variable. In order to do this, we need a good relationship between our two variables. The model can then be used to predict changes in our response variable. A strong relationship between the predictor variable and the response variable leads to a good model.
• 7.3: Population Model
We use the means and standard deviations of our sample data to compute the slope (b1) and y-intercept (b0) in order to create an ordinary least-squares regression line. But we want to describe the relationship between y and x in the population, not just within our sample data. We want to construct a population model. Now we will think of the least-squares line computed from a sample as an estimate of the true regression line for the population.
• 7.4: Software Solution
In many studies, we measure more than one variable for each individual. For example, we measure precipitation and plant growth, or number of young with nesting habitat, or soil erosion and volume of water. We collect pairs of data and instead of examining each variable separately (univariate data), we want to find ways to describe bivariate data, in which two variables are measured on each subject in our sample. Given such data, we begin by determining if there is a relationship between these two variables. As the values of one variable change, do we see corresponding changes in the other variable?
We can describe the relationship between these two variables graphically and numerically. We begin by considering the concept of correlation.
Definition: Correlation
Correlation is defined as the statistical association between two variables.
A correlation exists between two variables when one of them is related to the other in some way. A scatterplot is the best place to start. A scatterplot (or scatter diagram) is a graph of the paired (x, y) sample data with a horizontal x-axis and a vertical y-axis. Each individual (x, y) pair is plotted as a single point.
Figure $1$. Scatterplot of chest girth versus length.
In this example, we plot bear chest girth (y) against bear length (x). When examining a scatterplot, we should study the overall pattern of the plotted points. In this example, we see that the value for chest girth does tend to increase as the value of length increases. We can see an upward slope and a straight-line pattern in the plotted data points.
A scatterplot can identify several different types of relationships between two variables.
• A relationship has no correlation when the points on a scatterplot do not show any pattern.
• A relationship is non-linear when the points on a scatterplot follow a pattern but not a straight line.
• A relationship is linear when the points on a scatterplot follow a somewhat straight line pattern. This is the relationship that we will examine.
Linear relationships can be either positive or negative. Positive relationships have points that incline upwards to the right. As x values increase, y values increase. As x values decrease, y values decrease. For example, when studying plants, height typically increases as diameter increases.
Figure $2$ . Scatterplot of height versus diameter.
Negative relationships have points that decline downward to the right. As x values increase, y values decrease. As x values decrease, y values increase. For example, as wind speed increases, wind chill temperature decreases.
Figure $3$ . Scatterplot of temperature versus wind speed.
Non-linear relationships have an apparent pattern, just not linear. For example, as age increases height increases up to a point then levels off after reaching a maximum height.
Figure $4$ . Scatterplot of height versus age.
When two variables have no relationship, there is no straight-line relationship or non-linear relationship. When one variable changes, it does not influence the other variable.
Figure $5$ . Scatterplot of growth versus area.
Linear Correlation Coefficient
Because visual examinations are largely subjective, we need a more precise and objective measure to define the correlation between the two variables. To quantify the strength and direction of the relationship between two variables, we use the linear correlation coefficient:
$r = \dfrac{1}{n-1}\sum \left(\dfrac{x_i-\bar{x}}{s_x}\right) \left(\dfrac{y_i-\bar{y}}{s_y}\right)$
where $\bar x$ and $s_x$ are the sample mean and sample standard deviation of the x’s, and $\bar y$ and $s_y$ are the mean and standard deviation of the y’s. The sample size is n.
An alternate computation of the correlation coefficient is:
$r = \dfrac {S_{xy}}{\sqrt {S_{xx}S_{yy}}}$
where
$S_{xx} = \sum x^2 - \dfrac {(\sum x)^2}{n}$
$S_{xy} = \sum xy - \dfrac {(\sum x)(\sum y )}{n}$
$S_{yy} = \sum y^2 - \dfrac {(\sum y)^2}{n}$
The linear correlation coefficient is also referred to as Pearson’s product moment correlation coefficient in honor of Karl Pearson, who originally developed it. This statistic numerically describes how strong the straight-line or linear relationship is between the two variables and the direction, positive or negative.
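As a quick illustration (not part of the original text), the correlation coefficient can be computed either directly from the definition or with a library call; the paired measurements below are made-up placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements, e.g. length (x) and chest girth (y)
x = np.array([53.0, 58.0, 61.0, 67.5, 70.0, 72.0])
y = np.array([26.0, 31.0, 35.5, 45.0, 50.0, 54.0])

# Direct computation from the definition (sample standard deviations, ddof=1)
r_manual = np.sum((x - x.mean()) / x.std(ddof=1) * (y - y.mean()) / y.std(ddof=1)) / (len(x) - 1)

# Library computation; also returns a p-value for H0: rho = 0
r_scipy, p_value = pearsonr(x, y)
print(round(r_manual, 4), round(r_scipy, 4))   # the two values agree
```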
The properties of “r”:
• It is always between -1 and +1.
• It is a unitless measure so “r” would be the same value whether you measured the two variables in pounds and inches or in grams and centimeters.
• Positive values of “r” are associated with positive relationships.
• Negative values of “r” are associated with negative relationships.
Examples of Positive Correlation
Figure $6$ . Examples of positive correlation.
Examples of Negative Correlation
Figure $7$ . Examples of negative correlation.
Note
Correlation is not causation!!! Just because two variables are correlated does not mean that one variable causes another variable to change.
Examine these next two scatterplots. Both of these data sets have an r = 0.01, but they are very different. Plot 1 shows little linear relationship between x and y variables. Plot 2 shows a strong non-linear relationship. Pearson’s linear correlation coefficient only measures the strength and direction of a linear relationship. Ignoring the scatterplot could result in a serious mistake when describing the relationship between two variables.
Figure $8$ . Comparison of scatterplots.
When you investigate the relationship between two variables, always begin with a scatterplot. This graph allows you to look for patterns (both linear and non-linear). The next step is to quantitatively describe the strength and direction of the linear relationship using “r”. Once you have established that a linear relationship exists, you can take the next step in model building.
Once we have identified two variables that are correlated, we would like to model this relationship. We want to use one variable as a predictor or explanatory variable to explain the other variable, the response or dependent variable. In order to do this, we need a good relationship between our two variables. The model can then be used to predict changes in our response variable. A strong relationship between the predictor variable and the response variable leads to a good model.
Figure $1$. Scatterplot with regression model.
Definition: simple linear regression
A simple linear regression model is a mathematical equation that allows us to predict a response for a given predictor value.
Our model will take the form of $\hat y = b_0+b_1x$ where b0 is the y-intercept, b1 is the slope, x is the predictor variable, and $\hat y$ is an estimate of the mean value of the response variable for any value of the predictor variable.
The y-intercept is the predicted value for the response (y) when x = 0. The slope describes the change in y for each one unit change in x. Let’s look at this example to clarify the interpretation of the slope and intercept.
Example $1$:
A hydrologist creates a model to predict the volume flow for a stream at a bridge crossing with a predictor variable of daily rainfall in inches.
Answer
$\hat y = 1.6 +29 x \nonumber$
The y-intercept of 1.6 can be interpreted this way: On a day with no rainfall, there will be 1.6 gal. of water/min. flowing in the stream at that bridge crossing. The slope tells us that if it rained one inch that day the flow in the stream would increase by an additional 29 gal./min. If it rained 2 inches that day, the flow would increase by an additional 58 gal./min.
Example $2$:
What would be the average stream flow if it rained 0.45 inches that day?
Answer
$\hat y= 1.6 + 29x = 1.6 + 29(0.45) = 14.65 gal./min \nonumber$
The Least-Squares Regression Line (shortcut equations)
The equation is given by
$\hat y = b_0+b_1x$
where $b_1 = r\left ( \dfrac {s_y}{s_x} \right )$ is the slope and $b_0=\bar y -b_1\bar x$ is the y-intercept of the regression line.
An alternate computational equation for slope is:
$b_1 = \dfrac {\sum xy - \dfrac {(\sum x)(\sum y)}{n}} {\sum x^2 - \dfrac {(\sum x)^2}{n}} = \dfrac {S_{xy}}{S_{xx}}$
This simple model is the line of best fit for our sample data. The regression line does not go through every point; instead it balances the difference between all data points and the straight-line model. The difference between the observed data value and the predicted value (the value on the straight line) is the error or residual. The criterion to determine the line that best describes the relation between two variables is based on the residuals.
$Residual = Observed – Predicted$
For example, if you wanted to predict the chest girth of a black bear given its weight, you could use the following model.
Chest girth = 13.2 +0.43 weight
The predicted chest girth of a bear that weighed 120 lb. is 64.8 in.
Chest girth = 13.2 + 0.43(120) = 64.8 in.
But a measured bear chest girth (observed value) for a bear that weighed 120 lb. was actually 62.1 in.
The residual would be 62.1 – 64.8 = -2.7 in.
A negative residual indicates that the model is over-predicting. A positive residual indicates that the model is under-predicting. In this instance, the model over-predicted the chest girth of a bear that actually weighed 120 lb.
Figure $2$. Scatterplot with regression model illustrating a residual value.
This random error (residual) takes into account all unpredictable and unknown factors that are not included in the model. An ordinary least squares regression line minimizes the sum of the squared errors between the observed and predicted values to create a best fitting line. The differences between the observed and predicted values are squared to deal with the positive and negative differences.
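The bear example translates directly into a few lines of code. This Python sketch simply redoes the arithmetic above with the model reported in the text (chest girth = 13.2 + 0.43 * weight).

```python
# Model from the text: chest girth = 13.2 + 0.43 * weight
b0, b1 = 13.2, 0.43

weight = 120                        # lb
predicted = b0 + b1 * weight        # 64.8 in.
observed = 62.1                     # measured chest girth, in.

residual = observed - predicted     # observed - predicted
print(round(predicted, 1), round(residual, 1))   # 64.8 and -2.7: the model over-predicts here
```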
Coefficient of Determination
After we fit our regression line (compute b0 and b1), we usually wish to know how well the model fits our data. To determine this, we need to think back to the idea of analysis of variance. In ANOVA, we partitioned the variation using sums of squares so we could identify a treatment effect opposed to random variation that occurred in our data. The idea is the same for regression. We want to partition the total variability into two parts: the variation due to the regression and the variation due to random error. And we are again going to compute sums of squares to help us do this.
Suppose the total variability in the sample measurements about the sample mean is denoted by $\sum (y_i - \bar y)^2$, called the sums of squares of total variability about the mean (SST). The squared difference between the predicted value $\hat y$ and the sample mean is denoted by $\sum (\hat {y_i} - \bar y)^2$, called the sums of squares due to regression (SSR). The SSR represents the variability explained by the regression line. Finally, the variability which cannot be explained by the regression line is called the sums of squares due to error (SSE) and is denoted by $\sum (y_i - \hat y)^2$. SSE is actually the sum of the squared residuals.
$SST = SSR + SSE$
$\sum (y_i - \bar y)^2 = \sum (\hat {y}_i - \bar y)^2 + \sum (y_i - \hat {y}_i)^2$
Figure $3$. An illustration of the relationship between the mean of the y’s and the predicted and observed value of a specific y.
The sums of squares and mean sums of squares (just like ANOVA) are typically presented in the regression analysis of variance table. The ratio of the mean sums of squares for the regression (MSR) and mean sums of squares for error (MSE) form an F-test statistic used to test the regression model.
The relationship between these sums of square is defined as
$Total \ Variation = Explained \ Variation + Unexplained \ Variation$
The larger the explained variation, the better the model is at prediction. The larger the unexplained variation, the worse the model is at prediction. A quantitative measure of the explanatory power of a model is $R^2$, the Coefficient of Determination:
$R^2 = \dfrac {Explained \ Variation}{Total \ Variation}$
The Coefficient of Determination measures the percent variation in the response variable (y) that is explained by the model.
• Values range from 0 to 1.
• An $R^2$ close to zero indicates a model with very little explanatory power.
• An $R^2$ close to one indicates a model with more explanatory power.
The Coefficient of Determination and the linear correlation coefficient are related mathematically.
$R^2 = r^2$
However, they have two very different meanings: r is a measure of the strength and direction of a linear relationship between two variables; R2 describes the percent variation in “y” that is explained by the model.
Residual and Normal Probability Plots
Even though you have determined, using a scatterplot, correlation coefficient and R2, that x is useful in predicting the value of y, the results of a regression analysis are valid only when the data satisfy the necessary regression assumptions.
1. The response variable (y) is a random variable while the predictor variable (x) is assumed non-random or fixed and measured without error.
2. The relationship between y and x must be linear, given by the model $\hat y = b_0 + b_1x$.
3. The values of the random error term ε are independent, have a mean of 0 and a common variance $\sigma^2$, independent of x, and are normally distributed.
We can use residual plots to check for a constant variance, as well as to make sure that the linear model is in fact adequate. A residual plot is a scatterplot of the residual (= observed – predicted values) versus the predicted or fitted (as used in the residual plot) value. The center horizontal axis is set at zero. One property of the residuals is that they sum to zero and have a mean of zero. A residual plot should be free of any patterns and the residuals should appear as a random scatter of points about zero.
A residual plot with no appearance of any patterns indicates that the model assumptions are satisfied for these data.
Figure $4$. A residual plot.
A residual plot that has a “fan shape” indicates a heterogeneous variance (non-constant variance). The residuals tend to fan out or fan in as error variance increases or decreases.
Figure $5$. A residual plot that indicates a non-constant variance.
A residual plot that tends to “swoop” indicates that a linear model may not be appropriate. The model may need higher-order terms of x, or a non-linear model may be needed to better describe the relationship between y and x. Transformations on x or y may also be considered.
Figure $6$. A residual plot that indicates the need for a higher order model.
A normal probability plot allows us to check that the errors are normally distributed. It plots the residuals against the expected value of the residual as if it had come from a normal distribution. Recall that when the residuals are normally distributed, they will follow a straight-line pattern, sloping upward.
This plot is not unusual and does not indicate any non-normality with the residuals.
Figure $7$. A normal probability plot.
This next plot clearly illustrates a non-normal distribution of the residuals.
Figure $8$. A normal probability plot, which illustrates non-normal distribution.
The most serious violations of normality usually appear in the tails of the distribution because this is where the normal distribution differs most from other types of distributions with a similar mean and spread. Curvature in either or both ends of a normal probability plot is indicative of nonnormality.
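For readers who want to produce these diagnostic plots themselves, here is a hedged Python sketch using matplotlib and scipy. The fitted values and residuals are simulated placeholders; in practice they come from your own fitted regression.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Simulated placeholders for fitted values and residuals
rng = np.random.default_rng(1)
fitted = np.linspace(20, 80, 50)
residuals = rng.normal(0, 2, size=50)     # well-behaved errors, for illustration only

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Residual plot: residuals versus fitted values, centered on zero
ax1.scatter(fitted, residuals)
ax1.axhline(0, linestyle="--")
ax1.set_xlabel("Fitted value")
ax1.set_ylabel("Residual")

# Normal probability plot of the residuals
stats.probplot(residuals, dist="norm", plot=ax2)

plt.tight_layout()
plt.show()
```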
Our regression model is based on a sample of n bivariate observations drawn from a larger population of measurements.
$\hat y = b_0 +b_1x$
We use the means and standard deviations of our sample data to compute the slope (b1) and y-intercept (b0) in order to create an ordinary least-squares regression line. But we want to describe the relationship between y and x in the population, not just within our sample data. We want to construct a population model. Now we will think of the least-squares line computed from a sample as an estimate of the true regression line for the population.
Definition: The Population Model
$\mu_y = \beta_0 + \beta_1x$, where $\mu_y$ is the population mean response, $\beta_0$ is the y-intercept, and $\beta_1$ is the slope for the population model.
In our population, there could be many different responses for a value of x. In simple linear regression, the model assumes that for each value of x the observed values of the response variable y are normally distributed with a mean that depends on x. We use μy to represent these means. We also assume that these means all lie on a straight line when plotted against x (a line of means).
Figure $1$. The statistical model for linear regression; the mean response is a straight-line function of the predictor variable.
The sample data then fit the statistical model:
Data = fit + residual
$y_i = (\beta_0 + \beta_1x_i)+\epsilon_i$
where the errors (εi) are independent and normally distributed N (0, σ). Linear regression also assumes equal variance of y (σ is the same for all values of x). We use ε (Greek epsilon) to stand for the residual part of the statistical model. A response y is the sum of its mean and chance deviation ε from the mean. The deviations ε represent the “noise” in the data. In other words, the noise is the variation in y due to other causes that prevent the observed (x, y) from forming a perfectly straight line.
The sample data used for regression are the observed values of y and x. The response y to a given x is a random variable, and the regression model describes the mean and standard deviation of this random variable y. The intercept β0, slope β1, and standard deviation σ of y are the unknown parameters of the regression model and must be estimated from the sample data.
• The value of $\hat y$ from the least squares regression line is really a prediction of the mean value of y ($\mu_y$) for a given value of x.
• The least squares regression line ($\hat y = b_0+b_1x$) obtained from sample data is the best estimate of the true population regression line
($\mu_y = \beta_0 + \beta_1x$).
$\hat y$ is an unbiased estimate for the mean response $\mu_y$
b0 is an unbiased estimate for the intercept β0
b1 is an unbiased estimate for the slope β1
Parameter Estimation
Once we have estimates of β0 and β1 (from our sample data b0 and b1), the linear relationship determines the estimates of μy for all values of x in our population, not just for the observed values of x. We now want to use the least-squares line as a basis for inference about a population from which our sample was drawn.
Model assumptions tell us that b0 and b1 are normally distributed with means β0 and β1 with standard deviations that can be estimated from the data. Procedures for inference about the population regression line will be similar to those described in the previous chapter for means. As always, it is important to examine the data for outliers and influential observations.
In order to do this, we need to estimate σ, the regression standard error. This is the standard deviation of the model errors. It measures the variation of y about the population regression line. We will use the residuals to compute this value. Remember, the predicted value of y ($\hat y$) for a specific x is the point on the regression line. It is the unbiased estimate of the mean response (μy) for that x. The residual is:
residual = observed – predicted
$e_i = y_i - \hat {y}_i = y_i -(b_0+b_1x_i)$
The residual $e_i$ corresponds to the model deviation $\epsilon_i$, where $\sum e_i = 0$ with a mean of 0. The regression standard error s is an unbiased estimate of σ.
$s=\sqrt {\dfrac {\sum residual^2}{n-2}} = \sqrt {\dfrac {\sum (y_i-\hat {y_i})^2}{n-2}}$
The quantity s is the estimate of the regression standard error (σ) and $s^2$ is often called the mean square error (MSE). A small value of s suggests that observed values of y fall close to the true regression line and the line $\hat y = b_0 +b_1x$should provide accurate estimates and predictions.
Confidence Intervals and Significance Tests for Model Parameters
In an earlier chapter, we constructed confidence intervals and did significance tests for the population parameter μ (the population mean). We relied on sample statistics such as the mean and standard deviation for point estimates, margins of errors, and test statistics. Inference for the population parameters β0 (slope) and β1 (y-intercept) is very similar.
Inference for the slope and intercept are based on the normal distribution using the estimates b0 and b1. The standard deviations of these estimates are multiples of σ, the population regression standard error. Remember, we estimate σ with s (the variability of the data about the regression line). Because we use s, we rely on the student t-distribution with (n – 2) degrees of freedom.
$\sigma_{\hat{\beta_0}} = \sigma \sqrt { \dfrac {1}{n} + \dfrac {\bar x ^2}{\sum (x_i - \bar x)^2}}$
The standard error for the estimate of $\beta_0$ (the y-intercept)
$\sigma_{\hat{\beta_1}} = \dfrac {\sigma}{\sqrt {\sum (x_i - \bar x)^2}}$
The standard error for the estimate of $\beta_1$ (the slope)
We can construct confidence intervals for the regression slope and intercept in much the same way as we did when estimating the population mean.
A confidence interval for $\beta_0 : b_0 \pm t_{\alpha/2} SE_{b_0}$
A confidence interval for $\beta_1 : b_1 \pm t_{\alpha/2} SE_{b_1}$
where $SE_{b_0}$ and $SE_{b_1}$ are the standard errors for the y-intercept and slope, respectively.
We can also test the hypothesis $H_0: \beta_1 = 0$. When we substitute $\beta_1 = 0$ in the model, the x-term drops out and we are left with $\mu_y = \beta_0$. This tells us that the mean of y does NOT vary with x. In other words, there is no straight line relationship between x and y and the regression of y on x is of no value for predicting y.
Hypothesis test for $\beta_1$
$H_0: \beta_1 =0$
$H_1: \beta_1 \ne 0$
The test statistic is $t = b_1 / SE_{b_1}$
We can also use the F-statistic (MSR/MSE) in the regression ANOVA table*
*Recall that $t^2 = F$
So let’s pull all of this together in an example.
Example $1$:
The index of biotic integrity (IBI) is a measure of water quality in streams. As a manager for the natural resources in this region, you must monitor, track, and predict changes in water quality. You want to create a simple linear regression model that will allow you to predict changes in IBI in forested area. The following table conveys sample data from a coastal forest region and gives the data for IBI and forested area in square kilometers. Let forest area be the predictor variable (x) and IBI be the response variable (y).
$\begin{array}{ccccccccc} \text { IBI } & \text { Forest Area } & \text { IBI } & \text { Forest Area } & \text { IBI } & \text { Forest Area } & \text { IBI } & \text { Forest Area } & \text { IBI } \\ 47 & 38 & 41 & 22 & 61 & 43 & 71 & 79 & 84 \\ 72 & 9 & 33 & 25 & 62 & 47 & 33 & 79 & 83 \\ 21 & 10 & 23 & 31 & 18 & 49 & 59 & 80 & 82 \\ 19 & 10 & 32 & 32 & 44 & 49 & 81 & 86 & 82 \\ 72 & 52 & 80 & 33 & 30 & 52 & 71 & 89 & 86 \\ 56 & 14 & 31 & 33 & 65 & 52 & 75 & 90 & 79 \\ 49 & 66 & 78 & 33 & 78 & 59 & 64 & 95 & 67 \\ 89 & 17 & 21 & 39 & 71 & 63 & 41 & 95 & 56 \\ 43 & 18 & 43 & 41 & 60 & 68 & 82 & 100 & 85 \\ 66 & 21 & 45 & 43 & 58 & 75 & 60 & 100 & 91 \end{array} \nonumber$
Table $1$. Observed data of biotic integrity and forest area.
Solution
We begin by computing descriptive statistics and creating a scatterplot of IBI against Forest Area.
$\bar x$ = 47.42; $s_x$ = 27.37; $\bar y$ = 58.80; $s_y$ = 21.38; r = 0.735
Figure $2$. Scatterplot of IBI vs. Forest Area.
There appears to be a positive linear relationship between the two variables. The linear correlation coefficient is r = 0.735. This indicates a strong, positive, linear relationship. In other words, forest area is a good predictor of IBI. Now let’s create a simple linear regression model using forest area to predict IBI (response).
First, we will compute b0 and b1 using the shortcut equations.
$b_1 = r (\frac {s_y}{s_x}) = 0.735(\frac {21.38}{27.37})=0.574 \nonumber$
$b_0 =\bar y -b_1 \bar x =58.80-0.574 \times 47.42=31.581 \nonumber$
The regression equation is
$\hat y =31.58 + 0.574x \nonumber$
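The shortcut computation can be checked with a few lines of code. Below is a minimal sketch in Python (not part of the original analysis), using only the summary statistics reported above:

```python
# Recompute the least-squares coefficients from the summary statistics
# reported above (r, s_x, s_y, x-bar, y-bar).
r, s_x, s_y = 0.735, 27.37, 21.38
x_bar, y_bar = 47.42, 58.80

b1 = r * (s_y / s_x)      # slope
b0 = y_bar - b1 * x_bar   # intercept

print(f"b1 = {b1:.3f}, b0 = {b0:.2f}")
# b1 is approximately 0.574 and b0 approximately 31.6; small differences from the
# hand calculation come from rounding the slope before computing the intercept.
```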
Now let’s use Minitab to compute the regression model. The output appears below.
Regression Analysis: IBI versus Forest Area
The regression equation is IBI = 31.6 + 0.574 Forest Area
Predictor        Coef    SE Coef      T      P
Constant       31.583      4.177   7.56  0.000
Forest Area   0.57396    0.07648   7.50  0.000

S = 14.6505   R-Sq = 54.0%   R-Sq(adj) = 53.0%

Analysis of Variance

Source          DF     SS     MS      F      P
Regression       1  12089  12089  56.32  0.000
Residual Error  48  10303    215
Total           49  22392
The estimates for β0 and β1 are 31.6 and 0.574, respectively. We can interpret the y-intercept to mean that when there is zero forested area, the IBI will equal 31.6. For each additional square kilometer of forested area added, the IBI will increase by 0.574 units.
The coefficient of determination, R2, is 54.0%. This means that 54% of the variation in IBI is explained by this model. Approximately 46% of the variation in IBI is due to other factors or random variation. We would like R2 to be as high as possible (maximum value of 100%).
The residual and normal probability plots do not indicate any problems.
Figure $3$. A residual and normal probability plot.
The estimate of σ, the regression standard error, is s = 14.6505. This is a measure of the variation of the observed values about the population regression line. We would like this value to be as small as possible. The MSE is equal to 215. Remember, the $\sqrt {MSE}=s$. The standard errors for the coefficients are 4.177 for the y-intercept and 0.07648 for the slope.
We know that the values b0 = 31.6 and b1 = 0.574 are sample estimates of the true, but unknown, population parameters β0 and β1. We can construct 95% confidence intervals to better estimate these parameters. The critical value (tα/2) comes from the student t-distribution with (n – 2) degrees of freedom. Our sample size is 50 so we would have 48 degrees of freedom. The closest table value is 2.009.
95% confidence intervals for β0 and β1
$b_0 \pm t_{\alpha/2} SE_{b_0} = 31.6 \pm 2.009(4.177) = (23.21, 39.99) \nonumber$
$b_1 \pm t_{\alpha/2} SE_{b_1} = 0.574 \pm 2.009(0.07648) = (0.4204, 0.7277) \nonumber$
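These intervals are easy to reproduce in code. A minimal sketch in Python with SciPy, using the estimates and standard errors from the Minitab output above:

```python
from scipy import stats

n = 50
t_crit = stats.t.ppf(0.975, df=n - 2)   # exact critical value for 48 df (about 2.011)

for name, est, se in [("b0", 31.583, 4.177), ("b1", 0.57396, 0.07648)]:
    lower, upper = est - t_crit * se, est + t_crit * se
    print(f"95% CI for {name}: ({lower:.4f}, {upper:.4f})")
# The results are close to the intervals shown above; small differences come from
# using the rounded table value 2.009 and rounded coefficients in the hand calculation.
```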
The next step is to test that the slope is significantly different from zero using a 5% level of significance.
H0: β1 =0
H1: β1 ≠ 0
$t = \frac {b_1} {SE_{b_1}} = \frac {0.574}{0.07648} = 7.50523 \nonumber$
We have 48 degrees of freedom and the closest critical value from the student t-distribution is 2.009. The test statistic is greater than the critical value, so we will reject the null hypothesis. The slope is significantly different from zero. We have found a statistically significant relationship between Forest Area and IBI.
The Minitab output also reports the test statistic and p-value for this test.
The regression equation is IBI = 31.6 + 0.574 Forest Area
Predictor        Coef    SE Coef      T      P
Constant       31.583      4.177   7.56  0.000
Forest Area   0.57396    0.07648   7.50  0.000

S = 14.6505   R-Sq = 54.0%   R-Sq(adj) = 53.0%

Analysis of Variance

Source          DF     SS     MS      F      P
Regression       1  12089  12089  56.32  0.000
Residual Error  48  10303    215
Total           49  22392
The t test statistic is 7.50 with an associated p-value of 0.000. The p-value is less than the level of significance (5%) so we will reject the null hypothesis. The slope is significantly different from zero. The same result can be found from the F-test statistic of 56.32 (7.505² ≈ 56.32). The p-value is the same (0.000), as is the conclusion.
Confidence Interval for $\mu_y$
Now that we have created a regression model built on a significant relationship between the predictor variable and the response variable, we are ready to use the model for
• estimating the average value of y for a given value of x
• predicting a particular value of y for a given value of x
Let’s examine the first option. The sample data of n pairs that was drawn from a population was used to compute the regression coefficients b0 and b1 for our model, and gives us the average value of y for a specific value of x through our population model $\mu_y = \beta_0 + \beta_1x$.
For every specific value of x, there is an average y (μy), which falls on the straight line equation (a line of means). Remember that there can be many different observed values of y for a particular x, and these values are assumed to have a normal distribution with a mean equal to $\beta_0 + \beta_1x$ and a variance of σ2. Since the computed values of b0 and b1 vary from sample to sample, each new sample may produce a slightly different regression equation. Each new model can be used to estimate a value of y for a value of x. How far will our estimator $\hat y =b_0+b_1x$ be from the true population mean for that value of x? This depends, as always, on the variability in our estimator, measured by the standard error.
It can be shown that $\hat y$, the estimated value of y when x = x0 (some specified value of x), is an unbiased estimator of the population mean μy, and that it is normally distributed with a standard error of
$SE_{\hat \mu} = s\sqrt {\frac {1}{n} + \frac {(x_0-\bar x)^2}{\sum (x_i - \bar x)^2}}$
We can construct a confidence interval to better estimate this parameter (μy) following the same procedure illustrated previously in this chapter.
$\hat {\mu_y} \pm t_{\alpha/2}SE_{\hat \mu}$
where the critical value tα/2 comes from the student t-table with (n – 2) degrees of freedom.
Statistical software, such as Minitab, will compute the confidence intervals for you. Using the data from the previous example, we will use Minitab to compute the 95% confidence interval for the mean response for an average forested area of 32 km².
Predicted Values for New Observations
New Obs      Fit   SE Fit              95% CI
      1  49.9496  2.38400  (45.1562, 54.7429)
If you sampled many areas that averaged 32 km² of forested area, your estimate of the average IBI would be from 45.1562 to 54.7429.
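The fitted value and its standard error can be reproduced from quantities already reported (b0, b1, s, n, x̄, and sx). A minimal sketch in Python, assuming those summary values:

```python
import math
from scipy import stats

s, n = 14.6505, 50            # regression standard error and sample size
x_bar, s_x = 47.42, 27.37     # mean and standard deviation of forest area
Sxx = (n - 1) * s_x**2        # sum of squared deviations of x about its mean

x0 = 32
fit = 31.583 + 0.57396 * x0                           # predicted mean IBI at x0
se_fit = s * math.sqrt(1/n + (x0 - x_bar)**2 / Sxx)   # standard error of the mean response
t_crit = stats.t.ppf(0.975, df=n - 2)

print(fit, se_fit)                                    # approximately 49.95 and 2.38
print(fit - t_crit*se_fit, fit + t_crit*se_fit)       # approximately (45.16, 54.74)
```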
You can repeat this process many times for several different values of x and plot the confidence intervals for the mean response.
  x          95% CI
 20  (37.13, 48.88)
 40  (50.22, 58.86)
 60  (61.43, 70.61)
 80  (70.98, 84.02)
100  (79.88, 98.07)
Figure $4$. 95% confidence intervals for the mean response.
Notice how the width of the 95% confidence interval varies for the different values of x. Since the confidence interval width is narrower for the central values of x, it follows that μy is estimated more precisely for values of x in this area. As you move towards the extreme limits of the data, the width of the intervals increases, indicating that it would be unwise to extrapolate beyond the limits of the data used to create this model.
Prediction Intervals
What if you want to predict a particular value of y when $x = x_0$? Or, perhaps you want to predict the next measurement for a given value of x? This problem differs from constructing a confidence interval for $\mu_y$. Instead of constructing a confidence interval to estimate a population parameter, we need to construct a prediction interval. Choosing to predict a particular value of y incurs some additional error in the prediction because of the deviation of y from the line of means. Examine the figure below. You can see that the error in prediction has two components:
1. The error in using the fitted line to estimate the line of means
2. The error caused by the deviation of y from the line of means, measured by $\sigma^2$
Figure $5$. Illustrating the two components in the error of prediction.
The variance of the difference between y and $\hat y$ is the sum of these two variances and forms the basis for the standard error of $(y-\hat y)$ used for prediction. The resulting form of a prediction interval is as follows:
$\hat y \pm t_{\alpha/2}s\sqrt {1+\frac {1}{n} + \frac {(x_0 - \bar x)^2}{\sum (x_i - \bar x)^2}}$
where x0 is the given value for the predictor variable, n is the number of observations, and $t_{\alpha/2}$ is the critical value with (n – 2) degrees of freedom.
Software, such as Minitab, can compute the prediction intervals. Using the data from the previous example, we will use Minitab to compute the 95% prediction interval for the IBI of a specific forested area of 32 km².
Predicted Values for New Observations
New Obs      Fit   SE Fit              95% PI
      1  49.9496  2.38400  (20.1053, 79.7939)
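The prediction interval uses the same ingredients as the confidence interval; the only change is the extra “1 +” under the square root. A self-contained sketch in Python, again assuming the summary values reported earlier:

```python
import math
from scipy import stats

s, n, x_bar, s_x = 14.6505, 50, 47.42, 27.37
Sxx = (n - 1) * s_x**2
x0 = 32
fit = 31.583 + 0.57396 * x0

se_pred = s * math.sqrt(1 + 1/n + (x0 - x_bar)**2 / Sxx)  # note the extra "1 +" for an individual y
t_crit = stats.t.ppf(0.975, df=n - 2)
print(fit - t_crit * se_pred, fit + t_crit * se_pred)     # approximately (20.1, 79.8), matching the 95% PI above
```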
You can repeat this process many times for several different values of x and plot the prediction intervals for individual responses.
  x           95% PI
 20   (13.01, 73.11)
 40   (24.77, 84.31)
 60   (36.21, 95.83)
 80  (47.33, 107.67)
100  (58.15, 119.81)
Notice that the prediction interval bands are wider than the corresponding confidence interval bands, reflecting the fact that we are predicting the value of a random variable rather than estimating a population parameter. We would expect predictions for an individual value to be more variable than estimates of an average value.
Figure $6$. A comparison of confidence and prediction intervals.
Transformations to Linearize Data Relationships
In many situations, the relationship between x and y is non-linear. In order to simplify the underlying model, we can transform or convert either x or y or both to result in a more linear relationship. There are many common transformations such as logarithmic and reciprocal. Including higher order terms on x may also help to linearize the relationship between x and y. Shown below are some common shapes of scatterplots and possible choices for transformations. However, the choice of transformation is frequently more a matter of trial and error than set rules.
$\begin{array}{lll} \mathbf{x} & \text { or } & \mathbf{y} \\ \mathrm{x}^2 & & \mathrm{y}^2 \\ \mathrm{x}^3 & & \mathrm{y}^3 \end{array} \nonumber$
$\begin{array}{lll} \mathbf{x} & \text { or } & \mathbf{y} \\ \log \mathrm{x} & & \log \mathrm{y} \\ -1 / \mathrm{x} & & -1 / \mathrm{y} \end{array} \nonumber$
$\begin{array}{lll} \mathbf{x} & \text { or } & \mathbf{y} \\ \log \mathrm{x} & & \mathrm{y}^2 \\ -1 / \mathrm{x} & & \mathrm{y}^3 \end{array} \nonumber$
Choice of transformation
$\begin{array}{lll} \mathbf{x} & \text { or } & \mathbf{y} \\ \mathrm{x}^2 & & \log \mathrm{y} \\ \mathrm{x}^3 & & \\ & & -1 / \mathrm{y} \end{array} \nonumber$
Figure $7$. Examples of possible transformations for x and y variables.
Example $2$:
A forester needs to create a simple linear regression model to predict tree volume using diameter-at-breast height (dbh) for sugar maple trees. He collects dbh and volume for 236 sugar maple trees and plots volume versus dbh. Given below is the scatterplot, correlation coefficient, and regression output from Minitab.
Figure $8$. Scatterplot of volume versus dbh.
Pearson’s linear correlation coefficient is 0.894, which indicates a strong, positive, linear relationship. However, the scatterplot shows a distinct nonlinear relationship.
Regression Analysis: volume versus dbh
The regression equation is volume = – 51.1 + 7.15 dbh
Predictor      Coef  SE Coef       T      P
Constant    -51.097    3.271  -15.62  0.000
dbh          7.1500   0.2342   30.53  0.000

S = 19.5820   R-Sq = 79.9%   R-Sq(adj) = 79.8%

Analysis of Variance

Source           DF      SS      MS       F      P
Regression        1  357397  357397  932.04  0.000
Residual Error  234   89728     383
Total           235  447125
The R2 is 79.9% indicating a fairly strong model and the slope is significantly different from zero. However, both the residual plot and the residual normal probability plot indicate serious problems with this model. A transformation may help to create a more linear relationship between volume and dbh.
Figure $9$. Residual and normal probability plots.
Volume was transformed to the natural log of volume and plotted against dbh (see scatterplot below). Unfortunately, this did little to improve the linearity of this relationship. The forester then took the natural log transformation of dbh. The scatterplot of the natural log of volume versus the natural log of dbh indicated a more linear relationship between these two variables. The linear correlation coefficient is 0.954.
Figure $10$. Scatterplots of natural log of volume versus dbh and natural log of volume versus natural log of dbh.
The regression analysis output from Minitab is given below.
Regression Analysis: lnVOL vs. lnDBH
The regression equation is lnVOL = – 2.86 + 2.44 lnDBH
Predictor      Coef  SE Coef       T      P
Constant    -2.8571   0.1253  -22.80  0.000
lnDBH       2.44383  0.05007   48.80  0.000

S = 0.327327   R-Sq = 91.1%   R-Sq(adj) = 91.0%

Analysis of Variance

Source           DF      SS      MS        F      P
Regression        1  255.19  255.19  2381.78  0.000
Residual Error  234   25.07    0.11
Total           235  280.26
Figure $11$. Residual and normal probability plots.
The model using the transformed values of volume and dbh has a more linear relationship and a stronger correlation coefficient. The slope is significantly different from zero and the R2 has increased from 79.9% to 91.1%. The residual plot shows a more random pattern and the normal probability plot shows some improvement.
There are many transformation combinations possible to linearize data. Each situation is unique and the user may need to try several alternatives before selecting the best transformation for x or y or both.
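As an illustration of how a log-log model of this kind might be fit in code, here is a minimal sketch in Python with NumPy; the dbh and volume arrays are hypothetical placeholders, not the forester’s 236 sugar maple measurements:

```python
import numpy as np

# Hypothetical measurements -- replace with the actual dbh (in.) and volume (cu. ft.) data.
dbh = np.array([6.2, 8.5, 10.1, 12.4, 14.0, 16.3, 18.8, 20.5])
volume = np.array([5.1, 13.0, 22.5, 41.0, 58.2, 86.0, 126.0, 157.0])

# Fit ln(volume) = b0 + b1*ln(dbh) by least squares on the transformed values.
b1, b0 = np.polyfit(np.log(dbh), np.log(volume), 1)

# Predictions must be back-transformed to the original volume scale.
pred_volume = np.exp(b0 + b1 * np.log(dbh))
print(b0, b1)
```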
7.04: Software Solution
Minitab
The Minitab output is shown above in Ex. 4.
Excel
Figure \(1\). Residual and normal probability plots. | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/07%3A_Correlation_and_Simple_Linear_Regression/7.03%3A_Population_Model.txt |
• 8.1: Multiple Regressions
It frequently happens that a dependent variable (y) in which we are interested is related to more than one independent variable. If this relationship can be estimated, it may enable us to make more precise predictions of the dependent variable than would be possible by a simple linear regression. Regressions based on more than one independent variable are called multiple regressions.
• 8.2: Software Solution
08: Multiple Linear Regression
It frequently happens that a dependent variable (y) in which we are interested is related to more than one independent variable. If this relationship can be estimated, it may enable us to make more precise predictions of the dependent variable than would be possible by a simple linear regression. Regressions based on more than one independent variable are called multiple regressions.
Multiple linear regression is an extension of simple linear regression and many of the ideas we examined in simple linear regression carry over to the multiple regression setting. For example, scatterplots, correlation, and least squares method are still essential components for a multiple regression.
For example, a habitat suitability index (used to evaluate the impact on wildlife habitat from land use changes) for ruffed grouse might be related to three factors:
x1 = stem density
x2 = percent of conifers
x3 = amount of understory herbaceous matter
A researcher would collect data on these variables and use the sample data to construct a regression equation relating these three variables to the response. The researcher will have questions about his model similar to a simple linear regression model.
• How strong is the relationship between y and the three predictor variables?
• How well does the model fit?
• Have any important assumptions been violated?
• How good are the estimates and predictions?
The general linear regression model takes the form of
$y_i = \beta_0+ \beta_1x_1+\beta_2x_2 + ...+\beta_kx_k+\epsilon$
with the mean value of y given as
$\mu_y = \beta_0 +\beta_1 x_1+\beta_2 x_2+...+\beta_k x_k$
where:
• y is the random response variable and μy is the mean value of y,
• β0, β1, β2, and βk are the parameters to be estimated based on the sample data,
• x1, x2,…, xk are the predictor variables that are assumed to be non-random or fixed and measured without error, and k is the number of predictor variables,
• and ε is the random error, which allows each response to deviate from the average value of y. The errors are assumed to be independent, have a mean of zero and a common variance (σ2), and are normally distributed.
As you can see, the multiple regression model and assumptions are very similar to those for a simple linear regression model with one predictor variable. Examining residual plots and normal probability plots for the residuals is key to verifying the assumptions.
Correlation
As with simple linear regression, we should always begin with a scatterplot of the response variable versus each predictor variable. Linear correlation coefficients for each pair should also be computed. Instead of computing the correlation of each pair individually, we can create a correlation matrix, which shows the linear correlation between each pair of variables under consideration in a multiple linear regression model.
$\begin{array}{cccc} & \mathbf{y} & \mathbf{x}_1 & \mathbf{x}_2 \\ \mathbf{x}_1 & 0.816 & & \\ & 0.000 & & \\ \mathbf{x}_2 & 0.413 & -0.144 & \\ & 0.029 & 0.466 & \\ \mathbf{x}_3 & 0.768 & 0.588 & 0.406 \\ & 0.000 & 0.001 & 0.032 \end{array}$
Table $1$ . A correlation matrix.
In this matrix, the upper value is the linear correlation coefficient and the lower value is the p-value for testing the null hypothesis that a correlation coefficient is equal to zero. This matrix allows us to see the strength and direction of the linear relationship between each predictor variable and the response variable, but also the relationship between the predictor variables. For example, y and x1 have a strong, positive linear relationship with r = 0.816, which is statistically significant because p = 0.000. We can also see that predictor variables x1 and x3 have a moderately strong positive linear relationship (r = 0.588) that is significant (p = 0.001).
There are many different reasons for selecting which explanatory variables to include in our model (see Model Development and Selection), however, we frequently choose the ones that have a high linear correlation with the response variable, but we must be careful. We do not want to include explanatory variables that are highly correlated among themselves. We need to be aware of any multicollinearity between predictor variables.
Multicollinearity exists between two explanatory variables if they have a strong linear relationship.
For example, if we are trying to predict a person’s blood pressure, one predictor variable would be weight and another predictor variable would be diet. Both predictor variables are highly correlated with blood pressure (as weight increases blood pressure typically increases, and as diet becomes richer blood pressure also tends to increase). But both predictor variables are also highly correlated with each other. Both of these predictor variables are conveying essentially the same information when it comes to explaining blood pressure. Including both in the model may lead to problems when estimating the coefficients, as multicollinearity increases the standard errors of the coefficients. This means that coefficients for some variables may be found not to be significantly different from zero, whereas without multicollinearity and with lower standard errors, the same coefficients might have been found significant. Ways to test for multicollinearity are not covered in this text; however, a general rule of thumb is to be wary of a linear correlation between two predictor variables that is less than -0.7 or greater than 0.7. Always examine the correlation matrix for relationships between predictor variables to avoid multicollinearity issues.
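A correlation matrix like Table 1 can be generated directly from raw data. A minimal sketch in Python with pandas; the file name and column names (y, x1, x2, x3) are placeholders for whatever is in your own data set:

```python
import pandas as pd

# df is assumed to hold one column for the response (y) and one per predictor (x1, x2, x3).
df = pd.read_csv("stand_data.csv")          # hypothetical file name

corr = df[["y", "x1", "x2", "x3"]].corr()   # pairwise linear correlation coefficients
print(corr.round(3))

# Correlations between predictors near +/-0.7 or beyond warrant a closer look for multicollinearity.
```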
Estimation
Estimation and inference procedures are also very similar to simple linear regression. Just as we used our sample data to estimate β0 and β1 for our simple linear regression model, we are going to extend this process to estimate all the coefficients for our multiple regression models.
With the simpler population model
$\mu_y = \beta_0+\beta_1x$
β1 is the slope and tells the user what the change in the response would be as the predictor variable changes. With multiple predictor variables, and therefore multiple parameters to estimate, the coefficients β1, β2, β3 and so on are called partial slopes or partial regression coefficients. The partial slope βi measures the change in y for a one-unit change in xi when all other independent variables are held constant. These regression coefficients must be estimated from the sample data in order to obtain the general form of the estimated multiple regression equation
$\hat y = b_0+b_1x_1+b_2x_2+b_3x_3+...+b_kx_k$
and the population model
$\mu_y = \beta_0 + \beta_1x_1+\beta_2x_2+\beta_3x_3+...+\beta_kx_k$
where k = the number of independent variables (also called predictor variables)
$\hat y$ = the predicted value of the dependent variable (computed by using the multiple regression equation)
x1, x2, …, xk = the independent variables
β0 is the y-intercept (the value of y when all the predictor variables equal 0)
b0 is the estimate of β0 based on that sample data
β1, β2, β3,…βk are the coefficients of the independent variables x1, x2, …, xk
b1, b2, b3, …, bk are the sample estimates of the coefficients β1, β2, β3,…βk
The method of least-squares is still used to fit the model to the data. Remember that this method minimizes the sum of the squared deviations of the observed and predicted values (SSE).
The analysis of variance table for multiple regression has a similar appearance to that of a simple linear regression.
Source of variation          df   Seq sums of squares   Sums of squares   Mean sums of squares    F
Regression                    k                         SSR               SSR/k = MSR             MSR/MSE = F
Error                 n - k - 1                         SSE               SSE/(n - k - 1) = MSE
Total                     n - 1                         SST
Table $2$ . ANOVA table.
Where k is the number of predictor variables and n is the number of observations.
The best estimate of the random variation $\sigma^2$—the variation that is unexplained by the predictor variables—is still s2, the MSE. The regression standard error, s, is the square root of the MSE.
A new column in the ANOVA table for multiple linear regression shows a decomposition of SSR, in which the conditional contribution of each predictor variable given the variables already entered into the model is shown for the order of entry that you specify in your regression. These conditional or sequential sums of squares each account for 1 regression degree of freedom, and allow the user to see the contribution of each predictor variable to the total variation explained by the regression model by using the ratio:
$\dfrac {SeqSS}{SSR}$
Adjusted $R^2$
In simple linear regression, we used the relationship between the explained and total variation as a measure of model fit:
$R^2 = \dfrac {Explained \ Variation}{Total \ Variation} = \dfrac {SSR}{SSTo} = 1 - \dfrac {SSE}{SSTo}$
Notice from this definition that the value of the coefficient of determination can never decrease with the addition of more variables into the regression model. Hence, $R^2$ can be artificially inflated as more variables (significant or not) are included in the model. An alternative measure of strength of the regression model is adjusted for degrees of freedom by using mean squares rather than sums of squares:
$R^2(adj) = 1 -\dfrac {(n-1)(1-R^2)}{(n-p)} = (1 - \dfrac {MSE}{SSTo/(n-1)})$
The adjusted $R^2$ value represents the percentage of variation in the response variable explained by the independent variables, corrected for degrees of freedom. Unlike $R^2$, the adjusted $R^2$ will not tend to increase as variables are added and it will tend to stabilize around some upper limit as variables are added.
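The adjustment is simple to compute by hand. A short Python check, using the R², sample size, and number of estimated coefficients from the full three-predictor volume model fit later in this chapter:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R-squared: R-squared penalized for the p coefficients estimated from n observations."""
    return 1 - (n - 1) * (1 - r2) / (n - p)

# Values from the three-predictor volume model shown later in this chapter:
print(adjusted_r2(0.9553, n=28, p=4))   # approximately 0.9497, i.e. 94.97%
```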
Tests of Significance
Recall in the previous chapter we tested to see if y and x were linearly related by testing
$H_0: \beta_1 = 0$
$H_1: \beta_1 \ne 0$
with the t-test (or the equivalent F-test). In multiple linear regression, there are several partial slopes and the t-test and F-test are no longer equivalent. Our question changes: Is the regression equation that uses information provided by the predictor variables x1, x2, x3, …, xk, better than the simple predictor (the mean response value), which does not rely on any of these independent variables?
$H_0: \beta_1 = \beta_2 = \beta_3 = …=\beta_k = 0$
$H_1: \text{At least one of } \beta_1, \beta_2, \beta_3, \ldots, \beta_k \ne 0$
The F-test statistic is used to answer this question and is found in the ANOVA table.
$F=\dfrac{MSR}{MSE}$
This test statistic follows the F-distribution with $df_1 = k$ and $df_2 = (n-k-1)$. Since the exact p-value is given in the output, you can use the Decision Rule to answer the question.
If the p-value is less than the level of significance, reject the null hypothesis.
Rejecting the null hypothesis supports the claim that at least one of the predictor variables has a significant linear relationship with the response variable. The next step is to determine which predictor variables add important information for prediction in the presence of other predictors already in the model. To test the significance of the partial regression coefficients, you need to examine each relationship separately using individual t-tests.
$H_0: β_i = 0$
$H_1: β_i \ne 0$
$t=\dfrac {b_i}{SE(b_i)} \ with \ df = (n-k-1)$
where SE(bi) is the standard error of bi. Exact p-values are also given for these tests. Examining specific p-values for each predictor variable will allow you to decide which variables are significantly related to the response variable. Typically, any insignificant variables are removed from the model, but remember these tests are done with other variables in the model. A good procedure is to remove the least significant variable and then refit the model with the reduced data set. With each new model, always check the regression standard error (lower is better), the adjusted R2 (higher is better), the p-values for all predictor variables, and the residual and normal probability plots.
Because of the complexity of the calculations, we will rely on software to fit the model and give us the regression coefficients. Don’t forget… you always begin with scatterplots. Strong relationships between predictor and response variables make for a good model.
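For readers working outside Minitab, a multiple regression of this form can be fit in most statistical software. A minimal sketch in Python using statsmodels; the file and column names are placeholders:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("stand_data.csv")                # hypothetical file with columns y, x1, x2, x3

model = smf.ols("y ~ x1 + x2 + x3", data=df).fit()
print(model.summary())                            # coefficients, t-tests, F-test, R-sq, adjusted R-sq

# Refit after dropping the least significant predictor, then compare s, adjusted R-sq,
# p-values, and the residual and normal probability plots.
reduced = smf.ols("y ~ x1 + x3", data=df).fit()
```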
Example $1$:
A researcher collected data in a project to predict the annual growth per acre of upland boreal forests in southern Canada. They hypothesized that cubic foot volume growth (y) is a function of stand basal area per acre (x1), the percentage of that basal area in black spruce (x2), and the stand’s site index for black spruce (x3). α = 0.05.
CuFt   BA/ac   %BA Bspruce   SI      CuFt   BA/ac   %BA Bspruce   SI
  55      51            79   45        71      65            93   35
  68     100            48   53        67      87            68   41
  60      63            67   44        73     108            51   54
  40      52            52   31        87     105            82   51
  45      67            52   29        80     100            70   45
  49      42            82   43        77     103            61   43
  62      81            80   42        64      55            96   51
  56      70            65   36        60      60            80   47
  93     108            96   63        65      70            76   40
  76      90            81   60        65      78            74   46
  94     110            78   56        83      85            96   55
  82     111            59   48        67      92            58   50
  86      94            84   53        61      82            58   38
  55      82            48   40        51      56            69   35
Table $3$ . Observed data for cubic feet, stand basal area, percent basal area in black spruce, and site index.
Scatterplots of the response variable versus each predictor variable were created along with a correlation matrix.
Figure $1$ . Scatterplots of cubic feet versus basal area, percent basal area in black spruce, and site index.
Correlations: CuFt, BA/ac, %BA Bspruce, SI

              CuFt   BA/ac   %BA Bspruce
BA/ac        0.816
             0.000

%BA Bspruce  0.413  -0.144
             0.029   0.466

SI           0.768   0.588         0.406
             0.000   0.001         0.032
Table $4$ . Correlation matrix.
As you can see from the scatterplots and the correlation matrix, BA/ac has the strongest linear relationship with CuFt volume (r = 0.816) and %BA in black spruce has the weakest linear relationship (r = 0.413). Also of note is the moderately strong correlation between the two predictor variables, BA/ac and SI (r = 0.588). All three predictor variables have significant linear relationships with the response variable (volume) so we will begin by using all variables in our multiple linear regression model. The Minitab output is given below.
We begin by testing the following null and alternative hypotheses:
H0: β1 = β2 = β3 = 0
H1: At least one of β1, β2 , β3 ≠ 0
General Regression Analysis: CuFt versus BA/ac, SI, %BA Bspruce
Regression Equation: CuFt = -19.3858 + 0.591004 BA/ac + 0.0899883 SI + 0.489441 %BA Bspruce
Coefficients

Term             Coef   SE Coef        T      P                  95% CI
Constant     -19.3858   4.15332  -4.6675  0.000    (-27.9578, -10.8137)
BA/ac          0.5910   0.04294  13.7647  0.000        (0.5024, 0.6796)
SI             0.0900   0.11262   0.7991  0.432       (-0.1424, 0.3224)
%BA Bspruce    0.4894   0.05245   9.3311  0.000        (0.3812, 0.5977)

Summary of Model

S = 3.17736   R-Sq = 95.53%   R-Sq(adj) = 94.97%
PRESS = 322.279   R-Sq(pred) = 94.05%

Analysis of Variance

Source        DF   Seq SS   Adj SS   Adj MS        F         P
Regression     3  5176.56  5176.56  1725.52  170.918  0.000000
BA/ac          1  3611.17  1912.79  1912.79  189.467  0.000000
SI             1   686.37     6.45     6.45    0.638  0.432094
%BA Bspruce    1   879.02   879.02   879.02   87.069  0.000000
Error         24   242.30   242.30    10.10
Total         27  5418.86
The F-test statistic (and associated p-value) is used to answer this question and is found in the ANOVA table. For this example, F = 170.918 with a p-value of 0.00000. The p-value is smaller than our level of significance (0.0000<0.05) so we will reject the null hypothesis. At least one of the predictor variables significantly contributes to the prediction of volume.
The coefficients for the three predictor variables are all positive indicating that as they increase cubic foot volume will also increase. For example, if we hold values of SI and %BA Bspruce constant, this equation tells us that as basal area increases by 1 sq. ft., volume will increase an additional 0.591004 cu. ft. The signs of these coefficients are logical, and what we would expect. The adjusted R2 is also very high at 94.97%.
The next step is to examine the individual t-tests for each predictor variable. The test statistics and associated p-values are found in the Minitab output and repeated below:
Coefficients

Term             Coef   SE Coef        T      P                  95% CI
Constant     -19.3858   4.15332  -4.6675  0.000    (-27.9578, -10.8137)
BA/ac          0.5910   0.04294  13.7647  0.000        (0.5024, 0.6796)
SI             0.0900   0.11262   0.7991  0.432       (-0.1424, 0.3224)
%BA Bspruce    0.4894   0.05245   9.3311  0.000        (0.3812, 0.5977)
The predictor variables BA/ac and %BA Bspruce have t-statistics of 13.7647 and 9.3311 and p-values of 0.0000, indicating that both are significantly contributing to the prediction of volume. However, SI has a t-statistic of 0.7991 with a p-value of 0.432. This variable does not significantly contribute to the prediction of cubic foot volume.
This result may surprise you as SI had the second strongest relationship with volume, but don’t forget about the correlation between SI and BA/ac (r = 0.588). The predictor variable BA/ac had the strongest linear relationship with volume, and using the sequential sums of squares, we can see that BA/ac is already accounting for 70% of the variation in cubic foot volume (3611.17/5176.56 = 0.6976). The information from SI may be too similar to the information in BA/ac, and SI only explains about 13% of the variation in volume (686.37/5176.56 = 0.1326) given that BA/ac is already in the model.
The next step is to examine the residual and normal probability plots. A single outlier is evident in the otherwise acceptable plots.
Figure $2$ . Residual and normal probability plots.
So where do we go from here?
We will remove the non-significant variable and re-fit the model excluding the data for SI in our model. The Minitab output is given below.
General Regression Analysis: CuFt versus BA/ac, %BA Bspruce
Regression Equation
CuFt = -19.1142 + 0.615531 BA/ac + 0.515122 %BA Bspruce
Coefficients

Term             Coef   SE Coef        T      P                  95% CI
Constant     -19.1142   4.10936  -4.6514  0.000    (-27.5776, -10.6508)
BA/ac          0.6155   0.02980  20.6523  0.000        (0.5541, 0.6769)
%BA Bspruce    0.5151   0.04115  12.5173  0.000        (0.4304, 0.5999)

Summary of Model

S = 3.15431   R-Sq = 95.41%   R-Sq(adj) = 95.04%
PRESS = 298.712   R-Sq(pred) = 94.49%

Analysis of Variance

Source        DF   Seq SS   Adj SS   Adj MS        F          P
Regression     2  5170.12  5170.12  2585.06  259.814  0.0000000
BA/ac          1  3611.17  4243.71  4243.71  426.519  0.0000000
%BA Bspruce    1  1558.95  1558.95  1558.95  156.684  0.0000000
Error         25   248.74   248.74     9.95
Total         27  5418.86
We will repeat the steps followed with our first model. We begin by again testing the following hypotheses:
$H_0: \beta_1 = \beta_2 = 0$
$H_1: \text{At least one of } \beta_1, \beta_2 \ne 0$
This reduced model has an F-statistic equal to 259.814 and a p-value of 0.0000. We will reject the null hypothesis. At least one of the predictor variables significantly contributes to the prediction of volume. The coefficients are still positive (as we expected) but the values have changed to account for the different model.
The individual t-tests for each coefficient (repeated below) show that both predictor variables are significantly different from zero and contribute to the prediction of volume.
Coefficients

Term             Coef   SE Coef        T      P                  95% CI
Constant     -19.1142   4.10936  -4.6514  0.000    (-27.5776, -10.6508)
BA/ac          0.6155   0.02980  20.6523  0.000        (0.5541, 0.6769)
%BA Bspruce    0.5151   0.04115  12.5173  0.000        (0.4304, 0.5999)
Notice that the adjusted R2 has increased from 94.97% to 95.04% indicating a slightly better fit to the data. The regression standard error has also changed for the better, decreasing from 3.17736 to 3.15431 indicating slightly less variation of the observed data to the model.
Figure $3$ . Residual and normal probability plots.
The residual and normal probability plots have changed little, still not indicating any issues with the regression assumptions. By removing the non-significant variable, the model has improved.
8.02: Software Solution
Model Development and Selection
There are many different reasons for creating a multiple linear regression model and its purpose directly influences how the model is created. Listed below are several of the more common uses for a regression model:
1. Describing the behavior of your response variable
2. Predicting a response or estimating the average response
3. Estimating the parameters (β0, β1, β2, …)
4. Developing an accurate model of the process
Depending on your objective for creating a regression model, your methodology may vary when it comes to variable selection, retention, and elimination.
When the objective is a simple description of your response variable, you are typically less concerned about eliminating non-significant variables. The best representation of the response variable, in terms of minimal residual sums of squares, is the full model, which includes all predictor variables available from the data set. It is less important that the variables are causally related or that the model is realistic.
A common reason for creating a regression model is for prediction and estimating. A researcher wants to be able to define events within the x-space of data that were collected for this model, and it is assumed that the system will continue to function as it did when the data were collected. Any measurable predictor variables that contain information on the response variable should be included. For this reason, non-significant variables may be retained in the model. However, regression equations with fewer variables are easier to use and have an economic advantage in terms of data collection. Additionally, there is a greater confidence attached to models that contain only significant variables.
If the objective is to estimate the model parameters, you will be more cautious when considering variable elimination. You want to avoid introducing a bias by removing a variable that has predictive information about the response. However, there is a statistical advantage in terms of reduced variance of the parameter estimates if variables truly unrelated to the response variable are removed.
Building a realistic model of the process you are studying is often a primary goal of much research. It is important to identify the variables that are linked to the response through some causal relationship. While you can identify which variables have a strong correlation with the response, this only serves as an indicator of which variables require further study. The principal objective is to develop a model whose functional form realistically reflects the behavior of a system.
The following figure is a strategy for building a regression model.
Figure \(1\) . Strategy for building a regression model.
Software Solutions
Minitab
The output and plots are given in the previous example. | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/08%3A_Multiple_Linear_Regression/8.01%3A_Multiple_Regressions.txt |
• 9.1: Growth and Yield Models
• 9.2: Site Index
Site is defined by the Society of American Foresters (1971) as “an area considered in terms of its own environment, particularly as this determines the type and quality of the vegetation the area can carry.” Forest and natural resource managers use site measurement to identify the potential productivity of a forest stand and to provide a comparative frame of reference for management options. The productive potential or capacity of a site is often referred to as site quality.
• 9.3: Reference
09: Modeling Growth Yield and Site Index
Forest and natural resource management decisions are often based on information collected on past and present resource conditions. This information provides us with not only current details on the timber we manage (e.g., volume, diameter distribution) but also allows us to track changes in growth, mortality, and ingrowth over time. We use this information to make predictions of future growth and yield based on our management objectives. Techniques for forecasting stand dynamics are collectively referred to as growth and yield models. Growth and yield models are relationships between the amount of yield or growth and the many different factors that explain or predict this growth.
Before we continue our examination of growth and yield models, let’s review some basic terms.
• Yield: total volume available for harvest at a given time
• Growth: difference in volume between the beginning and end of a specified period of time (V2 – V1)
• Annual growth: when growth is divided by number of years in the growing period
• Model: a mathematical function used to relate observed growth rates or yield to measured tree, stand, and site variables
• Estimation: a statistical process of obtaining coefficients for models that describe the growth rates or yield as a function of measured tree, stand, and site variables
• Evaluation: considering how, where, and by whom the model should be used, how the model and its components operate, and the quality of the system design and its biological reality
• Verification: the process of confirming that the model functions correctly with respect to the conceptual model. In other words, verification makes sure that there are no flaws in the programming logic or algorithms, and no bias in computation (systematic errors).
• Validation: checks the accuracy and consistency of the model and tests the model to see how well it reflects the real system, if possible, using an independent data set
• Simulation: using a computer program to simulate an abstract model of a particular system. We use a growth model to estimate stand development through time under alternative conditions or silvicultural practices.
• Calibration: the process of modifying the model to account for local conditions that may differ from those on which the model was based
• Monitoring: continually checking the simulation output of the system to identify any shortcomings of the model
• Deterministic model: a model in which the outcomes are determined through known relationships among states and events, without any room for random variation. In forestry, a deterministic model provides an estimate of average stand growth, and given the same initial conditions, a deterministic model will always predict the same result.
• Stochastic model: a model that attempts to illustrate the natural variation in a system by providing different predictions (each with a specific probability of occurrence) given the same initial conditions. A stochastic model requires multiple runs to provide estimates of the variability of predictions.
• Process model: a model that attempts to simulate biological processes that convert carbon dioxide, nutrients, and moisture into biomass through photosynthesis
• Succession model: a model that attempts to model species succession, but is generally unable to provide reliable information on timber yield
Models
Growth and yield models are typically stated as mathematical equations and can be implicit or explicit in form. An implicit model defines the variables in the equation but the specific relationship is not quantified. For example,
$V = f (BA,H_t)$
where V is volume (ft3/ac), BA is density (basal area in ft2), Ht is total tree height. This model says that volume is a function of (depends on) density and height, but it does not put a numerical value on the volume for specific values of basal area and height. This equation becomes explicit when we specify the relationship such as
$ln(V) = -0.723 + 0.781*ln(BA)+ 0.922 ln(H_t)$
Growth and yield models can be linear or nonlinear equations. In this linear model, all the independent variables of X1 and X2 are only raised to the first power.
$y = 1.29 + 7.65 X1 -27.02 X2$
A nonlinear model has independent variables with exponents different from one.
$y=b_oe^{b_1X}$
In this example, b0 and b1 are parameters to be estimated and X is the independent variable.
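A nonlinear model of this form can be fit by nonlinear least squares rather than by transforming the data. A minimal sketch in Python with SciPy; the x and y values are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(x, b0, b1):
    """Nonlinear growth model y = b0 * exp(b1 * x)."""
    return b0 * np.exp(b1 * x)

# Hypothetical observations for illustration only.
x = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
y = np.array([2.1, 4.3, 8.8, 17.5, 36.0, 70.2])

(b0_hat, b1_hat), cov = curve_fit(exp_model, x, y, p0=(1.0, 0.1))
print(b0_hat, b1_hat)
```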
Classification of Growth and Yield Models
Growth and yield models have long been part of forestry but development and use has greatly increased in the last 25 years due to the accessibility of computers. There are many different approaches to modeling, each with their own advantages and disadvantages. Selecting a specific type of modeling approach often depends on the type of data used. Growth and yield models are categorized depending on whether they model the whole stand, the diameter classes, or individual trees.
Whole Stand Models
Whole stand models may or may not contain density as an independent variable. Density-free whole stand models provide the basis for traditional normal yield tables since “normal” implies nature’s maximum density, and empirical yield tables assume nature’s average density. In both of these cases, stand volume at a specific age is typically a function of stand age and site index. Variable-density whole stand models use density as an explicit independent variable to predict current or future volume. Buckman (1962) published the first study in the United States that directly predicted growth from current stand variables, then integrated the growth function to obtain yield:
$Y = 1.6689 + 0.041066BA - 0.00016303BA^2 - 0.076958A + 0.00022741A^2 + 0.06441S$
where Y = periodic net annual basal area increment
BA = basal area, in square feet per acre
A = age, in years
S = site index
Diameter distribution models are a refinement of whole stand models. This type of model disaggregates the results at each age and then adds additional information about diameter class structure such as height and volume. The number of stems in each class is a function of the stand variables and all growth functions are for the stand. This type of whole stand model provides greater detail of the stand conditions in terms of volume, tree size, and value.
Diameter Class Models
Diameter class models (not to be confused with diameter distribution models) simulate growth and volume for each diameter class based on the average tree in each class. The number of trees in each class is empirically determined. The diameter class volumes are computed separately for each diameter class, then summed up to obtain stand values. Stand table projection is a common diameter class method used to predict short-term future conditions based on observable diameter growth for that stand. Mortality, harvest, and ingrowth must be computed separately. Differences in projection methods are based on the distribution of the number of stems in each class and how the growth rate is applied. For example, the simplest projection method is based on two assumptions: 1) that all tree diameters in a diameter class equal the midpoint diameter for that class, and 2) that they all grow at the same average rate. An improvement upon this method is to use a movement ratio that defines the proportion of trees which move into a higher DBH class.
$m = \frac {g}{i} \times 100$
where m is the movement ratio, g is the average periodic diameter increment for that specific class, and i is the diameter class interval. Let’s look at an example.
Assume for a specific DBH class that g is 1.2 in. and i (class interval) is 2.0 in.
$m =\frac {1.2}{2.0} \times 100 = 60\%$
This means that 60% of the trees in that diameter class will move up to the next diameter class, and 40% will remain in this class. If the diameter class interval was one inch, the movement ratio would be different.
$m =\frac {1.2}{1.0} \times 100 =120\%$
In this case, all the trees in this diameter class would move up at least one size class and 20% of them would move up two size classes.
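The movement-ratio arithmetic is easy to wrap in a small function, as sketched below in Python (illustration only):

```python
def movement_ratio(g, i):
    """Movement ratio: percent of the class interval covered by the periodic diameter growth g."""
    return (g / i) * 100

m1 = movement_ratio(1.2, 2.0)   # approximately 60: 60% move up one class, 40% stay put
m2 = movement_ratio(1.2, 1.0)   # approximately 120: all trees move up one class and 20% move up two
print(m1, m2)
```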
Individual Tree Models
Individual tree models simulate the growth of each individual tree in the tree list. These models are more complex but have become more common as computing power has increased. Individual tree models typically simulate the height, diameter, and survival of each tree while calculating its growth. Individual tree data are aggregated after the model grows each tree, while stand models aggregate individual tree data into stand variables before the growth model is applied. Additionally, this type of model allows the user to include a measure of competition for each tree. Because of this, individual tree models are typically divided into two groups based on how competition is treated.
Distance-independent models define the competitive neighborhood for a subject tree by its own diameter, height, and condition to stand characteristics such as basal area, number of trees per area, and average diameter, however, the distances between trees are not required for computing the competition for each tree. Distance-dependent models include distance and bearing to all neighboring trees, along with their diameter. This way, the competitive neighborhood for each subject tree is precisely and uniquely defined. While this approach seems logically superior to distance-independent methods, there has not been any clear documented evidence to support the use of distance-dependent competition measures over distance-independent measures.
There are many growth and yield models and simulators available and it can be difficult to select the most appropriate model. There are advantages and disadvantages to many of these options and foresters must be concerned with the reliability of the estimates, the flexibility of the model to deal with management alternatives, the level of required detail, and the efficiency for providing information in a clear and useable fashion. Many models have been created using a broad range of available data. These models are best used for comparative purposes only. In other words, they are most appropriate when comparing the outcomes from different management options instead of predicting results for a specific stand. It is important to review and understand the foundations for any model or simulator before using it.
Forest Vegetation Simulator
The Forest Vegetation Simulator (FVS, Wykoff et al. 1982; Dixon 2002) is a distance-independent, individual-tree forest growth model commonly used in the United States to support forest management decisions. Projections are typically made at the stand level, but FVS has the ability to expand the spatial scope to much larger management units. FVS began as the Prognosis Model for Stand Development (Stage 1973) with the objective to predict stand dynamics in the mixed forests of Idaho and Montana. This model became the common modeling platform for the USDA Forest Service and was renamed FVS.
Stands are the basic unit of management and projections are dependent on the interactions among trees within stands using key variables such as density, species, diameter, height, crown ratio, diameter growth, and height growth. Values for slope, aspect, elevation, density, and a measure of site potential are included for each plot. There are 22 geographically specific versions of FVS called variants.
NE-TWIGS (Belcher 1982) is a common variant applicable to fourteen northeastern states. Stand growth projections are based on simulating the growth and mortality for trees in the 5-inch and larger DBH classes. Ingrowth can be manually entered or simulated using an automatic ingrowth function. The growth equation annually estimates a diameter for each sample tree and updates the crown ratio of the tree (Miner et al. 1988).
Annual diameter growth = potential growth*competition modifier
Potential growth is defined as the growth of the top 10% of the fastest growing trees and is predicted using the following equation:
$Potential \ growth = b_1 * SI * [1.0 - e^{(-b_2*D)}]$
where,
potential growth is defined as the potential annual basal area growth of a tree (sq. ft./yr)
b1 and b2 are species specific coefficients
SI is site index (index age 50 years) and
D is current tree diameter in in.
The competition modifier is an index bounded from 0 to 1, and is found by:
Competition modifier = $e^{-b_3*BA}$
where b3 is a species-specific coefficient and
BA is the current basal area (sq. ft./ac).
Tree mortality is calculated by estimating the probability of death of each tree in a given year:
$Survival = 1-[1/(1+e^n)]$
where $n = c_1+c_2*(D+1)^{c_3}*e^{c_4*D - c_5*BA - c_6*SI}$
c1,…,c6 are species-specific coefficients
D is current tree diameter (inches)
BA is stand basal area (sq. ft./ac) and
SI is site index.
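Putting the growth and survival equations together, the annual update for one tree might be sketched as follows in Python; the coefficient values passed in are hypothetical placeholders, since the actual NE-TWIGS coefficients are species-specific and are not listed here:

```python
import math

def annual_basal_area_growth(dbh, si, ba, b1, b2, b3):
    """NE-TWIGS style growth: potential growth of the fastest trees scaled by a competition modifier."""
    potential = b1 * si * (1.0 - math.exp(-b2 * dbh))   # sq. ft./yr
    competition = math.exp(-b3 * ba)                     # bounded between 0 and 1
    return potential * competition

def annual_survival(dbh, si, ba, c):
    """Probability that a tree survives the year; c is the tuple of coefficients (c1, ..., c6)."""
    c1, c2, c3, c4, c5, c6 = c
    n = c1 + c2 * (dbh + 1) ** c3 * math.exp(c4 * dbh - c5 * ba - c6 * si)
    return 1 - 1 / (1 + math.exp(n))

# Hypothetical coefficients for illustration only -- not actual NE-TWIGS values.
growth = annual_basal_area_growth(dbh=10.0, si=60, ba=120.0, b1=0.002, b2=0.08, b3=0.004)
```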
Inventory data and site information are entered into FVS, and a self-calibration process adjusts the growth models to match the rates present in the entered data. Harvests can be simulated with growth and mortality rates based on post-removal stand densities. Growth cycles run for 5-10 years and output includes a summary of current stand conditions, sampling statistics, and calibration results.
Applications of Regression Techniques
Regression models serve many purposes in the management of natural and forest resources. The following examples serve to highlight some of these applications.
Weight Scaling for Sawlogs
In 1962, Bower created the following equation for predicting loblolly pine sawlog volume based on truckload weights and the number of logs per truck:
$Y = -3.954 N + 0.0925 W$
where Y = total board-foot volume (International 1/4-inch rule) for a truckload of logs
N = number of 16-ft logs on the truck
W = total load weight (lb.)
Notice that there is no y-intercept in the model. When there are no logs on the truck, there is no volume to be estimated.
Rates of Stem Taper
Kozak et al. (1969) developed a technique for estimating the fraction of volume per tree located in logs of any specified length and dib for any system of scaling (board feet, cubic feet, or weight). Their regression model also predicted taper curves and upper stem diameters (dib) for some conifer species.
$\frac {d^2}{dbh^2} = b_0 +b_1(\frac {h}{H}) +b_2(\frac {h^2}{H^2})$
where d = stem diameter at any height h above ground
H = total tree height
This equation resolves to:
$d = dbh\sqrt {b_0 +b_1(\frac {h}{H}) +b_2(\frac {h^2}{H^2})}$
The predictor variables are the ratio, and squared ratio, of any height to total height.
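Evaluating the taper equation for any height up the stem is a one-line calculation, sketched below in Python; the b0, b1, and b2 values are hypothetical, as the published coefficients are species-specific:

```python
import math

def stem_diameter(dbh, h, H, b0, b1, b2):
    """Kozak-type taper: predicted stem diameter (dib) at height h on a tree of total height H."""
    ratio = h / H
    return dbh * math.sqrt(b0 + b1 * ratio + b2 * ratio ** 2)

# Hypothetical coefficients and tree dimensions for illustration only.
print(stem_diameter(dbh=12.0, h=17.3, H=70.0, b0=1.02, b1=-1.5, b2=0.55))
```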
Multiple Entry Volume Table that Allows for Variable Utilization Standards
Foresters commonly want to predict tree volume for various top diameters but many of the available volume equations were created for specific top limits. Burkhart (1977) created a regression model to predict volume (cubic feet) of loblolly pine to any desired merchantable top limit. His approach predicted total stem volume, then converted total volume to merchantable volume by applying predicted ratios of merchantable volume to total volume.
$V = 0.34864 + 0.00232dbh^2H$
$R=1-0.32354(\dfrac {d_t^{3.1579}}{dbh^{2.7115}})$
where dbh = diameter at breast height (in.)
H = total tree height (ft.)
V = total stem cubic-foot volume
R = merchantable cubic-foot volume to top diameter dt divided by total stem cubic-foot volume
dt = top dob (in.)
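The two equations chain together: total stem volume times the ratio gives merchantable volume to any top limit. A short Python sketch of that calculation using the coefficients shown above (the example tree dimensions are arbitrary):

```python
def merchantable_volume(dbh, H, d_top):
    """Burkhart (1977) loblolly pine: merchantable cubic-foot volume to top diameter d_top (in.)."""
    V = 0.34864 + 0.00232 * dbh ** 2 * H                  # total stem volume (cu. ft.)
    R = 1 - 0.32354 * (d_top ** 3.1579 / dbh ** 2.7115)   # merchantable-to-total volume ratio
    return V * R

print(merchantable_volume(dbh=10.0, H=60.0, d_top=4.0))
```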
Weight Tables for Tree Boles
Belanger (1973) utilized a combined-variable approach to develop predictions of green-weight and dry weight of sycamore tree:
$GBW = -32.35109+0.15544dbh^2H$
$DBW = -17.67910 +0.06684dbh^2H$
where GBW = green bole weight to 3-in.top (lb.)
DBW = dry bole weight to 3-in.top (lb.)
dbh = diameter at breast height (in.)
H = total tree height (ft.)
Biomass Prediction
A common approach to predicting tree biomass weight has been to use a logarithmic combined-variable formula (e.g. Edwards and McNab 1979). The observed relationship between these variables is typically non-linear, therefore a log or natural log transformation is needed to linearize the relationship.
$log \ Y = b_0 +b_1 log \ dbh^2H$
where Y = total tree weight
dbh = diameter at breast height
H = total tree height
However, past studies (Tritton and Hornbeck 1982 and Wiant et al. 1979) indicated that there was little model improvement when height was added. Many dry-weight biomass models now follow this form:
$ln \ wt = b_0 +b_1ln \ dbh$
$wt =e^{b_0}dbh^{b_1}$
where wt = total tree weight
dbh = diameter at breast height
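Predictions from this form are made on the log scale and then back-transformed, as in the sketch below. The coefficients b0 and b1 are hypothetical placeholders; published values are species- and region-specific.

```python
import math

def biomass_from_dbh(dbh, b0=-2.48, b1=2.48):
    """Dry-weight biomass from the log-log model ln(wt) = b0 + b1*ln(dbh),
    back-transformed as wt = e^{b0} * dbh^{b1}.  The default coefficients are
    made-up values used only to illustrate the calculation."""
    return math.exp(b0) * dbh ** b1

# Example: predicted weight for a 10-in. dbh tree (units follow the fitted data).
print(biomass_from_dbh(10))  # about 25
```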
Volume Predictions based on Stump Diameter
Bylin (1982) created a regression model to predict tree volume using stump diameter and stump height for species in Louisiana.
$V = b_0 +b_1S_{DIB}^2 + b_2H_s$
where V = tree volume (cu. ft.)
SDIB = stump diameter inside bark (in.)
HS = stump height (ft.)
Yield Estimation
MacKinney and Chaiken (1939) were the first to use multiple regression, with stand density as a predictor variable, to predict yield for loblolly pine trees.
$log \ Y =b_0 +b_1\frac {1}{A} + b_2S+b_3log \ SDI+b_4C$
where
• Y = yield (cu. ft./ac)
• A = stand age
• S = site index
• SDI = stand-density index
• C = composition index (loblolly pine BA/total BA)
Growth and Yield Prediction for Uneven-aged Stands
Moser and Hall (1969) developed a yield equation, expressed as a function of time, initial volume, and basal area, to predict volume in mixed northern hardwoods.
$Y = [(Y_0)(8.3348BA_0^{-1.3175})]\times[0.9348-(0.9348-1.0203BA_0^{-0.0125})e^{-0.0062t}]^{-105.5}$
where
• Y0 = initial volume (cu. ft./ac)
• BA0 = initial basal area (sq. ft./ac)
• t = elapsed time interval (years from initial condition)
• Y = predicted volume (cu. ft./ac) t years after observation of initial conditions Y0 and BA0 at time t0
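As a quick numerical check of the Moser and Hall equation, the sketch below projects a hypothetical stand forward in time; note that at t = 0 the equation approximately (though not exactly) returns the initial volume.

```python
import math

def moser_hall_volume(Y0, BA0, t):
    """Projected volume (cu. ft./ac) t years after observing initial volume Y0
    and initial basal area BA0, using Moser and Hall's (1969) equation."""
    size_term = Y0 * 8.3348 * BA0 ** -1.3175
    density_term = (0.9348 - (0.9348 - 1.0203 * BA0 ** -0.0125)
                    * math.exp(-0.0062 * t)) ** -105.5
    return size_term * density_term

# Hypothetical stand: 2,000 cu. ft./ac and 80 sq. ft./ac of basal area today.
print(moser_hall_volume(Y0=2000, BA0=80, t=0))   # about 2,010 (approximately the initial volume)
print(moser_hall_volume(Y0=2000, BA0=80, t=10))  # about 2,470 cu. ft./ac after 10 years
```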
Site Index
Site is defined by the Society of American Foresters (1971) as “an area considered in terms of its own environment, particularly as this determines the type and quality of the vegetation the area can carry.” Forest and natural resource managers use site measurement to identify the potential productivity of a forest stand and to provide a comparative frame of reference for management options. The productive potential or capacity of a site is often referred to as site quality.
Site quality can be measured directly or indirectly. A stand’s productivity can be measured directly by analyzing variables such as soil nutrients, moisture, temperature regimes, available light, slope, and aspect. A productivity-estimation method based on the permanent features of soil and topography can be used on any site and is suitable in areas where forest stands do not presently exist. Soil site index is an example of such an index. However, such indices are location specific and should not be used outside the geographic region in which they were developed. Unfortunately, environmental factor information is not always available, and natural resource managers must use alternative methods.
Historical yield records also provide direct evidence of a site’s productivity by averaging the yields over multiple rotations or cutting cycles. Unfortunately, there are limited long-term data available, and yields may be affected by species composition, stand density, pests, rotation age, and genetics. Consequently, indirect methods of measuring site quality are frequently used, with the most common involving the relationship between tree height and tree age.
Using stand height data is an easy and reliable way to quantify site quality. Theoretically, height growth is sensitive to differences in site quality and height development of larger trees in an even-aged stand is seldom affected by stand density. Additionally, the volume-production potential is strongly correlated with height-growth rate. This measure of site quality is called site index and is the average total height of selected dominant-codominant trees on a site at a particular reference or index age. If you measure a stand that is at an index age, the average height of the dominant and codominant trees is the site index. It is the most widely accepted quantitative measure of site quality in the United States for even-aged stands (Avery and Burkhart 1994).
The objective of the site index method is to select the height development pattern that the stand can be expected to follow during the remainder of its life (not to predict stand height at the index age). Most height-based methods of site quality evaluation use site index curves. Site index curves are a family of height development patterns referenced by either age at breast height or total age. For example, site index curves for plantations are generally based on total age (years since planted), whereas age at breast height is frequently used for natural stands for the sake of convenience. If total age were to be used in this situation, the number of years required for a tree to grow from a seedling to DBH would have to be added. Site index curves can either be anamorphic or polymorphic curves. Anamorphic curves (most common) are a family of curves with the same shape but different intercepts. Polymorphic curves are a family of curves with different shapes and intercepts.
The index age for this method is typically the culmination of mean annual growth. In the western part of the United States, 100 years is commonly used as the reference age with 50 years in the eastern part of this country. However, site index curves can be based on any index age that is needed. Coile and Schumacher (1964) created a family of anamorphic site index curves for plantation loblolly pine with an index age of 25 years. The following family of anamorphic site index curves for a southern pine is based on a reference age of 50 years.
Figure 1. Site index curves with an index age of 50 years.
Creating a site index curve involves randomly selecting dominant and codominant trees, measuring their total height, and statistically fitting the data to a mathematical equation. So, which equation do you use? Plotting height over age for single-species, even-aged stands typically results in a sigmoid-shaped pattern. One model commonly used to describe this height development pattern is:
$H_d = b_0e^{(b_1A^{-1})}$
where Hd is the height of dominant and codominant trees, A is stand age, and b0 and b1 are coefficients to be estimated. Variable transformation is needed if linear regression is to be used to fit the model. A common transformation is
$ln \ H_d = b_0+b_1A^{-1}$
Coile and Schumacher (1964) fit their data to the following model:
$ln \ S = ln \ H +5.190(\frac {1}{A} - \frac {1}{25})$
where S is site index, H is total tree height, and A is average age. The site index curve is created by fitting the model to data from stands of varying site qualities and ages, making sure that all necessary site index classes are equally represented at all ages. It is important not to bias the curve by using an incomplete range of data.
Data for the development of site index equations can come from measurement of tree or stand height and age from temporary or permanent inventory plots or from stem analysis. Inventory plot data are typically used for anamorphic curves only and sampling bias can occur when poor sites are over represented in older age classes. Stem analysis can be used for polymorphic curves but requires destructive sampling and it can be expensive to obtain such data.
We are going to examine three different methods for developing site index equations:
1. Guide curve method
2. Difference equation method
3. Parameter prediction method
Guide Curve Method
The guide curve method is commonly used to generate anamorphic site index equations. Let’s begin with a commonly used model form:
$ln \ H_d =b_0 +b_1A^{-1} = b_0 + b_1\frac{1}{A}$
Parameterizing this model results in a “guide curve” (the average line for the sample data) that is used to create the individual height/age development curves that parallel the guide curve. For a particular site index the equation is:
$ln \ H_d = b_{0i} +b_1A^{-1}$
where b0i is the unique y-intercept for that particular site index curve. By definition, when A = A0 (index age), H is equal to site index S. Thus:
$b_{0i} = ln \ S - b_1A_0^{-1}$
Substituting b0i into the equation for a particular site index gives:
$ln \ H_d = ln \ S +b_1(A^{-1} - A_0^{-1}) = ln(S) + b_1(\frac {1}{A} - \frac {1}{A_0})$
which can be used to generate site index curves for given values of S and A0 and a range of ages (A). The equation can be algebraically rearranged as:
$ln \ S = ln \ H -b_1(A^{-1} - A_0^{-1}) = ln (H) - b_1(\frac {1}{A} - \frac {1}{A_0})$
This is the form to estimate site index (height at index age) when height and age data measurements are given. This process is sound only if the average site quality in the sample data is approximately the same for all age classes. If the average site quality varies systematically with age, the guide curve will be biased.
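A minimal sketch of the rearranged guide-curve equation is given below. The slope b1 = -14.0 is a made-up value used only to show the arithmetic; in practice it comes from fitting the guide curve to your own height/age data.

```python
import math

def site_index_from_guide_curve(height, age, index_age=50, b1=-14.0):
    """Estimate site index (height at the index age) from one height/age
    measurement using ln(S) = ln(H) - b1*(1/A - 1/A0).
    b1 is a hypothetical fitted slope, for illustration only."""
    return math.exp(math.log(height) - b1 * (1.0 / age - 1.0 / index_age))

# A stand 60 ft tall at age 30, on a base-age-50 curve:
print(site_index_from_guide_curve(height=60, age=30))  # about 72 ft
```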
Difference Equation Method
This method requires remeasurement data from permanent (monumented) plots or trees, or stem analysis data. The model is fit using differences of height and specific ages. This method is appropriate for anamorphic and polymorphic curves, especially for longer and/or multiple measurement periods. Schumacher (after Clutter et al. 1983) used this approach when estimating site index using the reciprocal of age and the natural log of height. He believed that there was a linear relationship between Point A (1/A1, lnH1) and Point B (1/A2, lnH2) and defined β1 (slope) as:
$\beta_1 = \dfrac {ln(H_2) - ln (H_1)}{(1/A_2)-(1/A_1)}$
where H1 and A1 were initial height and age, and H2 and A2 were height and age at the end of the remeasurement period. His height/age model became:
$ln (H_2) = ln (H_1) +\beta_1 (\frac {1}{A_2} - \frac {1}{A_1})$
Using remeasurement data, this equation would be fitted using linear regression procedures with the model
$Y = \beta_1X$
where Y = ln(H2) – ln(H1)
X = (1/A2) – (1/A1)
After estimating β1, a site index equation is obtained from the height/age equation by letting A2 equal A0 (the index age) so that H2 is, by definition, site index (S). The equation can then be written:
$ln (S) = ln(H_1) + \beta_1(\frac {1}{A_0} - \frac {1}{A_1})$
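The sketch below works through the method with a single hypothetical remeasurement; in practice β1 is estimated by fitting Y = β1X across many remeasured plots or trees, as described above.

```python
import math

def fit_beta1(H1, A1, H2, A2):
    """Slope of Schumacher's difference equation from one remeasured plot/tree:
    beta1 = (ln H2 - ln H1) / (1/A2 - 1/A1)."""
    return (math.log(H2) - math.log(H1)) / (1.0 / A2 - 1.0 / A1)

def site_index_from_remeasurement(H1, A1, beta1, index_age=50):
    """Height projected to the index age, i.e. site index."""
    return math.exp(math.log(H1) + beta1 * (1.0 / index_age - 1.0 / A1))

# Hypothetical remeasurement: 40 ft at age 20, then 55 ft at age 30.
b1 = fit_beta1(H1=40, A1=20, H2=55, A2=30)
print(b1)                                         # about -19.1
print(site_index_from_remeasurement(55, 30, b1))  # about 71 ft at base age 50
```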
Parameter Prediction Method
This method requires remeasurement or stem analysis data, and involves the following steps:
1. Fitting a linear or nonlinear height/age function to the data on a tree-by-tree (stem analysis data) or plot by plot (remeasurement data) basis
2. Using each fitted curve to assign a site index value to each tree or plot (put A0 in the equation to estimate site index)
3. Relating the parameters of the fitted curves to site index through linear or nonlinear regression procedures
Trousdell et al. (1974) used this approach to estimate site index for loblolly pine, and their work provides an example using the Chapman-Richards (Richards 1959) function for the height/age relationship. They collected stem analysis data on 44 dominant and codominant trees that had a minimum age of at least 50 years. The Chapman-Richards function was used to define the height/age relationship:
$H = \theta_1[1-e^{(-\theta_2A)}]^{[(1-\theta_3)^{-1}]}$
where H is height in feet at age A and θ1, θ2, and θ3 are parameters to be estimated. This equation was fitted separately to each tree. The fitted curves were all solved with A = 50 to obtain site index values (S) for each tree.
The parameters θ1, θ2, and θ3 were hypothesized to be functions of site index, where
$\theta_1 = \beta_1 + \beta_2S$
$\theta_2 = \beta_3 + \beta_4S+\beta_5S^2$
$\theta_3 = \beta_6 + \beta_7S + \beta_8S^2$
The Chapman-Richards function was then expressed as:
$H = (\beta_1+\beta_2S)\{1-e^{[-(\beta_3+\beta_4S+\beta_5S^2)A]}\}^{[(1-\beta_6-\beta_7S-\beta_8S^2)^{-1}]}$
This function was then refitted to the data to estimate the parameters β1, β2, …β8. The estimating equations obtained for θ1, θ2, and θ3 were
$\hat {\theta_1} = 63.1415+0.635080S$
$\hat {\theta_2} = 0.00643041 + 0.000124189S + 0.00000162545S^2 \nonumber$
$\hat {\theta_3} = 0.0172714 - 0.00291877S + 0.0000310915S^2 \nonumber$
For any given site index value, these equations can be solved to give a particular Chapman-Richards site index curve. By substituting various values of age into the equation and solving for H, we obtain height/age points that can be plotted for a site index curve. Since each site index curve has different parameter values, the curves are polymorphic.
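The sketch below evaluates one of these polymorphic curves. Because the parameters were refitted across all trees, the curve for a given S will pass close to, but not necessarily exactly through, S at the index age.

```python
import math

def trousdell_height(S, A):
    """Height (ft) at age A for a tree of site index S (base age 50), using the
    Chapman-Richards form with Trousdell et al.'s (1974) estimating equations."""
    t1 = 63.1415 + 0.635080 * S
    t2 = 0.00643041 + 0.000124189 * S + 0.00000162545 * S ** 2
    t3 = 0.0172714 - 0.00291877 * S + 0.0000310915 * S ** 2
    return t1 * (1 - math.exp(-t2 * A)) ** (1 / (1 - t3))

# Trace one curve, for site index 70:
for age in (20, 35, 50):
    print(age, round(trousdell_height(S=70, A=age), 1))  # roughly 41, 61, and 75 ft
```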
Periodic Height Growth Data
An alternative to using current stand height as the surrogate for site quality is to use periodic height growth data, which is referred to as a growth intercept method. This method is practical only for species that display distinct annual branch whorls and is primarily used for juvenile stands because site index curves are less dependable for young stands.
This method requires the length measurement of a specified number of successive annual internodes or the length over a 5-year period. While the growth-intercept values can be used directly as measures of site quality, they are more commonly used to estimate site index.
Alban (1972) created a simple linear model to predict site index for red pine using 5-year growth intercept in feet beginning at 8 ft. above ground.
$SI = 32.54 + 3.43X$
where SI is site index at a base age of 50 years and X is 5-year growth intercept in feet.
Using periodic height growth data has the advantage of not requiring stand age or total tree height measurements, which can be difficult in young, dense stands. However, due to the short-term nature of the data, weather variation may strongly influence the internodal growth thereby rendering the results inaccurate.
Site index equations should be based on biological or mathematical theories, which will help the equation perform better. They should behave logically and not allow unreasonable values for predicted height, especially at very young or very old ages. The equations should also contain an asymptotic parameter to control unbounded height growth at old age. The asymptote should be some function of site index such that the asymptote increases with increases of site index.
When using site index, it is important to know the base age for the curve before use. It is also important to realize that site index based on one base age cannot be converted to another base age. Additionally, similar site indices for different species do not mean similar sites even when the same base age is used for both species. You have to understand how height and age were measured before you can safely interpret a site index curve. Site index is not a true measure of site quality; rather it is a measure of a tree growth component that is affected by site quality (top height is a measure of stand development, NOT site quality).
D.H. Alban, “An Improved Growth Intercept Method for Estimating Site Index of Red Pine,” U.S. Forest Serv., North Central Forest Expt. Sta., Res. Paper NC-80, 1972, p. 7.
T.E. Avery and H.E. Burkhart, Forest Measurements, McGraw-Hill, 1994, p. 408.
R.P. Belanger, “Volume and Weight Tables for Plantation-grown Sycamore,” U.S. Forest Serv. Southeast. Forest Expt. Sta. Res. Paper SE-107, 1973, p. 8.
D.M. Belcher, “TWIGS: The Woodman’s Ideal Growth Projection System,” Microcomputers, a New Tool for Foresters, Purdue University Press, 1982, p. 70.
D.R. Bower, “Volume-weight Relationships for Loblolly Pine Sawlogs,” J. Forestry 60, 1962, pp. 411-412.
R.R. Buckman, “Growth and Yield of Red Pine in Minnesota,” U.S. Department of Agriculture,Technical Bulletin 1272, 1962.
H.E. Burkhart, “Cubic-foot Volume of Loblolly Pine to Any Merchantable Top Limit,” So. J. Appl. For. 1, 1977, pp. 7-9.
C.V. Bylin, “Volume Prediction from Stump Diameter and Stump Height of Selected Species in Louisiana,” U.S. Forest Serv., Southern Forest. Expt. Sta., Res. Paper SO-182, 1982, p. 11.
J.R. Clutter et al., Timber Management: A Quantitative Approach, Wiley, 1983, p. 333.
T.S. Coile and F. X. Schumacher, Soil-site Relations, Stand Structure, and Yields of Slash and Loblolly Pine Plantations in the Southern United States, T.S. Coile, 1964.
G.E. Dixon (Comp.), “Essential FVS: A User’s Guide to the Forest Vegetation Simulator,” Internal Report, U.S. Department of Agriculture, Forest Service, Forest Management Service Center, 2002, p. 189.
M.B. Edwards and W.H. McNab, “Biomass Prediction for Young Southern Pines,” J. Forestry, 77, 1979, pp. 291-292.
A.D. Kozak, D.D. Munro, and J.H.G. Smith, “Taper Functions and Their Application in Forest Inventory,” Forestry Chronicle 45, 1969, pp. 278-283.
A.L. MacKinney and L.E. Chaiken, “Volume, Yield, and Growth of Loblolly Pine in the Mid-Atlantic Coastal Region,” U.S. Forest. Serv. Appalachian Forest Expt. Sta., Tech. Note 33, 1939, p. 30.
C.L. Miner, N.R. Walters, and M.L. Belli, “A Guide to the TWIGS Program for the North Central United States,” USDA Forest Serv., North Central Forest Expt. Sta., Gen. Tech. Rep. NC-125, 1988, p. 105.
J.W. Moser, Jr. and O.F. Hall, “Deriving Growth and Yield Functions for Uneven-aged Forest Stands,” Forest Sci. 15, 1969, pp. 183-188.
F.J. Richards, “A Flexible Growth Function for Empirical Use,” J. Exp. Botany, vol. 10, no. 2 1959, pp. 290-300.
Society of American Foresters, Terminology of Forest Science, Technology, Practice, and Products,Washington,D.C., Society of American Foresters, 1971, p. 349.
A.R. Stage, “Prognosis Model for Stand Development,” U.S. Department of Agriculture, Forest Service, Intermountain Forest and Range Expt. Sta., Res. Pap. INT-137, 1973, p. 32.
L.M. Tritton and J.W. Hornbeck, “Biomass Equations for Major Tree Species of the Northeast,” USDA For. Serv. Gen. Tech. Rep. NE-GTR-69, 1982.
K.B. Trousdell, D.E. Beck, and F.T. Lloyd, “Site Index for Loblolly Pine in the Atlantic Coastal Plain of the Carolinas and Virginia,” U.S. Southeastern Forest Expt. Sta., 1974, p. 115.
H.J. Wiant et al., “Equations for Predicting Weights of Some Appalachian Hardwoods,” West Virginia Univ. Agric. and Forest Expt. Sta., Coll. of Agric. and Forest., West Virginia Forest. Notes, no. 7, 1979.
W.R. Wykoff, N.L. Crookston, and A.R. Stage, “User’s Guide to the Stand Prognosis Model,” U.S. Department of Agriculture, Forest Service, Intermountain Forest and Range Expt. Sta., Gen. Tech. Rep. INT-133, 1982, p. 112.
As forest and natural resource managers, we must be aware of how our timber management practices impact the biological communities in which they occur. A silvicultural prescription is going to influence not only the timber we are growing but also the plant and wildlife communities that inhabit these stands. Landowners, both public and private, often require management of non-timber components, such as wildlife, along with meeting the financial objectives achieved through timber management. Resource managers must be cognizant of the effect management practices have on plant and wildlife communities. The primary interface between timber and wildlife is habitat, and habitat is simply an amalgam of environmental factors necessary for species survival (e.g., food or cover). The key component to habitat for most wildlife is vegetation, which provides food and structural cover. Creating prescriptions that combine timber and wildlife management objectives is crucial for sustainable, long-term balance in the system.
So how do we develop a plan that will encompass multiple land use objectives? Knowledge is the key. We need information on the habitat required by the wildlife species of interest and we need to be aware of how timber harvesting and subsequent regeneration will affect the vegetative characteristics of the system. In other words, we need to understand the diversity of organisms present in the community and appreciate the impact our management practices will have on this system.
Diversity of organisms and the measurement of diversity have long interested ecologists and natural resource managers. Diversity is variety and at its simplest level it involves counting or listing species. Biological communities vary in the number of species they contain (richness) and relative abundance of these species (evenness). Species richness, as a measure on its own, does not take into account the number of individuals of each species present. It gives equal weight to those species with few individuals as it does to a species with many individuals. Thus a single yellow birch has as much influence on the richness of an area as 100 sugar maple trees. Evenness is a measure of the relative abundance of the different species making up the richness of an area. Consider the following example.
Example $1$:
Number of Individuals

Tree Species     Sample 1     Sample 2
Sugar Maple      167          391
Beech            145          24
Yellow Birch     134          31
Both samples have the same richness (3 species) and the same number of individuals (446). However, the first sample has more evenness than the second. The number of individuals is more evenly distributed between the three species. In the second sample, most of the individuals are sugar maples with fewer beech and yellow birch trees. In this example, the first sample would be considered more diverse.
A diversity index is a quantitative measure that reflects the number of different species and how evenly the individuals are distributed among those species. Typically, the value of a diversity index increases when the number of types increases and the evenness increases. For example, communities with a large number of species that are evenly distributed are the most diverse and communities with few species that are dominated by one species are the least diverse. We are going to examine several common measures of species diversity.
Simpson’s Index
Simpson (1949) developed an index of diversity that is computed as:
$D = \sum^R_{i=1} (\dfrac {n_i(n_i-1)}{N(N-1)})$
where ni is the number of individuals in species i, N is the total number of individuals in the sample, and R is the number of species. An equivalent formula is:
$D = \sum^R_{i=1} p_i^2$
where $p_i$ is the proportional abundance for each species and R is the total number of species in the sample. Simpson’s index is a weighted arithmetic mean of proportional abundance and measures the probability that two individuals randomly selected from a sample will belong to the same species. Since the mean of the proportional abundance of the species increases with decreasing number of species and increasing abundance of the most abundant species, D takes small values in data sets of high diversity and large values in data sets of low diversity. The value of Simpson’s D ranges from 0 to 1, with 0 representing infinite diversity and 1 representing no diversity, so the larger the value of $D$, the lower the diversity. For this reason, Simpson’s index is usually expressed as its inverse (1/D) or its complement (1-D), which is also known as the Gini-Simpson index. Let’s look at an example.
Example $2$:calculating Simpson’s Index
We want to compute Simpson’s $D$ for this hypothetical community with three species.
Species          No. of individuals
Sugar Maple      35
Beech            19
Yellow Birch     11
First, calculate N.
$N = 35 + 19 + 11 = 65 \nonumber$
Then compute the index using the number of individuals for each species:
$D = \sum^R_{i=1} (\dfrac {n_i(n_i-1)}{N(N-1)}) = (\frac {35(34)}{65(64)} +\frac {19(18)}{65(64)} + \frac {11(10)}{65(64)}) = 0.3947 \nonumber$
The inverse is found to be:
$\frac {1}{0.3947} = 2.5336 \nonumber$
Using the inverse, the value of this index starts with 1 as the lowest possible figure. The higher the value of this inverse index, the greater the diversity. If we use the complement of Simpson’s D, the value is:
$1-0.3947 = 0.6053 \nonumber$
This version of the index has values ranging from 0 to 1, but now, the greater the value, the greater the diversity of your sample. This complement represents the probability that two individuals randomly selected from a sample will belong to different species. It is very important to clearly state which version of Simpson’s D you are using when comparing diversity.
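The hand calculation above is easy to script. A minimal sketch, using the counts from this example:

```python
def simpsons_D(counts):
    """Simpson's D from a list of individual counts per species."""
    N = sum(counts)
    return sum(n * (n - 1) for n in counts) / (N * (N - 1))

counts = [35, 19, 11]        # sugar maple, beech, yellow birch
D = simpsons_D(counts)
print(round(D, 4))           # 0.3947
print(round(1 / D, 2))       # 2.53  (inverse form)
print(round(1 - D, 4))       # 0.6053  (complement, the Gini-Simpson index)
```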
Shannon-Weiner Index
The Shannon-Weiner index (Barnes et al. 1998) was developed from information theory and is based on measuring uncertainty. The degree of uncertainty of predicting the species of a random sample is related to the diversity of a community. If a community has low diversity (dominated by one species), the uncertainty of prediction is low; a randomly sampled species is most likely going to be the dominant species. However, if diversity is high, uncertainty is high. It is computed as:
$H' = -\sum^R_{i=1} p_i \ ln(p_i) = ln (\frac {1}{\prod^R_{i=1} p^{p_i}_i})$
where pi is the proportion of individuals that belong to species i and R is the number of species in the sample. Since the sum of the pi’s equals unity by definition, the denominator equals the weighted geometric mean of the pi values, with the pi values being used as weights. The term in parentheses equals true diversity D and H’ = ln(D). When all species in the data set are equally common, all pi values = 1/R and the Shannon-Weiner index equals ln(R). The more unequal the abundances of the species, the larger the weighted geometric mean of the pi values and the smaller the index. If abundance is primarily concentrated into one species, the index will be close to zero.
An equivalent and computationally easier formula is:
$H' = \frac {N ln \ N -\sum (n_i ln \ n_i)}{N}$
where N is the total number of individuals and ni is the number of individuals in species i. The Shannon-Weiner index is most sensitive to the number of species in a sample, so it is usually considered to be biased toward measuring species richness.
Let’s compute the Shannon-Weiner diversity index for the same hypothetical community in the previous example.
Example $3$:Calculating Shannon-Weiner Index
Species          No. of individuals
Sugar Maple      35
Beech            19
Yellow Birch     11
We know that N = 65. Now let’s compute the index:
$H' = \dfrac {271.335 - (124.437+55.944+26.377)}{65}=0.993 \nonumber$
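The same counts can be run through the computational formula in a few lines; exponentiating H′ recovers the true diversity mentioned above.

```python
import math

def shannon_H(counts):
    """Shannon-Weiner H' from individual counts per species, using the
    computational form (N ln N - sum(n_i ln n_i)) / N."""
    N = sum(counts)
    return (N * math.log(N) - sum(n * math.log(n) for n in counts)) / N

counts = [35, 19, 11]
H = shannon_H(counts)
print(round(H, 3))             # 0.993
print(round(math.exp(H), 2))   # about 2.70, the true diversity D (since H' = ln D)
```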
Rank Abundance Graphs
Species abundance distribution can also be expressed through rank abundance graphs. A common approach is to plot some measure of species abundance against their rank order of abundance. Such a plot allows the user to compare not only relative richness but also evenness. Species abundance models (also called abundance curves) use all available community information to create a mathematical model that describes the number and relative abundance of all species in a community. These models include the log-normal, geometric, logarithmic, and MacArthur’s broken-stick model. Many ecologists use these models as a way to express resource partitioning where the abundance of a species is equivalent to the percentage of space it occupies (Magurran 1988). Abundance curves offer an alternative to single number diversity indices by graphically describing community structure.
Figure $1$. Generic Rank-abundance diagram of three common mathematical models used to fit species abundance distributions: Motomura’s geometric series, Fisher’s logseries, and Preston’s log-normal series (modified from Magurran 1988) by Aedrake09.
Let’s compare the indices and a very simple abundance distribution in two different situations. Stands A and B both have the same number of species (same richness), but the number of individuals in each species is more similar in Stand A (greater evenness). In Stand B, species 1 has the most individuals, with the remaining nine species having a substantially smaller number of individuals per species. Richness, the complement to Simpson’s D, and Shannon’s H’ are computed for both stands. These two diversity indices incorporate both richness and evenness. In the abundance distribution graph, richness can be compared on the x-axis and evenness by the shape of the distribution. Because Stand A displays greater evenness it has greater overall diversity than Stand B. Notice that Stand A has higher values for both Simpson’s and Shannon’s indices compared to Stand B.
Figure $2$. Two stands comparing richness, Simpson’s D, and Shannon’s index.
Indices of diversity vary in computation and interpretation so it is important to make sure you understand which index is being used to measure diversity. It is unsuitable to compare diversity between two areas when different indices are computed for each area. However, when multiple indices are computed for each area, the sampled areas will rank similarly in diversity as measured by the different indices. Notice in this previous example both Simpson’s and Shannon’s index rank Stand A as more diverse and Stand B as less diverse.
Similarity between Sites
There are also indices that compare the similarity (and dissimilarity) between sites. The ideal objective is to express the ecological similarity of different sites; however, it is important to identify the aim or focus of the investigation in order to select the most appropriate index. While many indices are available, van Tongeren (1995) states that most of the indices do not have a firm theoretical basis and suggests that practical experience should guide the selection of available indices.
The Jaccard index (1912) compares two sites based on the presence or absence of species and is used with qualitative data (e.g., species lists). It is based on the idea that the more species both sites have in common, the more similar they are. The Jaccard index is the proportion of species out of the total species list of the two sites, which is common to both sites:
$SJ = \frac {c} {(a + b + c)}$
where SJ is the similarity index, c is the number of shared species between the two sites and a and b are the number of species unique to each site. Sørenson (1948) developed a similarity index that is frequently referred to as the coefficient of community (CC):
$CC = \frac {2c} {(a + b + 2c)}$
As you can see, this index differs from Jaccard’s in that the number of species shared between the two sites is divided by the average number of species instead of the total number of species for both sites. For both indices, the higher the value the more ecologically similar two sites are.
If quantitative data are available, a similarity ratio (Ball 1966) or a percentage similarity index, such as Gauch (1982), can be computed. Not only do these indices compare number of similar and dissimilar species present between two sites, but also incorporate abundance. The similarity ratio is:
$SR_{ij} = \dfrac {\sum y_{ki}y_{kj}}{\sum y_{ki}^2 +\sum y_{kj}^2 -\sum(y_{ki}y_{kj})}$
where yki is the abundance of the kth species at site i (sites i and j are compared). Notice that this equation resolves to Jaccard’s index when just presence or absence data is available. The percent similarity index is:
$PS_{ij} = \dfrac {200\sum min (y_{ki},y_{kj})} {\sum y_{ki}+\sum y_{kj}}$
Again, notice how this equation resolves to Sørenson’s index with qualitative data only. So let’s look at a simple example of how these indices allow us to compare similarity between three sites. The following example presents hypothetical data on species abundance from three different sites containing seven different species (A-G).
Table $0$. Abundance of species A-G at the three hypothetical sites.

Species     Site 1     Site 2     Site 3
A           4          0          1
B           0          1          0
C           0          0          0
D           1          0          1
E           1          4          0
F           3          1          1
G           1          0          3
Let’s begin by computing Jaccard’s and Sørenson’s indices for the three comparisons (site 1 vs. site 2, site 1 vs. site 3, and site 2 vs. site 3).
$SJ1,2=\frac {2}{(3+1+2)}=0.33$ $SJ1,3 = \frac {4}{(4+1+0)}=0.80$ $SJ2,3 =\frac {1}{(1+2+3)} = 0.17 \nonumber$
$CC1,2=\frac {2(2)}{(3+1+2(2))} = 0.50$ $CC1,3 =\frac {2(4)}{(1+0+2(4))} = 0.89$ $CC2,3 =\frac {2(1)}{(2+3+2(1))} = 0.29 \nonumber$
Both of these qualitative indices declare that sites 1 and 3 are the most similar and sites 2 and 3 are the least similar. Now let’s compute the similarity ratio and the percent similarity index for the same site comparisons.
$SR_{1,2}=\dfrac {(4 \times 0)+(0 \times 1) +(0\times 0)+(1\times 0)+(1\times4)+(3\times 1)+(1\times 0)}{(4^2+0^2+0^2+1^2+1^2+3^2+1^2)+(0^2+1^2+0^2+0^2+4^2+1^2+0^2)-[(4 \times 0)+(0 \times 1) +(0\times 0)+(1\times 0)+(1\times4)+(3\times 1)+(1\times 0)]} \nonumber$
$SR_{1,2}= \dfrac {7}{28+18-7} = 0.18$
$SR_{1,3}=\dfrac {(4\times 1)+(0\times 0)+(0\times 0)+(1\times 1)+(1\times 0)+(3\times 1)+(1\times 3)}{(4^2 +0^2+0^2+1^2+1^2+3^2+1^2)+(1^2+0^2+0^2+1^2+0^2+1^2+3^2)-[(4\times 1)+(0\times 0)+(0\times 0)+(1\times 1)+(1\times 0)+(3\times 1)+(1\times 3)]} \nonumber$
$SR_{1,3}= \dfrac {11}{28+12-11} = 0.38$
$SR_{2,3}=\dfrac {(0\times 1)+(1\times 0)+(0\times 0)+(0\times 1) +(4\times 0) +(1\times 1) +(0\times 3)}{(0^2+1^2+0^2+0^2+4^2+1^2+0^2)+(1^2+0^2+0^2+1^2+0^2+1^2+3^2)-[(0\times 1)+(1\times 0)+(0\times 0)+(0\times 1) +(4\times 0) +(1\times 1) +(0\times 3)]} \nonumber$
$SR_{2,3}= \dfrac {1}{18+12-1} = 0.03$
$PS1,2=\dfrac {200(0+0+0+0+1+1+0)}{(4+0+0+1+1+3+1)+(0+1+0+0+4+1+0)}=25.0 \nonumber$
$PS1,3=\dfrac {200(1+0+0+1+0+1+1)}{(4+0+0+1+1+3+1)+(1+0+0+1+0+1+3)} = 50.0 \nonumber$
$PS2,3=\dfrac {200(0+0+0+0+0+1+0)}{(0+1+0+0+4+1+0)+(1+0+0+1+0+1+3)} = 16.7 \nonumber$
A matrix of percent similarity values allows for easy interpretation (especially when comparing more than three sites).
$\begin{array}{c|cc} & \mathbf{1} & \mathbf{2} \ \hline \mathbf{2} & 25.0 & \ \mathbf{3} & 50.0 & 16.7 \end{array} \nonumber$
Table $1$. A matrix of percent similarity for three sites.
The quantitative indices return the same conclusions as the qualitative indices. Sites 1 and 3 are the most similar ecologically, and sites 2 and 3 are the least similar; and also site 2 is most unlike the other two sites.
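All four indices are straightforward to compute directly from the abundance table. The sketch below reproduces the site 1 versus site 3 comparison:

```python
def shared_and_unique(x, y):
    """Counts of species shared by both sites (c) and unique to each (a, b)."""
    a = sum(1 for xi, yi in zip(x, y) if xi > 0 and yi == 0)
    b = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi > 0)
    c = sum(1 for xi, yi in zip(x, y) if xi > 0 and yi > 0)
    return a, b, c

def jaccard(x, y):
    a, b, c = shared_and_unique(x, y)
    return c / (a + b + c)

def sorenson(x, y):
    a, b, c = shared_and_unique(x, y)
    return 2 * c / (a + b + 2 * c)

def similarity_ratio(x, y):
    cross = sum(xi * yi for xi, yi in zip(x, y))
    return cross / (sum(xi ** 2 for xi in x) + sum(yi ** 2 for yi in y) - cross)

def percent_similarity(x, y):
    return 200 * sum(min(xi, yi) for xi, yi in zip(x, y)) / (sum(x) + sum(y))

# Abundances of species A-G at the three hypothetical sites.
site1 = [4, 0, 0, 1, 1, 3, 1]
site2 = [0, 1, 0, 0, 4, 1, 0]
site3 = [1, 0, 0, 1, 0, 1, 3]

print(round(jaccard(site1, site3), 2))             # 0.80
print(round(sorenson(site1, site3), 2))            # 0.89
print(round(similarity_ratio(site1, site3), 2))    # 0.38
print(round(percent_similarity(site1, site3), 1))  # 50.0
```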
Habitat Suitability Index (HSI)
In 1980, the U.S. Fish and Wildlife Service (USFWS) developed a procedure for documenting predicted impacts to fish and wildlife from proposed land and water resource development projects. The Habitat Evaluation Procedures (HEP) (Schamberger and Farmer 1978) were developed in response to the need to document the non-monetary value of fish and wildlife resources. HEP incorporates population and habitat theories for each species and is based on the assumption that habitat quality and quantity can be numerically described so that changes to the area can be assessed and compared. It is a species-habitat approach to impact assessment, and habitat quality for a specific species is quantified using a habitat suitability index (HSI).
Habitat suitability index (HSI) models provide a numerical index of habitat quality for a specific species (Schamberger et al. 1982) and in general assume a positive, linear relationship between carrying capacity (number of animals supported by some unit area) and HSI. Today’s natural resource manager often faces economically and socially important decisions that will affect not only timber but wildlife and its habitat. HSI models provide managers with tools to investigate the requirements necessary for survival of a species. Understanding the relationships between animal habitat and forest management prescription is vital towards a more comprehensive management approach of our natural resources. An HSI model synthesizes habitat use information into a framework appropriate for fieldwork and is scaled to produce an index value between 0.0 (unsuitable habitat) to 1.0 (optimum habitat), with each increment of change being identical to another. For example, a change in HSI from 0.4 to 0.5 represents the same magnitude of change as from 0.7 to 0.8. The HSI values are multiplied by area of available habitat to obtain Habitat Units (HUs) for individual species. The U.S. Fish and Wildlife Service (USFWS) has documented a series of HSI models for a wide variety of species (FWS/OBS-82/10).
Let’s examine a simple HSI model for the marten (Martes americana), which inhabits late successional forest communities in North America (Allen 1982). An HSI model must begin with habitat use information: an understanding of the species’ needs in terms of food, water, cover, reproduction, and range. For the marten, the winter cover requirements are more restrictive than cover requirements for any other season, so it was assumed that if adequate winter cover was available, habitat requirements for the rest of the year would not be limiting. Additionally, all winter habitat requirements are satisfied in boreal evergreen forests. Given this, the researchers identified four crucial variables for winter cover that needed to be included in the model.
Figure $3$. Habitat requirements for the marten.
For each of these four winter cover variables (V1, V2, V3, and V4), suitability index graphs were created to examine the relationship between various conditions of these variables and suitable habitat for the marten. A reproduction of the graph for % tree canopy closure is presented below.
Figure $4$. Suitability index graph for percent canopy cover.
Notice that any canopy cover less than 25% results in unacceptable habitat based on this variable alone. However, once 50% canopy cover is reached, the suitability index reaches 1.0 and optimum habitat for this variable is achieved. The following equation was created to combine these four suitability indices into a winter cover value for the marten:
$(V_1 \times V_2 \times V_3 \times V_4)^{1/2} \nonumber$
Since winter cover was the only life requisite considered in this model, the HSI equals the winter cover value. As you can see, the more life requisites included in the model, the more complex the model becomes.
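A small sketch of how the pieces fit together is shown below. The canopy-closure suitability index follows the graph described above (0 below 25%, 1.0 at 50% or more) and is assumed to rise linearly in between; the other three suitability indices are supplied directly as values already read from their own suitability graphs, so the specific numbers here are hypothetical.

```python
def canopy_closure_SI(percent_closure):
    """Suitability index for % tree canopy closure (V1): 0 below 25%, 1.0 at 50%
    or more, assumed linear in between (the exact shape comes from the published curve)."""
    if percent_closure < 25:
        return 0.0
    if percent_closure >= 50:
        return 1.0
    return (percent_closure - 25) / 25.0

def marten_winter_cover_value(v1, v2, v3, v4):
    """Winter cover value from the four suitability indices; since winter cover is
    the only life requisite in this model, this value is also the HSI."""
    return (v1 * v2 * v3 * v4) ** 0.5

v1 = canopy_closure_SI(40)                           # 0.6
print(marten_winter_cover_value(v1, 0.8, 1.0, 0.9))  # about 0.66
```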
While HSI values identify the quality of the habitat for a specific species, wildlife diversity as a whole is a function of size and spatial arrangement of the treated stands (Porter 1986). Horizontal and vertical structural diversity are important. Generally speaking, the more stands of different character an area contains, the greater the wildlife diversity. The spatial distribution of differing types of stands supports animals that need multiple cover types. In order to promote wildlife species diversity, a manager must develop forest management prescriptions that vary the spatial and temporal patterns of timber reproduction, thereby providing greater horizontal and vertical structural diversity.
Figure $5$: Bird species diversity nesting across a forest to field gradient (After Strelke and Dickson 1980).
Typically, even-aged management reduces vertical structural diversity, but options such as the shelterwood method tend to mitigate this problem. The selection system tends to promote both horizontal and vertical diversity.
Integrated natural resource management can be a complicated process but not impossible. Vegetation response to silvicultural prescriptions provides the foundation for understanding the wildlife response. By examining the present characteristics of the managed stands, understanding the future response due to management, and comparing those with the requirements of specific species, we can achieve habitat manipulation together with timber management.
10.03: Reference
Aedrake09. “Modified logseries,” Wikipedia, en.Wikipedia.org/wiki/File:Co...eWhittaker.jpg, 2009.
A.W. Allen, “Habitat Suitability Index Models: Marten,” U.S.D.I. Fish and Wildlife Service, FWS/OBS-82/10.11, 1982, 9 pp.
B.V. Barnes et al., Forest Ecology, 4th ed., Wiley, 1998.
P. Jaccard, “The Distribution of the Flora of the Alpine Zone,” New Phytologist 11, 1912, pp. 37-50.
A.E. Magurran, Ecological Diversity and Its Measurement, Princeton Univ. Press, 1988.
W.F. Porter, “Integrating Wildlife Management with Even-aged Timber Systems,” Managing Northern Hardwoods: Proceedings of a Silvicultural Symposium, ed. R. Nyland, SUNY College of Environmental Science and Forestry, 23-25 June, 1986, pp. 319-337.
M. Schamberger and A. Farmer, “The Habitat Evaluation Procedures: Their Application in Project Planning and Impact Evaluation,” Trans. N. A. Wildlife and Natural Resource Conf. 43, 1978, pp. 274-283.
E.H. Simpson, “Measurement of Diversity,” Nature 163, 1949, p. 688.
T. Sørenson, “A Method of Establishing Groups of Equal Amplitude in Plant Sociology Based on Similarity of Species Content,” Det. Kong. Danske Vidensk. Selsk. Biol. Skr. (Copenhagen) vol. 5, no. 4, 1948, pp. 1-34.
W.K. Strelke and J.G. Dickson, “Effect of Forest Clear-cut Edge on Breeding Birds in East Texas,” J. Wildl. Manage. vol. 44, no. 3, 1980, pp. 559-567.
U.S.D.I. Fish and Wildlife Service, “Habitat as a Basis for Environmental Assessment,” 101 ESM, 1980.
O.F.R. van Tongeren, “Cluster Analysis,” Data Analysis in Community and Landscape Ecology, Eds. R.H.G. Jongman, C.J.F. Ter Braak, and O.F.R. van Tongeren, 1995, pp. 174-212.
Experiment 1
You are unhappy with the logging company you hired to thin a stand of red pine. You carefully laid out the skid trails leaving bumper trees to avoid excess damage to the remaining trees. In the contract, it is stated that the logging company would pay a penalty (3 times the stumpage rate) for trees damaged beyond the agreed amount of five or more damaged trees per acre. You want to estimate the number of damaged trees per acre to see if they exceeded this amount. You take 27 samples, from which you compute the sample mean, and then construct a 95% confidence interval about the mean number of damaged trees per acre.
2   4   0   3   5   0   0   1   3
2   7   4   8   10  0   2   1   1
5   3   5   6   4   9   5   3   6
Enter these data in the first column of the Minitab worksheet and label it “Trees.” Now calculate the sample mean and sample standard deviation. Stat>Basic Statistics>Display Descriptive Statistics. Select the column with your data in the variable box.
a) sample mean: ____________________________
sample standard deviation: ___________________
Examine the normal probability plot for this data set. Remember, for a sample size less than n = 30, we must verify the assumption of normality if we do not know that the random variable is normally distributed. Go to GRAPH > PROBABILITY PLOT. Enter the column with your data in the “Graph variables” box and click OK.
b) Would you say that this distribution is normal?
c) Calculate the 95% confidence interval by hand using $\bar x \pm t_{\alpha/2}(\frac {s}{\sqrt {n}})$ and the t-table.
95% CI for the mean number of damaged trees:____________________________________
Now find the 95% confidence interval for the mean using Minitab.
Go to STAT> Basic Statistics> 1-sample t…Enter data in “Samples in columns.” You do not have to enter the standard deviation but select OPTIONS and set the confidence level (make sure it is for 95%) and select “Alternative:not equal.”
d) 95% CI for the mean number of damaged trees: __________________________________
e) Do you have enough statistical evidence to state that the logging company has exceeded the damage limit? Why?
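If you would like to check the hand formula outside of Minitab, the short Python sketch below implements the same t-interval. The data shown are a small made-up sample rather than the damaged-tree counts above, so you can still work the lab question yourself.

```python
import math
from scipy import stats

def t_confidence_interval(data, confidence=0.95):
    """Two-sided t confidence interval for a mean: x_bar +/- t_{alpha/2} * s / sqrt(n)."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    margin = t_crit * s / math.sqrt(n)
    return mean - margin, mean + margin

print(t_confidence_interval([3, 5, 4, 6, 2, 5, 4]))  # interval centered on the sample mean 4.14
```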
Experiment 2
The amount of sewage and industrial pollution dumped into a body of water affects the health of the water by reducing the amount of dissolved oxygen available for aquatic life. If the population mean dissolved oxygen drops below five parts per million (ppm), a level some scientists think is marginal for supplying enough dissolved oxygen for fish, some remedial action will be attempted. Given the expense of remediation, a decision to take action will be made only if there is sufficient evidence to support the claim that the mean dissolved oxygen has DECREASED below 5 ppm. Below are weekly readings from the same location in a river over a two-month time period.
5.2, 4.9, 5.1, 4.2, 4.7, 4.5, 5.0, 5.2, 4.8, 4.6, 4.8
The population standard deviation is unknown and we have a small sample (n ≤ 30). You must verify the assumption of normality. Go to GRAPH > PROBABILITY PLOT. Examine the normal probability plot. Does the distribution look normal?
Use DESCRIPTIVE STATISTICS (Basic Statistics>Display Descriptive Statistics) to get the mean and sample standard deviation.
Now test the claim that the mean dissolved oxygen is less than 5ppm using α = 0.05
a) First, state the null and alternative hypotheses
H0:__________________________________
H1: __________________________________
b) Compute the test statistic by hand $t = \frac {\bar x - \mu}{s/\sqrt {n}}$
c) Find the critical value from the t-table: ________________________________________
d) Do you reject the null hypothesis or fail to reject the null hypothesis?
Now use Minitab to do the hypothesis test. Go to STAT > BASIC STAT > 1-SAMPLE t. Check PERFORM HYPOTHESIS TEST and enter the hypothesized mean (5.00). Click OPTIONS and enter the confidence level (1-α) and select alternative hypothesis (H1). Click OK. Check to see that the null and alternative hypotheses shown in the session window are correct.
e) What is the p-value for this test?
f) Do you reject or fail to reject the null hypothesis?
g) State your conclusion:
Experiment 3
A forester believes that tent caterpillars are doing a significant amount of damage to the growth of the hardwood tree species in his stand. He has growth data from 21 plots before the infestation. Since then, he has re-measured those same plots and wants to know if there has been a significant reduction in the annual diameter growth.
Before     After
0.17       0.15
0.22       0.23
0.19       0.17
0.2        0.14
0.12       0.13
0.13       0.11
0.15       0.13
0.16       0.17
0.16       0.12
0.19       0.16
0.25       0.26
0.24       0.21
0.21       0.21
0.18       0.15
0.19       0.17
0.22       0.2
0.24       0.19
0.25       0.24
0.24       0.25
0.14       0.1
0.11       0.11
You need to compute the differences between the before values and the after values. To create a new variable (diff), type “diff” in the header of the column you want to use. Select CALC>CALCULATOR. In the “Expressions” box, type in the equation “Before-After.” In the box “Store results in variable” type “diff.” Click OK.
You now have a new data set of the differences with which you will complete your analyses. Compute basic descriptive statistics to get the sample mean $\bar d$ and sample standard deviation $s_d$ of the differences. Use these statistics to test the claim that there has been a reduction in annual diameter growth. You can answer this question by using either a hypothesis test or confidence interval.
a) H0:____________________________________
H1: ____________________________________
$t= \frac {\bar d -\mu_d}{s_d/\sqrt {n}}$ or $\bar d \pm t_{\alpha/2} \frac {s_d}{\sqrt{n}}$
Do you reject or fail to reject the null hypothesis?
Now let Minitab do the work for you. Select STAT> Basic Statistics> Paired t… Select SAMPLES IN COLUMNS. Enter the before as the First sample and after data as the Second sample. Select OPTIONS to set the confidence level and alternative hypothesis. Make sure the Test mean is set to 0.0. Click OK.
b) Write the test statistic and p-value
c) Write a complete conclusion that answers the question.
Experiment 4
Alternative energy is an important topic these days and a researcher is studying a solar electric system. Each day at the same time he collected voltage readings from a meter connected to the system and the data are given below. Is there a significant difference in the mean voltage readings for the different types of days? First do an F-test to test for equal variances and then test the means using the appropriate 2-sample t-test based on the results from the F-test. Please state a complete conclusion for this problem. α = 0.05.
Sunny – 13.5, 15.8, 13.2, 13.9, 13.8, 14.0, 15.2, 12.1, 12.9, 14.9
Cloudy – 12.7, 12.5, 12.6, 12.7, 13.0, 13.0, 12.1, 12.2, 12.9, 12.7
F-Test
a) Write the null and alternative hypotheses to test the claim that the variances are not equal.
H0:____________________________________ H1: ____________________________________
Select STAT>BASIC STAT>2 Variances. In the Data box select “Samples in different columns” and enter Sunny in the First box and Cloudy in the Second box. Click OPTIONS and in Hypothesized Ratio box select Variance1/Variance2. Make sure the Alternative is set at “Not equal.” Click OK. Look at the p-value for the F-test at the bottom of the output.
b) Do you reject for fail to reject the null hypothesis?
c) Can you assume equal variances?
Now conduct a 2-sample t-test (you should have rejected the null hypothesis in the F-test and assumed unequal variances). STAT>BASIC STAT>2-Sample t…Select the button for “Samples in different columns” and put Sunny in the First box and Cloudy in the Second box. Click OPTIONS and set the confidence level and select the correct alternative hypothesis. Set the Test difference at 0.0. Click OK.
d) What is the p-value for this test?
e) Do you reject or fail to reject the null hypothesis? State your conclusion.
One-way ANOVA Computer Lab
Name: ______________________________________________________
Experiment 1
A forester working with uneven-aged northern hardwoods wants to know if there is a significant difference in total merchantable sawtimber volume (m3ha-1) produced from stands using three different methods of selection system and a 15-yr cutting cycle. The following data are the total merchantable volume from 7 sample plots for each method. If you find a significant difference (reject Ho), then test the multiple comparisons for significant differences. Report the findings using all available information. α=0.05.
Single Tree     Group Selection     Patch Strip
108.6           104.2               102.1
110.9           103.9               101.4
112.4           109.4               100.3
106.3           105.2               95.6
101.4           106.3               102.9
114.6           107.2               99.8
117             105.8               103.5
Write the null and alternative hypotheses.
H0: ____________________________________
H1: ____________________________________
Open Minitab and label the first column as Volume and the second column as Method. Enter all of the volumes in the first column and the methods in the second:
Volume     Method
108.6      Single
110.9      Single
104.2      Group
103.9      Group
102.1      Patch
101.4      Patch
Select STAT>ANOVA>One-way. In the Response box select Volume, and in the Factor box select Method. Click on the Comparisons box. Select Tukeys, family error rate “5.” This tells Minitab that you want to control the experiment-wise error using Tukey’s method while keeping the overall level of significance at 5% across all multiple comparisons. Click OK.
State the p-value from the ANOVA table ____________________________________
Write the value for the S2b ___________ and the S2w (MSE) ____________________
Do you reject or fail to reject the null hypothesis? ______________________________
Using the Grouping Information from the Tukey Method, describe the differences in volume produced using the three methods.
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Now refer to the Tukey 95% Simultaneous Confidence intervals for the multiple comparisons. What is the Individual confidence interval level? __________________ This is the adjusted level of significance used for all the multiple comparisons that keeps the 5% level of significance across the total experiment.
Using these confidence intervals, describe the estimated differences in sawtimber volume due to the three different treatments.
Example: The group method results in greater levels of sawtimber volume compared to patch. The group method yields, on average, 0.327 to 10.073 m3 more sawtimber volume per plot than the patch method.
Compare “Single” and “Patch,” and “Single” and “Group.”
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Experiment 2
A plant physiologist is studying the rate of transpirational water loss (ml) of plants growing under five levels of soil moisture stress. This species is an important component to the wildlife habitat in this area and she wants to make sure it survives in an area that tends to be dry. She randomly assigns 18 pots to each treatment (N = 90). She is measuring total rate of water transpiring from the leaves (ml) per pot per unit area. Is there a significant difference in the transpiration rates between the levels of water stress (days)? α = 0.05.
0 DAYS     5 DAYS     10 DAYS     20 DAYS     30 DAYS
7.78       7.15       9.1         4.72        1.05
8.09       9.12       5.86        3.53        1.29
7.27       7.67       9.45        4.96        1.11
11.35      10.82      7.14        5           0.83
11.94      12.31      6.87        3.82        1.08
10.89      9.76       8.72        4.36        1.09
10.93      8.46       8.58        2.91        0.75
9.16       11.01      9.93        4.91        0.99
7.83       7.54       9.28        4.99        0.71
8.6        9.48       6.65        4.95        1.02
9.32       9.47       10.55       3.28        1.01
6.46       10.2       7.93        3.53        1.08
8.12       6.04       7.68        5.37        1.99
10.47      7.99       5.42        6.54        3.01
5.98       8.05       4.99        5.51        2.61
6.9        7.42       5.29        4.24        2.99
7.57       5.76       7.65        4.39        2.62
9.17       7.78       4.75        4.16        1.98
Write the null and alternative hypotheses.
H0: ____________________________________
H1: ____________________________________
State the p-value from the ANOVA table ____________________________________
Do you reject or fail to reject the null hypothesis? ______________________________
Using the Grouping Information using the Tukey Method, describe the differences in water loss between the five levels of water stress (0, 5, 10, 20, and 30).
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Now refer to the Tukey 95% Simultaneous Confidence intervals for the multiple comparisons. What is the Individual confidence interval level? __________________ This is the adjusted level of significance used for all the multiple comparisons that keeps the 5% level of significance across the total experiment.
Using these confidence intervals, describe the estimated differences in water loss between the five different treatments.
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Experiment 3
A rifle club performed an experiment on a randomly selected group of first-time shooters. The purpose was to determine whether shooting accuracy is affected by method of sighting used: only the right eye open, only the left eye open, or both eyes open. Fifteen shooters were all given similar training except in the method of sighting. Their scores are recorded below. At the 0.05 level of significance, is there sufficient evidence to reject the claim that the three methods of sighting are equally effective? α = 0.05.
Right     Left     Both
13        10       15
9         18       16
17        15       15
13        11       12
14        15       16
Write the null and alternative hypotheses.
H0: ____________________________________
H1: ____________________________________
State the p-value from the ANOVA table ____________________________________
Do you reject or fail to reject the null hypothesis? ______________________________
Give a complete conclusion.
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Why do you think you were not able to identify any differences between the sighting methods?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Name: ______________________________________________________
You are studying the growth of a hybrid species of Alaskan pine in three levels of soil moisture (wet, moderate, and dry) over a period of 30 days (0, 5, 10, 20, and 30). You want to determine if this species grows differently over time given different starting levels of soil moisture. Use the given data to test this claim (α = 0.05). If the interaction is significant, at what point does the difference in growth between the levels of soil moisture over time become significant? Use the factor plot and the Grouping information to specifically identify the difference in your conclusion.
Moisture     Days     Growth     Moisture     Days     Growth     Moisture     Days     Growth
Dry          0        7.78       Moderate     0        10.926     Wet          0        8.116
Dry          0        8.09       Moderate     0        9.162      Wet          0        10.473
Dry          0        7.27       Moderate     0        7.83       Wet          0        8.654
Dry          0        11.35      Moderate     0        8.604      Wet          0        6.901
Dry          0        11.94      Moderate     0        9.324      Wet          0        7.565
Dry          0        10.89      Moderate     0        6.462      Wet          0        9.169
Dry          5        7.152      Moderate     5        8.456      Wet          5        10.039
Dry          5        9.117      Moderate     5        11.012     Wet          5        9.994
Dry          5        7.671      Moderate     5        7.541      Wet          5        8.045
Dry          5        10.823     Moderate     5        9.482      Wet          5        9.445
Dry          5        12.309     Moderate     5        9.473      Wet          5        8.024
Dry          5        9.756      Moderate     5        10.2       Wet          5        7.783
Dry          10       9.096      Moderate     10       8.582      Wet          10       7.679
Dry          10       5.864      Moderate     10       9.934      Wet          10       11.671
Dry          10       9.445      Moderate     10       9.279      Wet          10       10.567
Dry          10       7.136      Moderate     10       6.651      Wet          10       9.66
Dry          10       6.869      Moderate     10       10.546     Wet          10       7.646
Dry          10       8.716      Moderate     10       7.927      Wet          10       8.953
Dry          20       4.716      Moderate     20       2.903      Wet          20       7.368
Dry          20       3.528      Moderate     20       4.91       Wet          20       6.539
Dry          20       4.964      Moderate     20       4.998      Wet          20       7.034
Dry          20       5.004      Moderate     20       4.954      Wet          20       7.258
Dry          20       3.824      Moderate     20       3.279      Wet          20       6.309
Dry          20       4.356      Moderate     20       3.528      Wet          20       7.223
Dry          30       1.053      Moderate     30       0.748      Wet          30       4.909
Dry          30       1.287      Moderate     30       0.997      Wet          30       5.891
Dry          30       1.11       Moderate     30       0.7        Wet          30       4.223
Dry          30       0.832      Moderate     30       1.018      Wet          30       3.997
Dry          30       1.082      Moderate     30       1.007      Wet          30       2.616
Dry          30       1.095      Moderate     30       1.083      Wet          30       3.995
Open Minitab and enter the data into a spreadsheet. Select STAT>ANOVA>General Linear Model.
Click in the Response box and select GROWTH, then enter MOISTURE, DAYS, and MOISTURE*DAYS (the interaction term) in the Model box, as shown.
Under OPTIONS, select “Adjusted (Type III)” under Sums of Squares. Click OK.
Under COMPARISONS, select “Pairwise comparisons” using “Tukey” method and enter the two main effects and interaction (MOISTURE, DAYS, and MOISTURE*DAYS) in the terms box (click in the box first to select).
Check the Grouping Information box. Click OK.
Under RESULTS, select “Analysis of Variance Table” for Display of Results. Click OK.
Under FACTOR PLOTS, enter MOISTURE and DAYS in both the main effects and interaction plot box. Click OK. Click OK.
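The lab is written for Minitab, but if you would like to reproduce the analysis in R, a minimal sketch is given below. It assumes the data above have been read into a data frame called growth.dat with columns Moisture, Days, and Growth; the data frame name is an assumption, not part of the lab.

```
# Hypothetical R sketch of the same two-way ANOVA with interaction.
growth.dat$Moisture <- factor(growth.dat$Moisture)
growth.dat$Days     <- factor(growth.dat$Days)
growth.aov <- aov(Growth ~ Moisture * Days, data = growth.dat)
summary(growth.aov)                    # tests both main effects and the interaction
TukeyHSD(growth.aov, "Moisture:Days")  # pairwise comparisons for the interaction
interaction.plot(growth.dat$Days, growth.dat$Moisture,
                 growth.dat$Growth)    # interaction (factor) plot
```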
Is the interaction term significant? __________________
Write the p-value ________________________________
Use the third Grouping Information Using Tukey Method (for the interaction) and the Factor plot to determine where the differences are for each treatment.
Attach a complete conclusion describing the differences in growth for this species over the 30 days for the 3 different levels of soil moisture. | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/11%3A_Biometric_Labs/11.03%3A_Biometrics_Lab_3.txt |
Name: ______________________________________________________
Experiment 1
The following data were collected on Old Faithful geyser in Yellowstone Park. The x-variable is time between eruptions and the y-variable is length of eruptions.
X (time between eruptions)   Y (length of eruption)
12.17                        1.88
11.63                        1.77
12.03                        1.83
12.15                        1.83
11.30                        1.70
11.70                        1.82
12.27                        1.93
11.60                        1.77
11.72                        1.83
12.10                        1.89
11.70                        1.80
11.40                        1.72
11.22                        1.75
11.42                        1.73
11.53                        1.74
11.50                        1.77
11.90                        1.87
11.86                        1.84
a) Determine if a relationship exists between the 2 variables using a scatterplot and the linear correlation coefficient. Select Graph> Scatterplot. Select the Simple plot and click OK. Enter the response variable (length of eruptions) in the Y variables box, and the predictor variable (time between eruptions) in the X variables box. Click OK. Describe the relationship that you see.
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
b) Calculate the linear correlation coefficient. Statistics> Basic Stats> Correlation. Enter the 2 variables in the Variables box and click OK.
r = ____________________________________
What two pieces of information about the relationship between these two variables does the linear correlation coefficient tell you?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
c) Find a least squares regression line treating “time between eruptions” as the predictor variable (x) and “length of eruptions” as the response variable (y). Stat>Regression>General Regression. Enter “length of eruptions” in the Response box. Enter “time between eruptions” in the Model box. Click on Options and make sure that 95% is selected for all confidence intervals. Click on Graphs and select the Residual plot “Residual versus fits.” Click Results and make sure the Regression equation, Coefficient table, Display confidence intervals, Summary of model, Analysis of Variance table, and prediction tables are checked. Click OK.
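If you want to cross-check the Minitab output, a minimal R sketch of the same fit is shown below. It assumes the data above are entered into two vectors, here called interval (time between eruptions) and duration (length of eruptions); both names are illustrative.

```
# Hypothetical R sketch of the same simple linear regression (the lab uses Minitab).
interval <- c(12.17, 11.63, 12.03, 12.15, 11.30, 11.70, 12.27, 11.60, 11.72,
              12.10, 11.70, 11.40, 11.22, 11.42, 11.53, 11.50, 11.90, 11.86)
duration <- c(1.88, 1.77, 1.83, 1.83, 1.70, 1.82, 1.93, 1.77, 1.83,
              1.89, 1.80, 1.72, 1.75, 1.73, 1.74, 1.77, 1.87, 1.84)
plot(interval, duration)                        # scatterplot
cor(interval, duration)                         # linear correlation coefficient r
eruption.lm <- lm(duration ~ interval)
summary(eruption.lm)                            # equation, R-squared, slope test
confint(eruption.lm)                            # 95% CIs for intercept and slope
plot(fitted(eruption.lm), resid(eruption.lm))   # residuals-versus-fits plot
```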
Write the regression equation __________________
What is the value of R2? _______________________
What does this mean?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Examine the residual model. Do you see any problems?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
What is the value of the regression standard error? _____________________________
Write the confidence intervals for the y-intercept ______________________________
and slope ______________________________________________________________
Use the output to test if the slope is significantly different from zero. Write the null and alternative hypotheses for this test.
H0:____________________________________
H1: ____________________________________
Using the test statistic and p-value from the Minitab output to test this claim.
Test statistic_______________________________ p-value _______________________________
Conclusion:
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
d) Using the regression equation, what would be the length of the eruption if the time between eruptions is 11.42 min.?
Experiment 2
The index of biotic integrity (IBI) is a measure of water quality in streams. The sample data given in the table below comes from the Piedmont forest region. The table gives the data for IBI and forested area in square kilometers. Let Forest Area be the predictor variable (x) and IBI be the response variable (y).
Create a scatterplot and describe the relationship between these variables. Compute the linear correlation coefficient.
r = ____________________________________
Create a regression model for this data set following the steps from the first example. Write the regression model.
________________________________________________________________________
Is there significant evidence to support the claim that IBI increases with Forest Area? Write the test statistic/p-value used for this slope test along with your answer.
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
The researcher wants to estimate the population mean IBI for streams that have an average forested area of 48 sq. km. Click STAT>REGRESSION> GENERAL REGRESSION. Making sure that IBI is in the Response box and Forest Area is in the Model box, click on Prediction and enter 48 in the New observation for continuous predictors box and check Confidence limits. Click OK. Write the 95% confidence interval for mean IBI for streams in an average forested area of 48 sq. km. ______________________________________________________
You are working with a stream in an area with 19 sq. km. of forested area. Your management plan includes an afforestation project that will increase the forested area to 23 sq. km. You need to predict what the specific IBI would be for this stream when the forested area is increased. Create a prediction interval to estimate this IBI if the forested area increased to 23 sq. km.
Click STAT>REGRESSION>GENERAL REGRESSION. Making sure that IBI is in the Response box and Forest Area is in the Model box, click on Prediction and enter 23 in the New observation for continuous predictors box and check Prediction limits. Click OK. Write the 95% prediction interval for the IBI for this stream when the forested area is increased to 23 sq. km. ___________________________________________________
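As a cross-check outside Minitab, the same two intervals could be computed in R roughly as follows, assuming the IBI data have been read into a data frame called ibi.dat with columns IBI and ForestArea; the data frame and column names are illustrative assumptions.

```
# Hypothetical R sketch (the lab uses Minitab); 'ibi.dat', 'IBI', and
# 'ForestArea' are assumed, illustrative names.
ibi.lm <- lm(IBI ~ ForestArea, data = ibi.dat)
predict(ibi.lm, newdata = data.frame(ForestArea = 48),
        interval = "confidence")    # 95% CI for the mean IBI at 48 sq. km
predict(ibi.lm, newdata = data.frame(ForestArea = 23),
        interval = "prediction")    # 95% PI for an individual stream at 23 sq. km
```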
Explain the difference between the confidence and prediction intervals you just computed.
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________ | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/11%3A_Biometric_Labs/11.04%3A_Biometrics_Lab_4.txt |
Name: ______________________________________________________
You are working on an alternative energy source and biomass is a key component. You want to predict above-ground biomass for this region, and you believe that biomass is related to substrate (subsoil) variables of salinity, water acidity, potassium, sodium, and zinc. Your crew collects information on biomass and these five variables for 45 plots.
Experiment 1
Before you create this regression model, you must examine the relationships between each of the five predictor variables and biomass (the response variable). Create five scatterplots using biomass as the response variable (y) and each of the predictor variables (x). Compute the linear correlation coefficient for each pair. Describe the relationships.
GRAPH>Scatterplot>Simple>OK. The response variable (y-variable) is Bio and the five predictor variables are the x-variables. Look at the scatterplots and describe each relationship below. Next compute the correlation coefficient for each pair and write the r-value below. STAT>Basic Statistics>Correlation. You can easily do all correlations at once by creating a correlation matrix: put Bio and all five predictor variables in the Variables box together.
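If you prefer to check these results outside Minitab, a minimal R sketch is shown below; it assumes the data at the end of this lab have been read into a data frame called bio.dat with columns biomass, sal, pH, K, Na, and Zn (the data frame name is an assumption).

```
# Hypothetical R sketch (the lab uses Minitab).
pairs(bio.dat)    # all pairwise scatterplots at once
cor(bio.dat)      # correlation matrix; read the 'biomass' row (or column) for r
```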
Correlation (r) Description
Bio v. sal ______________________________________________________
Bio v. pH ______________________________________________________
Bio v. K _______________________________________________________
Bio v. Na ______________________________________________________
Bio v. Zn ______________________________________________________
Circle the above pair that has the strongest linear relationship.
Experiment 2
You are now going to create four regression models using the predictor variables. You will compare the adjusted R2, regression standard error, p-values for each coefficient, and the residuals for each model. Using this information, you will select the best model and state your reasons for this choice.
Begin with the full model using all five predictor variables. STAT>Regression>General Regression. Put Bio in the Response box and all five predictor variables in the Model box (see image). Click Results and make sure that the Regression equation, coefficient table, Display confidence intervals, Summary of Model, and Analysis of Variance Table are checked (see image). Click OK. Click Graphs and make sure that under Residual Plots that Individual plots and Residual versus Fits are selected (see image). Click OK.
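For readers working in R instead of Minitab, a rough sketch of the same model-building steps follows, using the same assumed bio.dat data frame as above. Which variable you remove at each step should come from your own output, so the variable dropped below is only an example.

```
# Hypothetical R sketch (the lab uses Minitab); 'bio.dat' is assumed as above.
model1 <- lm(biomass ~ sal + pH + K + Na + Zn, data = bio.dat)   # full model
summary(model1)                        # adjusted R-squared, std. error, p-values
plot(fitted(model1), resid(model1))    # residuals-versus-fits plot

# Refit after dropping the least significant predictor; 'sal' is only an example.
model2 <- update(model1, . ~ . - sal)
summary(model2)
```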
MODEL 1
Write the regression model _______________________________________________
Write the adj. R2 ________________________________________________________
Write the regression standard error _________________________________________
Examine the residual plot. Are there any problems? ____________________________
Write the variables which are NOT significant ________________________________
MODEL 2
Now remove the LEAST significant variable (highest p-value) and repeat the steps using only the remaining variables.
Write the regression model _______________________________________________
Write the adj. R2 ________________________________________________________
Write the regression standard error _________________________________________
Examine the residual plot. Are there any problems? ____________________________
Write the variables which are NOT significant ________________________________
MODEL 3
Now remove the LEAST significant variable (highest p-value) and repeat the steps using only the remaining variables.
Write the regression model _______________________________________________
Write the adj. R2 ________________________________________________________
Write the regression standard error _________________________________________
Examine the residual plot. Are there any problems? ____________________________
Write the variables which are NOT significant ________________________________
MODEL 4
Now remove the LEAST significant variable (highest p-value) and repeat the steps using only the remaining variables.
Write the regression model _______________________________________________
Write the adj. R2 ________________________________________________________
Write the regression standard error _________________________________________
Examine the residual plot. Are there any problems? ____________________________
Write the variables which are NOT significant ________________________________
Experiment 3
Select the best model and state your reasons for selecting this model.
$\begin{array}{cccccc} \text { biomass } & \text { sal } & \mathrm{pH} & \mathrm{K} & \mathrm{Na} & \mathrm{Zn} \ 676 & 33 & 5 & 1441.67 & 35184.5 & 16.4524 \ 516 & 35 & 4.75 & 1299.19 & 28170.4 & 13.9852 \ 1052 & 32 & 4.2 & 1154.27 & 26455 & 15.3276 \ 868 & 30 & 4.4 & 1045.15 & 25072.9 & 17.3128 \ 1008 & 33 & 5.55 & 521.62 & 31664.2 & 22.3312 \ 436 & 33 & 5.05 & 1273.02 & 25491.7 & 12.2778 \ 544 & 36 & 4.25 & 1346.35 & 20877.3 & 17.8225 \ 680 & 30 & 4.45 & 1253.88 & 25621.3 & 14.3516 \ 640 & 38 & 4.75 & 1242.65 & 27587.3 & 13.6826 \ 492 & 30 & 4.6 & 1281.95 & 26511.7 & 11.7566 \ 984 & 30 & 4.1 & 553.69 & 7886.5 & 9.882 \ 1400 & 37 & 3.45 & 494.74 & 14596 & 16.6752 \ 1276 & 33 & 3.45 & 525.97 & 9826.8 & 12.373 \ 1736 & 36 & 4.1 & 571.14 & 11978.4 & 9.4058 \ 1004 & 30 & 3.5 & 408.64 & 10368.6 & 14.9302 \ 396 & 30 & 3.25 & 646.65 & 17307.4 & 31.2865 \ 352 & 27 & 3.35 & 514.03 & 12822 & 30.1652 \ 328 & 29 & 3.2 & 350.73 & 8582.6 & 28.5901 \ 392 & 34 & 3.35 & 496.29 & 12369.5 & 19.8795 \ 236 & 36 & 3.3 & 580.92 & 14731.9 & 18.5056 \ 392 & 30 & 3.25 & 535.82 & 15060.6 & 22.1344 \ 268 & 28 & 3.25 & 490.34 & 11056.3 & 28.6101 \ 252 & 31 & 3.2 & 552.39 & 8118.9 & 23.1908 \ 236 & 31 & 3.2 & 661.32 & 13009.5 & 24.6917 \ 340 & 35 & 3.35 & 672.15 & 15003.7 & 22.6758 \ 2436 & 29 & 7.1 & 528.65 & 10225 & 0.3729 \ 2216 & 35 & 7.35 & 563.13 & 8024.2 & 0.2703 \ 2096 & 35 & 7.45 & 497.96 & 10393 & 0.3205 \ 1660 & 30 & 7.45 & 458.38 & 8711.6 & 0.2648 \ 2272 & 30 & 7.4 & 498.25 & 10239.6 & 0.2105 \ 824 & 26 & 4.85 & 936.26 & 20436 & 18.9875 \ 1196 & 29 & 4.6 & 894.79 & 12519.9 & 20.9687 \ 1960 & 25 & 5.2 & 941.36 & 18979 & 23.9841 \ 2080 & 26 & 4.75 & 1038.79 & 22986.1 & 19.9727 \ 1764 & 26 & 5.2 & 898.05 & 11704.5 & 21.3864 \ 412 & 25 & 4.55 & 989.87 & 17721 & 23.7063 \ 416 & 26 & 3.95 & 951.28 & 16485.2 & 30.5589 \ 504 & 26 & 3.7 & 939.83 & 17101.3 & 26.8415 \ 492 & 27 & 3.75 & 925.42 & 17849 & 27.7292 \ 636 & 27 & 4.15 & 954.11 & 16949.6 & 21.5699 \ 1756 & 24 & 5.6 & 720.72 & 11344.6 & 19.6531 \ 1232 & 27 & 5.35 & 782.09 & 14752.4 & 20.3295 \ 1400 & 26 & 5.5 & 773.3 & 13649.8 & 19.588 \ 1620 & 28 & 5.5 & 829.26 & 14533 & 20.1328 \ 1560 & 28 & 5.4 & 856.96 & 16892.2 & 19.242 \end{array} \nonumber$ | textbooks/stats/Applied_Statistics/Natural_Resources_Biometrics_(Kiernan)/11%3A_Biometric_Labs/11.05%3A_Biometrics_Lab_5.txt |
01: Introduction
Data mining is a phrase that has been popularly used to suggest the process of finding useful information from within a large collection of data. I like to think of data mining as encompassing a broad range of statistical techniques and tools that can be used to extract different types of information from your data. Which particular technique or tool to use depends on your specific goals.
One of the most fundamental of the broad range of data mining techniques that have been developed is regression modeling. Regression modeling is simply generating a mathematical model from measured data. This model is said to explain an output value given a new set of input values. Linear regression modeling is a specific form of regression modeling that assumes that the output can be explained using a linear combination of the input values.
A common goal for developing a regression model is to predict what the output value of a system should be for a new set of input values, given that you have a collection of data about similar systems. For example, as you gain experience driving a car, you begin to develop an intuitive sense of how long it might take you to drive somewhere if you know the type of car, the weather, an estimate of the traffic, the distance, the condition of the roads, and so on. What you really have done to make this estimate of driving time is construct a multi-factor regression model in your mind. The inputs to your model are the type of car, the weather, etc. The output is how long it will take you to drive from one point to another. When you change any of the inputs, such as a sudden increase in traffic, you automatically re-estimate how long it will take you to reach the destination.
This type of model building and estimating is precisely what we are going to learn to do more formally in this tutorial. As a concrete example, we will use real performance data obtained from thousands of measurements of computer systems to develop a regression model using the R statistical software package. You will learn how to develop the model and how to evaluate how well it fits the data. You also will learn how to use it to predict the performance of other computer systems.
As you go through this tutorial, remember that what you are developing is just a model. It will hopefully be useful in understanding the system and in predicting future results. However, do not confuse a model with the real system. The real system will always produce the correct results, regardless of what the model may say the results should be. | textbooks/stats/Computing_and_Modeling/Book%3A_Linear_Regression_Using_R_-_An_Introduction_to_Data_Modeling_(Lilja)/01%3A_Introduction/1.01%3A_Prelude_to_Linear_Regression.txt |
Suppose that we have measured the performance of several different computer systems using some standard benchmark program. We can organize these measurements into a table, such as the example data shown in Table 1.1. The details of each system are recorded in a single row. Since we measured the performance of n different systems, we need n rows in the table.
Table 1.1: An example of computer system performance data.
         Inputs                                         Output
System   Clock (MHz)   Cache (kB)   Transistors (M)    Performance
1        1500          64           2                  98
2        2000          128          2.5                134
...      ...           ...          ...                ...
i        ...           ...          ...                ...
n        1750          32           4.5                113
The first column in this table is the index number (or name) from 1 to n that we have arbitrarily assigned to each of the different systems measured. Columns 2-4 are the input parameters. These are called the independent variables for the system we will be modeling. The specific values of the
input parameters were set by the experimenter when the system was measured, or they were determined by the system configuration. In either case, we know what the values are and we want to measure the performance obtained for these input values. For example, in the first system, the processor’s clock was 1500 MHz, the cache size was 64 kbytes, and the processor contained 2 million transistors. The last column is the performance that was measured for this system when it executed a standard benchmark program. We refer to this value as the output of the system. More technically, this is known as the system’s dependent variable or the system’s response.
The goal of regression modeling is to use these n independent measurements to determine a mathematical function, f(), that describes the relationship between the input parameters and the output, such as:
performance = f(Clock,Cache,Transistors)
This function, which is just an ordinary mathematical equation, is the regression model. A regression model can take on any form. However, we will restrict ourselves to a function that is a linear combination of the input parameters. We will explain later that, while the function is a linear combination of the input parameters, the parameters themselves do not need to be linear. This linear combination is commonly used in regression modeling and is powerful enough to model most systems we are likely to encounter.
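To make the idea of a linear combination concrete, the following is the general form such a model would take for the three inputs shown in Table 1.1. The coefficients $a_0$ through $a_3$ are placeholders that the modeling process computes from the measured data, so this equation only illustrates the form, not an actual fitted model:

$\text{performance} = a_0 + a_1(\text{Clock}) + a_2(\text{Cache}) + a_3(\text{Transistors})$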
In the process of developing this model, we will discover how important each of these inputs are in determining the output value. For example, we might find that the performance is heavily dependent on the clock frequency, while the cache size and the number of transistors may be much less important. We may even find that some of the inputs have essentially no impact on the output making it completely unnecessary to include them in the model. We also will be able to use the model we develop to predict the performance we would expect to see on a system that has input values that did not exist in any of the systems that we actually measured. For instance, Table 1.2 shows three new systems that were not part of the set of systems that we previously measured. We can use our regression model to predict the performance of each of these three systems to replace the question marks in the table.
Table 1.2: An example in which we want to predict the performance of new systems n + 1, n + 2, and n + 3 using the previously measured results from the other n systems.
         Inputs                                         Output
System   Clock (MHz)   Cache (kB)   Transistors (M)    Performance
1        1500          64           2                  98
2        2000          128          2.5                134
...      ...           ...          ...                ...
i        ...           ...          ...                ...
...      ...           ...          ...                ...
n        1750          32           4.5                113
n + 1    2500          256          2.8                ?
n + 2    1560          128          1.8                ?
n + 3    900           64           1.5                ?
As a final point, note that, since the regression model is a linear combination of the input values, the values of the model parameters will automatically be scaled as we develop the model. As a result, the units used for the inputs and the output are arbitrary. In fact, we can rescale the values of the inputs and the output before we begin the modeling process and still produce a valid model. | textbooks/stats/Computing_and_Modeling/Book%3A_Linear_Regression_Using_R_-_An_Introduction_to_Data_Modeling_(Lilja)/01%3A_Introduction/1.02%3A_What_is_a_Linear_Regression_Model%3F.txt |
R is a computer language developed specifically for statistical computing. It is actually more than that, though. R provides a complete environment for interacting with your data. You can directly use the functions that are provided in the environment to process your data without writing a complete program. You also can write your own programs to perform operations that do not have built-in functions, or to repeat the same task multiple times, for instance.
R is an object-oriented language that uses vectors and matrices as its basic operands. This feature makes it quite useful for working on large sets of data using only a few lines of code. The R environment also provides excellent graphical tools for producing complex plots relatively easily. And, perhaps best of all, it is free. It is an open source project developed by many volunteers. You can learn more about the history of R, and download a copy to your own computer, from the R Project web site [13].
As an example of using R, here is a copy of a simple interaction with the R environment:

```
> x <- c(2, 4, 6, 8, 10, 12, 14, 16)
> x
[1]  2  4  6  8 10 12 14 16
> mean(x)
[1] 9
> var(x)
[1] 24
>
```

In this listing, the ">" character indicates that R is waiting for input. The line `x <- c(2, 4, 6, 8, 10, 12, 14, 16)` concatenates all of the values in the argument into a vector and assigns that vector to the variable `x`. Simply typing `x` by itself causes R to print the contents of the vector. Note that R treats vectors as a matrix with a single row. Thus, the "[1]" preceding the values is R's notation to show that this is the first row of the matrix `x`. The next line, `mean(x)`, calls a function in R that computes the arithmetic mean of the input vector, `x`. The function `var(x)` computes the corresponding variance.
This book will not make you an expert in programming using the R computer language. Developing good regression models is an interactive process that requires you to dig in and play around with your data and your models. Thus, I am more interested in using R as a computing environment for doing statistical analysis than as a programming language. Instead of teaching you the language’s syntax and semantics directly, this tutorial will introduce what you need to know about R as you need it to perform the specific steps to develop a regression model. You should already have some programming expertise so that you can follow the examples in the remainder of the book. However, you do not need to be an expert programmer.
1.04: What's Next?
Before beginning any sort of data analysis, you need to understand your data. Chapter 2 describes the sample data that will be used in the examples throughout this tutorial, and how to read this data into the R environment. Chapter 3 introduces the simplest regression model consisting of a single independent variable. The process used to develop a more complex regression model with multiple independent input variables is explained in Chapter 4. Chapter 5 then shows how to use this multi-factor regression model to predict the system response when given new input data. Chapter 6 explains in more detail the routines used to read a file containing your data into the R environment. The process used to develop a multi-factor regression model is summarized in Chapter 7 along with some suggestions for further reading. Finally, Chapter 8 provides some experiments you might want to try to expand your understanding of the modeling process. | textbooks/stats/Computing_and_Modeling/Book%3A_Linear_Regression_Using_R_-_An_Introduction_to_Data_Modeling_(Lilja)/01%3A_Introduction/1.03%3A_What_is_R%3F.txt |
Good data are the basis of any sort of regression model, because we use this data to actually construct the model. If the data is flawed, the model will be flawed. It is the old maxim of garbage in, garbage out. Thus, the first step in regression modeling is to ensure that your data is reliable. There is no universal approach to verifying the quality of your data, unfortunately. If you collect it yourself, you at least have the advantage of knowing its provenance. If you obtain your data from somewhere else, though, you depend on the source to ensure data quality. Your job then becomes verifying your source’s reliability and correctness as much as possible.
• 2.1: Missing Values
Any large collection of data is probably incomplete. That is, it is likely that there will be cells without values in your data table. These missing values may be the result of an error, such as the experimenter simply forgetting to fill in a particular entry. They also could be missing because that particular system configuration did not have that parameter available. Fortunately, R is designed to gracefully handle missing values.
• 2.2: Sanity Checking and Data Cleaning
• 2.3: The Example Data
• 2.4: Data Frames
The fundamental object used for storing tables of data in R is called a data frame. We can think of a data frame as a way of organizing data into a large table with a row for each system measured and a column for each parameter. An interesting and useful feature of R is that all the columns in a data frame do not need to be the same data type. Some columns may consist of numerical data, for instance, while other columns contain textual data.
• 2.5: Accessing a Data Frame
02: Understand Your Data
Any large collection of data is probably incomplete. That is, it is likely that there will be cells without values in your data table. These missing values may be the result of an error, such as the experimenter simply forgetting to fill in a particular entry. They also could be missing because that particular system configuration did not have that parameter available. For example, not every processor tested in our example data had an L2 cache. Fortunately, R is designed to gracefully handle missing values. R uses the notation NA to indicate that the corresponding value is not available.
Most of the functions in R have been written to appropriately ignore NA values and still compute the desired result. Sometimes, however, you must explicitly tell the function to ignore the NA values. For example, calling the mean() function with an input vector that contains NA values causes it to return NA as the result. To compute the mean of the input vector while ignoring the NA values, you must explicitly tell the function to remove the NA values using mean(x, na.rm=TRUE).
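As a quick, made-up illustration of this behavior (the values here are arbitrary):

```
# A small, made-up example of how R treats missing values.
x <- c(2, 4, NA, 8)
mean(x)              # returns NA because one element is missing
mean(x, na.rm=TRUE)  # returns 4.666667, the mean of the non-missing values
```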
2.02: Sanity Checking and Data Cleaning
Regardless of where you obtain your data, it is important to do some sanity checks to ensure that nothing is drastically flawed. For instance, you can check the minimum and maximum values of key input parameters (i.e., columns) of your data to see if anything looks obviously wrong. One of the exercises in Chapter 8 encourages you explore other approaches for verifying your data. R also provides good plotting functions to quickly obtain a visual indication of some of the key relationships in your data set. We will see some examples of these functions in Section 3.1.
If you discover obvious errors or flaws in your data, you may have to eliminate portions of that data. For instance, you may find that the performance reported for a few system configurations is hundreds of times larger than that of all of the other systems tested. Although it is possible that this data is correct, it seems more likely that whoever recorded the data simply made a transcription error. You may decide that you should delete those results from your data. It is important, though, not to throw out data that looks strange without good justification. Sometimes the most interesting conclusions come from data that on first glance appeared flawed, but was actually hiding an interesting and unsuspected phenomenon. This process of checking your data and putting it into the proper format is often called data cleaning.
It also is always appropriate to use your knowledge of the system and the relationships between the inputs and the output to inform your model building. For instance, from our experience, we expect that the clock rate will be a key parameter in any regression model of computer systems performance that we construct. Consequently, we will want to make sure that our models include the clock parameter. If the modeling methodology suggests that the clock is not important in the model, then using the methodology is probably an error. We additionally may have deeper insights into the physical system that suggest how we should proceed in developing a model. We will see a specific example of applying our insights about the effect of caches on system performance when we begin constructing more complex models in Chapter 4.
These types of sanity checks help you feel more comfortable that your data is valid. However, keep in mind that it is impossible to prove that your data is flawless. As a result, you should always look at the results of any regression modeling exercise with a healthy dose of skepticism and think carefully about whether or not the results make sense. Trust your intuition. If the results don’t feel right, there is quite possibly a problem lurking somewhere in the data or in your analysis.
2.03: The Example Data
I obtained the input data used for developing the regression models in the subsequent chapters from the publicly available CPU DB database [2]. This database contains design characteristics and measured performance results for a large collection of commercial processors. The data was collected over many years and is nicely organized using a common format and a standardized set of parameters. The particular version of the database used in this book contains information on 1,525 processors.
Many of the database’s parameters (columns) are useful in understanding and comparing the performance of the various processors. Not all of these parameters will be useful as predictors in the regression models, however. For instance, some of the parameters, such as the column labeled Instruction set width, are not available for many of the processors. Others, such as the Processor family, are common among several processors and do not provide useful information for distinguishing among them. As a result, we can eliminate these columns as possible predictors when we develop the regression model.
On the other hand, based on our knowledge of processor design, we know that the clock frequency has a large effect on performance. It also seems likely that the parallelism-related parameters, specifically, the number of threads and cores, could have a significant effect on performance, so we will keep these parameters available for possible inclusion in the regression model.
Technology-related parameters are those that are directly determined by the particular fabrication technology used to build the processor. The number of transistors and the die size are rough indicators of the size and complexity of the processor’s logic. The feature size, channel length, and FO4 (fanout-of-four) delay are related to gate delays in the processor’s logic. Because these parameters both have a direct effect on how much processing can be done per clock cycle and effect the critical path delays, at least some of these parameters could be important in a regression model that describes performance.
Finally, the memory-related parameters recorded in the database are the separate L1 instruction and data cache sizes, and the unified L2 and L3 cache sizes. Because memory delays are critical to a processor’s performance, all of these memory-related parameters have the potential for being important in the regression models.
The reported performance metric is the score obtained from the SPEC CPU integer and floating-point benchmark programs from 1992, 1995, 2000, and 2006 [6–8]. This performance result will be the regression model’s output. Note that performance results are not available for every processor running every benchmark. Most of the processors have performance results for only those benchmark sets that were current when the processor was introduced into the market. Thus, although there are more than 1,500 lines in the database representing more than 1,500 unique processor configurations, a much smaller number of performance results is available for each individual benchmark suite.
The fundamental object used for storing tables of data in R is called a data frame. We can think of a data frame as a way of organizing data into a large table with a row for each system measured and a column for each parameter. An interesting and useful feature of R is that all the columns in a data frame do not need to be the same data type. Some columns may consist of numerical data, for instance, while other columns contain textual data. This feature is quite useful when manipulating large, heterogeneous data files.
To access the CPU DB data, we first must read it into the R environment. R has built-in functions for reading data directly from files in the csv (comma separated values) format and for organizing the data into data frames. The specifics of this reading process can get a little messy, depending on how the data is organized in the file. We will defer the specifics of reading the CPU DB file into R until Chapter 6. For now, we will use a function called extract_data(), which was specifically written for reading the CPU DB file.
To use this function, copy both the all-data.csv and read-data.R files into a directory on your computer (you can download both of these files from this book’s web site shown on p. ii). Then start the R environment and set the local directory in R to be this directory using the File -> Change dir pull-down menu. Then use the File -> Source R code pull-down menu to read the read-data.R file into R. When the R code in this file completes, you should have six new data frames in your R environment workspace: int92.dat, fp92.dat, int95.dat, fp95.dat, int00.dat, fp00.dat, int06.dat, and fp06.dat.
The data frame int92.dat contains the data from the CPU DB database for all of the processors for which performance results were available for the SPEC Integer 1992 (Int1992) benchmark program. Similarly, fp92.dat contains the data for the processors that executed the Floating-Point 1992 (Fp1992) benchmarks, and so on. I use the .dat suffix to show that the corresponding variable name is a data frame.
Simply typing the name of the data frame will cause R to print the entire table. For example, here are the first few lines printed after I type int92.dat, truncated to fit within the page:

```
     nperf      perf clock threads cores ...
1  9.662070  68.60000   100       1     1 ...
2  7.996196  63.10000   125       1     1 ...
3 16.363872  90.72647   166       1     1 ...
4 13.720745  82.00000   175       1     1 ...
...
```

The first row is the header, which shows the name of each column. Each subsequent row contains the data corresponding to an individual processor. The first column is the index number assigned to the processor whose data is in that row. The next columns are the specific values recorded for that parameter for each processor. The function `head(int92.dat)` prints out just the header and the first few rows of the corresponding data frame. It gives you a quick glance at the data frame when you interact with your data.
Table 2.1 shows the complete list of column names available in these data frames. Note that the column names are listed vertically in this table, simply to make them fit on the page.
Table 2.1: The names and definitions of the columns in the data frames containing the data from CPU DB.
Column number Column name Definition
1 (blank) Processor index number
2 nperf Normalized performance
3 perf SPEC performance
4 clock Clock frequency (MHz)
5 threads Number of hardware threads available
6 cores Number of hardware cores available
7 TDP Thermal design power
8 transistors Number of transistors on the chip (M)
9 dieSize The size of the chip
10 voltage Nominal operating voltage
11 featureSize Fabrication feature size
12 channel Fabrication channel size
13 FO4delay Fan-out-four delay
14 L1icache Level 1 instruction cache size
15 L1dcache Level 1 data cache size
16 L2cache Level 2 cache size
17 L3cache Level 3 cache size
2.05: Accessing a Data Frame
We access the individual elements in a data frame using square brackets to identify a specific cell. For instance, the following accesses the data in the cell in row 15, column 12:
``````> int92.dat[15,12]
[1] 180``````
We can also access cells by name by putting quotes around the name:
`````` > int92.dat["71","perf"]
[1] 105.1``````
This expression returns the data in the row labeled `71` and the column labeled `perf`. Note that this is not row 71, but rather the row that contains the data for the processor whose name is `71`.
We can access an entire column by leaving the first parameter in the square brackets empty. For instance, the following prints the value in every row for the column labeled `clock`:
``````> int92.dat[,"clock"]
[1] 100 125 166 175 190 ...``````
Similarly, this expression prints the values in all of the columns for row 36:
``````> int92.dat[36,]
nperf perf clock threads cores ...
36 13.07378 79.86399 80 1 1 ...``````
The functions nrow() and ncol() return the number of rows and columns, respectively, in the data frame:
``````> nrow(int92.dat)
[1] 78
> ncol(int92.dat)
[1] 16``````
Because R functions can typically operate on a vector of any length, we can use built-in functions to quickly compute some useful results. For example, the following expressions compute the minimum, maximum, mean, and standard deviation of the `perf` column in the `int92.dat` data frame:
``````> min(int92.dat[,"perf"])
[1] 36.7
> max(int92.dat[,"perf"])
[1] 366.857
> mean(int92.dat[,"perf"])
[1] 124.2859
> sd(int92.dat[,"perf"])
[1] 78.0974``````
This square-bracket notation can become cumbersome when you do a substantial amount of interactive computation within the R environment. R provides an alternative notation using the $ symbol to more easily access a column. Repeating the previous example using this notation:
``````> min(int92.dat$perf)
[1] 36.7
> max(int92.dat$perf)
[1] 366.857
> mean(int92.dat$perf)
[1] 124.2859
> sd(int92.dat$perf)
[1] 78.0974``````
This notation says to use the data in the column named `perf` from the data frame named `int92.dat`. We can make yet a further simplification using the `attach` function. This function makes the corresponding data frame local to the current workspace, thereby eliminating the need to use the potentially awkward $ or square-bracket indexing notation. The following example shows how this works:
```> attach(int92.dat)
> min(perf)
[1] 36.7
> max(perf)
[1] 366.857
> mean(perf)
[1] 124.2859
> sd(perf)
[1] 78.0974```
To change to a different data frame within your local workspace, you must first detach the current data frame:
``````> detach(int92.dat)
> attach(fp00.dat)
> min(perf)
[1] 87.54153
> max(perf)
[1] 3369
> mean(perf)
[1] 1217.282
> sd(perf)
[1] 787.4139``````
Now that we have the necessary data available in the R environment, and some understanding of how to access and manipulate this data, we are ready to generate our first regression model. | textbooks/stats/Computing_and_Modeling/Book%3A_Linear_Regression_Using_R_-_An_Introduction_to_Data_Modeling_(Lilja)/02%3A_Understand_Your_Data/2.04%3A_Data_Frames.txt |
The simplest linear regression model finds the relationship between one input variable, which is called the predictor variable, and the output, which is called the system’s response. This type of model is known as a one-factor linear regression. To demonstrate the regression-modeling process, we will begin developing a one-factor model for the SPEC Integer 2000 (Int2000) benchmark results reported in the CPU DB data set. We will expand this model to include multiple input variables in Chapter 4.
03: One-Factor Regression
The first step in this one-factor modeling process is to determine whether or not it looks as though a linear relationship exists between the predictor and the output value. From our understanding of computer system design (that is, from our domain-specific knowledge), we know that the clock frequency strongly influences a computer system's performance. Consequently, we must look for a roughly linear relationship between the processor's performance and its clock frequency. Fortunately, R provides powerful and flexible plotting functions that let us visualize this type of relationship quite easily.
This R function call:
``> plot(int00.dat[,"clock"],int00.dat[,"perf"], main="Int2000", xlab="Clock", ylab="Performance")``
generates the plot shown in Figure 3.1. The first parameter in this function call is the value we will plot on the x-axis. In this case, we will plot the clock values from the int00.dat data frame as the independent variable on the x-axis. The dependent variable is the perf column from `int00.dat`, which we plot on the y-axis. The function argument `main="Int2000"` provides a title for the plot, while `xlab="Clock"` and `ylab="Performance"` provide labels for the x- and y-axes, respectively.

Figure 3.1: A scatter plot of the performance of the processors that were tested using the Int2000 benchmark versus the clock frequency.
This figure shows that the performance tends to increase as the clock frequency increases, as we expected. If we superimpose a straight line on this scatter plot, we see that the relationship between the predictor (the clock frequency) and the output (the performance) is roughly linear. It is not perfectly linear, however. As the clock frequency increases, we see a larger spread in performance values. Our next step is to develop a regression model that will help us quantify the degree of linearity in the relationship between the output and the predictor.
3.02: The Linear Model Function
We use regression models to predict a system’s behavior by extrapolating from previously measured output values when the system is tested with known input parameter values. The simplest regression model is a straight line. It has the mathematical form:
$y = a_0 + a_1 x_1$

where $x_1$ is the input to the system, $a_0$ is the y-intercept of the line, $a_1$ is the slope, and $y$ is the output value the model predicts.

R provides the function `lm()` that generates a linear model from the data contained in a data frame. For this one-factor model, R computes the values of $a_0$ and $a_1$ using the method of least squares. This method finds the line that most closely fits the measured data by minimizing the distances between the line and the individual data points. For the data frame `int00.dat`, we compute the model as follows:
```
> attach(int00.dat)
> int00.lm <- lm(perf ~ clock)
```
The first line in this example attaches the `int00.dat` data frame to the current workspace. The next line calls the `lm()` function and assigns the resulting linear model object to the variable `int00.lm`. We use the suffix `.lm` to emphasize that this variable contains a linear model. The argument in the `lm()` function, `(perf ~ clock)`, says that we want to find a model where the predictor `clock` explains the output `perf`.
Typing the variable’s name, `int00.lm`, by itself causes R to print the argument with which the function `lm()` was called, along with the computed coefficients for the regression model.
```
> int00.lm

Call:
lm(formula = perf ~ clock)

Coefficients:
(Intercept)        clock
    51.7871       0.5863
```
In this case, the y-intercept is $a_0 = 51.7871$ and the slope is $a_1 = 0.5863$. Thus, the final regression model is:

$\text{perf} = 51.7871 + 0.5863 \times \text{clock}.$
The following code plots the original data along with the fitted line, as shown in Figure 3.2. The function `abline()` is short for (a,b)-line. It plots a line on the active plot window, using the slope and intercept of the linear model given in its argument.
```
> plot(clock,perf)
> abline(int00.lm)
```
The information we obtain by typing `int00.lm` shows us the regression model’s basic values, but does not tell us anything about the model’s quality. In fact, there are many different ways to evaluate a regression model’s quality. Many of the techniques can be rather technical, and the details of them are beyond the scope of this tutorial. However, the function `summary()` extracts some additional information that we can use to determine how well the data fit the resulting model. When called with the model object `int00.lm` as the argument, `summary()` produces the following information:
```
> summary(int00.lm)

Call:
lm(formula = perf ~ clock)

Residuals:
    Min      1Q  Median      3Q     Max
-634.61 -276.17  -30.83   75.38 1299.52

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 51.78709   53.31513   0.971    0.332
clock        0.58635    0.02697  21.741   <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 396.1 on 254 degrees of freedom
Multiple R-squared: 0.6505, Adjusted R-squared: 0.6491
F-statistic: 472.7 on 1 and 254 DF, p-value: < 2.2e-16
```
Let’s examine each of the items presented in this summary in turn.
```
> summary(int00.lm)

Call:
lm(formula = perf ~ clock)
```
These first few lines simply repeat how the lm() function was called. It is useful to look at this information to verify that you actually called the function as you intended.
```
Residuals:
    Min      1Q  Median      3Q     Max
-634.61 -276.17  -30.83   75.38 1299.52
```
The residuals are the differences between the actual measured values and the corresponding values on the fitted regression line. In Figure 3.2, each data point’s residual is the distance that the individual data point is above (positive residual) or below (negative residual) the regression line. `Min` is the minimum residual value, which is the distance from the regression line to the point furthest below the line. Similarly, `Max` is the distance from the regression line of the point furthest above the line. `Median` is the median value of all of the residuals. The `1Q` and `3Q` values are the points that mark the first and third quartiles of all the sorted residual values.
How should we interpret these values? If the line is a good fit with the data, we would expect residual values that are normally distributed around a mean of zero. (Recall that a normal distribution is also called a Gaussian distribution.) This distribution implies that there is a decreasing probability of finding residual values as we move further away from the mean. That is, a good model’s residuals should be roughly balanced around and not too far away from the mean of zero. Consequently, when we look at the residual values reported by `summary()`, a good model would tend to have a median value near zero, minimum and maximum values of roughly the same magnitude, and first and third quartile values of roughly the same magnitude. For this model, the residual values are not too far off what we would expect for Gaussian-distributed numbers. In Section 3.4, we present a simple visual test to determine whether the residuals appear to follow a normal distribution.
```
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 51.78709   53.31513   0.971    0.332
clock        0.58635    0.02697  21.741   <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
This portion of the output shows the estimated coefficient values. These values are simply the fitted regression model values from Equation 3.2. The Std. Error column shows the statistical `standard error` for each of the coefficients. For a good model, we typically would like to see a standard error that is at least five to ten times smaller than the corresponding coefficient. For example, the standard error for `clock` is 21.7 times smaller than the coefficient value (0.58635/0.02697 = 21.7). This large ratio means that there is relatively little variability in the slope estimate, $a_1$. The standard error for the intercept, $a_0$, is 53.31513, which is roughly the same as the estimated value of 51.78709 for this coefficient. These similar values suggest that the estimate of this coefficient for this model can vary significantly.

The last column, labeled `Pr(>|t|)`, shows the probability that the corresponding coefficient is not relevant in the model. This value is also known as the significance or p-value of the coefficient. In this example, the probability that `clock` is not relevant in this model is $2 \times 10^{-16}$, a tiny value. The probability that the intercept is not relevant is 0.332, or about a one-in-three chance that this specific intercept value is not relevant to the model. There is an intercept, of course, but we are again seeing indications that the model is not predicting this value very well.
The symbols printed to the right in this summary that is, the asterisks, periods, or spaces are intended to give a quick visual check of the coefficients’ significance. The line labeled `Signif. codes:` gives these symbols’ meanings. Three asterisks (***) means 0 < p ≤ 0.001, two asterisks (**) means 0.001 < p ≤ 0.01, and so on.
R uses the column labeled `t value` to compute the p-values and the corresponding significance symbols. You probably will not use these values directly when you evaluate your model’s quality, so we will ignore this column for now.
```
Residual standard error: 396.1 on 254 degrees of freedom
Multiple R-squared: 0.6505, Adjusted R-squared: 0.6491
F-statistic: 472.7 on 1 and 254 DF, p-value: < 2.2e-16
```
These final few lines in the output provide some statistical information about the quality of the regression model's fit to the data. The Residual standard error is a measure of the total variation in the residual values. If the `residuals` are distributed normally, the first and third quantiles of the previous residuals should be about 1.5 times this `standard error`.

The number of `degrees of freedom` is the total number of measurements or observations used to generate the model, minus the number of coefficients in the model. This example had 256 unique rows in the data frame, corresponding to 256 independent measurements. We used this data to produce a regression model with two coefficients: the slope and the intercept. Thus, we are left with (256 - 2 = 254) degrees of freedom.
The `Multiple R-squared` value is a number between 0 and 1. It is a statistical measure of how well the model describes the measured data. We compute it by dividing the total variation that the model explains by the data's total variation. Multiplying this value by 100 gives a value that we can interpret as a percentage between 0 and 100. The reported $R^2$ of 0.6505 for this model means that the model explains 65.05 percent of the data's variation. Random chance and measurement errors creep in, so the model will never explain all data variation. Consequently, you should not ever expect an $R^2$ value of exactly one. In general, values of $R^2$ that are closer to one indicate a better-fitting model. However, a good model does not necessarily require a large $R^2$ value. It may still accurately predict future observations, even with a small $R^2$ value.

The `Adjusted R-squared` value is the $R^2$ value modified to take into account the number of predictors used in the model. The adjusted $R^2$ is always smaller than the $R^2$ value. We will discuss the meaning of the adjusted $R^2$ in Chapter 4, when we present regression models that use more than one predictor.
The final line shows the `F-statistic`. This value compares the current model to a model that has one fewer parameters. Because the one-factor model already has only a single parameter, this test is not particularly useful in this case. It is an interesting statistic for the multi-factor models, however, as we will discuss later.
3.04: Residual Analysis
The `summary()` function provides a substantial amount of information to help us evaluate a regression model’s fit to the data used to develop that model. To dig deeper into the model’s quality, we can analyze some additional information about the observed values compared to the values that the model predicts. In particular, residual analysis examines these residual values to see what they can tell us about the model’s quality.
Recall that the residual value is the difference between the actual measured value stored in the data frame and the value that the fitted regression line predicts for that corresponding data point. Residual values greater than zero mean that the regression model predicted a value that was too small compared to the actual measured value, and negative values indicate that the regression model predicted a value that was too large. A model that fits the data well would tend to over-predict as often as it under-predicts. Thus, if we plot the residual values, we would expect to see them distributed uniformly around zero for a well-fitted model.
The following function call produces the residuals plot for our model, shown in Figure 3.3.
``> plot(fitted(int00.lm),resid(int00.lm))``
In this plot, we see that the residuals tend to increase as we move to the right. Additionally, the residuals are not uniformly scattered above and below zero. Overall, this plot tells us that using the clock as the sole predictor in the regression model does not sufficiently or fully explain the data. In general, if you observe any sort of clear trend or pattern in the residuals, you probably need to generate a better model. This does not mean that our simple one-factor model is useless, though. It only means that we may be able to construct a model that produces tighter residual values and better predictions.
Another test of the residuals uses the quantile-versus-quantile, or Q-Q, plot. Previously we said that, if the model fits the data well, we would expect the residuals to be normally (Gaussian) distributed around a mean of zero. The Q-Q plot provides a nice visual indication of whether the residuals from the model are normally distributed. The following function calls generate the Q-Q plot shown in Figure 3.4:
``````> qqnorm(resid(int00.lm))
> qqline(resid(int00.lm))``````
Figure 3.4: The Q-Q plot for the one-factor model developed using the Int2000 data.
If the residuals were normally distributed, we would expect the points plotted in this figure to follow a straight line. With our model, though, we see that the two ends diverge significantly from that line. This behavior indicates that the residuals are not normally distributed. In fact, this plot suggests that the distribution’s tails are “heavier” than what we would expect from a normal distribution. This test further confirms that using only the clock as a predictor in the model is insufficient to explain the data.
Our next step is to learn to develop regression models with multiple input factors. Perhaps we will find a more complex model that is better able to explain the data.
A multi-factor regression model is a generalization of the simple one-factor regression model discussed in Chapter 3. It has n factors with the form:
$y = a_0 + a_1 x_1 + a_2 x_2 + \cdots + a_n x_n,$
where the $x_i$ values are the inputs to the system, the $a_i$ coefficients are the model parameters computed from the measured data, and y is the output value predicted by the model. Everything we learned in Chapter 3 for one-factor models also applies to multi-factor models. To develop this type of multi-factor regression model, we must also learn how to select the specific predictors to include in the model.
04: Multi-factor Regression
Before beginning model development, it is useful to get a visual sense of the relationships within the data. We can do this easily with the following function call:
``> pairs(int00.dat, gap=0.5)``
The `pairs()` function produces the plot shown in Figure 4.1. This plot provides a pairwise comparison of all the data in the` int00.dat` data frame. The `gap` parameter in the function call controls the spacing between the individual plots. Set it to zero to eliminate any space between plots.
As an example of how to read this plot, locate the box near the upper left corner labeled `perf`. This is the value of the performance measured for the `int00.dat` data set. The box immediately to the right of this one is a scatter
plot, with `perf` data on the vertical axis and `clock` data on the horizontal axis. This is the same information we previously plotted in Figure 3.1. By scanning through these plots, we can see any obviously significant relationships between the variables. For example, we quickly observe that there is a somewhat proportional relationship between `perf` and `clock`. Scanning down the `perf` column, we also see that there might be a weakly inverse relationship between `perf` and `featureSize`.
Notice that there is a perfect linear correlation between `perf` and `nperf`. This relationship occurs because `nperf` is a simple rescaling of `perf`. The reported benchmark performance values in the database (that is, the `perf` values) use different scales for different benchmarks. To directly compare the values that our models will predict, it is useful to rescale `perf` to the range [0,100]. We can do this quite easily with the following R code:
``````max_perf = max(perf)
min_perf = min(perf)
range = max_perf - min_perf
nperf = 100 * (perf - min_perf) / range``````
Note that this rescaling has no effect on the models we will develop, because it is a linear transformation of `perf`. For convenience and consistency, we use `nperf` in the remainder of this tutorial.
4.02: Identifying Potential Predictors
The first step in developing the multi-factor regression model is to identify all possible predictors that we could include in the model. To the novice model developer, it may seem that we should include all factors available in the data as predictors, because more information is likely to be better than not enough information. However, a good regression model explains the relationship between a system’s inputs and output as simply as possible. Thus, we should use the smallest number of predictors necessary to provide good predictions. Furthermore, using too many or redundant predictors builds the random noise in the data into the model. In this situation, we obtain an over-fitted model that is very good at predicting the outputs from the specific input data set used to train the model. It does not accurately model the overall system’s response, though, and it will not appropriately predict the system output for a broader range of inputs than those on which it was trained. Redundant or unnecessary predictors also can lead to numerical instabilities when computing the coefficients.
We must find a balance between including too few and too many predictors. A model with too few predictors can produce biased predictions. On the other hand, adding more predictors to the model will always cause the R2 value to increase. This can confuse you into thinking that the additional predictors generated a better model. In some cases, adding a predictor will improve the model, so the increase in the R2 value makes sense. In some cases, however, the R2 value increases simply because we’ve better modeled the random noise.
The adjusted R2 attempts to compensate for the regular R2’s behavior by changing the R2 value according to the number of predictors in the model. This adjustment helps us determine whether adding a predictor improves the fit of the model, or whether it is simply modeling the noise better. It is computed as:
$R_{adjusted}^{2} = 1-\frac{n-1}{n-m}\left(1-R^{2}\right)$
where n is the number of observations and m is the number of predictors in the model. If adding a new predictor to the model increases the previous model’s R2 value by more than we would expect from random fluctuations, then the adjusted R2 will increase. Conversely, it will decrease if removing a predictor decreases the R2 by more than we would expect due to random variations. Recall that the goal is to use as few predictors as possible, while still producing a model that explains the data well.
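As a small illustration of this formula, the following sketch defines a helper function (the name `adjusted_r2` is hypothetical, not part of R) and applies it to the one-factor Int2000 model from Chapter 3, which reported a Multiple R-squared of 0.6505 with 256 observations. Note that, to reproduce the value R reports, m must count all estimated coefficients, including the intercept.
``````adjusted_r2 <- function(r2, n, m) {
1 - ((n - 1) / (n - m)) * (1 - r2)
}
adjusted_r2(r2 = 0.6505, n = 256, m = 2) # about 0.6491, as reported by summary()``````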
Because we do not know a priori which input parameters will be useful predictors, it seems reasonable to start with all of the columns available in the measured data as the set of potential predictors. We listed all of the column names in Table 2.1. Before we throw all these columns into the modeling process, though, we need to step back and consider what we know about the underlying system, to help us find any parameters that we should obviously exclude from the start.
There are two output columns: perf and nperf. The regression model can have only one output, however, so we must choose only one column to use in our model development process. As discussed in Section 4.1, nperf is a linear transformation of perf that shifts the output range to be between 0 and 100. This range is useful for quickly obtaining a sense of future predictions’ quality, so we decide to use nperf as our model’s output and ignore the perf column.
Almost all the remaining possible predictors appear potentially useful in our model, so we keep them available as potential predictors for now. The only exception is TDP. The name of this factor, thermal design power, does not clearly indicate whether this could be a useful predictor in our model, so we must do a little additional research to understand it better. We discover [10] that thermal design power is “the average amount of power in watts that a cooling system must dissipate. Also called the ‘thermal guideline’ or ‘thermal design point,’ the TDP is provided by the chip manufacturer to the system vendor, who is expected to build a case that accommodates the chip’s thermal requirements.” From this definition, we conclude that TDP is not really a parameter that will directly affect performance. Rather, it is a specification provided by the processor’s manufacturer to ensure that the system designer includes adequate cooling capability in the final product. Thus, we decide not to include TDP as a potential predictor in the regression model.
In addition to excluding some apparently unhelpful factors (such as TDP) at the beginning of the model development process, we also should consider whether we should include any additional parameters. For example, the terms in a regression model add linearly to produce the predicted output. However, the individual terms themselves can be nonlinear, such as $a_i x_i^m$, where m does not have to be equal to one. This flexibility lets us include additional powers of the individual factors. We should include these non-linear terms, though, only if we have some physical reason to suspect that the output could be a nonlinear function of a particular input.
For example, we know from our prior experience modeling processor performance that empirical studies have suggested that cache miss rates are roughly proportional to the square root of the cache size [5]. Consequently, we will include terms for the square root (m = 1/2) of each cache size as possible predictors. We must also include first-degree terms (m = 1) of each cache size as possible predictors. Finally, we notice that only a few of the entries in the int00.dat data frame include values for the L3 cache, so we decide to exclude the L3 cache size as a potential predictor. Exploiting this type of domain-specific knowledge when selecting predictors ultimately can help produce better models than blindly applying the model development process.
The final list of potential predictors that we will make available for the model development process is shown in Table 4.1.
Table 4.1: The list of potential predictors to be used in the model development process.
4.03: The Backward Elimination Process
We are finally ready to develop the multi-factor linear regression model for the `int00.dat` data set. As mentioned in the previous section, we must find the right balance in the number of predictors that we use in our model. Too many predictors will train our model to follow the data’s random variations (noise) too closely. Too few predictors will produce a model that may not be as accurate at predicting future values as a model with more predictors.
We will use a process called backward elimination [1] to help decide which predictors to keep in our model and which to exclude. In backward elimination, we start with all possible predictors and then use `lm()` to compute the model. We use the `summary()` function to find each predictor’s significance level. The predictor with the least significance has the largest p-value. If this value is larger than our predetermined significance threshold, we remove that predictor from the model and start over. A typical threshold for keeping predictors in a model is p = 0.05, meaning that there is at least a 95 percent chance that the predictor is meaningful. A threshold of p = 0.10 also is not unusual. We repeat this process until the significance levels of all of the predictors remaining in the model are below our threshold.
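At each step, the relevant p-values can be read directly from the `summary()` output. If you prefer to extract them programmatically, the following sketch (assuming a fitted linear model object named `int00.lm`, as in the examples) pulls the coefficient table out of the summary and lists the least significant terms first:
``````coefs <- summary(int00.lm)$coefficients # one row per model coefficient
pvals <- coefs[, "Pr(>|t|)"] # the p-value column
sort(pvals, decreasing = TRUE)[1:3] # the three least significant terms``````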
Backward elimination, forward selection, and the various automated selection procedures all have their advantages and disadvantages, their supporters and detractors. I prefer the backward elimination process because it is usually straightforward to determine which factor we should drop at each step of the process. Determining which factor to try at each step is more difficult with forward selection. Backward elimination has a further advantage, in that several factors together may have better predictive power than any subset of these factors. As a result, the backward elimination process is more likely to include these factors as a group in the final model than is the forward selection process.
The automated procedures have a very strong allure because, as technologically savvy individuals, we tend to believe that this type of automated process will likely test a broader range of possible predictor combinations than we could test manually. However, these automated procedures lack intuitive insights into the underlying physical nature of the system being modeled. Intuition can help us answer the question of whether this is a reasonable model to construct in the first place.
As you develop your models, continually ask yourself whether the model “makes sense.” Does it make sense that factor i is included but factor j is excluded? Is there a physical explanation to support the inclusion or exclusion of any potential factor? Although the automated methods can simplify the process, they also make it too easy for you to forget to think about whether or not each step in the modeling process makes sense.
We previously identified the list of possible predictors that we can include in our models, shown in Table 4.1. We start the backward elimination process by putting all these potential predictors into a model for the int00.dat data frame using the lm() function.
> int00.lm <- lm(nperf ~ clock + threads + cores + transistors +
dieSize + voltage + featureSize + channel + FO4delay + L1icache +
sqrt(L1icache) + L1dcache + sqrt(L1dcache) + L2cache + sqrt(L2cache),
data=int00.dat)
This function call assigns the resulting linear model object to the variable int00.lm. As before, we use the suffix .lm to remind us that this variable is a linear model developed from the data in the corresponding data frame, int00.dat. The arguments in the function call tell lm() to compute a linear model that explains the output nperf as a function of the predictors separated by the “+” signs. The argument data=int00.dat explicitly passes to the lm() function the name of the data frame that should be used when developing this model. This data= argument is not necessary if we attach() the data frame int00.dat to the current workspace. However, it is useful to explicitly specify the data frame that lm() should use, to avoid confusion when you manipulate multiple models simultaneously.
The summary() function gives us a great deal of information about the linear model we just created:
> summary(int00.lm)
Call:
lm(formula = nperf ~ clock + threads + cores + transistors + dieSize +
voltage + featureSize + channel + FO4delay + L1icache + sqrt(L1icache) +
L1dcache + sqrt(L1dcache) + L2cache + sqrt(L2cache), data = int00.dat)
Residuals:
Min 1Q Median 3Q Max
-10.804 -2.702 0.000 2.285 9.809
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.108e+01 7.852e+01 -0.268 0.78927
clock 2.605e-02 1.671e-03 15.594 < 2e-16 ***
threads -2.346e+00 2.089e+00 -1.123 0.26596
cores 2.246e+00 1.782e+00 1.260 0.21235
transistors -5.580e-03 1.388e-02 -0.402 0.68897
dieSize 1.021e-02 1.746e-02 0.585 0.56084
voltage -2.623e+01 7.698e+00 -3.408 0.00117 **
featureSize 3.101e+01 1.122e+02 0.276 0.78324
channel 9.496e+01 5.945e+02 0.160 0.87361
FO4delay -1.765e-02 1.600e+00 -0.011 0.99123
L1icache 1.102e+02 4.206e+01 2.619 0.01111 *
sqrt(L1icache) -7.390e+02 2.980e+02 -2.480 0.01593 *
L1dcache -1.114e+02 4.019e+01 -2.771 0.00739 **
sqrt(L1dcache) 7.492e+02 2.739e+02 2.735 0.00815 **
L2cache -9.684e-03 1.745e-03 -5.550 6.57e-07 ***
sqrt(L2cache) 1.221e+00 2.425e-01 5.034 4.54e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.632 on 61 degrees of freedom (179 observations deleted due to missingness)
Multiple R-squared: 0.9652, Adjusted R-squared: 0.9566 F-statistic: 112.8 on 15 and 61 DF, p-value: < 2.2e-16
Notice a few things in this summary: First, a quick glance at the residuals shows that they are roughly balanced around a median of zero, which is what we like to see in our models. Also, notice the line, (179 observations deleted due to missingness). This tells us that in 179 of the rows in the data frame (that is, in 179 of the processors for which performance results were reported for the Int2000 benchmark), some of the values in the columns that we would like to use as potential predictors were missing. These NA values caused R to automatically remove these data rows when computing the linear model.
The total number of observations used in the model equals the number of degrees of freedom remaining (61 in this case) plus the total number of predictors in the model. Finally, notice that the R2 and adjusted R2 values are relatively close to one, indicating that the model explains the nperf values well. Recall, however, that these large R2 values may simply show us that the model is good at modeling the noise in the measurements. We must still determine whether we should retain all these potential predictors in the model.
To continue developing the model, we apply the backward elimination procedure by identifying the predictor with the largest p-value that exceeds our predetermined threshold of p = 0.05. This predictor is FO4delay, which has a p-value of 0.99123. We can use the update() function to eliminate a given predictor and recompute the model in one step. The notation “.~.” means that update() should keep the left- and right-hand sides of the model the same. By including “- FO4delay”, we also tell it to remove that predictor from the model, as shown in the following:
> int00.lm <- update(int00.lm, .~. - FO4delay, data = int00.dat)
> summary(int00.lm)
Call:
lm(formula = nperf ~ clock + threads + cores + transistors +
dieSize + voltage + featureSize + channel + L1icache + sqrt(L1icache) +
L1dcache + sqrt(L1dcache) + L2cache + sqrt(L2cache), data = int00.dat)
Residuals:
Min 1Q Median 3Q Max
-10.795 -2.714 0.000 2.283 9.809
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.088e+01 7.584e+01 -0.275 0.783983
clock 2.604e-02 1.563e-03 16.662 < 2e-16 ***
threads -2.345e+00 2.070e+00 -1.133 0.261641
cores 2.248e+00 1.759e+00 1.278 0.206080
transistors -5.556e-03 1.359e-02 -0.409 0.684020
dieSize 1.013e-02 1.571e-02 0.645 0.521488
voltage -2.626e+01 7.302e+00 -3.596 0.000642 ***
featureSize 3.104e+01 1.113e+02 0.279 0.781232
channel 8.855e+01 1.218e+02 0.727 0.469815
L1icache 1.103e+02 4.041e+01 2.729 0.008257 **
sqrt(L1icache) -7.398e+02 2.866e+02 -2.581 0.012230 *
L1dcache -1.115e+02 3.859e+01 -2.889 0.005311 **
sqrt(L1dcache) 7.500e+02 2.632e+02 2.849 0.005937 **
L2cache -9.693e-03 1.494e-03 -6.488 1.64e-08 ***
sqrt(L2cache) 1.222e+00 1.975e-01 6.189 5.33e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.594 on 62 degrees of freedom (179 observations deleted due to missingness)
Multiple R-squared: 0.9652, Adjusted R-squared: 0.9573 F-statistic: 122.8 on 14 and 62 DF, p-value: < 2.2e-16
We repeat this process by removing the next potential predictor with the largest p-value that exceeds our predetermined threshold, featureSize. As we repeat this process, we obtain the following sequence of possible models.
Remove featureSize:
> int00.lm <- update(int00.lm, .~. - featureSize, data=int00.dat)
> summary(int00.lm)
Call:
lm(formula = nperf ~ clock + threads + cores + transistors + dieSize +
voltage + channel + L1icache + sqrt(L1icache) + L1dcache + sqrt(L1dcache) +
L2cache + sqrt(L2cache), data = int00.dat)
Residuals:
Min 1Q Median 3Q Max
-10.5548 -2.6442 0.0937 2.2010 10.0264
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -3.129e+01 6.554e+01 -0.477 0.634666
clock 2.591e-02 1.471e-03 17.609 < 2e-16 ***
threads -2.447e+00 2.022e+00 -1.210 0.230755
cores 1.901e+00 1.233e+00 1.541 0.128305
transistors -5.366e-03 1.347e-02 -0.398 0.691700
dieSize 1.325e-02 1.097e-02 1.208 0.231608
voltage -2.519e+01 6.182e+00 -4.075 0.000131 ***
channel 1.188e+02 5.504e+01 2.158 0.034735 *
L1icache 1.037e+02 3.255e+01 3.186 0.002246 **
sqrt(L1icache) -6.930e+02 2.307e+02 -3.004 0.003818 **
L1dcache -1.052e+02 3.106e+01 -3.387 0.001223 **
sqrt(L1dcache) 7.069e+02 2.116e+02 3.341 0.001406 **
L2cache -9.548e-03 1.390e-03 -6.870 3.37e-09 ***
sqrt(L2cache) 1.202e+00 1.821e-01 6.598 9.96e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.56 on 63 degrees of freedom
(179 observations deleted due to missingness)
Multiple R-squared: 0.9651, Adjusted R-squared: 0.958
F-statistic: 134.2 on 13 and 63 DF, p-value: < 2.2e-16
Remove transistors:
> int00.lm <- update(int00.lm, .~. - transistors, data=int00.dat)
> summary(int00.lm)
Call:
lm(formula = nperf ~ clock + threads + cores + dieSize + voltage + channel +
L1icache + sqrt(L1icache) + L1dcache + sqrt(L1dcache) + L2cache + sqrt(L2cache),
data = int00.dat)
Residuals:
Min 1Q Median 3Q Max
-9.8861 -3.0801 -0.1871 2.4534 10.4863
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -7.789e+01 4.318e+01 -1.804 0.075745 .
clock 2.566e-02 1.422e-03 18.040 < 2e-16 ***
threads -1.801e+00 1.995e+00 -0.903 0.369794
cores 1.805e+00 1.132e+00 1.595 0.115496
dieSize 1.111e-02 8.807e-03 1.262 0.211407
voltage -2.379e+01 5.734e+00 -4.148 9.64e-05 ***
channel 1.512e+02 3.918e+01 3.861 0.000257 ***
L1icache 8.159e+01 2.006e+01 4.067 0.000128 ***
sqrt(L1icache) -5.386e+02 1.418e+02 -3.798 0.000317 ***
L1dcache -8.422e+01 1.914e+01 -4.401 3.96e-05 ***
sqrt(L1dcache) 5.671e+02 1.299e+02 4.365 4.51e-05 ***
L2cache -8.700e-03 1.262e-03 -6.893 2.35e-09 ***
sqrt(L2cache) 1.069e+00 1.654e-01 6.465 1.36e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.578 on 67 degrees of freedom
(176 observations deleted due to missingness)
Multiple R-squared: 0.9657, Adjusted R-squared: 0.9596
F-statistic: 157.3 on 12 and 67 DF, p-value: < 2.2e-16
Remove threads:
> int00.lm <- update(int00.lm, .~. - threads, data=int00.dat)
> summary(int00.lm)
Call:
lm(formula = nperf ~ clock + cores + dieSize + voltage + channel + L1icache +
sqrt(L1icache) + L1dcache + sqrt(L1dcache) + L2cache + sqrt(L2cache), data = int00.dat)
Residuals:
Min 1Q Median 3Q Max
-9.7388 -3.2326 0.1496 2.6633 10.6255
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -8.022e+01 4.304e+01 -1.864 0.066675 .
clock 2.552e-02 1.412e-03 18.074 <2e-16 ***
cores 2.271e+00 1.006e+00 2.257 0.027226 *
dieSize 1.281e-02 8.592e-03 1.491 0.140520
voltage -2.299e+01 5.657e+00 -4.063 0.000128 ***
channel 1.491e+02 3.905e+01 3.818 0.000293 ***
L1icache 8.131e+01 2.003e+01 4.059 0.000130 ***
sqrt(L1icache) -5.356e+02 1.416e+02 -3.783 0.000329 ***
L1dcache -8.388e+01 1.911e+01 -4.390 4.05e-05 ***
sqrt(L1dcache) 5.637e+02 1.297e+02 4.346 4.74e-05 ***
L2cache -8.567e-03 1.252e-03 -6.844 2.71e-09 ***
sqrt(L2cache) 1.040e+00 1.619e-01 6.422 1.54e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.572 on 68 degrees of freedom
(176 observations deleted due to missingness)
Multiple R-squared: 0.9653, Adjusted R-squared: 0.9597
F-statistic: 172 on 11 and 68 DF, p-value: < 2.2e-16
Remove dieSize:
> int00.lm <- update(int00.lm, .~. - dieSize, data=int00.dat)
> summary(int00.lm)
Call:
lm(formula = nperf ~ clock + cores + voltage + channel + L1icache + sqrt(L1icache) +
L1dcache + sqrt(L1dcache) + L2cache + sqrt(L2cache), data = int00.dat)
Residuals:
Min 1Q Median 3Q Max
-10.0240 -3.5195 0.3577 2.5486 12.0545
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.822e+01 3.840e+01 -1.516 0.133913
clock 2.482e-02 1.246e-03 19.922 < 2e-16 ***
cores 2.397e+00 1.004e+00 2.389 0.019561 *
voltage -2.358e+01 5.495e+00 -4.291 5.52e-05 ***
channel 1.399e+02 3.960e+01 3.533 0.000726 ***
L1icache 8.703e+01 1.972e+01 4.412 3.57e-05 ***
sqrt(L1icache) -5.768e+02 1.391e+02 -4.146 9.24e-05 ***
L1dcache -8.903e+01 1.888e+01 -4.716 1.17e-05 ***
sqrt(L1dcache) 5.980e+02 1.282e+02 4.665 1.41e-05 ***
L2cache -8.621e-03 1.273e-03 -6.772 3.07e-09 ***
sqrt(L2cache) 1.085e+00 1.645e-01 6.598 6.36e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.683 on 71 degrees of freedom
(174 observations deleted due to missingness)
Multiple R-squared: 0.9641, Adjusted R-squared: 0.959
F-statistic: 190.7 on 10 and 71 DF, p-value: < 2.2e-16
At this point, the p-values for all of the predictors are less than 0.02, which is less than our predetermined threshold of 0.05. This tells us to stop the backward elimination process. Intuition and experience tell us that ten predictors are a rather large number to use in this type of model. Nevertheless, all of these predictors have p-values below our significance threshold, so we have no reason to exclude any specific predictor. We decide to include all ten predictors in the final model:
\begin{aligned} \text{nperf} = &-58.22 + 0.02482\,\text{clock} + 2.397\,\text{cores} \\ &- 23.58\,\text{voltage} + 139.9\,\text{channel} + 87.03\,\text{L1icache} \\ &- 576.8\sqrt{\text{L1icache}} - 89.03\,\text{L1dcache} + 598\sqrt{\text{L1dcache}} \\ &- 0.008621\,\text{L2cache} + 1.085\sqrt{\text{L2cache}} \end{aligned}
Looking back over the sequence of models we developed, notice that the number of degrees of freedom in each subsequent model increases as predictors are excluded, as expected. In some cases, the number of degrees of freedom increases by more than one when only a single predictor is eliminated from the model. To understand how an increase of more than one is possible, look at the sequence of values in the lines that report how many observations were deleted due to missingness. These values show how many rows the update() function dropped because the value for one of the predictors in those rows was missing and had the NA value. When the backward elimination process removed that predictor from the model, at least some of those rows became ones we can use in computing the next version of the model, thereby increasing the number of degrees of freedom.
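If you want to see this effect directly, you can count the incomplete rows for a given set of predictors yourself. The following is only a sketch, assuming the `int00.dat` data frame is in the workspace and that its columns use the names shown in the model formulas above:
``````predictors <- c("clock", "threads", "cores", "transistors", "dieSize",
"voltage", "featureSize", "channel", "FO4delay",
"L1icache", "L1dcache", "L2cache")
sum(!complete.cases(int00.dat[, predictors])) # rows lm() would drop for this predictor set``````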
Also notice that, as predictors drop from the model, the R2 values stay very close to 0.965. However, the adjusted R2 value tends to increase very slightly with each dropped predictor. This increase indicates that the model with fewer predictors and more degrees of freedom tends to explain the data slightly better than the previous model, which had one more predictor. These changes in R2 values are very small, though, so we should not read too much into them. It is possible that these changes are simply due to random data fluctuations. Nevertheless, it is nice to see them behaving as we expect.
Roughly speaking, the F-test compares the current model to a model with one fewer predictor. If the current model is better than the reduced model, the p-value will be small. In all of our models, we see that the p-value for the F-test is quite small and consistent from model to model. As a result, this F-test does not particularly help us discriminate between potential models.
To check the validity of the assumptions used to develop our model, we can again apply the residual analysis techniques that we used to examine the one-factor model in Section 3.4.
This function call:
``> plot(fitted(int00.lm),resid(int00.lm))``
produces the plot shown in Figure 4.2. We see that the residuals appear to be somewhat uniformly scattered about zero. At least, we do not see any obvious patterns that lead us to think that the residuals are not well behaved. Consequently, this plot gives us no reason to believe that we have produced a poor model.
Figure 4.2: The fitted versus residual values for the multi-factor model developed from the Int2000 data.
The Q-Q plot in Figure 4.3 is generated using these commands:
``````> qqnorm(resid(int00.lm))
> qqline(resid(int00.lm))``````
We see that the residuals roughly follow the indicated line. In this plot, we can see a bit more of a pattern and some obvious nonlinearities, leading us to be slightly more cautious about concluding that the residuals are normally distributed. We should not necessarily reject the model based on this one test, but the results should serve as a reminder that all models are imperfect.
4.06: When Things Go Wrong
Sometimes when we try to develop a model using the backward elimination process, we get results that do not appear to make any sense. For an example, let’s try to develop a multi-factor regression model for the Int1992 data using this process. As before, we begin by including all of the potential predictors from Table 4.1 in the model. When we try that for Int1992, however, we obtain the following result:
``````> int92.lm<-lm(nperf ~ clock + threads + cores + transistors + dieSize + voltage + featureSize +
channel + FO4delay + L1icache + sqrt(L1icache) + L1dcache + sqrt(L1dcache) + L2cache + sqrt(L2cache))
> summary(int92.lm)
Call:
lm(formula = nperf ~ clock + threads + cores + transistors +
dieSize + voltage + featureSize + channel + FO4delay +
L1icache + sqrt(L1icache) + L1dcache + sqrt(L1dcache) +
L2cache + sqrt(L2cache))
Residuals:
14 15 16 17 18 19
0.4096 1.3957 -2.3612 0.1498 -1.5513 1.9575
Coefficients: (14 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) -25.93278 6.56141 -3.952 0.0168 *
clock 0.35422 0.02184 16.215 8.46e-05 ***
threads NA NA NA NA
cores NA NA NA NA
transistors NA NA NA NA
dieSize NA NA NA NA
voltage NA NA NA NA
featureSize NA NA NA NA
channel NA NA NA NA
FO4delay NA NA NA NA
L1icache NA NA NA NA
sqrt(L1icache) NA NA NA NA
L1dcache NA NA NA NA
sqrt(L1dcache) NA NA NA NA
L2cache NA NA NA NA
sqrt(L2cache) NA NA NA NA
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.868 on 4 degrees of freedom (72 observations deleted due to missingness)
Multiple R-squared: 0.985, Adjusted R-squared: 0.9813 F-statistic: 262.9 on 1 and 4 DF, p-value: 8.463e-05``````
Notice that every predictor but `clock` has `NA` for every entry. Furthermore, we see a line that says that fourteen coefficients were “not defined because of singularities.” This statement means that R could not compute a value for those coefficients because of some anomalies in the data. (More technically, it could not invert the matrix used in the least-squares minimization process.)
The first step toward resolving this problem is to notice that 72 observations were deleted due to “missingness,” leaving only four degrees of freedom. We use the function `nrow(int92.dat)` to determine that there are 78 total rows in this data frame. These 78 separate observations sum up to the two predictors used in the model, plus four degrees of freedom, plus 72 deleted rows. When we tried to develop the model using` lm()`, however, some of our data remained unused.
To determine why these rows were excluded, we must do a bit of sanity checking to see what data anomalies may be causing the problem. The function `table()` provides a quick way to summarize a data vector, to see if anything looks obviously out of place. Executing this function on the `clock` column, we obtain the following:
``````> table(clock)
clock
48 50 60 64 66 70 75 77 80 85 90 96 99 100 101 110
118 120 125 133 150 166 175 180 190 200 225 231 233 250 266
275 291 300 333 350
1 3 4 1 5 1 4 1 2 1 2 1 2 10 1 1
1 3 4 4 3 2 2 1 1 4 1 1 2 2 2 1 1 1 1 1``````
The top line shows the unique values that appear in the column. The list of numbers directly below that line is the count of how many times that particular value appeared in the column. For example, 48 appeared once, while `50` appeared three times and `60` appeared four times. We see a reasonable range of values with minimum (`48`) and maximum (`350`) values that are not unexpected. Some of the values occur only once; the most frequent value occurs ten times, which again does not seem unreasonable. In short, we do not see anything obviously amiss with these results. We conclude that the problem likely is with a different data column.
Executing the `table()` function on the next column in the data frame threads produces this output:
``````> table(threads)
threads
1
78``````
Aha! Now we are getting somewhere. This result shows that all of the 78 entries in this column contain the same value: `1`. An input factor in which all of the elements are the same value has no predictive power in a regression model. If every row has the same value, we have no way to distinguish one row from another. Thus, we conclude that `threads` is not a useful predictor for our model and we eliminate it as a potential predictor as we continue to develop our Int1992 regression model.
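Rather than checking one column at a time, we can also scan every column at once for this problem. The following is only a sketch, assuming the `int92.dat` data frame is in the workspace; any column that reports a single unique value cannot help the model distinguish one row from another.
``````unique_counts <- sapply(int92.dat, function(x) length(unique(na.omit(x))))
sort(unique_counts)[1:5] # the columns with the fewest distinct non-NA values``````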
We continue by executing `table()` on the column labeled `cores`. This operation shows that this column also consists of only a single value, 1. Using the `update()` function to eliminate these two predictors from the model gives the following:
``````> int92.lm <- update(int92.lm, .~. - threads - cores)
> summary(int92.lm)
Call:
lm(formula = nperf ~ clock + transistors + dieSize + voltage +
featureSize + channel + FO4delay + L1icache + sqrt(L1icache) +
L1dcache + sqrt(L1dcache) + L2cache + sqrt(L2cache))
Residuals:
14 15 16 17 18 19
0.4096 1.3957 -2.3612 0.1498 -1.5513 1.9575
Coefficients: (12 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) -25.93278 6.56141 -3.952 0.0168 *
clock 0.35422 0.02184 16.215 8.46e-05 ***
transistors NA NA NA NA
dieSize NA NA NA NA
voltage NA NA NA NA
featureSize NA NA NA NA
channel NA NA NA NA
FO4delay NA NA NA NA
L1icache NA NA NA NA
sqrt(L1icache) NA NA NA NA
L1dcache NA NA NA NA
sqrt(L1dcache) NA NA NA NA
L2cache NA NA NA NA
sqrt(L2cache) NA NA NA NA
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.868 on 4 degrees of freedom (72 observations deleted due to missingness)
Multiple R-squared: 0.985, Adjusted R-squared: 0.9813 F-statistic: 262.9 on 1 and 4 DF, p-value: 8.463e-05``````
Unfortunately, eliminating these two predictors from consideration has not solved the problem. Notice that we still have only four degrees of freedom, because 72 observations were again eliminated. This small number of degrees of freedom indicates that there must be at least one more column with insufficient data.
By executing `table()` on the remaining columns, we find that the column labeled `L2cache` has only three unique values, and that these appear in a total of only ten rows:
``````> table(L2cache)
L2cache
96 256 512
6 2 2``````
Although these specific data values do not look out of place, having only three unique values can make it impossible for `lm()` to compute the model coefficients. Dropping `L2cache` and `sqrt(L2cache)` as potential predictors finally produces the type of result we expect:
``````> int92.lm <- update(int92.lm, .~. - L2cache - sqrt(L2cache))
> summary(int92.lm)
Call:
lm(formula = nperf ~ clock + transistors + dieSize + voltage +
featureSize + channel + FO4delay + L1icache + sqrt(L1icache) +
L1dcache + sqrt(L1dcache))
Residuals:
Min 1Q Median 3Q Max
-7.3233 -1.1756 0.2151 1.0157 8.0634
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -58.51730 17.70879 -3.304 0.00278 **
clock 0.23444 0.01792 13.084 6.03e-13 ***
transistors -0.32032 1.13593 -0.282 0.78018
dieSize 0.25550 0.04800 5.323 1.44e-05 ***
voltage 1.66368 1.61147 1.032 0.31139
featureSize 377.84287 69.85249 5.409 1.15e-05 ***
channel -493.84797 88.12198 -5.604 6.88e-06 ***
FO4delay 0.14082 0.08581 1.641 0.11283
L1icache 4.21569 1.74565 2.415 0.02307 *
sqrt(L1icache) -12.33773 7.76656 -1.589 0.12425
L1dcache -5.53450 2.10354 -2.631 0.01412 *
sqrt(L1dcache) 23.89764 7.98986 2.991 0.00602 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.68 on 26 degrees of freedom (40 observations deleted due to missingness)
Multiple R-squared: 0.985, Adjusted R-squared: 0.9786 F-statistic: 155 on 11 and 26 DF, p-value: < 2.2e-16``````
We now can proceed with the normal backward elimination process. We begin by eliminating the predictor that has the largest p-value above our preselected threshold, which is `transistors` in this case. Eliminating this predictor gives the following:
``````> int92.lm <- update(int92.lm, .~. -transistors)
> summary(int92.lm)
Call:
lm(formula = nperf ~ clock + dieSize + voltage + featureSize +
channel + FO4delay + L1icache + sqrt(L1icache) + L1dcache +
sqrt(L1dcache))
Residuals:
Min 1Q Median 3Q Max
-13.2935 -3.6068 -0.3808 2.4535 19.9617
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -16.73899 24.50101 -0.683 0.499726
clock 0.19330 0.02091 9.243 2.77e-10 ***
dieSize 0.11457 0.02728 4.201 0.000219 ***
voltage 0.40317 2.85990 0.141 0.888834
featureSize 11.08190 104.66780 0.106 0.916385
channel -37.23928 104.22834 -0.357 0.723379
FO4delay -0.13803 0.14809 -0.932 0.358763
L1icache 7.84707 3.33619 2.352 0.025425 *
sqrt(L1icache) -16.28582 15.38525 -1.059 0.298261
L1dcache -14.31871 2.94480 -4.862 3.44e-05 ***
sqrt(L1dcache) 48.26276 9.41996 5.123 1.64e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 7.528 on 30 degrees of freedom (37 observations deleted due to missingness)
Multiple R-squared: 0.9288, Adjusted R-squared: 0.9051 F-statistic: 39.13 on 10 and 30 DF, p-value: 1.802e-14``````
After eliminating this predictor, however, we see something unexpected. The p-values for `voltage` and `featureSize` increased dramatically. Furthermore, the adjusted R-squared value dropped substantially, from 0.9786 to 0.9051. These unexpectedly large changes make us suspect that `transistors` may actually be a useful predictor in the model even though at this stage of the backward elimination process it has a high p-value. So, we decide to put `transistors` back into the model and instead drop `voltage`, which has the next highest p-value. These changes produce the following result:
``````> int92.lm <- update(int92.lm, .~. +transistors -voltage)
> summary(int92.lm)
Call:
lm(formula = nperf ~ clock + dieSize + featureSize + channel +
FO4delay + L1icache + sqrt(L1icache) + L1dcache +
sqrt(L1dcache) +
transistors)
Residuals:
Min 1Q Median 3Q Max
-10.0828 -1.3106 0.1447 1.5501 8.7589
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -50.28514 15.27839 -3.291 0.002700 **
clock 0.21854 0.01718 12.722 3.71e-13 ***
dieSize 0.20348 0.04401 4.623 7.77e-05 ***
featureSize 409.68604 67.00007 6.115 1.34e-06 ***
channel -490.99083 86.23288 -5.694 4.18e-06 ***
FO4delay 0.12986 0.09159 1.418 0.167264
L1icache 1.48070 1.21941 1.214 0.234784
sqrt(L1icache) -5.15568 7.06192 -0.730 0.471413
L1dcache -0.45668 0.10589 -4.313 0.000181 ***
sqrt(L1dcache) 4.77962 2.45951 1.943 0.062092 .
transistors 1.54264 0.88345 1.746 0.091750 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.96 on 28 degrees of freedom (39 observations deleted due to missingness)
Multiple R-squared: 0.9813, Adjusted R-squared: 0.9746 F-statistic: 146.9 on 10 and 28 DF, p-value: < 2.2e-16``````
The adjusted R-squared value now is 0.9746, which is much closer to the adjusted R-squared value we had before dropping `transistors`. Continuing with the backward elimination process, we first drop `sqrt(L1icache)` with a p-value of 0.471413, then `FO4delay` with a p-value of 0.180836, and finally `sqrt(L1dcache)` with a p-value of 0.071730.
After completing this backward elimination process, we find that the following predictors belong in the final model for Int1992:
clock transistors dieSize featureSize
channel L1icache L1dcache
As shown below, all of these predictors have p-values below our threshold of 0.05. Additionally, the adjusted R-square looks quite good at 0.9722.
``````> int92.lm <- update(int92.lm, .~. -sqrt(L1dcache))
> summary(int92.lm)
Call:
lm(formula = nperf ~ clock + dieSize + featureSize + channel +
L1icache + L1dcache + transistors, data = int92.dat)
Residuals:
Min 1Q Median 3Q Max
-10.1742 -1.5180 0.1324 1.9967 10.1737
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -34.17260 5.47413 -6.243 6.16e-07 ***
clock 0.18973 0.01265 15.004 9.21e-16 ***
dieSize 0.11751 0.02034 5.778 2.31e-06 ***
featureSize 305.79593 52.76134 5.796 2.20e-06 ***
channel -328.13544 53.04160 -6.186 7.23e-07 ***
L1icache 0.78911 0.16045 4.918 2.72e-05 ***
L1dcache -0.23335 0.03222 -7.242 3.80e-08 ***
transistors 3.13795 0.51450 6.099 9.26e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.141 on 31 degrees of freedom (39 observations deleted due to missingness)
Multiple R-squared: 0.9773, Adjusted R-squared: 0.9722 F-statistic: 191 on 7 and 31 DF, p-value: < 2.2e-16``````
This example illustrates that you cannot always look at only the p-values to determine which potential predictors to eliminate in each step of the backward elimination process. You also must be careful to look at the broader picture, such as changes in the adjusted R-squared value and large changes in the p-values of other predictors, after each change to the model.
PREDICTION is typically the primary goal of most regression modeling projects. That is, the model developer wants to use the model to estimate or predict the system’s response if it were operated with input values that were never actually available in any of the measured systems. For instance, we might want to use the model we developed using the Int2000 data set to predict the performance of a new processor with a clock frequency, a cache size, or some other parameter combination that does not exist in the data set. By inserting this new combination of parameter values into the model, we can compute the new processor’s expected performance when executing that benchmark program.
Because the model was developed using measured data, the coefficient values necessarily are only estimates. Consequently, any predictions we make with the model are also only estimates. The `summary()` function produces useful statistics about the regression model’s quality, such as the R2 and adjusted R2 values. These statistics offer insights into how well the model explains variation in the data. The best indicator of any regression model’s quality, however, is how well it predicts output values. The R environment provides some powerful functions that help us predict new values from a given model and evaluate the quality of these predictions.
05: Predicting Responses
In Chapter 4 we used all of the data available in the `int00.dat` data frame to select the appropriate predictors to include in the final regression model. Because we computed the model to fit this particular data set, we cannot now use this same data set to test the model’s predictive capabilities. That would be like copying exam answers from the answer key and then using that same answer key to grade your exam. Of course you would get a perfect result. Instead, we must use one set of data to train the model and another set of data to test it.
The difficulty with this train-test process is that we need separate but similar data sets. A standard way to find these two different data sets is to split the available data into two parts. We take a random portion of all the available data and call it our training set. We then use this portion of the data in the `lm()` function to compute the specific values of the model’s coefficients. We use the remaining portion of the data as our testing set to see how well the model predicts the results, compared to this test data.
The following sequence of operations splits the `int00.dat` data set into the training and testing sets:
``````rows <- nrow(int00.dat)
f <- 0.5
upper_bound <- floor(f * rows)
permuted_int00.dat <- int00.dat[sample(rows), ]
train.dat <- permuted_int00.dat[1:upper_bound, ]
test.dat <- permuted_int00.dat[(upper_bound+1):rows, ]``````
The first line assigns the total number of rows in the `int00.dat` data frame to the variable `rows`. The next line assigns to the variable `f` the fraction of the entire data set we wish to use for the training set. In this case, we somewhat arbitrarily decide to use half of the data as the training set and the other half as the testing set. The `floor()` function rounds its argument value down to the nearest integer. So the line `upper_bound <- floor(f * rows)` assigns the middle row’s index number to the variable `upper_bound`.
The interesting action happens in the next line. The `sample()` function returns a permutation of the integers between 1 and n when we give it the integer value n as its input argument. In this code, the expression `sample(rows)` returns a vector that is a permutation of the integers between 1 and `rows`, where `rows` is the total number of rows in the `int00.dat` data frame. Using this vector as the row index for this data frame gives a random permutation of all of the rows in the data frame, which we assign to the new data frame, `permuted_int00.dat`. The next two lines assign the lower portion of this new data frame to the training data set and the top portion to the testing data set, respectively. This randomization process ensures that we obtain a new random selection of the rows in the train-and-test data sets every time we execute this sequence of operations.
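If you ever need the same “random” split twice (for example, to reproduce a result exactly), you can set the seed of R’s random number generator before calling `sample()`. This is only an optional sketch; the seed value used here is arbitrary.
``````set.seed(1234) # any fixed value makes the permutation reproducible
permuted_int00.dat <- int00.dat[sample(nrow(int00.dat)), ]``````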
With the data set partitioned into two randomly selected portions, we can train the model on the first portion, and test it on the second portion. Figure 5.1 shows the overall flow of this training and testing process. We next explain the details of this process to train and test the model we previously developed for the Int2000 benchmark results.
The following statement calls the lm() function to generate a regression model using the predictors we identified in Chapter 4 and the train.dat data frame we extracted in the previous section. It then assigns this model to the variable int00_new.lm. We refer to this process of computing the model’s coefficients as training the regression model.
int00_new.lm <- lm(nperf ~ clock + cores + voltage + channel + L1icache +
sqrt(L1icache) + L1dcache + sqrt(L1dcache) + L2cache + sqrt(L2cache), data = train.dat)
The predict() function takes this new model as one of its arguments. It uses this model to compute the predicted outputs when we use the test.dat data frame as the input, as follows:
predicted.dat <- predict(int00_new.lm, newdata=test.dat)
We define the difference between the predicted and measured performance for each processor i to be ∆i = Predictedi − Measuredi, where Predictedi is the value predicted by the model, which is stored in predicted.dat, and Measuredi is the actual measured performance response, which we previously assigned to the test.dat data frame. The following statement computes the entire vector of these ∆i values and assigns the vector to the variable delta.
delta <- predicted.dat - test.dat$nperf
Note that we use the $ notation to select the column with the output value, nperf, from the test.dat data frame.
The mean of these ∆ differences for n different processors is:
$\bar{\Delta}=\frac{1}{n} \sum_{i=1}^{n} \Delta_{i}$
A confidence interval computed for this mean will give us some indication of how well a model trained on the train.dat data set predicted the performance of the processors in the test.dat data set. The t.test() function computes a confidence interval for the desired confidence level of these ∆i values as follows:
> t.test(delta, conf.level = 0.95)
One Sample t-test
data: delta
t = -0.65496, df = 41, p-value = 0.5161
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
-2.232621 1.139121
sample estimates:
mean of x
-0.5467502
If the prediction were perfect, then ∆= 0. If ∆i > 0, then the model predicted that the performance would be greater than it actually was. A ∆i < 0, on the other hand, means that the model predicted that the performance was lower than it actually was. Consequently, if the predictions were reasonably good, we would expect to see a tight confidence interval around zero. In this case, we obtain a 95 percent confidence interval of [-2.23, 1.14]. Given that nperf is scaled to between 0 and 100, this is a reasonably tight confidence interval that includes zero. Thus, we conclude that the model is reasonably good at predicting values in the test.dat data set when trained on the train.dat data set.
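To see where this interval comes from, we can also compute it directly from the ∆i values. This is only a sketch, assuming the vector `delta` computed above is still in the workspace; it should reproduce the interval that `t.test()` reports.
``````d <- na.omit(delta) # keep only the processors with a usable prediction
n <- length(d)
mean(d) + c(-1, 1) * qt(0.975, df = n - 1) * sd(d) / sqrt(n) # 95 percent confidence interval``````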
Another way to get a sense of the predictions’ quality is to generate a scatter plot of the ∆i values using the plot() function:
plot(delta)
This function call produces the plot shown in Figure 5.2. Good predictions would produce a tight band of values uniformly scattered around zero. In this figure, we do see such a distribution, although there are a few outliers that are more than ten points above or below zero.
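To make it easier to judge how the points scatter about zero, you can optionally add a horizontal reference line at zero after calling `plot()`; for example:
``````plot(delta)
abline(h = 0) # a reference line at zero for visual comparison``````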
It is important to realize that the sample() function will return a different random permutation each time we execute it. These differing permutations will partition different processors (i.e., rows in the data frame) into the train and test sets. Thus, if we run this experiment again with exactly the same inputs, we will likely get a different confidence interval and ∆i scatter plot. For example, when we repeat the same test five times with identical inputs, we obtain the following confidence intervals: [-1.94, 1.46], [-1.95, 2.68], [-2.66, 3.81], [-6.13, 0.75], [-4.21, 5.29]. Similarly, varying the fraction of the data we assign to the train and test sets by changing f = 0.5 also changes the results.
It is good practice to run this type of experiment several times and observe how the results change. If you see the results vary wildly when you re-run these tests, you have good reason for concern. On the other hand, a series of similar results does not necessarily mean your results are good, only that they are consistently reproducible. It is often easier to spot a bad model than to determine that a model is good.
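One way to run this type of experiment several times is to wrap the entire split-train-test sequence in a loop. The following is only a sketch, assuming the `int00.dat` data frame and the model formula used above are available; it prints one confidence interval per repetition.
``````for (i in 1:5) {
    rows <- nrow(int00.dat)
    shuffled <- int00.dat[sample(rows), ] # a new random permutation each time
    train.dat <- shuffled[1:floor(0.5 * rows), ]
    test.dat <- shuffled[(floor(0.5 * rows) + 1):rows, ]
    fit <- lm(nperf ~ clock + cores + voltage + channel + L1icache +
        sqrt(L1icache) + L1dcache + sqrt(L1dcache) + L2cache + sqrt(L2cache),
        data = train.dat)
    delta <- predict(fit, newdata = test.dat) - test.dat$nperf
    print(t.test(delta, conf.level = 0.95)$conf.int)
}``````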
Based on the repeated confidence interval results and the corresponding scatter plot, similar to Figure 5.2, we conclude that this model is reasonably good at predicting the performance of a set of processors when the model is trained on a different set of processors executing the same benchmark program. It is not perfect, but it is also not too bad. Whether the differences are large enough to warrant concern is up to you.
As we saw in the previous section, data splitting is a useful technique for testing a regression model. If you have other data sets, you can use them to further test your new model’s capabilities.
In our situation, we have several additional benchmark results in the data file that we can use for these tests. As an example, we use the model we developed from the Int2000 data to predict the Fp2000 benchmark’s performance.
We first train the model developed using the Int2000 data, `int00.lm`, using all the Int2000 data available in the `int00.dat` data frame. We then predict the Fp2000 results using this model and the `fp00.dat` data. Again, we assign the differences between the predicted and actual results to the vector `delta`. Figure 5.3 shows the overall data flow for this training and testing. The corresponding R commands are:
``````> int00.lm <- lm(nperf ~ clock + cores + voltage + channel +
L1icache + sqrt(L1icache) + L1dcache + sqrt(L1dcache) + L2cache +
sqrt(L2cache), data = int00.dat)
> predicted.dat <- predict(int00.lm, newdata=fp00.dat)
> delta <- predicted.dat - fp00.dat$nperf
> t.test(delta, conf.level = 0.95)
One Sample t-test
data: delta
t = 1.5231, df = 80, p-value = 0.1317
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
-0.4532477 3.4099288
sample estimates:
mean of x
1.478341``````
Figure 5.3: Predicting the Fp2000 results using the model developed with the Int2000 data.
The resulting confidence interval for the `delta` values contains zero and is relatively small. This result suggests that the model developed using the Int2000 data is reasonably good at predicting the Fp2000 benchmark program’s results. The scatter plot in Figure 5.4 shows the resulting `delta` values for each of the processors we used in the prediction. The results tend to be randomly distributed around zero, as we would expect from a good regression model. Note, however, that some of the values differ significantly from zero. The maximum positive deviation is almost 20, and the magnitude of the largest negative value is greater than 43. The confidence interval suggests relatively good results, but this scatter plot shows that not all the values are well predicted.
As a final example, we use the Int2000 regression model to predict the results of the benchmark program’s future Int2006 version. The R code to compute this prediction is:
``````> int00.lm <- lm(nperf ~ clock + cores + voltage + channel + L1icache + sqrt(L1icache) + L1dcache +
sqrt(L1dcache) + L2cache + sqrt(L2cache), data = int00.dat)
> predicted.dat <- predict(int00.lm, newdata=int06.dat)
> delta <- predicted.dat - int06.dat$nperf
> t.test(delta, conf.level = 0.95)
One Sample t-test
data: delta
t = 49.339, df = 168, p-value < 2.2e-16
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval: 48.87259 52.94662
sample estimates:
mean of x
50.9096``````
In this case, the confidence interval for the `delta` values does not include zero. In fact, the mean value of the differences is 50.9096, which indicates that the average of the model-predicted values is substantially larger than the actual average value. The scatter plot shown in Figure 5.5 further confirms that the predicted values are all much larger than the actual values.
This example is a good reminder that models have their limits. Apparently, there are more factors that affect the performance of the next generation of the benchmark programs, Int2006, than the model we developed using the Int2000 results captures. To develop a model that better predicts future performance, we would have to uncover those factors. Doing so requires a deeper understanding of the factors that affect computer performance, which is beyond the scope of this tutorial.
AS we have seen, the R environment provides some powerful functions to quickly and relatively easily develop and test regression models. Ironically, simply reading the data into R in a useful format can be one of the most difficult aspects of developing a model. R does not lack good input-output capabilities, but data often comes to the model developer in a messy form. For instance, the data format may be inconsistent, with missing fields and incorrectly recorded values. Getting the data into the format necessary for analysis and modeling is often called data cleaning. The specific steps necessary to “clean” data are heavily dependent on the data set and are thus beyond the scope of this tutorial. Suffice it to say that you should carefully examine your data before you use it to develop any sort of regression model. Section 2.2 provides a few thoughts on data cleaning.
In Chapter 2, we provided the functions used to read the example data into the R environment, but with no explanation about how they worked. In this chapter, we will look at these functions in detail, as specific examples of how to read a data set into R. Of course, the details of the functions you may need to write to input your data will necessarily change to match the specifics of your data set.
06: Reading Data into the R Environment
Perhaps the simplest format for exchanging data among computer systems is the de facto standard comma separated values, or csv, file. R provides a function to directly read data from a csv file and assign it to a data frame:
``> processors <- read.csv("all-data.csv")``
The name between the quotes is the name of the csv-formatted file to be read. Each file line corresponds to one data record. Commas separate the individual data fields in each record. This function assigns each data record to a new row in the data frame, and assigns each data field to the corresponding column. When this function completes, the variable `processors` contains all the data from the file `all-data.csv` nicely organized into rows and columns in a data frame.
If you type `processors` to see what is stored in the data frame, you will get a long, confusing list of data. Typing
``> head(processors)``
will show a list of column headings and the values of the first few rows of data. From this list, we can determine which columns to extract for our model development. Although this is conceptually a simple problem, the execution can be rather messy, depending on how the data was collected and organized in the file.
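Beyond `head()`, a couple of other quick checks can be helpful at this point; these two commands are only a suggestion and are not part of the original example:

``````
> dim(processors)    # number of rows and columns in the data frame
> str(processors)    # the type and a preview of every column
``````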
As with any programming language, R lets you define your own functions. This feature is useful when you must perform a sequence of operations multiple times on different data pieces, for instance. The format for defining a function is:
``````function-name <- function(a1, a2, ...) {
    R expressions
    return(object)
}``````
where `function-name` is the function name you choose and `a1, a2, ...` is the list of arguments in your function. The R system evaluates the expressions in the body of the definition when the function is called. A function can return any type of data object using the `return()` statement.
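As a small illustration (this toy function is not part of the processor example), a function that rescales a vector to the range 0–100 could be defined and called like this:

``````
normalize <- function(x) {
    return(100 * (x - min(x)) / (max(x) - min(x)))
}
> normalize(c(2, 4, 6))
[1]   0  50 100
``````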
We will define a new function called `extract_data` to extract all the rows that have a result for the given benchmark program from the `processors` data frame. For instance, calling the function as follows:
``````> int92.dat <- extract_data("Int1992")
> fp92.dat <- extract_data("Fp1992")
> int95.dat <- extract_data("Int1995")
> fp95.dat <- extract_data("Fp1995")
> int00.dat <- extract_data("Int2000")
> fp00.dat <- extract_data("Fp2000")
> int06.dat <- extract_data("Int2006")
> fp06.dat <- extract_data("Fp2006")``````
extracts every row that has a result for the given benchmark program and assigns it to the corresponding data frame, `int92.dat, fp92.dat`, and so on.
We define the `extract_data` function as follows:
``````extract_data <- function(benchmark) {
temp <- paste(paste("Spec",benchmark,sep=""), "..average.base.", sep="")
perf <- get_column(benchmark,temp)
max_perf <- max(perf)
min_perf <- min(perf)
range <- max_perf - min_perf
nperf <- 100 * (perf - min_perf) / range
clock <- get_column(benchmark,"Processor.Clock..MHz.")
threads <- get_column(benchmark,"Threads.core")
cores <- get_column(benchmark,"Cores")
TDP <- get_column(benchmark,"TDP")
transistors <- get_column(benchmark,"Transistors..millions.")
dieSize <- get_column(benchmark,"Die.size..mm.2.")
voltage <- get_column(benchmark,"Voltage..low.")
featureSize <- get_column(benchmark,"Feature.Size..microns.")
channel <- get_column(benchmark,"Channel.length..microns.")
FO4delay <- get_column(benchmark,"FO4.Delay..ps.")
L1icache <- get_column(benchmark,"L1..instruction...on.chip.")
L1dcache <- get_column(benchmark,"L1..data...on.chip.")
L2cache <- get_column(benchmark,"L2..on.chip.")
L3cache <- get_column(benchmark,"L3..on.chip.")
return(data.frame(nperf, perf, clock, threads, cores, TDP, transistors, dieSize,
voltage, featureSize, channel, FO4delay, L1icache, L1dcache, L2cache, L3cache))
}``````
The first line with the `paste` functions looks rather complicated. However, it simply forms the name of the column with the given benchmark results. For example, when `extract_data` is called with `Int2000` as the argument, the nested `paste` functions simply concatenate the strings "`Spec`", "`Int2000`", and "`..average.base.`". The final string corresponds to the name of the column in the `processors` data frame that contains the performance results for the `Int2000` benchmark, "`SpecInt2000..average.base.`".
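You can verify this at the R prompt; the lines below simply illustrate what the nested `paste()` calls produce:

``````
> paste(paste("Spec", "Int2000", sep=""), "..average.base.", sep="")
[1] "SpecInt2000..average.base."
``````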
The next line calls the function `get_column`, which selects all the rows with the desired column name. In this case, that column contains the actual performance result reported for the given benchmark program, `perf`. The next four lines compute the normalized performance value, `nperf`, from the `perf` value we obtained from the data frame. The following sequence of calls to `get_column` extracts the data for each of the predictors we intend to use in developing the regression model. Note that the second parameter in each case, such as "`Processor.Clock..MHz.`", is the name of a column in the `processors` data frame. Finally, the `data.frame()` function is a predefined R function that assembles all its arguments into a single data frame. The new function we have just defined, `extract_data()`, returns this new data frame.
Next, we define the `get_column()` function to return all the data in a given column for which the given benchmark program has been defined:
``````get_column <- function(x,y) {
benchmark <- paste(paste("Spec",x,sep=""), "..average.base.", sep="")
ix <- !is.na(processors[,benchmark])
return(processors[ix,y])
}``````
The argument `x` is a string with the name of the benchmark program, and `y` is a string with the name of the desired column. The nested `paste()` functions produce the same result as in the `extract_data()` function. The `is.na()` function performs the interesting work. This function returns a vector with `TRUE` values corresponding to the row numbers in the `processors` data frame that have `NA` values in the column selected by the `benchmark` index. If there is a value in that location, `is.na()` returns `FALSE` for the corresponding row. Thus, `is.na` indicates which rows are missing performance results for the benchmark of interest. Inserting the exclamation point in front of this function complements its output. As a result, the variable `ix` will contain a vector that identifies every row that contains performance results for the indicated benchmark program. The function then extracts the selected rows from the `processors` data frame and returns them.
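A tiny example (unrelated to the processors data) shows the behavior of `is.na()` and the `!` operator:

``````
> x <- c(1.2, NA, 3.4)
> is.na(x)
[1] FALSE  TRUE FALSE
> !is.na(x)
[1]  TRUE FALSE  TRUE
``````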
These types of data extraction functions can be somewhat tricky to write, because they depend so much on the specific format of your input file. The functions presented in this chapter are a guide to writing your own data extraction functions.
Linear regression modeling is one of the most basic of a broad collection of data mining techniques. It can demonstrate the relationships between the inputs to a system and the corresponding output. It also can be used to predict the output given a new set of input values. While the specifics for developing a regression model will depend on the details of your data, there are several key steps to keep in mind when developing a new model using the R programming environment:
1. Read your data into the R environment.
As simple as it sounds, one of the trickiest tasks oftentimes is simply reading your data into R. Because you may not have controlled how data was collected, or in what format, be prepared to spend some time writing new functions to parse your data and load it into an R data frame. Chapter 6 provides an example of reading a moderately complicated csv file into R.
2. Sanity check your data.
Once you have your data in the R environment, perform some sanity checks to make sure that there is nothing obviously wrong with the data. The types of checks you should perform depend on the specifics of your data. Some possibilities include:
• Finding the values’ minimum, maximum, average, and standard deviation in each data frame column.
• Looking for any parameter values that seem suspiciously outside the expected limits.
• Determining the fraction of missing (`NA`) values in each column to ensure that there is sufficient data available.
• Determining the frequency of categorical parameters, to see if any unexpected values pop up.
• Any other data-specific tests.
Ultimately, you need to feel confident that your data set’s values are reasonable and consistent.
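As a rough sketch of what such checks might look like in R (here `dat` is only a placeholder name for your data frame; adapt the column names to your data):

``````
summary(dat)                               # min, max, mean, and quartiles of every column
sapply(dat, function(x) mean(is.na(x)))    # fraction of NA values in each column
table(dat$some_factor)                     # frequency of each level of a categorical column
``````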
3. Visualize your data.
It is always good to plot your data, to get a basic sense of its shape and ensure that nothing looks out of place. For instance, you may expect to see a somewhat linear relationship between two parameters. If you see something else, such as a horizontal line, you should investigate further. Your assumption about a linear relationship could be wrong, or the data may be corrupted (see item no. 2 above). Or perhaps something completely unexpected is going on. Regardless, you must understand what might be happening before you begin developing the model. The `pairs()` function is quite useful for performing this quick visual check, as described in Section 4.1.
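For example, a quick visual check of a few columns of the Int2000 data might look like the following; the particular columns shown here are arbitrary:

``````
pairs(int00.dat[, c("nperf", "clock", "cores", "L2cache")],
      gap = 0.5, main = "Pair-wise comparison of selected predictors")
``````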
4. Identify the potential predictors.
Before you can begin the backward elimination process, you must identify the set of all possible predictors that could go into your model. In the simplest case, this set consists of all of the available columns in your data frame. However, you may know that some of the columns will not be useful, even before you begin constructing the model. For example, a column containing only a few valid entries probably is not useful in a model. Your knowledge of the system may also give you good reason to eliminate a parameter as a possible predictor, much as we eliminated TDP as a possible predictor in Section 4.2, or to include some of the parameters’ non-linear functions as possible predictors, as we did when we added the square root of the cache size terms to our set of possible predictors.
5. Select the predictors.
Once you have identified the potential predictors, use the backward elimination process described in Section 4.3 to select the predictors you’ll include in the final model, based on the significance threshold you decide to use.
6. Validate the model.
Examine your model’s R2 value and the adjusted-R2 value. Use residual analysis to further examine the model’s quality. You also should split your data into training and testing sets, and then see how well your model predicts values from the test set.
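A minimal sketch of such a train/test check, using the Int2000 data, a 50/50 split, and an arbitrary illustrative formula (not necessarily the model selected in the text), might look like:

``````
set.seed(1234)                                   # make the random split reproducible
idx <- sample(nrow(int00.dat), floor(0.5 * nrow(int00.dat)))
train.dat <- int00.dat[idx, ]                    # training set
test.dat  <- int00.dat[-idx, ]                   # testing set
fit <- lm(nperf ~ clock + cores + sqrt(L2cache), data = train.dat)
delta <- predict(fit, newdata = test.dat) - test.dat$nperf
t.test(delta, conf.level = 0.95)                 # confidence interval for the mean error
``````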
7. Predict.
Now that you have a model that you feel appropriately explains your data, you can use it to predict previously unknown output values.
A deep body of literature is devoted to both statistical modeling and the R language. If you want to learn more about R as a programming language, many good books are available, including [11, 12, 15, 16]. Other books focus on specific statistical ideas and use R as the computational language [1, 3, 4, 14]. Finally, [9] gives an introduction to computer performance measurement.
As you continue to develop your data-mining skills, remember that what you have developed is only a model. Ideally, it is a useful tool for explaining the variations in your measured data and understanding the relationships between inputs and output. But like all models, it is only an approximation of the real underlying system, and is limited in what it can tell us about that system. Proceed with caution.
Here are a few suggested exercises to help you learn more about regression modeling using R.
1. Show how you would clean the data set for one of the selected benchmark results (Int1992, Int1995, etc.). For example, for every column in the data frame, you could:
• Compute the average, variance, minimum, and maximum.
• Sort the column data to look for outliers or unusual patterns.
• Determine the fraction of NA values for each column.
How else could you verify that the data looks reasonable?
2. Plot the processor performance versus the clock frequency for each of the benchmark results, similar to Figure 3.1.
3. Develop a one-factor linear regression model for all the benchmark results. What input factor should you use as the predictor?
4. Superimpose your one-factor models on the corresponding scatter plots of the data (see Figure 3.2).
5. Evaluate the quality of the one-factor models by discussing the residuals, the p-values of the coefficients, the residual standard errors, the R2 values, the F-statistic, and by performing appropriate residual analysis.
6. Generate a pair-wise comparison plot for each of the benchmark results, similar to Figure 4.1.
7. Develop a multi-factor linear regression model for each of the benchmark results. Which predictors are the same and which are different across these models? What other similarities and differences do you see across these models?
8. Evaluate the multi-factor models’ quality by discussing the residuals, the p-values of the coefficients, the residual standard errors, the R2 values, the F-statistic, and by performing appropriate residual analysis.
9. Use the regression models you’ve developed to complete the following tables, showing how well the models from each row predict the benchmark results in each column. Specifically, fill in the x and y values so that x is the mean of the `delta` values for the predictions and y is the width of the corresponding 95 percent confidence interval. You need only predict forwards in time. For example, it is reasonable to use the model developed with Int1992 data to predict Int2006 results, but it does not make sense to use a model developed with Int2006 data to predict Int1992 results.
|         | Int1992 | Int1995 | Int2000 | Int2006 |
|---------|---------|---------|---------|---------|
| Int1992 | x(±y)   | x(±y)   | x(±y)   | x(±y)   |
| Int1995 |         | x(±y)   | x(±y)   | x(±y)   |
| Int2000 |         |         | x(±y)   | x(±y)   |
| Int2006 |         |         |         | x(±y)   |
| Fp1992  | x(±y)   | x(±y)   | x(±y)   | x(±y)   |
| Fp1995  |         | x(±y)   | x(±y)   | x(±y)   |
| Fp2000  |         |         | x(±y)   | x(±y)   |
| Fp2006  |         |         |         | x(±y)   |

|         | Fp1992 | Fp1995 | Fp2000 | Fp2006 |
|---------|--------|--------|--------|--------|
| Int1992 | x(±y)  | x(±y)  | x(±y)  | x(±y)  |
| Int1995 |        | x(±y)  | x(±y)  | x(±y)  |
| Int2000 |        |        | x(±y)  | x(±y)  |
| Int2006 |        |        |        | x(±y)  |
| Fp1992  | x(±y)  | x(±y)  | x(±y)  | x(±y)  |
| Fp1995  |        | x(±y)  | x(±y)  | x(±y)  |
| Fp2000  |        |        | x(±y)  | x(±y)  |
| Fp2006  |        |        |        | x(±y)  |
10. What can you say about these models’ predictive abilities, based on the results from the previous problem? For example, how well does a model developed for the integer benchmarks predict the same-year performance of the floating-point benchmarks? What about predictions across benchmark generations?
11. In the discussion of data splitting, we defined the value f as the fraction of the complete data set used in the training set. For the Fp2000 data set, plot a 95 percent confidence interval for the mean of `delta` for f = [0.1, 0.2, ..., 0.9]. What value of f gives the best result (i.e., the smallest confidence interval)? Repeat this test n = 5 times to see how the best value of f changes.
12. Repeat the previous problem, varying f for all the other data sets.
• 1: Support Vector Machine (SVM)
Support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis.
• 2: Kernel Density Estimation (KDE)
Kernel density estimation (KDE) is a non-parametric way to estimate the probability density function of a random variable. Kernel density estimation is a fundamental data smoothing problem where inferences about the population are made, based on a finite data sample.
• 3: K-Nearest Neighbors (KNN)
The k-nearest neighbors algorithm (KNN) is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space
• 4: Numerical Experiments and Real Data Analysis
RTG: Classification Methods
The idea behind the Support Vector Machine is straightforward: given observations $x_i \in {\rm I\!R}^p$, $i \in 1,...,n$, where each observation $x_i$ has a class $y_i$ with $y_i \in \{-1,1\}$, we want to find a hyperplane that separates the observations based on their classes and also maximizes the minimum distance of the observations to the hyperplane.
First, we denote the best hyperplane as $w^Tx+b = 0$. From linear algebra, we know the distance from a point $x_0$ to the plane $w^Tx+b = 0$ is calculated by: \begin{aligned} \text{Distance} = \dfrac{|w^Tx_0 + b|}{||w||} \end{aligned} Then we want the hyperplane to correctly separate the two classes, which is equivalent to satisfying the following restrictions:

\begin{aligned} &(w^Tx_i+b) > 0, & & \text{if } y_i = 1, \\ &(w^Tx_i+b) < 0, & & \text{if } y_i = -1, \end{aligned}

which is equivalent to \begin{aligned} &y_i(w^Tx_i+b) > 0, & &i = 1, \ldots, n. \end{aligned}
Our goal is to maximize the minimum distance over all observations to that hyperplane, so we have our first optimization problem: \begin{aligned} &\underset{w,b}{\text{maximize}} & & \min_i \left\{\frac{|w^Tx_i+b|}{||w||}\right\}\\ & \text{s.t.} & & y_i(w^Tx_i+b) > 0, i = 1, \ldots, n. \end{aligned}

Then we define the margin $M = \min_i\left\{\frac{|w^Tx_i+b|}{||w||}\right\}$, and for mathematical convenience, we scale $w$ so that $||w|| = 1$:

\begin{aligned} &\underset{w,||w|| = 1,b}{\text{maximize}} & & M\\ & \text{s.t.} & & y_i(w^Tx_i+b) \geq M, i = 1, \ldots, n. \end{aligned}

Then we define $v = \frac{w}{M}$, so the norm $||v||$ is $\frac{1}{M}$. We substitute $v$ back into our optimization:

\begin{aligned} &\underset{v,b}{\text{maximize}} & & \frac{1}{||v||}\\ & \text{s.t.} & & y_i(\frac{w^T}{M}x_i+\frac{b}{M}) \geq \frac{M}{M} = 1, i = 1, \ldots, n. \end{aligned}

Then we rename the variable $v$ back to $w$; maximizing $\frac{1}{||v||}$ is equivalent to minimizing $\frac{1}{2}w^Tw$. So we get our final optimization problem:

\begin{aligned} &\underset{w,b}{\text{minimize}} & & \frac{1}{2}w^Tw\\ & \text{s.t.} & & y_i(w^Tx_i+b) \geq 1, i = 1, \ldots, n. \end{aligned}
We want to use the method of Lagrange multipliers to solve the optimization problem. We can write our constraints as $g_i(w) = y_i(w^T x_i+b) - 1 \geq 0$. Let's define $L(w,b,\alpha \geq 0) = \frac{1}{2}w^Tw - \sum_{i=1}^{n}\alpha_i[y_i(w^Tx_i+b) -1]$. We observe that, for feasible $(w,b)$, the maximum of the function $L$ with respect to $\alpha$ equals $\frac{1}{2}w^Tw$. So we change our original optimization problem to the new one: \begin{aligned} &\underset{w,b}{\text{min }} \underset{\alpha \geq 0}{\text{max }}L(w,b,\alpha) = \frac{1}{2}w^Tw - \sum_{i=1}^{n}\alpha_i[y_i(w^Tx_i+b) -1] \end{aligned} where $\alpha$ is the vector of Lagrange multipliers.

We can verify that $w,b,\alpha$ satisfy the Karush-Kuhn-Tucker (KKT) conditions. Therefore, we can solve the primal problem by solving its dual problem: \begin{aligned} &\underset{\alpha \geq 0}{\text{max }}\underset{w,b}{\text{min }} L(w,b,\alpha) = \frac{1}{2}w^Tw - \sum_{i=1}^{n}\alpha_i[y_i(w^Tx_i+b) -1] \end{aligned}
To solve the dual problem, we first need to set the partial derivatives of $L(w,b,\alpha)$ with respect to $w$ and $b$ to 0: \begin{aligned} & \nabla_w L(w,b,\alpha)= w - \sum_{i = 1}^{n}\alpha_iy_ix_i = 0\\ & \frac{\partial}{\partial b}L(w,b,\alpha) = 0 \Rightarrow \sum_{i=1}^{n}\alpha_iy_i= 0 \end{aligned}

Then we substitute them back into the function $L(w,b,\alpha)$: \begin{aligned} L(w,b,\alpha) & =\frac{1}{2}w^Tw-\sum_{i = 1}^{n}\alpha_i(y_i(w^Tx_i+b)-1)\\ & = \frac{1}{2}w^Tw - \sum_{i = 1}^{n}\alpha_iy_iw^Tx_i - \sum_{i = 1}^{n}\alpha_iy_ib + \sum_{i = 1}^{n}\alpha_i\\ & = \frac{1}{2}w^T\sum_{i = 1}^{n}\alpha_iy_ix_i - w^T\sum_{i = 1}^{n}\alpha_iy_ix_i - b\sum_{i = 1}^{n}\alpha_iy_i + \sum_{i = 1}^{n}\alpha_i\\ & = -\frac{1}{2}w^T\sum_{i = 1}^{n}\alpha_iy_ix_i - b\sum_{i = 1}^{n}\alpha_iy_i + \sum_{i = 1}^{n}\alpha_i \\ & = -\frac{1}{2}\left(\sum_{i = 1}^{n}\alpha_iy_ix_i\right)^T\sum_{i = 1}^{n}\alpha_iy_ix_i - b\sum_{i = 1}^{n}\alpha_iy_i + \sum_{i = 1}^{n}\alpha_i \\ & = -\frac{1}{2}\sum_{i = 1}^{n}\alpha_iy_i(x_i)^T\sum_{i = 1}^{n}\alpha_iy_ix_i - b\sum_{i = 1}^{n}\alpha_iy_i + \sum_{i = 1}^{n}\alpha_i \\ & = -\frac{1}{2}\sum_{i,j = 1}^{n}\alpha_i\alpha_jy_iy_jx_i^Tx_j - b\sum_{i = 1}^{n}\alpha_iy_i + \sum_{i = 1}^{n}\alpha_i \\ & = -\frac{1}{2}\sum_{i,j = 1}^{n}\alpha_i\alpha_jy_iy_jx_i^Tx_j + \sum_{i = 1}^{n}\alpha_i \\ \end{aligned}

and simplify the dual problem as: \begin{aligned} \underset{\alpha \geq 0}{\text{maximize }} & W(\alpha) = \sum_{i = 1}^{n}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{n}y_iy_j\alpha_i\alpha_j<x_i,x_j> \\ \text{s.t. } & \sum_{i = 1}^{n}\alpha_iy_i = 0 \end{aligned}
Then we notice that this is a convex optimization problem, so we can use the SMO algorithm to solve for the Lagrange multipliers $\alpha$. Using formula (5), we can then compute the parameter $w$. Also, $b$ is calculated as \begin{aligned} b = -\frac{\max_{i:y_i = -1}{w^Tx_i} + \min_{i:y_i = 1}{w^Tx_i}}{2} \end{aligned}

Therefore, after figuring out the parameters $w$ and $b$ of the best hyperplane, for a new observation $x_i$, the decision rule is \begin{aligned} & y_i = {\begin{cases} 1, & \text{for } w^Tx_i+b \geq 1\\ -1, & \text{for } w^Tx_i+b \leq -1\\ \end{cases}} \end{aligned}
1.2 Soft Margin
In most real-world data, a hyperplane cannot totally separate binary-class data, so we tolerate some training observations lying on the wrong side of the margin or hyperplane. The margin in this situation is called the Soft Margin, and the resulting classifier a Support Vector Classifier. Here is the primal optimization problem of the Soft Margin:

\begin{aligned} & \underset{w,b,\varepsilon_i}{\text{minimize}} & & \frac{1}{2}w^Tw + C\sum_{i = 1}^{n}\varepsilon_i\\ & \text{s.t.} & & y_i(w^Tx_i+b) \geq 1 - \varepsilon_i,\; \text{and } \varepsilon_i \geq 0, i = 1, \ldots, n, \end{aligned}

where the $\varepsilon_i$ are the slack variables that allow misclassification; the penalty term $\sum_{i=1}^{n}\varepsilon_i$ is a measure of the total amount of misclassification in the model constructed from the training dataset; and $C$, a tuning parameter, is the misclassification cost that controls the trade-off between maximizing the margin and minimizing the penalty term.
Following the same process we used in deriving the hard margin, we can solve the primal problem by working in its dual space. Here is the Lagrangian of the problem above, where $\alpha_i$ and $\beta_i$ are the Lagrange multipliers:

\begin{aligned} \underset{w,b,\varepsilon_i}{\text{min }}\underset{\alpha \geq 0, \beta \geq 0}{\text{max }}L(w,b,\alpha, \beta,\varepsilon) &= \frac{1}{2}||w||^2 +C\sum\limits_{i=1}^{n}\varepsilon_{i}-\sum\limits_{i=1}^{n} \alpha_{i} [y_{i}({w}^T {x}_{i} + b)- 1+\varepsilon_{i}] -\sum\limits_{i=1}^{n}\beta_{i}\varepsilon_{i} \end{aligned}

Due to the nonnegativity of the primal constraints and of the Lagrange multipliers, the part $-\sum\limits_{i=1}^{n} \alpha_{i} [y_{i}({w}^T {x}_{i} + b) - 1+\varepsilon_{i}] -\sum\limits_{i=1}^{n}\beta_{i}\varepsilon_{i}$ is non-positive for feasible $(w,b,\varepsilon)$; thus the maximization over $\alpha$ and $\beta$ drives that part to zero, so that $\underset{\alpha \geq 0, \beta \geq 0}{\text{max }}L(w,b,\alpha, \beta,\varepsilon) = \frac{1}{2}||w||^2 +C\sum\limits_{i=1}^{n}\varepsilon_{i}$ for feasible $w$, $b$, and $\varepsilon$. However, the constraints $\alpha \geq 0$ and $\beta \geq 0$ make it difficult to work with this maximization directly. Therefore, we transform the primal problem into the following dual problem through the KKT conditions:

\begin{aligned} \underset{\alpha \geq 0, \beta \geq 0} {\text{max}}\underset{w,b,\varepsilon_i}{\text{min }}L(w,b,\alpha, \beta,\varepsilon) &= \frac{1}{2}||w||^2 +C\sum\limits_{i=1}^{n}\varepsilon_{i}-\sum\limits_{i=1}^{n} \alpha_{i} [y_{i}({w}^T {x}_{i} + b)- 1+\varepsilon_{i}] -\sum\limits_{i=1}^{n}\beta_{i}\varepsilon_{i} \end{aligned}
Because the inner subproblem minimizes the equation with respect to $w$, $b$, and $\varepsilon$ with no constraints on them, we can set the gradients to 0 as follows to find $\underset{w,b,\varepsilon_i}{\text{min }}L(w,b,\alpha, \beta,\varepsilon)$:

\begin{aligned} \nabla_w L &= {w} - \sum\limits_{i=1}^{n} \alpha_{i} y_{i} x_{i}=0 \Rightarrow {w} = \sum\limits_{i=1}^{n} \alpha_{i} y_{i} x_{i}\\ \frac{\partial L}{\partial b} &= - \sum\limits_{i=1}^{n} \alpha_{i} y_{i}= 0 \Rightarrow \sum\limits_{i=1}^{n} \alpha_{i} y_{i}=0\\ \frac{\partial L}{\partial \varepsilon_{i}} &= C- \alpha_{i} - \beta_{i}=0 \Rightarrow C = \alpha_{i}+\beta_{i}\\ \end{aligned}
When we plug Equation set $(12)$ to Equation $(11)$, we get the dual Lagrangian form of this optimization problem as follows:
\begin{aligned} &\mathop{max}_{\alpha_i \geq 0} && \sum\limits_{i=1}^{n} \alpha_{i} - \frac{1}{2}\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n} \alpha_{i}\alpha_{j}y_{i}y_{j}{x}_{i}^T{x}_{j} \\ &s.t. && \sum_{i=1}^{n} y_i \alpha_i = 0 \text{ and } 0 \leq \alpha_i \leq C, \text{ for } i = 1,...,n \end{aligned}
Finally, the decision rule will show as follows:
$(1)$ If $\alpha_{i} = 0$ and $\varepsilon_{i} = 0$, the training observation $x_{i}$ has been classified correctly and lies outside the margin.

$(2)$ If $0 < \alpha_{i} < C$, we can conclude that $\varepsilon_{i} = 0$ and $y_{i}({w}^T {x}_{i} + b) = 1$, which means the training observation $x_{i}$ is a support vector lying exactly on the margin.

$(3)$ If $\alpha_{i} = C$ and $\varepsilon_{i} > 0$, then $x_{i}$ is a support vector that falls inside the margin; it is still correctly classified when $0 < \varepsilon_{i} < 1$, and it is misclassified when $\varepsilon_{i} > 1$.
1.3: Kernel Method
Since we calculated $w=\sum_{i=1}^{n}\alpha_iy_ix_i$, our hyperplane can be rewritten as $g(x) = (\sum_{i=1}^{n}\alpha_iy_ix_i)^{T}x + b = \sum_{i=1}^{n}\alpha_iy_i<x_i,x> + b$, where $<x_i,x>$ represents the inner product of $x_i$ and $x$. What we discussed above assumes the data are linearly separable. What if our data set is not linearly separable? One common approach is to define a map $\phi:\mathbb{R}^p\to\mathbb{R}^m$. A feature mapping $\phi$, for instance, could be \begin{aligned} & \phi(x) = \begin{bmatrix}x \\ x^2 \\ x^3 \end{bmatrix}. \end{aligned}
The idea is that if the observations are inseparable in current feature space, we can use a feature mapping function $\phi$ and try to separate the observations after being mapped.
Here is a plot that shows how the mapping function $\phi$ works.
Hence, in order to find the hyperplane that separates the observations after they are mapped, we simply replace $x$ with $\phi(x)$ everywhere in the previous optimization problem and get:

\begin{aligned} &\underset{w,b}{\text{minimize}} & & \frac{1}{2}w^Tw\\ & \text{s.t.} & & y_i(w^T\phi(x_i)+b) \geq 1, i = 1, \ldots, n. \end{aligned}
From the previous part, we know that for the observations, we only need their pairwise inner products to solve the problem. But since computing the inner product in a high-dimensional space is expensive or even impossible, we need another way of computing it. So here, given a feature mapping $\phi(x)$, we define the corresponding kernel function $K$ as:
\begin{aligned} K(x_i,x_j) = \phi(x_i)^T\phi(x_j) = <\phi(x_i),\phi(x_j)> \end{aligned}
So now, instead of actually computing the inner product of pairwise observations, we use the kernel function to help us in computation. And using the same methods as before, we can get our hyperplane as:
\begin{aligned} w = \sum_{i=1}^{n}\alpha_iy_i\phi(x_i), \end{aligned}
and \begin{aligned} b = -\frac{\max_{i:y_i = -1}\sum_{j=1}^{n}\alpha_jy_jK(x_j,x_i) + \min_{i:y_i = 1}\sum_{j=1}^{n}\alpha_jy_jK(x_j,x_i)}{2} \end{aligned}

So after using the kernel, our decision rule becomes: \begin{aligned} & y_i = {\begin{cases} 1, & \text{for } \sum_{i=1}^{n}\alpha_iy_i<\phi(x_i),\phi(x)> + b \geq 1\\ -1, & \text{for } \sum_{i=1}^{n}\alpha_iy_i<\phi(x_i),\phi(x)> + b \leq -1\\ \end{cases}} \end{aligned}

There are some commonly used kernels: the polynomial kernel and the Gaussian kernel. The polynomial kernel of degree $d$ is defined as $K(x_i,x_j) = (x_i^Tx_j)^d$, and the Gaussian kernel with parameter $\sigma$ as $K(x_i,x_j) = \text{exp}\left(-\frac{||x_i-x_j||^2}{2\sigma^2}\right)$.
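As a hedged illustration of how these kernels are typically used in practice, the `svm()` function in the R package `e1071` exposes them directly; the data frame `dat` and its factor label `y` below are hypothetical placeholders:

``````
library(e1071)
# `dat` is an assumed data frame with numeric predictors and a two-level factor column y.
fit.linear <- svm(y ~ ., data = dat, kernel = "linear", cost = 1)
fit.poly   <- svm(y ~ ., data = dat, kernel = "polynomial", degree = 2, cost = 1)
fit.radial <- svm(y ~ ., data = dat, kernel = "radial", gamma = 0.5, cost = 1)
table(true = dat$y, predicted = predict(fit.radial, dat))   # training confusion matrix
``````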
3.1: K nearest neighbors
Assume we are given a dataset where $X$ is a matrix of features from an observation and $Y$ is a class label. We will use this notation throughout this article. $k$-nearest neighbors then, is a method of classification that estimates the conditional distribution of $Y$ given $X$ and classifies an observation to the class with the highest probability. Given a positive integer $k$, $k$-nearest neighbors looks at the $k$ observations closest to a test observation $x_0$ and estimates the conditional probability that it belongs to class $j$ using the formula
$Pr(Y=j|X=x_0)=\frac{1}{k}\sum_{i\in \mathcal{N}_0}I(y_i=j)$
where $\mathcal{N}_0$ is the set of $k$-nearest observations and $I(y_i=j)$ is an indicator variable that evaluates to 1 if a given observation $(x_i,y_i)$ in $\mathcal{N}_0$ is a member of class $j$, and 0 if otherwise. After estimating these probabilities, $k$-nearest neighbors assigns the observation $x_0$ to the class which the previous probability is the greatest. The following plot can be used to illustrate how the algorithm works:
• If we choose $K = 3$, then we have 2 observations in Class B and one observation in Class A. So, we classify the red star to Class B.
• If we choose $K = 6$, then we have 2 observations in Class B but four observations in Class A. So, we classify the red star to Class A.
3.2: The Calculation of Distance
Since in k-NN algorithm, we need k nearest points, thus, the first step is calculating the distance between the input data point and other points in our training data. Suppose $x$ is a point with coordinates $(x_1,x_2,...,x_p)$ and $y$ is a point with coordinates $(y_1,y_2,...,y_p)$, then the distance between these two points is: $d(x,y) = \sqrt{\sum_{i=1}^{p}(x_i - y_i)^2}$
3.3: The Effect of K
As with most statistical learning models, if $K$ is small, then we use a smaller region of the data to learn, which may cause over-fitting. If $K$ is large, then we use a larger region of the data to learn, which may cause under-fitting. The following plot shows the effects of $K$: as $K$ gets larger, the decision boundary appears more linear.
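A short sketch of this in R (the objects `train.X`, `test.X`, `train.y`, and `test.y` are placeholders for your own feature matrices and label vectors):

``````
library(class)
library(e1071)
# choose k by 10-fold cross validation on the training data
tuned <- tune.knn(x = train.X, y = train.y, k = 1:20,
                  tunecontrol = tune.control(cross = 10))
pred <- knn(train = train.X, test = test.X, cl = train.y,
            k = tuned$best.parameters$k)
table(true = test.y, predicted = pred)
``````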
Analysis of the Similarities and Differences between SVM, LDA and QDA (Heng Xu)
Connection between SVM, Covariance Adjusted SVM, LDA and QDA
We here want to show some interesting connections between SVM, Covariance Adjusted SVM, LDA and QDA. Under certain conditions, these methods construct very similar classifiers. But first, let's briefly introduce the ideas of the other three methods.
1. Covariance Adjusted SVM
Covariance Adjusted SVM is a modified SVM that takes the pooled variance-covariance matrix into consideration. The idea is that we want to maximize the margin in the direction of $\Sigma^{\frac{1}{2}}\textbf{w}$. Here we use $\Sigma$ to denote the pooled sample covariance for class 1 and class 2, which equals $\frac{n_1s_1^2+n_2s_2^2}{N}$. We then form our model as

\begin{aligned} & \underset{w,b,\varepsilon_i}{\text{minimize}} & & \frac{1}{2}w^T\Sigma w + C\sum_{i = 1}^{n}\varepsilon_i\\ & \text{s.t.} & & y_i(w^Tx_i+b) \geq 1 - \varepsilon_i,\; \text{and } \varepsilon_i \geq 0, i = 1, \ldots, n. \end{aligned}

In fact, we can verify that Covariance Adjusted SVM is equivalent to transforming the training data by $\Sigma^{-\frac{1}{2}}$ and applying the standard SVM to the transformed data. The verification is as follows:

\begin{aligned} & \underset{w,b,\varepsilon_i}{\text{minimize}} & &\frac{1}{2}\textbf{w}^T(\Sigma^{\frac{1}{2}})^T (\Sigma^{\frac{1}{2}})\textbf{w}+C\sum_{i = 1}^{n}\varepsilon_i\\ & \text{s.t.} & &y_i(\textbf{w}^T\Sigma^{\frac{1}{2}}\Sigma^{-\frac{1}{2}}x_i+b) \geq 1 - \varepsilon_i, \text{ for } i = 1,...,n. \end{aligned}

In practice, such a model performs at least as well as the traditional SVM, and sometimes corrects improper predictions made by the traditional SVM, which will be demonstrated in the following comparison.
2. LDA and QDA
Linear Discriminant Analysis and Quadratic Discriminant Analysis are popular traditional classification methods. These two methods assume each class comes from a multivariate Gaussian distribution and use statistical properties of the data, the variance-covariance matrix and the mean, to establish the classifier. The main difference between LDA and QDA is that if we have observed or calculated that the classes have similar variance-covariance matrices, we use LDA to construct a straight line as our classifier; otherwise, if the classes have different variance-covariance matrices, we use QDA to construct a quadratic curve as our classifier. The following plot gives some basic ideas:
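For reference, a minimal sketch of fitting these two classifiers in R; the two-class data frame `dat` with numeric predictors `x1`, `x2` and factor label `y` is hypothetical:

``````
library(MASS)
fit.lda <- lda(y ~ x1 + x2, data = dat)   # assumes a common covariance matrix
fit.qda <- qda(y ~ x1 + x2, data = dat)   # allows class-specific covariance matrices
table(true = dat$y, lda = predict(fit.lda, dat)$class)
table(true = dat$y, qda = predict(fit.qda, dat)$class)
``````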
3. Comparison with SVM, Covariance Adjusted SVM, LDA and QDA
Case 1: 2 dimension, Same Variance-Covariance and merged heavily
We here look at an example with two classes generated from multivariate Gaussian distributions with the same covariance matrix. We can see that the shapes of the two ellipses are similar and that the yellow class and the red class merge heavily with each other. We here use LDA (blue solid line), linear SVM (green solid line) and linear Covariance Adjusted SVM (red dashed line) respectively and plot them on the data set. It is obvious that these three lines give basically the same prediction. In other words, in the case that two classes merge heavily with each other, using LDA (representing the use of statistical information), SVM or Covariance Adjusted SVM would likely give the same prediction.
Case 2: 2 dimension, Same Variance-Covariance but merged not heavily
We here move the two classes a little further from each other and again use LDA (blue solid line), linear SVM (black solid line) and linear Covariance Adjusted SVM (red dashed line). We notice that, unlike in the heavily merged case, these three methods give quite different classifiers.
Case 3: 2 dimension, different Variance-Covariance and merged heavily
Here we are dealing with classes that have different variance-covariance matrices but still merge heavily. Since the shapes of the two classes are different, here we use QDA (figure 3), SVM with a polynomial kernel of degree 2 (figure 1) and Covariance Adjusted SVM with a polynomial kernel of degree 2 (figure 2). We observe that these three methods also give similar classifiers, even though the Covariance Adjusted SVM with a polynomial kernel of degree 2 neglects the yellow class on the left part.
Case 4: 2 dimension, different Variance-Covariance and merged heavily
We here want to look at one more extreme case. The yellow class and the red class have different variance-covariance matrices and also cross each other. But still, there are many points mixed with each other in the middle part. From previous experience, it is reasonable for these three methods to give similar predictions.
Case 5: 2 dimension, different Variance-Covariance but not merged heavily
We here decrease the number of points that are merged together. As in the situation of case 2, SVM and Covariance Adjusted SVM are different from QDA.
4. Analysis
We mainly compare two groups of classifiers: 1. linear SVM, Covariance Adjusted SVM and LDA; 2. SVM with a polynomial kernel of degree 2, Covariance Adjusted SVM with a polynomial kernel of degree 2, and QDA. Based on the comparison, we draw the following conclusion:

In the case that the observations of different classes merge with each other, we might consider using LDA and QDA, which use the statistical properties of the data to construct the classifiers. However, if there are enough mixed points, SVM and Covariance Adjusted SVM will construct very similar classifiers. And if there are not so many points mixed with each other, SVM and Covariance Adjusted SVM will be more 'ambitious' in finding the best classifier for the situation.
SVM has been successfully applied in many areas ranging from handwriting recognition to text classification and image retrieval. However, the performance of SVM decreases significantly when facing unbalanced data. Applications such as disease detection and credit card fraud detection have highly skewed datasets with a very small number of minority class instances, which are hard to classify correctly. However, the information in the minority class is very important. Default classifiers generally perform badly on imbalanced data, because they simply assume the classes are balanced. In this section I will show my exploration of the empirical behavior of linear SVM for unbalanced data. In particular, I will introduce the concept of the confusion matrix and of SVM with class weights, and illustrate these concepts by some simulation and real data analysis. For unbalanced data, our primal problem now assigns a separate misclassification cost to each class: \begin{aligned} & \underset{w,b,\varepsilon_i}{\text{minimize}} & & \frac{1}{2}w^Tw + C_{+}\sum_{i: y_i = 1}\varepsilon_i + C_{-}\sum_{i: y_i = -1}\varepsilon_i\\ & \text{s.t.} & & y_i(w^Tx_i+b) \geq 1 - \varepsilon_i,\; \text{and } \varepsilon_i \geq 0, i = 1, \ldots, n. \end{aligned}
1. Confusion Matrix
This study focuses on binary-class data. For imbalanced binary-class data, we need to assign different class weights, or different misclassification costs, to the two classes. We then discuss the confusion matrix and performance metrics. We assign the positive class (+) to the majority class and the negative class (-) to the minority one. Let $N_{ij}$ be the total number of cases predicted as class $i$ while the true class is $j$. The following is a confusion matrix:
In the confusion matrix, misclassification information is contained in the False Positives ($N_{12}$) and False Negatives ($N_{21}$), while $N_{11}$ and $N_{22}$ are the cases predicted correctly. Several performance metrics for choosing tuning parameters can be derived from the confusion matrix. Two popular performance metrics are:

• $\text{Accuracy} = \frac{1}{N_{..}}(N_{11} + N_{22})$

• $\text{Kappa} = (\text{Observed Accuracy} - \text{Expected Accuracy})/(1 - \text{Expected Accuracy})$, where $\text{Observed Accuracy} = \frac{1}{N_{..}}(N_{11} + N_{22})$ and $\text{Expected Accuracy} = \frac{1}{N_{..}^2}(N_{1.}\times N_{.1} +N_{2.}\times N_{.2})$ (Wikipedia, Cohen's kappa). Equivalently, $\text{Kappa} = \dfrac{N_{..}(N_{11} + N_{22}) - (N_{1.} \times N_{.1} + N_{2.} \times N_{.2})}{N_{..}^{2} - (N_{1.} \times N_{.1} + N_{2.} \times N_{.2})}$. Kappa measures how far the observed accuracy deviates from the accuracy expected by chance.
Most functions, such as tune() in the R package e1071 and train() in the R package caret, use accuracy as the default performance metric. However, because the calculation of accuracy does not consider the misclassification of each class separately, the effect of misclassifying the minority class is largely ignored. Therefore, I use Kappa as the performance metric to choose the optimal tuning parameters.
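A hedged sketch of how class weights and the Kappa statistic fit together in R; the data frames `train` and `test`, with a factor label `class` whose levels are "-1" and "1", are placeholders:

``````
library(e1071)
library(caret)
wts <- c("1" = 1, "-1" = 3)     # heavier cost for misclassifying the minority class "-1"
fit <- svm(class ~ ., data = train, kernel = "linear",
           cost = 0.1, class.weights = wts)
cm <- confusionMatrix(predict(fit, test), test$class)
cm$overall["Kappa"]             # Kappa of the test-set predictions
``````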
2. Simulation
The following simulation shows how building an SVM model with class weights improves the Kappa. The simulated data has 1000 observations in two classes: the majority class ($+1$) has 900 observations, while the minority class ($-1$) has 100 observations.

I randomly choose 800 observations as training data; 723 of them are majority class and 77 are minority class. After 10-fold cross validation, the optimal tuning parameter is $\text{cost} = 0.01$, which gives the largest Kappa value.
| cost | Accuracy | Kappa |
|------|----------|-------|
| 1e-02 | 0.9524351 | 0.7277368 |
| 1e-01 | 0.9437002 | 0.6267951 |
| 5e-01 | 0.9462002 | 0.6482385 |
| 1e+00 | 0.9462002 | 0.6576304 |
| 2e+00 | 0.9462002 | 0.6482385 |
| 5e+00 | 0.9462002 | 0.6482385 |
| 1e+01 | 0.9462002 | 0.6482385 |
| 1e+02 | 0.9474502 | 0.6537532 |
After building the linear SVM model with $\text{cost} = 0.01$, we get 151 support vectors: 76 from the majority class and 75 from the minority class. The training error is $6\%$ by the accuracy criterion. The test accuracy is $94\%$, but Kappa is only $61.87\%$; 12 minority class observations are predicted as majority class and no majority class observations are predicted wrongly. Therefore, the information of the minority class has been largely ignored by the choice of hyperplane.

I then build a model that takes class weights into account. After cross validation, I get an optimal cost of $0.1$ and weight of $3$, with the largest Kappa value of $74.26\%$. I then use those parameters to build the SVM model; it has 129 support vectors in total, 96 from the majority class and 33 from the minority class. When using this SVM model to predict the testing data, I get $\text{kappa} = 79.08\%$; 3 observations of the minority class are predicted as majority and 6 observations of the majority class are predicted as minority. After adding the weight, the Kappa improves by $17.21\%$. The following two figures show the hyperplane of the SVM model without weights (left) and with weights (right). After adding the weight, the hyperplane clearly moves towards the majority class; therefore, fewer observations of the minority class are classified as majority class. Linear SVM is thus very sensitive to unbalanced data.

One interesting discovery is that the ratio of support vectors is about $3:1$ for the majority class to the minority class, while the weight ratio is $1:3$. In the model without weights, the ratio of support vectors is about $1:1$. Therefore, the weight can control the complexity of the boundary between support vectors.

In addition, the reason we choose Kappa as our criterion instead of accuracy is that the trend of Kappa tracks the effect of the weight when the cost is fixed. I first fix $\text{cost} = 0.01$, the optimal cost under cross validation without weights. I then look at how the weight influences the classification of the test data through Kappa, accuracy, the number of misclassified minority class observations (mis_minor) and the number of misclassified majority class observations (mis_major), as in the following table.
| Weight | Kappa | Accuracy | mis_minor | mis_major |
|--------|-------|----------|-----------|-----------|
| 1.0 | 0.659 | 0.948 | 31 | 11 |
| 1.2 | 0.662 | 0.946 | 29 | 14 |
| 1.4 | 0.680 | 0.948 | 26 | 16 |
| 1.6 | 0.719 | 0.951 | 20 | 19 |
| 1.8 | 0.705 | 0.948 | 19 | 23 |
| 2.0 | 0.731 | 0.951 | 16 | 23 |
| 2.2 | 0.739 | 0.953 | 15 | 23 |
| 2.4 | 0.732 | 0.949 | 12 | 29 |
| 2.6 | 0.743 | 0.950 | 10 | 30 |
| 2.8 | 0.743 | 0.950 | 10 | 30 |
| 3.0 | 0.743 | 0.950 | 10 | 30 |
| 3.2 | 0.742 | 0.950 | 10 | 30 |
| 3.4 | 0.733 | 0.946 | 8 | 35 |
| 3.6 | 0.723 | 0.944 | 8 | 37 |
| 3.8 | 0.726 | 0.944 | 7 | 38 |
| 4.0 | 0.724 | 0.943 | 6 | 40 |
| 4.2 | 0.719 | 0.941 | 6 | 41 |
| 4.4 | 0.719 | 0.941 | 6 | 41 |
| 4.6 | 0.719 | 0.941 | 6 | 41 |
| 4.8 | 0.719 | 0.941 | 6 | 41 |
| 5.0 | 0.719 | 0.941 | 6 | 41 |
From the table, we see that as the weight increases, the accuracy does not vary much, but the Kappa increases significantly until $\text{weight} = 3$; it then begins to decrease. The accuracy also decreases a little when the weight is greater than 3. Therefore, a larger weight does not mean a better fit. The misclassification of the minority class decreases, but the misclassification of the majority class increases. Our goal when analyzing unbalanced data is to avoid letting the majority class dominate our choice of hyperplane and support vectors. When the classes are unbalanced, the density of majority class observations is higher than that of the minority class even around the hyperplane. As a consequence, in order to reduce the total number of misclassifications, the hyperplane will be chosen skewed towards the minority class, which lowers the model's performance on the minority class. Hence, the decision function in the SVM model will be more likely to classify a point near the hyperplane as majority class. We cannot ignore the information from the minority class; therefore, we put more punishment on misclassification of the minority class. Thus Kappa is a more appropriate criterion for finding the tuning parameters cost and weight.
Preprocessing of categorical predictors in SVM, KNN and KDC (contributed by Xi Cheng)
Non-numerical data such as categorical data are common in practice. Some classification methods are adaptive to categorical predictor variables in nature, but some methods can be only applied to continuous numerical data. Among the three classification methods, only Kernel Density Classification can handle the categorical variables in theory, while kNN and SVM are unable to be applied directly since they are based on the Euclidean distances. In order to define the distance metrics for categorical variables, the first step of preprocessing of the dataset is to use dummy variables to represent the categorical variables.
Secondly, due to the distinct natures of categorical and numerical data, we usually need to standardize the numerical variables so that the contributions to the Euclidean distance from a numerical variable and from a categorical (dummy) variable are on roughly the same scale.
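For example, one simple way to put the numeric columns on a scale comparable to the 0/1 dummy variables is to standardize them; the data frame `dat` here is only a placeholder:

``````
num.cols <- sapply(dat, is.numeric)          # identify the numeric columns
dat[num.cols] <- scale(dat[num.cols])        # center and scale them to unit variance
``````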
Finally, the introduction of dummy variables usually increase the dimension significantly. By various experiments, we find that dimension reduction techniques such as PCA usually improve the performance of these three classifiers significantly.
Car Evaluation Data Set
This dataset is from UCI machine learning repository, which was derived from a simple hierarchical decision model. The model evaluates cars according to the following six categorical features:
• V1: the buying price (v-high, high, med, low),
• V2: the price of maintenance (v-high, high, med, low),
• V3: the number of doors (2, 3, 4, 5-more),
• V4: the capacity interms of persons to carry (2, 4, more),
• V5: the size of luggage boot (small, med, big),
• V6: and the estimated safety of the car (low, med, high).
For kernel density classification, I use the `NaiveBayes` function with the argument `usekernel = T` from the `klaR` package, which fits a Naive Bayes model in which predictors are assumed to be independent within each class label; kernel density estimation can be used to estimate their class-conditional distributions. Although it can be applied to categorical variables directly, the misclassification rates are quite high. See the tables below:
``````### import data ###
library(readr)
car <- read.csv("~/Desktop/RTG/dataset/car.data.txt", header = F)
# V7: unacc, acc, good, vgood
roww <- nrow(car)
coll <- ncol(car)
numTrain <- floor((2/3) * roww)
numTest <- roww - numTrain
training <- car[sample(roww, numTrain), ]
test <- car[sample(roww, numTest), ]
### KDC ###
library(MASS)
library(klaR)
nb1 <- NaiveBayes(V7 ~.,data=training, usekernel=T)
p1 <- predict(nb1, test[,1:6])
table(true = test\$V7, predict = p1\$class)
p2 <- predict(nb1, training[,1:6])
table(true = training\$V7,predict = p2\$class)
1 - mean(p1\$class != test\$V7)
## Confusion matrix of the training data ##
predict
true acc good unacc vgood
acc 208 10 53 0
good 34 11 0 0
unacc 46 1 745 0
vgood 17 0 0 27
## Confusion matrix of the testing data ##
predict
true acc good unacc vgood
acc 95 3 26 0
good 22 5 0 0
unacc 21 1 377 0
vgood 12 0 0 14``````
For SVM classification, we can create dummy variables to represent the categorical variables. For each variable, we create one dummy variable per level. For example, V1, which has four levels, is replaced with four variables: V1.high, V1.low, V1.med, and V1.vhigh. If V1 = vhigh for a particular row, then V1.vhigh = 1 and V1.high, V1.low, and V1.med are all 0. Since there are no numeric predictor variables in this dataset, we don't need to consider standardization of numerical variables. Then I use the `svm` function from the `e1071` package with both radial and linear kernels. The two important parameters `cost` and `gamma` are obtained by the `tune.svm` function. The classification results are shown below.
``````### encode to dummy variables ###
library(lattice)
library(ggplot2)
library(caret)
dummies <- dummyVars(~ ., data=training[,-7])
c2 <- predict(dummies, training[,-7])
d_training <- as.data.frame(cbind(training\$V7, c2))
dummies <- dummyVars(~ ., data=test[,-7])
c2 <- predict(dummies, test[,-7])
d_test <- as.data.frame(cbind(test\$V7, c2))
### SVM ###
library(e1071)
gammalist <- c(0.005,0.01,0.015,0.02,0.025,0.03,0.035,0.04,0.045,0.05)
tune.out <- tune.svm(as.factor(V1) ~., data=d_training,
kernel='radial', cost=2^(-1:5), gamma = gammalist)
summary(tune.out)
summary(tune.out\$best.model)
svm1 <- predict(tune.out\$best.model, d_test[,-1])
confusionMatrix(svm1, as.factor(d_test\$V1))
tune.out2 <- tune.svm(as.factor(V1) ~., data=d_training,
kernel='linear', cost=2^(-1:5), gamma = gammalist)
summary(tune.out2)
summary(tune.out2\$best.model)
svm2 <- predict(tune.out2\$best.model, d_test[,-1])
confusionMatrix(svm2, as.factor(d_test\$V1))
## Test on Training Set ##
predict
true 1 2 3 4
1 271 0 0 0
2 0 45 0 0
3 0 0 792 0
4 0 0 0 44
## Test on Test Set ##
predict
true 1 2 3 4
1 123 1 0 0
2 0 27 0 0
3 1 0 398 0
4 0 0 0 26``````
For kNN classification, I use `knn` function from `class` package after all categorical variables are encoded to dummy variables. The parameter `k` is obtained by `tune.knn` function by 10-fold cross validation. The classification result is shown below.
`````` predict
true 1 2 3 4
1 119 0 5 0
2 4 23 0 0
3 4 0 395 0
4 3 0 0 23``````
The classification success rates for testing on the test set with these three methods are shown below.

| KDC | SVM | kNN |
|-----|-----|-----|
| 0.8524 | 0.9965 | 0.9722 |
We can see that handling categorical variables using dummy variables works for SVM and kNN and they perform even better than KDC. Here, I try to perform the PCA dimension reduction method to this small dataset, to see if dimension reduction improves classification for categorical variables in this simple case.
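One way this might be done with the dummy-coded car data (only a sketch; any zero-variance dummy columns would need to be dropped before scaling):

``````
pca <- prcomp(d_training[, -1], center = TRUE, scale. = TRUE)
summary(pca)                                   # cumulative proportion of variance explained
train.pc <- data.frame(V1 = d_training$V1, pca$x[, 1:15])
test.pc  <- data.frame(V1 = d_test$V1,
                       predict(pca, newdata = d_test[, -1])[, 1:15])
``````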
Here I choose the first 15 principal components. The classification results are shown below.
``````Naive Bayes (KDE): 0.9861111
predict
true 1 2 3 4
1 141 2 0 0
2 2 18 1 0
3 0 0 390 3
4 0 0 0 19
Naive Bayes (Normal): 0.984375
predict
true 1 2 3 4
1 140 3 0 0
2 4 16 1 0
3 0 0 393 0
4 0 0 1 18
SVM (radial, gamma = 0.02, cost = 8, Number of Support Vectors: 308): 1
Reference
Prediction 1 2 3 4
1 143 0 0 0
2 0 21 0 0
3 0 0 393 0
4 0 0 0 19
SVM (linear, gamma = 0.005, cost = 1, Number of Support Vectors: 201): 1
Reference
Prediction 1 2 3 4
1 143 0 0 0
2 0 21 0 0
3 0 0 393 0
4 0 0 0 19
kNN
- sampling method: 10-fold cross validation -> k = 6 (0.984375)
predict
true 1 2 3 4
1 142 0 1 0
2 5 13 3 0
3 0 0 393 0
4 0 0 0 19``````
According to the results, we can see that performing PCA improves the classification, especially for KDE. We may also be interested in how the tuned parameters and the number of support vectors in the SVM method (chosen by the `tune.svm` function) change as we vary the number of principal components used.
Mushroom Database
This dataset is obtained from UCI Machine Learning Repository, which was derived from Audobon Society Field Guide. Mushrooms are described in terms of physical characteristics, and we want to classify a mushroom to be poisonous or edible. There are 22 predictor variables, such as cap-shape (bell=b, conical=c, convex=x, flat=f, knobbed=k, sunken=s) and habitat ( grasses=g, leaves=l, meadows=m, paths=p, urban=u, waste=w, woods=d), which are all categorical variables. Since the dimension of the dataset would be even higher after encoding all categorical variables into dummy variables, I used Principal Component Analysis (PCA) to perform dimension reduction.
From the plot above, we can see that 40 components give a cumulative variance close to 80%. Therefore, in this case, we'll select the number of components as 40 [PC1 to PC40] and proceed to the modeling stage. The methods for applying the three classifiers are similar to the ones used in the last section.
``````#### Kernel Density Classification ####
## Test on Training Set ##
1 2
1 2798 25
2 2 2591
## Test on Test Set ##
1 2
1 1408 19
2 3 1278
#### SVM ####
## Test on Training Set ##
pred2 1 2
1 2800 0
2 0 2616
## Test on Test Set ##
pred 1 2
1 1411 0
2 0 1297
#### kNN ####
k1 1 2
1 1411 4
2 0 1293``````
The success rates of testing on the test set using these three classification methods are shown below.

| KDC | SVM | kNN |
|-----|-----|-----|
| 0.9918759 | 1 | 0.9985229 |
We can see that all methods perform well on this mushroom dataset when we choose to use number of components as 40. The following plot shows the classification success rate when selecting different number of components.
Connect-4 Data Set
This data set is derived from UCI Machine Learning Repository, which contains 67557 instances of data and all legal 42 positions in the game of connect-4 in which neither player has won yet, and in which the next move is not forced. The outcome class is the game theoretical value for the first player (win, loss, draw). The attributes are the 42 positions, each of which is a categorical variable and has three levels, x=first player has taken, o=second player has taken, b=blank. I first tried using Kernel Density Classification directly on the dataset. The classification result is shown below.
``````## Test on Training Set ##
predict
true draw loss win
draw 310 514 3527
loss 450 4534 6078
win 503 1394 27728
## Test on Test Set ##
predict
true draw loss win
draw 164 269 1770
loss 232 2273 3051
win 247 693 13820``````
The classification success rate is only 72.19%, which implies that we need to preprocess the data. Here, I encoded the variables as dummy variables and first tried the SVM and kNN methods. The classification results are shown below.
``````#### SVM ####
## Test on Training Set ##
predict
true 1 2 3
1 4093 106 152
2 33 10972 57
3 51 50 29524
## Test on Test Set ##
predict
true 1 2 3
1 1654 247 302
2 148 5162 246
3 199 263 14298
#### kNN ####
predict
true 1 2 3
1 267 368 1568
2 89 3785 1682
3 27 248 14485``````
The classification success rate is 93.76% for SVM and 82.32% for kNN. After encoding all variables to dummy variables, we can also try KDC in this case. The classification result is shown below.
``````## Test on Training Set ##
predict
true 1 2 3
1 0 0 4351
2 0 0 11062
3 0 0 29625
## Test on Test Set ##
predict
true 1 2 3
1 0 0 2203
2 0 0 5556
3 0 0 14760``````
In this case, KDC doesn’t work and can’t classify data to different classes. Then, we want to perform PCA to reduce dimension.
Here I first choose the first 80 principal components. The KDC method has better performance, with a classification success rate of 87.35%, as shown below.
``````## Test on Training Set ##
## predict
## true 1 2 3
## 1 3061 1217 73
## 2 692 8788 1582
## 3 145 1603 27877
## Test on Test Set ##
## predict
## true 1 2 3
## 1 1454 712 37
## 2 352 4348 856
## 3 81 811 13868``````
Then I perform SVM and kNN after this dimension reduction.
``````#### SVM ####
## Test on Training Set ##
## predict
## true 1 2 3
## 1 4351 0 0
## 2 0 11062 0
## 3 0 0 29625
## Test on Test Set ##
## predict
## true 1 2 3
## 1 2194 8 1
## 2 1 5550 5
## 3 0 1 14759
#### kNN ####
## predict
## true 1 2 3
## 1 1125 736 342
## 2 190 4510 856
## 3 20 406 14334``````
1. Letter data
Firstly, I want to use a data set of hand-written letters from the UCI machine learning repository to illustrate the use of classification methods. The objective is to identify each of a large number of black-and-white rectangular pixel displays as one of the 26 capital letters in the English alphabet. The data contains 1 categorical variable and 16 numerical variables. A clearer explanation of these variables is helpful, so the following picture shows the description of the variables.
Here, we first write a Bayes Classifier function.
In this function, we first randomize the 20000 samples and select the training and testing data. Here, "training_size" is the size of our training data, and "number_you_like" represents the seed we set when we randomize our data.
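The original function is only shown as a screenshot, so here is a rough sketch of what such a function could look like, assuming the letter data has been read into a data frame `letters.dat` whose first column `V1` is the letter label (this uses `naiveBayes` from `e1071` and is only an approximation of the author's code):

``````
library(e1071)
bayes_classifier <- function(training_size, number_you_like) {
    set.seed(number_you_like)                        # seed used to randomize the data
    shuffled <- letters.dat[sample(nrow(letters.dat)), ]
    train <- shuffled[1:training_size, ]
    test  <- shuffled[(training_size + 1):nrow(shuffled), ]
    fit   <- naiveBayes(V1 ~ ., data = train)
    mean(predict(fit, test) == test$V1)              # classification accuracy on the test set
}
``````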
We choose our training sample size to be 16000, and when we randomize the data, we use set.seed(1). Then, our output is as follows:
From the output, we see the Bayes Classifier is not powerful for this data set. So, we want to try SVM to see if we can find a better classifier our not.
We write an SVM classifier function; as with the Bayes one, we also randomize the data first and then choose our sample.
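Again, the original code is only a screenshot; a sketch under the same assumptions (the `letters.dat` data frame and label column `V1` are hypothetical names) might be:

``````
# uses svm() from the e1071 package loaded above
svm_classifier <- function(training_size, number_you_like) {
    set.seed(number_you_like)
    shuffled <- letters.dat[sample(nrow(letters.dat)), ]
    train <- shuffled[1:training_size, ]
    test  <- shuffled[(training_size + 1):nrow(shuffled), ]
    fit   <- svm(V1 ~ ., data = train, kernel = "radial")
    mean(predict(fit, test) == test$V1)              # classification accuracy on the test set
}
``````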
We also choose our training sample size to be 16000 and set.seed(1). Then, our output is as follows:

From the result, we see that SVM is much more powerful than the Bayes classifier.
2.Text Classification
2.1 Introduction
The idea of text classification is that: if we are given a set of different kinds of articles, such as news and scientific articles, we can use them as training set to make classification for future articles. Here, I use 5 news groups from BBC to illustrate some methods for doing text classification.
2.2 Classification
In order to do classification, we need to translate our text files into an $n\times p$ matrix, where $n$ denotes the size of our training data and $p$ denotes the number of features describing one text file. Thus, our first goal is to transform the text into a matrix; then we use SVM or KNN to construct a classifier.
2.3 Pre-processing of the text
In this experiment, we mainly use Python 2.7 to process the data and R to construct the classifiers and make predictions.
Firstly, we need to set the path of our data set and select the classes to do classification.
import os

folder_path = '/Users/shiyuanli/Desktop/bbc'
classes = os.listdir(folder_path)[1:6]
Here the news groups are ’business’, ’entertainment’, ’politics’, ’tech’, and ’sport’.
Next, we get all files from each class and remove any characters that are not UTF-8 encoded. Then we randomly select 500 news articles as our testing data and use the rest as our training data.
from random import shuffle

x = corpus                                          # all documents collected above
shuffle(x)                                          # random permutation of the documents
testing_data = x[-500:]                             # last 500 documents form the test set
training_data = x[0:(len(x)-len(testing_data))]     # remaining documents form the training set
Now, we use the training data as our corpus and select the 3000 most frequent words as the vocabulary used to represent each news article. We can then transform the training data into a $1721\times 3000$ data matrix and the testing data into a $500\times 3000$ data matrix as follows:
from sklearn.feature_extraction.text import CountVectorizer

count_v1 = CountVectorizer(stop_words = 'english', vocabulary = vocabulary1);
counts_train = count_v1.fit_transform(corpus_adj_3);
count_v2 = CountVectorizer(vocabulary = vocabulary1);
counts_test = count_v2.fit_transform(testing_data_2);
2.4 Basic Experiments
Now that we have our data matrices, we can load them into RStudio and train a classifier.
setwd("/Users/shiyuanli/Desktop")
train_data = read.csv("train.csv",header = FALSE)
test_data = read.csv("test.csv",header = FALSE)
Here, the training data is our $1721\times 3000$ data matrix, where each row is one news article, and the test data is our $500\times 3000$ data matrix.
Linear SVM
We first perform 10-fold cross-validation to choose the best slack parameter, and then make predictions for the test data using the best model.
tune.out = tune.svm(train_class_2~., data = train, kernel = 'linear',
cost = c(0.01,0.1,0.5,1,10,100), tunecontrol = tune.control(cross = 10))
tune.out$best.model
Next, we construct an SVM classifier using the linear kernel and the slack parameter 0.1, and get the confusion matrix of the result.
library(kernlab)   # provides ksvm()
ptm <- proc.time()
classifier = ksvm(train_class_2 ~ ., data = train, kernel = "vanilladot", C = 0.1)
prediction = predict(classifier, test)
proc.time() - ptm
con_mat = table(prediction, test_class_2)
con_mat
               test_class_2
prediction      business entertainment politics sport tech
  business           112             2        3     0    1
  entertainment        1            71        2     0    1
  politics             4             1       87     0    0
  sport                1             0        2   118    1
  tech                 1             1        1     0   90
Naive Bayes Classifier
library(e1071)     # provides naiveBayes()
ptm <- proc.time()
classifier_kde = naiveBayes(train_class_2 ~ ., data = train, usekernel = TRUE)
prediction_kde = predict(classifier_kde, test_dat)
proc.time() - ptm
con_mat_kde = table(prediction_kde, test_class_2)
               test_class_2
prediction_kde  business entertainment politics sport tech
  business            79             0        3     0    2
  entertainment        1            34        0     1    2
  politics             3             0       68     0    4
  sport               35            40       24   117   27
  tech                 1             1        0     0   58
2.5 K Nearest Neighbor
library(class)     # provides knn()
knn_pred = knn(train = train_data, test = test_dat, cl = train_class_2, k = 6)
knn_con = table(true = test_class_2, model = knn_pred)
               model
true            business entertainment politics sport tech
  business            71             1        1    45    1
  entertainment        2            42        0    31    0
  politics             6             4       64    20    1
  sport                0             0        0   118    0
  tech                 4             7        2    35   45
Now, we get the accuracy of each classification method.
            Linear SVM   Naive Bayes   KNN
Accuracy       95.6          71.2       72
2.6 Using PCA to reduce dimensions
Since we spend much time constructing a classifier and making predictions, we want to try dimension reduction methods to see how these approaches help us save time and how they affect the prediction error. In order to select the features, we need to find how many eigenvalues (principal components) are needed to explain most of the total variance:
cov_mat = cov(train_data)
eig = eigen(cov_mat)
vectors = eig$vectors
total_eigen = eig$values
for (i in 1:3000){
  if (sum(total_eigen[1:i])/sum(total_eigen) > 0.96){
    print(i)
    break
  }
} | textbooks/stats/Computing_and_Modeling/RTG%3A_Classification_Methods/4%3A_Numerical_Experiments_and_Real_Data_Analysis/Shiyuan_Li.txt |
• ID: The unique code name for each review
• Sentiment: A binary label taking the values 1 and 0. People can rate a movie from 1 to 10; when the IMDB rating is less than 5, the sentiment score is 0, and otherwise it is 1. Overall, there are 12,500 zeros and 12,500 ones among these 25,000 IMDB movie reviews.
• Review: A paragraph of raw text written by a reviewer.
Using this dataset, I want to do sentiment analysis with SVM in order to predict the sentiment score of a review. The general procedure is to clean and process the raw text, apply the Bag of Words method to create features, and form a term-document matrix on which the SVM model is fit.
\(\Rightarrow \textbf{STEP 1: Text cleaning}\)
As you can see in picture 1, the text contains plenty of HTML symbols, stop words, unexpected punctuation, and extra spaces that may interfere with machine learning. In order to transform each review into a relatively clean version, we removed all of them with the following steps:
1. Removing HTML symbols
2. Removing punctuation, number and extra space
3. Converting each word to lower case
4. Splitting the review into individual words
5. Removing the Stop words
6. Putting the remaining words back together as the clean review
We apply this procedure to all 25,000 movie reviews. Picture 2 is a comparison between the raw text and the clean text for one review: the first paragraph is the original review and the second paragraph is the clean review.
To clarify, words that appear frequently but carry little meaning, such as "you", "the", and "I", are called "stop words". Their occurrence in the reviews does not affect the sentiment score. For the convenience of feature extraction in the Bag of Words model, we want to remove as many of them as possible. The Python Natural Language Toolkit (NLTK) provides a list of stop words that we can use directly. Picture 3 shows the stop-words list.
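The cleaning function itself appears only as a picture in the original write-up; a minimal sketch of what such a function might look like is given below. It assumes the BeautifulSoup and NLTK packages are available and that the NLTK stop-words corpus has been downloaded; the function name `clean_review` is an illustrative choice, not the author's.

```
import re
from bs4 import BeautifulSoup
from nltk.corpus import stopwords

def clean_review(raw_review):
    text = BeautifulSoup(raw_review, "html.parser").get_text()  # 1. remove HTML symbols
    text = re.sub("[^a-zA-Z]", " ", text)                       # 2. remove punctuation, numbers, extra symbols
    words = text.lower().split()                                # 3-4. convert to lower case and split into words
    stops = set(stopwords.words("english"))                     # 5. drop the stop words
    words = [w for w in words if w not in stops]
    return " ".join(words)                                      # 6. put the remaining words back together
```

Applying a function like this to each of the 25,000 reviews would produce clean reviews of the kind compared in picture 2.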
\(\Rightarrow \textbf{STEP 2: Bag of Words Model}\)
For machine learning, we need to convert the information in the words into a numeric form and create features from it. The Bag of Words model suits this requirement well: it learns the vocabulary from all of the reviews and counts how many times each word appears. For example, here are two simple sentences:
Sentence 1: “stuff, go, moment, start, listening”
Sentence 2: “stuff, odd, start, start, cool”
We can see the words contained in each sentence and get the vocabulary as follow:
{start, stuff, go, odd, moment, listening, cool}
For each sentence, we get a list of numbers giving the count of each word in the vocabulary, which forms the feature vector for that sentence:
Sentence 1: {1,1,1,0,1,1,0}
Sentence 2: {2,1,0,1,0,0,1}
Similarly, we can form a feature vector for each review. However, the full dataset is too large to process, so to limit the size of the feature vectors we choose the first 5000 reviews as our training dataset and extract the 1000 most frequent words as the features. The final result is a matrix of the frequencies of the 1000 features for the 5000 reviews. Picture 4 shows part of this term-document matrix, which has 5000 rows and 1001 columns: the first column is the true sentiment label for each review and the remaining 1000 columns are the features we just extracted through the Bag of Words model.
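A minimal sketch of this feature-extraction step is shown below. It assumes the cleaned reviews are stored in a list `clean_reviews` and the labels in `sentiment` (both names are assumptions), and it uses scikit-learn's `CountVectorizer`, which is one common way to implement a bag-of-words model; the original code is only shown as a picture.

```
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical inputs: clean_reviews (list of cleaned review strings), sentiment (list of 0/1 labels).
train_reviews = clean_reviews[:5000]
train_labels  = np.array(sentiment[:5000])

vectorizer = CountVectorizer(max_features=1000)        # keep the 1000 most frequent words
term_doc   = vectorizer.fit_transform(train_reviews)   # 5000 x 1000 term-document matrix (sparse)
features   = term_doc.toarray()
```

Placing `train_labels` as a first column next to `features` would give the 5000 x 1001 matrix described above.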
\(\Rightarrow \textbf{STEP 3: Fitting a SVM Model}\)
After the data processing, we fit an SVM model to the term-document matrix and check how good the model is via its accuracy score. Picture 6 shows the Python code used to achieve this. The parameter "C" here is a non-negative cost parameter that controls overfitting: with a very large C, essentially none of the training samples are allowed to be misclassified, while with a smaller, finite C we trade off some misclassified samples in order to find a good margin separating the classes. In other words, a large C gives a narrow margin that classifies most of the training samples correctly, whereas a small C gives a wider margin with a higher misclassification error on the training set but possibly better robustness.
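Since the fitting code itself appears only as a picture, here is a minimal sketch of what it might look like with scikit-learn, assuming the `features` and `train_labels` objects from the previous sketch; the choice of a linear kernel and the particular train/test split are illustrative assumptions.

```
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hold out part of the labelled data to measure accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    features, train_labels, test_size=0.2, random_state=0)

model = SVC(kernel="linear", C=1.0)   # C is the cost parameter discussed above
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
print("Accuracy:", accuracy)
```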
\(\Rightarrow \textbf{STEP 4: Comparing the result under different feature and cost}\)
After we extract different numbers of features by changing the max_features argument of the bag-of-words function, the accuracy score changes accordingly. Picture 7 is a table showing all the results we got from the different SVM models.
Reference
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts (2011). "Learning Word Vectors for Sentiment Analysis." The 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011).
Bag of Words Meets Bags of Popcorn (2015). Kaggle. Retrieved from: https://www.kaggle.com/c/word2vec-nlp-tutorial | textbooks/stats/Computing_and_Modeling/RTG%3A_Classification_Methods/4%3A_Numerical_Experiments_and_Real_Data_Analysis/Zimeng_Wang.txt
What is Monte Carlo?
In this section, we will discuss some aspects of the Monte Carlo method our team used to simulate high dimensional data. The Monte Carlo methods are basically a class of computational algorithms that rely on repeated random sampling to obtain certain numerical results, and can be used to solve problems that have a probabilistic interpretation. This method of simulation is useful for our project because it enables us to sample high-dimensional vectors from a known distribution--the standard normal distribution--so that we can compare our simulated results with our theory. Although using real high-dimensional data is also an option, we more often than not do not know the true distribution of these data points, so what we observe from real data might not always align nicely with theory. However, with simulated data, we can always test to see if what we're expecting is correct if we fix a sample size, dimension, and distribution beforehand.
How does it work?
Monte Carlo simulation works by selecting a random value for each uncertain quantity and then building a model based on those values. This process is repeated many times with different random values, so in the end the output is a distribution of outcomes.
Below we have two common examples, the CLT and the LLN, that illustrate this Monte Carlo simulation method.
CLT
The Central Limit Theorem (CLT) describes how the sample average of a large sample behaves around the population mean. When a random sample of size $n$ is taken from any distribution with mean $\mu$ and variance $\sigma^2$, the sample mean has a distribution that is approximately Normal with mean $\mu$ and variance $\sigma^2/n$. It does not matter what the shape of the underlying distribution is; all that is necessary is a finite variance and a large enough sample size $n$ (and, for the simulation, many repeated samples of size $n$ from the population). Then the average or the sum will be approximately Normally distributed. In particular, the sum will be approximately Normal with mean $n\mu$ and variance $n\sigma^2$.
The CLT can be demonstrated through simulation. Below, the CLT is demonstrated using the Poisson distribution with sample size 1000.
Another example below is Exponential distribution with sample size 10,000.
As these example plots with varying sample sizes and distributions show, the sample mean still has approximately a Normal distribution.
The R code is below, and one can adjust the parameters to test the theorem.
# Central Limit Theorem Simulation
# n: sample size of each sample
# dist: underlying distribution where the sample is drawn
simulation <- function(n, dist=NULL, param1=NULL, param2=NULL) {
r <- 10000 # Number of replications/samples - (DO NOT ADJUST)
# This produces a matrix of observations with
# n columns and r rows. Each row is one sample:
my.samples <- switch(dist,
"E" = matrix(rexp(n*r,param1),r),
"N" = matrix(rnorm(n*r,param1,param2),r),
"U" = matrix(runif(n*r,param1,param2),r),
"P" = matrix(rpois(n*r,param1),r),
"C" = matrix(rcauchy(n*r,param1,param2),r),
"B" = matrix(rbinom(n*r,param1,param2),r),
"G" = matrix(rgamma(n*r,param1,param2),r),
"X" = matrix(rchisq(n*r,param1),r),
"T" = matrix(rt(n*r,param1),r))
all.sample.sums <- apply(my.samples,1,sum)
all.sample.means <- apply(my.samples,1,mean)
all.sample.vars <- apply(my.samples,1,var)
par(mfrow=c(2,2))
hist(my.samples[1,], col="gray", main="Distribution of One Sample", xlab="")
hist(all.sample.sums, col="gray", main="Sampling Distribution of the Sum", xlab="")
hist(all.sample.means,col="gray", main="Sampling Distribution of the Mean", xlab="")
hist(all.sample.vars,col="gray", main="Sampling Distribution of the Variance", xlab="")
}
simulation(n=1000, dist="P", param1=1)
simulation(n=10000, dist="E", param1=1)
LLN
The Law of Large Numbers (LLN) is a way to explain how the average of a large sample of independently and identically distributed (iid) random variables will be close to their mean.
An example of a simulation is below:
The code is as follows:
set.seed(1212)
n = 50000
x = sample(0:1, n, repl = TRUE)      # n fair coin flips (0 or 1)
s = cumsum(x)                        # running number of 1's
r = s/(1:n)                          # running proportion of 1's
plot(r, ylim = c(0.4, 0.6), type = "l")
lines(c(0,n), c(0.5, 0.5))           # reference line at the true mean 0.5
round(cbind(x, s, r), 5)[1:10, ]     # first 10 flips with cumulative sums and proportions
r[n]                                 # proportion after all n flips
Strong
The strong law of large numbers states that the sample average converges almost surely to the expected value.
$\bar X$ $\xrightarrow{a.s.} \mu$ as $n \rightarrow \infty$
Weak
The weak law of large numbers (also called Khintchine's law) states that the sample average converges in probability towards the expected value.
$\bar X$ $\xrightarrow{P} \mu$ as $n \rightarrow \infty$ | textbooks/stats/Computing_and_Modeling/RTG%3A_Simulating_High_Dimensional_Data/The_Monte_Carlo_Simulation_Method.txt |
What is a Monte Carlo Simulation?
A Monte Carlo technique describes any technique that uses random numbers and probability to solve a problem, while a simulation is a numerical technique for conducting experiments on the computer. Putting the two terms together, Monte Carlo Simulation then describes a class of computational algorithms that rely on repeated random sampling to obtain certain numerical results and can be used to solve problems that have a probabilistic interpretation. Monte Carlo Simulation allows us to represent uncertainties explicitly and quantitatively, which means that simulation gives us insight into how likely our results are. Note that with any simulation, the results are only as good as the inputs you give it: a poor input or model will lead to meaningless outputs.
Common tasks of the Monte Carlo Simulation include:
• Approximation of distribution of test statistics, estimators, etc.
• Approximation of quantities that allow for a probabilistic interpretation (in a certain sense). (E.g. Monte Carlo Integration, estimation of $\pi$, etc)
For more examples of Monte Carlo in various fields, Wikipedia provides some good examples.
How does it work?
A typical Monte Carlo Simulation involves the following steps:
1. Define your inputs.
2. Model the system by using an appropriate probability density function.
3. Generate inputs randomly from the probability distribution.
4. Perform a deterministic computation on the inputs.
5. Repeat steps 3 and 4 as many times as desired.
Example 1 - Estimation of $\pi$
To estimate $\pi$, we can imagine a circle enclosed by a square. We know that the area of the circle is $A_{circle} = \pi r^2$ and the area of the square is $A_{square} = 4r^2$. Now, if we take a bunch of objects, say grains of sand, and scatter them onto the circle and square, the probability of a grain landing inside the circle is $P(\text{grain lands in circle}) = \frac{\pi r^2}{4r^2} = \frac{\pi}{4}$. As a result, calculating $\pi$ is a matter of calculating $4\,P(\text{grain lands in circle})$. With our mathematical model defined, we can begin the Monte Carlo simulation. First, we identify a probability distribution that models this phenomenon; the uniform distribution works well here because every location in the square is equally likely. At the beginning of the simulation, initialize two variables: one to count the number of grains we drop onto the square and a second to count the number of grains that land inside the circle. From the uniform distribution, we sample the location of each grain in the square, i.e. its $(x, y)$ coordinates. Next, we check whether $(x, y)$ falls inside the circle using the condition $x^2 + y^2 \leq r^2$. To estimate the probability of a grain landing inside the circle, we take the ratio of the number of grains inside the circle to the number of grains we dropped onto the square.
#Simple R function to estimate pi. n is number of times to run the simulation and r is the radius of the circle.
estimatePi = function(n, r)
{
x = runif(n, -r, r)
y = runif(n, -r, r)
sum(x^2 + y^2 <= r^2)/n * 4
}
A great YouTube video explaining this example can be viewed here.
Example 2 - Approximating Distribution of Sample Mean
As a less rigorous application of the Monte Carlo Simulation in terms of statistics, we can try to approximate the distribution of the sample mean.
The sample mean is defined as $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_{i}$. If we sample our data from $N(0,1)$, then the distribution of the sample mean should be $\bar{X} \sim N(0, \frac{1}{n})$.
We can run a simulation in R to check this result.
#Number of times to repeat simulation
m = c(50,100,500,1000)
#Sample size
n = 100
generateSampleMeans = function(m) {
  replicate(m, {
    data = rnorm(n, 0, 1)
    mean(data)          # sample mean of one sample of size n
  })
}
}
#Visualize histogram
par(mfrow=c(2,2))
sapply(m, function(i)
  hist(generateSampleMeans(i), main = paste0("Histogram for m = ", i), xlab = "Sample Mean"))
From the plot on the left, we can see that as the number of iterations of the simulation increases, we get a histogram that looks more and more like the expected distribution of the sample mean shown on the right.
Why does it work?
Convergence of the Monte Carlo Method means that you will get an approximately good estimation of your estimator. This convergence is implied by the law of large numbers. Specifically, it meets the requirement for the strong law of large numbers which in turn implies the weak law of large numbers. The intuition for the law of large numbers is that the Monte Carlo method requires repeated sampling and by law of large numbers, the average of the outcome you get will converge to the expected value. In the estimation of $\pi$, we would expect that as we increase the amount of sand we drop onto the square, the closer we are to the value of $\pi$.
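As a quick illustration, one can call the `estimatePi` function defined above with increasing numbers of samples and watch the estimates settle near $\pi$; the particular sample sizes and seed below are arbitrary choices.

```
set.seed(123)
for (n in c(100, 10000, 1000000)) {
  cat("n =", n, " estimate =", estimatePi(n, r = 1), "\n")  # estimate improves as n grows
}
```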
Weak Law of Large Numbers:
Let $X_{1}, X_{2}, \ldots, X_{n}$ be an i.i.d. sequence of random variables for which $\mathbf{E}|X_{1}| < \infty$. Then $\frac{1}{n} (X_{1} + \cdots + X_{n}) \overset{P}{\rightarrow} \mathbf{E} [X_{1}]$ as $n \rightarrow \infty$.
Strong Law of Large Numbers:
Suppose that the first moment $\mathbf{E}[|X|]$ is finite. Then $\bar{X}_{n}$ converges almost surely to $\mathbf{E}[X]$; that is, $P\big(\lim_{n \rightarrow \infty} \bar{X}_{n} = \mathbf{E}[X]\big) = 1$.
The law of large numbers guarantees convergence of the Monte Carlo method; to identify the rate of convergence, we need the central limit theorem. For a review of what the central limit theorem says, click here.
The central limit theorem tells us that the distribution of the errors will converge to a normal distribution and with this notion in mind, we can figure out the number of times we need to resample to achieve a certain accuracy. By assuming normal distribution of the errors, we have information to calculate the confidence interval and see what sample size is needed for the desired accuracy.
If we let $\alpha_{n}$ be the numerical outcome we get and $\alpha$ the true value we are trying to estimate, then by the law of large numbers and the central limit theorem, we can say that $\alpha_{n} \overset{D}{\approx} \alpha + \frac{\sigma}{\sqrt{n}} N(0,1)$.
$\overset{D}{\approx}$ denotes convergence in distribution. From this, we see that Monte Carlo converges very slowly because to achieve a tenfold accuracy, we would need to increase our sampling by a hundredfold. | textbooks/stats/Computing_and_Modeling/RTG%3A_Simulating_High_Dimensional_Data/The_Monte_Carlo_Simulation_V2.txt |
Introduction
Section 2 of the research paper titled Geometric Representation of High Dimension, Low Sample Size Data (HDLSS) by Peter Hall discusses that, as the dimension of the data $d$ increases drastically, i.e. $d \rightarrow \infty$, we expect to see two characteristics that are vastly different from low dimensional data, namely:
1. All points are approximately equidistant from one another (i.e. their pair wise distances are nearly the same).
2. Most points lie on boundary of the data cloud.
To gain some insight into this phenomenon, we will simulate data of high dimensions from a well-known distribution, the standard normal distribution, and observe whether the data behave according to the expectations above. The simulation is performed using the Monte Carlo method, which is briefly explained in the previous section.
High Dimensional Data Simulation Process
Here, we will go over the basics of what one should think about when creating a simulation to visualize high dimensional data. The idea is that we want to create a random sample of high-dimensional vectors and observe how these vectors (points) are positioned in the vector space relative to one another.
To illustrate, we want to sample a number of observations, denoted $n$, from the standard normal distribution (for simplicity) in $d$ dimensions. Then, each observation is a vector of the form $X = [X_1, X_2, \dots, X_d]'$, where each component comes from a normal distribution with mean 0 and standard deviation 1. We will then study the distances between these points by calculating and comparing their norms. The following steps are a guideline on how to set this up in R:
1. Fix the number of dimensions, $d$, and sample size, $n$ however you want, but keep in mind that $d > n$ for HDLSS.
2. Sample $d$ (dimension) data points from a standard normal distribution. This will be our first observation vector $Z(d) = (Z_1 , . . . , Z_d )$ .
3. Repeat (Step 2) $n$ times to generate $n$ samples. We now have our data set.
4. For each vector in our data set, compute the square of the Euclidean norm and divide by $d$.
5. Plot a histogram of Euclidean norm to examine, mean, standard deviation, and distribution.
6. To examine the pair-wise distances between each observation, compute pair-wise differences of the samples after (Step 3) and proceed as before.
Sample R Code
You can try the above in R using this sample code to get started:
hdlss_simulation = function(sample_size, dimension) {
  data = data.frame(replicate(sample_size, rnorm(dimension, 0, 1)))
  # Generates the data and stores it in a data frame. Each column corresponds to a single observation.
  squared_norm = apply(data, 2, function(sample) sum(sample^2)/dimension)
  # Calculate the squared norm of each observation and divide it by the dimension.
  hist(squared_norm)
}
As for the pair-wise distances between the points, we can use the nifty function dist() in R. Here we are using the Euclidean distance:
pairdist = as.matrix(dist(t(data), method = "euclidean"))
pairs = pairdist[upper.tri(pairdist)]
# Note 1: 'pairdist' contains the distance between every pair of observation vectors in matrix form.
# We transpose 'data' because its observations are stored as columns, while dist() computes
# distances between rows.
# Note 2: Since the distance matrix is symmetric about the diagonal, upper.tri() serves to take only
# the values above the diagonal.
At this point, you can plot the histogram as before.
(Side note: To get a better visualization of how different sample sizes and dimensions affect or change the distribution of the norms, fix your histogram's x-axis. Another option is to try using a scatter plot with a fixed x-axis to see how the variance of the norm changes.)
What we expect to see
Let $Z$ denote our vector of standard normal random variable.
Formula for Euclidean Norm : $||X|| = \sqrt{X^TX}$
Single Vector
Find distribution of $\frac{Z^TZ}{d}$
$Z^TZ = \sum\limits_{i = 1}^d (Z^i)^2$
Since each $Z^i$ comes from the standard normal distribution, $(Z^i)^2$ is the square of a standard normal, which is a $\chi^{2}_{1}$ (chi-square) random variable with $\mu = 1$ and $\sigma^2 = 2$.
Given that we sum $d$ independent such chi-square random variables, $Z^TZ \sim \chi^{2}_{d}$, i.e. chi-square with $\mu = d$ and $\sigma^2 = 2d$.
Dividing $Z^TZ$ by $d$, we get that $\frac{Z^TZ}{d}$ is a scaled chi-square with $\mu = 1$ and $\sigma^2 = 2/d$.
Pair-wise Distances
Find distribution of $\frac{\sum_{d} (Z^i - Z^j)^2}{d}$, where $i \neq j$
$Z^i - Z^j \sim N(0, 2)$, because the difference of two independent standard normal random variables is normal with mean 0 and variance equal to the sum of the two variances.
Since we only know the distribution of the square of a standard normal random variable, not of the square of a $N(0,2)$ random variable, we divide $Z^i - Z^j$ by $\sqrt{2}$ to get $\frac{Z^i - Z^j}{\sqrt{2}} \sim N(0,1)$.
Then $(\frac{Z^i - Z^j}{\sqrt{2}})^2 \sim \chi^{2}_{1}$. Summing $d$ of these random variables gives $\mu = d$ and $\sigma^2 = 2d$. To recover $\sum\limits_{d}(Z^i - Z^j)^2$, we multiply $\sum\limits_{d}(\frac{Z^i - Z^j}{\sqrt{2}})^2$ by 2; as a result, $\mu = 2d$ and $\sigma^2 = 8d$.
Dividing $\sum\limits_{d}(Z^i - Z^j)^2$ by $d$, we get $\mu = 2$ and $\sigma^2 = 8/d$.
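As a quick check of these calculations, one can compare the empirical means and variances from a simulation with the theoretical values: mean 1 and variance $2/d$ for the scaled squared norms, and mean 2 and variance $8/d$ for the scaled squared pairwise distances. A minimal sketch is shown below; the sample size and dimension are arbitrary illustrative choices.

```
set.seed(1)
n <- 20; d <- 1000
data <- replicate(n, rnorm(d))                  # each column is one observation

norms <- apply(data, 2, function(z) sum(z^2)/d) # squared norms divided by d
c(mean(norms), var(norms), 2/d)                 # empirical mean and variance vs. theoretical variance 2/d

pairdist2 <- as.matrix(dist(t(data)))^2 / d     # squared pairwise distances divided by d
pairs <- pairdist2[upper.tri(pairdist2)]
c(mean(pairs), var(pairs), 8/d)                 # empirical mean and variance vs. theoretical variance 8/d
```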
Dynamic Histogram of the Norms
The following link is an R Shiny app demonstrating how the histograms should look. You can also select the sample size, the norm of interest, and the dimension to see how the distributions change whenever any one of these parameters is changed. Please click here to access the Shiny app.
Analysis
Our simulation results match our expectations. We see that as the dimension increases, the mean of the scaled squared norm is centered at 1 and its standard deviation decreases. In other words, the norms are approximately equal when we increase the dimension significantly for a fixed sample size. For the pairwise distances between the points, we note that the mean is centered at 2 and the standard deviation decreases as well. Likewise, this suggests that as the dimension increases for a fixed sample size, the points become approximately equidistant from one another. Hence, the points tend to lie on the border of the data cloud in high dimensions.
Contributors
• Ariel Sim, Kavi Tan, Wolfgang Polonik | textbooks/stats/Computing_and_Modeling/RTG%3A_Simulating_High_Dimensional_Data/Understanding_the_Geometry_of_High_Dimensional_Data_through_Simulation.txt |
1 Analysis of Factor Level Means
Suppose we reject $H_{0} : \mu_{1} = \cdots = \mu_{r}$. Then, we want to investigate the nature of the differences among the factor level means by studying the following:
• One factor level mean: $\mu_{i}$
• Difference between two factor level means: $D = \mu_{i} - \mu_{j}$
• Contrast of factor level means: $L = \sum_{i = 1}^{r}c_{i}\mu_{i}$ where $\sum_{i = 1}^{r}c_{i} = 0$
When more than one contrasts are involved, we also need to consider procedures that account for multiple comparisons, including:
• Bonferroni's procedure
• Tukey's procedure
• Scheffe's procedure
1.1 Inference about one factor level mean
The $i$th factor level sample mean $\bar{Y}_{i\cdot}$ is a point estimator of $\mu_{i}$. Here are some properties of this estimator:
• $\bar{Y}_{i\cdot}$ is an unbiased estimator of $E(\bar{Y}_{i\cdot}) = \mu_{i}$
• $Var(\bar{Y}_{i\cdot}) = \frac{\sigma^2}{n_{i}}$
$MSE = SSE/(n_{T} - r)$ is a point estimator of $\sigma^2$:
• an unbiased estimator: $E(MSE) = \sigma^2$
• $SSE/\sigma^2 \sim \chi^2_{(n_{T} - r)}$ and $SSE$ is independent of $\{\bar{Y}_{i\cdot} : i = 1, ..., r\}$
Thus,
• the estimated standard error of $\bar{Y}_{i\cdot}$ is
$s(\bar{Y}_{i\cdot}) = \sqrt{\frac{MSE}{n_{i}}}$
• $\frac{\bar{Y}_{i\cdot} - \mu_{i}}{\sqrt{\frac{MSE}{n_{i}}}} \sim t_{(n_{T} - r)}$, i.e. a $t$-distribution with $n_{T} - r$ degrees of freedom
• A 100(1-$\alpha$%) two sided confidence interval of $\mu_{i}$ is given by
$\bar{Y}_{i\cdot} \pm s(\bar{Y}_{i\cdot})t(1 - \frac{\alpha}{2}; n_{T} - r)$
where $t(1 - \frac{\alpha}{2}; n_{T} - r)$ denotes the $1 - \alpha/2$ quantile of the t-distribution with $n_{T} - r$ degrees of freedom.
• Test $H_{0} : \mu_{i} = c$ against $H_{a} : \mu_{i} \neq c$
• T-statistics:
$T^{*} = \frac{\bar{Y}_{i\cdot} - c}{\sqrt{\frac{MSE}{n_{i}}}}$
• Under $H_{0}$: $T^{*} \sim t_{(n_{T} - r)}$
• At significance level $\alpha$, reject $H_{0}$ if $|T^{*}| > t(1 - \frac{\alpha}{2}; n_{T} - r)$
• Confidence Interval Approach: If c does not belong to the 100(1-$\alpha$%) (two-sided) confidence interval of $\mu_{i}$, then reject $H_{0}$ at level $\alpha$
1.2 Example
• In the "package design" example, the estimate of $\mu_{1}$ is $\bar{Y}_{1\cdot} =$ 14.6
• MSE = 10.55 and $n_{1}$ = 5
• Thus $s(\bar{Y}_{1\cdot}) = \sqrt{10.55/5} = 1.45258$
• The degrees of freedom of MSE is 19 - 4 = 15 (since $n_{T}$ = 19 and r = 4)
• The 95% confidence interval of $\mu_{1}$ is
14.6 $\pm$ 1.45258 x t(0.975; 15) = 14.6 $\pm$ 1.45258 x 2.131 = 14.6 $\pm$ 3.09545 = (11.50455, 17.69545)
Here, from Table B.2, we get that t(0.975, 15) = 2.131 (or use the R command: $\textit{qt(0.975, 15)}$)
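The same interval can be reproduced directly in R; the sketch below simply plugs in the summary quantities quoted above and assumes nothing beyond those numbers.

```
ybar1 <- 14.6; MSE <- 10.55; n1 <- 5; df <- 15
se    <- sqrt(MSE / n1)                 # estimated standard error of the sample mean
ybar1 + c(-1, 1) * qt(0.975, df) * se   # 95% confidence interval for mu_1
```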
1.3 Interpretation of confidence intervals
• A wrong statement: P(11.51 $\leq \mu_{1} \leq$ 17.69) = 0.95. Why? Because "11.51 $\leq \mu_{1} \leq$ 17.69" is either true or false as a fact.
• Interpretation of C.I.: if exactly the same study on package designs were repeated many times, and at each time a 95% C.I. for $\mu_{1}$ were constructed as above, then about 95% of the time, the C.I. would contain the true value $\mu_{1}$.
• A correct statement based on the observed data, we are 95% confident that $\mu_{1}$ is in between 11.51 and 17.69
• The difference between a random variable and its realizations: $\bar{Y}_{1\cdot}$ is a random variable; 14.6 is the realization of $\bar{Y}_{1\cdot}$ in the current sample
1.4 Difference between two factor level means
Let $D = \mu_{i} - \mu_{j}$ for some $i \neq j$
• $\hat{D} = \bar{Y}_{i\cdot} - \bar{Y}_{j\cdot}$ is an unbiased estimator of D
• $Var(\hat{D}) = Var(\bar{Y}_{i\cdot}) + Var(\bar{Y}_{j\cdot}) = \sigma^{2}(\frac{1}{n_{i}} + \frac{1}{n_{j}})$ (since $\bar{Y}_{i\cdot}$ and $\bar{Y}_{j\cdot}$ are independent)
• estimated standard error of $\hat{D}$: $s(\hat{D}) = \sqrt{MSE(1/n_{i} + 1/n_{j})}$
• for every $\mu_{i}$ and $\mu_{j}$, the ratio $\frac{\hat{D} - D}{s(\hat{D})}$ has a $t_{(n_{T} - r)}$ distribution
1.5 Inference on the difference between two factor level means
• 100(1 - $\alpha$)% (two-sided) confidence interval of D
$\hat{D} \pm s(\hat{D})t(1 - \frac{\alpha}{2}; n_{T} - r)$
Test $H_{0}: D = 0$ against $H_{a}: D \neq 0$. At the significance level $\alpha$, check whether
$\hat{D} - s(\hat{D})t(1 - \frac{\alpha}{2}; n_{T} - r) \leq 0 \leq \hat{D} + s(\hat{D})t(1 - \frac{\alpha}{2}; n_{T} - r)$
If not, reject $H_{0}$ at level $\alpha$ and conclude that the two means are different.
1.6 Example
In a study of the effectiveness of different rust inhibitors, four brands (1, 2, 3, 4) were tested. Altogether, 40 experimental units were randomly assigned to the four brands, with 10 units assigned to each brand. The resistance to rust was evaluated in a coded form after exposing the experimental units to severe conditions.
• This is a balanced complete randomized design (CRD)
Summary statistics and ANOVA table
$n_{1} = n_{2} = n_{3} = n_{4} =10$ and $\bar{Y}_{1\cdot} = 43.14, \bar{Y}_{2\cdot} = 89.44, \bar{Y}_{3\cdot} = 67.95, \bar{Y}_{4\cdot} = 40.47$
Source of Variation Sum of Squares (SS) Degrees of Freedom (df) Mean of Squares (MS)
Between Treatments SSTR = 15953.47 r - 1 = 3 MST = 5317.82
Within Treatments SSE = 221.03 $n_{T} - r$ = 36 MSE = 6.140
Total SSTO = 16174.50 $n_{T} - 1$ = 39
95% confidence interval for $D = \mu_{1} - \mu_{2}$
We compute
$\hat{D} = \bar{Y}_{1\cdot} - \bar{Y}_{2\cdot} = 43.14 - 89.44 = -46.3$
$s(\hat{D}) = \sqrt{MSE(1/n_{1} + 1/n_{2})} = \sqrt{6.140(1/10 + 1/10)} = 1.108152$
Also, since $\alpha$ = 1 - 0.95 = 0.05, we have $t(1-\alpha/2; n_{T} - r) = t(0.975; 36) = 2.028094$ (use R command: $\textit{qt(0.975, 36)}$; or use Table B.2 and approximate the value by averaging the value of the 0.975-th quantile of t - distribution with degrees of freedom v = 30 and 40).
Therefore, the 95% confidence interval for $D = \mu_{1} - \mu_{2}$ is
$-46.3 \pm 1.108152 x 2.028094 = -46.3 \pm 2.247436 = (-48.54744, -44.05256)$
2 Contrasts
A contrast is a linear combination of the factor level means: $L = \sum_{i = 1}^{r}c_{i}\mu_{i}$ where $c_{i}$'s are prespecified constants with the constraint: $\sum_{i=1}^{r}c_{i} = 0$.
• Examples:
- Pairwise comparisons: $\mu_{i} - \mu_{j}$
- $\frac{\mu_{1} + \mu_{2}}{2} - \mu_{3}$
• Unbiased estimator:
$\hat{L} = \sum_{i = 1}^{r}c_{i}\bar{Y}_{i\cdot}$
• Estimated standard error:
$s(\hat{L}) = \sqrt{MSE\sum_{i = 1}^{r}\frac{c^{2}_{i}}{n_{i}}}$
since $Var(\hat{L}) = \sum_{i = 1}^{r}\sigma^{2}c^{2}_{i}/n_{i}$.
2.1 Example of a contrast for the package design problem
Suppose, designs one and two are 3-color designs, while designs three and four are 5-color designs. The goal is to compare 3-color designs to 5-color designs in terms of sales.
• Consider the contrast: $L = \frac{\mu_{1} + \mu_{2}}{2} - \frac{\mu_{3} + \mu_{4}}{2}$
• An unbiased point estimation of L is
$\hat{L} = \frac{\bar{Y}_{1\cdot} + \bar{Y}_{2\cdot}}{2} - \frac{\bar{Y}_{3\cdot} + \bar{Y}_{4\cdot}}{2}$
$= \frac{14.6 + 13.4}{2} - \frac{19.5 + 27.2}{2} = -9.35$
• $c_{1} = c_{2} = 0.5, c_{3} = c_{4} = -0.5$ (note that, they add up to zero), so
$s(\hat{L}) = \sqrt{MSE\sum_{i =1}^{r}\frac{c^{2}_{i}}{n_{i}}}$
$= \sqrt{10.55 \times (\frac{(0.5)^{2}}{5} + \frac{(0.5)^{2}}{5} + \frac{(-0.5)^{2}}{4} + \frac{(-0.5)^{2}}{5})}$
$= \sqrt{10.55 \times 0.2125} = 1.5$
• A 90% C.I. for L is
$\hat{L} \pm t(0.95; 15) x s(\hat{L})$
$= -9.35 \pm 1.5 x 1.753 = [-11.98, -6.72]$
• Since the 90% C.I. for L does not contain zero, we are 90% confident that 5-color designs work better than 3-color designs.
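For completeness, the same contrast interval can be computed numerically in R; the sketch below just plugs in the group means, sample sizes, and MSE quoted above.

```
ybar <- c(14.6, 13.4, 19.5, 27.2)   # factor level sample means
n    <- c(5, 5, 4, 5)               # group sample sizes
cc   <- c(0.5, 0.5, -0.5, -0.5)     # contrast coefficients (sum to zero)
MSE  <- 10.55; df <- 15

Lhat <- sum(cc * ybar)                      # estimated contrast (-9.35)
seL  <- sqrt(MSE * sum(cc^2 / n))           # its estimated standard error
Lhat + c(-1, 1) * qt(0.95, df) * seL        # 90% confidence interval for L
```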
Contributors
• Joy Wei, Debashis Paul | textbooks/stats/Computing_and_Modeling/Supplemental_Modules_(Computing_and_Modeling)/Experimental_Design/Analysis_of_Factor_Level_Means_and_Contrasts.txt |
1.1 Study Design: basic concepts
Usually the goal of a study is to find out the relationships between certain explanatory factors and the response variables. The design of a study thus consists of making decisions on the following:
• The set of explanatory factors.
• The set of response variables.
• The set of treatments.
• The set of experimental units.
• The method of randomization and blocking.
• Sample size and number of replications.
• The outcome measurements on the experimental units - the response variables.
1.2 Factors
Factors are explanatory variables to be studied in an investigation.
Examples:
1. In a study of the effects of colors and prices on sales of cars, the factors being studied are color (qualitative variable) and price (quantitative variable).
2. In an investigation of the effects of education on income, the factor being studied is education level (qualitative but ordinal).
Factor levels
Factor levels are the "values" of that factor in an experiment. For example, in the study involving color of cars, the factor car color could have four levels: red, black, blue and grey. In a design involving vaccination, the treatment could have two levels: vaccine and placebo.
Types of factors
• Experimental factors: levels of the factor are assigned at random to the experimental units.
• Observational factors: levels of the factor are characteristic of the experimental units and is not under the control of the investigators.
• There could be observational factors in an experimental study.
Example: in the "new drug study" (refer to Handout 1), if we are also interested in the effects of age and gender on the recovery rate, then these are observational factors, while the treatment (new drug or old drug) is an experimental factor.
1.3 Treatments
• In a single factor study, a treatment corresponds to a factor level; thus the number of treatments equals the number of different factor levels of that factor.
• In a multi-factor study, a treatment corresponds to a combination of factor levels across different factors; thus the number of all possible treatments is the product of the numbers of factor levels of the different factors.
Examples:
• In the study of effects of education on income, each education level is a treatment (high school, college, advanced degree, etc).
• In the study of effects of race and gender on income, each combination of race and gender is a treatment (Asian female; Hispanic male, etc).
Exercise: How many different treatments are there for the above examples?
Choice of treatments
Choice of treatments depends on the choice of: (i) the factors (which are the important factors);
(ii) levels of each factor.
• For qualitative factors the levels are usually indicated by the nature of the factor.
Example: gender has two levels: female and male
• For quantitative factors the choice of levels reflects the type of trend expected by the investigator.
Example: linear trend implies two levels; quadratic trend implies three levels. Usually 3 to 4 equally spaced levels are sufficient.
• The range of the levels is also crucial. Usually prior knowledge is required for an effective choice of factors and treatments (refer to the "quick bread volume" example on page 650).
1.4 Experimental units
• An experimental unit is the smallest unit of experimental material to which a treatment can be assigned.
Example: In a study of two retirement systems involving the 10 UC schools, we could ask if the basic unit should be an individual employee, a department, or a University.
Answer: The basic unit should be an entire University for practical feasibility.
• Representativeness: the experimental units should be representative of the population about which a conclusion is going to be drawn.
Example: A study conducted surveys among 5,000 US college students and found that about 20% of them had used marijuana at least once. If the goal of the study is drug usage among Americans aged 18 to 22, is this a good design?
• Choosing a representative set of experimental units which fits the purpose of your study is important.
1.5 Sample size and replicates
Loosely speaking, sample size is the number of experimental units in the study.
• Sample size is usually determined by the trade-off between statistical considerations such as power of tests, precision of estimations, and the availability of resources such as money, time, man power, technology etc.
• In general, the larger the sample size, the better it is for statistical inference; however, the costlier the study.
• An important consideration in an experimental design is how to assess power or precision as a function of the sample size (sample size planning / power calculation).
Replicates
For many designed studies, the sample size is an integer multiple of the total number of treatments. This integer is the number of times each treatment is repeated, and one complete repetition of all treatments (under similar experimental conditions) is called a complete replicate of the experiment.
• Example: In a study of baking temperature on the volume of quick bread prepared from a package mix, four oven temperatures: low, medium, high and very high were tested by randomly assigning each temperature to 5 package mixes (all of the same brand). Thus the sample size is 20(= 4 $\times$ 5), the number of treatments is 4 (4 levels of temperatures) and there are 5 complete replicates of the experiment.
Why replicates?
When a treatment is repeated under the same experimental conditions, any difference in the response from prior responses for the same treatment is due to random errors. Thus replication provides us some information about random errors. If the variation in random errors is relatively small compared to the total variation in the response, we would have evidence for treatment effect.
1.6 Randomization
• Randomization tends to average out between treatments whatever systematic effects may be present, apparent or hidden, so that the comparison between treatments measure only the pure treatment effect.
• Randomization is necessary not only for the assignment of treatments to experimental units, but also for other stages of the experiment, where systematic errors may be present.
Example: In a study of light effects on plant growth rate, two treatments are considered: brighter environment vs. darker environment. 100 plants are randomly assigned to each treatment (all genetically identical). However, there is only one growth chamber which can grow 20 plants at one time. Therefore the 200 plants need to be grown in 10 different time slots.
In addition to randomizing the treatments, it is important to randomize the time slots as well. This is because the conditions of the growth chamber (such as humidity and temperature) might change over time. Therefore, growing all plants with the brighter light treatment in the first 5 time slots and then growing all plants with the darker light treatment in the last 5 time slots is not a good design.
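A hedged sketch of how such a randomization could be carried out in R is shown below; the object names and the seed are illustrative. It randomly assigns the two light treatments to the 10 time slots (5 slots each) and the 200 plants at random to the slots.

```
set.seed(42)                               # for reproducibility (illustrative choice)

# 10 growth-chamber time slots, 5 for each light treatment, in random order
slot_treatment <- sample(rep(c("bright", "dark"), each = 5))

# 200 genetically identical plants assigned at random to the 10 slots (20 plants per slot)
plant_slot <- sample(rep(1:10, each = 20))

slot_treatment       # which treatment each time slot receives
table(plant_slot)    # 20 plants in each slot
```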
1.7 Blocking
In a blocked experiment, heterogeneous experimental units (with known sources of heterogeneity) are divided into homogeneous subgroups, called blocks, and separate randomized experiments are conducted within each block.
• Example: in a study of Vitamin C on cold prevention, 1000 children were recruited. Half of them were randomly chosen and were given Vitamin C in their diet and the other half got placebos. At the end of the study, the number of colds contracted by each child was recorded. (This is an example of a complete randomized design (CRD). )
• If we know (or have sufficient reason to believe) that gender may also influence the incidence of cold, then a more efficient way to conduct the study is through blocking on gender: 500 girls and 500 boys were recruited. Among the girls, 250 were randomly chosen and given Vitamin C and the other 250 were given placebo. Same is done for the 500 boys. (This is an example of a randomized block design (RCBD). )
• By blocking, one removes the source of variation due to potential confounding factors (here it is gender), and thus improves the efficiency of the inference of treatment effect (here it is Vitamin C)
• Randomization alone (as in CRD) does not ensure that the same number of girls and boys will receive each treatment. Thus, if there is a difference in cold incidence rate between genders, differences between treatment groups may be observed even if there is indeed no treatment effect.
1.8 Measurements of response variables
The issue of measurement bias arises due to unrecognizable differences in the evaluation process.
Example: Knowledge of a patient's treatment may influence the judgement of the doctor. This source of measurement bias can be reduced by concealing the treatment assignment from both the subject and the evaluator (double-blind).
Contributors
• Anirudh Kandada (UCD) | textbooks/stats/Computing_and_Modeling/Supplemental_Modules_(Computing_and_Modeling)/Experimental_Design/Components_of_an_experimental_study_design.txt |
An overview of experimental designs
1. Complete randomized design (CRD): treatments (combinations of the factor levels of the different factors) are randomly assigned to the experimental units.
Examples: one factor, two-factor, multi-factor studies (factorial designs).
2. Crossed factor design: in a multi-factor study, the factors are crossed if all the combinations of all the factors are included in the study.
Example: in a study involving two factors -- temperature (at three levels) and concentration of a solvent (at two levels) on the yield of a chemical process, all combinations of temperature and solvent concentrations are considered.
                        Temperature
Solvent conc.     low    medium    high
    low            x        x        x
    high           x        x        x
Table 1: Chemical yield study: Crossed factor design
3. Nested design: one factor is nested within another factor in a multi-factor study.
Example: in a study involving the effects of operators on a production process, three manufacturing plants and six operators are considered. However, the first two operators operate in plant 1, the next two in plant 2 and the last two in plant 3. Here, operators are said to be nested within manufacturing plants.
                 Operator
Plant     1    2    3    4    5    6
  1       x    x
  2                 x    x
  3                           x    x
Table 2: Production study: Nested design
4. Repeated measurement design: the same experimental unit receives all the treatment combinations. This helps to eliminate the effects of confounding factors associated with the experimental units.
Example: taste-testing experiment where each tester rates all three brands of the breakfast cereals being tested.
5. Randomized complete block design (RCBD): every treatment appears with every other treatment in the same block the same number of times and every block receives the full suite of treatments (possibly with replications), and the treatments are randomly assigned to the experimental units within each block. Used when the block sizes are integer multiples of the number of treatments.
6. Balanced incomplete block design (BIBD): every treatment appears with every other treatment in the same block the same number of times, but every block does not receive the full suite of treatments. Used when the block sizes are smaller than the number of treatments.
7. Fractional factorial design, response surface experiments, etc.
An overview of observational studies
1. Cross sectional studies : measurements are taken from one or more populations or subpopulations at a single time point or time interval; and exposure to a potential causal factor and the response are determined simultaneously.
Example: A study of incomes by gender in the decade 1981-1990, stratified by geographical locations.
2. Cohort study (prospective): one or more groups are formed in a nonrandom manner according to some hypothesized causal factors, and then these groups are observed over time with respect to the outcome of interest. (What is going to happen ?)
Example: A group of smokers and a group of non-smokers were followed since their early 30's, and their health indices were recorded over years until their late 60's.
3. Case control study (retrospective study): groups are defined on the basis of an observed outcome, and the differences among the groups at an earlier time point are identified as potential causal effects. (What has happened?)
Example: study of lung cancer (outcome) and smoking (explanatory factor) based on data collected on lung cancer patients.
A single factor study
A food company wanted to test four different package designs for a new breakfast cereal. 20 stores with approximately the same sales conditions (such as sales volume, price, etc.) were selected as experimental units. Five stores were randomly assigned to each of the 4 package designs.
• A balanced complete randomized design.
• A single, 4-level, qualitative factor: package design;
• A quantitative response variable: sales -- number of packets of cereal sold during the period of study;
• Goal: exploring relationship between package design and sales.
Data
The assignment of the different stores (indicated by letters A to T) to the package designs (D1 to D4) is given in the following table.
                 Store IDs
           S1    S2    S3    S4    S5
Design
   D1       A     B     C     D     E
   D2       F     G     H     I     J
   D3       K     L     M     N     O
   D4       P     Q     R     S     T
The observed data on sales are given in the following table. Store O was dropped from the study because of a fire; as a result, the design is not balanced anymore.
                 Store IDs
           S1    S2    S3    S4    S5
Design
   D1      11    17    16    14    15
   D2      12    10    15    19    11
   D3      23    20    18    17    Miss
   D4      27    33    22    26    28
ANOVA for single factor study
A simple statistical model for the data is as follows: $\large Y_{ij} = \mu_i + \varepsilon_{ij}, \qquad j=1,\ldots,n_i; ~~i=1,\ldots,r ;$ where:
• $r$ is the number of factor levels (treatments) and $n_i$ is the number of experimental units corresponding to the $i$-th factor level;
• $Y_{ij}$ is the measurement for the $j$-th experimental unit corresponding to the $i$-th factor level;
• $\mu_i$ is the mean of all the measurements corresponding to the $i$-th factor level (unknown);
• $\varepsilon_{ij}$'s are random errors (unobserved).
In this example, $r=4$, $n_1 = n_2 = n_4 =5$, $n_3 = 4$; $Y_{23} = 15$; $\mu_2 =$ average sale in the population if Design 2 is used, where the population consists of all stores with similar sales condition as those in this study.
Model Assumptions
The following assumptions are made about the previous model:
• $\varepsilon_{ij}$ are independently and identically distributed as $N(0,\sigma^2)$.
• $\mu_i$ 's are unknown fixed parameters (so called fixed effects), so that $\mathbb{E}(Y_{ij}) = \mu_i$ and Var $(Y_{ij}) = \sigma^2$ . The above assumption is thus equivalent to assuming that $Y_{ij}$ are independently distributed as $N(\mu_i,\sigma^2)$.
The assumption that the distribution is normal and the variances are equal play a crucial role in determining whether the means corresponding to the the different factor levels are the same.
Interpretations
• Factor level means ($\mu_i$): in an experimental study, the factor level mean $\mu_i$ stands for the mean response that would be obtained if the $i$-th factor level were applied to the entire population from where the experimental units were sampled
• Residual variance ($\sigma^2$) : refers to the variability among the responses if any given treatment were applied to the entire population.
Steps in the anyalysis of factor level means
1. Determine whether or not the factor level means are all the same: $\large \mu_1=\cdots=\mu_r$
• What does $\mu_1=\cdots=\mu_r$ mean? The factor has no effect on the distribution of the response variable.
• How to evaluate the evidence of this statement $\mu_1=\cdots=\mu_r$ based on observed data ?
2. If the factor level means do differ, examine how they differ and what are the implications of these differences? (Chapter 17)
In the example, we want to answer whether there is any effect of package design on sales. First step is the obtain estimates of the factor level means.
Estimation of $\mu_i$
Define the sample mean for the $i$-th factor level: $\large \overline{Y}_{i\cdot} = \frac{1}{n_i} \sum_{j=1}^{n_i} Y_{ij} = \frac{1}{n_i} Y_{i\cdot}$ where $Y_{i\cdot} = \sum_{j=1}^{n_i} Y_{ij}$ is the sum of responses for the $i$-th treatment group, for $i=1,\ldots,r$; and the overall sample mean: $\large \overline{Y}_{\cdot\cdot} = \frac{1}{\sum_{i=1}^r n_i} \sum_{i=1}^r \sum_{j=1}^{n_i} Y_{ij} = \frac{1}{\sum_{i=1}^r n_i} \sum_{i=1}^r n_i \overline{Y}_{i\cdot} = \frac{1}{\sum_{i=1}^r n_i} Y_{\cdot\cdot}~.$ Then $\overline{Y}_{i\cdot}$ is an estimate of $\mu_i$ for each $i=1,\ldots,r$. Under the assumptions, $\overline{Y}_{i\cdot}$ is an unbiased estimator of $\mu_i$ since $\large \mathbb{E}(\overline{Y}_{i\cdot}) = \frac{1}{n_i} \sum_{j=1}^{n_i} \mathbb{E}(Y_{ij}) = \frac{1}{n_i} \sum_{j=1}^{n_i} \mu_i = \mu_i.$
                 Store IDs                       Total ($Y_{i.}$)   Mean ($\overline{Y}_{i.}$)   $n_i$
           S1    S2    S3    S4    S5
Design
   D1      11    17    16    14    15                 73                 14.6                     5
   D2      12    10    15    19    11                 67                 13.4                     5
   D3      23    20    18    17    Miss               78                 19.5                     4
   D4      27    33    22    26    28                136                 27.2                     5
Total                                      $Y_{..} = 354$    $\overline{Y}_{..} = 18.63$         19
It is easy to see that Designs 3 and 4 lead to larger sales than Designs 1 and 2. How to quantify these differences? How large is large?
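One way to quantify these differences is to enter the data into R, compute the factor level means, and fit a one-way ANOVA. A minimal sketch is given below; the object names are illustrative, and the missing value for store O is simply omitted.

```
sales  <- c(11, 17, 16, 14, 15,   # Design 1
            12, 10, 15, 19, 11,   # Design 2
            23, 20, 18, 17,       # Design 3 (store O missing)
            27, 33, 22, 26, 28)   # Design 4
design <- factor(rep(c("D1", "D2", "D3", "D4"), times = c(5, 5, 4, 5)))

tapply(sales, design, mean)       # factor level sample means: 14.6, 13.4, 19.5, 27.2
anova(lm(sales ~ design))         # one-way ANOVA table
```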
Pairwise comparison of factor level means
Suppose we want to compare Designs 1 and 2. We can formulate this as a hypothesis testing problem for the following hypothesis: $H_0 : \mu_1 = \mu_2$ against $H_a : \mu_1 \neq \mu_2$. The standard test procedure is the two-sample $z$ -test described below (assuming for the time being that $\sigma$ is known).
• Null hypothesis $\large H_0 : \mu_1 = \mu_2$ tested against alternative hypothesis $\large H_a : \mu_1 \neq \mu_2$.
• The test procedure essentially asks the following question: is the observed difference $\overline{Y}_{1\cdot} - \overline{Y}_{2\cdot}$ large enough to support the hypothesis $H_a : \mu_1 \neq \mu_2$ ?
The answer to this depends on the magnitude of Var$(\overline{Y}_{1\cdot} - \overline{Y}_{2\cdot})$ which tells you what is the typical sampling variation. Note that $\large \mbox{Var}(\overline{Y}_{1\cdot} - \overline{Y}_{2\cdot}) = \frac{\sigma^2}{n_1} + \frac{\sigma^2}{n_2}.$
• The $z$-test statistic for $H_0 : \mu_1 = \mu_2$ vs. $H_a : \mu_1 \neq \mu_2$ is
$\large Z = \frac{\overline{Y}_{1\cdot} - \overline{Y}_{2\cdot}}{\sqrt{\frac{\sigma^2}{n_1} + \frac{\sigma^2}{n_2}}}$ which has a $N(0,1)$ distribution if $H_0$ is true. Thus, we reject $H_0$ (more evidence towards $\mu_1 \neq \mu_2$) for large values of $|Z|$. How large is determined by the level of significance $\alpha$ (where $0 < \alpha < 1$ is pre-specified).
Contributors:
• Valerie Regalia
• Debashis Paul
Remedial measures for violations of assumptions in ANOVA
When one suspects that the variances of the treatment groups may be unequal or that the distribution of the measurements may be non-normal, a basic question is: how can we account for unequal variances and/or non-normality and still apply the ANOVA framework?
1. Transformation of the response variable to make its distribution closer to normal and to stabilize the variance.
2. When the departures are too extreme such that transformations do not work: use nonparametric tests instead of the usual F-test for testing the equality of means. | textbooks/stats/Computing_and_Modeling/Supplemental_Modules_(Computing_and_Modeling)/Experimental_Design/Experimental_Design_and_Introduction_to_Analysis_of_Variance_%28LN_3%29.txt |
The dependence of a response variable on two factors, A and B, say, is of interest.
• Factor A has a levels and Factor B has b levels.
• In total there are a x b treatments.
• Sample sizes for all treatment groups are equal (balanced design).
• Goal: to study the simultaneous effects of the two factors on the response variable, including their respective main effects and their interaction effects.
1.1 Example: drugs for hypertension
A medical investigator studied the relationship between the response to three blood pressure lowering drugs for hypertensive males and females. The investigator selected 30 males and 30 females and randomly assigned 10 males and 10 females to each of the three drugs.
• This is a balanced randomized complete block design (RCBD)
• Two factors:
• A - gender (observational factor)
• B - drug (experimental factor)
• Factor A has a = 2 levels: male vs. female
• Factor B has b = 3 levels: drug 1, drug 2, and drug 3
• In total there are a x b = 2 x 3 = 6 treatments
• For each treatment, the sample size is n = 10
Treatment Description Sample Size
1 drug 1, male 10
2 drug 1, female 10
3 drug 2, male 10
4 drug 2, female 10
5 drug 3, male 10
6 drug 3, female 10
1.2 Population means
• Treatment means: $\mu_{ij}$ = population mean response of the treatment with factor A at level i and factor B at level j
• Factor level means: $\mu_{i\cdot}$ = population mean response when the i-th level of factor A is applied; $\mu_{\cdot j}$ = population mean response when the j-th level of factor B is applied:
$\mu_{i\cdot} = \frac{1}{b} \sum_{j = 1}^{b}\mu_{ij}, \qquad \mu_{\cdot j} = \frac{1}{a} \sum_{i = 1}^{a}\mu_{ij}$
• Overall mean (the basic line quantity in comparisons of factor effects):
$\mu_{\cdot \cdot} = \frac{1}{ab} \sum_{i = 1}^{a} \sum_{j = 1}^{b}\mu_{ij} = \frac{1}{a} \sum_{i = 1}^{a}\mu_{i\cdot} = \frac{1}{b} \sum_{j = 1}^{b}\mu_{\cdot j}$
Table 1: Population means in a balanced two-factor ANOVA model
                                   FACTOR B
               j = 1            j = 2          .....         j = b           Row Avg
FACTOR A
  i = 1      $\mu_{11}$       $\mu_{12}$       .....       $\mu_{1b}$       $\mu_{1\cdot}$
  i = 2      $\mu_{21}$       $\mu_{22}$       .....       $\mu_{2b}$       $\mu_{2\cdot}$
   ...           ...              ...          .....           ...              ...
  i = a      $\mu_{a1}$       $\mu_{a2}$       .....       $\mu_{ab}$       $\mu_{a\cdot}$
Column Avg   $\mu_{\cdot 1}$  $\mu_{\cdot 2}$  .....       $\mu_{\cdot b}$  $\mu_{\cdot \cdot}$
1.3 Main effects
• Main effects are defined as the differences between factor level means and the overall mean
• Factor A main effects: $\alpha_{i} = \mu_{i\cdot} - \mu_{\cdot \cdot}$ = main effect of factor A at the i-th factor level
• Factor B main effects: $\beta_{j} = \mu_{\cdot j} - \mu_{\cdot \cdot}$ = main effect of factor B at the j-th factor level
• For both factor A and factor B, the sum of main effects is zero
1.4 Interaction effects
• Interaction effects describe how the effects of one factor depend on the levels of the other factor
• $(\alpha\beta)_{ij}$ = interaction effect of the i-th level of factor A and j-th level of factor B
• Note: for $1 \leq i \leq a, 1\leq j \leq b$
$\mu_{ij} = \mu_{\cdot \cdot} + \alpha_{i} + \beta_{j} + (\alpha\beta)_{ij}$
Interpretation of the interaction effects
• If all $(\alpha\beta)_{ij} = 0, i = 1, ..., a, j = 1, ..., b$, then the factor effects are additive
• This is equivalent to
$\mu_{ij} = \mu_{\cdot \cdot} + \alpha_{i} + \beta_{j}, i = 1, ..., a, j = 1, ..., b$
Thus the model is additive if all the interaction effects are zero
• If at least one of the $(\alpha\beta)_{ij}$'s is nonzero, then the factor effects are interacting
• This means that the effects of one factor are different for differing levels of the other factor.
1.5 Additive factor effects
• If the two factors are additive (i.e., there is no interaction):
$\mu_{ij} = \mu_{\cdot \cdot} + \alpha_{i} + \beta_{j}, i = 1, ..., a, j = 1, ..., b$
• Each factor can be studied separately, based on their factor level means $\{\mu_{i\cdot}\}$ and $\{\mu_{\cdot j}\}$, respectively
• This is much simpler than the joint analysis based on the treatment means $\{\mu_{ij}\}$
Example 1
Factor A has a = 2 levels, Factor B has b = 3 levels.
• Check additivity for all pairs of (i,j)
$\alpha_{1} = \mu_{1\cdot} - \mu_{\cdot \cdot} = 12 - 12 = 0$
$\beta_{1} = \mu_{\cdot 1} - \mu_{\cdot \cdot} = 9 - 12 = -3$
$\mu_{11} = 9$
$\mu_{\cdot \cdot} + (\alpha_{1} + \beta_{1}) = 12 + (0 - 3) = 9 = \mu_{11}$
• Exercise: Complete the check for additivity
FACTOR B
j = 1 j = 2 j = 3 $\mu_{i\cdot}$
FACTOR A i = 1 9 11 16 12
i = 2 9 11 16 12
$\mu_{\cdot j}$ 9 11 16 12 (= $\mu_{\cdot \cdot}$)
Table 2: Treatment group means for Example 1
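To complete the check for additivity in Example 1, the main effects and interaction effects can be computed directly from the cell means in Table 2. The following is a minimal R sketch (the matrix of means is taken from Table 2; the object names are only illustrative):

# Cell means from Table 2 (rows: factor A levels, columns: factor B levels)
mu <- matrix(c(9, 11, 16,
               9, 11, 16), nrow = 2, byrow = TRUE)
mu.. <- mean(mu)                              # overall mean = 12
alpha <- rowMeans(mu) - mu..                  # factor A main effects: 0, 0
beta  <- colMeans(mu) - mu..                  # factor B main effects: -3, -1, 4
ab <- mu - (mu.. + outer(alpha, beta, "+"))   # interaction effects (alpha*beta)_ij
ab                                            # all entries are 0, so the factors are additive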
1.6 Graphical method: interaction plots
Interaction plots constitute a graphical tool to check additivity.
• X-axis is for the factor A (or B) levels, and Y-axis is for the treatment means $\mu_{ij}$'s
• Separate curves are drawn for each of the factor B (or A) levels
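As an illustration of how such a plot can be drawn, here is a small R sketch using the base function interaction.plot() on the Example 1 cell means (since there is one mean per treatment, the plotted "response" values are simply the $\mu_{ij}$'s):

means <- c(9, 11, 16, 9, 11, 16)              # mu_ij from Table 2, row by row
A <- factor(rep(c("A1", "A2"), each = 3))
B <- factor(rep(c("B1", "B2", "B3"), times = 2))
interaction.plot(x.factor = B, trace.factor = A, response = means,
                 xlab = "Factor B level", ylab = "treatment mean")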
Interpreting the interaction plots
• If the curves are all horizontal, then the factor on the X-axis has no effect at all, i.e. the treatment means do not depend on the level of that factor
• If the curves are overlapping, then the other factor (the one not on the X-axis) has no effect
• If the curves are parallel, then the two factors are additive, i.e. the effects of factor A do not depend on (or interact with) the level of factor B, and vice versa
• Note: "horizontal" and "overlapping" are special cases of "parallel"
For Example 1:
• The two factors are additive
• Moreover, factor A does not have any effect at all (main effects of factor A are all zero)
• Factor B does have some effects (not all main effects of factor B are zero)
Example 2 Factor A: a = 2 levels; Factor B: b = 3 levels
Factor B
j = 1 j = 2 j = 3 $\mu_{i\cdot}$
Factor A i = 1 11 13 18 14
i = 2 7 9 14 10
$\mu_{\cdot j}$ 9 11 16 12 (=$\mu_{\cdot \cdot}$)
Table 3: Treatment group means for Example 2
For Example 2:
• The two factors are additive, since the curves are parallel
• Both factors have some effects (main effects of both factors are not all zero), since the curves are neither horizontal to the X-axis nor overlapping in any of the plots
• Note: Indeed, you only need to examine one of the two plots. If the curves in one plot are parallel, the curves in the other plot must also be parallel
Summary: Additive Model
• For all pairs of (i, j): $\mu_{ij} = \mu_{\cdot \cdot} + \alpha_{i} + \beta_{j}$
• The curves in an interaction plot are parallel
• The difference between treatments means for any two levels of factor B (respectively, A) is the same for all level of factor A (respectively, B):
$\mu_{1j} - \mu_{1j'} = ... = \mu_{aj} - \mu_{aj'}, 1 \leq j, j' \leq b$
1.7 Interacting factor effects
• Interpretation of $(\alpha\beta)_{ij}$: the difference between the treatment mean $\mu_{ij}$ and the value that would be expected if the two factors were additive
• Factor A and factor B are interacting: if some $(\alpha\beta)_{ij} \neq 0$, i.e. $\mu_{ij} \neq \mu_{\cdot \cdot} + \alpha_{i} + \beta_{j}$ for some (i, j)
• Equivalently, the curves are not parallel in an interaction plot
Example 3
Factor A: a = 2 levels; Factor B: b = 3 levels
Factor B
j = 1 j = 2 j = 3 $\mu_{i\cdot}$
Factor A i = 1 9 12 18 13
i = 2 9 10 14 11
$\mu_{\cdot j}$ 9 11 16 12 (=$\mu_{\cdot \cdot}$)
Table 4: Treatment group means for Example 3
For Example 3:
• The two curves in the interaction plot are not parallel, which means the two factors are interacting
• For example:
$\alpha_{1} = \mu_{1\cdot} - \mu_{\cdot \cdot} = 13 - 12 = 1$ and $\beta_{1} = \mu_{\cdot 1} - \mu_{\cdot \cdot} = 9 - 12 = -3$.
Thus $9 = \mu_{11} \neq \mu_{\cdot \cdot} + \alpha_{1} + \beta_{1} = 12 + 1 - 3 = 10$, or equivalently $(\alpha\beta)_{11} = 9 - 10 = -1 \neq 0$.
Also, $\mu_{11} - \mu_{12} = 9 - 12 = -3 \neq \mu_{21} - \mu_{22} = 9 - 10 = -1$.
• There is a larger difference among treatment means between the two levels of factor A when factor B is at the 3rd level (j = 3) than when B is at the first two levels (j = 1, 2)
Summary: Interactions
Suppose we put Factor B on the X-axis of the interaction plot
• The differences in heights of the curves reflect Factor A effects. On the other hand, if all curves are overlapping, then Factor A has no effect.
• The departure from horizontal by the curves reflects Factor B effects. On the other hand, if all curves are horizontal, then Factor B has no effect.
• The lack of parallelism among the curves reflects interaction effects.
• Any one factor with no effect means additivity
• Important: no main effects does not necessarily mean no effects or no interaction effects.
Example 4 (refer to figure 4)
• Figure (a): additive
• Figure (b) and (c): Factor B has no main effects, but Factor A and Factor B are interacting
• Figure (d): when Factor A is at level 1, treatment means increase with Factor B levels; when Factor A is at level 2, the trend becomes decreasing
• Figure (e): larger difference among treatment means between the two levels of Factor A when Factor B is at a smaller indexed level
• Figure (f): more dramatic change of treatment means among Factor B levels when Factor A is at level 1
Contributors
Joy Wei, Debashis Paul
Analysis of a balanced two factor ANOVA model
Experimental design is the design of any information-gathering exercises where variation is present, whether under the full control of the experimenter or not.
Experimental Design
1 Analysis of a balanced two factor ANOVA model
Model:
$Y_{ijk}=\mu_{\cdot\cdot}+\alpha_i+\beta_j+(\alpha\beta)_{ij}+\epsilon_{ijk}, \qquad i=1,\cdots,a; ~~j=1,\cdots,b; ~k=1,\cdots,n, \qquad (1)$
where
• $\mu_{\cdot\cdot}$, $\alpha_i$'s, $\beta_j$'s and $(\alpha\beta)_{ij}$'s are unknown parameters (fixed effects) subject to identifiability constraints:
$\sum_{i=1}^a \alpha_i=0, \qquad \sum_{j=1}^b \beta_j=0 \qquad (2)$
$\sum_{i=1}^a (\alpha\beta)_{ij}=0, ~~j=1,\cdots,b; \qquad \sum_{j=1}^b (\alpha\beta)_{ij}=0, ~~ i=1,\cdots,a. \qquad (3)$
• Distributional assumption : $\epsilon_{ijk}$ are i.i.d. (independently and identically distributed) as N(0, $\sigma^2$).
• In other words, $Y_{ijk}$'s are independent random variables with normal distribution with
$\mu_{ij} := \mathbb{E}(Y_{ijk}) = \mu_{\cdot\cdot}+\alpha_i+\beta_j+(\alpha\beta)_{ij},$
and Var($Y_{ijk}$) = $\sigma^2$, where $\alpha_i$'s, $\beta_j$'s and $(\alpha\beta)_{ij}$'s are subject to the identifiability constraints (2) and (3).
1.1 Point estimates of the population means
We estimate the population means by the corresponding sample means.
$\overline{Y}_{ij\cdot}=\frac{1}{n}\sum_{k=1}^n Y_{ijk} \longrightarrow \mu_{ij}=\mu_{\cdot\cdot}+\alpha_i+\beta_j+(\alpha\beta)_{ij}$
$\overline{Y}_{i\cdot\cdot}=\frac{1}{bn}\sum_{j=1}^b\sum_{k=1}^n Y_{ijk} \longrightarrow \mu_{i\cdot}=\mu_{\cdot\cdot}+\alpha_i$
$\overline{Y}_{\cdot j\cdot}=\frac{1}{an}\sum_{i=1}^a\sum_{k=1}^n Y_{ijk} \longrightarrow \mu_{\cdot j}=\mu_{\cdot\cdot}+\beta_j$
$\overline{Y}_{\cdots}=\frac{1}{abn}\sum_{i=1}^a\sum_{j=1}^b\sum_{k=1}^n Y_{ijk} \longrightarrow \mu_{\cdot\cdot}$
The effects (main effects and interaction effects) can be estimated accordingly.
$\widehat{\alpha}_i := \overline{Y}_{i\cdot\cdot}-\overline{Y}_{\cdots} \longrightarrow \alpha_i=\mu_{i\cdot}-\mu_{\cdot\cdot}$
$\widehat{\beta}_j := \overline{Y}_{\cdot j\cdot}-\overline{Y}_{\cdots} \longrightarrow \beta_j=\mu_{\cdot j}-\mu_{\cdot\cdot}$
$\widehat{(\alpha\beta)}_{ij} := \overline{Y}_{ij\cdot}-\overline{Y}_{\cdots}-(\overline{Y}_{i\cdot\cdot}-\overline{Y}_{\cdots})-(\overline{Y}_{\cdot j\cdot}-\overline{Y}_{\cdots}) = \overline{Y}_{ij\cdot}-\overline{Y}_{i\cdot\cdot}-\overline{Y}_{\cdot j\cdot}+\overline{Y}_{\cdots} \longrightarrow (\alpha\beta)_{ij}=\mu_{ij}-\mu_{i\cdot}-\mu_{\cdot j}+\mu_{\cdot\cdot}$
1.2 ANOVA decomposition of sum squares
Basic decomposition:
$SSTO = SSTR + SSE.$
where
$SSTO = \sum_{i=1}^a\sum_{j=1}^b \sum_{k=1}^n (Y_{ijk} - \overline{Y}_{\cdots})^2$
$SSTR = n \sum_{i=1}^a \sum_{j=1}^b (\overline{Y}_{ij\cdot} - \overline{Y}_{\cdots})^2$
$SSE = \sum_{i=1}^a\sum_{j=1}^b \sum_{k=1}^n (Y_{ijk} - \overline{Y}_{ij\cdot})^2$
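Since this module does not come with a data set, the following R sketch uses simulated balanced data purely to show how model (1) is fit and where the sums of squares above appear in the ANOVA table (the simulated data have no true factor effects; only the mechanics are illustrated):

set.seed(1)
a <- 2; b <- 3; n <- 10
d <- expand.grid(rep = 1:n, A = factor(1:a), B = factor(1:b))   # balanced layout
d$Y <- 5 + rnorm(nrow(d))               # response with no true factor effects
fit <- aov(Y ~ A * B, data = d)         # main effects plus interaction, as in (1)
anova(fit)                              # SSA, SSB, SSAB and SSE with their df
model.tables(fit, type = "effects")     # estimated alpha_i, beta_j, (alpha beta)_ij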
Contributors
• Yingwen Li (UCD)
• Debashis Paul (UCD)
Regression analysis is a set of statistical processes for estimating the relationships among variables.
Regression Analysis
1. A single factor study (continued)
A food company wanted to test four different package designs for a new breakfast cereal. 20 stores with approximately the same sales condition (such as sales volume, price, etc) were selected as experimental units. Five stores were randomly assigned to each of the 4 package designs:
• A balanced complete randomized design.
• A single, 4-level, qualitative factor: package design.
• A quantitative response variable: sales - number of packets of cereal sold during the period of study.
• Goal: exploring relationship between package design and sales.
1.1 ANOVA for single factor study
A simple statistical model for the data is as follows:
$Y_{ij} = \mu_i + \epsilon_{ij}, j = 1,...,n_i; i = 1,...,r;$
where
• $r$ is the number of factor levels (treatments) and $n_i$ is the number of experimental units corresponding to the $i$-th factor level;
• $Y_{ij}$ is the measurement for the $j$-th experimental unit corresponding to the $i$-th factor level;
• $\mu_i$ is the mean of all the measurements corresponding to the $i$-th factor level (unknown);
• $\epsilon_{ij}$'s are random errors (unobserved).
1.2 Model assumptions
The following assumptions are made about the above model:
• $\epsilon_{ij}$ are independently and identically distributed as $N(0, \sigma^2)$.
• $\mu_i$'s are unknown fixed parameters (so called fixed effects), so that $E(Y_{ij}) = \mu_i$ and $Var(Y_{ij}) = \sigma^2$. The above assumption is thus equivalent to assuming that $Y_{ij}$ are independently distributed as $N(\mu_i, \sigma^2)$.
1.3 Estimation of $\mu_i$
Define, the sample mean for the $i$-th factor level:
$\overline{Y}_{i.} = \frac{1}{n_i}\sum_{j = 1}^{n_i}Y_{ij} = \frac{1}{n_i}Y_{i.}$
where $Y_{i.} = \sum_{j=1}^{n_i}Y_{ij}$ is the sum of responses for the $i$-th treatment group, for $i = 1,...,r$; and the overall sample mean:
$\overline{Y}_{..} = \frac{1}{n_T}\sum_{i=1}^{r}\sum_{j=1}^{n_i}Y_{ij} = \frac{1}{n_T}\sum_{i=1}^{r}n_{i}\overline{Y}_{i.} = \frac{Y_{..}}{n_T}$,
where $n_T = \sum_{i=1}^{r}n_i$. Then $\overline{Y}_{i.}$ is an estimate of $\mu_i$ for each $i = 1,...,r$. Under the assumptions, $\overline{Y}_{i.}$ is an unbiased estimator of $\mu_i$ since
$E(\overline{Y}_{i.}) = \frac{1}{n_i}\sum_{j=1}^{n_i}E(Y_{ij}) = \frac{1}{n_i}\sum_{j=1}^{n_i}\mu_i = \mu_{i}.$
Table 1: Data summary: packaging of breakfast cereals

                         S1   S2   S3   S4   S5   $Y_{i.}$   $\overline{Y}_{i.}$   $n_i$
Packaging Design D1      11   17   16   14   15      73         14.6                5
Packaging Design D2      12   10   15   19   11      67         13.4                5
Packaging Design D3      23   20   18   17   --      78         19.5                4
Packaging Design D4      27   33   22   26   28     136         27.2                5
Total: $Y_{..}$ = 354, $\overline{Y}_{..}$ = 18.63, $n_T$ = 19
1.4 Comparison of factor level means
We want to test the null hypothesis $H_0 : \mu_1 = ... = \mu_r$ against the alternative hypothesis $H_a:$ not all $\mu_i$'s are equal.
• Idea 1: A baseline value for comparison is the overall mean:
$\mu_{.} = \frac{\sum_{i=1}^{r}n_i\mu_i}{n_T}$.
• Idea 2: Calculate deviations from the overall mean for each factor level:
$(\mu_1 - \mu_.)^2,...,(\mu_r - \mu_.)^2$.
Under $H_0 : \mu_1 = ... = \mu_r,$ these deviations are all zero.
• Idea 3: Use the weighted sum of the above deviations as an overall measurement of the deviation from $H_0: \mu_1 = ... = \mu_r:$
$\sum_{i=1}^{r}n_i(\mu_i - \mu_.)^2$
The weight of the i-th treatment group is its sample size $n_i$, i.e., the more data, the more importance.
Estimators
Estimate the population means by their sample counterparts:
$\overline{Y}_{1.} \rightarrow \mu_1,...,\overline{Y}_{r.} \rightarrow \mu_r$
and
$\overline{Y}_{..} = \frac{1}{n_T}\sum_{i=1}^{r}n_i\overline{Y}_{i.} \rightarrow \mu_{.}$
Thus,
$\sum_{i=1}^{r}n_i(\overline{Y}_{i.} - \overline{Y}_{..})^2$
is a statistic to measure the deviation from $H_0 : \mu_1 = ... = \mu_r$. However, $\sum_{i=1}^{r}n_i(\overline{Y}_{i.} - \overline{Y}_{..})^2$ is not an unbiased estimator of $\sum_{i=1}^{r}n_i(\mu_i - \mu_.)^2$. In fact
$E[\sum_{i=1}^{r}n_i(\overline{Y}_{i.} - \overline{Y}_{..})^2] = (r - 1)\sigma^2 + \sum_{i=1}^{r}n_i(\mu_i - \mu_.)^2.$
Nevertheless, we can compare the magnitude of $\sum_{i=1}^{r}n_i(\overline{Y}_{i.} - \overline{Y}_{..})^2$ to that of $\sigma^2$ to decide whether the deviation is large or not.
Decomposition of Total Sum of Squares
Write
$Y_{ij} - \overline{Y}_{..} = (Y_{ij} - \overline{Y}_{i.}) + (\overline{Y}_{i.} - \overline{Y}_{..})$
• $Y_{ij} - \overline{Y}_{..}$ : deviation of the response from the overall mean;
• $\overline{Y}_{i.} - \overline{Y}_{..}$ : deviation of the i-th factor level mean from the overall mean;
• $Y_{ij} - \overline{Y}_{i.}$ : deviation of the response from the corresponding factor level mean (residual).
Then the ANOVA decomposition of the sum of squares:
$\sum_{i=1}^{r}\sum_{j=1}^{n_i}(Y_{ij} - \overline{Y}_{..})^2 = \sum_{i=1}^{r}\sum_{j=1}^{n_i}(Y_{ij} - \overline{Y}_{i.})^2 + \sum_{i=1}^{r}n_i(\overline{Y}_{i.} - \overline{Y}_{..})^2$.
This can be expressed as
$SSTO = SSE + SSTR$
where $SSTO = \sum_{i=1}^{r}\sum_{j=1}^{n_i} (y_{ij} - \overline{y}_{..})^2$ is the Total Sum of Squares; $SSE = \sum_{i=1}^{r}\sum_{j=1}^{n_i} (y_{ij} - \overline{y}_{i.})^2$ is the Error Sum of Squares and $SSTR = \sum_{i=1}^{r}n_i(\overline{y}_{i.} - \overline{y}_{..})^2$ is the Treatment Sum of Squares.
Interpretation of the decomposition SSTO = SSE + SSTR
• SSTO: A measure of the overall variability among the responses.
• SSTR: A measure of the variability among the factor level means. The more similar the factor level means are, the smaller is the SSTR.
• SSE: A measure of the random variation of the responses around their corresponding factor level means. The smaller the error variance is, the smaller the SSE tends to be.
• Overall variability is the sum of the variability due to difference in treatments and that due to random fluctuations.
For the study on the effect of package design on sales volume
Refer to Table 1. Based on the information there:
$SSTO = (11 - 18.63)^2 + (17 - 18.63)^2 + ... + (28 - 18.63)^2 = 746.42$
$SSTR = 5(14.6 - 18.63)^2 + 5(13.4 - 18.63)^2 + 4(19.5 - 18.63)^2 + 5(27.2 - 18.63)^2 = 588.22$
$SSE = {(11 - 14.6)^2 + ... + (15 - 14.6)^2} + ... + {(27 - 27.2)^2 + ... + (28 - 27.2)^2} = 158.20.$
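These sums of squares can be reproduced in R from the data in Table 1; the sketch below enters the sales figures by hand and uses aov() to obtain the decomposition:

sales  <- c(11, 17, 16, 14, 15,     # design 1
            12, 10, 15, 19, 11,     # design 2
            23, 20, 18, 17,         # design 3 (one observation missing)
            27, 33, 22, 26, 28)     # design 4
design <- factor(rep(1:4, times = c(5, 5, 4, 5)))
fit <- aov(sales ~ design)
anova(fit)                          # SSTR = 588.22 on 3 df, SSE = 158.20 on 15 df
sum((sales - mean(sales))^2)        # SSTO = 746.42
tapply(sales, design, mean)         # group means 14.6, 13.4, 19.5, 27.2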
Contributors
• Scott Brunstein (UCD)
• Debashis Paul (UCD)
Analysis of Variance
1 Review of concepts related to hypothesis tests
1.1 Type I and Type II errors
In hypothesis testing, there are two types of errors
• Type I error: reject null hypothesis when it is true
• Type I error rate
P(reject $H_0$ | $H_0$ true)
• When testing $H_0$ at a pre-specified level of significance $\alpha$, the Type I error rate is controlled to be no larger than $\alpha$.
• Type II error: accept the null hypothesis when it is wrong.
• Type II error rate
P(accept $H_0$ | $H_0$ wrong).
• Power : probability of rejecting $H_0$ when it is wrong
Power = P(reject $H_0$ | $H_0$ wrong)
= 1 - Type II error rate.
1.2 What determines the power?
The power of a testing procedure depends on
• Significance level $\alpha$ - the maximum allowable Type I error - the larger $\alpha$ is , the higher is the power.
• Deviation from $H_0$ - the strength of signal - the larger the deviation is, the higher is the power.
• Sample size: the larger the sample size is, the higher is the power.
2 Power of an F-test
2.1 Power calculation for F-test
Test $H_0$ : $\mu_1$ = $\cdots$ = $\mu_r$ under a single factor ANOVA model: given the significance level $\alpha$ :
• Decision rule
$\begin{cases} \text{reject } H_0 & \text{if } F^{\ast}> F(1-\alpha;r-1,n_T-r) \\ \text{accept } H_0 & \text{if } F^{\ast} \leq F(1-\alpha;r-1,n_T-r) \end{cases}$
• The Type I error rate is at most $\alpha$.
• Power depends on the noncentrality parameter
$\phi=\frac{1}{\sigma}\sqrt{\frac{\sum_{i=1}^r n_i(\mu_i-\mu_{\cdot})^2}{r}}.$
Note $\phi$ depends on sample size (determined by the $n_i$'s) and signal size (determined by the $(\mu_i - \mu.)^2$'s).
2.2 Distribution of F-ratio under the alternative hypothesis
The distribution of F* under an alternative hypothesis.
• When the noncentrality parameter is $\phi$, then
$F^{\ast} \sim F_{r-1,n_T-r}(\phi),$
i.e., a noncentral F-distribution with noncentrality parameter $\phi$.
• Power = P($F^{\ast}$ > F(1 - $\alpha$; r - 1, $n_T - r$)), where $F^{\ast} \sim F_{r-1,n_T-r}(\phi)$.
• Example: if $\alpha$ = 0.01, r = 4, $n_T$ = 20 and $\phi$ = 2, then Power = 0.61. (Use Table B.11 of the textbook.)
2.3 How to calculate power of the F test using R
• The textbook defines the noncentrality parameter for a single factor ANOVA model as
$\phi = \frac{1}{\sigma} \sqrt{\frac{\sum_{i=1}^r n_i (\mu_i - \mu_{\cdot})^2}{r}}$
where r is number of treatment group (factor levels), $\mu_i$'s are the factor level means, $n_i$ is the sample size (number of replicates) corresponding to the i-th treatment group, and $\sigma^2$ is the variance of the measurements.
• For a balanced design, i.e., when $n_1$ = $\cdots$ = $n_r$ = n, the formula for $\phi$ reduces to
$\phi = \frac{1}{\sigma} \sqrt{(n/r) \sum_{i=1}^r (\mu_i - \mu_{\cdot})^2}~.$
Table B.11 gives the power of the F test given the values of the numerator degree of freedom $v_1$ = r - 1, denominator degree of freedom $v_2$ = $n_T - r$, level of significance $\alpha$ and noncentrality parameter $\phi$.
• Example: For r = 3, n = 5, (so that $v_1$ = 2 and $v_2$ = 12), $\alpha$ = 0.05 and $\phi$ = 2, the value of power from Table B.11 is 0.78.
However, if you want to use R to compute the power of the F-test, you need to be aware that the noncentrality parameter for the F distribution in R is defined differently. Indeed, compared to the above setting, the noncentrality parameter to be used in the R function will be r x $\phi^2$ instead of $\phi$. Here is the R code to be used for computing the power in the example described above: r = 3, n = 5, $\alpha$ = 0.05 and $\phi$ = 2:
• Critical value for the F-test when $\alpha$ = 0.05, $v_1$ = r - 1 = 2 and $v_2$ = $n_T$ - r = 12 is
F.crit = qf(0.95,2,12)
• Then the power of the test will be computed as
F.power = 1 - pf(F.crit, 2, 12, 3*2^2)
• Note that the function qf is used to compute the quantile of the central F distribution. Its second and third arguments are the numerator and denominator degrees of freedom of the F distribution.
• The function pf is used to calculate the probability under the noncentral F- density curve to the left of a given value (in this case F.crit). Its second and third arguments are the numerator and denominator degrees of freedom of the F distribution, while the fourth argument is the noncentrality parameter r x $\phi^2$ (we specify this explicitly in the above example).
• The values of F.crit and F.power are 3.885294 and 0.7827158, respectively.
3 Calculating sample size
Goal: find the smallest sample size needed to achieve
• a pre-specified power $\gamma$;
• with a pre-specified Type I error rate $\alpha$;
• for at least a pre-specified signal level s.
The idea behind the sample size calculation is as follows:
• On one hand, we want the sample size to be large enough to detect practically important deviations ( with a signal size to be at least s) from $H_0$ with high probability (with a power at least $\gamma$), and we only allow for a pre-specified low level of Type I error rate (at most $\alpha$) when there is no signal.
• On the other hand, the sample size should not be unnecessarily large such that the cost of the study is too high.
3.1 An example of sample size calculation
• For a single factor study with 4 levels and assuming a balanced design, i.e., the $n_1 = n_2 = n_3 = n_4$ (=n, say), the goal is to test $H_0$: all the factor level means $\mu_i$ are the same.
• Question: What should be the sample size for each treatment group under a balanced design, such that the F-test can achieve $\gamma$ = 0.85 power with at most $\alpha$ = 0.05 Type I error rate when the deviation from $H_0$ has at least $s=\sum_{i=1}^{r}(\mu_i-\mu_{\cdot})^2=40$ ?
• One additional piece of information needed in order to answer this question is the residual variance $\sigma^2$.
• Suppose from a pilot study, we know the residual variance is about $\sigma^2$ = 10.
• Use a trial-and-error strategy to search Table B.11 (an R sketch that automates this search is given after the steps below). This means, for a given n (starting with n = 1),
(i) calculate $\phi = (1/\sigma) \sqrt{(n/r)\sum_{i=1}^r(\mu_i - \mu_{\cdot})^2} = (1/\sigma) \sqrt{(n/r) s}$;
(ii) fix the numerator degree of freedom $v_1$ = r - 1 = 3;
(iii) check the power of the test when the denominator degree of freedom $v_2 = n_T - r$ (where $n_T$ = nr), with the given $\phi$ and $\alpha$ ;
(iv) keep increasing n until the power of the test is closest to (equal or just above) the given value of $\gamma$.
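The same trial-and-error search can be automated in R with the noncentral F distribution; recall from the previous section that R's noncentrality parameter is r x $\phi^2$, which for a balanced design equals n s / $\sigma^2$. A minimal sketch using the values of this example (r = 4, s = 40, $\sigma^2$ = 10, $\alpha$ = 0.05, target power 0.85):

r <- 4; s <- 40; sigma2 <- 10; alpha <- 0.05; target <- 0.85
for (n in 2:50) {                    # n = 1 leaves zero error df, so start at n = 2
  v1 <- r - 1
  v2 <- r * (n - 1)
  ncp <- n * s / sigma2              # equals r * phi^2
  F.crit <- qf(1 - alpha, v1, v2)
  power <- 1 - pf(F.crit, v1, v2, ncp)
  if (power >= target) { cat("n =", n, " power =", round(power, 3), "\n"); break }
}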
3.2 An alternative approach to sample size calculation
Suppose that we want to determine the minimum sample size required to attain a certain power of the test subject to a specified value of the maximum discrepancy among the factor level means. In other words, we want the test to attain power $\gamma$ (= 1 - $\beta$, where $\beta$ is the probability of Type II error) when the range of the treatment group means is at least a specified value
$\Delta = \max_{1\leq i \leq r} \mu_i - \min_{1\leq i \leq r}\mu_i ~.$
• Suppose we have a balanced design, i.e., $n_1 = \cdots = n_r$ = n, say. We want to determine the minimum value of n such that the power of the F test for testing $H_0$ : $\mu_1 = \cdots = \mu_r$ is at least a prespecified value $\gamma = 1 - \beta$.
• We need to also specify the level of significance $\alpha$ and the standard deviation of the measurements $\sigma$.
• Table B.12 gives the minimum value of n needed to attain a given power 1 - $\beta$ for a given value of $\alpha$, for a given number of treatments r and a given "effect size" $\Delta/\sigma$.
• Example : For r = 4, $\alpha$ = 0.05, in order that the F-test achieves the power 1 - $\beta$ = 0.9 when the effect size is $\Delta/\sigma$ = 1.5, we need n to be at least 14. That is, we need a balanced design with at least 14 experimental units in each treatment group.
Contributors
• Yingwen Li (UCD)
• Debashis Paul (UCD)
1.1 Model assumptions for a single factor ANOVA model
Single factor (fixed effect) ANOVA model:
$Y_{ij} = \mu_i + \epsilon_{ij}, \quad j = 1, ... , n_i; \ i = 1, ... , r.$
Important model assumptions
• Normality: $\epsilon_{ij}$'s are normal random variables
• Equal Variance: $\epsilon_{ij}$'s have the same variance ($\sigma^2$).
• Independence: $\epsilon_{ij}$'s are independent random variables.
Some questions:
• What will happen if these assumptions are violated?
• How to find out whether these assumptions are violated? diagnostic tools:
- residual plots: check normality, equal variance, independence, outliers, etc.
- tests for equal variance
• What to do when these assumptions are violated? remedial measures
- Data transformations
- Non-parametric tests
1.2 Effects of various violations
• Non-normality:
- It is not a big deal unless the departure from normality is extreme.
- $F$-test and related procedures are pretty robust to the normality assumption, both in terms of significance level and power.
• Unequal error variance:
- $F$-test and related analysis are pretty robust against unequal variance under an approximately balanced design.
- One parameter inference such as pairwise comparisons of group means could be substantially affected.
• Non-independence:
- It can have serious side effects (effective loss of degrees of freedom).
- It is also hard to correct.
- Thus it is very important to use randomization whenever necessary.
1.3 Diagnostic tools
Based on residuals:
• Residuals:
$\epsilon_{i_j} = Y_{i_j} - \bar{Y}_i, j = 1, ... , n_i; i = 1, ... , r.$
• Studentized residuals: $r_{i_j} = \frac{e_{i_j}}{s(e_{i_j})}$, where $s(e_{i_j}) = \sqrt{MSE \times (n_i - 1)/n_i}$ (since Var($e_{i_j}) = \sigma^2(1-1/n_i)$.
• Studentized residuals adjust for sample sizes and thus they are comparable across treatment groups when the design is unbalanced.
Normal probability plots
It is a graphical tool to check whether a set of quantities is approximately normally distributed.
• Each value is plotted against its "expected value under normality"
- Sort the values from smallest to largest: $x_{(1)}, ... , x_{(n)}$
- For the $i$-th smallest value $x_{(i)}$, the "expected value under normality" is roughly the $\frac{i}{n}$ percentile of the standard normal distribution (the exact definition is a bit more complex).
• A plot that is nearly linear suggests agreement with normality
• A plot that departs substantially from linearity suggests non-normality
Check normality
Normal probability plots of the residuals
• When sample size is small: use the combined residuals across all treatment groups.
• When sample size is large: draw separate plot for each treatment group.
• Use studentized residuals (but with MSE replaced by $s_{i}^{2}$'s (sample variance of the $i$-th treatment group) in the standard error calculation) when unequal variances are indicated and combined residuals are used. Note that $s_{i}^{2} = \frac{1}{n_i - 1} \sum_{j=1}^{n_i}(Y_{ij} - \bar{Y}_{i\cdot})^2$
• Normality is shown by the normal probability plots being reasonably linear (points falling roughly along the 45$^\circ$ line when using the studentized residuals).
Checking the equal variance assumption
Residual vs. fitted value plots.
• When the design is approximately balanced: plot residuals $e_{ij}$'s against the fitted values $\bar{Y}_{i\cdot}$'s.
• When the design is very unbalanced: plot the studentized residuals $r_{ij}$'s against the fitted values $\bar{Y}_{i\cdot}$'s.
• Constancy of the error variance is shown by the plot having about the same extent of dispersion of residuals (around zero) across different treatment groups.
Other things that can be examined by residual plots:
• Independence: if measurements are obtained in a time/space sequence, a residual sequence plot can be used to check whether the error terms are serially correlated.
• Outliers are identified by residuals with big magnitude.
• Existence of other important (but un-accounted for) explanatory variables: whether the residual plots show a certain pattern.
Example: package design
Residuals for the package design example can be obtained and examined as shown below.
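A minimal R sketch of these diagnostics, entering the data from Table 1 of the earlier package-design module (the object names are only illustrative):

sales  <- c(11, 17, 16, 14, 15, 12, 10, 15, 19, 11,
            23, 20, 18, 17, 27, 33, 22, 26, 28)
design <- factor(rep(1:4, times = c(5, 5, 4, 5)))
fit <- aov(sales ~ design)
e   <- residuals(fit)                                   # e_ij = Y_ij - Ybar_i.
MSE <- deviance(fit) / df.residual(fit)
n.i <- as.numeric(table(design))[as.integer(design)]    # group size for each observation
r.stud <- e / sqrt(MSE * (n.i - 1) / n.i)               # studentized residuals
qqnorm(r.stud); qqline(r.stud)                          # normal probability plot
plot(fitted(fit), e, xlab = "fitted value (group mean)", ylab = "residual")
abline(h = 0, lty = 2)                                  # check constancy of error variance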
• Cathy Wang
1. Two-factor ANOVA model with n = 1 (no replication)
• For some studies, there is only one replicate per treatment, i.e., n = 1.
• ANOVA model for two-factor studies need to be modified, since
- the degrees of freedom associated with $SSE$ will be $(n - 1)ab = 0$;
- thus the error variance $\sigma^2$ can not be estimated by $SSE$ anymore.
• Idea: make the model simpler by assuming the two factors do not interact with each other. Validity of this assumption needs to be checked.
1.1 Two-factor model without interaction
With n = 1.
• Model equation:
$Y_{ij} = \mu_{..} + \alpha_i + \beta_j + \epsilon_{ij}, i = 1, ..., a, j = 1, ..., b.$
• Identifiability constraints:
$\sum_{i=1}^{a}\alpha_i = 0, \sum_{j=1}^{b}\beta_j = 0.$
• Distributional assumptions: $\epsilon_{ij}$ are i.i.d. $N(0,\sigma^2)$
Sum of squares
Interaction sum of squares now plays the role of error sum of squares.
$SSAB = n\sum_{i=1}^{a} \sum_{j=1}^{b}(\overline{Y}_{ij.} - \overline{Y}_{i..} - \overline{Y}_{.j.} + \overline{Y}_{...})^2 = \sum_{i=1}^{a} \sum_{j=1}^{b}(\overline{Y}_{ij} - \overline{Y}_{i.} - \overline{Y}_{.j} + \overline{Y}_{..})^2$
$MSAB = \frac{SSAB}{(a-1)(b-1)}$ since $d.f.(SSAB) = (a-1)(b-1)$.
• In the general two-factor ANOVA model (when n = 1),
$E(MSAB) = \sigma^2 + \frac{\sum_{i=1}^{a}\sum_{j=1}^{b} (\alpha\beta)^2_{ij}}{(a-1)(b-1)}$
• Under the model without interaction: $E(MSAB) = \sigma^2$
• Thus $MSAB$ can be used to estimate $\sigma^2$.
ANOVA Table
ANOVA table for two-factor model without interaction and $n=1$
Source of Variation SS df MS
Factor A $SSA = b\sum_i(\overline{Y}_{i.} - \overline{Y}_{..})^2$ $a - 1$ $MSA$
Factor B $SSB = a\sum_j(\overline{Y}_{.j} - \overline{Y}_{..})^2$ $b - 1$ $MSB$
Error $SSAB = \sum_{i=1}^{a}\sum_{j=1}^{b}(\overline{Y}_{ij} - \overline{Y}_{i.} - \overline{Y}_{.j} + \overline{Y}_{..})^2$ $(a - 1)(b - 1)$ $MSAB$
Total $SSTO = \sum_{i=1}^{a}\sum_{j=1}^{b}(\overline{Y}_{ij} - \overline{Y}_{..})^2$ $ab - 1$
Expected mean squares (under no interaction):
$E(MSA) = \sigma^2 + \frac{b\sum_{i=1}^{a}\alpha_i^2}{a - 1}, E(MSB) = \sigma^2 + \frac{a\sum_{j=1}^{b}\beta_j^2}{b - 1}, E(MSAB) = \sigma^2$
F tests (for main effects)
Test factor A main effects: $H_o: \alpha_1 = ... = \alpha_a = 0$ vs. $H_a:$ not all $\alpha_i$'s are equal to zero.
• $F_A^* = \frac{MSA}{MSAB} \sim F_{a - 1, (a - 1)(b - 1)}$ under $H_o$.
• Reject $H_o$ at level of significance $\alpha$ if observed $F_A^* > F(1 - \alpha; a - 1, (a - 1)(b - 1))$.
Test factor B main effects: $H_o: \beta_1 = ... = \beta_b = 0$ vs. $H_a:$ not all $\beta_j$'s are equal to zero.
• $F_B^* = \frac{MSB}{MSAB} \sim F_{b - 1, (a - 1)(b - 1)}$ under $H_o$.
• Reject $H_o$ at level of significance $\alpha$ if observed $F_B^* > F(1 - \alpha; b - 1, (a - 1)(b - 1))$.
Estimation of means
Estimation of factor level means $\mu_{i.}$'s , $\mu_{.j}$'s.
• Proceed as before, viz., use the unbiased estimator $\overline{Y}_{i.}$ for $\mu_{i.}$ and $\overline{Y}_{.j}$ for $\mu_{.j}$, but replace $MSE$ by $MSAB$ and use the degrees of freedom of $MSAB$, that is $(a - 1)(b - 1)$. Thus, estimated standard errors:
$s(\overline{Y}_{i.}) = \sqrt{\frac{MSAB}{b}}, s(\overline{Y}_{.j}) = \sqrt{\frac{MSAB}{a}}.$
Estimation of treatment means $\mu_{ij}$'s.
• $\mu_{ij} = E(Y_{ij}) = \mu_{..} + \alpha_i + \beta_j = \mu_{i.} + \mu_{.j} - \mu_{..}$
Thus, an unbiased estimator: $\widehat{\mu}_{ij} = \overline{Y}_{i.} + \overline{Y}_{.j} - \overline{Y}_{..}$
Estimated standard error:
$s(\widehat{\mu}_{ij}) = \sqrt{MSAB(\frac{1}{b} + \frac{1}{a} - \frac{1}{ab})} = \sqrt{MSAB(\frac{a + b - 1}{ab})}$
1.2 Example: Insurance
An analyst studied the premium for auto insurance charged by an insurance company in six cities. The six cities were selected to represent different sizes (Factor A: small, medium, large) and different regions of the state (Factor B: east, west). There is only one city for each combination of size and region. The amounts of premiums charged for a specific type of coverage in a given risk category for each of the six cities are given in the following table.
Table 1: Numbers in parentheses are $\widehat{\mu}_{ij} = \overline{Y}_{i.} + \overline{Y}_{.j} - \overline{Y}_{..}$

                          Factor B
                  East           West           Row Avg
Factor A Small    140 (135)      100 (105)      $\overline{Y}_{1.} = 120$
         Medium   210 (210)      180 (180)      $\overline{Y}_{2.} = 195$
         Large    220 (225)      200 (195)      $\overline{Y}_{3.} = 210$
Column Avg        $\overline{Y}_{.1} = 190$   $\overline{Y}_{.2} = 160$   $\overline{Y}_{..} = 175$
Interaction plot based on the treatment sample means $Y_{ij}$'s: no strong interactions.
Sum of squares:
• Here $a = 3$, $b = 2$, $n = 1$.
• $SSA = 2[(120 - 175)^2 + (195 - 175)^2 + (210 - 175)^2] = 9300$.
• $SSB = 3[(190 - 175)^2 + (160 - 175)^2] = 1350$.
• $SSAB = (140 - 120 - 190 + 175)^2 + ... + (200 - 210 - 160 + 175)^2 = 100$.
• $SSTO = SSA + SSB + SSAB = 10750$.
Hypothesis testing:
• Test $H_o: \mu_{1.} = \mu_{2.} = \mu_{3.}$ (equivalently, $H_o: \alpha_1 = \alpha_2 = \alpha_3 = 0$) at level 0.05.
Table 2: ANOVA Table for Insurance example
Source of Variation SS df MS
Factor A $SSA = 9300$ $a - 1 = 2$ $MSA = 4650$
Factor B $SSB = 1350$ $b - 1 = 1$ $MSB = 1350$
Error $SSAB = 100$ $(a - 1)(b - 1) = 2$ $MSAB = 50$
Total $SSTO = 10750$ $ab - 1 = 5$
$F_A^* = \frac{MSA}{MSAB} = \frac{4650}{50} = 93$ and $F(0.95; 2, 2) = 19$. Thus reject $H_o$ at level 0.05.
• Estimation of $\mu_{ij}$: e.g.,
$\widehat{\mu}_{11} = \overline{Y}_{1.} + \overline{Y}_{.1} - \overline{Y}_{..} = 120 + 190 - 175 = 135$.
• Estimation of $\mu_{i.}$ and $\mu_{.j}$: e.g.,
$\widehat{\mu}_{1.} = \overline{Y}_{1.} = 120$.
$s(\overline{Y}_{1.}) = \sqrt{\frac{MSAB}{b}} = \sqrt{\frac{50}{2}} = 5$.
The 95% C.I. for $\mu_{1.}$ is:
$\overline{Y}_{1.} \pm t(0.975; 2) * s(\overline{Y}_{1.}) = 120 \pm 4.3*5 = (98.5, 141.5).$
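The computations of this example can also be reproduced with a few lines of R; the sketch below fits the no-interaction model with aov() and recovers the ANOVA table and the fitted means (object names are only illustrative):

premium <- c(140, 100, 210, 180, 220, 200)
size    <- factor(rep(c("small", "medium", "large"), each = 2),
                  levels = c("small", "medium", "large"))
region  <- factor(rep(c("east", "west"), times = 3))
fit <- aov(premium ~ size + region)
anova(fit)                          # SSA = 9300 (df 2), SSB = 1350 (df 1), error (SSAB) = 100 (df 2)
model.tables(fit, type = "means")   # marginal means and fitted cell means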
1.3 Checking for the presence of interaction: Tukey's test for additivity
For a two-factor study with $n = 1$, decide whether or not the two factors are interacting.
• In the no-interaction model, we assume that all $(\alpha\beta)_{ij} = 0$.
• Idea: use a less severe restriction on the interaction effects, by assuming
$(\alpha\beta)_{ij} = D\alpha_i\beta_j, i = 1, ... , a, j = 1, ... , b,$
where $D$ is an unknown parameter.
• The model becomes:
$Y_{ij} = \mu_{..} + \alpha_i + \beta_j + D\alpha_i\beta_j + \epsilon_{ij}, i = 1, ... , a, j = 1, ..., b,$
under the constraints that
$\sum_{i = 1}^{a}\alpha_i = \sum_{j = 1}^{b}\beta_j = 0.$
Estimation of $D$
• Multiply $\alpha_i\beta_j$ on both sides of the equation:
$\alpha_i\beta_jY_{ij} = \mu_{..}\alpha_i\beta_j + \alpha_i^2\beta_j + \alpha_i\beta_j^2 + D\alpha_i^2\beta_j^2 + \epsilon_{ij}\alpha_i\beta_j$
• Sum over all pairs (i, j):
$\sum_{i=1}^{a}\sum_{j=1}^{b}\alpha_i\beta_jY_{ij}=D\sum_{i=1}^{a}\sum_{j=1}^{b}\alpha_i^2\beta_j^2 + \sum_{i=1}^{a}\sum_{j=1}^{b}\epsilon_{ij}\alpha_i\beta_j$
• Then
$\widetilde{D} := \frac{\sum_{i=1}^{a}\sum_{j=1}^{b}\alpha_i\beta_jY_{ij}}{(\sum_{i=1}^{a}\alpha_i^2)(\sum_{j=1}^{b}\beta_j^2)} \approx D$
• We have the following estimates:
$\widehat{\alpha}_i = \overline{Y}_{i.} - \overline{Y}_{..}, \widehat{\beta}_j = \overline{Y}_{.j} - \overline{Y}_{..}$
• Thus, an estimator of $D$ (which is also the least squares and the maximum likelihood estimator) is given by
$\widehat{D} = \frac{\sum_{i=1}^{a}\sum_{j=1}^b ( \overline{Y}_{i.} - \overline{Y}_{..})( \overline{Y}_{.j} - \overline{Y}_{..})Y_{ij}}{(\sum_{i=1}^{a}( \overline{Y}_{i.} - \overline{Y}_{..})^2)(\sum_{j=1}^{b}( \overline{Y}_{.j} - \overline{Y}_{..})^2)}.$
ANOVA decomposition
$SSTO = SSA + SSB + SSAB^* + SSRem^*.$
• Interaction sum of squares
$SSAB^* = \sum_{i=1}^{a}\sum_{j=1}^{b}\widehat{D}^2\widehat{\alpha}_i^2\widehat{\beta}_j^2 = \frac{(\sum_{i=1}^{a}\sum_{j=1}^b ( \overline{Y}_{i.} - \overline{Y}_{..})( \overline{Y}_{.j} - \overline{Y}_{..})Y_{ij})^2}{(\sum_{i=1}^{a}( \overline{Y}_{i.} - \overline{Y}_{..})^2)(\sum_{j=1}^{b}( \overline{Y}_{.j} - \overline{Y}_{..})^2)}$
• Remainder sum of squares
$SSRem^* = SSTO - SSA - SSB - SSAB^*$
• Decomposition of degrees of freedom
$df(SSTO) = df(SSA) + df(SSB) + df(SSAB^*) + df(SSRem^*)$
$ab - 1 = (a - 1) + (b - 1) + 1 + (ab - a - b)$
• Tukey's one degree of freedom test for additivity: $H_o: D = 0$ (i.e., no interaction) vs. $H_a: D \neq 0$.
• $F$ ratio $F_{Tukey}^{*} = \frac{SSAB^*/1}{SSRem^*/(ab - a - b)}\sim F_{1, ab - a - b}$ under $H_o$.
• Decision rule: reject $H_o: D = 0$ at level of significance $\alpha$ if $F_{Tukey}^{*} > F(1 - \alpha; 1, ab - a - b)$.
Example: Insurance
• $\sum_{ij}(\overline{Y}_{i.} - \overline{Y}_{..})( \overline{Y}_{.j} - \overline{Y}_{..})Y_{ij} = -13500.$
• $\sum_{i=1}^{a}( \overline{Y}_{i.} - \overline{Y}_{..})^2 = 4650$, and $\sum_{j=1}^{b}( \overline{Y}_{.j} - \overline{Y}_{..})^2 = 450.$
• $SSAB^* = \frac{(-13500)^2}{4650 * 450} = 87.1.$
• $SSRem^* = 10750 - 9300 - 1350 - 87.1 = 12.9.$
• $ab - a - b = 3*2 - 3 - 2 = 1.$
• $F$-ratio for Tukey's test:
$F_{Tukey}^{*} = \frac{SSAB^*/1}{SSRem^*/1} = \frac{87.1}{12.9} = 6.8.$
• When $\alpha = 0.05, F(0.95; 1, 1) = 161.4 > 6.8.$
• Thus, we can not reject $H_o: D = 0$ at the 0.05 level, and we conclude that there is no significant interaction between the two factors.
• Indeed, the p-value is $p = P(F_{1,1} > 6.8) = 0.23$ which is not at all significant.
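The quantities in Tukey's test can also be computed directly in R; the sketch below reproduces the numbers of the insurance example from the 3 x 2 table of observations:

Y <- matrix(c(140, 100,
              210, 180,
              220, 200), nrow = 3, byrow = TRUE)    # rows: size; columns: region
a <- nrow(Y); b <- ncol(Y)
row.eff <- rowMeans(Y) - mean(Y)                    # estimated alpha_i
col.eff <- colMeans(Y) - mean(Y)                    # estimated beta_j
num <- sum(outer(row.eff, col.eff) * Y)             # = -13500
SSAB.star <- num^2 / (sum(row.eff^2) * sum(col.eff^2))    # = 87.1
SSTO <- sum((Y - mean(Y))^2)                        # = 10750
SSA <- b * sum(row.eff^2); SSB <- a * sum(col.eff^2)      # 9300 and 1350
SSRem <- SSTO - SSA - SSB - SSAB.star               # = 12.9
F.tukey <- (SSAB.star / 1) / (SSRem / (a * b - a - b))    # = 6.8
1 - pf(F.tukey, 1, a * b - a - b)                   # p-value, about 0.23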
Contributors
• Scott Brunstein (UCD)
• Debashis Paul (UCD)
Inference in Simple Linear Regression
• Fact : Under normal regression model $(b_0,b_1)$ and $SSE$ are independently distributed and
$\frac{b_0 - \beta_0}{s(b_0)} \sim t_{n-2}$, $\qquad \frac{b_1 - \beta_1}{s(b_1)} \sim t_{n-2}$, $\qquad SSE \sim \sigma^2 \chi_{n-2}^2$.
• Confidence interval for $\beta_0$ and $\beta_1$ : $100(1-\alpha)\%$ (two-sided) confidence interval for $\beta_i$:
$(b_i - t(1-\alpha/2;n-2) s(b_i)$, $b_i + t(1-\alpha/2;n-2) s(b_i))$
for $i=0,1$, where $t(1-\alpha/2;n-2)$ is the $1-\alpha/2$ upper cut-off point (or $(1-\alpha/2)$ quantile) of $t_{n-2}$ distribution; i.e., $P(t_{n-2} > t(1-\alpha/2;n-2)) = \alpha/2$.
• Hypothesis tests for $\beta_0$ and $\beta_1$ : $H_0 : \beta_i = \beta_{i0}$ ($i=0$ or $1$).
Test statistic : $T_i = \frac{b_i - \beta_{i0}}{s(b_i)}$.
1. Alternative: $H_1 : \beta_i > \beta_{i0}$. Reject $H_0$ at level $\alpha$ if $\frac{b_i - \beta_{i0}}{s(b_i)} > t(1-\alpha;n-2)$. Or if, P-value = $P(t_{n-2} > T_i^{observed}) < \alpha$.
2. Alternative: $H_1 : \beta_i < \beta_{i0}$. Reject $H_0$ at level $\alpha$ if $\frac{b_i - \beta_{i0}}{s(b_i)} < t(\alpha;n-2)$. Or if, P-value = $P(t_{n-2} < T_i^{observed}) < \alpha$.
3. Alternative: $H_1 : \beta_i \neq \beta_{i0}$. Reject $H_0$ at level $\alpha$ if $|\frac{b_i - \beta_{i0}}{s(b_i)}| > t(1-\alpha/2;n-2)$. Or if, P-value = $P(|t_{n-2}| > |T_i^{observed}|) < \alpha$.
Inference for mean response at $X = X_h$
• Point estimate: $\widehat Y_h = b_0 + b_1 X_h$.
Fact: $E(\widehat Y_h) = \beta_0 + \beta_1 X_h = E(Y_h)$, $Var(\widehat Y_h) = \sigma^2(\widehat Y_h) = \sigma^2\left[\frac{1}{n} + \frac{(X_h - \overline{X})^2}{\sum_i (X_i - \overline{X})^2}\right]$. Estimated variance is $s^2(\widehat Y_h) = MSE \left[\frac{1}{n} + \frac{(X_h - \overline{X})^2}{\sum_i (X_i - \overline{X})^2}\right]$.
Distribution: $\frac{\widehat Y_h - E(Y_h)}{s(\widehat Y_h)} \sim t_{n-2}$.
Confidence interval: $100(1-\alpha)$% confidence interval for $E(Y_h)$ is $(\widehat Y_h - t(1-\alpha/2;n-2) s(\widehat Y_h),\widehat Y_h + t(1-\alpha/2;n-2) s(\widehat Y_h))$.
Prediction of a new observation $Y_{h(new)}$ at $X = X_h$
• Prediction : $\widehat Y_{h(new)} = \widehat Y_h = b_0 + b_1 X_h$.
Error in prediction : $Y_{h(new)} - \widehat Y_{h(new)} = Y_{h(new)} - \widehat Y_h$.
Fact : $\sigma^2(Y_{h(new)} - \widehat Y_h) = \sigma^2(Y_{h(new)}) + \sigma^2(\widehat Y_h) = \sigma^2 + \sigma^2(\widehat Y_h) = \sigma^2\left[1+\frac{1}{n}+ \frac{(X_h - \overline{X})^2}{\sum_i (X_i - \overline{X})^2}\right]$.
Estimate of $\sigma^2(Y_{h(new)} - \widehat Y_h)$ is $s^2(Y_{h(new)} - \widehat Y_h) = MSE \left[1+\frac{1}{n}+ \frac{(X_h - \overline{X})^2}{\sum_i (X_i - \overline{X})^2}\right]$.
Distribution : $\frac{Y_{h(new)} - \widehat Y_h}{s(Y_{h(new)} -\widehat Y_h)} \sim t_{n-2}$.
Prediction interval : $100(1-\alpha)$% prediction interval for $Y_{h(new)}$ is $(\widehat Y_h - t(1-\alpha/2;n-2) s(Y_{h(new)}-\widehat Y_h),\widehat Y_h + t(1-\alpha/2;n-2) s(Y_{h(new)}-\widehat Y_h))$.
• Confidence band for the regression line : At $X=X_h$ the $100(1-\alpha)$% confidence band for the regression line is given by $\widehat Y_h \pm w_\alpha \, s(\widehat Y_h)$, where $w_\alpha = \sqrt{2F(1-\alpha; 2, n-2)}$.
Here $F(1-\alpha;2,n-2)$ is the $1-\alpha$ upper cut-off point (or, $(1-\alpha)$ quantile) for the $F_{2,n-2}$ distribution ($F$ distribution with d.f. $(2,n-2)$).
Example $1$: Simple linear regression
We consider a data set on housing prices. Here $Y$ = selling price of a house (in thousands of dollars) and $X$ = size of the house (in 100 square feet). The summary statistics are given below:
$n = 19$, $\overline{X} = 15.719$, $\overline{Y} = 75.211$, $\sum_i(X_i - \overline{X})^2 = 40.805$, $\sum_i (Y_i - \overline{Y})^2 = 556.078$, $\sum_i (X_i - \overline{X})(Y_i - \overline{Y}) = 120.001$.
• Estimates of $\beta_1$ and $\beta_0$: $b_1 = \frac{\sum_i (X_i - \overline{X})(Y_i - \overline{Y})}{\sum_i(X_i - \overline{X})^2} = \frac{120.001}{40.805} = 2.941$ and $b_0 = \overline{Y} - b_1 \overline{X} = 75.211 - (2.941)(15.719) = 28.981.$
• Fit and Prediction: The fitted regression line is $\widehat{Y} = 28.981 + 2.941 X$. When $X_h = 18.5$, the predicted value, that is, an estimate of the mean selling price (in thousands of dollars) when the size of the house is 1850 sq. ft., is $\widehat Y_h = 28.981 + (2.941) (18.5) = 83.39$.
• MSE: The degrees of freedom (df) $= n-2 = 17$. $SSE = \sum_i(Y_i - \overline{Y})^2 - b_1^2\sum_i(X_i - \overline{X})^2 = 203.17$. So, $MSE = \frac{SSE}{n-2} = \frac{203.17}{17} = 11.95$.
• Standard Error Estimates: $s^2(b_0) = MSE \left[\frac{1}{n} + \frac{\overline{X}^2}{\sum_i(X_i - \overline{X})^2} \right] = 73.00$, $\qquad s(b_0) = \sqrt{s^2(b_0)} = 8.544$.
$s^2(b_1) = \frac{MSE}{\sum_i(X_i - \overline{X})^2} = 0.2929$, $\qquad s(b_1) = \sqrt{s^2(b_1)} = 0.5412$.
• Confidence Intervals: We assume that the errors are normal to find confidence intervals for the parameters $\beta_0$ and $\beta_1$. We use the fact that $\frac{b_0 - \beta_0}{s(b_0)} \sim t_{n-2}$ and $\frac{b_1 - \beta_1}{s(b_1)} \sim t_{n-2}$ where $t_{n-2}$ denotes the $t$-distribution with $n-2$ degrees of freedom. Since $t(0.975;17) = 2.1098$, it follows that 95% two-sided confidence interval for $\beta_1$ is $2.941 \pm (2.1098)(0.5412) = (1.80, 4.08)$. Since $t(0.95;17) = 1.740$, the 90% two-sided confidence interval for $\beta_0$ is $28.981\pm (1.740)(8.544) = (14.12,43.84)$.
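All of the above quantities can be reproduced in R from the summary statistics alone; the sketch below recomputes the estimates, standard errors and confidence intervals, together with the estimated mean response at $X_h$ = 18.5 (object names are only illustrative):

n <- 19; xbar <- 15.719; ybar <- 75.211
Sxx <- 40.805; Syy <- 556.078; Sxy <- 120.001
b1 <- Sxy / Sxx                                 # 2.941
b0 <- ybar - b1 * xbar                          # 28.981
MSE <- (Syy - b1^2 * Sxx) / (n - 2)             # 11.95
s.b1 <- sqrt(MSE / Sxx)                         # 0.5412
s.b0 <- sqrt(MSE * (1 / n + xbar^2 / Sxx))      # 8.544
b1 + c(-1, 1) * qt(0.975, n - 2) * s.b1         # 95% CI for beta_1: (1.80, 4.08)
b0 + c(-1, 1) * qt(0.95, n - 2) * s.b0          # 90% CI for beta_0: (14.12, 43.84)
Xh <- 18.5
Yh.hat <- b0 + b1 * Xh                          # 83.39
s.Yh <- sqrt(MSE * (1 / n + (Xh - xbar)^2 / Sxx))
Yh.hat + c(-1, 1) * qt(0.975, n - 2) * s.Yh     # 95% CI for E(Y_h)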
Contributors
• Agnes Oshiro
(Source: Spring 2012 STA108 Handout 4)
Multiple comparison refers to the situation where a family of statistical inferences is considered simultaneously.
• Examples: construct a family of confidence intervals, or test multiple hypotheses.
• However, "errors" are more likely to occur when one considers these inferences as a whole.
If one conducts 100 hypothesis tests, each at the 0.05 significance level, then even when all 100 null hypotheses are true, on average 5 of them will be rejected purely by chance. If these tests are independent, then the probability of at least one wrong rejection is $1 - 0.95^{100} \approx 99.4\%$.
Therefore, we also want to simultaneously control these errors.
• C.I.: maintain a family-wise confidence coefficient;
• Hypothesis testing : control the family-wise Type-I error rate.
1.1 Family-wise confidence coefficient for pairwise differences
For r factor levels, there are r(r - 1)/2 pairwise comparisons of form $D_{ij} = \mu_i - \mu_j$ (1 $\leq i < j \leq$ r).
For a given pair (i, j), denote the (1 - $\alpha$)-C.I. for $D_{ij}$ by $C_{ij}$($\alpha$):
$C_{ij}(\alpha)=\widehat{D}_{ij} \pm t(1-\frac{\alpha}{2};n_T-r) \times s(\widehat{D}_{ij}).$
• $\widehat{D}_{ij}=\bar{Y}_{i\cdot}-\bar{Y}_{j\cdot}$ is the estimator
• s($\widehat{D}_{ij}$) is its estimated standard error
• $t(1-\frac{\alpha}{2};n_T-r)$ is the multiplier of the standard error that gives the desired confidence coefficient 1 - $\alpha$
By definition
$\mathbb{P}(D_{ij} \in C_{ij}(\alpha)) = 1-\alpha,$
in other words, the probability that $D_{ij}$ falls out of this C.I. is controlled by $\alpha$.
Definition: Family-wise confidence coefficient
Family-wise confidence coefficient of this family of C.I.s is defined as
$\mathbb{P}(D_{ij} \in C_{ij}(\alpha), ~\hbox{for all $1 \leq i<j \leq r$})$
i.e., the probability that these C.I.s simultaneously contain their corresponding parameter.
It is obvious that
$\mathbb{P}(D_{ij} \in C_{ij}(\alpha), \ \text{for all } 1 \leq i<j \leq r) \leq \mathbb{P}(D_{i'j'} \in C_{i'j'}(\alpha)) = 1-\alpha,$
for any given pair (i', j').
How to construct C.I.s such that the family-wise confidence coefficient is at least 1 - $\alpha$ ?
Idea: replace $t(1-\frac{\alpha}{2};n_T-r)$ by another multiplier which gives a family-wise confidence coefficient (1 - $\alpha$).
1.2 Bonferroni's Procedure
Suppose we want to construct g C.I.s simultaneously and control the family-wise confidence coefficient at level 1 - $\alpha$
• Bonferroni procedure : construct each C.I. at level 1 - $\alpha$/g.
-- If we want to construct C.I.s for g pairwise comparisons, then Bonferroni's C.I.s are of the form
$C^B_{ij}(\alpha)=\widehat{D}_{ij} \pm B \times s(\widehat{D}_{ij}).$
where $B=t(1-\frac{\alpha}{2g};n_T-r).$
Then
$\mathbb{P}(D_{ij} \in C^B_{ij}(\alpha), \hbox{for all g pairs}~(i,j)) \geq 1-\alpha.$
This is due to the Bonferroni's inequality:
Bonferroni's inequality
$\mathbb{P}\Bigl(\bigcap_{k=1}^g A_k\Bigr) \geq 1- \sum_{k=1}^g \mathbb{P}(A_k^c) = 1-g\beta,$
provided that $\mathbb{P}(A_k) = 1 - \beta$ (equivalently, $\mathbb{P}(A_k^c) = \beta$), for each k = 1, $\cdots$ , g;
and the fact that for every pair (i, j),
$\mathbb{P}(D_{ij} \in C^B_{ij}(\alpha))=1-\frac{\alpha}{g}.$
1.3 Tukey's Procedure
Tukey's procedure for families of pairwise comparisons: define C.I. for $D_{ij}$ as
$C^T_{ij}(\alpha)=\widehat{D}_{ij} \pm T \times s(\widehat{D}_{ij}),$
where the multiplier is $T := \frac{1}{\sqrt{2}}q(1-\alpha;r,n_T-r)$, and $q(r, n_T - r)$ denotes the studentized range distribution with parameters r and $n_T - r$ (refer to Table B.9).
For such C.I.s, the family-wise confidence coefficient is at least 1 - $\alpha$, i.e.,
$\mathbb{P}(D_{ij} \in C^T_{ij}(\alpha), \ \text{for all } 1 \leq i<j \leq r) \geq 1-\alpha.$
In the above "=" holds for balanced designs. Tukey's procedure is conservative for unbalanced designs.
Reasoning of Tukey's Procedure
• Fact 1: Suppose that $X_1$, $\cdots$ , $X_r$ are i.i.d N($\mu$, $\sigma^2$). Let W = max{$X_i$} - min{$X_i$} denote the range of the data. If $s^2$ is an estimator of $\sigma^2$, which is also independent of the $X_i$'s and has $\nu$ degrees of freedom (with $\nu s^2/\sigma^2$ having $\chi_{(\nu)}^2$ distribution), then the quantity W/s is called the studentized range and we have
$\frac{W}{s} \sim q(r,\nu).$
• Fact 2: When $n_1 = \cdots = n_r = n$ (balanced design), $\overline{Y}_{1\cdot}-\mu_1,\cdots, \overline{Y}_{r\cdot}-\mu_r$ are i.i.d N(0, $\frac{\sigma^2}{n})$; MSE is an estimator of $\sigma^2$ with $n_T - r$ degrees of freedom and is independent of $\{\overline{Y}_{i\cdot} - \mu_i\}$'s . Therefore
$\frac{\max\{\overline{Y}_{i\cdot} - \mu_i\} - \min\{\overline{Y}_{i\cdot} - \mu_i\}}{\sqrt{MSE/n}} \sim q(r,n_T - r).$
• Fact 3: For a given pair $1 \leq i < j \leq r$,
$|\widehat{D}_{ij} - D_{ij}| = |(\overline{Y}_{i\cdot} - \mu_i) - (\overline{Y}_{j\cdot} - \mu_j)| \leq \max\{\overline{Y}_{i\cdot} - \mu_i\} - \min\{\overline{Y}_{i\cdot } - \mu_i\}.$
• Fact 4: $s(\widehat{D}_{ij})=\sqrt{MSE(\frac{1}{n}+\frac{1}{n})}=\sqrt{2}\sqrt{\frac{MSE}{n}}$ and for each pair (i, j),
$D_{ij} \in C^T_{ij}(\alpha) \Longleftrightarrow \frac{|\widehat{D}_{ij} - D_{ij}|}{s(\widehat{D}_{ij})} \leq T.$
Finally, the family-wise confidence coefficient for Tukey's C.I.s is
$\mathbb{P}(D_{ij} \in C^T_{ij}(\alpha), \ \text{for all } 1 \leq i<j \leq r) = \mathbb{P}\Bigl(\frac{|\hat{D}_{ij}-D_{ij}|}{s(\hat{D}_{ij})} \leq T, \ \text{for all } 1 \leq i<j \leq r\Bigr) = \mathbb{P}\Bigl(\frac{\max\{\overline{Y}_{i\cdot}-\mu_i\}-\min\{\overline{Y}_{i\cdot}-\mu_i\}}{\sqrt{MSE/n}} \leq \sqrt{2}T \Bigr) = \mathbb{P}(q(r,n_T-r) \leq q(1-\alpha;r, n_T-r)) = 1-\alpha.$
2 Simultaneous inference for contrasts: Scheffé's procedure
• There are many contrasts (indeed infinitely many). How to achieve a family-wise confidence coefficient (say 1 - $\alpha$) or a type-one error rate (say $\alpha$) for a large number of contrasts?
• Consider the family of all possible contrasts:
$\mathcal{L}=\Bigl\{L=\sum_{i=1}^r c_i\mu_i: \quad \sum_{i=1}^r c_i=0 \Bigr\}.$
• Scheffé's procedure: define the C.I. for a contrast L as
$C^S_L(\alpha) :=\hat{L} \pm S \times s(\hat{L}),$
where $S^2=(r-1)F(1-\alpha;r-1,n_T-r)$.
2.1 Interpretation of Scheffé's procedure
• The family-wise confidence coefficient of $\{C^S_L(\alpha)\}$ is
$\mathbb{P}(L \in C^S_{L}(\alpha), \ \hbox{for all } L \in \mathcal{L}) = 1-\alpha. \qquad (1)$
• Interpretation: If the study were repeated many times and each time these C.I.s were constructed, then in (1 - $\alpha$)100% of times, all contrasts would fall into their respective C.I.s.
• Simultaneous testing: reject $H_{0L}$ : L = 0, if zero is not contained in the corresponding C.I.: ${C^S_L(\alpha)}$. Such a decision rule has a family-wise significance level $\alpha$, i.e.,
$\mathbb{P}(\hbox{at least one of $H_{0L}$ is rejected} | \hbox {all $H_{0L}$ are true}) \leq \alpha.$
2.2 Application to package design example
Suppose we want to maintain a family-wise confidence coefficient at 90% for all possible contrasts simultaneously. Then for example, the Scheffé's C.I. of
$L=\frac{\mu_1+\mu_2}{2}-\frac{\mu_3+\mu_4}{2},$
is constructed by:
• $S^2=(r-1)F(1-\alpha;r-1,n_T-r)=3 \times F(0.9;3,15)=7.47$, which means $S=\sqrt{7.47}=2.73.$
• The Scheffé's C.I. of L is
$-9.35 \pm 1.50 \times 2.73=[-13.4, \ -5.3].$
• Note that the Scheffé multiplier S = 2.73 is larger than the multiplier t(0.95;15) = 1.753 if we are only interested in L.
2.3 Justification of Scheffé procedure
Consider an arbitrary sequence $c_1, \cdots, c_r$ satisfying $\sum_{i=1}^r c_i = 0.$ Then, with L = $\sum_{i=1}^r c_i \mu_i$, and $\widehat{L}$ = $\sum_{i=1}^r c_i \overline{Y}_{i\cdot}$ we have
\begin{eqnarray*}
\widehat L - L &=& \sum_{i=1}^r c_i (\overline{Y}_{i\cdot} - \mu_i) \\
&=& \sum_{i=1}^r c_i [(\overline{Y}_{i\cdot} - \mu_i) - (\overline{Y}_{\cdot\cdot} - \mu_{\cdot})] \qquad (\mbox{since}~\sum_{i=1}^r c_i = 0)\\
&=& \sum_{i=1}^r c_i (\overline{\varepsilon}_{i\cdot} - \overline{\varepsilon}_{\cdot\cdot})
\end{eqnarray*}
with
$\overline{\varepsilon}_{i\cdot} = \frac{1}{n_i} \sum_{j=1}^{n_i} \varepsilon_{ij} \qquad\mbox{and} \qquad \overline{\varepsilon}_{\cdot\cdot} = \frac{1}{n_T} \sum_{i=1}^r \sum_{j=1}^{n_i} \varepsilon_{ij},$
where the $\varepsilon_{ij} = Y_{ij} - \mu_i$ are i.i.d. $N(0,\sigma^2)$.
Cauchy-Schwarz inequality
Let $a_1, \cdots, a_r$ and $b_1, \cdots, b_r$ be real numbers. Then
$|\sum_{i=1}^r a_i b_i | \leq \sqrt{\sum_{i=1}^r a_i^2}\sqrt{\sum_{i=1}^r b_i^2}.$
Taking $a_i = c_i/\sqrt{n_i}$ and $b_i = \sqrt{n_i}(\overline{\varepsilon}_{i\cdot} - \overline{\varepsilon}_{\cdot\cdot})$, and applying the Cauchy-Schwarz inequality, we obtain
\begin{eqnarray*}
|\widehat L - L| &=& \Bigl|\sum_{i=1}^r \frac{c_i}{\sqrt{n_i}}\, \sqrt{n_i}(\overline{\varepsilon}_{i\cdot} - \overline{\varepsilon}_{\cdot\cdot})\Bigr| \\
&\leq& \sqrt{\sum_{i=1}^r \frac{c_i^2}{n_i}} \sqrt{\sum_{i=1}^r n_i (\overline{\varepsilon}_{i\cdot} - \overline{\varepsilon}_{\cdot\cdot})^2}. \qquad (2)
\end{eqnarray*}
Since
$s(\widehat L) = \sqrt{MSE} \sqrt{\sum_{i=1}^r \frac{c_i^2}{n_i}}$
from equation (2), we have
$\frac{|\widehat{L} - L|}{s(\widehat{L})} \leq \sqrt{\frac{\sum_{i=1}^r n_i (\overline{\varepsilon}_{i\cdot} - \overline{\varepsilon}_{\cdot\cdot})^2}{MSE}}~.$
Observe that $\sum_{i=1}^r n_i (\overline{\varepsilon}_{i\cdot} - \overline{\varepsilon}_{\cdot\cdot})^2$ has the same distribution as SSTR under the hypothesis $\mu_1 = \cdots = \mu_r.$ Thus,
$\frac{\sum_{i=1}^r n_i (\overline{\varepsilon}_{i\cdot} - \overline{\varepsilon}_{\cdot\cdot})^2}{\sigma^2} \sim \chi_{(r-1)}^2.$
Also, since $\sum_{i=1}^r n_i (\overline{\varepsilon}_{i\cdot} - \overline{\varepsilon}_{\cdot\cdot})^2$ is defined in terms of $\overline{Y}_{1\cdot},\ldots,\overline{Y}_{r\cdot}$, it is independent of MSE. Moreover, $SSE/\sigma^2 \sim \chi_{(n_T-r)}^2$. Therefore,
$\frac{\sum_{i=1}^r n_i (\overline{\varepsilon}_{i\cdot} - \overline{\varepsilon}_{\cdot\cdot})^2/ (r-1) }{MSE} = \frac{\chi_{(r-1)}^2/(r-1)}{\chi_{(n_T-r)}^2/(n_T - r)} \sim F_{(r-1,n_T-r)}. \qquad (3)$
This proves that if we choose $S = \sqrt{(r-1) F(1-\alpha;r-1,n_T-r)}$, then the intervals $C^{S}_{L}$ defined by $\widehat{L} \pm S \times s(\widehat{L})$ satisfy (1).
3 Comparison of different multiple comparison procedures
In this section, we compare the performance of Bonferroni's, Tukey's and Scheffé's procedures for constructing confidence intervals for multiple parameters (pairwise differences of treatment means, or more general contrasts).
3.1 Rust inhibitors example revisited
In a study of the effectiveness of different rust inhibitors, four brands (1,2,3,4) were tested. Altogether, 40 experimental units were randomly assigned to the four brands, with 10 units assigned to each brand (balanced design). The resistance to rust was evaluated in a coded form after exposing the experimental units to severe conditions.
Summary statistics and ANOVA table: $n_1 = n_2 = n_3 = n_4 = 10$ and $\overline{Y}_{1\cdot} = 43.14$, $\overline{Y}_{2\cdot} = 89.44$, $\overline{Y}_{3\cdot} = 67.95$ and $\overline{Y}_{4\cdot} = 40.47.$
Source of Variation Sum of Squares (SS) Degrees of Freedom (df) MS
Between treatments SSTR = 15953.47 r - 1 = 3 MSTR = 5317.82
Within treatments SSE = 221.03 $n_T - r = 36$ MSE = 6.140
Total SSTO = 16174.50 $n_T - 1 = 39$
Example $1$
All 6 pairwise comparisons $D_{ij} = \mu_i - \mu_j, 1\leq i < j \leq 4$, are of interest.
First we construct Tukey's multiple comparison confidence intervals for all pairwise comparisons with a family-wise confidence coefficient of 95%.
• Using linear interpolation based on the quantiles given in Table B.9, q(0.95;4,36) $\approx$ 3.814. A more accurate value of 3.809 may be obtained by using the R command qtukey (0.95,4,36). Thus, $T=\frac{1}{\sqrt{2}}q(1-\alpha;r,n_T-r)=\frac{1}{\sqrt{2}}q(0.95;4,36)=\frac{1}{\sqrt{2}}3.809=2.69$.
• Note that T = 2.69 > 2.03 = t(0.975;36).
• Tukey's C.I. for $\mu_1 - \mu_2$ is
$-46.3 \pm 1.11 \times 2.69 =[-49.31,-43.29].$
• All six confidence intervals are:
\begin{eqnarray*}
&& -49.3 \leq \mu_1-\mu_2 \leq -43.3, \qquad -27.8 \leq \mu_1-\mu_3 \leq -21.8,\\
&& -0.3 \leq \mu_1-\mu_4 \leq 5.7, \qquad 18.5 \leq \mu_2-\mu_3 \leq 24.5,\\
&& 46.0 \leq \mu_2-\mu_4 \leq 52.0, \qquad 24.5 \leq \mu_3-\mu_4 \leq 30.5.
\end{eqnarray*}
• Zero is contained in one of the C.I.s (the one corresponding to $\mu_1 - \mu_4$), but is not in the other five C.I.s. Therefore, at the family-wise significance level 0.05, we should not reject $H_{0,14} : \mu_1 = \mu_4$, but should reject the other five null hypotheses of the form $H_{0,ij} : \mu_i = \mu_j$.
Let us compare the simultaneous C.I.s formed by Tukey's, Bonferroni's and Scheffé's procedures; an R sketch reproducing the three multipliers follows the list below.
• Tukey's multiplier:
$T=\frac{1}{\sqrt{2}}q(1-\alpha;r,n_T-r)=\frac{1}{\sqrt{2}}q(0.95;4,36)=2.69.$
• Bonferroni's multiplier: since g = 6,
$B=t(1-\frac{\alpha}{2g};n_T-r)=t(1-\frac{0.05}{12};36)=2.79.$
• Scheffé's multiplier:
$S = \sqrt{(r-1)F(1-\alpha;r-1,n_T-r)} = \sqrt{3 F(0.95;3,36)}= 2.93.$
• T < B < S, and so Tukey's procedure is the best (gives rise to the narrowest confidence intervals).
• For example, the C.I.s for $\mu_1 - \mu_2$ are
$T: [-49.3,-43.3]; \qquad S: [-49.6,-43.0]; \qquad B: [-49.4,-43.2].$
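The three multipliers above can be reproduced directly in R from the built-in quantile functions. A minimal sketch, using $\alpha = 0.05$, $r = 4$, $n_T - r = 36$ and $g = 6$ as in this example:

    alpha <- 0.05
    r  <- 4     # number of treatments
    df <- 36    # n_T - r
    g  <- 6     # number of pairwise comparisons
    T_mult <- qtukey(1 - alpha, r, df) / sqrt(2)        # Tukey
    B_mult <- qt(1 - alpha / (2 * g), df)               # Bonferroni
    S_mult <- sqrt((r - 1) * qf(1 - alpha, r - 1, df))  # Scheffe
    c(Tukey = T_mult, Bonferroni = B_mult, Scheffe = S_mult)
    # approximately 2.69, 2.79 and 2.93, as quoted above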
Example $2$
Suppose that only 4 pairwise comparisons are of interest. Then
• T and S won't change
• Bonferroni's multiplier B decreases. Now g = 4, and so
$B=t(1-\frac{\alpha}{2g};n_T-r)=t(1-\frac{0.05}{8};36)=2.63.$
• B < T < S, and so Bonferroni's procedure is the best.
• For example, Bonferroni's C.I. for $\mu_1 - \mu_2$ narrows down to [-49.2, -43.4].
4 Comparative merits of Tukey's, Bonferroni's and Scheffé's procedures
• All three procedures give confidence intervals of the form
${\rm estimator} \pm {\rm multiplier} \times {\rm SE}$
where multiplier = T, B or S and SE stands for the estimated standard error of the estimator.
• Tukey's procedure: applicable to families of pairwise comparisons.
• Bonferroni's procedure: applicable to families of a finite number of pre-specified linear combinations (whether or not these are contrasts).
• Scheffé's procedure: applicable to families of a finite or infinite number of contrasts.
• If the family of interest consists of all pairwise comparisons, then Tukey's procedure is the best.
• If the family consists of some (but not all) of the pairwise comparisons, Bonferroni's procedure may or may not be better than Tukey's, depending on the number of pairwise comparisons of interest.
• If the family consists of a finite number of contrasts no larger than the number of factor level means, then Bonferroni's procedure is better than Scheffé's. Otherwise, Bonferroni's procedure may or may not be better than Scheffé's.
• In practice, one can compute all applicable multipliers and use the smallest number to construct the C.I.s.
Contributors
• Yingwen Li (UCD), Debashis Paul (UCD)
A response variable $Y$ is linearly related to $p-1$ different explanatory variables $X^{(1)},\ldots,X^{(p-1)}$ (where $p \geq 2$). The regression model is given by
$Y_i = \beta_0 + \beta_1 X_i^{(1)} + \cdots + \beta_{p-1} X_i^{(p-1)} + \varepsilon_i, \qquad i=1,\ldots,n \qquad \label{1}$
where $\varepsilon_i$ have mean zero, variance $\sigma^2$ and are uncorrelated. The Equation \ref{1} can be expressed in matrix notations as
$Y = \mathbf{X} \beta + \varepsilon,$
where
$Y = \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix}, \qquad \varepsilon = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix},$
$\mathbf{X} = \begin{bmatrix} 1 & X_1^{(1)} & X_1^{(2)} & \cdots & X_1^{(p-1)} \\ 1 & X_2^{(1)} & X_2^{(2)} & \cdots & X_2^{(p-1)} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & X_n^{(1)} & X_n^{(2)} & \cdots & X_n^{(p-1)} \end{bmatrix}, \qquad\mbox{and} \qquad \beta = \begin{bmatrix} \beta_0\\ \beta_1 \\ \vdots \\ \beta_{p-1} \end{bmatrix} .$
So $\mathbf{X}$ is an $n \times p$ matrix.
Estimation Problem
Note that $\beta$ is estimated by the least squares procedure, that is, by minimizing the sum of squared errors $\sum_{i=1}^n (Y_i - \beta_0 - \beta_1 X_i^{(1)} - \cdots - \beta_{p-1} X_i^{(p-1)})^2$. The latter quantity can be expressed in matrix notation as $\parallel Y - \mathbf{X}\beta\parallel^2$. Minimization with respect to the parameter $\beta$ (a $p \times 1$ vector) gives rise to the normal equations:
$\begin{eqnarray*} b_0 n + b_1\sum_i X_i^{(1)} + b_2 \sum_i X_i^{(2)} + \cdots + b_{p-1} \sum_i X_i^{(p-1)} &=& \sum_i Y_i \\ b_0 \sum_i X_i^{(1)} + b_1 \sum_i (X_i^{(1)})^2 + b_2 \sum_i X_i^{(1)} X_i^{(2)} + \cdots + b_{p-1} \sum_i X_i^{(1)} X_i^{(p-1)} &=& \sum_i X_i^{(1)} Y_i \\ &\vdots& \\ b_0 \sum_i X_i^{(p-1)} + b_1 \sum_i X_i^{(p-1)}X_i^{(1)} + b_2 \sum_i X_i^{(p-1)} X_i^{(2)} + \cdots + b_{p-1} \sum_i (X_i^{(p-1)})^2 &=& \sum_i X_i^{(p-1)} Y_i \end{eqnarray*}$
Observe that we can express this system of $p$ equations in $p$ variables $b_0,b_1,\ldots,b_{p-1}$ as $$\label{eq:normal} \mathbf{X}^T\mathbf{X} \mathbf{b} = \mathbf{X}^T Y,$$ where $\mathbf{b}$ is a $p \times 1$ vector with $\mathbf{b}^T = (b_0,b_1,\ldots,b_{p-1})$.
If the $p \times p$ matrix $\mathbf{X}^T\mathbf{X}$ is nonsingular (as we shall assume for the time being), then the solution to this system is given by $\widehat \beta = \mathbf{b} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T Y .$ This is the least squares estimate of $\beta$.
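As a check on the matrix formula, the normal equations can be solved directly in R and compared with the coefficients returned by lm(). The sketch below uses simulated data with made-up predictor names; it only illustrates the algebra and is not one of the text's examples.

    set.seed(123)
    n <- 50
    x1 <- rnorm(n); x2 <- rnorm(n)                 # two hypothetical predictors
    y  <- 1 + 2 * x1 - 0.5 * x2 + rnorm(n)         # simulated response
    X <- cbind(1, x1, x2)                          # n x p design matrix (p = 3)
    b <- solve(t(X) %*% X, t(X) %*% y)             # solves X'X b = X'y
    b                                              # least squares estimates
    coef(lm(y ~ x1 + x2))                          # same values from lm()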
Expected value and variance of random vectors
For an $m \times 1$ vector $\mathbf{Z}$, with coordinates $Z_1,\ldots,Z_m$, the expected value (or mean), and variance of $\mathbf{Z}$ are defined as
$E(\mathbf{Z}) = E \begin{bmatrix} Z_1 \\ Z_2 \\ \vdots \\ Z_m \end{bmatrix} = \begin{bmatrix} E(Z_1) \\ E(Z_2)\\ \vdots \\ E(Z_m) \end{bmatrix}, \qquad \mbox{Var}(\mathbf{Z}) = \begin{bmatrix} \mbox{Var}(Z_1) & \mbox{Cov}(Z_1,Z_2) & \cdots & \mbox{Cov}(Z_1,Z_m) \\ \mbox{Cov}(Z_2,Z_1) & \mbox{Var}(Z_2) & \cdots & \mbox{Cov}(Z_2,Z_m) \\ \vdots & \vdots & & \vdots \\ \mbox{Cov}(Z_m,Z_1) & \mbox{Cov}(Z_m,Z_2) & \cdots & \mbox{Var}(Z_m) \end{bmatrix}.$ Observe that Var$(\mathbf{Z})$ is an $m\times m$ matrix. Also, since Cov$(Z_i,Z_j)$ = Cov$(Z_j,Z_i)$ for all $1\leq i,j \leq m$, Var$(\mathbf{Z})$ is a symmetric matrix. Moreover, it can be checked, using the relationship Cov$(Z_i,Z_j) = E(Z_iZ_j) - E(Z_i)E(Z_j)$, that Var$(\mathbf{Z}) = E(\mathbf{Z}\mathbf{Z}^T) - (E(\mathbf{Z}))(E(\mathbf{Z}))^T$.
• Agnes Oshiro
Multivariable Regression
When there are many predictors, it is often of interest to see if one or a few of the predictors can do the job of estimation of the mean response and prediction of new observations well enough. This can be put in the framework of comparison between a reduced regression model involving a subset of the variables $X^{(1)},\ldots,X^{(p-1)}$ versus the full regression model involving all the variables.
Example 1
Housing data: $Y$ = selling price, $X^{(1)}$ = house size, $X^{(2)}$ = assessed value. The fitted models with various predictors are:
$Y$ vs. $X^{(1)}: \widehat{Y} = 28.981 + 2.941X^{(1)}, SSE = 203.17, R^2 = 0.635$
$Y$ vs. $X^{(2)}: \widehat{Y} = 39.440 + 0.8585X^{(2)}, SSE = 338.46, R^2 = 0.391$
$Y$ vs. $X^{(1)}$ & $X^{(2)}:$ $\widehat{Y} = 27.995 + 2.746X^{(1)} + 0.097X^{(2)}, SSE = 201.9277, R^2 = 0.6369$
Decrease in SSE from $Y$ vs. $X^{(1)}$ to $Y$ vs. $X^{(1)}$ & $X^{(2)}$: $203.17 - 201.93 = 1.24.$
Decrease in SSE from $Y$ vs. $X^{(2)}$ to $Y$ vs. $X^{(1)}$ & $X^{(2)}$: $338.46 - 201.93 = 136.53.$
Let us denote by $SSE(X^{(1)},X^{(2)})$ the $SSE$ when we include both $X^{(1)}$ and $X^{(2)}$ in the model. Analogous definition applies to $SSE(X^{(1)})$ and $SSE(X^{(2)})$. Then define
$SSR(X^{(2)}|X^{(1)}) = SSE(X^{(1)}) - SSE(X^{(1)},X^{(2)})$
$SSR(X^{(1)}|X^{(2)}) = SSE(X^{(2)}) - SSE(X^{(1)},X^{(2)})$.
In this example,
$SSR(X^{(1)}|X^{(2)}) = 136.53$ and $SSR(X^{(2)}|X^{(1)}) = 1.24$.
In general if we are comparing a regression model that involves variables $X^{(1)},...,X^{(q-1)}$ (with $q < p)$ against the full model, then define the Extra sum of squares due to inclusion of variables $X^{(q)},...,X^{(p-1)}$ to be
$SSR(X^{(q)},...,X^{(p-1)}|X^{(1)},...,X^{(q-1)}) = SSE(X^{(1)},...,X^{(q-1)}) - SSE(X^{(1)},...,X^{(p-1)})$.
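In R, extra sums of squares can be obtained by fitting nested models and differencing their residual sums of squares (or by comparing the models with anova()). A minimal sketch, assuming the housing data sit in a data frame called housing with columns price, size and assessed (hypothetical names):

    # Hypothetical data frame and column names; adapt to the actual data set.
    fit1  <- lm(price ~ size, data = housing)             # model with X1 only
    fit12 <- lm(price ~ size + assessed, data = housing)  # model with X1 and X2
    SSE1  <- deviance(fit1)    # SSE(X1)
    SSE12 <- deviance(fit12)   # SSE(X1, X2)
    SSE1 - SSE12               # SSR(X2 | X1); should reproduce the 1.24 quoted above
    anova(fit1, fit12)         # the same extra sum of squares, with its F test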
ANOVA decomposition in terms of extra sum of squares
Supposing we have two variables $X^{(1)}$ and $X^{(2)}$ (p = 3) in the full model. Then the following ANOVA decompositions are possible:
$SSTO = SSR(X^{(1)}) + SSR(X^{(2)}|X^{(1)}) + SSE(X^{(1)},X^{(2)})$
$SSTO = SSR(X^{(2)}) + SSR(X^{(1)}|X^{(2)}) + SSE(X^{(1)},X^{(2)})$
Observe that
$SSR(X^{(2)}|X^{(1)}) + SSE(X^{(1)},X^{(2)}) = SSE(X^{(1)})$ and $SSR(X^{(1)}|X^{(2)}) + SSE(X^{(1)},X^{(2)}) = SSE(X^{(2)})$.
Use of extra sum of squares
We considered standard multiple regression model:
$Y_i = \beta_0 + \beta_1X_i^{(1)} + ... + \beta_{p-1}X_i^{(p-1)} + \epsilon_i, i = 1,...,n,$
where $\epsilon_i$ have mean zero, variance $\sigma^2$ and are uncorrelated.
Earlier we defined the extra sum of squares due to inclusion of variables $X^{(q)},...,X^{(p-1)}$, after variables $X^{(1)},...,X^{(q-1)}$ have been included in the model, as
$SSR(X^{(q)},...,X^{(p-1)}|X^{(1)},...,X^{(q-1)}) = SSE(X^{(1)},...,X^{(q-1)}) - SSE(X^{(1)},...,X^{(p-1)}) .$
Also, we have the ANOVA decomposition of the total variability in the response $Y$ as
$SSTO = SSR(X^{(1)},...,X^{(q-1)}) + SSR(X^{(q)},...,X^{(p-1)}|X^{(1)},...,X^{(q-1)}) + SSE(X^{(1)},...,X^{(p-1)}) .$
We can utilize the extra sum of squares to test hypotheses about the parameters.
Test for a single parameter $\beta_k$
Suppose we want to test $H_0: \beta_k = 0$ against $H_1: \beta_k \neq 0$, where $k$ is an integer between 1 and $p-1$. This problem can be formulated as a comparison between the full model given by $(1)$ and the reduced model
$Y_i = \beta_0 + \beta_1X_i^{(1)} + ... + \beta_{k-1}X_i^{(k-1)} + \beta_{k+1}X_i^{(k+1)} + ... + \beta_{p-1}X_i^{(p-1)} + \epsilon_i, i = 1,...,n.$
Note that, by definition of $SSE$, $SSE_{full} = SSE(X^{(1)},...,X^{(p-1)})$ with d.f. $n-p$, and $SSE_{red} = SSE(X^{(1)},...,X^{(k-1)},X^{(k+1)},...,X^{(p-1)})$ with d.f. $n - p + 1$. The test statistic is
$F^{*} = \dfrac{\dfrac{SSE_{red}-SSE_{full}}{d.f.(SSE_{red}) - d.f.(SSE_{full})}}{\dfrac{SSE_{full}}{d.f.(SSE_{full})}}$
$=\dfrac{SSR(X^{(k)}|X^{(1)},...,X^{(k-1)},X^{(k+1)},...,X^{(p-1)})/1}{SSE(X^{(1)},...,X^{(p-1)})/(n-p)}.$
Under $H_0$ and the assumption of normality of errors, $F^{*}$ has the $F_{1,n-p}$ distribution. So, $H_0 : \beta_k = 0$ is rejected at level $\alpha$ if $F^{*} > F(1 - \alpha; 1, n - p).$
Connection with $t$-test: Note that the $t$ test for testing $H_0: \beta_k = 0$ against $H_1: \beta_k \neq 0$ uses the test statistic $t^{*} = \frac{b_k}{s(b_k)}.$ It can be checked that $F^{*} = (t^{*})^2.$ Thus, for this combination of null and alternative hypotheses, the two tests are equivalent.
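This equivalence is easy to verify numerically. A small simulated check (hypothetical data, not the housing example):

    set.seed(7)
    n <- 30
    x1 <- rnorm(n); x2 <- rnorm(n)
    y  <- 2 + 1.5 * x1 + rnorm(n)             # beta_2 = 0 in truth
    full <- lm(y ~ x1 + x2)
    red  <- lm(y ~ x1)                        # drops x2, i.e. tests beta_2 = 0
    Ftab <- anova(red, full)                  # partial F test for x2
    tval <- summary(full)$coefficients["x2", "t value"]
    c(F_star = Ftab$F[2], t_star_squared = tval^2)   # identical up to rounding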
Example 2
Consider the housing price data example. Suppose that you want to test $H_0 : \beta_2 = 0$ against $H_1 : \beta_2 \neq 0.$ We have $SSE(X^{(1)},X^{(2)}) = 201.9277$ with d.f. $n - 3 = 16$, and $SSE(X^{(1)}) = 203.17$ with d.f. $n - 2 = 17$. Therefore,
$F^{*} = \frac{(203.17-201.9277)/1}{201.93/16} = \frac{1.2423}{12.6205} = 0.0984.$
Since $F^{*} = 0.0984 < 4.4940 = F(0.95;1,16)$, we cannot reject $H_0 : \beta_2 = 0$ at the 5% level of significance (check that the $p$-value for this test is 0.664).
Test for multiple parameters
Suppose we are testing $H_0: \beta_q = \cdots = \beta_{p-1} = 0$ (where $1 \leq q < p$) against $H_1$: for at least one $k$ in $\{q,...,p - 1\}$, $\beta_k \neq 0.$ Using the comparison procedure between the reduced and full model, the test statistic is
$F^{*} =\dfrac{\dfrac{SSR(X^{(q)},...,X^{(p-1)}|X^{(1)},...,X^{(q-1)})}{p-q}} {\dfrac{SSE(X^{(1)},...,X^{(p-1)})}{n-p}}$
and under $H_0$ and the normality assumption, $F^*$ has the $F_{p-q,n-p}$ distribution. Reject $H_0$ at level $\alpha$ if $F^* > F(1-\alpha; p - q, n - p).$
Another Interpretation of extra sum of squares
It can be checked that the extra sum of squares $SSR(X^{(k)}|X^{(1)},...,X^{(k-1)},X^{(k+1)},...,X^{(p-1)})$ is the sum of squares due to regression of $Y$ on $X^{(k)}$ after accounting for the linear regression (of both $Y$ and $X^{(k)}$) on $\{X^{(1)},...,X^{(k-1)},X^{(k+1)},...,X^{(p-1)}\}$. Similarly, for arbitrary $q < p$, the extra sum of squares $SSR(X^{(q)},...,X^{(p-1)}|X^{(1)},...,X^{(q-1)})$ is the sum of squares due to regression of $Y$ on $\{X^{(q)},...,X^{(p-1)}\}$ after accounting for the linear regression on $\{X^{(1)},...,X^{(q-1)}\}$.
Coefficient of partial determination
The fraction
$R^{2}_{Y k|1,...,k-1,k+1,...,p-1} := \frac{SSR(X^{(k)}|X^{(1)},...,X^{(k-1)},X^{(k+1)},...,X^{(p-1)})}{SSE(X^{(1)},...,X^{(k-1)},X^{(k+1)},...,X^{(p-1)})},$
represents the proportional reduction in the error sum of squares due to inclusion of the variable $X^{(k)}$ in the model that already contains the other predictors. Observe that $R^{2}_{Y k|1,...,k-1,k+1,...,p-1}$ is the squared correlation coefficient between the residuals obtained by regressing $Y$ and $X^{(k)}$ (separately) on $X^{(1)},...,X^{(k-1)},X^{(k+1)},...,X^{(p-1)}$. The latter correlation, denoted by $r_{Y k|1,...,k-1,k+1,...,p-1}$, is called the partial correlation coefficient between $Y$ and $X^{(k)}$ given $X^{(1)},...,X^{(k-1)},X^{(k+1)},...,X^{(p-1)}$.
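The residual-based description of the partial correlation translates directly into code. A minimal R sketch, again with the hypothetical housing data frame and column names used earlier:

    # Partial correlation between price and assessed value, given house size.
    res_y <- resid(lm(price    ~ size, data = housing))  # residuals of Y on X1
    res_2 <- resid(lm(assessed ~ size, data = housing))  # residuals of X2 on X1
    r_partial <- cor(res_y, res_2)   # r_{Y2|1}
    r_partial^2                      # R^2_{Y2|1}, the coefficient of partial determination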
Example 3
Consider the housing price data. We have $r_{Y X_2} = r_{Y2} = 0.625$ so that $R^2_{Y2} = (r_{Y2})^2 = 0.391.$ However, $R^2_{Y 2|1} = SSR(X^{(2)}|X^{(1)})/SSE(X^{(1)}) = 1.2423/203.17 = 0.00611458.$
This means that there is only a 0.61% reduction in the error sum of squares due to adding the variable $X^{(2)}$ (assessed value of the house) to the model containing $X^{(1)}$ (house size) as the predictor.
Multicollinearity
Suppose that some of the predictor variables are perfectly linearly related. Then there are infinitely many coefficient vectors describing the same regression function. In this case one can discard a few variables to avoid redundancy.
In reality, such exact relationships among variables are rare. However, a near perfect linear relationship among some of the predictors is quite possible. In the regression model $Y = \beta_0 + \beta_1X^{(1)} + \beta_2X^{(2)} + \epsilon,$ if $X^{(1)}$ and $X^{(2)}$ are strongly related, then the $X^TX$ matrix will be nearly singular, and the inverse may not exist or may have rather large diagonal elements. Since $s^2(b_j)$ is proportional to the $(j+1)$-th diagonal element of $(X^TX)^{-1}$, this shows that collinearity between $X^{(1)}$ and $X^{(2)}$ leads to unstable estimates of the $\beta_j$'s. If there are many predictor variables then the problem can be quite severe.
Coefficient of partial determination helps in detecting the presence of multicollinearity. For example, if there is collinearity between the predictor variables $X^{(1)}$ and $X^{(2)}$, and $Y$ has a fairly strong linear relationship with both $X^{(1)}$ and $X^{(2)}$, then $r^2_{Y1}$ and $r^2_{Y2}$ (squared correlation coefficients between $Y$ and $X^{(1)}$, and $Y$ and $X^{(2)}$, respectively) will be large, but $R^2_{Y 1|2}$ and $R^2_{Y 2|1}$ will tend to have small values.
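The instability caused by near-collinearity is easy to demonstrate by simulation. The sketch below generates two highly correlated predictors and shows that the standard errors of the individual slope estimates are much larger than when the predictors are generated independently (the setup is illustrative only):

    set.seed(42)
    n  <- 100
    x1 <- rnorm(n)
    x2_collinear   <- x1 + rnorm(n, sd = 0.05)   # nearly a copy of x1
    x2_independent <- rnorm(n)
    y_c <- 1 + 2 * x1 + 3 * x2_collinear   + rnorm(n)
    y_i <- 1 + 2 * x1 + 3 * x2_independent + rnorm(n)
    # Compare the estimated standard errors of the coefficients:
    summary(lm(y_c ~ x1 + x2_collinear))$coefficients[, "Std. Error"]
    summary(lm(y_i ~ x1 + x2_independent))$coefficients[, "Std. Error"]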
Contributors:
• Scott Brunstein
• Debashis Paul
Introduction and definition
Here we discuss the non-parametric estimation of a pdf $f$ of a distribution on the real line. The kernel density estimator is a non-parametric estimator because it is not based on a parametric model of the form $\{ f_{\theta}, \theta \in \Theta \subset {\mathbb R}^d\}$. What makes the latter model 'parametric' is the assumption that the parameter space $\Theta$ is a subset of ${\mathbb R}^d$ which, in mathematical terms, is a finite-dimensional space. In such a parametric case, all we need to do is to estimate finitely many (i.e. d) parameters, and this then specifies our density estimator entirely. A typical example of such a parametric model is the normal model, where $\theta = (\mu,\sigma^2).$ In such a model we only need to estimate the two parameters (via maximum likelihood, for instance), and then we know what the density looks like based on these estimated parameters (see also the discussion of effective sample size below).
A non-parametric approach is used if very little information about the underlying distribution is available, so that the specification of a parametric model cannot be well justified. Non-parametric models are much broader than parametric models. The model underlying a kernel density estimator usually only assumes that the data are sampled from a distribution that has a density f satisfying some 'smoothness' conditions, such as continuity, or differentiability. [One should mention here that in general the line between parametric and non-parametric procedures is not as clear as it might sound. A kernel density estimator is clearly a non-parametric estimator, however.]
A kernel density estimator based on a set of n observations $X_1,\ldots,X_n$ is of the following form:
${\hat f}_n(x) = \frac{1}{nh} \sum_{i=1}^n K\Big(\frac{X_i - x}{h}\Big)$
where h > 0 is the so-called bandwidth, and K is the kernel function, which means that $K(z) \ge 0$ and $\int_{\mathbb R} K(z) dz = 1$; usually one also assumes that K is symmetric about 0. The choice of both the kernel and the bandwidth is left to the user. The choice of the latter is quite crucial (see also the discussion of optimal bandwidth choice below), whereas the choice of the kernel tends to have a relatively minor effect on the estimator. In order to understand why the formula just given describes a reasonable estimator for a pdf, let us consider a simple example.
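The estimator can be coded in a few lines. The sketch below (a minimal illustration with a simulated sample and a Gaussian kernel; the function name kde_at is made up) evaluates $\hat f_n(x)$ on a grid directly from the formula above:

    set.seed(1)
    X <- rnorm(200)                      # simulated sample of size n = 200
    # hat f_n(x) = (1/(n h)) * sum_i K((X_i - x)/h), here with K the standard normal pdf
    kde_at <- function(x, X, h, K = dnorm) {
      sapply(x, function(x0) mean(K((X - x0) / h)) / h)
    }
    grid <- seq(-4, 4, length.out = 200)
    fhat <- kde_at(grid, X, h = 0.4)
    plot(grid, fhat, type = "l", ylab = "density estimate")
    curve(dnorm(x), add = TRUE, lty = 2)  # true density for comparison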
The kernel density estimator using a rectangular kernel
A 'rectangular kernel' K is given by the pdf of a uniform distribution on the interval $(-\frac{1}{2}, \frac{1}{2})$, which is of the form
$K(z) = \begin{cases}1 & \text{for }\; -\frac{1}{2} < z < \frac{1}{2}\\ 0 & \text{otherwise.}\end{cases}$
What does our kernel density estimator look like with this choice of K? To see this, first observe that for the rectangular kernel we have that $K\big(\frac{X_i - x}{h}\big)$ is either 1 or 0 depending on whether $-\frac{1}{2} < \frac{X_i - x}{h} < \frac{1}{2}$ holds or not. Using simple algebra one can see that this condition holds if and only if
$x - \frac{h}{2} < X_i < x + \frac{h}{2}.$
This in turn means that the sum $\sum_{i=1}^n K\big(\frac{X_i - x}{h}\big)$ equals the number of observations Xi that fall into the interval (x-h/2, x + h/2), because we are summing only 1's and 0's, and consequently the sum equals the number of 1's, which is the same as the number of observations falling into $(x-\frac{h}{2},x+\frac{h}{2} )$.
We now see that the kernel density estimator with a rectangular kernel is of the form
$\widehat{f}_n(x) = \frac{\text{proportion of observations in }\,(x-\frac{h}{2},x+\frac{h}{2})}{h}.$
Noticing that h is the length of the interval $(x-\frac{h}{2},x+\frac{h}{2})$, this equals the proportion of observations in the 'bin' $(x-\frac{h}{2},x+\frac{h}{2})$ divided by the length of the bin. Of course, the well-known histogram (on the density scale) is exactly of this form. However, for a histogram we have only a finite, fixed number of prespecified bins, whereas here, in the case of a kernel density estimator using a rectangular kernel, the bin itself depends on x. In particular, in contrast to a histogram, nearby bins are overlapping.
Another important remark is in order here. The kernel density estimate at x only depends on the observations that fall close to x (inside the bin $(x-\frac{h}{2},x+\frac{h}{2})$). This makes sense because the pdf is the derivative of the cdf, the derivative of F at x only depends on the behavior of F locally at the point x, and this local behavior of F at x is reflected by the number of points falling into a small bin centered at x. Now, it is clear that not all of the n available observations fall into a small bin around a given point x, and so the quality of the kernel density estimator (in terms of variance) at a given point x cannot be expected to be as good as in the case of the estimator of a (global) parameter that uses all the available observations. (See the discussion of 'effective sample size' given below.)
The step from a rectangular kernel to a general kernel is now motivated by the fact that since the rectangular kernel is not continuous (it has two jumps at $-\frac{1}{2}$ and $\frac{1}{2}$, respectively), the resulting kernel estimator is also not continuous. Therefore one often uses a continuous kernel function. Since sums of continuous (or differentiable) functions are again continuous (or differentiable), the resulting kernel density estimator then is continuous. Another way to think about a more general kernel is to observe that the rectangular kernel gives a weight of either 1 or 0 to each observation (depending on whether $X_i$ falls into $(x-\frac{h}{2},x+\frac{h}{2})$ or not). It might be a better idea to replace these simple 0-1-weights by more general weights that decrease smoothly when moving away from x in either direction. Such kernels K might have a compact support (for instance, K(z) > 0 only on an interval (a,b) with a < b, and K(z) = 0 otherwise) or not. They might even have a strictly positive value everywhere, such as the standard normal pdf, which in fact is a popular kernel function.
(To be added.)
The bias and the variance of a kernel density estimator
Notice that $\hat{f}_n(x)$ in fact is a function (in x), but when we speak of bias and variance of the kernel estimator then we mean the random quantity $\hat{f}_n(x)$ for a fixed value of x.
In order to be able to do bias and variance calculations we obviously need to specify the sampling distribution. Here we consider the standard case that the observations $X_1,\ldots,X_n$ constitute a random sample, i.e. they are independent and have identical distribution (iid) given by the pdf f. First we again consider the special case of a rectangular kernel, because in this case finding nice expressions for the bias and the variance does not require calculus but only some basic knowledge of probability.
We have seen that for a rectangular kernel we have
$n\,h\, \hat{f}_n(x) = \text{number of observations in }\,{\textstyle (x-\frac{h}{2},x+\frac{h}{2})}.$
(Notice that as compared to the above we multiplied both sides by $n\,h.$) Since the observations are assumed to be iid, we see that the right hand side (and thus also the left hand side) is in fact a random variable that has a binomial distribution, because it is the number of successes in a sequence of n Bernoulli trials where each trial has only one of two outcomes 1 or 0 (success or failure). Success of the i-th trial here means that $X_i$ falls into the bin $(x-\frac{h}{2},x+\frac{h}{2})$. The success probability p of the Bernoulli trials thus is given by the probability content of the interval $(x-\frac{h}{2},x+\frac{h}{2})$ under the pdf f, which, in other words, is the integral of f over this interval. In terms of the cdf F we can express this as $p = F(x+\frac{h}{2}) - F(x-\frac{h}{2})$.
We have thus arrived at the conclusion that
$n\,h\,\hat{f}_n(x) \sim \text{Bin}{\textstyle \big(n, F(x+\frac{h}{2}\,) - F(x-\frac{h}{2}\,)\,\big)},$
where the notation $Z \sim \text{Bin}(n,p)$ means that the distribution of the random variable Z is a binomial with parameters n and p.
It is well-known that a binomial distribution with parameters n and p has mean np. We therefore obtain that $\text{E}(nh\ \hat{f}_n(x)) = n\, \big( F(x+{\textstyle\frac{h}{2}}\,) - F(x-{\textstyle\frac{h}{2}}\,) \big)$, or, by dividing by nh,
$\text{E}\big(\hat{f}_n(x)\big) = \frac{F(x+{\textstyle\frac{h}{2}}\,) - F(x-{\textstyle\frac{h}{2}}\,)}{h}$
Let's pause a little, and discuss this. What we want to estimate by using $\hat{f}_n(x)$ is $f(x)$, the value of the underlying pdf at x. Our calculations show that the expected value E$\big(\hat{f}_n(x)\big)$ in general does not equal f(x). In statistical terms this means that the kernel density estimator is biased:
$\text{bias} \big( \hat{f}_n(x) \big) = \text{E}( \hat{f}_n(x) ) - f(x) \ne 0.$
However, if the cdf F is differentiable at x, then by definition of differentiability we have that
$\frac{F(x+{\textstyle\frac{h}{2}}\,) - F(x-{\textstyle\frac{h}{2}}\,)}{h} \to f(x) \quad \text{ as } h \to 0$
and thus the bias will tend to be close to 0 if h is small.
It is important to observe here that in practice, for a given sample size, the bandwidth h cannot be chosen too small, because otherwise none of the observations will fall into the bin $(x-\frac{h}{2}, x+\frac{h}{2})$, leading to a useless estimator of the pdf. On the other hand, if the sample size is large and h is not 'sufficiently' small, then the estimator will stay biased, as we have indicated above. The 'right' choice of the bandwidth is a challenge, but it is hopefully clear from this discussion that the choice of h also needs to depend on the sample size n. Mathematically, we think of $h = h_n$ being a sequence satisfying
$h_n \to 0 \quad \text{ as } n \to \infty.$
Now let's discuss the variance of the kernel density estimator (still based on the rectangular kernel). Using again the fact that $n\,h\,\hat{f}_n(x)$ is a Bin(n,p) random variable, and recalling the fact that the variance of a Bin(n,p) distribution equals np(1-p) we have by using the shorthand notation $F(\,x-\frac{h}{2},\,x+\frac{h}{2}\,) = F(x+\frac{h}{2}\,) - F(x-\frac{h}{2})$ that
${\textstyle\text{Var}(nh \hat f_n(x)) = n\,\, F\big(\, x- \frac{h}{2},\,x+\frac{h}{2}\,\big)\,\big(\,1 - F\big(\,x-\frac{h}{2},\,x+\frac{h}{2}\,\big)\,\big) }$
and by using the fact that Var$(cZ) = c^2\text{Var}(Z)$ for a constant c we obtain
${\textstyle \text{Var}(\hat f_n(x)) = \big( \frac{1}{nh} \big)^2 n\,\, F\big(\, x- \frac{h}{2},\,x+\frac{h}{2}\,\big)\,\big(\,1 - F\big(\,x-\frac{h}{2},\,x+\frac{h}{2}\,\big)\,\big) }$
In order to make more sense out of this, we rewrite this formula as:
$\text{Var}(\hat f_n(x)) = \frac{1}{nh} \, \frac{ F\big(\, x- {\textstyle \frac{h}{2},\,x+\frac{h}{2}}\,\big)}{h}\,\big(\,1 - F\big( x-{\textstyle \frac{h}{2},\,x+\frac{h}{2}}\,\big)\,\big) \tag{*}$
Now observe that for small values of h (and this is what we need for the bias to be small as well), the last term $1 - F\big( x-{\textstyle \frac{h}{2},\,x+\frac{h}{2}}\,\big)$ is close to 1. This is because the expression $F\big( x-{\textstyle \frac{h}{2},\,x+\frac{h}{2}}\,\big)$ equals the probability that an observation falls into the interval $\big( x-{\textstyle \frac{h}{2}},\,x+{\textstyle\frac{h}{2}}\,\big)$, which by definition of a pdf equals the area under the pdf f between $x-\frac{h}{2}$ and $x+\frac{h}{2}$, and for small h these two bounds are very close together, so that the area under the curve also is small. Of course, this is loosely formulated. From a mathematical point of view we mean that
$1 - F\big( x-{\textstyle \frac{h}{2},\,x+\frac{h}{2}}\,\big) \to 1\quad \text{ as } h \to 0.$
What about the second term in (*)? Again using the interpretation of $F\big(\, x- {\textstyle \frac{h}{2},\,x+\frac{h}{2}}\,\big)$ as the area under the curve f between the bounds $x-\frac{h}{2}$ and $x+\frac{h}{2}$, we see that if the interval given by these two bounds is small, then this area is approximately equal to the area of the rectangle with height f(x) and width being the length of the interval, which is h, so that $F\big(\, x- {\textstyle \frac{h}{2},\,x+\frac{h}{2}}\,\big) \approx f(x) h$. In other words, the second term in (*) satisfies
$\frac{ F\big(\, x- {\textstyle \frac{h}{2},\,x+\frac{h}{2}}\,\big)}{h} \to f(x)\quad \text{ as } h \to 0.$
Assuming that x is such that f(x) > 0 (strictly), we conclude that for small values of h the variance of the kernel estimator evaluated at x behaves like
$\text{Var}(\hat{f}_n(x)) \approx \frac{1}{nh} f(x). \tag{**}$
Now, if we want our kernel density estimator $\hat{f}_n(x)$ to be a consistent estimator for f(x), meaning that $\hat{f}_n(x)$ is 'close' to f(x) at least if the sample size n is large (more precisely, $\hat{f}_n(x)$ converges to f(x) in probability as $n \to \infty$), then we want to have both the bias tending to zero and the variance tending to zero as the sample size tends to infinity. We have seen above that for the former we need that
$h = h_n \to 0 \quad \text{ as } n \to \infty \quad \text{ in order for the bias to become small for large sample size}$
and from (**) we can see that for the latter we need that $h = h_n$ is such that
$nh_n \to \infty\quad\text{ as } n \to \infty \quad \text{ in order for the variance to become small for large sample size}$
These two conditions on the bandwidth are standard when the behavior of the kernel estimator is analyzed theoretically. Similar formulas can be derived for more general kernel functions, but some more calculus is needed for that. (To be added.)
The bias-variance trade-off and 'optimal' bandwidth choice
The bias-variance trade-off is the uncertainty principle of statistics, and it shows up in many contexts. In order to understand this it is helpful to consider the decomposition of the mean squared error of an estimator into squared bias and variance. Since we are dealing with kernel estimation here, we use the estimator $\hat{f}_n(x)$, but the decomposition holds for any estimator (with finite variance). The mean squared error of $\hat{f}_n(x)$ is defined as
$\text{MSE}(\hat{f}_n(x)) = \text{E}\big( \hat{f}_n(x) - f(x)\big)^2.$
The mean squared error is a widely accepted measure of quality of an estimator. The following decomposition holds:
$\text{MSE}(\hat{f}_n(x)) = \text{Var}(\hat{f}_n(x)) + \big[\text{bias}\big(\hat{f}_n(x)\big)\big]^2.$
The proof of this decomposition is given below. Note that this in particular tells us that if our estimator is unbiased (bias = 0), then the MSE reduces to the variance. (Classical estimation theory deals with unbiased estimators and thus the variance is used there as a measure of performance.)
Now observe that the above calculations of bias and variance indicate (at least intuitively) that
• the bias of $\hat{f}_n(x)$ increases as h increases, since we are estimating $\frac{F\big(x+\frac{h}{2}\big) - F\big(x-\frac{h}{2}\big)}{h}$, and
• the variance of $\hat{f}_n(x)$ increases as h decreases, since fewer observations are included into the estimation process, and averaging over fewer (independent) observations decreases the variance.
The last statement is based on the following fact. If $X_1,\ldots,X_n$ is a random sample with $\text{Var}(X_1) = \sigma^2 < \infty$ then the variance of the average of the $X_i$'s, i.e. the variance of $\overline{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$ equals
${\text Var}(\overline{X}_n) = \text{Var}\big(\frac{1}{n} \sum_{i=1}^n X_i\big) = \frac{1}{n^2}\text{Var}\big(\sum_{i=1}^n X_i\big) = \frac{1}{n^2} n \sigma^2 = \frac{1}{n} \sigma^2.$
So the variance is de(in)creasing when we in(de)crease the number of (independent) variables we averge.
Oversmoothing and undersmoothing
Now, considering the above two bullet points, we see that in order to minimize the MSE of $\hat{f}_n(x)$ we have to strike a balance between bias and variance by an appropriate choice of the bandwidth h (not too small and not too big). If h is large, then the bias is large, because we are in fact 'averaging' over a large neighborhood of x, and as discussed above, the expected value of the kernel estimator is the area under the curve (i.e. under f) over the bin $(x- \frac{h}{2}, x + \frac{h}{2})$ divided by the length of this bin, namely h. This means the larger the bin is, the further away from f(x) we can expect the mean to be. Choosing a bandwidth that is too large is called oversmoothing and this means a large bias. The other extreme is choosing a bandwidth that is too small (undersmoothing). In this case the bias is small, but the variance is large. Striking the right balance is an art; the sketch below illustrates both extremes on simulated data.
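The following R sketch (illustrative only; the bandwidth values are arbitrary) contrasts an undersmoothed, a moderately smoothed, and an oversmoothed estimate using the built-in density() function, whose bw argument plays the role of h for the Gaussian kernel it uses by default:

    set.seed(2)
    X <- rnorm(300)                     # simulated data from N(0, 1)
    plot(density(X, bw = 0.05), main = "undersmoothed (h too small)")  # wiggly, high variance
    plot(density(X, bw = 0.3),  main = "moderate bandwidth")
    plot(density(X, bw = 2),    main = "oversmoothed (h too large)")   # flat, high bias
    curve(dnorm(x), add = TRUE, lty = 2)  # overlay the true density on the last plot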
Optimal bandwidth choice
Using some more assumptions on f we can get a more explicit expression for how the bias depends on h. In the case of a rectangular kernel we have seen that $\text{bias}\big(\hat{f}_n(x)\big) = \frac{F\big(x+\frac{h}{2}\big) - F\big(x-\frac{h}{2}\big)}{h} - f(x)$. We can write the numerator in this expression as $\big[F\big(x+\frac{h}{2}\big) - F(x)\big] - \big[F\big(x-\frac{h}{2}\big) - F(x)\big]$. If we assume that f is twice differentiable, then we can apply a third-order Taylor expansion of F to both of the differences in the brackets to get
$F\big(x+{\textstyle \frac{h}{2}}\big) - F(x) = f(x) {\textstyle \frac{h}{2}} + {\textstyle \frac{1}{2}}\, f^\prime(x) \big({\textstyle \frac{h}{2}}\big)^2 + {\textstyle\frac{1}{3!}}\,f^{\prime\prime}(x) \big({\textstyle \frac{h}{2}}\big)^3$
and similarly
$F\big(x-{\textstyle \frac{h}{2}}\big) - F(x) = f(x) \big(- {\textstyle \frac{h}{2}}\,\big)+ {\textstyle \frac{1}{2}}\, f^\prime(x) \big(-{\textstyle \frac{h}{2}}\big)^2 + {\textstyle\frac{1}{3!}}\,f^{\prime\prime}(x) \big(-{\textstyle \frac{h}{2}}\big)^3$.
Taking their difference gives
$F\big(x+\frac{h}{2}\big) - F\big(x-\frac{h}{2}\big) = f(x) h + {\textstyle \frac{1}{24}}\, f^{\prime\prime}(x) h^3$
so that the bias satisfies
$\text{bias}(\hat{f}_n(x)) = \frac{F\big(x+\frac{h}{2}\big) - F\big(x-\frac{h}{2}\big)}{h} - f(x) \approx {\textstyle \frac{1}{24}}\, f^{\prime\prime}(x) h^2,$
where we used '$\approx$' rather than '$=$' because we ignored the remainder terms in the Taylor expansion. Even if we use more general kernels, as long as we assume them to be symmetric about 0, and as long as f is twice differentiable at x, one can show a similar expansion of the bias, namely, that the bias approximately equals a constant times $f^{\prime\prime}(x) h^2$. This, together with the above approximation of the variance, can then be used to balance out (squared) bias and variance in order to find the bandwidth minimizing the MSE, as follows. We have the approximation
$\text{MSE}(\hat{f}_n(x)) = \text{Var}(\hat{f}_n(x)) + \text{bias}^2(\hat{f}_n(x)) \approx c_1 {\textstyle \frac{1}{nh}}\, f(x) + c_2 \big(f^{\prime\prime}(x)\big)^2 h^4,$
where these constants depend on the kernel. In the case of a rectangular kernel we have seen that $c_1 = 1$ and $c_2 = \big(\frac{1}{24}\big)^2$. More generally, for other symmetric kernels these constants can be shown to be $c_1 = \int K^2(x)dx$ and $c_2 = \frac{1}{4}\big(\int x^2K(x)dx\big)^2$. We can easily minimize the right-hand side with respect to h by taking derivatives to get the bandwidth $h_{loc}^*$ minimizing this right-hand side as
$h_{loc}^* = c_{loc}^* n^{-1/5},$
where $c_{loc}^* = \big( \frac{c_1}{4 c_2} \frac {f(x)} { (f^{\prime\prime}(x))^2 } \big)^{1/5}.$ This is known as the locally (asymptotically) optimal bandwidth of the kernel density estimator assuming that f is twice differentiable. It is locally optimal because it depends on the value x, and it is asymptotic because our approximations required a small value of h (recall that we ignored the remainder terms in the Taylor expansions), and since $h = h_n \to 0$ as $n \to \infty$ (see above) this means that implicitly we assume a large enough sample size n.
To use a different bandwidth for every different value of x is computationally not feasible (sometimes it does make sense, however, to use different bandwidths in different regions of f if these regions are expected to differ in their smoothness properties). Replacing the MSE by the integrated MSE (integrated over all values of x, where the integration is often done with respect to a weight function) gives a global measure of the performance of our kernel density estimator, and it turns out that similar calculations can be used to derive a globally optimal bandwidth which has the same form as the locally optimal bandwidth, namely
$h^*_{glob} = c_{glob}^* n^{-1/5},$
where the values of $f(x)$ and $f^{\prime\prime}(x)$ in $c^*_{loc}$ are replaced by integrated versions. We still cannot use this in practice, because the constant $c_{glob}^*$ depends on the unknown quantities, namely the (weighted) integrals of $f(x)$ and $f^{\prime\prime}(x)$, respectively. There are various practical ways of how to obtain reasonable values for $c^*_{glob}$. One of them is to estimate the constant $c_{glob}^*$, for instance by estimating $f(x)$ and $f^{\prime\prime}(x)$ via kernel estimators. Another often used approach is to use a reference distribution in determining $c^*_{glob}$, such as the normal distribution. This means, one first estimates mean and variance from the data, and then uses the corresponding normal to find the corresponding values of the weighted integrals.
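In R, this normal-reference idea is what the built-in rule-of-thumb bandwidth selectors implement. A brief sketch (the exact constants used by these selectors differ slightly from the derivation above, so treat the comparison as illustrative):

    set.seed(3)
    X <- rnorm(500)
    n <- length(X)
    # A normal-reference rule of thumb proportional to n^(-1/5):
    h_ref <- 1.06 * sd(X) * n^(-1/5)
    # Built-in normal-reference selectors of the same n^(-1/5) form:
    bw.nrd(X)    # Scott's variation of the rule of thumb
    bw.nrd0(X)   # Silverman's rule of thumb, the default of density()
    plot(density(X, bw = h_ref))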
proof of the decomposition of the MSE:
Write $\text{MSE}(\hat{f}_n(x)) = \text{E}(\hat{f}_n(x) - f(x))^2 = \text{E}\big([\hat{f}_n(x) - \text{E}(\hat{f}_n(x))] + [\text{E}(\hat{f}_n(x)) - f(x)]\big)^2$. Nothing really happened. We only added and subtracted the term $\text{E}(\hat{f}_n(x))$ (and included some brackets). Now we expand the square (by keeping the terms in the brackets together) and use the fact that the expectation of sums equals the sum of the expectations, i.e. we use that $\text{E}(X+Y) = \text{E}(X) + \text{E}(Y)$ (and of course this holds with any finite number of summands). This then gives us
$\text{MSE}(\hat{f}_n(x)) = \text{E}\big(\hat{f}_n(x) - \text{E}(\hat{f}_n(x)) \big)^2 + \text{E}\big( \text{E}(\hat{f}_n(x)) - f(x)\big)^2 + 2\, \text{E}\big( \big[\hat{f}_n(x) - \text{E}\big(\hat{f}_n(x)\big)\big] \cdot \big[\text{E}\big(\hat{f}_n(x)\big) - f(x) \big]\big).$
The first term on the right-hand side equals Var$\big(\hat{f}_n(x)\big)$. The second term is the expected value of a constant, which of course equals the constant itself, and this constant is the squared bias. So it remains to show that the third term on the right equals zero. Looking at the product inside the expectation of this term, we see that only one of the two factors is random; the other, namely $\text{E}\big(\hat{f}_n(x)\big) - f(x)$, is a constant. To make this clearer, we replace this (long) constant term by the symbol c. So the third term becomes
$\text{E}\big( \big[\hat{f}_n(x) - \text{E}\big(\hat{f}_n(x)\big)\big] \cdot c \big) = c\,\text{E}\big( \hat{f}_n(x) - \text{E}\big(\hat{f}_n(x)\big) \big)$
and the last expectation is of the form $\text{E}\big(Z - \text{E}(Z)\big)$ and again using the facts that the expectation of the sum equals the sums of the expectations we have $\text{E}\big(Z - \text{E}(Z)\big) = \text{E}(Z) - \text{E}(Z) = 0.$ This completes the proof.
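The decomposition can also be checked by Monte Carlo. The sketch below repeatedly estimates f(0) from samples of N(0,1) data with a simplified single-point version of the Gaussian-kernel estimator (a made-up helper, fhat_at) and compares the simulated MSE with variance plus squared bias:

    set.seed(4)
    fhat_at <- function(x0, X, h) mean(dnorm((X - x0) / h)) / h  # kernel estimate at one point
    n <- 200; h <- 0.4; x0 <- 0
    f_true <- dnorm(x0)
    est <- replicate(5000, fhat_at(x0, rnorm(n), h))  # many independent estimates of f(0)
    mse    <- mean((est - f_true)^2)
    decomp <- var(est) + (mean(est) - f_true)^2       # variance + bias^2
    c(MSE = mse, Var_plus_bias2 = decomp)             # the two numbers agree closely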
Effective sample size
From a statistical perspective an important distinction between a parametric and the non-parametric density estimator is the 'effective number of observations' that are used to estimate the pdf f at a specific point x. In the parametric case we use all the available data to estimate the parameters by $\widehat{\theta}$, and the estimator of the pdf at the point x then simply is given by $f_{\hat{\theta}}(x).$ This means that all the data are used to estimate f(x) at any given point x and thus the effective sample size for estimating f at a fixed point x equals the sample size n. Not so in the non-parametric case.
As we have discussed above, when using a rectangular kernel in the estimator $\hat{f}_n(x)$, then we are only using those data that are falling inside a small neighborhood of the point x, namely inside the bin $\big(x - \frac{h}{2} , x + \frac{h}{2}\big)$. How many points can we expect to fall into this bin? We have already done this calculation above based on the binomial distribution: The expected number of observations falling into the bin is np where
$p = F\big(x + \frac{h}{2}\big) - F\big( x - \frac{h}{2}\big) .$
Now, we also have seen already that
$\frac{F\big(x + \frac{h}{2}\big) - F\big( x - \frac{h}{2}\big)}{h} \to f(x),$
in other words,
$np = n\big(F\big(x + \frac{h}{2}\big) - F\big( x - \frac{h}{2}\big)\big) \approx f(x) n h.$
Now both p and n depend on the sample size while f(x) is a constant. So ignoring this constant, the effective sample size is 'of the order' nh, which is also the order of the variance of the kernel estimator! This of course makes sense, because the kernel density estimator is in fact averaging over (in the mean) nh points (see the calculation of the variance of $\overline{X}_n$).
Since h is assumed to become small (tends to zero) as the sample size increases, the effective sample size for a kernel estimator is of smaller order than the sample size n, which is the effective sample size of parametric estimators of a density. This means that the performance of the kernel density estimator cannot be as good as the performance of a parametric estimator of a density (see also the discussion at the beginning of this page). However, the kernel estimator 'works' under much weaker assumptions than the parametric estimator, and this is the basic trade-off between non-parametric and parametric methods, namely weak model assumptions and weak(er) performance of estimators on the one hand, versus strong model assumptions and strong performance of the estimator on the other. The derived performances of both estimators hold if the underlying model assumptions hold. Since the model assumptions in the parametric case are much stronger, we can be less certain that they actually hold (or that the assumptions constitute a 'good' approximation to the conditions under which the experiment is performed), but if they do, then the parametric estimator cannot be beaten by a nonparametric estimator.
To be added.
Contributors
• Wolfgang Polonik
ANOVA table
• Let's say we have collected data, and our X values have been entered in R as an array called data.X, and our Y values as data.Y. Now, we want to find the ANOVA values for the data. We can do this through the following steps:
1. First, we should fit our data to a model. > data.lm = lm(data.Y~data.X)
2. Next, we can get R to produce an ANOVA table by typing : > anova(data.lm)
3. Now, we should have an ANOVA table!
Fitted Values
• To obtain the fitted values of the model from our previous example, we type: > data.fit = fitted(data.lm)
• This gives us an array called "data.fit" that contains the fitted values of data.lm
Residuals
• Now we want to obtain the residuals of the model: > data.res = resid(data.lm)
• Now we have an array of the residuals.
Hypothesis testing
• If you have already found the ANOVA table for your data, you should be able to calculate your test statistic from the numbers given.
• Let's say we want to find the F - quantile given by $\large \mathbf{F} (.95; 3 , 24)$. We can find this by typing > qf(.95, 3, 24)
• To find the t - quantile given by $\large \mathbf{t} (.975; 19)$ , we would type: > qt(.975, 19)
P - values
• To get the p - value corresponding to an observed F - statistic of, say, 2.84 , with degrees of freedom 3 and 24, we would type in > 1 - pf(2.84, 3, 24)
Normal Q-Q plot
• We want to obtain the Normal Probability plot for the standardized residuals of our data, "data.lm".
• We have already fit our data to a model, but we now need the standardized residuals:
> data.stdres = rstandard(data.lm)
• Now, we make the plot by typing: > qqnorm(data.stdres)
• Now, to see the line, type: > qqline(data.stdres)
More on Linear Regression
Fitting a Model
• Let's say we have two X variables in our data, and we want to find a multiple regression model. Once again, let's say our Y values have been saved as a vector titled "data.Y". Now, let's assume that the X values for the first variable are saved as "data.X1", and those for the second variable as "data.X2".
• If we want to fit our data to the model $\large Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \epsilon_i$ , we can type:
> data.lm.mult = lm(data.Y ~ data.X1 + data.X2).
• This has given us a model to work with, titled "data.lm.mult"
Summary of Model
• We can now see our model by typing > summary(data.lm.mult)
• The summary should list the estimates, the standard errors, and the t-values of each variable. The summary also lists the Residual Standard Error, the Multiple and Adjusted R-squared values, and other very useful information.
Pairwise Comparison Scatterplot Matrix
• Let's say we have a model with three different variables (the variables are named "data.X", "data.Y", and "data.Z"). We can compare the variables against each other in a scatterplot matrix easily by typing:
> pairs(cbind(data.X, data.Y, data.Z))
• If the variables are listed together in one data frame (let's say it's called "data.XYZ"), we can get the same matrix by typing: > pairs(data.XYZ)
Further Questions
• If you would like more information on any R instructions to be added to this page, please comment, noting what you would like to see, and we will work on putting up the information as soon as possible.
Contributors
• Valerie Regalia
• Debashis Paul
The basic problem in regression analysis is to understand the relationship between a response variable, denoted by $Y$, and one or more predictor variables, denoted by $X$. The relationship is typically empirical or statistical as opposed to functional or mathematical. The goal is to describe this relationship in the form of a functional dependence of the mean value of Y given any value of $X$ from paired observations $\{(X_i,Y_i) : i=1,\ldots,n\}$
A basic linear regression model for the response $Y$ on the predictor $X$ is given by
$Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i, \qquad i=1,\ldots,n,$
where the noise $\varepsilon_1, \ldots, \varepsilon_n$ are uncorrelated, $Mean(\varepsilon_i) = 0$, and $Variance(\varepsilon_i) = \sigma^2.$
Interpretation
Look at the scatter plot of $Y$ (vertical axis) vs. $X$ (horizontal axis). Consider narrow vertical strips around the different values of $X$:
• Mean (measure of center) of the points falling in the vertical strips lie (approximately) on a straight line with slope $\beta_1$ and intercept $\beta_0$ .
• Standard deviations (measure of spread) of the points falling in each vertical strip are (roughly) the same.
Simple linear regression
We divide the total variability in the observed data into two parts - one coming from the errors, the other coming from the predictor.
ANOVA Decomposition
The following decomposition
$Y_i - \overline{Y} = (\widehat{Y_i} - \overline{Y}) + (Y_i - \widehat{Y_i} )$
for $i=1,2,\ldots,n$,
represents the deviation of the observed response from the mean response as the sum of the deviation of the fitted value from the mean and the residual.
Taking the sum of squares, and after some algebra we have:
$\sum_{i=1}^n (Y_i - \overline{Y})^2 = \sum_{i=1}^n (\widehat{Y_i} -\overline{Y})^2 + \sum_{i=1}^n (Y_i - \widehat{Y_i})^2. \label{1}$
or
$SSTO = SSR + SSE$
where
$SSTO = \sum_{i=1}^n (Y_i - \overline{Y})^2, \qquad SSR = \sum_{i=1}^n (\widehat{Y_i} -\overline{Y})^2, \qquad SSE = \sum_{i=1}^n (Y_i - \widehat{Y_i})^2. \label{2}$
Equation \ref{1} is referred to as the ANOVA decomposition of the variation in the response. Note that
$SSR = b_1^2 \sum_{i=1}^n (X_i - \overline{X})^2 .$
Degrees of freedom
The degrees of freedom of the different terms in the ANOVA decomposition (Equation \ref{1}) are
$df( SSTO ) = n - 1$
$df( SSR ) = 1$
$df( SSE ) = n - 2.$
So,
$df( SSTO ) = d.f.( SSR ) + d.f.( SSE ).$
Expected value and distribution
$E ( SSE ) = ( n - 2) \sigma^2,$ and $E ( SSR ) = \sigma^2 + \beta_1^2 \sum_{i=1}^n (X_i - \overline{X})^2.$ Also, under the normal regression model, and under $H_0 : \beta_1 = 0,$
$SSR \sim \sigma^2 \chi_1^2, SSE \sim \sigma^2 \chi_{n-2}^2,$
and these two are independent.
Mean squares
$MSE = \dfrac{SSE}{d.f.(SSE)} = \dfrac{SSE}{n-2}, MSR = \dfrac{SSR}{d.f.(SSR)} = \dfrac{SSR}{1}.$
Also, $E ( MSE ) = \sigma^2 , E ( MSR ) = \sigma^2 + \beta_1^2 \sum_{i=1}^n (X_i - \overline{X})^2.$
F ratio
For testing $H_0 : \beta_1 = 0$ versus $H_1 : \beta_1 \neq 0,$ the following test statistics, called the F ratio, can be used:
$F^* = \dfrac{MSR}{MSE}.$
The reason is that $\dfrac{MSR}{MSE}$ fluctuates around 1 + $\dfrac{ \beta_1^2 \sum_{i=1}^n (X_i - \overline{X})^2 }{\sigma^2}.$ So, a significantly large value of $F^*$ provides evidence against $H_0$ and for $H_1.$
Under $H_0, F^*$ has the $F$ distribution with paired degrees of freedom (d.f.( SSR ), d.f.( SSE )) = (1, n - 2 ), (written $F^* \sim F_{1, n - 2}).$ Thus, the test rejects $H_0$ at level of significance $\alpha$ if $F^* > F( 1 - \alpha; 1, n - 2 ),$ where $F( 1 - \alpha; 1, n - 2 )$ is the $(1 - \alpha )$ quantile of $F_{1; n - 2}$ distribution.
Relation between F-test and t-test
Check that $F^* = ( t^* )^2$, where $t^* = \dfrac{b_1}{s ( b_1 )}$ is the test statistic for testing $H_0 : \beta_1 = 0$ versus $H_1 : \beta_1 \neq 0.$ So, the F-test is equivalent to the t-test in this case.
ANOVA table
It is a table that gives the summary of the various objects used in testing $H_0 : \beta_1 = 0$ against $H_1 : \beta_1 \neq 0.$ It is of the form:
Source df SS MS F*
Regression d.f.(SSR) = 1 SSR MSR $\dfrac{MSR}{MSE}$
Error d.f.(SSE) = n - 2 SSE MSE
Total d.f.(SSTO) = n - 1 SSTO
Example $1$: housing price data
We consider a data set on housing prices. Here Y = selling price of houses (in \$1000), and X = size of houses (100 square feet). The summary statistics are given below:
$n = 19, \overline{X} = 15.719, \overline{Y} = 75.211,$
$\sum_i ( X_i - \overline{X} )^2 = 40.805, \sum_i ( Y_i - \overline{Y} )^2 = 556.078, \sum_i ( X_i - \overline{X} ) ( Y_i - \overline{Y} ) = 120.001.$
(Example) - Estimates of $\beta_1$ and $\beta_0$
$b_1 = \dfrac{\sum_i ( X_i - \overline{X} ) ( Y_i - \overline{Y} ) }{\sum_i ( X_i - \overline{X} )^2} = \dfrac{120.001}{40.805} = 2.941.$
and
$b_0 = \overline{Y} - b_1 \overline{X} = 75.211 - (2.941)(15.719) = 28.981.$
(Example) - MSE
The degrees of freedom are $n -2 = 17$, and $SSE = \sum_i (Y_i - \overline{Y} )^2 - b_1^2 \sum_i ( X_i - \overline{X} )^2 = 203.17.$ So,
$MSE = \dfrac{SSE}{n - 2} = \dfrac{203.17}{17} = 11.95.$
Also, SSTO = 556.08 and SSR = SSTO - SSE = 352.91, MSR = SSR/1 = 352.91.
$F^* = \dfrac{MSR}{MSE} = 29.529 = (t^* )^2,$ where $t^* = \dfrac{b_1}{s ( b_1 )} = \dfrac{2.941}{0.5412} = 5.434.$ Also, F( 0.95; 1, 17 ) = 4.45, t( 0.975; 17) = 2.11. So, we reject $H_0 : \beta_1 = 0.$ The ANOVA table is given below.
Source df SS MS F*
Regression 1 352.91 352.91 29.529
Error 17 203.17 11.95
Total 18 556.08
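All of the quantities in this example can be reproduced in R from the summary statistics alone. A minimal sketch (the variable names are made up; only the numbers given above are used):

    n    <- 19
    Sxx  <- 40.805    # sum (X_i - Xbar)^2
    Syy  <- 556.078   # sum (Y_i - Ybar)^2  (= SSTO)
    Sxy  <- 120.001   # sum (X_i - Xbar)(Y_i - Ybar)
    Xbar <- 15.719; Ybar <- 75.211
    b1  <- Sxy / Sxx                 # 2.941
    b0  <- Ybar - b1 * Xbar          # 28.981
    SSR <- b1^2 * Sxx                # 352.91
    SSE <- Syy - SSR                 # 203.17
    MSE <- SSE / (n - 2)             # 11.95
    Fstar <- (SSR / 1) / MSE         # about 29.5
    c(Fstar = Fstar, F_crit = qf(0.95, 1, n - 2))   # reject H0 since Fstar > F_crit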
Contributors
• Valerie Regalia
• Debashis Paul
Diagnostics for residuals (continued)
Nonnormality of errors
This can be studied graphically by using the normal probability plot, or Q-Q (standing for quantile-quantile) plot. In this plot, the ordered residual (or observed quantiles) of the residuals are plotted aginst the expected quantiles assuming that $\epsilon_i$'s are approximately normal and independent with mean 0 and variance = MSE. This results in plotting the k-th largest ei against
${\sqrt{MSE}*z\left[\dfrac{k-0.375}{n+0.25}\right]},$
where $z(q)$ is the $q$ quantile of the $N(0,1)$ distribution, $0 < q < 1$. If the errors are normally distributed, the points in the plot should fall almost along the diagonal line. Departures from this could indicate skewness or heavier-tailed distributions.
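A minimal R sketch of such a plot, assuming a hypothetical fitted simple regression object `fit` from `lm()`; the manual construction follows the formula above, and the built-in `qqnorm()` gives essentially the same picture:

```r
# Normal probability plot of residuals for a fitted simple linear regression
# 'fit' is assumed to be the result of lm(Y ~ X, data = ...)
e   <- resid(fit)
n   <- length(e)
MSE <- sum(e^2) / (n - 2)

# Expected value of each ordered residual under normality
k        <- rank(e)
expected <- sqrt(MSE) * qnorm((k - 0.375) / (n + 0.25))

plot(expected, e,
     xlab = "Expected residual under normality",
     ylab = "Observed residual")
abline(0, 1)              # points should fall close to this line

# Built-in alternative
qqnorm(e); qqline(e)
```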
(a) True model: $Y = 2 + 3X + \epsilon$, where $\epsilon \sim N(0,1)$; 100 observations with $X_i = i/10$, $i = 1,\ldots,100$.

| Coefficient | Estimate | Std. Error | t-statistic | P-value |
|---|---|---|---|---|
| Intercept | 1.5413 | 0.2196 | 7.02 | $2.92 \times 10^{-10}$ |
| Slope | 3.08907 | 0.03775 | 81.84 | $< 2 \times 10^{-16}$ |

$\sqrt{MSE} = 1.09$, $R^2 = 0.9856$.
(b) True model: $Y = 2 + 3X + \epsilon$, where $\epsilon \sim t_5$; 100 observations with $X_i = i/10$, $i = 1,\ldots,100$.

| Coefficient | Estimate | Std. Error | t-statistic | P-value |
|---|---|---|---|---|
| Intercept | 2.11144 | 0.28279 | 7.467 | $3.42 \times 10^{-11}$ |
| Slope | 2.97458 | 0.04862 | 61.185 | $< 2 \times 10^{-16}$ |

$\sqrt{MSE} = 1.403$, $R^2 = 0.9745$.
(c) True model: $Y = 2 + 3X + \epsilon$, where $\epsilon \sim \chi_5^2 - 5$; 100 observations with $X_i = i/10$, $i = 1,\ldots,100$.

| Coefficient | Estimate | Std. Error | t-statistic | P-value |
|---|---|---|---|---|
| Intercept | 2.4615 | 0.6533 | 3.768 | 0.000281 |
| Slope | 2.9894 | 0.1123 | 26.617 | $< 2 \times 10^{-16}$ |

$\sqrt{MSE} = 3.242$, $R^2 = 0.8785$.
(d) True model: $Y = 2 + 3X + \epsilon$, where $\epsilon \sim 5 - \chi_5^2$; 100 observations with $X_i = i/10$, $i = 1,\ldots,100$.

| Coefficient | Estimate | Std. Error | t-statistic | P-value |
|---|---|---|---|---|
| Intercept | 2.7402 | 0.4694 | 6.838 | $6.87 \times 10^{-8}$ |
| Slope | 2.9896 | 0.0807 | 37.048 | $< 2 \times 10^{-16}$ |

$\sqrt{MSE} = 2.329$, $R^2 = 0.9334$.
Heteroscedasticity
Heteroscedasticity, or unequal variance: the variance of the error $\epsilon_i$ may depend on the value of $X_i$. This is often reflected in the plot of residuals versus $X$ through an unequal spread of the residuals along the X-axis.
One possibility is that the variance either increases or decreases with increasing value of X. This is often true for financial data, where the volume of transactions usually has a role in the uncertainty of the market. Another possibility is that the data may come from different strata with different variabilities. E.g. different measuring instruments, with different precisions, may have been used.
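A small simulated R sketch (our own illustration, not the text's example) showing how an error variance that grows with $X$ appears in the residual plot:

```r
# Illustration: error standard deviation grows with X
set.seed(1)
X   <- seq(0.1, 10, by = 0.1)
Y   <- 2 + 3 * X + rnorm(length(X), sd = 0.5 * X)   # sd proportional to X
fit <- lm(Y ~ X)

# A funnel-shaped spread of residuals along the X-axis suggests heteroscedasticity
plot(X, resid(fit), ylab = "Residuals")
abline(h = 0, lty = 2)
```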
(a) True model: $Y = 2 + 3X + \epsilon$, where $\epsilon \sim 5 - \chi_5^2$; 100 observations with $X_i = i/10$, $i = 1,\ldots,100$.

| Coefficient | Estimate | Std. Error | t-statistic | P-value |
|---|---|---|---|---|
| Intercept | 1.0074 | 0.9729 | 1.035 | 0.303 |
| Slope | 3.3382 | 0.1673 | 19.958 | $< 2 \times 10^{-16}$ |

$\sqrt{MSE} = 2.329$, $R^2 = 0.9334$.
Contributors
• Chengcheng Zhang
General Linear Test
We are interested in testing for the dependence on the predictor variable from a different viewpoint. We call the model
$Y_i = \beta_0+\beta_1X_i+\epsilon_i$
the full model. We want to test $H_0 : \beta_1 = 0$ against $H_1 : \beta_1 \neq 0$. Under $H_0 : \beta_1 = 0$, we have the reduced model:
$Y_i = \beta_0 + \epsilon_i$
Under the full model,
$SSE_{full} = \sum_i(Y_i - \hat{Y_i})^2 = SSE .$
Under the reduced model
$SSE_{red} = \sum_i(Y_i - \overline{Y})^2 = SSTO,$ since under the reduced model the fitted value of each $Y_i$ is $\overline{Y}$.
General Structure of Test Statistic
Observe that d.f.$(SSE_{full}) = n-2$, d.f.$(SSE_{red}) = n-1$ and $SSE_{red} - SSE_{full} = SSR$.
$F^\ast =\dfrac{\dfrac{SSE_{red}-SSE_{full}}{d.f.(SSE_{red})-d.f.(SSE_{full})}}{\dfrac{SSE_{full}}{d.f.(SSE_{full})}} = \dfrac{\dfrac{SSR}{d.f.(SSR)}}{\dfrac{SSE}{d.f.(SSE)}}=\dfrac{MSR}{MSE}$
Under the normal error model, and under $H_0 : \beta_1 = 0$, $F^\ast$ has the $F$ distribution with (paired) degrees of freedom $(d.f.(SSE_{red}) - d.f.(SSE_{full}),\, d.f.(SSE_{full})) = (1, n-2)$.
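In R, this general linear test can be carried out by fitting both models and comparing them with `anova()`; a minimal sketch, assuming a hypothetical data frame `dat` with columns `Y` and `X`:

```r
# General linear test: reduced model (intercept only) vs full model
fit_red  <- lm(Y ~ 1, data = dat)     # SSE_red = SSTO
fit_full <- lm(Y ~ X, data = dat)     # SSE_full = SSE

# anova() reports F* = [(SSE_red - SSE_full)/1] / [SSE_full/(n - 2)] and its p-value
anova(fit_red, fit_full)
```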
Descriptive Measure of Association Between $X$ and $Y$
Define the coefficient of determination:
$R^2 = \dfrac{SSR}{SSTO}= 1-\dfrac{SSE}{SSTO}$
Observe that $0\leq R^2\leq 1$, and the correlation coefficient Corr$(X, Y)$ between $X$ and $Y$ is a (signed) square root of $R^2$; that is, (Corr$(X, Y))^2 = R^2$, with the sign of Corr$(X,Y)$ equal to the sign of $b_1$. A larger value of $R^2$ generally indicates a higher degree of linear association between $X$ and $Y$. Another (and generally considered better) measure of association is the adjusted coefficient of determination:
$R^2_{ad} =1- \dfrac{MSE}{MSTO}, \qquad \mbox{where } MSTO = \dfrac{SSTO}{n-1}.$
$R^2$ is the proportion of variability in $Y$ explained by its regression on $X$. Also, $R^2$ is unit free, i.e. does not depend on the units of measurements of the variables $X$ and $Y$.
For the housing price data, $SSR = 352.91, SSTO = 556.08, n = 19$, and hence $SSE = 203.17$, $d.f.(SSE) = 17$, $d.f.(SSTO) = 18$, and $MSTO = 556.08/18 = 30.89$. So, $R^2 = \dfrac{352.91}{556.08} = 0.635$ and $R^2_{ad} = 1-\dfrac{11.95}{30.89} = 0.613$.
• Cathy Wang
Least squares principle
Least squares principle is a widely used method for obtaining the estimates of the parameters in a statistical model based on observed data. Suppose that we have measurements $Y_1,\ldots,Y_n$ which are noisy versions of known functions $f_1(\beta),\ldots,f_n(\beta)$ of an unknown parameter $\beta$. This means, we can write
$Y_i = f_i(\beta) + \varepsilon_i, i=1,\ldots,n$
where $\varepsilon_1,\ldots,\varepsilon_n$ are quantities that measure the departure of the observed measurements from the model, and are typically referred to as noise. Then the least squares estimate of $\beta$ from this model is defined as
$\widehat\beta = \arg\min_{\beta} \sum_{i=1}^n(Y_i - f_i(\beta))^2.$
The quantity $f_i(\widehat\beta)$ is then referred to as the fitted value of $Y_i$, and the difference $Y_i - f_i(\widehat\beta)$ is referred to as the corresponding residual. It should be noted that $\widehat\beta$ may not be unique. Also, even if it is unique it may not be available in a closed mathematical form. Usually, if each $f_i$ is a smooth function of $\beta$, one can obtain the estimate $\widehat\beta$ by using numerical optimization methods that rely on taking derivatives of the objective function. If the functions $f_i(\beta)$ are linear functions of $\beta$, as is the case in a linear regression problem, then one can obtain the estimate $\widehat\beta$ in a closed form.
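As a concrete illustration (our own sketch, not part of the original text), the following R code obtains the least squares estimate numerically with `optimize()` for a simple nonlinear model $f_i(\beta) = e^{\beta X_i}$, and in closed form with `lm()` for the linear case:

```r
set.seed(2)
X <- seq(0.1, 2, by = 0.1)
beta_true <- 1.5
Y <- exp(beta_true * X) + rnorm(length(X), sd = 0.3)   # noisy observations

# Objective: sum of squared departures of the data from the model
rss <- function(beta) sum((Y - exp(beta * X))^2)

# Numerical least squares (one-dimensional minimization of the objective)
beta_hat <- optimize(rss, interval = c(0, 5))$minimum

# Linear case: closed-form least squares via lm()
fit_lin <- lm(Y ~ X)
coef(fit_lin)
```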
Contributors
• Debashis Paul
A response variable $Y$ is linearly related to $p-1$ different explanatory variables $X^{(1)},\ldots,X^{(p-1)}$. The regression model is given by
$Y_i = \beta_0 + \beta_1 X_i^{(1)} + \cdots + \beta_{p-1} X_i^{(p-1)} + \varepsilon_i, \qquad i=1,\ldots,n, \tag{1}$
where the $\varepsilon_i$ are independent, with mean zero and variance $\sigma^2$, and (as a working assumption) normally distributed. Equation (1) can be expressed in matrix notation as
$Y = \mathbf{X} \beta + \varepsilon, \qquad \mbox{where} \qquad Y = \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n\end{bmatrix}, \qquad \varepsilon = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}.$
Fitted values and residuals
The fitted value for the $i$-th observation is $\widehat{Y}_i = b_0 + b_1 X_i^{(1)} + \cdots + b_{p-1} X_i^{(p-1)}$, and the residual is $e_i = Y_i - \widehat{Y}_i.$ Using matrix notation, the vector of fitted values, $\widehat{Y}$, can be expressed as
$\widehat{Y} = X b = X \widehat{\beta} = X ( X^T X)^{-1} X^T Y$
The $n \times n$ matrix $\mathbf{H} = X ( X^T X)^{-1} X^T$ is called the hat matrix. Thus $\widehat{Y} = \mathbf{H}Y$. The vector of residuals, denoted by $\mathbf{e}$ (with $i$-th coordinate $e_i$, for $i=1,\ldots,n$), can therefore be expressed as
$e = Y - \widehat{Y} = Y - \mathbf{H}Y = (I_n - \mathbf{H}) Y = ( I_n - X (X^T X)^{-1} X^T ) Y.$
• Hat matrix: check that the matrix $\mathbf{H}$ has the property that $\mathbf{H}\mathbf{H} = \mathbf{H}$ and $(I_n - \mathbf{H})(I_n - \mathbf{H}) = I_n - \mathbf{H}$. A square matrix $A$ with the property $AA = A$ is called an idempotent matrix, so both $\mathbf{H}$ and $I_n - \mathbf{H}$ are idempotent. The important implication of the equation $\widehat Y = \mathbf{H} Y$ is that the matrix $\mathbf{H}$ maps the response vector $Y$ to a linear combination of the columns of $\mathbf{X}$, namely the vector of fitted values $\widehat{Y}$. Similarly, the matrix $I_n - \mathbf{H}$ applied to $Y$ gives the residual vector $\mathbf{e}$.
• Properties of residuals: many properties of the residuals can be deduced by studying the matrix $\mathbf{H}$. Some of them are listed below (and verified numerically in the sketch after this list). First, $\sum_i e_i = 0$ and $\sum_i X_i^{(j)}e_i = 0$ for $j=1,\ldots,p-1$. These follow from $\mathbf{X}^T\mathbf{e} = \mathbf{X}^T(I_n - \mathbf{H})Y = \mathbf{X}^TY - \mathbf{X}^T\mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T Y = \mathbf{X}^T Y - \mathbf{X}^T Y = 0.$ Also note that $\widehat{Y} = \mathbf{X}(\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T Y$, and hence $\sum_i \widehat Y_i e_i = \widehat Y^T \mathbf{e} = Y^T \mathbf{X} (\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T\mathbf{e} = 0.$
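These matrix identities are easy to verify numerically. A minimal R sketch with a small simulated design (our own illustration; the data are hypothetical):

```r
set.seed(3)
n  <- 20
X1 <- rnorm(n); X2 <- rnorm(n)
Y  <- 1 + 2 * X1 - X2 + rnorm(n)

X <- cbind(1, X1, X2)                       # design matrix with an intercept column
H <- X %*% solve(t(X) %*% X) %*% t(X)       # hat matrix

Yhat <- H %*% Y                             # fitted values
e    <- Y - Yhat                            # residuals

max(abs(H %*% H - H))        # ~ 0: H is idempotent
max(abs(t(X) %*% e))         # ~ 0: residuals are orthogonal to the columns of X
sum(Yhat * e)                # ~ 0: residuals are orthogonal to the fitted values
```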
ANOVA
The matrix viewpoint gives a coherent way of representing the different components of the analysis of variance of the response in regression. As before, we need to deal with the objects
$SSTO = \sum_i (Y_i - \overline{Y})^2, \qquad SSE = \sum_i(Y_i - \widehat Y_i)^2 = \sum_i e_i^2, \qquad \mbox{and}~~SSR = SSTO - SSE.$
The degrees of freedom of $SSR$ are $p - 1$, the degrees of freedom of $SSTO$ are $n - 1$, and d.f.$(SSE)$ = d.f.$(SSTO)$ - d.f.$(SSR)$ = $n - 1 - (p-1) = n-p$. Moreover,
$\overline{Y} = \frac{1}{n} \sum_i Y_i = (\frac{1}{n}) Y^T \mathbf{1}$
$SSTO = \sum_i Y_i^2 - \frac{1}{n}(\sum_i Y_i)^2 = Y^T Y - (\frac{1}{n}) Y^T \mathbf{J} Y$
$SSE = \mathbf{e}^T \mathbf{e} = Y^T(I-\mathbf{H}) (I-\mathbf{H})Y = Y^T (I-\mathbf{H}) Y = Y^T Y - \widehat \beta^T \mathbf{X}^T Y,$ where $\mathbf{J} = \mathbf{1}\mathbf{1}^T$.
• We can use the ANOVA decomposition to test $H_0 : \beta_1 = \beta_2 = \cdots = \beta_{p-1} = 0$ (no regression effect), against $H_1$ : not all $\beta_j$ are equal to zero. The test statistic is $F^* = \frac{\frac{SSR}{\mbox{d.f.}(SSR)}}{\frac{SSE}{\mbox{d.f.}(SSE)}} = \frac{SSR/(p-1)}{SSE/(n-p)}.$ Under $H_0$ and assumption of normal errors, $F^*$ has $F_{p-1, n-p}$ distribution. So, reject $H_0$ in favor of $H_1$, at level $\alpha$ if $F^* > F(1-\alpha;p-1,n-p)$.
Inference on Multiple Linear Regression
We can ask the same questions regarding estimation of various parameters as we did in the case of regression with one predictor variable.
• Mean and standard error of estimates: We already checked that (with $\mathbf{b} \equiv \widehat \beta$) $E(\mathbf{b}) = \beta$ and Var$(\mathbf{b}) = \sigma^2 (\mathbf{X}^T \mathbf{X})^{-1}$. And hence the estimated variance-covariance matrix of $\mathbf{b}$ is $\widehat{\mbox{Var}}(\mathbf{b}) = MSE(\mathbf{X}^T \mathbf{X})^{-1}$. Denote by $s(b_j)$ the standard error of $b_j = \widehat \beta_j$. Then $s^2(b_j)$ is the $(j+1)$-th diagonal entry of the $p \times p$ matrix $\widehat{\mbox{Var}}(\mathbf{b})$.
• ANOVA : Under $H_0 : \beta_1=\beta_2=\cdots=\beta_{p-1} =0$, the F-ratio $F^* = MSR/MSE$ has an $F_{p-1,n-p}$ distribution. So, reject $H_0$ in favor of $H_1$: $\beta_j \neq 0$ for at least one $j \in\{1,\ldots,p-1\}$, at level $\alpha$ if $F^* > F(1-\alpha;p-1,n-p)$.
• Hypothesis tests for individual parameters : Under $H_0 : \beta_j = \beta_j^0$, for a given $j \in \{1,\ldots,p-1\}$, $t^* = \frac{b_j-\beta_j^0}{s(b_j)} \sim t_{n-p}.$ So, if $H_1 : \beta_j \neq \beta_j^0$, then reject $H_0$ in favor of $H_1$ at level $\alpha$ if $|t^*| > t(1-\alpha/2;n-p)$.
• Confidence intervals for individual parameters : Based on the result above, 100(1-$\alpha$) % two-sided confidence interval for $\beta_j$ is given by $b_j \pm t(1-\alpha/2;n-p)s(b_j).$
• Estimation of mean response : Since $E(Y|X_h) = \beta^T X_h$, where $X_h = \begin{bmatrix}1 \\ X_h^{(1)}\\ \vdots \\ X_h^{(p-1)} \end{bmatrix},$ an unbiased point estimate of $E(Y|X_h)$ is $\widehat Y_h = \mathbf{b}^T X_h = b_0 + b_1X_h^{(1)} + \cdots + b_{p-1}X_h^{(p-1)}$. Using the Working-Hotelling procedure, a $100(1-\alpha)$ % confidence region for the entire regression surface (that is, a confidence region for $E(Y|X_h)$ for all possible values of $X_h$) is given by $\widehat Y_h \pm \sqrt{p F(1-\alpha;p,n-p)} \hspace{.05in} s (\widehat Y_h),$ where $s(\widehat Y_h)$ is the estimated standard error of $\widehat Y_h$, given by $s^2(\widehat Y_h) = (MSE) \cdot X_h^T (\mathbf{X}^T \mathbf{X})^{-1}X_h.$ The last formula can be deduced from the fact that $\mbox{Var}(\widehat Y_h) = \mbox{Var}(X_h^T \mathbf{b}) = X_h^T \mbox{Var}(\mathbf{b}) X_h = \sigma^2 X_h^T (\mathbf{X}^T\mathbf{X})^{-1} X_h.$ Also, using the fact that $(\widehat Y_h - X_h^T \beta)/s(\widehat Y_h) \sim t_{n-p}$, a pointwise $100(1-\alpha)$ % two-sided confidence interval for $E(Y|X_h) = X_h^T \beta$ is given by $\widehat Y_h \pm t(1-\alpha/2;n-p) s(\widehat Y_h).$ Extensions to the case where we want to simultaneously estimate $E(Y|X_h)$ for $g$ different values of $X_h$ can be obtained using either the Bonferroni procedure or the Working-Hotelling procedure.
• Simultaneous prediction of new observations : Analogous to the one-variable regression case, we consider the simultaneous prediction of new observations $Y_{h(new)} = \beta^T X_h + \varepsilon_{h(new)}$ for $g$ different values of $X_h$. Use $s(Y_{h(new)} - \widehat Y_{h(new)})$ to denote the estimated standard deviation of the prediction error when $X=X_h$. We have $s^2(Y_{h(new)} - \widehat Y_{h(new)}) = (MSE) (1+X_h^T (\mathbf{X}^T\mathbf{X})^{-1}X_h).$ The Bonferroni procedure yields simultaneous $100(1-\alpha)$ % prediction intervals of the form $\widehat Y_h \pm t(1-\alpha/2g;n-p) s(Y_{h(new)} - \widehat Y_{h(new)}).$ Scheffé's procedure gives the simultaneous prediction intervals $\widehat Y_h \pm \sqrt{gF(1-\alpha;g,n-p)} \hspace{.06in} s(Y_{h(new)} - \widehat Y_{h(new)}).$
• Coefficient of multiple determination : The quantity $R^2 = 1 - \frac{SSE}{SSTO} = \frac{SSR}{SSTO}$ is a measure of association between the response $Y$ and the predictors $X^{(1)},\ldots,X^{(p-1)}$. This has the interpretation that $R^2$ is the proportion of variability in the response explained by the predictors. Another interpretation is that $R^2$ is the maximum squared correlation between $Y$ and any linear function of $X^{(1)},\ldots,X^{(p-1)}$.
• Adjusted $R^2$ : If one increases the number of predictor variables in the regression model, then $R^2$ cannot decrease. To take the number of predictors into account, the adjusted multiple $R$-squared, $R_a^2 = 1-\frac{MSE}{MSTO} = 1 - \frac{SSE/(n-p)}{SSTO/(n-1)} = 1- \left(\frac{n-1}{n-p}\right) \frac{SSE}{SSTO},$ is used. Notice that $R_a^2 < R^2$, and when the number of observations is not too large, $R_a^2$ can be substantially smaller than $R^2$. Even though $R_a^2$ does not have as nice an interpretation as $R^2$, in multiple linear regression it is considered a better measure of association. An R sketch illustrating these inference quantities follows this list.
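In R, most of these quantities come directly from a fitted `lm` object. A minimal sketch, assuming a hypothetical data frame `dat` with columns `Y`, `X1`, and `X2`; note that `predict()` returns pointwise intervals, so the simultaneous Working-Hotelling, Bonferroni, and Scheffé adjustments described above would widen them:

```r
fit <- lm(Y ~ X1 + X2, data = dat)

summary(fit)     # coefficient table (b_j, s(b_j), t*, p-values), R^2, adjusted R^2, overall F test
anova(fit)       # sequential sums of squares
confint(fit, level = 0.95)                 # individual 95% CIs for the beta_j

# Estimated mean response and a new-observation prediction at a chosen X_h
newX <- data.frame(X1 = 1.0, X2 = -0.5)
predict(fit, newdata = newX, interval = "confidence", level = 0.95)
predict(fit, newdata = newX, interval = "prediction", level = 0.95)
```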
Contributors:
• Valerie Regalia
• Debashis Paul
Diagnostics for predictor
Diagnostic information about the predictor variable $X$, e.g., whether there are any outlying values, and the range and concentration of the $X$ values, can provide clues to the appropriateness of the regression model assumptions.
• For moderate-sized data, we can use the stem-and-leaf plot or the dot plot to gather information about the range and concentration of the data, as well as possible extreme values of $X$. Similar information can be extracted from a summary plot such as the box plot.
• If the data are observed over time, then both the predictor $X$ and the response $Y$ may show some pattern over time. A useful way of gathering this information is the sequence plot ($X$ values plotted against time).
To illustrate the effect of an extreme X value, consider the following example:
$n = 16 , \sum_{i=1}^{n-1} X_i = 90, \sum_{i=1}^{n-1}Y_i = 330, \sum_{i=1}^{n-1}X_i^2 = 1000, \sum_{i=1}^{n-1}X_iY_i = 2400, X_n = 22, Y_n = 22.$
Let $\overline{X}_{n-1} = \sum_{i=1}^{n-1} X_i/(n-1) = 6$ and $\overline{Y}_{n-1} = \sum_{i=1}^{n-1} Y_i/(n-1) = 22.$
Also, $\sum_{i=1}^{n-1} (X_i - \overline{X}_{n-1})(Y_i - \overline{Y}_{n-1}) = \sum_{i=1}^{n-1} X_i Y_i - (n-1) \overline{X}_{n-1} \overline{Y}_{n-1} = 2400 - 15 \times 6\times 22= 420,$ and
$\sum_{i=1}^{n-1}(X_i - \overline{X}_{n-1})^2 = \sum_{i=1}^{n-1}X_i^2 - (n-1)(\overline{X}_{n-1})^2 = 1000 - 15 \times 6^2 = 460.$
Hence, denoting by $b_1^{(n-1)}$ the least squares estimate of $\beta_1$ computed from the first n-1 observations, we have $b_1^{(n-1)} = \frac{\sum_{i=1}^{n-1} (X_i - \overline{X}_{n-1})(Y_i - \overline{Y}_{n-1})}{\sum_{i=1}^{n-1}(X_i - \overline{X}_{n-1})^2} = \frac{420}{460} = 0.913.$ For the whole data set, $\overline{X} = (\sum_{i=1}^{n-1} X_i + X_n)/n = 7. \overline{Y} = (\sum_{i=1}^{n-1} Y_i + Y_n)/n = 22.$
$\sum_{i=1}^n (X_i - \overline{X})(Y_i - \overline{Y}) = \sum_{i=1}^n X_i Y_i - n \overline{X}\overline{Y} = (2400+22\times 22) - 16 \times 7 \times 22 = 420.$
$\sum_{i=1}^n (X_i - \overline{X})^2 = \sum_{i=1}^n X_i^2 - n(\overline{X})^2 = (1000+20^2) - 16 \times 7^2 = 616.$
So, from the full data, estimate for $\beta_1$ is $b_1 =\frac{\sum_{i=1}^n (X_i - \overline{X})(Y_i - \overline{Y})}{ \sum_{i=1}^n (X_i - \overline{X})^2} = \frac{420}{616} = 0.6818.$ Note that, in this example, standard deviation of X estimated from
the first n-1 observations is $s_X^{(n-1)} = \sqrt{\frac{1}{n-2}\sum_{i=1}^{n-1}(X_i - \overline{X}_{n-1})^2} = 5.73$. And observe that $X_n > \overline{X}_{n-1} + 2 s_X^{(n-1)}.$
It can be shown that if we use $b_0^{(n-1)}$ to denote the least squares estimate of $\beta_0$, from the first n-1 observations,
and $e_n^{(n-1)} = Y_n - b_0^{(n-1)} - b_1^{(n-1)} X_n$, then $b_1 = b_1^{(n-1)} + \frac{(1-\frac{1}{n})(X_n - \overline{X}_{n-1})e_n^{(n-1)}}{\sum_{i=1}^n (X_i - \overline{X})^2} = b_1^{(n-1)} + \frac{(X_n - \overline{X})}{\sum_{i=1}^n (X_i - \overline{X})^2}e_n^{(n-1)}.$
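The effect can be illustrated with a small simulated R sketch (our own code, loosely mimicking the numbers above, not part of the original text):

```r
set.seed(4)
# First 15 observations follow a linear trend; the 16th has an extreme X value
X <- c(rnorm(15, mean = 6, sd = 2), 22)
Y <- c(16.5 + 0.9 * X[1:15] + rnorm(15, sd = 2), 22)   # Y_16 falls well below the trend

b1_without <- coef(lm(Y[-16] ~ X[-16]))[2]   # slope from the first n-1 points
b1_with    <- coef(lm(Y ~ X))[2]             # slope from all n points
c(without = b1_without, with = b1_with)      # the single high-leverage point pulls the slope down
```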
Diagnostics for residuals
Residuals $e_i = Y_i - \widehat Y_i$ convey information about the appropriateness of the model. In particular, possible departures from model assumptions are often reflected in the plot of residuals against either predictor(s) or fitted values, or in the distribution of the residuals.
Some important properties
• Mean : We have seen that $\sum_i e_i =0$ and hence = $\overline{e} =\frac{1}{n}\sum_i e_i = 0.$
• Variance : Var $(e) = s^2 = \frac{1}{n-2}\sum_i(e_i - \overline{e})^2 = \frac{1}{n-2}\sum_i e_i^2 = MSE.$
• Correlations : $\sum_i X_i e_i = 0, \sum_i \widehat Y_i e_i =0$ and $\overline{e} =0$ imply that Corr$(X,e) =0$ and Corr$(\widehat Y,e) = 0.$
• Nonindependence : The residuals $e_i$ are not independent even if the model errors $\varepsilon_i$ are. This is because the $e_i$'s satisfy two constraints: $\sum_i e_i =0$ and $\sum_i X_i e_i =0.$ However, when n is large, the residuals are almost independent if the model assumptions hold.
• Semi-studentized residuals : Standardize the residuals by dividing by $\sqrt{MSE}$ to get the semi-studentized residuals (a small computational sketch follows this list):
$e_i^* = \frac{e_i - \overline{e}}{\sqrt{MSE}} = \frac{e_i}{\sqrt{MSE}}.$
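A minimal R sketch, assuming a hypothetical fitted simple regression object `fit` from `lm()`, that computes the semi-studentized residuals and flags unusually large ones:

```r
e     <- resid(fit)
MSE   <- sum(e^2) / df.residual(fit)   # df.residual(fit) = n - 2 for simple regression
estar <- e / sqrt(MSE)                 # semi-studentized residuals

which(abs(estar) > 3)                  # candidate outliers in Y (see the outlier rule below)
```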
Model departures that can be studied by residuals plots
1. The regression function is not linear.
2. The error terms do not have a constant variance.
3. The error terms are not independent.
4. The model fits all but one or a few outliers.
5. The error terms are not normally distributed.
6. One or several predictor variables have been omitted from the model.
Diagnostic plot
• Plot of residuals versus time: When the observations involve a time component, the sequence plot, i.e., the plot of residuals versus time, is helpful in detecting the possible presence of runs or cycles. Runs may indicate that the errors are correlated in time; systematic patterns like cycles may indicate the presence of seasonality in the data.
Example: True model $Y = 5 + 2X + \sin(X/10) + \varepsilon$, where $X$ is the time index and $\varepsilon \sim N(0,9)$; $X_i = i$ for $i = 1,2,\ldots,40$. Fitted linear regression model for the simulated data:

| Coefficient | Estimate | Std. Error | t-statistic | P-value |
|---|---|---|---|---|
| Intercept | 7.0878 | 1.1035 | 6.423 | $1.5 \times 10^{-7}$ |
| Slope | 1.9598 | 0.0469 | 41.783 | $< 2 \times 10^{-16}$ |

$\sqrt{MSE} = 3.424$, $R^2 = 0.9787$, $R_{ad}^2 = 0.9781$.
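A hedged R sketch of this kind of simulation and its sequence plot (our own code and seed, so the fitted numbers will differ somewhat from the table above):

```r
set.seed(5)
X   <- 1:40                                          # time index
Y   <- 5 + 2 * X + sin(X / 10) + rnorm(40, sd = 3)   # true model with a slow cycle
fit <- lm(Y ~ X)

# Sequence plot: residuals versus time; a systematic wave suggests a cyclic component
plot(X, resid(fit), type = "b", xlab = "Time", ylab = "Residual")
abline(h = 0, lty = 2)
```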
Another example of time course data. True model: $Y = 5 + 3X + 3\sin(X/5) + \varepsilon$, with 40 observations at $X_i = i$, $i = 1,\ldots,40$, and $\varepsilon \sim N(0,9)$. The linear model fit is given below.

| Coefficient | Estimate | Std. Error | t-statistic | P-value |
|---|---|---|---|---|
| Intercept | 5.5481 | 1.14164 | 3.917 | 0.000361 |
| Slope | 1.9862 | 0.0602 | 32.991 | $< 2 \times 10^{-16}$ |

$\sqrt{MSE} = 4.395$, $R^2 = 0.9663$, $R_{ad}^2 = 0.9654$.
• Nonlinearity of the regression function : If the plot of the residuals versus the predictor shows a discernible nonlinear pattern, that is an indication of possible nonlinearity of the regression function.
Example: True model $Y = 5 - X + 0.1 X^2 + \varepsilon$ with $\varepsilon \sim N(0, 10^2)$. We simulate 30 observations with $X$ following a $N(100, 16^2)$ distribution. The data summary is given below.
$\overline{X} = 104.13, \overline{Y} = 1004.79, \sum_i X_i^2 = 330962.9, \sum_i Y_i^2 = 32466188, \sum_i X_iY_i = 3249512.$
The linear model $Y = \beta_0 + \beta_1 X + \epsilon$ was fitted to these data. The following table gives the summary.

| Coefficient | Estimate | Std. Error | t-statistic | P-value |
|---|---|---|---|---|
| Intercept | -1021.3803 | 40.0648 | -25.49 | $< 2 \times 10^{-16}$ |
| Slope | 19.4587 | 0.3814 | 51.01 | $< 2 \times 10^{-16}$ |

$\sqrt{MSE} = 28.78$, $R^2 = 0.9894$, $R_{ad}^2 = 0.989$.
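To see this nonlinearity in a residual plot, one can refit a simulation of this example and plot residuals against the predictor; a minimal R sketch (our own simulation and seed, so numbers will differ from the table above):

```r
set.seed(6)
X   <- rnorm(30, mean = 100, sd = 16)
Y   <- 5 - X + 0.1 * X^2 + rnorm(30, sd = 10)   # true regression function is quadratic
fit <- lm(Y ~ X)                                # straight-line fit

# Residuals show a systematic U-shaped pattern against X
plot(X, resid(fit), ylab = "Residuals")
abline(h = 0, lty = 2)
```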
• Presence of outliers : If some of the semi-studentized residuals have "too large" absolute values (say $|e_i^*|$ > 3 for some i) then the corresponding observation can be taken to be an outlier (in Y).
Contributors
• Yingwen Li
• Debashis Paul