At this point we’ve thought a little bit about how to operationalize a theoretical construct and thereby create a psychological measure; and we’ve seen that by applying psychological measures we end up with variables, which can come in many different types. At this point, we should start discussing the obvious question: is the measurement any good? We’ll do this in terms of two related ideas: reliability and validity. Put simply, the reliability of a measure tells you how precisely you are measuring something, whereas the validity of a measure tells you how accurate the measure is.

Reliability is actually a very simple concept: it refers to the repeatability or consistency of your measurement. The measurement of my weight by means of a “bathroom scale” is very reliable: if I step on and off the scales over and over again, it’ll keep giving me the same answer. Measuring my intelligence by means of “asking my mom” is very unreliable: some days she tells me I’m a bit thick, and other days she tells me I’m a complete moron. Notice that this concept of reliability is different to the question of whether the measurements are correct (the correctness of a measurement relates to its validity). If I’m holding a sack of potatoes when I step on and off the bathroom scales, the measurement will still be reliable: it will always give me the same answer. However, this highly reliable answer doesn’t match up to my true weight at all, and therefore it’s wrong. In technical terms, this is a reliable but invalid measurement. Similarly, while my mom’s estimate of my intelligence is a bit unreliable, she might be right. Maybe I’m just not too bright, and so while her estimate of my intelligence fluctuates pretty wildly from day to day, it’s basically right. So that would be an unreliable but valid measure. Of course, if my mom’s estimates are too unreliable, it’s going to be very hard to figure out which one of her many claims about my intelligence is actually the right one. To some extent, then, a very unreliable measure tends to end up being invalid for practical purposes; so much so that many people would say that reliability is necessary (but not sufficient) to ensure validity.

Okay, now that we’re clear on the distinction between reliability and validity, let’s have a think about the different ways in which we might measure reliability:

• Test-retest reliability. This relates to consistency over time: if we repeat the measurement at a later date, do we get the same answer?
• Inter-rater reliability. This relates to consistency across people: if someone else repeats the measurement (e.g., someone else rates my intelligence) will they produce the same answer?
• Parallel forms reliability. This relates to consistency across theoretically-equivalent measurements: if I use a different set of bathroom scales to measure my weight, does it give the same answer?
• Internal consistency reliability. If a measurement is constructed from lots of different parts that perform similar functions (e.g., a personality questionnaire result is added up across several questions), do the individual parts tend to give similar answers?
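To make the most basic of these ideas concrete, here is a minimal sketch in Python. It is not part of the original text: the data are simulated and every name in it is invented for illustration. The point is simply that a reliable measure gives nearly the same answer on repeated occasions, so the correlation between occasions is high, whereas an unreliable measure does not.

```python
# A hypothetical illustration of test-retest reliability (simulated data,
# invented names; not from the text).
import numpy as np

rng = np.random.default_rng(1)
n = 200

true_score = rng.normal(100, 15, size=n)        # each person's stable "true" level
time1 = true_score + rng.normal(0, 5, size=n)   # measurement at time 1 (small error)
time2 = true_score + rng.normal(0, 5, size=n)   # same people, measured again later

# Test-retest reliability: how strongly the two occasions agree.
print(f"precise instrument: r = {np.corrcoef(time1, time2)[0, 1]:.2f}")

# A much noisier instrument (think "asking my mom") agrees with itself far less.
noisy1 = true_score + rng.normal(0, 40, size=n)
noisy2 = true_score + rng.normal(0, 40, size=n)
print(f"noisy instrument:   r = {np.corrcoef(noisy1, noisy2)[0, 1]:.2f}")
```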
Not all measurements need to possess all forms of reliability. For instance, educational assessment can be thought of as a form of measurement. One of the subjects that I teach, Computational Cognitive Science, has an assessment structure that has a research component and an exam component (plus other things). The exam component is intended to measure something different from the research component, so the assessment as a whole has low internal consistency. However, within the exam there are several questions that are intended to (approximately) measure the same things, and those tend to produce similar outcomes; so the exam on its own has a fairly high internal consistency. Which is as it should be. You should only demand reliability in those situations where you want to be measuring the same thing!

1.10: The role of variables: predictors and outcomes

Okay, I’ve got one last piece of terminology that I need to explain to you before moving away from variables. Normally, when we do some research we end up with lots of different variables. Then, when we analyse our data we usually try to explain some of the variables in terms of some of the other variables. It’s important to keep the two roles “thing doing the explaining” and “thing being explained” distinct. So let’s be clear about this now. Firstly, we might as well get used to the idea of using mathematical symbols to describe variables, since it’s going to happen over and over again. Let’s denote the “to be explained” variable \(Y\), and denote the variables “doing the explaining” as \(X_1\), \(X_2\), etc.

Now, when we are doing an analysis, we have different names for \(X\) and \(Y\), since they play different roles in the analysis. The classical names for these roles are independent variable (IV) and dependent variable (DV). The IV is the variable that you use to do the explaining (i.e., \(X\)) and the DV is the variable being explained (i.e., \(Y\)). The logic behind these names goes like this: if there really is a relationship between \(X\) and \(Y\) then we can say that \(Y\) depends on \(X\), and if we have designed our study “properly” then \(X\) isn’t dependent on anything else. However, I personally find those names horrible: they’re hard to remember and they’re highly misleading, because (a) the IV is never actually “independent of everything else” and (b) if there’s no relationship, then the DV doesn’t actually depend on the IV. And in fact, because I’m not the only person who thinks that IV and DV are just awful names, there are a number of alternatives that I find more appealing. For example, in an experiment the IV refers to the manipulation, and the DV refers to the measurement. So, we could use manipulated variable (independent variable) and measured variable (dependent variable).

The terminology used to distinguish between different roles that a variable can play when analysing a data set:

role of the variable | classical name | modern name
“to be explained” | dependent variable (DV) | measurement
“to do the explaining” | independent variable (IV) | manipulation

We could also use predictors and outcomes. The idea here is that what you’re trying to do is use \(X\) (the predictors) to make guesses about \(Y\) (the outcomes). This is summarized in the table:

role of the variable | classical name | modern name
“to be explained” | dependent variable (DV) | outcome
“to do the explaining” | independent variable (IV) | predictor
1.11: Experimental and non-experimental research
One of the big distinctions that you should be aware of is the distinction between “experimental research” and “non-experimental research”. When we make this distinction, what we’re really talking about is the degree of control that the researcher exercises over the people and events in the study.

Experimental research

The key feature of experimental research is that the researcher controls all aspects of the study, especially what participants experience during the study. In particular, the researcher manipulates or varies something (the IVs), and then allows the outcome variable (the DV) to vary naturally. The idea here is to deliberately vary something in the world (the IVs) to see if it has any causal effects on the outcomes. Moreover, in order to ensure that there’s no chance that something other than the manipulated variable is causing the outcomes, everything else is kept constant or is in some other way “balanced” to ensure that it has no effect on the results. In practice, it’s almost impossible to think of everything else that might have an influence on the outcome of an experiment, much less keep it constant. The standard solution to this is randomization: that is, we randomly assign people to different groups, and then give each group a different treatment (i.e., assign them different values of the predictor variables). We’ll talk more about randomization later in this course, but for now, it’s enough to say that what randomization does is minimize (but not eliminate) the chances that there are any systematic differences between groups.

Let’s consider a very simple, completely unrealistic and grossly unethical example. Suppose you wanted to find out if smoking causes lung cancer. One way to do this would be to find people who smoke and people who don’t smoke, and look to see if smokers have a higher rate of lung cancer. This is not a proper experiment, since the researcher doesn’t have a lot of control over who is and isn’t a smoker. And this really matters: for instance, it might be that people who choose to smoke cigarettes also tend to have poor diets, or maybe they tend to work in asbestos mines, or whatever. The point here is that the groups (smokers and non-smokers) actually differ on lots of things, not just smoking. So it might be that the higher incidence of lung cancer among smokers is caused by something else, not by smoking per se. In technical terms, these other things (e.g., diet) are called “confounds”, and we’ll talk about those in just a moment.

In the meantime, let’s now consider what a proper experiment might look like. Recall that our concern was that smokers and non-smokers might differ in lots of ways. The solution, as long as you have no ethics, is to control who smokes and who doesn’t. Specifically, if we randomly divide participants into two groups, and force half of them to become smokers, then it’s very unlikely that the groups will differ in any respect other than the fact that half of them smoke. That way, if our smoking group gets cancer at a higher rate than the non-smoking group, then we can feel pretty confident that (a) smoking does cause cancer and (b) we’re murderers.

Non-experimental research

Non-experimental research is a broad term that covers “any study in which the researcher doesn’t have quite as much control as they do in an experiment”. Obviously, control is something that scientists like to have, but as the previous example illustrates, there are lots of situations in which you can’t or shouldn’t try to obtain that control.
Since it’s grossly unethical (and almost certainly criminal) to force people to smoke in order to find out if they get cancer, this is a good example of a situation in which you really shouldn’t try to obtain experimental control. But there are other reasons too. Even leaving aside the ethical issues, our “smoking experiment” does have a few other issues. For instance, when I suggested that we “force” half of the people to become smokers, I must have been talking about starting with a sample of non-smokers, and then forcing them to become smokers. While this sounds like the kind of solid, evil experimental design that a mad scientist would love, it might not be a very sound way of investigating the effect in the real world. For instance, suppose that smoking only causes lung cancer when people have poor diets, and suppose also that people who normally smoke do tend to have poor diets. However, since the “smokers” in our experiment aren’t “natural” smokers (i.e., we forced non-smokers to become smokers; they didn’t take on all of the other normal, real life characteristics that smokers might tend to possess) they probably have better diets. As such, in this silly example they wouldn’t get lung cancer, and our experiment will fail, because it violates the structure of the “natural” world (the technical name for this is an “artifactual” result; see later). One distinction worth making between two types of non-experimental research is the difference between quasi-experimental research and case studies. The example I discussed earlier – in which we wanted to examine incidence of lung cancer among smokers and non-smokers, without trying to control who smokes and who doesn’t – is a quasi-experimental design. That is, it’s the same as an experiment, but we don’t control the predictors (IVs). We can still use statistics to analyse the results, it’s just that we have to be a lot more careful. The alternative approach, case studies, aims to provide a very detailed description of one or a few instances. In general, you can’t use statistics to analyse the results of case studies, and it’s usually very hard to draw any general conclusions about “people in general” from a few isolated examples. However, case studies are very useful in some situations. Firstly, there are situations where you don’t have any alternative: neuropsychology has this issue a lot. Sometimes, you just can’t find a lot of people with brain damage in a specific area, so the only thing you can do is describe those cases that you do have in as much detail and with as much care as you can. However, there’s also some genuine advantages to case studies: because you don’t have as many people to study, you have the ability to invest lots of time and effort trying to understand the specific factors at play in each case. This is a very valuable thing to do. As a consequence, case studies can complement the more statistically-oriented approaches that you see in experimental and quasi-experimental designs. We won’t talk much about case studies in these lectures, but they are nevertheless very valuable tools!
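Before leaving this distinction, it may help to see the earlier claim about randomization in miniature. The sketch below is not from the text: the data are simulated and the variable names are invented. It shows that when people are randomly assigned to groups, even a characteristic the researcher never measured ends up roughly balanced across the groups, which is exactly the guarantee that a quasi-experimental design lacks.

```python
# A hypothetical illustration of why random assignment helps (simulated data,
# invented names; not from the text).
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# A lurking characteristic that we never measure (say, "diet quality").
diet_quality = rng.normal(0, 1, size=n)

# Randomly assign each participant to one of two groups.
group = rng.permutation(np.repeat(["treatment", "control"], n // 2))

print(f"mean diet quality, treatment: {diet_quality[group == 'treatment'].mean():.3f}")
print(f"mean diet quality, control:   {diet_quality[group == 'control'].mean():.3f}")
# The two means are nearly identical: randomization balances variables we did
# not (or could not) measure, at least on average. When people select their
# own group, as in the smoking example, there is no such guarantee.
```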
1.12: Assessing the validity of a study
More than any other thing, a scientist wants their research to be “valid”. The conceptual idea behind validity is very simple: can you trust the results of your study? If not, the study is invalid. However, while it’s easy to state, in practice it’s much harder to check validity than it is to check reliability. And in all honesty, there’s no precise, clearly agreed upon notion of what validity actually is. In fact, there are lots of different kinds of validity, each of which raises its own issues, and not all forms of validity are relevant to all studies. I’m going to talk about five different types:

• Internal validity
• External validity
• Construct validity
• Face validity
• Ecological validity

To give you a quick guide as to what matters here…(1) Internal and external validity are the most important, since they tie directly to the fundamental question of whether your study really works. (2) Construct validity asks whether you’re measuring what you think you are. (3) Face validity isn’t terribly important except insofar as you care about “appearances”. (4) Ecological validity is a special case of face validity that corresponds to a kind of appearance that you might care about a lot.

Internal validity

Internal validity refers to the extent to which you are able to draw the correct conclusions about the causal relationships between variables. It’s called “internal” because it refers to the relationships between things “inside” the study. Let’s illustrate the concept with a simple example. Suppose you’re interested in finding out whether a university education makes you write better. To do so, you get a group of first-year students, ask them to write a 1000-word essay, and count the number of spelling and grammatical errors they make. Then you find some third-year students, who obviously have had more of a university education than the first-years, and repeat the exercise. And let’s suppose it turns out that the third-year students produce fewer errors. And so you conclude that a university education improves writing skills. Right? Except… the big problem that you have with this experiment is that the third-year students are older, and they’ve had more experience with writing things. So it’s hard to know for sure what the causal relationship is: Do older people write better? Or people who have had more writing experience? Or people who have had more education? Which of the above is the true cause of the superior performance of the third-years? Age? Experience? Education? You can’t tell. This is an example of a failure of internal validity, because your study doesn’t properly tease apart the causal relationships between the different variables.

External validity

External validity relates to the generalizability of your findings. That is, to what extent do you expect to see the same pattern of results in “real life” as you saw in your study? To put it a bit more precisely, any study that you do in psychology will involve a fairly specific set of questions or tasks, will occur in a specific environment, and will involve participants that are drawn from a particular subgroup. So, if it turns out that the results don’t actually generalize to people and situations beyond the ones that you studied, then what you’ve got is a lack of external validity. The classic example of this issue is the fact that a very large proportion of studies in psychology will use undergraduate psychology students as the participants.
Obviously, however, the researchers don’t care only about psychology students; they care about people in general. Given that, a study that uses only psych students as participants always carries a risk of lacking external validity. That is, if there’s something “special” about psychology students that makes them different to the general populace in some relevant respect, then we may start worrying about a lack of external validity. That said, it is absolutely critical to realize that a study that uses only psychology students does not necessarily have a problem with external validity. I’ll talk about this again later, but it’s such a common mistake that I’m going to mention it here. The external validity is threatened by the choice of population if (a) the population from which you sample your participants is very narrow (e.g., psych students), and (b) the narrow population that you sampled from is systematically different from the general population, in some respect that is relevant to the psychological phenomenon that you intend to study. The second part, point (b), is the bit that lots of people forget: it is true that psychology undergraduates differ from the general population in lots of ways, and so a study that uses only psych students may have problems with external validity. However, if those differences aren’t very relevant to the phenomenon that you’re studying, then there’s nothing to worry about. To make this a bit more concrete, here are two extreme examples:

• You want to measure “attitudes of the general public towards psychotherapy”, but all of your participants are psychology students. This study would almost certainly have a problem with external validity.
• You want to measure the effectiveness of a visual illusion, and your participants are all psychology students. This study is very unlikely to have a problem with external validity.

Having just spent the last couple of paragraphs focusing on the choice of participants (since that’s the big issue that everyone tends to worry most about), it’s worth remembering that external validity is a broader concept. The following are also examples of things that might pose a threat to external validity, depending on what kind of study you’re doing:

• People might answer a “psychology questionnaire” in a manner that doesn’t reflect what they would do in real life.
• Your lab experiment on (say) “human learning” has a different structure to the learning problems people face in real life.

Construct validity

Construct validity is basically a question of whether you’re measuring what you want to be measuring. A measurement has good construct validity if it is actually measuring the correct theoretical construct, and bad construct validity if it doesn’t. To give a very simple (if ridiculous) example, suppose I’m trying to investigate the rates with which university students cheat on their exams. And the way I attempt to measure it is by asking the cheating students to stand up in the lecture theatre so that I can count them. When I do this with a class of 300 students, 0 people claim to be cheaters. So I therefore conclude that the proportion of cheaters in my class is 0%. Clearly this is a bit ridiculous. But the point here is not that this is a very deep methodological example, but rather to explain what construct validity is.
The problem with my measure is that while I’m trying to measure “the proportion of people who cheat”, what I’m actually measuring is “the proportion of people stupid enough to own up to cheating, or bloody-minded enough to pretend that they do”. Obviously, these aren’t the same thing! So my study has gone wrong, because my measurement has very poor construct validity.

Face validity

Face validity simply refers to whether or not a measure “looks like” it’s doing what it’s supposed to, nothing more. If I design a test of intelligence, and people look at it and they say “no, that test doesn’t measure intelligence”, then the measure lacks face validity. It’s as simple as that. Obviously, face validity isn’t very important from a pure scientific perspective. After all, what we care about is whether or not the measure actually does what it’s supposed to do, not whether it looks like it does what it’s supposed to do. As a consequence, we generally don’t care very much about face validity. That said, the concept of face validity serves three useful pragmatic purposes:

• Sometimes, an experienced scientist will have a “hunch” that a particular measure won’t work. While these sorts of hunches have no strict evidentiary value, it’s often worth paying attention to them, because oftentimes people have knowledge that they can’t quite verbalize, so there might be something to worry about even if you can’t quite say why. In other words, when someone you trust criticizes the face validity of your study, it’s worth taking the time to think more carefully about your design to see if you can think of reasons why it might go awry. Mind you, if you don’t find any reason for concern, then you should probably not worry: after all, face validity really doesn’t matter much.
• Often (very often), completely uninformed people will also have a “hunch” that your research is crap. And they’ll criticize it on the internet or something. On close inspection, you’ll often notice that these criticisms are actually focused entirely on how the study “looks”, but not on anything deeper. The concept of face validity is useful for gently explaining to people that they need to substantiate their arguments further.
• Expanding on the last point, if the beliefs of untrained people are critical (e.g., this is often the case for applied research where you actually want to convince policy makers of something or other) then you have to care about face validity. Simply because – whether you like it or not – a lot of people will use face validity as a proxy for real validity. If you want the government to change a law on scientific, psychological grounds, then it won’t matter how good your studies “really” are. If they lack face validity, you’ll find that politicians ignore you. Of course, it’s somewhat unfair that policy often depends more on appearance than fact, but that’s how things go.

Ecological validity

Ecological validity is a different notion of validity, which is similar to external validity, but less important. The idea is that, in order to be ecologically valid, the entire set-up of the study should closely approximate the real world scenario that is being investigated. In a sense, ecological validity is a kind of face validity – it relates mostly to whether the study “looks” right, but with a bit more rigour to it. To be ecologically valid, the study has to look right in a fairly specific way. The idea behind it is the intuition that a study that is ecologically valid is more likely to be externally valid.
It’s no guarantee, of course. But the nice thing about ecological validity is that it’s much easier to check whether a study is ecologically valid than it is to check whether a study is externally valid. A simple example would be eyewitness identification studies. Most of these studies tend to be done in a university setting, often with a fairly simple array of faces to look at rather than a line up. The length of time between seeing the “criminal” and being asked to identify the suspect in the “line up” is usually shorter. The “crime” isn’t real, so there’s no chance of the witness being scared, and there are no police officers present, so there’s not as much chance of feeling pressured. These things all mean that the study definitely lacks ecological validity. They might (but might not) mean that it also lacks external validity.
1.13: Confounds, Artifacts and Other Threats to Validity
If we look at the issue of validity in the most general fashion, the two biggest worries that we have are confounds and artifacts. These two terms are defined in the following way:

• Confound: A confound is an additional, often unmeasured variable that turns out to be related to both the predictors and the outcomes. The existence of confounds threatens the internal validity of the study because you can’t tell whether the predictor causes the outcome, or if the confounding variable causes it, etc.
• Artifact: A result is said to be “artifactual” if it only holds in the special situation that you happened to test in your study. The possibility that your result is an artifact describes a threat to your external validity, because it raises the possibility that you can’t generalize your results to the actual population that you care about.

As a general rule confounds are a bigger concern for non-experimental studies, precisely because they’re not proper experiments: by definition, you’re leaving lots of things uncontrolled, so there’s a lot of scope for confounds working their way into your study. Experimental research tends to be much less vulnerable to confounds: the more control you have over what happens during the study, the more you can prevent confounds from appearing. However, there are always swings and roundabouts, and when we start thinking about artifacts rather than confounds, the shoe is very firmly on the other foot. For the most part, artifactual results tend to be more of a concern for experimental studies than for non-experimental studies. To see this, it helps to realize that the reason that a lot of studies are non-experimental is precisely because what the researcher is trying to do is examine human behaviour in a more naturalistic context. By working in a more real-world context, you lose experimental control (making yourself vulnerable to confounds), but because you tend to be studying human psychology “in the wild” you reduce the chances of getting an artifactual result. Or, to put it another way, when you take psychology out of the wild and bring it into the lab (which we usually have to do to gain our experimental control), you always run the risk of accidentally studying something different than you wanted to study: which is more or less the definition of an artifact.

Be warned though: the above is a rough guide only. It’s absolutely possible to have confounds in an experiment, and to get artifactual results with non-experimental studies. This can happen for all sorts of reasons, not least of which is researcher error. In practice, it’s really hard to think everything through ahead of time, and even very good researchers make mistakes. But other times it’s unavoidable, simply because the researcher has ethics (e.g., see “differential attrition”). Okay. There’s a sense in which almost any threat to validity can be characterized as a confound or an artifact: they’re pretty vague concepts. So let’s have a look at some of the most common examples…

History effects

History effects refer to the possibility that specific events may occur during the study itself that might influence the outcomes. For instance, something might happen in between a pre-test and a post-test. Or, in between testing participant 23 and participant 24. Alternatively, it might be that you’re looking at an older study, which was perfectly valid for its time, but the world has changed enough since then that the conclusions are no longer trustworthy.
Examples of things that would count as history effects: • You’re interested in how people think about risk and uncertainty. You started your data collection in December 2010. But finding participants and collecting data takes time, so you’re still finding new people in February 2011. Unfortunately for you (and even more unfortunately for others), the Queensland floods occurred in January 2011, causing billions of dollars of damage and killing many people. Not surprisingly, the people tested in February 2011 express quite different beliefs about handling risk than the people tested in December 2010. Which (if any) of these reflects the “true” beliefs of participants? I think the answer is probably both: the Queensland floods genuinely changed the beliefs of the Australian public, though possibly only temporarily. The key thing here is that the “history” of the people tested in February is quite different to people tested in December. • You’re testing the psychological effects of a new anti-anxiety drug. So what you do is measure anxiety before administering the drug (e.g., by self-report, and taking physiological measures, let’s say), then you administer the drug, and then you take the same measures afterwards. In the middle, however, because your labs are in Los Angeles, there’s an earthquake, which increases the anxiety of the participants. Maturation effects As with history effects, maturational effects are fundamentally about change over time. However, maturation effects aren’t in response to specific events. Rather, they relate to how people change on their own over time: we get older, we get tired, we get bored, etc. Some examples of maturation effects: • When doing developmental psychology research, you need to be aware that children grow up quite rapidly. So, suppose that you want to find out whether some educational trick helps with vocabulary size among 3 year olds. One thing that you need to be aware of is that the vocabulary size of children that age is growing at an incredible rate (multiple words per day), all on its own. If you design your study without taking this maturational effect into account, then you won’t be able to tell if your educational trick works. • When running a very long experiment in the lab (say, something that goes for 3 hours), it’s very likely that people will begin to get bored and tired, and that this maturational effect will cause performance to decline, regardless of anything else going on in the experiment Repeated testing effects An important type of history effect is the effect of repeated testing. Suppose I want to take two measurements of some psychological construct (e.g., anxiety). One thing I might be worried about is if the first measurement has an effect on the second measurement. In other words, this is a history effect in which the “event” that influences the second measurement is the first measurement itself! This is not at all uncommon. Examples of this include: • Learning and practice: e.g., “intelligence” at time 2 might appear to go up relative to time 1 because participants learned the general rules of how to solve “intelligence-test-style” questions during the first testing session. • Familiarity with the testing situation: e.g., if people are nervous at time 1, this might make performance go down; after sitting through the first testing situation, they might calm down a lot precisely because they’ve seen what the testing looks like. 
• Auxiliary changes caused by testing: e.g., if a questionnaire assessing mood is boring, then mood at measurement at time 2 is more likely to become “bored”, precisely because of the boring measurement made at time 1. Selection bias Selection bias is a pretty broad term. Suppose that you’re running an experiment with two groups of participants, where each group gets a different “treatment”, and you want to see if the different treatments lead to different outcomes. However, suppose that, despite your best efforts, you’ve ended up with a gender imbalance across groups (say, group A has 80% females and group B has 50% females). It might sound like this could never happen, but trust me, it can. This is an example of a selection bias, in which the people “selected into” the two groups have different characteristics. If any of those characteristics turns out to be relevant (say, your treatment works better on females than males) then you’re in a lot of trouble. Differential attrition One quite subtle danger to be aware of is called differential attrition, which is a kind of selection bias that is caused by the study itself. Suppose that, for the first time ever in the history of psychology, I manage to find the perfectly balanced and representative sample of people. I start running “Dan’s incredibly long and tedious experiment” on my perfect sample, but then, because my study is incredibly long and tedious, lots of people start dropping out. I can’t stop this: as we’ll discuss later in the chapter on research ethics, participants absolutely have the right to stop doing any experiment, any time, for whatever reason they feel like, and as researchers we are morally (and professionally) obliged to remind people that they do have this right. So, suppose that “Dan’s incredibly long and tedious experiment” has a very high drop out rate. What do you suppose the odds are that this drop out is random? Answer: zero. Almost certainly, the people who remain are more conscientious, more tolerant of boredom etc than those that leave. To the extent that (say) conscientiousness is relevant to the psychological phenomenon that I care about, this attrition can decrease the validity of my results. When thinking about the effects of differential attrition, it is sometimes helpful to distinguish between two different types. The first is homogeneous attrition, in which the attrition effect is the same for all groups, treatments or conditions. In the example I gave above, the differential attrition would be homogeneous if (and only if) the easily bored participants are dropping out of all of the conditions in my experiment at about the same rate. In general, the main effect of homogeneous attrition is likely to be that it makes your sample unrepresentative. As such, the biggest worry that you’ll have is that the generalisability of the results decreases: in other words, you lose external validity. The second type of differential attrition is heterogeneous attrition, in which the attrition effect is different for different groups. This is a much bigger problem: not only do you have to worry about your external validity, you also have to worry about your internal validity too. To see why this is the case, let’s consider a very dumb study in which I want to see if insulting people makes them act in a more obedient way. Why anyone would actually want to study that I don’t know, but let’s suppose I really, deeply cared about this. So, I design my experiment with two conditions. 
In the “treatment” condition, the experimenter insults the participant and then gives them a questionnaire designed to measure obedience. In the “control” condition, the experimenter engages in a bit of pointless chitchat and then gives them the questionnaire. Leaving aside the questionable scientific merits and dubious ethics of such a study, let’s have a think about what might go wrong here. As a general rule, when someone insults me to my face, I tend to get much less co-operative. So, there’s a pretty good chance that a lot more people are going to drop out of the treatment condition than the control condition. And this drop out isn’t going to be random. The people most likely to drop out would probably be the people who don’t care all that much about the importance of obediently sitting through the experiment. Since the most bloody minded and disobedient people all left the treatment group but not the control group, we’ve introduced a confound: the people who actually took the questionnaire in the treatment group were already more likely to be dutiful and obedient than the people in the control group. In short, in this study insulting people doesn’t make them more obedient: it makes the more disobedient people leave the experiment! The internal validity of this experiment is completely shot. Non-response bias Non-response bias is closely related to selection bias, and to differential attrition. The simplest version of the problem goes like this. You mail out a survey to 1000 people, and only 300 of them reply. The 300 people who replied are almost certainly not a random subsample. People who respond to surveys are systematically different to people who don’t. This introduces a problem when trying to generalize from those 300 people who replied, to the population at large; since you now have a very non-random sample. The issue of non-response bias is more general than this, though. Among the (say) 300 people that did respond to the survey, you might find that not everyone answers every question. If (say) 80 people chose not to answer one of your questions, does this introduce problems? As always, the answer is maybe. If the question that wasn’t answered was on the last page of the questionnaire, and those 80 surveys were returned with the last page missing, there’s a good chance that the missing data isn’t a big deal: probably the pages just fell off. However, if the question that 80 people didn’t answer was the most confrontational or invasive personal question in the questionnaire, then almost certainly you’ve got a problem. In essence, what you’re dealing with here is what’s called the problem of missing data. If the data that is missing was “lost” randomly, then it’s not a big problem. If it’s missing systematically, then it can be a big problem. Regression to the mean Regression to the mean is a curious variation on selection bias. It refers to any situation where you select data based on an extreme value on some measure. Because the measure has natural variation, it almost certainly means that when you take a subsequent measurement, that later measurement will be less extreme than the first one, purely by chance. Here’s an example. Suppose I’m interested in whether a psychology education has an adverse effect on very smart kids. To do this, I find the 20 psych I students with the best high school grades and look at how well they’re doing at university. 
It turns out that they’re doing a lot better than average, but they’re not topping the class at university, even though they did top their classes at high school. What’s going on? The natural first thought is that the psychology classes must be having an adverse effect on those students. However, while that might very well be the explanation, it’s more likely that what you’re seeing is an example of “regression to the mean”. To see how it works, let’s take a moment to think about what is required to get the best mark in a class, regardless of whether that class be at high school or at university. When you’ve got a big class, there are going to be lots of very smart people enrolled. To get the best mark you have to be very smart, work very hard, and be a bit lucky. The exam has to ask just the right questions for your idiosyncratic skills, and you have to not make any dumb mistakes (we all do that sometimes) when answering them. And that’s the thing: intelligence and hard work are transferable from one class to the next. Luck isn’t. The people who got lucky in high school won’t be the same as the people who get lucky at university. That’s the very definition of “luck”. The consequence of this is that, when you select people at the very extreme values of one measurement (the top 20 students), you’re selecting for hard work, skill and luck. But because the luck doesn’t transfer to the second measurement (only the skill and work do), these people will all be expected to drop a little bit when you measure them a second time (at university). So their scores fall back a little bit, back towards everyone else. This is regression to the mean.

Regression to the mean is surprisingly common. For instance, if two very tall people have kids, their children will tend to be taller than average, but not as tall as the parents. The reverse happens with very short parents: two very short parents will tend to have short children, but nevertheless those kids will tend to be taller than the parents. It can also be extremely subtle. For instance, there have been studies done that suggested that people learn better from negative feedback than from positive feedback. However, the way that people tried to show this was to give people positive reinforcement whenever they did well, and negative reinforcement when they did badly. And what you see is that after the positive reinforcement, people tended to do worse; but after the negative reinforcement they tended to do better. But! Notice that there’s a selection bias here: when people do very well, you’re selecting for “high” values, and so you should expect (because of regression to the mean) that performance on the next trial should be worse, regardless of whether reinforcement is given. Similarly, after a bad trial, people will tend to improve all on their own. The apparent superiority of negative feedback is an artifact caused by regression to the mean (Kahneman and Tversky 1973).
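Regression to the mean can be hard to believe until you watch it happen, so here is a minimal simulation sketch. It is not from the text: all of the numbers and names are invented. Scores are built from a stable “skill” component plus “luck” that is drawn fresh on each test, and the top scorers on the first test come back down towards the pack on the second test even though nothing about them has changed.

```python
# A hypothetical illustration of regression to the mean (simulated data,
# invented names; not from the text).
import numpy as np

rng = np.random.default_rng(7)
n = 10000

skill = rng.normal(0, 1, size=n)           # transfers from test 1 to test 2
test1 = skill + rng.normal(0, 1, size=n)   # luck on test 1
test2 = skill + rng.normal(0, 1, size=n)   # fresh luck on test 2

top20 = np.argsort(test1)[-20:]            # the 20 best scorers on test 1
print(f"top-20 mean on test 1: {test1[top20].mean():.2f}")
print(f"same people on test 2: {test2[top20].mean():.2f}")   # noticeably lower
# They were selected partly for good luck, and luck does not carry over to the
# second measurement, so their scores fall back towards everyone else.
```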
Experimenter bias

Experimenter bias can come in multiple forms. The basic idea is that the experimenter, despite the best of intentions, can accidentally end up influencing the results of the experiment by subtly communicating the “right answer” or the “desired behaviour” to the participants. Typically, this occurs because the experimenter has special knowledge that the participant does not – either the right answer to the questions being asked, or knowledge of the expected pattern of performance for the condition that the participant is in, and so on. The classic example of this happening is the case study of “Clever Hans”, which dates back to 1907 (Pfungst 1911; Hothersall 2004). Clever Hans was a horse that apparently was able to read and count, and perform other human-like feats of intelligence. After Clever Hans became famous, psychologists started examining his behaviour more closely. It turned out that – not surprisingly – Hans didn’t know how to do maths. Rather, Hans was responding to the human observers around him, because they did know how to count: the horse had learned to change its behaviour when people changed theirs.

The general solution to the problem of experimenter bias is to engage in double blind studies, where neither the experimenter nor the participant knows which condition the participant is in, or knows what the desired behaviour is. This provides a very good solution to the problem, but it’s important to recognize that it’s not quite ideal, and hard to pull off perfectly. For instance, the obvious way that I could try to construct a double blind study is to have one of my Ph.D. students (one who doesn’t know anything about the experiment) run the study. That feels like it should be enough. The only person (me) who knows all the details (e.g., correct answers to the questions, assignments of participants to conditions) has no interaction with the participants, and the person who does all the talking to people (the Ph.D. student) doesn’t know anything. Except, that last part is very unlikely to be true. In order for the Ph.D. student to run the study effectively, they need to have been briefed by me, the researcher. And, as it happens, the Ph.D. student also knows me, and knows a bit about my general beliefs about people and psychology (e.g., I tend to think humans are much smarter than psychologists give them credit for). As a result of all this, it’s almost impossible for the experimenter to avoid knowing a little bit about what expectations I have. And even a little bit of knowledge can have an effect: suppose the experimenter accidentally conveys the fact that the participants are expected to do well in this task. Well, there’s a thing called the “Pygmalion effect”: if you expect great things of people, they’ll rise to the occasion; but if you expect them to fail, they’ll do that too. In other words, the expectations become a self-fulfilling prophecy.

Demand effects and reactivity

When talking about experimenter bias, the worry is that the experimenter’s knowledge or desires for the experiment are communicated to the participants, and that these affect people’s behaviour (Rosenthal 1966). However, even if you manage to stop this from happening, it’s almost impossible to stop people from knowing that they’re part of a psychological study. And the mere fact of knowing that someone is watching/studying you can have a pretty big effect on behaviour. This is generally referred to as reactivity or demand effects. The basic idea is captured by the Hawthorne effect: people alter their performance because of the attention that the study focuses on them. The effect takes its name from the “Hawthorne Works” factory outside of Chicago (Adair 1984). A study done in the 1920s looking at the effects of lighting on worker productivity at the factory turned out to be an effect of the fact that the workers knew they were being studied, rather than the lighting.
To get a bit more specific about some of the ways in which the mere fact of being in a study can change how people behave, it helps to think like a social psychologist and look at some of the roles that people might adopt during an experiment, but might not adopt if the corresponding events were occurring in the real world:

• The good participant tries to be too helpful to the researcher: he or she seeks to figure out the experimenter’s hypotheses and confirm them.
• The negative participant does the exact opposite of the good participant: he or she seeks to break or destroy the study or the hypothesis in some way.
• The faithful participant is unnaturally obedient: he or she seeks to follow instructions perfectly, regardless of what might have happened in a more realistic setting.
• The apprehensive participant gets nervous about being tested or studied, so much so that his or her behaviour becomes highly unnatural, or overly socially desirable.

Placebo effects

The placebo effect is a specific type of demand effect that we worry a lot about. It refers to the situation where the mere fact of being treated causes an improvement in outcomes. The classic example comes from clinical trials: if you give people a completely chemically inert drug and tell them that it’s a cure for a disease, they will tend to get better faster than people who aren’t treated at all. In other words, it is people’s belief that they are being treated that causes the improved outcomes, not the drug.

Situation, measurement and subpopulation effects

In some respects, these terms are a catch-all for “all other threats to external validity”. They refer to the fact that the choice of subpopulation from which you draw your participants, the location, timing and manner in which you run your study (including who collects the data) and the tools that you use to make your measurements might all be influencing the results. Specifically, the worry is that these things might be influencing the results in such a way that the results won’t generalize to a wider array of people, places and measures.

Fraud, deception and self-deception

It is difficult to get a man to understand something, when his salary depends on his not understanding it. – Upton Sinclair

One final thing that I feel like I should mention. While reading what the textbooks often have to say about assessing the validity of the study, I couldn’t help but notice that they seem to make the assumption that the researcher is honest. I find this hilarious. While the vast majority of scientists are honest, in my experience at least, some are not. Not only that, as I mentioned earlier, scientists are not immune to belief bias – it’s easy for a researcher to end up deceiving themselves into believing the wrong thing, and this can lead them to conduct subtly flawed research, and then hide those flaws when they write it up. So you need to consider not only the (probably unlikely) possibility of outright fraud, but also the (probably quite common) possibility that the research is unintentionally “slanted”. I opened a few standard textbooks and didn’t find much of a discussion of this problem, so here’s my own attempt to list a few ways in which these issues can arise:

• Data fabrication. Sometimes, people just make up the data. This is occasionally done with “good” intentions. For instance, the researcher believes that the fabricated data do reflect the truth, and may actually reflect “slightly cleaned up” versions of actual data.
On other occasions, the fraud is deliberate and malicious. Some high-profile examples where data fabrication has been alleged or shown include Cyril Burt (a psychologist who is thought to have fabricated some of his data), Andrew Wakefield (who has been accused of fabricating his data connecting the MMR vaccine to autism) and Hwang Woo-suk (who falsified a lot of his data on stem cell research).

• Hoaxes. Hoaxes share a lot of similarities with data fabrication, but they differ in the intended purpose. A hoax is often a joke, and many of them are intended to be (eventually) discovered. Often, the point of a hoax is to discredit someone or some field. There have been quite a few well known scientific hoaxes over the years (e.g., Piltdown Man), some of which were deliberate attempts to discredit particular fields of research (e.g., the Sokal affair).

• Data misrepresentation. While fraud gets most of the headlines, it’s much more common in my experience to see data being misrepresented. When I say this, I’m not referring to newspapers getting it wrong (which they do, almost always). I’m referring to the fact that often, the data don’t actually say what the researchers think they say. My guess is that, almost always, this isn’t the result of deliberate dishonesty, it’s due to a lack of sophistication in the data analyses. For instance, think back to the example of Simpson’s paradox that I discussed in the beginning of these notes. It’s very common to see people present “aggregated” data of some kind; and sometimes, when you dig deeper and find the raw data yourself, you find that the aggregated data tell a different story to the disaggregated data. Alternatively, you might find that some aspect of the data is being hidden, because it tells an inconvenient story (e.g., the researcher might choose not to refer to a particular variable). There are a lot of variants on this, many of which are very hard to detect.

• Study “misdesign”. Okay, this one is subtle. Basically, the issue here is that a researcher designs a study that has built-in flaws, and those flaws are never reported in the paper. The data that are reported are completely real, and are correctly analysed, but they are produced by a study that is actually quite wrongly put together. The researcher really wants to find a particular effect, and so the study is set up in such a way as to make it “easy” to (artifactually) observe that effect. One sneaky way to do this – in case you’re feeling like dabbling in a bit of fraud yourself – is to design an experiment in which it’s obvious to the participants what they’re “supposed” to be doing, and then let reactivity work its magic for you. If you want, you can add all the trappings of double blind experimentation etc. It won’t make a difference, since the study materials themselves are subtly telling people what you want them to do. When you write up the results, the fraud won’t be obvious to the reader: what’s obvious to the participant when they’re in the experimental context isn’t always obvious to the person reading the paper. Of course, the way I’ve described this makes it sound like it’s always fraud: probably there are cases where this is done deliberately, but in my experience the bigger concern has been with unintentional misdesign. The researcher believes …and so the study just happens to end up with a built-in flaw, and that flaw then magically erases itself when the study is written up for publication.

• Data mining & post hoc hypothesising.
Another way in which the authors of a study can more or less lie about what they found is by engaging in what’s referred to as “data mining”. As we’ll discuss later in the class, if you keep trying to analyse your data in lots of different ways, you’ll eventually find something that “looks” like a real effect but isn’t. This is referred to as “data mining”. It used to be quite rare because data analysis used to take weeks, but now that everyone has very powerful statistical software on their computers, it’s becoming very common. Data mining per se isn’t “wrong”, but the more that you do it, the bigger the risk you’re taking. The thing that is wrong, and I suspect is very common, is unacknowledged data mining. That is, the researcher runs every possible analysis known to humanity, finds the one that works, and then pretends that this was the only analysis that they ever conducted. Worse yet, they often “invent” a hypothesis after looking at the data, to cover up the data mining. To be clear: it’s not wrong to change your beliefs after looking at the data, and to reanalyse your data using your new “post hoc” hypotheses. What is wrong (and, I suspect, common) is failing to acknowledge that you’ve done so. If you acknowledge that you did it, then other researchers are able to take your behaviour into account. If you don’t, then they can’t. And that makes your behaviour deceptive. Bad!
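Because the size of this problem is easy to underestimate, here is a minimal simulation sketch. It is not from the text: it assumes the scipy library is available, uses significance tests of the kind discussed much later in the book, and all of the data are simulated noise. Roughly one in twenty pure-noise comparisons comes out “significant” at the conventional 0.05 level, so a researcher who quietly runs twenty analyses has a very good chance of finding at least one spurious “effect”.

```python
# A hypothetical illustration of unacknowledged data mining (simulated noise,
# invented names; not from the text). Assumes scipy is installed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_group, n_outcomes, alpha = 30, 20, 0.05

# Twenty outcome measures with no real group difference anywhere.
group_a = rng.normal(0, 1, size=(n_outcomes, n_per_group))
group_b = rng.normal(0, 1, size=(n_outcomes, n_per_group))

p_values = np.array([stats.ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b)])
print(f"spuriously 'significant' outcomes out of {n_outcomes}: {(p_values < alpha).sum()}")

# Chance of at least one false positive across k independent tests at alpha = 0.05:
for k in (1, 5, 10, 20):
    print(f"k = {k:2d}: {1 - (1 - alpha) ** k:.2f}")
```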
• Publication bias & self-censoring. Finally, a pervasive bias is “non-reporting” of negative results. This is almost impossible to prevent. Journals don’t publish every article that is submitted to them: they prefer to publish articles that find “something”. So, if 20 people run an experiment looking at whether reading Finnegans Wake causes insanity in humans, and 19 of them find that it doesn’t, which one do you think is going to get published? Obviously, it’s the one study that did find that Finnegans Wake causes insanity. This is an example of a publication bias: since no-one ever published the 19 studies that didn’t find an effect, a naive reader would never know that they existed. Worse yet, most researchers “internalize” this bias, and end up self-censoring their research. Knowing that negative results aren’t going to be accepted for publication, they never even try to report them. As a friend of mine says, “for every experiment that you get published, you also have 10 failures”. And she’s right. The catch is, while some (maybe most) of those studies are failures for boring reasons (e.g., you stuffed something up), others might be genuine “null” results that you ought to acknowledge when you write up the “good” experiment. And telling which is which is often hard to do. A good place to start is a paper by Ioannidis (2005) with the depressing title “Why most published research findings are false”. I’d also suggest taking a look at work by Kühberger, Fritz, and Scherndl (2014) presenting statistical evidence that this actually happens in psychology.

There are probably a lot more issues like this to think about, but that’ll do to start with. What I really want to point out is the blindingly obvious truth that real world science is conducted by actual humans, and only the most gullible of people automatically assumes that everyone else is honest and impartial. Actual scientists aren’t usually that naive, but for some reason the world likes to pretend that we are, and the textbooks we usually write seem to reinforce that stereotype.

1.14: Summary
This chapter isn’t really meant to provide a comprehensive discussion of psychological research methods: it would require another volume just as long as this one to do justice to the topic. However, in real life statistics and study design are tightly intertwined, so it’s very handy to discuss some of the key topics. In this chapter, I’ve briefly discussed the following topics: • Introduction to psychological measurement: What does it mean to operationalize a theoretical construct? What does it mean to have variables and take measurements? • Scales of measurement and types of variables: Remember that there are two different distinctions here: there’s the difference between discrete and continuous data, and there’s the difference between the four different scale types (nominal, ordinal, interval and ratio). • Reliability of a measurement: If I measure the “same” thing twice, should I expect to see the same result? Only if my measure is reliable. But what does it mean to talk about doing the “same” thing? Well, that’s why we have different types of reliability. Make sure you remember what they are. • Terminology: predictors and outcomes: What roles do variables play in an analysis? Can you remember the difference between predictors and outcomes? Dependent and independent variables? Etc. • Experimental and non-experimental research designs: What makes an experiment an experiment? Is it a nice white lab coat, or does it have something to do with researcher control over variables? • Validity and its threats: Does your study measure what you want it to? How might things go wrong? And is it my imagination, or was that a very long list of possible ways in which things can go wrong? All this should make clear to you that study design is a critical part of research methodology. I built this chapter from the classic little book by Campbell and Stanley (1963), but there are of course a large number of textbooks out there on research design. Spend a few minutes with your favourite search engine and you’ll find dozens. 1.16: References Adair, G. 1984. “The Hawthorne Effect: A Reconsideration of the Methodological Artifact.” Journal of Applied Psychology 69: 334–45. Bickel, P. J., E. A. Hammel, and J. W. O’Connell. 1975. “Sex Bias in Graduate Admissions: Data from Berkeley.” Science 187: 398–404. Campbell, D. T., and J. C. Stanley. 1963. Experimental and Quasi-Experimental Designs for Research. Boston, MA: Houghton Mifflin. Evans, J. St. B. T., J. L. Barston, and P. Pollard. 1983. “On the Conflict Between Logic and Belief in Syllogistic Reasoning.” Memory and Cognition 11: 295–306. Hothersall, D. 2004. History of Psychology. McGraw-Hill. Ioannidis, John P. A. 2005. “Why Most Published Research Findings Are False.” PLoS Med 2 (8): 697–701. Kahneman, D., and A. Tversky. 1973. “On the Psychology of Prediction.” Psychological Review 80: 237–51. Kühberger, A, A Fritz, and T. Scherndl. 2014. “Publication Bias in Psychology: A Diagnosis Based on the Correlation Between Effect Size and Sample Size.” Public Library of Science One 9: 1–8. Pfungst, O. 1911. Clever Hans (the Horse of Mr. Von Osten): A Contribution to Experimental Animal and Human Psychology. Translated by C. L. Rahn. New York: Henry Holt. Rosenthal, R. 1966. Experimenter Effects in Behavioral Research. New York: Appleton. Stevens, S. S. 1946. “On the Theory of Scales of Measurement.” Science 103: 677–80.
Chapter by Matthew Crump Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise. — John W. Tukey This chapter is about descriptive statistics. These are tools for describing data. Some things to keep in mind as we go along are: 1. There are lots of different ways to describe data 2. There is more than one “correct” way, and you get to choose the most “useful” way for the data that you are describing 3. It is possible to invent new ways of describing data, all of the ways we discuss were previously invented by other people, and they are commonly used because they are useful. 4. Describing data is necessary because there is usually too much of it, so it doesn’t make any sense by itself. 02: Describing Data Let’s say you wanted to know how happy people are. So, you ask thousands of people on the street how happy they are. You let them pick any number they want from negative infinity to positive infinity. Then you record all the numbers. Now what? Well, how about you look at the numbers and see if that helps you determine anything about how happy people are. What could the numbers look like. Perhaps something like this: 73 594 -22 -20 -547 162 -90 312 235 -511 -337 85 552 377 241 -382 241 -439 264 -292 -136 -262 432 835 73 -180 -93 218 597 419 -500 -120 588 -96 -412 502 1058 761 549 -320 14 -869 338 935 531 339 83 37 820 544 50 -397 203 -374 -186 518 530 1320 816 1293 580 -741 -102 -56 933 -228 -347 656 162 714 440 569 -431 557 -502 -331 -281 73 311 459 -143 -348 136 -624 55 -790 374 -988 -1102 -408 -666 671 660 452 1299 717 369 158 679 411 -593 -364 115 379 56 -440 505 -370 -102 -1020 610 -86 -181 -143 75 -188 502 606 443 74 181 -355 40 551 -362 414 -307 415 -930 -302 1416 -387 437 -126 -407 28 466 -25 -413 -286 106 257 459 703 3 1592 1042 -124 102 -578 550 -605 -41 167 -581 830 -17 200 98 472 242 -30 94 -619 -885 424 320 241 193 121 -373 -478 -398 1035 425 -199 -350 189 -394 346 -161 -355 108 -685 -668 -667 893 -623 19 879 -430 119 830 -236 -527 61 313 265 453 -565 -523 9 -413 -705 -527 237 -341 80 349 891 181 555 371 -623 -107 859 -673 855 4 117 -1225 317 279 266 24 -387 368 567 -717 717 -110 706 -40 -836 -882 48 307 1150 -917 -236 -669 -401 -274 -465 -178 104 517 635 86 186 -357 356 932 118 -51 62 -111 -154 -409 852 -91 -568 640 -48 -349 -481 511 -544 254 -641 654 -127 -563 -340 30 -293 -100 292 220 41 312 640 -628 335 -808 105 77 -674 108 -1177 -804 -318 608 954 -350 606 -394 -68 -226 161 -580 174 622 -433 -758 -49 949 496 802 -271 745 184 -41 281 -318 -323 634 -53 -307 446 245 368 163 -489 -124 -258 -463 357 -465 -321 628 1055 -11 -177 -28 139 -531 134 -400 -182 -298 153 -206 946 534 295 543 350 184 -311 1109 -174 1169 -175 88 804 -555 -269 -376 1199 -463 1078 -384 -804 2 -29 219 -467 375 503 1717 264 -177 -222 1125 -738 569 -335 581 364 -36 -523 847 -1189 -379 -704 -654 51 -136 303 609 -200 675 286 353 67 -993 -181 1198 -508 77 58 -53 -510 -343 657 1303 -300 804 -376 421 73 -165 -238 409 470 648 127 347 -296 659 280 1397 -715 979 -793 565 -102 510 333 -848 571 -297 630 286 -512 275 468 -314 -246 -212 603 -152 -474 428 -315 -38 -53 -324 -225 -46 -89 316 341 516 -655 613 249 334 94 -66 -688 101 -128 -422 424 326 -287 417 -605 357 -959 -149 387 -39 -104 -596 55 -25 -26 -533 -667 280 863 215 -182 397 333 -56 36 -118 -329 44 -1 354 -545 630 460 458 30 Now, what are you going to with that big pile of numbers? Look at it all day long? 
When you deal with data, it will deal you so many numbers that you will be overwhelmed by them. That is why we need ways to describe the data in a more manageable fashion. The complete description of the data is always the data itself. Descriptive statistics and other tools for describing data go one step further: they summarize aspects of the data. Summaries are a way to compress the important bits of a thing down to a useful and manageable tidbit. It’s like telling your friends why they should watch a movie: you don’t replay the entire movie for them; instead, you hit the highlights. Summarizing data is like a movie preview, only for data.
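To preview where this chapter is headed, here is a minimal sketch in R of the kind of summary we are after. It assumes the 500 ratings are stored in a vector called happiness; the simulated numbers below are only stand-ins for the pretend survey, not the actual values used in the book.

```
# A minimal sketch, assuming the happiness ratings live in a vector.
# Simulated stand-in numbers; a real survey would supply its own.
set.seed(1)
happiness <- rnorm(500, mean = 100, sd = 400)

length(happiness)   # how many numbers we collected
range(happiness)    # smallest and largest rating
mean(happiness)     # a measure of central tendency
sd(happiness)       # a measure of variation
```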
We already tried one way of looking at the numbers, and it wasn’t useful. Let’s look at some other ways of looking at the numbers, using graphs. Stop, plotting time (o o oh). U can plot this.

Let’s turn all of the numbers into dots, then show them in a graph. Note, when we do this, we have not yet summarized anything about the data. Instead, we just look at all of the data in a visual format, rather than looking at the numbers. Figure \(1\) shows 500 measurements of happiness. The graph has two axes. The horizontal x-axis, going from left to right, is labeled “Index”. The vertical y-axis, going up and down, is labelled “happiness”. Each dot represents one person’s happiness measurement from our pretend study.

Before we talk about what we can and cannot see about the data, it is worth mentioning that the way you plot the data will make some things easier to see and some things harder to see. So, what can we now see about the data? There are lots of dots everywhere. It looks like there are 500 of them because the index goes to 500. It looks like some dots go as high as 1000-1500 and as low as -1500. It looks like there are more dots in the middle-ish area of the plot, sort of spread about 0. Take home: we can see all the numbers at once by putting them in a plot, and that is much easier and more helpful than looking at the raw numbers.

OK, so if these dots represent how happy 500 people are, what can we say about those people? First, the dots are kind of all over the place, so different people have different levels of happiness. Are there any trends? Are more people happy than unhappy, or vice-versa? It’s hard to see that in the graph, so let’s make a different one, called a histogram.

Histograms

Making a histogram will be our first act of officially summarizing something about the data. We will no longer look at the individual bits of data; instead, we will see how the numbers group together. Let’s look at a histogram of the happiness data, and then explain it.

The dots have disappeared, and now we see some bars. Each bar is a summary of the dots, representing the number of dots (frequency count) inside a particular range of happiness, also called a bin. For example, how many people gave a happiness rating between 0 and 500? The fifth bar, the one between 0 and 500 on the x-axis, tells you how many. Look how tall that bar is. How tall is it? The height is shown on the y-axis, which provides a frequency count (the number of dots or data points). It looks like around 150 people said their happiness was between 0 and 500.

More generally, we see there are many bins on the x-axis. We have divided the data into bins of 500. Bin #1 goes from -2000 to -1500, bin #2 goes from -1500 to -1000, and so on until the last bin. To make the histogram, we just count up the number of data points falling inside each bin, then plot those frequency counts as a function of the bins. Voila, a histogram.

What does the histogram help us see about the data? First, we can see the shape of the data. The shape of the histogram refers to how it goes up and down. The shape tells us where the data is. For example, when the bars are low we know there isn’t much data there. When the bars are high, we know there is more data there. So, where is most of the data? It looks like it’s mostly in the middle two bins, between -500 and 500. We can also see the range of the data. This tells us the minimums and the maximums of the data. Most of the data is between -1500 and +1500, so no infinite sadness or infinite happiness in our data-set.
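If you want to draw these plots yourself, here is a small sketch using base R graphics (the book's own figures were made differently); it again assumes a stand-in happiness vector like the one above.

```
# Stand-in data; substitute the real ratings if you have them
set.seed(1)
happiness <- rnorm(500, mean = 100, sd = 400)

# Dot plot: one dot per person, index on the x-axis, happiness on the y-axis
plot(x = seq_along(happiness), y = happiness,
     xlab = "Index", ylab = "happiness")

# Histogram: count how many ratings fall into each 500-wide bin
hist(happiness, breaks = seq(-3000, 3000, by = 500),
     xlab = "happiness", main = "Histogram of happiness")
```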
When you make a histogram you get to choose how wide each bar will be. For example, below are four different histograms of the very same happiness data. What changes is the width of the bins. All of the histograms have roughly the same overall shape: from left to right, the bars start off small, then go up, then get small again. In other words, as the numbers get closer to zero, they start to occur more frequently. We see this general trend across all the histograms. But, some aspects of the trend fall apart when the bars get really narrow. For example, although the bars generally get taller when moving from -1000 to 0, there are some exceptions and the bars seem to fluctuate a little bit. When the bars are wider, there are fewer exceptions to the general trend. How wide or narrow should your histogram be? It’s a Goldilocks question. Make it just right for your data.

2.03: Important Ideas - Distribution, Central Tendency, and Variance

Let’s introduce three important terms we will use a lot: distribution, central tendency, and variance. These terms are similar to their everyday meanings (although I suspect most people don’t say central tendency very often).

Distribution. When you order something from Amazon, where does it come from, and how does it get to your place? That stuff comes from one of Amazon’s distribution centers. They distribute all sorts of things by spreading them around to your doorstep. To “distribute” is to spread something. Notice, the data in the histogram is distributed, or spread, across the bins. We can also talk about a distribution as a noun. The histogram is a distribution of the frequency counts across the bins. Distributions are very, very, very, very, very important. They can have many different shapes. They can describe data, like in the histogram above. And as we will learn in later chapters, they can produce data. Many times we will be asking questions about where our data came from, and this usually means asking what kind of distribution could have created our data (more on that later).

Central Tendency is all about sameness: What is common about some numbers? For example, is there anything similar about all of the numbers in the histogram? Yes, we can say that most of them are near 0. There is a tendency for most of the numbers to be centered near 0. Notice we are being cautious about our generalization about the numbers. We are not saying they are all 0. We are saying there is a tendency for many of them to be near zero. There are lots of ways to talk about the central tendency of some numbers. There can even be more than one kind of tendency. For example, if lots of the numbers were around -1000, and a similarly large amount of numbers were grouped around 1000, we could say there were two tendencies.

Variance is all about differentness: What is different about some numbers? For example, is there anything different about all of the numbers in the histogram? YES!!! The numbers are not all the same! When the numbers are not all the same, they must vary. So, the variance in the numbers refers to how the numbers are different. There are many ways to summarize the amount of variance in the numbers, and we discuss these very soon.
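The bin-width comparison is also easy to reproduce. A sketch, assuming the same stand-in happiness vector; the four widths used here (1000, 500, 250, 100) are illustrative choices, not necessarily the ones behind the book's figure.

```
set.seed(1)
happiness <- rnorm(500, mean = 100, sd = 400)

par(mfrow = c(2, 2))  # arrange four plots in a 2 x 2 grid
for (width in c(1000, 500, 250, 100)) {
  hist(happiness, breaks = seq(-3000, 3000, by = width),
       main = paste("bin width =", width), xlab = "happiness")
}
par(mfrow = c(1, 1))  # reset the plotting layout
```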
We’ve seen that we can get a sense of data by plotting dots in a graph, and by making a histogram. These tools show us what the numbers look like, approximately how big and small they are, and how similar and different they are from one another. It is good to get a feeling about the numbers in this way. But, these visual impressions are not very precise. In addition to summarizing numbers with graphs, we can summarize numbers using numbers (NO, please not more numbers, we promise numbers can be your friend).

From many numbers to one

Measures of central tendency have one important summary goal: to reduce a pile of numbers to a single number that we can look at. We already know that looking at thousands of numbers is hopeless. Wouldn’t it be nice if we could just look at one number instead? We think so. It turns out there are lots of ways to do this. Then, if your friend ever asks the frightening question, “hey, what are all these numbers like?”, you can say they are like this one number right here. But, just like in Indiana Jones and the Last Crusade (highly recommended movie), you must choose your measure of central tendency wisely.

Mode

The mode is the most frequently occurring number in your measurement. That is it. How do you find it? You have to count the number of times each number appears in your measure; then, whichever one occurs the most is the mode.

Example: 1 1 1 2 3 4 5 6

The mode of the above set is 1, which occurs three times. Every other number only occurs once. OK fine. What happens here:

Example: 1 1 1 2 2 2 3 4 5 6

Hmm, now 1 and 2 both occur three times each. What do we do? We say there are two modes, and they are 1 and 2.

Why is the mode a measure of central tendency? Well, when we ask, “what are my numbers like?”, we can say, “most of the numbers are like a 1 (or whatever the mode is)”.

Is the mode a good measure of central tendency? That depends on your numbers. For example, consider these numbers:

1 1 2 3 4 5 6 7 8 9

Here, the mode is 1 again, because there are two 1s, and all of the other numbers occur once. But, are most of the numbers like a 1? No, they are mostly not 1s. “Argh, so should I or should I not use the mode? I thought this class was supposed to tell me what to do?”. There is no telling you what to do. Every time you use a tool in statistics you have to think about what you are doing and justify why what you are doing makes sense. Sorry.

Median

The median is the exact middle of the data. After all, we are asking about central tendency, so why not go to the center of the data and see where we are. What do you mean middle of the data? Let’s look at these numbers:

1 5 4 3 6 7 9

Umm, OK. So, three is in the middle? Isn’t that kind of arbitrary? Yes. Before we can compute the median, we need to order the numbers from smallest to largest.

1 3 4 5 6 7 9

Now, 5 is in the middle. And, by middle we mean in the middle. There are three numbers to the left of 5, and three numbers to the right. So, five is definitely in the middle.

OK fine, but what happens when there is an even number of numbers? Then there is no single number in the middle, right? Let’s see:

1 2 3 4 5 6

There is no number between 3 and 4 in the data; the middle is empty. In this case, we compute the median by figuring out the number halfway between 3 and 4. So, the median would be 3.5.

Is the median a good measure of central tendency? Sure, it is often very useful. One property of the median is that it stays in the middle even when some of the other numbers get really weird.
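Here is a small R sketch for the mode and the median. R has no built-in function for the statistical mode (its mode() reports a storage type), so we define a tiny helper; find_modes is our own, hypothetical name.

```
# R's built-in mode() reports a storage type, not the statistical mode,
# so we write a tiny helper (find_modes is our own, hypothetical name)
find_modes <- function(x) {
  counts <- table(x)
  as.numeric(names(counts)[counts == max(counts)])
}

find_modes(c(1, 1, 1, 2, 3, 4, 5, 6))        # 1
find_modes(c(1, 1, 1, 2, 2, 2, 3, 4, 5, 6))  # 1 and 2 (two modes)

# The median handles the ordering for you, and averages the middle pair
# when there is an even number of numbers
median(c(1, 5, 4, 3, 6, 7, 9))   # 5
median(c(1, 2, 3, 4, 5, 6))      # 3.5
```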
For example, consider these numbers:

1 2 3 4 4 4 5 6 6 6 7 7 1000

Most of these numbers are smallish, but the 1000 is a big old weird number, very different from the rest. The median is still 5, because it is in the middle of these ordered numbers. We can also see that five is pretty similar to most of the numbers (except for 1000). So, the median does a pretty good job of representing most of the numbers in the set, and it does so even if one or two of the numbers are very different from the others. Finally, outlier is a term we will use to describe numbers that appear in data that are very different from the rest. 1000 is an outlier, because it lies way out there on the number line compared to the other numbers. What to do with outliers is another topic we discuss sometimes throughout this course.

Mean

Have you noticed this is a textbook about statistics that hasn’t used a formula yet? That is about to change, but for those of you with formula anxiety, don’t worry, we will do our best to explain them. The mean is also called the average. And, we’re guessing you might already know what the average of a bunch of numbers is: it’s the sum of the numbers, divided by the number of numbers, right? How do we express that idea in a formula? Just like this:

$\text{Mean} = \bar{X} = \frac{\sum_{i=1}^{N} x_{i}}{N} \label{mean}$

“That looks like Greek to me”. Yup. The $\sum$ symbol is called sigma, and it stands for the operation of summing. The little “$i = 1$” on the bottom and the “$N$” on the top mean that we sum over all of the numbers in the set, from the first number ($i = 1$) to the last number (the $N$th). The letters are just arbitrary labels, called variables, that we use for descriptive purposes. The $x_{i}$ refers to the individual numbers in the set. We sum up all of the numbers, then divide the sum by $N$, which is the total number of numbers. Sometimes you will see $\bar{X}$ used to refer to the mean of all of the numbers.

In plain English, the formula looks like:

$\text{Mean} = \dfrac{\text{Sum of my numbers}}{\text{Count of my numbers}} \nonumber$

“Well, why didn’t you just say that?”. We just did in Equation \ref{mean}.

Let’s compute the mean for these five numbers:

3 7 9 2 6

Add em up: 3+7+9+2+6 = 27

Count em up: $x_{1} = 3$, $x_{2} = 7$, $x_{3} = 9$, $x_{4} = 2$, $x_{5} = 6$; $N = 5$, because $i$ went from 1 to 5

Divide em: mean = 27 / 5 = 5.4

Or, to put the numbers in the formula, it looks like this:

$\text{Mean} = \bar{X} = \frac{\sum_{i=1}^{N} x_{i}}{N} = \frac{3+7+9+2+6}{5} = \frac{27}{5} = 5.4 \nonumber$

OK fine, that is how to compute the mean. But, like we imagined, you probably already knew that, and if you didn’t, that’s OK, now you do. What’s next? Is the mean a good measure of central tendency? By now, you should know: it depends.

What does the mean mean?

It is not enough to know the formula for the mean, or to be able to use the formula to compute a mean for a set of numbers. We believe in your ability to add and divide numbers. What you really need to know is what the mean really “means”. This requires that you know what the mean does, and not just how to do it. Puzzled? Let’s explain.

Can you answer this question: What happens when you divide a sum of numbers by the number of numbers? What are the consequences of doing this? What is the formula doing? What kind of properties does the result give us? FYI, the answer is not that we compute the mean.

OK, so what happens when you divide any number by another number? Of course, the key word here is divide.
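You can check this arithmetic in R; the hand-rolled version below just follows the formula, and the built-in mean() gives the same answer.

```
x <- c(3, 7, 9, 2, 6)

sum(x)              # 27, the "add em up" step
length(x)           # 5, the "count em up" step
sum(x) / length(x)  # 5.4, the mean by hand
mean(x)             # 5.4, the built-in shortcut
```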
We literally carve the number up top in the numerator into pieces. How many times do we split the top number? That depends on the bottom number in the denominator. Watch:

$\frac{12}{3} = 4 \nonumber$

So, we know the answer is 4. But, what is really going on here is that we are slicing and dicing up 12, aren’t we? Yes, and we are slicing 12 into three parts. It turns out the size of those three parts is 4. So, now we are thinking of 12 as three different pieces: $12 = 4 + 4 + 4$. I know this will be obvious, but what kind of properties do our pieces have? You mean the fours? Yup. Well, obviously they are all fours. Yes. The pieces are all the same size. They are all equal. So, division equalizes the numerator by the denominator…

“Umm, I think I learned this in elementary school, what does this have to do with the mean?”. The number on top of the formula for the mean is just another numerator being divided by a denominator, isn’t it? In this case, the numerator is a sum of all the values in your data. What if it was the sum of all of the 500 happiness ratings? The sum of all of them would just be a single number adding up all the different ratings. If we split the sum up into equal parts, one part for each person’s happiness, what would we get? We would get 500 identical and equal numbers, one for each person. It would be like taking all of the happiness in the world, then dividing it up equally, then, to be fair, giving back the same equal amount of happiness to everyone in the world. This would make some people more happy than they were before, and some people less happy, right? Of course, that’s because it would be equalizing the distribution of happiness for everybody. This process of equalization by dividing something into equal parts is what the mean does. See, it’s more than just a formula. It’s an idea. This is just the beginning of thinking about these kinds of ideas. We will come back to this idea about the mean, and other ideas, in later chapters.

Pro tip: The mean is the one and only number that can take the place of every number in the data, such that when you add up all the equal parts, you get back the original sum of the data.

All together now

Just to remind ourselves of the mode, median, and mean, take a look at the next histogram. We have overlaid the location of the mean (red), median (green), and mode (blue). For this dataset, the three measures of central tendency all give different answers. The mean is the largest because it is influenced by large numbers, even if they occur rarely. The mode and median are insensitive to large numbers that occur infrequently, so they have smaller values.
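The pro tip is easy to verify numerically. A quick sketch, reusing the five numbers from the example above.

```
x <- c(3, 7, 9, 2, 6)

# Replace every number with the mean, and the total stays the same
sum(x)                        # 27
sum(rep(mean(x), length(x)))  # 27 as well: 5.4 + 5.4 + 5.4 + 5.4 + 5.4

# No other single value has this property; the median, for example, does not
sum(rep(median(x), length(x)))  # 30, not 27
```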
What did you do when you wrote essays in high school about a book you read? Probably compare and contrast something right? When you summarize data, you do the same thing. Measures of central tendency give us something like comparing does, they tell us stuff about what is the same. Measures of variation give us something like contrasting does, they tell us stuff about what is different.

First, we note that whenever you see a bunch of numbers that aren’t the same, you already know there are some differences. This means the numbers vary, and there is variation in the size of the numbers.

The Range

Consider these 10 numbers, that I already ordered from smallest to largest for you:

1 3 4 5 5 6 7 8 9 24

The numbers have variation, because they are not all the same. We can use the range to describe the width of the variation. The range refers to the minimum (smallest value) and maximum (largest value) in the set. So, the range would be 1 and 24.

The range is a good way to quickly summarize the boundaries of your data in just two numbers. By computing the range we know that none of the data is larger or smaller than the range. And, it can alert you to outliers. For example, if you are expecting your numbers to be between 1 and 7, but you find the range is 1 - 340,500, then you know you have some big numbers that shouldn’t be there, and then you can try to figure out why those numbers occurred (and potentially remove them if something went wrong).

The Difference Scores

It would be nice to summarize the amount of differentness in the data. Here’s why. If you thought that raw data (lots of numbers) is too big to look at, then you will be frightened to contemplate how many differences there are to look at. For example, these 10 numbers are easy to look at:

1 3 4 5 5 6 7 8 9 24

But, what about the difference between the numbers, what do those look like? We can compute the difference scores between each number, then put them in a matrix like the one below:

```
numbers <- c(1, 3, 4, 5, 5, 6, 7, 8, 9, 24)
mat <- matrix(rep(numbers, 10), ncol = 10)
differences <- t(mat) - numbers
row.names(differences) <- numbers
colnames(differences) <- numbers
knitr::kable(differences, row.names = T)
```

| | 1| 3| 4| 5| 5| 6| 7| 8| 9| 24|
|:--|---:|---:|---:|---:|---:|---:|---:|---:|---:|--:|
|1 | 0| 2| 3| 4| 4| 5| 6| 7| 8| 23|
|3 | -2| 0| 1| 2| 2| 3| 4| 5| 6| 21|
|4 | -3| -1| 0| 1| 1| 2| 3| 4| 5| 20|
|5 | -4| -2| -1| 0| 0| 1| 2| 3| 4| 19|
|5 | -4| -2| -1| 0| 0| 1| 2| 3| 4| 19|
|6 | -5| -3| -2| -1| -1| 0| 1| 2| 3| 18|
|7 | -6| -4| -3| -2| -2| -1| 0| 1| 2| 17|
|8 | -7| -5| -4| -3| -3| -2| -1| 0| 1| 16|
|9 | -8| -6| -5| -4| -4| -3| -2| -1| 0| 15|
|24 | -23| -21| -20| -19| -19| -18| -17| -16| -15| 0|

We are looking at all of the possible differences between each number and every other number. So, in the top left, the difference between 1 and itself is 0. One column over to the right, the difference between 3 and 1 (3-1) is 2, etc. As you can see, this is a 10x10 matrix, which means there are 100 differences to look at. Not too bad, but if we had 500 numbers, then we would have 500*500 = 250,000 differences to look at (go for it if you like looking at that sort of thing).

Pause for a simple question. What would this matrix look like if all of the 10 numbers in our data were the same number? It should look like a bunch of 0s right? Good. In that case, we could easily see that the numbers have no variation. But, when the numbers are different, we can see that there is a very large matrix of difference scores. How can we summarize that?
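As a side note, the same all-pairs matrix can be built more compactly with R's outer() function. This is just an alternative sketch, not the book's code; the minus sign flips things so the entries match the table above (column value minus row value).

```
numbers <- c(1, 3, 4, 5, 5, 6, 7, 8, 9, 24)

# differences[i, j] = numbers[j] - numbers[i], matching the table above
differences <- -outer(numbers, numbers, FUN = "-")
rownames(differences) <- numbers
colnames(differences) <- numbers
differences
```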
How about we apply what we learned from the previous section on measures of central tendency. We have a lot of differences, so we could ask something like, what is the average difference that we have? So, we could just take all of our differences, and compute the mean difference, right? What do you think would happen if we did that?

Let’s try it out on these three numbers: 1 2 3

| | 1| 2| 3|
|:--|---:|---:|---:|
|1 | 0| 1| 2|
|2 | -1| 0| 1|
|3 | -2| -1| 0|

You might already guess what is going to happen. Let’s compute the mean:

$\text{mean of difference scores} = \frac{0+1+2-1+0+1-2-1+0}{9} = \frac{0}{9} = 0 \nonumber$

Uh oh, we get zero for the mean of the difference scores. This will always happen whenever you take the mean of the difference scores. We can see that there are some differences between the numbers, so using 0 as the summary value for the variation in the numbers doesn’t make much sense.

Furthermore, you might also notice that the matrices of difference scores are redundant. The diagonal is always zero, and the numbers on one side of the diagonal are the same as the numbers on the other side, except their signs are reversed. So, that’s one reason why the difference scores add up to zero.

These are little problems that can be solved by computing the variance and the standard deviation. For now, the standard deviation is just a trick that we use to avoid getting a zero. But, later we will see it has properties that are important for other reasons.

The Variance

Variability, variation, variance, vary, variable, varying, variety. Confused yet? Before we describe the variance, we want you to be OK with how this word is used. First, don’t forget the big picture. We know that variability and variation refer to the big idea of differences between numbers. We can even use the word variance in the same way. When numbers are different, they have variance.

Note

The formulas for variance and standard deviation depend on whether you think your data represents an entire population of numbers, or is a sample from the population. We discuss this issue later on. For now, we divide by N; later we discuss why you will often divide by N-1 instead.

The word variance also refers to a specific summary statistic: the average of the squared deviations from the mean. Hold on, what? Plain English please. The variance is the mean of the squared difference scores, where the difference scores are computed between each score and the mean. What are these scores? The scores are the numbers in the data set. Let’s see the formula in English first:

$\textit{variance} = \frac{\text{Sum of squared difference scores}}{\text{Number of Scores}} \nonumber$

Deviations from the mean, Difference scores from the mean

We got a little bit complicated before when we computed the difference scores between all of the numbers in the data. Let’s do it again, but in a more manageable way. This time, we calculate the difference between each score and the mean. The idea here is:

1. We can figure out how similar our scores are by computing the mean
2. Then we can figure out how different our scores are from the mean

This could tell us, 1) something about whether our scores are really all very close to the mean (which could help us know if the mean is a good representative number for the data), and 2) something about how much difference there is in the numbers.
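You can confirm the always-zero result for any set of numbers you like; a quick sketch using outer() to build every pairwise difference at once.

```
numbers <- c(1, 3, 4, 5, 5, 6, 7, 8, 9, 24)

differences <- outer(numbers, numbers, FUN = "-")  # all pairwise differences
mean(differences)   # 0: the positive and negative differences cancel out
```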
Take a look at this table:

|scores | values| mean| Difference_from_Mean|
|:------|------:|----:|--------------------:|
|1 | 1| 4.5| -3.5|
|2 | 6| 4.5| 1.5|
|3 | 4| 4.5| -0.5|
|4 | 2| 4.5| -2.5|
|5 | 6| 4.5| 1.5|
|6 | 8| 4.5| 3.5|
|Sums | 27| 27| 0|
|Means | 4.5| 4.5| 0|

The first column shows we have 6 scores in the data set, and the values column shows each score. The sum of the values and the mean are presented in the last two rows. The sum and the mean were obtained by:

$\frac{1+6+4+2+6+8}{6} = \frac{27}{6} = 4.5 \nonumber$

The third column, mean, appears a bit silly. We are just listing the mean once for every score. If you think back to our discussion about the meaning of the mean, then you will remember that it equally distributes the total sum across each data point. We can see that here: if we treat each score as the mean, then every score is a 4.5. We can also see that adding up all of the means for each score gives us back 27, which is the sum of the original values. Also, we see that if we find the mean of the mean scores, we get back the mean (4.5 again).

All of the action is occurring in the fourth column, Difference_from_Mean. Here, we are showing the difference scores from the mean, using $X_{i}-\bar{X}$. In other words, we subtracted the mean from each score. So, the first score, 1, is -3.5 from the mean; the second score, 6, is +1.5 from the mean; and so on.

Now, we can look at our original scores and we can look at their differences from the mean. Notice, we don’t have a matrix of raw difference scores, so it is much easier to look at. But, we still have a problem: We can see that there are non-zero values in the difference scores, so we know there are differences in the data. But, when we add them all up, we still get zero, which makes it seem like there are a total of zero differences in the data… Why does this happen… and what can we do about it?

The mean is the balancing point in the data

One brief pause here to point out another wonderful property of the mean. It is the balancing point in the data. If you take a pen or pencil and try to balance it on your finger so it lies flat, what are you doing? You need to find the center of mass in the pen, so that half of it is on one side, and the other half is on the other side. That’s how balancing works. One side = the other side.

We can think of data as having mass or weight to it. If we put our data on our bathroom scale, we could figure out how heavy it was by summing it up. If we wanted to split the data down the middle so that half of the weight was equal to the other half, then we could balance the data on top of a pin. The mean of the data tells you where to put the pin. It is the location in the data where the difference scores on one side add up to the same amount as the difference scores on the other side, just with the opposite sign.

If we think this through, it means that the sum of the difference scores from the mean will always add up to zero. This is because the difference scores on one side of the mean will always add up to -x (whatever the sum of those scores is), and the difference scores on the other side of the mean will always add up to +x (which will be the same value, only positive). And: $-x + x = 0$, right? Right.

The squared deviations

Some devious someone divined a solution to the fact that difference scores from the mean always add up to zero. Can you think of any solutions? For example, what could you do to the difference scores so that you could add them up, and they would weigh something useful, that is, they would not be zero? The devious solution is to square the numbers. Squaring numbers converts all the negative numbers to positive numbers.
For example, $2^2 = 4$, and $(-2)^2 = 4$. Remember how squaring works: we multiply the number by itself, $2^2 = 2*2 = 4$, and $(-2)^2 = -2*-2 = 4$. We use the term squared deviations to refer to difference scores that have been squared. Deviations are things that move away from something. The difference scores move away from the mean, so we also call them deviations.

Let’s look at our table again, but add the squared deviations.

|scores | values| mean| Difference_from_Mean| Squared_Deviations|
|:------|------:|----:|--------------------:|------------------:|
|1 | 1| 4.5| -3.5| 12.25|
|2 | 6| 4.5| 1.5| 2.25|
|3 | 4| 4.5| -0.5| 0.25|
|4 | 2| 4.5| -2.5| 6.25|
|5 | 6| 4.5| 1.5| 2.25|
|6 | 8| 4.5| 3.5| 12.25|
|Sums | 27| 27| 0| 35.5|
|Means | 4.5| 4.5| 0| 5.9166667|

OK, now we have a new column called Squared_Deviations. These are just the difference scores squared. So, $(-3.5)^2 = 12.25$, etc. You can confirm for yourself with your cellphone calculator.

Now that all of the squared deviations are positive, we can add them up. When we do this we create something very special called the sum of squares (SS), also known as the sum of the squared deviations from the mean. We will talk at length about this SS later on in the ANOVA chapter. So, when you get there, remember that you already know what it is, just some sums of some squared deviations, nothing fancy.

Finally, the variance

Guess what, we already computed the variance. It already happened, and maybe you didn’t notice. “Wait, I missed that, what happened?”. First, see if you can remember what we are trying to do here. Take a pause, and see if you can tell yourself what problem we are trying to solve.

pause

Without further ado, we are trying to get a summary of the differences in our data. There are just as many difference scores from the mean as there are data points, which can be a lot, so it would be nice to have a single number to look at, something like a mean, that would tell us about the average differences in the data.

If you look at the table, you can see we already computed the mean of the squared deviations. First, we found the sum (SS), then below that we calculated the mean = 5.9166 repeating. This is the variance. The variance is the mean of the squared deviations: $\textit{variance} = \frac{SS}{N}$, where SS is the sum of the squared deviations, and N is the number of observations.

OK, now what. What do I do with the variance? What does this number mean? Good question. The variance is often an unhelpful number to look at. Why? Because it is not in the same scale as the original data. This is because we squared the difference scores before taking the mean. Squaring produces large numbers. For example, we see a 12.25 in there. That’s a big difference, bigger than any difference between any two original values. What to do? How can we bring the numbers back down to their original unsquared size? If you are thinking about taking the square root, that’s a ding ding ding, correct answer for you. We can always unsquare anything by taking the square root. So, let’s do that to the variance: $\sqrt{35.5/6} \approx 2.43$.

The Standard Deviation

Oops, we did it again. We already computed the standard deviation, and we didn’t tell you. The standard deviation is the square root of the variance… At least, it is right now, until we complicate matters for you in the next chapter.
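The whole table can be reproduced in a few lines of R. This is a sketch of the calculation, not the book's own code, using the same six scores.

```
values <- c(1, 6, 4, 2, 6, 8)

deviations         <- values - mean(values)  # difference scores from the mean
squared_deviations <- deviations^2

sum(deviations)          # 0: the deviations always cancel out
SS <- sum(squared_deviations)
SS                       # 35.5, the sum of squares
SS / length(values)      # 5.9166..., the variance (dividing by N)
```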
Here is the formula for the standard deviation:

$\text{standard deviation} = \sqrt{\textit{Variance}} = \sqrt{\frac{SS}{N}} \nonumber$

We could also expand this to say:

$\text{standard deviation} = \sqrt{\frac{\sum_{i=1}^{N}(X_{i}-\bar{X})^2}{N}} \nonumber$

Don’t let those big square root signs put you off. Now, you know what they are doing there: just bringing our measure of the variance back down to the original size of the data. Let’s look at our table again:

|scores | values| mean| Difference_from_Mean| Squared_Deviations|
|:------|------:|----:|--------------------:|------------------:|
|1 | 1| 4.5| -3.5| 12.25|
|2 | 6| 4.5| 1.5| 2.25|
|3 | 4| 4.5| -0.5| 0.25|
|4 | 2| 4.5| -2.5| 6.25|
|5 | 6| 4.5| 1.5| 2.25|
|6 | 8| 4.5| 3.5| 12.25|
|Sums | 27| 27| 0| 35.5|
|Means | 4.5| 4.5| 0| 5.9166667|

We measured the standard deviation to be approximately $2.43$. Notice this number fits right in with the difference scores from the mean. All of the scores are kind of in and around + or - $2.43$. Whereas, if we looked at the variance, 5.92 is just too big; it doesn’t summarize the actual differences very well.

What does all this mean? Well, if someone told you they had some numbers with a mean of 4.5 (like the values in our table), and a standard deviation of about $2.43$, you would get a pretty good summary of the numbers. You would know that many of the numbers are around 4.5, and you would know that not all of the numbers are 4.5. You would know that the numbers spread around 4.5. You also know that the spread isn’t super huge; it’s only + or - $2.43$ on average. That’s a good starting point for describing numbers. If you had loads of numbers, you could reduce them down to the mean and the standard deviation, and still be pretty well off in terms of getting a sense of those numbers.
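One caution if you check these numbers in R: the built-in var() and sd() divide by N-1 rather than N (the sample versions previewed in the Note earlier), so they will not exactly match the hand calculation above. A sketch:

```
values <- c(1, 6, 4, 2, 6, 8)
N  <- length(values)
SS <- sum((values - mean(values))^2)

sqrt(SS / N)        # about 2.43, the standard deviation with an N divisor
sd(values)          # about 2.66, because R divides by N - 1 instead
sqrt(SS / (N - 1))  # matches sd(values)
```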
Remember, you will be learning how to compute descriptive statistics using software in the labs. Check out the lab manual exercises for descriptives to see some examples of working with real data.

2.07: Rolling your own descriptive statistics

We spent many paragraphs talking about variation in numbers, and how to calculate the variance and standard deviation to summarize the average differences between numbers in a data set. The basic process was to 1) calculate some measure of the differences, then 2) average the differences to create a summary. We found that we couldn’t average the raw difference scores, because we would always get a zero. So, we squared the differences from the mean, then averaged the squared differences. Finally, we square rooted our measure to bring the summary back down to the scale of the original numbers.

Perhaps you haven’t heard, but there is more than one way to skin a cat; we prefer to think of this in terms of petting cats, because some of us love cats. Jokes aside, perhaps you were also thinking that the problem of summing difference scores (so that they don’t equal zero) can be solved in more than one way. Can you think of a different way, besides squaring?

Absolute deviations

How about just taking the absolute value of the difference scores? Remember, the absolute value converts any number to a positive value. Check out the following table:

|scores | values| mean| Difference_from_Mean| Absolute_Deviations|
|:------|------:|----:|--------------------:|-------------------:|
|1 | 1| 4.5| -3.5| 3.5|
|2 | 6| 4.5| 1.5| 1.5|
|3 | 4| 4.5| -0.5| 0.5|
|4 | 2| 4.5| -2.5| 2.5|
|5 | 6| 4.5| 1.5| 1.5|
|6 | 8| 4.5| 3.5| 3.5|
|Sums | 27| 27| 0| 13|
|Means | 4.5| 4.5| 0| 2.1666667|

This works pretty well too. By converting the difference scores from the mean to positive values, we can now add them up and get a non-zero value (if there are differences). Then, we can find the mean of the absolute deviations.

If we were to map the terms sum of squares (SS), variance, and standard deviation onto these new measures based on the absolute deviation, how would the mapping go? For example, what value in the table corresponds to the SS? That would be the sum of the absolute deviations in the last column. How about the variance and standard deviation, what do those correspond to? Remember that the variance is a mean ($SS/N$), and the standard deviation is a square-rooted mean ($\sqrt{SS/N}$). In the table above we only have one corresponding mean, the mean of the absolute deviations. So, we have a variance-like measure that does not need to be square rooted. We might say the mean absolute deviation is doing double duty as a variance and a standard deviation. Neat.

Other sign-inverting operations

In principle, we could create lots of different summary statistics for variance that solve the summing-to-zero problem. For example, we could raise every difference score to any even-numbered power beyond 2 (which is the square). We could use 4, 6, 8, 10, etc. There is an infinity of even numbers, so there is an infinity of possible variance statistics. We could also use odd numbers as powers, and then take their absolute value. Many things are possible. The important aspect to any of this is to have a reason for what you are doing, and to choose a method that works for the data-analysis problem you are trying to solve. Note also, we bring up this general issue because we want you to understand that statistics is a creative exercise. We invent things when we need them, and we use things that have already been invented when they work for the problem at hand.
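The absolute-deviation column is just as easy to compute. A sketch with the same six scores; note that R's built-in mad() is a different, median-based statistic, so the mean version is spelled out by hand.

```
values <- c(1, 6, 4, 2, 6, 8)

abs_deviations <- abs(values - mean(values))
sum(abs_deviations)    # 13, analogous to the sum of squares
mean(abs_deviations)   # 2.1666..., the mean absolute deviation

# Note: R's built-in mad() is a scaled *median* absolute deviation,
# a different statistic, which is why we computed the mean version by hand
```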
2.08: Remember to look at your data

Descriptive statistics are great and we will use them a lot in the course to describe data. You may suspect that descriptive statistics also have some shortcomings. This is very true. They are compressed summaries of large piles of numbers. They will almost always be unable to represent all of the numbers fairly. There are also different kinds of descriptive statistics that you could use, and it is sometimes not clear which ones you should use.

Perhaps the most important thing you can do when using descriptives is to use them in combination with looking at the data in graph form. This can help you see whether or not your descriptives are doing a good job of representing the data.

Anscombe’s Quartet

To hit this point home, and to get you thinking about the issues we discuss in the next chapter, check this out. It’s called Anscombe’s Quartet, because these interesting graphs and numbers were produced by Anscombe (1973). You are looking at pairs of measurements. Each graph has an X and Y axis, and each point represents two measurements. Each of the graphs looks very different, right?

```
library(data.table)
library(ggplot2)

ac <- fread("https://stats.libretexts.org/@api/deki/files/10478/anscombe.txt")
ac <- as.data.frame(ac)
ac_long <- data.frame(x = c(ac[,1], ac[,3], ac[,5], ac[,7]),
                      y = c(ac[,2], ac[,4], ac[,6], ac[,8]),
                      quartet = as.factor(rep(1:4, each = 11)))

ggplot(ac_long, aes(x = x, y = y, color = quartet)) +
  geom_point() +
  theme_classic() +
  facet_wrap(~quartet)
```

Well, would you be surprised if I told you that the descriptive statistics for the numbers in these graphs are exactly the same? It turns out they do have the same descriptive statistics. In the table below I present the mean and variance for the x-values in each graph, and the mean and the variance for the y-values in each graph.

|quartet | mean_x| var_x| mean_y| var_y|
|:-------|------:|-----:|---------:|---------:|
|1 | 9| 11| 7.500909| 4.127269|
|2 | 9| 11| 7.500909| 4.127629|
|3 | 9| 11| 7.500000| 4.122620|
|4 | 9| 11| 7.500909| 4.123249|

The descriptives are all the same! Anscombe put these special numbers together to illustrate the point of graphing your numbers. If you only look at your descriptives, you don’t know what patterns in the data they are hiding. If you look at the graph, then you can get a better understanding.

Datasaurus Dozen

If you thought that Anscombe’s quartet was neat, you should take a look at the Datasaurus Dozen (Matejka and Fitzmaurice 2017). Scroll down to see the examples. You will be looking at dot plots. The dot plots show many different patterns, including dinosaurs! What’s amazing is that all of the dots have very nearly the same descriptive statistics. Just another reminder to look at your data; it might look like a dinosaur!

2.10: References

Anscombe, F. J. 1973. “Graphs in Statistical Analysis.” American Statistician 27: 17–21.

Matejka, Justin, and George Fitzmaurice. 2017. “Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics Through Simulated Annealing.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 1290–94. ACM.
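You can verify the table yourself without downloading anything, because base R ships the quartet as the built-in anscombe data frame (columns x1–x4 and y1–y4). A quick sketch:

```
data(anscombe)  # built into base R's datasets package

sapply(anscombe[, c("x1", "x2", "x3", "x4")], mean)  # all 9
sapply(anscombe[, c("y1", "y2", "y3", "y4")], mean)  # all about 7.50
sapply(anscombe[, c("y1", "y2", "y3", "y4")], var)   # all about 4.12

# The correlations match too, despite the very different shapes
mapply(cor, anscombe[, 1:4], anscombe[, 5:8])        # all about 0.816
```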
Chapter by Matthew Crump

Correlation does not equal causation — Every Statistics and Research Methods Instructor Ever

In the last chapter we had some data. It was too much to look at and it didn’t make sense. So, we talked about how to look at the data visually using plots and histograms, and we talked about how to summarize lots of numbers so we could determine their central tendencies (sameness) and variability (differentness). And, all was well with the world.

Let’s not forget the big reason why we learned about descriptive statistics. The big reason is that we are interested in getting answers to questions using data. If you are looking for a big theme to think about while you take this course, the theme is: how do we ask and answer questions using data? For every section in this book, you should be connecting your inner monologue to this question, and asking yourself: How does what I am learning about help me answer questions with data? Advance warning: we know it is easy to forget this stuff when we dive into the details, and we will try to throw you a rope to help you out along the way… remember, we’re trying to answer questions with data.

We started Chapter two with some fake data on human happiness, remember? We imagined that we asked a bunch of people to tell us how happy they were, then we looked at the numbers they gave us. Let’s continue with this imaginary thought experiment. What do you get when you ask people to use a number to describe how happy they are? A bunch of numbers. What kind of questions can you ask about those numbers? Well, you can look at the numbers and estimate their general properties as we already did. We would expect those numbers to tell us some things we already know. There are different people, and different people are different amounts of happy. You’ve probably met some of those really happy people, and really unhappy people, and you yourself probably have some amount of happiness. “Great, thanks Captain Obvious”.

Before moving on, you should also be skeptical of what the numbers might mean. For example, if you force people to give a number between 0-100 to rate their happiness, does this number truly reflect how happy that person is? Can a person know how happy they are? Does the question format bias how they give their answer? Is happiness even a real thing? These are all good questions about the validity of the construct (happiness itself) and the measure (numbers) you are using to quantify it. For now, though, we will side-step those very important questions, and assume that happiness is a thing and our measure of happiness measures something about how happy people are.

OK then, after we have measured some happiness, I bet you can think of some more pressing questions. For example, what causes happiness to go up or down? If you knew the causes of happiness, what could you do? How about increase your own happiness; or, help people who are unhappy; or, better appreciate why Eeyore from Winnie the Pooh is unhappy; or, present valid scientific arguments that argue against incorrect claims about what causes happiness. A causal theory and understanding of happiness could be used for all of those things. How can we get there?

Imagine you were an alien observer. You arrived on earth and heard about this thing called happiness that people have. You want to know what causes happiness. You also discover that planet earth has lots of other things. Which of those things, you wonder, cause happiness? How would your alien-self get started on this big question?
As a person who has happiness, you might already have some hunches about what causes changes in happiness. For example, things like: weather, friends, music, money, education, drugs, books, movies, beliefs, personality, color of your shoes, eyebrow length, number of cats you see per day, frequency of subway delays, a lifetime supply of chocolate, etcetera etcetera (as Willy Wonka would say), might all contribute to happiness in some way. There could be many different causes of happiness.

03: Correlation

Before we go around determining the causes of happiness, we should prepare ourselves with some analytical tools so that we could identify what causation looks like. If we don’t prepare ourselves for what we might find, then we won’t know how to interpret our own data. Instead, we need to anticipate what the data could look like. Specifically, we need to know what data would look like when one thing does not cause another thing, and what data would look like when one thing does cause another thing. This chapter does some of this preparation. Fair warning: we will find out some tricky things. For example, we can find patterns that look like one thing is causing another, even when that one thing DOES NOT CAUSE the other thing. Hang in there.

Charlie and the Chocolate Factory

Let’s imagine that a person’s supply of chocolate has a causal influence on their level of happiness. Let’s further imagine that, like Charlie, the more chocolate you have the more happy you will be, and the less chocolate you have, the less happy you will be. Finally, because we suspect happiness is caused by lots of other things in a person’s life, we anticipate that the relationship between chocolate supply and happiness won’t be perfect. What do these assumptions mean for how the data should look?

Our first step is to collect some imaginary data from 100 people. We walk around and ask the first 100 people we meet to answer two questions: 1. how much chocolate do you have, and 2. how happy are you. For convenience, both of the scales will go from 0 to 100. For the chocolate scale, 0 means no chocolate, 100 means a lifetime supply of chocolate. Any other number is somewhere in between. For the happiness scale, 0 means no happiness, 100 means all of the happiness, and in between means some amount in between.

Here is some sample data from the first 10 imaginary subjects.

|subject | chocolate| happiness|
|:-------|---------:|---------:|
|1 | 1| 1|
|2 | 1| 1|
|3 | 2| 2|
|4 | 2| 4|
|5 | 4| 5|
|6 | 4| 5|
|7 | 7| 5|
|8 | 8| 5|
|9 | 8| 6|
|10 | 9| 6|

We asked each subject two questions, so there are two scores for each subject: one for their chocolate supply, and one for their level of happiness. You might already notice some relationships between amount of chocolate and level of happiness in the table. To make those relationships even more clear, let’s plot all of the data in a graph.

Scatter plots

When you have two measurements worth of data, you can always turn them into dots and plot them in a scatter plot. A scatter plot has a horizontal x-axis and a vertical y-axis. You get to choose which measurement goes on which axis. Let’s put chocolate supply on the x-axis, and happiness level on the y-axis. The plot below shows 100 dots, one for each subject.

You might be wondering: why are there only 100 dots for the data? Didn’t we collect 100 measures for chocolate, and 100 measures for happiness, so shouldn’t there be 200 dots? Nope. Each dot is for one subject; there are 100 subjects, so there are 100 dots.

What do the dots mean? Each dot has two coordinates: an x-coordinate for chocolate, and a y-coordinate for happiness.
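If you would like to cook up imaginary data like this yourself, here is a sketch; the noise level and the clamping to the 0–100 scale are arbitrary choices for illustration, not the settings used to make the book's figures.

```
set.seed(10)
n <- 100

chocolate <- runif(n, min = 0, max = 100)
# happiness tracks chocolate, plus other unmeasured influences (random noise)
happiness <- chocolate + rnorm(n, mean = 0, sd = 15)
happiness <- pmin(pmax(happiness, 0), 100)   # keep the scale between 0 and 100

plot(chocolate, happiness,
     xlab = "chocolate supply", ylab = "happiness")
```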
The first dot, all the way on the bottom left, is the first subject in the table, who had close to 0 chocolate and close to zero happiness. You can look at any dot, then draw a straight line down to the x-axis: that will tell you how much chocolate that subject has. You can draw a straight line left to the y-axis: that will tell you how much happiness the subject has.

Now that we are looking at the scatter plot, we can see many things. The dots are scattered around a bit, aren’t they? Hence, scatter plot. Even when the dots don’t scatter, they’re still called scatter plots, perhaps because those pesky dots in real life have so much scatter all the time. More important, the dots show a relationship between chocolate supply and happiness. Happiness is lower for people with smaller supplies of chocolate, and higher for people with larger supplies of chocolate. It looks like the more chocolate you have the happier you will be, and vice-versa. This kind of relationship is called a positive correlation.

Positive, Negative, and No-Correlation

Seeing as we are in the business of imagining data, let’s imagine some more. We’ve already imagined what data would look like if larger chocolate supplies increase happiness. We’ll show that again in a bit. What do you imagine the scatter plot would look like if the relationship was reversed, and larger chocolate supplies decreased happiness? Or, what do you imagine the scatter plot would look like if there was no relationship, and the amount of chocolate that you have doesn’t do anything to your happiness? We invite your imagination to look at these graphs:

The first panel shows a negative correlation. Happiness goes down as chocolate supply increases. Negative correlation occurs when one thing goes up and the other thing goes down; or, when more of X is less of Y, and vice-versa.

The second panel shows a positive correlation. Happiness goes up as chocolate supply increases. Positive correlation occurs when both things go up together, and go down together: more of X is more of Y, and vice-versa.

The third panel shows no correlation. Here, there doesn’t appear to be any obvious relationship between chocolate supply and happiness. The dots are scattered all over the place, the truest of the scatter plots.

Note

We are wading into the idea that measures of two things can be related, or correlated with one another. It is possible for the relationships to be more complicated than just going up, or going down. For example, we could have a relationship where the dots go up for the first half of X, and then go down for the second half.

Zero correlation occurs when one thing is not related in any way to another thing: changes in X do not relate to any changes in Y, and vice-versa.
If Beyoncé was a statistician, she might look at these scatter plots and want to “put a number on it”. We think this is a good idea too. We’ve already learned how to create descriptive statistics for a single measure, like chocolate, or happiness (i.e., means, variances, etc.). Is it possible to create a descriptive statistic that summarizes the relationship between two measures, all in one number? Can it be done? Karl Pearson to the rescue.

Note

The stories about the invention of various statistics are very interesting; you can read more about them in the book “The Lady Tasting Tea” (Salsburg 2001).

There’s a statistic for that, and Karl Pearson invented it. Everyone now calls it “Pearson’s \(r\)”. We will find out later that Karl Pearson was a big-wig editor at Biometrika in the 1930s. He took a hating to another big-wig statistician, Sir Ronald Fisher (who we learn about later), and they had some stats fights… why can’t we all just get along in statistics?

How does Pearson’s \(r\) work? Let’s look again at the first 10 subjects in our fake experiment:

|subject | chocolate| happiness|
|:-------|---------:|---------:|
|1 | 1| 1|
|2 | 2| 2|
|3 | 2| 3|
|4 | 3| 3|
|5 | 3| 3|
|6 | 5| 5|
|7 | 4| 6|
|8 | 5| 5|
|9 | 9| 5|
|10 | 6| 9|
|Sums | 40| 42|
|Means | 4| 4.2|

What could we do to these numbers to produce a single summary value that represents the relationship between the chocolate supply and happiness?

The idea of co-variance

“Oh please no, don’t use the word variance again”. Yes, we’re doing it, we’re going to use the word variance again, and again, until it starts making sense. Remember what variance means about some numbers. It means the numbers have some change in them: they are not all the same; some of them are big, some are small. We can see that there is variance in chocolate supply across the 10 subjects. We can see that there is variance in happiness across the 10 subjects. We also saw in the scatter plot that happiness increases as chocolate supply increases, which is a positive relationship, a positive correlation. What does this have to do with variance? Well, it means there is a relationship between the variance in chocolate supply and the variance in happiness levels. The two measures vary together, don’t they? When we have two measures that vary together, they are like a happy couple who share their variance. This is what co-variance refers to: the idea that the pattern of varying numbers in one measure is shared by the pattern of varying numbers in another measure.

Co-variance is very, very, very, very important. We suspect that the word co-variance is initially confusing, especially if you are not yet fully comfortable with the meaning of variance for a single measure. Nevertheless, we must proceed and use the idea of co-variance over and over again to firmly implant it into your statistical mind (we know we already said that, but redundancy works, it’s a thing).

Pro tip: The three-legged race is a metaphor for co-variance. Two people tie one leg to each other, then try to walk. It works when they co-vary their legs together (a positive relationship). They can also co-vary in an unhelpful way, when one person tries to move forward exactly when the other person tries to move backward. This is still co-variance (a negative relationship). Funny random walking happens when there is no co-variance. This means one person does whatever they want, and so does the other person. There is a lot of variance, but the variance is shared randomly, so it’s just a bunch of legs moving around accomplishing nothing.
Pro tip #2: Successfully playing patty-cake happens when two people coordinate their actions so that they have positively shared co-variance.
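Before we put a number on "sharing variance", it can help to confirm that each measure has some variance of its own to share. Here is a minimal sketch in R using the ten fake subjects from the table above (note that R's var() divides by N-1 rather than N, a detail that does not matter for the point being made here):

```
# The ten fake subjects' scores from the table above
chocolate <- c(1, 2, 2, 3, 3, 5, 4, 5, 9, 6)
happiness <- c(1, 2, 3, 3, 3, 5, 6, 5, 5, 9)

mean(chocolate)   # 4
mean(happiness)   # 4.2
var(chocolate)    # the chocolate scores are not all the same
var(happiness)    # neither are the happiness scores
```

Both measures have variance; the question the rest of this chapter answers is how to measure how much of that variance is shared.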
"OK, so if you are saying that co-variance is just another word for correlation or relationship between two measures, I'm good with that. I suppose we would need some way to measure that." Correct, back to our table. Notice anything new?

subject   chocolate   happiness   Chocolate_X_Happiness
1         1           1           1
2         2           2           4
3         2           3           6
4         3           3           9
5         3           3           9
6         5           5           25
7         4           6           24
8         5           5           25
9         9           5           45
10        6           9           54
Sums      40          42          202
Means     4           4.2         20.2

We've added a new column called "Chocolate_X_Happiness", which translates to Chocolate scores multiplied by Happiness scores. Each row in the new column is the product, or multiplication, of the chocolate and happiness score for that row. Yes, but why would we do this?

Last chapter we took you back to elementary school and had you think about division. Now it's time to do the same thing with multiplication. We assume you know how that works. One number times another means taking the first number and adding it as many times as the second says to do:

$2*2= 2+2=4 \nonumber$

$2*6= 2+2+2+2+2+2 = 12$, or $6+6=12$, same thing.

Yes, you know all that. But, can you bend multiplication to your will, and make it do your bidding when you need to solve a problem like summarizing co-variance? Multiplication is the droid you are looking for.

We know how to multiply numbers, and all we have to do next is think about the consequences of multiplying sets of numbers together. For example, what happens when you multiply two small numbers together, compared to multiplying two big numbers together? The first product should be smaller than the second product, right? How about things like multiplying a small number by a big number? Those products should be in between, right?

The next step is to think about how the products of two measures sum together, depending on how they line up. Let's look at another table:

scores   X     Y     A     B    XY    AB
1        1     1     1     10   1     10
2        2     2     2     9    4     18
3        3     3     3     8    9     24
4        4     4     4     7    16    28
5        5     5     5     6    25    30
6        6     6     6     5    36    30
7        7     7     7     4    49    28
8        8     8     8     3    64    24
9        9     9     9     2    81    18
10       10    10    10    1    100   10
Sums     55    55    55    55   385   220
Means    5.5   5.5   5.5   5.5  38.5  22

Look at the X and Y columns. The scores for X and Y perfectly co-vary. When X is 1, Y is 1; when X is 2, Y is 2, etc. They are perfectly aligned. The scores for A and B also perfectly co-vary, just in the opposite manner. When A is 1, B is 10; when A is 2, B is 9, etc. B is a reversed copy of A.

Now, look at the column $XY$. These are the products we get when we multiply the values of X across with the values of Y. Also, look at the column $AB$. These are the products we get when we multiply the values of A across with the values of B. So far so good.

Now, look at the Sums for the XY and AB columns. Not the same. The sum of the XY products is 385, and the sum of the AB products is 220. For this specific set of data, the numbers 385 and 220 are very important. They represent the biggest possible sum of products (385), and the smallest possible sum of products (220). There is no way of re-ordering the numbers 1 to 10, say for X, and the numbers 1 to 10 for Y, that would ever produce larger or smaller numbers. Don't believe me? Check this out:

```
library(ggplot2)

simulated_sums <- numeric(1000)
for (sim in 1:1000) {
  X <- sample(1:10)
  Y <- sample(1:10)
  simulated_sums[sim] <- sum(X * Y)
}

sim_df <- data.frame(sims = 1:1000, simulated_sums)
ggplot(sim_df, aes(x = sims, y = simulated_sums)) +
  geom_point() +
  theme_classic() +
  geom_hline(yintercept = 385) +
  geom_hline(yintercept = 220)
```

The above graph shows 1000 computer simulations.
I convinced my computer to randomly order the numbers 1 to 10 for X, and randomly order the numbers 1 to 10 for Y. Then, I multiplied X and Y, and added the products together. I did this 1000 times. The dots show the sum of the products for each simulation. The two black lines show the maximum possible sum (385), and the minimum possible sum (220), for this set of numbers. Notice how all of the dots are in between the maximum and minimum possible values. Told you so.

"OK fine, you told me so. So what, who cares?" We've been looking for a way to summarize the co-variance between two measures, right? Well, for these numbers, we have found one, haven't we? It's the sum of the products. We know that when the sum of the products is 385, we have found a perfect positive correlation. We know that when the sum of the products is 220, we have found a perfect negative correlation. What about the numbers in between? What could we conclude about the correlation if we found the sum of the products to be 350? Well, it's going to be positive, because it's close to 385, and that's perfectly positive. If the sum of the products was 240, that's going to be negative, because it's close to the perfectly negatively correlating 220. What about no correlation? Well, that's going to be in the middle between 220 and 385, right?

We have just come up with a data-specific summary measure for the correlation between the numbers 1 to 10 in X and the numbers 1 to 10 in Y: it's the sum of the products. We know the maximum (385) and minimum (220) values, so we can now interpret any product sum for this kind of data with respect to that scale.

Pro tip: When the correlation between two measures increases in the positive direction, the sum of their products increases to its maximum possible value. This is because the bigger numbers in X will tend to line up with the bigger numbers in Y, creating the biggest possible sum of products. When the correlation between two measures increases in the negative direction, the sum of their products decreases to its minimum possible value. This is because the bigger numbers in X will tend to line up with the smaller numbers in Y, creating the smallest possible sum of products. When there is no correlation, the big numbers in X will be randomly lined up with the big and small numbers in Y, making the sum of the products somewhere in the middle.

Co-variance, the measure

We took some time to see what happens when you multiply sets of numbers together. We found that $\textit{big} * \textit{big} = \text{bigger}$, $\textit{small} * \textit{small} = \text{still small}$, and $\textit{big} * \textit{small} = \text{in the middle}$. The purpose of this was to give you some conceptual idea of how the co-variance between two measures is reflected in the sum of their products. We did something very straightforward. We just multiplied X with Y, and looked at how the product sums get big and small as X and Y co-vary in different ways.

Now, we can get a little bit more formal. In statistics, co-variance is not just the straight multiplication of values in X and Y. Instead, it's the multiplication of the deviations in X from the mean of X, and the deviations in Y from the mean of Y. Remember those difference scores from the mean we talked about last chapter? They're coming back to haunt you now, but in a good way, like Casper the friendly ghost.
Let's see what this looks like in a table:

subject   chocolate   happiness   C_d   H_d    Cd_x_Hd
1         1           1           -3    -3.2   9.6
2         2           2           -2    -2.2   4.4
3         2           3           -2    -1.2   2.4
4         3           3           -1    -1.2   1.2
5         3           3           -1    -1.2   1.2
6         5           5           1     0.8    0.8
7         4           6           0     1.8    0
8         5           5           1     0.8    0.8
9         9           5           5     0.8    4
10        6           9           2     4.8    9.6
Sums      40          42          0     0      34
Means     4           4.2         0     0      3.4

We have computed the deviations from the mean for the chocolate scores (column C_d), and the deviations from the mean for the happiness scores (column H_d). Then, we multiplied them together (last column). Finally, you can see the mean of the products listed in the bottom right corner of the table: 3.4 is the official co-variance.

The formula for the co-variance is:

$cov(X,Y) = \frac{\sum_{i}^{N}(x_{i}-\bar{X})(y_{i}-\bar{Y})}{N} \nonumber$

OK, so now we have a formal single number to calculate the relationship between two variables. This is great, it's what we've been looking for. However, there is a problem. Remember when we learned how to compute just the plain old variance? We looked at that number, and we didn't know what to make of it. It was squared, it wasn't in the same scale as the original data. So, we square rooted the variance to produce the standard deviation, which gave us a more interpretable number in the range of our data. The co-variance has a similar problem. When you calculate the co-variance as we just did, we don't immediately know its scale. Is a 3 big? Is a 6 big? Is a 100 big? How big or small is this thing?

From our prelude discussion on the idea of co-variance, we learned that the sum of products between two measures ranges between a maximum and minimum value. The same is true of the co-variance. For a given set of data, there is a maximum possible positive value for the co-variance (which occurs when there is a perfect positive correlation). And, there is a minimum possible negative value for the co-variance (which occurs when there is a perfect negative correlation). When there is zero co-variation, guess what happens? Zeroes. So, at the very least, when we look at a co-variance statistic, we can see what direction it points, positive or negative. But, we don't know how big or small it is compared to the maximum or minimum possible value, so we don't know the relative size, which means we can't say how strong the correlation is. What to do?

Pearson's r, are we there yet?

Yes, we are here now. Wouldn't it be nice if we could force our measure of co-variation to be between -1 and +1? -1 would be the minimum possible value for a perfect negative correlation. +1 would be the maximum possible value for a perfect positive correlation. 0 would mean no correlation. Everything in between 0 and -1 would be increasingly large negative correlations. Everything between 0 and +1 would be increasingly large positive correlations. It would be a fantastic, sensible, easy to interpret system. If only we could force the co-variation number to be between -1 and 1. Fortunately for us, this episode is brought to you by Pearson's $r$, which does precisely this wonderful thing.

Let's take a look at a formula for Pearson's $r$:

$r = \frac{cov(X,Y)}{\sigma_{X}\sigma_{Y}} = \frac{cov(X,Y)}{SD_{X}SD_{Y}} \nonumber$

We see the symbol $\sigma$ here, that's more Greek for you. $\sigma$ is often used as a symbol for the standard deviation (SD). If we read out the formula in English, we see that r is the co-variance of X and Y, divided by the product of the standard deviation of X and the standard deviation of Y. Why are we dividing the co-variance by the product of the standard deviations?
This operation has the effect of normalizing the co-variance into the range -1 to 1.

Note

We will fill this part in as soon as we can: promissory note to explain the magic. FYI, it's not magic. The brief explanation is that dividing the co-variance by the two standard deviations puts the two measures onto a common scale, and that is what forces the result to stay between -1 and 1.

For now, we will call this mathematical magic. It works, but we don't have space to tell you why it works right now.

It's worth saying that there are loads of different formulas for computing Pearson's $r$. You can find them by Googling them. We will probably include more of them here, when we get around to it. However, they all give you the same answer. And, they are not all as pretty as each other. Some of them might even look scary. In other statistics textbooks you will often find formulas that are easier to use for calculation purposes. For example, if you only had a pen and paper, you might use one or another formula because it helps you compute the answer faster by hand. To be honest, we are not very interested in teaching you how to plug numbers into formulas. We give one lesson on that here: put the numbers into the letters, then compute the answer. Sorry to be snarky. Nowadays you have a computer that you should use for this kind of stuff. So, we are more interested in teaching you what the calculations mean, rather than how to do them. Of course, every week we are showing you how to do the calculations in lab with computers, because that is important too.

Does Pearson's $r$ really stay between -1 and 1 no matter what? It's true, take a look at the following simulation. Here I randomly ordered the numbers 1 to 10 for an X measure, and did the same for a Y measure. Then, I computed Pearson's $r$, and repeated this process 1000 times. As you can see, all of the dots are between -1 and 1. Neat, huh?
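To make the formula concrete, here is a minimal sketch in R that computes \(r\) for the chocolate and happiness scores from the table above, and then re-runs the earlier shuffling simulation with cor() in place of the sum of products. One detail to note: R's cov() and sd() both divide by N-1 rather than N, so the scaling factors cancel and the by-hand ratio matches the built-in cor() exactly.

```
chocolate <- c(1, 2, 2, 3, 3, 5, 4, 5, 9, 6)
happiness <- c(1, 2, 3, 3, 3, 5, 6, 5, 5, 9)

# Pearson's r as the formula describes it: co-variance scaled by the SDs
cov(chocolate, happiness) / (sd(chocolate) * sd(happiness))
cor(chocolate, happiness)   # same number

# Does r really stay between -1 and 1? Shuffle the numbers 1 to 10
# over and over, computing r each time.
simulated_rs <- numeric(1000)
for (sim in 1:1000) {
  X <- sample(1:10)
  Y <- sample(1:10)
  simulated_rs[sim] <- cor(X, Y)
}
range(simulated_rs)   # never falls outside -1 to 1
```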
In the lab for correlation you will be shown how to compute correlations in real data sets using software. To give you a brief preview, let's look at some data from the World Happiness Report (2018). This report measured various attitudes across people from different countries. For example, one question asked about how much freedom people thought they had to make life choices. Another question asked how confident people were in their national government. Here is a scatter plot showing the relationship between these two measures. Each dot represents the mean for a different country.

We put a blue line on the scatter plot to summarize the positive relationship. It appears that as "freedom to make life choices" goes up, so too does confidence in national government. It's a positive correlation. The actual correlation, as measured by Pearson's \(r\), is:

```
library(data.table)
suppressPackageStartupMessages(library(dplyr))
whr_data <- fread("https://stats.libretexts.org/@api/deki/files/10477/WHR2018.csv")

# select DVs and filter for NAs
smaller_df <- whr_data %>%
  dplyr::select(country,
                `Freedom to make life choices`,
                `Confidence in national government`) %>%
  dplyr::filter(!is.na(`Freedom to make life choices`),
                !is.na(`Confidence in national government`))

# calculate correlation
cor(smaller_df$`Freedom to make life choices`,
    smaller_df$`Confidence in national government`)
```

0.408096292505333

You will do a lot more of this kind of thing in the lab. Looking at the graph you might start to wonder: Does freedom to make life choices cause changes in how confident people are in their national government? Or does it work the other way? Does being confident in your national government give you a greater sense of freedom to make life choices? Or, is this just a random relationship that doesn't mean anything? All good questions. These data do not provide the answers, they just suggest a possible relationship.

3.05: Regression

A mini intro

We're going to spend the next little bit adding one more thing to our understanding of correlation. It's called linear regression. It sounds scary, and it really is. You'll find out much later in your statistics education that everything we will soon be talking about can be thought of as a special case of regression. But, we don't want to scare you off, so right now we just introduce the basic concepts.

First, let's look at a linear regression. This way we can see what we're trying to learn about. Here are some scatter plots, the same ones you've already seen. But, we've added something new! Lines.

The best fit line

Notice anything about these blue lines? Hopefully you can see, at least for the first two panels, that they go straight through the data, just like a kebab skewer. We call these lines best fit lines, because according to our definition (soon, we promise) there are no other lines that you could draw that would do a better job of going straight through the data. One big idea here is that we are using the line as a kind of mean to describe the relationship between the two variables. When we only have one variable, that variable exists on a single dimension, it's 1D. So, it is appropriate that we only have one number, like the mean, to describe its central tendency. When we have two variables and plot them together, we now have a two-dimensional space. So, for two dimensions we could use a bigger thing that is 2D, like a line, to summarize the central tendency of the relationship between the two variables. What do we want out of our line?
Well, if you had a pencil and a printout of the data, you could draw all sorts of straight lines any way you wanted. Your lines wouldn't even have to go through the data, or they could slant through the data with all sorts of angles. Would all of those lines be very good at describing the general pattern of the dots? Most of them would not. The best lines would go through the data following the general shape of the dots. Of the best lines, however, which one is the best? How can we find out, and what do we mean by that? In short, the best fit line is the one that has the least error.

Note

R code for plotting residuals thanks to Simon Jackson's blog post: https://drsimonj.svbtle.com/visualising-residuals

Check out this next plot, it shows a line through some dots. But, it also shows some teeny tiny lines. These lines drop down from each dot, and they land on the line. Each of these little lines is called a residual. They show you how far off the line is for different dots. It's a measure of error, it shows us just how wrong the line is. After all, it's pretty obvious that not all of the dots are on the line. This means the line does not actually represent all of the dots. The line is wrong. But, the best fit line is the least wrong of all the wrong lines.

There's a lot going on in this graph. First, we are looking at a scatter plot of two variables, an X and Y variable. Each of the black dots is an actual value from these variables. You can see there is a negative correlation here: as X increases, Y tends to decrease. We drew a regression line through the data, that's the blue line. There are these little white dots too. This is where the line thinks the black dots should be. The red lines are the important residuals we've been talking about. Each black dot has a red line that drops straight down, or straight up, from the location of the black dot, and lands directly on the line. We can already see that many of the dots are not on the line, so we already know the line is "off" by some amount for each dot. The red line just makes it easier to see exactly how off the line is.

The important thing that is happening here is that the blue line is drawn in such a way that it minimizes the total length of the red lines. For example, if we wanted to know how wrong this line was, we could simply gather up all the red lines, measure how long they are, and then add all the wrongness together. This would give us the total amount of wrongness. We usually call this the error. In fact, we've already talked about this idea before when we discussed standard deviation. What we will actually be doing with the red lines is computing the sum of the squared deviations from the line. That sum is the total amount of error.

Now, this blue line here minimizes the sum of the squared deviations. Any other line would produce a larger total error. Here's an animation to see this in action. The animation compares the best fit line in blue to some other possible lines in black. The black line moves up and down. The red lines show the error between the black line and the data points. As the black line moves toward the best fit line, the total error, depicted visually by the grey area, shrinks to its minimum value. The total error expands as the black line moves away from the best fit line. Whenever the black line does not overlap with the blue line, it is worse than the best fit line. The blue regression line is like Goldilocks: it's just right, and it's in the middle.
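Here is a minimal sketch in R of the residual idea, using a small set of made-up numbers (the x and y values below are hypothetical, chosen only for illustration). It computes the residuals for the best fit line, adds up their squares, and then shows that a slightly different line produces a bigger total error.

```
# Made-up data with a roughly negative relationship
x <- c(1, 2, 3, 4, 5, 6)
y <- c(8, 7, 5, 4, 2, 1)

fit <- lm(y ~ x)             # the best fit line
predicted <- predict(fit)    # where the line thinks the dots should be
resids <- y - predicted      # the little red lines
sum(resids^2)                # total error (sum of squared deviations)

# Any other line does worse. For example, shift the line up by 1 unit:
worse_predicted <- predicted + 1
sum((y - worse_predicted)^2) # a larger total error
```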
This next graph shows a little simulation of how the sum of squared deviations (the sum of the squared lengths of the red lines) behaves as we move the line up and down. What's going on here is that we are computing a measure of the total error as the black line moves through the best fit line. This represents the sum of the squared deviations. In other words, we square the length of each red line from the above animation, then we add up all of the squared red lines and get the total error (the total sum of the squared deviations). The graph below shows what the total error looks like as the black line approaches, then moves away from, the best fit line. Notice, the dots in this graph start high on the left side, then they swoop down to a minimum at the bottom middle of the graph. When they reach their minimum point, we have found a line that minimizes the total error. This is the best fit regression line.

OK, so we haven't talked about the y-intercept yet. But, what this graph shows us is how the total error behaves as we move the line up and down. The y-intercept here is the thing we change that makes our line move up and down. As you can see, the dots go up when we move the line down from 0 to -5, and the dots go up when we move the line up from 0 to +5. The best line, the one that minimizes the error, occurs right in the middle, when we don't move the blue regression line at all.

Lines

OK, fine, you say. So, there is one magic line that will go through the middle of the scatter plot and minimize the sum of the squared deviations. How do I find this magic line? We'll show you. But, to be completely honest, you'll almost never do it the way we'll show you here. Instead, it's much easier to use software and make your computer do it for you. You'll learn how to do that in the labs.

Before we show you how to find the regression line, it's worth refreshing your memory about how lines work, especially in 2 dimensions. Remember this?

$y = ax + b$, or also $y = mx + b$ (sometimes a or m is used for the slope)

This is the formula for a line. Another way of writing it is:

$y = slope * x + \text{y-intercept} \nonumber$

The slope is the slant of the line, and the y-intercept is where the line crosses the y-axis. Let's look at some lines:

So there are two lines. The formula for the blue line is $y = 1*x + 5$. Let's talk about that. When x = 0, where is the blue line on the y-axis? It's at five. That happens because 1 times 0 is 0, and then we just have the five left over. How about when x = 5? In that case y = 10. You just need to plug the numbers into the formula, like this:

$y = 1*x + 5 \nonumber$

$y = 1*5 + 5 = 5+5 =10 \nonumber$

The point of the formula is to tell you where y will be, for any number of x. The slope of the line tells you whether the line is going to go up or down, as you move from the left to the right. The blue line has a positive slope of one, so it goes up as x goes up. How much does it go up? It goes up by one for every one of x! If we made the slope a 2, it would be much steeper, and go up faster. The red line has a negative slope, so it slants down. This means $y$ goes down as $x$ goes up. When there is no slant, and we want to make a perfectly flat line, we set the slope to 0. This means that y doesn't go anywhere as x gets bigger and smaller. That's lines.

Computing the best fit line

If you have a scatter plot showing the locations of scores from two variables, the real question is: how can you find the slope and the y-intercept for the best fit line? What are you going to do?
Draw millions of lines, add up the residuals, and then see which one was best? That would take forever. Fortunately, there are computers, and when you don't have one around, there are also some handy formulas.

Note

It's worth pointing out just how much computers have changed everything. Before computers everyone had to do these calculations by hand, such a chore! Aside from the deeper mathematical ideas in the formulas, many of them were made for convenience, to speed up hand calculations, because there were no computers. Now that we have computers, the hand calculations are often just an exercise in algebra. Perhaps they build character. You decide.

We'll show you the formulas, and work through one example by hand. It's the worst, we know. By the way, you should feel sorry for me as I do this entire thing by hand for you.

Here are two formulas we can use to calculate the slope and the intercept, straight from the data. We won't go into why these formulas do what they do. These ones are for "easy" calculation.

$intercept = b = \frac{\sum{y}\sum{x^2}-\sum{x}\sum{xy}}{n\sum{x^2}-(\sum{x})^2} \nonumber$

$slope = m = \frac{n\sum{xy}-\sum{x}\sum{y}}{n\sum{x^2}-(\sum{x})^2} \nonumber$

In these formulas, the $x$ and the $y$ refer to the individual scores. Here's a table showing you how everything fits together.

```
suppressPackageStartupMessages(library(dplyr))

scores <- c(1, 2, 3, 4, 5, 6, 7)
x <- c(1, 4, 3, 6, 5, 7, 8)
y <- c(2, 5, 1, 8, 6, 8, 9)
x_squared <- x^2
y_squared <- y^2
xy <- x * y

all_df <- data.frame(scores, x, y, x_squared, y_squared, xy)
all_df <- all_df %>%
  rbind(c("Sums", colSums(all_df[1:7, 2:6])))

slope <- (7 * sum(xy) - sum(x) * sum(y)) / (7 * sum(x_squared) - sum(x)^2)
intercept <- (sum(y) * sum(x_squared) - sum(x) * sum(xy)) / (7 * sum(x_squared) - sum(x)^2)

knitr::kable(all_df)
```

scores   x    y    x_squared   y_squared   xy
1        1    2    1           4           2
2        4    5    16          25          20
3        3    1    9           1           3
4        6    8    36          64          48
5        5    6    25          36          30
6        7    8    49          64          56
7        8    9    64          81          72
Sums     34   39   200         275         231

We see 7 sets of scores for the x and y variables. We calculated $x^2$ by squaring each value of x, and putting it in a column. We calculated $y^2$ by squaring each value of y, and putting it in a column. Then we calculated $xy$, by multiplying each $x$ score with each $y$ score, and put that in a column. Then we added all the columns up, and put the sums at the bottom. These are all the numbers we need for the formulas to find the best fit line. Here's what the formulas look like when we put numbers in them:

$intercept = b = \frac{\sum{y}\sum{x^2}-\sum{x}\sum{xy}}{n\sum{x^2}-(\sum{x})^2} = \frac{39 * 200 - 34*231}{7*200-34^2} = -.221 \nonumber$

$slope = m = \frac{n\sum{xy}-\sum{x}\sum{y}}{n\sum{x^2}-(\sum{x})^2} = \frac{7*231-34*39}{7*200-34^2} = 1.19 \nonumber$

Great, now we can check our work: let's plot the scores in a scatter plot and draw a line through it with slope = 1.19 and a y-intercept of -.221. It should go through the middle of the dots.

```
x <- c(1, 4, 3, 6, 5, 7, 8)
y <- c(2, 5, 1, 8, 6, 8, 9)
plot_df <- data.frame(x, y)
coef(lm(y ~ x, plot_df))
```

(Intercept)                    x
-0.221311475409834    1.19262295081967
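As a quick usage sketch (not part of the original example), here is how you might use the fitted line to predict y for a new x value once the slope and intercept are known. The value x = 5 below is an arbitrary choice, just for illustration.

```
# Prediction from the hand-calculated line: y = slope*x + intercept
slope <- 1.1926230
intercept <- -0.2213115
x_new <- 5
slope * x_new + intercept                      # about 5.74

# The same prediction from the lm() fit
x <- c(1, 4, 3, 6, 5, 7, 8)
y <- c(2, 5, 1, 8, 6, 8, 9)
fit <- lm(y ~ x)
predict(fit, newdata = data.frame(x = x_new))  # about 5.74
```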
What does the presence or the absence of a correlation between two measures mean? How should correlations be interpreted? What kind of inferences can be drawn from correlations? These are all very good questions. A first piece of advice is to use caution when interpreting correlations. Here's why.

Correlation does not equal causation

Perhaps you have heard that correlation does not equal causation. Why not? There are lots of reasons why not. However, before listing some of the reasons, let's start with a case where we would expect a causal connection between two measurements. Consider buying a snake plant for your home. Snake plants are supposed to be easy to take care of because you can mostly ignore them.

Like most plants, snake plants need some water to stay alive. However, they also need just the right amount of water. Imagine an experiment where 1000 snake plants were grown in a house. Each snake plant is given a different amount of water per day, from zero teaspoons of water per day to 1000 teaspoons of water per day. We will assume that water is part of the causal process that allows snake plants to grow. The amount of water given to each snake plant per day can also be one of our measures. Imagine further that every week the experimenter measures snake plant growth, which will be the second measurement. Now, can you imagine for yourself what a scatter plot of weekly snake plant growth by teaspoons of water would look like?

Even when there is causation, there might not be obvious correlation

The first plant, given no water at all, would have a very hard time and eventually die. It should have the least amount of weekly growth. How about the plants given only a few teaspoons of water per day? This could be just enough water to keep the plants alive, so they will grow a little bit but not a lot. If you are imagining a scatter plot, with each dot being a snake plant, then you should imagine some dots starting in the bottom left hand corner (no water & no plant growth), moving up and to the right (a bit of water, and a bit of growth). As we look at snake plants getting more and more water, we should see more and more plant growth, right? "Sure, but only up to a point." Correct, there should be a trend for a positive correlation, with increasing plant growth as the amount of water per day increases. But, what happens when you give snake plants too much water? From personal experience, they die. So, at some point, the dots in the scatter plot will start moving back down again. Snake plants that get way too much water will not grow very well.

The imaginary scatter plot you should be envisioning could have an upside-down U shape. Going from left to right, the dots go up, they reach a maximum, then they go down again, reaching a minimum. Computing Pearson's \(r\) for data like this can give you \(r\) values close to zero. The scatter plot could look something like this:

Granted, this looks more like an inverted V than an inverted U, but you get the picture, right? There is clearly a relationship between watering and snake plant growth. But, the correlation isn't in one direction. As a result, when we compute the correlation in terms of Pearson's r, we get a value suggesting no relationship.

```
water <- seq(0, 999, 1)
growth <- c(seq(0, 10, (10/499)), seq(10, 0, -(10/499)))
noise <- runif(1000, -2, 2)
growth <- growth + noise
cor(growth, water)
```

-0.0051489425461363

What this really means is there is no linear relationship that can be described by a single straight line.
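To make the point concrete, here is a small sketch (re-using the water and growth vectors from the code above) that computes Pearson's \(r\) separately for the low-watering half and the high-watering half of the simulated experiment; the next paragraphs walk through what these three numbers tell us.

```
# Correlation for each half of the watering range, and overall
first_half  <- 1:500      # 0 to 499 teaspoons: growth still increasing
second_half <- 501:1000   # 500 to 999 teaspoons: growth decreasing

cor(water[first_half], growth[first_half])     # strongly positive
cor(water[second_half], growth[second_half])   # strongly negative
cor(water, growth)                             # close to zero overall
```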
When we need lines or curves going in more than one direction, we have a nonlinear relationship.

This example illustrates some conundrums in interpreting correlations. We already know that water is needed for plants to grow, so we are rightly expecting there to be a relationship between our measure of amount of water and plant growth. If we look at the first half of the data we see a positive correlation, if we look at the last half of the data we see a negative correlation, and if we look at all of the data we see no correlation. Yikes. So, even when there is a causal connection between two measures, we won't necessarily obtain clear evidence of the connection just by computing a correlation coefficient.

Pro tip: This is one reason why plotting your data is so important. If you see an upside-down U shape pattern, then a correlation analysis is probably not the best analysis for your data.

Confounding variable, or Third variable problem

Anybody can correlate any two things that can be quantified and measured. For example, we could find a hundred people and ask them all sorts of questions, like:

1. how happy are you
2. how old are you
3. how tall are you
4. how much money do you make per year
5. how long are your eyelashes
6. how many books have you read in your life
7. how loud is your inner voice

Let's say we found a positive correlation between yearly salary and happiness. Note, we could have just as easily computed the same correlation between happiness and yearly salary. If we found a correlation, would you be willing to infer that yearly salary causes happiness? Perhaps it does play a small part. But, something like happiness probably has a lot of contributing causes. Money could directly cause some people to be happy. But, more likely, money buys people access to all sorts of things, and some of those things might contribute to happiness. These "other" things are called third variables. For example, perhaps people living in nicer places in more expensive houses are happier than people in worse places in cheaper houses. In this scenario, money isn't causing happiness; it's the places and houses that money buys. But, even if this were true, people can still be more or less happy in lots of different situations.

The lesson here is that a correlation can occur between two measures because of a third variable that is not directly measured. So, just because we find a correlation does not mean we can conclude anything about a causal connection between two measurements.

Correlation and Random chance

Another very important aspect of correlations is the fact that they can be produced by random chance. This means that you can find a positive or negative correlation between two measures even when they have absolutely nothing to do with one another. You might have hoped to find zero correlation when two measures are totally unrelated to each other. Although this certainly happens, unrelated measures can accidentally produce spurious correlations, just by chance alone.

Let's demonstrate how correlations can occur by chance when there is no causal connection between two measures. Imagine two participants. One is at the North Pole with a lottery machine full of balls with numbers from 1 to 10. The other is at the South Pole with a different lottery machine full of balls with numbers from 1 to 10. There is an endless supply of balls in each machine, so every number could be picked for any ball. Each participant randomly chooses 10 balls, then records the number on each ball.
In this situation we will assume that there is no possible way that the balls chosen by the first participant could causally influence the balls chosen by the second participant. They are on opposite sides of the world. We should assume that the balls will be chosen by chance alone. Here is what the numbers on each ball could look like for each participant:

```
Ball <- 1:10
North_pole <- round(runif(10, 1, 10))
South_pole <- round(runif(10, 1, 10))
the_df_balls <- data.frame(Ball, North_pole, South_pole)
knitr::kable(the_df_balls)
```

Ball   North_pole   South_pole
1      3            1
2      7            7
3      8            8
4      6            9
5      4            6
6      10           10
7      2            2
8      5            8
9      2            5
10     2            3

If we compute Pearson's \(r\) for one random draw of balls like this, we find something like:

```
North_pole <- round(runif(10, 1, 10))
South_pole <- round(runif(10, 1, 10))
cor(North_pole, South_pole)
```

0.0803444730711034

But, we already know that this value does not tell us anything about the relationship between the balls chosen at the North and South Pole. We know that relationship should be completely random, because that is how we set up the game.

The better question here is to ask what random chance can do. For example, if we ran our game over and over again thousands of times, each time choosing new balls and each time computing the correlation, what would we find? First, we will find fluctuation. The r value will sometimes be positive, sometimes be negative, sometimes be big and sometimes be small. Second, we will see what the fluctuation looks like. This will give us a window into the kinds of correlations that chance alone can produce. Let's see what happens.

Monte Carlo simulation of random correlations

It is possible to use a computer to simulate our game as many times as we want. This process is often termed Monte Carlo simulation. The simulation can be written as a short script in the programming language R (a sketch of such a script appears at the end of this section). We won't go into the details of the code here. However, let's briefly explain what is going on. Notice the part that says `for(sim in 1:1000)`. This creates a loop that repeats our game 1000 times. Inside the loop there are variables named `North_pole` and `South_pole`. During each simulation, we sample 10 random numbers (between 1 and 10) into each variable. These random numbers stand for the numbers that would have been on the balls from the lottery machine. Once we have 10 random numbers for each, we then compute the correlation using `cor(North_pole,South_pole)`. Then, we save the correlation value and move on to the next simulation. At the end, we will have 1000 individual Pearson \( r \) values.

Let's take a look at all of the 1000 Pearson \(r\) values. Does the figure below look familiar to you? It should, we have already conducted a similar kind of simulation before. Each dot in the scatter plot shows the Pearson \(r\) for each simulation from 1 to 1000. As you can see, the dots are all over the place, within the range -1 to 1. The important lesson here is that random chance produced all of these correlations. This means we can find "correlations" in the data that are completely meaningless, and do not reflect any causal relationship between one measure and another.

Let's illustrate the idea of finding "random" correlations one more time, with a little movie. This time, we will show you a scatter plot of the random values sampled for the balls chosen from the North and South Pole.
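For reference, here is the kind of script described above. This is a sketch reconstructed from the description (the original listing is not reproduced here), so treat the details as one reasonable way to write it rather than the book's exact code.

```
# Repeat the lottery game 1000 times, saving Pearson's r from each run
simulated_rs <- numeric(1000)
for (sim in 1:1000) {
  North_pole <- round(runif(10, 1, 10))   # 10 random balls at the North Pole
  South_pole <- round(runif(10, 1, 10))   # 10 random balls at the South Pole
  simulated_rs[sim] <- cor(North_pole, South_pole)
}

# simulated_rs now holds 1000 correlations produced by chance alone;
# plotting them, or calling hist(simulated_rs), shows the "window of
# chance" discussed below.
```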
If there is no relationship, we should see dots going everywhere. If there happens to be a positive relationship (purely by chance), we should see the dots going from the bottom left to the top right. If there happens to be a negative relationship (purely by chance), we should see the dots going from the top left down to the bottom right.

One more thing to prepare you for the movie. There are three scatter plots below, showing negative, positive, and zero correlations between two variables. You've already seen this graph before. We are just reminding you that the blue lines are helpful for seeing the correlation. Negative correlations occur when a line goes down from the top left to the bottom right. Positive correlations occur when a line goes up from the bottom left to the top right. Zero correlations occur when the line is flat (doesn't go up or down).

OK, now we are ready for the movie. You are looking at the process of sampling two sets of numbers randomly, one for the X variable, and one for the Y variable. Each time we sample 10 numbers for each, plot them, then draw a line through them. Remember, these numbers are all completely random, so we should expect, on average, that there should be no correlation between the numbers. However, this is not what happens. You can see the line going all over the place. Sometimes we find a negative correlation (line goes down), sometimes we see a positive correlation (line goes up), and sometimes it looks like zero correlation (line is more flat).

You might be thinking this is kind of disturbing. If we know that there should be no correlation between two random variables, how come we are finding correlations? This is a big problem, right? I mean, if someone showed me a correlation between two things, and then claimed one thing was related to another, how could I know if it was true? After all, it could be chance! Chance can do that too.

Fortunately, all is not lost. We can look at our simulated data in another way, using a histogram. Remember, just before the movie, we simulated 1000 different correlations using random numbers. By putting all of those \( r \) values into a histogram, we can get a better sense of how chance behaves. We can see what kind of correlations chance is likely or unlikely to produce. Here is a histogram of the simulated \( r \) values.

Notice that this histogram is not flat. Most of the simulated \(r\) values are close to zero. Notice, also, that the bars get smaller as you move away from zero in the positive or negative direction. The general take home here is that chance can produce a wide range of correlations. However, not all correlations happen very often. For example, the bars for -1 and 1 are very small. Chance does not produce nearly perfect correlations very often. The bars around -.5 and .5 are smaller than the bars around zero, as medium correlations do not occur as often as small correlations by chance alone.

You can think of this histogram as the window of chance. It shows what chance often does, and what it often does not do. If you found a correlation under these very same circumstances (e.g., measured the correlation between two sets of 10 random numbers), then you could consult this window. What should you ask the window? How about: could the observed correlation (the one you found in your data) have come from this window? Let's say you found a correlation of \(r = .1\). Could a .1 have come from the histogram? Well, look at the histogram around where the .1 mark on the x-axis is. Is there a big bar there?
If so, this means that chance produces this value fairly often. You might be comfortable with the inference: yes, this .1 could have been produced by chance, because it is well inside the window of chance. How about \(r = .5\)? The bar is much smaller here. You might think, "well, I can see that chance does produce .5 sometimes, so chance could have produced my .5. Did it? Maybe, maybe not, not sure". Here, your confidence in a strong inference about the role of chance might start getting a bit shakier. How about an \(r = .95\)? You might see that the bar for .95 is very, very small, perhaps too small to see. What does this tell you? It tells you that chance does not produce .95 very often, hardly at all, pretty much never. So, if you found a .95 in your data, what would you infer? Perhaps you would be comfortable inferring that chance did not produce your .95; after all, .95 is mostly outside the window of chance.

Increasing sample-size decreases opportunity for spurious correlation

Before moving on, let's do one more thing with correlations. In our pretend lottery game, each participant only sampled 10 balls each. We found that this could lead to a range of correlations between the numbers randomly drawn at the two poles. Indeed, we even found some correlations that were medium to large in size. If you were a researcher who found such correlations, you might be tempted to believe there was a relationship between your measurements. However, we know in our little game that those correlations would be spurious, just a product of random sampling.

The good news is that, as a researcher, you get to make the rules of the game. You get to determine how chance can play. This is all a little bit metaphorical, so let's make it concrete. We will see what happens in four different scenarios. First, we will repeat what we already did. Each participant will draw 10 balls, then we compute the correlation, and we do this 1000 times and look at a histogram. Second, we will change the game so each participant draws 50 balls each, and then repeat our simulation. Third and fourth, we will change the game so each participant draws 100 balls each, and then 1000 balls each, and repeat.

The graph below shows four different histograms of the Pearson \(r\) values in each of the different scenarios. Each scenario involves a different sample-size: 10, 50, 100, or 1000. By inspecting the four histograms you should notice a clear pattern. The width, or range, of each histogram shrinks as the sample-size increases. What is going on here? Well, we already know that we can think of these histograms as windows of chance. They tell us which \(r\) values occur fairly often, and which do not. When our sample-size is 10, lots of different \(r\) values happen. That histogram is very flat and spread out. However, as the sample-size increases, we see that the window of chance gets pulled in. For example, by the time we get to 1000 balls each, almost all of the Pearson \(r\) values are very close to 0.

One take home here is that increasing sample-size narrows the window of chance. So, for example, if you ran a study involving 1000 samples of two measures, and you found a correlation of .5, then you can clearly see in the bottom right histogram that .5 does not occur very often by chance alone. In fact, there is no bar, because it didn't happen even once in the simulation.
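Here is a minimal sketch of the four scenarios in R, in the spirit of the earlier simulations. It draws random "balls" for two participants at several sample sizes and reports the most extreme correlations that chance produced at each size, so you can see the window of chance narrowing.

```
# How wide is the window of chance at different sample sizes?
window_of_chance <- function(n, sims = 1000) {
  rs <- numeric(sims)
  for (sim in 1:sims) {
    rs[sim] <- cor(runif(n, 1, 10), runif(n, 1, 10))
  }
  rs
}

for (n in c(10, 50, 100, 1000)) {
  rs <- window_of_chance(n)
  cat("sample size", n, ": r ranges from",
      round(min(rs), 2), "to", round(max(rs), 2), "\n")
}
```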
As a result, when you have a large sample size, like n = 1000, you might be more confident that your observed correlation (say of .5) was not a spurious correlation. If chance is not producing your result, then something else is. Finally, notice how your confidence about whether or not chance is mucking about with your results depends on your sample size. If you only obtained 10 samples per measurement, and found \( r = .5 \), you should not be as confident that your correlation reflects a real relationship. Instead, you can see that \( r \)'s of .5 happen fairly often by chance alone.

Pro tip: when you run an experiment you get to decide how many samples you will collect, which means you can choose to narrow the window of chance. Then, if you find a relationship in the data, you can be more confident that your finding is real, and not just something that happened by chance.

Some more movies

Let's ingrain these ideas with some more movies. When our sample-size is small (N is small), sampling error can cause all sorts of "patterns" in the data. This makes it possible, and indeed common, for "correlations" to occur between two sets of numbers. When we increase the sample-size, sampling error is reduced, making it less possible for "correlations" to occur just by chance alone. When N is large, chance has less of an opportunity to operate.

Watching how correlation behaves when there is no correlation

Below we randomly sample numbers for two variables, plot them, and show the correlation using a line. There are four panels, each showing a different number of observations in the samples, from 10, 50, and 100 to 1000 in each sample. Remember, because we are randomly sampling numbers, there should be no relationship between the X and Y variables. But, as we have been discussing, because of chance, we can sometimes observe a correlation. The important thing to watch is how the line behaves across the four panels. The line twirls around in all directions when the sample size is 10. It also moves around quite a bit when the sample size is 50 or 100. It still moves a bit when the sample size is 1000, but much less. In all cases we expect that the line should be flat, but every time we take new samples, sometimes the line shows us pseudo patterns.

Which line should you trust? Well, hopefully you can see that the line for 1000 samples is the most stable. It tends to be very flat every time, and it does not depend so much on the particular sample. The line with 10 observations per sample goes all over the place. The take home here is that if someone told you that they found a correlation, you should want to know how many observations they had in their sample. If they only had 10 observations, how could you trust the claim that there was a correlation? You can't!!! Not now that you know that samples that small can do all sorts of things by chance alone. If instead you found out the sample was very large, then you might trust that finding a little bit more. For example, in the above movie you can see that when there are 1000 samples, we never see a strong correlation, or even a weak one; the line is always close to flat. This is because chance almost never produces strong correlations when the sample size is very large.

In the above example, we sampled random numbers from a uniform distribution. Many examples of real-world data will come from a normal or approximately normal distribution. We can repeat the above, but sample random numbers from the same normal distribution.
There will still be zero actual correlation between the X and Y variables, because everything is sampled randomly. But, we still see the same behavior as above. The computed correlations for small sample-sizes fluctuate wildly, and the ones for large sample-sizes do not.

OK, so what do things look like when there actually is a correlation between variables?

Watching correlations behave when there really is a correlation

Sometimes there really are correlations between two variables that are not caused by chance. Below, we get to watch a movie of four scatter plots. Each shows the correlation between two variables. Again, we change the sample-size in steps of 10, 50, 100, and 1000. The data have been programmed to contain a real positive correlation. So, we should expect that the line will be going up from the bottom left to the top right. However, there is still variability in the data. So, this time, sampling error due to chance will fuzz the correlation. We know it is there, but sometimes chance will cause the correlation to be hidden.

Notice that in the top left panel (sample-size 10), the line is twirling around much more than in the other panels. Every new set of samples produces different correlations. Sometimes the line even goes flat or downward. However, as we increase sample-size, we can see that the line doesn't change very much; it is always going up, showing a positive correlation.

The main takeaway here is that even when there is a positive correlation between two things, you might not be able to see it if your sample size is small. For example, you might get unlucky with the one sample that you measured. Your sample could show a negative correlation, even when the actual correlation is positive! Unfortunately, in the real world we usually only have the sample that we collected, so we always have to wonder if we got lucky or unlucky. Fortunately, if you want to remove luck, all you need to do is collect larger samples. Then you will be much more likely to observe the real pattern, rather than the pattern that can be introduced by chance.

3.07: Summary

In this section we have talked about correlation, and started to build some intuitions about inferential statistics, which is the major topic of the remaining chapters. For now, the main ideas are:

1. We can measure relationships in data using things like correlation.
2. The correlations we measure can be produced by numerous things, so they are hard to interpret.
3. Correlations can be produced by chance, so they have the potential to be completely meaningless.
4. However, we can create a model of exactly what chance can do. The model tells us whether chance is more or less likely to produce correlations of different sizes.
5. We can use the chance model to help us make decisions about our own data. We can compare the correlation we found in our data to the model, then ask whether or not chance could have, or was likely to have, produced our results.

Salsburg, David. 2001. The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century. Macmillan.
I have studied many languages: French, Spanish, and a little Italian, but no one told me that Statistics was a foreign language. —Charmaine J. Forde

Note

Sections 4.1 & 4.9 - Adapted text by Danielle Navarro. Sections 4.10-4.11 & 4.13 - Mix of Matthew Crump & Danielle Navarro. Sections 4.12-4.13 - Adapted text by Danielle Navarro.

Up to this point in the book, we've discussed some of the key ideas in experimental design, and we've talked a little about how you can summarize a data set. To a lot of people, this is all there is to statistics: it's about calculating averages, collecting all the numbers, drawing pictures, and putting them all in a report somewhere. Kind of like stamp collecting, but with numbers. However, statistics covers much more than that. In fact, descriptive statistics is one of the smallest parts of statistics, and one of the least powerful. The bigger and more useful part of statistics is that it provides tools that let you make inferences about data.

Once you start thinking about statistics in these terms – that statistics is there to help us draw inferences from data – you start seeing examples of it everywhere. For instance, here's a tiny extract from a newspaper article in the Sydney Morning Herald (30 Oct 2010):

"I have a tough job," the Premier said in response to a poll which found her government is now the most unpopular Labor administration in polling history, with a primary vote of just 23 per cent.

This kind of remark is entirely unremarkable in the papers or in everyday life, but let's have a think about what it entails. A polling company has conducted a survey, usually a pretty big one because they can afford it. I'm too lazy to track down the original survey, so let's just imagine that they called 1000 voters at random, and 230 (23%) of those claimed that they intended to vote for the party. For the 2010 Federal election, the Australian Electoral Commission reported 4,610,795 enrolled voters in New South Wales; so the opinions of the remaining 4,609,795 voters (about 99.98% of voters) remain unknown to us. Even assuming that no-one lied to the polling company, the only thing we can say with 100% confidence is that the true primary vote is somewhere between 230/4610795 (about 0.005%) and 4610025/4610795 (about 99.98%). So, on what basis is it legitimate for the polling company, the newspaper, and the readership to conclude that the ALP primary vote is only about 23%?

The answer to the question is pretty obvious: if I call 1000 people at random, and 230 of them say they intend to vote for the ALP, then it seems very unlikely that these are the only 230 people out of the entire voting public who actually intend to do so. In other words, we assume that the data collected by the polling company is pretty representative of the population at large. But how representative? Would we be surprised to discover that the true ALP primary vote is actually 24%? 29%? 37%? At this point everyday intuition starts to break down a bit. No-one would be surprised by 24%, and everybody would be surprised by 37%, but it's a bit hard to say whether 29% is plausible. We need some more powerful tools than just looking at the numbers and guessing.

Inferential statistics provides the tools that we need to answer these sorts of questions, and since these kinds of questions lie at the heart of the scientific enterprise, they take up the lion's share of every introductory course on statistics and research methods.
However, our tools for making statistical inferences are (1) built on top of probability theory, and (2) dependent on an understanding of how samples behave when you take them from distributions (which are themselves defined by probability theory). So, this chapter has two main parts: a brief introduction to probability theory, and an introduction to sampling from distributions.
Before we start talking about probability theory, it's helpful to spend a moment thinking about the relationship between probability and statistics. The two disciplines are closely related but they're not identical. Probability theory is "the doctrine of chances". It's a branch of mathematics that tells you how often different kinds of events will happen. For example, all of these questions are things you can answer using probability theory:

• What are the chances of a fair coin coming up heads 10 times in a row?
• If I roll two six-sided dice, how likely is it that I'll roll two sixes?
• How likely is it that five cards drawn from a perfectly shuffled deck will all be hearts?
• What are the chances that I'll win the lottery?

Notice that all of these questions have something in common. In each case the "truth of the world" is known, and my question relates to what kind of events will happen. In the first question I know that the coin is fair, so there's a 50% chance that any individual coin flip will come up heads. In the second question, I know that the chance of rolling a 6 on a single die is 1 in 6. In the third question I know that the deck is shuffled properly. And in the fourth question, I know that the lottery follows specific rules. You get the idea. The critical point is that probabilistic questions start with a known model of the world, and we use that model to do some calculations. The underlying model can be quite simple. For instance, in the coin flipping example, we can write down the model like this:

$P(\textit{heads}) = 0.5$

which you can read as "the probability of heads is 0.5". As we'll see later, in the same way that percentages are numbers that range from 0% to 100%, probabilities are just numbers that range from 0 to 1. When using this probability model to answer the first question, I don't actually know exactly what's going to happen. Maybe I'll get 10 heads, like the question says. But maybe I'll get three heads. That's the key thing: in probability theory, the model is known, but the data are not.

So that's probability. What about statistics? Statistical questions work the other way around. In statistics, we do not know the truth about the world. All we have is the data, and it is from the data that we want to learn the truth about the world. Statistical questions tend to look more like these:

• If my friend flips a coin 10 times and gets 10 heads, are they playing a trick on me?
• If five cards off the top of the deck are all hearts, how likely is it that the deck was shuffled?
• If the lottery commissioner's spouse wins the lottery, how likely is it that the lottery was rigged?

This time around, the only thing we have are data. What I know is that I saw my friend flip the coin 10 times and it came up heads every time. And what I want to infer is whether or not I should conclude that what I just saw was actually a fair coin being flipped 10 times in a row, or whether I should suspect that my friend is playing a trick on me. The data I have look like this:

H H H H H H H H H H

and what I'm trying to do is work out which "model of the world" I should put my trust in. If the coin is fair, then the model I should adopt is one that says that the probability of heads is 0.5; that is, $P(\textit{heads}) = 0.5$. If the coin is not fair, then I should conclude that the probability of heads is not 0.5, which we would write as $P(\textit{heads}) \neq 0.5$. In other words, the statistical inference problem is to figure out which of these probability models is right.
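As a small sketch of the kind of calculation that probability theory licenses under the fair-coin model, here is the chance of a run of 10 heads in a row (the statistical question of what to conclude after actually seeing such a run is the harder one, and comes next):

```
# Probability of 10 heads in 10 flips, assuming the fair-coin model
0.5^10                              # about 0.00098, roughly 1 in 1000

# The same number from R's binomial distribution function
dbinom(10, size = 10, prob = 0.5)
```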
Clearly, the statistical question isn’t the same as the probability question, but they’re deeply connected to one another. Because of this, a good introduction to statistical theory will start with a discussion of what probability is and how it works.
textbooks/stats/Applied_Statistics/Answering_Questions_with_Data_-__Introductory_Statistics_for_Psychology_Students_(Crump)/04%3A_Probability_Sampling_and_Estimation/4.01%3A_How_are_Probability_and_Statistics_Different.txt
Let’s start with the first of these questions. What is “probability”? It might seem surprising to you, but while statisticians and mathematicians (mostly) agree on what the rules of probability are, there’s much less of a consensus on what the word really means. It seems weird because we’re all very comfortable using words like “chance”, “likely”, “possible” and “probable”, and it doesn’t seem like it should be a very difficult question to answer. If you had to explain “probability” to a five year old, you could do a pretty good job. But if you’ve ever had that experience in real life, you might walk away from the conversation feeling like you didn’t quite get it right, and that (like many everyday concepts) it turns out that you don’t really know what it’s all about.

So I’ll have a go at it. Let’s suppose I want to bet on a soccer game between two teams of robots, Arduino Arsenal and C Milan. After thinking about it, I decide that there is an 80% probability that Arduino Arsenal will win. What do I mean by that? Here are three possibilities…

• They’re robot teams, so I can make them play over and over again, and if I did that, Arduino Arsenal would win 8 out of every 10 games on average.
• For any given game, I would agree that betting on this game is only “fair” if a $1 bet on C Milan gives a $5 payoff (i.e., I get my $1 back plus a $4 reward for being correct), as would a $4 bet on Arduino Arsenal (i.e., my $4 bet plus a $1 reward).
• My subjective “belief” or “confidence” in an Arduino Arsenal victory is four times as strong as my belief in a C Milan victory.

Each of these seems sensible. However, they’re not identical, and not every statistician would endorse all of them. The reason is that there are different statistical ideologies (yes, really!) and depending on which one you subscribe to, you might say that some of those statements are meaningless or irrelevant. In this section, I give a brief introduction to the two main approaches that exist in the literature. These are by no means the only approaches, but they’re the two big ones.

The Frequentist View

The first of the two major approaches to probability, and the more dominant one in statistics, is referred to as the frequentist view, and it defines probability as a long-run frequency. Suppose we were to try flipping a fair coin, over and over again. By definition, this is a coin that has $P(H) = 0.5$. What might we observe? One possibility is that the first 20 flips might look like this:

T,H,H,H,H,T,T,H,H,H,H,T,H,H,T,T,T,T,T,H

In this case 11 of these 20 coin flips (55%) came up heads. Now suppose that I’d been keeping a running tally of the number of heads (which I’ll call $N_H$) that I’ve seen, across the first $N$ flips, and calculate the proportion of heads $N_H / N$ every time. Here’s what I’d get (I did literally flip coins to produce this!):

number of flips    1    2    3    4    5    6    7    8    9   10
number of heads    0    1    2    3    4    4    4    5    6    7
proportion       .00  .50  .67  .75  .80  .67  .57  .63  .67  .70

number of flips   11   12   13   14   15   16   17   18   19   20
number of heads    8    8    9   10   10   10   10   10   10   11
proportion       .73  .67  .69  .71  .67  .63  .59  .56  .53  .55

Notice that at the start of the sequence, the proportion of heads fluctuates wildly, starting at .00 and rising as high as .80. Later on, one gets the impression that it dampens out a bit, with more and more of the values actually being pretty close to the “right” answer of .50.
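If you want to reproduce this kind of running tally yourself, here is a minimal sketch in base R. It assumes nothing beyond the fair-coin model above, and the particular flips it produces will of course differ from the ones reported in the table.

# Simulate 20 flips of a fair coin: 1 = heads, 0 = tails
flips <- sample(c(0, 1), size = 20, replace = TRUE, prob = c(0.5, 0.5))

# Running count of heads after each flip, and the running proportion N_H / N
running_heads      <- cumsum(flips)
running_proportion <- running_heads / (1:20)

round(running_proportion, 2)

Changing size = 20 to size = 1000 produces the longer simulation discussed next.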
This is the frequentist definition of probability in a nutshell: flip a fair coin over and over again, and as $N$ grows large (approaches infinity, denoted $N \rightarrow \infty$), the proportion of heads will converge to 50%. There are some subtle technicalities that the mathematicians care about, but qualitatively speaking, that’s how the frequentists define probability. Unfortunately, I don’t have an infinite number of coins, or the infinite patience required to flip a coin an infinite number of times. However, I do have a computer, and computers excel at mindless repetitive tasks. So I asked my computer to simulate flipping a coin 1000 times, and then drew a picture of what happens to the proportion $N_H / N$ as $N$ increases. Actually, I did it four times, just to make sure it wasn’t a fluke. The results are shown in Figure $1$. As you can see, the proportion of observed heads eventually stops fluctuating, and settles down; when it does, the number at which it finally settles is the true probability of heads.

The frequentist definition of probability has some desirable characteristics. First, it is objective: the probability of an event is necessarily grounded in the world. The only way that probability statements can make sense is if they refer to (a sequence of) events that occur in the physical universe. Second, it is unambiguous: any two people watching the same sequence of events unfold, trying to calculate the probability of an event, must inevitably come up with the same answer.

However, it also has undesirable characteristics. Infinite sequences don’t exist in the physical world. Suppose you picked up a coin from your pocket and started to flip it. Every time it lands, it impacts on the ground. Each impact wears the coin down a bit; eventually, the coin will be destroyed. So, one might ask whether it really makes sense to pretend that an “infinite” sequence of coin flips is even a meaningful concept, or an objective one. We can’t say that an “infinite sequence” of events is a real thing in the physical universe, because the physical universe doesn’t allow infinite anything.

More seriously, the frequentist definition has a narrow scope. There are lots of things out there that human beings are happy to assign probability to in everyday language, but cannot (even in theory) be mapped onto a hypothetical sequence of events. For instance, if a meteorologist comes on TV and says, “the probability of rain in Adelaide on 2 November 2048 is 60%” we humans are happy to accept this. But it’s not clear how to define this in frequentist terms. There’s only one city of Adelaide, and only one 2 November 2048. There’s no infinite sequence of events here, just a once-off thing. Frequentist probability genuinely forbids us from making probability statements about a single event. From the frequentist perspective, it will either rain tomorrow or it will not; there is no “probability” that attaches to a single non-repeatable event.

Now, it should be said that there are some very clever tricks that frequentists can use to get around this. One possibility is that what the meteorologist means is something like this: “There is a category of days for which I predict a 60% chance of rain; if we look only across those days for which I make this prediction, then on 60% of those days it will actually rain”. It’s very weird and counterintuitive to think of it this way, but you do see frequentists do this sometimes.
The Bayesian View

The Bayesian view of probability is often called the subjectivist view, and it is a minority view among statisticians, but one that has been steadily gaining traction for the last several decades. There are many flavours of Bayesianism, making it hard to say exactly what “the” Bayesian view is. The most common way of thinking about subjective probability is to define the probability of an event as the degree of belief that an intelligent and rational agent assigns to the truth of that event. From that perspective, probabilities don’t exist in the world, but rather in the thoughts and assumptions of people and other intelligent beings.

However, in order for this approach to work, we need some way of operationalising “degree of belief”. One way that you can do this is to formalise it in terms of “rational gambling”, though there are many other ways. Suppose that I believe that there’s a 60% probability of rain tomorrow. Suppose someone offers me a bet: if it rains tomorrow, then I win $5, but if it doesn’t rain then I lose $5. Clearly, from my perspective, this is a pretty good bet. On the other hand, if I think that the probability of rain is only 40%, then it’s a bad bet to take. Thus, we can operationalise the notion of a “subjective probability” in terms of what bets I’m willing to accept.

What are the advantages and disadvantages to the Bayesian approach? The main advantage is that it allows you to assign probabilities to any event you want to. You don’t need to be limited to those events that are repeatable. The main disadvantage (to many people) is that we can’t be purely objective – specifying a probability requires us to specify an entity that has the relevant degree of belief. This entity might be a human, an alien, a robot, or even a statistician, but there has to be an intelligent agent out there that believes in things. To many people this is uncomfortable: it seems to make probability arbitrary. While the Bayesian approach does require that the agent in question be rational (i.e., obey the rules of probability), it does allow everyone to have their own beliefs; I can believe the coin is fair and you don’t have to, even though we’re both rational. The frequentist view doesn’t allow any two observers to attribute different probabilities to the same event: when that happens, then at least one of them must be wrong. The Bayesian view does not prevent this from occurring. Two observers with different background knowledge can legitimately hold different beliefs about the same event. In short, where the frequentist view is sometimes considered to be too narrow (it forbids lots of things that we want to assign probabilities to), the Bayesian view is sometimes thought to be too broad (it allows too many differences between observers).

What’s the difference? And who is right?

Now that you’ve seen each of these two views independently, it’s useful to make sure you can compare the two. Go back to the hypothetical robot soccer game at the start of the section. What do you think a frequentist and a Bayesian would say about these three statements? Which statement would a frequentist say is the correct definition of probability? Which one would a Bayesian endorse? Would some of these statements be meaningless to a frequentist or a Bayesian? If you’ve understood the two perspectives, you should have some sense of how to answer those questions. Okay, assuming you understand the difference, you might be wondering which of them is right? Honestly, I don’t know that there is a right answer.
As far as I can tell there’s nothing mathematically incorrect about the way frequentists think about sequences of events, and there’s nothing mathematically incorrect about the way that Bayesians define the beliefs of a rational agent. In fact, when you dig down into the details, Bayesians and frequentists actually agree about a lot of things. Many frequentist methods lead to decisions that Bayesians agree a rational agent would make. Many Bayesian methods have very good frequentist properties. For the most part, I’m a pragmatist, so I’ll use any statistical method that I trust. As it turns out, that makes me prefer Bayesian methods, for reasons I’ll explain towards the end of the book, but I’m not fundamentally opposed to frequentist methods.

Not everyone is quite so relaxed. For instance, consider Sir Ronald Fisher, one of the towering figures of 20th century statistics and a vehement opponent of all things Bayesian, whose paper on the mathematical foundations of statistics referred to Bayesian probability as “an impenetrable jungle [that] arrests progress towards precision of statistical concepts” (Fisher 1922, p. 311). Or the psychologist Paul Meehl, who suggests that relying on frequentist methods could turn you into “a potent but sterile intellectual rake who leaves in his merry path a long train of ravished maidens but no viable scientific offspring” (Meehl 1967, p. 114). The history of statistics, as you might gather, is not devoid of entertainment.
textbooks/stats/Applied_Statistics/Answering_Questions_with_Data_-__Introductory_Statistics_for_Psychology_Students_(Crump)/04%3A_Probability_Sampling_and_Estimation/4.02%3A_What_Does_Probability_Mean.txt
Ideological arguments between Bayesians and frequentists notwithstanding, it turns out that people mostly agree on the rules that probabilities should obey. There are lots of different ways of arriving at these rules. The most commonly used approach is based on the work of Andrey Kolmogorov, one of the great Soviet mathematicians of the 20th century. I won’t go into a lot of detail, but I’ll try to give you a bit of a sense of how it works. And in order to do so, I’m going to have to talk about my pants.

Introducing Probability Distributions

One of the disturbing truths about my life is that I only own 5 pairs of pants: three pairs of jeans, the bottom half of a suit, and a pair of tracksuit pants. Even sadder, I’ve given them names: I call them $X_1$, $X_2$, $X_3$, $X_4$ and $X_5$. I really do: that’s why they call me Mister Imaginative. Now, on any given day, I pick out exactly one pair of pants to wear. Not even I’m so stupid as to try to wear two pairs of pants, and thanks to years of training I never go outside without wearing pants anymore. If I were to describe this situation using the language of probability theory, I would refer to each pair of pants (i.e., each $X$) as an elementary event. The key characteristic of elementary events is that every time we make an observation (e.g., every time I put on a pair of pants), then the outcome will be one and only one of these events. Like I said, these days I always wear exactly one pair of pants, so my pants satisfy this constraint. Similarly, the set of all possible events is called a sample space. Granted, some people would call it a “wardrobe”, but that’s because they’re refusing to think about my pants in probabilistic terms. Sad.

Okay, now that we have a sample space (a wardrobe), which is built from lots of possible elementary events (pants), what we want to do is assign a probability to each of these elementary events. For an event $X$, the probability of that event $P(X)$ is a number that lies between 0 and 1. The bigger the value of $P(X)$, the more likely the event is to occur. So, for example, if $P(X) = 0$, it means the event $X$ is impossible (i.e., I never wear those pants). On the other hand, if $P(X) = 1$ it means that event $X$ is certain to occur (i.e., I always wear those pants). For probability values in the middle, it means that I sometimes wear those pants. For instance, if $P(X) = 0.5$ it means that I wear those pants half of the time.

At this point, we’re almost done. The last thing we need to recognise is that “something always happens”. Every time I put on pants, I really do end up wearing pants (crazy, right?). What this somewhat trite statement means, in probabilistic terms, is that the probabilities of the elementary events need to add up to 1. This is known as the law of total probability, not that any of us really care. More importantly, if these requirements are satisfied, then what we have is a probability distribution. Here is an example of a probability distribution:

Which pants?     Label    Probability
Blue jeans       $X_1$    $P(X_1) = .5$
Grey jeans       $X_2$    $P(X_2) = .3$
Black jeans      $X_3$    $P(X_3) = .1$
Black suit       $X_4$    $P(X_4) = 0$
Blue tracksuit   $X_5$    $P(X_5) = .1$

Each of the events has a probability that lies between 0 and 1, and if we add up the probability of all events, they sum to 1. Awesome. We can even draw a nice bar graph to visualise this distribution, as shown in Figure $1$. And at this point, we’ve all achieved something.
You’ve learned what a probability distribution is, and I’ve finally managed to find a way to create a graph that focuses entirely on my pants. Everyone wins! The only other thing that I need to point out is that probability theory allows you to talk about non-elementary events as well as elementary ones. The easiest way to illustrate the concept is with an example. In the pants example, it’s perfectly legitimate to refer to the probability that I wear jeans. In this scenario, the “Dan wears jeans” event is said to have happened as long as the elementary event that actually did occur is one of the appropriate ones; in this case “blue jeans”, “black jeans” or “grey jeans”. In mathematical terms, we defined the “jeans” event $E$ to correspond to the set of elementary events $(X_1, X_2, X_3)$. If any of these elementary events occurs, then $E$ is also said to have occurred. Having decided to write down the definition of $E$ this way, it’s pretty straightforward to state what the probability $P(E)$ is: we just add everything up. In this particular case

$P(E) = P(X_1) + P(X_2) + P(X_3) \nonumber$

and, since the probabilities of blue, grey and black jeans respectively are .5, .3 and .1, the probability that I wear jeans is equal to .9.

At this point you might be thinking that this is all terribly obvious and simple, and you’d be right. All we’ve really done is wrap some basic mathematics around a few common sense intuitions. However, from these simple beginnings it’s possible to construct some extremely powerful mathematical tools. I’m definitely not going to go into the details in this book, but what I will do is list some of the other rules that probabilities satisfy. These rules can be derived from the simple assumptions that I’ve outlined above. We don’t actually use them for much else in this book, but it’s worth seeing where they come from.

Table $1$: Some basic rules that probabilities must satisfy. You don’t really need to know these rules in order to understand the analyses that we’ll talk about later in the book, but they are important if you want to understand probability theory a bit more deeply.

English        Notation          Formula
not $A$        $P(\neg A)$       $= 1-P(A)$
$A$ or $B$     $P(A \cup B)$     $= P(A) + P(B) - P(A \cap B)$
$A$ and $B$    $P(A \cap B)$     $= P(A|B) P(B)$

Now that we have the ability to “define” non-elementary events in terms of elementary ones, we can actually use this to construct (or, if you want to be all mathematicallish, “derive”) the rules of probability listed above. While I’m pretty confident that very few of my readers actually care about how these rules are constructed, I’m going to show you anyway: even though it’s boring and you’ll probably never have a lot of use for these derivations, if you read through it once or twice and try to see how it works, you’ll find that probability starts to feel a bit less mysterious, and with any luck a lot less daunting. So here goes.

Firstly, in order to construct the rules I’m going to need a sample space $X$ that consists of a bunch of elementary events $x$, and two non-elementary events, which I’ll call $A$ and $B$. Let’s say:

$\begin{array}{rcl} X &=& (x_1, x_2, x_3, x_4, x_5) \\ A &=& (x_1, x_2, x_3) \\ B &=& (x_3, x_4) \end{array} \nonumber$

To make this a bit more concrete, let’s suppose that we’re still talking about the pants distribution.
If so, $A$ corresponds to the event “jeans”, and $B$ corresponds to the event “black”:

$\begin{array}{rcl} \mbox{"jeans''} &=& (\mbox{"blue jeans''}, \mbox{"grey jeans''}, \mbox{"black jeans''}) \\ \mbox{"black''} &=& (\mbox{"black jeans''}, \mbox{"black suit''}) \end{array} \nonumber$

So now let’s start checking the rules that I’ve listed in the table. In the first line, the table says that

$P(\neg A) = 1- P(A) \nonumber$

and what it means is that the probability of “not $A$” is equal to 1 minus the probability of $A$. A moment’s thought (and a tedious example) makes it obvious why this must be true. If $A$ corresponds to the event that I wear jeans (i.e., one of $x_1$ or $x_2$ or $x_3$ happens), then the only meaningful definition of “not $A$” (which is mathematically denoted as $\neg A$) is to say that $\neg A$ consists of all elementary events that don’t belong to $A$. In the case of the pants distribution it means that $\neg A = (x_4, x_5)$, or, to say it in English: “not jeans” consists of all pairs of pants that aren’t jeans (i.e., the black suit and the blue tracksuit). Consequently, every single elementary event belongs to either $A$ or $\neg A$, but not both. Okay, so now let’s rearrange our statement above:

$P(\neg A) + P(A) = 1 \nonumber$

which is a trite way of saying either I do wear jeans or I don’t wear jeans: the probability of “not jeans” plus the probability of “jeans” is 1. Mathematically:

$\begin{array}{rcl} P(\neg A) &=& P(x_4) + P(x_5) \\ P(A) &=& P(x_1) + P(x_2) + P(x_3) \end{array} \nonumber$

so therefore

$\begin{array}{rcl} P(\neg A) + P(A) &=& P(x_1) + P(x_2) + P(x_3) + P(x_4) + P(x_5) \\ &=& \sum_{x \in X} P(x) \\ &=& 1 \end{array} \nonumber$

Excellent. It all seems to work. Wow, I can hear you saying. That’s a lot of $x$s to tell me the freaking obvious. And you’re right: this is freaking obvious. The whole point of probability theory is to formalise and mathematise a few very basic common sense intuitions. So let’s carry this line of thought forward a bit further. In the last section I defined an event corresponding to not $A$, which I denoted $\neg A$. Let’s now define two new events that correspond to important everyday concepts: $A$ and $B$, and $A$ or $B$. To be precise:

English statement                        Mathematical notation
“$A$ and $B$” both happen                $A \cap B$
at least one of “$A$ or $B$” happens     $A \cup B$

Since $A$ and $B$ are both defined in terms of our elementary events (the $x$s) we’re going to need to try to describe $A \cap B$ and $A \cup B$ in terms of our elementary events too. Can we do this? Yes we can. The only way that both $A$ and $B$ can occur is if the elementary event that we observe turns out to belong to both $A$ and $B$. Thus “$A \cap B$” includes only those elementary events that belong to both $A$ and $B$…

$\begin{array}{rcl} A &=& (x_1, x_2, x_3) \\ B &=& (x_3, x_4) \\ A \cap B & = & (x_3) \end{array} \nonumber$

So, um, the only way that I can wear “jeans” $(x_1, x_2, x_3)$ and “black pants” $(x_3, x_4)$ is if I wear “black jeans” $(x_3)$. Another victory for the bloody obvious. At this point, you’re not going to be at all shocked by the definition of $A \cup B$, though you’re probably going to be extremely bored by it. The only way that I can wear “jeans” or “black pants” is if the elementary pants that I actually do wear belongs to $A$ or to $B$, or to both. So…

$\begin{array}{rcl} A &=& (x_1, x_2, x_3) \\ B &=& (x_3, x_4) \\ A \cup B & = & (x_1, x_2, x_3, x_4) \end{array} \nonumber$

Oh yeah baby. Mathematics at its finest.
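As a brief aside, the same bookkeeping can be done in R using character vectors to stand in for the sets of elementary events. This is just an illustrative sketch; the names x1 to x5 simply mirror the notation above.

# Elementary events (the pants), and the two non-elementary events
X <- c("x1", "x2", "x3", "x4", "x5")
A <- c("x1", "x2", "x3")   # the "jeans" event
B <- c("x3", "x4")         # the "black" event

intersect(A, B)   # elementary events in both A and B: "x3"
union(A, B)       # elementary events in A or B (or both): "x1" "x2" "x3" "x4"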
So, we’ve defined what we mean by $A \cap B$ and $A \cup B$. Now let’s assign probabilities to these events. More specifically, let’s start by verifying the rule that claims that:

$P(A \cup B) = P(A) + P(B) - P(A \cap B) \nonumber$

Using our definitions earlier, we know that $A \cup B = (x_1, x_2, x_3, x_4)$, so

$P(A \cup B) = P(x_1) + P(x_2) + P(x_3) + P(x_4) \nonumber$

and making similar use of the fact that we know what elementary events belong to $A$, $B$ and $A \cap B$…

$\begin{array}{rcl} P(A) &=& P(x_1) + P(x_2) + P(x_3) \\ P(B) &=& P(x_3) + P(x_4) \\ P(A \cap B) &=& P(x_3) \end{array} \nonumber$

and therefore

$\begin{array}{rcl} P(A) + P(B) - P(A \cap B) &=& P(x_1) + P(x_2) + P(x_3) + P(x_3) + P(x_4) - P(x_3) \\ &=& P(x_1) + P(x_2) + P(x_3) + P(x_4) \\ &=& P(A \cup B) \end{array} \nonumber$

Done.

The next concept we need to define is the notion of “$B$ given $A$”, which is typically written $B | A$. Here’s what I mean: suppose that I get up one morning, and put on a pair of pants. An elementary event $x$ has occurred. Suppose further that I yell out to my wife (who is in the other room, and so cannot see my pants) “I’m wearing jeans today!”. Assuming that she believes that I’m telling the truth, she knows that $A$ is true. Given that she knows that $A$ has happened, what is the conditional probability that $B$ is also true? Well, let’s think about what she knows. Here are the facts:

• The non-jeans events are impossible. If $A$ is true, then we know that the only possible elementary events that could have occurred are $x_1$, $x_2$ and $x_3$ (i.e., the jeans). The non-jeans events $x_4$ and $x_5$ are now impossible, and must be assigned probability zero. In other words, our sample space has been restricted to the jeans events. But it’s still the case that the probabilities of these events must sum to 1: we know for sure that I’m wearing jeans.
• She’s learned nothing about which jeans I’m wearing. Before I made my announcement that I was wearing jeans, she already knew that I was five times as likely to be wearing blue jeans ($P(x_1) = 0.5$) as to be wearing black jeans ($P(x_3) = 0.1$). My announcement doesn’t change this… I said nothing about what colour my jeans were, so it must remain the case that $P(x_1) / P(x_3)$ stays the same, at a value of 5.

There’s only one way to satisfy these constraints: set the impossible events to have zero probability (i.e., $P(x | A) = 0$ if $x$ is not in $A$), and then divide the probabilities of all the others by $P(A)$. In this case, since $P(A) = 0.9$, we divide by 0.9. This gives:

which pants?     elementary event    old prob, $P(x)$    new prob, $P(x | A)$
blue jeans       $x_1$               0.5                 0.556
grey jeans       $x_2$               0.3                 0.333
black jeans      $x_3$               0.1                 0.111
black suit       $x_4$               0                   0
blue tracksuit   $x_5$               0.1                 0

In mathematical terms, we say that

$P(x | A) = \frac{P(x)}{P(A)} \nonumber$

if $x \in A$, and $P(x|A) = 0$ otherwise. And therefore…

$\begin{array}{rcl} P(B | A) &=& P(x_3 | A) + P(x_4 | A) \\ &=& \displaystyle\frac{P(x_3)}{P(A)} + 0 \\ &=& \displaystyle\frac{P(x_3)}{P(A)} \end{array} \nonumber$

Now, recalling that $A \cap B = (x_3)$, we can write this as

$P(B | A) = \frac{P(A \cap B)}{P(A)} \nonumber$

and if we multiply both sides by $P(A)$ we obtain:

$P(A \cap B) = P(B| A) P(A) \nonumber$

which is the third rule that we had listed in the table.
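If you’d like to check these derivations numerically, here is a small sketch using the pants probabilities from earlier. It’s an illustration in base R rather than anything the derivation requires; the object names are my own.

# Probabilities of the five elementary events (the pants distribution)
p <- c(x1 = 0.5, x2 = 0.3, x3 = 0.1, x4 = 0, x5 = 0.1)

P_A     <- sum(p[c("x1", "x2", "x3")])        # P(jeans)          = 0.9
P_B     <- sum(p[c("x3", "x4")])              # P(black)          = 0.1
P_AandB <- p["x3"]                            # P(jeans and black) = 0.1
P_AorB  <- sum(p[c("x1", "x2", "x3", "x4")])  # P(jeans or black)  = 0.9

# The "or" rule: P(A u B) = P(A) + P(B) - P(A n B)
P_A + P_B - P_AandB          # 0.9, matching P_AorB

# The "and" rule via conditional probability: P(A n B) = P(B|A) P(A)
P_BgivenA <- P_AandB / P_A   # 0.111..., matching the table above
P_BgivenA * P_A              # 0.1, matching P_AandB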
textbooks/stats/Applied_Statistics/Answering_Questions_with_Data_-__Introductory_Statistics_for_Psychology_Students_(Crump)/04%3A_Probability_Sampling_and_Estimation/4.03%3A_Basic_Probability_Theory.txt
As you might imagine, probability distributions vary enormously, and there’s an enormous range of distributions out there. However, they aren’t all equally important. In fact, the vast majority of the content in this book relies on one of five distributions: the binomial distribution, the normal distribution, the $t$ distribution, the $\chi^2$ (“chi-square”) distribution and the $F$ distribution. Given this, what I’ll do over the next few sections is provide a brief introduction to all five of these, paying special attention to the binomial and the normal. I’ll start with the binomial distribution, since it’s the simplest of the five.

Introducing the Binomial

The theory of probability originated in the attempt to describe how games of chance work, so it seems fitting that our discussion of the binomial distribution should involve a discussion of rolling dice and flipping coins. Let’s imagine a simple “experiment”: in my hot little hand I’m holding 20 identical six-sided dice. On one face of each die there’s a picture of a skull; the other five faces are all blank. If I proceed to roll all 20 dice, what’s the probability that I’ll get exactly 4 skulls? Assuming that the dice are fair, we know that the chance of any one die coming up skulls is 1 in 6; to say this another way, the skull probability for a single die is approximately $.167$. This is enough information to answer our question, so let’s have a look at how it’s done.

As usual, we’ll want to introduce some names and some notation. We’ll let $N$ denote the number of dice rolls in our experiment, which is often referred to as the size parameter of our binomial distribution. We’ll also use $\theta$ to refer to the probability that a single die comes up skulls, a quantity that is usually called the success probability of the binomial. Finally, we’ll use $X$ to refer to the results of our experiment, namely the number of skulls I get when I roll the dice. Since the actual value of $X$ is due to chance, we refer to it as a random variable. In any case, now that we have all this terminology and notation, we can use it to state the problem a little more precisely. The quantity that we want to calculate is the probability that $X = 4$ given that we know that $\theta = 0.167$ and $N=20$. The general “form” of the thing I’m interested in calculating could be written as

$P(X \ | \ \theta, N) \nonumber$

and we’re interested in the special case where $X=4$, $\theta = .167$ and $N=20$. There’s only one more piece of notation I want to refer to before moving on to discuss the solution to the problem. If I want to say that $X$ is generated randomly from a binomial distribution with parameters $\theta$ and $N$, the notation I would use is as follows:

$X \sim \mbox{Binomial}(\theta, N) \nonumber$

Yeah, yeah. I know what you’re thinking: notation, notation, notation. Really, who cares? Very few readers of this book are here for the notation, so I should probably move on and talk about how to use the binomial distribution. I’ve included the formula for the binomial distribution in Table 4.3.1, since some readers may want to play with it themselves, but since most people probably don’t care that much and because we don’t need the formula in this book, I won’t talk about it in any detail. Instead, I just want to show you what the binomial distribution looks like. To that end, Figure $1$ plots the binomial probabilities for all possible values of $X$ for our dice rolling experiment, from $X=0$ (no skulls) all the way up to $X=20$ (all skulls).
Note that this is basically a bar chart, and is no different to the “pants probability” plot I drew in Figure 4.3.1. On the horizontal axis we have all the possible events, and on the vertical axis we can read off the probability of each of those events. So, the probability of rolling 4 skulls out of 20 dice is about 0.20 (the actual answer is 0.2022036, as we’ll see in a moment). In other words, you’d expect that to happen about 20% of the times you repeated this experiment.

Working with the binomial distribution in R

R has a function called dbinom that calculates binomial probabilities for us. The main arguments to the function are:

• x. This is a number, or vector of numbers, specifying the outcomes whose probability you’re trying to calculate.
• size. This is a number telling R the size of the experiment.
• prob. This is the success probability for any one trial in the experiment.

So, in order to calculate the probability of getting x = 4 skulls, from an experiment of size = 20 trials, in which the probability of getting a skull on any one trial is prob = 1/6… well, the command I would use is simply this:

dbinom( x = 4, size = 20, prob = 1/6 )

0.202203581217173

To give you a feel for how the binomial distribution changes when we alter the values of $\theta$ and $N$, let’s suppose that instead of rolling dice, I’m actually flipping coins. This time around, my experiment involves flipping a fair coin repeatedly, and the outcome that I’m interested in is the number of heads that I observe. In this scenario, the success probability is now $\theta = 1/2$. Suppose I were to flip the coin $N=20$ times. In this example, I’ve changed the success probability, but kept the size of the experiment the same. What does this do to our binomial distribution? Well, as Figure $2 (a)$ shows, the main effect of this is to shift the whole distribution, as you’d expect. Okay, what if we flipped a coin $N=100$ times? Well, in that case, we get Figure $2 (b)$. The distribution stays roughly in the middle, but there’s a bit more variability in the possible outcomes.

At this point, I should probably explain the name of the dbinom function. Obviously, the “binom” part comes from the fact that we’re working with the binomial distribution, but the “d” prefix is probably a bit of a mystery. In this section I’ll give a partial explanation: specifically, I’ll explain why there is a prefix. As for why it’s a “d” specifically, you’ll have to wait until the next section. What’s going on here is that R actually provides four functions in relation to the binomial distribution. These four functions are dbinom, pbinom, rbinom and qbinom, and each one calculates a different quantity of interest. Not only that, R does the same thing for every probability distribution that it implements. No matter what distribution you’re talking about, there’s a d function, a p function, an r function and a q function. Let’s have a look at what all four functions do. Firstly, all four versions of the function require you to specify the size and prob arguments: no matter what you’re trying to get R to calculate, it needs to know what the parameters are. However, they differ in terms of what the other argument is, and what the output is. So let’s look at them one at a time.

• The d form we’ve already seen: you specify a particular outcome x, and the output is the probability of obtaining exactly that outcome. (The “d” is short for density, but ignore that for now.)
• The p form calculates the cumulative probability. You specify a particular quantile q, and it tells you the probability of obtaining an outcome smaller than or equal to q.
• The q form calculates the quantiles of the distribution. You specify a probability value p, and it gives you the corresponding percentile. That is, the value of the variable for which there’s a probability p of obtaining an outcome lower than that value.
• The r form is a random number generator: specifically, it generates n random outcomes from the distribution.

This is a little abstract, so let’s look at some concrete examples. Again, we’ve already covered dbinom so let’s focus on the other three versions. We’ll start with pbinom, and we’ll go back to the skull-dice example. Again, I’m rolling 20 dice, and each die has a 1 in 6 chance of coming up skulls. Suppose, however, that I want to know the probability of rolling 4 or fewer skulls. If I wanted to, I could use the dbinom function to calculate the exact probability of rolling 0 skulls, 1 skull, 2 skulls, 3 skulls and 4 skulls and then add these up, but there’s a faster way. Instead, I can calculate this using the pbinom function. Here’s the command:

pbinom( q = 4, size = 20, prob = 1/6 )

0.768749218992842

In other words, there is a 76.9% chance that I will roll 4 or fewer skulls. Or, to put it another way, R is telling us that a value of 4 is actually the 76.9th percentile of this binomial distribution.

Next, let’s consider the qbinom function. Let’s say I want to calculate the 75th percentile of the binomial distribution. If we’re sticking with our skulls example, I would use the following command to do this:

qbinom( p = 0.75, size = 20, prob = 1/6 )

4

Hm. There’s something odd going on here. Let’s think this through. What the qbinom function appears to be telling us is that the 75th percentile of the binomial distribution is 4, even though we saw from the pbinom function that 4 is actually the 76.9th percentile. And it’s definitely the pbinom function that is correct. I promise. The weirdness here comes from the fact that our binomial distribution doesn’t really have a 75th percentile. Not really. Why not? Well, there’s a 56.7% chance of rolling 3 or fewer skulls (you can type pbinom(3, 20, 1/6) to confirm this if you want), and a 76.9% chance of rolling 4 or fewer skulls. So there’s a sense in which the 75th percentile should lie “in between” 3 and 4 skulls. But that makes no sense at all! You can’t roll 20 dice and have 3.9 of them come up skulls. This issue can be handled in different ways: you could report an in-between value (or interpolated value, to use the technical name) like 3.9, you could round down (to 3) or you could round up (to 4). The qbinom function rounds upwards: if you ask for a percentile that doesn’t actually exist (like the 75th in this example), R finds the smallest value for which the percentile rank is at least what you asked for. In this case, since the “true” 75th percentile (whatever that would mean) lies somewhere between 3 and 4 skulls, R rounds up and gives you an answer of 4. This subtlety is tedious, I admit, but thankfully it’s only an issue for discrete distributions like the binomial. The other distributions that I’ll talk about (normal, $t$, $\chi^2$ and $F$) are all continuous, and so R can always return an exact quantile whenever you ask for it.

Finally, we have the random number generator. To use the rbinom function, you specify how many times R should “simulate” the experiment using the n argument, and it will generate random outcomes from the binomial distribution.
So, for instance, suppose I were to repeat my die rolling experiment 100 times. I could get R to simulate the results of these experiments by using the following command:

rbinom( n = 100, size = 20, prob = 1/6 )

 [1] 2 4 3 5 5 1 2 7 2 1 4 3 4 1 1 4 5 2 1 2 7 6 1 1 1
[26] 2 4 4 3 6 4 3 3 3 4 2 4 4 2 1 3 2 5 4 2 4 1 1 6 2
[51] 3 1 4 1 1 3 4 4 3 6 5 5 5 3 2 4 5 5 4 4 2 1 1 5 3
[76] 6 1 6 3 4 3 5 2 7 1 5 3 2 2 0 3 4 3 6 6 4 5 1 4 3

As you can see, these numbers are pretty much what you’d expect given the distribution shown in Figure $1$. Most of the time I roll somewhere between 1 and 5 skulls. There are a lot of subtleties associated with random number generation using a computer, but for the purposes of this book we don’t need to worry too much about them.
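Before moving on, here is a small sketch that ties the four functions together. The first line writes out the standard binomial formula by hand (the formula itself isn’t reproduced above, so take this as standard background rather than a quotation from Table 4.3.1); the rest simply checks that the d, p and r forms agree with one another.

# The standard binomial formula, written out explicitly:
# P(X = x | theta, N) = choose(N, x) * theta^x * (1 - theta)^(N - x)
choose(20, 4) * (1/6)^4 * (5/6)^16          # 0.2022036..., same as dbinom(4, 20, 1/6)

# The cumulative probability is just the sum of the individual probabilities
sum( dbinom(0:4, size = 20, prob = 1/6) )   # 0.7687492..., same as pbinom(4, 20, 1/6)

# The average of many simulated experiments should be close to N * theta = 3.33
mean( rbinom(n = 10000, size = 20, prob = 1/6) )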
textbooks/stats/Applied_Statistics/Answering_Questions_with_Data_-__Introductory_Statistics_for_Psychology_Students_(Crump)/04%3A_Probability_Sampling_and_Estimation/4.04%3A_The_Binomial_Distribution.txt
While the binomial distribution is conceptually the simplest distribution to understand, it’s not the most important one. That particular honour goes to the normal distribution, which is also referred to as “the bell curve” or a “Gaussian distribution”. A normal distribution is described using two parameters, the mean of the distribution $\mu$ and the standard deviation of the distribution $\sigma$. The notation that we sometimes use to say that a variable $X$ is normally distributed is as follows:

$X \sim \mbox{Normal}(\mu,\sigma)$

Of course, that’s just notation. It doesn’t tell us anything interesting about the normal distribution itself. The mathematical formula for the normal distribution is:

$p(x \ | \ \mu, \sigma) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)$

The formula is important enough that everyone who learns statistics should at least look at it, but since this is an introductory text I don’t want to focus on it too much. Instead, we look at how R can be used to work with normal distributions. The R functions for the normal distribution are dnorm(), pnorm(), qnorm() and rnorm(). However, they behave in pretty much exactly the same way as the corresponding functions for the binomial distribution, so there’s not a lot that you need to know. The only thing that I should point out is that the argument names for the parameters are mean and sd. In pretty much every other respect, there’s nothing else to add.

Instead of focusing on the maths, let’s try to get a sense for what it means for a variable to be normally distributed. To that end, have a look at Figure $1$, which plots a normal distribution with mean $\mu = 0$ and standard deviation $\sigma = 1$. You can see where the name “bell curve” comes from: it looks a bit like a bell. Notice that, unlike the plots that I drew to illustrate the binomial distribution, the picture of the normal distribution in Figure $1$ shows a smooth curve instead of “histogram-like” bars. This isn’t an arbitrary choice: the normal distribution is continuous, whereas the binomial is discrete. For instance, in the die rolling example from the last section, it was possible to get 3 skulls or 4 skulls, but impossible to get 3.9 skulls. With this in mind, let’s see if we can’t get an intuition for how the normal distribution works.

Firstly, let’s have a look at what happens when we play around with the parameters of the distribution. One parameter we can change is the mean. This will shift the distribution to the right or left. The animation below shows a normal distribution with mean = 0, moving up and down from mean = 0 to mean = 5. Note, when you change the mean the whole shape of the distribution does not change, it just shifts from left to right. In the animation the normal distribution bounces up and down a little, but that’s just a quirk of the animation (plus it looks fun that way). In contrast, if we increase the standard deviation while keeping the mean constant, the peak of the distribution stays in the same place, but the distribution gets wider. The next animation shows what happens when you start with a small standard deviation (sd = 0.5), and move to larger and larger standard deviations (up to sd = 5). As you can see, the distribution spreads out and becomes wider as the standard deviation increases. Notice, though, that when we widen the distribution, the height of the peak shrinks. This has to happen: in the same way that the heights of the bars that we used to draw a discrete binomial distribution have to sum to 1, the total area under the curve for the normal distribution must equal 1.
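Since a printed page can’t show the animations, here is a minimal static sketch you could run in R to see the same two effects. The particular means and standard deviations are just ones I’ve picked for illustration.

# Changing the mean shifts the curve left or right without changing its shape
curve( dnorm(x, mean = 0, sd = 1), from = -4, to = 9, ylab = "Probability Density" )
curve( dnorm(x, mean = 5, sd = 1), add = TRUE, lty = 2 )

# Increasing the standard deviation widens the curve and lowers its peak
curve( dnorm(x, mean = 0, sd = 0.5), from = -6, to = 6, ylab = "Probability Density" )
curve( dnorm(x, mean = 0, sd = 3), add = TRUE, lty = 2 )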
Before moving on, I want to point out one important characteristic of the normal distribution. Irrespective of what the actual mean and standard deviation are, 68.3% of the area falls within 1 standard deviation of the mean. Similarly, 95.4% of the distribution falls within 2 standard deviations of the mean, and 99.7% of the distribution is within 3 standard deviations.

Probability density

There’s something I’ve been trying to hide throughout my discussion of the normal distribution, something that some introductory textbooks omit completely. They might be right to do so: this “thing” that I’m hiding is weird and counterintuitive even by the admittedly distorted standards that apply in statistics. Fortunately, it’s not something that you need to understand at a deep level in order to do basic statistics: rather, it’s something that starts to become important later on when you move beyond the basics. So, if it doesn’t make complete sense, don’t worry: try to make sure that you follow the gist of it.

Throughout my discussion of the normal distribution, there’s been one or two things that don’t quite make sense. Perhaps you noticed that the $y$-axis in these figures is labelled “Probability Density” rather than just “Probability”. Maybe you noticed that I used $p(X)$ instead of $P(X)$ when giving the formula for the normal distribution. Maybe you’re wondering why R uses the “d” prefix for functions like dnorm(). And maybe, just maybe, you’ve been playing around with the dnorm() function, and you accidentally typed in a command like this:

dnorm( x = 1, mean = 1, sd = 0.1 )

3.98942280401433

And if you’ve done the last part, you’re probably very confused. I’ve asked R to calculate the probability that x = 1, for a normally distributed variable with mean = 1 and standard deviation sd = 0.1; and it tells me that the probability is 3.99. But, as we discussed earlier, probabilities can’t be larger than 1. So either I’ve made a mistake, or that’s not a probability.

As it turns out, the second answer is correct. What we’ve calculated here isn’t actually a probability: it’s something else. To understand what that something is, you have to spend a little time thinking about what it really means to say that $X$ is a continuous variable. Let’s say we’re talking about the temperature outside. The thermometer tells me it’s 23 degrees, but I know that’s not really true. It’s not exactly 23 degrees. Maybe it’s 23.1 degrees, I think to myself. But I know that that’s not really true either, because it might actually be 23.09 degrees. But, I know that… well, you get the idea. The tricky thing with genuinely continuous quantities is that you never really know exactly what they are.

Now think about what this implies when we talk about probabilities. Suppose that tomorrow’s maximum temperature is sampled from a normal distribution with mean 23 and standard deviation 1. What’s the probability that the temperature will be exactly 23 degrees? The answer is “zero”, or possibly, “a number so close to zero that it might as well be zero”. Why is this? It’s like trying to throw a dart at an infinitely small dart board: no matter how good your aim, you’ll never hit it. In real life you’ll never get a value of exactly 23. It’ll always be something like 23.1 or 22.99998 or something. In other words, it’s completely meaningless to talk about the probability that the temperature is exactly 23 degrees.
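You can check both of these claims with pnorm. This is just a sketch using the temperature example above, and the second half anticipates the point about ranges made in the next paragraph.

# Area within 1, 2 and 3 standard deviations of the mean (any normal distribution)
pnorm(1) - pnorm(-1)   # about 0.683
pnorm(2) - pnorm(-2)   # about 0.954
pnorm(3) - pnorm(-3)   # about 0.997

# Temperature example: mean 23, standard deviation 1.
# The probability of landing in a range is the area under the curve...
pnorm(23.5, mean = 23, sd = 1) - pnorm(22.5, mean = 23, sd = 1)   # about 0.383

# ...whereas shrinking the range towards a single exact value sends that area towards 0
pnorm(23.0005, mean = 23, sd = 1) - pnorm(22.9995, mean = 23, sd = 1)   # about 0.0004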
However, in everyday language, if I told you that it was 23 degrees outside and it turned out to be 22.9998 degrees, you probably wouldn’t call me a liar. Because in everyday language, “23 degrees” usually means something like “somewhere between 22.5 and 23.5 degrees”. And while it doesn’t feel very meaningful to ask about the probability that the temperature is exactly 23 degrees, it does seem sensible to ask about the probability that the temperature lies between 22.5 and 23.5, or between 20 and 30, or any other range of temperatures. The point of this discussion is to make clear that, when we’re talking about continuous distributions, it’s not meaningful to talk about the probability of a specific value. However, what we can talk about is the probability that the value lies within a particular range of values. To find out the probability associated with a particular range, what you need to do is calculate the “area under the curve”.

Okay, so that explains part of the story. I’ve explained a little bit about how continuous probability distributions should be interpreted (i.e., area under the curve is the key thing), but I haven’t actually explained what the dnorm() function actually calculates. Equivalently, what does the formula for $p(x)$ that I described earlier actually mean? Obviously, $p(x)$ doesn’t describe a probability, but what is it? The name for this quantity $p(x)$ is a probability density, and in terms of the plots we’ve been drawing, it corresponds to the height of the curve. The densities aren’t meaningful in and of themselves, but they’re “rigged” to ensure that the area under the curve is always interpretable as genuine probabilities. To be honest, that’s about as much as you really need to know for now.

4.06: Other useful distributions

There are many other useful distributions; these include the `t` distribution, the `F` distribution, and the chi-squared distribution. We will soon discover more about the `t` and `F` distributions when we discuss t-tests and ANOVAs in later chapters.

4.07: Summary of Probability

We’ve talked about what probability means, and why statisticians can’t agree on what it means. We talked about the rules that probabilities have to obey. And we introduced the idea of a probability distribution, and spent a good chunk of the chapter talking about some of the more important probability distributions that statisticians work with. We talked about things like this:

• Probability theory versus statistics
• Frequentist versus Bayesian views of probability
• Basics of probability theory
• Binomial distribution, normal distribution

As you’d expect, this coverage is by no means exhaustive. Probability theory is a large branch of mathematics in its own right, entirely separate from its application to statistics and data analysis. As such, there are thousands of books written on the subject and universities generally offer multiple classes devoted entirely to probability theory. Even the “simpler” task of documenting standard probability distributions is a big topic. Fortunately for you, very little of this is necessary. You’re unlikely to need to know dozens of statistical distributions when you go out and do real world data analysis, and you definitely won’t need them for this book, but it never hurts to know that there are other possibilities out there. Picking up on that last point, there’s a sense in which this whole chapter is something of a digression.
Many undergraduate psychology classes on statistics skim over this content very quickly (I know mine did), and even the more advanced classes will often “forget” to revisit the basic foundations of the field. Most academic psychologists would not know the difference between probability and density, and until recently very few would have been aware of the difference between Bayesian and frequentist probability. However, I think it’s important to understand these things before moving on to the applications. For example, there are a lot of rules about what you’re “allowed” to say when doing statistical inference, and many of these can seem arbitrary and weird. However, they start to make sense once you understand that there is this Bayesian/frequentist distinction.
textbooks/stats/Applied_Statistics/Answering_Questions_with_Data_-__Introductory_Statistics_for_Psychology_Students_(Crump)/04%3A_Probability_Sampling_and_Estimation/4.05%3A_The_normal_distribution.txt
Remember, the role of descriptive statistics is to concisely summarize what we do know. In contrast, the purpose of inferential statistics is to “learn what we do not know from what we do”. What kinds of things would we like to learn about? And how do we learn them? These are the questions that lie at the heart of inferential statistics, and they are traditionally divided into two “big ideas”: estimation and hypothesis testing. The goal in this chapter is to introduce the first of these big ideas, estimation theory, but we’ll talk about sampling theory first because estimation theory doesn’t make sense until you understand sampling. So, this chapter divides into sampling theory, and how to make use of sampling theory to discuss how statisticians think about estimation. We have already done lots of sampling, so you are already familiar with some of the big ideas.

Sampling theory plays a huge role in specifying the assumptions upon which your statistical inferences rely. And in order to talk about “making inferences” the way statisticians think about it, we need to be a bit more explicit about what it is that we’re drawing inferences from (the sample) and what it is that we’re drawing inferences about (the population). In almost every situation of interest, what we have available to us as researchers is a sample of data. We might have run an experiment with some number of participants; a polling company might have phoned some number of people to ask questions about voting intentions; etc. Regardless: the data set available to us is finite, and incomplete. We can’t possibly get every person in the world to do our experiment; a polling company doesn’t have the time or the money to ring up every voter in the country, etc. In our earlier discussion of descriptive statistics, this sample was the only thing we were interested in. Our only goal was to find ways of describing, summarizing and graphing that sample. This is about to change.

Defining a population

A sample is a concrete thing. You can open up a data file, and there’s the data from your sample. A population, on the other hand, is a more abstract idea. It refers to the set of all possible people, or all possible observations, that you want to draw conclusions about, and is generally much bigger than the sample. In an ideal world, the researcher would begin the study with a clear idea of what the population of interest is, since the process of designing a study and testing hypotheses about the data that it produces does depend on the population about which you want to make statements. However, that doesn’t always happen in practice: usually the researcher has a fairly vague idea of what the population is and designs the study as best he/she can on that basis.

Sometimes it’s easy to state the population of interest. For instance, in the “polling company” example, the population consisted of all voters enrolled at the time of the study – millions of people. The sample was a set of 1000 people who all belong to that population. In most cases the situation is much less simple. In a typical psychological experiment, determining the population of interest is a bit more complicated. Suppose I run an experiment using 100 undergraduate students as my participants. My goal, as a cognitive scientist, is to try to learn something about how the mind works. So, which of the following would count as “the population”:

• All of the undergraduate psychology students at the University of Adelaide?
• Undergraduate psychology students in general, anywhere in the world?
• Australians currently living?
• Australians of similar ages to my sample?
• Anyone currently alive?
• Any human being, past, present or future?
• Any biological organism with a sufficient degree of intelligence operating in a terrestrial environment?
• Any intelligent being?

Each of these defines a real group of mind-possessing entities, all of which might be of interest to me as a cognitive scientist, and it’s not at all clear which one ought to be the true population of interest.

Simple random samples

Irrespective of how we define the population, the critical point is that the sample is a subset of the population, and our goal is to use our knowledge of the sample to draw inferences about the properties of the population. The relationship between the two depends on the procedure by which the sample was selected. This procedure is referred to as a sampling method, and it is important to understand why it matters. To keep things simple, imagine we have a bag containing 10 chips. Each chip has a unique letter printed on it, so we can distinguish between the 10 chips. The chips come in two colors, black and white. This set of chips is the population of interest, and it is depicted graphically on the left of Figure $1$. As you can see from looking at the picture, there are 4 black chips and 6 white chips, but of course in real life we wouldn’t know that unless we looked in the bag.

Now imagine you run the following “experiment”: you shake up the bag, close your eyes, and pull out 4 chips without putting any of them back into the bag. First out comes the $a$ chip (black), then the $c$ chip (white), then $j$ (white) and then finally $b$ (black). If you wanted, you could then put all the chips back in the bag and repeat the experiment, as depicted on the right hand side of Figure $1$. Each time you get different results, but the procedure is identical in each case. Because the same procedure can lead to different results each time, we refer to it as a random process. However, because we shook the bag before pulling any chips out, it seems reasonable to think that every chip has the same chance of being selected. A procedure in which every member of the population has the same chance of being selected is called a simple random sample. The fact that we did not put the chips back in the bag after pulling them out means that you can’t observe the same thing twice, and in such cases the observations are said to have been sampled without replacement.

To help make sure you understand the importance of the sampling procedure, consider an alternative way in which the experiment could have been run. Suppose that my 5-year old son had opened the bag, and decided to pull out four black chips without putting any of them back in the bag. This biased sampling scheme is depicted in Figure $2$. Now consider the evidentiary value of seeing 4 black chips and 0 white chips. Clearly, it depends on the sampling scheme, does it not? If you know that the sampling scheme is biased to select only black chips, then a sample that consists of only black chips doesn’t tell you very much about the population! For this reason, statisticians really like it when a data set can be considered a simple random sample, because it makes the data analysis much easier.
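As a brief sketch, the chip experiment is easy to mimic in R with the sample function. The letters and colours below are stand-ins for the chips in the figure: the text only tells us that a and b are black and that c and j are white, so my choice of which other chips are black is purely an assumption for illustration.

# The population: 10 chips, 4 black and 6 white (black chips other than a and b
# are my own guess, just to make the vector complete)
chips <- c(a = "black", b = "black", c = "white", d = "black", e = "white",
           f = "white", g = "white", h = "black", i = "white", j = "white")

# A simple random sample of 4 chips, drawn without replacement
sample(chips, size = 4, replace = FALSE)

# Setting replace = TRUE gives the "with replacement" procedure described next
sample(chips, size = 4, replace = TRUE)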
A third procedure is worth mentioning. This time around we close our eyes, shake the bag, and pull out a chip. This time, however, we record the observation and then put the chip back in the bag. Again we close our eyes, shake the bag, and pull out a chip. We then repeat this procedure until we have 4 chips. Data sets generated in this way are still simple random samples, but because we put the chips back in the bag immediately after drawing them it is referred to as a sample with replacement. The difference between this situation and the first one is that it is possible to observe the same population member multiple times, as illustrated in Figure $3$. Most psychology experiments tend to be sampling without replacement, because the same person is not allowed to participate in the experiment twice. However, most statistical theory is based on the assumption that the data arise from a simple random sample with replacement. In real life, this very rarely matters. If the population of interest is large (e.g., has more than 10 entities!) the difference between sampling with and without replacement is too small to be concerned with. The difference between simple random samples and biased samples, on the other hand, is not such an easy thing to dismiss.

Most samples are not simple random samples

As you can see from looking at the list of possible populations that I showed above, it is almost impossible to obtain a simple random sample from most populations of interest. When I run experiments, I’d consider it a minor miracle if my participants turned out to be a random sampling of the undergraduate psychology students at Adelaide university, even though this is by far the narrowest population that I might want to generalize to. A thorough discussion of other types of sampling schemes is beyond the scope of this book, but to give you a sense of what’s out there I’ll list a few of the more important ones:

• Stratified sampling. Suppose your population is (or can be) divided into several different sub-populations, or strata. Perhaps you’re running a study at several different sites, for example. Instead of trying to sample randomly from the population as a whole, you instead try to collect a separate random sample from each of the strata. Stratified sampling is sometimes easier to do than simple random sampling, especially when the population is already divided into the distinct strata. It can also be more efficient than simple random sampling, especially when some of the sub-populations are rare. For instance, when studying schizophrenia it would be much better to divide the population into two strata (schizophrenic and not-schizophrenic), and then sample an equal number of people from each group. If you selected people randomly, you would get so few schizophrenic people in the sample that your study would be useless. This specific kind of stratified sampling is referred to as oversampling because it makes a deliberate attempt to over-represent rare groups.
• Snowball sampling is a technique that is especially useful when sampling from a “hidden” or hard to access population, and is especially common in social sciences. For instance, suppose the researchers want to conduct an opinion poll among transgender people. The research team might only have contact details for a few trans folks, so the survey starts by asking them to participate (stage 1). At the end of the survey, the participants are asked to provide contact details for other people who might want to participate. In stage 2, those new contacts are surveyed. The process continues until the researchers have sufficient data.
The big advantage to snowball sampling is that it gets you data in situations where it might otherwise be impossible to get any. On the statistical side, the main disadvantage is that the sample is highly non-random, and non-random in ways that are difficult to address. On the real life side, the disadvantage is that the procedure can be unethical if not handled well, because hidden populations are often hidden for a reason. I chose transgender people as an example here to highlight this: if you weren't careful you might end up outing people who don't want to be outed (very, very bad form), and even if you don't make that mistake it can still be intrusive to use people's social networks to study them. It's certainly very hard to get people's informed consent before contacting them, yet in many cases the simple act of contacting them and saying "hey we want to study you" can be hurtful. Social networks are complex things, and just because you can use them to get data doesn't always mean you should.

• Convenience sampling is more or less what it sounds like. The samples are chosen in a way that is convenient to the researcher, and not selected at random from the population of interest. Snowball sampling is one type of convenience sampling, but there are many others. A common example in psychology is studies that rely on undergraduate psychology students. These samples are generally non-random in two respects: firstly, reliance on undergraduate psychology students automatically means that your data are restricted to a single sub-population. Secondly, the students usually get to pick which studies they participate in, so the sample is a self-selected subset of psychology students, not a randomly selected subset. In real life, most studies are convenience samples of one form or another. This is sometimes a severe limitation, but not always.

How much does it matter if you don't have a simple random sample?

Okay, so real world data collection tends not to involve nice simple random samples. Does that matter? A little thought should make it clear to you that it can matter if your data are not a simple random sample: just think about the difference between Figures $1$ and $2$. However, it's not quite as bad as it sounds. Some types of biased samples are entirely unproblematic. For instance, when using a stratified sampling technique you actually know what the bias is because you created it deliberately, often to increase the effectiveness of your study, and there are statistical techniques that you can use to adjust for the biases you've introduced (not covered in this book!). So in those situations it's not a problem.

More generally though, it's important to remember that random sampling is a means to an end, not the end in itself. Let's assume you've relied on a convenience sample, and as such you can assume it's biased. A bias in your sampling method is only a problem if it causes you to draw the wrong conclusions. When viewed from that perspective, I'd argue that we don't need the sample to be randomly generated in every respect: we only need it to be random with respect to the psychologically-relevant phenomenon of interest. Suppose I'm doing a study looking at working memory capacity. In study 1, I actually have the ability to sample randomly from all human beings currently alive, with one exception: I can only sample people born on a Monday. In study 2, I am able to sample randomly from the Australian population. I want to generalize my results to the population of all living humans.
Which study is better? The answer, obviously, is study 1. Why? Because we have no reason to think that being "born on a Monday" has any interesting relationship to working memory capacity. In contrast, I can think of several reasons why "being Australian" might matter. Australia is a wealthy, industrialized country with a very well-developed education system. People growing up in that system will have had life experiences much more similar to the experiences of the people who designed the tests for working memory capacity. This shared experience might easily translate into similar beliefs about how to "take a test", a shared assumption about how psychological experimentation works, and so on. These things might actually matter. For instance, "test taking" style might have taught the Australian participants to direct their attention exclusively to fairly abstract test materials, relative to people who haven't grown up in a similar environment, leading to a misleading picture of what working memory capacity is.

There are two points hidden in this discussion. Firstly, when designing your own studies, it's important to think about what population you care about, and try hard to sample in a way that is appropriate to that population. In practice, you're usually forced to put up with a "sample of convenience" (e.g., psychology lecturers sample psychology students because that's the least expensive way to collect data, and our coffers aren't exactly overflowing with gold), but if so you should at least spend some time thinking about what the dangers of this practice might be. Secondly, if you're going to criticize someone else's study because they've used a sample of convenience rather than laboriously sampling randomly from the entire human population, at least have the courtesy to offer a specific theory as to how this might have distorted the results. Remember, everyone in science is aware of this issue, and does what they can to alleviate it. Merely pointing out that "the study only included people from group BLAH" is entirely unhelpful, and borders on being insulting to the researchers, who are aware of the issue. They just don't happen to be in possession of the infinite supply of time and money required to construct the perfect sample. In short, if you want to offer a responsible critique of the sampling process, then be helpful. Rehashing the blindingly obvious truisms that I've been rambling on about in this section isn't helpful.

Population parameters and sample statistics

Okay. Setting aside the thorny methodological issues associated with obtaining a random sample, let's consider a slightly different issue. Up to this point we have been talking about populations the way a scientist might. To a psychologist, a population might be a group of people. To an ecologist, a population might be a group of bears. In most cases the populations that scientists care about are concrete things that actually exist in the real world. Statisticians, however, are a funny lot. On the one hand, they are interested in real world data and real science in the same way that scientists are. On the other hand, they also operate in the realm of pure abstraction in the way that mathematicians do. As a consequence, statistical theory tends to be a bit abstract in how a population is defined.
In much the same way that psychological researchers operationalize our abstract theoretical ideas in terms of concrete measurements, statisticians operationalize the concept of a "population" in terms of mathematical objects that they know how to work with. You've already come across these objects: they're called probability distributions (remember, the place where data comes from). The idea is quite simple. Let's say we're talking about IQ scores. To a psychologist, the population of interest is a group of actual humans who have IQ scores. A statistician "simplifies" this by operationally defining the population as the probability distribution depicted in Figure $4a$. IQ tests are designed so that the average IQ is 100, the standard deviation of IQ scores is 15, and the distribution of IQ scores is normal. These values are referred to as the population parameters because they are characteristics of the entire population. That is, we say that the population mean $\mu$ is 100, and the population standard deviation $\sigma$ is 15.

Now suppose we collect some data. We select 100 people at random and administer an IQ test, giving a simple random sample from the population. The sample would consist of a collection of numbers like this:

106 101 98 80 74 ... 107 72 100

Each of these IQ scores is sampled from a normal distribution with mean 100 and standard deviation 15. So if I plot a histogram of the sample, I get something like the one shown in Figure $4b$. As you can see, the histogram is roughly the right shape, but it's a very crude approximation to the true population distribution shown in Figure $4c$. The mean of the sample is fairly close to the population mean 100 but not identical. In this case, it turns out that the people in the sample have a mean IQ of 98.5, and the standard deviation of their IQ scores is 15.9. These sample statistics are properties of the data set, and although they are fairly similar to the true population values, they are not the same. In general, sample statistics are the things you can calculate from your data set, and the population parameters are the things you want to learn about. Later on in this chapter we'll talk about how you can estimate population parameters using your sample statistics, and how to work out how confident you are in your estimates, but before we get to that there are a few more ideas in sampling theory that you need to know about.
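Before moving on, here is a minimal R sketch of how a sample like the one just described could be generated. The exact numbers will differ every time you run it, because the sample is random:

IQ <- round(rnorm(n = 100, mean = 100, sd = 15))  # one simple random sample of 100 IQ scores
mean(IQ)  # sample mean: close to 100, but not exactly (98.5 in the example above)
sd(IQ)    # sample standard deviation: close to 15, but not exactly
hist(IQ)  # a rough, bumpy approximation to the smooth population distribution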
We just looked at the results of one fictitious IQ experiment with a sample size of $N=100$. The results were somewhat encouraging: the true population mean is 100, and the sample mean of 98.5 is a pretty reasonable approximation to it. In many scientific studies that level of precision is perfectly acceptable, but in other situations you need to be a lot more precise. If we want our sample statistics to be much closer to the population parameters, what can we do about it? The obvious answer is to collect more data. Suppose that we ran a much larger experiment, this time measuring the IQs of 10,000 people. We can simulate the results of this experiment using R, using the rnorm() function, which generates random numbers sampled from a normal distribution. For an experiment with a sample size of n = 10000, and a population with mean = 100 and sd = 15, R produces our fake IQ data using these commands:

IQ <- rnorm(n=10000, mean=100, sd=15) # generate IQ scores
IQ <- round(IQ) # make round numbers

Cool, we just generated 10,000 fake IQ scores. Where did they go? Well, they went into the variable IQ on my computer. You can do the same on your computer too by copying the above code. 10,000 numbers is too many numbers to look at. We can look at the first 100 like this:

IQ <- rnorm(n=10000, mean=100, sd=15)
IQ <- round(IQ)
print(IQ[1:100])

[1] 97 98 101 114 110 105 84 95 96 103 86 118 99 93 64 101 117 104
[19] 106 73 81 98 100 111 103 100 91 115 107 98 107 76 70 107 104 86
[37] 120 91 103 129 92 98 105 108 96 87 94 97 102 80 98 76 131 107
[55] 104 114 90 109 104 86 124 73 131 114 104 83 99 91 83 105 107 107
[73] 125 74 112 87 76 103 105 88 97 86 99 90 117 121 86 109 132 89
[91] 97 132 76 131 98 111 118 98 94 98

We can compute the mean IQ using the command mean(IQ) and the standard deviation using the command sd(IQ), and draw a histogram using hist(). The histogram of this much larger sample is shown in Figure 4.8.4c. Even a moment's inspection makes clear that the larger sample is a much better approximation to the true population distribution than the smaller one. This is reflected in the sample statistics: the mean IQ for the larger sample turns out to be 99.9, and the standard deviation is 15.1. These values are now very close to the true population parameters.

I feel a bit silly saying this, but the thing I want you to take away from this is that large samples generally give you better information. I feel silly saying it because it's so bloody obvious that it shouldn't need to be said. In fact, it's such an obvious point that when Jacob Bernoulli – one of the founders of probability theory – formalized this idea back in 1713, he was kind of a jerk about it. Here's how he described the fact that we all share this intuition:

For even the most stupid of men, by some instinct of nature, by himself and without any instruction (which is a remarkable thing), is convinced that the more observations have been made, the less danger there is of wandering from one's goal (see Stigler, 1986, p. 65).

Okay, so the passage comes across as a bit condescending (not to mention sexist), but his main point is correct: it really does feel obvious that more data will give you better answers. The question is, why is this so? Not surprisingly, this intuition that we all share turns out to be correct, and statisticians refer to it as the law of large numbers. The law of large numbers is a mathematical law that applies to many different sample statistics, but the simplest way to think about it is as a law about averages.
The sample mean is the most obvious example of a statistic that relies on averaging (because that’s what the mean is… an average), so let’s look at that. When applied to the sample mean, what the law of large numbers states is that as the sample gets larger, the sample mean tends to get closer to the true population mean. Or, to say it a little bit more precisely, as the sample size “approaches” infinity (written as $N \rightarrow \infty$) the sample mean approaches the population mean ($\bar{X} \rightarrow \mu$). I don’t intend to subject you to a proof that the law of large numbers is true, but it’s one of the most important tools for statistical theory. The law of large numbers is the thing we can use to justify our belief that collecting more and more data will eventually lead us to the truth. For any particular data set, the sample statistics that we calculate from it will be wrong, but the law of large numbers tells us that if we keep collecting more data those sample statistics will tend to get closer and closer to the true population parameters.
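Here is a minimal R sketch of the law of large numbers at work: we draw one very large set of fake IQ scores and watch the sample mean settle down toward the population mean of 100 as we use more and more of the data. The seed is arbitrary, just there to make the sketch reproducible:

set.seed(1)  # arbitrary seed, so you can reproduce the same fake data
IQ <- rnorm(n = 100000, mean = 100, sd = 15)
for (n in c(10, 100, 1000, 10000, 100000)) {
  cat("N =", n, " sample mean =", round(mean(IQ[1:n]), 2), "\n")
}

Each time you make the sample bigger, the sample mean tends to wander less far from 100.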
The law of large numbers is a very powerful tool, but it's not going to be good enough to answer all our questions. Among other things, all it gives us is a "long run guarantee". In the long run, if we were somehow able to collect an infinite amount of data, then the law of large numbers guarantees that our sample statistics will be correct. But as John Maynard Keynes famously argued in economics, a long run guarantee is of little use in real life:

[The] long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task, if in tempestuous seasons they can only tell us, that when the storm is long past, the ocean is flat again. (Keynes, 1923, p. 80)

As in economics, so too in psychology and statistics. It is not enough to know that we will eventually arrive at the right answer when calculating the sample mean. Knowing that an infinitely large data set will tell me the exact value of the population mean is cold comfort when my actual data set has a sample size of $N=100$. In real life, then, we must know something about the behavior of the sample mean when it is calculated from a more modest data set!

Sampling distribution of the sample means

"Oh no, what is the sampling distribution of the sample means? Is that even allowed in English?" Yes, unfortunately, this is allowed. The sampling distribution of the sample means is the next most important thing you will need to understand. IT IS SO IMPORTANT THAT IT IS NECESSARY TO USE ALL CAPS. It is only confusing at first because it's long and uses sampling and sample in the same phrase.

Don't worry, we've been prepping you for this. You know what a distribution is, right? It's where numbers come from. It makes some numbers occur more or less frequently, or the same as other numbers. You know what a sample is, right? It's the numbers we take from a distribution. So, what could the sampling distribution of the sample means refer to?

First, what do you think the sample means refers to? Well, if you took a sample of numbers, you would have a bunch of numbers…then, you could compute the mean of those numbers. The sample mean is the mean of the numbers in the sample. That is all. So, what is this distribution you speak of? Well, what if you took a bunch of samples, put one here, put one there, put some other ones other places. You have a lot of different samples of numbers. You could compute the mean for each of them. Then you would have a bunch of means. What do those means look like? Well, if you put them in a histogram, you could find out. If you did that, you would be looking at (roughly) a distribution, AKA the sampling distribution of the sample means.

"I'm following along sort of, why would I want to do this instead of watching Netflix…". Because the sampling distribution of the sample means gives you another window into chance. A very useful one that you can control, just like your remote control, by pressing the right design buttons.

Seeing the pieces

To make a sampling distribution of the sample means, we just need the following:

1. A distribution to take numbers from
2. A bunch of different samples from the distribution
3. The means of each of the samples
4. Get all of the sample means, and plot them in a histogram

Question for yourself: What do you think the sampling distribution of the sample means will look like? Will it tend to look like the shape of the distribution that the samples came from? Or not? Good question, think about it.

Let's do those four things.
We will sample numbers from the uniform distribution; it looks like this if we are sampling from the set of integers from 1 to 10:

OK, now let's take a bunch of samples from that distribution. We will set our sample-size to 20. It's easier to see how the sample mean behaves in a movie. Each histogram shows a new sample. The red line shows where the mean of the sample is. The samples are all very different from each other, but the red line doesn't move around very much, it always stays near the middle. However, the red line does move around a little bit, and this variation in the sample means is exactly what the sampling distribution of the sample mean describes.

OK, what have we got here? We have an animation of 10 different samples. Each sample has 20 observations, and these are summarized in each of the histograms that show up in the animation. Each histogram has a red line. The red line shows you where the mean of each sample is located. So, we have found the sample means for the 10 different samples from a uniform distribution.

First question. Are the sample means all the same? The answer is no. They are all kind of similar to each other though, they are all around five plus or minus a few numbers. This is interesting. Although all of our samples look pretty different from one another, the means of our samples look more similar than different.

Second question. What should we do with the means of our samples? Well, how about we collect them all, and then plot a histogram of them. This would allow us to see what the distribution of the sample means looks like. The next histogram is just this. Except, rather than taking 10 samples, we will take 10,000 samples. For each of them we will compute the means. So, we will have 10,000 means. This is the histogram of the sample means:

"Wait what? This doesn't look right. I thought we were taking samples from a uniform distribution. Uniform distributions are flat. THIS DOES NOT LOOK LIKE A FLAT DISTRIBUTION, WHAT IS GOING ON, AAAAAGGGHH."

We feel your pain. Remember, we are looking at the distribution of sample means. It is indeed true that the distribution of sample means does not look the same as the distribution we took the samples from. Our distribution of sample means piles up around a central value instead of being flat. In fact, distributions of sample means will almost always pile up like this, taking on a bell-like shape no matter what distribution the individual samples came from. This fact is called the central limit theorem, which we talk about later. For now, let's talk about what's happening.

Remember, we have been sampling numbers between the range 1 to 10. We are supposed to get each number with roughly equal frequency, because we are sampling from a uniform distribution. So, let's say we took a sample of 10 numbers, and happened to get one of each from 1 to 10.

`1 2 3 4 5 6 7 8 9 10`

What is the mean of those numbers? Well, it's (1+2+3+4+5+6+7+8+9+10)/10 = 55/10 = 5.5. Imagine if we took a bigger sample, say of 20 numbers, and again we got exactly 2 of each number. What would the mean be? It would be ((1+2+3+4+5+6+7+8+9+10)*2)/20 = 110/20 = 5.5. Still 5.5. You can see here that the mean value of our uniform distribution is 5.5. Now that we know this, we might expect that most of our samples will have a mean near this number. We already know that every sample won't be perfect, and it won't have exactly an equal amount of every number. So, we will expect the mean of our samples to vary a little bit. The histogram that we made shows the variation. Not surprisingly, the numbers vary around the value 5.5.
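If you'd like to build this histogram yourself, here is a minimal R sketch of the simulation just described: 10,000 samples of size 20 from the integers 1 to 10, keeping the mean of each one:

sample_means <- replicate(10000, mean(sample(1:10, size = 20, replace = TRUE)))
hist(sample_means)   # piles up around 5.5, even though the numbers came from a flat distribution
mean(sample_means)   # very close to 5.5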
Sampling distributions exist for any sample statistic!

One thing to keep in mind when thinking about sampling distributions is that any sample statistic you might care to calculate has a sampling distribution. For example, suppose that each time you sampled some numbers from an experiment you wrote down the largest number in the experiment. Doing this over and over again would give you a very different sampling distribution, namely the sampling distribution of the maximum. You could calculate the smallest number, or the mode, or the median, or the variance, or the standard deviation, or anything else from your sample. Then, you could repeat many times, and produce the sampling distribution of those statistics. Neat!

Just for fun here are some different sampling distributions for different statistics. We will take a normal distribution with mean = 100 and standard deviation = 20. Then, we'll take lots of samples with n = 50 (50 observations per sample). We'll save all of the sample statistics, then plot their histograms. Let's do it:

We just computed 4 different sampling distributions, for the mean, standard deviation, maximum value, and the median. If you just look quickly at these histograms you might think they all basically look the same. Hold up now. It's very important to look at the x-axes. They are different. For example, the sample mean goes from about 90 to 110, whereas the standard deviation goes from 15 to 25.

These sampling distributions are super important, and worth thinking about. What should you think about? Well, here's a clue. These distributions are telling you what to expect from your sample. Critically, they are telling you what you should expect from a sample, when you take one from the specific distribution that we used (normal distribution with mean = 100 and SD = 20). What have we learned? We've learned a tonne. We've learned that we can expect our sample to have a mean somewhere between 90 and 108ish. Notice, the sample means are never more extreme. We've learned that our sample will usually have some variance, and that the standard deviation will be somewhere between 15 and 25 (never much more extreme than that). We can see that the sample maximum is sometimes a big number, say between 120 and 180, but not much bigger than that. And, we can see that the median is pretty similar to the mean.

If you ever took a sample of 50 numbers, and your descriptive statistics were inside these windows, then perhaps they came from this kind of normal distribution. If your sample statistics are very different, then your sample probably did not come from this distribution. By using simulation, we can find out what samples look like when they come from distributions, and we can use this information to make inferences about whether our sample came from particular distributions.
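Here is a minimal R sketch of the kind of simulation behind those four histograms: lots of samples of size 50 from a normal distribution with mean 100 and standard deviation 20, saving four statistics from each sample:

stats <- replicate(10000, {
  x <- rnorm(n = 50, mean = 100, sd = 20)
  c(mean = mean(x), sd = sd(x), max = max(x), median = median(x))
})

par(mfrow = c(2, 2))                 # a 2 x 2 grid of panels
hist(stats["mean", ], main = "sample means")
hist(stats["sd", ], main = "sample standard deviations")
hist(stats["max", ], main = "sample maximums")
hist(stats["median", ], main = "sample medians")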
OK, so now you've seen lots of sampling distributions, and you know what the sampling distribution of the mean is. Here, we'll focus on how the sampling distribution of the mean changes as a function of sample size. Intuitively, you already know part of the answer: if you only have a few observations, the sample mean is likely to be quite inaccurate (you've already seen it bounce around). If you replicate a small experiment and recalculate the mean you'll get a very different answer; in other words, the sampling distribution is quite wide. If you replicate a large experiment and recalculate the sample mean you'll probably get the same answer you got last time, so the sampling distribution will be very narrow.

Let's give ourselves a nice movie to see everything in action. We're going to sample numbers from a normal distribution. You will see four panels, each panel represents a different sample size (n), including sample-sizes of 10, 50, 100, and 1000. A red curve shows the shape of the normal distribution we are sampling from. The grey bars show a histogram of each of the samples that we take, and a red line marks the mean of that individual sample (the middle of the grey bars). As you can see, the red line moves around a lot, especially when the sample size is small (10).

The new bits are the blue bars and the blue lines. The blue bars represent the sampling distribution of the sample mean. For example, in the panel for sample-size 10, we see a bunch of blue bars. This is a histogram of 10 sample means, taken from 10 samples of size 10. In the 50 panel, we see a histogram of 50 sample means, taken from 50 samples of size 50, and so on. The blue line in each panel is the mean of the sample means ("aaagh, it's a mean of means", yes it is).

What should you notice? Notice that the range of the blue bars shrinks as sample size increases. The sampling distribution of the mean is quite wide when the sample-size is 10, it narrows as sample-size increases to 50 and 100, and it's just one bar, right in the middle when sample-size goes to 1000. What we are seeing is that the sample means cluster more and more tightly around the population mean as sample-size increases. So, the sampling distribution of the mean is another distribution, and it has some variance. It varies more when sample-size is small, and varies less when sample-size is large. We can quantify this effect by calculating the standard deviation of the sampling distribution, which is referred to as the standard error. The standard error of a statistic is often denoted SE, and since we're usually interested in the standard error of the sample mean, we often use the acronym SEM. As you can see just by looking at the movie, as the sample size $N$ increases, the SEM decreases.

Okay, so that's one part of the story. However, there's something we've been glossing over a little bit. We've seen it already, but it's worth looking at it one more time. Here's the thing: no matter what shape your population distribution is, as $N$ increases the sampling distribution of the mean starts to look more like a normal distribution. This is the central limit theorem. To see the central limit theorem in action, we are going to look at some histograms of sample means from different kinds of distributions. It is very important to recognize that you are looking at distributions of sample means, not distributions of individual samples! Here we go, starting with sampling from a normal distribution. The red line is the distribution, the blue bars are the histogram for the sample means.
They both look normal! Let's do it again. This time we sample from a flat uniform distribution. Again, we see that the distribution of the sample means is not flat, it looks like a normal distribution. One more time with an exponential distribution. Even though far more of the numbers are small than big, the sampling distribution of the mean again does not look like the red line. Instead, it looks more normal-ish. That's the central limit theorem. It just works like that.

On the basis of these figures, it seems like we have evidence for all of the following claims about the sampling distribution of the mean:

• The mean of the sampling distribution is the same as the mean of the population
• The standard deviation of the sampling distribution (i.e., the standard error) gets smaller as the sample size increases
• The shape of the sampling distribution becomes normal as the sample size increases

As it happens, not only are all of these statements true, there is a very famous theorem in statistics that proves all three of them, known as the central limit theorem. Among other things, the central limit theorem tells us that if the population distribution has mean $\mu$ and standard deviation $\sigma$, then the sampling distribution of the mean also has mean $\mu$, and the standard error of the mean is

$\mbox{SEM} = \frac{\sigma}{ \sqrt{N} } \nonumber$

Because we divide the population standard deviation $\sigma$ by the square root of the sample size $N$, the SEM gets smaller as the sample size increases. It also tells us that the shape of the sampling distribution becomes normal.

This result is useful for all sorts of things. It tells us why large experiments are more reliable than small ones, and because it gives us an explicit formula for the standard error it tells us how much more reliable a large experiment is. It tells us why the normal distribution is, well, normal. In real experiments, many of the things that we want to measure are actually averages of lots of different quantities (e.g., arguably, "general" intelligence as measured by IQ is an average of a large number of "specific" skills and abilities), and when that happens, the averaged quantity should follow a normal distribution. Because of this mathematical law, the normal distribution pops up over and over again in real data.
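Both parts of the theorem can be checked with a small simulation. The sketch below draws many samples of size 25 from an exponential distribution (a strongly skewed distribution with mean 1 and standard deviation 1), so the theoretical SEM is $1/\sqrt{25} = 0.2$:

N <- 25
sample_means <- replicate(10000, mean(rexp(n = N, rate = 1)))  # skewed population: mean 1, sd 1

hist(sample_means)   # roughly bell-shaped, even though the population is very skewed
sd(sample_means)     # close to the theoretical SEM...
1 / sqrt(N)          # ...which is sigma / sqrt(N) = 0.2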
We are now in a position to combine some of the things we've been talking about in this chapter, and introduce you to a new tool, z-scores. It turns out we won't use z-scores very much in this textbook. However, you can't take a class on statistics and not learn about z-scores.

The first thing we show you seems to be something that many students remember from their statistics class. This thing is probably remembered because instructors may test this knowledge many times, so students have to learn it for the test. Let's look at this thing. We are going to look at a normal distribution, and we are going to draw lines through the distribution at 0, +/- 1, +/- 2, and +/- 3 standard deviations from the mean:

The figure shows a normal distribution with mean = 0, and standard deviation = 1. We've drawn lines at each of the standard deviations: -3, -2, -1, 0, 1, 2, and 3. We also show some numbers in the labels, in between each line. These numbers are proportions. For example, we see the proportion is .341 for scores that fall between the range 0 and 1. Scores between 0 and 1 occur 34.1% of the time. Scores in between -1 and 1 occur 68.2% of the time, that's more than half of the scores. Scores between 1 and 2 occur about 13.6% of the time, and scores between 2 and 3 occur even less, only 2.1% of the time.

Normal distributions always have these properties, even when they have different means and standard deviations. For example, take a look at this normal distribution, which has a mean = 100 and standard deviation = 25. Now we are looking at a normal distribution with mean = 100 and standard deviation = 25. Notice that the region between 100 and 125 contains 34.1% of the scores. This region is 1 standard deviation away from the mean (the standard deviation is 25, the mean is 100, so 25 is one whole standard deviation away from 100). As you can see, the very same proportions occur between each of the standard deviations, as they did when our standard deviation was set to 1 (with a mean of 0).

Idea behind z-scores

Sometimes it can be convenient to transform your original scores into different scores that are easier to work with. For example, if you have a bunch of proportions, like .3, .5, .6, .7, you might want to turn them into percentages like 30%, 50%, 60%, and 70%. To do that you multiply the proportions by a constant of 100. If you want to turn percentages back into proportions, you divide by a constant of 100. This kind of transformation just changes the scale of the numbers from between 0-1 to between 0-100. Otherwise, the pattern in the numbers stays the same.

The idea behind z-scores is a similar kind of transformation. The idea is to express each raw score in terms of how many standard deviations it is from the mean. For example, if I told you I got a 75% on a test, you wouldn't know how well I did compared to the rest of the class. But, if I told you that I scored 2 standard deviations above the mean, you'd know I did quite well compared to the rest of the class, because you know that most scores (if they are distributed normally) fall below 2 standard deviations above the mean. We also know, now thanks to the central limit theorem, that many of our measures, such as sample means, will be distributed normally. So, it can often be desirable to express the raw scores in terms of their standard deviations.
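The proportions quoted above don't have to be memorized; they can be checked with R's pnorm() function, which gives the area under the normal curve to the left of any score:

pnorm(1) - pnorm(0)    # about .341: scores between 0 and 1 standard deviations above the mean
pnorm(1) - pnorm(-1)   # about .682: scores within 1 standard deviation of the mean
pnorm(2) - pnorm(1)    # about .136
pnorm(3) - pnorm(2)    # about .021

# the same proportions hold for any normal distribution, e.g. mean = 100, sd = 25
pnorm(125, mean = 100, sd = 25) - pnorm(100, mean = 100, sd = 25)  # about .341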
Let's see how this looks in a table without showing you any formulas. We will look at some scores that come from a normal distribution with mean = 100 and standard deviation = 25. We will list some raw scores, along with their z-scores:

| raw | z |
|-----|---|
| 25 | -3 |
| 50 | -2 |
| 75 | -1 |
| 100 | 0 |
| 125 | 1 |
| 150 | 2 |
| 175 | 3 |

Remember, the mean is 100, and the standard deviation is 25. How many standard deviations away from the mean is a score of 100? The answer is 0, it's right on the mean. You can see the z-score for 100 is 0. How many standard deviations is 125 away from the mean? Well, the standard deviation is 25, and 125 is one whole 25 away from 100, that's a total of 1 standard deviation, so the z-score for 125 is 1. The z-score for 150 is 2, because 150 is two 25s away from 100. The z-score for 50 is -2, because 50 is two 25s away from 100 in the opposite direction. All we are doing here is re-expressing the raw scores in terms of how many standard deviations they are from the mean. Remember, the mean is always right on target, so the center of the z-score distribution is always 0.

Calculating z-scores

To calculate z-scores all you have to do is figure out how many standard deviations from the mean each number is. Let's say the mean is 100, and the standard deviation is 25. You have a score of 97. How many standard deviations from the mean is 97? First compute the difference between the score and the mean:

$97-100 = -3 \nonumber$

Alright, we have a total difference of -3. How many standard deviations does -3 represent if 1 standard deviation is 25? Clearly -3 is much smaller than 25, so it's going to be much less than 1. To figure it out, just divide -3 by the standard deviation.

$\frac{-3}{25} = -.12 \nonumber$

Our z-score for 97 is -.12. Here's the general formula:

$z = \frac{\text{raw score} - \text{mean}}{\text{standard deviation}} \nonumber$

So, for example, if we had these 10 scores from a normal distribution with mean = 100 and standard deviation = 25:

72.23 73.48 96.25 91.60 56.84 105.56 128.96 91.33 70.96 120.23

The z-scores would be:

-1.1108 -1.0608 -0.1500 -0.3360 -1.7264 0.2224 1.1584 -0.3468 -1.1616 0.8092

Once you have the z-scores, you could use them as another way to describe your data. For example, now just by looking at a score you know if it is likely or unlikely to occur, because you know how the area under the normal curve works. z-scores between -1 and 1 happen pretty often; scores greater than 1 or less than -1 still happen fairly often, but not as often. And, scores bigger than 2 or smaller than -2 don't happen very often. This is a convenient thing to do if you want to look at your numbers and get a general sense of how often they happen.

Usually you do not know the mean or the standard deviation of the population that you are drawing your sample scores from. So, you could use the mean and standard deviation of your sample as an estimate, and then use those to calculate z-scores.

Finally, z-scores are also called standardized scores, because each raw score is described in terms of how many standard deviations it is from the mean. This may well be the last time we talk about z-scores in this book. You might wonder why we even bothered telling you about them. First, it's worth knowing they are a thing. Second, they become important as your statistical prowess becomes more advanced. Third, some statistical concepts, like correlation, can be re-written in terms of z-scores, and this illuminates aspects of those statistics. Finally, they are super useful when you are dealing with a normal distribution that has a known mean and standard deviation.
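Here is the same calculation as a minimal R sketch, using the ten raw scores listed above:

raw <- c(72.23, 73.48, 96.25, 91.60, 56.84, 105.56, 128.96, 91.33, 70.96, 120.23)
z <- (raw - 100) / 25   # subtract the mean, divide by the standard deviation
round(z, 4)

# if the population mean and sd are unknown, scale() standardizes using the
# sample mean and sample standard deviation instead
scale(raw)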
Let's pause for a moment to get our bearings. We're about to go into the topic of estimation. What is that, and why should you care? First, population parameters are things about a distribution. For example, distributions have means. The mean is a parameter of the distribution. The standard deviation of a distribution is a parameter. Anything that can describe a distribution is a potential parameter. OK fine, who cares? This, I think, is a really good question. There are some good concrete reasons to care. And there are some great abstract reasons to care. Unfortunately, most of the time in research, it's the abstract reasons that matter most, and these can be the most difficult to get your head around.

Concrete population parameters

First some concrete reasons. There are real populations out there, and sometimes you want to know the parameters of them. For example, if you are a shoe company, you would want to know about the population parameters of feet size. As a first pass, you would want to know the mean and standard deviation of the population. If your company knew this, and other companies did not, your company would do better (assuming all shoes are made equal). Why would your company do better, and how could it use the parameters? Here's one good reason. As a shoe company you want to meet demand with the right amount of supply. If you make too many big or small shoes, and there aren't enough people to buy them, then you're making extra shoes that don't sell. If you don't make enough of the most popular sizes, you'll be leaving money on the table. Right? Yes. So, what would be an optimal thing to do? Perhaps you would make different amounts of shoes in each size, corresponding to the demand for each shoe size. You would know something about the demand by figuring out the frequency of each size in the population. You would need to know the population parameters to do this.

Fortunately, it's pretty easy to get the population parameters without measuring the entire population. Who has time to measure everybody's feet? Nobody, that's who. Instead, you would just need to randomly pick a bunch of people, measure their feet, and then measure the parameters of the sample. If you take a big enough sample, we have learned that the sample mean gives a very good estimate of the population mean. We will learn shortly that a version of the standard deviation of the sample also gives a good estimate of the standard deviation of the population. Perhaps shoe-sizes have a slightly different shape than a normal distribution. Here too, if you collect a big enough sample, the shape of the distribution of the sample will be a good estimate of the shape of the population. All of these are good reasons to care about estimating population parameters. But, do you run a shoe company? Probably not.

Abstract population parameters

Even when we think we are talking about something concrete in Psychology, it often gets abstract right away. Instead of measuring the population of feet-sizes, how about the population of human happiness? We all think we know what happiness is, everyone has more or less of it, there are a bunch of people, so there must be a population of happiness, right? Perhaps, but it's not very concrete. The first problem is figuring out how to measure happiness. Let's use a questionnaire. Consider these questions:

How happy are you right now on a scale from 1 to 7?
How happy are you in general on a scale from 1 to 7?
How happy are you in the mornings on a scale from 1 to 7?
How happy are you in the afternoons on a scale from 1 to 7?

1. = very unhappy
2. = unhappy
3. = sort of unhappy
4. = in the middle
5. = sort of happy
6. = happy
7. = very happy

Forget about asking these questions to everybody in the world. Let's just ask them to lots of people (our sample). What do you think would happen? Well, obviously people would give all sorts of answers, right? We could tally up the answers and plot them in a histogram. This would show us a distribution of happiness scores from our sample. "Great, fantastic!", you say. Yes, fine and dandy.

So, on the one hand we could say lots of things about the people in our sample. We could say exactly who says they are happy and who says they aren't, after all they just told us! But, what can we say about the larger population? Can we use the parameters of our sample (e.g., mean, standard deviation, shape etc.) to estimate something about a larger population? Can we infer how happy everybody else is, just from our sample? HOLD THE PHONE.

Complications with inference

Before listing a bunch of complications, let me tell you what I think we can do with our sample. Provided it is big enough, our sample parameters will be a pretty good estimate of what another sample would look like. Because of the following discussion, this is often all we can say. But, that's OK, as you see throughout this book, we can work with that!

Problem 1: Multiple populations. If you looked at a large sample of questionnaire data you will find evidence of multiple distributions inside your sample. People answer questions differently. Some people are very cautious and not very extreme. Their answers will tend to be distributed about the middle of the scale, mostly 3s, 4s, and 5s. Some people are very bi-modal, they are very happy and very unhappy, depending on time of day. These people's answers will be mostly 1s and 2s, and 6s and 7s, and those numbers look like they come from a completely different distribution. Some people are entirely happy or entirely unhappy. Again, these two "populations" of people's numbers look like two different distributions, one with mostly 6s and 7s, and one with mostly 1s and 2s. Other people will be more random, and their scores will look like a uniform distribution. So, is there a single population with parameters that we can estimate from our sample? Probably not. It could be a mixture of lots of populations with different distributions.

Problem 2: What do these questions measure? If the whole point of doing the questionnaire is to estimate the population's happiness, we really need to wonder if the sample measurements actually tell us anything about happiness in the first place. Some questions: Are people accurate in saying how happy they are? Does the measure of happiness depend on the scale, for example, would the results be different if we used 0-100, or -100 to +100, or no numbers? Does the measure of happiness depend on the wording in the question? Does a measure like this one tell us everything we want to know about happiness (probably not), and what is it missing (who knows? probably lots)?

In short, nobody knows if these kinds of questions measure what we want them to measure. We just hope that they do. Instead, we have a very good idea of the kinds of things that they actually measure. It's really quite obvious, and staring you in the face. Questionnaire measurements measure how people answer questionnaires. In other words, how people behave and answer questions when they are given a questionnaire.
This might also measure something about happiness, when the question has to do with happiness. But, it turns out people are remarkably consistent in how they answer questions, even when the questions are total nonsense, or when there are no questions at all (just numbers to choose!) (Maul, 2017). The take home complications here are that we can collect samples, but in Psychology, we often don't have a good idea of the populations that might be linked to these samples. There might be lots of populations, or the populations could be different depending on who you ask. Finally, the "population" might not be the one you want it to be.

Experiments and Population parameters

OK, so we don't own a shoe company, and we can't really identify the population of interest in Psychology, so can't we just skip this section on estimation? After all, the "population" is just too weird and abstract and useless and contentious. HOLD THE PHONE AGAIN! It turns out we can apply the things we have been learning to solve lots of important problems in research. These allow us to answer questions with the data that we collect. Parameter estimation is one of these tools. We just need to be a little bit more creative, and a little bit more abstract to use the tools.

Here is what we know already. The numbers that we measure come from somewhere; we have called this place "distributions". Distributions control how the numbers arrive. Some numbers happen more than others depending on the distribution. We assume, even if we don't know what the distribution is, or what it means, that the numbers came from one. Second, when we get some numbers, we call it a sample. This entire chapter so far has taught you one thing. When your sample is big, it resembles the distribution it came from. And, when your sample is big, it will resemble very closely what another big sample of the same thing will look like. We can use this knowledge!

Very often as Psychologists what we want to know is what causes what. We want to know if X causes something to change in Y. Does eating chocolate make you happier? Does studying improve your grades? There are bazillions of these kinds of questions. And, we want answers to them. I've been trying to be mostly concrete so far in this textbook, that's why we talk about silly things like chocolate and happiness; at least they are concrete. Let's have a go at being abstract. We can do it.

So, we want to know if X causes Y to change. What is X? What is Y? X is something you change, something you manipulate, the independent variable. Y is something you measure. So, we will be taking samples from Y. "Oh I get it, we'll take samples from Y, then we can use the sample parameters to estimate the population parameters of Y!" NO, not really, but yes sort of. We will take samples from Y, that is something we absolutely do. In fact, that is really all we ever do, which is why talking about the population of Y is kind of meaningless. We're more interested in our samples of Y, and how they behave.

So, what would happen if we removed X from the universe altogether, and then took a big sample of Y? We'll pretend Y measures something in a Psychology experiment. So, we know right away that Y is variable. When we take a big sample, it will have a distribution (because Y is variable). So, we can do things like measure the mean of Y, and measure the standard deviation of Y, and anything else we want to know about Y. Fine. What would happen if we replicated this measurement?
That is, we just take another random sample of Y, just as big as the first. What should happen is that our first sample should look a lot like our second sample. After all, we didn't do anything to Y; we just took two big samples. Both of our samples will be a little bit different (due to sampling error), but they'll be mostly the same. The bigger our samples, the more they will look the same, especially when we don't do anything to cause them to be different. In other words, we can use the parameters of one sample to estimate the parameters of a second sample, because they will tend to be the same, especially when they are large.

We are now ready for step two. You want to know if X changes Y. What do you do? You make X go up and take a big sample of Y, then look at it. You make X go down, then take a second big sample of Y and look at it. Next, you compare the two samples of Y. If X does nothing then what should you find? We already discussed that in the previous paragraph. If X does nothing, then both of your big samples of Y should be pretty similar. However, if X does something to Y, then one of your big samples of Y will be different from the other. You will have changed something about Y. Maybe X makes the mean of Y change. Or maybe X makes the variation in Y change. Or, maybe X makes the whole shape of the distribution change. If we find any big changes that can't be explained by sampling error, then we can conclude that something about X caused a change in Y! We could use this approach to learn about what causes what!

The very important idea is still about estimation, just not population parameter estimation exactly. We know that when we take samples they naturally vary. So, when we estimate a parameter of a sample, like the mean, we know we are off by some amount. When we find that two samples are different, we need to find out if the size of the difference is consistent with what sampling error can produce, or if the difference is bigger than that. If the difference is bigger, then we can be confident that sampling error didn't produce the difference. So, we can confidently infer that something else (like an X) did cause the difference. This bit of abstract thinking is what most of the rest of the textbook is about: determining whether there is a difference caused by your manipulation. There's more to the story, there always is. We can get more specific than just "is there a difference", but for introductory purposes, we will focus on the finding of differences as a foundational concept.

Interim summary

We've talked about estimation without doing any estimation, so in the next section we will do some estimating of the mean and of the standard deviation. Formally, we talk about this as using a sample to estimate a parameter of the population. Feel free to think of the "population" in different ways. It could be a concrete population, like the distribution of feet-sizes. Or, it could be something more abstract, like the parameter estimate of what samples usually look like when they come from a distribution.

Estimating the population mean

Suppose we go to Brooklyn and 100 of the locals are kind enough to sit through an IQ test. The average IQ score among these people turns out to be $\bar{X}=98.5$. So what is the true mean IQ for the entire population of Brooklyn? Obviously, we don't know the answer to that question. It could be $97.2$, but it could also be $103.5$. Our sampling isn't exhaustive, so we cannot give a definitive answer.
Nevertheless, if forced to give a "best guess" I'd have to say $98.5$. That's the essence of statistical estimation: giving a best guess. We're using the sample mean as the best guess of the population mean. In this example, estimating the unknown population parameter is straightforward. I calculate the sample mean, and I use that as my estimate of the population mean. It's pretty simple, and in the next section we'll explain the statistical justification for this intuitive answer. However, for the moment let's make sure you recognize that the sample statistic and the estimate of the population parameter are conceptually different things. A sample statistic is a description of your data, whereas the estimate is a guess about the population. With that in mind, statisticians often use different notation to refer to them. For instance, if the true population mean is denoted $\mu$, then we would use $\hat\mu$ to refer to our estimate of the population mean. In contrast, the sample mean is denoted $\bar{X}$ or sometimes $m$. However, in simple random samples, the estimate of the population mean is identical to the sample mean: if I observe a sample mean of $\bar{X} = 98.5$, then my estimate of the population mean is also $\hat\mu = 98.5$. To help keep the notation clear, here's a handy table:

| Symbol | What is it? | Do we know what it is? |
|---|---|---|
| $\bar{X}$ | Sample mean | Yes, calculated from the raw data |
| $\mu$ | True population mean | Almost never known for sure |
| $\hat{\mu}$ | Estimate of the population mean | Yes, identical to the sample mean |

Estimating the population standard deviation

So far, estimation seems pretty simple, and you might be wondering why I forced you to read through all that stuff about sampling theory. In the case of the mean, our estimate of the population parameter (i.e. $\hat\mu$) turned out to be identical to the corresponding sample statistic (i.e. $\bar{X}$). However, that's not always true. To see this, let's have a think about how to construct an estimate of the population standard deviation, which we'll denote $\hat\sigma$. What shall we use as our estimate in this case? Your first thought might be that we could do the same thing we did when estimating the mean, and just use the sample statistic as our estimate. That's almost the right thing to do, but not quite.

Here's why. Suppose I have a sample that contains a single observation. For this example, it helps to consider a sample where you have no intuitions at all about what the true population values might be, so let's use something completely fictitious. Suppose the observation in question measures the cromulence of my shoes. It turns out that my shoes have a cromulence of 20. So here's my sample:

20

This is a perfectly legitimate sample, even if it does have a sample size of $N=1$. It has a sample mean of 20, and because every observation in this sample is equal to the sample mean (obviously!) it has a sample standard deviation of 0. As a description of the sample this seems quite right: the sample contains a single observation and therefore there is no variation observed within the sample. A sample standard deviation of $s = 0$ is the right answer here. But as an estimate of the population standard deviation, it feels completely insane, right? Admittedly, you and I don't know anything at all about what "cromulence" is, but we know something about data: the only reason that we don't see any variability in the sample is that the sample is too small to display any variation!
So, if you have a sample size of $N=1$, it feels like the right answer is just to say "no idea at all". Notice that you don't have the same intuition when it comes to the sample mean and the population mean. If forced to make a best guess about the population mean, it doesn't feel completely insane to guess that the population mean is 20. Sure, you probably wouldn't feel very confident in that guess, because you have only the one observation to work with, but it's still the best guess you can make.

Let's extend this example a little. Suppose I now make a second observation. My data set now has $N=2$ observations of the cromulence of shoes, and the complete sample now looks like this:

20, 22

This time around, our sample is just large enough for us to be able to observe some variability: two observations is the bare minimum number needed for any variability to be observed! For our new data set, the sample mean is $\bar{X}=21$, and the sample standard deviation is $s=1$. What intuitions do we have about the population? Again, as far as the population mean goes, the best guess we can possibly make is the sample mean: if forced to guess, we'd probably guess that the population mean cromulence is 21. What about the standard deviation? This is a little more complicated. The sample standard deviation is only based on two observations, and if you're at all like me you probably have the intuition that, with only two observations, we haven't given the population "enough of a chance" to reveal its true variability to us. It's not just that we suspect that the estimate is wrong: after all, with only two observations we expect it to be wrong to some degree. The worry is that the error is systematic. If the error is systematic, that means it is biased. For example, imagine if the sample mean was always smaller than the population mean. If this was true (it's not), then we couldn't use the sample mean as an estimator. It would be biased, we'd be using the wrong number.

It turns out the sample standard deviation is a biased estimator of the population standard deviation. We can sort of anticipate this from what we've been discussing. When the sample size is 1, the standard deviation is 0, which is obviously too small. When the sample size is 2, the standard deviation becomes a number bigger than 0, but because we only have two observations, we suspect it might still be too small. Turns out this intuition is correct.

It would be nice to demonstrate this somehow. There are in fact mathematical proofs that confirm this intuition, but unless you have the right mathematical background they don't help very much. Instead, what I'll do is use R to simulate the results of some experiments. With that in mind, let's return to our IQ studies. Suppose the true population mean IQ is 100 and the standard deviation is 15. I can use the rnorm() function to generate the results of an experiment in which I measure $N=2$ IQ scores, and calculate the sample standard deviation. If I do this over and over again, and plot a histogram of these sample standard deviations, what I have is the sampling distribution of the standard deviation. I've plotted this distribution in Figure $1$. Even though the true population standard deviation is 15, the average of the sample standard deviations is only 8.5. Notice that this is very different from when we were plotting sampling distributions of the sample mean; those were always centered around the mean of the population.
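Here is a minimal sketch of the kind of simulation being described. It uses a hand-rolled divide-by-$N$ standard deviation, matching the definition of the sample standard deviation used here (R's built-in sd() divides by $N-1$, which is the corrected estimator discussed next):

sd_n <- function(x) sqrt(mean((x - mean(x))^2))   # sample sd: divide by N, not N-1

# sampling distribution of the sample sd for experiments with N = 2
sample_sds <- replicate(10000, sd_n(rnorm(n = 2, mean = 100, sd = 15)))

hist(sample_sds)
mean(sample_sds)   # roughly 8.5 on average, well below the true value of 15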
Now let’s extend the simulation. Instead of restricting ourselves to the situation where we have a sample size of $N=2$, let’s repeat the exercise for sample sizes from 1 to 10. If we plot the average sample mean and average sample standard deviation as a function of sample size, we get the following results. Figure $2$ shows the sample mean as a function of sample size. Notice it’s a flat line. The sample mean doesn’t underestimate or overestimate the population mean. It is an unbiased estimator! Figure $3$ shows the sample standard deviation as a function of sample size. Notice it is not a flat line. The sample standard deviation systematically underestimates the population standard deviation! In other words, if we want to make a “best guess” ($\hat\sigma$, our estimate of the population standard deviation) about the value of the population standard deviation $\sigma$, we should make sure our guess is a little bit larger than the sample standard deviation $s$. The fix to this systematic bias turns out to be very simple. Here’s how it works. Before tackling the standard deviation, let’s look at the variance. If you recall from the second chapter, the sample variance is defined to be the average of the squared deviations from the sample mean. That is: $s^2 = \frac{1}{N} \sum_{i=1}^N (X_i - \bar{X})^2 \nonumber$ The sample variance $s^2$ is a biased estimator of the population variance $\sigma^2$. But as it turns out, we only need to make a tiny tweak to transform this into an unbiased estimator. All we have to do is divide by $N-1$ rather than by $N$. If we do that, we obtain the following formula: $\hat\sigma^2 = \frac{1}{N-1} \sum_{i=1}^N (X_i - \bar{X})^2 \nonumber$ This is an unbiased estimator of the population variance $\sigma^2$. A similar story applies for the standard deviation. If we divide by $N-1$ rather than $N$, our estimate of the population standard deviation becomes: $\hat\sigma = \sqrt{\frac{1}{N-1} \sum_{i=1}^N (X_i - \bar{X})^2} \nonumber$ It is worth pointing out that software programs make assumptions for you about which variance and standard deviation you are computing. Some programs automatically divide by $N-1$, some do not. You need to check to figure out what they are doing. Don’t let the software tell you what to do; software is for you to tell it what to do. One final point: in practice, a lot of people tend to refer to $\hat{\sigma}$ (i.e., the formula where we divide by $N-1$) as the sample standard deviation. Technically, this is incorrect: the sample standard deviation should be equal to $s$ (i.e., the formula where we divide by $N$). These aren’t the same thing, either conceptually or numerically. One is a property of the sample, the other is an estimated characteristic of the population. However, in almost every real life application, what we actually care about is the estimate of the population parameter, and so people always report $\hat\sigma$ rather than $s$.

Note

Whether you should divide by $N$ or $N-1$ also depends on your philosophy about what you are doing. For example, if you don’t think that what you are doing is estimating a population parameter, then why would you divide by $N-1$? Also, when $N$ is large it doesn’t matter too much: dividing by a big $N$ or a big $N-1$ gives almost the same answer.

Reporting $\hat\sigma$ is the right thing to do, of course; it’s just that people tend to get a little bit imprecise about terminology when they write it up, because “sample standard deviation” is shorter than “estimated population standard deviation”.
It’s no big deal, and in practice I do the same thing everyone else does. Nevertheless, I think it’s important to keep the two concepts separate: it’s never a good idea to confuse “known properties of your sample” with “guesses about the population from which it came”. The moment you start thinking that $s$ and $\hat\sigma$ are the same thing, you start doing exactly that. To finish this section off, here’s one more handy table to help keep things clear:

| Symbol | What is it? | Do we know what it is? |
|:--|:--|:--|
| $s^2$ | Sample variance | Yes, calculated from the raw data |
| $\sigma^2$ | Population variance | Almost never known for sure |
| $\hat{\sigma}^2$ | Estimate of the population variance | Yes, but not the same as the sample variance |
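To see the distinction in action, here’s a small sketch using some made-up scores. Note that R’s built-in var() and sd() already divide by $N-1$, so they give you $\hat\sigma^2$ and $\hat\sigma$, not $s^2$ and $s$:

```
x <- c(97, 103, 88, 110, 102)          # a made-up sample of N = 5 scores
N <- length(x)

sum((x - mean(x))^2) / N               # s^2: sample variance (divide by N)
sum((x - mean(x))^2) / (N - 1)         # sigma-hat^2: estimated population variance
sqrt(sum((x - mean(x))^2) / (N - 1))   # sigma-hat: estimated population standard deviation

var(x)                                 # R divides by N - 1, so this matches sigma-hat^2
sd(x)                                  # likewise, this matches sigma-hat
```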
Statistics means never having to say you’re certain – Unknown origin

Up to this point in this chapter, we’ve outlined the basics of sampling theory which statisticians rely on to make guesses about population parameters on the basis of a sample of data. As this discussion illustrates, one of the reasons we need all this sampling theory is that every data set leaves us with some amount of uncertainty, so our estimates are never going to be perfectly accurate. The thing that has been missing from this discussion is an attempt to quantify the amount of uncertainty in our estimate. It’s not enough to be able to guess that the mean IQ of undergraduate psychology students is 115 (yes, I just made that number up). We also want to be able to say something that expresses the degree of certainty that we have in our guess. For example, it would be nice to be able to say that there is a 95% chance that the true mean lies between 109 and 121. The name for this is a confidence interval for the mean. Armed with an understanding of sampling distributions, constructing a confidence interval for the mean is actually pretty easy. Here’s how it works. Suppose the true population mean is $\mu$ and the standard deviation is $\sigma$. I’ve just finished running my study that has $N$ participants, and the mean IQ among those participants is $\bar{X}$. We know from our discussion of the central limit theorem that the sampling distribution of the mean is approximately normal. We also know from our discussion of the normal distribution that there is a 95% chance that a normally-distributed quantity will fall within two standard deviations of the true mean. To be more precise, we can use the qnorm() function to compute the 2.5th and 97.5th percentiles of the normal distribution:

qnorm( p = c(.025, .975) )

[1] -1.959964 1.959964

Okay, so I lied earlier on. The more correct answer is that there is a 95% chance that a normally-distributed quantity will fall within 1.96 standard deviations of the true mean. Next, recall that the standard deviation of the sampling distribution is referred to as the standard error, and the standard error of the mean is written as SEM. When we put all these pieces together, we learn that there is a 95% probability that the sample mean $\bar{X}$ that we have actually observed lies within 1.96 standard errors of the population mean. Oof, that is a lot of mathy talk there. We’ll clear it up, don’t worry. Mathematically, we write this as: $\mu - \left( 1.96 \times \mbox{SEM} \right) \ \leq \ \bar{X}\ \leq \ \mu + \left( 1.96 \times \mbox{SEM} \right) \nonumber$ where the SEM is equal to $\sigma / \sqrt{N}$, and we can be 95% confident that this is true. However, that’s not answering the question that we’re actually interested in. The equation above tells us what we should expect about the sample mean, given that we know what the population parameters are. What we want is to have this work the other way around: we want to know what we should believe about the population parameters, given that we have observed a particular sample. However, it’s not too difficult to do this. Using a little high school algebra, a sneaky way to rewrite our equation is like this: $\bar{X} - \left( 1.96 \times \mbox{SEM} \right) \ \leq \ \mu \ \leq \ \bar{X} + \left( 1.96 \times \mbox{SEM}\right) \nonumber$ What this is telling us is that this range of values has a 95% probability of containing the population mean $\mu$. We refer to this range as a 95% confidence interval, denoted $\mbox{CI}_{95}$.
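Here is a minimal sketch of that calculation for some made-up numbers, pretending for the moment that we know $\sigma$; the same thing written as a single formula appears just below:

```
xbar  <- 115                      # observed sample mean IQ (made up)
sigma <- 15                       # true population standard deviation (assumed known, for now)
N     <- 25                       # sample size (made up)

sem  <- sigma / sqrt(N)           # standard error of the mean
crit <- qnorm(p = c(.025, .975))  # roughly -1.96 and +1.96
xbar + crit * sem                 # lower and upper ends of the 95% CI
```

With these made-up numbers the interval runs from about 109 to 121, in line with the example above.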
In short, as long as $N$ is sufficiently large – large enough for us to believe that the sampling distribution of the mean is normal – then we can write this as our formula for the 95% confidence interval: $\mbox{CI}_{95} = \bar{X} \pm \left( 1.96 \times \frac{\sigma}{\sqrt{N}} \right) \nonumber$ Of course, there’s nothing special about the number 1.96: it just happens to be the multiplier you need to use if you want a 95% confidence interval. If I’d wanted a 70% confidence interval, I could have used the qnorm() function to calculate the 15th and 85th quantiles:

qnorm( p = c(.15, .85) )

[1] -1.036433 1.036433

and so the formula for $\mbox{CI}_{70}$ would be the same as the formula for $\mbox{CI}_{95}$ except that we’d use 1.04 as our magic number rather than 1.96.

A slight mistake in the formula

As usual, I lied. The formula that I’ve given above for the 95% confidence interval is approximately correct, but I glossed over an important detail in the discussion. Notice my formula requires you to use the standard error of the mean, SEM, which in turn requires you to use the true population standard deviation $\sigma$. Yet, earlier we stressed the fact that we don’t actually know the true population parameters. Because we don’t know the true value of $\sigma$, we have to use an estimate of the population standard deviation $\hat{\sigma}$ instead. This is pretty straightforward to do, but it has the consequence that we need to use the quantiles of the $t$-distribution rather than the normal distribution to calculate our magic number, and the answer depends on the sample size. Plus, we haven’t really talked about the $t$ distribution yet. When we use the $t$ distribution instead of the normal distribution, we get bigger numbers, indicating that we have more uncertainty. And why do we have that extra uncertainty? Well, because our estimate of the population standard deviation $\hat\sigma$ might be wrong! If it’s wrong, it implies that we’re a bit less sure about what our sampling distribution of the mean actually looks like… and this uncertainty ends up getting reflected in a wider confidence interval.

4.15: Summary

In this chapter I’ve covered two main topics. The first half of the chapter talks about sampling theory, and the second half talks about how we can use sampling theory to construct estimates of the population parameters. The section breakdown looks like this:

• Basic ideas about samples, sampling and populations
• Statistical theory of sampling: the law of large numbers, sampling distributions and the central limit theorem
• Estimating means and standard deviations
• Confidence intervals

As always, there are a lot of topics related to sampling and estimation that aren’t covered in this chapter, but for an introductory psychology class this is fairly comprehensive, I think. For most applied researchers you won’t need much more theory than this. One big question that I haven’t touched on in this chapter is what you do when you don’t have a simple random sample. There is a lot of statistical theory you can draw on to handle this situation, but it’s well beyond the scope of this book.

4.16: Videos

Introduction to Probability

Jeff has several more videos on probability that you can view on his statistics playlist.
Chapter by Matthew Crump

Data and data sets are not objective; they are creations of human design. We give numbers their voice, draw inferences from them, and define their meaning through our interpretations. —Kate Crawford

So far we have been talking about describing data and looking at possible relationships between things we measure. We began by talking about the problem of having too many numbers. So, we discussed how we could summarize big piles of numbers with descriptive statistics, and by looking at the data with graphs. We also looked at the idea of relationships between things. If one thing causes another thing, then if we measure how one thing goes up and down, we should find that the other thing also goes up and down, or at least does something that systematically follows the first thing. At the end of the chapter on correlation, we showed how correlations, which imply a relationship between two things, are very difficult to interpret. Why? Because an observed correlation can be caused by a hidden third variable, or simply be a spurious finding “caused” by random chance. In the last chapter, we talked about sampling from distributions, and we saw how samples can be different because of random error introduced by the sampling process. Now we begin our journey into inferential statistics: the tools we use to make inferences about where our data came from and, more importantly, to make inferences about what causes what. In this chapter we provide some foundational ideas. We will stay mostly at a conceptual level, and use lots of simulations like we did in the last chapters. In the remaining chapters we formalize the intuitions built here to explain how some common inferential statistics work.

05: Foundations for inference

In chapter one we talked a little bit about research methods and experiments. Experiments are a structured way of collecting data that can permit inferences about causality. If we wanted to know whether something like watching cats on YouTube increases happiness, we would need an experiment. We already found out that just finding a bunch of people, measuring the number of hours they watch cats and their level of happiness, and correlating the two will not permit inferences about causation. For one, the causal flow could be reversed. Maybe being happy causes people to watch more cat videos. We need an experiment. An experiment has two parts: a manipulation and a measurement. The manipulation is under the control of the experimenter. Manipulations are also called independent variables. For example, we could manipulate how many cat videos people will watch, 1 hour versus 2 hours of cat videos. The measurement is the data that is collected. We could measure how happy people are after watching cat videos on a scale from 1 to 100. Measurements are also called dependent variables. So, in a basic experiment like the one above, we take measurements of happiness from people in one of two experimental conditions defined by the independent variable. Let’s say we ran 50 subjects. 25 subjects would be randomly assigned to watch 1 hour of cat videos, and the other 25 subjects would be randomly assigned to watch 2 hours of cat videos. We would measure happiness for each subject at the end of the videos. Then we could look at the data. What would we want to look at? Well, if watching cat videos causes a change in happiness, then we would expect the measures of happiness for people watching 1 hour of cat videos to be different from the measures of happiness for people watching 2 hours of cat videos.
If watching cat videos does not change happiness, then we would expect no differences in measures of happiness between conditions. Causal forces cause change, and the experiment is set up to detect the change. Now we can state one overarching question: how do we know if the data changed between conditions? If we can be confident that there was a change between conditions, we can infer that our manipulation caused a change in the measurement. If we cannot be confident there was a change, then we cannot infer that our manipulation caused a change in the measurement. We need to build some change detection tools so we can know a change when we find one. “Hold on, if we are just looking for a change, wouldn’t it be easy to see by looking at the numbers and seeing if they are different? What’s so hard about that?” Good question. Now we must take a detour. The short answer is that there will always be change in the data (remember variance).
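To make that concrete, here is a small made-up simulation of the cat-video experiment in which the manipulation does nothing at all: both groups’ happiness scores come from the same distribution (the means, standard deviation, and seed are my own assumptions):

```
set.seed(10)
one_hour  <- rnorm(25, mean = 50, sd = 10)   # happiness scores after 1 hour of cat videos
two_hours <- rnorm(25, mean = 50, sd = 10)   # happiness scores after 2 hours of cat videos

mean(one_hour)
mean(two_hours)
mean(two_hours) - mean(one_hour)   # not exactly zero, even though nothing real is going on
```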
In the last chapter we discussed samples and distributions, and the idea that you can take samples from distributions. So, from now on when you see a bunch of numbers, you should wonder, “where did these numbers come from?” What caused some kinds of numbers to happen more than other kinds of numbers? The answer to this question requires us to again veer off into the abstract world of distributions. A distribution is a place where numbers can come from. The distribution sets the constraints. It determines what numbers are likely to occur, and what numbers are not likely to occur. Distributions are abstract ideas. But, they can be made concrete, and we can draw them with pictures that you have seen already, called histograms. The next bit might seem slightly repetitive of the previous chapter. We again look at sampling numbers from a uniform distribution. We show that individual samples can look quite different from each other. Much of the beginning part of this chapter will already be familiar to you, but we take the concepts in a slightly different direction. The direction is how to make inferences about the role of chance in your experiment.

Uniform distribution

A uniform distribution is completely flat; it looks like this: OK, so that doesn’t look like much. What is going on here? The y-axis is labelled `probability`, and it goes from 0 to 1. The x-axis is labelled `Number`, and it goes from 1 to 10. There is a horizontal line drawn straight through. This line tells you the probability of each number from 1 to 10. Notice the line is flat. This means all of the numbers have the same probability of occurring. More specifically, there are 10 numbers from 1 to 10 (1,2,3,4,5,6,7,8,9,10), and they all have an equal chance of occurring. 1/10 = .1, which is the probability indicated by the horizontal line. “So what?” Imagine that this uniform distribution is a number generating machine. It spits out numbers, but it spits out each number with the probability indicated by the line. If this distribution was going to start spitting out numbers, it would spit out 10% 1s, 10% 2s, 10% 3s, and so on, up to 10% 10s. Wanna see what that would look like? Let’s make it spit out 100 numbers:

```
options(warn=-1)
a <- matrix(round(runif(100, 1, 10)), ncol = 10)
knitr::kable(a)
```

```
|   |   |   |   |   |   |   |   |   |   |
|--:|--:|--:|--:|--:|--:|--:|--:|--:|--:|
|  2|  4|  9|  3|  5|  9|  7|  8|  8|  5|
|  2|  6|  4|  2|  3|  5|  2|  1|  7|  3|
|  7|  8|  5| 10|  4|  4|  4|  5|  2|  3|
| 10|  2|  9|  4| 10|  2|  9|  6|  6|  4|
|  3|  6|  2|  7|  9| 10| 10|  5|  2|  3|
|  5|  7|  5|  4|  2|  2|  7|  6|  3|  9|
|  7|  2|  4|  7|  2|  5|  9|  4|  6|  2|
|  8|  9|  5|  9| 10| 10|  4|  4|  1|  1|
|  3|  8|  6|  8|  9|  8|  6|  2|  8|  6|
|  2|  7|  4|  3|  8|  4|  4|  4|  2|  6|
```

We used the uniform distribution to generate these numbers. Officially, we call this sampling from a distribution. Sampling is what you do at a grocery store when there is free food. You can keep taking more. However, if you take all of the samples, then what you have is called the population. We’ll talk more about samples and populations as we go along. Because we used the uniform distribution to create numbers, we already know where our numbers came from. However, we can still pretend for the moment that someone showed up at your door, showed you these numbers, and then you wondered where they came from. Can you tell just by looking at these numbers that they came from a uniform distribution? What would you need to look at? Perhaps you would want to know if all of the numbers occur with roughly equal frequency; after all, they should have, right?
That is, if each number had the same chance of occurring, we should see that each number occurs roughly the same number of times. We already know what a histogram is, so we can put our numbers into a histogram and see what the counts look like. If all of the numbers occur with equal frequency, then each number should occur 10 times, because we sampled a total of 100 numbers. The histogram looks like this: Uh oh, as you can see, not all of the numbers occurred 10 times each. All of the bars are not the same height. This shows that randomly sampling numbers from this distribution does not guarantee that our numbers will be exactly like the distribution they came from. We can call this sampling error, or sampling variability.

Not all samples are the same, they are usually quite different

Let’s take a look at sampling error more closely. We will sample 20 numbers from the uniform distribution. Here we should expect that each number between 1 and 10 occurs two times each. Let’s take a sample of 20 and make a histogram. And then, let’s do that 10 times. So we will be looking at 10 histograms, each showing us what the 10 different samples of twenty numbers look like: You might notice right away that none of the histograms are the same. Even though we are randomly taking 20 numbers from the very same uniform distribution, each sample of 20 numbers comes out different. This is sampling variability, or sampling error. Here is the movie version. You are watching a new histogram for each sample of 20 observations. The horizontal line shows the shape of the uniform distribution. It crosses the y-axis at 2, because we expect that each number (from 1 to 10) should occur about 2 times each in a sample of 20. However, as you can see, this does not happen. Instead, each sample bounces around quite a bit, due to random chance. Looking at the above histograms shows us that figuring out where our numbers came from can be difficult. In the real world, our measurements are samples. We usually only have the luxury of getting one sample of measurements, rather than repeating our own measurements 10 times or more. If you look at the histograms, you will see that some of them look like they could have come from the uniform distribution: most of the bars are near two, and they all fall kind of on a flat line. But, if you happen to look at a different sample, you might see something that is very bumpy, with some numbers happening way more than others. This could suggest to you that those numbers did not come from a uniform distribution (they’re just too bumpy). But let me remind you, all of these samples came from a uniform distribution; this is what samples from that distribution look like. This is what chance does to samples: it makes the individual data points noisy.

Large samples are more like the distribution they came from

Let’s refresh the question. Which of these two samples do you think came from a uniform distribution? The answer is that they both did. But, neither of them look like they did. Can we improve things, and make it easier to see if a sample came from a uniform distribution? Yes, we can. All we need to do is increase the sample-size. We will often use the letter `N` to refer to sample-size: N is the number of observations in the sample. So let’s increase the number of observations in each sample from 20 to 100. We will again create 10 samples (each with 100 observations), and make histograms for each of them. All of these samples will be drawn from the very same uniform distribution.
This means we should expect each number from 1 to 10 to occur about 10 times in each sample. Here are the histograms: Again, most of these histograms don’t look very flat, and all of the bars seem to be going up or down, and they are not exactly at 10 each. So, we are still dealing with sampling error. It’s a pain. It’s always there. Let’s bump it up to 1000 observations per sample. Now we should expect every number to appear about 100 times each. What happens? Each of these histograms is starting to flatten out. The bars are still not perfectly at 100, because there is still sampling error (there always will be). But, if you found a histogram that looked flat and knew that the sample contained many observations, you might be more confident that those numbers came from a uniform distribution. Just for fun let’s make the samples really big, say 100,000 observations per sample. Here, we should expect that each number occurs about 10,000 times each. What happens? Now we see that all of our samples start to look the same. They all have 100,000 observations, and this gives chance enough opportunity to equally distribute the numbers, roughly making sure that they all occur very close to the same number of times. As you can see, the bars are all very close to 10,000, where they should be if the sample came from a uniform distribution. Pro tip: The pattern behind a sample will tend to stabilize as sample-size increases. Small samples will have all sorts of patterns because of sampling error (chance). Before getting back to experiments, let’s ask two more questions. First, which of these two samples do you think came from a uniform distribution? I will tell you that each of these samples had 20 observations each. If you are not confident in the answer, this is because sampling error (randomness) is fuzzing up the histograms. Here is the very same question, only this time we will take 1,000 observations for each sample. Which one do you think came from a uniform distribution, and which one did not? Now that we have increased N, we can see the pattern in each sample becomes more obvious. The histogram for sample 1 has bars near 100; it’s not perfectly flat, but it resembles a uniform distribution. The histogram for sample 2 does not look flat at all. Instead, the number five appears most of the time, and numbers on either side of five happen less and less. Congratulations to Us! We have just made some statistical inferences without using formulas! “We did?” Yes, by looking at our two samples we have inferred that sample 2 did not come from a uniform distribution. We have also inferred that sample 1 could have come from a uniform distribution. Fantastic. This is really all we will be doing for the rest of the course. We will be looking at some numbers, wondering where they came from, then we will arrange the numbers in such a way so that we can make an inference about where they came from. That’s it.
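If you want to see this stabilization without drawing histograms, here is a small sketch using the same number-generating code as above; table() simply counts how many times each number shows up:

```
small_sample <- round(runif(20, 1, 10))       # 20 observations from the uniform machine
big_sample   <- round(runif(100000, 1, 10))   # 100,000 observations from the same machine

table(small_sample)   # counts bounce around a lot
table(big_sample)     # counts sit much closer to equal
```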
Let’s get back to experiments. In an experiment we want to know if our independent variable (our manipulation) causes a change in our dependent variable (measurement). If this occurs, then we will expect to see some differences in our measurement as a function of our manipulation. Consider the light switch example:

Light Switch Experiment: You manipulate the switch up (condition 1 of the independent variable), light goes on (measurement). You manipulate the switch down (condition 2 of the independent variable), light goes off (another measurement). The measurement (light) changes (goes off and on) as a function of the manipulation (moving the switch up or down). You can see the change in measurement between the conditions; it is as obvious as night and day. So, when you conduct a manipulation, and can see the difference (change) in your measure, you can be pretty confident that your manipulation is causing the change.

Note

To be cautious we can say “something” about your manipulation is causing the change; it might not be what you think it is if your manipulation is very complicated and involves lots of moving parts.

Chance can produce differences

Do you think random chance can produce the appearance of differences, even when there really aren’t any? I hope so. We have already shown that the process of sampling numbers from a distribution is a chancy process that produces different samples. Different samples are different, so yes, chance can produce differences. This can muck up our interpretation of experiments. Let’s conduct a fictitious experiment where we expect to find no differences, because we will manipulate something that shouldn’t do anything. Here’s the set-up:

You are the experimenter standing in front of a gumball machine. It is very big, and has thousands of gumballs. 50% of the gumballs are green, and 50% are red. You want to find out if picking gumballs with your right hand vs. your left hand will cause you to pick more green gumballs. Plus, you will be blindfolded the entire time. The independent variable is Hand: right hand vs. left hand. The dependent variable is the measurement of the color of each gumball.

You run the experiment as follows:

1. Put on the blindfold.
2. Pick 10 gumballs randomly with your left hand, and set them aside.
3. Pick 10 gumballs randomly with your right hand, and set them aside.
4. Count the number of green and red gumballs chosen by your left hand, and count the number of green and red gumballs chosen by your right hand.

Hopefully you will agree that your hands will not be able to tell the difference between the gumballs. If you don’t agree, we will further stipulate that the gumballs are completely identical in every way except their color, so it would be impossible to tell them apart using your hands. So, what should happen in this experiment? “Umm, maybe you get 5 red gumballs and 5 green gumballs from your left hand, and also from your right hand?” Sort of yes, this is what you would usually get. But, it is not all that you can get. Here is some data showing what happened from one pretend experiment:

```
hand <- rep(c("left","right"), each = 10)
gumball <- rbinom(20, 1, .5)
df <- data.frame(hand, gumball)
knitr::kable(df)
```

```
|hand  | gumball|
|:-----|-------:|
|left  |       1|
|left  |       1|
|left  |       1|
|left  |       1|
|left  |       0|
|left  |       0|
|left  |       0|
|left  |       0|
|left  |       0|
|left  |       0|
|right |       0|
|right |       0|
|right |       0|
|right |       0|
|right |       1|
|right |       0|
|right |       0|
|right |       0|
|right |       1|
|right |       1|
```

“What am I looking at here?” This is a long-format table. Each row is one gumball.
The first column tells you which hand was used. The second column tells you what kind of gumball it was. We will say 1s stand for green gumballs, and 0s stand for red gumballs. So, did your left hand cause you to pick more green gumballs than your right hand? It would be easier to look at the data using a bar graph. To keep things simple, we will only count green gumballs (the other gumballs must be red). So, all we need to do is sum up the 1s. The 0s won’t add anything. Oh look, the bars are not the same. One hand picked more green gumballs than the other. Does this mean that one of your hands secretly knows how to find green gumballs? No, it’s just another case of sampling error, that thing we call luck or chance. The difference here is caused by chance, not by the manipulation (which hand you use). Major problem for inference alert. We run experiments to look for differences so we can make inferences about whether our manipulations cause change in our measures. Now we know that we can find differences by chance. How can we know if a difference is real, or just caused by chance?

Differences due to chance can be simulated

Remember when we showed that chance can produce correlations? We also showed that chance is restricted in its ability to produce correlations. For example, chance more often produces weak correlations than strong correlations. Remember the window of chance? We found out before that correlations falling outside the window of chance were very unlikely. We can do the same thing for differences. Let’s find out just what chance can do in our experiment. Once we know what chance is capable of, we will be in a better position to judge whether our manipulation caused a difference, or whether it could have been chance. The first thing to do is pretend you conduct the gumball experiment 10 times in a row. This will produce 10 different sets of results. For each of them we can make a bar graph, and look at whether the left hand chose more green gumballs than the right hand. It looks like this: These 10 experiments give us a better look at what chance can do. It should also mesh well with your expectations. If everything is left up to chance (as we have made it so), then sometimes your left hand will choose more green gumballs, sometimes your right hand will choose more green gumballs, and sometimes they will choose the same number of gumballs. Right? Right.
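Here is one way you might do that counting in R, using the same kind of simulated gumball data as above; the barplot and the replicate() call at the end are my own additions for illustration:

```
hand    <- rep(c("left", "right"), each = 10)
gumball <- rbinom(20, 1, .5)          # 1 = green, 0 = red

greens <- tapply(gumball, hand, sum)  # number of green gumballs picked by each hand
greens
barplot(greens)                       # a bar graph like the one described above

# ten pretend experiments: difference in green gumballs (left minus right)
replicate(10, sum(rbinom(10, 1, .5)) - sum(rbinom(10, 1, .5)))
```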
OK, we have seen that chance can produce differences here. But, we still don’t have a good idea about what chance usually does and doesn’t do. For example, if we could find the window of opportunity here, we would be able to find out that chance usually does not produce differences of a certain large size. If we knew what that size was, then if we ran an experiment and our difference was bigger than what chance can do, we could be confident that chance did not produce our difference. Let’s use the word difference some more, because it will be helpful. In fact, let’s think about our measure of green balls in terms of a difference. For example, in each experiment we counted the green balls for the left and right hand. What we really want to know is if there is a difference between them. So, we can calculate the difference score. Let’s decide that difference score = # of green gumballs in left hand - # of green gumballs in right hand. Now, we can redraw the 10 bar graphs from above. But this time we will only see one bar for each experiment. This bar will show the difference in the number of green gumballs. Missing bars mean that there were an equal number of green gumballs chosen by the left and right hands (difference score is 0). A positive value means that more green gumballs were chosen by the left than the right hand. A negative value means that more green gumballs were chosen by the right than the left hand. Note that if we decided (and we get to decide) to calculate the difference in reverse (right hand - left hand), the signs of the difference scores would flip around. We are starting to see more of the differences that chance can produce. The difference scores are mostly between -2 to +2. We could get an even better impression by running this pretend experiment 100 times instead of only 10 times. How about we do that? Ooph, we just ran so many simulated experiments that the x-axis is unreadable, but it goes from 1 to 100. Each bar represents the difference of the number of green balls chosen randomly by the left or right hand. Beginning to notice anything? Look at the y-axis; this shows the size of the difference. Yes, there are lots of bars of different sizes, which shows us that many kinds of differences do occur by chance. However, the y-axis is also restricted. It does not go from -10 to +10. Big differences greater than 5 or -5 don’t happen very often. Now that we have a method for simulating differences due to chance, let’s run 10,000 simulated experiments. But, instead of plotting the differences in a bar graph for each experiment, how about we look at the histogram of difference scores. This will give us a clearer picture about which differences happen most often, and which ones do not. This will be another window into chance: the chance window of differences. Our computer simulation allows us to force chance to operate thousands of times; each time it produces a difference. We record the difference, then at the end of the simulation we plot the histogram of the differences. The histogram begins to show us where the differences came from. Remember the idea that numbers come from a distribution, and the distribution says how often each number occurs. We are looking at one of these distributions. It is showing us that chance produces some differences more often than others. First, chance usually produces 0 differences; that’s the biggest bar in the middle. Chance also produces larger differences, but as the differences get larger (positive or negative), they occur less frequently.
The shape of this histogram is your chance window. It tells you what chance can do, it tells you what chance usually does, and what it usually does not do. You can use this chance window to help you make inferences. If you ran yourself in the gumball experiment and found that your left hand chose 2 more green gumballs than your right hand, would you conclude that your left hand was special, and caused you to choose more green gumballs? Hopefully not. You could look at the chance window and see that differences of size +2 do happen fairly often by chance alone. You should not be surprised if you got a +2 difference. However, what if your left hand chose 5 more green gumballs than your right hand? Well, chance doesn’t do this very often; you might think something is up with your left hand. If you got a whopping 9 more green gumballs than your right hand, you might really start to wonder. This is the kind of thing that could happen (it’s possible), but virtually never happens by chance. When you get things that almost never happen by chance, you can be more confident that the difference reflects a causal force that is not chance.
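Here is a minimal sketch of that 10,000-experiment simulation, plus a quick check of how often chance produces differences at least as big as 2, 5, or 9 (the seed is arbitrary):

```
set.seed(2)
differences <- replicate(10000,
  sum(rbinom(10, 1, .5)) - sum(rbinom(10, 1, .5)))   # left minus right green counts

hist(differences)             # the chance window for this experiment

mean(abs(differences) >= 2)   # happens fairly often
mean(abs(differences) >= 5)   # much rarer
mean(abs(differences) >= 9)   # hardly ever, if at all
```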
We are going to be doing a lot of inference throughout the rest of this course. Pretty much all of it will come down to one question: did chance produce the differences in my data? We will be talking about experiments mostly, and in experiments we want to know if our manipulation caused a difference in our measurement. But, we measure things that have natural variability, so every time we measure things we will always find a difference. We want to know if the difference we found (between our experimental conditions) could have been produced by chance. If chance is a very unlikely explanation of our observed difference, we will make the inference that chance did not produce the difference, and that something about our experimental manipulation did produce the difference. This is it (for this textbook).

Note

Statistics is not only about determining whether chance could have produced a pattern in the observed data. The same tools we are talking about here can be generalized to ask whether any kind of distribution could have produced the differences. This allows comparisons between different models of the data, to see which one was the most likely, rather than just rejecting the unlikely ones (e.g., chance). But, we’ll leave those advanced topics for another textbook.

This chapter is about building intuitions for making these kinds of inferences about the role of chance in your data. It’s not clear to me what the best things to say are to build up your intuitions for how to do statistical inference. So, this chapter tries different things, some of them standard, and some of them made up. What you are about to read is a made-up way of doing statistical inference, without using the jargon that we normally use to talk about it. The goal is to do things without formulas, and without probabilities, and just work with some ideas using simulations to see what happens. We will look at what chance can do, then we will talk about what needs to happen in your data in order for you to be confident that chance didn’t do it.

Intuitive methods

Warning: this is an unofficial statistical test made up by Matt Crump. It makes sense to him (me), and if it turns out someone else already made this up, then Crump didn’t do his homework, and we will rename this test after its original author. The point of this test is to show how simple operations that you already understand can be used to create a tool for inference. This test is not complicated. It uses:

1. Sampling numbers randomly from a distribution
2. Adding, subtracting
3. Division, to find the mean
4. Counting
5. Graphing and drawing lines
6. NO FORMULAS

Part 1: Frequency-based intuition about occurrence

Question: How many times does something need to happen, for it to happen a lot? Or, how many times does something need to happen for it to happen not very much, or even really not at all? Small enough for you to not worry about it happening to you at all? Would you go outside every day if you thought that you would get hit by lightning 1 out of 10 times? I wouldn’t. You’d probably be hit by lightning more than once per month; you’d be dead pretty quickly. 1 out of 10 is a lot (to me, maybe not to you, there’s no right answer here). Would you go outside every day if you thought that you would get hit by lightning 1 out of every 100 days? Jeez, that’s a tough one. What would I even do? If I went out every day, I’d probably be dead in a year! Maybe I would go out 2 or 3 times per year. I’m risky like that, but I’d probably live longer.
It would massively suck. Would you go outside every day if you thought you would get hit by lightning 1 out of every 1000 days? Well, you’d probably be dead in 3-6 years if you did that. Are you a gambler? Maybe go out once per month; it still sucks. Would you go outside every day if you thought lightning would get you 1 out of every 10,000 days? 10,000 is a bigger number, harder to think about. It’s about once every 27 years. Ya, I’d probably go out 150 days per year, and live a bit longer if I can. Would you go outside every day if you thought lightning would get you 1 out of every 100,000 days? 100,000 is a bigger number, harder to think about. How many years is that? It’s about 273 years. With those odds, I’d probably go out all the time and forget about being hit by lightning. It doesn’t happen very often, and if it does, c’est la vie. The point of considering these questions is to get a sense for yourself of what happens a lot, and what doesn’t happen a lot, and how you would make important decisions based on what happens a lot and what doesn’t.

Part 2: Simulating chance

This next part could happen a bunch of ways. I’ll make loads of assumptions that I won’t defend, and I won’t claim the Crump test has problems. I will claim it helps us make an inference about whether chance could have produced some differences in data. We’ve already been introduced to simulating things, so we’ll do that again. Here is what we will do. I am a cognitive psychologist who happens to be measuring X. Because of prior research in the field, I know that when I measure X, my samples will tend to have a particular mean and standard deviation. Let’s say the mean is usually 100, and the standard deviation is usually 15. In this case, I don’t care about using these numbers as estimates of the population parameters; I’m just thinking about what my samples usually look like. What I want to know is how they behave when I sample them. I want to see what kind of samples happen a lot, and what kind of samples don’t happen a lot. Now, I also live in the real world, and in the real world when I run experiments to see what changes X, I usually only have access to some number of participants, to whom I am very grateful, because they participate in my experiments. Let’s say I usually can run 20 subjects in each condition in my experiments. Let’s keep the experiment simple, with two conditions, so I will need 40 total subjects. I would like to learn something to help me with inference. One thing I would like to learn is what the sampling distribution of the sample mean looks like. This distribution tells me what kinds of mean values happen a lot, and what kinds don’t happen very often. But, I’m actually going to skip that bit. Because what I’m really interested in is what the sampling distribution of the difference between my sample means looks like. After all, I am going to run an experiment with 20 people in one condition, and 20 people in the other. Then I am going to calculate the mean for group A, and the mean for group B, and I’m going to look at the difference. I will probably find a difference, but my question is: did my manipulation cause this difference, or is this the kind of thing that happens a lot by chance? If I knew what chance can do, and how often it produces differences of particular sizes, I could look at the difference I observed, then look at what chance can do, and then I could make a decision!
If my difference doesn’t happen a lot (we’ll get to how much “not a lot” is in a bit), then I might be willing to believe that my manipulation caused a difference. If my difference happens all the time by chance alone, then I wouldn’t be inclined to think my manipulation caused the difference, because it could have been chance. So, here’s what we’ll do, even before running the experiment. We’ll do a simulation. We will sample numbers for group A and group B, then compute the means for group A and group B, then we will find the difference in the means between group A and group B. But, we will do one very important thing. We will pretend that we haven’t actually done a manipulation. If we do this (do nothing, no manipulation that could cause a difference), then we know that only sampling error could cause any differences between the mean of group A and group B. We’ve eliminated all other causes; only chance is left. By doing this, we will be able to see exactly what chance can do. More importantly, we will see the kinds of differences that occur a lot, and the kinds that don’t occur a lot. Before we do the simulation, we need to answer one question. How much is a lot? We could pick any number for a lot. I’m going to pick 10,000. That is a lot. If something happens only 1 time out of 10,000, I am willing to say that is not a lot. OK, now we have our number. We are going to simulate the possible mean differences between group A and group B that could arise by chance. We do this 10,000 times. This gives chance a lot of opportunity to show us what it does do, and what it does not do. This is what I did: I sampled 20 numbers into group A, and 20 into group B. The numbers both came from the same normal distribution, with mean = 100, and standard deviation = 15. Because the samples are coming from the same distribution, we expect that on average they will be similar (but we already know that samples differ from one another). Then, I computed the mean for each sample, and computed the difference between the means. I saved the mean difference score, and ended up with 10,000 of them. Then I drew a histogram. It looks like this:

Note

Sidenote: Of course, we might recognize that chance could produce a difference bigger than anything we see here. We just didn’t give it the opportunity. We only ran the simulation 10,000 times. If we ran it a million times, maybe a difference greater than 20 would happen a couple of times. If we ran it a bazillion gazillion times, maybe a difference greater than 30 would happen a couple of times. If we go out to infinity, then chance might produce all sorts of bigger differences once in a while. But, we’ve already decided that 1/10,000 is not a lot. So things that happen 0 out of 10,000 times, like really big differences, just don’t happen very much.

Now we can see what chance can do to the size of our mean difference. The x-axis shows the size of the mean difference. We took our samples from the same distribution, so the difference between them should usually be 0, and that’s what we see in the histogram. Pause for a second. Why should the mean differences usually be zero? Wasn’t the population mean = 100; shouldn’t they be around 100? No. The mean of group A will tend to be around 100, and the mean of group B will tend to be around 100. So, the difference score will tend to be 100-100 = 0. That is why we expect a mean difference of zero when the samples are drawn from the same population. So, differences near zero happen the most. That’s good; that’s what we expect.
Bigger differences (positive or negative) happen increasingly less often. Really big differences, greater than about 20 in either direction, essentially never happen. For our purposes, it looks like chance mostly produces differences between about -15 and +15. OK, let’s ask a couple of simple questions. What was the biggest negative number that occurred in the simulation? We’ll use R for this. All of the 10,000 difference scores are stored in a variable I made called `difference`. If we want to find the minimum value, we use the `min` function. Here’s the result.

```
difference <- numeric(10000)   # a place to store 10,000 simulated mean differences
for(i in 1:10000){
  difference[i] <- mean(rnorm(20, 100, 15)) - mean(rnorm(20, 100, 15))
}
min(difference)
```

-17.0773846332609

OK, so what was the biggest positive number that occurred? Let’s use the `max` function to find out. It finds the biggest (maximum) value in the variable. FYI, we’ve just computed the range, the minimum and maximum numbers in the data. Remember, we learned that before. Anyway, here’s the max.

```
max(difference)   # the same 10,000 simulated differences from above
```

21.5695238598948

Both of these extreme values only occurred once. Those values were so rare we couldn’t even see them on the histogram; the bars were so small. Also, the biggest negative and positive numbers are in the same ballpark if you ignore their sign, which makes sense because the distribution looks roughly symmetrical. So, what can we say about these two numbers for the min and max? We can say the min happened 1 time out of 10,000. We can say the max happened 1 time out of 10,000. Is that a lot of times? Not to me. It’s not a lot. So, how often does a difference of 30 (much larger than the max) occur out of 10,000? We really can’t say; 30s didn’t occur in the simulation. Going with what we got, we say 0 out of 10,000. That’s never. We’re about to move into part three, which involves drawing decision lines and talking about them. The really important part about part 3 is this. What would you say if you ran this experiment once, and found a mean difference of 30? I would say it happens 0 times out of 10,000 by chance. I would say chance did not produce my difference of 30. That’s what I would say. We’re going to expand upon this right now.
We are going to draw up a plan, before we even see the data, for how we will make judgments and decisions about what we find. This kind of planning is extremely important, because we discuss in part 4, that your planning can help you design an even better experiment than the one you might have been intending to run. This kind of planning can also be used to interpret other people’s results, as a way of double-checking checking whether you believe those results are plausible. The thing about judgement and decision making is that reasonable people disagree about how to do it, unreasonable people really disagree about it, and statisticians and researchers disagree about how to do it. I will propose some things that people will disagree with. That’s OK, these things still make sense. And, the disagreeable things point to important problems that are very real for any “real” statistical inference test. Let’s talk about some objective facts from our simulation of 10,000 things that we definitely know to be true. For example, we can draw some lines on the graph, and label some different regions. We’ll talk about two kinds of regions. 1. Region of chance. Chance did it. Chance could have done it 2. Region of not chance. Chance didn’t do it. Chance couldn’t have done it. The regions are defined by the minimum value and the maximum value. Chance never produced a smaller or bigger number. The region inside the range is what chance did do, and the the region outside the range on both sides is what chance never did. It looks like this: We have just drawn some lines, and shaded some regions, and made one plan we could use to make decisions. How would the decisions work. Let’s say you ran the experiment and found a mean difference between groups A and B of 25. Where is 25 in the figure? It’s in the green part. What does the green part say? NOT CHANCE. What does this mean. It means chance never made a difference of 25. It did that 0 out of 10,000 times. If we found a difference of 25, perhaps we could confidently conclude that chance did not cause the difference. If I found a difference of 25 with this kind of data, I’d be pretty confident that my experimental manipulation caused the difference, because obviously chance never does. What about a difference of +10? That’s in the red part, where chance lives. Chance could have done a difference of +10 because we can see that it did do that. The red part is the window of what chance did in our simulation. Anything inside the window could have been a difference caused by chance. If I found a difference of +10, I’d say, coulda been chance. I would not be very confident that my experimental manipulation caused the difference. Statistical inference could be this easy. The number you get from your experiment could be in the chance window (then you can’t rule out chance as a cause), or it could be outside the chance window (then you can rule out chance). Case closed. Let’s all go home. Grey areas So what’s the problem? Depending on who you are, and what kinds of risks you’re willing to take, there might not be a problem. But, if you are just even a little bit risky then there is a problem that makes clear judgments about the role of chance difficult. We would like to say chance did or did not cause our difference. But, we’re really always in the position of admitting that it could have sometimes, or wouldn’t have most times. These are wishy washy statements, they are in between yes or no. That’s OK. Grey is a color too, let’s give grey some respect. 
“What grey areas are you talking about? I only see red or green, am I grey-blind?” Let’s look at where some grey areas might be. I say might be, because people disagree about where the grey is. People have different comfort levels with grey. Here’s my opinion on some clear grey areas. I made two grey areas, and they are reddish grey, because we are still in the chance window. There are question marks (?) in the grey areas. Why? The question marks reflect some uncertainty that we have about those particular differences. For example, suppose you found a difference that was in a grey area, say a 15. 15 is less than the maximum, which means chance did create differences of around 15. But, differences of 15 don’t happen very often. What can you conclude or say about this 15 you found? Can you say without a doubt that chance did not produce the difference? Of course not; you know that chance could have. Still, it’s one of those things that doesn’t happen a lot. That makes chance an unlikely explanation. Instead of thinking that chance did it, you might be willing to take a risk and say that your experimental manipulation caused the difference. You’d be making a bet that it wasn’t chance… but it could be a safe bet, since you know the odds are in your favor. You might be thinking that your grey areas aren’t the same as the ones I’ve drawn. Maybe you want to be more conservative, and make them smaller. Or, maybe you’re more risky, and would make them bigger. Or, maybe you’d add some grey area going in a little bit to the green area (after all, chance could probably produce some bigger differences sometimes, and to avoid those you would have to make the grey area go a bit into the green area). Another thing to think about is your decision policy. What will you do when your observed difference is in your grey area? Will you always make the same decision about the role of chance? Or, will you sometimes flip-flop depending on how you feel? Perhaps you think that there shouldn’t be a strict policy, and that you should accept some level of uncertainty. The difference you found could be a real one, or it might not be. There’s uncertainty; it’s hard to avoid that. So let’s illustrate one more kind of strategy for making decisions. We just talked about one that had some lines, and some regions. This makes it seem like we can either rule out, or not rule out, the role of chance. Another way of looking at things is that everything is a different shade of grey. It looks like this: OK, so I made it shades of blue (because it was easier in R). Now we can see two decision plans at the same time. Notice that as the bars get shorter, they also become a darker, stronger blue. The color can be used as a guide for your confidence. That is, your confidence in the belief that your manipulation caused the difference rather than chance. If you found a difference near a really dark bar, those don’t happen often by chance, so you might be really confident that chance didn’t do it. If you find a difference near a slightly lighter blue bar, you might be slightly less confident. That is all. You run your experiment, you get your data, then you have some amount of confidence that it wasn’t produced by chance. This way of thinking is elaborated to very interesting degrees in the Bayesian world of statistics. We don’t wade too much into that, but mention it a little bit here and there. It’s worth knowing it’s out there.
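One hypothetical way to turn those shades of confidence into a single number is to ask what proportion of the simulated chance differences were at least as big as the difference you observed. This is not an official part of the Crump test; it just reuses the `difference` variable from the simulation above, and the observed value of 15 is made up:

```
observed <- 15                      # a made-up observed mean difference
mean(abs(difference) >= observed)   # proportion of chance differences at least that big
```

The smaller that proportion, the darker the bar you are sitting near, and the more confident you might feel that chance didn’t do it.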
Making Bad Decisions

No matter how you plan to make decisions about your data, you will always be prone to making some mistakes. You might call one finding real, when in fact it was caused by chance. This is called a type I error, or a false positive. You might ignore one finding, calling it chance, when in fact it wasn't chance (even though it was in the window). This is called a type II error, or a false negative.

How you make decisions can influence how often you make errors over time. If you are a researcher, you will run lots of experiments, and you will make some amount of mistakes over time. If you do something like the very strict method of only accepting results as real when they are in the "no chance" zone, then you won't make many type I errors. Pretty much all of your results will be real. But, you'll also make type II errors, because you will miss real things that your decision criterion says are due to chance. The opposite also holds. If you are willing to be more liberal, and accept results in the grey as real, then you will make more type I errors, but you won't make as many type II errors. Under the decision strategy of using these cutoff regions for decision-making, there is a necessary trade-off. The Bayesian view gets around this a little bit. Bayesians talk about updating their beliefs and confidence over time. In that view, all you ever have is some level of confidence about whether something is real, and by running more experiments you can increase or decrease your level of confidence. This, in some fashion, avoids some of the trade-off between type I and type II errors.

Regardless, there is another way to avoid type I and type II errors, and to increase your confidence in your results, even before you do the experiment. It's called "knowing how to design a good experiment".

Part 4: Experiment Design

We've seen what chance can do. Now we run an experiment. We manipulate something between groups A and B, get the data, calculate the group means, then look at the difference. Then we cross all of our fingers and toes, and hope beyond hope that the difference is big enough to not be caused by chance. That's a lot of hope.

Here's the thing: we often don't know how strong our manipulation is in the first place. So, even if it can cause a change, we don't necessarily know how much change it can cause. That's why we're running the experiment. Many manipulations in Psychology are not strong enough to cause big changes. This is a problem for detecting these smallish causal forces. In our fake example, you could easily manipulate something that has a tiny influence, and will never push the mean difference past, say, 5 or 10. In our simulation, we need something more like a 15 or 17 or a 21, or hey, a 30 would be great, chance never does that. Let's say your manipulation is listening to music or not listening to music. Music listening might change something about X, but if it only changes X by +5, you'll never be able to confidently say it wasn't chance. And, it's not that easy to make the music manipulation super strong in the music condition so that it really causes a big change in X compared to the no music condition.

EXPERIMENT DESIGN TO THE RESCUE! Newsflash: it is often possible to change how you run your experiment so that it is more sensitive to smaller effects. How do you think we can do this? Here is a hint. It's the stuff you learned about the sampling distribution of the sample mean, and the role of sample-size.
What happens to the sampling distribution of the sample mean when N (sample size) increases? The distribution gets narrower and narrower, and starts to look like a single number (the hypothetical mean of the hypothetical population). That's great. If you switch to thinking about mean difference scores, like the distribution we created in this test, what do you think will happen to that distribution as we increase N? It will also shrink. As we increase N to infinity, it will shrink to 0. Which means that, when N is infinity, chance never produces any differences at all. We can use this.

For example, we could run our experiment with 20 subjects in each group. Or, we could decide to invest more time and run 40 subjects in each group, or 80, or 150. When you are the experimenter, you get to decide the design. These decisions matter big time. Basically, the more subjects you have, the more sensitive your experiment. With bigger N, you will be able to reliably detect smaller mean differences, and be able to confidently conclude that chance did not produce those small effects.

Check out this next set of histograms. All we are doing is the very same simulation as before, but this time we do it for different sample-sizes: 20, 40, 80, 160. We are doubling our sample-size across each simulation just to see what happens to the width of the chance window. There you have it. The sampling distribution of the mean differences shrinks toward 0 as sample-size increases. This means if you run an experiment with a larger sample-size, you will be able to detect smaller mean differences, and be confident they aren't due to chance.

Let's look at a table of the minimum and maximum values that chance produced across these four sample-sizes:

|sample_size |smallest   |biggest   |
|:-----------|:----------|:---------|
|20          |-25.858660 |26.266110 |
|40          |-17.098721 |16.177815 |
|80          |-12.000585 |11.919035 |
|160         |-9.251625  |8.357951  |

The table is telling. The range of chance's behavior is very wide for sample-size = 20, but only about a third as wide for sample-size = 160. If it turns out your manipulation will cause a difference of +11, then what should you do? Run an experiment with 20 people? I hope not. If you did that, you could get +11s fairly often by chance. If you ran the experiment with 160 people, then you would definitely be able to say that +11 was not due to chance; it would be outside the range of what chance can do. You could even consider running the experiment with 80 subjects. A +11 there wouldn't happen often by chance, and you'd be cost-effective, spending less time on the experiment.

The point is: the design of the experiment determines the sizes of the effects it can detect. If you want to detect a small effect, make your sample size bigger. It's really important to say this is not the only thing you can do. You can also make your cell-sizes bigger. For example, oftentimes we take several measurements from a single subject. The more measurements you take (cell-size), the more stable your estimate of the subject's mean. We discuss these issues more later. You can also use a stronger manipulation, when possible.

Part 5: I have the power

By the power of greyskull, I HAVE THE POWER - He-man

The last thing we'll talk about here is something called power. In fact, we are going to talk about the concept of power, not actual power. It's confusing now, but later we will define power in terms of some particular ideas about statistical inference. Here, we will just talk about the idea. And, we'll show how to make sure your design has 100% power. Because, why not.
Why run a design that doesn't have the power? The big idea behind power is the concept of sensitivity. The concept of sensitivity assumes that there is something to be sensitive to. That is, there is some real difference that can be measured. So, the question is, how sensitive is your experiment? We've already seen that the number of subjects (sample-size) changes the sensitivity of the design. More subjects = more sensitivity to smaller effects.

Let's take a look at one more plot. What we will do is simulate a measure of sensitivity across a whole bunch of sample sizes, from 10 to 300. We'll do this in steps of 10. For each simulation, we'll compute the mean differences as we have done. But, rather than showing the histogram, we'll just compute the smallest value and the largest value. This is a pretty good measure of the outer reach of chance. Then we'll plot those values as a function of sample size and see what we've got.

What we have here is a reasonably precise window of sensitivity as a function of sample size. For each sample size, we can see the maximum difference that chance produced and the minimum difference. In those simulations, chance never produced bigger or smaller differences. So, each design is sensitive to any difference that is underneath the bottom line, or above the top line. It's really that simple.

Here's another way of putting it. Which of the sample sizes will be sensitive to a difference of +10 or -10? That is, if a difference of +10 or -10 was observed, then we could very confidently say that the difference was not due to chance, because according to these simulations, chance never produced differences that big. To help us see which ones are sensitive, let's draw some horizontal lines at -10 and +10.

I would say all of the designs with sample size = 100 or greater are perfectly sensitive to real differences of 10 (if they exist). We can see that all of the dots after sample size 100 are underneath the red line. So effects that are as big as the red line, or bigger, will almost never occur due to chance. But, if they do occur in nature, those experiments will detect them straight away. That is sensitivity. And, designing your experiment so that you know it is sensitive to the thing you are looking for is the big idea behind power. It's worth knowing this kind of thing before you run your experiment. Why waste your own time and run an experiment that doesn't have a chance of detecting the thing you are looking for?

Summary of Crump Test

What did we learn from this so-called fake Crump test that nobody uses? Well, we learned the basics of what we'll be doing moving forward. And, we did it all without any hard math or formulas. We sampled numbers, we computed means, we subtracted means, then we did that a lot and counted up the means and put them in a histogram. This showed us what chance can do in an experiment. Then, we discussed how to make decisions around these facts. And, we showed how we can manipulate the role of chance just by changing things like sample size.
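As a final illustration before moving on, here is a rough sketch of how the sensitivity simulation described above could be coded in R. The distribution the scores are drawn from (normal, mean 100, sd 20) and the number of simulations per sample size (1,000, to keep it quick) are stand-in choices, not the chapter's exact settings.

```
# Chance window (min and max mean difference) as a function of sample size
set.seed(1)
sample_sizes <- seq(10, 300, by = 10)
chance_windows <- sapply(sample_sizes, function(n) {
  differences <- replicate(1000, mean(rnorm(n, 100, 20)) - mean(rnorm(n, 100, 20)))
  range(differences)  # smallest and biggest difference chance produced
})

# Plot the bottom and top of the chance window against sample size,
# with reference lines at -10 and +10
plot(sample_sizes, chance_windows[2, ], type = "b", ylim = c(-40, 40),
     xlab = "Sample size (per group)", ylab = "Mean difference")
points(sample_sizes, chance_windows[1, ], type = "b")
abline(h = c(-10, 10), col = "red")
```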
Welcome to the first official inferential statistic in this textbook. Up till now we have been building some intuitions for you. Next, we will get slightly more formal and show you how we can use random chance to tell us whether our experimental finding was likely due to chance or not. We do this with something called a randomization test. The ideas behind the randomization test are the very same ideas behind the rest of the inferential statistics that we will talk about in later chapters. And, surprise, we have already talked about all of the major ideas. Now, we will just put the ideas together, and give them the name randomization test.

Here's the big idea. When you run an experiment and collect some data you get to find out what happened that one time. But, because you ran the experiment only once, you don't get to find out what could have happened. The randomization test is a way of finding out what could have happened. And, once you know that, you can compare what did happen in your experiment with what could have happened.

Pretend example: does chewing gum improve your grades?

Let's say you run an experiment to find out if chewing gum causes students to get better grades on statistics exams. You randomly assign 20 students to the chewing gum condition, and 20 different students to the no-chewing gum condition. Then, you give everybody statistics tests and measure their grades. If chewing gum causes better grades, then the chewing gum group should have higher grades on average than the group who did not chew gum. Let's say the data looked like this:

```
suppressPackageStartupMessages(library(dplyr))
gum<-round(runif(20,70,100))
no_gum<-round(runif(20,40,90))
gum_df<-data.frame(student=seq(1:20),gum,no_gum)
gum_df <- gum_df %>%
  rbind(c("Sums",colSums(gum_df[,2:3]))) %>%
  rbind(c("Means",colMeans(gum_df[,2:3])))
knitr::kable(gum_df)
```

```
|student |gum   |no_gum |
|:-------|:-----|:------|
|1       |96    |69     |
|2       |78    |56     |
|3       |86    |42     |
|4       |92    |89     |
|5       |84    |55     |
|6       |94    |75     |
|7       |96    |44     |
|8       |82    |59     |
|9       |83    |85     |
|10      |74    |78     |
|11      |99    |63     |
|12      |70    |85     |
|13      |83    |44     |
|14      |89    |71     |
|15      |83    |72     |
|16      |76    |52     |
|17      |79    |61     |
|18      |90    |44     |
|19      |99    |80     |
|20      |98    |84     |
|Sums    |1731  |1308   |
|Means   |86.55 |65.4   |
```

So, did the students chewing gum do better than the students who didn't chew gum? Look at the mean test performance at the bottom of the table. The mean for students chewing gum was 86.55, and the mean for students who did not chew gum was 65.4. Just looking at the means, it looks like chewing gum worked!

"STOP THE PRESSES, this is silly". We already know this is silly because we are making pretend data. But, even if this was real data, you might think, "Chewing gum won't do anything, this difference could have been caused by chance, I mean, maybe the better students just happened to be put into the chewing gum group, so because of that their grades were higher, chewing gum didn't do anything…". We agree. But, let's take a closer look. We already know how the data came out. What we want to know is how they could have come out, what are all the possibilities? For example, the data would have come out a bit different if we happened to have put some of the students from the gum group into the no gum group, and vice versa. Think of all the ways you could have assigned the 40 students into two groups, there are lots of ways. And, the means for each group would turn out differently depending on how the students are assigned to each group.
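Just how many ways are there to split the 40 students into a gum group and a no gum group of 20 each? R can count them with the `choose` function:

```
# Number of distinct ways to pick which 20 of the 40 students chew gum
choose(40, 20)  # 137,846,528,820 different assignments
```

That is roughly 138 billion versions of the experiment, far too many to actually run, which is where the shuffling strategy described next comes in.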
Practically speaking, it's not possible to run the experiment every possible way; that would take too long. But, we can nevertheless estimate how all of those experiments might have turned out using simulation.

Here's the idea. We will take the 40 measurements (exam scores) that we found for all the students. Then we will randomly take 20 of them and pretend they were in the gum group, and we'll take the remaining 20 and pretend they were in the no gum group. Then we can compute the means again to find out what would have happened. We can keep doing this over and over again, every time computing what happened in that version of the experiment.

Doing the randomization

Before we do that, let's show how the randomization part works. We'll use fewer numbers to make the process easier to look at. Here are the first 5 exam scores for students in both groups.

```
suppressPackageStartupMessages(library(dplyr))
gum<-round(runif(20,70,100))
no_gum<-round(runif(20,40,90))
gum_df<-data.frame(student=seq(1:20),gum,no_gum)
gum_df <- gum_df %>%
  rbind(c("Sums",colSums(gum_df[,2:3]))) %>%
  rbind(c("Means",colMeans(gum_df[,2:3])))
gum_df_small<-gum_df[1:5,]
gum_df_small$gum<-as.numeric(gum_df_small$gum)
gum_df_small$no_gum<-as.numeric(gum_df_small$no_gum)
gum_df_small <- gum_df_small %>%
  rbind(c("Sums",colSums(gum_df_small[,2:3]))) %>%
  rbind(c("Means",colMeans(gum_df_small[,2:3])))
knitr::kable(gum_df_small)
```

```
|student |gum  |no_gum |
|:-------|:----|:------|
|1       |87   |61     |
|2       |74   |57     |
|3       |84   |85     |
|4       |96   |79     |
|5       |83   |50     |
|Sums    |424  |332    |
|Means   |84.8 |66.4   |
```

Things could have turned out differently if some of the subjects in the gum group were switched with the subjects in the no gum group. Here's how we can do some random switching. We will do this using R.

```
gum<-round(runif(20,70,100))
no_gum<-round(runif(20,40,90))
all_scores <- c(gum[1:5],no_gum[1:5])
randomize_scores <- sample(all_scores)
new_gum <- randomize_scores[1:5]
new_no_gum <- randomize_scores[6:10]
print(new_gum)
print(new_no_gum)
```

```
[1] 83 84 42 92 60
[1] 82 67 58 83 43
```

We have taken the first 5 numbers from the original data, and put them all into a variable called `all_scores`. Then we use the `sample` function in R to shuffle the scores. Finally, we take the first 5 scores from the shuffled numbers and put them into a new variable called `new_gum`. Then, we put the last five scores into the variable `new_no_gum`. Then we printed them, so we can see them.

If we do this a couple of times and put them in a table, we can indeed see that the means for gum and no gum would be different if the subjects were shuffled around.
Check it out:

```
suppressPackageStartupMessages(library(dplyr))
gum<-round(runif(20,70,100))
no_gum<-round(runif(20,40,90))
gum_df<-data.frame(student=seq(1:20),gum,no_gum)
gum_df <- gum_df %>%
  rbind(c("Sums",colSums(gum_df[,2:3]))) %>%
  rbind(c("Means",colMeans(gum_df[,2:3])))
gum_df_small<-gum_df[1:5,]
gum_df_small$gum<-as.numeric(gum_df_small$gum)
gum_df_small$no_gum<-as.numeric(gum_df_small$no_gum)
all_scores <- c(gum[1:5],no_gum[1:5])
randomize_scores <- sample(all_scores)
gum2 <- randomize_scores[1:5]
no_gum2 <- randomize_scores[6:10]
gum_df_small <-cbind(gum_df_small,gum2,no_gum2)
all_scores <- c(gum[1:5],no_gum[1:5])
randomize_scores <- sample(all_scores)
gum3 <- randomize_scores[1:5]
no_gum3 <- randomize_scores[6:10]
gum_df_small <-cbind(gum_df_small,gum3,no_gum3)
gum_df_small <- gum_df_small %>%
  rbind(c("Sums",colSums(gum_df_small[,2:7]))) %>%
  rbind(c("Means",colMeans(gum_df_small[,2:7])))
knitr::kable(gum_df_small)
```

```
|student |gum |no_gum |gum2 |no_gum2 |gum3 |no_gum3 |
|:-------|:---|:------|:----|:-------|:----|:-------|
|1       |75  |90     |41   |74      |41   |75      |
|2       |89  |41     |60   |89      |89   |65      |
|3       |74  |51     |89   |90      |60   |90      |
|4       |93  |60     |93   |65      |89   |51      |
|5       |89  |65     |51   |75      |74   |93      |
|Sums    |420 |307    |334  |393     |353  |374     |
|Means   |84  |61.4   |66.8 |78.6    |70.6 |74.8    |
```

Simulating the mean differences across the different randomizations

In our pretend experiment we found that the mean for students chewing gum was 86.55 (`mean(gum)`), and the mean for students who did not chew gum was 65.4 (`mean(no_gum)`). The mean difference (gum - no gum) was `mean(gum) - mean(no_gum)` = 21.15. This is a pretty big difference. This is what did happen. But, what could have happened? If we tried out all of the experiments where different subjects were switched around, what does the distribution of the possible mean differences look like? Let's find out. This is what the randomization test is all about.

When we do our randomization test we will measure the mean difference in exam scores between the gum group and the no gum group. Every time we randomize we will save the mean difference.

Let's look at a short animation of what is happening in the randomization test. Note, what you are about to see is data from a different fake experiment, but the principles are the same. We'll return to the gum / no gum experiment after the animation.

The animation is showing you three important things. First, the purple dots show you the mean scores in two groups (didn't study vs study). It looks like there is a difference, as one dot is lower than the other. We want to know if chance could produce a difference this big. At the beginning of the animation, the light green and red dots show the individual scores from each of 10 subjects in the design (the purple dots are the means of these original scores). Now, during the randomizations, we randomly shuffle the original scores between the groups. You can see this happening throughout the animation, as the green and red dots appear in different random combinations. The moving yellow dots show you the new means for each group after the randomization. The differences between the yellow dots show you the range of differences that chance could produce.

We are engaging in some visual statistical inference. By looking at the range of motion of the yellow dots, we are watching what kind of differences chance can produce.
In this animation, the purple dots, representing the original difference, are generally outside of the range of chance. The yellow dots don't move past the purple dots; as a result, chance is an unlikely explanation of the difference. If the purple dots were inside the range of the yellow dots, then we would know that chance is capable of producing the difference we observed, and that it does so fairly often. As a result, we should not conclude the manipulation caused the difference, because it could have easily occurred by chance.

Let's return to the gum example. After we randomize our scores many times, and compute the new means, and the mean differences, we will have loads of mean differences to look at, which we can plot in a histogram. The histogram gives a picture of what could have happened. Then, we can compare what did happen with what could have happened.

Here's the histogram of the mean differences from the randomization test. For this simulation, we randomized the results from the original experiment 1000 times. This is what could have happened. The blue line in the figure shows us where our observed difference lies on the x-axis.

What do you think? Could the difference represented by the blue line have been caused by chance? My answer is probably not. The histogram shows us the window of chance. The blue line is not inside the window. This means we can be pretty confident that the difference we observed was not due to chance.

We are looking at another window of chance. We are seeing a histogram of the kinds of mean differences that could have occurred in our experiment, if we had assigned our subjects to the gum and no gum groups differently. As you can see, the mean differences range from negative to positive. The most frequent difference is 0. Also, the distribution appears to be symmetrical about zero, which shows we had roughly the same chances of getting a positive or negative difference. Also, notice that as the differences get larger (in the positive or negative direction), they become less frequent.

The blue line shows us the observed difference; this is the one we found in our fake experiment. Where is it? It's way out to the right. It is well outside the histogram. In other words, when we look at what could have happened, we see that what did happen doesn't occur very often.

IMPORTANT: In this case, when we speak of what could have happened, we are talking about what could have happened by chance. When we compare what did happen to what chance could have done, we can get a better idea of whether our result was caused by chance.

OK, let's pretend we got a much smaller mean difference when we first ran the experiment. We can draw new lines (blue and red) to represent a smaller mean we might have found. Look at the blue line. If you found a mean difference of 10, would you be convinced that your difference was not caused by chance? As you can see, the blue line is inside the chance window. Notably, differences of +10 don't happen very often. You might infer that your difference was not likely to be due to chance (but you might be a little bit skeptical, because it could have been).

How about the red line? The red line represents a difference of +5. If you found a difference of +5 here, would you be confident that your difference was not caused by chance? I wouldn't be. The red line is totally inside the chance window; this kind of difference happens fairly often.
I'd need some more evidence to consider the claim that some independent variable actually caused the difference. I'd be much more comfortable assuming that sampling error probably caused the difference.

Take homes so far

Have you noticed that we haven't used any formulas yet, but we have still been able to accomplish inferential statistics? We will see some formulas as we progress, but these aren't as important as the ideas behind the formulas. Inferential statistics is an attempt to solve the problem: where did my data come from? In the randomization test example, our question was: where did the differences between the means in my data come from? We know that the differences could be produced by chance alone. We simulated what chance can do using randomization. Then we plotted what chance can do using a histogram. Then, we used the picture to help us make an inference. Did our observed difference come from the distribution, or not? When the observed difference is clearly inside the chance distribution, then we can infer that our difference could have been produced by chance. When the observed difference is not clearly inside the chance distribution, then we can infer that our difference was probably not produced by chance.

In my opinion, these pictures are very, very helpful. If one of our goals is to help ourselves summarize a bunch of complicated numbers to arrive at an inference, then the pictures do a great job. We don't even need a summary number, we just need to look at the picture and see if the observed difference is inside or outside of the window. This is what it is all about. Creating intuitive and meaningful ways to make inferences from our data. As we move forward, the main thing that we will do is formalize our process, and talk more about "standard" inferential statistics. For example, rather than looking at a picture (which is a good thing to do), we will create some helpful numbers. For example, what if you wanted to know the probability that your difference could have been produced by chance? That could be a single number, like 95%. If there was a 95% probability that chance could produce the difference you observed, you might not be very confident that something like your experimental manipulation was causing the difference. If there was only a 1% probability that chance could produce your difference, then you might be more confident that chance did not produce the difference; and, you might instead be comfortable with the possibility that your experimental manipulation actually caused the difference. So, how can we arrive at those numbers? In order to get there we will introduce you to some more foundational tools for statistical inference.
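To make the whole procedure concrete, here is a minimal sketch of the randomization test in R, using the exam scores from the first gum table in this chapter (where the means were 86.55 and 65.4). The last line computes the kind of single number we just hinted at: the proportion of random shuffles that produced a difference at least as extreme as the one we observed.

```
# Randomization test sketch, using the scores from the first gum/no gum table
gum    <- c(96, 78, 86, 92, 84, 94, 96, 82, 83, 74,
            99, 70, 83, 89, 83, 76, 79, 90, 99, 98)
no_gum <- c(69, 56, 42, 89, 55, 75, 44, 59, 85, 78,
            63, 85, 44, 71, 72, 52, 61, 44, 80, 84)

observed_difference <- mean(gum) - mean(no_gum)   # 21.15 for these scores
all_scores <- c(gum, no_gum)

# Shuffle the 40 scores into two new groups of 20, over and over,
# saving the mean difference each time
randomized_differences <- replicate(1000, {
  shuffled <- sample(all_scores)
  mean(shuffled[1:20]) - mean(shuffled[21:40])
})

hist(randomized_differences)
abline(v = observed_difference, col = "blue")

# Proportion of randomizations at least as extreme as the observed difference
mean(abs(randomized_differences) >= abs(observed_difference))
```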
One day, many moons ago, William Sealy Gosset got a job working for Guinness Breweries. They make the famous Irish stout called Guinness. What happened next went something like this (total fabrication, but mostly on point). Guinness wanted all of their beers to be the best beers. No mistakes, no bad beers. They wanted to improve their quality control so that when Guinness was poured anywhere in the world, it would always come out fantastic: 5 stars out of 5 every time, the best.

Guinness had some beer tasters, who were super-experts. Every time they tasted a Guinness from the factory that wasn't 5 out of 5, they knew right away. But, Guinness had a big problem. They would make a keg of beer, and they would want to know if every single pint that would come out would be a 5 out of 5. So, the beer tasters drank pint after pint out of the keg, until it was gone. Some kegs were all 5 out of 5s. Some weren't; Guinness needed to fix that. But, the biggest problem was that, after the testing, there was no beer left to sell; the testers drank it all (remember, I'm making this part up to illustrate a point, they probably still had beer left to sell).

Guinness had a sampling and population problem. They wanted to know that the entire population of the beers they made were all 5 out of 5 stars. But, if they sampled the entire population, they would drink all of their beer, and wouldn't have any left to sell.

Enter William Sealy Gosset. Gosset figured out the solution to the problem. He asked questions like this:

1. How many samples do I need to take to know the whole population is 5 out of 5?
2. What's the smallest number of samples I need to take to know the above? Taking fewer samples would mean Guinness could test fewer beers for quality, sell more beers for profit, and make the product testing time shorter.

Gosset solved those questions, and he invented something called the Student's t-test. Gosset was working for Guinness, and could be fired for releasing trade secrets that he invented (the t-test). But, Gosset published the work anyways, under a pseudonym (Student 1908). He called himself Student, hence Student's t-test. Now you know the rest of the story.

It turns out this was a very nice thing for Gosset to have done. t-tests are used all the time, and they are useful, that's why they are used. In this chapter we learn how they work. You'll be surprised to learn that what we've already talked about (the Crump Test, and the Randomization Test) are both very, very similar to the t-test. So, in general, you have already been thinking about the things you need to think about to understand t-tests. You're probably wondering what is this \(t\), what does \(t\) mean? We will tell you. Before we tell you what it means, we first tell you about one more idea.

6.01: Check your confidence in your mean

We've talked about getting a sample of data. We know we can find the mean, we know we can find the standard deviation. We know we can look at the data in a histogram. These are all useful things to do for us to learn something about the properties of our data.

You might be thinking of the mean and standard deviation as very different things that we would not put together. The mean is about central tendency (where most of the data is), and the standard deviation is about variance (where most of the data isn't). Yes, they are different things, but we can use them together to create useful new things.

What if I told you my sample mean was 50, and I told you nothing else about my sample?
Would you be confident that most of the numbers were near 50? Would you wonder if there was a lot of variability in the sample, and many of the numbers were very different from 50? You should wonder all of those things. The mean alone, just by itself, doesn't tell you anything about how well the mean represents all of the numbers in the sample. It could be a representative number, when the standard deviation is very small, and all the numbers are close to 50. It could be a non-representative number, when the standard deviation is large, and many of the numbers are not near 50. You need to know the standard deviation in order to be confident in how well the mean represents the data.

How can we put the mean and the standard deviation together, to give us a new number that tells us about confidence in the mean? We can do this using a ratio:

$\frac{mean}{\text{standard deviation}} \nonumber$

Think about what happens here. We are dividing a number by a number. Look at what happens:

$\frac{number}{\text{same number}} = 1 \nonumber$

$\frac{number}{\text{smaller number}} = \text{big number} \nonumber$

compared to:

$\frac{number}{\text{bigger number}} = \text{smaller number} \nonumber$

Imagine we have a mean of 50, and a truly small standard deviation of 1. What do we get with our formula?

$\frac{50}{1} = 50 \nonumber$

Imagine we have a mean of 50, and a big standard deviation of 100. What do we get with our formula?

$\frac{50}{100} = 0.5 \nonumber$

Notice, when we have a mean paired with a small standard deviation, our formula gives us a big number, like 50. When we have a mean paired with a large standard deviation, our formula gives us a small number, like 0.5. These numbers can tell us something about confidence in our mean, in a general way. We can be 50 confident in our mean in the first case, and only 0.5 (not a lot) confident in the second case.

What did we do here? We created a descriptive statistic by dividing the mean by the standard deviation. And, we have a sense of how to interpret this number: when it's big we're more confident that the mean represents all of the numbers, when it's small we are less confident. This is a useful kind of number, a ratio between what we think about our sample (the mean), and the variability in our sample (the standard deviation). Get used to this idea. Almost everything that follows in this textbook is based on this kind of ratio. We will see that our ratio turns into different kinds of "statistics", and the ratios will look like this in general:

$\text{name of statistic} = \frac{\text{measure of what we know}}{\text{measure of what we don't know}} \nonumber$

or, to say it using different words:

$\text{name of statistic} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber$

In fact, this is the general formula for the t-test. Big surprise!
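Here is a tiny sketch of that ratio in action, using two made-up samples that have the same mean (50) but very different spreads:

```
# Two made-up samples with the same mean but very different spreads
tight  <- c(49, 50, 51, 50, 49, 51, 50, 50)
spread <- c(10, 90, 30, 70, 5, 95, 50, 50)

mean(tight)                 # 50
mean(spread)                # 50

mean(tight) / sd(tight)     # big ratio: small sd, more confidence in the mean
mean(spread) / sd(spread)   # small ratio: big sd, less confidence in the mean
```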
Now we are ready to talk about t-tests. We will talk about three of them. We start with the one-sample t-test.

Commonly, the one-sample t-test is used to estimate the chances that your sample came from a particular population. Specifically, you might want to know whether the mean that you found from your sample could have come from a particular population having a particular mean.

Straight away, the one-sample t-test becomes a little confusing (and I haven't even described it yet). Officially, it uses known parameters from the population, like the mean of the population and the standard deviation of the population. However, most times you don't know those parameters of the population! So, you have to estimate them from your sample. Remember from the chapters on descriptive statistics and sampling, our sample mean is an unbiased estimate of the population mean. And, our sample standard deviation (the one where we divide by n-1) is an unbiased estimate of the population standard deviation. When Gosset developed the t-test, he recognized that he could use these estimates from his samples to make the t-test. Here is the formula for the one-sample t-test; we first use words, and then become more specific:

Formulas for one-sample t-test

$\text{name of statistic} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber$

$\text{t} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber$

$\text{t} = \frac{\text{Mean difference}}{\text{standard error}} \nonumber$

$\text{t} = \frac{\bar{X}-u}{S_{\bar{X}}} \nonumber$

$\text{t} = \frac{\text{Sample Mean - Population Mean}}{\text{Sample Standard Error}} \nonumber$

$\text{Estimated Standard Error} = \text{Standard Error of Sample} = \frac{s}{\sqrt{N}} \nonumber$

Where $s$ is the sample standard deviation.

Some of you may have gone cross-eyed looking at all of this. Remember, we've seen it before when we divided our mean by the standard deviation in the first bit. The t-test is just a measure of a sample mean, divided by the standard error of the sample mean. That is it.

What does t represent?

$t$ gives us a measure of confidence, just like our previous ratio for dividing the mean by the standard deviation. The only difference with $t$ is that we divide by the standard error of the mean (remember, this is also a standard deviation: it is the standard deviation of the sampling distribution of the mean).

Note: What does the t in t-test stand for? Apparently nothing. Gosset originally labelled it z. And, Fisher later called it t, perhaps because t comes after s, which is often used for the sample standard deviation.

$t$ is a property of the data that you collect. You compute it with a sample mean, and a sample standard error (there's one more thing in the one-sample formula, the population mean, which we get to in a moment). This is why we call $t$ a sample-statistic. It's a statistic we compute from the sample.

What kinds of numbers should we expect to find for these $t$s? How could we figure that out? Let's start small and work through some examples.

Imagine your sample mean is 5. You want to know if it came from a population that also has a mean of 5. In this case, what would $t$ be? It would be zero: we first subtract the population mean from the sample mean, $5-5=0$. Because the numerator is 0, $t$ will be zero. So, $t = 0$ occurs when there is no difference. Let's say you take another sample. Do you think the mean will be 5 every time? Probably not. Let's say the mean is 6. So, what can $t$ be here?
It will be a positive number, because $6-5= +1$. But, will $t$ be +1? That depends on the standard error of the sample. If the standard error of the sample is 1, then $t$ could be 1, because $1/1 = 1$. If the sample standard error is smaller than 1, what happens to $t$? It gets bigger, right? For example, 1 divided by $0.5 = 2$. If the sample standard error was 0.5, $t$ would be 2. And, what could we do with this information? Well, it would be like a measure of confidence. As $t$ gets bigger we could be more confident in the mean difference we are measuring.

Can $t$ be smaller than 1? Sure, it can. If the sample standard error is big, say like 2, then $t$ will be smaller than one (in our case), e.g., $1/2 = .5$.

The direction of the difference between the sample mean and population mean can also make $t$ become negative. What if our sample mean was 4? Well, then $t$ will be negative, because the mean difference in the numerator will be negative, and the number in the bottom (denominator) will always be positive (remember why: it's the standard error, computed from the sample standard deviation, which is always positive because of the squaring that we did).

So, those are some intuitions about the kinds of values $t$ can take. $t$ can be positive or negative, and big or small.

Let's do one more thing to build our intuitions about what $t$ can look like. How about we sample some numbers, then measure the sample mean and the standard error of the mean, and then plot those two things against each other. This will show us how a sample mean typically varies with respect to the standard error of the mean. In the following figure, I pulled 1,000 samples of N=10 from a normal distribution (mean = 0, sd = 1). Each time I measured the mean and standard error of the sample. That gave two descriptive statistics for each sample, letting us plot each sample as a dot in a scatterplot.

What we get is a cloud of dots. You might notice the cloud has a circular quality. There's more dots in the middle, and fewer dots as they radiate out from the middle. The dot cloud shows us the general range of the sample mean; for example, most of the dots are in between -1 and 1. Similarly, the range for the sample standard error is roughly between .2 and .5. Remember, each dot represents one sample.

We can look at the same data a different way. For example, rather than using a scatterplot, we can divide the mean for each dot by the standard error for each dot. Below is a histogram showing what this looks like:

Interesting, we can see the histogram is shaped like a normal curve. It is centered on 0, which is the most common value. As values become more extreme, they become less common. If you remember, our formula for $t$ was the mean divided by the standard error of the mean. That's what we did here. This histogram is showing you a $t$-distribution.

Calculating t from data

Let's briefly calculate a t-value from a small sample. Let's say we had 10 students do a true/false quiz with 5 questions on it. There's a 50% chance of getting each answer correct. Every student completes the 5 questions, we grade them, and then we find their performance (mean percent correct). What we want to know is whether the students were guessing. If they were all guessing, then the sample mean should be about 50%; it shouldn't be different from chance, which is 50%.
Let's look at the table:

```
suppressPackageStartupMessages(library(dplyr))
students <- 1:10
scores <- c(50,70,60,40,80,30,90,60,70,60)
mean_scores <- mean(scores)
Difference_from_Mean <- scores-mean_scores
Squared_Deviations <- Difference_from_Mean^2
the_df<-data.frame(students, scores, mean=rep(mean_scores,10), Difference_from_Mean, Squared_Deviations)
the_df <- the_df %>%
  rbind(c("Sums",colSums(the_df[1:10,2:5]))) %>%
  rbind(c("Means",colMeans(the_df[1:10,2:5]))) %>%
  rbind(c(" "," "," ","sd ",round(sd(the_df[1:10,2]),digits=2))) %>%
  rbind(c(" "," "," ","SEM ",round(sd(the_df[1:10,2])/sqrt(10), digits=2))) %>%
  rbind(c(" "," "," ","t",(61-50)/round(sd(the_df[1:10,2])/sqrt(10), digits=2)))
knitr::kable(the_df)
```

|students |scores |mean |Difference_from_Mean |Squared_Deviations |
|:--------|:------|:----|:--------------------|:------------------|
|1        |50     |61   |-11                  |121                |
|2        |70     |61   |9                    |81                 |
|3        |60     |61   |-1                   |1                  |
|4        |40     |61   |-21                  |441                |
|5        |80     |61   |19                   |361                |
|6        |30     |61   |-31                  |961                |
|7        |90     |61   |29                   |841                |
|8        |60     |61   |-1                   |1                  |
|9        |70     |61   |9                    |81                 |
|10       |60     |61   |-1                   |1                  |
|Sums     |610    |610  |0                    |2890               |
|Means    |61     |61   |0                    |289                |
|         |       |     |sd                   |17.92              |
|         |       |     |SEM                  |5.67               |
|         |       |     |t                    |1.94003527336861   |

You can see the scores column has all of the test scores for each of the 10 students. We did the things we need to do to compute the standard deviation. Remember, the sample standard deviation is the square root of the sample variance, or:

$\text{sample standard deviation} = \sqrt{\frac{\sum_{i}^{n}(x_{i}-\bar{x})^2}{N-1}} \nonumber$

$\text{sd} = \sqrt{\frac{2890}{10-1}} = 17.92 \nonumber$

The standard error of the mean is the standard deviation divided by the square root of N:

$\text{SEM} = \frac{s}{\sqrt{N}} = \frac{17.92}{\sqrt{10}} = 5.67 \nonumber$

$t$ is the difference between our sample mean (61) and our population mean (50, assuming chance), divided by the standard error of the mean:

$\text{t} = \frac{\bar{X}-u}{S_{\bar{X}}} = \frac{\bar{X}-u}{SEM} = \frac{61-50}{5.67} = 1.94 \nonumber$

And, that is how you calculate $t$ by hand. It's a pain. I was annoyed doing it this way. In the lab, you learn how to calculate $t$ using software, so it will just spit out $t$. For example, in R, all you have to do is this:

```
scores <- c(50,70,60,40,80,30,90,60,70,60)
t.test(scores, mu=50)
```

```
	One Sample t-test

data:  scores
t = 1.9412, df = 9, p-value = 0.08415
alternative hypothesis: true mean is not equal to 50
95 percent confidence interval:
 48.18111 73.81889
sample estimates:
mean of x 
       61 
```

How does t behave?

If $t$ is just a number that we can compute from our sample (it is), what can we do with it? How can we use $t$ for statistical inference?

Remember back to the chapter on sampling and distributions; that's where we discussed the sampling distribution of the sample mean. Remember, we made a lot of samples, then computed the mean for each sample, then we plotted a histogram of the sample means. Later, in that same section, we mentioned that we could generate sampling distributions for any statistic. For each sample, we could compute the mean, the standard deviation, the standard error, and now even $t$, if we wanted to. We could generate 10,000 samples, and draw four histograms, one for each sampling distribution for each statistic. This is exactly what I did, and the results are shown in the four figures below. I used a sample size of 20, and drew random observations for each sample from a normal distribution, with mean = 0, and standard deviation = 1. Let's look at the sampling distributions for each of the statistics.
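In case you want to recreate these four sampling distributions yourself, here is a minimal sketch of the simulation just described (10,000 samples of N = 20 drawn from a normal distribution with mean 0 and sd 1); the plotting details are my own choices, not necessarily the book's exact code.

```
# Sampling distributions of the mean, sd, SEM, and t for samples of N = 20
set.seed(1)
simulations <- replicate(10000, {
  sample_scores <- rnorm(20, mean = 0, sd = 1)
  sample_mean <- mean(sample_scores)
  sample_sem  <- sd(sample_scores) / sqrt(20)
  c(mean = sample_mean,
    sd   = sd(sample_scores),
    sem  = sample_sem,
    t    = sample_mean / sample_sem)   # t with the population mean taken to be 0
})

par(mfrow = c(2, 2))
hist(simulations["mean", ], main = "Sample means")
hist(simulations["sd", ],   main = "Sample standard deviations")
hist(simulations["sem", ],  main = "Sample standard errors")
hist(simulations["t", ],    main = "Sample ts")
```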
$t$ was computed with the population mean assumed to be 0. We see four sampling distributions. This is how these statistical summaries of samples behave. We have used the term chance window before. These are four chance windows, measuring different aspects of the sample.

In this case, all of the samples came from the same normal distribution. Because of sampling error, each sample is not identical. The means are not identical, the standard deviations are not identical, the sample standard errors of the mean are not identical, and the $t$s of the samples are not identical. They all have some variation, as shown by the histograms. This is how samples of size 20 behave.

We can see straight away that, in this case, we are unlikely to get a sample mean of 2. That's way outside the window. The range for the sampling distribution of the mean is around -.5 to +.5, and is centered on 0 (the population mean, would you believe!). We are unlikely to get sample standard deviations outside of the range between .6 and 1.5; that is a different range, specific to the sample standard deviation. Same thing with the sample standard error of the mean: the range here is even smaller, mostly between .1 and .3. You would rarely find a sample with a standard error of the mean greater than .3. Virtually never would you find one of, say, 1 (for this situation). Now, look at $t$. Its range is basically between -3 and +3 here. 3s barely happen at all. You pretty much never see a 5 or -5 in this situation.

All of these sampling windows are chance windows, and they can all be used in the same way as we have used similar sampling distributions before (e.g., the Crump Test and the Randomization Test) for statistical inference. For all of them we would follow the same process:

1. Generate these distributions
2. Look at your sample statistics for the data you have (mean, SD, SEM, and $t$)
3. Find the likelihood of obtaining that value or greater
4. Obtain that probability
5. See if you think your sample statistics were probable or improbable.

We'll formalize this in a second. I just want you to know that what you will be doing is something that you have already done before. For example, in the Crump test and the Randomization test we focused on the distribution of mean differences. We could do that again here, but instead, we will focus on the distribution of $t$ values. We then apply the same kinds of decision rules to the $t$ distribution as we did for the other distributions. Below you will see a graph you have already seen, except this time it is a distribution of $t$s, not mean differences:

Remember, if we obtained a single $t$ from one sample we collected, we could consult the chance window below to find out whether the $t$ we obtained from the sample was likely or unlikely to occur by chance.

Making a decision

From our earlier example involving the TRUE/FALSE quizzes, we are now ready to make some kind of decision about what happened there. We found a mean difference of 11, and a $t$ of 1.94. The probability of this $t$ or larger occurring is $p$ = 0.0841503. We were testing the idea that our sample mean of 61 could have come from a normal distribution with mean = 50. In R, those numbers come straight from the quiz scores:

```
scores <- c(50,70,60,40,80,30,90,60,70,60)
mean(scores) - 50                   # mean difference: 11
t.test(scores, mu=50)$statistic     # t: 1.9411765
t.test(scores, mu=50)$p.value       # p: 0.0841503
```
The $t$ test tells us that the $t$ for our sample, or a larger one, would happen with p = 0.0841503. In other words, chance can do it a small amount of the time, but not often. In English, this means that all of the students could have been guessing, but it wasn't that likely that they were just guessing.

We're guessing that you are still a little bit confused about $t$ values, and what we are doing here. We are going to skip ahead to the next $t$-test, called a paired samples t-test. We will also fill in some more things about $t$-tests that are more obvious when discussing the paired samples t-test. In fact, spoiler alert, we will find out that a paired samples t-test is actually a one-sample t-test in disguise (WHAT!), yes it is. If the one-sample $t$-test didn't make sense to you, read the next section.
For me (Crump), many analyses often boil down to a paired samples t-test. It just happens that many things I do reduce down to a test like this. I am a cognitive psychologist, I conduct research about how people do things like remember, pay attention, and learn skills. There are lots of Psychologists like me, who do very similar things.

We all often conduct the same kinds of experiments. They go like this, and they are called repeated measures designs. They are called repeated measures designs because we measure how one person does something more than once; we repeat the measure. So, I might measure somebody doing something in condition A, and measure the same person doing something in condition B, and then I can see whether that same person does different things in the two conditions. I repeatedly measure the same person in both conditions. I am interested in whether the experimental manipulation changes something about how people perform the task in question.

Mehr, Song, and Spelke (2016)

We will introduce the paired-samples t-test with an example using real data, from a real study. Mehr, Song, and Spelke (2016) were interested in whether singing songs to infants helps infants become more sensitive to social cues. For example, infants might need to learn to direct their attention toward people as a part of learning how to interact socially with people. Perhaps singing songs to infants aids this process of directing attention. When an infant hears a familiar song, they might start to pay more attention to the person singing that song, even after they are done singing the song. The person who sang the song might become more socially important to the infant. You will learn more about this study in the lab for this week. This example prepares you for the lab activities. Here is a brief summary of what they did.

First, parents were trained to sing a song to their infants. After many days of singing this song to the infants, a parent came into the lab with their infant. In the first session, parents sat with their infants on their knees, so the infant could watch two video presentations. There were two videos. Each video involved two unfamiliar new people the infant had never seen before. Each new person in the video (the singers) sang one song to the infant. One singer sang the "familiar" song the infant had learned from their parents. The other singer sang an "unfamiliar" song the infant had not heard before.

There were two really important measurement phases: the baseline phase, and the test phase. The baseline phase occurred before the infants saw and heard each singer sing a song. During the baseline phase, the infants watched a video of both singers at the same time. The researchers recorded the proportion of time that the infant looked at each singer. The baseline phase was conducted to determine whether infants had a preference to look at either person (who would later sing them a song). The test phase occurred after infants saw and heard each song, sung by each singer. During the test phase, each infant had an opportunity to watch silent videos of both singers. The researchers measured the proportion of time the infants spent looking at each person. The question of interest was whether the infants would spend a greater proportion of time looking at the singer who sang the familiar song, compared to the singer who sang the unfamiliar song.

There is more than one way to describe the design of this study. We will describe it like this.
It was a repeated measures design, with one independent (manipulation) variable called Viewing phase: Baseline versus Test. There was one dependent variable (the measurement), which was proportion looking time (to the singer who sang the familiar song). This was a repeated measures design because the researchers measured proportion looking time twice (they repeated the measure), once during baseline (before infants heard each singer sing a song), and again during test (after infants heard each singer sing a song).

The important question was whether infants would change their looking time, and look more at the singer who sang the familiar song during the test phase than they did during the baseline phase. This is a question about a change within individual infants. In general, the possible outcomes for the study are:

1. No change: The difference between looking time toward the singer of the familiar song during baseline and test is zero, no difference.
2. Positive change: Infants will look longer toward the singer of the familiar song during the test phase (after they saw and heard the singers), compared to the baseline phase (before they saw and heard the singers). This is a positive difference if we use the formula: Test phase looking time - Baseline phase looking time (to familiar song singer).
3. Negative change: Infants will look longer toward the singer of the unfamiliar song during the test phase (after they saw and heard the singers), compared to the baseline phase (before they saw and heard the singers). This is a negative difference if we use the same formula: Test phase looking time - Baseline phase looking time (to familiar song singer).

The Data

Let's take a look at the data for the first 5 infants in the study. This will help us better understand some properties of the data before we analyze it. We will see that the data is structured in a particular way that we can take advantage of with a paired samples t-test. Note, we look at the first 5 infants to show how the computations work. The results of the paired-samples t-test change when we use all of the data from the study. Here is a table of the data:

```
library(data.table)
suppressPackageStartupMessages(library(dplyr))
all_data <- fread("https://stats.libretexts.org/@api/deki/files/10603/MehrSongSpelke2016.csv")
experiment_one <- all_data %>% filter(exp1==1)
paired_sample_df <- data.frame(infant=1:5,
  Baseline = round(experiment_one$Baseline_Proportion_Gaze_to_Singer[1:5], digits=2),
  Test = round(experiment_one$Test_Proportion_Gaze_to_Singer[1:5], digits=2))
knitr::kable(paired_sample_df)
```

|infant |Baseline |Test |
|:------|:--------|:----|
|1      |0.44     |0.60 |
|2      |0.41     |0.68 |
|3      |0.75     |0.72 |
|4      |0.44     |0.28 |
|5      |0.47     |0.50 |

The table shows proportion looking times toward the singer of the familiar song during the Baseline and Test phases. Notice there are five different infants (1 to 5). Each infant is measured twice, once during the Baseline phase, and once during the Test phase. To repeat from before, this is a repeated-measures design, because the infants are measured repeatedly (twice in this case). Or, this kind of design is also called a paired-samples design. Why? Because each participant comes with a pair of samples (two samples), one for each level of the design.

Great, so what are we really interested in here? We want to know if the mean looking time toward the singer of the familiar song for the Test phase is higher than for the Baseline phase. We are comparing the two sample means against each other and looking for a difference.
We already know that differences could be obtained by chance alone, simply because we took two sets of samples, and we know that samples can be different. So, we are interested in knowing whether chance was likely or unlikely to have produced any difference we might observe.

In other words, we are interested in looking at the difference scores between the baseline and test phase for each infant. The question here is, for each infant, did their proportion looking time to the singer of the familiar song increase during the test phase as compared to the baseline phase?

The difference scores

Let's add the difference scores to the table of data so it is easier to see what we are talking about. The first step in creating difference scores is to decide how you will take the difference; there are two options:

1. Test phase score - Baseline phase score
2. Baseline phase score - Test phase score

Let's use the first formula. Why? Because it will give us positive differences when the test phase score is higher than the baseline phase score. This makes a positive score meaningful with respect to the study design. We know (because we defined it to be this way) that positive scores will refer to longer proportion looking times (to the singer of the familiar song) during the test phase compared to the baseline phase.

```
library(data.table)
suppressPackageStartupMessages(library(dplyr))
all_data <- fread("https://stats.libretexts.org/@api/deki/files/10603/MehrSongSpelke2016.csv")
experiment_one <- all_data %>% filter(exp1==1)
paired_sample_df <- data.frame(infant=1:5,
  Baseline = round(experiment_one$Baseline_Proportion_Gaze_to_Singer[1:5], digits=2),
  Test = round(experiment_one$Test_Proportion_Gaze_to_Singer[1:5], digits=2))
paired_sample_df <- cbind(paired_sample_df,
  differences = (paired_sample_df$Test - paired_sample_df$Baseline))
knitr::kable(paired_sample_df)
```

|infant |Baseline |Test |differences |
|:------|:--------|:----|:-----------|
|1      |0.44     |0.60 |0.16        |
|2      |0.41     |0.68 |0.27        |
|3      |0.75     |0.72 |-0.03       |
|4      |0.44     |0.28 |-0.16       |
|5      |0.47     |0.50 |0.03        |

There we have it, the difference scores. The first thing we can do here is look at the difference scores, and ask how many infants showed the effect of interest. Specifically, how many infants showed a positive difference score? We can see that three of five infants showed a positive difference (they looked more at the singer of the familiar song during the test than baseline phase), and two of the infants showed the opposite effect (a negative difference; they looked more at the singer of the familiar song during baseline than test).

The Mean Difference

As we have been discussing, the effect of interest in this study is the mean difference between the baseline and test phase proportion looking times. We can calculate the mean difference by finding the mean of the difference scores. Let's do that; in fact, for fun, let's calculate the mean of the baseline scores, the test scores, and the difference scores.
library(data.table)
suppressPackageStartupMessages(library(dplyr))
all_data <- fread(
  "https://stats.libretexts.org/@api/deki/files/10603/MehrSongSpelke2016.csv")
experiment_one <- all_data %>% filter(exp1==1)
paired_sample_df <- data.frame(infant=1:5,
  Baseline = round(experiment_one$Baseline_Proportion_Gaze_to_Singer[1:5], digits=2),
  Test = round(experiment_one$Test_Proportion_Gaze_to_Singer[1:5], digits=2))
paired_sample_df <- cbind(paired_sample_df,
  differences = (paired_sample_df$Test - paired_sample_df$Baseline))
paired_sample_df <- paired_sample_df %>%
  rbind(c("Sums", colSums(paired_sample_df[1:5,2:4]))) %>%
  rbind(c("Means", colMeans(paired_sample_df[1:5,2:4])))
knitr::kable(paired_sample_df)

infant Baseline Test differences
1 0.44 0.6 0.16
2 0.41 0.68 0.27
3 0.75 0.72 -0.03
4 0.44 0.28 -0.16
5 0.47 0.5 0.03
Sums 2.51 2.78 0.27
Means 0.502 0.556 0.054

We can see there was a positive mean difference of 0.054 between the test and baseline phases. Can we rush to judgment and conclude that infants are more socially attracted to individuals who have sung them a familiar song? I would hope not based on this very small sample. First, the difference in proportion looking isn't very large, and of course we recognize that this difference could have been produced by chance. We will more formally evaluate whether this difference could have been caused by chance with the paired-samples t-test. But, before we do that, let's again calculate $t$ and discuss what $t$ tells us over and above what our measure of the mean of the difference scores tells us.

Calculate t

OK, so how do we calculate $t$ for a paired-samples $t$-test? Surprise, we use the one-sample t-test formula that you already learned about! Specifically, we use the one-sample $t$-test formula on the difference scores. We have one sample of difference scores (you can see they are in one column), so we can use the one-sample $t$-test on the difference scores. Specifically, we are interested in comparing whether the mean of our difference scores came from a distribution with mean difference = 0. This is a special distribution we refer to as the null distribution. It is the distribution of no differences. Of course, this null distribution can produce differences due to sampling error, but those differences are not caused by any experimental manipulation; they are caused by the random sampling process.

We will calculate $t$ in a moment. Let's first consider again why we want to calculate $t$. Why don't we just stick with the mean difference we already have? Remember, the whole concept behind $t$ is that it gives an indication of how confident we should be in our mean. Remember, $t$ involves a measure of the mean in the numerator, divided by a measure of variation (standard error of the sample mean) in the denominator. The resulting $t$ value is small when the mean difference is small, or when the variation is large. So small $t$-values tell us that we shouldn't be that confident in the estimate of our mean difference. Large $t$-values occur when the mean difference is large and/or when the measure of variation is small. So, large $t$-values tell us that we can be more confident in the estimate of our mean difference. Let's find $t$ for the mean difference scores.
We use the same formulas as we did last time: library(data.table) suppressPackageStartupMessages(library(dplyr)) all_data <- fread( "https://stats.libretexts.org/@api/deki/files/10603/MehrSongSpelke2016.csv") experiment_one <- all_data %>% filter(exp1==1) paired_sample_df <- data.frame(infant=1:5, Baseline = round(experiment_one$Baseline_Proportion_Gaze_to_Singer[1:5], digits=2), Test = round(experiment_one$Test_Proportion_Gaze_to_Singer[1:5], digits=2)) paired_sample_df <- cbind(paired_sample_df, differences = (paired_sample_df$Test- paired_sample_df$Baseline)) paired_sample_df <- paired_sample_df %>% rbind(c("Sums",colSums(paired_sample_df[1:5,2:4]))) %>% rbind(c("Means",colMeans(paired_sample_df[1:5,2:4]))) paired_sample_df <- data.frame(infant=1:5, Baseline = round(experiment_one$Baseline_Proportion_Gaze_to_Singer[1:5], digits=2), Test = round(experiment_one$Test_Proportion_Gaze_to_Singer[1:5], digits=2)) differences <- paired_sample_df$Test-paired_sample_df$Baseline diff_from_mean <- differences-mean(differences) Squared_differences <- diff_from_mean^2 paired_sample_df <- cbind(paired_sample_df, differences, diff_from_mean, Squared_differences) paired_sample_df <- paired_sample_df %>% rbind(c("Sums",colSums(paired_sample_df[1:5,2:6]))) %>% rbind(c("Means",colMeans(paired_sample_df[1:5,2:6]))) %>% rbind(c(" "," "," "," ","sd ",round(sd(paired_sample_df[1:5,4]), digits=3))) %>% rbind(c(" "," "," "," ","SEM ",round(sd(paired_sample_df[1:5,4])/sqrt(5), digits=3))) %>% rbind(c(" "," "," "," ","t",mean(differences)/round( sd(paired_sample_df[1:5,4])/sqrt(5), digits=3)) ) paired_sample_df[6,5]<-0 paired_sample_df[7,5]<-0 knitr::kable(paired_sample_df) infant Baseline Test differences diff_from_mean Squared_differences 1 0.44 0.6 0.16 0.106 0.011236 2 0.41 0.68 0.27 0.216 0.046656 3 0.75 0.72 -0.03 -0.084 0.00705600000000001 4 0.44 0.28 -0.16 -0.214 0.045796 5 0.47 0.5 0.03 -0.024 0.000575999999999999 Sums 2.51 2.78 0.27 0 0.11132 Means 0.502 0.556 0.054 0 0.022264 sd 0.167 SEM 0.075 t 0.72 If we did this test using R, we would obtain almost the same numbers (there is a little bit of rounding in the table). 
library(data.table) suppressPackageStartupMessages(library(dplyr)) all_data <- fread( "https://stats.libretexts.org/@api/deki/files/10603/MehrSongSpelke2016.csv") experiment_one <- all_data %>% filter(exp1==1) paired_sample_df <- data.frame(infant=1:5, Baseline = round(experiment_one$Baseline_Proportion_Gaze_to_Singer[1:5], digits=2), Test = round(experiment_one$Test_Proportion_Gaze_to_Singer[1:5], digits=2)) paired_sample_df <- cbind(paired_sample_df, differences = (paired_sample_df$Test- paired_sample_df$Baseline)) paired_sample_df <- paired_sample_df %>% rbind(c("Sums",colSums(paired_sample_df[1:5,2:4]))) %>% rbind(c("Means",colMeans(paired_sample_df[1:5,2:4]))) paired_sample_df <- data.frame(infant=1:5, Baseline = round(experiment_one$Baseline_Proportion_Gaze_to_Singer[1:5], digits=2), Test = round(experiment_one$Test_Proportion_Gaze_to_Singer[1:5], digits=2)) differences <- paired_sample_df$Test-paired_sample_df$Baseline diff_from_mean <- differences-mean(differences) Squared_differences <- diff_from_mean^2 paired_sample_df <- cbind(paired_sample_df, differences, diff_from_mean, Squared_differences) paired_sample_df <- paired_sample_df %>% rbind(c("Sums",colSums(paired_sample_df[1:5,2:6]))) %>% rbind(c("Means",colMeans(paired_sample_df[1:5,2:6]))) %>% rbind(c(" "," "," "," ","sd ",round(sd(paired_sample_df[1:5,4]), digits=3))) %>% rbind(c(" "," "," "," ","SEM ",round(sd(paired_sample_df[1:5,4])/sqrt(5), digits=3))) %>% rbind(c(" "," "," "," ","t",mean(differences)/round( sd(paired_sample_df[1:5,4])/sqrt(5), digits=3)) ) paired_sample_df[6,5]<-0 paired_sample_df[7,5]<-0 t.test(differences,mu=0) One Sample t-test data: differences t = 0.72381, df = 4, p-value = 0.5092 alternative hypothesis: true mean is not equal to 0 95 percent confidence interval: -0.1531384 0.2611384 sample estimates: mean of x 0.054 Here is a quick write up of our t-test results, t(4) = .72, p = .509. What does all of that tell us? There’s a few things we haven’t gotten into much yet. For example, the 4 represents degrees of freedom, which we discuss later. The important part, the $t$ value should start to be a little bit more meaningful. We got a kind of small t-value didn’t we. It’s .72. What can we tell from this value? First, it is positive, so we know the mean difference is positive. The sign of the $t$-value is always the same as the sign of the mean difference (ours was +0.054). We can also see that the p-value was .509. We’ve seen p-values before. This tells us that our $t$ value or larger, occurs about 50.9% of the time… Actually it means more than this. And, to understand it, we need to talk about the concept of two-tailed and one-tailed tests. Interpreting ts Remember what it is we are doing here. We are evaluating whether our sample data could have come from a particular kind of distribution. The null distribution of no differences. This is the distribution of $t$-values that would occur for samples of size 5, with a mean difference of 0, and a standard error of the sample mean of .075 (this is the SEM that we calculated from our sample). We can see what this particular null-distribution looks like by plotting it like this: The $t$-distribution above shows us the kinds of values $t$ will will take by chance alone, when we measure the mean differences for pairs of 5 samples (like our current). $t$ is most likely to be zero, which is good, because we are looking at the distribution of no-differences, which should most often be 0! 
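If you would like to draw a null $t$-distribution like this for yourself, R's built-in dt() function gives the density of the $t$-distribution. Here is a minimal sketch (my own code, not the chunk that made the book's figure), assuming df = 4 as in our example:

```
# density of the t-distribution with df = 4, plotted from -5 to 5
curve(dt(x, df = 4), from = -5, to = 5,
      xlab = "t", ylab = "density",
      main = "Null t-distribution (df = 4)")
```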
But, sometimes, due to sampling error, we can get $t$s that are bigger than 0, either in the positive or negative direction. Notice the distribution is symmetrical: a $t$ from the null-distribution will be positive half of the time and negative half of the time, which is what we would expect by chance.

So, what kind of information do we want to know when we find a particular $t$ value from our sample? We want to know how likely a $t$ value like the one we found is to occur just by chance. This is actually a subtly nuanced kind of question. For example, any particular $t$ value doesn't have a specific probability of occurring. When we talk about probabilities, we are really talking about the probabilities of ranges of $t$ values. Let's consider some probabilities. We will use the letter $p$ to talk about the probabilities of particular ranges of $t$ values.

1. What is the probability that $t$ is zero or positive or negative? The answer is p=1, or 100%. We will always have a $t$ value that is zero or non-zero…Actually, if we can't compute the t-value, for example when the standard deviation is undefined, I guess then we would have a non-number. But, assuming we can calculate $t$, then it will always be 0 or positive or negative.
2. What is the probability of $t$ = 0 or greater than 0? The answer is p=.5, or 50%. 50% of $t$-values are 0 or greater.
3. What is the probability of $t$ = 0 or smaller than 0? The answer is p=.5, or 50%. 50% of $t$-values are 0 or smaller.

We can answer all of those questions just by looking at our t-distribution and dividing it into two equal regions: the left side (containing 50% of the $t$ values) and the right side (containing 50% of the $t$ values). What if we wanted to take a more fine-grained approach? Let's say we were interested in regions of 10%: what kinds of $t$s occur 10% of the time? We would divide the distribution up with lines like the following. Notice that bigger $t$ values (positive or negative) become less likely, so the intervals between the lines have to get wider in order for each interval to contain 10% of the $t$-values. It looks like this:

Consider the probabilities ($p$) of $t$ for the different ranges.

1. $t$ <= -1.5 ($t$ is less than or equal to -1.5), $p$ = 10%
2. -1.5 <= $t$ <= -0.9 ($t$ is between -1.5 and -0.9, inclusive), $p$ = 10%
3. -0.9 <= $t$ <= -0.6 ($t$ is between -0.9 and -0.6, inclusive), $p$ = 10%
4. $t$ >= 1.5 ($t$ is greater than or equal to 1.5), $p$ = 10%

Notice that the $p$s are always 10%: $t$s occur in each of these ranges with 10% probability.

Getting the p-values for t-values

You might be wondering where I am getting some of these values from. For example, how do I know that 10% of $t$ values (for this null distribution) have a value of approximately 1.5 or greater than 1.5? The answer is I used R to tell me. In most statistics textbooks the answer would be: there is a table at the back of the book where you can look these things up…This textbook has no such table. We could make one for you. And, we might do that. But, we didn't do that yet…

So, where do these values come from, and how can you figure out what they are? The complicated answer is that we are not going to explain the math behind finding these values because: 1) the authors (some of us) admittedly don't know the math well enough to explain it, 2) it would sidetrack us too much, 3) you will learn how to get these numbers in the lab with software, 4) you will learn how to get these numbers in lab without the math, just by doing a simulation, and 5) you can do it in R, or Excel, or you can use an online calculator.
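Since R keeps coming up, here is a minimal sketch of the R route using the built-in qt() function, which returns quantiles of the $t$-distribution. For our df = 4 example, the 10% cut-points described above come out roughly like this (the printed values are approximate):

```
qt(seq(.1, .9, .1), df = 4)
# approximately: -1.53 -0.94 -0.57 -0.27  0.00  0.27  0.57  0.94  1.53
```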
This is all to say that you can find the $t$s and their associated $p$s using software. But, the software won't tell you what these values mean. That's what we are doing here. You will also see that the software wants to know a few more things from you, such as the degrees of freedom for the test, and whether the test is one-tailed or two-tailed. We haven't explained any of these things yet. That's what we are going to do now. Note, we explain degrees of freedom last. First, we start with a one-tailed test.

One-tailed tests

A one-tailed test is sometimes also called a directional test. It is called a directional test because a researcher might have a hypothesis in mind suggesting that the difference they observe in their means is going to have a particular direction, either a positive difference or a negative difference.

Typically, a researcher would set an alpha criterion. The alpha criterion describes a line in the sand for the researcher. Often, the alpha criterion is set at p=.05. What does this mean? Let's look again at the graph of the $t$-distribution, and show the alpha criterion. The figure shows that $t$ values of +2.13 or greater occur 5% of the time. Because the t-distribution is symmetrical, we also know that $t$ values of -2.13 or smaller also occur 5% of the time. Both of these properties are true under the null distribution of no differences. This means that when there really are no differences, a researcher can expect to find $t$ values of 2.13 or larger 5% of the time.

Let's review and connect some of the terms:

1. alpha criterion: the criterion set by the researcher to make decisions about whether they believe chance did or did not cause the difference. The alpha criterion here is set to p=.05
2. Critical $t$. The critical $t$ is the $t$-value associated with the alpha-criterion. In this case for a one-tailed test, it is the $t$ value where 5% of all $t$s are this number or greater. In our example, the critical $t$ is 2.13. 5% of all $t$ values (with degrees of freedom = 4) are +2.13, or greater than +2.13.
3. Observed $t$. The observed $t$ is the one that you calculated from your sample. In our example about the infants, the observed $t$ was $t$ (4) = 0.72.
4. p-value. The $p$-value is the probability of obtaining the observed $t$ value or larger. Now, you could look back at our previous example, and find that the $p$-value for $t$ (4) = .72 was p=.509. HOWEVER, this p-value was not calculated for a one-directional test…(we talk about what .509 means in the next section).

Let's see what the $p$-value for $t$ (4) = .72 using a one-directional test would be, and what it would look like:

Let's take this one step at a time. We have located the observed $t$ of .72 on the graph. We shaded the right region all grey. What we see is that the grey region represents .256, or 25.6% of all $t$ values. In other words, 25.6% of $t$ values are 0.72 or larger than 0.72. You could expect, by chance alone, to find a $t$ value of .72 or larger 25.6% of the time. That's fairly often. We did find a $t$ value of 0.72. Now that you know this kind of $t$ value or larger occurs 25.6% of the time, would you be confident that the mean difference was not due to chance? Probably not, given that chance can produce this difference fairly often. Following the “standard” decision making procedure, we would claim that our $t$ value was not statistically significant, because it was not large enough.
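For the curious, the specific numbers used in this section can be reproduced with R's built-in pt() and qt() functions. This is a sketch of my own, not the book's code, and the printed values are approximate:

```
pt(0.72, df = 4, lower.tail = FALSE) # ~ .256, the one-tailed p for our observed t
qt(0.95, df = 4)                     # ~ 2.13, the one-tailed critical t for alpha = .05
qt(0.975, df = 4)                    # ~ 2.78, the two-tailed critical t used in the next section
```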
If our observed value was larger than the critical $t$ (larger than 2.13), defined by our alpha criterion, then we would claim that our $t$ value was statistically significant. This would be equivalent to saying that we believe it is unlikely that the difference we observed was due to chance. In general, for any observed $t$ value, the associated $p$-value tells you how likely a $t$ of the observed size or larger would be observed. The $p$-value always refers to a range of $t$-values, never to a single $t$-value. Researchers use the alpha criterion of .05 as a matter of convenience and convention. There are other ways to interpret these values that do not rely on a strict (significant versus not) dichotomy.

Two-tailed tests

OK, so that was one-tailed tests… What are two-tailed tests? The $p$-value that we originally calculated from our paired-samples $t$-test was for a two-tailed test. Often, the default is that the $p$-value is for a two-tailed test. The two-tailed test is asking a more general question about whether a difference is likely to have been produced by chance. The question is: what is the probability of any difference? It is also called a non-directional test, because here we don't care about the direction or sign of the difference (positive or negative), we just care if there is any kind of difference.

The same basic things as before are involved. We define an alpha criterion ($\alpha = 0.05$). And, we say that any observed $t$ value that has a probability of $p$ <.05 ($p$ is less than .05) will be called statistically significant, and ones that are more likely ($p$ >.05, $p$ is greater than .05) will be called null-results, or not statistically significant. The only difference is how we draw the alpha range. Before, it was on the right side of the $t$ distribution (we were conducting a one-sided test, remember, so we were only interested in one side). Let's just take a look at what the most extreme 5% of the t-values are, when we ignore whether they are positive or negative:

Here is what we are seeing. A distribution of no differences (the null, which is what we are looking at) will produce $t$s that are 2.78 or greater 2.5% of the time, and $t$s that are -2.78 or smaller 2.5% of the time. 2.5% + 2.5% is a total of 5% of the time. We could also say that $t$s larger than +/- 2.78 occur 5% of the time. As a result, the critical $t$ value is (+/-) 2.78 for a two-tailed test. As you can see, the two-tailed test is blind to the direction or sign of the difference. Because of this, the critical $t$ value is also higher for a two-tailed test than for the one-tailed test that we did earlier. Hopefully, now you can see why it is called a two-tailed test. There are two tails of the distribution, one on the left and one on the right, both shaded in green.

One or two tailed, which one?

Now that you know there are two kinds of tests, one-tailed and two-tailed, which one should you use? There is some conventional wisdom on this, but also some debate. In the end, it is up to you to be able to justify your choice and why it is appropriate for your data. That is the real answer. The conventional answer is that you use a one-tailed test when you have a theory or hypothesis that is making a directional prediction (the theory predicts that the difference will be positive, or negative).
Similarly, use a two-tailed test when you are looking for any difference, and you don’t have a theory that makes a directional prediction (it just makes the prediction that there will be a difference, either positive or negative). Also, people appear to choose one or two-tailed tests based on how risky they are as researchers. If you always ran one-tailed tests, your critical $t$ values for your set alpha criterion would always be smaller than the critical $t$s for a two-tailed test. Over the long run, you would make more type I errors, because the criterion to detect an effect is a lower bar for one than two tailed tests. Remember type 1 errors occur when you reject the idea that chance could have caused your difference. You often never know when you make this error. It happens anytime that sampling error was the actual cause of the difference, but a researcher dismisses that possibility and concludes that their manipulation caused the difference. Similarly, if you always ran two-tailed tests, even when you had a directional prediction, you would make fewer type I errors over the long run, because the $t$ for a two-tailed test is higher than the $t$ for a one-tailed test. It seems quite common for researchers to use a more conservative two-tailed test, even when they are making a directional prediction based on theory. In practice, researchers tend to adopt a standard for reporting that is common in their field. Whether or not the practice is justifiable can sometimes be an open question. The important task for any researcher, or student learning statistics, is to be able to justify their choice of test. Degrees of freedom Before we finish up with paired-samples $t$-tests, we should talk about degrees of freedom. Our sense is that students don’t really understand degrees of freedom very well. If you are reading this textbook, you are probably still wondering what is degrees of freedom, seeing as we haven’t really talked about it all. For the $t$-test, there is a formula for degrees of freedom. For the one-sample and paired sample $t$-tests, the formula is: $\text{Degrees of Freedom} = \text{df} = n-1$. Where n is the number of samples in the test. In our paired $t$-test example, there were 5 infants. Therefore, degrees of freedom = 5-1 = 4. OK, that’s a formula. Who cares about degrees of freedom, what does the number mean? And why do we report it when we report a $t$-test… you’ve probably noticed the number in parentheses e.g., $t$(4)=.72, the 4 is the $df$, or degrees of freedom. Degrees of freedom is both a concept, and a correction. The concept is that if you estimate a property of the numbers, and you use this estimate, you will be forcing some constraints on your numbers. Consider the numbers: 1, 2, 3. The mean of these numbers is 2. Now, let’s say I told you that the mean of three numbers is 2. Then, how many of these three numbers have freedom? Funny question right. What we mean is, how many of the three numbers could be any number, or have the freedom to be any number. The first two numbers could be any number. But, once those two numbers are set, the final number (the third number), MUST be a particular number that makes the mean 2. The first two numbers have freedom. The third number has no freedom. To illustrate. Let’s freely pick two numbers: 51 and -3. I used my personal freedom to pick those two numbers. Now, if our three numbers are 51, -3, and x, and the mean of these three numbers is 2. There is only one solution, x has to be -42, otherwise the mean won’t be 2. 
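Here is the same little example as a quick check in R; nothing fancy, just the arithmetic from the paragraph above:

```
# two numbers picked freely, and a mean fixed at 2
free_1 <- 51
free_2 <- -3
x <- 3 * 2 - (free_1 + free_2) # the total must be 6, so x is forced to be -42
mean(c(free_1, free_2, x))     # 2
```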
This is one way to think about degrees of freedom. The degrees of freedom for these three numbers is n-1 = 3-1 = 2, because 2 of the numbers can be free, but the last number has no freedom; it becomes fixed after the first two are decided.

Now, statisticians often apply degrees of freedom to their calculations, especially when a second calculation relies on an estimated value. For example, when we calculate the standard deviation of a sample, we first calculate the mean of the sample, right! By estimating the mean, we are fixing an aspect of our sample, and so our sample now has n-1 degrees of freedom when we calculate the standard deviation (remember, for the sample standard deviation we divide by n-1…there's that n-1 again.)

Simulating how degrees of freedom affects the t distribution

There are at least two ways to think about the degrees of freedom for a $t$-test. For example, if you want to use math to compute aspects of the $t$ distribution, then you need the degrees of freedom to plug in to the formula… If you want to see the formulas I'm talking about, scroll down on the t-test Wikipedia page and look for the probability density or cumulative distribution functions…We think that is quite scary for most people, and one reason why degrees of freedom are not well-understood.

If we wanted to simulate the $t$ distribution we could more easily see what influence degrees of freedom has on the shape of the distribution. Remember, $t$ is a sample statistic; it is something we measure from the sample. So, we could simulate the process of measuring $t$ from many different samples, then plot the histogram of $t$ to show us the simulated $t$ distribution.

Notice that the red distribution for $df$ = 4 is a little bit shorter, and a little bit wider, than the bluey-green distribution for $df$ = 100. As degrees of freedom increase, the $t$-distribution gets taller (in the middle), and narrower in the range. It gets more peaked. Can you guess the reason for this? Remember, we are estimating a sample statistic, and degrees of freedom is really just a number that refers to the number of subjects (well, minus one). And, we already know that as we increase $n$, our sample statistics become better estimates (less variance) of the distributional parameters they are estimating. So, $t$ becomes a better estimate of its “true” value as sample size increases, resulting in a narrower distribution of $t$s.

There is a slightly different $t$ distribution for every degrees of freedom, and the critical regions associated with 5% of the extreme values are thus slightly different every time. This is why we report the degrees of freedom for each t-test; they define the distribution of $t$ values for the sample-size in question. Why do we use n-1 and not n? Well, we calculate $t$ using the sample standard deviation to estimate the standard error of the mean, and that estimate uses n-1 in the denominator, so our $t$ distribution is built assuming n-1. That's enough for degrees of freedom…
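If you want to see the influence of degrees of freedom for yourself, here is a minimal simulation sketch along the lines just described. The number of simulations and the plotting choices are my own; the idea is simply to measure $t$ from many samples of different sizes and compare the two distributions:

```
# simulate t-values for df = 4 (n = 5) and df = 100 (n = 101)
t_df4   <- replicate(10000, t.test(rnorm(5, 0, 1), mu = 0)$statistic)
t_df100 <- replicate(10000, t.test(rnorm(101, 0, 1), mu = 0)$statistic)

plot(density(t_df100), xlim = c(-5, 5), main = "Simulated t-distributions")
lines(density(t_df4), lty = 2) # the df = 4 curve is shorter and wider in the tails
```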
6.04: The paired samples t-test strikes back
You must be wondering if we will ever be finished talking about paired samples t-tests… why are we doing round 2, oh no! Don't worry, we're just going to 1) remind you about what we were doing with the infant study, and 2) do a paired samples t-test on the entire data set and discuss.

Remember, we were wondering if the infants would look longer toward the singer who sang the familiar song during the test phase compared to the baseline phase. We showed you data from 5 infants, and walked through the computations for the \(t\)-test. As a reminder, it looked like this:

infant Baseline Test differences diff_from_mean Squared_differences
1 0.44 0.6 0.16 0.106 0.011236
2 0.41 0.68 0.27 0.216 0.046656
3 0.75 0.72 -0.03 -0.084 0.007056
4 0.44 0.28 -0.16 -0.214 0.045796
5 0.47 0.5 0.03 -0.024 0.000576
Sums 2.51 2.78 0.27 0 0.11132
Means 0.502 0.556 0.054 0 0.022264
sd 0.167
SEM 0.075
t 0.72

```
library(data.table)
suppressPackageStartupMessages(library(dplyr))
all_data <- fread(
  "https://stats.libretexts.org/@api/deki/files/10603/MehrSongSpelke2016.csv")
experiment_one <- all_data %>% filter(exp1==1)
paired_sample_df <- data.frame(infant=1:5,
  Baseline = round(experiment_one$Baseline_Proportion_Gaze_to_Singer[1:5], digits=2),
  Test = round(experiment_one$Test_Proportion_Gaze_to_Singer[1:5], digits=2))
differences <- paired_sample_df$Test - paired_sample_df$Baseline
t.test(round(differences, digits=2), mu=0)
```

```
	One Sample t-test

data:  round(differences, digits = 2)
t = 0.72381, df = 4, p-value = 0.5092
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
 -0.1531384  0.2611384
sample estimates:
mean of x 
    0.054 
```

Let's write down the finding one more time: The mean difference was 0.054, \(t\)(4) = .72, \(p\) = .509. We can also now confirm that the \(p\)-value was from a two-tailed test. So, what does this all really mean? We can say that a \(t\) value with an absolute value of .72 or larger occurs 50.9% of the time. More precisely, the distribution of no differences (the null) will produce a \(t\) value this large or larger 50.9% of the time. In other words, chance alone could easily have produced the \(t\) value from our sample, and the mean difference we observed of .054 could easily have been a result of chance.

Let's quickly put all of the data in the \(t\)-test, and re-run the test using all of the infant subjects.
```
library(data.table)
suppressPackageStartupMessages(library(dplyr))
all_data <- fread(
  "https://stats.libretexts.org/@api/deki/files/10603/MehrSongSpelke2016.csv")
experiment_one <- all_data %>% filter(exp1==1)
paired_sample_df <- data.frame(infant=1:32,
  Baseline = round(experiment_one$Baseline_Proportion_Gaze_to_Singer[1:32], digits=2),
  Test = round(experiment_one$Test_Proportion_Gaze_to_Singer[1:32], digits=2))
differences <- paired_sample_df$Test - paired_sample_df$Baseline
t.test(differences, mu=0)
```

```
	One Sample t-test

data:  differences
t = 2.4388, df = 31, p-value = 0.02066
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
 0.01192088 0.13370412
sample estimates:
mean of x 
0.0728125 
```

Now we get a very different answer. We would summarize the results saying the mean difference was .073, t(31) = 2.44, p = .021. How many total infants were there? Well, the degrees of freedom was 31, so there must have been 32 infants in the study. Now we see a much smaller \(p\)-value. This was also a two-tailed test, so we know that observing a \(t\) value of 2.4 or greater (in absolute value) only occurs about 2% of the time. In other words, the distribution of no differences will produce the observed t-value very rarely. So, it is unlikely that the observed mean difference of .073 was due to chance (it could have been due to chance, but that is very unlikely). As a result, we can be somewhat confident in concluding that something about seeing and hearing an unfamiliar person sing a familiar song causes infants to draw their attention toward the singer, and this potentially benefits social learning on the part of the infant.
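As a side note, you don't have to compute the difference scores yourself: t.test() also has a paired = TRUE argument that does the subtraction for you. Here is a minimal sketch using the same two columns as above (because the tables above were built from rounded scores, the result should match the one-sample test on the difference scores up to a small amount of rounding):

```
t.test(experiment_one$Test_Proportion_Gaze_to_Singer,
       experiment_one$Baseline_Proportion_Gaze_to_Singer,
       paired = TRUE)
```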
6.05: Independent samples t-test: The return of the t-test
If you've been following the Star Wars references, we are on the last movie (of the original trilogy)… the independent t-test. This is where basically the same story plays out as before, only slightly differently.

Remember, there are different $t$-tests for different kinds of research designs. When your design is a between-subjects design, you use an independent samples t-test. Between-subjects designs involve different people or subjects in each experimental condition. If there are two conditions, and 10 people in each, then there are 20 total people. And, there are no paired scores, because every single person is measured once, not twice; there are no repeated measures. Because there are no repeated measures, we can't look at the difference scores between conditions one and two. The scores are not paired in any meaningful way, so it doesn't make sense to subtract them. So what do we do?

The logic of the independent samples t-test is the very same as the other $t$-tests. We calculate the means for each group, then we find the difference between them. That goes into the numerator of the t formula. Then we get an estimate of the variation for the denominator. We divide the mean difference by the estimate of the variation, and we get $t$. It's the same as before.

The only wrinkle here is what goes into the denominator? How should we calculate the estimate of the variance? It would be nice if we could do something very straightforward like this, say for an experiment with two groups A and B:

$t = \frac{\bar{A}-\bar{B}}{(\frac{SEM_A+SEM_B}{2})} \nonumber$

In plain language, this is just:

1. Find the mean difference for the top part
2. Compute the SEM (standard error of the mean) for each group, and average them together to make a single estimate, pooling over both samples.

This would be nice, but unfortunately, it turns out that finding the average of two standard errors of the mean is not the best way to do it. It would create a biased estimator of the variation for the hypothesized distribution of no differences. We won't go into the math here, but instead of the above formula, we can use a different one that gives us an unbiased estimate of the pooled standard error of the sample mean. Our new and improved $t$ formula would look like this:

$t = \frac{\bar{X_A}-\bar{X_B}}{s_p * \sqrt{\frac{1}{n_A} + \frac{1}{n_B}}} \nonumber$

and $s_p$, the pooled sample standard deviation, is defined as follows (note that the $s^2$ terms in the formula are variances):

$s_p = \sqrt{\frac{(n_A-1)s_A^2 + (n_B-1)s^2_B}{n_A +n_B -2}} \nonumber$

Believe you me, that is so much more formula than I wanted to type out. Shall we do one independent $t$-test example by hand, just to see the computations? Let's do it…but in a slightly different way than you expect. I show the steps using R. I made some fake scores for groups A and B. Then, I followed all of the steps from the formula, but made R do each of the calculations. This shows you the needed steps by following the code. At the end, I print the $t$-test values I computed “by hand”, and then the $t$-test value that the R software outputs using the t.test function. You should be able to get the same values for $t$, if you were brave enough to compute $t$ by hand.
## By "hand" using R code
a <- c(1,2,3,4,5)
b <- c(3,5,4,7,9)

mean_difference <- mean(a)-mean(b) # compute mean difference
variance_a <- var(a) # compute variance for A
variance_b <- var(b) # compute variance for B

# Compute the numerator and denominator of the sp formula
sp_numerator <- (4*variance_a + 4*variance_b)
sp_denominator <- 5+5-2
sp <- sqrt(sp_numerator/sp_denominator) # compute sp

# compute t following the formula
t <- mean_difference / ( sp * sqrt( (1/5) + (1/5) ) )
t # print results

-2.01799136683647

a <- c(1,2,3,4,5)
b <- c(3,5,4,7,9)
# using the R function t.test
t.test(a, b, paired=FALSE, var.equal = TRUE)

	Two Sample t-test

data:  a and b
t = -2.018, df = 8, p-value = 0.0783
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -5.5710785  0.3710785
sample estimates:
mean of x mean of y 
      3.0       5.6 
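In case you are wondering about the var.equal = TRUE argument: by default, t.test() does not assume the two groups have equal variances and runs the Welch version of the test instead. Setting var.equal = TRUE requests the classic pooled-variance test, which is the one that matches the formula we just worked through. A quick sketch with the same fake scores:

```
t.test(a, b)                    # Welch Two Sample t-test (R's default)
t.test(a, b, var.equal = TRUE)  # classic Two Sample t-test, matches the hand calculation above
```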
6.06: Simulating data for t-tests
An “advanced” topic for $t$-tests is the idea of using R to run simulations for $t$-tests. If you recall, $t$ is a property of a sample. We calculate $t$ from our sample. The $t$ distribution describes the hypothetical behavior of $t$ across samples. That is, if we had taken thousands upon thousands of samples, calculated $t$ for each one, and then looked at the distribution of those $t$'s, we would have the sampling distribution of $t$!

It can be very useful to get in the habit of using R to simulate data under certain conditions, to see how your sample data, and things like $t$, behave. Why is this useful? It mainly prepares you with some intuitions about how sampling error (random chance) can influence your results, given specific parameters of your design, such as sample-size, the size of the mean difference you expect to find in your data, and the amount of variation you might find. These methods can be used formally to conduct power-analyses, or more informally to develop a sense of your data.

Simulating a one-sample t-test

Here are the steps you might follow to simulate data for a one sample $t$-test.

1. Make some assumptions about what your sample (that you might be planning to collect) might look like. For example, you might be planning to collect 30 subjects worth of data. The scores of those data points might come from a normal distribution (mean = 50, SD = 10).
2. Sample simulated numbers from the distribution, then conduct a $t$-test on the simulated numbers. Save the statistics you want (such as $t$s and $p$s), and then see how things behave.

Let's do this a couple different times. First, let's simulate samples with N = 30, taken from a normal (mean = 50, SD = 25). We'll do a simulation with 1000 simulations. For each simulation, we will compare the sample mean with a population mean of 50. There should be no difference on average here; this is the null distribution that we are simulating: the distribution of no differences.

Neat. We see both a $t$ distribution that looks like a $t$ distribution, as it should, and we see the $p$ distribution. This shows us how often we get $t$ values of particular sizes. You may find it interesting that the $p$-distribution is flat under the null, which we are simulating here. This means that you have the same chance of getting a $t$ with a p-value between 0 and 0.05 as you do of getting a $t$ with a p-value between .90 and .95. Those ranges are both ranges of 5%, so there is an equal proportion of $t$ values in them by definition. (The same simulation could also be written more compactly using R's replicate function instead of a for loop.)

Simulating a paired samples t-test

The code below is set up to sample 10 scores for condition A and B from the same normal distribution. The simulation is conducted 1000 times, and the $t$s and $p$s are saved and plotted for each. According to the simulation, when there are no differences between the conditions, and the samples are being pulled from the very same distribution, you get these two distributions for $t$ and $p$. These again show how the null distribution of no differences behaves.

For any of these simulations, if you rejected the null-hypothesis (that your difference was only due to chance), you would be making a type I error. If you set your alpha criterion to $\alpha = .05$, we can ask how many type I errors were made in these 1000 simulations.
The answer is: save_ps <- length(1000) save_ts <- length(1000) for ( i in 1:1000 ){ condition_A <- rnorm(10,10,5) condition_B <- rnorm(10,10,5) differences <- condition_A - condition_B t_test <- t.test(differences, mu=0) save_ps[i] <- t_test$p.value save_ts[i] <- t_test$statistic } length(save_ps[save_ps<.05]) 58 save_ps <- length(1000) save_ts <- length(1000) for ( i in 1:1000 ){ condition_A <- rnorm(10,10,5) condition_B <- rnorm(10,10,5) differences <- condition_A - condition_B t_test <- t.test(differences, mu=0) save_ps[i] <- t_test$p.value save_ts[i] <- t_test$statistic } length(save_ps[save_ps<.05])/1000 0.054 We happened to make 55. The expectation over the long run is 5% type I error rates (if your alpha is .05). What happens if there actually is a difference in the simulated data, let’s set one condition to have a larger mean than the other: Now you can see that the $p$-value distribution is skewed to the left. This is because when there is a true effect, you will get p-values that are less than .05 more often. Or, rather, you get larger $t$ values than you normally would if there were no differences. In this case, we wouldn’t be making a type I error if we rejected the null when p was smaller than .05. How many times would we do that out of our 1000 experiments? save_ps <- length(1000) save_ts <- length(1000) for ( i in 1:1000 ){ condition_A <- rnorm(10,10,5) condition_B <- rnorm(10,13,5) differences <- condition_A - condition_B t_test <- t.test(differences, mu=0) save_ps[i] <- t_test$p.value save_ts[i] <- t_test$statistic } length(save_ps[save_ps<.05]) 210 save_ps <- length(1000) save_ts <- length(1000) for ( i in 1:1000 ){ condition_A <- rnorm(10,10,5) condition_B <- rnorm(10,13,5) differences <- condition_A - condition_B t_test <- t.test(differences, mu=0) save_ps[i] <- t_test$p.value save_ts[i] <- t_test$statistic } length(save_ps[save_ps<.05])/1000 0.21 We happened to get 210 simulations where p was less than .05, that’s only 0.21 experiments. If you were the researcher, would you want to run an experiment that would be successful only 0.21 of the time? I wouldn’t. I would run a better experiment. How would you run a better simulated experiment? Well, you could increase $n$, the number of subjects in the experiment. Let’s increase $n$ from 10 to 100, and see what happens to the number of “significant” simulated experiments. save_ps <- length(1000) save_ts <- length(1000) for ( i in 1:1000 ){ condition_A <- rnorm(100,10,5) condition_B <- rnorm(100,13,5) differences <- condition_A - condition_B t_test <- t.test(differences, mu=0) save_ps[i] <- t_test$p.value save_ts[i] <- t_test$statistic } length(save_ps[save_ps<.05]) 985 save_ps <- length(1000) save_ts <- length(1000) for ( i in 1:1000 ){ condition_A <- rnorm(100,10,5) condition_B <- rnorm(100,13,5) differences <- condition_A - condition_B t_test <- t.test(differences, mu=0) save_ps[i] <- t_test$p.value save_ts[i] <- t_test$statistic } length(save_ps[save_ps<.05])/1000 0.985 Cool, now almost all of the experiments show a $p$-value of less than .05 (using a two-tailed test, that’s the default in R). See, you could use this simulation process to determine how many subjects you need to reliably find your effect. Simulating an independent samples t.test Just change the t.test function like so… this is for the null, assuming no difference between groups. 
save_ps <- length(1000) save_ts <- length(1000) for ( i in 1:1000 ){ group_A <- rnorm(10,10,5) group_B <- rnorm(10,10,5) t_test <- t.test(group_A, group_B, paired=FALSE, var.equal=TRUE) save_ps[i] <- t_test$p.value save_ts[i] <- t_test$statistic } length(save_ps[save_ps<.05]) 41 save_ps <- length(1000) save_ts <- length(1000) for ( i in 1:1000 ){ group_A <- rnorm(10,10,5) group_B <- rnorm(10,10,5) t_test <- t.test(group_A, group_B, paired=FALSE, var.equal=TRUE) save_ps[i] <- t_test$p.value save_ts[i] <- t_test$statistic } length(save_ps[save_ps<.05])/1000 0.041 6.08: References Mehr, Samuel A, Lee Ann Song, and Elizabeth S Spelke. 2016. “For 5-Month-Old Infants, Melodies Are Social.” Psychological Science 27 (4): 486–501. Student, A. 1908. “The Probable Error of a Mean.” Biometrika 6: 1–2.
A fun bit of stats history (Salsburg 2001). Sir Ronald Fisher invented the ANOVA, which we learn about in this section. He wanted to publish his new test in the journal Biometrika. The editor at the time was Karl Pearson (remember Pearson's $r$ for correlation?). Pearson and Fisher were apparently not on good terms; they didn't like each other. Pearson refused to publish Fisher's new test. So, Fisher eventually published his work in the Journal of Agricultural Science. Funnily enough, the feud continued into the next generation. Years after Fisher published his ANOVA, Karl Pearson's son Egon Pearson and Jerzy Neyman revamped Fisher's ideas, and re-cast them into what is commonly known as null vs. alternative hypothesis testing. Fisher didn't like this very much. We present the ANOVA in the Fisherian sense, and at the end describe the Neyman-Pearson approach that invokes the concept of null vs. alternative hypotheses.

07: ANOVA

ANOVA stands for Analysis Of Variance. It is a widely used technique for assessing the likelihood that differences found between means in sample data could be produced by chance. You might be thinking, well, don't we have $t$-tests for that? Why do we need the ANOVA, what do we get that's new that we didn't have before?

What's new with the ANOVA is the ability to test a wider range of means beyond just two. In all of the $t$-test examples we were always comparing two things. For example, we might ask whether the difference between two sample means could have been produced by chance. What if our experiment had more than two conditions or groups? We would have more than 2 means. We would have one mean for each group or condition. That could be a lot, depending on the experiment. How would we compare all of those means? What should we do, run a lot of $t$-tests, comparing every possible combination of means? Actually, you could do that. Or, you could do an ANOVA. In practice, we will combine both the ANOVA test and $t$-tests when analyzing data with many sample means (from more than two groups or conditions).

Just like the $t$-test, there are different kinds of ANOVAs for different research designs. There is one for between-subjects designs, and a slightly different one for repeated measures designs. We talk about both, beginning with the ANOVA for between-subjects designs.

7.02: One-factor ANOVA

The one-factor ANOVA is sometimes also called a between-subjects ANOVA, an independent factor ANOVA, or a one-way ANOVA (which is a bit of a misnomer, as we discuss later). The critical ingredient for a one-factor, between-subjects ANOVA is that you have one independent variable, with at least two levels. When you have one IV with two levels, you can run a $t$-test. You can also run an ANOVA. Interestingly, they give you almost the exact same results. You will get a $p$-value from both tests that is identical (they are really doing the same thing under the hood). The $t$-test gives a $t$-value as the important sample statistic. The ANOVA gives you the $F$-value (for Fisher, the inventor of the test) as the important sample statistic. It turns out that $t^2$ equals $F$, when there are only two groups in the design. They are the same test. Side-note: it turns out they are all related to Pearson's r too (but we haven't written about this relationship yet in this textbook).

Remember that $t$ is computed directly from the data. It's like a mean and standard error that we measure from the sample.
In fact, it's the mean difference divided by the standard error of the sample mean. It's just another descriptive statistic, isn't it?

The same thing is true about $F$. $F$ is computed directly from the data. In fact, the idea behind $F$ is the same basic idea that goes into making $t$. Here is the general idea behind the formula: it is again a ratio of the effect we are measuring (in the numerator) and the variation associated with sampling error (in the denominator).

$\text{name of statistic} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber$

$\text{F} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber$

The difference with $F$ is that we use variances to describe both the measure of the effect and the measure of error. So, $F$ is a ratio of two variances.

Remember what we said about how these ratios work. When the variance associated with the effect is the same size as the variance associated with sampling error, we are dividing two numbers of the same size, and this will result in an $F$-value of 1. When the variance due to the effect is larger than the variance associated with sampling error, then $F$ will be greater than 1. When the variance associated with the effect is smaller than the variance associated with sampling error, $F$ will be less than one.

Let's rewrite in plainer English. We are talking about two concepts that we would like to measure from our data: 1) a measure of what we can explain, and 2) a measure of error, or stuff about our data we can't explain. So, the $F$ formula looks like this:

$\text{F} = \frac{\text{Can Explain}}{\text{Can't Explain}} \nonumber$

When we can explain as much as we can't explain, $F$ = 1. This isn't that great of a situation for us to be in. It means we have a lot of uncertainty. When we can explain much more than we can't, we are doing a good job; $F$ will be greater than 1. When we can explain less than what we can't, we really can't explain very much; $F$ will be less than 1. That's the concept behind making $F$.

If you saw an $F$ in the wild and it was .6, then you would automatically know the researchers couldn't explain much of their data. If you saw an $F$ of 5, then you would know the researchers could explain 5 times more than they couldn't; that's pretty good. And the point of this is to give you an intuition about the meaning of an $F$-value, even before you know how to compute it.

Computing the $F$-value

Fisher's ANOVA is very elegant in my opinion. It starts us off with a big problem we always have with data. We have a lot of numbers, and there is a lot of variation in the numbers. What to do? Wouldn't it be nice to split up the variation into two kinds, or sources? If we could know what parts of the variation were being caused by our experimental manipulation, and what parts were being caused by sampling error, we would be making really good progress. We would be able to know if our experimental manipulation was causing more change in the data than sampling error, or chance alone. If we could measure those two parts of the total variation, we could make a ratio, and then we would have an $F$ value. This is what the ANOVA does. It splits the total variation in the data into two parts. The formula is:

Total Variation = Variation due to Manipulation + Variation due to sampling error

This is a nice idea, but it is also vague. We haven't specified our measure of variation. What should we use? Remember the sums of squares that we used to make the variance and the standard deviation? That's what we'll use.
Let’s take another look at the formula, using sums of squares for the measure of variation: $SS_\text{total} = SS_\text{Effect} + SS_\text{Error} \nonumber$ SS Total The total sums of squares, or $SS\text{Total}$ is a way of thinking about all of the variation in a set of data. It’s pretty straightforward to measure. No tricky business. All we do is find the difference between each score and the grand mean, then we square the differences and add them all up. Let’s imagine we had some data in three groups, A, B, and C. For example, we might have 3 scores in each group. The data could look like this: suppressPackageStartupMessages(library(dplyr)) scores <- c(20,11,2,6,2,7,2,11,2) groups <- as.character(rep(c("A","B","C"), each=3)) diff <-scores-mean(scores) diff_squared <-diff^2 df<-data.frame(groups,scores,diff, diff_squared) df$groups<-as.character(df$groups) df <- df %>% rbind(c("Sums",colSums(df[1:9,2:4]))) %>% rbind(c("Means",colMeans(df[1:9,2:4]))) knitr::kable(df) groups scores diff diff_squared A 20 13 169 A 11 4 16 A 2 -5 25 B 6 -1 1 B 2 -5 25 B 7 0 0 C 2 -5 25 C 11 4 16 C 2 -5 25 Sums 63 0 302 Means 7 0 33.5555555555556 The data is organized in long format, so that each row is a single score. There are three scores for the A, B, and C groups. The mean of all of the scores is called the Grand Mean. It’s calculated in the table, the Grand Mean = 7. We also calculated all of the difference scores from the Grand Mean. The difference scores are in the column titled diff. Next, we squared the difference scores, and those are in the next column called diff_squared. Remember, the difference scores are a way of measuring variation. They represent how far each number is from the Grand Mean. If the Grand Mean represents our best guess at summarizing the data, the difference scores represent the error between the guess and each actual data point. The only problem with the difference scores is that they sum to zero (because the mean is the balancing point in the data). So, it is convenient to square the difference scores, this turns all of them into positive numbers. The size of the squared difference scores still represents error between the mean and each score. And, the squaring operation exacerbates the differences as the error grows larger (squaring a big number makes a really big number, squaring a small number still makes a smallish number). OK fine! We have the squared deviations from the grand mean, we know that they represent the error between the grand mean and each score. What next? SUM THEM UP! When you add up all of the individual squared deviations (difference scores) you get the sums of squares. That’s why it’s called the sums of squares (SS). Now, we have the first part of our answer: $SS_\text{total} = SS_\text{Effect} + SS_\text{Error} \nonumber$ $SS_\text{total} = 302 \nonumber$ and $302 = SS_\text{Effect} + SS_\text{Error} \nonumber$ What next? If you think back to what you learned about algebra, and solving for X, you might notice that we don’t really need to find the answers to both missing parts of the equation. We only need one, and we can solve for the other. For example, if we found $SS_\text{Effect}$, then we could solve for $SS_\text{Error}$. SS Effect $SS_\text{Total}$ gave us a number representing all of the change in our data, how all the scores are different from the grand mean. What we want to do next is estimate how much of the total change in the data might be due to the experimental manipulation. 
For example, if we ran an experiment that causes causes change in the measurement, then the means for each group will be different from other. As a result, the manipulation forces change onto the numbers, and this will naturally mean that some part of the total variation in the numbers is caused by the manipulation. The way to isolate the variation due to the manipulation (also called effect) is to look at the means in each group, and calculate the difference scores between each group mean and the grand mean, and then sum the squared deviations to find $SS_\text{Effect}$. Consider this table, showing the calculations for $SS_\text{Effect}$. suppressPackageStartupMessages(library(dplyr)) scores <- c(20,11,2,6,2,7,2,11,2) means <-c(11,11,11,5,5,5,5,5,5) groups <- as.character(rep(c("A","B","C"), each=3)) diff <-means-mean(scores) diff_squared <-diff^2 df<-data.frame(groups,scores,means,diff, diff_squared) df$groups<-as.character(df$groups) df <- df %>% rbind(c("Sums",colSums(df[1:9,2:5]))) %>% rbind(c("Means",colMeans(df[1:9,2:5]))) knitr::kable(df) groups scores means diff diff_squared A 20 11 4 16 A 11 11 4 16 A 2 11 4 16 B 6 5 -2 4 B 2 5 -2 4 B 7 5 -2 4 C 2 5 -2 4 C 11 5 -2 4 C 2 5 -2 4 Sums 63 63 0 72 Means 7 7 0 8 Notice we created a new column called means. For example, the mean for group A was 11. You can see there are three 11s, one for each observation in row A. The means for group B and C happen to both be 5. So, the rest of the numbers in the means column are 5s. What we are doing here is thinking of each score in the data from the viewpoint of the group means. The group means are our best attempt to summarize the data in those groups. From the point of view of the mean, all of the numbers are treated as the same. The mean doesn’t know how far off it is from each score, it just knows that all of the scores are centered on the mean. Let’s pretend you are the mean for group A. That means you are an 11. Someone asks you “hey, what’s the score for the first data point in group A?”. Because you are the mean, you say, I know that, it’s 11. “What about the second score?”…it’s 11… they’re all 11, so far as I can tell…“Am I missing something…”, asked the mean. Now that we have converted each score to it’s mean value we can find the differences between each mean score and the grand mean, then square them, then sum them up. We did that, and found that the $SS_\text{Effect} = 72$. $SS_\text{Effect}$ represents the amount of variation that is caused by differences between the means. I also refer to this as the amount of variation that the researcher can explain (by the means, which represent differences between groups or conditions that were manipulated by the researcher). Notice also that $SS_\text{Effect} = 72$, and that 72 is smaller than $SS_\text{total} = 302$. That is very important. $SS_\text{Effect}$ by definition can never be larger than $SS_\text{total}$. SS Error Great, we made it to SS Error. We already found SS Total, and SS Effect, so now we can solve for SS Error just like this: $SS_\text{total} = SS_\text{Effect} + SS_\text{Error} \nonumber$ switching around: $SS_\text{Error} = SS_\text{total} - SS_\text{Effect} \nonumber$ $SS_\text{Error} = 302 - 72 = 230 \nonumber$ We could stop here and show you the rest of the ANOVA, we’re almost there. But, the next step might not make sense unless we show you how to calculate $SS_\text{Error}$ directly from the data, rather than just solving for it. We should do this just to double-check our work anyway. 
suppressPackageStartupMessages(library(dplyr)) scores <- c(20,11,2,6,2,7,2,11,2) means <-c(11,11,11,5,5,5,5,5,5) groups <- as.character(rep(c("A","B","C"), each=3)) diff <-means-scores diff_squared <-diff^2 df<-data.frame(groups,scores,means,diff, diff_squared) df$groups<-as.character(df$groups) df <- df %>% rbind(c("Sums",colSums(df[1:9,2:5]))) %>% rbind(c("Means",colMeans(df[1:9,2:5]))) knitr::kable(df) groups scores means diff diff_squared A 20 11 -9 81 A 11 11 0 0 A 2 11 9 81 B 6 5 -1 1 B 2 5 3 9 B 7 5 -2 4 C 2 5 3 9 C 11 5 -6 36 C 2 5 3 9 Sums 63 63 0 230 Means 7 7 0 25.5555555555556 Alright, we did almost the same thing as we did to find $SS_\text{Effect}$. Can you spot the difference? This time for each score we first found the group mean, then we found the error in the group mean estimate for each score. In other words, the values in the $diff$ column are the differences between each score and it’s group mean. The values in the diff_squared column are the squared deviations. When we sum up the squared deviations, we get another Sums of Squares, this time it’s the $SS_\text{Error}$. This is an appropriate name, because these deviations are the ones that the group means can’t explain! Degrees of freedom Degrees of freedom come into play again with ANOVA. This time, their purpose is a little bit more clear. $Df$s can be fairly simple when we are doing a relatively simple ANOVA like this one, but they can become complicated when designs get more complicated. Let’s talk about the degrees of freedom for the $SS_\text{Effect}$ and $SS_\text{Error}$. The formula for the degrees of freedom for $SS_\text{Effect}$ is $df_\text{Effect} = \text{Groups} -1$, where Groups is the number of groups in the design. In our example, there are 3 groups, so the df is 3-1 = 2. You can think of the df for the effect this way. When we estimate the grand mean (the overall mean), we are taking away a degree of freedom for the group means. Two of the group means can be anything they want (they have complete freedom), but in order for all three to be consistent with the Grand Mean, the last group mean has to be fixed. The formula for the degrees of freedom for $SS_\text{Error}$ is $df_\text{Error} = \text{scores} - \text{groups}$, or the number of scores minus the number of groups. We have 9 scores and 3 groups, so our $df$ for the error term is 9-3 = 6. Remember, when we computed the difference score between each score and its group mean, we had to compute three means (one for each group) to do that. So, that reduces the degrees of freedom by 3. 6 of the difference scores could be anything they want, but the last 3 have to be fixed to match the means from the groups. Mean Squared Error OK, so we have the degrees of freedom. What’s next? There are two steps left. First we divide the $SS$es by their respective degrees of freedom to create something new called Mean Squared Error. Let’s talk about why we do this. First of all, remember we are trying to accomplish this goal: $\text{F} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber$ We want to build a ratio that divides a measure of an effect by a measure of error. Perhaps you noticed that we already have a measure of an effect and error! How about the $SS_\text{Effect}$ and $SS_\text{Error}$. They both represent the variation due to the effect, and the leftover variation that is unexplained. Why don’t we just do this? $\frac{SS_\text{Effect}}{SS_\text{Error}} \nonumber$ Well, of course you could do that. 
What would happen is that you can get some really big and small numbers for your inferential statistic. And, the kind of number you would get wouldn’t be readily interpretable like a $t$ value or a $z$ score. The solution is to normalize the $SS$ terms. Don’t worry, normalize is just a fancy word for taking the average, or finding the mean. Remember, the SS terms are all sums. And, each sum represents a different number of underlying properties. For example, the $SS_\text{Effect}$ represents the sum of variation for three means in our study. We might ask the question, well, what is the average amount of variation for each mean…You might think to divide $SS_\text{Effect}$ by 3, because there are three means, but because we are estimating this property, we divide by the degrees of freedom instead (# groups - 1 = 3-1 = 2). Now we have created something new, it’s called the $MSE_\text{Effect}$. $MSE_\text{Effect} = \frac{SS_\text{Effect}}{df_\text{Effect}} \nonumber$ $MSE_\text{Effect} = \frac{72}{2} = 36 \nonumber$ This might look alien and seem a bit complicated. But, it’s just another mean. It’s the mean of the sums of squares for the effect. If this reminds you of the formula for the variance, good memory. The $MSE_\text{Effect}$ is a measure of variance for the change in the data due to changes in the means (which are tied to the experimental conditions). The $SS_\text{Error}$ represents the sum of variation for nine scores in our study. That’s a lot more scores, so the $SS_\text{Error}$ is often way bigger than $SS_\text{Effect}$. If we left our SSes this way and divided them, we would almost always get numbers less than one, because the $SS_\text{Error}$ is so big. What we need to do is bring it down to the average size. So, we might want to divide our $SS_\text{Error}$ by 9, after all there were nine scores. However, because we are estimating this property, we divide by the degrees of freedom instead (scores - groups = 9-3 = 6). Now we have created something new, it’s called the $MSE_\text{Error}$. $MSE_\text{Error} = \frac{SS_\text{Error}}{df_\text{Error}} \nonumber$ $MSE_\text{Error} = \frac{230}{6} = 38.33 \nonumber$ Calculate F Now that we have done all of the hard work, calculating $F$ is easy: $\text{F} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber$ $\text{F} = \frac{MSE_\text{Effect}}{MSE_\text{Error}} \nonumber$ $\text{F} = \frac{36}{38.33} = 0.939 \nonumber$ Done! The ANOVA TABLE You might suspect we aren’t totally done here. We’ve walked through the steps of computing $F$. Remember, $F$ is a sample statistic, we computed $F$ directly from the data. There were a whole bunch of pieces we needed, the dfs, the SSes, the MSEs, and then finally the F. All of these little pieces are conveniently organized by ANOVA tables. ANOVA tables look like this: library(xtable) suppressPackageStartupMessages(library(dplyr)) scores <- c(20,11,2,6,2,7,2,11,2) means <-c(11,11,11,5,5,5,5,5,5) groups <- as.character(rep(c("A","B","C"), each=3)) diff <-means-scores diff_squared <-diff^2 df<-data.frame(groups,scores,means,diff, diff_squared) df$groups<-as.character(df$groups) df <- df %>% rbind(c("Sums",colSums(df[1:9,2:5]))) %>% rbind(c("Means",colMeans(df[1:9,2:5]))) aov_out<-aov(scores~ groups, df[1:9,]) summary_out<-summary(aov_out) knitr::kable(xtable(summary_out)) Df Sum Sq Mean Sq F value Pr(>F) groups 2 72 36.00000 0.9391304 0.4417359 Residuals 6 230 38.33333 NA NA You are looking at the print-out of an ANOVA summary table from R.
Notice that it has columns for $Df$, $SS$ (Sum Sq), $MSE$ (Mean Sq), $F$, and a $p$-value. There are two rows. The groups row is for the Effect (what our means can explain). The Residuals row is for the Error (what our means can’t explain). Different programs give slightly different labels, but they are all attempting to present the same information in the ANOVA table. There isn’t anything special about the ANOVA table, it’s just a way of organizing all the pieces. Notice also that the MSE for the effect (36) is placed above the MSE for the error (38.333), and this seems natural because we divide 36/38.33 in order to get the $F$-value!
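If you want to double-check the pieces of that table against the hand calculations, here is a minimal base R sketch (not part of the original chapter code) that recomputes each quantity from the same nine toy scores; the object names are just for illustration.

```
scores <- c(20,11,2,6,2,7,2,11,2)
groups <- rep(c("A","B","C"), each = 3)

SS_total <- sum((scores - mean(scores))^2)         # 302
group_means <- ave(scores, groups)                 # each score replaced by its group mean
SS_effect <- sum((group_means - mean(scores))^2)   # 72
SS_error <- sum((scores - group_means)^2)          # 230

MSE_effect <- SS_effect / (3 - 1)                  # 36
MSE_error <- SS_error / (9 - 3)                    # 38.33
F_value <- MSE_effect / MSE_error                  # 0.939
```

The numbers line up with the Sum Sq and Mean Sq columns in the aov() print-out above.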
We’ve just noted that the ANOVA has a bunch of numbers that we calculated straight from the data. All except one, the \(p\)-value. We did not calculate the \(p\)-value from the data. Where did it come from, what does it mean, and how do we use it for statistical inference? Just so you don’t get too worried, the \(p\)-value for the ANOVA has the very same general meaning as the \(p\)-value for the \(t\)-test, or the \(p\)-value for any sample statistic. It tells us the probability that we would observe our test statistic, or a larger one, under the distribution of no differences (the null). As we keep saying, \(F\) is a sample statistic. Can you guess what we do with sample statistics in this textbook? We did it for the Crump Test, the Randomization Test, and the \(t\)-test… We make fake data, we simulate it, we compute the sample statistic we are interested in, then we see how it behaves over many replications or simulations. Let’s do that for \(F\). This will help you understand what \(F\) really is, and how it behaves. We are going to create the sampling distribution of \(F\). Once we have that you will be able to see where the \(p\)-values come from. It’s the same basic process that we followed for the \(t\) tests, except we are measuring \(F\) instead of \(t\). Here is the set-up: we are going to run an experiment with three levels. In our imaginary experiment we are going to test whether a new magic pill can make you smarter. The independent variable is the number of magic pills you take: 1, 2, or 3. We will measure your smartness using a smartness test. We will assume the smartness test has some known properties, the mean score on the test is 100, with a standard deviation of 10 (and the distribution is normal). The only catch is that our magic pill does NOTHING AT ALL. The fake people in our fake experiment will all take sugar pills that do absolutely nothing to their smartness. Why would we want to simulate such a bunch of nonsense? The answer is that this kind of simulation is critical for making inferences about chance if you were to conduct a real experiment. Here are some more details for the experiment. Each group will have 10 different subjects, so there will be a total of 30 subjects. We are going to run this experiment 10,000 times. Each time drawing numbers randomly from the very same normal distribution. We are going to calculate \(F\) from our sample data every time, and then we are going to draw the histogram of \(F\)-values. This will show us the sampling distribution of \(F\) for our situation. Let’s do that and see what it looks like: Let’s note a couple things about the \(F\) distribution. 1) The smallest value is 0, and there are no negative values. Does this make sense? \(F\) can never be negative because it is the ratio of two variances, and variances are always positive because of the squaring operation. So, yes, it makes sense that the sampling distribution of \(F\) is always 0 or greater. 2) It does not look normal. No, it does not. \(F\) can have many different looking shapes, depending on the degrees of freedom in the numerator and denominator. However, these aspects are not too important for now. Remember, before we talked about some intuitive ideas for understanding \(F\), based on the idea that \(F\) is a ratio of what we can explain (variance due to mean differences), divided by what we can’t explain (the error variance). When the error variance is higher than the effect variance, then we will always get an \(F\)-value less than one.
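Here is one way you might code the simulation just described; treat it as a sketch under the assumptions stated above (3 groups, 10 scores per group, every score drawn from a normal distribution with mean = 100 and sd = 10, and no real effect of the pills). The set.seed() call is only there so the sketch is reproducible.

```
set.seed(1)
n_sims <- 10000
sim_Fs <- replicate(n_sims, {
  scores <- rnorm(30, mean = 100, sd = 10)                  # 30 fake subjects, pills do nothing
  pills <- factor(rep(c("one","two","three"), each = 10))   # 1, 2, or 3 magic pills
  summary(aov(scores ~ pills))[[1]]$`F value`[1]            # keep the F-value from each fake experiment
})
hist(sim_Fs, breaks = 50,
     main = "Sampling distribution of F under the null (dfs 2 and 27)")
```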
You can see that we often got \(F\)-values less than one in the simulation. This is sensible, after all we were simulating samples coming from the very same distribution. On average there should be no differences between the means. So, on average the part of the total variance that is explained by the means should be less than one, or around one, because it should be roughly the same as the amount of error variance (remember, we are simulating no differences). At the same time, we do see that some \(F\)-values are larger than 1. There are little bars that we can see going all the way up to about 5. If you were to get an \(F\)-value of 5, you might automatically think, that’s a pretty big \(F\)-value. Indeed it kind of is, it means that you can explain 5 times more of variance than you can’t explain. That seems like a lot. You can also see that larger \(F\)-values don’t occur very often. As a final reminder, what you are looking at is how the \(F\)-statistic (measured from each of 10,000 simulated experiments) behaves when the only thing that can cause differences in the means is random sampling error. Just by chance sometimes the means will be different. You are looking at another chance window. These are the \(F\)s that chance can produce. Making Decisions We can use the sampling distribution of \(F\) (for the null) to make decisions about the role of chance in a real experiment. For example, we could do the following. 1. Set an alpha criterion of \(p\) = 0.05 2. Find out the critical value for \(F\), for our particular situation (with our \(df\)s for the numerator and denominator). Let’s do that. I’ve drawn the line for the critical value onto the histogram: Alright, now we can see that only 5% of all \(F\)-values from from this sampling distribution will be 3.35 or larger. We can use this information. How would we use it? Imagine we ran a real version of this experiment. And, we really used some pills that just might change smartness. If we ran the exact same design, with 30 people in total (10 in each group), we could set an \(F\) criterion of 3.35 for determining whether any of our results reflected a causal change in smartness due to the pills, and not due to random chance. For example, if we found an \(F\)-value of 3.34, which happens, just less than 5% of the time, we might conclude that random sampling error did not produce the differences between our means. Instead, we might be more confident that the pills actually did something, after all an \(F\)-value of 3.34 doesn’t happen very often, it is unlikely (only 5 times out of 100) to occur by chance. Fs and means Up to here we have been building your intuition for understanding \(F\). We went through the calculation of \(F\) from sample data. We went through the process of simulating thousands of \(F\)s to show you the null distribution. We have not talked so much about what researchers really care about…The MEANS! The actual results from the experiment. Were the means different? that’s often what people want to know. So, now we will talk about the means, and \(F\), together. Notice, if I told you I ran an experiment with three groups, testing whether some manipulation changes the behavior of the groups, and I told you that I found a big \(F\)!, say an \(F\) of 6!. And, that the \(F\) of 6 had a \(p\)-value of .001. What would you know based on that information alone? You would only know that Fs of 6 don’t happen very often by chance. In fact they only happen 0.1% of the time, that’s hardly at all. 
If someone told me those values, I would believe that the results they found in their experiment were not likely due to chance. However, I still would not know what the results of the experiment were! Nobody told us what the means were in the different groups, we don’t know what happened! IMPORTANT: even though we don’t know what the means were, we do know something about them, whenever we get \(F\)-values and \(p\)-values like that (big \(F\)s, and very small associated \(p\)s)… Can you guess what we know? I’ll tell you. We automatically know that there must have been some differences between the means. If there were no differences between the means, then the variance explained by the means (the numerator for \(F\)) would not be very large. So, we know that there must be some differences, we just don’t know what they are. Of course, if we had the data, all we would need to do is look at the means for the groups (the ANOVA table doesn’t report this, we need to do it as a separate step). ANOVA is an omnibus test This property of the ANOVA is why the ANOVA is sometimes called the omnibus test. Omnibus is a fun word, it sounds like a bus I’d like to ride. The meaning of omnibus, according to the dictionary, is “comprising several items”. The ANOVA is, in a way, one omnibus test, comprising several little tests. For example, if you had three groups, A, B, and C, you could get differences between 1. A and B 2. B and C 3. A and C That’s three possible differences you could get. You could run separate \(t\)-tests, to test whether each of those differences you might have found could have been produced by chance. Or, you could run an ANOVA, like what we have been doing, to ask one more general question about the differences. Here is one way to think about what the omnibus test is testing: Hypothesis of no differences anywhere: \( A = B = C \) Any differences anywhere: 1. \( A \neq B = C \) 2. \( A = B \neq C \) 3. \( A \neq C = B \) The \(\neq\) symbol means “does not equal”, it’s an equal sign with a cross through it (no equals allowed!). How do we put all of this together? Generally, when we get a small \(F\)-value, with a large \(p\)-value, we will not reject the hypothesis of no differences. We will say that we do not have evidence that the means of the three groups are in any way different, and the differences that are there could easily have been produced by chance. When we get a large \(F\) with a small \(p\)-value (one that is below our alpha criterion), we will generally reject the hypothesis of no differences. We would then assume that at least one group mean is not equal to one of the others. That is the omnibus test. Rejecting the null in this way is rejecting the idea that there are no differences. But, the \(F\) test still does not tell you which of the possible group differences are the ones that are different. Looking at a bunch of group means We ran 10,000 experiments just before, and we didn’t even once look at the group means for any of the experiments. Let’s quickly do that, so we get a better sense of what is going on. Whoa, that’s a lot to look at. What is going on here? Each little box represents the outcome of a simulated experiment. The dots are the means for each group (whether subjects took 1, 2, or 3 magic pills). The y-axis shows the mean smartness for each group. The error bars are standard errors of the mean. You can see that each of the 10 experiments turns out differently.
Remember, we sampled 10 numbers for each group from the same normal distribution with mean = 100, and sd = 10. So, we know that the correct means for each sample should actually be 100 every single time. However, they are not 100 every single time because of?…sampling error (Our good friend that we talk about all the time). For most of the simulations the error bars are all overlapping, this suggests visually that the means are not different. However, some of them look like they are not overlapping so much, and this would suggest that they are different. This is the siren song of chance (sirens lured sailors to their deaths at sea…beware of the siren call of chance). If we concluded that any of these sets of means had a true difference, we would be committing a type I error. Because we made the simulation, we know that none of these means are actually different. But, when you are running a real experiment, you don’t get to know this for sure. Looking at bar graphs Let’s look at the exact same graph as above, but this time use bars to visually illustrate the means, instead of dots. We’ll re-do our simulation of 10 experiments, so the pattern will be a little bit different: Now the heights of the bars display the means for each pill group. In general we see the same thing. Some of the fake experiments look like there might be differences, and some of them don’t. What mean differences look like when F is < 1 We are now giving you some visual experience looking at what means look like from a particular experiment. This is for your stats intuition. We’re trying to improve your data senses. What we are going to do now is similar to what we did before. Except this time we are going to look at 10 simulated experiments, where all of the \(F\)-values were less than 1. All of these \(F\)-values would also be associated with fairly large \(p\)-values. When F is less than one, we would not reject the hypothesis of no differences. So, when we look at patterns of means when F is less than 1, we should see mostly the same means, and no big differences. The numbers in the panels now tell us which simulations actually produced Fs of less than 1. We see here that all the bars aren’t perfectly flat, that’s OK. What’s more important is that for each panel, the error bars for each mean are totally overlapping with all the other error bars. We can see visually that our estimate of the mean for each sample is about the same for all of the bars. That’s good, we wouldn’t make any type I errors here. What mean differences look like when F > 3.35 Earlier we found that the critical value for \(F\) in our situation was 3.35, this was the location on the \(F\) distribution where only 5% of \(F\)s were 3.35 or greater. We would reject the hypothesis of no differences whenever \(F\) was greater than 3.35. In this case, whenever we did that, we would be making a type I error. That is because we are simulating the distribution of no differences (remember all of our sample means are coming from the exact same distribution). So, now we can take a look at what type I errors look like. In other words, we can run some simulations and look at the pattern in the means, only when F happens to be 3.35 or greater (this only happens 5% of the time, so we might have to let the computer simulate for a while). Let’s see what that looks like: The numbers in the panels now tell us which simulations actually produced \(F\)s that were greater than 3.35 What do you notice about the pattern of means inside each panel? 
Now, every single panel shows at least one mean that is different from the others. Specifically, the error bars for one mean do not overlap with the error bars for at least one other mean. This is what mistakes look like. These are all type I errors. They are insidious. When they happen to you by chance, the data really does appear to show a strong pattern, and your \(F\)-value is large, and your \(p\)-value is small! It is easy to be convinced by a type I error (it’s the siren song of chance).
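Incidentally, you don’t have to simulate to find the 5% cutoff that produces these type I errors; the theoretical \(F\) distribution gives it directly. A quick sketch for the degrees of freedom used here (2 and 27):

```
qf(0.95, df1 = 2, df2 = 27)                      # critical value, about 3.35
pf(3.35, df1 = 2, df2 = 27, lower.tail = FALSE)  # chance of an F this big or bigger under the null, about 0.05
```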
We’ve covered many fundamentals about the ANOVA, how to calculate the necessary values to obtain an \(F\)-statistic, and how to interpret the \(F\)-statistic along with its associated \(p\)-value once we have one. In general, you will be conducting ANOVAs and playing with \(F\)s and \(p\)s using software that will automatically spit out the numbers for you. It’s important that you understand what the numbers mean, that’s why we’ve spent time on the concepts. We also recommend that you try to compute an ANOVA by hand at least once. It builds character, and lets you know that you know what you are doing with the numbers. But, we’ve probably also lost the real thread of all this. The core thread is that when we run an experiment we use our inferential statistics, like ANOVA, to help us determine whether the differences we found are likely due to chance or not. In general, we like to find out that the differences that we find are not due to chance, but instead due to our manipulation. So, we return to the application of the ANOVA to a real data set with a real question. This is the same one that you will be learning about in the lab. We give you a brief overview here so you know what to expect. Tetris and bad memories Yup, you read that right. The research you will learn about tests whether playing Tetris after watching a scary movie can help prevent you from having bad memories from the movie (James et al. 2015). Sometimes in life people have intrusive memories, and they think about things they’d rather not have to think about. This research looks at one method that could reduce the frequency of intrusive memories. Here’s what they did. Subjects watched a scary movie, then at the end of the week they reported how many intrusive memories about the movie they had. The mean number of intrusive memories was the measurement (the dependent variable). This was a between-subjects experiment with four groups. Each group of subjects received a different treatment following the scary movie. The question was whether any of these treatments would reduce the number of intrusive memories. All of these treatments occurred after watching the scary movie: 1. No-task control: These participants completed a 10-minute music filler task after watching the scary movie. 2. Reactivation + Tetris: These participants were shown a series of images from the trauma film to reactivate the traumatic memories (i.e., reactivation task). Then, participants played the video game Tetris for 12 minutes. 3. Tetris Only: These participants played Tetris for 12 minutes, but did not complete the reactivation task. 4. Reactivation Only: These participants completed the reactivation task, but did not play Tetris. For reasons we elaborate on in the lab, the researchers hypothesized that the `Reactivation+Tetris` group would have fewer intrusive memories over the week than the other groups. Let’s look at the findings. Note you will learn how to do all of these steps in the lab. For now, we just show the findings and the ANOVA table. Then we walk through how to interpret it. OOooh, look at that. We did something fancy. You are looking at the data from the four groups. The height of each bar shows the mean intrusive memories for the week. The dots show the individual scores for each subject in each group (useful to see the spread of the data). The error bars show the standard errors of the mean. What can we see here? Right away it looks like there is some support for the research hypothesis.
The green bar, for the Reactivation + Tetris group had the lowest mean number of intrusive memories. Also, the error bar is not overlapping with any of the other error bars. This implies that the mean for the Reactivation + Tetris group is different from the means for the other groups. And, this difference is probably not very likely by chance. We can now conduct the ANOVA on the data to ask the omnibus question. If we get a an \(F\)-value with an associated \(p\)-value of less than .05 (the alpha criterion set by the authors), then we can reject the hypothesis of no differences. Let’s see what happens: ```library(data.table) library(xtable) all_data <- fread( "https://stats.libretexts.org/@api/deki/files/10605/Jamesetal2015Experiment2.csv") all_data\$Condition <- as.factor(all_data\$Condition) levels(all_data\$Condition) <- c("Control", "Reactivation+Tetris", "Tetris_only", "Reactivation_only") aov_out<-aov(Days_One_to_Seven_Number_of_Intrusions ~ Condition, all_data) summary_out<-summary(aov_out) knitr::kable(xtable(summary_out))``` Df Sum Sq Mean Sq F value Pr(>F) Condition 3 114.8194 38.27315 3.794762 0.0140858 Residuals 68 685.8333 10.08578 NA NA We see the ANOVA table, it’s up there. We could report the results from the ANOVA table like this: There was a significant main effect of treatment condition, F(3, 68) = 3.79, MSE = 10.08, p=0.014. We called this a significant effect because the \(p\)-value was less than 0.05. In other words, the \(F\)-value of 3.79 only happens 1.4% of the time when the null is true. Or, the differences we observed in the means only occur by random chance (sampling error) 1.4% of the time. Because chance rarely produces this kind of result, the researchers made the inference that chance DID NOT produce their differences, instead, they were inclined to conclude that the Reactivation + Tetris treatment really did cause a reduction in intrusive memories. That’s pretty neat. Comparing means after the ANOVA Remember that the ANOVA is an omnibus test, it just tells us whether we can reject the idea that all of the means are the same. The F-test (synonym for ANOVA) that we just conducted suggested we could reject the hypothesis of no differences. As we discussed before, that must mean that there are some differences in the pattern of means. Generally after conducting an ANOVA, researchers will conduct follow-up tests to compare differences between specific means. We will talk more about this practice throughout the textbook. There are many recommended practices for follow-up tests, and there is a lot of debate about what you should do. We are not going to wade into this debate right now. Instead we are going to point out that you need to do something to compare the means of interest after you conduct the ANOVA, because the ANOVA is just the beginning…It usually doesn’t tell you want you want to know. You might wonder why bother conducting the ANOVA in the first place…Not a terrible question at all. A good question. You will see as we talk about more complicated designs, why ANOVAs are so useful. In the present example, they are just a common first step. There are required next steps, such as what we do next. How can you compare the difference between two means, from a between-subjects design, to determine whether or not the difference you observed is likely or unlikely to be produced by chance? We covered this one already, it’s the independent \(t\)-test. We’ll do a couple \(t\)-tests, showing the process. Control vs. 
Reactivation+Tetris What we really want to know is if Reactivation+Tetris caused fewer intrusive memories…but compared to what? Well, if it did something, the Reactivation+Tetris group should have a smaller mean than the Control group. So, let’s do that comparison: ```library(data.table) library(ggplot2) suppressPackageStartupMessages(library(dplyr)) all_data <- fread( "https://stats.libretexts.org/@api/deki/files/10605/Jamesetal2015Experiment2.csv") all_data\$Condition <- as.factor(all_data\$Condition) levels(all_data\$Condition) <- c("Control", "Reactivation+Tetris", "Tetris_only", "Reactivation_only") comparison_df <- all_data %>% filter(Condition %in% c('Control','Reactivation+Tetris')==TRUE) t.test(Days_One_to_Seven_Number_of_Intrusions ~ Condition, comparison_df, var.equal=TRUE)``` ``` Two Sample t-test data: Days_One_to_Seven_Number_of_Intrusions by Condition t = 2.9893, df = 34, p-value = 0.005167 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: 1.031592 5.412852 sample estimates: mean in group Control mean in group Reactivation+Tetris 5.111111 1.888889 ``` We found that there was a significant difference between the control group (M=5.11) and Reactivation + Tetris group (M=1.89), t(34) = 2.99, p=0.005. Above you just saw an example of reporting another \(t\)-test. This sentence does an OK job of telling the reader everything they want to know. It has the means for each group, and the important bits from the \(t\)-test. More important, as we suspected, the difference between the control and Reactivation + Tetris group was likely not due to chance. Control vs. Tetris_only Now we can really start wondering what caused the difference. Was it just playing Tetris? Does just playing Tetris reduce the number of intrusive memories during the week? Let’s compare that to control: ```library(data.table) suppressPackageStartupMessages(library(dplyr)) all_data <- fread( "https://stats.libretexts.org/@api/deki/files/10605/Jamesetal2015Experiment2.csv") all_data\$Condition <- as.factor(all_data\$Condition) levels(all_data\$Condition) <- c("Control", "Reactivation+Tetris", "Tetris_only", "Reactivation_only") comparison_df <- all_data %>% filter(Condition %in% c('Control','Tetris_only')==TRUE) t.test(Days_One_to_Seven_Number_of_Intrusions ~ Condition, comparison_df, var.equal=TRUE)``` ``` Two Sample t-test data: Days_One_to_Seven_Number_of_Intrusions by Condition t = 1.0129, df = 34, p-value = 0.3183 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: -1.230036 3.674480 sample estimates: mean in group Control mean in group Tetris_only 5.111111 3.888889 ``` Here we did not find a significant difference. We found no significant difference between the control group (M=5.11) and the Tetris Only group (M=3.89), t(34) = 1.01, p=0.318. So, it seems that not all of the differences between our means are large enough to be called statistically significant. In particular, a difference this large or larger happens by chance 31.8% of the time. You could go on doing more comparisons, between all of the different pairs of means. Each time conducting a \(t\)-test, and each time saying something more specific about the patterns across the means than you get to say with the omnibus test provided by the ANOVA. Usually, it is the pattern of differences across the means that you as a researcher are primarily interested in understanding.
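If you would rather not filter the data and run each comparison by hand, base R’s `pairwise.t.test()` can run every pairwise comparison in one call. The sketch below re-loads the same data used above; note that with `pool.sd = FALSE` it runs Welch-style tests, so the p-values will differ slightly from the `var.equal=TRUE` tests reported here, and `p.adjust.method = "none"` is used only so the output matches uncorrected comparisons.

```
library(data.table)
all_data <- fread(
 "https://stats.libretexts.org/@api/deki/files/10605/Jamesetal2015Experiment2.csv")
all_data$Condition <- as.factor(all_data$Condition)
levels(all_data$Condition) <- c("Control", "Reactivation+Tetris",
                                "Tetris_only", "Reactivation_only")

# all pairwise comparisons of the four conditions, unadjusted p-values
pairwise.t.test(all_data$Days_One_to_Seven_Number_of_Intrusions,
                all_data$Condition,
                p.adjust.method = "none",
                pool.sd = FALSE)
```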
Your theories will make predictions about how the pattern turns out (e.g., which specific means should be higher or lower and by how much). So, the practice of doing comparisons after an ANOVA is really important for establishing the patterns in the means. 7.05: ANOVA Summary We have just finished a rather long introduction to the ANOVA, and the \(F\)-test. The next couple of chapters continue to explore properties of the ANOVA for different kinds of experimental designs. In general, the process to follow for all of the more complicated designs is very similar to what we did here, which boils down to two steps: 1. conduct the ANOVA on the data 2. conduct follow-up tests, looking at differences between particular means So what’s next…the ANOVA for repeated measures designs. See you in the next chapter. 7.06: References James, Ella L, Michael B Bonsall, Laura Hoppitt, Elizabeth M Tunbridge, John R Geddes, Amy L Milton, and Emily A Holmes. 2015. “Computer Game Play Reduces Intrusive Memories of Experimental Trauma via Reconsolidation-Update Mechanisms.” Psychological Science 26 (8): 1201–15. Salsburg, David. 2001. The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century. Macmillan.
This chapter introduces you to repeated measures ANOVA. Repeated measures ANOVAs are very common in Psychology, because psychologists often use repeated measures designs, and repeated measures ANOVAs are the appropriate test for making inferences about repeated measures designs. Remember the paired sample \(t\)-test? We used that test to compare two means from a repeated measures design. Remember what a repeated measures design is? It’s also called a within-subjects design. These designs involve measuring the same subject more than once. Specifically, at least once for every experimental condition. In the paired \(t\)-test example, we discussed a simple experiment with only two experimental conditions. There, each subject would contribute a measurement to level one and level two of the design. However, paired-samples \(t\)-tests are limited to comparing two means. What if you had a design that had more than two experimental conditions? For example, perhaps your experiment had 3 levels for the independent variable, and each subject contributed data to each of the three levels? This is starting to sounds like an ANOVA problem. ANOVAs are capable of evaluating whether there is a difference between any number of means, two or greater. So, we can use an ANOVA for our repeated measures design with three levels for the independent variable. Great! So, what makes a repeated measures ANOVA different from the ANOVA we just talked about? 08: Repeated Measures ANOVA Let’s use the exact same toy example from the previous chapter, but let’s convert it to a repeated measures design. Last time, we imagined we had some data in three groups, A, B, and C. The data looked like this: ```scores <- c(20,11,2,6,2,7,2,11,2) groups <- as.character(rep(c("A","B","C"), each=3)) df<-data.frame(groups,scores) knitr::kable(df)``` groups scores A 20 A 11 A 2 B 6 B 2 B 7 C 2 C 11 C 2 The above table represents a between-subject design where each score involves a unique subject. Let’s change things up a tiny bit, and imagine we only had 3 subjects in total in the experiment. And, that each subject contributed data to the three levels of the independent variable, A, B, and C. Before we called the IV `groups`, because there were different groups of subjects. Let’s change that to `conditions`, because now the same group of subjects participates in all three conditions. Here’s the new table for a within-subjects (repeated measures) version of this experiment: ```scores <- c(20,11,2,6,2,7,2,11,2) conditions <- as.character(rep(c("A","B","C"), each=3)) subjects <-rep(1:3,3) df<-data.frame(subjects,conditions,scores) knitr::kable(df)``` subjects conditions scores 1 A 20 2 A 11 3 A 2 1 B 6 2 B 2 3 B 7 1 C 2 2 C 11 3 C 2 8.02: Partioning the Sums of Squares Time to introduce a new name for an idea you learned about last chapter, it’s called partitioning the sums of squares. Sometimes an obscure new name can be helpful for your understanding of what is going on. ANOVAs are all about partitioning the sums of squares. We already did some partitioning in the last chapter. What do we mean by partitioning? Imagine you had a big empty house with no rooms in it. What would happen if you partitioned the house? What would you be doing? One way to partition the house is to split it up into different rooms. You can do this by adding new walls and making little rooms everywhere. That’s what partitioning means, to split up. The act of partitioning, or splitting up, is the core idea of ANOVA. To use the house analogy. 
Our total sums of squares (SS Total) is our big empty house. We want to split it up into little rooms. Before we partitioned SS Total using this formula: $SS_\text{TOTAL} = SS_\text{Effect} + SS_\text{Error} \nonumber$ Remember, the $SS_\text{Effect}$ was the variance we could attribute to the means of the different groups, and $SS_\text{Error}$ was the leftover variance that we couldn’t explain. $SS_\text{Effect}$ and $SS_\text{Error}$ are the partitions of $SS_\text{TOTAL}$, they are the little rooms. In the between-subjects case above, we got to split $SS_\text{TOTAL}$ into two parts. What is most interesting about the repeated-measures design, is that we get to split $SS_\text{TOTAL}$ into three parts, there’s one more partition. Can you guess what the new partition is? Hint: whenever we have a new way to calculate means in our design, we can always create a partition for those new means. What are the new means in the repeated measures design? Here is the new idea for partitioning $SS_\text{TOTAL}$ in a repeated-measures design: $SS_\text{TOTAL} = SS_\text{Effect} + SS_\text{Subjects} +SS_\text{Error} \nonumber$ We’ve added $SS_\text{Subjects}$ as the new idea in the formula. What’s the idea here? Well, because each subject was measured in each condition, we have a new set of means. These are the means for each subject, collapsed across the conditions. For example, subject 1 has a mean (mean of their scores in conditions A, B, and C); subject 2 has a mean (mean of their scores in conditions A, B, and C); and subject 3 has a mean (mean of their scores in conditions A, B, and C). There are three subject means, one for each subject, collapsed across the conditions. And, we can now estimate the portion of the total variance that is explained by these subject means. We just showed you a “formula” to split up $SS_\text{TOTAL}$ into three parts, but we called the formula an idea. We did that because the way we wrote the formula is a little bit misleading, and we need to clear something up. Before we clear the thing up, we will confuse you just a little bit. Be prepared to be confused a little bit. First, we need to introduce you to some more terms. It turns out that different authors use different words to describe parts of the ANOVA. This can be really confusing. For example, we described the SS formula for a between subjects design like this: $SS_\text{TOTAL} = SS_\text{Effect} + SS_\text{Error} \nonumber$ However, the very same formula is often written differently, using the words between and within in place of effect and error, it looks like this: $SS_\text{TOTAL} = SS_\text{Between} + SS_\text{Within} \nonumber$ Whoa, hold on a minute. Haven’t we switched back to talking about a between-subjects ANOVA. YES! Then why are we using the word within, what does that mean? YES! We think this is very confusing for people. Here the word within has a special meaning. It does not refer to a within-subjects design. Let’s explain. First, $SS_\text{Between}$ (which we have been calling $SS_\text{Effect}$) refers to variation between the group means, that’s why it is called $SS_\text{Between}$. Second, and most important, $SS_\text{Within}$ (which we have been calling $SS_\text{Error}$), refers to the leftover variation within each group mean. Specifically, it is the variation between each group mean and each score in the group. “AAGGH, you’ve just used the word between to describe within group variation!”. Yes! We feel your pain. 
Remember, for each group mean, every score is probably off a little bit from the mean. So, the scores within each group have some variation. This is the within group variation, and it is why the leftover error that we can’t explain is often called $SS_\text{Within}$. OK. So why did we introduce this new confusing way of talking about things? Why can’t we just use $SS_\text{Error}$ to talk about this instead of $SS_\text{Within}$, which you might (we do) find confusing. We’re getting there, but perhaps a picture will help to clear things up. The figure lines up the partitioning of the Sums of Squares for both between-subjects and repeated-measures designs. In both designs, $SS_\text{Total}$ is first split up into two pieces $SS_\text{Effect (between-groups)}$ and $SS_\text{Error (within-groups)}$. At this point, both ANOVAs are the same. In the repeated measures case we split the $SS_\text{Error (within-groups)}$ into two more littler parts, which we call $SS_\text{Subjects (error variation about the subject mean)}$ and $SS_\text{Error (left-over variation we can't explain)}$. So, when we earlier wrote the formula to split up SS in the repeated-measures design, we were kind of careless in defining what we actually meant by $SS_\text{Error}$, this was a little too vague: $SS_\text{TOTAL} = SS_\text{Effect} + SS_\text{Subjects} +SS_\text{Error} \nonumber$ The critical feature of the repeated-measures ANOVA, is that the $SS_\text{Error}$ that we will later use to compute the MSE in the denominator for the $F$-value, is smaller in a repeated-measures design, compared to a between subjects design. This is because the $SS_\text{Error (within-groups)}$ is split into two parts, $SS_\text{Subjects (error variation about the subject mean)}$ and $SS_\text{Error (left-over variation we can't explain)}$. To make this more clear, we made another figure: As we point out, the $SS_\text{Error (left-over)}$ in the green circle will be a smaller number than the $SS_\text{Error (within-group)}$. That’s because we are able to subtract out the $SS_\text{Subjects}$ part of the $SS_\text{Error (within-group)}$. As we will see shortly, this can have the effect of producing larger F-values when using a repeated-measures design compared to a between-subjects design.
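As a quick preview of what this looks like in practice (the next section does the same computation by hand), here is a sketch that runs aov() both ways on the toy scores from this chapter; the repeated-measures call splits the 230 units of within-conditions error into a subjects piece and a smaller left-over piece.

```
scores <- c(20,11,2,6,2,7,2,11,2)
conditions <- factor(rep(c("A","B","C"), each = 3))
subjects <- factor(rep(1:3, 3))

# Between-subjects style: the error SS is the full within-conditions variation (230)
summary(aov(scores ~ conditions))

# Repeated-measures style: the 230 is split into a subjects part (about 52.7)
# and a left-over error part (about 177.3) that goes into the F-value
summary(aov(scores ~ conditions + Error(subjects/conditions)))
```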
Now that you are familiar with the concept of an ANOVA table (remember the table from last chapter where we reported all of the parts to calculate the $F$-value?), we can take a look at the things we need to find out to make the ANOVA table. The figure below presents an abstract for the repeated-measures ANOVA table. It shows us all the thing we need to calculate to get the $F$-value for our data. So, what we need to do is calculate all the $SS$es that we did before for the between-subjects ANOVA. That means the next three steps are identical to the ones you did before. In fact, I will just basically copy the next three steps to find $SS_\text{TOTAL}$, $SS_\text{Effect}$, and $SS_\text{Error (within-conditions)}$. After that we will talk about splitting up $SS_\text{Error (within-conditions)}$ into two parts, this is the new thing for this chapter. Here we go! SS Total The total sums of squares, or $SS\text{Total}$ measures the total variation in a set of data. All we do is find the difference between each score and the grand mean, then we square the differences and add them all up. suppressPackageStartupMessages(library(dplyr)) scores <- c(20,11,2,6,2,7,2,11,2) conditions <- as.character(rep(c("A","B","C"), each=3)) subjects <-rep(1:3,3) diff <-scores-mean(scores) diff_squared <-diff^2 df<-data.frame(subjects,conditions,scores,diff, diff_squared) df$conditions<-as.character(df$conditions) df$subjects<-as.character(df$subjects) df <- df %>% rbind(c("Sums","", colSums(df[1:9,3:5]))) %>% rbind(c("Means","",colMeans(df[1:9,3:5]))) knitr::kable(df) subjects conditions scores diff diff_squared 1 A 20 13 169 2 A 11 4 16 3 A 2 -5 25 1 B 6 -1 1 2 B 2 -5 25 3 B 7 0 0 1 C 2 -5 25 2 C 11 4 16 3 C 2 -5 25 Sums   63 0 302 Means   7 0 33.5555555555556 The mean of all of the scores is called the Grand Mean. It’s calculated in the table, the Grand Mean = 7. We also calculated all of the difference scores from the Grand Mean. The difference scores are in the column titled diff. Next, we squared the difference scores, and those are in the next column called diff_squared. When you add up all of the individual squared deviations (difference sscores) you get the sums of squares. That’s why it’s called the sums of squares (SS). Now, we have the first part of our answer: $SS_\text{total} = SS_\text{Effect} + SS_\text{Error} \nonumber$ $SS_\text{total} = 302 \nonumber$ and $302 = SS_\text{Effect} + SS_\text{Error} \nonumber$ SS Effect $SS_\text{Total}$ gave us a number representing all of the change in our data, how they all are different from the grand mean. What we want to do next is estimate how much of the total change in the data might be due to the experimental manipulation. For example, if we ran an experiment that causes causes change in the measurement, then the means for each group will be different from other, and the scores in each group will be different from each. As a result, the manipulation forces change onto the numbers, and this will naturally mean that some part of the total variation in the numbers is caused by the manipulation. The way to isolate the variation due to the manipulation (also called effect) is to look at the means in each group, and the calculate the difference scores between each group mean and the grand mean, and then the squared deviations to find the sum for $SS_\text{Effect}$. Consider this table, showing the calculations for $SS_\text{Effect}$. 
suppressPackageStartupMessages(library(dplyr)) scores <- c(20,11,2,6,2,7,2,11,2) conditions <- as.character(rep(c("A","B","C"), each=3)) subjects <-rep(1:3,3) means <-c(11,11,11,5,5,5,5,5,5) diff <-means-mean(scores) diff_squared <-diff^2 df<-data.frame(subjects,conditions,scores,means,diff, diff_squared) df$conditions<-as.character(df$conditions) df$subjects<-as.character(df$subjects) df <- df %>% rbind(c("Sums","", colSums(df[1:9,3:6]))) %>% rbind(c("Means","",colMeans(df[1:9,3:6]))) knitr::kable(df) subjects conditions scores means diff diff_squared 1 A 20 11 4 16 2 A 11 11 4 16 3 A 2 11 4 16 1 B 6 5 -2 4 2 B 2 5 -2 4 3 B 7 5 -2 4 1 C 2 5 -2 4 2 C 11 5 -2 4 3 C 2 5 -2 4 Sums   63 63 0 72 Means   7 7 0 8 Notice we created a new column called means, these are the means for each condition, A, B, and C. $SS_\text{Effect}$ represents the amount of variation that is caused by differences between the means. The diff column is the difference between each condition mean and the grand mean, so for the first row, we have 11-7 = 4, and so on. We found that $SS_\text{Effect} = 72$, this is the same as the ANOVA from the previous chapter SS Error (within-conditions) Great, we made it to SS Error. We already found SS Total, and SS Effect, so now we can solve for SS Error just like this: $SS_\text{total} = SS_\text{Effect} + SS_\text{Error (within-conditions)} \nonumber$ switching around: $SS_\text{Error} = SS_\text{total} - SS_\text{Effect} \nonumber$ $SS_\text{Error (within conditions)} = 302 - 72 = 230 \nonumber$ Or, we could compute $SS_\text{Error (within conditions)}$ directly from the data as we did last time: suppressPackageStartupMessages(library(dplyr)) scores <- c(20,11,2,6,2,7,2,11,2) conditions <- as.character(rep(c("A","B","C"), each=3)) subjects <-rep(1:3,3) means <-c(11,11,11,5,5,5,5,5,5) diff <-means-scores diff_squared <-diff^2 df<-data.frame(subjects,conditions,scores,means,diff, diff_squared) df$conditions<-as.character(df$conditions) df$subjects<-as.character(df$subjects) df <- df %>% rbind(c("Sums","", colSums(df[1:9,3:6]))) %>% rbind(c("Means","",colMeans(df[1:9,3:6]))) knitr::kable(df) subjects conditions scores means diff diff_squared 1 A 20 11 -9 81 2 A 11 11 0 0 3 A 2 11 9 81 1 B 6 5 -1 1 2 B 2 5 3 9 3 B 7 5 -2 4 1 C 2 5 3 9 2 C 11 5 -6 36 3 C 2 5 3 9 Sums   63 63 0 230 Means   7 7 0 25.5555555555556 When we compute $SS_\text{Error (within conditions)}$ directly, we find the difference between each score and the condition mean for that score. This gives us the remaining error variation around the condition mean, that the condition mean does not explain. SS Subjects Now we are ready to calculate new partition, called $SS_\text{Subjects}$. We first find the means for each subject. For subject 1, this is the mean of their scores across Conditions A, B, and C. The mean for subject 1 is 9.33 (repeating). Notice there is going to be some rounding error here, that’s OK for now. The means column now shows all of the subject means. We then find the difference between each subject mean and the grand mean. These deviations are shown in the diff column. Then we square the deviations, and sum them up. 
suppressPackageStartupMessages(library(dplyr)) scores <- c(20,11,2,6,2,7,2,11,2) conditions <- as.character(rep(c("A","B","C"), each=3)) subjects <-rep(1:3,3) means <-c(9.33,8,3.66,9.33,8,3.66,9.33,8,3.66) diff <-means-mean(scores) diff_squared <-diff^2 df<-data.frame(subjects,conditions,scores,means,diff, diff_squared) df$conditions<-as.character(df$conditions) df$subjects<-as.character(df$subjects) df <- df %>% rbind(c("Sums","", colSums(df[1:9,3:6]))) %>% rbind(c("Means","",colMeans(df[1:9,3:6]))) knitr::kable(df) subjects conditions scores means diff diff_squared 1 A 20 9.33 2.33 5.4289 2 A 11 8 1 1 3 A 2 3.66 -3.34 11.1556 1 B 6 9.33 2.33 5.4289 2 B 2 8 1 1 3 B 7 3.66 -3.34 11.1556 1 C 2 9.33 2.33 5.4289 2 C 11 8 1 1 3 C 2 3.66 -3.34 11.1556 Sums   63 62.97 -0.0299999999999994 52.7535 Means   7 6.99666666666667 -0.00333333333333326 5.8615 We found that the sum of the squared deviations $SS_\text{Subjects}$ = 52.75. Note again, this has some small rounding error because some of the subject means had repeating decimal places, and did not divide evenly. We can see the effect of the rounding error if we look at the sum and mean in the diff column. We know these should be both zero, because the Grand mean is the balancing point in the data. The sum and mean are both very close to zero, but they are not zero because of rounding error. SS Error (left-over) Now we can do the last thing. Remember we wanted to split up the $SS_\text{Error (within conditions)}$ into two parts, $SS_\text{Subjects}$ and $SS_\text{Error (left-over)}$. Because we have already calculated $SS_\text{Error (within conditions)}$ and $SS_\text{Subjects}$, we can solve for $SS_\text{Error (left-over)}$: $SS_\text{Error (left-over)} = SS_\text{Error (within conditions)} - SS_\text{Subjects} \nonumber$ $SS_\text{Error (left-over)} = SS_\text{Error (within conditions)} - SS_\text{Subjects} = 230 - 52.75 = 177.25 \nonumber$ Check our work Before we continue to compute the MSEs and F-value for our data, let’s quickly check our work. For example, we could have R compute the repeated measures ANOVA for us, and then we could look at the ANOVA table and see if we are on the right track so far. suppressPackageStartupMessages(library(dplyr)) library(xtable) scores <- c(20,11,2,6,2,7,2,11,2) conditions <- as.character(rep(c("A","B","C"), each=3)) subjects <-rep(1:3,3) means <-c(9.33,8,3.66,9.33,8,3.66,9.33,8,3.66) diff <-means-mean(scores) diff_squared <-diff^2 df<-data.frame(subjects,conditions,scores,means,diff, diff_squared) df$conditions<-as.character(df$conditions) df$subjects<-as.character(df$subjects) summary_out <- summary(aov(scores~conditions + Error(subjects/conditions),df[1:9,])) knitr::kable(xtable(summary_out)) Df Sum Sq Mean Sq F value Pr(>F) Residuals 2 52.66667 26.33333 NA NA conditions 2 72.00000 36.00000 0.8120301 0.505848 Residuals 4 177.33333 44.33333 NA NA OK, looks good. We found the $SS_\text{Effect}$ to be 72, and the SS for the conditions (same thing) in the table is also 72. We found the $SS_\text{Subjects}$ to be 52.75, and the SS for the first residual (same thing) in the table is 52.66 repeating. That’s close, and our number is off because of rounding error. Finally, we found the $SS_\text{Error (left-over)}$ to be 177.25, and the SS for the bottom residuals (same thing) in the table is 177.33 repeating, again close but slightly off due to rounding error.
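The rounding error comes entirely from typing the subject means in by hand to two decimal places. If you let R compute them exactly, a short sketch like this reproduces the numbers in the aov() table; the object names are just for illustration.

```
scores <- c(20,11,2,6,2,7,2,11,2)
subjects <- factor(rep(1:3, 3))

subject_means <- ave(scores, subjects)                 # exact subject means, no rounding
SS_subjects <- sum((subject_means - mean(scores))^2)   # 52.66667, matching the first Residuals SS
SS_error_leftover <- 230 - SS_subjects                 # 177.3333, matching the bottom Residuals SS
```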
We have finished our job of computing the sums of squares that we need in order to do the next steps, which include computing the MSEs for the effect and the error term. Once we do that, we can find the F-value, which is the ratio of the two MSEs. Before we do that, you may have noticed that we solved for $SS_\text{Error (left-over)}$, rather than directly computing it from the data. In this chapter we are not going to show you the steps for doing this. We are not trying to hide anything from, instead it turns out these steps are related to another important idea in ANOVA. We discuss this idea, which is called an interaction in the next chapter, when we discuss factorial designs (designs with more than one independent variable). Compute the MSEs Calculating the MSEs (mean squared error) that we need for the $F$-value involves the same general steps as last time. We divide each SS by the degrees of freedom for the SS. The degrees of freedom for $SS_\text{Effect}$ are the same as before, the number of conditions - 1. We have three conditions, so the df is 2. Now we can compute the $MSE_\text{Effect}$. $MSE_\text{Effect} = \frac{SS_\text{Effect}}{df} = \frac{72}{2} = 36 \nonumber$ The degrees of freedom for $SS_\text{Error (left-over)}$ are different than before, they are the (number of subjects - 1) multiplied by the (number of conditions -1). We have 3 subjects and three conditions, so $(3-1) * (3-1) = 2*2 =4$. You might be wondering why we are multiplying these numbers. Hold that thought for now and wait until the next chapter. Regardless, now we can compute the $MSE_\text{Error (left-over)}$. $MSE_\text{Error (left-over)} = \frac{SS_\text{Error (left-over)}}{df} = \frac{177.33}{4}= 44.33 \nonumber$ Compute F We just found the two MSEs that we need to compute $F$. We went through all of this to compute $F$ for our data, so let’s do it: $F = \frac{MSE_\text{Effect}}{MSE_\text{Error (left-over)}} = \frac{36}{44.33}= 0.812 \nonumber$ And, there we have it! p-value We already conducted the repeated-measures ANOVA using R and reported the ANOVA. Here it is again. The table shows the $p$-value associated with our $F$-value. suppressPackageStartupMessages(library(dplyr)) library(xtable) scores <- c(20,11,2,6,2,7,2,11,2) conditions <- as.character(rep(c("A","B","C"), each=3)) subjects <-rep(1:3,3) means <-c(9.33,8,3.66,9.33,8,3.66,9.33,8,3.66) diff <-means-mean(scores) diff_squared <-diff^2 df<-data.frame(subjects,conditions,scores,means,diff, diff_squared) df$conditions<-as.character(df$conditions) df$subjects<-as.character(df$subjects) summary_out <- summary(aov(scores~conditions + Error(subjects/conditions),df[1:9,])) knitr::kable(xtable(summary_out)) Df Sum Sq Mean Sq F value Pr(>F) Residuals 2 52.66667 26.33333 NA NA conditions 2 72.00000 36.00000 0.8120301 0.505848 Residuals 4 177.33333 44.33333 NA NA We might write up the results of our experiment and say that the main effect condition was not significant, F(2,4) = 0.812, MSE = 44.33, p = 0.505. What does this statement mean? Remember, that the $p$-value represents the probability of getting the $F$ value we observed or larger under the null (assuming that the samples come from the same distribution, the assumption of no differences). So, we know that an $F$-value of 0.812 or larger happens fairly often by chance (when there are no real differences), in fact it happens 50.5% of the time. As a result, we do not reject the idea that any differences in the means we have observed could have been produced by chance.
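As a quick check on where that $p$-value comes from, you can get it straight from the $F$ distribution with the degrees of freedom we just worked out; `pf()` gives the upper-tail probability:

```
pf(0.812, df1 = 2, df2 = 4, lower.tail = FALSE)   # about 0.506, matching Pr(>F) in the table
```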
8.04: Things worth knowing
Repeated Measures ANOVAs have some special properties that are worth knowing about. The main special property is that the error term used for the $F$-value (the MSE in the denominator) will always be smaller than the error term used for the $F$-value in the ANOVA for a between-subjects design. We discussed this earlier. It is smaller, because we subtract out the error associated with the subject means.

This can have the consequence of generally making $F$-values in repeated measures designs larger than $F$-values in between-subjects designs. When the number in the bottom of the $F$ formula is generally smaller, it will generally make the resulting ratio a larger number. That's what happens when you make the number in the bottom smaller. Because big $F$ values usually let us reject the idea that differences in our means are due to chance, the repeated-measures ANOVA becomes a more sensitive test of the differences (its $F$-values are usually larger).

At the same time, there is a trade-off here. The repeated measures ANOVA uses different degrees of freedom for the error term, and these are typically a smaller number of degrees of freedom. So, the $F$-distributions for the repeated measures and between-subjects designs are actually different $F$-distributions, because they have different degrees of freedom.

Repeated vs between-subjects ANOVA

Let's do a couple of simulations to see some of the differences between the ANOVA for a repeated measures design, and the ANOVA for a between-subjects design. We will do the following.

1. Simulate a design with three conditions, A, B, and C.
2. Sample 10 scores into each condition from the same normal distribution (mean = 100, SD = 10).
3. We will include a subject factor for the repeated-measures version. Here there are 10 subjects, each contributing three scores, one in each condition.
4. For the between-subjects design there are 30 different subjects, each contributing one score in the condition they were assigned to (really the group).

We run 1000 simulated experiments for each design. We calculate the $F$ for each experiment, for both the between and repeated measures designs. Here are the two sampling distributions of $F$ for both designs.

These two $F$ sampling distributions look pretty similar. However, they are subtly different. The between $F$ distribution has degrees of freedom 2 and 27, for the numerator and denominator. There are 3 conditions, so $\textit{df}_{1}$ = 3 - 1 = 2. There are 30 subjects, so $\textit{df}_{2}$ = 30 - 3 = 27. The critical value, assuming an alpha of 0.05, is 3.35. This means $F$ is 3.35 or larger 5% of the time under the null.

The repeated-measures $F$ distribution has degrees of freedom 2 and 18, for the numerator and denominator. There are 3 conditions, so $\textit{df}_{1}$ = 3 - 1 = 2. There are 10 subjects, so $\textit{df}_{2}$ = (10 - 1)(3 - 1) = 9 x 2 = 18. The critical value, assuming an alpha of 0.05, is 3.55. This means $F$ is 3.55 or larger 5% of the time under the null.

The critical value for the repeated measures version is slightly higher. This is because when $\textit{df}_{2}$ (the denominator) is smaller, the $F$-distribution spreads out to the right a little bit. When it is skewed like this, we get some bigger $F$s a greater proportion of the time. So, in order to detect a real difference, you need an $F$ of 3.35 or greater in a between-subjects design, or an $F$ of 3.55 or greater for a repeated-measures design.
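Here is a minimal sketch of how a simulation like this could be run in R. This is our own illustration, not the code used to make the figures in the text; it draws every score from the same normal distribution (mean = 100, SD = 10), extracts the $F$ for conditions under each design, and then checks the critical values quoted above.

```r
set.seed(1)
n_sims <- 1000

# Between-subjects design: 30 different subjects, 10 per condition
between_F <- replicate(n_sims, {
  scores <- rnorm(30, mean = 100, sd = 10)
  conditions <- factor(rep(c("A", "B", "C"), each = 10))
  summary(aov(scores ~ conditions))[[1]]$`F value`[1]
})

# Repeated-measures design: 10 subjects, each contributing one score per condition
repeated_F <- replicate(n_sims, {
  scores <- rnorm(30, mean = 100, sd = 10)
  conditions <- factor(rep(c("A", "B", "C"), each = 10))
  subjects <- factor(rep(1:10, 3))
  out <- summary(aov(scores ~ conditions + Error(subjects/conditions)))
  out[[2]][[1]]$`F value`[1]   # F for conditions, from the within-subjects stratum
})

# Critical values quoted in the text
qf(0.95, df1 = 2, df2 = 27)   # ~3.35, between-subjects
qf(0.95, df1 = 2, df2 = 18)   # ~3.55, repeated measures
```

You could then plot histograms of `between_F` and `repeated_F` to get the two sampling distributions described above.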
The catch here is that when there is a real difference between the means, you will detect it more often with the repeated-measures design, even though you need a larger $F$ (to pass the higher critical $F$-value for the repeated measures design).

Repeated measures designs are more sensitive

To illustrate why repeated-measures designs are more sensitive, we will conduct another set of simulations. We will do something slightly different this time. We will make sure that the scores for condition A are always a little bit higher than the other scores. In other words, we will program in a real true difference. Specifically, the scores for condition A will be sampled from a normal distribution with mean = 105, and SD = 10. This mean is 5 larger than the means for the other two conditions (still set to 100).

With a real difference in the means, we should now reject the hypothesis of no differences more often. We should find $F$ values larger than the critical value more often. And, we should find $p$-values for each experiment that are smaller than .05 more often; those should occur more than 5% of the time.

To look at this we conduct 1000 experiments for each design, run the ANOVA for each, and save the $p$-value from each experiment. This is like asking how many times we will find a $p$-value less than 0.05, when there is a real difference (in this case an average of 5) between some of the means. We will plot histograms of the $p$-values.

Here we have two distributions of observed p-values for the simulations. The red line shows the location of 0.05. Overall, we can see that for both designs, we got a full range of $p$-values from 0 to 1. This means that many times we would not have rejected the hypothesis of no differences (even though we know there is a small difference). We would have rejected the null every time the $p$-value was less than 0.05.

For the between-subjects design, there were 599 experiments with a $p$ less than 0.05, or 0.599 of experiments were "significant", with alpha = .05. For the within-subjects design, there were 570 experiments with a $p$ less than 0.05, or 0.57 of experiments were "significant", with alpha = .05.

OK, well, you still might not be impressed. In this case, the between-subjects design detected the true effect slightly more often than the repeated measures design. Both of them detected the true difference a little less than 60% of the time. Based on this, we could say the two designs are pretty comparable in their sensitivity, or ability to detect a true difference when there is one.

However, remember that the between-subjects design uses 30 subjects, and the repeated measures design only uses 10. We had to make a big investment to get our 30 subjects. And, we're kind of unfairly comparing the between design (which is more sensitive because it has more subjects) with the repeated measures design that has fewer subjects. What do you think would happen if we ran 30 subjects in the repeated measures design? Let's find out. Here we redo the above, but this time only for the repeated measures design. We increase $N$ from 10 to 30.

Wowsers! Look at that. When we ran 30 subjects in the repeated measures design almost all of the $p$-values were less than .05. There were 982 experiments with a $p$ less than 0.05, or 0.982 of experiments were "significant", with alpha = .05. That's huge! If we ran the repeated measures design, we would almost always detect the true difference when it is there. This is why the repeated measures design can be more sensitive than the between-subjects design.
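Here is a rough sketch of how the power comparison described above could be simulated in R. Again, this is our own illustration rather than the code behind the figures: condition A has a true mean of 105, B and C have means of 100 (SD = 10 for all), and we count the proportion of simulated experiments with $p$ < 0.05.

```r
set.seed(1)
n_sims <- 1000

# Returns the p-value for the conditions effect from one simulated experiment
sim_p <- function(n, repeated = TRUE) {
  scores <- c(rnorm(n, 105, 10),   # condition A: true mean is 5 higher
              rnorm(n, 100, 10),   # condition B
              rnorm(n, 100, 10))   # condition C
  conditions <- factor(rep(c("A", "B", "C"), each = n))
  if (repeated) {
    subjects <- factor(rep(1:n, 3))
    out <- summary(aov(scores ~ conditions + Error(subjects/conditions)))
    out[[2]][[1]]$`Pr(>F)`[1]
  } else {
    summary(aov(scores ~ conditions))[[1]]$`Pr(>F)`[1]
  }
}

mean(replicate(n_sims, sim_p(10, repeated = FALSE)) < 0.05)  # between, 30 subjects total
mean(replicate(n_sims, sim_p(10, repeated = TRUE))  < 0.05)  # repeated, 10 subjects
mean(replicate(n_sims, sim_p(30, repeated = TRUE))  < 0.05)  # repeated, 30 subjects
```

Note that this sketch draws each subject's three scores independently, with no built-in correlation across a subject's scores, which appears to be how the proportions reported above were generated.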
8.05: Real Data
Let's look at some real data from a published experiment that uses a repeated measures design. This is the same example that you will be using in the lab for repeated measures ANOVA. The data happen to be taken from a recent study conducted by Lawrence Behmer and myself, at Brooklyn College (Behmer and Crump 2017).

We were interested in how people perform sequences of actions. One question is whether people learn individual parts of actions, or the whole larger pattern of a sequence of actions. We looked at these issues in a computer keyboard typing task. One of our questions was whether we would replicate some well known findings about how people type words and letters. From prior work we knew that people type words way faster than random letters, but if you made the random letters a little bit more English-like, then people type those letter strings a little bit faster than the purely random ones, though still not as fast as words.

In the study, 38 participants sat in front of a computer and typed 5-letter strings one at a time. Sometimes the 5 letters made a word (Normal condition, TRUCK), sometimes they were completely random (Random condition, JWYFG), and sometimes they followed patterns like you find in English (Bigram condition, QUEND), but were not actual words. So, the independent variable for the typing material had three levels. We measured every single keystroke that participants made. This gave us a few different dependent measures. Let's take a look at the reaction times. This is how long it took for participants to start typing the first letter in the string.

OK, I made a figure showing the mean reaction times for the different typing material conditions. You will notice that there are two sets of lines. That's because there was another manipulation I didn't tell you about. In one block of trials participants got to look at the keyboard while they typed, but in the other condition we covered up the keyboard so people had to type without looking. Finally, the error bars are standard errors of the means.

Note

Note, the use of error bars for repeated-measures designs is not very straightforward. In fact, the standard errors of the means that we have added here are not very meaningful for judging whether the differences between the means are likely not due to chance. They would be if this was a between-subjects design. We will update this textbook with a longer discussion of this issue; for now we will just live with these error bars.

For the purpose of this example, we will say, it sure looks like the previous finding replicated. For example, people started typing Normal words faster than Bigram strings (English-like), and they started typing random letters the most slowly of all. Just like prior research had found.

Let's focus only on the block of trials where participants were allowed to look at the keyboard while they typed; that's the red line, for the "visible keyboard" block. We can see the means look different. Let's next ask: what is the likelihood that chance (random sampling error) could have produced these mean differences? To do that we run a repeated-measures ANOVA in R. Here is the ANOVA table.
```r
library(data.table)
library(ggplot2)
library(xtable)
suppressPackageStartupMessages(library(dplyr))

exp1_data <- fread(
  "https://raw.githubusercontent.com/CrumpLab/statistics/master/data/exp1_BehmerCrumpAPP.csv")
exp1_data$Block <- as.factor(exp1_data$Block)
levels(exp1_data$Block) <- c("Visible keyboard", "Covered Keyboard")

## get subject mean RTs
subject_means <- exp1_data %>%
  filter(Order == 1, Correct == 1, PureRTs < 5000) %>%
  dplyr::group_by(Subject, Block, Stimulus) %>%
  dplyr::summarise(mean_rt = mean(PureRTs), .groups = 'drop_last')

subject_means$Subject  <- as.factor(subject_means$Subject)
subject_means$Block    <- as.factor(subject_means$Block)
subject_means$Stimulus <- as.factor(subject_means$Stimulus)

visible_means <- subject_means %>% filter(Block == "Visible keyboard")

s_out <- summary(aov(mean_rt ~ Stimulus + Error(Subject/Stimulus), visible_means))
knitr::kable(xtable(s_out))
```

|  | Df | Sum Sq | Mean Sq | F value | Pr(>F) |
|---|---|---|---|---|---|
| Residuals | 37 | 2452611.9 | 66286.808 | NA | NA |
| Stimulus | 2 | 1424914.0 | 712457.010 | 235.7342 | 0 |
| Residuals1 | 74 | 223649.4 | 3022.289 | NA | NA |

Alright, we might report the results like this. There was a significant main effect of Stimulus type, F(2, 74) = 235.73, MSE = 3022.289, p < 0.001.

Notice a couple of things. First, this is a huge \(F\)-value. It's over 235! Notice also that the p-value is listed as 0. That doesn't mean there is zero chance of getting an F-value this big under the null. This is a rounding error. The true p-value is 0.00000000000000… The zeros keep going for a while. This means there is only a vanishingly small probability that these differences could have been produced by sampling error.

So, we reject the idea that the differences between our means could be explained by chance. Instead, we are pretty confident, based on this evidence and previous work showing the same thing, that our experimental manipulation caused the difference. In other words, people really do type normal words faster than random letters, and they type English-like strings somewhere in the middle in terms of speed.

8.06: Summary

In this chapter you were introduced to the repeated-measures ANOVA. This analysis is appropriate for within-subjects or repeated measures designs. The main difference between the independent factor ANOVA and the repeated measures ANOVA is the ability to partial out variance due to the individual subject means. This can often result in the repeated-measures ANOVA being more sensitive to true effects than the between-subjects ANOVA.

8.07: References

Behmer, Lawrence P, and Matthew JC Crump. 2017. "Spatial Knowledge During Skilled Action Sequencing: Hierarchical Versus Nonhierarchical Representations." Attention, Perception, & Psychophysics 79 (8): 2435–48.
9.00: Prelude to Factorial ANOVA
We have arrived at the most complicated thing we are going to discuss in this class. Unfortunately, we have to warn you that you might find this next stuff a bit complicated. You might not, and that would be great! We will try our best to present the issues in a few different ways, so you have a few different tools to help you understand the issue.

What's this so very complicated issue? Well, the first part isn't that complicated. For example, up until now we have been talking about experiments. Most every experiment has had two important bits: the independent variable (the manipulation), and the dependent variable (what we measure). In most cases, our independent variable has had two levels, or three or four; but, there has only been one independent variable.

What if you wanted to manipulate more than one independent variable? If you did that, you would have at least two independent variables, each with their own levels. The rest of the book is about designs with more than one independent variable, and the statistical tests we use to analyze those designs.

Let's go through some examples of designs so you can see what we are talking about. We will be imagining experiments that are trying to improve students' grades. So, the dependent variable will always be grade on a test.

1. 1 IV (two levels). We would use a t-test for these designs, because they only have two levels.
   1. Time of day (Morning versus Afternoon): Do students do better on tests when they take them in the morning versus the afternoon? There is one IV (time of day), with two levels (Morning vs. Afternoon).
   2. Caffeine (some caffeine vs. no caffeine): Do students do better on tests when they drink caffeine versus not drinking caffeine? There is one IV (caffeine), with two levels (some caffeine vs. no caffeine).
2. 1 IV (three levels). We would use an ANOVA for these designs because they have more than two levels.
   1. Time of day (Morning, Afternoon, Night): Do students do better on tests when they take them in the morning, the afternoon, or at night? There is one IV (time of day), with three levels (Morning, Afternoon, and Night).
   2. Caffeine (1 coffee, 2 coffees, 3 coffees): Do students do better on tests when they drink 1 coffee, 2 coffees, or 3 coffees? There is one IV (caffeine), with three levels (1 coffee, 2 coffees, and 3 coffees).
3. 2 IVs, IV1 (two levels), IV2 (two levels). We haven't talked about what kind of test to run for this design (hint: it is called a factorial ANOVA).
   1. IV1 (Time of Day: Morning vs. Afternoon); IV2 (Caffeine: some caffeine vs. no caffeine): How does time of day and caffeine consumption influence student grades? We had students take tests in the morning or in the afternoon, with or without caffeine. There are two IVs (time of day & caffeine). IV1 (Time of day) has two levels (morning vs. afternoon). IV2 (caffeine) has two levels (some caffeine vs. no caffeine).

OK, let's stop here for the moment. The first two designs both had one IV. The third design shows an example of a design with 2 IVs (time of day and caffeine), each with two levels. This is called a 2x2 Factorial Design. It is called a factorial design because the levels of each independent variable are fully crossed. This means that for each level of one IV, the levels of the other IV are also manipulated.

"HOLD ON STOP PLEASE!" Yes, it seems as if we are starting to talk in the foreign language of statistics and research designs. We apologize for that. We'll keep mixing it up with some plain language, and some pictures.
9.01: Factorial Basics
2x2 Designs

We've just started talking about a 2x2 Factorial design. We said this means the IVs are crossed. To illustrate this, take a look at the following tables. We show an abstract version and a concrete version using time of day and caffeine as the two IVs, each with two levels in the design.

Let's talk about this crossing business. Here's what it means for the design. For the first level of Time of Day (morning), we measure test performance when some people drank caffeine and some did not. So, in the morning we manipulate whether or not caffeine is taken. Also, in the second level of Time of Day (afternoon), we also manipulate caffeine. Some people drink or don't drink caffeine in the afternoon as well, and we collect measures of test performance in both conditions.

We could say the same thing, but talk from the point of view of the second IV. For example, when people drink caffeine, we test those people in the morning, and in the afternoon. So, time of day is manipulated for the people who drank caffeine. Also, when people do not drink caffeine, we test those people in the morning, and in the afternoon. So, time of day is manipulated for the people who did not drink caffeine.

Finally, each of the four squares representing a DV measurement is called a condition. So, we have 2 IVs, each with 2 levels, for a total of 4 conditions. This is why we call it a 2x2 design. 2x2 = 4. The notation tells us how to calculate the total number of conditions.

Factorial Notation

Anytime all of the levels of each IV in a design are fully crossed, so that they all occur for each level of every other IV, we can say the design is a fully factorial design.

We use a notation system to refer to these designs. The rules for notation are as follows. Each IV gets its own number. The number of levels in the IV is the number we use for the IV. Let's look at some examples:

- 2x2 = There are two IVs; the first IV has two levels, the second IV has two levels. There are a total of 4 conditions, 2x2 = 4.
- 2x3 = There are two IVs; the first IV has two levels, the second IV has three levels. There are a total of 6 conditions, 2x3 = 6.
- 3x2 = There are two IVs; the first IV has three levels, the second IV has two levels. There are a total of 6 conditions, 3x2 = 6.
- 4x4 = There are two IVs; the first IV has 4 levels, the second IV has 4 levels. There are a total of 16 conditions, 4x4 = 16.
- 2x3x2 = There are a total of three IVs. The first IV has 2 levels. The second IV has 3 levels. The third IV has 2 levels. There are a total of 12 conditions. 2x3x2 = 12.

2x3 designs

Just for fun, let's illustrate a 2x3 design using the same kinds of tables we looked at before for the 2x2 design. All we did was add another row for the second IV. It's a 2x3 design, so it should have 6 conditions. As you can see, there are now 6 cells to measure the DV.
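If it helps to see the crossing spelled out concretely, here is a tiny R illustration of our own (not the textbook's) that lists every condition in a fully crossed design using `expand.grid`. Each row of the output is one condition, so a 2x2 gives 4 rows and a 2x3 gives 6 rows.

```r
# A 2x2 design: every level of Time of Day crossed with every level of Caffeine
expand.grid(TimeOfDay = c("Morning", "Afternoon"),
            Caffeine  = c("None", "Some"))        # 4 rows = 4 conditions

# A 2x3 design: two levels of Time of Day crossed with three levels of Caffeine
expand.grid(TimeOfDay = c("Morning", "Afternoon"),
            Caffeine  = c("1 coffee", "2 coffees", "3 coffees"))   # 6 rows = 6 conditions
```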
9.02: Purpose of Factorial Designs
Factorial designs let researchers manipulate more than one thing at once. This immediately makes things more complicated, because as you will see, there are many more details to keep track of. Why would researchers want to make things more complicated? Why would they want to manipulate more than one IV at a time?

Before we go on, let's clarify what we mean by manipulating more than one thing at once. When you have one IV in your design, by definition, you are manipulating only one thing. This might seem confusing at first, because the IV has more than one level, so it seems to have more than one manipulation. Consider manipulating the number of coffees that people drink before they do a test. We could have one IV (coffee), with three levels (1, 2, or 3 coffees). You might want to say we have three manipulations here: drinking 1, 2, or 3 coffees. But, the way we define manipulation is in terms of the IV. There is only one coffee IV. It does have three levels. Nevertheless, we say you are only doing one coffee manipulation. The only thing you are manipulating is the amount of coffee. That's just one thing, so it's called one manipulation. To do another, second manipulation, you need to additionally manipulate something that is not coffee (like time of day in our previous example).

Returning to our question: why would researchers want to manipulate more than one thing in their experiment? The answer might be kind of obvious. They want to know if more than one thing causes change in the thing they are measuring! For example, if you are measuring people's happiness, you might assume that more than one thing causes happiness to change. If you wanted to track down how two things caused changes in happiness, then you might want to have two manipulations of two different IVs.

This is not a wrong way to think about the reasons why researchers use factorial designs. They are often interested in questions like this. However, we think this is an unhelpful way to first learn about factorial designs. We present a slightly different way of thinking about the usefulness of factorial designs, and we think it is so important, it gets its own section.

Factorials manipulate an effect of interest

Here is how researchers often use factorial designs to understand the causal influences behind the effects they are interested in measuring. Notice we didn't say the dependent variables they are measuring; we are now talking about something called effects. Effects are the change in a measure caused by a manipulation. You get an effect any time one IV causes a change in a DV.

Here is an example. We will stick with this one example for a while, so pay attention… In fact, the example is about paying attention. Let's say you wanted to measure something like paying attention. You could do something like this:

1. Pick a task for people to do that you can measure. For example, you can measure how well they perform the task. That will be the dependent measure.
2. Pick a manipulation that you think will cause differences in paying attention. For example, we know that people can get distracted easily when there are distracting things around. You could have two levels for your manipulation: no distraction versus distraction.
3. Measure performance in the task under the two conditions.
4. If your distraction manipulation changes how people perform the task, you may have successfully manipulated how well people can pay attention in your task.

Spot the difference

Let's elaborate this with another fake example. First, we pick a task.
It's called spot the difference. You may have played this game before. You look at two pictures side-by-side, and then you locate as many differences as you can find. Here is an example:

How many differences can you spot? When you look for the differences, it feels like you are doing something we would call "paying attention". If you pay attention to the clock tower, you will see that the hands on the clock are different. Ya! One difference spotted.

We could give people 30 seconds to find as many differences as they can. Then we give them another set of pictures and do it again. Every time, we will measure how many differences they can spot. So, our measure of performance, our dependent variable, could be the mean number of differences spotted.

Distraction manipulation

Now, let's think about a manipulation that might cause differences in how people pay attention. If people need to pay attention to spot differences, then presumably if we made it difficult to pay attention, people would spot fewer differences. What is a good way to distract people? I'm sure there are lots of ways to do this. How about we do the following:

1. No distraction condition: Here people do the task with no added distractions. They sit in front of a computer, in a quiet, distraction-free room, and find as many differences as they can for each pair of pictures.
2. Distraction condition: Here we blast super loud ambulance sounds and fire alarms and heavy metal music while people attempt to spot differences. We also randomly turn the sounds on and off, and make them super-duper annoying and distracting. We make sure that the sounds aren't loud enough to do any physical damage to anybody's ear-drums. But, we want to make them loud enough to be super distracting. If you don't like this, we could also tickle people with a feather, or whisper silly things into their ears, or surround them by clowns, or whatever we want; it just has to be super distracting.

Distraction effect

If our distraction manipulation is super-distracting, then what should we expect to find when we compare spot-the-difference performance between the no-distraction and distraction conditions? We should find a difference! If our manipulation works, then we should find that people find more differences when they are not distracted, and fewer differences when they are distracted. For example, the data might look something like this:

The figure shows a big difference in the mean number of differences spotted. People found 5 differences on average when they were distracted, and 10 differences when they were not distracted. We labelled the figure "The distraction effect", because it shows a big effect of distraction. The effect of distraction is a mean difference of 5 spotted differences. It's the difference between performance in the Distraction and No-Distraction conditions. In general, it is very common to use the word effect to refer to the differences caused by the manipulation. We manipulated distraction, it caused a difference, so we call this the "distraction effect".

Manipulating the Distraction effect

This is where factorial designs come into play. We have done the hard work of finding an effect of interest, in this case the distraction effect. We think this distraction effect actually measures something about your ability to pay attention.
For example, if you were the kind of person who had a small distraction effect (maybe you find 10 differences when you are not distracted, and 9 differences when you are distracted), that could mean you are very good at ignoring distracting things while you are paying attention. On the other hand, you could be the kind of person who had a big distraction effect (maybe you found 10 differences under no distraction, and only 1 difference when you were distracted); this could mean you are not very good at ignoring distracting things while you are paying attention.

Overall now, we are thinking of our distraction effect (the difference in performance between the two conditions) as the important thing we want to measure. We then might want to know how to make people better at ignoring distracting things. Or, we might want to know what makes people worse at ignoring things. In other words, we want to find out what manipulations control the size of the distraction effect (make it bigger or smaller, or even flip it around!).

Maybe there is a special drug that helps you ignore distracting things. People taking this drug should be less distracted, and if they took this drug while completing our task, they should have a smaller distraction effect compared to people not taking the drug. Maybe rewarding people with money can help you pay attention and ignore distracting things better. People receiving 5 dollars every time they spot a difference might be able to focus more because of the reward, and they would show a smaller distraction effect in our task, compared to people who got no money for finding differences.

Let's see what this would look like. We are going to add a second IV to our task. The second IV will manipulate reward. In one condition, people will get 5 dollars for every difference they find (so they could leave the study with lots of money if they find lots of differences). In the other condition, people will get no money, but they will still have to find differences. Remember, this will be a factorial design, so everybody will have to find differences when they are distracted and when they are not distracted.

The question we are now asking is: will manipulating reward cause a change in the size of the distraction effect? We could predict that people receiving rewards will have a smaller distraction effect than people not receiving rewards. If that happened, the data would look something like this:

I've just shown you a new kind of graph. I apologize right now for showing this to you first. It's more unhelpful than the next graph. What I did was keep the x-axis the same as before (to be consistent). So, we have distraction vs. no-distraction on the x-axis. In the distraction condition, there are means for spot-the-difference performance in the no-reward (red) and reward (aqua) conditions. The same goes for the no-distraction condition: a red and an aqua bar for the no-reward and reward conditions.

We can try to interpret this graph, but the next graph plots the same data in a different way, which makes it easier to see what we are talking about. All we did was change the x-axis. Now the left side of the x-axis is for the no-reward condition, and the right side is for the reward condition. The red bar is for the distraction condition, and the aqua bar is for the no-distraction condition. It is easier to see the distraction effect in this graph. The distraction effect is the difference in size between the red and aqua bars.
For each reward condition, the red and aqua bars are right beside each other, so we can see if there is a difference between them more easily, compared to the first graph.

No-Reward condition: In the no-reward condition, people played spot the difference when they were distracted and when they were not distracted. This is a replication of our first fake study. We should expect to find the same pattern of results, and that's what the graph shows. There was a difference of 5. People found 5 differences when they were distracted and 10 when they were not distracted. So, there was a distraction effect of 5, same as we had last time.

Reward condition: In the reward condition, people played spot the difference when they were distracted and when they were not distracted. Except, they got 5 dollars every time they spotted a difference. We predicted this would cause people to pay more attention and do a better job of ignoring distracting things. The graph shows this is what happened. People found 9 differences when they were distracted and 11 when they were not distracted. So, there was a distraction effect of 2.

If we had conducted this study, we might have concluded that reward can manipulate the distraction effect. When there was no reward, the size of the distraction effect was 5. When there was reward, the size of the distraction effect was 2. So, the reward manipulation changed the size of the distraction effect by 3 (5 - 2 = 3).

This is our description of why factorial designs are so useful. They allow researchers to find out what kinds of manipulations can cause changes in the effects they measure. We measured the distraction effect, then we found that reward causes changes in the distraction effect. If we were trying to understand how paying attention works, we would then need to explain how it is that reward levels could causally change how people pay attention. We would have some evidence that reward does cause change in paying attention, and we would have to come up with some explanations, and then run more experiments to test whether those explanations hold water.
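To make the arithmetic of this "difference of differences" completely explicit, here is a tiny R sketch of our own, using the hypothetical condition means from the example above (5, 10, 9, and 11 differences spotted):

```r
# Hypothetical condition means from the example (not real data)
no_reward <- c(distraction = 5, no_distraction = 10)
reward    <- c(distraction = 9, no_distraction = 11)

distraction_effect_no_reward <- no_reward["no_distraction"] - no_reward["distraction"]  # 5
distraction_effect_reward    <- reward["no_distraction"]    - reward["distraction"]     # 2

# How much did reward change the size of the distraction effect?
distraction_effect_no_reward - distraction_effect_reward                                # 3
```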
9.03: Graphing the means
In our example above we showed you two bar graphs of the very same means for our 2x2 design. Even though the graphs plot identical means, they look different, so they are more or less easy to interpret by looking at them. Results from 2x2 designs are also often plotted with line graphs. Those look different too. Here are four different graphs, using bars and lines to plot the very same means from before. We are showing you this so that you realize that how you graph your data matters, and it makes it more or less easy for people to understand the results. Also, how the data is plotted matters for what you need to look at to interpret the results.

9.04: Knowing what you want to find out

When you conduct a design with more than one IV, you get more means to look at. As a result, there are more kinds of questions that you can ask of the data. Sometimes it turns out that the questions that you can ask are not the ones that you want to ask, or have an interest in asking. Because you ran the design with more than one IV, you have the opportunity to ask these kinds of extra questions.

What kinds of new things are we talking about? Let's keep going with our distraction effect experiment. We have the first IV where we manipulated distraction. So, we could find the overall means in spot-the-difference performance for the distraction vs. no-distraction conditions (that's two means). The second IV was reward. We could find the overall means in spot-the-difference performance for the reward vs. no-reward conditions (that's two more means). We could do what we already did, and look at the means for each combination, that is, the mean for distraction/reward, distraction/no-reward, no-distraction/reward, and no-distraction/no-reward (that's four more means, if you're counting). There's even more. We could look at the mean distraction effect (the difference between distraction and no-distraction) for the reward condition, and the mean distraction effect for the no-reward condition (that's two more). I hope you see here that there are a lot of means to look at. And they are all different means. Let's look at all of them together in one graph with four panels.

The purpose of showing all of these means is to orient you to your problem. If you conduct a 2x2 design (and this is the simplest factorial that you can conduct), you will get all of these means. You need to know what you want to know from the means. That is, you need to be able to connect the research question to the specific means you are interested in analyzing.

For example, in our example, the research question was whether reward would change the size of the distraction effect. The top left panel gives us some info about this question. We can see all of the condition means, and we can visually see that the distraction effect was larger in the no-reward compared to the reward condition. But, to "see" this, we need to do some visual subtraction. You need to look at the difference between the red and aqua bars for each of the reward and no-reward conditions.

Does the top right panel tell us about whether reward changed the size of the distraction effect? NO, it just shows that there was an overall distraction effect (this is called the main effect of distraction). Main effects are any differences between the levels of one independent variable. Does the bottom left panel tell us about whether reward changed the size of the distraction effect? NO! It just shows that there was an overall reward effect, called the main effect of reward.
People who were rewarded spotted a few more differences than the people who weren't, but this doesn't tell us if they were any less distracted. Finally, how about the bottom right panel? Does this tell us about whether the reward changed the size of the distraction effect? YES! Notice, the y-axis is different for this panel. The y-axis here is labelled "Distraction Effect". You are looking at two difference scores: the distraction effect in the no-reward condition (10 - 5 = 5), and the distraction effect in the reward condition (11 - 9 = 2). These two bars are different as a function of reward. So, it looks like reward did produce a difference between the distraction effects! This was the whole point of the fake study. It is these means that were most important for answering the question of the study. As a very last point, this panel contains what we call an interaction. We explain this in the next section.

Pro tip: Make sure you know what you want to know from your means before you run the study, otherwise you will just have way too many means, and you won't know what they mean.
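If you want to try re-plotting the example means yourself, to see how much the choice of x-axis changes what is easy to see, here is a small ggplot2 sketch (our illustration, using the hypothetical means from this example):

```r
library(ggplot2)

# Hypothetical condition means from the example
df <- data.frame(
  Distraction  = rep(c("Distraction", "No Distraction"), times = 2),
  Reward       = rep(c("No Reward", "Reward"), each = 2),
  Mean_spotted = c(5, 10, 9, 11)
)

# Version 1: Distraction on the x-axis, Reward as the fill color
ggplot(df, aes(x = Distraction, y = Mean_spotted, fill = Reward)) +
  geom_col(position = "dodge")

# Version 2: Reward on the x-axis, Distraction as the fill color
# (the distraction effect within each reward condition is easier to see this way)
ggplot(df, aes(x = Reward, y = Mean_spotted, fill = Distraction)) +
  geom_col(position = "dodge")
```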
9.05: Simple analysis of 2x2 repeated measures design
Normally in a chapter about factorial designs we would introduce you to Factorial ANOVAs, which are totally a thing. We will introduce you to them soon. But, before we do that, we are going to show you how to analyze a 2x2 repeated measures design with paired-samples t-tests. This is probably something you won't do very often. However, it turns out the answers you get from this method are the same ones you would get from an ANOVA. Admittedly, if you found the explanation of ANOVA complicated, it will just appear even more complicated for factorial designs. So, our purpose here is to delay the complication, and show you with t-tests what it is that the Factorial ANOVA is doing. More important, when you do the analysis with t-tests, you have to be very careful to make all of the comparisons in the right way. As a result, you will get some experience learning how to know what it is you want to know from factorial designs. Once you know what you want to know, you can use the ANOVA to find out the answers, and then you will also know what answers to look for after you run the ANOVA. Isn't new knowledge fun!

The first thing we need to do is define main effects and interactions. Whenever you conduct a Factorial design, you will also have the opportunity to analyze main effects and interactions. However, the number of main effects and interactions you get to analyze depends on the number of IVs in the design.

Main effects

Formally, main effects are the mean differences for a single independent variable. There is always one main effect for each IV. A 2x2 design has 2 IVs, so there are two main effects. In our example, there is one main effect for distraction, and one main effect for reward. We will often ask if the main effect of some IV is significant. This refers to a statistical question: were the differences between the means for that IV likely or unlikely to be caused by chance (sampling error)?

If you had a 2x2x2 design, you would measure three main effects, one for each IV. If you had a 3x3x3 design, you would still only have 3 IVs, so you would have three main effects.

Interaction

We find that the interaction concept is one of the most confusing concepts for factorial designs. Formally, we might say an interaction occurs whenever the effect of one IV has an influence on the size of the effect for another IV. That's probably not very helpful. In more concrete terms, using our example, we found that the reward IV had an effect on the size of the distraction effect. The distraction effect was larger when there was no reward, and it was smaller when there was a reward. So, there was an interaction.

We might also say an interaction occurs when the difference between the differences is different! Yikes. Let's explain. There was a difference in spot-the-difference performance between the distraction and no-distraction conditions; this is called the distraction effect (it is a difference measure). The reward manipulation changed the size of the distraction effect; that means there was a difference in the size of the distraction effect. The distraction effect is itself a measure of differences. So, we did find that the differences (the two measures of the distraction effect across the reward conditions) were themselves different. When you start to write down explanations of what interactions are, you find out why they come across as complicated. We'll leave our definition of interaction like this for now.
Don't worry, we'll go through lots of examples to help firm up this concept for you.

The number of interactions in the design also depends on the number of IVs. For a 2x2 design there is only 1 interaction: the interaction between IV1 and IV2. This occurs when the effect of, say, IV2 (whether there is a difference between the levels of IV2) changes across the levels of IV1. We could write this in reverse, and ask if the effect of IV1 (whether there is a difference between the levels of IV1) changes across the levels of IV2. However, just because we can write this two ways does not mean there are two interactions. We'll see in a bit that no matter how we do the calculation to see if the difference scores (the measure of the effect for one IV) change across the levels of the other IV, we always get the same answer. That is why there is only one interaction for a 2x2. Similarly, there is only one interaction for a 3x3, because there again we only have two IVs (each with three levels).

Only when we get up to designs with more than 2 IVs do we find more possible interactions. A design with three IVs has four interactions. If the IVs are labelled A, B, and C, then we have three 2-way interactions (AB, AC, and BC), and one three-way interaction (ABC). We hold off on this stuff for much later.

Looking at the data

It is most helpful to see some data in order to understand how we will analyze it. Let's imagine we ran our fake attention study. We will have five people in the study, and they will participate in all conditions, so it will be a fully repeated-measures design. The data could look like this:

| subject | A: No Reward, No Distraction | B: No Reward, Distraction | C: Reward, No Distraction | D: Reward, Distraction |
|---|---|---|---|---|
| 1 | 10 | 5 | 12 | 9 |
| 2 | 8 | 4 | 13 | 8 |
| 3 | 11 | 3 | 14 | 10 |
| 4 | 9 | 4 | 11 | 11 |
| 5 | 10 | 2 | 13 | 12 |

Note: Number of differences spotted for each subject in each condition.

Main effect of Distraction

The main effect of distraction compares the overall means for all scores in the no-distraction and distraction conditions, collapsing over the reward conditions. The yellow columns show the no-distraction scores for each subject. The blue columns show the distraction scores for each subject. The overall means for each subject, for the two distraction conditions, are shown to the right. For example, subject 1 had a 10 and 12 in the no-distraction condition, so their mean is 11.

We are interested in the main effect of distraction. This is the difference between the AC column (average of subject scores in the no-distraction condition) and the BD column (average of the subject scores in the distraction condition). These differences for each subject are shown in the last green column. The overall means, averaging over subjects, are in the bottom green row.

Just looking at the means, we can see there was a main effect of distraction: the mean for the no-distraction condition was 11.1, and the mean for the distraction condition was 6.8. The size of the main effect was 4.3 (the difference between 11.1 and 6.8).

Now, what if we wanted to know if this main effect of distraction (the difference of 4.3) could have been caused by chance, or sampling error? You could do two things. You could run a paired samples \(t\)-test between the mean no-distraction scores for each subject (column AC) and the mean distraction scores for each subject (column BD). Or, you could run a one-sample \(t\)-test on the difference scores column, testing against a mean difference of 0. Either way you will get the same answer.
Here's the paired samples version:

```r
A <- c(10,8,11,9,10)   #nD_nR
B <- c(5,4,3,4,2)      #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12)   #D_R

AC <- (A+C)/2
BD <- (B+D)/2

t.test(AC, BD, paired=TRUE, var.equal=TRUE)
```

```
	Paired t-test

data:  AC and BD
t = 7.6615, df = 4, p-value = 0.00156
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 2.741724 5.858276
sample estimates:
mean of the differences 
                    4.3 
```

Here's the one sample version:

```r
A <- c(10,8,11,9,10)   #nD_nR
B <- c(5,4,3,4,2)      #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12)   #D_R

AC <- (A+C)/2
BD <- (B+D)/2

t.test(AC-BD, mu=0)
```

```
	One Sample t-test

data:  AC - BD
t = 7.6615, df = 4, p-value = 0.00156
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
 2.741724 5.858276
sample estimates:
mean of x 
      4.3 
```

If we were to write up our results for the main effect of distraction, we could say something like this:

The main effect of distraction was significant, \(t\)(4) = 7.66, \(p\) = 0.001. The mean number of differences spotted was higher in the no-distraction condition (M = 11.1) than the distraction condition (M = 6.8).

Main effect of Reward

The main effect of reward compares the overall means for all scores in the no-reward and reward conditions, collapsing over the distraction conditions. The yellow columns show the no-reward scores for each subject. The blue columns show the reward scores for each subject. The overall means for each subject, for the two reward conditions, are shown to the right. For example, subject 1 had a 10 and 5 in the no-reward condition, so their mean is 7.5.

We are interested in the main effect of reward. This is the difference between the AB column (average of subject scores in the no-reward condition) and the CD column (average of the subject scores in the reward condition). These differences for each subject are shown in the last green column. The overall means, averaging over subjects, are in the bottom green row.

Just looking at the means, we can see there was a main effect of reward. The mean number of differences spotted was 11.3 in the reward condition, and 6.6 in the no-reward condition. So, the size of the main effect of reward was 4.7.

Is a difference of this size likely or unlikely due to chance? We could conduct a paired-samples \(t\)-test on the AB vs. CD means, or a one-sample \(t\)-test on the difference scores. They both give the same answer.

Here's the paired samples version:

```r
A <- c(10,8,11,9,10)   #nD_nR
B <- c(5,4,3,4,2)      #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12)   #D_R

AB <- (A+B)/2
CD <- (C+D)/2

t.test(CD, AB, paired=TRUE, var.equal=TRUE)
```

```
	Paired t-test

data:  CD and AB
t = 8.3742, df = 4, p-value = 0.001112
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 3.141724 6.258276
sample estimates:
mean of the differences 
                    4.7 
```

Here's the one sample version:

```r
A <- c(10,8,11,9,10)   #nD_nR
B <- c(5,4,3,4,2)      #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12)   #D_R

AB <- (A+B)/2
CD <- (C+D)/2

t.test(CD-AB, mu=0)
```

```
	One Sample t-test

data:  CD - AB
t = 8.3742, df = 4, p-value = 0.001112
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
 3.141724 6.258276
sample estimates:
mean of x 
      4.7 
```

If we were to write up our results for the main effect of reward, we could say something like this:

The main effect of reward was significant, t(4) = 8.37, p = 0.001.
The mean number of differences spotted was higher in the reward condition (M = 11.3) than the no-reward condition (M = 6.6).

Interaction between Distraction and Reward

Now we are ready to look at the interaction. Remember, the whole point of this fake study was what? Can you remember? Here's a reminder. We wanted to know if giving rewards versus not would change the size of the distraction effect.

Notice, neither the main effect of distraction, nor the main effect of reward, which we just went through the process of computing, answers this question.

In order to answer the question we need to do two things. First, compute the distraction effect for each subject when they were in the no-reward condition. Second, compute the distraction effect for each subject when they were in the reward condition. Then, we can compare the two distraction effects and see if they are different. The comparison between the two distraction effects is what we call the interaction effect. Remember, this is a difference between two difference scores. We first get the difference scores for the distraction effects in the no-reward and reward conditions. Then we find the difference scores between the two distraction effects. This difference of differences is the interaction effect (the green column in the table).

The mean distraction effects in the no-reward (6) and reward (2.6) conditions were different. This difference is the interaction effect. The size of the interaction effect was 3.4.

How can we test whether the interaction effect was likely or unlikely due to chance? We could run another paired-samples \(t\)-test between the two distraction effect measures for each subject, or a one-sample \(t\)-test on the green column (representing the difference between the differences). Both of these \(t\)-tests will give the same results.

Here's the paired samples version:

```r
A <- c(10,8,11,9,10)   #nD_nR
B <- c(5,4,3,4,2)      #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12)   #D_R

A_B <- A-B
C_D <- C-D

t.test(A_B, C_D, paired=TRUE, var.equal=TRUE)
```

```
	Paired t-test

data:  A_B and C_D
t = 2.493, df = 4, p-value = 0.06727
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.3865663  7.1865663
sample estimates:
mean of the differences 
                    3.4 
```

Here's the one sample version:

```r
A <- c(10,8,11,9,10)   #nD_nR
B <- c(5,4,3,4,2)      #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12)   #D_R

A_B <- A-B
C_D <- C-D

t.test(A_B-C_D, mu=0)
```

```
	One Sample t-test

data:  A_B - C_D
t = 2.493, df = 4, p-value = 0.06727
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
 -0.3865663  7.1865663
sample estimates:
mean of x 
      3.4 
```

Oh look, the interaction was not significant. At least, if we had set our alpha criterion to 0.05, it would not have met that criterion. We could write up the results like this:

The two-way interaction between distraction and reward was not significant, \(t\)(4) = 2.493, \(p\) = 0.067.

Oftentimes when a result is "not significant" according to the alpha criterion, the pattern among the means is not described further. One reason for this practice is that the researcher is treating the means as if they are not different (because there was an above-alpha probability that the observed differences were due to chance). If they are not different, then there is no pattern to report. There are differences in opinion among reasonable and expert statisticians on what should or should not be reported.
Let's say we wanted to report the observed mean differences; we would write something like this:

The two-way interaction between distraction and reward was not significant, t(4) = 2.493, p = 0.067. The mean distraction effect in the no-reward condition was 6 and the mean distraction effect in the reward condition was 2.6.

Writing it all up

We have completed an analysis of a 2x2 repeated measures design using paired-samples \(t\)-tests. Here is what a full write-up of the results could look like.

The main effect of distraction was significant, \(t\)(4) = 7.66, \(p\) = 0.001. The mean number of differences spotted was higher in the no-distraction condition (M = 11.1) than the distraction condition (M = 6.8).

The main effect of reward was significant, \(t\)(4) = 8.37, \(p\) = 0.001. The mean number of differences spotted was higher in the reward condition (M = 11.3) than the no-reward condition (M = 6.6).

The two-way interaction between distraction and reward was not significant, \(t\)(4) = 2.493, \(p\) = 0.067. The mean distraction effect in the no-reward condition was 6 and the mean distraction effect in the reward condition was 2.6.

Interim Summary

We went through this exercise to show you how to break up the data into individual comparisons of interest. Generally speaking, a 2x2 repeated measures design would not be analyzed with three paired-samples \(t\)-tests. This is because it is more convenient to use the repeated measures ANOVA for this task. We will do this in a moment to show you that they give the same results. And, by the same results, what we will show is that the \(p\)-values for each main effect, and the interaction, are the same. The ANOVA will give us \(F\)-values rather than \(t\)-values. It turns out that in this situation, the \(F\)-values are related to the \(t\)-values. In fact, \(t^2 = F\).

2x2 Repeated Measures ANOVA

We just showed how a 2x2 repeated measures design can be analyzed using paired-samples \(t\)-tests. We broke up the analysis into three parts: the main effect for distraction, the main effect for reward, and the 2-way interaction between distraction and reward. We claimed the results of the paired-samples \(t\)-test analysis would mirror what we would find if we conducted the analysis using an ANOVA. Let's show that the results are the same. Here are the results from the 2x2 repeated-measures ANOVA, using the `aov` function in R.

```r
library(xtable)

A <- c(10,8,11,9,10)   #nD_nR
B <- c(5,4,3,4,2)      #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12)   #D_R

Number_spotted <- c(A, B, C, D)
Distraction <- rep(rep(c("No Distraction", "Distraction"), each=5), 2)
Reward <- rep(c("No Reward","Reward"), each=10)
Subjects <- rep(1:5, 4)

Distraction <- as.factor(Distraction)
Reward <- as.factor(Reward)
Subjects <- as.factor(Subjects)

rm_df <- data.frame(Subjects, Distraction, Reward, Number_spotted)

aov_summary <- summary(aov(Number_spotted ~ Distraction*Reward +
                             Error(Subjects/(Distraction*Reward)), rm_df))
knitr::kable(xtable(aov_summary))
```

|  | Df | Sum Sq | Mean Sq | F value | Pr(>F) |
|---|---|---|---|---|---|
| Distraction | 1 | 92.45 | 92.450 | 58.698413 | 0.0015600 |
| Distraction:Reward | 1 | 14.45 | 14.450 | 6.215054 | 0.0672681 |
| Residuals | 4 | 3.70 | 0.925 | NA | NA |
| Residuals | 4 | 6.30 | 1.575 | NA | NA |
| Residuals | 4 | 9.30 | 2.325 | NA | NA |
| Residuals1 | 4 | 6.30 | 1.575 | NA | NA |
| Reward | 1 | 110.45 | 110.450 | 70.126984 | 0.0011122 |

Let's compare these results with the paired-samples \(t\)-tests.

Main effect of Distraction: Using the paired samples \(t\)-test, we found \(t\)(4) = 7.6615, \(p\) = 0.00156.
Using the ANOVA, we found \(F\)(1, 4) = 58.69, \(p\) = 0.00156. See, the \(p\)-values are the same, and \(t^2 = 7.6615^2 = 58.69 = F\).

Main effect of Reward: Using the paired samples \(t\)-test, we found \(t\)(4) = 8.3742, \(p\) = 0.001112. Using the ANOVA, we found \(F\)(1, 4) = 70.126, \(p\) = 0.001112. See, the \(p\)-values are the same, and \(t^2 = 8.3742^2 = 70.12 = F\).

Interaction effect: Using the paired samples \(t\)-test, we found \(t\)(4) = 2.493, \(p\) = 0.06727. Using the ANOVA, we found \(F\)(1, 4) = 6.215, \(p\) = 0.06727. See, the \(p\)-values are the same, and \(t^2 = 2.493^2 = 6.215 = F\).

There you have it. The results from a 2x2 repeated measures ANOVA are the same as you would get if you used paired-samples \(t\)-tests for the main effects and interactions.
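As a quick sanity check of the \(t^2 = F\) relationship, here is a one-liner in R using the three \(t\)-values reported above:

```r
# Squaring the three t-values reproduces the three F-values from the ANOVA table
t_values <- c(distraction = 7.6615, reward = 8.3742, interaction = 2.4930)
t_values^2   # ~58.70, ~70.13, ~6.22
```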
You must be wondering how to calculate a 2x2 ANOVA. We haven't discussed this yet. We've only shown you that you don't have to do it when the design is a 2x2 repeated measures design (note this is a special case). We are now going to work through some examples of calculating the ANOVA table for 2x2 designs. We will start with the between-subjects ANOVA for 2x2 designs. We do essentially the same thing that we did before (in the other ANOVAs), and the only new thing is to show how to compute the interaction effect.

Remember, the logic of the ANOVA is to partition the variance into different parts. The SS formula for the between-subjects 2x2 ANOVA looks like this:

$SS_\text{Total} = SS_\text{Effect IV1} + SS_\text{Effect IV2} + SS_\text{Effect IV1xIV2} + SS_\text{Error} \nonumber$

In the following sections we use tables to show the calculation of each SS. We use the same example as before, with the exception that we are turning this into a between-subjects design. There are now 5 different subjects in each condition, for a total of 20 subjects. As a result, we remove the subjects column.

SS Total

We calculate the grand mean (the mean of all of the scores). Then, we calculate the differences between each score and the grand mean. We square the difference scores, and sum them up. That is $SS_\text{Total}$, reported in the bottom yellow row.

SS Distraction

We need to compute the SS for the main effect of distraction. We calculate the grand mean (the mean of all of the scores). Then, we calculate the means for the two distraction conditions. Then we treat each score as if it was the mean for its respective distraction condition. We find the differences between each distraction condition mean and the grand mean. Then we square the differences and sum them up. That is $SS_\text{Distraction}$, reported in the bottom yellow row.

These tables are a lot to look at! Notice here that we first found the grand mean (8.95). Then we found the mean for all the scores in the no-distraction condition (columns A and C); that was 11.1. All of the difference scores for the no-distraction condition are 11.1 - 8.95 = 2.15. We also found the mean for the scores in the distraction condition (columns B and D); that was 6.8. So, all of the difference scores are 6.8 - 8.95 = -2.15. Remember, means are the balancing point in the data; this is why the difference scores are +2.15 and -2.15. The grand mean 8.95 is in between the two condition means (11.1 and 6.8), by a difference of 2.15.

SS Reward

We need to compute the SS for the main effect of reward. We calculate the grand mean (the mean of all of the scores). Then, we calculate the means for the two reward conditions. Then we treat each score as if it was the mean for its respective reward condition. We find the differences between each reward condition mean and the grand mean. Then we square the differences and sum them up. That is $SS_\text{Reward}$, reported in the bottom yellow row.

Now we treat each no-reward score as the mean for the no-reward condition (6.6), and subtract the grand mean (8.95) from it, to get -2.35. Then, we treat each reward score as the mean for the reward condition (11.3), and subtract the grand mean (8.95) from it, to get +2.35. Then we square the differences and sum them up.

SS Distraction by Reward

We need to compute the SS for the interaction effect between distraction and reward. This is the new thing that we do in an ANOVA with more than one IV. How do we calculate the variation explained by the interaction?
The heart of the question is something like this: do the individual means for each of the four conditions do something a little bit different than the group means for the two independent variables? For example, consider the overall mean for all of the scores in the no-reward condition, which we found to be 6.6. Now, was the mean for each no-reward group in the whole design 6.6? For example, in the no-distraction group, was the mean for column A (the no-reward condition in that group) also 6.6? The answer is no, it was 9.6. How about the distraction group? Was the mean for the no-reward condition in the distraction group (column B) 6.6? No, it was 3.6. The mean of 9.6 and 3.6 is 6.6. If there was no hint of an interaction, we would expect that the means for the no-reward condition at both levels of the distraction factor would be the same; they would both be 6.6. However, when there is an interaction, the means for the no-reward condition will depend on the level of the other IV. In this case, it looks like there is an interaction because the means are different from 6.6: they are 9.6 and 3.6 for the no-distraction and distraction conditions. This is extra variance that is not explained by the mean for the no-reward condition. We want to capture this extra variance and sum it up. Then we will have a measure of the portion of the variance that is due to the interaction between the reward and distraction conditions.

What we will do is this. We will find the four condition means. Then we will see how much additional variation they explain beyond the group means for reward and distraction. To do this we treat each score as the condition mean for that score. Then we subtract the mean for the distraction group, and the mean for the reward group, and then we add the grand mean. This gives us the unique variation that is due to the interaction. Here is a formula to describe the process for each score:

$\bar{X}_\text{condition} - \bar{X}_\text{IV1} - \bar{X}_\text{IV2} + \bar{X}_\text{Grand Mean} \nonumber$

We could also say that we are taking each condition mean's difference from the grand mean, and then subtracting out the differences that are already explained by the distraction mean and the reward mean; that amounts to the same thing, and perhaps makes more sense. Written that way, the formula looks like this:

$(\bar{X}_\text{condition} - \bar{X}_\text{Grand Mean}) - (\bar{X}_\text{IV1} - \bar{X}_\text{Grand Mean}) - (\bar{X}_\text{IV2} - \bar{X}_\text{Grand Mean}) \nonumber$

When you look at the following table, we apply this formula to the calculation of each of the difference scores. We then square the difference scores, and sum them up to get $SS_\text{Interaction}$, which is reported in the bottom yellow row.

SS Error

The last thing we need to find is the SS Error. We can solve for that because we found everything else in this formula:

$SS_\text{Total} = SS_\text{Effect IV1} + SS_\text{Effect IV2} + SS_\text{Effect IV1xIV2} + SS_\text{Error} \nonumber$

Even though this textbook is meant to explain things in a step-by-step way, we guess you are tired of watching us work out the 2x2 ANOVA by hand. You and me both, making these tables was a lot of work. We have already shown you how to compute the SS for error before, so we will not do the full example here. Instead, we solve for SS Error using the numbers we have already obtained.
\begin{align*} SS_\text{Error} &= SS_\text{Total} - SS_\text{Effect IV1} - SS_\text{Effect IV2} - SS_\text{Effect IV1xIV2} \\[4pt] &= 242.95 - 92.45 - 110.45 - 14.45 \\[4pt] &= 25.6 \end{align*}

Check your work

We are going to skip the part where we divide the SSes by their dfs to find the MSs so that we can compute the three $F$-values. Instead, if we have done the calculations of the $SS$es correctly, they should be the same as what we would get if we used R to calculate the $SS$es. Let's make R do the work, and then compare to check our work.

```
library(xtable)
A <- c(10,8,11,9,10)   #nD_nR
B <- c(5,4,3,4,2)      #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12)   #D_R
Number_spotted <- c(A, B, C, D)
Distraction <- rep(rep(c("No Distraction", "Distraction"), each=5),2)
Reward <- rep(c("No Reward","Reward"),each=10)
Distraction <- as.factor(Distraction)
Reward <- as.factor(Reward)
all_df <- data.frame(Distraction, Reward, Number_spotted)
aov_summary <- summary(aov(Number_spotted~Distraction*Reward, all_df))
knitr::kable(xtable(aov_summary))
```

| | Df | Sum Sq | Mean Sq | F value | Pr(>F) |
|---|---|---|---|---|---|
| Distraction | 1 | 92.45 | 92.45 | 57.78125 | 0.0000011 |
| Reward | 1 | 110.45 | 110.45 | 69.03125 | 0.0000003 |
| Distraction:Reward | 1 | 14.45 | 14.45 | 9.03125 | 0.0083879 |
| Residuals | 16 | 25.60 | 1.60 | NA | NA |

A quick look through the Sum Sq column shows that we did our work by hand correctly. Congratulations to us! Note, these are not the same results as we had before with the repeated measures ANOVA. We conducted a between-subjects design, so we did not get to further partition the SS error into a part due to subject variation and a left-over part. We also gained degrees of freedom in the error term. It turns out that with this specific set of data, we find p-values of less than 0.05 for all effects (main effects and the interaction, which was not less than 0.05 when we treated the same data as a repeated-measures design).

9.07: Fireside chat

Sometimes it's good to get together around a fire and have a chat. Let's pretend we're sitting around a fire. It's been a long day. A long couple of weeks and months since we started this course on statistics. We just went through the most complicated things we have done so far. This is a long chapter. What should we do next? Here's a couple of options. We could work through, by hand, more and more ANOVAs. Do you want to do that? I don't. Making these tables isn't too bad, but it takes a lot of time. It's really good to see everything that we do laid bare in the table form a few times. We've done that already. It's really good for you to attempt to calculate an ANOVA by hand at least once in your life. It builds character. It helps you know that you know what you are doing, and what the ANOVA is doing. We can't make you do this, we can only make the suggestion. If we keep doing these by hand, it takes a lot of our time, and it still isn't you doing them by hand. So, what are the other options? The other options are to work at a slightly higher level. We will discuss some research designs, and the ANOVAs that are appropriate for their analysis. We will conduct the ANOVAs using R, and print out the ANOVA tables. This is what you do in the lab, and what most researchers do. They use software most of the time to make the computer do the work. Because of this, it is most important that you know what the software is doing. You can make mistakes when telling software what to do, so you need to be able to check the software's work so you know when the software is giving you wrong answers.
All of these skills are built up over time through the process of analyzing different data sets. So, for the remainder of our discussion on ANOVAs we stick to that higher level. No more monster tables of SSes. You are welcome.
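One thing you can still do at that higher level is check the software's work with a few lines of code instead of a monster table. Here is a compact sketch that redoes the 2x2 partition from above; it reuses the four condition vectors (A, B, C, D) from the check-your-work code, and the other object names are ours, chosen just for this illustration.

```
# Recreate the four conditions from the check-your-work example
A <- c(10,8,11,9,10)   # no distraction, no reward
B <- c(5,4,3,4,2)      # distraction, no reward
C <- c(12,13,14,11,13) # no distraction, reward
D <- c(9,8,10,11,12)   # distraction, reward

scores     <- c(A, B, C, D)
grand_mean <- mean(scores)

# Marginal means for each IV
no_distraction <- mean(c(A, C)); distraction <- mean(c(B, D))
no_reward      <- mean(c(A, B)); reward      <- mean(c(C, D))

# SS for each effect: squared deviations of the relevant means from the grand mean,
# counted once per score (10 scores behind each marginal mean, 5 behind each cell mean)
SS_total       <- sum((scores - grand_mean)^2)
SS_distraction <- 10 * ((no_distraction - grand_mean)^2 + (distraction - grand_mean)^2)
SS_reward      <- 10 * ((no_reward - grand_mean)^2 + (reward - grand_mean)^2)

cell_means <- c(mean(A), mean(B), mean(C), mean(D))
iv1_means  <- c(no_distraction, distraction, no_distraction, distraction)
iv2_means  <- c(no_reward, no_reward, reward, reward)
SS_interaction <- 5 * sum((cell_means - iv1_means - iv2_means + grand_mean)^2)

SS_error <- SS_total - SS_distraction - SS_reward - SS_interaction

# Should print 242.95, 92.45, 110.45, 14.45, 25.6
c(SS_total, SS_distraction, SS_reward, SS_interaction, SS_error)
```

If these numbers match the Sum Sq column from the aov() table above, the partition formula has done its job, and you have checked the software's work.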
Let's go through the process of looking at a 2x2 factorial design in the wild. This will be the very same data that you will analyze in the lab for factorial designs.

Stand at attention

Do you pay more attention when you are sitting or standing? This was the kind of research question the researchers were asking in the study we will look at. In fact, the general question and design are very similar to our fake study idea that we used to explain factorial designs in this chapter. The paper we look at is called "Stand by your Stroop: Standing up enhances selective attention and cognitive control" (Rosenbaum, Mama, and Algom 2017). This paper asked whether sitting versus standing would influence a measure of selective attention, the ability to ignore distracting information. They used a classic test of selective attention, called the Stroop effect. You may already know what the Stroop effect is. In a typical Stroop experiment, subjects name the color of words as fast as they can. The trick is that sometimes the color of the word is the same as the name of the word, and sometimes it is not. Here are some examples: Congruent trials occur when the color and word match. So, the correct answers for each of the congruent stimuli shown would be to say red, green, blue and yellow. Incongruent trials occur when the color and word mismatch. The correct answers for each of the incongruent stimuli would be: blue, yellow, red, green. The Stroop effect is an example of a well-known phenomenon. What happens is that people are faster to name the color of the congruent items compared to the color of the incongruent items. This difference (incongruent reaction time - congruent reaction time) is called the Stroop effect. Many researchers argue that the Stroop effect measures something about selective attention, the ability to ignore distracting information. In this case, the target information that you need to pay attention to is the color, not the word. For each item, the word is potentially distracting; it is not information that you are supposed to respond to. However, it seems that most people can't help but notice the word, and their performance in the color-naming task is subsequently influenced by the presence of the distracting word. People who are good at ignoring the distracting words should have small Stroop effects. They will ignore the word, and it won't influence them very much for either congruent or incongruent trials. As a result, the difference in performance (the Stroop effect) should be fairly small (if you have "good" selective attention in this task). People who are bad at ignoring the distracting words should have big Stroop effects. They will not ignore the words, causing them to be relatively fast when the word helps, and relatively slow when the word mismatches. As a result, they will show a larger difference in performance between the incongruent and congruent conditions. If we take the size of the Stroop effect as a measure of selective attention, we can then start wondering what sorts of things improve selective attention (e.g., make the Stroop effect smaller), and what kinds of things impair selective attention (e.g., make the Stroop effect bigger). The research question of this study was whether standing up improves selective attention compared to sitting down. They predicted smaller Stroop effects when people were standing up and doing the task, compared to when they were sitting down and doing the task. The design of the study was a 2x2 repeated-measures design.
The first IV was congruency (congruent vs. incongruent). The second IV was posture (sitting vs. standing). The DV was reaction time to name the word.

Plot the data

They had subjects perform many individual trials responding to single Stroop stimuli, both congruent and incongruent. And they had subjects stand up sometimes and do it, and sit down sometimes and do it. Here is a graph of what they found: The figure shows the means. We can see that Stroop effects were observed in both the sitting position and the standing position. In the sitting position, mean congruent RTs were shorter than mean incongruent RTs (the red bar is lower than the aqua bar). The same general pattern is observed for the standing position. However, it does look as if the Stroop effect is slightly smaller in the stand condition: the difference between the red and aqua bars is slightly smaller compared to the difference when people were sitting.

Conduct the ANOVA

Let's conduct a 2x2 repeated measures ANOVA on the data to evaluate whether the differences in the means are likely or unlikely to be due to chance. The ANOVA will give us main effects for congruency and posture (the two IVs), as well as one interaction effect to evaluate (congruency X posture). Remember, the interaction effect tells us whether the congruency effect changes across the levels of the posture manipulation.

```
library(data.table)
library(xtable)
suppressPackageStartupMessages(library(dplyr))
stroop_data <- fread(
  "https://stats.libretexts.org/@api/deki/files/11081/stroop_stand.csv")
RTs <- c(as.numeric(unlist(stroop_data[,1])),
         as.numeric(unlist(stroop_data[,2])),
         as.numeric(unlist(stroop_data[,3])),
         as.numeric(unlist(stroop_data[,4]))
         )
Congruency <- rep(rep(c("Congruent","Incongruent"),each=50),2)
Posture <- rep(c("Stand","Sit"),each=100)
Subject <- rep(1:50,4)
stroop_df <- data.frame(Subject,Congruency,Posture,RTs)
stroop_df$Subject <- as.factor(stroop_df$Subject)
aov_summary <- summary(aov(RTs~Congruency*Posture + Error(Subject/(Congruency*Posture)), stroop_df))
knitr::kable(xtable(aov_summary))
```

| | Df | Sum Sq | Mean Sq | F value | Pr(>F) |
|---|---|---|---|---|---|
| Residuals | 49 | 2250738.636 | 45933.4416 | NA | NA |
| Congruency | 1 | 576821.635 | 576821.6349 | 342.452244 | 0.0000000 |
| Residuals | 49 | 82534.895 | 1684.3856 | NA | NA |
| Posture | 1 | 32303.453 | 32303.4534 | 7.329876 | 0.0093104 |
| Residuals1 | 49 | 215947.614 | 4407.0942 | NA | NA |
| Congruency:Posture | 1 | 6560.339 | 6560.3389 | 8.964444 | 0.0043060 |
| Residuals | 49 | 35859.069 | 731.8177 | NA | NA |

Main effect of Congruency

Let's talk about each aspect of the ANOVA table, one step at a time. First, we see that there was a significant main effect of congruency, \(F\)(1, 49) = 342.45, \(p\) < 0.001. The \(F\) value is extremely large, and the \(p\)-value is so small it reads as a zero. An \(F\)-value this large basically never happens by sampling error alone. We can be very confident that the overall mean difference between congruent and incongruent RTs was not caused by sampling error. What were the overall mean differences between mean RTs in the congruent and incongruent conditions? We would have to look at those means to find out.
Here's a table:

```
library(data.table)
library(xtable)
suppressPackageStartupMessages(library(dplyr))
stroop_data <- fread(
  "https://stats.libretexts.org/@api/deki/files/11081/stroop_stand.csv")
RTs <- c(as.numeric(unlist(stroop_data[,1])),
         as.numeric(unlist(stroop_data[,2])),
         as.numeric(unlist(stroop_data[,3])),
         as.numeric(unlist(stroop_data[,4]))
         )
Congruency <- rep(rep(c("Congruent","Incongruent"),each=50),2)
Posture <- rep(c("Stand","Sit"),each=100)
Subject <- rep(1:50,4)
stroop_df <- data.frame(Subject,Congruency,Posture,RTs)
congruency_means <- stroop_df %>%
  group_by(Congruency) %>%
  summarise(mean_rt = mean(RTs),
            sd = sd(RTs),
            SEM = sd(RTs)/sqrt(length(RTs)))
knitr::kable(congruency_means)
```

| Congruency | mean_rt | sd | SEM |
|---|---|---|---|
| Congruent | 814.9415 | 111.3193 | 11.13193 |
| Incongruent | 922.3493 | 118.7960 | 11.87960 |

The table shows the mean RTs, standard deviation (sd), and standard error of the mean (SEM) for each condition. These means show that there was a Stroop effect. Mean incongruent RTs were slower (a larger number in milliseconds) than mean congruent RTs. The main effect of congruency is important for establishing that the researchers were able to measure the Stroop effect. However, the main effect of congruency does not say whether the size of the Stroop effect changed between the levels of the posture variable. So, this main effect was not particularly important for answering the specific question posed by the study.

Main effect of Posture

There was also a main effect of posture, \(F\)(1, 49) = 7.329, \(p\) = 0.009. Let's look at the overall means for the sitting and standing conditions and see what this is all about:

```
library(data.table)
library(xtable)
suppressPackageStartupMessages(library(dplyr))
stroop_data <- fread(
  "https://stats.libretexts.org/@api/deki/files/11081/stroop_stand.csv")
RTs <- c(as.numeric(unlist(stroop_data[,1])),
         as.numeric(unlist(stroop_data[,2])),
         as.numeric(unlist(stroop_data[,3])),
         as.numeric(unlist(stroop_data[,4]))
         )
Congruency <- rep(rep(c("Congruent","Incongruent"),each=50),2)
Posture <- rep(c("Stand","Sit"),each=100)
Subject <- rep(1:50,4)
stroop_df <- data.frame(Subject,Congruency,Posture,RTs)
posture_means <- stroop_df %>%
  group_by(Posture) %>%
  summarise(mean_rt = mean(RTs),
            sd = sd(RTs),
            SEM = sd(RTs)/sqrt(length(RTs)))
knitr::kable(posture_means)
```

| Posture | mean_rt | sd | SEM |
|---|---|---|---|
| Sit | 881.3544 | 135.3842 | 13.53842 |
| Stand | 855.9365 | 116.9436 | 11.69436 |

Remember, the posture main effect collapses over the means in the congruency condition. We are not measuring a Stroop effect here. We are measuring a general effect of sitting vs. standing on overall reaction time. The table shows that people were a little faster overall when they were standing, compared to when they were sitting. Again, the main effect of posture was not the primary effect of interest. The authors weren't interested in whether people are in general faster when they stand. They wanted to know whether selective attention would improve when people stand vs. when they sit. They were most interested in whether the size of the Stroop effect (the difference between incongruent and congruent performance) would be smaller when people stand, compared to when they sit. To answer this question, we need to look at the interaction effect.

Congruency X Posture Interaction

Last, there was a significant congruency X posture interaction, \(F\)(1, 49) = 8.96, \(p\) = 0.004. With this information, and by looking at the figure, we can get a pretty good idea of what this means.
We know the size of the Stroop effect must have been different between the standing and sitting conditions, otherwise we would have gotten a smaller \(F\)-value and a much larger \(p\)-value. We can see from the figure the direction of this difference, but let's look at the table to see the numbers more clearly.

```
library(data.table)
library(xtable)
suppressPackageStartupMessages(library(dplyr))
stroop_data <- fread(
  "https://stats.libretexts.org/@api/deki/files/11081/stroop_stand.csv")
RTs <- c(as.numeric(unlist(stroop_data[,1])),
         as.numeric(unlist(stroop_data[,2])),
         as.numeric(unlist(stroop_data[,3])),
         as.numeric(unlist(stroop_data[,4]))
         )
Congruency <- rep(rep(c("Congruent","Incongruent"),each=50),2)
Posture <- rep(c("Stand","Sit"),each=100)
Subject <- rep(1:50,4)
stroop_df <- data.frame(Subject,Congruency,Posture,RTs)
int_means <- stroop_df %>%
  group_by(Posture, Congruency) %>%
  summarise(mean_rt = mean(RTs),
            sd = sd(RTs),
            SEM = sd(RTs)/sqrt(length(RTs)),
            .groups='drop_last')
knitr::kable(int_means)
```

| Posture | Congruency | mean_rt | sd | SEM |
|---|---|---|---|---|
| Sit | Congruent | 821.9232 | 117.4069 | 16.60384 |
| Sit | Incongruent | 940.7855 | 126.6457 | 17.91041 |
| Stand | Congruent | 807.9599 | 105.6079 | 14.93521 |
| Stand | Incongruent | 903.9131 | 108.5366 | 15.34939 |

In the sitting condition the Stroop effect was roughly 941 - 822 = 119 ms. In the standing condition the Stroop effect was roughly 904 - 808 = 96 ms. So, the Stroop effect was 119 - 96 = 23 ms smaller when people were standing. This is a pretty small effect in terms of the amount of time reduced, but even though it is small, a difference even this big was not very likely to be due to chance.

What does it all mean?

Based on this research there appears to be some support for the following logic chain. First, the researchers can say that standing up reduces the size of a person's Stroop effect. Fine, what could that mean? Well, if the Stroop effect is an index of selective attention, then it could mean that standing up is one way to improve your ability to selectively focus and ignore distracting information. The actual size of the benefit is fairly small, so the real-world implications are not that clear. Nevertheless, maybe the next time you lose your keys, you should stand up and look for them, rather than sitting down and not looking for them.

9.09: Factorial summary

We have introduced you to factorial designs, which are simply designs with more than one IV. The special property of factorial designs is that all of the levels of each IV need to be crossed with the other IVs. We showed you how to analyze a repeated measures 2x2 design with paired-samples t-tests, and what an ANOVA table would look like if you did this in R. We also went through, by hand, the task of calculating an ANOVA table for a 2x2 between-subjects design. The main point we want you to take away is that factorial designs are extremely useful for determining things that cause effects to change. Generally a researcher measures an effect of interest (the effect of their IV1). Then, they want to know what makes that effect get bigger or smaller. They want to exert experimental control over their effect. For example, they might have a theory that says doing X should make the effect bigger, but doing Y should make it smaller. They can test these theories using factorial designs, manipulating X or Y as a second independent variable. In a factorial design each IV will have its own main effect. Sometimes the main effects themselves are what the researcher is interested in measuring. But more often, it is the interaction effect that is most relevant.
The interaction can test whether the effect of IV1 changes between the levels of IV2. When it does, researchers can infer that their second manipulation (IV2) causes change in their effect of interest. These changes are then documented and used to test underlying causal theories about the effects of interest.

Rosenbaum, David, Yaniv Mama, and Daniel Algom. 2017. "Stand by Your Stroop: Standing up Enhances Selective Attention and Cognitive Control." Psychological Science 28 (12): 1864–7.
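As a small illustration of what "the effect of IV1 changes between the levels of IV2" means in numbers, here is a sketch that computes the interaction contrast for the Stroop study from the condition means in the table above. The means are copied from that table; the variable names are ours, not anything from the paper or its data files.

```
# Condition means (ms) from the int_means table above
sit_congruent     <- 821.9232
sit_incongruent   <- 940.7855
stand_congruent   <- 807.9599
stand_incongruent <- 903.9131

# The effect of IV1 (congruency) at each level of IV2 (posture)
stroop_sit   <- sit_incongruent   - sit_congruent    # ~119 ms
stroop_stand <- stand_incongruent - stand_congruent  # ~96 ms

# The interaction contrast: how much the Stroop effect changes across posture
stroop_sit - stroop_stand                             # ~23 ms
```

A nonzero difference of differences like this is exactly what the significant congruency X posture interaction is picking up.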
We are going to do a couple of things in this chapter. The most important thing we do is give you more exposure to factorial designs. The second thing we do is show that you can mix it up with ANOVA. You already know that you can have more than one IV. And, you know that research designs can be between-subjects or within-subjects (repeated-measures). When you have more than one IV, they can all be between-subjects variables, they can all be within-subjects repeated measures, or they can be a mix: say, one between-subjects variable and one within-subjects variable. You can use ANOVA to analyze all of these kinds of designs. You always get one main effect for each IV, and a number of interactions, or just one, depending on the number of IVs.

10: More On Factorial Designs

Designs with multiple factors are very common. When you read a research article you will often see graphs that show the results from designs with multiple factors. It would be good for you if you were comfortable interpreting the meaning of those results. The skill here is to be able to look at a graph and see the pattern of main effects and interactions. This skill is important, because the patterns in the data can quickly become very complicated looking, especially when there are more than two independent variables, with more than two levels.

2x2 designs

Let's take the case of 2x2 designs. There will always be the possibility of two main effects and one interaction. You will always be able to compare the means for each main effect and interaction. If the appropriate means are different then there is a main effect or interaction. Here's the thing, there are a bunch of ways all of this can turn out. Check out the ways, there are 8 of them:

1. no IV1 main effect, no IV2 main effect, no interaction
2. IV1 main effect, no IV2 main effect, no interaction
3. IV1 main effect, no IV2 main effect, interaction
4. IV1 main effect, IV2 main effect, no interaction
5. IV1 main effect, IV2 main effect, interaction
6. no IV1 main effect, IV2 main effect, no interaction
7. no IV1 main effect, IV2 main effect, interaction
8. no IV1 main effect, no IV2 main effect, interaction

OK, so if you run a 2x2, any of these 8 general patterns could occur in your data. That's a lot to keep track of, isn't it? As you develop your skills in examining graphs that plot means, you should be able to look at the graph and visually guesstimate if there is, or is not, a main effect or interaction. You will need your inferential statistics to tell you for sure, but it is worth knowing how to see the patterns. In this section we show you some example patterns so that you can get some practice looking at the patterns. First, in bar graph form. Note, we used the following labels for the graph:

• 1 = there was a main effect for IV1
• ~1 = there was not a main effect for IV1
• 2 = there was a main effect for IV2
• ~2 = there was not a main effect of IV2
• 1x2 = there was an interaction
• ~1x2 = there was not an interaction

Next, we show you the same thing in line graph form: You might find the line graphs easier to interpret. Whenever the lines cross, or would cross if they kept going, you have a possibility of an interaction. Whenever the lines are parallel, there can't be an interaction. When both of the points on the A side are higher or lower than both of the points on the B side, then you have a main effect for IV1 (A vs. B). Whenever the green line is above or below the red line, then you have a main effect for IV2 (1 vs. 2). We know this is complicated.
You should see what all the possibilities look like when we start adding more levels or more IVs. It gets nuts. Because of this nuttiness, it is often good practice to make your research designs simple (as few IVs and levels as possible to test your question). That way it will be easier to interpret your data. Whenever you see that someone ran a 4x3x7x2 design, your head should spin. It’s just too complicated.
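If you want to check your visual guesstimate with a little arithmetic, the marginal means and the interaction contrast can be computed directly from the four cell means. Here is a minimal sketch; the cell means in the example call are made-up numbers for illustration, and the function name (describe_2x2) is ours, not anything from a package.

```
# Describe a 2x2 pattern of cell means: condition (A, B) crossed with IV2 level (1, 2)
describe_2x2 <- function(A1, A2, B1, B2) {
  main_IV1    <- mean(c(A1, A2)) - mean(c(B1, B2))  # A vs. B, collapsing over IV2
  main_IV2    <- mean(c(A2, B2)) - mean(c(A1, B1))  # level 2 vs. level 1, collapsing over IV1
  interaction <- (A2 - A1) - (B2 - B1)              # difference of the simple effects
  c(main_IV1 = main_IV1, main_IV2 = main_IV2, interaction = interaction)
}

# Example: parallel lines, so both main effects but no interaction (pattern 4 above)
describe_2x2(A1 = 5, A2 = 10, B1 = 7, B2 = 12)
```

Zero (or near-zero) values line up with the "~" labels above. Of course, with real data you would still need the inferential tests to decide whether a nonzero difference is bigger than chance.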
The interpretation of main effects and interactions can get tricky. Consider the concept of a main effect. This is the idea that a particular IV has a consistent effect. For example, drinking 5 cups of coffee makes you more awake compared to not drinking 5 cups of coffee. The main effect of drinking 5 cups of coffee vs. not drinking coffee will generally be true across the levels of other IVs in our life. For example, let's say you conducted an experiment testing whether the effect of drinking 5 cups of coffee vs. not changes depending on whether you are in your house or in a car. Perhaps the situation matters? No, probably not so much. You will probably still be more awake in your house, or your car, after having 5 cups of coffee, compared to if you hadn't. The coffee example is a reasonably good example of a consistent main effect. Another silly kind of example might be the main effect of shoes on your height. For example, if your IV was wearing shoes or not, and your DV was height, then we could expect to find a main effect of wearing shoes on your measurement of height. When you wear shoes, you will become taller compared to when you don't wear shoes. Wearing shoes adds to your total height. In fact, it's hard to imagine how the effect of wearing shoes on your total height would ever interact with other kinds of variables. You will always be that extra bit taller wearing shoes. Indeed, if there was another manipulation that could cause an interaction, that would truly be strange. For example, imagine if the effect of being inside a bodega or outside a bodega interacted with the effect of wearing shoes on your height. That could mean that shoes make you taller when you are outside a bodega, but when you step inside, your shoes make you shorter…but, obviously this is just totally ridiculous. That's correct, it is often ridiculous to expect that one IV will have an influence on the effect of another, especially when there is no good reason. The summary here is that it is convenient to think of main effects as a consistent influence of one manipulation. However, when an interaction is observed, this messes up the consistency of the main effect. That is the very definition of an interaction. It means that some main effect is not behaving consistently across different situations. Indeed, whenever we find an interaction, sometimes we can question whether or not there really is a general consistent effect of some manipulation, or instead whether that effect only happens in specific situations. For this reason, you will often see that researchers report their findings this way: "We found a main effect of X, BUT, this main effect was qualified by an interaction between X and Y". Notice the big BUT. Why is it there? The sentence points out that before they talk about the main effect, they need to first talk about the interaction, which is making the main effect behave inconsistently. In other words, the interpretation of the main effect depends on the interaction; the two things have to be thought of together to make sense of them. Here are two examples to help you make sense of these issues:

A consistent main effect and an interaction

There is a main effect of IV2: the level 1 means (red points and bar) are both lower than the level 2 means (aqua points and bar). There is also an interaction. The size of the difference between the red and aqua points in the A condition (left) is bigger than the size of the difference in the B condition. How would we interpret this?
We could say there WAS a main effect of IV2, BUT it was qualified by an IV1 x IV2 interaction. What's the qualification? The size of the IV2 effect changed as a function of the levels of IV1. It was big for level A, and small for level B of IV1. What does the qualification mean for the main effect? Well, first it means the main effect can be changed by the other IV. That's important to know. Does it also mean that the main effect is not a real main effect because there was an interaction? Not really, there is a generally consistent effect of IV2. The aqua points are above the red points in all cases. Whatever IV2 is doing, it seems to work in at least a couple of situations, even if the other IV also causes some change to the influence.

An inconsistent main effect and an interaction

This figure shows another 2x2 design. You should see an interaction here straight away. The difference between the aqua and red points in condition A (left two dots) is huge, and there is 0 difference between them in condition B. Is there an interaction? Yes! Are there any main effects here? With data like this, sometimes an ANOVA will suggest that you do have significant main effects. For example, what is the mean difference between level 1 and 2 of IV2? That is the average of the aqua points ( (10+5)/2 = 15/2 = 7.5 ) compared to the average of the red points (5). There will be a difference of 2.5 for the main effect (7.5 vs. 5). Starting to see the issue here? From the perspective of the main effect (which collapses over everything and ignores the interaction), there is an overall effect of 2.5. In other words, level 2 adds 2.5 in general compared to level 1. However, we can see from the graph that IV2 does not do anything in general. It does not add 2.5 everywhere. It adds 5 in condition A, and nothing in condition B. It only does one thing in one condition. What is happening here is that a "main effect" is produced by the process of averaging over a clear interaction. How would we interpret this? We might have to say there was a main effect of IV2, BUT we would definitely say it was qualified by an IV1 x IV2 interaction. What's the qualification? The size of the IV2 effect completely changes as a function of the levels of IV1. It was big for level A, and nonexistent for level B of IV1. What does the qualification mean for the main effect? In this case, we might doubt whether there is a main effect of IV2 at all. It could turn out that IV2 does not have a general influence over the DV all of the time, it may only do something in very specific circumstances, in combination with the presence of other factors.
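To see the averaging at work, here is a tiny sketch using the cell means described above for the inconsistent case (level 1 is 5 in both conditions; level 2 is 10 in condition A and 5 in condition B). The object names are ours, chosen only for this illustration.

```
# Cell means for the inconsistent-main-effect example above
A_level1 <- 5; A_level2 <- 10   # condition A: IV2 adds 5
B_level1 <- 5; B_level2 <- 5    # condition B: IV2 adds nothing

# The "main effect" of IV2 comes from averaging over the interaction
mean(c(A_level2, B_level2)) - mean(c(A_level1, B_level1))  # 7.5 - 5 = 2.5

# The simple effects tell the real story
A_level2 - A_level1   # 5
B_level2 - B_level1   # 0
```

The 2.5 "main effect" is just the average of a 5 and a 0; it does not describe what IV2 does in either condition on its own.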
Throughout this book we keep reminding you that research designs can take different forms. The manipulations can be between-subjects (different subjects in each group), or within-subjects (everybody contributes data in all conditions). If you have more than one manipulation, you can have a mixed design when one of your IVs is between-subjects and one of the other ones is within-subjects. The only "trick" to these designs is to use the appropriate error terms to construct the F-values for each effect. Effects that have a within-subjects repeated measure (IV) use different error terms than effects that only have a between-subjects IV. In principle, you could run an ANOVA with any number of IVs, and any of them could be between- or within-subjects variables. Because this is an introductory textbook, we leave out a full discussion of mixed designs. What we are leaving out are the formulas to construct ANOVA tables that show how to use the correct error terms for each effect. There are many good, more advanced textbooks that discuss these issues in much more depth. And, these things can all be Googled. This is a bit of a cop-out on our part, and we may return to fill in this section at some point in the future (or perhaps someone else will add a chapter about this). In the lab manual, you will learn how to conduct a mixed design ANOVA using software. Generally speaking, the software takes care of the problem of using the correct error terms to construct the ANOVA table.

10.04: More complicated designs

Up until now we have focused on the simplest case for factorial designs, the 2x2 design, with two IVs, each with 2 levels. It is worth spending some time looking at a few more complicated designs and how to interpret them.

2x3 design

In a 2x3 design there are two IVs. IV1 has two levels, and IV2 has three levels. Typically, there would be one DV. Let's talk about the main effects and interaction for this design. First, let's make the design concrete. Let's imagine we are running a memory experiment. We give people some words to remember, and then test them to see how many they can correctly remember. Our DV is proportion correct. We know that people forget things over time. Our first IV will be time of test, immediate vs. 1 week. The time of test IV will produce a forgetting effect. Generally, people will have a higher proportion correct on an immediate test of their memory for things they just saw, compared to testing a week later. We might be interested in manipulations that reduce the amount of forgetting that happens over the week. The second IV could be many things. Let's make it the number of times people got to study the items before the memory test: once, twice, or three times. We call IV2 the repetition manipulation. We might expect data that looks like this: The figure shows some pretend means in all conditions. Let's talk about the main effects and interaction. First, the main effect of delay (time of test) is very obvious, the red line is way above the aqua line. Proportion correct on the memory test is always higher when the memory test is taken immediately compared to after one week. Second, the main effect of repetition seems to be clearly present. The more times people saw the items before the memory test (once, twice, or three times), the more they remembered, as measured by increasingly higher proportion correct as a function of the number of repetitions. Is there an interaction? Yes, there is. Remember, an interaction occurs when the effect of one IV depends on the levels of another.
The delay IV measures the forgetting effect. Does the size of the forgetting effect change across the levels of the repetition variable? Yes it does. With one repetition the forgetting effect is .9 - .6 = .3. With two repetitions, the forgetting effect is a little bit smaller, and with three, the forgetting effect is even smaller still. So, the size of the forgetting effect changes as a function of the levels of the repetition IV. There is evidence in the means for an interaction. You would have to conduct an inferential test on the interaction term to see if these differences were likely or unlikely to be due to sampling error. If there was no interaction, and say, no main effect of repetition, we would see something like this: What would you say about the interaction if you saw something like this: The correct answer is that there is evidence in the means for an interaction. Remember, we are measuring the forgetting effect (effect of delay) three times. The forgetting effect is the same for repetition conditions 1 and 2, but it is much smaller for repetition condition 3. The size of the forgetting effect depends on the levels of the repetition IV, so here again there is an interaction.

2x2x2 designs

Let's take it up a notch and look at a 2x2x2 design. Here, there are three IVs with 2 levels each. There are three main effects, three two-way (2x2) interactions, and one 3-way (2x2x2) interaction. We will use the same example as before but add an additional manipulation of the kind of material that is to be remembered. For example, we could present words during an encoding phase either visually or spoken (auditory) over headphones. Now we have two panels, one for auditory and one for visual. You can think of the 2x2x2 as two 2x2s, one for auditory and one for visual. What's the take home from this example data? We can see that the graphs for auditory and visual are the same. They both show a 2x2 interaction between delay and repetition. People forgot more things across the week when they studied the material once, compared to when they studied the material twice. There is a main effect of delay, there is a main effect of repetition, there is no main effect of modality, and there is no three-way interaction. What is a three-way interaction anyway? That would occur if there was a difference between the 2x2 interactions. For example, consider the next pattern of results. We are looking at a 3-way interaction between modality, repetition and delay. What is going on here? These results would be very strange; here is an interpretation. For auditory stimuli, we see that there is a small forgetting effect when people studied things once, but the forgetting effect gets bigger if they studied things twice. A pattern like this would generally be very strange; usually people would do better if they got to review the material twice. The visual stimuli show a different pattern. Here, the forgetting effect is large when studying visual things once, and it gets smaller when studying visual things twice. We see that there is an interaction between delay (the forgetting effect) and repetition for the auditory stimuli; BUT, this interaction effect is different from the interaction effect we see for the visual stimuli. The 2x2 interaction for the auditory stimuli is different from the 2x2 interaction for the visual stimuli. In other words, there is an interaction between the two interactions; as a result there is a three-way interaction, called a 2x2x2 interaction. We will note a general pattern here.
Imagine you had a 2x2x2x2 design. That would have a 4-way interaction. What would that mean? It would mean that the pattern of the 2x2x2 interaction changes across the levels of the 4th IV. If two three-way interactions are different, then there is a four-way interaction.
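Here is a small sketch of the "interaction between interactions" idea in numbers. The cell means are made up to roughly mimic the strange pattern described above (the auditory forgetting effect grows with repetition while the visual forgetting effect shrinks); none of these values come from a real dataset.

```
# Hypothetical proportion-correct means: modality x repetition x delay
# Auditory: the forgetting effect grows from one to two repetitions
aud_1rep <- c(immediate = .80, week = .70)   # forgetting effect = .10
aud_2rep <- c(immediate = .85, week = .65)   # forgetting effect = .20

# Visual: the forgetting effect shrinks from one to two repetitions
vis_1rep <- c(immediate = .80, week = .60)   # forgetting effect = .20
vis_2rep <- c(immediate = .85, week = .75)   # forgetting effect = .10

# 2x2 interaction contrast within each modality:
# how much the forgetting effect changes from one to two repetitions
aud_interaction <- (aud_2rep["immediate"] - aud_2rep["week"]) -
                   (aud_1rep["immediate"] - aud_1rep["week"])   #  .10
vis_interaction <- (vis_2rep["immediate"] - vis_2rep["week"]) -
                   (vis_1rep["immediate"] - vis_1rep["week"])   # -.10

# The three-way (2x2x2) interaction contrast: the two 2x2 interactions differ
aud_interaction - vis_interaction                                #  .20
```

If that last difference were zero, the two 2x2 interactions would be the same, and there would be no three-way interaction.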
You may have noticed that throughout this book so far we have analyzed a lot of fake data. We used R to simulate pretend numbers, and then we analyzed those numbers. We also, from time to time, loaded in some "real" data, and analyzed that. In your labs each week, you have been analyzing a lot of real data. You might be thinking that the simulations we ran were just for educational purposes, to show you how things work. That's partly true, that's one reason we ran so many simulations. At the same time, conducting simulations to understand how data behaves is a legitimate branch of statistics. There are some problems out there where we don't have really good analytic math formulas to tell us the correct answer, so we create and run simulations to approximate the answer. I'm going to say something mildly controversial right now: If you can't simulate your data, then you probably don't really understand your data or how to analyze it. Perhaps this is too bold of a statement. There are many researchers out there who have never simulated their data, and it might be too much to claim that they don't really understand their data because they didn't simulate. Perhaps. There are also many students who have taken statistics classes, and learned how to press some buttons, or copy some code, to analyze some real data; but who never learned how to run simulations. Perhaps my statement applies more to those students, who I believe would benefit greatly from learning some simulation tricks.

11: Simulating Data

There are many good reasons to learn simulation techniques, here are some:

1. You force yourself to consider the details of your design: how many subjects, how many conditions, how many observations per condition per subject, and how you will store and represent the data to describe all of these details when you run the experiment.
2. You force yourself to consider the kinds of numbers you will be collecting. Specifically, the distributional properties of those numbers. You will have to make decisions about the distributions that you sample from in your simulation, and thinking about this issue helps you better understand your own data when you get it.
3. You learn a bit of computer programming, and this is a very useful general skill that you can build upon to do many things.
4. You can make reasonable and informed assumptions about how your experiment might turn out, and then use the results of your simulation to choose parameters for your design (such as number of subjects, number of observations per condition and subject) that will improve the sensitivity of your design to detect the effects you are interested in measuring.
5. You can even run simulations on the data that you collect to learn more about how it behaves, and to do other kinds of advanced statistics that we don't discuss in this book.
6. You get to improve your intuitions about how data behaves when you measure it. You can test your intuitions by running simulations, and you can learn things you didn't know to begin with. Simulations can be highly informative.
7. When you simulate data in advance of collecting real data, you can work out exactly what kinds of tests you are planning to perform, and you will have already written your analysis code, so it will be ready and waiting for you as soon as you collect the data.

OK, so that's just a few reasons why simulations are useful.
The basic idea here is actually pretty simple. You make some assumptions about how many subjects will be in your design (set N), you make some assumptions about the distributions that you will be sampling your scores from, then you use R to fabricate fake data according to the parameters you set. Once you build some simulated data, you can conduct a statistical analysis that you would be planning to run on the real data. Then you can see what happens. More importantly, you can repeat the above process many times. This is similar to conducting a replication of your experiment to see if you find the same thing, only you make the computer replicate your simulation 1000s of times. This way you can see how your simulated experiment would turn out over the long run. For example, you might find that the experiment you are planning to run will only produce a "significant" result 25% of the time, and that's not very good. Your simulation might also tell you that if you increase your N by, say, 25, that could really help, and your new experiment with the larger N might succeed 90% of the time. That's information worth knowing.

Before we go into more simulation details, let's just run a quick one. We'll do an independent samples \(t\)-test. Imagine we have a study with N=10 in each group. There are two groups. We are measuring heart rate. Let's say we know that heart rate is on average 100 beats per minute with a standard deviation of 7. We are going to measure heart rate in condition A where nothing happens, and we are going to measure heart rate in condition B while people watch a scary movie. We think the scary movie might increase heart rate by 5 beats per minute. Let's run a simulation of this:

```
group_A <- rnorm(10,100,7)
group_B <- rnorm(10,105,7)
t.test(group_A,group_B,var.equal = TRUE)
```

```
	Two Sample t-test

data:  group_A and group_B
t = -1.7061, df = 18, p-value = 0.1052
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -11.434802   1.185828
sample estimates:
mean of x mean of y 
 98.20342 103.32791 
```

We sampled 10 scores from a normal distribution for each group. We changed the mean for group_B to 105, because we were thinking their heart rate would be 5 more than group A. We ran one \(t\)-test, and we got a result. This result tells us what happens for this one simulation. We could learn more by repeating the simulation 1000 times, saving the \(p\)-values from each replication, and then finding out how many of our 1000 simulated experiments give us a significant result:

```
save_ps <- numeric(1000)   # a place to store the p-value from each simulation
for(i in 1:1000){
  group_A <- rnorm(10,100,7)
  group_B <- rnorm(10,105,7)
  t_results <- t.test(group_A,group_B,var.equal = TRUE)
  save_ps[i] <- t_results$p.value
}
prop_p <- length(save_ps[save_ps<0.05])/1000
print(prop_p)
```

```
[1] 0.344
```

Now this is more interesting. We found that 34.4% of simulated experiments had a \(p\)-value less than 0.05. That's not very good. If you were going to collect data in this kind of experiment, and you made the correct assumptions about the mean and standard deviation of the distribution, and you made the correct assumption about the size of the difference between the groups, you would be planning to run an experiment that would not work out most of the time. What happens if we increase the number of subjects to 50 in each group?
```
save_ps <- numeric(1000)   # a place to store the p-value from each simulation
for(i in 1:1000){
  group_A <- rnorm(50,100,7)
  group_B <- rnorm(50,105,7)
  t_results <- t.test(group_A,group_B,var.equal = TRUE)
  save_ps[i] <- t_results$p.value
}
prop_p <- length(save_ps[save_ps<0.05])/1000
print(prop_p)
```

```
[1] 0.957
```

Ooh, look, almost all of the experiments are significant now. So, it would be better to use 50 subjects per group than 10 per group according to this simulation. Of course, you might already be wondering about so many different kinds of things. How can we plausibly know the parameters for the distribution we are sampling from? Isn't this all just guesswork? We'll discuss some of these issues as we move forward in this chapter.
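As a quick sanity check on simulations like these, R's built-in power.t.test() computes the same long-run success rate analytically for a two-sample t-test. This is just a cross-check sketch; the power values it returns should land close to the simulated proportions above (roughly 0.34 with n = 10 per group and roughly 0.95 with n = 50 per group).

```
# Analytic power for the heart-rate example: true difference = 5, sd = 7
power.t.test(n = 10, delta = 5, sd = 7, sig.level = 0.05, type = "two.sample")
power.t.test(n = 50, delta = 5, sd = 7, sig.level = 0.05, type = "two.sample")
```

If the analytic answer and the simulation disagree badly, that usually means there is a bug somewhere in the simulation code, which is exactly the kind of checking the fireside chat was encouraging.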
We've already seen some code for simulating a \(t\)-test 1000 times, saving the \(p\)-values, and then calculating the proportion of simulations that are significant (p<0.05). It looked like this:

```
save_ps <- numeric(1000)   # a place to store the p-value from each simulation
for(i in 1:1000){
  group_A <- rnorm(50,100,7)
  group_B <- rnorm(50,105,7)
  t_results <- t.test(group_A,group_B,var.equal = TRUE)
  save_ps[i] <- t_results$p.value
}
prop_p <- length(save_ps[save_ps<0.05])/1000
print(prop_p)
```

```
[1] 0.953
```

You could play around with that, and it would be very useful I think. Is there anything else that we can do that would be more useful? Sure there is. With the above simulation, you have to change N or the mean difference each time to see how the proportion of significant experiments turns out. It would be nice to look at a graph where we could vary the number of subjects, and the size of the mean difference. That's what the next simulation does. This kind of simulation can make your computer do some hard work depending on how many simulations you run. To make my computer do less work, we will only run 100 simulations for each parameter. But, what we will do is vary the number of subjects from 10 to 50 (steps of 10), and vary the size of the effect from 0 to 20 in steps of 4.

A graph like this is very helpful to look at. Generally, before we run an experiment, we might not have a very good idea of the size of the effect that our manipulation might cause. Will it be a mean difference of 0 (no effect), or 5, or 10, or 20? If you are doing something new, you just might not have a good idea about this. You would know in general that bigger effects are easier to detect. You would be able to detect smaller and smaller effects if you ran more and more subjects. When you run this kind of simulation, you can vary the possible mean differences and the number of subjects at the same time, and then see what happens.

When the mean difference is 0, we should get an average of 5% (or a 0.05 proportion) of experiments being significant. This is what we expect by chance, and it doesn't matter how many subjects we run. When there is no difference, we will reject the null 5% of the time (these would all be type 1 errors). How about when there is a difference of 4? This is a pretty small effect. If we only run 10 subjects in each group, we can see that less than 25% of simulated experiments would show significant results. If we wanted a higher chance of success to measure an effect of this size, then we should go up to 40-50 subjects; that would get us around 75% success rates. If that's not good enough for you (25% failures, remember, that's still a lot), then re-run the simulation with even more subjects. Another thing worth pointing out is that if the mean difference is bigger than about 12.5, you can see that all of the designs produce significant outcomes nearly 100% of the time. If you knew this, perhaps you would simply run 10-20 subjects in your experiment, rather than 50. After all, 10-20 is just fine for detecting the effect, and 50 subjects might be a waste of resources (both yours and your participants').

11.04: Simulating one-factor ANOVAs

The following builds simulated data for a one-factor ANOVA, appropriate for a between-subjects design. We build the data frame containing a column for the group factor levels, and a column for the DV. Then, we run the ANOVA and print it out.
```
library(xtable)
N <- 100   # subjects per group
groups <- rep(c("A","B","C"), each=N)
DV <- c(rnorm(N,10,15),   # scores for group A
        rnorm(N,10,15),   # scores for group B
        rnorm(N,20,15)    # scores for group C
        )
sim_df <- data.frame(groups,DV)
aov_results <- summary(aov(DV~groups, sim_df))
knitr::kable(xtable(aov_results))
```

| | Df | Sum Sq | Mean Sq | F value | Pr(>F) |
|---|---|---|---|---|---|
| groups | 2 | 1187.127 | 593.5635 | 2.683555 | 0.0699765 |
| Residuals | 297 | 65692.093 | 221.1855 | NA | NA |

In this next example, we simulate the same design 100 times, save the \(p\)-values, and then determine the proportion of significant simulations.

```
N <- 100   # subjects per group
save_p <- numeric(100)   # a place to store the p-value from each simulation
for(i in 1:100){
  groups <- rep(c("A","B","C"), each=N)
  DV <- c(rnorm(N,10,15),   # scores for group A
          rnorm(N,10,15),   # scores for group B
          rnorm(N,20,15)    # scores for group C
          )
  sim_df <- data.frame(groups,DV)
  aov_results <- summary(aov(DV~groups, sim_df))
  save_p[i] <- aov_results[[1]]$`Pr(>F)`[1]
}
length(save_p[save_p<0.05])/100
```

0.07

11.05: Other resources

OK, it's a Tuesday, the summer is almost over. I've spent most of this summer (2018) writing this textbook, because we are using it this Fall 2018. Because I am running out of time, I need to finish this and make sure everything is in place for the course to work. As a result, I am not going to finish this chapter right now. The nice thing about this book is that I (and other people) can fill things in over time. We have shown a few examples of data simulation, so that's at least something. If you want to see more examples, I suggest you check out this chapter: https://crumplab.github.io/programmingforpsych/simulating-and-analyzing-data-in-r.html#simulating-data-for-multi-factor-designs This section will get longer as I find more resources to add, and hopefully the entire chapter will get longer as I add in more examples over time.
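As one more example to play with, here is a sketch of the kind of simulation described in the t-test section above, where the number of subjects (10 to 50 in steps of 10) and the size of the mean difference (0 to 20 in steps of 4) are varied at the same time, with 100 simulations per combination. It reuses the heart-rate parameters (mean 100, sd 7) from earlier; the object names are ours, the plot at the end uses ggplot2 as one way to draw the graph, and with only 100 simulations per cell the proportions will bounce around a bit.

```
# Proportion of significant simulated t-tests for each N and mean difference
Ns     <- seq(10, 50, 10)
diffs  <- seq(0, 20, 4)
n_sims <- 100

results <- expand.grid(N = Ns, mean_difference = diffs)
results$prop_significant <- NA

for (row in 1:nrow(results)) {
  N    <- results$N[row]
  diff <- results$mean_difference[row]
  ps   <- numeric(n_sims)
  for (i in 1:n_sims) {
    group_A <- rnorm(N, 100, 7)
    group_B <- rnorm(N, 100 + diff, 7)
    ps[i] <- t.test(group_A, group_B, var.equal = TRUE)$p.value
  }
  results$prop_significant[row] <- mean(ps < 0.05)
}

# One line per mean difference: proportion significant as a function of N
library(ggplot2)
ggplot(results, aes(x = N, y = prop_significant,
                    group = mean_difference, color = factor(mean_difference))) +
  geom_point() + geom_line()
```

With the mean difference at 0, every N should hover around the 0.05 false-positive rate, and the larger differences should climb toward 1.0 as N grows, which is the pattern described in the t-test section.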
You might be happy that this is the last chapter (so far) of this textbook. At this point we are in the last weeks of our introductory statistics course. It's called "introductory" for a reason. We have covered far less about statistics than we have left uncovered. There's just too much out there to cover in one short semester. In this chapter we acknowledge some of the things we haven't yet covered, and treat them as things that you should think about. If there is one take home message that we want to get across to you, it's that when you ask questions with data, you should be able to justify how you answer those questions.

12: Thinking about Answering Questions with Data

If you already knew something about statistics while you were reading this book, you might have noticed that we neglected to discuss the topic of effect size, and we barely talked about statistical power. We will talk a little bit about these things here. First, it is worth pointing out that over the years, at least in Psychology, many societies and journals have made recommendations about how researchers should report their statistical analyses. Among the recommendations is that measures of "effect size" should be reported. Similarly, many journals now require that researchers report an "a priori" power analysis (the recommendation is that this should be done before the data is collected). Because these recommendations are so prevalent, it is worth discussing what these ideas refer to. At the same time, the meaning of effect size and power somewhat depends on your "philosophical" bent, and these two ideas can become completely meaningless depending on how you think of statistics. For these complicating reasons we have suspended our discussion of the topic until now. The question or practice of using measures of effect size and conducting power analyses is also a good example of the more general need to think about what you are doing. If you are going to report effect sizes and conduct power analyses, these activities should not be done blindly just because someone else recommends that you do them; these activities and other suitable ones should be done as a part of justifying what you are doing. It is a part of thinking about how to make your data answer questions for you.

Chance vs. real effects

Let's rehash something we've said over and over again. First, researchers are interested in whether their manipulation causes a change in their measurement. If it does, they can become confident that they have uncovered a causal force (the manipulation). However, we know that differences in the measure between experimental conditions can arise by chance alone, just by sampling error. In fact, we can create pictures that show us the window of chance for a given statistic; these tell us roughly the range and likelihoods of getting various differences just by chance. With these windows in hand, we can then determine whether the differences we found in some data that we collected were likely or unlikely to be due to chance. We also learned that sample size plays a big role in the shape of the chance window. Small samples give chance a large opportunity to make big differences. Large samples give chance a small opportunity to make big differences. The general lesson up to this point has been: design an experiment with a large enough sample to detect the effect of interest. If your design isn't well formed, you could easily be measuring noise, and your differences could be caused by sampling error.
Generally speaking, this is still a very good lesson: better designs produce better data, and you can't fix a broken design with statistics. There is clearly another thing that can determine whether or not your differences are due to chance. That is the effect itself. If the manipulation does cause a change, then there is an effect, and that effect is a real one. Effects refer to differences in the measurement between experimental conditions. The thing about effects is that they can be big or small; they have a size. For example, you can think of a manipulation in terms of the size of its hammer. A strong manipulation is like a jack-hammer: it is loud, it produces a big effect, it creates huge differences. A medium manipulation is like a regular hammer: it works, you can hear it, it drives a nail into wood, but it doesn't destroy concrete like a jack-hammer; it produces a reliable effect. A small manipulation is like tapping something with a pencil: it does something, you can barely hear it, and only in a quiet room; it doesn't do a good job of driving a nail into wood, and it does nothing to concrete; it produces tiny, unreliable effects. Finally, a really small effect would be hammering something with a feather: it leaves almost no mark and does nothing that is obviously perceptible to nails or pavement. The lesson is, if you want to break up concrete, use a jack-hammer; or, if you want to measure your effect, make your manipulation stronger (like a jack-hammer) so it produces a bigger difference.

Effect size: concrete vs. abstract notions

Generally speaking, the big concept of effect size is simply how big the differences are, that's it. However, the bigness or smallness of effects quickly becomes a little bit complicated. On the one hand, the raw difference in the means can be very meaningful. Let's say we are measuring performance on a final exam, and we are testing whether or not a miracle drug can make you do better on the test. Let's say taking the drug makes you do 5% better on the test, compared to not taking the drug. You know what 5% means, that's basically a whole letter grade. Pretty good. An effect-size of 25% would be even better, right? Lots of measures have a concrete quality to them, and we often want the size of the effect expressed in terms of the original measure. Let's talk about concrete measures some more. How about learning a musical instrument? Let's say it takes 10,000 hours to become an expert piano, violin, or guitar player. And, let's say you found something online that says that using their method, you will learn the instrument in less time than normal. That is a claim about the effect size of their method. You would want to know how big the effect is, right? For example, the effect size could be 10 hours. That would mean it would take you 9,990 hours to become an expert (that's a whole 10 hours less). If I knew the effect size was so tiny, I wouldn't bother with their new method. But, if the effect size was say 1,000 hours, that's a pretty big deal, that's 10% less (still doesn't seem like much, but saving 1,000 hours seems like a lot). Just as often as we have concrete measures that are readily interpretable, Psychology often produces measures that are extremely difficult to interpret. For example, questionnaire measures often have no concrete meaning, and only an abstract statistical meaning.
If you wanted to know whether a manipulation caused people to be more or less happy, and you used a questionnaire to measure happiness, you might find that people were 50 happy in condition 1, and 60 happy in condition 2; that's a difference of 10 happy units. But how much is 10? Is that a big or small difference? It's not immediately obvious. What is the solution here? A common solution is to provide a standardized measure of the difference, like a z-score. For example, if a difference of 10 reflected a shift of one standard deviation, that would be useful to know, and that would be a sizeable shift. If the difference was only a .1 shift in terms of standard deviation, then the difference of 10 wouldn't be very large. We elaborate on this idea next in describing Cohen's d.

Cohen's d

Let's look at a few distributions to firm up some ideas about effect-size. In the graph below you will see four panels. The first panel (0) represents the null distribution of no differences. This is the idea that your manipulation (A vs. B) doesn't do anything at all; as a result, when you measure scores in conditions A and B, you are effectively sampling scores from the very same overall distribution. The panel shows the distribution as green for condition B, but the red one for condition A is identical and drawn underneath (it's invisible). There is 0 difference between these distributions, so it represents a null effect.

The remaining panels are hypothetical examples of what a true effect could look like, when your manipulation actually causes a difference. For example, if condition A is a control group, and condition B is a treatment group, we are looking at three cases where the treatment manipulation causes a positive shift in the mean of the distribution. We are using normal curves with mean = 0 and sd = 1 for this demonstration, so a shift of .5 is a shift of half of a standard deviation. A shift of 1 is a shift of 1 standard deviation, and a shift of 2 is a shift of 2 standard deviations. We could draw many more examples showing even bigger shifts, or shifts that go in the other direction.

Let's look at another example, but this time we'll use some concrete measurements. Let's say we are looking at final exam performance, so our numbers are grade percentages. Let's also say that we know the mean on the test is 65%, with a standard deviation of 5%. Group A could be a control that just takes the test; Group B could receive some "educational" manipulation designed to improve the test score. These graphs then show us some hypotheses about what the manipulation may or may not be doing. The first panel shows that both conditions A and B will sample test scores from the same distribution (mean = 65, with 0 effect). The other panels show a shifted mean for condition B (the treatment that is supposed to increase test performance). So, the treatment could increase the test performance by 2.5% (mean 67.5, .5 sd shift), or by 5% (mean 70, 1 sd shift), or by 10% (mean 75, 2 sd shift), or by any other amount. In terms of our previous metaphor, a shift of 2 standard deviations is more like a jack-hammer in terms of size, and a shift of .5 standard deviations is more like using a pencil. The thing about research is that we often have no clue about whether our manipulation will produce a big or small effect; that's why we are conducting the research.

You might have noticed that the letter $d$ appears in the above figure. Why is that?
Jacob Cohen used the letter $d$ in defining the effect-size for this situation, and now everyone calls it Cohen’s $d$. The formula for Cohen’s $d$ is: $d = \frac{\text{mean for condition 1} - \text{mean for condition 2}}{\text{population standard deviation}} \nonumber$ If you notice, this is just a kind of z-score. It is a way to standardize the mean difference in terms of the population standard deviation. It is also worth noting again that this measure of effect-size is entirely hypothetical for most purposes. In general, researchers do not know the population standard deviation, they can only guess at it, or estimate it from the sample. The same goes for means, in the formula these are hypothetical mean differences in two population distributions. In practice, researchers do not know these values, they guess at them from their samples. Before discussing why the concept of effect-size can be useful, we note that Cohen’s $d$ is useful for understanding abstract measures. For example, when you don’t know what a difference of 10 or 20 means as a raw score, you can standardize the difference by the sample standard deviation, then you know roughly how big the effect is in terms of standard units. If you thought a 20 was big, but it turned out to be only 1/10th of a standard deviation, then you would know the effect is actually quite small with respect to the overall variability in the data.
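To make the calculation concrete, here is a minimal R sketch of estimating Cohen's $d$ from two samples. The numbers are made up for illustration, and the pooled sample standard deviation is used as a stand-in for the unknown population standard deviation (a common convention, though not the only one).

```
# Hypothetical "happiness" scores in two conditions (made-up numbers)
A <- c(48, 52, 55, 47, 50, 53, 49, 51)
B <- c(58, 62, 60, 57, 63, 59, 61, 60)

# Pooled sample standard deviation, standing in for the unknown
# population standard deviation
n_A <- length(A)
n_B <- length(B)
sd_pooled <- sqrt(((n_A - 1) * var(A) + (n_B - 1) * var(B)) / (n_A + n_B - 2))

# Cohen's d: the standardized mean difference
d <- (mean(B) - mean(A)) / sd_pooled
d
```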
12.02: Power

When there is a true effect out there to measure, you want to make sure your design is sensitive enough to detect the effect; otherwise, what's the point? We've already talked about the idea that an effect can have different sizes. The next idea is that your design can be more or less sensitive in its ability to reliably measure the effect. We have discussed this general idea many times already in the textbook; for example, we know that we will be more likely to detect "significant" effects (when there are real differences) when we increase our sample-size. Here, we will talk about the idea of design sensitivity in terms of the concept of power. Interestingly, the concept of power is a somewhat limited concept, in that it only exists as a concept within some philosophies of statistics.

A digression about hypothesis testing

In particular, the concept of power falls out of the Neyman-Pearson concept of null vs. alternative hypothesis testing. Up to this point, we have largely avoided this terminology. This is perhaps a disservice, in that the Neyman-Pearson ideas are by now the most common and widespread, and in the opinion of some of us, they are also the most widely misunderstood and abused ideas, which is why we have avoided them until now.

What we have been mainly doing is talking about hypothesis testing from the Fisherian (Sir Ronald Fisher, the ANOVA guy) perspective. This is a basic perspective that we think can't be easily ignored. It is also quite limited. The basic idea is this:

1. We know that chance can cause some differences when we measure something between experimental conditions.
2. We want to rule out the possibility that the difference that we observed was due to chance.
3. We construct large N designs that permit us to do this when a real effect is observed, such that we can confidently say that big differences that we find are so big (well outside the chance window) that it is highly implausible that chance alone could have produced them.
4. The final conclusion is that chance was extremely unlikely to have produced the differences. We then infer that something else, like the manipulation, must have caused the difference.
5. We don't say anything else about the something else.
6. We either reject the null distribution as an explanation (that chance couldn't have done it), or retain the null (admit that chance could have done it, and if it did, we couldn't tell the difference between what we found and what chance could do).

Neyman and Pearson introduced one more idea to this mix: the idea of an alternative hypothesis. The alternative hypothesis is the idea that if there is a true effect, then the data sampled into each condition of the experiment must have come from two different distributions. Remember, when there is no effect we assume all of the data came from the same distribution (which, by definition, can't produce true differences in the long run, because all of the numbers are coming from the same distribution). The graphs of effect-sizes from before show examples of these alternative distributions, with samples for condition A coming from one distribution, and samples for condition B coming from a shifted distribution with a different mean.

So, under the Neyman-Pearson tradition, when a researcher finds a significant effect they do more than one thing. First, they reject the null hypothesis of no differences, and they accept the alternative hypothesis that there were differences. This seems like a sensible thing to do.
And, because the researcher is actually interested in the properties of the real effect, they might be interested in learning more about the actual alternative hypothesis; that is, they might want to know if their data come from two different distributions that were separated by some amount. In other words, they would want to know the size of the effect that they were measuring.

Back to power

We have now discussed enough ideas to formalize the concept of statistical power. For this concept to exist, we need to do a couple of things:

1. Agree to set an alpha criterion. When the p-value for our test-statistic is below this value we will call our finding statistically significant, and agree to reject the null hypothesis and accept the "alternative" hypothesis (sidenote: usually it isn't very clear which specific alternative hypothesis was accepted).
2. In advance of conducting the study, figure out what kinds of effect-sizes our design is capable of detecting with particular probabilities.

The power of a study is determined by the relationship between:

1. The sample-size of the study
2. The effect-size of the manipulation
3. The alpha value set by the researcher

To see this in practice, let's do a simulation. We will do a t-test on a between-groups design with 10 subjects in each group. Group A will be a control group with scores sampled from a normal distribution with a mean of 10 and a standard deviation of 5. Group B will be a treatment group; we will say the treatment has an effect-size of Cohen's \(d\) = .5, that is, a standard deviation shift of .5, so the scores will come from a normal distribution with mean = 12.5 and standard deviation of 5. Remember, 1 standard deviation here is 5, so half of a standard deviation is 2.5.

The following R script runs this simulated experiment 1000 times. We set the alpha criterion to .05; this means we will reject the null whenever the \(p\)-value is less than .05. With this specific design, how many times out of 1000 do we reject the null and accept the alternative hypothesis?

```
# preallocate a vector to hold the p-value from each simulated experiment
p <- numeric(1000)
for (i in 1:1000) {
  A <- rnorm(10, 10, 5)
  B <- rnorm(10, 12.5, 5)
  p[i] <- t.test(A, B, var.equal = TRUE)$p.value
}
length(p[p < .05])
```

179

The answer is that we reject the null, and accept the alternative, 179 times out of 1000. In other words, our experiment successfully accepts the alternative hypothesis 17.9 percent of the time; this is known as the power of the study. Power is the probability that a design will successfully detect an effect of a specific size. Importantly, power is an abstract idea that is completely determined by many assumptions, including N, effect-size, and alpha. As a result, it is best not to think of power as a single number, but instead as a family of numbers.

For example, power is different when we change N. If we increase N, our samples will more precisely estimate the true distributions that they came from. Increasing N reduces sampling error, and shrinks the range of differences that can be produced by chance. Let's increase our N in this simulation from 10 to 20 in each group and see what happens.

```
p <- numeric(1000)
for (i in 1:1000) {
  A <- rnorm(20, 10, 5)
  B <- rnorm(20, 12.5, 5)
  p[i] <- t.test(A, B, var.equal = TRUE)$p.value
}
length(p[p < .05])
```

360

Now the number of significant experiments is 360 out of 1000, or a power of 36 percent. That's roughly double what it was before. We have made the design more sensitive to the effect by increasing N.
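As an aside, these simulation-based estimates can be checked analytically. The sketch below (not part of the original simulations) uses base R's power.t.test(); delta is the raw mean difference of 2.5 and sd is 5, which corresponds to d = .5. It reports power of roughly .19 for N = 10 and roughly .34 for N = 20 per group, close to the simulated 17.9% and 36%.

```
# Analytic power for the two designs simulated above (a check, not part
# of the original code): delta = 2.5 is the mean difference, sd = 5
power.t.test(n = 10, delta = 2.5, sd = 5, sig.level = 0.05)
power.t.test(n = 20, delta = 2.5, sd = 5, sig.level = 0.05)
```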
We can change the power of the design by changing the alpha-value, which tells us how much evidence we need to reject the null. For example, if we set the alpha criterion to 0.01, then we will be more conservative, only rejecting the null when chance can produce the observed difference 1% of the time. In our example, this will have the effect of reducing power. Let's keep N at 20, but reduce the alpha to 0.01 and see what happens:

```
p <- numeric(1000)
for (i in 1:1000) {
  A <- rnorm(20, 10, 5)
  B <- rnorm(20, 12.5, 5)
  p[i] <- t.test(A, B, var.equal = TRUE)$p.value
}
length(p[p < .01])
```

138

Now only 138 out of 1000 experiments are significant; that's 13.8 percent power.

Finally, the power of the design depends on the actual size of the effect caused by the manipulation. In our example, we hypothesized that the effect caused a shift of .5 standard deviations. What if the effect causes a bigger shift? Say, a shift of 2 standard deviations. Let's keep N = 20 and alpha = .01, but change the effect-size to two standard deviations (so group B has a mean of 20). When the effect in the real world is bigger, it should be easier to measure, so our power will increase.

```
p <- numeric(1000)
for (i in 1:1000) {
  A <- rnorm(20, 10, 5)
  B <- rnorm(20, 20, 5)
  p[i] <- t.test(A, B, var.equal = TRUE)$p.value
}
length(p[p < .01])
```

1000

Neat: if the effect-size is actually huge (a 2 standard deviation shift), then we have essentially 100 percent power to detect the true effect.

Power curves

We mentioned that it is best to think of power as a family of numbers, rather than as a single number. To elaborate on this, consider the power curve below. This is the power curve for a specific design: a between-groups experiment with two levels that uses an independent samples t-test to test whether an observed difference is due to chance. Critically, N is set to 10 in each group, and alpha is set to .05. Power (as a proportion, not a percentage) is plotted on the y-axis, and effect-size (Cohen's d) in standard deviation units is plotted on the x-axis. A power curve like this one is very helpful for understanding the sensitivity of a particular design. For example, we can see that a between-subjects design with N = 10 in both groups will detect an effect of d = .5 (half a standard deviation shift) about 20% of the time, will detect an effect of d = .8 about 50% of the time, and will detect an effect of d = 2 about 100% of the time. All of the percentages reflect the power of the design, which is the percentage of times the design would be expected to find a \(p\) < 0.05.

Let's imagine that, based on prior research, the effect you are interested in measuring is fairly small, d = 0.2. If you want to run an experiment that will detect an effect of this size a large percentage of the time, how many subjects do you need to have in each group? We know from the above graph that with N = 10, power is very low to detect an effect of d = 0.2. Let's make another graph, but vary the number of subjects rather than the size of the effect. The figure plots power to detect an effect of d = 0.2, as a function of N. The green line shows where power = .8, or 80%. It looks like we would need about 380 subjects in each group to measure an effect of d = 0.2 with power = .8. This means that 80% of our experiments would successfully show p < 0.05. Oftentimes a power of 80% is recommended as a reasonable level of power; however, even when your design has power = 80%, your experiment will still fail to find an effect (associated with that level of power) 20% of the time!
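If you don't want to build these curves by simulation, power.t.test() can sketch them directly. The snippet below is one way to draw the power curve for N = 10 per group, and to solve for the per-group sample size needed to detect d = 0.2 with 80% power; the analytic answer comes out just under 400 per group, in the same ballpark as the estimate read off the figure.

```
# Power curve for N = 10 per group (alpha = .05), computed with
# power.t.test() rather than by simulation
d <- seq(0.1, 2, 0.1)
power <- sapply(d, function(es) {
  power.t.test(n = 10, delta = es, sd = 1, sig.level = 0.05)$power
})
plot(d, power, type = "l", xlab = "Cohen's d", ylab = "Power")

# Per-group sample size needed to detect d = 0.2 with 80% power
power.t.test(delta = 0.2, sd = 1, sig.level = 0.05, power = 0.8)
```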
12.03: Planning your design

Our discussion of effect size and power highlights the importance of understanding the statistical limitations of an experimental design. In particular, we have seen the relationship between:

1. Sample-size
2. Effect-size
3. Alpha criterion
4. Power

As a general rule of thumb, small N designs can only reliably detect very large effects, whereas large N designs can reliably detect much smaller effects. As a researcher, it is your responsibility to plan your design accordingly so that it is capable of reliably detecting the kinds of effects it is intended to measure.

12.04: Some considerations

Low powered studies

Consider the following case. A researcher runs a study to detect an effect of interest. There is good reason, from prior research, to believe the effect-size is d = 0.5. The researcher uses a design that has 30% power to detect the effect. They run the experiment and find a significant p-value (p < .05). They conclude their manipulation worked, because it was unlikely that their result could have been caused by chance.

How would you interpret the results of a study like this? Would you agree with the researchers that the manipulation likely caused the difference? Would you be skeptical of the result?

The situation above requires thinking about two kinds of probabilities. On the one hand, we know that the result observed by the researchers does not occur often by chance (p is less than 0.05). At the same time, we know that the design was underpowered; it only detects results of the expected size 30% of the time. We are faced with wondering what kind of luck was driving the difference. The researchers could have gotten unlucky, and the difference really could be due to chance. In this case, they would be making a type I error (saying the result is real when it isn't). If the result was not due to chance, then they would also be lucky, as their design only detects this effect 30% of the time.

Perhaps another way to look at this situation is in terms of the replicability of the result. Replicability refers to whether or not the findings of the study would be the same if the experiment was repeated. Because we know that power is low here (only 30%), we would expect that most replications of this experiment would not find a significant effect. Instead, the experiment would be expected to replicate only 30% of the time.

Large N and small effects

Perhaps you have noticed that there is an intriguing relationship between N (sample-size), power, and effect-size. As N increases, so does power to detect an effect of a particular size. Additionally, as N increases, a design is capable of detecting smaller and smaller effects with greater and greater power. For example, if N were large enough, we would have high power to detect very small effects, say d = 0.01, or even d = 0.001. Let's think about what this means. Imagine a drug company told you that they ran an experiment with 1 billion people to test whether their drug causes a significant change in headache pain. Let's say they found a significant effect (with power = 100%), but the effect was very small; it turns out the drug reduces headache pain by less than 1%, let's say 0.01%. For our imaginary study we will also assume that this effect is very real, and not caused by chance. Clearly the design had enough power to detect the effect, and the effect was there, so the design did detect the effect. However, the issue is that there is little practical value to this effect.
Nobody is going to buy a drug to reduce their headache pain by 0.01%, even if it was "scientifically proven" to work. This example brings up two issues. First, increasing N to very large levels will allow designs to detect almost any effect (even very tiny ones) with very high power. Second, sometimes effects are meaningless when they are very small, especially in applied research such as drug studies.

These two issues can lead to interesting suggestions. For example, someone might claim that large N studies aren't very useful, because they can always detect really tiny effects that are practically meaningless. On the other hand, large N studies will also detect larger effects too, and they will give a better estimate of the "true" effect in the population (because we know that larger samples do a better job of estimating population parameters). Additionally, although really small effects are often not interesting in the context of applied research, they can be very important in theoretical research. For example, one theory might predict that manipulating X should have no effect, but another theory might predict that X does have an effect, even if it is a small one. So, detecting a small effect can have theoretical implications that help rule out false theories. Generally speaking, researchers asking both theoretical and applied questions should think about and establish guidelines for "meaningful" effect-sizes, so that they can run designs of appropriate size to detect effects of "meaningful" size.

Small N and Large effects

All other things being equal, would you trust the results from a study with small N or large N? This isn't a trick question, but sometimes people tie themselves into a knot trying to answer it. We already know that large sample-sizes provide better estimates of the distributions the samples come from. As a result, we can safely conclude that we should trust the data from large N studies more than small N studies.

At the same time, you might try to convince yourself otherwise. For example, you know that large N studies can detect very small effects that are practically and possibly even theoretically meaningless. You also know that small N studies are only capable of reliably detecting very large effects. So, you might reason that a small N study is better than a large N study because if a small N study detects an effect, that effect must be big and meaningful; whereas a large N study could easily detect an effect that is tiny and meaningless.

This line of thinking needs some improvement. First, just because a large N study can detect small effects doesn't mean that it only detects small effects. If the effect is large, a large N study will easily detect it. Large N studies have the power to detect a much wider range of effects, from small to large. Second, just because a small N study detected an effect does not mean that the effect is real, or that the effect is large. For example, small N studies have more variability, so the estimate of the effect size will have more error. Also, there is a 5% (or alpha rate) chance that the effect was spurious. Interestingly, there is a pernicious relationship between effect-size and type I error rate.

Type I errors are convincing when N is small

So what is this pernicious relationship between Type I errors and effect-size? Mainly, this relationship is pernicious for small N studies. For example, the following figure illustrates the results of 1000s of simulated experiments, all assuming the null distribution.
In other words, for all of these simulations there is no true effect, as the numbers are all sampled from an identical distribution (a normal distribution with mean = 0 and standard deviation = 1). The true effect-size is 0 in all cases. We know that under the null, researchers will find p-values that are less than 0.05 about 5% of the time; remember, that is the definition. So, if a researcher happened to be in this situation (where their manipulation did absolutely nothing), they would make a type I error 5% of the time, or, if they conducted 100 experiments, they would expect to find a significant result for 5 of them.

The following graph reports the findings from only the type I errors, where the simulated study did produce p < 0.05. For each type I error, we calculated the exact p-value, as well as the effect-size (Cohen's d: the mean difference divided by the standard deviation). We already know that the true effect-size is zero; however, take a look at this graph, and pay close attention to the smaller sample-sizes. For example, look at the red dots, when the sample size is 10. Here we see that the effect-sizes are quite large. When p is near 0.05 the effect-size is around .8, and it goes up and up as p gets smaller and smaller.

What does this mean? It means that when you get unlucky with a small N design, and your manipulation does not work, but you by chance find a "significant" effect, the effect-size measurement will show you a "big effect". This is the pernicious aspect. When you make a type I error with small N, your data will make you think there is no way it could be a type I error, because the effect is just so big! Notice that when N is very large, like 1000, the measure of effect-size approaches 0 (which is the true effect-size in the simulation).
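The simulation behind a figure like this is easy to sketch. In the code below (our sketch, not the original script) both groups are sampled from the same distribution, every experiment's p-value and Cohen's d are recorded, and only the "significant" experiments are kept. With n = 10 per group the surviving effect-sizes are large (around .9 or more in this version of the calculation), while with n = 1000 they sit close to the true value of 0.

```
# Sketch of the idea behind the figure: both groups come from the same
# normal distribution (true effect-size = 0); record p and Cohen's d
set.seed(1)
simulate_null <- function(n, nsims = 10000) {
  p <- d <- numeric(nsims)
  for (i in 1:nsims) {
    A <- rnorm(n, 0, 1)
    B <- rnorm(n, 0, 1)
    p[i] <- t.test(A, B, var.equal = TRUE)$p.value
    sd_pooled <- sqrt((var(A) + var(B)) / 2)   # equal n, so a simple average
    d[i] <- abs(mean(A) - mean(B)) / sd_pooled
  }
  data.frame(p = p, d = d)
}

# Keep only the type I errors (p < .05) and look at their effect-sizes
sims_10   <- subset(simulate_null(10),   p < .05)
sims_1000 <- subset(simulate_null(1000), p < .05)
summary(sims_10$d)     # "significant" effects look big with small N
summary(sims_1000$d)   # ...and shrink toward 0 with large N
```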
• 1.1: Data analysis steps. A systematic, step-by-step approach is the best way to decide how to analyze biological data.
• 1.2: Types of Biological Variables. One of the first steps in deciding which statistical test to use is determining what kinds of variables you have. When you know what the relevant variables are, what kind of variables they are, and what your null and alternative hypotheses are, it's usually pretty easy to figure out which test you should use. I classify variables into three types: measurement variables, nominal variables, and ranked variables.
• 1.3: Probability. When dealing with probabilities in biology, you are often working with theoretical expectations, not population samples. For example, in a genetic cross of two individual Drosophila melanogaster that are heterozygous at the vestigial locus, Mendel's theory predicts that the probability of an offspring individual being a recessive homozygote (having teeny-tiny wings) is one-fourth, or 0.25. This is equivalent to saying that one-fourth of a population of offspring will have tiny wings.
• 1.4: Basic Concepts of Hypothesis Testing. The technique used by the vast majority of biologists, and the technique that most of this handbook describes, is sometimes called "frequentist" or "classical" statistics. It involves testing a null hypothesis by comparing the data you observe in your experiment with the predictions of a null hypothesis. You estimate what the probability would be of obtaining the observed results, or something more extreme, if the null hypothesis were true.
• 1.5: Confounding Variables. A confounding variable is a variable that may affect the dependent variable. This can lead to erroneous conclusions about the relationship between the independent and dependent variables. You deal with confounding variables by controlling them; by matching; by randomizing; or by statistical control.

01: Basics

Learning Objectives
• How to determine the best way to analyze a biological experiment

How to determine the appropriate statistical test

A systematic, step-by-step approach is the best way to decide how to analyze biological data. It is recommended that you follow these steps:

1. Specify the biological question you are asking.
2. Put the question in the form of a biological null hypothesis and alternate hypothesis.
3. Put the question in the form of a statistical null hypothesis and alternate hypothesis.
4. Determine which variables are relevant to the question.
5. Determine what kind of variable each one is.
6. Design an experiment that controls or randomizes the confounding variables.
7. Based on the number of variables, the kinds of variables, the expected fit to the parametric assumptions, and the hypothesis to be tested, choose the best statistical test to use.
8. If possible, do a power analysis to determine a good sample size for the experiment.
9. Do the experiment.
10. Examine the data to see if it meets the assumptions of the statistical test you chose (primarily normality and homoscedasticity for tests of measurement variables). If it doesn't, choose a more appropriate test.
11. Apply the statistical test you chose, and interpret the results.
12. Communicate your results effectively, usually with a graph or table.

As you work your way through this textbook, you'll learn about the different parts of this process. One important point for you to remember: "do the experiment" is step 9, not step 1. You should do a lot of thinking, planning, and decision-making before you do an experiment.
If you do this, you'll have an experiment that is easy to understand, easy to analyze and interpret, answers the questions you're trying to answer, and is neither too big nor too small. If you just slap together an experiment without thinking about how you're going to do the statistics, you may end up needing more complicated and obscure statistical tests, getting results that are difficult to interpret and explain to others, and maybe using too many subjects (thus wasting your resources) or too few subjects (thus wasting the whole experiment). Here's an example of how the procedure works. Verrelli and Eanes (2001) measured glycogen content in Drosophila melanogaster individuals. The flies were polymorphic at the genetic locus that codes for the enzyme phosphoglucomutase (PGM). At site \(52\) in the PGM protein sequence, flies had either a valine or an alanine. At site \(484\), they had either a valine or a leucine. All four combinations of amino acids (V-V, V-L, A-V, A-L) were present. 1. One biological question is "Do the amino acid polymorphisms at the Pgm locus have an effect on glycogen content?" The biological question is usually something about biological processes, often in the form "Does changing \(X\) cause a change in \(Y\)?" You might want to know whether a drug changes blood pressure; whether soil pH affects the growth of blueberry bushes; or whether protein Rab10 mediates membrane transport to cilia. 2. The biological null hypothesis is "Different amino acid sequences do not affect the biochemical properties of PGM, so glycogen content is not affected by PGM sequence." The biological alternative hypothesis is "Different amino acid sequences do affect the biochemical properties of PGM, so glycogen content is affected by PGM sequence." By thinking about the biological null and alternative hypotheses, you are making sure that your experiment will give different results for different answers to your biological question. 3. The statistical null hypothesis is "Flies with different sequences of the PGM enzyme have the same average glycogen content." The alternate hypothesis is "Flies with different sequences of PGM have different average glycogen contents." While the biological null and alternative hypotheses are about biological processes, the statistical null and alternative hypotheses are all about the numbers; in this case, the glycogen contents are either the same or different. Testing your statistical null hypothesis is the main subject of this handbook, and it should give you a clear answer; you will either reject or accept that statistical null. Whether rejecting a statistical null hypothesis is enough evidence to answer your biological question can be a more difficult, more subjective decision; there may be other possible explanations for your results, and you as an expert in your specialized area of biology will have to consider how plausible they are. 4. The two relevant variables in the Verrelli and Eanes experiment are glycogen content and PGM sequence. 5. Glycogen content is a measurement variable, something that you record as a number that could have many possible values. The sequence of PGM that a fly has (V-V, V-L, A-V or A-L) is a nominal variable, something with a small number of possible values (four, in this case) that you usually record as a word. 6. 
Other variables that might be important, such as age and where in a vial the fly pupated, were either controlled (flies of all the same age were used) or randomized (flies were taken randomly from the vials without regard to where they pupated). It also would have been possible to observe the confounding variables; for example, Verrelli and Eanes could have used flies of different ages, and then used a statistical technique that adjusted for the age. This would have made the analysis more complicated to perform and more difficult to explain, and while it might have turned up something interesting about age and glycogen content, it would not have helped address the main biological question about PGM genotype and glycogen content. 7. Because the goal is to compare the means of one measurement variable among groups classified by one nominal variable, and there are more than two categories, the appropriate statistical test is a one-way anova. Once you know what variables you're analyzing and what type they are, the number of possible statistical tests is usually limited to one or two (at least for tests I present in this handbook). 8. A power analysis would have required an estimate of the standard deviation of glycogen content, which probably could have been found in the published literature, and a number for the effect size (the variation in glycogen content among genotypes that the experimenters wanted to detect). In this experiment, any difference in glycogen content among genotypes would be interesting, so the experimenters just used as many flies as was practical in the time available. 9. The experiment was done: glycogen content was measured in flies with different PGM sequences. 10. The anova assumes that the measurement variable, glycogen content, is normal (the distribution fits the bell-shaped normal curve) and homoscedastic (the variances in glycogen content of the different PGM sequences are equal), and inspecting histograms of the data shows that the data fit these assumptions. If the data hadn't met the assumptions of anova, the Kruskal–Wallis test or Welch's test might have been better. 11. The one-way anova was done, using a spreadsheet, web page, or computer program, and the result of the anova is a \(P\) value less than \(0.05\). The interpretation is that flies with some PGM sequences have different average glycogen content than flies with other sequences of PGM. 12. The results could be summarized in a table, but a more effective way to communicate them is with a graph:
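Alongside the graphical summary, here is a hedged sketch of what step 11 (the one-way anova) might look like in R, using made-up glycogen values rather than the actual Verrelli and Eanes data; only the structure of the call is the point.

```
# Hypothetical glycogen data for the four PGM genotypes (made-up values,
# not the Verrelli and Eanes measurements)
set.seed(1)
flies <- data.frame(
  genotype = factor(rep(c("V-V", "V-L", "A-V", "A-L"), each = 10)),
  glycogen = c(rnorm(10, 60, 8), rnorm(10, 55, 8),
               rnorm(10, 52, 8), rnorm(10, 50, 8))
)

# One-way anova: do mean glycogen contents differ among genotypes?
fit <- aov(glycogen ~ genotype, data = flies)
summary(fit)   # reports the P value for the genotype effect
```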
Learning Objectives • To identify the types of variables in an experiment in order to choose the correct method of analysis. Introduction One of the first steps in deciding which statistical test to use is determining what kinds of variables you have. When you know what the relevant variables are, what kind of variables they are, and what your null and alternative hypotheses are, it's usually pretty easy to figure out which test you should use. I classify variables into three types: measurement variables, nominal variables, and ranked variables. You'll see other names for these variable types and other ways of classifying variables in other statistics references, so try not to get confused. You'll analyze similar experiments, with similar null and alternative hypotheses, completely differently depending on which of these three variable types are involved. For example, let's say you've measured variable X in a sample of \(56\) male and \(67\) female isopods (Armadillidium vulgare, commonly known as pillbugs or roly-polies), and your null hypothesis is "Male and female A. vulgare have the same values of variable \(X\)." If variable \(X\) is width of the head in millimeters, it's a measurement variable, and you'd compare head width in males and females with a two-sample t–test or a one-way analysis of variance (anova). If variable \(X\) is a genotype (such as \(AA\), \(Aa\), or \(aa\)), it's a nominal variable, and you'd compare the genotype frequencies in males and females with a Fisher's exact test. If you shake the isopods until they roll up into little balls, then record which is the first isopod to unroll, the second to unroll, etc., it's a ranked variable and you'd compare unrolling time in males and females with a Kruskal–Wallis test. Types of Variables There are three main types of variables: • Measurement variables, which are expressed as numbers (such as \(3.7mm\)) • Nominal variables, which are expressed as names (such as "female") • Ranked variables, which are expressed as positions (such as "third") Measurement variables Measurement variables are, as the name implies, things you can measure. An individual observation of a measurement variable is always a number. Examples include length, weight, pH, and bone density. Other names for them include "numeric" or "quantitative" variables. Some authors divide measurement variables into two types. One type is continuous variables, such as length of an isopod's antenna, which in theory have an infinite number of possible values. The other is discrete (or meristic) variables, which only have whole number values; these are things you count, such as the number of spines on an isopod's antenna. The mathematical theories underlying statistical tests involving measurement variables assume that the variables are continuous. Luckily, these statistical tests work well on discrete measurement variables, so you usually don't need to worry about the difference between continuous and discrete measurement variables. The only exception would be if you have a very small number of possible values of a discrete variable, in which case you might want to treat it as a nominal variable instead. When you have a measurement variable with a small number of values, it may not be clear whether it should be considered a measurement or a nominal variable. For example, let's say your isopods have \(20\) to \(55\) spines on their left antenna, and you want to know whether the average number of spines on the left antenna is different between males and females. 
You should consider spine number to be a measurement variable and analyze the data using a two-sample t–test or a one-way anova. If there are only two different spine numbers—some isopods have \(32\) spines, and some have \(33\)—you should treat spine number as a nominal variable, with the values "\(32\)" and "\(33\)" and compare the proportions of isopods with \(32\) or \(33\) spines in males and females using a Fisher's exact test of independence (or chi-square or G–test of independence, if your sample size is really big). The same is true for laboratory experiments; if you give your isopods food with \(15\) different mannose concentrations and then measure their growth rate, mannose concentration would be a measurement variable; if you give some isopods food with \(5mM\) mannose, and the rest of the isopods get \(25mM\) mannose, then mannose concentration would be a nominal variable. But what if you design an experiment with three concentrations of mannose, or five, or seven? There is no rigid rule, and how you treat the variable will depend in part on your null and alternative hypotheses. If your alternative hypothesis is "different values of mannose have different rates of isopod growth," you could treat mannose concentration as a nominal variable. Even if there's some weird pattern of high growth on zero mannose, low growth on small amounts, high growth on intermediate amounts, and low growth on high amounts of mannose, a one-way anova could give a significant result. If your alternative hypothesis is "isopods grow faster with more mannose," it would be better to treat mannose concentration as a measurement variable, so you can do a regression. The following rule of thumb can be used: • a measurement variable with only two values should be treated as a nominal variable • a measurement variable with six or more values should be treated as a measurement variable • a measurement variable with three, four or five values does not exist Of course, in the real world there are experiments with three, four or five values of a measurement variable. Simulation studies show that analyzing such dependent variables with the methods used for measurement variables works well (Fagerland et al. 2011). I am not aware of any research on the effect of treating independent variables with small numbers of values as measurement or nominal. Your decision about how to treat your variable will depend in part on your biological question. You may be able to avoid the ambiguity when you design the experiment—if you want to know whether a dependent variable is related to an independent variable that could be measurement, it's a good idea to have at least six values of the independent variable. Something that could be measured is a measurement variable, even when you set the values. For example, if you grow isopods with one batch of food containing \(10mM\) mannose, another batch of food with \(20mM\) mannose, another batch with \(30mM\) mannose, etc. up to \(100mM\) mannose, the different mannose concentrations are a measurement variable, even though you made the food and set the mannose concentration yourself. Be careful when you count something, as it is sometimes a nominal variable and sometimes a measurement variable. For example, the number of bacteria colonies on a plate is a measurement variable; you count the number of colonies, and there are \(87\) colonies on one plate, \(92\) on another plate, etc. 
Each plate would have one data point, the number of colonies; that's a number, so it's a measurement variable. However, if the plate has red and white bacteria colonies and you count the number of each, it is a nominal variable. Now, each colony is a separate data point with one of two values of the variable, "red" or "white"; because that's a word, not a number, it's a nominal variable. In this case, you might summarize the nominal data with a number (the percentage of colonies that are red), but the underlying data are still nominal. Ratios Sometimes you can simplify your statistical analysis by taking the ratio of two measurement variables. For example, if you want to know whether male isopods have bigger heads, relative to body size, than female isopods, you could take the ratio of head width to body length for each isopod, and compare the mean ratios of males and females using a two-sample t–test. However, this assumes that the ratio is the same for different body sizes. We know that's not true for humans—the head size/body size ratio in babies is freakishly large, compared to adults—so you should look at the regression of head width on body length and make sure the regression line goes pretty close to the origin, as a straight regression line through the origin means the ratios stay the same for different values of the \(X\) variable. If the regression line doesn't go near the origin, it would be better to keep the two variables separate instead of calculating a ratio, and compare the regression line of head width on body length in males to that in females using an analysis of covariance. Circular variables One special kind of measurement variable is a circular variable. These have the property that the highest value and the lowest value are right next to each other; often, the zero point is completely arbitrary. The most common circular variables in biology are time of day, time of year, and compass direction. If you measure time of year in days, Day 1 could be January 1, or the spring equinox, or your birthday; whichever day you pick, Day 1 is adjacent to Day 2 on one side and Day 365 on the other. If you are only considering part of the circle, a circular variable becomes a regular measurement variable. For example, if you're doing a polynomial regression of bear attacks vs. time of the year in Yellowstone National Park, you could treat "month" as a measurement variable, with March as \(1\) and November as \(9\); you wouldn't have to worry that February (month \(12\)) is next to March, because bears are hibernating in December through February, and you would ignore those three months. However, if your variable really is circular, there are special, very obscure statistical tests designed just for circular data; chapters 26 and 27 in Zar (1999) are a good place to start. Nominal variables Nominal variables classify observations into discrete categories. Examples of nominal variables include sex (the possible values are male or female), genotype (values are \(AA\), \(Aa\), or \(aa\)), or ankle condition (values are normal, sprained, torn ligament, or broken). A good rule of thumb is that an individual observation of a nominal variable can be expressed as a word, not a number. If you have just two values of what would normally be a measurement variable, it's nominal instead: think of it as "present" vs. "absent" or "low" vs. "high." Nominal variables are often used to divide individuals up into categories, so that other variables may be compared among the categories. 
In the comparison of head width in male vs. female isopods, the isopods are classified by sex, a nominal variable, and the measurement variable head width is compared between the sexes. Nominal variables are also called categorical, discrete, qualitative, or attribute variables. "Categorical" is a more common name than "nominal," but some authors use "categorical" to include both what I'm calling "nominal" and what I'm calling "ranked," while other authors use "categorical" just for what I'm calling nominal variables. I'll stick with "nominal" to avoid this ambiguity. Nominal variables are often summarized as proportions or percentages. For example, if you count the number of male and female A. vulgare in a sample from Newark and a sample from Baltimore, you might say that \(52.3\%\) of the isopods in Newark and \(62.1\%\) of the isopods in Baltimore are female. These percentages may look like a measurement variable, but they really represent a nominal variable, sex. You determined the value of the nominal variable (male or female) on \(65\) isopods from Newark, of which \(34\) were female and \(31\) were male. You might plot \(52.3\%\) on a graph as a simple way of summarizing the data, but you should use the \(34\) female and \(31\) male numbers in all statistical tests. It may help to understand the difference between measurement and nominal variables if you imagine recording each observation in a lab notebook. If you are measuring head widths of isopods, an individual observation might be "\(3.41mm\)." That is clearly a measurement variable. An individual observation of sex might be "female," which clearly is a nominal variable. Even if you don't record the sex of each isopod individually, but just counted the number of males and females and wrote those two numbers down, the underlying variable is a series of observations of "male" and "female." Ranked variables Ranked variables, also called ordinal variables, are those for which the individual observations can be put in order from smallest to largest, even though the exact values are unknown. If you shake a bunch of A. vulgare up, they roll into balls, then after a little while start to unroll and walk around. If you wanted to know whether males and females unrolled at the same time, but your stopwatch was broken, you could pick up the first isopod to unroll and put it in a vial marked "first," pick up the second to unroll and put it in a vial marked "second," and so on, then sex the isopods after they've all unrolled. You wouldn't have the exact time that each isopod stayed rolled up (that would be a measurement variable), but you would have the isopods in order from first to unroll to last to unroll, which is a ranked variable. While a nominal variable is recorded as a word (such as "male") and a measurement variable is recorded as a number (such as "\(4.53\)"), a ranked variable can be recorded as a rank (such as "seventh"). You could do a lifetime of biology and never use a true ranked variable. When I write an exam question involving ranked variables, it's usually some ridiculous scenario like "Imagine you're on a desert island with no ruler, and you want to do statistics on the size of coconuts. You line them up from smallest to largest...." For a homework assignment, I ask students to pick a paper from their favorite biological journal and identify all the variables, and anyone who finds a ranked variable gets a donut; I've had to buy four donuts in \(13\) years. 
The only common biological ranked variables I can think of are dominance hierarchies in behavioral biology (see the dog example on the Kruskal-Wallis page) and developmental stages, such as the different instars that molting insects pass through. The main reason that ranked variables are important is that the statistical tests designed for ranked variables (called "non-parametric tests") make fewer assumptions about the data than the statistical tests designed for measurement variables. Thus the most common use of ranked variables involves converting a measurement variable to ranks, then analyzing it using a non-parametric test. For example, let's say you recorded the time that each isopod stayed rolled up, and that most of them unrolled after one or two minutes. Two isopods, who happened to be male, stayed rolled up for \(30\) minutes. If you analyzed the data using a test designed for a measurement variable, those two sleepy isopods would cause the average time for males to be much greater than for females, and the difference might look statistically significant. When converted to ranks and analyzed using a non-parametric test, the last and next-to-last isopods would have much less influence on the overall result, and you would be less likely to get a misleadingly "significant" result if there really isn't a difference between males and females. Some variables are impossible to measure objectively with instruments, so people are asked to give a subjective rating. For example, pain is often measured by asking a person to put a mark on a \(10cm\) scale, where \(0cm\) is "no pain" and \(10cm\) is "worst possible pain." This is not a ranked variable; it is a measurement variable, even though the "measuring" is done by the person's brain. For the purpose of statistics, the important thing is that it is measured on an "interval scale"; ideally, the difference between pain rated \(2\) and \(3\) is the same as the difference between pain rated \(7\) and \(8\). Pain would be a ranked variable if the pains at different times were compared with each other; for example, if someone kept a pain diary and then at the end of the week said "Tuesday was the worst pain, Thursday was second worst, Wednesday was third, etc...." These rankings are not an interval scale; the difference between Tuesday and Thursday may be much bigger, or much smaller, than the difference between Thursday and Wednesday. Just like with measurement variables, if there are a very small number of possible values for a ranked variable, it would be better to treat it as a nominal variable. For example, if you make a honeybee sting people on one arm and a yellowjacket sting people on the other arm, then ask them "Was the honeybee sting the most painful or the second most painful?", you are asking them for the rank of each sting. But you should treat the data as a nominal variable, one which has three values ("honeybee is worse" or "yellowjacket is worse" or "subject is so mad at your stupid, painful experiment that they refuse to answer").

Categorizing

It is possible to convert a measurement variable to a nominal variable, dividing individuals up into two or more classes based on ranges of the variable.
For example, if you are studying the relationship between levels of HDL (the "good cholesterol") and blood pressure, you could measure the HDL level, then divide people into two groups, "low HDL" (less than \(40mg/dl\)) and "normal HDL" (\(40\) or more \(mg/dl\)) and compare the mean blood pressures of the two groups, using a nice simple two-sample t–test. Converting measurement variables to nominal variables ("dichotomizing" if you split into two groups, "categorizing" in general) is common in epidemiology, psychology, and some other fields. However, there are several problems with categorizing measurement variables (MacCallum et al. 2002). One problem is that you'd be discarding a lot of information; in our blood pressure example, you'd be lumping together everyone with HDL from \(0\) to \(39mg/dl\) into one group. This reduces your statistical power, decreasing your chances of finding a relationship between the two variables if there really is one. Another problem is that it would be easy to consciously or subconsciously choose the dividing line ("cutpoint") between low and normal HDL that gave an "interesting" result. For example, if you did the experiment thinking that low HDL caused high blood pressure, and a couple of people with HDL between \(40\) and \(45\) happened to have high blood pressure, you might put the dividing line between low and normal at \(45mg/dl\). This would be cheating, because it would increase the chance of getting a "significant" difference if there really isn't one.

To illustrate the problem with categorizing, let's say you wanted to know whether tall basketball players weigh more than short players. Here's data for the 2012-2013 men's basketball team at Morgan State University (Table 1.2.1):

Table 1.2.1: 2012-2013 men's basketball team at Morgan State University

Height (inches)   Weight (pounds)
69                180
72                185
74                170
74                190
74                220
76                200
77                190
77                225
78                215
78                225
80                210
81                208
81                220
86                270

If you keep both variables as measurement variables and analyze using linear regression, you get a \(P\) value of \(0.0007\); the relationship is highly significant. Tall basketball players really are heavier, as is obvious from the graph. However, if you divide the heights into two categories, "short" (\(77\) inches or less) and "tall" (more than \(77\) inches) and compare the mean weights of the two groups using a two-sample t–test, the \(P\) value is \(0.043\), which is barely significant at the usual \(P<0.05\) level. And if you also divide the weights into two categories, "light" (\(210\) pounds and less) and "heavy" (greater than \(210\) pounds), you get \(6\) who are short and light, \(2\) who are short and heavy, \(2\) who are tall and light, and \(4\) who are tall and heavy. The proportion of short people who are heavy is not significantly different from the proportion of tall people who are heavy, when analyzed using Fisher's exact test (\(P=0.28\)). So by categorizing both measurement variables, you have made an obvious, highly significant relationship between height and weight become completely non-significant. This is not a good thing. I think it's better for most biological experiments if you don't categorize.

Likert items

Social scientists like to use Likert items. They'll present a statement like: "It's important for all biologists to learn statistics" and ask people to choose
• 1=Strongly Disagree
• 2=Disagree
• 3=Neither Agree nor Disagree
• 4=Agree
• 5=Strongly Agree
Sometimes they use seven values instead of five, by adding "Very Strongly Disagree" and "Very Strongly Agree"; and sometimes people are asked to rate their strength of agreement on a \(9\) or \(11\)-point scale. Similar questions may have answers such as
• 1=Never
• 2=Rarely
• 3=Sometimes
• 4=Often
• 5=Always

Strictly speaking, a Likert scale is the result of adding together the scores on several Likert items. Often, however, a single Likert item is called a Likert scale. There is a lot of controversy about how to analyze a Likert item. One option is to treat it as a nominal variable with five (or seven, or however many) values. The data would then be summarized by the proportion of people giving each answer, and analyzed using chi-square or G–tests. However, this ignores the fact that the values go in order from least agreement to most, which is pretty important information. The other options are to treat it as a ranked variable or a measurement variable.

Treating a Likert item as a measurement variable lets you summarize the data using a mean and standard deviation, and analyze the data using the familiar parametric tests such as anova and regression. One argument against treating a Likert item as a measurement variable is that the data have a small number of values that are unlikely to be normally distributed, but the statistical tests used on measurement variables are not very sensitive to deviations from normality, and simulations have shown that tests for measurement variables work well even with small numbers of values (Fagerland et al. 2011). A bigger issue is that the answers on a Likert item are just crude subdivisions of some underlying measure of feeling, and the difference between "Strongly Disagree" and "Disagree" may not be the same size as the difference between "Disagree" and "Neither Agree nor Disagree"; in other words, the responses are not a true "interval" variable. As an analogy, imagine you asked a bunch of college students "How much TV do you watch in a typical week?" and you give them the choices of
• 0=None
• 1=A Little
• 2=A Moderate Amount
• 3=A Lot
• 4=Too Much

If the people who said "A Little" watch one or two hours a week, the people who said "A Moderate Amount" watch three to nine hours a week, and the people who said "A Lot" watch \(10\) to \(20\) hours a week, then the difference between "None" and "A Little" is a lot smaller than the difference between "A Moderate Amount" and "A Lot." That would make your \(0-4\) point scale not be an interval variable. If your data actually were in hours, then the difference between \(0\) hours and \(1\) hour is the same size as the difference between \(19\) hours and \(20\) hours; "hours" would be an interval variable. Personally, I don't see how treating values of a Likert item as a measurement variable will cause any statistical problems. It is, in essence, a data transformation: applying a mathematical function to one variable to come up with a new variable. In chemistry, pH is the base-\(10\) logarithm of the reciprocal of the hydrogen activity, so the difference in hydrogen activity between a pH \(5\) and pH \(6\) solution is much bigger than the difference between pH \(8\) and pH \(9\). But I don't think anyone would object to treating pH as a measurement variable.
Converting \(25-44\) on some underlying "agreeicity index" to "\(2\)" and converting \(45-54\) to "\(3\)" doesn't seem much different from converting hydrogen activity to pH, or micropascals of sound to decibels, or squaring a person's height to calculate body mass index. The impression I get, from briefly glancing at the literature, is that many of the people who use Likert items in their research treat them as measurement variables, while most statisticians think this is outrageously incorrect. I think treating them as measurement variables has several advantages, but you should carefully consider the practice in your particular field; it's always better if you're speaking the same statistical language as your peers. Because there is disagreement, you should include the number of people giving each response in your publications; this will provide all the information that other researchers need to analyze your data using the technique they prefer. All of the above applies to statistics done on a single Likert item. The usual practice is to add together a bunch of Likert items into a Likert scale; a political scientist might add the scores on Likert questions about abortion, gun control, taxes, the environment, etc. and come up with a 100-point liberal vs. conservative scale. Once a number of Likert items are added together to make a Likert scale, there seems to be less objection to treating the sum as a measurement variable; even some statisticians are okay with that. Independent and dependent variables Another way to classify variables is as independent or dependent variables. An independent variable (also known as a predictor, explanatory, or exposure variable) is a variable that you think may cause a change in a dependent variable (also known as an outcome or response variable). For example, if you grow isopods with \(10\) different mannose concentrations in their food and measure their growth rate, the mannose concentration is an independent variable and the growth rate is a dependent variable, because you think that different mannose concentrations may cause different growth rates. Any of the three variable types (measurement, nominal or ranked) can be either independent or dependent. For example, if you want to know whether sex affects body temperature in mice, sex would be an independent variable and temperature would be a dependent variable. If you wanted to know whether the incubation temperature of eggs affects sex in turtles, temperature would be the independent variable and sex would be the dependent variable. As you'll see in the descriptions of particular statistical tests, sometimes it is important to decide which is the independent and which is the dependent variable; it will determine whether you should analyze your data with a two-sample t–test or simple logistic regression, for example. Other times you don't need to decide whether a variable is independent or dependent. For example, if you measure the nitrogen content of soil and the density of dandelion plants, you might think that nitrogen content is an independent variable and dandelion density is a dependent variable; you'd be thinking that nitrogen content might affect where dandelion plants live. But maybe dandelions use a lot of nitrogen from the soil, so it's dandelion density that should be the independent variable. Or maybe some third variable that you didn't measure, such as moisture content, affects both nitrogen content and dandelion density. 
For your initial experiment, which you would analyze using correlation, you wouldn't need to classify nitrogen content or dandelion density as independent or dependent. If you found an association between the two variables, you would probably want to follow up with experiments in which you manipulated nitrogen content (making it an independent variable) and observed dandelion density (making it a dependent variable), and other experiments in which you manipulated dandelion density (making it an independent variable) and observed the change in nitrogen content (making it the dependent variable).
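Since a correlation analysis treats the two variables symmetrically, nothing in the calculation depends on which one you call independent. Here is a minimal sketch, with invented nitrogen and dandelion numbers, just to show that symmetry.

```python
from scipy import stats

# Hypothetical data from 8 plots: soil nitrogen (ppm) and dandelion density (plants per square meter)
nitrogen = [12, 18, 25, 31, 36, 42, 48, 55]
dandelions = [4, 6, 5, 9, 11, 10, 14, 16]

r, p = stats.pearsonr(nitrogen, dandelions)
r_swapped, p_swapped = stats.pearsonr(dandelions, nitrogen)   # same answer with the variables swapped
print(f"r = {r:.3f}, P = {p:.4f}")
print(f"r = {r_swapped:.3f}, P = {p_swapped:.4f}")
```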
Learning Objectives • Simple rules about adding and multiplying probabilities The basic idea of a statistical test is to identify a null hypothesis, collect some data, then estimate the probability of getting the observed data if the null hypothesis were true. If the probability of getting a result like the observed one is low under the null hypothesis, you conclude that the null hypothesis is probably not true. It is therefore useful to know a little about probability. One way to think about probability is as the proportion of individuals in a population that have a particular characteristic. The probability of sampling a particular kind of individual is equal to the proportion of that kind of individual in the population. For example, in fall 2013 there were $22,166$ students at the University of Delaware, and $3,679$ of them were graduate students. If you sampled a single student at random, the probability that they would be a grad student would be $\frac{3,679}{22,166}$ or $0.166$. In other words, $16.6\%$ of students were grad students, so if you'd picked one student at random, the probability that they were a grad student would have been $16.6\%$. When dealing with probabilities in biology, you are often working with theoretical expectations, not population samples. For example, in a genetic cross of two individual Drosophila melanogaster that are heterozygous at the vestigial locus, Mendel's theory predicts that the probability of an offspring individual being a recessive homozygote (having teeny-tiny wings) is one-fourth, or $0.25$. This is equivalent to saying that one-fourth of a population of offspring will have tiny wings. Multiplying probabilities You could take a semester-long course on mathematical probability, but most biologists just need to know a few basic principles. You calculate the probability that an individual has one value of a nominal variable AND another value of a second nominal variable by multiplying the probabilities of each value together. For example, if the probability that a Drosophila in a cross has vestigial wings is one-fourth, AND the probability that it has legs where its antennae should be is three-fourths, the probability that it has vestigial wings AND leg-antennae is one-fourth times three-fourths, or $0.25\times 0.75$, or $0.1875$. This estimate assumes that the two values are independent, meaning that the probability of one value is not affected by the other value. In this case, independence would require that the two genetic loci were on different chromosomes, among other things. Adding probabilities The probability that an individual has one value OR another, MUTUALLY EXCLUSIVE, value is found by adding the probabilities of each value together. "Mutually exclusive" means that one individual could not have both values. For example, if the probability that a flower in a genetic cross is red is one-fourth, the probability that it is pink is one-half, and the probability that it is white is one-fourth, then the probability that it is red OR pink is one-fourth plus one-half, or three-fourths. More complicated situations When calculating the probability that an individual has one value OR another, and the two values are NOT MUTUALLY EXCLUSIVE, it is important to break things down into combinations that are mutually exclusive. For example, let's say you wanted to estimate the probability that a fly from the cross above had vestigial wings OR leg-antennae. 
You could calculate the probability for each of the four kinds of flies: normal wings/normal antennae ($0.75\times 0.25=0.1875$), normal wings/leg-antennae ($0.75\times 0.75=0.5625$), vestigial wings/normal antennae ($0.25\times 0.25=0.0625$), and vestigial wings/leg-antennae ($0.25\times 0.75=0.1875$). Then, since the last three kinds of flies are the ones with vestigial wings or leg-antennae, you'd add those probabilities up ($0.5625+0.0625+0.1875=0.8125$).

When to calculate probabilities

While there are probability calculations of some kind underlying all statistical tests, it is rare that you'll have to use the rules listed above. About the only time you'll actually calculate probabilities by adding and multiplying is when figuring out the expected values for a goodness-of-fit test.
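The Drosophila calculation above takes only a few lines to write out; this sketch simply applies the multiplication and addition rules.

```python
p_vestigial = 0.25   # P(vestigial wings)
p_leg_ant = 0.75     # P(leg-antennae)

# AND rule (assuming the two loci are independent): multiply.
p_both = p_vestigial * p_leg_ant                        # 0.1875

# OR rule: break the event into mutually exclusive combinations, then add.
p_norm_norm = (1 - p_vestigial) * (1 - p_leg_ant)       # 0.1875 (neither trait)
p_norm_leg = (1 - p_vestigial) * p_leg_ant              # 0.5625
p_vest_norm = p_vestigial * (1 - p_leg_ant)             # 0.0625
p_vest_leg = p_vestigial * p_leg_ant                    # 0.1875
p_either = p_norm_leg + p_vest_norm + p_vest_leg        # 0.8125
print(p_both, p_either)
# Equivalently, p_either = 1 - p_norm_norm, since "neither trait" is the only excluded combination.
```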
Learning Objectives • One of the main goals of statistical hypothesis testing is to estimate the $P$ value, which is the probability of obtaining the observed results, or something more extreme, if the null hypothesis were true. If the observed results are unlikely under the null hypothesis, reject the null hypothesis. • Alternatives to this "frequentist" approach to statistics include Bayesian statistics and estimation of effect sizes and confidence intervals. Introduction There are different ways of doing statistics. The technique used by the vast majority of biologists, and the technique that most of this handbook describes, is sometimes called "frequentist" or "classical" statistics. It involves testing a null hypothesis by comparing the data you observe in your experiment with the predictions of a null hypothesis. You estimate what the probability would be of obtaining the observed results, or something more extreme, if the null hypothesis were true. If this estimated probability (the $P$ value) is small enough (below the significance value), then you conclude that it is unlikely that the null hypothesis is true; you reject the null hypothesis and accept an alternative hypothesis. Many statisticians harshly criticize frequentist statistics, but their criticisms haven't had much effect on the way most biologists do statistics. Here I will outline some of the key concepts used in frequentist statistics, then briefly describe some of the alternatives. Null Hypothesis The null hypothesis is a statement that you want to test. In general, the null hypothesis is that things are the same as each other, or the same as a theoretical expectation. For example, if you measure the size of the feet of male and female chickens, the null hypothesis could be that the average foot size in male chickens is the same as the average foot size in female chickens. If you count the number of male and female chickens born to a set of hens, the null hypothesis could be that the ratio of males to females is equal to a theoretical expectation of a $1:1$ ratio. The alternative hypothesis is that things are different from each other, or different from a theoretical expectation. For example, one alternative hypothesis would be that male chickens have a different average foot size than female chickens; another would be that the sex ratio is different from $1:1$. Usually, the null hypothesis is boring and the alternative hypothesis is interesting. For example, let's say you feed chocolate to a bunch of chickens, then look at the sex ratio in their offspring. If you get more females than males, it would be a tremendously exciting discovery: it would be a fundamental discovery about the mechanism of sex determination, female chickens are more valuable than male chickens in egg-laying breeds, and you'd be able to publish your result in Science or Nature. Lots of people have spent a lot of time and money trying to change the sex ratio in chickens, and if you're successful, you'll be rich and famous. But if the chocolate doesn't change the sex ratio, it would be an extremely boring result, and you'd have a hard time getting it published in the Eastern Delaware Journal of Chickenology. It's therefore tempting to look for patterns in your data that support the exciting alternative hypothesis. For example, you might look at $48$ offspring of chocolate-fed chickens and see $31$ females and only $17$ males. 
This looks promising, but before you get all happy and start buying formal wear for the Nobel Prize ceremony, you need to ask "What's the probability of getting a deviation from the null expectation that large, just by chance, if the boring null hypothesis is really true?" Only when that probability is low can you reject the null hypothesis. The goal of statistical hypothesis testing is to estimate the probability of getting your observed results under the null hypothesis. Biological vs. Statistical Null Hypotheses It is important to distinguish between biological null and alternative hypotheses and statistical null and alternative hypotheses. "Sexual selection by females has caused male chickens to evolve bigger feet than females" is a biological alternative hypothesis; it says something about biological processes, in this case sexual selection. "Male chickens have a different average foot size than females" is a statistical alternative hypothesis; it says something about the numbers, but nothing about what caused those numbers to be different. The biological null and alternative hypotheses are the first that you should think of, as they describe something interesting about biology; they are two possible answers to the biological question you are interested in ("What affects foot size in chickens?"). The statistical null and alternative hypotheses are statements about the data that should follow from the biological hypotheses: if sexual selection favors bigger feet in male chickens (a biological hypothesis), then the average foot size in male chickens should be larger than the average in females (a statistical hypothesis). If you reject the statistical null hypothesis, you then have to decide whether that's enough evidence that you can reject your biological null hypothesis. For example, if you don't find a significant difference in foot size between male and female chickens, you could conclude "There is no significant evidence that sexual selection has caused male chickens to have bigger feet." If you do find a statistically significant difference in foot size, that might not be enough for you to conclude that sexual selection caused the bigger feet; it might be that males eat more, or that the bigger feet are a developmental byproduct of the roosters' combs, or that males run around more and the exercise makes their feet bigger. When there are multiple biological interpretations of a statistical result, you need to think of additional experiments to test the different possibilities. Testing the Null Hypothesis The primary goal of a statistical test is to determine whether an observed data set is so different from what you would expect under the null hypothesis that you should reject the null hypothesis. For example, let's say you are studying sex determination in chickens. For breeds of chickens that are bred to lay lots of eggs, female chicks are more valuable than male chicks, so if you could figure out a way to manipulate the sex ratio, you could make a lot of chicken farmers very happy. You've fed chocolate to a bunch of female chickens (in birds, unlike mammals, the female parent determines the sex of the offspring), and you get $25$ female chicks and $23$ male chicks. Anyone would look at those numbers and see that they could easily result from chance; there would be no reason to reject the null hypothesis of a $1:1$ ratio of females to males. 
If you got $47$ females and $1$ male, most people would look at those numbers and see that they would be extremely unlikely to happen due to luck, if the null hypothesis were true; you would reject the null hypothesis and conclude that chocolate really changed the sex ratio. However, what if you had $31$ females and $17$ males? That's definitely more females than males, but is it really so unlikely to occur due to chance that you can reject the null hypothesis? To answer that, you need more than common sense, you need to calculate the probability of getting a deviation that large due to chance. P values In the figure above, I used the BINOMDIST function of Excel to calculate the probability of getting each possible number of males, from $0$ to $48$, under the null hypothesis that $0.5$ are male. As you can see, the probability of getting $17$ males out of $48$ total chickens is about $0.015$. That seems like a pretty small probability, doesn't it? However, that's the probability of getting exactly $17$ males. What you want to know is the probability of getting $17$ or fewer males. If you were going to accept $17$ males as evidence that the sex ratio was biased, you would also have accepted $16$, or $15$, or $14$,… males as evidence for a biased sex ratio. You therefore need to add together the probabilities of all these outcomes. The probability of getting $17$ or fewer males out of $48$, under the null hypothesis, is $0.030$. That means that if you had an infinite number of chickens, half males and half females, and you took a bunch of random samples of $48$ chickens, $3.0\%$ of the samples would have $17$ or fewer males. This number, $0.030$, is the $P$ value. It is defined as the probability of getting the observed result, or a more extreme result, if the null hypothesis is true. So "$P=0.030$" is a shorthand way of saying "The probability of getting $17$ or fewer male chickens out of $48$ total chickens, IF the null hypothesis is true that $50\%$ of chickens are male, is $0.030$." False Positives vs. False Negatives After you do a statistical test, you are either going to reject or accept the null hypothesis. Rejecting the null hypothesis means that you conclude that the null hypothesis is not true; in our chicken sex example, you would conclude that the true proportion of male chicks, if you gave chocolate to an infinite number of chicken mothers, would be less than $50\%$. When you reject a null hypothesis, there's a chance that you're making a mistake. The null hypothesis might really be true, and it may be that your experimental results deviate from the null hypothesis purely as a result of chance. In a sample of $48$ chickens, it's possible to get $17$ male chickens purely by chance; it's even possible (although extremely unlikely) to get $0$ male and $48$ female chickens purely by chance, even though the true proportion is $50\%$ males. This is why we never say we "prove" something in science; there's always a chance, however miniscule, that our data are fooling us and deviate from the null hypothesis purely due to chance. When your data fool you into rejecting the null hypothesis even though it's true, it's called a "false positive," or a "Type I error." So another way of defining the $P$ value is the probability of getting a false positive like the one you've observed, if the null hypothesis is true. Another way your data can fool you is when you don't reject the null hypothesis, even though it's not true. 
If the true proportion of female chicks is $51\%$, the null hypothesis of a $50\%$ proportion is not true, but you're unlikely to get a significant difference from the null hypothesis unless you have a huge sample size. Failing to reject the null hypothesis, even though it's not true, is a "false negative" or "Type II error." This is why we never say that our data shows the null hypothesis to be true; all we can say is that we haven't rejected the null hypothesis. Significance Levels Does a probability of $0.030$ mean that you should reject the null hypothesis, and conclude that chocolate really caused a change in the sex ratio? The convention in most biological research is to use a significance level of $0.05$. This means that if the $P$ value is less than $0.05$, you reject the null hypothesis; if $P$ is greater than or equal to $0.05$, you don't reject the null hypothesis. There is nothing mathematically magic about $0.05$, it was chosen rather arbitrarily during the early days of statistics; people could have agreed upon $0.04$, or $0.025$, or $0.071$ as the conventional significance level. The significance level (also known as the "critical value" or "alpha") you should use depends on the costs of different kinds of errors. With a significance level of $0.05$, you have a $5\%$ chance of rejecting the null hypothesis, even if it is true. If you try $100$ different treatments on your chickens, and none of them really change the sex ratio, $5\%$ of your experiments will give you data that are significantly different from a $1:1$ sex ratio, just by chance. In other words, $5\%$ of your experiments will give you a false positive. If you use a higher significance level than the conventional $0.05$, such as $0.10$, you will increase your chance of a false positive to $0.10$ (therefore increasing your chance of an embarrassingly wrong conclusion), but you will also decrease your chance of a false negative (increasing your chance of detecting a subtle effect). If you use a lower significance level than the conventional $0.05$, such as $0.01$, you decrease your chance of an embarrassing false positive, but you also make it less likely that you'll detect a real deviation from the null hypothesis if there is one. The relative costs of false positives and false negatives, and thus the best $P$ value to use, will be different for different experiments. If you are screening a bunch of potential sex-ratio-changing treatments and get a false positive, it wouldn't be a big deal; you'd just run a few more tests on that treatment until you were convinced the initial result was a false positive. The cost of a false negative, however, would be that you would miss out on a tremendously valuable discovery. You might therefore set your significance value to $0.10$ or more for your initial tests. On the other hand, once your sex-ratio-changing treatment is undergoing final trials before being sold to farmers, a false positive could be very expensive; you'd want to be very confident that it really worked. Otherwise, if you sell the chicken farmers a sex-ratio treatment that turns out to not really work (it was a false positive), they'll sue the pants off of you. Therefore, you might want to set your significance level to $0.01$, or even lower, for your final tests. The significance level you choose should also depend on how likely you think it is that your alternative hypothesis will be true, a prediction that you make before you do the experiment. 
This is the foundation of Bayesian statistics, as explained below. You must choose your significance level before you collect the data, of course. If you choose to use a different significance level than the conventional $0.05$, people will be skeptical; you must be able to justify your choice. Throughout this handbook, I will always use $P< 0.05$ as the significance level. If you are doing an experiment where the cost of a false positive is a lot greater or smaller than the cost of a false negative, or an experiment where you think it is unlikely that the alternative hypothesis will be true, you should consider using a different significance level.

One-tailed vs. Two-tailed Probabilities

The probability that was calculated above, $0.030$, is the probability of getting $17$ or fewer males out of $48$. It would be significant, using the conventional $P< 0.05$ criterion. However, what about the probability of getting $17$ or fewer females? If your null hypothesis is "The proportion of males is $0.5$ or more" and your alternative hypothesis is "The proportion of males is less than $0.5$," then you would use the $P=0.03$ value found by adding the probabilities of getting $17$ or fewer males. This is called a one-tailed probability, because you are adding the probabilities in only one tail of the distribution shown in the figure. However, if your null hypothesis is "The proportion of males is $0.5$", then your alternative hypothesis is "The proportion of males is different from $0.5$." In that case, you should add the probability of getting $17$ or fewer females to the probability of getting $17$ or fewer males. This is called a two-tailed probability. If you do that with the chicken result, you get $P=0.06$, which is not quite significant. You should decide whether to use the one-tailed or two-tailed probability before you collect your data, of course. A one-tailed probability is more powerful, in the sense of having a lower chance of false negatives, but you should only use a one-tailed probability if you really, truly have a firm prediction about which direction of deviation you would consider interesting. In the chicken example, you might be tempted to use a one-tailed probability, because you're only looking for treatments that decrease the proportion of worthless male chickens. But if you accidentally found a treatment that produced $87\%$ male chickens, would you really publish the result as "The treatment did not cause a significant decrease in the proportion of male chickens"? I hope not. You'd realize that this unexpected result, even though it wasn't what you and your farmer friends wanted, would be very interesting to other people; by leading to discoveries about the fundamental biology of sex-determination in chickens, it might even help you produce more female chickens someday. Any time a deviation in either direction would be interesting, you should use the two-tailed probability. In addition, people are skeptical of one-tailed probabilities, especially if a one-tailed probability is significant and a two-tailed probability would not be significant (as in our chocolate-eating chicken example). Unless you provide a very convincing explanation, people may think you decided to use the one-tailed probability after you saw that the two-tailed probability wasn't quite significant, which would be cheating. It may be easier to always use two-tailed probabilities.
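The chicken numbers are easy to reproduce with scipy's binomial distribution; this is just a sketch, and doubling the one-tailed value gives the two-tailed value here only because the null proportion of $0.5$ makes the distribution symmetric.

```python
from scipy.stats import binom

n, males, p_null = 48, 17, 0.5

p_exactly_17 = binom.pmf(males, n, p_null)   # probability of exactly 17 males, about 0.015
p_one_tailed = binom.cdf(males, n, p_null)   # 17 or fewer males, about 0.030
p_two_tailed = 2 * p_one_tailed              # about 0.06; doubling is valid because p_null = 0.5
print(p_exactly_17, p_one_tailed, p_two_tailed)
```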
For this handbook, I will always use two-tailed probabilities, unless I make it very clear that only one direction of deviation from the null hypothesis would be interesting.

Reporting your results

In the olden days, when people looked up $P$ values in printed tables, they would report the results of a statistical test as "$P< 0.05$", "$P< 0.01$", "$P>0.10$", etc. Nowadays, almost all computer statistics programs give the exact $P$ value resulting from a statistical test, such as $P=0.029$, and that's what you should report in your publications. You will conclude that the results are either significant or they're not significant; they either reject the null hypothesis (if $P$ is below your pre-determined significance level) or don't reject the null hypothesis (if $P$ is above your significance level). But other people will want to know if your results are "strongly" significant ($P$ much less than $0.05$), which will give them more confidence in your results than if they were "barely" significant ($P=0.043$, for example). In addition, other researchers will need the exact $P$ value if they want to combine your results with others into a meta-analysis. Computer statistics programs can give somewhat inaccurate $P$ values when they are very small. Once your $P$ values get very small, you can just say "$P< 0.00001$" or some other impressively small number. You should also give either your raw data, or the test statistic and degrees of freedom, in case anyone wants to calculate your exact $P$ value.

Effect Sizes and Confidence Intervals

A fairly common criticism of the hypothesis-testing approach to statistics is that the null hypothesis will always be false, if you have a big enough sample size. In the chicken-feet example, critics would argue that if you had an infinite sample size, it is impossible that male chickens would have exactly the same average foot size as female chickens. Therefore, since you know before doing the experiment that the null hypothesis is false, there's no point in testing it. This criticism only applies to two-tailed tests, where the null hypothesis is "Things are exactly the same" and the alternative is "Things are different." Presumably these critics think it would be okay to do a one-tailed test with a null hypothesis like "Foot length of male chickens is the same as, or less than, that of females," because the null hypothesis that male chickens have smaller feet than females could be true. So if you're worried about this issue, you could think of a two-tailed test, where the null hypothesis is that things are the same, as shorthand for doing two one-tailed tests. A significant rejection of the null hypothesis in a two-tailed test would then be the equivalent of rejecting one of the two one-tailed null hypotheses. A related criticism is that a significant rejection of a null hypothesis might not be biologically meaningful, if the difference is too small to matter. For example, in the chicken-sex experiment, having a treatment that produced $49.9\%$ male chicks might be significantly different from $50\%$, but it wouldn't be enough to make farmers want to buy your treatment. These critics say you should estimate the effect size and put a confidence interval on it, not estimate a $P$ value. So the goal of your chicken-sex experiment should not be to say "Chocolate gives a proportion of males that is significantly less than $50\%$ ($P=0.015$)" but to say "Chocolate produced $36.1\%$ males with a $95\%$ confidence interval of $25.9\%$ to $47.4\%$."
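One way to attach a confidence interval to a proportion is the exact (Clopper-Pearson) method, sketched below for the 17-males-out-of-48 result used earlier. The handbook's quoted interval for "$36.1\%$ males" evidently comes from different data or a different interval method, so don't expect the numbers to match exactly; this is only meant to show the general approach.

```python
from scipy.stats import beta

k, n = 17, 48        # males observed out of 48 chicks
p_hat = k / n        # observed proportion, about 0.354

# Exact (Clopper-Pearson) 95% confidence interval for a binomial proportion.
alpha = 0.05
lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
print(f"{p_hat:.1%} males, 95% CI from {lower:.1%} to {upper:.1%}")
```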
For the chicken-feet experiment, you would say something like "The difference between males and females in mean foot size is $2.45mm$, with a confidence interval on the difference of $\pm 1.98mm$." Estimating effect sizes and confidence intervals is a useful way to summarize your results, and it should usually be part of your data analysis; you'll often want to include confidence intervals in a graph. However, there are a lot of experiments where the goal is to decide a yes/no question, not estimate a number. In the initial tests of chocolate on chicken sex ratio, the goal would be to decide between "It changed the sex ratio" and "It didn't seem to change the sex ratio." Any change in sex ratio that is large enough that you could detect it would be interesting and worth follow-up experiments. While it's true that the difference between $49.9\%$ and $50\%$ might not be worth pursuing, you wouldn't do an experiment on enough chickens to detect a difference that small. Often, the people who claim to avoid hypothesis testing will say something like "the $95\%$ confidence interval of $25.9\%$ to $47.4\%$ does not include $50\%$, so we conclude that the plant extract significantly changed the sex ratio." This is a clumsy and roundabout form of hypothesis testing, and they might as well admit it and report the $P$ value. Bayesian statistics Another alternative to frequentist statistics is Bayesian statistics. A key difference is that Bayesian statistics requires specifying your best guess of the probability of each possible value of the parameter to be estimated, before the experiment is done. This is known as the "prior probability." So for your chicken-sex experiment, you're trying to estimate the "true" proportion of male chickens that would be born, if you had an infinite number of chickens. You would have to specify how likely you thought it was that the true proportion of male chickens was $50\%$, or $51\%$, or $52\%$, or $47.3\%$, etc. You would then look at the results of your experiment and use the information to calculate new probabilities that the true proportion of male chickens was $50\%$, or $51\%$, or $52\%$, or $47.3\%$, etc. (the posterior distribution). I'll confess that I don't really understand Bayesian statistics, and I apologize for not explaining it well. In particular, I don't understand how people are supposed to come up with a prior distribution for the kinds of experiments that most biologists do. With the exception of systematics, where Bayesian estimation of phylogenies is quite popular and seems to make sense, I haven't seen many research biologists using Bayesian statistics for routine data analysis of simple laboratory experiments. This means that even if the cult-like adherents of Bayesian statistics convinced you that they were right, you would have a difficult time explaining your results to your biologist peers. Statistics is a method of conveying information, and if you're speaking a different language than the people you're talking to, you won't convey much information. So I'll stick with traditional frequentist statistics for this handbook. Having said that, there's one key concept from Bayesian statistics that is important for all users of statistics to understand. To illustrate it, imagine that you are testing extracts from $1000$ different tropical plants, trying to find something that will kill beetle larvae. The reality (which you don't know) is that $500$ of the extracts kill beetle larvae, and $500$ don't. 
You do the $1000$ experiments and do the $1000$ frequentist statistical tests, and you use the traditional significance level of $P< 0.05$. The $500$ plant extracts that really work all give you $P< 0.05$; these are the true positives. Of the $500$ extracts that don't work, $5\%$ of them give you $P< 0.05$ by chance (this is the meaning of the $P$ value, after all), so you have $25$ false positives. So you end up with $525$ plant extracts that gave you a $P$ value less than $0.05$. You'll have to do further experiments to figure out which are the $25$ false positives and which are the $500$ true positives, but that's not so bad, since you know that most of them will turn out to be true positives. Now imagine that you are testing those extracts from $1000$ different tropical plants to try to find one that will make hair grow. The reality (which you don't know) is that one of the extracts makes hair grow, and the other $999$ don't. You do the $1000$ experiments and do the $1000$ frequentist statistical tests, and you use the traditional significance level of $P< 0.05$. The one plant extract that really works gives you P<0.05; this is the true positive. But of the $999$ extracts that don't work, $5\%$ of them give you $P< 0.05$ by chance, so you have about $50$ false positives. You end up with $51$ $P$ values less than $0.05$, but almost all of them are false positives. Now instead of testing $1000$ plant extracts, imagine that you are testing just one. If you are testing it to see if it kills beetle larvae, you know (based on everything you know about plant and beetle biology) there's a pretty good chance it will work, so you can be pretty sure that a $P$ value less than $0.05$ is a true positive. But if you are testing that one plant extract to see if it grows hair, which you know is very unlikely (based on everything you know about plants and hair), a $P$ value less than $0.05$ is almost certainly a false positive. In other words, if you expect that the null hypothesis is probably true, a statistically significant result is probably a false positive. This is sad; the most exciting, amazing, unexpected results in your experiments are probably just your data trying to make you jump to ridiculous conclusions. You should require a much lower $P$ value to reject a null hypothesis that you think is probably true. A Bayesian would insist that you put in numbers just how likely you think the null hypothesis and various values of the alternative hypothesis are, before you do the experiment, and I'm not sure how that is supposed to work in practice for most experimental biology. But the general concept is a valuable one: as Carl Sagan summarized it, "Extraordinary claims require extraordinary evidence." Recommendations Here are three experiments to illustrate when the different approaches to statistics are appropriate. In the first experiment, you are testing a plant extract on rabbits to see if it will lower their blood pressure. You already know that the plant extract is a diuretic (makes the rabbits pee more) and you already know that diuretics tend to lower blood pressure, so you think there's a good chance it will work. If it does work, you'll do more low-cost animal tests on it before you do expensive, potentially risky human trials. Your prior expectation is that the null hypothesis (that the plant extract has no effect) has a good chance of being false, and the cost of a false positive is fairly low. So you should do frequentist hypothesis testing, with a significance level of $0.05$. 
In the second experiment, you are going to put human volunteers with high blood pressure on a strict low-salt diet and see how much their blood pressure goes down. Everyone will be confined to a hospital for a month and fed either a normal diet, or the same foods with half as much salt. For this experiment, you wouldn't be very interested in the $P$ value, as based on prior research in animals and humans, you are already quite certain that reducing salt intake will lower blood pressure; you're pretty sure that the null hypothesis that "Salt intake has no effect on blood pressure" is false. Instead, you are very interested to know how much the blood pressure goes down. Cutting salt intake in half is a big deal, and if it only reduces blood pressure by $1mm$ Hg, the tiny gain in life expectancy wouldn't be worth a lifetime of bland food and obsessive label-reading. If it reduces blood pressure by $20mm$ with a confidence interval of $\pm 5mm$, it might be worth it. So you should estimate the effect size (the difference in blood pressure between the diets) and the confidence interval on the difference. In the third experiment, you are going to put magnetic hats on guinea pigs and see if their blood pressure goes down (relative to guinea pigs wearing the kind of non-magnetic hats that guinea pigs usually wear). This is a really goofy experiment, and you know that it is very unlikely that the magnets will have any effect (it's not impossible—magnets affect the sense of direction of homing pigeons, and maybe guinea pigs have something similar in their brains and maybe it will somehow affect their blood pressure—it just seems really unlikely). You might analyze your results using Bayesian statistics, which will require specifying in numerical terms just how unlikely you think it is that the magnetic hats will work. Or you might use frequentist statistics, but require a $P$ value much, much lower than $0.05$ to convince yourself that the effect is real.
Learning Objectives • A confounding variable is a variable that may affect the dependent variable. This can lead to erroneous conclusions about the relationship between the independent and dependent variables. You deal with confounding variables by controlling them; by matching; by randomizing; or by statistical control. Due to a variety of genetic, developmental, and environmental factors, no two organisms, no two tissue samples, no two cells are exactly alike. This means that when you design an experiment with samples that differ in independent variable \(X\), your samples will also differ in other variables that you may or may not be aware of. If these confounding variables affect the dependent variable \(Y\) that you're interested in, they may trick you into thinking there's a relationship between \(X\) and \(Y\) when there really isn't. Or, the confounding variables may cause so much variation in \(Y\) that it's hard to detect a real relationship between \(X\) and \(Y\) when there is one. As an example of confounding variables, imagine that you want to know whether the genetic differences between American elms (which are susceptible to Dutch elm disease) and Princeton elms (a strain of American elms that is resistant to Dutch elm disease) cause a difference in the amount of insect damage to their leaves. You look around your area, find \(20\) American elms and \(20\) Princeton elms, pick \(50\) leaves from each, and measure the area of each leaf that was eaten by insects. Imagine that you find significantly more insect damage on the Princeton elms than on the American elms (I have no idea if this is true). It could be that the genetic difference between the types of elm directly causes the difference in the amount of insect damage, which is what you were looking for. However, there are likely to be some important confounding variables. For example, many American elms are many decades old, while the Princeton strain of elms was made commercially available only recently and so any Princeton elms you find are probably only a few years old. American elms are often treated with fungicide to prevent Dutch elm disease, while this wouldn't be necessary for Princeton elms. American elms in some settings (parks, streetsides, the few remaining in forests) may receive relatively little care, while Princeton elms are expensive and are likely planted by elm fanatics who take good care of them (fertilizing, watering, pruning, etc.). It is easy to imagine that any difference in insect damage between American and Princeton elms could be caused, not by the genetic differences between the strains, but by a confounding variable: age, fungicide treatment, fertilizer, water, pruning, or something else. If you conclude that Princeton elms have more insect damage because of the genetic difference between the strains, when in reality it's because the Princeton elms in your sample were younger, you will look like an idiot to all of your fellow elm scientists as soon as they figure out your mistake. On the other hand, let's say you're not that much of an idiot, and you make sure your sample of Princeton elms has the same average age as your sample of American elms. There's still a lot of variation in ages among the individual trees in each sample, and if that affects insect damage, there will be a lot of variation among individual trees in the amount of insect damage. 
This will make it harder to find a statistically significant difference in insect damage between the two strains of elms, and you might miss out on finding a small but exciting difference in insect damage between the strains. Controlling confounding variables Designing an experiment to eliminate differences due to confounding variables is critically important. One way is to control a possible confounding variable, meaning you keep it identical for all the individuals. For example, you could plant a bunch of American elms and a bunch of Princeton elms all at the same time, so they'd be the same age. You could plant them in the same field, and give them all the same amount of water and fertilizer. It is easy to control many of the possible confounding variables in laboratory experiments on model organisms. All of your mice, or rats, or Drosophila will be the same age, the same sex, and the same inbred genetic strain. They will grow up in the same kind of containers, eating the same food and drinking the same water. But there are always some possible confounding variables that you can't control. Your organisms may all be from the same genetic strain, but new mutations will mean that there are still some genetic differences among them. You may give them all the same food and water, but some may eat or drink a little more than others. After controlling all of the variables that you can, it is important to deal with any other confounding variables by randomizing, matching or statistical control. Controlling confounding variables is harder with organisms that live outside the laboratory. Those elm trees that you planted in the same field? Different parts of the field may have different soil types, different water percolation rates, different proximity to roads, houses and other woods, and different wind patterns. And if your experimental organisms are humans, there are a lot of confounding variables that are impossible to control. Randomizing Once you've designed your experiment to control as many confounding variables as possible, you need to randomize your samples to make sure that they don't differ in the confounding variables that you can't control. For example, let's say you're going to make \(20\) mice wear sunglasses and leave \(20\) mice without glasses, to see if sunglasses help prevent cataracts. You shouldn't reach into a bucket of \(40\) mice, grab the first \(20\) you catch and put sunglasses on them. The first \(20\) mice you catch might be easier to catch because they're the slowest, the tamest, or the ones with the longest tails; or you might subconsciously pick out the fattest mice or the cutest mice. I don't know whether having your sunglass-wearing mice be slower, tamer, with longer tails, fatter, or cuter would make them more or less susceptible to cataracts, but you don't know either. You don't want to find a difference in cataracts between the sunglass-wearing and non-sunglass-wearing mice, then have to worry that maybe it's the extra fat or longer tails, not the sunglasses, that caused the difference. So you should randomly assign the mice to the different treatment groups. You could give each mouse an ID number and have a computer randomly assign them to the two groups, or you could just flip a coin each time you pull a mouse out of your bucket of mice. In the mouse example, you used all \(40\) of your mice for the experiment. Often, you will sample a small number of observations from a much larger population, and it's important that it be a random sample. 
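Here is a minimal sketch of what "have a computer randomly assign them" might look like for the \(40\) mice; the same random.sample call also works for drawing a simple random sample of individuals from a larger numbered population.

```python
import random

mouse_ids = list(range(1, 41))   # the 40 mice, numbered 1-40
random.seed(1)                   # fixed seed only so the example is reproducible

sunglasses_group = sorted(random.sample(mouse_ids, 20))             # 20 mice chosen at random
control_group = [m for m in mouse_ids if m not in sunglasses_group]
print("sunglasses:", sunglasses_group)
print("control:", control_group)
```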
In a random sample, each individual has an equal probability of being sampled. To get a random sample of \(50\) elm trees from a forest with \(700\) elm trees, you could figure out where each of the \(700\) elm trees is, give each one an ID number, write the numbers on \(700\) slips of paper, put the slips of paper in a hat, and randomly draw out \(50\) (or have a computer randomly choose \(50\), if you're too lazy to fill out \(700\) slips of paper or don't own a hat). You need to be careful to make sure that your sample is truly random. I started to write "Or an easier way to randomly sample \(50\) elm trees would be to randomly pick \(50\) locations in the forest by having a computer randomly choose GPS coordinates, then sample the elm tree nearest each random location." However, this would have been a mistake; an elm tree that was far away from other elm trees would almost certainly be the closest to one of your random locations, but you'd be unlikely to sample an elm tree in the middle of a dense bunch of elm trees. It's pretty easy to imagine that proximity to other elm trees would affect insect damage (or just about anything else you'd want to measure on elm trees), so I almost designed a stupid experiment for you. A random sample is one in which all members of a population have an equal probability of being sampled. If you're measuring fluorescence inside kidney cells, this means that all points inside a cell, and all the cells in a kidney, and all the kidneys in all the individuals of a species, would have an equal chance of being sampled. A perfectly random sample of observations is difficult to collect, and you need to think about how this might affect your results. Let's say you've used a confocal microscope to take a two-dimensional "optical slice" of a kidney cell. It would be easy to use a random-number generator on a computer to pick out some random pixels in the image, and you could then use the fluorescence in those pixels as your sample. However, if your slice was near the cell membrane, your "random" sample would not include any points deep inside the cell. If your slice was right through the middle of the cell, however, points deep inside the cell would be over-represented in your sample. You might get a fancier microscope, so you could look at a random sample of the "voxels" (three-dimensional pixels) throughout the volume of the cell. But what would you do about voxels right at the surface of the cell? Including them in your sample would be a mistake, because they might include some of the cell membrane and extracellular space, but excluding them would mean that points near the cell membrane are under-represented in your sample. Matching Sometimes there's a lot of variation in confounding variables that you can't control; even if you randomize, the large variation in confounding variables may cause so much variation in your dependent variable that it would be hard to detect a difference caused by the independent variable that you're interested in. This is particularly true for humans. Let's say you want to test catnip oil as a mosquito repellent. If you were testing it on rats, you would get a bunch of rats of the same age and sex and inbred genetic strain, apply catnip oil to half of them, then put them in a mosquito-filled room for a set period of time and count the number of mosquito bites. 
This would be a nice, well-controlled experiment, and with a moderate number of rats you could see whether the catnip oil caused even a small change in the number of mosquito bites. But if you wanted to test the catnip oil on humans going about their everyday life, you couldn't get a bunch of humans of the same "inbred genetic strain," it would be hard to get a bunch of people all of the same age and sex, and the people would differ greatly in where they lived, how much time they spent outside, the scented perfumes, soaps, deodorants, and laundry detergents they used, and whatever else it is that makes mosquitoes ignore some people and eat others up. The very large variation in number of mosquito bites among people would mean that if the catnip oil had a small effect, you'd need a huge number of people for the difference to be statistically significant. One way to reduce the noise due to confounding variables is by matching. You generally do this when the independent variable is a nominal variable with two values, such as "drug" vs. "placebo." You make observations in pairs, one for each value of the independent variable, that are as similar as possible in the confounding variables. The pairs could be different parts of the same people. For example, you could test your catnip oil by having people put catnip oil on one arm and placebo oil on the other arm. The variation in the size of the difference between the two arms on each person will be a lot smaller than the variation among different people, so you won't need nearly as big a sample size to detect a small difference in mosquito bites between catnip oil and placebo oil. Of course, you'd have to randomly choose which arm to put the catnip oil on. Other ways of pairing include before-and-after experiments. You could count the number of mosquito bites in one week, then have people use catnip oil and see if the number of mosquito bites for each person went down. With this kind of experiment, it's important to make sure that the dependent variable wouldn't have changed by itself (maybe the weather changed and the mosquitoes stopped biting), so it would be better to use placebo oil one week and catnip oil another week, and randomly choose for each person whether the catnip oil or placebo oil was first. For many human experiments, you'll need to match two different people, because you can't test both the treatment and the control on the same person. For example, let's say you've given up on catnip oil as a mosquito repellent and are going to test it on humans as a cataract preventer. You're going to get a bunch of people, have half of them take a catnip-oil pill and half take a placebo pill for five years, then compare the lens opacity in the two groups. Here the goal is to make each pair of people be as similar as possible in confounding variables that you think might be important. If you're studying cataracts, you'd want to match people based on known risk factors for cataracts: age, amount of time outdoors, use of sunglasses, blood pressure. Of course, once you have a matched pair of individuals, you'd want to randomly choose which one gets the catnip oil and which one gets the placebo. You wouldn't be able to find perfectly matching pairs of individuals, but the better the match, the easier it will be to detect a difference due to the catnip-oil pills. One kind of matching that is often used in epidemiology is the case-control study. "Cases" are people with some disease or condition, and each is matched with one or more controls. 
Each control is generally the same sex and as similar in other factors (age, ethnicity, occupation, income) as practical. The cases and controls are then compared to see whether there are consistent differences between them. For example, if you wanted to know whether smoking marijuana caused or prevented cataracts, you could find a bunch of people with cataracts. You'd then find a control for each person who was similar in the known risk factors for cataracts (age, time outdoors, blood pressure, diabetes, steroid use). Then you would ask the cataract cases and the non-cataract controls how much weed they'd smoked. If it's hard to find cases and easy to find controls, a case-control study may include two or more controls for each case. This gives somewhat more statistical power. Statistical control When it isn't practical to keep all the possible confounding variables constant, another solution is to statistically control them. Sometimes you can do this with a simple ratio. If you're interested in the effect of weight on cataracts, height would be a confounding variable, because taller people tend to weigh more. Using the body mass index (BMI), which is the ratio of weight in kilograms over the squared height in meters, would remove much of the confounding effects of height in your study. If you need to remove the effects of multiple confounding variables, there are multivariate statistical techniques you can use. However, the analysis, interpretation, and presentation of complicated multivariate analyses are not easy. Observer or subject bias as a confounding variable In many studies, the possible bias of the researchers is one of the most important confounding variables. Finding a statistically significant result is almost always more interesting than not finding a difference, so you need to constantly be on guard to control the effects of this bias. The best way to do this is by blinding yourself, so that you don't know which individuals got the treatment and which got the control. Going back to our catnip oil and mosquito experiment, if you know that Alice got catnip oil and Bob didn't, your subconscious body language and tone of voice when you talk to Alice might imply "You didn't get very many mosquito bites, did you? That would mean that the world will finally know what a genius I am for inventing this," and you might carefully scrutinize each red bump and decide that some of them were spider bites or poison ivy, not mosquito bites. With Bob, who got the placebo, you might subconsciously imply "Poor Bob—I'll bet you got a ton of mosquito bites, didn't you? The more you got, the more of a genius I am" and you might be more likely to count every hint of a bump on Bob's skin as a mosquito bite. Ideally, the subjects shouldn't know whether they got the treatment or placebo, either, so that they can't give you the result you want; this is especially important for subjective variables like pain. Of course, keeping the subjects of this particular imaginary experiment blind to whether they're rubbing catnip oil on their skin is going to be hard, because Alice's cat keeps licking Alice's arm and then acting stoned.
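To make the statistical payoff of matching concrete, here is a sketch of how the two-arms-per-person catnip design described above might be analyzed, using invented bite counts; a paired t–test on the within-person differences is one common choice for data like these, though not the only one.

```python
from scipy.stats import ttest_rel

# Hypothetical mosquito-bite counts, one pair of arms per volunteer
placebo_arm = [12, 7, 15, 9, 11, 18, 6, 14]
catnip_arm = [9, 6, 11, 8, 10, 13, 5, 11]

# Because the two measurements are matched within each person, the analysis is paired,
# so the large person-to-person variation cancels out of the within-person differences.
diffs = [p - c for p, c in zip(placebo_arm, catnip_arm)]
print("mean within-person difference:", sum(diffs) / len(diffs), "bites")
print("paired t-test P =", ttest_rel(placebo_arm, catnip_arm).pvalue)
```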
• 2.1: Exact Test of Goodness-of-Fit. The main goal of a statistical test is to answer the question, "What is the probability of getting a result like my observed data, if the null hypothesis were true?" If it is very unlikely to get the observed data under the null hypothesis, you reject the null hypothesis. Exact tests, such as the exact test of goodness-of-fit, are different. There is no test statistic; instead, you directly calculate the probability of obtaining the observed data under the null hypothesis.
• 2.2: Power Analysis. Many statistical tests have been developed to estimate the sample size needed to detect a particular effect, or to estimate the size of the effect that can be detected with a particular sample size. In order to do a power analysis, you need to specify an effect size. This is the size of the difference between your null hypothesis and the alternative hypothesis that you hope to detect. For applied and clinical biological research, there may be a very definite effect size that you want to detect.
• 2.3: Chi-Square Test of Goodness-of-Fit. Use the chi-square test of goodness-of-fit when you have one nominal variable with two or more values. You compare the observed counts of observations in each category with the expected counts, which you calculate using some kind of theoretical expectation. If the expected number of observations in any category is too small, the chi-square test may give inaccurate results, and you should use an exact test instead.
• 2.4: G–Test of Goodness-of-Fit. Use the G–test of goodness-of-fit when you have one nominal variable with two or more values (such as male and female, or red, pink and white flowers). You compare the observed counts of observations in each category with the expected counts, which you calculate using some kind of theoretical expectation.
• 2.5: Chi-square Test of Independence. Use the chi-square test of independence when you have two nominal variables, each with two or more possible values. You want to know whether the proportions for one variable are different among values of the other variable.
• 2.6: G–Test of Independence. Use the G–test of independence when you have two nominal variables and you want to see whether the proportions of one variable are different for different values of the other variable. Use it when the sample size is large.
• 2.7: Fisher's Exact Test. Use Fisher's exact test when you have two nominal variables. You want to know whether the proportions for one variable are different among values of the other variable.
• 2.8: Small Numbers in Chi-Square and G–Tests. Chi-square and G–tests of goodness-of-fit or independence give inaccurate results when the expected numbers are small. When the sample sizes are too small, you should use exact tests instead of the chi-square test or G–test.
• 2.9: Repeated G–Tests of Goodness-of-Fit. You use the repeated G–test of goodness-of-fit when you have two nominal variables, one with two or more biologically interesting values (such as red vs. pink vs. white flowers), the other representing different replicates of the same experiment (different days, different locations, different pairs of parents). You compare the observed data with an extrinsic theoretical expectation (such as an expected 1:2:1 ratio in a genetic cross).
• 2.10: Cochran-Mantel-Haenszel Test. Use the Cochran–Mantel–Haenszel test (which is sometimes called the Mantel–Haenszel test) for repeated tests of independence.
The most common situation is that you have multiple 2×2 tables of independence; you're analyzing the kind of experiment that you'd analyze with a test of independence, and you've done the experiment multiple times or at multiple locations.
02: Tests for Nominal Variables
Learning Objectives
• To learn when to use the exact test of goodness-of-fit.
• To learn how to use it when you have one nominal variable, you want to see whether the number of observations in each category fits a theoretical expectation, and the sample size is small.
Introduction
The main goal of a statistical test is to answer the question, "What is the probability of getting a result like my observed data, if the null hypothesis were true?" If it is very unlikely to get the observed data under the null hypothesis, you reject the null hypothesis. Most statistical tests take the following form:
1. Collect the data.
2. Calculate a number, the test statistic, that measures how far the observed data deviate from the expectation under the null hypothesis.
3. Use a mathematical function to estimate the probability of getting a test statistic as extreme as the one you observed, if the null hypothesis were true. This is the P value.
Exact tests, such as the exact test of goodness-of-fit, are different. There is no test statistic; instead, you directly calculate the probability of obtaining the observed data under the null hypothesis. This is because the predictions of the null hypothesis are so simple that the probabilities can easily be calculated.
When to use it
You use the exact test of goodness-of-fit when you have one nominal variable. The most common use is a nominal variable with only two values (such as male or female, left or right, green or yellow), in which case the test may be called the exact binomial test. You compare the observed data with the expected data, which are some kind of theoretical expectation (such as a $1:1$ sex ratio or a $3:1$ ratio in a genetic cross) that you determined before you collected the data. If the total number of observations is too high (around a thousand), computers may not be able to do the calculations for the exact test, and you should use a G–test or chi-square test of goodness-of-fit instead (and they will give almost exactly the same result). You can do exact multinomial tests of goodness-of-fit when the nominal variable has more than two values. The basic concepts are the same as for the exact binomial test. Here I'm limiting most of the explanation to the binomial test, because it's more commonly used and easier to understand.
Null hypothesis
For a two-tailed test, which is what you almost always should use, the null hypothesis is that the number of observations in each category is equal to that predicted by a biological theory, and the alternative hypothesis is that the observed data are different from the expected. For example, if you do a genetic cross in which you expect a $3:1$ ratio of green to yellow pea pods, and you have a total of $50$ plants, your null hypothesis is that there are $37.5$ plants with green pods and $12.5$ with yellow pods. If you are doing a one-tailed test, the null hypothesis is that the observed number for one category is equal to or less than the expected; the alternative hypothesis is that the observed number in that category is greater than expected.
How the test works
Let's say you want to know whether our cat, Gus, has a preference for one paw or uses both paws equally. You dangle a ribbon in his face and record which paw he uses to bat at it.
You do this $10$ times, and he bats at the ribbon with his right paw $8$ times and his left paw $2$ times. Then he gets bored with the experiment and leaves. Can you conclude that he is right-pawed, or could this result have occurred due to chance under the null hypothesis that he bats equally with each paw? The null hypothesis is that each time Gus bats at the ribbon, the probability that he will use his right paw is $0.5$. The probability that he will use his right paw on the first time is $0.5$. The probability that he will use his right paw the first time AND the second time is $0.5\times 0.5$, or $0.5^2$, or $0.25$. The probability that he will use his right paw all ten times is $0.5^{10}$, or about $0.001$. For a mixture of right and left paws, the calculation of the binomial distribution is more complicated. Where $n$ is the total number of trials, $k$ is the number of "successes" (statistical jargon for whichever event you want to consider), $p$ is the expected proportion of successes if the null hypothesis is true, and $Y$ is the probability of getting $k$ successes in $n$ trials, the equation is: $Y=\frac{p^k(1-p)^{(n-k)}n!}{k!(n-k)!}$ Fortunately, there's a spreadsheet function that does the calculation for you. To calculate the probability of getting exactly $8$ out of $10$ right paws, you would enter =BINOMDIST(2, 10, 0.5, FALSE) The first number, $2$, is whichever event there are fewer than expected of; in this case, there are only two uses of the left paw, which is fewer than the expected $5$. The second number, $10$, is the total number of trials. The third number is the expected proportion of whichever event there were fewer than expected of, if the null hypothesis were true; here the null hypothesis predicts that half of all ribbon-battings will be with the left paw. And FALSE tells it to calculate the exact probability for that number of events only. In this case, the answer is $P=0.044$, so you might think it was significant at the $P<0.05$ level. However, it would be incorrect to only calculate the probability of getting exactly $2$ left paws and $8$ right paws. Instead, you must calculate the probability of getting a deviation from the null expectation as large as, or larger than, the observed result. So you must calculate the probability that Gus used his left paw $2$ times out of $10$, or $1$ time out of $10$, or $0$ times out of ten. Adding these probabilities together gives $P=0.055$, which is not quite significant at the $P<0.05$ level. You do this in a spreadsheet by entering =BINOMDIST(2, 10, 0.5, TRUE) The "TRUE" parameter tells the spreadsheet to calculate the sum of the probabilities of the observed number and all more extreme values; it's the equivalent of =BINOMDIST(2, 10, 0.5, FALSE)+BINOMDIST(1, 10, 0.5, FALSE)+BINOMDIST(0, 10, 0.5, FALSE) There's one more thing. The above calculation gives the total probability of getting $2$, $1$, or $0$ uses of the left paw out of $10$. However, the alternative hypothesis is that the number of uses of the right paw is not equal to the number of uses of the left paw. If there had been $2$, $1$, or $0$ uses of the right paw, that also would have been an equally extreme deviation from the expectation. So you must add the probability of getting $2$, $1$, or $0$ uses of the right paw, to account for both tails of the probability distribution; you are doing a two-tailed test. This gives you $P=0.109$, which is not very close to being significant. 
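The same numbers can be checked outside a spreadsheet. The short Python sketch below is not part of the original handbook; it is a minimal illustration that assumes you have scipy installed (scipy 1.7 or later for binomtest; older versions call it scipy.stats.binom_test), and it reproduces the one-tailed and two-tailed calculations for the Gus data.

from scipy.stats import binom, binomtest

n, k_left, p_null = 10, 2, 0.5   # 10 trials, 2 left-paw bats, null of 50:50

# probability of exactly 2 left paws, like BINOMDIST(2, 10, 0.5, FALSE)
print(binom.pmf(k_left, n, p_null))          # about 0.044

# probability of 2, 1, or 0 left paws, like BINOMDIST(2, 10, 0.5, TRUE)
print(binom.cdf(k_left, n, p_null))          # about 0.055

# two-tailed exact binomial test, counting both tails
print(binomtest(k_left, n, p_null).pvalue)   # about 0.109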
(If the null hypothesis had been $0.50$ or more uses of the left paw, and the alternative hypothesis had been less than $0.5$ uses of the left paw, you could do a one-tailed test and use $P=0.055$. But you almost never have a situation where a one-tailed test is appropriate.) The most common use of an exact binomial test is when the null hypothesis is that numbers of the two outcomes are equal. In that case, the meaning of a two-tailed test is clear, and you calculate the two-tailed $P$ value by multiplying the one-tailed $P$ value times two. When the null hypothesis is not a $1:1$ ratio, but something like a $3:1$ ratio, statisticians disagree about the meaning of a two-tailed exact binomial test, and different statistical programs will give slightly different results. The simplest method is to use the binomial equation, as described above, to calculate the probability of whichever event is less common than expected, then multiply it by two. For example, let's say you've crossed a number of cats that are heterozygous at the hair-length gene; because short hair is dominant, you expect $75\%$ of the kittens to have short hair and $25\%$ to have long hair. You end up with $7$ short-haired and $5$ long-haired cats. There are $7$ short-haired cats when you expected $9$, so you use the binomial equation to calculate the probability of $7$ or fewer short-haired cats; this adds up to $0.158$. Doubling this would give you a two-tailed $P$ value of $0.315$. This is what SAS and Richard Lowry's online calculator do. The alternative approach is called the method of small P values, and I think most statisticians prefer it. For our example, you use the binomial equation to calculate the probability of obtaining exactly $7$ out of $12$ short-haired cats; it is $0.103$. Then you calculate the probabilities for every other possible number of short-haired cats, and you add together those that are less than $0.103$. That is the probabilities for $6$, $5$, $4$...$0$ short-haired cats, and in the other tail, only the probability of $12$ out of $12$ short-haired cats. Adding these probabilities gives a $P$ value of $0.189$. This is what my exact binomial spreadsheet exactbin.xls does. I think the arguments in favor of the method of small $P$ values make sense. If you are using the exact binomial test with expected proportions other than $50:50$, make sure you specify which method you use (remember that it doesn't matter when the expected proportions are $50:50$).
Sign test
One common application of the exact binomial test is known as the sign test. You use the sign test when there are two nominal variables and one measurement variable. One of the nominal variables has only two values, such as "before" and "after" or "left" and "right," and the other nominal variable identifies the pairs of observations. In a study of a hair-growth ointment, "amount of hair" would be the measurement variable, "before" and "after" would be the values of one nominal variable, and "Arnold," "Bob," "Charles" would be values of the second nominal variable. The data for a sign test usually could be analyzed using a paired t–test or a Wilcoxon signed-rank test, if the null hypothesis is that the mean or median difference between pairs of observations is zero. However, sometimes you're not interested in the size of the difference, just the direction. In the hair-growth example, you might have decided that you didn't care how much hair the men grew or lost; you just wanted to know whether more than half of the men grew hair.
In that case, you count the number of differences in one direction, count the number of differences in the opposite direction, and use the exact binomial test to see whether the numbers are different from a $1:1$ ratio. You should decide that a sign test is the test you want before you look at the data. If you analyze your data with a paired t–test and it's not significant, and you then notice that it would be significant with a sign test, it would be very unethical to just report the result of the sign test as if you'd planned that from the beginning.
Exact multinomial test
While the most common use of exact tests of goodness-of-fit is the exact binomial test, it is also possible to perform exact multinomial tests when there are more than two values of the nominal variable. The most common example in biology would be the results of genetic crosses, where one might expect a $1:2:1$ ratio from a cross of two heterozygotes at one codominant locus, a $9:3:3:1$ ratio from a cross of individuals heterozygous at two dominant loci, etc. The basic procedure is the same as for the exact binomial test: you calculate the probabilities of the observed result and all more extreme possible results and add them together. The underlying computations are more complicated, and if you have a lot of categories, your computer may have problems even if the total sample size is less than 1000. If you have a small sample size but so many categories that your computer program won't do an exact test, you can use a G–test or chi-square test of goodness-of-fit, but understand that the results may be somewhat inaccurate.
Post-hoc test
If you perform the exact multinomial test (with more than two categories) and get a significant result, you may want to follow up by testing whether each category deviates significantly from the expected number. It's a little odd to talk about just one category deviating significantly from expected; if there are more observations than expected in one category, there have to be fewer than expected in at least one other category. But looking at each category might help you understand better what's going on. For example, let's say you do a genetic cross in which you expect a $9:3:3:1$ ratio of purple, red, blue, and white flowers, and your observed numbers are $72$ purple, $38$ red, $20$ blue, and $18$ white. You do the exact test and get a $P$ value of $0.0016$, so you reject the null hypothesis. There are fewer purple and blue and more red and white than expected, but is there an individual color that deviates significantly from expected? To answer this, do an exact binomial test for each category vs. the sum of all the other categories. For purple, compare the $72$ purple and $76$ non-purple to the expected $9:7$ ratio. The $P$ value is $0.07$, so you can't say there are significantly fewer purple flowers than expected (although it's worth noting that it's close). There are $38$ red and $110$ non-red flowers; when compared to the expected $3:13$ ratio, the $P$ value is $0.035$. This is below the significance level of $0.05$, but because you're doing four tests at the same time, you need to correct for the multiple comparisons. Applying the Bonferroni correction, you divide the significance level ($0.05$) by the number of comparisons ($4$) and get a new significance level of $0.0125$; since $0.035$ is greater than this, you can't say there are significantly more red flowers than expected.
Comparing the $18$ white and $130$ non-white to the expected ratio of $1:15$, the $P$ value is $0.006$, so you can say that there are significantly more white flowers than expected. It is possible that an overall significant $P$ value could result from moderate-sized deviations in all of the categories, and none of the post-hoc tests will be significant. This would be frustrating; you'd know that something interesting was going on, but you couldn't say with statistical confidence exactly what it was. I doubt that the procedure for post-hoc tests in a goodness-of-fit test that I've suggested here is original, but I can't find a reference to it; if you know who really invented this, e-mail me with a reference. And it seems likely that there's a better method that takes into account the non-independence of the numbers in the different categories (as the numbers in one category go up, the number in some other category must go down), but I have no idea what it might be. Intrinsic hypothesis You use exact test of goodness-of-fit that I've described here when testing fit to an extrinsic hypothesis, a hypothesis that you knew before you collected the data. For example, even before the kittens are born, you can predict that the ratio of short-haired to long-haired cats will be $3:1$ in a genetic cross of two heterozygotes. Sometimes you want to test the fit to an intrinsic null hypothesis: one that is based on the data you collect, where you can't predict the results from the null hypothesis until after you collect the data. The only example I can think of in biology is Hardy-Weinberg proportions, where the number of each genotype in a sample from a wild population is expected to be $p^2$ or $2pq$ or $q^2$ (with more possibilities when there are more than two alleles); you don't know the allele frequencies ($p$ and $q$) until after you collect the data. Exact tests of fit to Hardy-Weinberg raise a number of statistical issues and have received a lot of attention from population geneticists; if you need to do this, see Engels (2009) and the older references he cites. If you have biological data that you want to do an exact test of goodness-of-fit with an intrinsic hypothesis on, and it doesn't involve Hardy-Weinberg, e-mail me; I'd be very curious to see what kind of biological data requires this, and I will try to help you as best as I can. Assumptions Goodness-of-fit tests assume that the individual observations are independent, meaning that the value of one observation does not influence the value of other observations. To give an example, let's say you want to know what color flowers bees like. You plant four plots of flowers: one purple, one red, one blue, and one white. You get a bee, put it in a dark jar, carry it to a point equidistant from the four plots of flowers, and release it. You record which color flower it goes to first, then re-capture it and hold it prisoner until the experiment is done. You do this again and again for $100$ bees. In this case, the observations are independent; the fact that bee $\#1$ went to a blue flower has no influence on where bee $\#2$ goes. This is a good experiment; if significantly more than $1/4$ of the bees go to the blue flowers, it would be good evidence that the bees prefer blue flowers. Now let's say that you put a beehive at the point equidistant from the four plots of flowers, and you record where the first $100$ bees go. 
If the first bee happens to go to the plot of blue flowers, it will go back to the hive and do its bee-butt-wiggling dance that tells the other bees, "Go $15$ meters southwest, there's a bunch of yummy nectar there!" Then some more bees will fly to the blue flowers, and when they return to the hive, they'll do the same bee-butt-wiggling dance. The observations are NOT independent; where bee $\#2$ goes is strongly influenced by where bee $\#1$ happened to go. If "significantly" more than $1/4$ of the bees go to the blue flowers, it could easily be that the first bee just happened to go there by chance, and bees may not really care about flower color. Example Roptrocerus xylophagorum is a parasitoid of bark beetles. To determine what cues these wasps use to find the beetles, Sullivan et al. (2000) placed female wasps in the base of a $Y$-shaped tube, with a different odor in each arm of the $Y$, then counted the number of wasps that entered each arm of the tube. In one experiment, one arm of the $Y$ had the odor of bark being eaten by adult beetles, while the other arm of the $Y$ had bark being eaten by larval beetles. Ten wasps entered the area with the adult beetles, while $17$ entered the area with the larval beetles. The difference from the expected $1:1$ ratio is not significant ($P=0.248$). In another experiment that compared infested bark with a mixture of infested and uninfested bark, $36$ wasps moved towards the infested bark, while only $7$ moved towards the mixture; this is significantly different from the expected ratio ($P=9\times 10^{-6}$). Example Yukilevich and True (2008) mixed $30$ male and $30$ female Drosophila melanogaster from Alabama with $30$ male and $30$ females from Grand Bahama Island. They observed $246$ matings; $140$ were homotypic (male and female from the same location), while $106$ were heterotypic (male and female from different locations). The null hypothesis is that the flies mate at random, so that there should be equal numbers of homotypic and heterotypic matings. There were significantly more homotypic matings (exact binomial test, $P=0.035$) than heterotypic. Example As an example of the sign test, Farrell et al. (2001) estimated the evolutionary tree of two subfamilies of beetles that burrow inside trees as adults. They found ten pairs of sister groups in which one group of related species, or "clade," fed on angiosperms and one fed on gymnosperms, and they counted the number of species in each clade. There are two nominal variables, food source (angiosperms or gymnosperms) and pair of clades (Corthylina vs. Pityophthorus, etc.) and one measurement variable, the number of species per clade. The biological null hypothesis is that although the number of species per clade may vary widely due to a variety of unknown factors, whether a clade feeds on angiosperms or gymnosperms will not be one of these factors. In other words, you expect that each pair of related clades will differ in number of species, but half the time the angiosperm-feeding clade will have more species, and half the time the gymnosperm-feeding clade will have more species. 
Applying a sign test, there are $10$ pairs of clades in which the angiosperm-specialized clade has more species, and $0$ pairs with more species in the gymnosperm-specialized clade; this is significantly different from the null expectation ($P=0.002$), and you can reject the null hypothesis and conclude that in these beetles, clades that feed on angiosperms tend to have more species than clades that feed on gymnosperms.

Angiosperm-feeding              Spp.     Gymnosperm-feeding              Spp.
Corthylina                       458     Pityophthorus                    200
Scolytinae                      5200     Hylastini+Tomacini               180
Acanthotomicus+Premnobious       123     Orhotomicus                       11
Xyleborini/Dryocoetini          1500     Ipini                            195
Apion                           1500     Antliarhininae                    12
Belinae                          150     Allocoryninae+Oxycorinae          30
Higher Curculionidae           44002     Nemonychidae                      85
Higher Cerambycidae            25000     Aseminae + Spondylinae            78
Megalopodinae                    400     Palophaginae                       3
Higher Chrysomelidae           33400     Aulocoscelinae + Orsodacninae     26

Example
Mendel (1865) crossed pea plants that were heterozygotes for green pod/yellow pod; pod color is the nominal variable, with "green" and "yellow" as the values. If this is inherited as a simple Mendelian trait, with green dominant over yellow, the expected ratio in the offspring is $3$ green: $1$ yellow. He observed $428$ green and $152$ yellow. The expected numbers of plants under the null hypothesis are $435$ green and $145$ yellow, so Mendel observed slightly fewer green-pod plants than expected. The $P$ value for an exact binomial test using the method of small $P$ values, as implemented in my spreadsheet, is $0.533$, indicating that the null hypothesis cannot be rejected; there is no significant difference between the observed and expected frequencies of pea plants with green pods. (SAS uses a different method that gives a $P$ value of $0.530$. With a smaller sample size, the difference between the "method of small $P$ values" that I and most statisticians prefer, and the cruder method that SAS uses, could be large enough to be important.)
Example
Mendel (1865) also crossed peas that were heterozygous at two genes: one for yellow vs. green, the other for round vs. wrinkled; yellow was dominant over green, and round was dominant over wrinkled. The expected and observed results were:

phenotype           expected proportion   expected number   observed number
yellow+round                 9                 312.75             315
green+round                  3                 104.25             108
yellow+wrinkled              3                 104.25             101
green+wrinkled               1                  34.75              32

This is an example of the exact multinomial test, since there are four categories, not two. The $P$ value is $0.93$, so the difference between observed and expected is nowhere near significance.
Graphing the results
You plot the results of an exact test the same way you would any other goodness-of-fit test.
Similar tests
A G–test or chi-square goodness-of-fit test could also be used for the same data as the exact test of goodness-of-fit. Where the expected numbers are small, the exact test will give more accurate results than the G–test or chi-squared tests. Where the sample size is large (over a thousand), attempting to use the exact test may give error messages (computers have a hard time calculating factorials for large numbers), so a G–test or chi-square test must be used. For intermediate sample sizes, all three tests give approximately the same results. I recommend that you use the exact test when $n$ is less than $1000$; see the web page on small sample sizes for further discussion. If you try to do an exact test with a large number of categories, your computer may not be able to do the calculations even if your total sample size is less than 1000.
In that case, you can cautiously use the G–test or chi-square goodness-of-fit test, knowing that the results may be somewhat inaccurate. The exact test of goodness-of-fit is not the same as Fisher's exact test of independence. You use a test of independence for two nominal variables, such as sex and location. If you wanted to compare the ratio of males to female students at Delaware to the male:female ratio at Maryland, you would use a test of independence; if you want to compare the male:female ratio at Delaware to a theoretical $1:1$ ratio, you would use a goodness-of-fit test.
How to do the test
Spreadsheet
I have set up a spreadsheet that performs the exact binomial test exactbin.xls for sample sizes up to $1000$. It is self-explanatory. It uses the method of small $P$ values when the expected proportions are different from $50:50$.
Web page
Richard Lowry has set up a web page that does the exact binomial test. It does not use the method of small $P$ values, so I do not recommend it if your expected proportions are different from $50:50$. I'm not aware of any web pages that will do the exact binomial test using the method of small $P$ values, and I'm not aware of any web pages that will do exact multinomial tests.
R
Salvatore Mangiafico's R Companion has a sample R program for the exact test of goodness-of-fit.
SAS
Here is a sample SAS program, showing how to do the exact binomial test on the Gus data. The "$P=0.5$" gives the expected proportion of whichever value of the nominal variable is alphabetically first; in this case, it gives the expected proportion of "left." The SAS exact binomial function finds the two-tailed $P$ value by doubling the $P$ value of one tail. The binomial distribution is not symmetrical when the expected proportion is other than $50\%$, so the technique SAS uses isn't as good as the method of small $P$ values. I don't recommend doing the exact binomial test in SAS when the expected proportion is anything other than $50\%$.

DATA gus;
   INPUT paw $;
   DATALINES;
right
left
right
right
right
right
left
right
right
right
;
PROC FREQ DATA=gus;
   TABLES paw / BINOMIAL(P=0.5);
   EXACT BINOMIAL;
RUN;

Near the end of the output is this:

Exact Test
One-sided Pr <= P            0.0547
Two-sided = 2 * One-sided    0.1094

The "Two-sided=$2$*One-sided" number is the two-tailed $P$ value that you want. If you have the total numbers, rather than the raw values, you'd use a WEIGHT parameter in PROC FREQ. The ZEROS option tells it to include observations with counts of zero, for example if Gus had used his left paw $0$ times; it doesn't hurt to always include the ZEROS option.
Example

DATA gus;
   INPUT paw $ count;
   DATALINES;
right 8
left  2
;
PROC FREQ DATA=gus;
   WEIGHT count / ZEROS;
   TABLES paw / BINOMIAL(P=0.5);
   EXACT BINOMIAL;
RUN;

This example shows how to do the exact multinomial test. The numbers are Mendel's data from a genetic cross in which you expect a $9:3:3:1$ ratio of peas that are round+yellow, round+green, wrinkled+yellow, and wrinkled+green. The ORDER=DATA option tells SAS to analyze the data in the order they are input (rndyel, rndgrn, wrnkyel, wrnkgrn, in this case), not alphabetical order. The TESTP=($0.5625\; \; 0.1875\; \; 0.1875\; \; 0.0625$) lists the expected proportions in the same order.
Example

DATA peas;
   INPUT color $ count;
   DATALINES;
rndyel  315
rndgrn  108
wrnkyel 101
wrnkgrn  32
;
PROC FREQ DATA=peas ORDER=DATA;
   WEIGHT count / ZEROS;
   TABLES color / CHISQ TESTP=(0.5625 0.1875 0.1875 0.0625);
   EXACT CHISQ;
RUN;

The P value you want is labeled "Exact Pr >= ChiSq":

Chi-Square Test for Specified Proportions
-------------------------------------
Chi-Square               0.4700
DF                            3
Asymptotic Pr > ChiSq    0.9254
Exact Pr >= ChiSq        0.9272

Power analysis
Before you do an experiment, you should do a power analysis to estimate the sample size you'll need. To do this for an exact binomial test using G*Power, choose "Exact" under "Test Family" and choose "Proportion: Difference from constant" under "Statistical test." Under "Type of power analysis", choose "A priori: Compute required sample size". For "Input parameters," enter the number of tails (you'll almost always want two), alpha (usually $0.05$), and Power (often $0.5$, $0.8$, or $0.9$). The "Effect size" is the difference in proportions between observed and expected that you hope to see, and the "Constant proportion" is the expected proportion for one of the two categories (whichever is smaller). Hit "Calculate" and you'll get the Total Sample Size. As an example, let's say you wanted to do an experiment to see if Gus the cat really did use one paw more than the other for getting my attention. The null hypothesis is that the probability that he uses his left paw is $0.50$, so enter that in "Constant proportion". You decide that if the probability of him using his left paw is $0.40$, you want your experiment to have an $80\%$ probability of getting a significant ($P< 0.05$) result, so enter $0.10$ for Effect Size, $0.05$ for Alpha, and $0.80$ for Power. If he uses his left paw $60\%$ of the time, you'll accept that as a significant result too, so it's a two-tailed test. The result is $199$. This means that if Gus really is using his left paw $40\%$ (or $60\%$) of the time, a sample size of $199$ observations will have an $80\%$ probability of giving you a significant ($P< 0.05$) exact binomial test. Many power calculations for the exact binomial test, like G*Power, find the smallest sample size that will give the desired power, but there is a "sawtooth effect" in which increasing the sample size can actually reduce the power. Chernick and Liu (2002) suggest finding the smallest sample size that will give the desired power, even if the sample size is increased. For the Gus example, the method of Chernick and Liu gives a sample size of $210$, rather than the $199$ given by G*Power. Because both power and effect size are usually just arbitrary round numbers, where it would be easy to justify other values that would change the required sample size, the small differences in the method used to calculate desired sample size are probably not very important. The only reason I mention this is so that you won't be alarmed if different power analysis programs for the exact binomial test give slightly different results for the same parameters. G*Power does not do a power analysis for the exact test with more than two categories. If you have to do a power analysis and your nominal variable has more than two values, use the power analysis for chi-square tests in G*Power instead. The results will be pretty close to a true power analysis for the exact multinomial test, and given the arbitrariness of parameters like power and effect size, the results should be close enough.
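If you would rather script this kind of power analysis than use G*Power, the exact power of a two-tailed binomial test can be computed directly by enumerating the rejection region. The Python sketch below is not from the handbook; it assumes scipy is available, the function name is only for illustration, and because of the sawtooth effect and differences in how two-sided P values are defined, the sample size it settles on may differ a little from G*Power's 199 or Chernick and Liu's 210.

from scipy.stats import binom, binomtest

def exact_binomial_power(n, p_null, p_true, alpha=0.05):
    # probability of getting a significant two-tailed exact binomial test
    # when the true proportion is p_true
    reject = [k for k in range(n + 1)
              if binomtest(k, n, p_null).pvalue < alpha]
    return sum(binom.pmf(k, n, p_true) for k in reject)

# smallest sample size giving at least 80% power for the Gus example
# (null proportion 0.50, true proportion 0.40)
for n in range(150, 251):
    if exact_binomial_power(n, 0.5, 0.4) >= 0.80:
        print(n)
        break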
Learning Objectives • How to perform a power analysis to estimate the number of observations you need to have a good chance of detecting the effect you're looking for. Introduction When you are designing an experiment, it is a good idea to estimate the sample size you'll need. This is especially true if you're proposing to do something painful to humans or other vertebrates, where it is particularly important to minimize the number of individuals (without making the sample size so small that the whole experiment is a waste of time and suffering), or if you're planning a very time-consuming or expensive experiment. Methods have been developed for many statistical tests to estimate the sample size needed to detect a particular effect, or to estimate the size of the effect that can be detected with a particular sample size. In order to do a power analysis, you need to specify an effect size. This is the size of the difference between your null hypothesis and the alternative hypothesis that you hope to detect. For applied and clinical biological research, there may be a very definite effect size that you want to detect. For example, if you're testing a new dog shampoo, the marketing department at your company may tell you that producing the new shampoo would only be worthwhile if it made dogs' coats at least \(25\%\) shinier, on average. That would be your effect size, and you would use it when deciding how many dogs you would need to put through the canine reflectometer. When doing basic biological research, you often don't know how big a difference you're looking for, and the temptation may be to just use the biggest sample size you can afford, or use a similar sample size to other research in your field. You should still do a power analysis before you do the experiment, just to get an idea of what kind of effects you could detect. For example, some anti-vaccination kooks have proposed that the U.S. government conduct a large study of unvaccinated and vaccinated children to see whether vaccines cause autism. It is not clear what effect size would be interesting: \(10\%\) more autism in one group? \(50\%\) more? twice as much? However, doing a power analysis shows that even if the study included every unvaccinated child in the United States aged \(3\) to \(6\), and an equal number of vaccinated children, there would have to be 25% more autism in one group in order to have a high chance of seeing a significant difference. A more plausible study, of \(5,000\) unvaccinated and \(5,000\)vaccinated children, would detect a significant difference with high power only if there were three times more autism in one group than the other. Because it is unlikely that there is such a big difference in autism between vaccinated and unvaccinated children, and because failing to find a relationship with such a study would not convince anti-vaccination kooks that there was no relationship (nothing would convince them there's no relationship—that's what makes them kooks), the power analysis tells you that such a large, expensive study would not be worthwhile. Parameters There are four or five numbers involved in a power analysis. You must choose the values for each one before you do the analysis. If you don't have a good reason for using a particular value, you can try different values and look at the effect on sample size. Effect size The effect size is the minimum deviation from the null hypothesis that you hope to detect. 
For example, if you are treating hens with something that you hope will change the sex ratio of their chicks, you might decide that the minimum change in the proportion of sexes that you're looking for is \(10\%\). You would then say that your effect size is \(10\%\). If you're testing something to make the hens lay more eggs, the effect size might be \(2\) eggs per month. Occasionally, you'll have a good economic or clinical reason for choosing a particular effect size. If you're testing a chicken feed supplement that costs \(\$1.50\) per month, you're only interested in finding out whether it will produce more than \(\$1.50\) worth of extra eggs each month; knowing that a supplement produces an extra \(0.1\) egg a month is not useful information to you, and you don't need to design your experiment to find that out. But for most basic biological research, the effect size is just a nice round number that you pulled out of your butt. Let's say you're doing a power analysis for a study of a mutation in a promoter region, to see if it affects gene expression. How big a change in gene expression are you looking for: \(10\%\)? \(20\%\)? \(50\%\)? It's a pretty arbitrary number, but it will have a huge effect on the number of transgenic mice who will give their expensive little lives for your science. If you don't have a good reason to look for a particular effect size, you might as well admit that and draw a graph with sample size on the X-axis and effect size on the Y-axis. G*Power will do this for you. Alpha Alpha is the significance level of the test (the P value), the probability of rejecting the null hypothesis even though it is true (a false positive). The usual value is alpha=\(0.05\). Some power calculators use the one-tailed alpha, which is confusing, since the two-tailed alpha is much more common. Be sure you know which you're using. Beta or power Beta, in a power analysis, is the probability of accepting the null hypothesis, even though it is false (a false negative), when the real difference is equal to the minimum effect size. The power of a test is the probability of rejecting the null hypothesis (getting a significant result) when the real difference is equal to the minimum effect size. Power is \(1\)−beta. There is no clear consensus on the value to use, so this is another number you pull out of your butt; a power of \(80\%\) (equivalent to a beta of \(20\%\)) is probably the most common, while some people use \(50\%\) or \(90\%\). The cost to you of a false negative should influence your choice of power; if you really, really want to be sure that you detect your effect size, you'll want to use a higher value for power (lower beta), which will result in a bigger sample size. Some power calculators ask you to enter beta, while others ask for power (\(1\)−beta); be very sure you understand which you need to use. Standard deviation For measurement variables, you also need an estimate of the standard deviation. As standard deviation gets bigger, it gets harder to detect a significant difference, so you'll need a bigger sample size. Your estimate of the standard deviation can come from pilot experiments or from similar experiments in the published literature. Your standard deviation once you do the experiment is unlikely to be exactly the same, so your experiment will actually be somewhat more or less powerful than you had predicted. For nominal variables, the standard deviation is a simple function of the sample size, so you don't need to estimate it separately. 
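For a single proportion, there is a standard normal-approximation formula that shows how effect size, alpha, and power trade off against one another. It is only an approximation (exact-test programs such as G*Power refine it), it is not part of the handbook, and the function name below is just for illustration; the sketch assumes scipy and computes the sample size for a null proportion of 0.50 and a hoped-for proportion of 0.40.

from math import ceil, sqrt
from scipy.stats import norm

def approx_n_one_proportion(p_null, p_true, alpha=0.05, power=0.80):
    # normal-approximation sample size for a two-tailed test that a
    # proportion differs from p_null; a rough guide only
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    numerator = (z_alpha * sqrt(p_null * (1 - p_null))
                 + z_beta * sqrt(p_true * (1 - p_true))) ** 2
    return ceil(numerator / (p_true - p_null) ** 2)

print(approx_n_one_proportion(0.50, 0.40))   # 194 with these settings

Shrinking the effect size or raising the power pushes the required sample size up; that is the trade-off among the parameters described above.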
How it works The details of a power analysis are different for different statistical tests, but the basic concepts are similar; here I'll use the exact binomial test as an example. Imagine that you are studying wrist fractures, and your null hypothesis is that half the people who break one wrist break their right wrist, and half break their left. You decide that the minimum effect size is \(10\%\); if the percentage of people who break their right wrist is \(60\%\) or more, or \(40\%\) or less, you want to have a significant result from the exact binomial test. I have no idea why you picked \(10\%\), but that's what you'll use. Alpha is \(5\%\), as usual. You want power to be \(90\%\), which means that if the percentage of broken right wrists really is \(40\%\) or \(60\%\), you want a sample size that will yield a significant (\(P<0.05\)) result \(90\%\) of the time, and a non-significant result (which would be a false negative in this case) only \(10\%\) of the time. The first graph shows the probability distribution under the null hypothesis, with a sample size of \(50\) individuals. If the null hypothesis is true, you'll see less than \(36\%\) or more than \(64\%\) of people breaking their right wrists (a false positive) about \(5\%\) of the time. As the second graph shows, if the true percentage is \(40\%\), the sample data will be less than \(36\%\) or more than \(64\%\) only \(21\%\) of the time; you'd get a true positive only \(21\%\) of the time, and a false negative \(79\%\) of the time. Obviously, a sample size of \(50\) is too small for this experiment; it would only yield a significant result \(21\%\) of the time, even if there's a \(40:60\) ratio of broken right wrists to left wrists. The next graph shows the probability distribution under the null hypothesis, with a sample size of \(270\) individuals. In order to be significant at the \(P<0.05\) level, the observed result would have to be less than \(43.7\%\) or more than \(56.3\%\) of people breaking their right wrists. As the second graph shows, if the true percentage is \(40\%\), the sample data will be this extreme \(90\%\) of the time. A sample size of \(270\) is pretty good for this experiment; it would yield a significant result \(90\%\) of the time if there's a \(40:60\) ratio of broken right wrists to left wrists. If the ratio of broken right to left wrists is further away from \(40:60\), you'll have an even higher probability of getting a significant result. Example You plan to cross peas that are heterozygotes for Yellow/green pea color, where Yellow is dominant. The expected ratio in the offspring is \(3\) Yellow: \(1\) green. You want to know whether yellow peas are actually more or less fit, which might show up as a different proportion of yellow peas than expected. You arbitrarily decide that you want a sample size that will detect a significant (\(P<0.05\)) difference if there are \(3\%\) more or fewer yellow peas than expected, with a power of \(90\%\). You will test the data using the exact binomial test of goodness-of-fit if the sample size is small enough, or a G–test of goodness-of-fit if the sample size is larger. The power analysis is the same for both tests. Using G*Power as described for the exact test of goodness-of-fit, the result is that it would take \(2109\) pea plants if you want to get a significant (\(P<0.05\)) result \(90\%\) of the time, if the true proportion of yellow peas is \(78\%\), and \(2271\) peas if the true proportion is \(72\%\) yellow. 
Since you'd be interested in a deviation in either direction, you use the larger number, \(2271\). That's a lot of peas, but you're reassured to see that it's not a ridiculous number. If you want to detect a difference of \(0.1\%\) between the expected and observed numbers of yellow peas, you can calculate that you'll need \(1,970,142\) peas; if that's what you need to detect, the sample size analysis tells you that you're going to have to include a pea-sorting robot in your budget. Example The example data for the two-sample t–test shows that the average height in the \(2p.m.\) section of Biological Data Analysis was \(66.6\) inches and the average height in the \(5p.m.\) section was \(64.6\) inches, but the difference is not significant (\(P=0.207\)). You want to know how many students you'd have to sample to have an \(80\%\) chance of a difference this large being significant. Using G*Power as described on the two-sample t–test page, enter \(2.0\) for the difference in means. Using the STDEV function in Excel, calculate the standard deviation for each sample in the original data; it is \(4.8\) for sample \(1\) and \(3.6\) for sample \(2\). Enter \(0.05\) for alpha and \(0.80\) for power. The result is \(72\), meaning that if \(5p.m.\) students really were two inches shorter than \(2p.m.\) students, you'd need \(72\) students in each class to detect a significant difference \(80\%\) of the time, if the true difference really is \(2.0\) inches. How to do power analyses G*Power G*Power is an excellent free program, available for Mac and Windows, that will do power analyses for a large variety of tests. I will explain how to use G*Power for power analyses for most of the tests in this handbook. R Salvatore Mangiafico's R Companion has sample R programs to do power analyses for many of the tests in this handbook; go to the page for the individual test and scroll to the bottom for the power analysis program. SAS SAS has a PROC POWER that you can use for power analyses. You enter the needed parameters (which vary depending on the test) and enter a period (which symbolizes missing data in SAS) for the parameter you're solving for (usually ntotal, the total sample size, or npergroup, the number of samples in each group). I find that G*Power is easier to use than SAS for this purpose, so I don't recommend using SAS for your power analyses.
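If you want to script a power analysis instead of, or as a check on, G*Power, the same calculations can be done with scipy's noncentral distributions. The sketch below is not from the handbook; it assumes scipy, pools the two standard deviations into a single value (a simplification), and uses an illustrative function name. Run on the height example above, it should land at or very near the 72 students per section that G*Power reports.

from math import sqrt
from scipy.stats import t, nct

def two_sample_t_power(n_per_group, diff, sd1, sd2, alpha=0.05):
    # power of a two-tailed two-sample t-test with equal group sizes
    sd_pooled = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    d = diff / sd_pooled                 # standardized effect size (Cohen's d)
    df = 2 * n_per_group - 2
    ncp = d * sqrt(n_per_group / 2)      # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)
    return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

# 2.0 inch difference in mean height, standard deviations 4.8 and 3.6
n = 2
while two_sample_t_power(n, 2.0, 4.8, 3.6) < 0.80:
    n += 1
print(n)   # about 72 per group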
Learning Objectives
• To learn when to use the chi-square test of goodness-of-fit.
• To learn how to use it when you have one nominal variable, you want to see whether the number of observations in each category fits a theoretical expectation, and the sample size is large.
When to use it
Use the chi-square test of goodness-of-fit when you have one nominal variable with two or more values (such as red, pink and white flowers). You compare the observed counts of observations in each category with the expected counts, which you calculate using some kind of theoretical expectation (such as a $1:1$ sex ratio or a $1:2:1$ ratio in a genetic cross). If the expected number of observations in any category is too small, the chi-square test may give inaccurate results, and you should use an exact test instead. See the web page on small sample sizes for discussion of what "small" means. The chi-square test of goodness-of-fit is an alternative to the G–test of goodness-of-fit; each of these tests has some advantages and some disadvantages, and the results of the two tests are usually very similar. You should read the section on "Chi-square vs. G–test" near the bottom of this page, pick either chi-square or G–test, then stick with that choice for the rest of your life. Much of the information and examples on this page are the same as on the G–test page, so once you've decided which test is better for you, you only need to read one.
Null hypothesis
The statistical null hypothesis is that the number of observations in each category is equal to that predicted by a biological theory, and the alternative hypothesis is that the observed numbers are different from the expected. The null hypothesis is usually an extrinsic hypothesis, where you knew the expected proportions before doing the experiment. Examples include a $1:1$ sex ratio or a $1:2:1$ ratio in a genetic cross. Another example would be looking at an area of shore that had $59\%$ of the area covered in sand, $28\%$ mud and $13\%$ rocks; if you were investigating where seagulls like to stand, your null hypothesis would be that $59\%$ of the seagulls were standing on sand, $28\%$ on mud and $13\%$ on rocks. In some situations, you have an intrinsic hypothesis. This is a null hypothesis where you calculate the expected proportions after you do the experiment, using some of the information from the data. The best-known example of an intrinsic hypothesis is the Hardy-Weinberg proportions of population genetics: if the frequency of one allele in a population is $p$ and the other allele is $q$, the null hypothesis is that expected frequencies of the three genotypes are $p^2$, $2pq$, and $q^2$. This is an intrinsic hypothesis because you estimate $p$ and $q$ from the data after you collect the data; you can't predict $p$ and $q$ before the experiment.
How the test works
Unlike the exact test of goodness-of-fit, the chi-square test does not directly calculate the probability of obtaining the observed results or something more extreme. Instead, like almost all statistical tests, the chi-square test has an intermediate step; it uses the data to calculate a test statistic that measures how far the observed data are from the null expectation. You then use a mathematical relationship, in this case the chi-square distribution, to estimate the probability of obtaining that value of the test statistic. You calculate the test statistic by taking an observed number ($O$), subtracting the expected number ($E$), then squaring this difference.
The larger the deviation from the null hypothesis, the larger the difference is between observed and expected. Squaring the differences makes them all positive. You then divide each difference by the expected number, and you add up these standardized differences. The test statistic is approximately equal to the log-likelihood ratio used in the G–test. It is conventionally called a "chi-square" statistic, although this is somewhat confusing because it's just one of many test statistics that follows the theoretical chi-square distribution. The equation is: $\text{chi}^{2}=\sum \frac{(O-E)^2}{E}$ As with most test statistics, the larger the difference between observed and expected, the larger the test statistic becomes. To give an example, let's say your null hypothesis is a $3:1$ ratio of smooth wings to wrinkled wings in offspring from a bunch of Drosophila crosses. You observe $770$ flies with smooth wings and $230$ flies with wrinkled wings; the expected values are $750$ smooth-winged and $250$ wrinkled-winged flies. Entering these numbers into the equation, the chi-square value is $2.13$. If you had observed $760$ smooth-winged flies and $240$ wrinkled-wing flies, which is closer to the null hypothesis, your chi-square value would have been smaller, at $0.53$; if you'd observed $800$ smooth-winged and $200$ wrinkled-wing flies, which is further from the null hypothesis, your chi-square value would have been $13.33$. The distribution of the test statistic under the null hypothesis is approximately the same as the theoretical chi-square distribution. This means that once you know the chi-square value and the number of degrees of freedom, you can calculate the probability of getting that value of chi-square using the chi-square distribution. The number of degrees of freedom is the number of categories minus one, so for our example there is one degree of freedom. Using the CHIDIST function in a spreadsheet, you enter =CHIDIST(2.13, 1) and calculate that the probability of getting a chi-square value of $2.13$ with one degree of freedom is $P=0.144$. The shape of the chi-square distribution depends on the number of degrees of freedom. For an extrinsic null hypothesis (the much more common situation, where you know the proportions predicted by the null hypothesis before collecting the data), the number of degrees of freedom is simply the number of values of the variable, minus one. Thus if you are testing a null hypothesis of a $1:1$ sex ratio, there are two possible values (male and female), and therefore one degree of freedom. This is because once you know how many of the total are females (a number which is "free" to vary from $0$ to the sample size), the number of males is determined. If there are three values of the variable (such as red, pink, and white), there are two degrees of freedom, and so on. An intrinsic null hypothesis is one where you estimate one or more parameters from the data in order to get the numbers for your null hypothesis. As described above, one example is Hardy-Weinberg proportions. For an intrinsic null hypothesis, the number of degrees of freedom is calculated by taking the number of values of the variable, subtracting $1$ for each parameter estimated from the data, then subtracting $1$ more. 
Thus for Hardy-Weinberg proportions with two alleles and three genotypes, there are three values of the variable (the three genotypes); you subtract one for the parameter estimated from the data (the allele frequency, $p$); and then you subtract one more, yielding one degree of freedom. There are other statistical issues involved in testing fit to Hardy-Weinberg expectations, so if you need to do this, see Engels (2009) and the older references he cites.
Post-hoc test
If there are more than two categories and you want to find out which ones are significantly different from their null expectation, you can use the same method of testing each category vs. the sum of all other categories, with the Bonferroni correction, as I describe for the exact test. You use chi-square tests for each category, of course.
Assumptions
The chi-square test of goodness-of-fit assumes independence, as described for the exact test.
Examples
Extrinsic Hypothesis examples
Example
European crossbills (Loxia curvirostra) have the tip of the upper bill either right or left of the lower bill, which helps them extract seeds from pine cones. Some have hypothesized that frequency-dependent selection would keep the number of right and left-billed birds at a $1:1$ ratio. Groth (1992) observed $1752$ right-billed and $1895$ left-billed crossbills. Calculate the expected frequency of right-billed birds by multiplying the total sample size ($3647$) by the expected proportion ($0.5$) to yield $1823.5$. Do the same for left-billed birds. The number of degrees of freedom for an extrinsic hypothesis is the number of classes minus one. In this case, there are two classes (right and left), so there is one degree of freedom. The result is chi-square=$5.61$, $1d.f.$, $P=0.018$, indicating that you can reject the null hypothesis; there are significantly more left-billed crossbills than right-billed.
Example
Shivrain et al. (2006) crossed clearfield rice, which are resistant to the herbicide imazethapyr, with red rice, which are susceptible to imazethapyr. They then crossed the hybrid offspring and examined the $F_2$ generation, where they found $772$ resistant plants, $1611$ moderately resistant plants, and $737$ susceptible plants. If resistance is controlled by a single gene with two co-dominant alleles, you would expect a $1:2:1$ ratio. Comparing the observed numbers with the $1:2:1$ ratio, the chi-square value is $4.12$. There are two degrees of freedom (the three categories, minus one), so the $P$ value is $0.127$; there is no significant difference from a $1:2:1$ ratio.
Example
Mannan and Meslow (1984) studied bird foraging behavior in a forest in Oregon. In a managed forest, $54\%$ of the canopy volume was Douglas fir, $28\%$ was ponderosa pine, $5\%$ was grand fir, and $1\%$ was western larch. They made $156$ observations of foraging by red-breasted nuthatches; $70$ observations ($45\%$ of the total) in Douglas fir, $79$ ($51\%$) in ponderosa pine, $3$ ($2\%$) in grand fir, and $4$ ($3\%$) in western larch. The biological null hypothesis is that the birds forage randomly, without regard to what species of tree they're in; the statistical null hypothesis is that the proportions of foraging events are equal to the proportions of canopy volume. The difference in proportions is significant (chi-square=$13.59$, $3d.f.$, $P=0.0035$). The expected numbers in this example are pretty small, so it would be better to analyze it with an exact test.
I'm leaving it here because it's a good example of an extrinsic hypothesis that comes from measuring something (canopy volume, in this case), not a mathematical theory; I've had a hard time finding good examples of this.
Intrinsic Hypothesis examples
Example
McDonald (1989) examined variation at the $\mathit{Mpi}$ locus in the amphipod crustacean Platorchestia platensis collected from a single location on Long Island, New York. There were two alleles, $\mathit{Mpi}^{90}$ and $\mathit{Mpi}^{100}$, and the genotype frequencies in samples from multiple dates pooled together were $1203$ $\mathit{Mpi}^{90/90}$, $2919$ $\mathit{Mpi}^{90/100}$, and $1678$ $\mathit{Mpi}^{100/100}$. The estimate of the $\mathit{Mpi}^{90}$ allele proportion from the data is $5325/11600=0.459$. Using the Hardy-Weinberg formula and this estimated allele proportion, the expected genotype proportions are $0.211$ $\mathit{Mpi}^{90/90}$, $0.497$ $\mathit{Mpi}^{90/100}$, and $0.293$ $\mathit{Mpi}^{100/100}$. There are three categories (the three genotypes) and one parameter estimated from the data (the $\mathit{Mpi}^{90}$ allele proportion), so there is one degree of freedom. The result is chi-square=$1.08$, $1d.f.$, $P=0.299$, which is not significant. You cannot reject the null hypothesis that the data fit the expected Hardy-Weinberg proportions.
Graphing the results
If there are just two values of the nominal variable, you shouldn't display the result in a graph, as that would be a bar graph with just one bar. Instead, just report the proportion; for example, Groth (1992) found $52.0\%$ left-billed crossbills. With more than two values of the nominal variable, you should usually present the results of a goodness-of-fit test in a table of observed and expected proportions. If the expected values are obvious (such as $50\%$) or easily calculated from the data (such as Hardy–Weinberg proportions), you can omit the expected numbers from your table. For a presentation you'll probably want a graph showing both the observed and expected proportions, to give a visual impression of how far apart they are. You should use a bar graph for the observed proportions; the expected can be shown with a horizontal dashed line, or with bars of a different pattern. If you want to add error bars to the graph, you should use confidence intervals for a proportion. Note that the confidence intervals will not be symmetrical, and this will be particularly obvious if the proportion is near $0$ or $1$. Some people use a "stacked bar graph" to show proportions, especially if there are more than two categories. However, it can make it difficult to compare the sizes of the observed and expected values for the middle categories, since both their tops and bottoms are at different levels, so I don't recommend it.
Similar tests
You use the chi-square test of independence for two nominal variables, not one. There are several tests that use chi-square statistics. The one described here is formally known as Pearson's chi-square. It is by far the most common chi-square test, so it is usually just called the chi-square test. You have a choice of three goodness-of-fit tests: the exact test of goodness-of-fit, the G–test of goodness-of-fit, or the chi-square test of goodness-of-fit. For small values of the expected numbers, the chi-square and G–tests are inaccurate, because the distributions of the test statistics do not fit the chi-square distribution very well.
The usual rule of thumb is that you should use the exact test when the smallest expected value is less than $5$, and the chi-square and G–tests are accurate enough for larger expected values. This rule of thumb dates from the olden days when people had to do statistical calculations by hand, and the calculations for the exact test were very tedious and to be avoided if at all possible. Nowadays, computers make it just as easy to do the exact test as the computationally simpler chi-square or G–test, unless the sample size is so large that even computers can't handle it. I recommend that you use the exact test when the total sample size is less than $1000$. With sample sizes between $50$ and $1000$ and expected values greater than $5$, it generally doesn't make a big difference which test you use, so you shouldn't criticize someone for using the chi-square or G–test for experiments where I recommend the exact test. See the web page on small sample sizes for further discussion. Chi-square vs. G–test The chi-square test gives approximately the same results as the G–test. Unlike the chi-square test, the G-values are additive; you can conduct an elaborate experiment in which the G-values of different parts of the experiment add up to an overall G-value for the whole experiment. Chi-square values come close to this, but the chi-square values of subparts of an experiment don't add up exactly to the chi-square value for the whole experiment. G–tests are a subclass of likelihood ratio tests, a general category of tests that have many uses for testing the fit of data to mathematical models; the more elaborate versions of likelihood ratio tests don't have equivalent tests using the Pearson chi-square statistic. The ability to do more elaborate statistical analyses is one reason some people prefer the G–test, even for simpler designs. On the other hand, the chi-square test is more familiar to more people, and it's always a good idea to use statistics that your readers are familiar with when possible. You may want to look at the literature in your field and use whichever is more commonly used. Of course, you should not analyze your data with both the G–test and the chi-square test, then pick whichever gives you the most interesting result; that would be cheating. Any time you try more than one statistical technique and just use the one that gives the lowest P value, you're increasing your chance of a false positive. How to do the test Spreadsheet I have set up a spreadsheet for the chi-square test of goodness-of-fit chigof.xls. It is largely self-explanatory. It will calculate the degrees of freedom for you if you're using an extrinsic null hypothesis; if you are using an intrinsic hypothesis, you must enter the degrees of freedom into the spreadsheet. Web pages There are web pages that will perform the chi-square test here and here. None of these web pages lets you set the degrees of freedom to the appropriate value for testing an intrinsic null hypothesis. R Salvatore Mangiafico's R Companion has a sample R program for the chi-square test of goodness-of-fit. SAS Here is a SAS program that uses PROC FREQ for a chi-square test. It uses the Mendel pea data from above. The "WEIGHT count" tells SAS that the "count" variable is the number of times each value of "texture" was observed. The ZEROS option tells it to include observations with counts of zero, for example if you had $20$ smooth peas and $0$ wrinkled peas; it doesn't hurt to always include the ZEROS option.
CHISQ tells SAS to do a chi-square test, and TESTP=(75 25); tells it the expected percentages. The expected percentages must add up to $100$. You must give the expected percentages in alphabetical order: because "smooth" comes before "wrinkled," you give the expected percentages for $75\%$ smooth, $25\%$ wrinkled.

DATA peas;
   INPUT texture $ count;
   DATALINES;
smooth 423
wrinkled 133
;
PROC FREQ DATA=peas;
   WEIGHT count / ZEROS;
   TABLES texture / CHISQ TESTP=(75 25);
RUN;

Here's a SAS program that uses PROC FREQ for a chi-square test on raw data, where you've listed each individual observation instead of counting them up yourself. I've used three dots to indicate that I haven't shown the complete data set.

DATA peas;
   INPUT texture $;
   DATALINES;
smooth
wrinkled
smooth
smooth
wrinkled
smooth
. . .
smooth
smooth
;
PROC FREQ DATA=peas;
   TABLES texture / CHISQ TESTP=(75 25);
RUN;

The output includes the following:

Chi-Square Test
for Specified Proportions
-------------------------
Chi-Square       0.3453
DF                    1
Pr > ChiSq       0.5568

You would report this as "chi-square=0.3453, 1 d.f., P=0.5568." Power analysis To do a power analysis using the G*Power program, choose "Goodness-of-fit tests: Contingency tables" from the Statistical Test menu, then choose "Chi-squared tests" from the Test Family menu. To calculate effect size, click on the Determine button and enter the null hypothesis proportions in the first column and the proportions you hope to see in the second column. Then click on the Calculate and Transfer to Main Window button. Set your alpha and power, and be sure to set the degrees of freedom (Df); for an extrinsic null hypothesis, that will be the number of rows minus one. As an example, let's say you want to do a genetic cross of snapdragons with an expected $1:2:1$ ratio, and you want to be able to detect a pattern with $5\%$ more heterozygotes than expected. Enter $0.25$, $0.50$, and $0.25$ in the first column, enter $0.225$, $0.55$, and $0.225$ in the second column, click on Calculate and Transfer to Main Window, enter $0.05$ for alpha, $0.80$ for power, and $2$ for degrees of freedom. If you've done this correctly, your result should be a total sample size of $964$.
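If you would rather do this power analysis in R than in G*Power, the pwr package can reproduce it (the package choice is my assumption; the text itself only describes G*Power):

# Hedged sketch: the snapdragon power analysis above, done with the pwr package.
library(pwr)

p_null <- c(0.25, 0.50, 0.25)     # 1:2:1 expected ratio
p_alt  <- c(0.225, 0.55, 0.225)   # 5% more heterozygotes than expected

w <- ES.w1(p_null, p_alt)         # effect size; 0.10 for these proportions
pwr.chisq.test(w = w, df = 2, sig.level = 0.05, power = 0.80)
# N comes out just under 964, matching the G*Power result above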
Learning Objectives • To study the use of G–test of goodness-of-fit (also known as the likelihood ratio test, the log-likelihood ratio test, or the G2 test) when you have one nominal variable • To see whether the number of observations in each category fits a theoretical expectation, and the sample size is large When to use it Use the G–test of goodness-of-fit when you have one nominal variable with two or more values (such as male and female, or red, pink and white flowers). You compare the observed counts of numbers of observations in each category with the expected counts, which you calculate using some kind of theoretical expectation (such as a $1:1$ sex ratio or a $1:2:1$ ratio in a genetic cross). If the expected number of observations in any category is too small, the G–test may give inaccurate results, and you should use an exact test instead. See the web page on small sample sizes for discussion of what "small" means. The G–test of goodness-of-fit is an alternative to the chi-square test of goodness-of-fit; each of these tests has some advantages and some disadvantages, and the results of the two tests are usually very similar. You should read the section on "Chi-square vs. G–test" near the bottom of this page, pick either chi-square or G–test, then stick with that choice for the rest of your life. Much of the information and examples on this page are the same as on the chi-square test page, so once you've decided which test is better for you, you only need to read one. Null hypothesis The statistical null hypothesis is that the number of observations in each category is equal to that predicted by a biological theory, and the alternative hypothesis is that the observed numbers are different from the expected. The null hypothesis is usually an extrinsic hypothesis, where you know the expected proportions before doing the experiment. Examples include a $1:1$ sex ratio or a $1:2:1$ ratio in a genetic cross. Another example would be looking at an area of shore that had $59\%$ of the area covered in sand, $28\%$ mud and $13\%$ rocks; if you were investigating where seagulls like to stand, your null hypothesis would be that $59\%$ of the seagulls were standing on sand, $28\%$ on mud and $13\%$ on rocks. In some situations, you have an intrinsic hypothesis. This is a null hypothesis where you calculate the expected proportions after the experiment is done, using some of the information from the data. The best-known example of an intrinsic hypothesis is the Hardy-Weinberg proportions of population genetics: if the frequency of one allele in a population is $p$ and the other allele is $q$, the null hypothesis is that expected frequencies of the three genotypes are $p^2$, $2pq$, and $q^2$. This is an intrinsic hypothesis, because you estimate $p$ and $q$ from the data after you collect the data, you can't predict $p$ and $q$ before the experiment. How the test works Unlike the exact test of goodness-of-fit, the G–test does not directly calculate the probability of obtaining the observed results or something more extreme. Instead, like almost all statistical tests, the G–test has an intermediate step; it uses the data to calculate a test statistic that measures how far the observed data are from the null expectation. You then use a mathematical relationship, in this case the chi-square distribution, to estimate the probability of obtaining that value of the test statistic. 
The G–test uses the log of the ratio of two likelihoods as the test statistic, which is why it is also called a likelihood ratio test or log-likelihood ratio test. (Likelihood is another word for probability.) To give an example, let's say your null hypothesis is a $3:1$ ratio of smooth wings to wrinkled wings in offspring from a bunch of Drosophila crosses. You observe $770$ flies with smooth wings and $230$ flies with wrinkled wings. Using the binomial equation, you can calculate the likelihood of obtaining exactly $770$ smooth-winged flies, if the null hypothesis is true that $75\%$ of the flies should have smooth wings ($L_{null}$); it is $0.01011$. You can also calculate the likelihood of obtaining exactly $770$ smooth-winged flies if the alternative hypothesis is true that $77\%$ of the flies should have smooth wings ($L_{alt}$); it is $0.02997$. This alternative hypothesis is that the true proportion of smooth-winged flies is exactly equal to what you observed in the experiment, so the likelihood under the alternative hypothesis will be higher than for the null hypothesis. To get the test statistic, you start with $L_{null}/L_{alt}$; this ratio will get smaller as $L_{null}$ gets smaller, which will happen as the observed results get further from the null expectation. Taking the natural log of this likelihood ratio, and multiplying it by $-2$, gives the log-likelihood ratio, or $G$-statistic. It gets bigger as the observed data get further from the null expectation. For the fly example, the test statistic is $G=2.17$. If you had observed $760$ smooth-winged flies and $240$ wrinkled-wing flies, which is closer to the null hypothesis, your $G$-value would have been smaller, at $0.54$; if you'd observed $800$ smooth-winged and $200$ wrinkled-wing flies, which is further from the null hypothesis, your $G$-value would have been $14.00$. You multiply the log-likelihood ratio by $-2$ because that makes it approximately fit the chi-square distribution. This means that once you know the G-statistic and the number of degrees of freedom, you can calculate the probability of getting that value of $G$ using the chi-square distribution. The number of degrees of freedom is the number of categories minus one, so for our example (with two categories, smooth and wrinkled) there is one degree of freedom. Using the CHIDIST function in a spreadsheet, you enter =CHIDIST(2.17, 1) and calculate that the probability of getting a $G$-value of $2.17$ with one degree of freedom is $P=0.140$. Directly calculating each likelihood can be computationally difficult if the sample size is very large. Fortunately, when you take the ratio of two likelihoods, a bunch of stuff divides out and the function becomes much simpler: you calculate the $G$-statistic by taking an observed number ($O$), dividing it by the expected number ($E$), then taking the natural log of this ratio. You do this for the observed number in each category. Multiply each log by the observed number, sum these products and multiply by $2$. The equation is: $G=2\sum \left [ O\times \ln \left ( \frac{O}{E}\right ) \right ]$ The shape of the chi-square distribution depends on the number of degrees of freedom. For an extrinsic null hypothesis (the much more common situation, where you know the proportions predicted by the null hypothesis before collecting the data), the number of degrees of freedom is simply the number of values of the variable, minus one.
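Here is a minimal R sketch of the fly example (my own illustration; a spreadsheet's CHIDIST function, as described above, works just as well), showing that the ratio of likelihoods and the shortcut formula give the same $G$:

# Hedged sketch: G for 770 smooth-winged and 230 wrinkled-winged flies vs. 3:1.

# 1. Directly from the two binomial likelihoods described above
L_null <- dbinom(770, size = 1000, prob = 0.75)   # 0.01011
L_alt  <- dbinom(770, size = 1000, prob = 0.77)   # 0.02997
-2 * log(L_null / L_alt)                          # G = 2.17

# 2. From the shortcut formula G = 2 * sum(O * ln(O/E))
observed <- c(770, 230)
expected <- 1000 * c(0.75, 0.25)
G <- 2 * sum(observed * log(observed / expected)) # also 2.17
pchisq(G, df = 1, lower.tail = FALSE)             # P = 0.14, as in the text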
Thus if you are testing a null hypothesis of a $1:1$ sex ratio, there are two possible values (male and female), and therefore one degree of freedom. This is because once you know how many of the total are females (a number which is "free" to vary from 0 to the sample size), the number of males is determined. If there are three values of the variable (such as red, pink, and white), there are two degrees of freedom, and so on. An intrinsic null hypothesis is one where you estimate one or more parameters from the data in order to get the numbers for your null hypothesis. As described above, one example is Hardy-Weinberg proportions. For an intrinsic null hypothesis, the number of degrees of freedom is calculated by taking the number of values of the variable, subtracting $1$ for each parameter estimated from the data, then subtracting $1$ more. Thus for Hardy-Weinberg proportions with two alleles and three genotypes, there are three values of the variable (the three genotypes); you subtract one for the parameter estimated from the data (the allele frequency, $p$); and then you subtract one more, yielding one degree of freedom. There are other statistical issues involved in testing fit to Hardy-Weinberg expectations, so if you need to do this, see Engels (2009) and the older references he cites.
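The same calculation can be sketched in R, with the intrinsic degrees of freedom handled by hand (a sketch of my own; the counts are the Mpi genotype data used for the chi-square version of this example in the previous chapter):

# Hedged sketch: G-test of goodness-of-fit with an intrinsic (Hardy-Weinberg)
# null hypothesis.  The key point is df = 3 genotypes - 1 estimated parameter - 1.
observed <- c(1203, 2919, 1678)                     # Mpi 90/90, 90/100, 100/100
n <- sum(observed)
p <- (2 * observed[1] + observed[2]) / (2 * n)      # allele proportion estimated
expected <- n * c(p^2, 2 * p * (1 - p), (1 - p)^2)  # from the data themselves

G <- 2 * sum(observed * log(observed / expected))   # close to the chi-square value
pchisq(G, df = 1, lower.tail = FALSE)               # P is about 0.3: no significant
                                                    # departure from Hardy-Weinberg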
Learning Objectives • To use the chi-square test of independence when you have two nominal variables and you want to see whether the proportions of one variable are different for different values of the other variable. • Use it when the sample size is large. When to use it Use the chi-square test of independence when you have two nominal variables, each with two or more possible values. You want to know whether the proportions for one variable are different among values of the other variable. For example, Jackson et al. (2013) wanted to know whether it is better to give the diphtheria, tetanus and pertussis (DTaP) vaccine in either the thigh or the arm, so they collected data on severe reactions to this vaccine in children aged $3$ to $6$ years old. One nominal variable is severe reaction vs. no severe reaction; the other nominal variable is thigh vs. arm. No severe reaction Severe reaction Percent severe reaction Thigh 4758 30 0.63% Arm 8840 76 0.85% There is a higher proportion of severe reactions in children vaccinated in the arm; a chi-square of independence will tell you whether a difference this big is likely to have occurred by chance. A data set like this is often called an "$R\times C$ table," where $R$ is the number of rows and $C$ is the number of columns. This is a $2\times 2$ table. If the results were divided into "no reaction", "swelling," and "pain", it would have been a $2\times 3$ table, or a $3\times 2$ table; it doesn't matter which variable is the columns and which is the rows. It is also possible to do a chi-square test of independence with more than two nominal variables. For example, Jackson et al. (2013) also had data for children under $3$, so you could do an analysis of old vs. young, thigh vs. arm, and reaction vs. no reaction, all analyzed together. That experimental design doesn't occur very often in experimental biology and is rather complicated to analyze and interpret, so I don't cover it in this handbook (except for the special case of repeated $2\times 2$ tables, analyzed with the Cochran-Mantel-Haenszel test). Fisher's exact test is more accurate than the chi-square test of independence when the expected numbers are small, so I only recommend the chi-square test if your total sample size is greater than $1000$. See the web page on small sample sizes for further discussion of what it means to be "small". The chi-square test of independence is an alternative to the G–test of independence, and they will give approximately the same results. Most of the information on this page is identical to that on the G–test page. You should read the section on "Chi-square vs. G–test", pick either chi-square or G–test, then stick with that choice for the rest of your life. Null hypothesis The null hypothesis is that the relative proportions of one variable are independent of the second variable; in other words, the proportions at one variable are the same for different values of the second variable. In the vaccination example, the null hypothesis is that the proportion of children given thigh injections who have severe reactions is equal to the proportion of children given arm injections who have severe reactions. How the test works The math of the chi-square test of independence is the same as for the chi-square test of goodness-of-fit, only the method of calculating the expected frequencies is different. For the goodness-of-fit test, you use a theoretical relationship to calculate the expected frequencies. 
For the test of independence, you use the observed frequencies to calculate the expected. For the vaccination example, there are $4758+8840+30+76=13704$ total children, and $30+76=106$ of them had reactions. The null hypothesis is therefore that $106/13704=0.7735\%$ of the children given injections in the thigh would have reactions, and $0.7735\%$ of children given injections in the arm would also have reactions. There are $4758+30=4788$ children given injections in the thigh, so you expect $0.007735\times 4788=37.0$ of the thigh children to have reactions, if the null hypothesis is true. You could do the same kind of calculation for each of the cells in this $2\times 2$ table of numbers. Once you have each of the four expected numbers, you could compare them to the observed numbers using the chi-square test, just like you did for the chi-square test of goodness-of-fit. The result is $\text{chi-square}=2.04$. To get the $P$ value, you also need the number of degrees of freedom. The degrees of freedom in a test of independence are equal to $(\text{number of rows}-1)\times (\text{number of columns}-1)$. Thus for a $2\times 2$ table, there are $(2-1)\times (2-1)=1$ degree of freedom; for a $4\times 3$ table, there are $(4-1)\times (3-1)=6$ degrees of freedom. For $\text{chi-square}=2.04$ with $1$ degree of freedom, the $P$ value is $0.15$, which is not significant; you cannot conclude that $3$-to-$6$-year-old children given DTaP vaccinations in the thigh have fewer reactions than those given injections in the arm. (Note that I'm just using the $3$-to-$6$ year olds as an example; Jackson et al. [2013] also analyzed a much larger number of children less than 3 and found significantly fewer reactions in children given DTaP in the thigh.) While in principle, the chi-square test of independence is the same as the test of goodness-of-fit, in practice, the calculations for the chi-square test of independence use shortcuts that don't require calculating the expected frequencies. Post-hoc tests When the chi-square test of a table larger than $2\times 2$ is significant (and sometimes when it isn't), it is desirable to investigate the data further. MacDonald and Gardner (2000) use simulated data to test several post-hoc tests for a test of independence, and they found that pairwise comparisons with Bonferroni corrections of the $P$ values work well. To illustrate this method, here is a study (Klein et al. 2011) of men who were randomly assigned to take selenium, vitamin E, both selenium and vitamin E, or placebo, and then followed up to see whether they developed prostate cancer: No cancer Prostate cancer Percent cancer Selenium 8177 575 6.6% Vitamin E 8117 620 7.1% Selenium and E 8147 555 6.4% Placebo 8167 529 6.1% The overall $4\times 2$ table has a chi-square value of $7.78$ with $3$ degrees of freedom, giving a $P$ value of $0.051$. This is not quite significant (by a tiny bit), but it's worthwhile to follow up to see if there's anything interesting. There are six possible pairwise comparisons, so you can do a $2\times 2$ chi-square test for each one and get the following $P$ values: P value Selenium vs. vitamin E 0.17 Selenium vs. both 0.61 Selenium vs. placebo 0.19 Vitamin E vs. both 0.06 Vitamin E vs. placebo 0.007 Both vs. placebo 0.42 Because there are six comparisons, the Bonferroni-adjusted $P$ value needed for significance is $0.05/6$, or $0.008$. The $P$ value for vitamin E vs.
the placebo is less than $0.008$, so you can say that there were significantly more cases of prostate cancer in men taking vitamin E than men taking the placebo. For this example, I tested all six possible pairwise comparisons. Klein et al. (2011) decided before doing the study that they would only look at five pairwise comparisons (all except selenium vs. vitamin E), so their Bonferroni-adjusted $P$ value would have been $0.05/5$, or $0.01$. If they had decided ahead of time to just compare each of the three treatments vs. the placebo, their Bonferroni-adjusted $P$ value would have been $0.05/3$, or $0.017$. The important thing is to decide before looking at the results how many comparisons to do, then adjust the $P$ value accordingly. If you don't decide ahead of time to limit yourself to particular pairwise comparisons, you need to adjust for the number of all possible pairs. Another kind of post-hoc comparison involves testing each value of one nominal variable vs. the sum of all others. The same principle applies: get the $P$ value for each comparison, then apply the Bonferroni correction. For example, Latta et al. (2012) collected birds in remnant riparian habitat (areas along rivers in California with mostly native vegetation) and restored riparian habitat (once degraded areas that have had native vegetation re-established). They observed the following numbers (lumping together the less common bird species as "Uncommon"): Remnant Restored Ruby-crowned kinglet 677 198 White-crowned sparrow 408 260 Lincoln's sparrow 270 187 Golden-crowned sparrow 300 89 Bushtit 198 91 Song Sparrow 150 50 Spotted towhee 137 32 Bewick's wren 106 48 Hermit thrush 119 24 Dark-eyed junco 34 39 Lesser goldfinch 57 15 Uncommon 457 125 The overall table yields a chi-square value of $149.8$ with $11$ degrees of freedom, which is highly significant ($P=2\times 10^{-26}$). That tells us there's a difference in the species composition between the remnant and restored habitat, but it would be interesting to see which species are a significantly higher proportion of the total in each habitat. To do that, do a $2\times 2$ table for each species vs. all others, like this: Remnant Restored Ruby-crowned kinglet 677 198 All others 2236 960 This gives the following $P$ values: P value Ruby-crowned kinglet 0.000017 White-crowned sparrow 5.2×10−11 Lincoln's sparrow 3.5×10−10 Golden-crowned sparrow 0.011 Bushtit 0.23 Song Sparrow 0.27 Spotted towhee 0.0051 Bewick's wren 0.44 Hermit thrush 0.0017 Dark-eyed junco 1.8×10−6 Lesser goldfinch 0.15 Uncommon 0.00006 Because there are $12$ comparisons, applying the Bonferroni correction means that a $P$ value has to be less than $0.05/12=0.0042$ to be significant at the $P<0.05$ level, so six of the $12$ species show a significant difference between the habitats. When there are more than two rows and more than two columns, you may want to do all possible pairwise comparisons of rows and all possible pairwise comparisons of columns; in that case, simply use the total number of pairwise comparisons in your Bonferroni correction of the $P$ value. There are also several techniques that test whether a particular cell in an $R\times C$ table deviates significantly from expected; see MacDonald and Gardner (2000) for details. Assumptions The chi-square test of independence, like other tests of independence, assumes that the individual observations are independent. Example 1 Bambach et al.
(2013) analyzed data on all bicycle accidents involving collisions with motor vehicles in New South Wales, Australia during 2001-2009. Their very extensive multi-variable analysis includes the following numbers, which I picked out both to use as an example of a $2\times 2$ table and to convince you to wear your bicycle helmet: Head injury Other injury % head injury Wearing helmet 372 4715 7.3% No helmet 267 1391 16.1% The results are $\text{chi-square}=112.7$, $1$ degree of freedom, $P=3\times 10^{-26}$, meaning that bicyclists who were not wearing a helmet have a higher proportion of head injuries. Example 2 Gardemann et al. (1998) surveyed genotypes at an insertion/deletion polymorphism of the apolipoprotein $B$ signal peptide in $2259$ men. The nominal variables are genotype (ins/ins, ins/del, del/del) and coronary artery disease (with or without disease). The data are: No disease Coronary artery disease % disease ins/ins 268 807 24.9% ins/del 199 759 20.8% del/del 42 184 18.6% The biological null hypothesis is that the apolipoprotein polymorphism doesn't affect the likelihood of getting coronary artery disease. The statistical null hypothesis is that the proportions of men with coronary artery disease are the same for each of the three genotypes. The result is $\text{chi-square}=7.26$, $2d.f.$, $P=0.027$. This indicates that you can reject the null hypothesis; the three genotypes have significantly different proportions of men with coronary artery disease. Graphing the results You should usually display the data used in a test of independence with a bar graph, with the values of one variable on the $X$-axis and the proportions of the other variable on the $Y$-axis. If the variable on the $Y$-axis only has two values, you only need to plot one of them. In the example below, there would be no point in plotting both the percentage of men with prostate cancer and the percentage without prostate cancer; once you know what percentage have cancer, you can figure out how many didn't have cancer. If the variable on the $Y$-axis has more than two values, you should plot all of them. Some people use pie charts for this, as illustrated by the data on bird landing sites from the Fisher's exact test page: But as much as I like pie, I think pie charts make it difficult to see small differences in the proportions, and difficult to show confidence intervals. In this situation, I prefer bar graphs: Similar tests There are several tests that use chi-square statistics. The one described here is formally known as Pearson's chi-square. It is by far the most common chi-square test, so it is usually just called the chi-square test. The chi-square test may be used both as a test of goodness-of-fit (comparing frequencies of one nominal variable to theoretical expectations) and as a test of independence (comparing frequencies of one nominal variable for different values of a second nominal variable). The underlying arithmetic of the test is the same; the only difference is the way you calculate the expected values. However, you use goodness-of-fit tests and tests of independence for quite different experimental designs and they test different null hypotheses, so I treat the chi-square test of goodness-of-fit and the chi-square test of independence as two distinct statistical tests. If the expected numbers in some classes are small, the chi-square test will give inaccurate results. In that case, you should use Fisher's exact test. 
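A hedged R sketch of the two examples above (my own; the spreadsheet and SAS program described below do the same thing). Note that for a $2\times 2$ table R applies the Yates continuity correction unless you turn it off:

# Hedged sketch: Example 2 (apolipoprotein genotypes vs. coronary artery disease)
# with base R's chisq.test().
cad <- matrix(c(268, 807,
                199, 759,
                 42, 184),
              nrow = 3, byrow = TRUE,
              dimnames = list(c("ins/ins", "ins/del", "del/del"),
                              c("no_disease", "disease")))
chisq.test(cad)
# X-squared = 7.26, df = 2, P = 0.027, as reported above

# For a 2x2 table such as Example 1 (helmets), add correct = FALSE so that R
# reports the uncorrected Pearson statistic used in the text:
helmets <- matrix(c(372, 4715,
                    267, 1391), nrow = 2, byrow = TRUE)
chisq.test(helmets, correct = FALSE)   # X-squared is about 112.7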
I recommend using the chi-square test only when the total sample size is greater than $1000$, and using Fisher's exact test for everything smaller than that. See the web page on small sample sizes for further discussion. If the samples are not independent, but instead are before-and-after observations on the same individuals, you should use McNemar's test. Chi-square vs. G–test The chi-square test gives approximately the same results as the G–test. Unlike the chi-square test, $G$-values are additive, which means they can be used for more elaborate statistical designs. G–tests are a subclass of likelihood ratio tests, a general category of tests that have many uses for testing the fit of data to mathematical models; the more elaborate versions of likelihood ratio tests don't have equivalent tests using the Pearson chi-square statistic. The G–test is therefore preferred by many, even for simpler designs. On the other hand, the chi-square test is more familiar to more people, and it's always a good idea to use statistics that your readers are familiar with when possible. You may want to look at the literature in your field and see which is more commonly used. How to do the test Spreadsheet I have set up a spreadsheet chiind.xls that performs this test for up to $10$ columns and $50$ rows. It is largely self-explanatory; you just enter your observed numbers, and the spreadsheet calculates the chi-square test statistic, the degrees of freedom, and the $P$ value. Web page There are many web pages that do chi-squared tests of independence, but most are limited to fairly small numbers of rows and columns. Here is a page that will do up to a 10×10 table. R Salvatore Mangiafico's R Companion has a sample R program for the chi-square test of independence. SAS Here is a SAS program that uses PROC FREQ for a chi-square test. It uses the apolipoprotein $B$ data from above.

DATA cad;
   INPUT genotype $ health $ count;
   DATALINES;
ins-ins  no_disease  268
ins-ins  disease     807
ins-del  no_disease  199
ins-del  disease     759
del-del  no_disease   42
del-del  disease     184
;
PROC FREQ DATA=cad;
   WEIGHT count / ZEROS;
   TABLES genotype*health / CHISQ;
RUN;

The output includes the following:

Statistics for Table of genotype by health

Statistic                     DF      Value      Prob
------------------------------------------------------
Chi-Square                     2     7.2594    0.0265
Likelihood Ratio Chi-Square    2     7.3008    0.0260
Mantel-Haenszel Chi-Square     1     7.0231    0.0080
Phi Coefficient                      0.0567
Contingency Coefficient              0.0566
Cramer's V                           0.0567

The "Chi-Square" on the first line is the one you want; in this case, $\text{chi-square}=7.2594$, $2d.f.$, $P=0.0265$. Power analysis If each nominal variable has just two values (a $2\times 2$ table), use the power analysis for Fisher's exact test. It will work even if the sample size you end up needing is too big for a Fisher's exact test. For a test with more than $2$ rows or columns, use G*Power to calculate the sample size needed for a test of independence. Under Test Family, choose chi-square tests, and under Statistical Test, choose Goodness-of-Fit Tests: Contingency Tables. Under Type of Power Analysis, choose A Priori: Compute Required Sample Size. You next need to calculate the effect size parameter $w$. You can do this in G*Power if you have just two columns; if you have more than two columns, use the chi-square spreadsheet chiind.xls. In either case, enter made-up proportions that look like what you hope to detect.
This made-up data should have proportions equal to what you expect to see, and the difference in proportions between different categories should be the minimum size that you hope to see. G*Power or the spreadsheet will give you the value of w, which you enter into the Effect Size w box in G*Power. Finally, enter your alpha (usually $0.05$), your power (often $0.8$ or $0.9$), and your degrees of freedom (for a test with $R$ rows and $C$ columns, remember that degrees of freedom is $(R-1)\times (C-1)$), then hit Calculate. This analysis assumes that your total sample will be divided equally among the groups; if it isn't, you'll need a larger sample size than the one you estimate. As an example, let's say you're looking for a relationship between bladder cancer and genotypes at a polymorphism in the catechol-O-methyltransferase gene in humans. In the population you're studying, you know that the genotype frequencies in people without bladder cancer are $0.36GG$, $0.48GA$, and $0.16AA$; you want to know how many people with bladder cancer you'll have to genotype to get a significant result if they have $6\%$ more $AA$ genotypes. Enter $0.36$, $0.48$, and $0.16$ in the first column of the spreadsheet, and $0.33$, $0.45$, and $0.22$ in the second column; the effect size ($w$) is $0.10838$. Enter this in the G*Power page, enter $0.05$ for alpha, $0.80$ for power, and $2$ for degrees of freedom. The result is a total sample size of $821$, so you'll need $411$ people with bladder cancer and $411$ people without bladder cancer.
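For those who prefer R to G*Power, the final step of this calculation can be sketched with the pwr package (the package choice is my assumption), taking the effect size $w=0.10838$ from the spreadsheet step above as given:

# Hedged sketch: sample size for the bladder-cancer example, using the effect
# size w computed above.
library(pwr)

pwr.chisq.test(w = 0.10838, df = 2, sig.level = 0.05, power = 0.80)
# N is about 821, the total sample size reported above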
Learning Objectives • To use the G–test of independence when you have two nominal variables and you want to see whether the proportions of one variable are different for different values of the other variable. Use it when the sample size is large. When to use it Use the G–test of independence when you have two nominal variables, each with two or more possible values. You want to know whether the proportions for one variable are different among values of the other variable. For example, Jackson et al. (2013) wanted to know whether it is better to give the diphtheria, tetanus and pertussis (DTaP) vaccine in either the thigh or the arm, so they collected data on severe reactions to this vaccine in children aged $3$ to $6$ years old. One nominal variable is severe reaction vs. no severe reaction; the other nominal variable is thigh vs. arm. No severe reaction Severe reaction Percent severe reaction Thigh 4758 30 0.63% Arm 8840 76 0.85% There is a higher proportion of severe reactions in children vaccinated in the arm; a G–test of independence will tell you whether a difference this big is likely to have occurred by chance. A data set like this is often called an "$R\times C$ table," where $R$ is the number of rows and $C$ is the number of columns. This is a $2\times 2$ table. If the results had been divided into "no reaction", "swelling," and "pain", it would have been a $2\times 3$ table, or a $3\times 2$ table; it doesn't matter which variable is the columns and which is the rows. It is also possible to do a G–test of independence with more than two nominal variables. For example, Jackson et al. (2013) also had data for children under $3$, so you could do an analysis of old vs. young, thigh vs. arm, and reaction vs. no reaction, all analyzed together. That experimental design doesn't occur very often in experimental biology and is rather complicated to analyze and interpret, so I don't cover it here (except for the special case of repeated $2\times 2$ tables, analyzed with the Cochran-Mantel-Haenszel test). Fisher's exact test is more accurate than the G–test of independence when the expected numbers are small, so I only recommend the G–test if your total sample size is greater than $1000$. See the web page on small sample sizes for further discussion of what it means to be "small". The G–test of independence is an alternative to the chi-square test of independence, and they will give approximately the same results. Most of the information on this page is identical to that on the chi-square page. You should read the section on "Chi-square vs. G–test", pick either chi-square or G–test, then stick with that choice for the rest of your life. Null hypothesis The null hypothesis is that the relative proportions of one variable are independent of the second variable; in other words, the proportions at one variable are the same for different values of the second variable. In the vaccination example, the null hypothesis is that the proportion of children given thigh injections who have severe reactions is equal to the proportion of children given arm injections who have severe reactions. How the test works The math of the G–test of independence is the same as for the G–test of goodness-of-fit, only the method of calculating the expected frequencies is different. For the goodness-of-fit test, you use a theoretical relationship to calculate the expected frequencies. For the test of independence, you use the observed frequencies to calculate the expected. 
For the vaccination example, there are $4758+8840+30+76=13704$ total children, and $30+76=106$ of them had reactions. The null hypothesis is therefore that $106/13704=0.7735\%$ of the children given injections in the thigh would have reactions, and $0.7735\%$ of children given injections in the arm would also have reactions. There are $4758+30=4788$ children given injections in the thigh, so you expect $0.007735\times 4788=37.0$ of the thigh children to have reactions, if the null hypothesis is true. You could do the same kind of calculation for each of the cells in this $2\times 2$ table of numbers. Once you have each of the four expected numbers, you could compare them to the observed numbers using the G–test, just like you did for the G–test of goodness-of-fit. The result is $G=2.14$. To get the $P$ value, you also need the number of degrees of freedom. The degrees of freedom in a test of independence are equal to $(\text{number of rows}-1)\times (\text{number of columns}-1)$. Thus for a $2\times 2$ table, there are $(2-1)\times (2-1)=1$ degree of freedom; for a $4\times 3$ table, there are $(4-1)\times (3-1)=6$ degrees of freedom. For $G=2.14$ with $1$ degree of freedom, the $P$ value is $0.14$, which is not significant; you cannot conclude that $3$-to-$6$-year-old children given DTaP vaccinations in the thigh have fewer reactions than those given injections in the arm. (Note that I'm just using the $3$-to-$6$-year olds as an example; Jackson et al. [2013] also analyzed a much larger number of children less than $3$ and found significantly fewer reactions in children given DTaP in the thigh.) While in principle, the G–test of independence is the same as the test of goodness-of-fit, in practice, the calculations for the G–test of independence use shortcuts that don't require calculating the expected frequencies. Post-hoc tests When the G–test of a table larger than $2\times 2$ is significant (and sometimes when it isn't significant), it is desirable to investigate the data further. MacDonald and Gardner (2000) use simulated data to test several post-hoc tests for a test of independence, and they found that pairwise comparisons with Bonferroni corrections of the P values work well. To illustrate this method, here is a study (Klein et al. 2011) of men who were randomly assigned to take selenium, vitamin E, both selenium and vitamin E, or placebo, and then followed up to see whether they developed prostate cancer: No cancer Prostate cancer Percent cancer Selenium 8177 575 6.6% Vitamin E 8117 620 7.1% Selenium and E 8147 555 6.4% Placebo 8167 529 6.1% The overall $4\times 2$ table has a $G$-value of $7.73$ with $3$ degrees of freedom, giving a $P$ value of $0.052$. This is not quite significant (by a tiny bit), but it's worthwhile to follow up to see if there's anything interesting. There are six possible pairwise comparisons, so you can do a $2\times 2$ G–test for each one and get the following $P$ values: P value Selenium vs. vitamin E 0.17 Selenium vs. both 0.61 Selenium vs. placebo 0.19 Vitamin E vs. both 0.06 Vitamin E vs. placebo 0.007 Both vs. placebo 0.42 Because there are six comparisons, the Bonferroni-adjusted $P$ value needed for significance is $0.05/6$, or $0.008$. The $P$ value for vitamin E vs. the placebo is less than $0.008$, so you can say that there were significantly more cases of prostate cancer in men taking vitamin E than men taking the placebo. For this example, I tested all six possible pairwise comparisons. Klein et al.
(2011) decided before doing the study that they would only look at five pairwise comparisons (all except selenium vs. vitamin E), so their Bonferroni-adjusted $P$ value would have been $0.05/5$, or $0.01$. If they had decided ahead of time to just compare each of the three treatments vs. the placebo, their Bonferroni-adjusted $P$ value would have been $0.05/3$, or $0.017$. The important thing is to decide before looking at the results how many comparisons to do, then adjust the $P$ value accordingly. If you don't decide ahead of time to limit yourself to particular pairwise comparisons, you need to adjust for the number of all possible pairs. Another kind of post-hoc comparison involves testing each value of one nominal variable vs. the sum of all others. The same principle applies: get the $P$ value for each comparison, then apply the Bonferroni correction. For example, Latta et al. (2012) collected birds in remnant riparian habitat (areas along rivers in California with mostly native vegetation) and restored riparian habitat (once degraded areas that have had native vegetation re-established). They observed the following numbers (lumping together the less common bird species as "Uncommon"): Remnant Restored Ruby-crowned kinglet 677 198 White-crowned sparrow 408 260 Lincoln's sparrow 270 187 Golden-crowned sparrow 300 89 Bushtit 198 91 Song Sparrow 150 50 Spotted towhee 137 32 Bewick's wren 106 48 Hermit thrush 119 24 Dark-eyed junco 34 39 Lesser goldfinch 57 15 Uncommon 457 125 The overall table yields a $G$-value of $146.5$ with $11$ degrees of freedom, which is highly significant ($P=7\times 10^{-26}$). That tells us there's a difference in the species composition between the remnant and restored habitat, but it would be interesting to see which species are a significantly higher proportion of the total in each habitat. To do that, do a $2\times 2$ table for each species vs. all others, like this: Remnant Restored Ruby-crowned kinglet 677 198 All others 2236 960 This gives the following $P$ values: P value Ruby-crowned kinglet 0.000012 White-crowned sparrow 1.5×10−10 Lincoln's sparrow 1.2×10−9 Golden-crowned sparrow 0.009 Bushtit 0.24 Song Sparrow 0.26 Spotted towhee 0.0036 Bewick's wren 0.45 Hermit thrush 0.0009 Dark-eyed junco 1.2×10−9 Lesser goldfinch 0.14 Uncommon 0.00004 Because there are $12$ comparisons, applying the Bonferroni correction means that a $P$ value has to be less than $0.05/12=0.0042$ to be significant at the $P<0.05$ level, so seven of the $12$ species show a significant difference between the habitats. When there are more than two rows and more than two columns, you may want to do all possible pairwise comparisons of rows and all possible pairwise comparisons of columns; in that case, simply use the total number of pairwise comparisons in your Bonferroni correction of the $P$ value. There are also several techniques that test whether a particular cell in an $R\times C$ table deviates significantly from expected; see MacDonald and Gardner (2000) for details. Assumption The G–test of independence, like other tests of independence, assumes that the individual observations are independent. Examples Example 1 Bambach et al. (2013) analyzed data on all bicycle accidents involving collisions with motor vehicles in New South Wales, Australia during 2001-2009.
Their very extensive multi-variable analysis includes the following numbers, which I picked out both to use as an example of a $2\times 2$ table and to convince you to wear your bicycle helmet: Head injury Other injury Percent head injury Wearing helmet 372 4715 7.3% No helmet 267 1391 16.1% The results are $G=101.5$, $1$ degree of freedom, $P=7\times 10^{-24}$, meaning that bicyclists who were not wearing a helmet have a higher proportion of head injuries. Example 2 Gardemann et al. (1998) surveyed genotypes at an insertion/deletion polymorphism of the apolipoprotein $B$ signal peptide in $2259$ men. The nominal variables are genotype (ins/ins, ins/del, del/del) and coronary artery disease (with or without disease). The data are: No disease Coronary artery disease Percent disease ins/ins 268 807 24.9% ins/del 199 759 20.8% del/del 42 184 18.6% The biological null hypothesis is that the apolipoprotein polymorphism doesn't affect the likelihood of getting coronary artery disease. The statistical null hypothesis is that the proportions of men with coronary artery disease are the same for each of the three genotypes. The result of the G–test of independence is $G=7.30$, $2d.f.$, $P=0.026$. This indicates that you can reject the null hypothesis; the three genotypes have significantly different proportions of men with coronary artery disease. Graphing the results You should usually display the data used in a test of independence with a bar graph, with the values of one variable on the $X$-axis and the proportions of the other variable on the $Y$-axis. If the variable on the $Y$-axis only has two values, you only need to plot one of them. In the example below, there would be no point in plotting both the percentage of men with prostate cancer and the percentage without prostate cancer; once you know what percentage have cancer, you can figure out how many didn't have cancer. If the variable on the $Y$-axis has more than two values, you should plot all of them. Some people use pie charts for this, as illustrated by the data on bird landing sites from the Fisher's exact test page: But as much as I like pie, I think pie charts make it difficult to see small differences in the proportions, and difficult to show confidence intervals. In this situation, I prefer bar graphs: Similar tests You can use the G–test both as a test of goodness-of-fit (comparing frequencies of one nominal variable to theoretical expectations) and as a test of independence (comparing frequencies of one nominal variable for different values of a second nominal variable). The underlying arithmetic of the test is the same; the only difference is the way you calculate the expected values. However, you use goodness-of-fit tests and tests of independence for quite different experimental designs and they test different null hypotheses, so I treat the G–test of goodness-of-fit and the G–test of independence as two distinct statistical tests. If the expected numbers in some classes are small, the G–test will give inaccurate results. In that case, you should use Fisher's exact test. I recommend using the G–test only when the total sample size is greater than $1000$, and using Fisher's exact test for everything smaller than that. See the web page on small sample sizes for further discussion. If the samples are not independent, but instead are before-and-after observations on the same individuals, you should use McNemar's test. Chi-square vs. G–test The chi-square test gives approximately the same results as the G–test.
Unlike the chi-square test, $G$-values are additive, which means they can be used for more elaborate statistical designs. G–tests are a subclass of likelihood ratio tests, a general category of tests that have many uses for testing the fit of data to mathematical models; the more elaborate versions of likelihood ratio tests don't have equivalent tests using the Pearson chi-square statistic. The G–test is therefore preferred by many, even for simpler designs. On the other hand, the chi-square test is more familiar to more people, and it's always a good idea to use statistics that your readers are familiar with when possible. You may want to look at the literature in your field and see which is more commonly used. How to do the test Spreadsheet I have set up an Excel spreadsheet gtestind.xls that performs this test for up to $10$ columns and $50$ rows. It is largely self-explanatory; you just enter your observed numbers, and the spreadsheet calculates the G–test statistic, the degrees of freedom, and the $P$ value. Web pages I am not aware of any web pages that will do G–tests of independence. R Salvatore Mangiafico's R Companion has a sample R program for the G–test of independence. SAS Here is a SAS program that uses PROC FREQ for a G–test. It uses the apolipoprotein $B$ data from above.

DATA cad;
   INPUT genotype $ health $ count;
   DATALINES;
ins-ins  no_disease  268
ins-ins  disease     807
ins-del  no_disease  199
ins-del  disease     759
del-del  no_disease   42
del-del  disease     184
;
PROC FREQ DATA=cad;
   WEIGHT count / ZEROS;
   TABLES genotype*health / CHISQ;
RUN;

The output includes the following:

Statistics for Table of genotype by health

Statistic                     DF      Value      Prob
------------------------------------------------------
Chi-Square                     2     7.2594    0.0265
Likelihood Ratio Chi-Square    2     7.3008    0.0260
Mantel-Haenszel Chi-Square     1     7.0231    0.0080
Phi Coefficient                      0.0567
Contingency Coefficient              0.0566
Cramer's V                           0.0567

The "Likelihood Ratio Chi-Square" is what SAS calls the G–test; in this case, $G=7.3008$, $2d.f.$, $P=0.0260$. Power analysis If each nominal variable has just two values (a $2\times 2$ table), use the power analysis for Fisher's exact test. It will work even if the sample size you end up needing is too big for a Fisher's exact test. If either nominal variable has more than two values, use the power analysis for chi-squared tests of independence. The results will be close enough to a true power analysis for a G–test.
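Base R has no built-in G–test of independence, but the calculation is short enough to sketch by hand (the helper name g_test_independence below is just for this illustration):

# Hedged sketch: G-test of independence, with expected counts from the row and
# column totals and G = 2 * sum(O * ln(O/E)).
g_test_independence <- function(tab) {
  expected <- outer(rowSums(tab), colSums(tab)) / sum(tab)
  G  <- 2 * sum(tab * log(tab / expected))
  df <- (nrow(tab) - 1) * (ncol(tab) - 1)
  c(G = G, df = df, P = pchisq(G, df, lower.tail = FALSE))
}

cad <- matrix(c(268, 807,
                199, 759,
                 42, 184),
              nrow = 3, byrow = TRUE,
              dimnames = list(c("ins/ins", "ins/del", "del/del"),
                              c("no_disease", "disease")))
g_test_independence(cad)
# G = 7.30, df = 2, P = 0.026, i.e. the "Likelihood Ratio Chi-Square" line in
# the SAS output above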
Learning Objectives • Learn to use the Fisher's exact test of independence when you have two nominal variables and you want to see whether the proportions of one variable are different depending on the value of the other variable. Use it when the sample size is small. When to use it Use Fisher's exact test when you have two nominal variables. You want to know whether the proportions for one variable are different among values of the other variable. For example, van Nood et al. (2013) studied patients with Clostridium difficile infections, which cause persistent diarrhea. One nominal variable was the treatment: some patients were given the antibiotic vancomycin, and some patients were given a fecal transplant. The other nominal variable was outcome: each patient was either cured or not cured. The percentage of people who received one fecal transplant and were cured ($13$ out of $16$, or $81\%$) is higher than the percentage of people who received vancomycin and were cured ($4$ out of $13$, or $31\%$), which seems promising, but the sample sizes seem kind of small. Fisher's exact test will tell you whether this difference between $81$ and $31\%$ is statistically significant. A data set like this is often called an "$R\times C$ table," where $R$ is the number of rows and $C$ is the number of columns. The fecal-transplant vs. vancomycin data I'm using as an example is a $2\times 2$ table. van Nood et al. (2013) actually had a third treatment, $13$ people given vancomycin plus a bowel lavage, making the total data set a $2\times 3$ table (or a $3\times 2$ table; it doesn't matter which variable you call the rows and which the columns). The most common use of Fisher's exact test is for $2\times 2$ tables, so that's mostly what I'll describe here. Fisher's exact test is more accurate than the chi-square test or G–test of independence when the expected numbers are small. I recommend you use Fisher's exact test when the total sample size is less than $1000$, and use the chi-square or G–test for larger sample sizes. See the web page on small sample sizes for further discussion of what it means to be "small". Null hypothesis The null hypothesis is that the relative proportions of one variable are independent of the second variable; in other words, the proportions at one variable are the same for different values of the second variable. In the C. difficile example, the null hypothesis is that the probability of getting cured is the same whether you receive a fecal transplant or vancomycin. How the test works Unlike most statistical tests, Fisher's exact test does not use a mathematical function that estimates the probability of a value of a test statistic; instead, you calculate the probability of getting the observed data, and all data sets with more extreme deviations, under the null hypothesis that the proportions are the same. For the C. difficile experiment, there are $3$ sick and $13$ cured fecal-transplant patients, and $9$ sick and $4$ cured vancomycin patients. 
Given that there are $16$ total fecal-transplant patients, $13$ total vancomycin patients, and $12$ total sick patients, you can use the "hypergeometric distribution" (please don't ask me to explain it) to calculate the probability of getting these numbers: fecal vancomycin sick 3 9 cured 13 4 $P$ of these exact numbers: $0.00772$ Next you calculate the probability of more extreme ways of distributing the $12$ sick people: fecal vancomycin sick 2 10 cured 14 3 $P$ of these exact numbers: $0.000661$ fecal vancomycin sick 1 11 cured 15 2 $P$ of these exact numbers: $0.0000240$ fecal vancomycin sick 0 12 cured 16 1 $P$ of these exact numbers: $0.000000251$ To calculate the probability of $3$, $2$, $1$, or $0$ sick people in the fecal-transplant group, you add the four probabilities together to get $P=0.00840$. This is the one-tailed $P$ value, which is hardly ever what you want. In our example experiment, you would use a one-tailed test only if you decided, before doing the experiment, that you were only interested in a result that had fecal transplants being better than vancomycin, not if fecal transplants were worse; in other words, you decided ahead of time that your null hypothesis was that the proportion of sick fecal transplant people was the same as, or greater than, sick vancomycin people. Ruxton and Neuhauser (2010) surveyed articles in the journal Behavioral Ecology and Sociobiology and found several that reported the results of one-tailed Fisher's exact tests, even though two-tailed would have been more appropriate. Apparently some statistics textbooks and programs perpetuate confusion about one-tailed vs. two-tailed Fisher's tests. You should almost always use a two-tailed test, unless you have a very good reason. For the usual two-tailed test, you also calculate the probability of getting deviations as extreme as the observed, but in the opposite direction. This raises the issue of how to measure "extremeness." There are several different techniques, but the most common is to add together the probabilities of all combinations that have lower probabilities than that of the observed data. Martín Andrés and Herranz Tejedor (1995) did some computer simulations that show that this is the best technique, and it's the technique used by SAS and most of the web pages I've seen. For our fecal example, the extreme deviations in the opposite direction are those with $P<0.00772$, which are the tables with $0$ or $1$ sick vancomycin people. These tables have $P=0.000035$ and $P=0.00109$, respectively. Adding these to the one-tailed $P$ value ($P=0.00840$) gives you the two-tailed $P$ value, $P=0.00953$. Post-hoc tests When analyzing a table with more than two rows or columns, a significant result will tell you that there is something interesting going on, but you will probably want to test the data in more detail. For example, Fredericks (2012) wanted to know whether checking termite monitoring stations frequently would scare termites away and make it harder to detect termites. He checked the stations (small bits of wood in plastic tubes, placed in the ground near termite colonies) either every day, every week, every month, or just once at the end of the three-month study, and recorded how many had termite damage by the end of the study: Termite damage No termites Percent termite damage Daily 1 24 4% Weekly 5 20 20% Monthly 14 11 56% Quarterly 11 14 44% The overall $P$ value for this is $P=0.00012$, so it is highly significant; the frequency of disturbance is affecting the presence of termites.
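These sums can be checked with a short R sketch of my own: dhyper() gives the probability of each individual table, and fisher.test() adds up the appropriate tables for the one- and two-tailed P values.

# Hedged sketch: the C. difficile calculation above, in base R.

# Probability of exactly 3 sick patients among the 16 fecal-transplant patients,
# given 12 sick patients out of 29 in total:
dhyper(3, m = 16, n = 13, k = 12)              # 0.00772, as in the text

transplant <- matrix(c( 3,  9,                 # sick:  fecal, vancomycin
                       13,  4),                # cured: fecal, vancomycin
                     nrow = 2, byrow = TRUE)
fisher.test(transplant, alternative = "less") # one-tailed,  P = 0.0084
fisher.test(transplant)                       # two-tailed,  P = 0.0095

# The overall termite-monitoring table works the same way:
termites <- matrix(c( 1, 24,
                      5, 20,
                     14, 11,
                     11, 14),
                   nrow = 4, byrow = TRUE,
                   dimnames = list(c("daily", "weekly", "monthly", "quarterly"),
                                   c("damage", "no_damage")))
fisher.test(termites)                         # P is about 0.00012, as reported above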
That's nice to know, but you'd probably want to ask additional questions, such as whether the difference between daily and weekly was significant, or the difference between weekly and monthly. You could do a $2\times 2$ Fisher's exact test for each of these pairwise comparisons, but there are $6$ possible pairs, so you need to correct for the multiple comparisons. One way to do this is with a modification of the Bonferroni-corrected pairwise technique suggested by MacDonald and Gardner (2000), substituting Fisher's exact test for the chi-square test they used. You do a Fisher's exact test on each of the $6$ possible pairwise comparisons (daily vs. weekly, daily vs. monthly, etc.), then apply the Bonferroni correction for multiple tests. With $6$ pairwise comparisons, the $P$ value must be less than $0.05/6$, or $0.008$, to be significant at the $P<0.05$ level. Two comparisons (daily vs. monthly and daily vs. quarterly) are therefore significant P value Daily vs. weekly 0.189 Daily vs. monthly 0.00010 Daily vs. quarterly 0.0019 Weekly vs. monthly 0.019 Weekly vs. quarterly 0.128 Monthly vs. quarterly 0.57 You could have decided, before doing the experiment, that testing all possible pairs would make it too hard to find a significant difference, so instead you would just test each treatment vs. quarterly. This would mean there were only $3$ possible pairs, so each pairwise $P$ value would have to be less than $0.05/3$, or $0.017$, to be significant. That would give you more power, but it would also mean that you couldn't change your mind after you saw the data and decide to compare daily vs. monthly. Assumptions Independence Fisher's exact test, like other tests of independence, assumes that the individual observations are independent. Fixed totals Unlike other tests of independence, Fisher's exact test assumes that the row and column totals are fixed, or "conditioned." An example would be putting $12$ female hermit crabs and $9$ male hermit crabs in an aquarium with $7$ red snail shells and $14$ blue snail shells, then counting how many crabs of each sex chose each color (you know that each hermit crab will pick one shell to live in). The total number of female crabs is fixed at $12$, the total number of male crabs is fixed at $9$, the total number of red shells is fixed at $7$, and the total number of blue shells is fixed at $14$. You know, before doing the experiment, what these totals will be; the only thing you don't know is how many of each sex-color combination there are. There are very few biological experiments where both the row and column totals are conditioned. In the much more common design, one or two of the row or column totals are free to vary, or "unconditioned." For example, in our C. difficile experiment above, the numbers of people given each treatment are fixed ($16$ given a fecal transplant, $13$ given vancomycin), but the total number of people who are cured could have been anything from $0$ to $29$. In the moray eel experiment below, both the total number of each species of eel, and the total number of eels in each habitat, are unconditioned. When one or both of the row or column totals are unconditioned, the Fisher's exact test is not, strictly speaking, exact. Instead, it is somewhat conservative, meaning that if the null hypothesis is true, you will get a significant ($P<0.05$) $P$ value less than $5\%$ of the time. This makes it a little less powerful (harder to detect a real difference from the null, when there is one). 
Statisticians continue to argue about alternatives to Fisher's exact test, but the improvements seem pretty small for reasonable sample sizes, with the considerable cost of explaining to your readers why you are using an obscure statistical test instead of the familiar Fisher's exact test. I think most biologists, if they saw you get a significant result using Barnard's test, or Boschloo's test, or Santner and Snell's test, or Suissa and Shuster's test, or any of the many other alternatives, would quickly run your numbers through Fisher's exact test. If your data weren't significant with Fisher's but were significant with your fancy alternative test, they would suspect that you fished around until you found a test that gave you the result you wanted, which would be highly evil. Even though you may have really decided on the obscure test ahead of time, you don't want cynical people to think you're evil, so stick with Fisher's exact test. Examples The eastern chipmunk trills when pursued by a predator, possibly to warn other chipmunks. Burke da Silva et al. (2002) released chipmunks either $10$ or $100$ meters from their home burrow, then chased them (to simulate predator pursuit). Out of $24$ female chipmunks released $10\; m$ from their burrow, $16$ trilled and $8$ did not trill. When released 100 m from their burrow, only 3 female chipmunks trilled, while 18 did not trill. The two nominal variables are thus distance from the home burrow (because there are only two values, distance is a nominal variable in this experiment) and trill vs. no trill. Applying Fisher's exact test, the proportion of chipmunks trilling is significantly higher ($P=0.0007$) when they are closer to their burrow. McDonald and Kreitman (1991) sequenced the alcohol dehydrogenase gene in several individuals of three species of Drosophila. Varying sites were classified as synonymous (the nucleotide variation does not change an amino acid) or amino acid replacements, and they were also classified as polymorphic (varying within a species) or fixed differences between species. The two nominal variables are thus substitution type (synonymous or replacement) and variation type (polymorphic or fixed). In the absence of natural selection, the ratio of synonymous to replacement sites should be the same for polymorphisms and fixed differences. There were $43$ synonymous polymorphisms, $2$ replacement polymorphisms, $17$ synonymous fixed differences, and $7$ replacement fixed differences. Synonymous Replacement polymorphisms 43 2 fixed 17 7 The result is $P=0.0067$, indicating that the null hypothesis can be rejected; there is a significant difference in synonymous/replacement ratio between polymorphisms and fixed differences. (Note that we used a G–test of independence in the original McDonald and Kreitman [1991] paper, which is a little embarrassing in retrospect, since I'm now telling you to use Fisher's exact test for such small sample sizes; fortunately, the $P$ value we got then, $P=0.006$, is almost the same as with the more appropriate Fisher's test.) Descamps et al. (2009) tagged 50 king penguins (Aptenodytes patagonicus) in each of three nesting areas (lower, middle, and upper) on Possession Island in the Crozet Archipelago, then counted the number that were still alive a year later, with these results: Alive Dead Lower nesting area 43 7 Middle nesting area 44 6 Upper nesting area 49 1 Seven penguins had died in the lower area, six had died in the middle area, and only one had died in the upper area. Descamps et al. 
analyzed the data with a G–test of independence, yielding a significant ($P=0.048$) difference in survival among the areas; however, analyzing the data with Fisher's exact test yields a non-significant ($P=0.090$) result.

Young and Winn (2003) counted sightings of the spotted moray eel, Gymnothorax moringa, and the purplemouth moray eel, G. vicinus, in a $150\; m$ by $250\; m$ area of reef in Belize. They identified each eel they saw, and classified the locations of the sightings into three types: those in grass beds, those in sand and rubble, and those within one meter of the border between grass and sand/rubble. The numbers of sightings are shown in the table, along with the percentage of sightings that were G. vicinus:

         G. moringa   G. vicinus   Percent G. vicinus
Grass        127          116            47.7%
Sand          99           67            40.4%
Border       264          161            37.9%

The nominal variables are the species of eel (G. moringa or G. vicinus) and the habitat type (grass, sand, or border). The difference in habitat use between the species is significant ($P=0.044$).

Custer and Galli (2002) flew a light plane to follow great blue herons (Ardea herodias) and great egrets (Casmerodius albus) from their resting site to their first feeding site at Peltier Lake, Minnesota, and recorded the type of substrate each bird landed on.

             Heron   Egret
Vegetation     15      8
Shoreline      20      5
Water          14      7
Structures      6      1

Fisher's exact test yields $P=0.54$, so there is no evidence that the two species of birds use the substrates in different proportions.

Graphing the results

You plot the results of Fisher's exact test the same way you would any other test of independence.

Similar tests

You can use the chi-square test of independence or the G–test of independence on the same kind of data as Fisher's exact test. When some of the expected values are small, Fisher's exact test is more accurate than the chi-square or G–test of independence. If all of the expected values are very large, Fisher's exact test becomes computationally impractical; fortunately, the chi-square or G–test will then give an accurate result. The usual rule of thumb is that Fisher's exact test is only necessary when one or more expected values are less than $5$, but this is a remnant of the days when doing the calculations for Fisher's exact test was really hard. I recommend using Fisher's exact test for any experiment with a total sample size less than $1000$. See the web page on small sample sizes for further discussion of the boundary between "small" and "large."

You should use McNemar's test when the two samples are not independent, but instead are two sets of pairs of observations. Often, each pair of observations is made on a single individual, such as individuals before and after a treatment or individuals diagnosed using two different techniques. For example, Dias et al. (2014) surveyed $62$ men who were circumcised as adults. Before circumcision, $6$ of the $62$ men had erectile dysfunction; after circumcision, $16$ men had erectile dysfunction. This may look like data suitable for Fisher's exact test (two nominal variables, erectile dysfunction vs. normal function, and before vs. after circumcision), and if analyzed that way, the result would be $P=0.033$. However, we know more than just how many men had erectile dysfunction; we know that $10$ men switched from normal function to dysfunction after circumcision, and $0$ men switched from dysfunction to normal. The statistical null hypothesis of McNemar's test is that the number of switchers in one direction is equal to the number of switchers in the opposite direction.
McNemar's test compares the observed data to the null expectation using a goodness-of-fit test. The numbers are almost always small enough that you can make this comparison using the exact test of goodness-of-fit. For the example data of $10$ switchers in one direction and $0$ in the other direction, McNemar's test gives $P=0.002$; this is a much smaller $P$ value than the result from Fisher's exact test. McNemar's test doesn't always give a smaller $P$ value than Fisher's. If all $6$ men in the Dias et al. (2014) study with erectile dysfunction before circumcision had switched to normal function, and $16$ men had switched from normal function before circumcision to erectile dysfunction, the $P$ value from McNemar's test would have been $0.052$.

How to do the test

Spreadsheet

I've written a spreadsheet to perform Fisher's exact test for $2\times 2$ tables, fishers.xls. It handles samples with the smaller column total less than $500$.

Web pages

Several people have created web pages that perform Fisher's exact test for $2\times 2$ tables. I like Øyvind Langsrud's web page for Fisher's exact test. Just enter the numbers into the cells on the web page, hit the Compute button, and get your answer. You should almost always use the "$2$-tail $P$ value" given by the web page. There is also a web page for Fisher's exact test for up to 6×6 tables. It will only take data with fewer than $100$ observations in each cell.

R

Salvatore Mangiafico's $R$ Companion has a sample R program for Fisher's exact test and another for McNemar's test.

SAS

Here is a SAS program that uses PROC FREQ for a Fisher's exact test. It uses the chipmunk data from above.

DATA chipmunk;
   INPUT distance $ sound $ count;
   DATALINES;
10m   trill    16
10m   notrill   8
100m  trill     3
100m  notrill  18
;
PROC FREQ DATA=chipmunk;
   WEIGHT count / ZEROS;
   TABLES distance*sound / FISHER;
RUN;

The output includes the following:

            Fisher's Exact Test
----------------------------------
Cell (1,1) Frequency (F)        18
Left-sided Pr <= F          1.0000
Right-sided Pr >= F      4.321E-04

Table Probability (P)    4.012E-04
Two-sided Pr <= P        6.862E-04

The "Two-sided Pr <= P" is the two-tailed $P$ value that you want.

The output looks a little different when you have more than two rows or columns. Here is an example using the data on heron and egret substrate use from above:

DATA birds;
   INPUT bird $ substrate $ count;
   DATALINES;
heron  vegetation  15
heron  shoreline   20
heron  water       14
heron  structures   6
egret  vegetation   8
egret  shoreline    5
egret  water        7
egret  structures   1
;
PROC FREQ DATA=birds;
   WEIGHT count / ZEROS;
   TABLES bird*substrate / FISHER;
RUN;

The results of the exact test are labeled "Pr <= P"; in this case, $P=0.5491$.

       Fisher's Exact Test
----------------------------------
Table Probability (P)       0.0073
Pr <= P                     0.5491

Power analysis

The G*Power program will calculate the sample size needed for a $2\times 2$ test of independence, whether the sample size ends up being small enough for a Fisher's exact test or so large that you must use a chi-square or G–test. Choose "Exact" from the "Test family" menu and "Proportions: Inequality, two independent groups (Fisher's exact test)" from the "Statistical test" menu. Enter the proportions you hope to see, your alpha (usually $0.05$) and your power (usually $0.80$ or $0.90$). If you plan to have more observations in one group than in the other, you can make the "Allocation ratio" different from 1.
As an example, let's say you're looking for a relationship between bladder cancer and genotypes at a polymorphism in the catechol-O-methyltransferase gene in humans. Based on previous research, you're going to pool together the $GG$ and $GA$ genotypes and compare these $GG+GA$ and $AA$ genotypes. In the population you're studying, you know that the genotype frequencies in people without bladder cancer are $0.84 GG+GA$ and $0.16AA$; you want to know how many people with bladder cancer you'll have to genotype to get a significant result if they have $6\%$ more $AA$ genotypes. It's easier to find controls than people with bladder cancer, so you're planning to have twice as many people without bladder cancer. On the G*Power page, enter $0.16$ for proportion $p1$, $0.22$ for proportion $p2$, $0.05$ for alpha, $0.80$ for power, and $0.5$ for allocation ratio. The result is a total sample size of $1523$, so you'll need $508$ people with bladder cancer and $1016$ people without bladder cancer. Note that the sample size will be different if your effect size is a $6\%$ lower frequency of $AA$ in bladder cancer patients, instead of $6\%$ higher. If you don't have a strong idea about which direction of difference you're going to see, you should do the power analysis both ways and use the larger sample size estimate. If you have more than two rows or columns, use the power analysis for chi-square tests of independence. The results should be close enough to correct, even if the sample size ends up being small enough for Fisher's exact test.
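As a cross-check of the McNemar example from the "Similar tests" section above, the paired circumcision data can be analyzed in a few lines of R. This is a sketch under my reading of the numbers in the text ($6$ men with dysfunction both before and after, $10$ who developed it, $0$ who recovered, and $46$ unaffected both times); the exact binomial test on the discordant pairs reproduces the $P=0.002$ reported above, while mcnemar.test() uses a chi-square approximation instead.

# Paired before/after counts (rows = before, columns = after circumcision).
paired <- matrix(c( 6,  0,
                   10, 46),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(before = c("dysfunction", "normal"),
                                 after  = c("dysfunction", "normal")))
binom.test(0, 10, p = 0.5)$p.value   # exact test on the discordant pairs, ~0.002
mcnemar.test(paired)                 # chi-square approximation with continuity correction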
Learning Objectives

• Chi-square and G–tests are somewhat inaccurate when expected numbers are small, and you should use exact tests instead. This chapter suggests using a much higher threshold for "small" than the traditional rule of thumb.

The problem with small numbers

Chi-square and G–tests of goodness-of-fit or independence give inaccurate results when the expected numbers are small. For example, let's say you want to know whether right-handed people tear the anterior cruciate ligament (ACL) in their right knee more or less often than the left ACL. You find $11$ people with ACL tears, so your expected numbers (if your null hypothesis is true) are $5.5$ right ACL tears and $5.5$ left ACL tears. Let's say you actually observe $9$ right ACL tears and $2$ left ACL tears. If you compare the observed numbers to the expected using the exact test of goodness-of-fit, you get a $P$ value of $0.065$; the chi-square test of goodness-of-fit gives a $P$ value of $0.035$, and the G–test of goodness-of-fit gives a $P$ value of $0.028$. If you analyzed the data using the chi-square or G–test, you would conclude that people tear their right ACL significantly more than their left ACL; if you used the exact binomial test, which is more accurate, the evidence would not be quite strong enough to reject the null hypothesis.

When the sample sizes are too small, you should use exact tests instead of the chi-square test or G–test. However, how small is "too small"? The conventional rule of thumb is that if all of the expected numbers are greater than $5$, it's acceptable to use the chi-square or G–test; if an expected number is less than $5$, you should use an alternative, such as an exact test of goodness-of-fit or a Fisher's exact test of independence. This rule of thumb is left over from the olden days, when the calculations necessary for an exact test were exceedingly tedious and error-prone. Now that we have these new-fangled gadgets called computers, it's time to retire the "no expected values less than $5$" rule. But what new rule should you use?

Here is a graph of relative $P$ values versus sample size. For each sample size, I found a pair of numbers that would give a $P$ value for the exact test of goodness-of-fit (null hypothesis, $1:1$ ratio) that was as close as possible to $P=0.05$ without going under it. For example, with a sample size of $11$, the numbers $9$ and $2$ give a $P$ value of $0.065$. I did the chi-square test on these numbers, and I divided the chi-square $P$ value by the exact binomial $P$ value. For $9$ and $2$, the chi-square $P$ value is $0.035$, so the ratio is $0.035/0.065 = 0.54$. In other words, the chi-square test gives a $P$ value that is only $54\%$ as large as the more accurate exact test. The G–test gives almost the same results as the chi-square test. Plotting these relative $P$ values vs. sample size (chi-square in black, G–test in green), it is clear that the chi-square and G–tests give $P$ values that are too low, even for sample sizes in the hundreds. This means that if you use a chi-square or G–test of goodness-of-fit and the $P$ value is just barely significant, you will reject the null hypothesis, even though the more accurate $P$ value of the exact binomial test would be above $0.05$. The results are similar for $2\times 2$ tests of independence; the chi-square and G–tests give $P$ values that are considerably lower than that of the more accurate Fisher's exact test.
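If you want to see this gap yourself, the $9$-versus-$2$ ACL example is easy to reproduce in R; here is a small sketch (base R has no built-in G–test, so $G$ is computed directly from its definition).

# Exact binomial, chi-square, and G-tests of goodness-of-fit for 9 right
# vs. 2 left ACL tears against a 1:1 expectation.
binom.test(9, 11, p = 0.5)$p.value             # exact test, ~0.065
chisq.test(c(9, 2), p = c(0.5, 0.5))$p.value   # chi-square test, ~0.035
obs <- c(9, 2)
exp <- sum(obs) * c(0.5, 0.5)
G <- 2 * sum(obs * log(obs / exp))             # the G statistic
pchisq(G, df = 1, lower.tail = FALSE)          # G-test, ~0.028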
Yates' and Williams' corrections

One solution to this problem is to use Yates' correction for continuity, sometimes just known as the continuity correction. To do this, you subtract $0.5$ from each observed value that is greater than the expected, add $0.5$ to each observed value that is less than the expected, then do the chi-square or G–test. This only applies to tests with one degree of freedom: goodness-of-fit tests with only two categories, and $2\times 2$ tests of independence. It works quite well for goodness-of-fit, yielding $P$ values that are quite close to those of the exact binomial. For tests of independence, Yates' correction yields $P$ values that are too high.

Another correction that is sometimes used is Williams' correction. For a goodness-of-fit test, Williams' correction is found by dividing the chi-square or $G$ value by the following:

$q=1+\frac{a^2-1}{6nv}$

where $a$ is the number of categories, $n$ is the total sample size, and $v$ is the number of degrees of freedom. For a test of independence with $R$ rows and $C$ columns, Williams' correction is found by dividing the chi-square or $G$ value by the following:

$q=1+\frac{\left (n\left \{ \left [ 1/(\text{row 1 total}) \right ]+...+\left [ 1/(\text{row } R \text{ total}) \right ] \right \}-1\right )\left (n\left \{ \left [ 1/(\text{column 1 total}) \right ]+...+\left [ 1/(\text{column } C \text{ total}) \right ] \right \}-1\right )}{6n(R-1)(C-1)}$

Unlike Yates' correction, it can be applied to tests with more than one degree of freedom. For the numbers I've tried, it increases the $P$ value a little, but not enough to make it very much closer to the more accurate $P$ value provided by the exact test of goodness-of-fit or Fisher's exact test. Some software may apply the Yates' or Williams' correction automatically. When reporting your results, be sure to say whether or not you used one of these corrections.

Pooling

When a variable has more than two categories, and some of them have small numbers, it often makes sense to pool some of the categories together. For example, let's say you want to compare the proportions of different kinds of ankle injuries in basketball players vs. volleyball players, and your numbers look like this:

                  basketball   volleyball
sprains               18           16
breaks                13            5
torn ligaments         9            7
cuts                   3            5
puncture wounds        1            3
infections             2            0

The numbers for cuts, puncture wounds, and infections are pretty small, and this will cause the $P$ value for your test of independence to be inaccurate. Having a large number of categories with small numbers will also decrease the power of your test to detect a significant difference; adding categories with small numbers can't increase the chi-square value or $G$-value very much, but it does increase the degrees of freedom. It would therefore make sense to pool some categories:

                  basketball   volleyball
sprains               18           16
breaks                13            5
torn ligaments         9            7
other injuries         6            8

Depending on the biological question you're interested in, it might make sense to pool the data further:

                          basketball   volleyball
orthopedic injuries           40           28
non-orthopedic injuries        6            8

It is important to make decisions about pooling before analyzing the data. In this case, you might have known, based on previous studies, that cuts, puncture wounds, and infections would be relatively rare and should be pooled. You could have decided before the study to pool all injuries for which the total was $10$ or fewer, or you could have decided to pool all non-orthopedic injuries because they're just not biomechanically interesting.
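To make the pooling step concrete, here is a sketch in R using the ankle-injury counts above; the object names are made up, and the point is just the mechanics of collapsing the three rare categories into "other injuries" before testing, not the particular $P$ values.

# Ankle-injury counts from the table above (rows = injury type).
injuries <- matrix(c(18, 16,
                     13,  5,
                      9,  7,
                      3,  5,
                      1,  3,
                      2,  0),
                   nrow = 6, byrow = TRUE,
                   dimnames = list(c("sprains", "breaks", "torn ligaments",
                                     "cuts", "puncture wounds", "infections"),
                                   c("basketball", "volleyball")))
# Pool the three rare categories, then test both versions.
pooled <- rbind(injuries[1:3, ],
                "other injuries" = colSums(injuries[4:6, ]))
fisher.test(injuries)$p.value   # all six categories
fisher.test(pooled)$p.value     # after pooling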
Recommendation

I recommend that you always use an exact test (exact test of goodness-of-fit, Fisher's exact test) if the total sample size is less than $1000$. There is nothing magical about a sample size of $1000$; it's just a nice round number that is well within the range where an exact test, chi-square test and G–test will give almost identical $P$ values. Spreadsheets, web-page calculators, and SAS shouldn't have any problem doing an exact test on a sample size of $1000$. When the sample size gets much larger than $1000$, even a powerful program such as SAS on a mainframe computer may have problems doing the calculations needed for an exact test, so you should use a chi-square or G–test for sample sizes larger than this. You can use Yates' correction if there is only one degree of freedom, but with such a large sample size, the improvement in accuracy will be trivial. For simplicity, I base my rule of thumb on the total sample size, not the smallest expected value; if one or more of your expected values are quite small, you should still try an exact test even if the total sample size is above $1000$, and hope your computer can handle the calculations.

If you see someone else following the traditional rules and using chi-square or G–tests for total sample sizes that are smaller than $1000$, don't worry about it too much. Old habits die hard, and unless their expected values are really small (in the single digits), it probably won't make any difference in the conclusions. If their chi-square or G–test gives a $P$ value that's just a little below $0.05$, you might want to analyze their data yourself, and if an exact test brings the $P$ value above $0.05$, you should probably point this out.

If you have a large number of categories, some with very small expected numbers, you should consider pooling the rarer categories, even if the total sample size is small enough to do an exact test; the reduced number of degrees of freedom will increase the power of your test.
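Returning to the Yates' and Williams' corrections described earlier in this chapter, here is a short sketch applying both to the $9$-versus-$2$ goodness-of-fit example. This is an illustration rather than a recommendation; consistent with the discussion above, Yates' correction lands close to the exact $P$ value of $0.065$, while Williams' correction only nudges the G–test $P$ value upward from $0.028$.

obs <- c(9, 2)
exp <- c(5.5, 5.5)
# Yates' correction: move each observed value 0.5 toward its expected value.
obs.yates <- obs + 0.5 * sign(exp - obs)
chi2 <- sum((obs.yates - exp)^2 / exp)
pchisq(chi2, df = 1, lower.tail = FALSE)   # ~0.07, close to the exact 0.065
# Williams' correction: divide G by q = 1 + (a^2 - 1)/(6nv).
G <- 2 * sum(obs * log(obs / exp))
q <- 1 + (2^2 - 1) / (6 * sum(obs) * 1)    # a = 2 categories, v = 1 d.f.
pchisq(G / q, df = 1, lower.tail = FALSE)  # only slightly larger than 0.028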
Learning Objectives • To study the use of this method for repeated G–tests of goodness-of-fit when you have two nominal variables; one is something you'd analyze with a goodness-of-fit test, and the other variable represents repeating the experiment multiple times. It tells you whether there's an overall deviation from the expected proportions, and whether there's significant variation among the repeated experiments. When to use it Use this method for repeated tests of goodness-of-fit when you've done a goodness-of-fit experiment more than once; for example, you might look at the fit to a $3:1$ ratio of a genetic cross in more than one family, or fit to a $1:1$ sex ratio in more than one population, or fit to a $1:1$ ratio of broken right and left ankles on more than one sports team. One question then is, should you analyze each experiment separately, risking the chance that the small sample sizes will have insufficient power? Or should you pool all the data, ignoring the possibility that the different experiments gave different results? This is when the additive property of the G–test of goodness-of-fit becomes important, because you can do a repeated G–test of goodness-of-fit and test several hypotheses at once. You use the repeated G–test of goodness-of-fit when you have two nominal variables, one with two or more biologically interesting values (such as red vs. pink vs. white flowers), the other representing different replicates of the same experiment (different days, different locations, different pairs of parents). You compare the observed data with an extrinsic theoretical expectation (such as an expected $1:2:1$ ratio in a genetic cross). For example, Guttman et al. (1967) counted the number of people who fold their arms with the right arm on top ($R$) or the left arm on top ($L$) in six ethnic groups in Israel: Ethnic group R L Percent R Yemen 168 174 49.1% Djerba 132 195 40.4% Kurdistan 167 204 45.0% Libya 162 212 43.3% Berber 143 194 42.4% Cochin 153 174 46.8% The null hypothesis is that half the people would be $R$ and half would be $L$. It would be possible to add together the numbers from all six groups and test the fit with a chi-square or G–test of goodness-of-fit, but that could overlook differences among the groups. It would also be possible to test each group separately, but that could overlook deviations from the null hypothesis that were too small to detect in each ethnic group sample, but would be detectable in the overall sample. The repeated goodness-of-fit test tests the data both ways. I do not know if this analysis would be appropriate with an intrinsic hypothesis, such as the $p^2:2pq:q^2$ Hardy-Weinberg proportions of population genetics. Null hypotheses This technique actually tests four null hypotheses. The first statistical null hypothesis is that the numbers within each experiment fit the expectations; for our arm-folding example, the null hypothesis is that there is a $1:1$ ratio of $R$ and $L$ folders within each ethnic group. This is the same null hypothesis as for a regular G–test of goodness-of-fit applied to each experiment. The second null hypothesis is that the relative proportions are the same across the different experiments; in our example, this null hypothesis would be that the proportion of R folders is the same in the different ethnic groups. This is the same as the null hypothesis for a G–test of independence. 
The third null hypothesis is that the pooled data fit the expectations; for our example, it would be that the number of $R$ and $L$ folders, summed across all six ethnic groups, fits a $1:1$ ratio. The fourth null hypothesis is that overall, the data from the individual experiments fit the expectations. This null hypothesis is a bit difficult to grasp, but being able to test it is the main value of doing a repeated G–test of goodness-of-fit.

How to do the test

First, decide what you're going to do if there is significant variation among the replicates. Ideally, you should decide this before you look at the data, so that your decision is not subconsciously biased towards making the results be as interesting as possible. Your decision should be based on whether your goal is estimation or hypothesis testing. For the arm-folding example, if you were already confident that fewer than $50\%$ of people fold their arms with the right on top, and you were just trying to estimate the proportion of right-on-top folders as accurately as possible, your goal would be estimation. If this is the goal, and there is significant heterogeneity among the replicates, you probably shouldn't pool the results; it would be misleading to say "$42\%$ of people are right-on-top folders" if some ethnic groups are $30\%$ and some are $50\%$; the pooled estimate would depend a lot on your sample size in each ethnic group, for one thing. But if there's no significant heterogeneity, you'd want to pool the individual replicates to get one big sample and therefore make a precise estimate.

If you're mainly interested in knowing whether there's a deviation from the null expectation, and you're not as interested in the size of the deviation, then you're doing hypothesis testing, and you may want to pool the samples even if they are significantly different from each other. In the arm-folding example, finding out that there's asymmetry (that fewer than $50\%$ of people fold with their right arm on top) could say something interesting about developmental biology and would therefore be interesting, but you might not care that much if the asymmetry was stronger in some ethnic groups than others. So you might decide to pool the data even if there is significant heterogeneity.

After you've planned what you're going to do, collect the data and do a G–test of goodness-of-fit for each individual data set. The resulting $G$-values are the "individual $G$-values." Also record the number of degrees of freedom for each individual data set; these are the "individual degrees of freedom."

Note

Some programs use continuity corrections, such as the Yates correction or the Williams correction, in an attempt to make G–tests more accurate for small sample sizes. Do not use any continuity corrections when doing a replicated G–test, or the $G$-values will not add up properly. My spreadsheet for G–tests of goodness-of-fit gtestgof.xls can provide the uncorrected $G$-values.

Ethnic group    R     L    Percent R   G-value   Degrees of freedom   P value
Yemen          168   174     49.1%      0.105            1             0.75
Djerba         132   195     40.4%     12.214            1             0.0005
Kurdistan      167   204     45.0%      3.696            1             0.055
Libya          162   212     43.3%      6.704            1             0.010
Berber         143   194     42.4%      7.748            1             0.005
Cochin         153   174     46.8%      1.350            1             0.25

As you can see, three of the ethnic groups (Djerba, Libya, and Berber) have $P$ values less than $0.05$. However, because you're doing $6$ tests at once, you should probably apply a correction for multiple comparisons. Applying a Bonferroni correction leaves only the Djerba and Berber groups as significant.
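Here is a sketch of those individual G–tests in R (base R has no G–test function, so the statistic is computed from its definition; the data frame and helper names are made up). The $G$-values and $P$ values should match the table above, and p.adjust() applies the Bonferroni correction.

# Individual G-tests of goodness-of-fit (1:1 ratio) for the arm-folding data.
arms <- data.frame(group = c("Yemen", "Djerba", "Kurdistan", "Libya", "Berber", "Cochin"),
                   R = c(168, 132, 167, 162, 143, 153),
                   L = c(174, 195, 204, 212, 194, 174))
gof.G <- function(obs, p = c(0.5, 0.5)) {   # G statistic for a goodness-of-fit test
  exp <- sum(obs) * p
  2 * sum(obs * log(obs / exp))
}
arms$G <- apply(arms[, c("R", "L")], 1, gof.G)
arms$P <- pchisq(arms$G, df = 1, lower.tail = FALSE)
arms$P.bonferroni <- p.adjust(arms$P, method = "bonferroni")
arms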
Next, do a G–test of independence on the data. This gives a "heterogeneity $G$-value," which for our example is $G=6.750$, $5d.f.$, $P=0.24$. This means that the $R:L$ ratio is not significantly different among the $6$ ethnic groups. If there had been a significant result, you'd have to look back at what you decided in the first step to know whether to go on and pool the results or not.

If you're going to pool the results (either because the heterogeneity $G$-value was not significant, or because you decided to pool even if the heterogeneity was significant), add the numbers in each category across the repeated experiments, and do a G–test of goodness-of-fit on the totals. For our example, there are a total of $925 R$ and $1153 L$, which gives $G=25.067$, $1d.f.$, $P=5.5\times 10^{-7}$. The interpretation of this "pooled $G$-value" is that overall, significantly fewer than $50\%$ of people fold their arms with the right arm on top. Because the G–test of independence was not significant, you can be pretty sure that this is a consistent overall pattern, not just due to extreme deviations in one or two samples. If the G–test of independence had been significant, you'd be much more cautious about interpreting the goodness-of-fit test of the summed data.

If you did the pooling, the next step is to add up the $G$-values from the individual goodness-of-fit tests to get the "total $G$-value," and add up the individual degrees of freedom to get the total degrees of freedom. Use the CHIDIST function in a spreadsheet or online chi-square calculator to find the $P$ value for the total $G$-value with the total degrees of freedom. For our example, the total $G$-value is $31.817$ and the total degrees of freedom is $6$, so enter "=CHIDIST(31.817, 6)" if you're using a spreadsheet. The result will be the $P$ value for the total $G$; in this case, $P=1.8\times 10^{-5}$. If it is significant, you can reject the null hypothesis that all of the data from the different experiments fit the expected ratio. Usually, you'll be able to look at the other results and see that the total $G$-value is significant because the goodness-of-fit of the pooled data is significant, or because the test of independence shows significant heterogeneity among the replicates, or both. However, it is possible for the total $G$-value to be significant even if none of the other results are significant. This would be frustrating; it would tell you that there's some kind of deviation from the null hypotheses, but it wouldn't be entirely clear what that deviation was.

I've repeatedly mentioned that the main advantage of G–tests over chi-square tests is "additivity," and it's finally time to illustrate this. In our example, the $G$-value for the test of independence was $6.750$, with $5$ degrees of freedom, and the $G$-value for the goodness-of-fit test for the pooled data was $25.067$, with $1$ degree of freedom. Adding those together gives $G=31.817$ with $6$ degrees of freedom, which is exactly the same as the total of the $6$ individual goodness-of-fit tests. Isn't that amazing? So you can partition the total deviation from the null hypothesis into the portion due to deviation of the pooled data from the null hypothesis of a $1:1$ ratio, and the portion due to variation among the replicates. It's just an interesting little note for this design, but additivity becomes more important for more elaborate experimental designs. Chi-square values are not additive.
If you do the above analysis with chi-square tests, the test of independence gives a chi-square value of $6.749$ and the goodness-of-fit test of the pooled data gives a chi-square value of $25.067$, which adds up to $31.816$. The 6 individual goodness-of-fit tests give chi-square values that add up to $31.684$, which is close to $31.816$ but not exactly the same.

Example

Connallon and Jakubowski (2009) performed mating competitions among male Drosophila melanogaster. They took the "unpreferred" males that had lost three competitions in a row and mated them with females, then looked at the sex ratio of the offspring. They did this for three separate sets of flies.

                  Males   Females   G-value   d.f.   P value
Trial 1            296      366       7.42      1     0.006
Trial 2             78       72       0.24      1     0.624
Trial 3            417      467       2.83      1     0.093
total G                              10.49      3     0.015
pooled             791      905
pooled G                              7.67      1     0.006
heterogeneity G                       2.82      2     0.24

The total $G$-value is significant, so you can reject the null hypothesis that all three trials have the same $1:1$ sex ratio. The heterogeneity $G$-value is not significant; although the results of the second trial may look quite different from the results of the first and third trials, the three trials are not significantly different. You can therefore look at the pooled $G$-value. It is significant; the unpreferred males have significantly more daughters than sons.

Similar tests

If the numbers are small, you may want to use exact tests instead of G–tests. You'll lose the additivity and the ability to test the total fit, but the other results may be more accurate. First, do an exact test of goodness-of-fit for each replicate. Next, do Fisher's exact test of independence to compare the proportions in the different replicates. If Fisher's test is not significant, pool the data and do an exact test of goodness-of-fit on the pooled data.

Note that I'm not saying how small your numbers should be to make you uncomfortable using G–tests. If some of your numbers are less than $10$ or so, you should probably consider using exact tests, while if all of your numbers are in the $10s$ or $100s$, you're probably okay using G–tests. In part this will depend on how important it is to test the total $G$-value.

If you have repeated tests of independence, instead of repeated tests of goodness-of-fit, you should use the Cochran-Mantel-Haenszel test.
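To tie the whole procedure together, here is a sketch of the G–value partition for the arm-folding example, assuming the arms data frame and gof.G() helper from the sketch earlier in this chapter. The total, pooled, and heterogeneity $G$-values should come out near $31.8$, $25.1$, and $6.7$, and the last line checks the additivity property.

total.G  <- sum(arms$G)                          # total G, 6 d.f.
pooled.G <- gof.G(c(sum(arms$R), sum(arms$L)))   # pooled G, 1 d.f.
# Heterogeneity G: a G-test of independence on the 6x2 table, computed from
# the observed counts and the expected counts under independence.
obs <- as.matrix(arms[, c("R", "L")])
exp <- outer(rowSums(obs), colSums(obs)) / sum(obs)
het.G <- 2 * sum(obs * log(obs / exp))           # heterogeneity G, 5 d.f.
pchisq(c(total.G, pooled.G, het.G), df = c(6, 1, 5), lower.tail = FALSE)
all.equal(pooled.G + het.G, total.G)             # the additivity property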
Learning Objectives • To use the Cochran–Mantel–Haenszel test when you have data from $2\times 2$ tables that you've repeated at different times or locations. It will tell you whether you have a consistent difference in proportions across the repeats. When to use it Use the Cochran–Mantel–Haenszel test (which is sometimes called the Mantel–Haenszel test) for repeated tests of independence. The most common situation is that you have multiple $2\times 2$ tables of independence; you're analyzing the kind of experiment that you'd analyze with a test of independence, and you've done the experiment multiple times or at multiple locations. There are three nominal variables: the two variables of the $2\times 2$ test of independence, and the third nominal variable that identifies the repeats (such as different times, different locations, or different studies). There are versions of the Cochran–Mantel–Haenszel test for any number of rows and columns in the individual tests of independence, but they're rarely used and I won't cover them. For example, let's say you've found several hundred pink knit polyester legwarmers that have been hidden in a warehouse since they went out of style in 1984. You decide to see whether they reduce the pain of ankle osteoarthritis by keeping the ankles warm. In the winter, you recruit $36$ volunteers with ankle arthritis, randomly assign $20$ to wear the legwarmers under their clothes at all times while the other $16$ don't wear the legwarmers, then after a month you ask them whether their ankles are pain-free or not. With just the one set of people, you'd have two nominal variables (legwarmers vs. control, pain-free vs. pain), each with two values, so you'd analyze the data with Fisher's exact test. However, let's say you repeat the experiment in the spring, with $50$ new volunteers. Then in the summer you repeat the experiment again, with $28$ new volunteers. You could just add all the data together and do Fisher's exact test on the $114$ total people, but it would be better to keep each of the three experiments separate. Maybe legwarmers work in the winter but not in the summer, or maybe your first set of volunteers had worse arthritis than your second and third sets. In addition, pooling different studies together can show a "significant" difference in proportions when there isn't one, or even show the opposite of a true difference. This is known as Simpson's paradox. For these reasons, it's better to analyze repeated tests of independence using the Cochran-Mantel-Haenszel test. Null hypothesis The null hypothesis is that the relative proportions of one variable are independent of the other variable within the repeats; in other words, there is no consistent difference in proportions in the $2\times 2$ tables. For our imaginary legwarmers experiment, the null hypothesis would be that the proportion of people feeling pain was the same for legwarmer-wearers and non-legwarmer wearers, after controlling for the time of year. The alternative hypothesis is that the proportion of people feeling pain was different for legwarmer and non-legwarmer wearers. Technically, the null hypothesis of the Cochran–Mantel–Haenszel test is that the odds ratios within each repetition are equal to $1$. The odds ratio is equal to $1$ when the proportions are the same, and the odds ratio is different from $1$ when the proportions are different from each other. I think proportions are easier to understand than odds ratios, so I'll put everything in terms of proportions. 
But if you're in a field such as epidemiology where this kind of analysis is common, you're probably going to have to think in terms of odds ratios.

How the test works

If you label the four numbers in a $2\times 2$ test of independence like this:

$\begin{matrix} a & b\\ c & d \end{matrix}$

and $(a+b+c+d)=n$, you can write the equation for the Cochran–Mantel–Haenszel test statistic like this:

$X_{MH}^{2}=\frac{\left \{ \left | \sum \left [ a-(a+b)(a+c)/n \right ] \right | -0.5\right \}^2}{\sum (a+b)(a+c)(b+d)(c+d)/(n^3-n^2)}$

The numerator contains the absolute value of the difference between the observed value in one cell ($a$) and the expected value under the null hypothesis, $(a+b)(a+c)/n$, so the numerator is the squared sum of deviations between the observed and expected values. It doesn't matter how you arrange the $2\times 2$ tables, any of the four values can be used as $a$. You subtract the $0.5$ as a continuity correction. The denominator contains an estimate of the variance of the squared differences. The test statistic, $X_{MH}^{2}$, gets bigger as the differences between the observed and expected values get larger, or as the variance gets smaller (primarily due to the sample size getting bigger). It is chi-square distributed with one degree of freedom. Different sources present the formula for the Cochran–Mantel–Haenszel test in different forms, but they are all algebraically equivalent. The formula I've shown here includes the continuity correction (subtracting $0.5$ in the numerator), which should make the $P$ value more accurate. Some programs do the Cochran–Mantel–Haenszel test without the continuity correction, so be sure to specify whether you used it when reporting your results.

Assumptions

In addition to testing the null hypothesis, the Cochran-Mantel-Haenszel test also produces an estimate of the common odds ratio, a way of summarizing how big the effect is when pooled across the different repeats of the experiment. This requires assuming that the odds ratio is the same in the different repeats. You can test this assumption using the Breslow-Day test, which I'm not going to explain in detail; its null hypothesis is that the odds ratios are equal across the different repeats.

If some repeats have a big difference in proportion in one direction, and other repeats have a big difference in proportions but in the opposite direction, the Cochran-Mantel-Haenszel test may give a non-significant result. So when you get a non-significant Cochran-Mantel-Haenszel test, you should perform a test of independence on each $2\times 2$ table separately and inspect the individual $P$ values and the direction of difference to see whether something like this is going on. In our legwarmer example, if the proportion of people with ankle pain was much smaller for legwarmer-wearers in the winter, but much higher in the summer, and the Cochran-Mantel-Haenszel test gave a non-significant result, it would be erroneous to conclude that legwarmers had no effect. Instead, you could conclude that legwarmers had an effect; it just was different in the different seasons.

Examples

Example

When you look at the back of someone's head, the hair either whorls clockwise or counterclockwise. Lauterbach and Knight (1927) compared the proportion of clockwise whorls in right-handed and left-handed children. With just this one set of people, you'd have two nominal variables (right-handed vs. left-handed, clockwise vs. counterclockwise), each with two values, so you'd analyze the data with Fisher's exact test.
However, several other groups have done similar studies of hair whorl and handedness (McDonald 2011): Study group Handedness Right Left white children Clockwise 708 50 Counterclockwise 169 13 percent CCW 19.3% 20.6% British adults Clockwise 136 24 Counterclockwise 73 14 percent CCW 34.9% 38.0% Pennsylvania whites Clockwise 106 32 Counterclockwise 17 4 percent CCW 13.8% 11.1% Welsh men Clockwise 109 22 Counterclockwise 16 26 percent CCW 12.8% 54.2% German soldiers Clockwise 801 102 Counterclockwise 180 25 percent CCW 18.3% 19.7% German children Clockwise 159 27 Counterclockwise 18 13 percent CCW 10.2% 32.5% New York Clockwise 151 51 Counterclockwise 28 15 percent CCW 15.6% 22.7% American men Clockwise 950 173 Counterclockwise 218 33 percent CCW 18.7% 16.0% You could just add all the data together and do a test of independence on the $4463$ total people, but it would be better to keep each of the $8$ experiments separate. Some of the studies were done on children, while others were on adults; some were just men, while others were male and female; and the studies were done on people of different ethnic backgrounds. Pooling all these studies together might obscure important differences between them. Analyzing the data using the Cochran-Mantel-Haenszel test, the result is $X_{MH}^{2}=6.07$, $1d.f.$, $P=0.014$. Overall, left-handed people have a significantly higher proportion of counterclockwise whorls than right-handed people. Example McDonald and Siebenaller (1989) surveyed allele frequencies at the Lap locus in the mussel Mytilus trossulus on the Oregon coast. At four estuaries, we collected mussels from inside the estuary and from a marine habitat outside the estuary. There were three common alleles and a couple of rare alleles; based on previous results, the biologically interesting question was whether the Lap94 allele was less common inside estuaries, so we pooled all the other alleles into a "non-94" class. There are three nominal variables: allele ($94$ or non-$94$), habitat (marine or estuarine), and area (Tillamook, Yaquina, Alsea, or Umpqua). The null hypothesis is that at each area, there is no difference in the proportion of Lap94 alleles between the marine and estuarine habitats. This table shows the number of $94$ and non-$94$ alleles at each location. There is a smaller proportion of $94$ alleles in the estuarine location of each estuary when compared with the marine location; we wanted to know whether this difference is significant. Location Allele Marine Estuarine Tillamook 94 56 69 non-94 40 77 percent 94 58.3% 47.3% Yaquina 94 61 257 non-94 57 301 percent 94 51.7% 46.1% Alsea 94 73 65 non-94 71 79 percent 94 50.7% 45.1% Umpqua 94 71 48 non-94 55 48 percent 94 56.3% 50.0% The result is $X_{MH}^{2}=5.05$, $1d.f.$, $P=0.025$. We can reject the null hypothesis that the proportion of Lap94 alleles is the same in the marine and estuarine locations. Example Duggal et al. (2010) did a meta-analysis of placebo-controlled studies of niacin and heart disease. They found $5$ studies that met their criteria and looked for coronary artery revascularization in patients given either niacin or placebo: Study Revascularization No revasc. Percent revasc. FATS Niacin 2 46 4.2% Placebo 11 41 21.2% AFREGS Niacin 4 67 5.6% Placebo 12 60 16.7% ARBITER 2 Niacin 1 86 1.1% Placebo 4 76 5.0% HATS Niacin 1 37 2.6% Placebo 6 32 15.8% CLAS 1 Niacin 2 92 2.1% Placebo 1 93 1.1% There are three nominal variables: niacin vs. placebo, revascularization vs. no revascularization, and the name of the study. 
The null hypothesis is that the rate of revascularization is the same in patients given niacin or placebo. The different studies have different overall rates of revascularization, probably because they used different patient populations and looked for revascularization after different lengths of time, so it would be unwise to just add up the numbers and do a single $2\times 2$ test. The result of the Cochran-Mantel-Haenszel test is $X_{MH}^{2}=12.75$, $1d.f.$, $P=0.00036$. Significantly fewer patients on niacin developed coronary artery revascularization. Graphing the results To graph the results of a Cochran–Mantel–Haenszel test, pick one of the two values of the nominal variable that you're observing and plot its proportions on a bar graph, using bars of two different patterns. Similar tests Sometimes the Cochran–Mantel–Haenszel test is just called the Mantel–Haenszel test. This is confusing, as there is also a test for homogeneity of odds ratios called the Mantel–Haenszel test, and a Mantel–Haenszel test of independence for one $2\times 2$ table. Mantel and Haenszel (1959) came up with a fairly minor modification of the basic idea of Cochran (1954), so it seems appropriate (and somewhat less confusing) to give Cochran credit in the name of this test. If you have at least six $2\times 2$ tables, and you're only interested in the direction of the differences in proportions, not the size of the differences, you could do a sign test. The Cochran–Mantel–Haenszel test for nominal variables is analogous to a two-way anova or paired t–test for a measurement variable, or a Wilcoxon signed-rank test for rank data. In the arthritis-legwarmers example, if you measured ankle pain on a $10$-point scale (a measurement variable) instead of categorizing it as pain/no pain, you'd analyze the data with a two-way anova. How to do the test Spreadsheet I've written a spreadsheet to perform the Cochran–Mantel–Haenszel test cmh.xls. It handles up to $50$ $2\times 2$ tables. It gives you the choice of using or not using the continuity correction; the results are probably a little more accurate with the continuity correction. It does not do the Breslow-Day test. Web pages I'm not aware of any web pages that will perform the Cochran–Mantel–Haenszel test. R Salvatore Mangiafico's $R$ Companion has a sample R program for the Cochran-Mantel-Haenszel test, and also shows how to do the Breslow-Day test. SAS Here is a SAS program that uses PROC FREQ for a Cochran–Mantel–Haenszel test. It uses the mussel data from above. In the TABLES statement, the variable that labels the repeats must be listed first; in this case it is "location". 
DATA lap;
   INPUT location $ habitat $ allele $ count;
   DATALINES;
Tillamook  marine     94      56
Tillamook  estuarine  94      69
Tillamook  marine     non-94  40
Tillamook  estuarine  non-94  77
Yaquina    marine     94      61
Yaquina    estuarine  94     257
Yaquina    marine     non-94  57
Yaquina    estuarine  non-94 301
Alsea      marine     94      73
Alsea      estuarine  94      65
Alsea      marine     non-94  71
Alsea      estuarine  non-94  79
Umpqua     marine     94      71
Umpqua     estuarine  94      48
Umpqua     marine     non-94  55
Umpqua     estuarine  non-94  48
;
PROC FREQ DATA=lap;
   WEIGHT count / ZEROS;
   TABLES location*habitat*allele / CMH;
RUN;

There is a lot of output, but the important part looks like this:

Cochran-Mantel-Haenszel Statistics (Based on Table Scores)

Statistic    Alternative Hypothesis    DF      Value      Prob
---------------------------------------------------------------
    1        Nonzero Correlation        1     5.3209    0.0211
    2        Row Mean Scores Differ     1     5.3209    0.0211
    3        General Association        1     5.3209    0.0211

For repeated $2\times 2$ tables, the three statistics are identical; they are the Cochran–Mantel–Haenszel chi-square statistic, without the continuity correction. For repeated tables with more than two rows or columns, the "general association" statistic is used when the values of the different nominal variables do not have an order (you cannot arrange them from smallest to largest); you should use it unless you have a good reason to use one of the other statistics.

The results also include the Breslow-Day test of homogeneity of odds ratios:

Breslow-Day Test for Homogeneity
      of the Odds Ratios
------------------------------
Chi-Square          0.5295
DF                       3
Pr > ChiSq          0.9124

The Breslow-Day test for the example data shows no significant evidence for heterogeneity of odds ratios ($X^2=0.53$, $3d.f.$, $P=0.91$).
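If you would rather check the same analysis in R, mantelhaen.test() in base R takes a $2\times 2\times k$ array; the sketch below is my own reconstruction of the mussel Lap data, not the R Companion program mentioned above. With the default continuity correction it should agree with the $X_{MH}^{2}=5.05$ reported above, and with correct = FALSE it should match the uncorrected SAS value of $5.3209$.

# Mussel Lap data as a 2 x 2 x 4 array: allele x habitat x area.
lap <- array(c(56, 40, 69, 77,     # Tillamook: marine 94, marine non-94, estuarine 94, estuarine non-94
               61, 57, 257, 301,   # Yaquina, same order
               73, 71, 65, 79,     # Alsea
               71, 55, 48, 48),    # Umpqua
             dim = c(2, 2, 4),
             dimnames = list(allele  = c("94", "non-94"),
                             habitat = c("marine", "estuarine"),
                             area    = c("Tillamook", "Yaquina", "Alsea", "Umpqua")))
mantelhaen.test(lap)                    # continuity correction is on by default
mantelhaen.test(lap, correct = FALSE)   # matches the uncorrected SAS statistic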
• 3.1: Statistics of Central Tendency A statistic of central tendency tells you where the middle of a set of measurements is. The arithmetic mean is by far the most common, but the median, geometric mean, and harmonic mean are sometimes useful. • 3.2: Statistics of Dispersion Summarizing data from a measurement variable requires a number that represents the "middle" of a set of numbers along with a measure of the "spread" of the numbers. You use a statistic of dispersion to give a single number that describes how compact or spread out a set of observations is. Although statistics of dispersion are usually not very interesting by themselves, they form the basis of most statistical tests used on measurement variables. • 3.3: Standard Error of the Mean Standard error of the mean tells you how accurate your estimate of the mean is likely to be. • 3.4: Confidence Limits Confidence limits tell you how accurate your estimate of the mean is likely to be. 03: Descriptive Statistics Learning Objectives • A statistic of central tendency tells you where the middle of a set of measurements is. The arithmetic mean is by far the most common, but the median, geometric mean, and harmonic mean are sometimes useful. Introduction All of the tests in the first part of this handbook have analyzed nominal variables. You summarize data from a nominal variable as a percentage or a proportion. For example, $76.1\%$ (or $0.761$) of the peas in one of Mendel's genetic crosses were smooth, and $23.9\%$ were wrinkled. If you have the percentage and the sample size ($556$, for Mendel's peas), you have all the information you need about the variable. The rest of the tests in this handbook analyze measurement variables. Summarizing data from a measurement variable is more complicated, and requires a number that represents the "middle" of a set of numbers (known as a "statistic of central tendency" or "statistic of location"), along with a measure of the "spread" of the numbers (known as a "statistic of dispersion"). The arithmetic mean is the most common statistic of central tendency, while the variance or standard deviation are usually used to describe the dispersion. The statistical tests for measurement variables assume that the probability distribution of the observations fits the normal (bell-shaped) curve. If this is true, the distribution can be accurately described by two parameters, the arithmetic mean and the variance. Because they assume that the distribution of the variables can be described by these two parameters, tests for measurement variables are called "parametric tests." If the distribution of a variable doesn't fit the normal curve, it can't be accurately described by just these two parameters, and the results of a parametric test may be inaccurate. In that case, the data can be converted to ranks and analyzed using a non-parametric test, which is less sensitive to deviations from normality. The Normal Distribution Many measurement variables in biology fit the normal distribution fairly well. According to the central limit theorem, if you have several different variables that each have some distribution of values and add them together, the sum follows the normal distribution fairly well. It doesn't matter what the shape of the distribution of the individual variables is, the sum will still be normal. The distribution of the sum fits the normal distribution more closely as the number of variables increases. The graphs below are frequency histograms of $5,000$ numbers. 
The first graph shows the distribution of a single number with a uniform distribution between $0$ and $1$. The other graphs show the distributions of the sums of two, three, or four random numbers with the same distribution. As you can see, as more random numbers are added together, the frequency distribution of the sum quickly approaches a bell-shaped curve. This is analogous to a biological variable that is the result of several different factors. For example, let's say that you've captured $100$ lizards and measured their maximum running speed. The running speed of an individual lizard would be a function of its genotype at many genes; its nutrition as it was growing up; the diseases it's had; how full its stomach is now; how much water it's drunk; and how motivated it is to run fast on a lizard racetrack. Each of these variables might not be normally distributed; the effect of disease might be to either subtract $10\; cm/sec$ if it has had lizard-slowing disease, or add $20\; cm/sec$ if it has not; the effect of gene A might be to add $25\; cm/sec$ for genotype $AA$, $20\; cm/sec$ for genotype $Aa$, or $15\; cm/sec$ for genotype $aa$. Even though the individual variables might not have normally distributed effects, the running speed that is the sum of all the effects would be normally distributed.

If the different factors interact in a multiplicative, not additive, way, the distribution will be log-normal. An example would be if the effect of lizard-slowing disease is not to subtract $10\; cm/sec$ from the average speed, but instead to reduce the speed by $10\%$ (in other words, multiply the speed by $0.9$). The distribution of a log-normal variable will look like a bell curve that has been pushed to the left, with a long tail going to the right. Taking the log of such a variable will produce a normal distribution. This is why the log transformation is used so often. The figure above shows the frequency distribution for the product of four numbers, with each number having a uniform random distribution between $0.5$ and $1$. The graph on the left shows the untransformed product; the graph on the right is the distribution of the log-transformed products.

Different measures of central tendency

While the arithmetic mean is by far the most commonly used statistic of central tendency, you should be aware of a few others.

Arithmetic mean

The arithmetic mean is the sum of the observations divided by the number of observations. It is the most common statistic of central tendency, and when someone says simply "the mean" or "the average," this is what they mean. It is often symbolized by putting a bar over a letter; the mean of $Y_1,\; Y_2,\; Y_3,...$ is $\bar{Y}$. The arithmetic mean works well for values that fit the normal distribution. It is sensitive to extreme values, which makes it not work well for data that are highly skewed. For example, imagine that you are measuring the heights of fir trees in an area where $99\%$ of trees are young trees, about $1$ meter tall, that grew after a fire, and $1\%$ of the trees are $50$-meter-tall trees that survived the fire. If a sample of $20$ trees happened to include one of the giants, the arithmetic mean height would be $3.45$ meters; a sample that didn't include a big tree would have a mean height of about $1$ meter. The mean of a sample would vary a lot, depending on whether or not it happened to include a big tree.
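The fir-tree example is worth trying for yourself when deciding between the mean and the median; a two-line sketch in R (the numbers are the hypothetical ones from the example above):

# Nineteen 1 m post-fire trees plus one 50 m survivor.
heights <- c(rep(1, 19), 50)
mean(heights)     # 3.45, dragged up by the single giant
median(heights)   # 1, unaffected by it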
In a spreadsheet, the arithmetic mean is given by the function AVERAGE(Ys), where $Ys$ represents a listing of cells ($A2,\; B7,\; B9$) or a range of cells ($A2:A20$) or both ($A2,\; B7,\; B9:B21$). Note that spreadsheets only count those cells that have numbers in them; you could enter AVERAGE($A1:A100$), put numbers in cells $A1\; to\; A9$, and the spreadsheet would correctly compute the arithmetic mean of those $9$ numbers. This is true for other functions that operate on a range of cells. Geometric mean The geometric mean is the $N^{th}$ root of the product of $N$ values of $Y$; for example, the geometric mean of $5$ values of $Y$ would be the $5^{th}$ root of $Y_1\times Y_2\times Y_3\times Y_4\times Y_5$. It is given by the spreadsheet function GEOMEAN($Ys$). The geometric mean is used for variables whose effect is multiplicative. For example, if a tree increases its height by $60\%$ one year, $8\%$ the next year, and $4\%$ the third year, its final height would be the initial height multiplied by $1.60\times 1.08\times 1.04=1.80$. Taking the geometric mean of these numbers ($1.216$) and multiplying that by itself three times also gives the correct final height ($1.80$), while taking the arithmetic mean ($1.24$) times itself three times does not give the correct final height. The geometric mean is slightly smaller than the arithmetic mean; unless the data are highly skewed, the difference between the arithmetic and geometric means is small. If any of your values are zero or negative, the geometric mean will be undefined. The geometric mean has some useful applications in economics involving interest rates, etc., but it is rarely used in biology. You should be aware that it exists, but I see no point in memorizing the definition. Harmonic mean The harmonic mean is the reciprocal of the arithmetic mean of reciprocals of the values; for example, the harmonic mean of $5$ values of $Y$ would be $\frac{5}{1/Y_1+1/Y_2+1/Y_3+1/Y_4+1/Y_5}$. It is given by the spreadsheet function HARMEAN($Ys$). The harmonic mean is less sensitive to a few large values than are the arithmetic or geometric mean, so it is sometimes used for highly skewed variables such as dispersal distance. For example, if six birds set up their first nest $1.0,\; 1.4,\; 1.7,\; 2.1,\; 2.8,\; and\; 47\; km$ from the nest they were born in, the arithmetic mean dispersal distance would be $9.33\; km$, the geometric mean would be $2.95\; km$, and the harmonic mean would be $1.90\; km$. If any of your values are zero, the harmonic mean will be undefined. I think the harmonic mean has some useful applications in engineering, but it is rarely used in biology. You should be aware that it exists, but I see no point in memorizing the definition. Median When the $Ys$ are sorted from lowest to highest, this is the value of $Y$ that is in the middle. For an odd number of $Ys$, the median is the single value of $Y$ in the middle of the sorted list; for an even number, it is the arithmetic mean of the two values of $Y$ in the middle. Thus for a sorted list of $5$ $Ys$, the median would be $Y_3$; for a sorted list of $6$ $Y$s, the median would be the arithmetic mean of $Y_3$ and $Y_4$. The median is given by the spreadsheet function MEDIAN(Ys). The median is useful when you are dealing with highly skewed distributions. For example, if you were studying acorn dispersal, you might find that the vast majority of acorns fall within $5$ meters of the tree, while a small number are carried $500$ meters away by birds. 
The arithmetic mean of the dispersal distances would be greatly inflated by the small number of long-distance acorns. It would depend on the biological question you were interested in, but for some purposes a median dispersal distance of $3.5$ meters might be a more useful statistic than a mean dispersal distance of $50$ meters.

The second situation where the median is useful is when it is impractical to measure all of the values, such as when you are measuring the time until something happens. Survival time is a good example of this; in order to determine the mean survival time, you have to wait until every individual is dead, while determining the median survival time only requires waiting until half the individuals are dead. There are statistical tests for medians, such as Mood's median test, but not many people use them because of their lack of power, and I don't discuss them in this handbook. If you are working with survival times of long-lived organisms (such as people), you'll need to learn about the specialized statistics for that; Bewick et al. (2004) is one place to start.

Mode

This is the most common value in a data set. It requires that a continuous variable be grouped into a relatively small number of classes, either by making imprecise measurements or by grouping the data into classes. For example, if the heights of $25$ people were measured to the nearest millimeter, there would likely be $25$ different values and thus no mode. If the heights were measured to the nearest $5$ centimeters, or if the original precise measurements were grouped into $5$-centimeter classes, there would probably be one height that several people shared, and that would be the mode.

It is rarely useful to determine the mode of a set of observations, but it is useful to distinguish between unimodal, bimodal, etc. distributions, where it appears that the parametric frequency distribution underlying a set of observations has one peak, two peaks, etc. The mode is given by the spreadsheet function MODE(Ys).

Example

The Maryland Biological Stream Survey used electrofishing to count the number of individuals of each fish species in randomly selected $75m$ long segments of streams in Maryland. Here are the numbers of blacknose dace, Rhinichthys atratulus, in streams of the Rock Creek watershed:

Stream                      fish/75m
Mill_Creek_1                76
Mill_Creek_2                102
North_Branch_Rock_Creek_1   12
North_Branch_Rock_Creek_2   39
Rock_Creek_1                55
Rock_Creek_2                93
Rock_Creek_3                98
Rock_Creek_4                53
Turkey_Branch               102

Here are the statistics of central tendency. In reality, you would rarely have any reason to report more than one of these:

Arithmetic mean   70.0
Geometric mean    59.8
Harmonic mean     45.1
Median            76
Mode              102

How to calculate the statistics

Spreadsheet
I have made a descriptive statistics spreadsheet descriptive.xls that calculates the arithmetic, geometric and harmonic means, the median, and the mode, for up to $1000$ observations.

Web pages
This web page calculates arithmetic mean and median for up to $10,000$ observations. It also calculates standard deviation, standard error of the mean, and confidence intervals.

R
Salvatore Mangiafico's $R$ Companion has sample R programs for mean, median and mode.

SAS
There are three SAS procedures that do descriptive statistics, PROC MEANS, PROC SUMMARY, and PROC UNIVARIATE. I don't know why there are three. PROC UNIVARIATE will calculate a longer list of statistics, so you might as well use it. Here is an example, using the fish data from above.
DATA fish;
   INPUT location $ dacenumber;
   DATALINES;
Mill_Creek_1                76
Mill_Creek_2                102
North_Branch_Rock_Creek_1   12
North_Branch_Rock_Creek_2   39
Rock_Creek_1                55
Rock_Creek_2                93
Rock_Creek_3                98
Rock_Creek_4                53
Turkey_Branch               102
;
PROC UNIVARIATE DATA=fish;
RUN;

There's a lot of output from PROC UNIVARIATE, including the arithmetic mean, median, and mode:

Basic Statistical Measures

     Location                  Variability
Mean     70.0000    Std Deviation        32.08582
Median   76.0000    Variance             1030
Mode    102.0000    Range                90.00000
                    Interquartile Range  45.00000

You can specify which variables you want the mean, median and mode of, using a VAR statement. You can also get the statistics for just those values of the measurement variable that have a particular value of a nominal variable, using a CLASS statement. This example calculates the statistics for the length of mussels, separately for each of two species, Mytilus edulis and M. trossulus.

DATA mussels;
   INPUT species $ length width;
   DATALINES;
edulis  49.0  11.0
tross   51.2   9.1
tross   45.9   9.4
edulis  56.2  13.2
edulis  52.7  10.7
edulis  48.4  10.4
tross   47.6   9.5
tross   46.2   8.9
tross   37.2   7.1
;
PROC UNIVARIATE DATA=mussels;
   VAR length;
   CLASS species;
RUN;

Surprisingly, none of the SAS procedures calculate harmonic or geometric mean. There are functions called HARMEAN and GEOMEAN, but they only calculate the means for a list of variables, not all the values of a single variable.
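In R, by contrast, all of these statistics are easy to get from a single vector. The handbook's own R examples are in Salvatore Mangiafico's R Companion; the base-R sketch below is only an illustrative alternative (not from the handbook) that reproduces the blacknose dace values reported above, computing the geometric and harmonic means from their definitions since base R has no built-in functions for them.

# Blacknose dace counts from the Rock Creek watershed example
dace <- c(76, 102, 12, 39, 55, 93, 98, 53, 102)

mean(dace)                     # arithmetic mean, 70.0
exp(mean(log(dace)))           # geometric mean, 59.8 (the Nth root of the product)
length(dace) / sum(1 / dace)   # harmonic mean, 45.1
median(dace)                   # 76

# Base R has no mode function for data; tabulate and take the most frequent value
counts <- table(dace)
as.numeric(names(counts)[which.max(counts)])   # 102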
Learning Objectives
• A statistic of dispersion tells you how spread out a set of measurements is. Standard deviation is the most common, but there are others.

Summarizing data from a measurement variable requires a number that represents the "middle" of a set of numbers (known as a "statistic of central tendency" or "statistic of location"), along with a measure of the "spread" of the numbers (known as a "statistic of dispersion"). You use a statistic of dispersion to give a single number that describes how compact or spread out a set of observations is. Although statistics of dispersion are usually not very interesting by themselves, they form the basis of most statistical tests used on measurement variables.

Range

This is simply the difference between the largest and smallest observations. This is the statistic of dispersion that people use in everyday conversation; if you were telling your Uncle Cletus about your research on the giant deep-sea isopod Bathynomus giganteus, you wouldn't blather about means and standard deviations, you'd say they ranged from \(4.4cm\) to \(36.5cm\) long (Briones-Fourzán and Lozano-Alvarez 1991). Then you'd explain that isopods are roly-polies, and \(36.5cm\) is about \(14\) American inches, and Uncle Cletus would finally be impressed, because a roly-poly that's over a foot long is pretty impressive.

Range is not very informative for statistical purposes. The range depends only on the largest and smallest values, so that two sets of data with very different distributions could have the same range, or two samples from the same population could have very different ranges, purely by chance. In addition, the range increases as the sample size increases; the more observations you make, the greater the chance that you'll sample a very large or very small value. There is no range function in spreadsheets; you can calculate the range by using: Range = MAX(Ys)−MIN(Ys), where \(Ys\) represents a set of cells.

Sum of squares

This is not really a statistic of dispersion by itself, but I mention it here because it forms the basis of the variance and standard deviation. Subtract the mean from an observation and square this "deviate". Squaring the deviates makes all of the squared deviates positive and has other statistical advantages. Do this for each observation, then sum these squared deviates. This sum of the squared deviates from the mean is known as the sum of squares. It is given by the spreadsheet function DEVSQ(Ys) (not by the function SUMSQ). You'll probably never have a reason to calculate the sum of squares, but it's an important concept.

Parametric variance

If you take the sum of squares and divide it by the number of observations (\(n\)), you are computing the average squared deviation from the mean. As observations get more and more spread out, they get farther from the mean, and the average squared deviate gets larger. This average squared deviate, or sum of squares divided by \(n\), is the parametric variance. You can only calculate the parametric variance of a population if you have observations for every member of a population, which is almost never the case. I can't think of a good biological example where using the parametric variance would be appropriate; I only mention it because there's a spreadsheet function for it that you should never use, VARP(Ys).

Sample variance

You almost always have a sample of observations that you are using to estimate a population parameter.
To get an unbiased estimate of the population variance, divide the sum of squares by \(n-1\), not by \(n\). This sample variance, which is the one you will always use, is given by the spreadsheet function VAR(Ys). From here on, when you see "variance," it means the sample variance.

You might think that if you set up an experiment where you gave \(10\) guinea pigs little argyle sweaters, and you measured the body temperature of all \(10\) of them, that you should use the parametric variance and not the sample variance. You would, after all, have the body temperature of the entire population of guinea pigs wearing argyle sweaters in the world. However, for statistical purposes you should consider your sweater-wearing guinea pigs to be a sample of all the guinea pigs in the world who could have worn an argyle sweater, so it would be best to use the sample variance. Even if you go to Española Island and measure the length of every single tortoise (Geochelone nigra hoodensis) in the population of tortoises living there, for most purposes it would be best to consider them a sample of all the tortoises that could have been living there.

Standard Deviation

Variance, while it has useful statistical properties that make it the basis of many statistical tests, is in squared units. A set of lengths measured in centimeters would have a variance expressed in square centimeters, which is just weird; a set of volumes measured in \(cm^3\) would have a variance expressed in \(cm^6\), which is even weirder. Taking the square root of the variance gives a measure of dispersion that is in the original units. The square root of the parametric variance is the parametric standard deviation, which you will never use; it is given by the spreadsheet function STDEVP(Ys). The square root of the sample variance is given by the spreadsheet function STDEV(Ys). You should always use the sample standard deviation; from here on, when you see "standard deviation," it means the sample standard deviation.

The square root of the sample variance actually underestimates the parametric standard deviation by a little bit. Gurland and Tripathi (1971) came up with a correction factor that gives a more accurate estimate of the standard deviation, but very few people use it. Their correction factor makes the standard deviation about \(3\%\) bigger with a sample size of \(9\), and about \(1\%\) bigger with a sample size of \(25\), for example, and most people just don't need to estimate standard deviation that accurately. Neither SAS nor Excel uses the Gurland and Tripathi correction; I've included it as an option in my descriptive statistics spreadsheet. If you use the standard deviation with the Gurland and Tripathi correction, be sure to say this when you write up your results.

In addition to being more understandable than the variance as a measure of the amount of variation in the data, the standard deviation summarizes how close observations are to the mean in an understandable way. Many variables in biology fit the normal probability distribution fairly well. If a variable fits the normal distribution, \(68.3\%\) (or roughly two-thirds) of the values are within one standard deviation of the mean, \(95.4\%\) are within two standard deviations of the mean, and \(99.7\%\) (or almost all) are within \(3\) standard deviations of the mean.
Thus if someone says the mean length of men's feet is \(270mm\) with a standard deviation of \(13mm\), you know that about two-thirds of men's feet are between \(257mm\) and \(283mm\) long, and about \(95\%\) of men's feet are between \(244mm\) and \(296mm\) long. Here's a histogram that illustrates this. The proportions of the data that are within \(1\), \(2\), or \(3\) standard deviations of the mean are different if the data do not fit the normal distribution, as shown for these two very non-normal data sets.

Coefficient of Variation

Coefficient of variation is the standard deviation divided by the mean; it summarizes the amount of variation as a percentage or proportion of the total. It is useful when comparing the amount of variation for one variable among groups with different means, or among different measurement variables. For example, the United States military measured foot length and foot width in 1774 American men. The standard deviation of foot length was \(13.1mm\) and the standard deviation for foot width was \(5.26mm\), which makes it seem as if foot length is more variable than foot width. However, feet are longer than they are wide. Dividing by the means (\(269.7mm\) for length, \(100.6mm\) for width), the coefficient of variation is actually slightly smaller for length (\(4.9\%\)) than for width (\(5.2\%\)), which for most purposes would be a more useful measure of variation.

Example

Here are the statistics of dispersion for the blacknose dace data from the central tendency web page. In reality, you would rarely have any reason to report all of these:

• Range 90
• Variance 1029.5
• Standard deviation 32.09
• Coefficient of variation 45.8%

How to calculate the statistics

Spreadsheet
I have made a spreadsheet descriptive.xls that calculates the range, sample variance, sample standard deviation (with or without the Gurland and Tripathi correction), and coefficient of variation, for up to \(1000\) observations.

Web pages
This web page calculates standard deviation and other descriptive statistics for up to \(10,000\) observations. This web page calculates range, variance, and standard deviation, along with other descriptive statistics. I don't know the maximum number of observations it can handle.

R
Salvatore Mangiafico's \(R\) Companion has a sample R program for calculating range, sample variance, standard deviation, and coefficient of variation.

SAS
PROC UNIVARIATE will calculate the range, variance, standard deviation (without the Gurland and Tripathi correction), and coefficient of variation. It calculates the sample variance and sample standard deviation. For examples, see the central tendency web page.

Reference
• Briones-Fourzán, P., and E. Lozano-Alvarez. 1991. Aspects of the biology of the giant isopod Bathynomus giganteus A. Milne Edwards, 1879 (Flabellifera: Cirolanidae), off the Yucatan Peninsula. Journal of Crustacean Biology 11: 375-385.
• Gurland, J., and R.C. Tripathi. 1971. A simple approximation for unbiased estimation of the standard deviation. American Statistician 25: 30-32.
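For readers who would rather reproduce the blacknose dace dispersion statistics above in R than in a spreadsheet, here is a minimal base-R sketch. It is not part of the original handbook (whose R examples are in Mangiafico's R Companion); the expressions simply mirror the spreadsheet functions DEVSQ, VAR, and STDEV described in this chapter.

# Statistics of dispersion for the blacknose dace counts
dace <- c(76, 102, 12, 39, 55, 93, 98, 53, 102)

max(dace) - min(dace)          # range, 90
sum((dace - mean(dace))^2)     # sum of squares (spreadsheet DEVSQ), 8236
var(dace)                      # sample variance (sum of squares / (n-1)), 1029.5
sd(dace)                       # sample standard deviation, 32.09
sd(dace) / mean(dace)          # coefficient of variation, 0.458, i.e. 45.8%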
Learning Objectives
• Standard error of the mean tells you how accurate your estimate of the mean is likely to be.

Introduction

When you take a sample of observations from a population and calculate the sample mean, you are estimating the parametric mean, or mean of all of the individuals in the population. Your sample mean won't be exactly equal to the parametric mean that you're trying to estimate, and you'd like to have an idea of how close your sample mean is likely to be. If your sample size is small, your estimate of the mean won't be as good as an estimate based on a larger sample size. Here are $10$ random samples from a simulated data set with a true (parametric) mean of $5$. The $X's$ represent the individual observations, the red circles are the sample means, and the blue line is the parametric mean. As you can see, with a sample size of only $3$, some of the sample means aren't very close to the parametric mean. The first sample happened to be three observations that were all greater than $5$, so the sample mean is too high. The second sample has three observations that were less than $5$, so the sample mean is too low. With $20$ observations per sample, the sample means are generally closer to the parametric mean.

Once you've calculated the mean of a sample, you should let people know how close your sample mean is likely to be to the parametric mean. One way to do this is with the standard error of the mean. If you take many random samples from a population, the standard error of the mean is the standard deviation of the different sample means. About two-thirds ($68.3\%$) of the sample means would be within one standard error of the parametric mean, $95.4\%$ would be within two standard errors, and almost all ($99.7\%$) would be within three standard errors.

Here's a figure illustrating this. I took $100$ samples of $3$ from a population with a parametric mean of $5$ (shown by the blue line). The standard deviation of the $100$ means was $0.63$. Of the $100$ sample means, $70$ are between $4.37$ and $5.63$ (the parametric mean $\pm$ one standard error). Usually you won't have multiple samples to use in making multiple estimates of the mean. Fortunately, you can estimate the standard error of the mean using the sample size and standard deviation of a single sample of observations. The standard error of the mean is estimated by the standard deviation of the observations divided by the square root of the sample size. For some reason, there's no spreadsheet function for standard error, so you can use =STDEV(Ys)/SQRT(COUNT(Ys)), where $Ys$ is the range of cells containing your data.

This figure is the same as the one above, only this time I've added error bars indicating $\pm 1$ standard error. Because the estimate of the standard error is based on only three observations, it varies a lot from sample to sample. With a sample size of $20$, each estimate of the standard error is more accurate. Of the $100$ samples in the graph below, $68$ include the parametric mean within $\pm 1$ standard error of the sample mean. As you increase your sample size, the standard error of the mean will become smaller. With bigger sample sizes, the sample mean becomes a more accurate estimate of the parametric mean, so the standard error of the mean becomes smaller. Note that it's a function of the square root of the sample size; for example, to make the standard error half as big, you'll need four times as many observations.
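You can reproduce the gist of these figures with a short simulation. The sketch below is not from the handbook, and the population is an assumption on my part (a normal distribution with mean $5$ and standard deviation $1$, chosen only for illustration; the handbook's figures used its own simulated data). It shows the key point: the standard deviation of many sample means matches the standard deviation divided by the square root of the sample size.

# Simulated sampling distribution of the mean
# (population assumed here: normal, mean 5, SD 1 -- for illustration only)
set.seed(1)
means.n3  <- replicate(10000, mean(rnorm(3,  mean = 5, sd = 1)))
means.n20 <- replicate(10000, mean(rnorm(20, mean = 5, sd = 1)))

sd(means.n3)    # close to 1/sqrt(3)  = 0.58, the standard error for samples of 3
sd(means.n20)   # close to 1/sqrt(20) = 0.22, smaller because the samples are bigger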
"Standard error of the mean" and "standard deviation of the mean" are equivalent terms. People almost always say "standard error of the mean" to avoid confusion with the standard deviation of observations. Sometimes "standard error" is used by itself; this almost certainly indicates the standard error of the mean, but because there are also statistics for standard error of the variance, standard error of the median, standard error of a regression coefficient, etc., you should specify standard error of the mean. There is a myth that when two means have standard error bars that don't overlap, the means are significantly different (at the $P<0.05$ level). This is not true (Browne 1979, Payton et al. 2003); it is easy for two sets of numbers to have standard error bars that don't overlap, yet not be significantly different by a two-sample t–test. Don't try to do statistical tests by visually comparing standard error bars, just use the correct statistical test. Similar statistics Confidence intervals and standard error of the mean serve the same purpose, to express the reliability of an estimate of the mean. When you look at scientific papers, sometimes the "error bars" on graphs or the $\pm$ number after means in tables represent the standard error of the mean, while in other papers they represent $95\%$ confidence intervals. I prefer $95\%$ confidence intervals. When I see a graph with a bunch of points and error bars representing means and confidence intervals, I know that most ($95\%$) of the error bars include the parametric means. When the error bars are standard errors of the mean, only about two-thirds of the error bars are expected to include the parametric means; I have to mentally double the bars to get the approximate size of the $95\%$ confidence interval. In addition, for very small sample sizes, the $95\%$ confidence interval is larger than twice the standard error, and the correction factor is even more difficult to do in your head. Whichever statistic you decide to use, be sure to make it clear what the error bars on your graphs represent. I have seen lots of graphs in scientific journals that gave no clue about what the error bars represent, which makes them pretty useless. You use standard deviation and coefficient of variation to show how much variation there is among individual observations, while you use standard error or confidence intervals to show how good your estimate of the mean is. The only time you would report standard deviation or coefficient of variation would be if you're actually interested in the amount of variation. For example, if you grew a bunch of soybean plants with two different kinds of fertilizer, your main interest would probably be whether the yield of soybeans was different, so you'd report the mean yield ± either standard error or confidence intervals. If you were going to do artificial selection on the soybeans to breed for better yield, you might be interested in which treatment had the greatest variation (making it easier to pick the fastest-growing soybeans), so then you'd report the standard deviation or coefficient of variation. There's no point in reporting both standard error of the mean and standard deviation. As long as you report one of them, plus the sample size ($N$), anyone who needs to can calculate the other one. Example The standard error of the mean for the blacknose dace data from the central tendency web page is $10.70$. 
How to calculate the standard error Spreadsheet The descriptive statistics spreadsheet descriptive.xls calculates the standard error of the mean for up to $1000$ observations, using the function =STDEV(Ys)/SQRT(COUNT(Ys)). Web pages This web page calculates standard error of the mean and other descriptive statistics for up to $10,000$ observations. This web page calculates standard error of the mean, along with other descriptive statistics. I don't know the maximum number of observations it can handle. R Salvatore Mangiafico's $R$ Companion has a sample R program for standard error of the mean. SAS PROC UNIVARIATE will calculate the standard error of the mean. For examples, see the central tendency web page.
Learning Objectives • Confidence limits tell you how accurate your estimate of the mean is likely to be. Introduction After you've calculated the mean of a set of observations, you should give some indication of how close your estimate is likely to be to the parametric ("true") mean. One way to do this is with confidence limits. Confidence limits are the numbers at the upper and lower end of a confidence interval; for example, if your mean is $7.4$ with confidence limits of $5.4$ and $9.4$, your confidence interval is $5.4$ to $9.4$. Most people use $95\%$ confidence limits, although you could use other values. Setting $95\%$ confidence limits means that if you took repeated random samples from a population and calculated the mean and confidence limits for each sample, the confidence interval for $95\%$ of your samples would include the parametric mean. To illustrate this, here are the means and confidence intervals for $100$ samples of $3$ observations from a population with a parametric mean of $5$. Of the $100$ samples, $94$ (shown with $X$ for the mean and a thin line for the confidence interval) have the parametric mean within their $95\%$ confidence interval, and $6$ (shown with circles and thick lines) have the parametric mean outside the confidence interval. With larger sample sizes, the $95\%$ confidence intervals get smaller: When you calculate the confidence interval for a single sample, it is tempting to say that "there is a $95\%$ probability that the confidence interval includes the parametric mean." This is technically incorrect, because it implies that if you collected samples with the same confidence interval, sometimes they would include the parametric mean and sometimes they wouldn't. For example, the first sample in the figure above has confidence limits of $4.59$ and $5.51$. It would be incorrect to say that $95\%$ of the time, the parametric mean for this population would lie between $4.59$ and $5.51$. If you took repeated samples from this same population and repeatedly got confidence limits of $4.59$ and $5.51$, the parametric mean (which is $5$, remember) would be in this interval $100\%$ of the time. Some statisticians don't care about this confusing, pedantic distinction, but others are very picky about it, so it's good to know. Confidence limits for measurement variables To calculate the confidence limits for a measurement variable, multiply the standard error of the mean times the appropriate t-value. The $t$-value is determined by the probability ($0.05$ for a $95\%$ confidence interval) and the degrees of freedom ($n-1$). In a spreadsheet, you could use =(STDEV(Ys)/SQRT(COUNT(Ys)))*TINV(0.05, COUNT(Ys)-1), where $Ys$ is the range of cells containing your data. You add this value to and subtract it from the mean to get the confidence limits. Thus if the mean is $87$ and the $t$-value times the standard error is $10.3$, the confidence limits would be $76.7$ and $97.3$. You could also report this as "$87\pm 10.3$ ($95\%$ confidence limits)." People report both confidence limits and standard errors as the "mean $\pm$ something," so always be sure to specify which you're talking about. All of the above applies only to normally distributed measurement variables. For measurement data from a highly non-normal distribution, bootstrap techniques, which I won't talk about here, might yield better estimates of the confidence limits. 
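The spreadsheet recipe above (standard error times the critical $t$-value) carries over directly to R, which also reports the same interval through its built-in t.test. The sketch below is mine, not the handbook's, and uses the blacknose dace counts so you can compare it with the limits given in the examples later in this chapter.

# 95% confidence limits of the mean for the blacknose dace counts,
# mirroring the spreadsheet formula (standard error times the critical t-value)
dace <- c(76, 102, 12, 39, 55, 93, 98, 53, 102)
se <- sd(dace) / sqrt(length(dace))
halfwidth <- qt(0.975, df = length(dace) - 1) * se
mean(dace) + c(-1, 1) * halfwidth   # 45.3 and 94.7
t.test(dace)$conf.int               # the same interval, from the built-in t-test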
Confidence limits for nominal variables There is a different, more complicated formula, based on the binomial distribution, for calculating confidence limits of proportions (nominal data). Importantly, it yields confidence limits that are not symmetrical around the proportion, especially for proportions near zero or one. John Pezzullo has an easy-to-use web page for confidence intervals of a proportion. To see how it works, let's say that you've taken a sample of $20$ men and found $2$ colorblind and $18$ non-colorblind. Go to the web page and enter $2$ in the "Numerator" box and $20$ in the "Denominator" box," then hit "Compute." The results for this example would be a lower confidence limit of $0.0124$ and an upper confidence limit of $0.3170$. You can't report the proportion of colorblind men as "$0.10\pm something$," instead you'd have to say "$0.10$ with $95\%$ confidence limits of $0.0124$ and $0.3170$." An alternative technique for estimating the confidence limits of a proportion assumes that the sample proportions are normally distributed. This approximate technique yields symmetrical confidence limits, which for proportions near zero or one are obviously incorrect. For example, if you calculate the confidence limits using the normal approximation on $0.10$ with a sample size of $20$, you get $-0.03$ and $0.23$, which is ridiculous (you couldn't have less than $0\%$ of men being color-blind). It would also be incorrect to say that the confidence limits were $0$ and $0.23$, because you know the proportion of colorblind men in your population is greater than $0$ (your sample had two colorblind men, so you know the population has at least two colorblind men). I consider confidence limits for proportions that are based on the normal approximation to be obsolete for most purposes; you should use the confidence interval based on the binomial distribution, unless the sample size is so large that it is computationally impractical. Unfortunately, more people use the confidence limits based on the normal approximation than use the correct, binomial confidence limits. The formula for the $95\%$ confidence interval using the normal approximation is $p\pm 1.96\sqrt{\left [ \frac{p(1-p)}{n} \right ]}$, where $p$ is the proportion and $n$ is the sample size. Thus, for $P=0.20$ and $n=100$, the confidence interval would be $\pm 1.96\sqrt{\left [ \frac{0.20(1-0.20)}{100} \right ]}$, or $0.20\pm 0.078$. A common rule of thumb says that it is okay to use this approximation as long as $npq$ is greater than $5$; my rule of thumb is to only use the normal approximation when the sample size is so large that calculating the exact binomial confidence interval makes smoke come out of your computer. Statistical testing with confidence intervals This handbook mostly presents "classical" or "frequentist" statistics, in which hypotheses are tested by estimating the probability of getting the observed results by chance, if the null is true (the $P$ value). An alternative way of doing statistics is to put a confidence interval on a measure of the deviation from the null hypothesis. For example, rather than comparing two means with a two-sample t–test, some statisticians would calculate the confidence interval of the difference in the means. This approach is valuable if a small deviation from the null hypothesis would be uninteresting, when you're more interested in the size of the effect rather than whether it exists. 
For example, if you're doing final testing of a new drug that you're confident will have some effect, you'd be mainly interested in estimating how well it worked, and how confident you were in the size of that effect. You'd want your result to be "This drug reduced systolic blood pressure by $10.7 mm\; \; Hg$, with a confidence interval of $7.8$ to $13.6$," not "This drug significantly reduced systolic blood pressure ($P=0.0007$)." Using confidence limits this way, as an alternative to frequentist statistics, has many advocates, and it can be a useful approach. However, I often see people saying things like "The difference in mean blood pressure was $10.7 mm\; \; Hg$, with a confidence interval of $7.8$ to $13.6$; because the confidence interval on the difference does not include $0$, the means are significantly different." This is just a clumsy, roundabout way of doing hypothesis testing, and they should just admit it and do a frequentist statistical test.

There is a myth that when two means have confidence intervals that overlap, the means are not significantly different (at the $P<0.05$ level). Another version of this myth is that if each mean is outside the confidence interval of the other mean, the means are significantly different. Neither of these is true (Schenker and Gentleman 2001, Payton et al. 2003); it is easy for two sets of numbers to have overlapping confidence intervals, yet still be significantly different by a two-sample t–test; conversely, each mean can be outside the confidence interval of the other, yet they're still not significantly different. Don't try to compare two means by visually comparing their confidence intervals, just use the correct statistical test.

Similar statistics

Confidence limits and standard error of the mean serve the same purpose, to express the reliability of an estimate of the mean. When you look at scientific papers, sometimes the "error bars" on graphs or the ± number after means in tables represent the standard error of the mean, while in other papers they represent $95\%$ confidence intervals. I prefer $95\%$ confidence intervals. When I see a graph with a bunch of points and error bars representing means and confidence intervals, I know that most ($95\%$) of the error bars include the parametric means. When the error bars are standard errors of the mean, only about two-thirds of the bars are expected to include the parametric means; I have to mentally double the bars to get the approximate size of the $95\%$ confidence interval (because $t(0.05)$ is approximately $2$ for all but very small values of $n$). Whichever statistic you decide to use, be sure to make it clear what the error bars on your graphs represent. A surprising number of papers don't say what their error bars represent, which means that the only information the error bars convey to the reader is that the authors are careless and sloppy.

Examples

Measurement data
The blacknose dace data from the central tendency web page has an arithmetic mean of $70.0$. The lower confidence limit is $45.3$ ($70.0-24.7$), and the upper confidence limit is $94.7$ ($70+24.7$).

Nominal data
If you work with a lot of proportions, it's good to have a rough idea of confidence limits for different sample sizes, so you have an idea of how much data you'll need for a particular comparison. For proportions near $50\%$, the confidence intervals are roughly $\pm 30\%,\; 10\%,\; 3\%$, and $1\%$ for $n=10,\; 100,\; 1000,$ and $10,000$, respectively.
This is why the "margin of error" in political polls, which typically have a sample size of around $1,000$, is usually about $3\%$. Of course, this rough idea is no substitute for an actual power analysis.

n        proportion=0.10     proportion=0.50
10       0.0025, 0.4450      0.1871, 0.8129
100      0.0490, 0.1762      0.3983, 0.6017
1000     0.0821, 0.1203      0.4685, 0.5315
10,000   0.0942, 0.1060      0.4902, 0.5098

How to calculate confidence limits

Spreadsheets
The descriptive statistics spreadsheet descriptive.xls calculates $95\%$ confidence limits of the mean for up to $1000$ measurements. The confidence intervals for a binomial proportion spreadsheet confidence.xls calculates $95\%$ confidence limits for nominal variables, using both the exact binomial and the normal approximation.

Web pages
This web page calculates confidence intervals of the mean for up to $10,000$ measurement observations. The web page for confidence intervals of a proportion handles nominal variables.

R
Salvatore Mangiafico's $R$ Companion has sample R programs for confidence limits for both measurement and nominal variables.

SAS
To get confidence limits for a measurement variable, add CIBASIC to the PROC UNIVARIATE statement, like this:

data fish;
   input location $ dacenumber;
   cards;
Mill_Creek_1                76
Mill_Creek_2                102
North_Branch_Rock_Creek_1   12
North_Branch_Rock_Creek_2   39
Rock_Creek_1                55
Rock_Creek_2                93
Rock_Creek_3                98
Rock_Creek_4                53
Turkey_Branch               102
;
proc univariate data=fish cibasic;
run;

The output will include the $95\%$ confidence limits for the mean (and for the standard deviation and variance, which you would hardly ever need):

Basic Confidence Limits Assuming Normality

Parameter       Estimate     95% Confidence Limits
Mean            70.00000     45.33665    94.66335
Std Deviation   32.08582     21.67259    61.46908
Variance        1030         469.70135   3778

This shows that the blacknose dace data have a mean of $70$, with confidence limits of $45.3$ and $94.7$.

You can get the confidence limits for a binomial proportion using PROC FREQ. Here's the sample program from the exact test of goodness-of-fit page:

data gus;
   input paw $;
   cards;
right
left
right
right
right
right
left
right
right
right
;
proc freq data=gus;
   tables paw / binomial(P=0.5);
   exact binomial;
run;

And here is part of the output:

Binomial Proportion for paw = left
--------------------------------
Proportion              0.2000
ASE                     0.1265
95% Lower Conf Limit    0.0000
95% Upper Conf Limit    0.4479

Exact Conf Limits
95% Lower Conf Limit    0.0252
95% Upper Conf Limit    0.5561

The first pair of confidence limits shown is based on the normal approximation; the second pair is the better one, based on the exact binomial calculation. Note that if you have more than two values of the nominal variable, the confidence limits will only be calculated for the value whose name is first alphabetically. For example, if the Gus data set included "left," "right," and "both" as values, SAS would only calculate the confidence limits on the proportion of "both." One clumsy way to solve this would be to run the program three times, changing the name of "left" to "aleft," then changing the name of "right" to "aright," to make each one first in one run.
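If you would rather avoid SAS's alphabetical quirk entirely, R's binom.test computes exact binomial confidence limits directly for whichever count you give it. The sketch below is mine, not the handbook's; it uses the colorblindness example from earlier in this chapter ($2$ colorblind men out of $20$) and shows the normal approximation alongside for contrast.

# Exact (Clopper-Pearson) binomial confidence limits for 2 successes out of 20
binom.test(2, 20)$conf.int   # roughly 0.012 to 0.317, essentially the exact limits quoted above

# Normal approximation, for contrast (poor for proportions near 0 or 1)
p <- 2 / 20
n <- 20
p + c(-1, 1) * 1.96 * sqrt(p * (1 - p) / n)   # about -0.03 to 0.23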
• 4.1: One-Sample t-Test Use Student's t–test for one sample when you have one measurement variable and a theoretical expectation of what the mean should be under the null hypothesis. It tests whether the mean of the measurement variable is different from the null expectation. • 4.2: Two-Sample t-Test To use Student's t-test for two samples when you have one measurement variable and one nominal variable, and the nominal variable has only two values. It tests whether the means of the measurement variable are different in the two groups. • 4.3: Independence Most statistical tests assume that you have a sample of independent observations, meaning that the value of one observation does not affect the value of other observations. Non-independent observations can make your statistical test give too many false positives. • 4.4: Normality Most tests for measurement variables assume that data are normally distributed (fit a bell-shaped curve). Here I explain how to check this and what to do if the data aren't normal. • 4.5: Homoscedasticity and Heteroscedasticity Parametric tests assume that data are homoscedastic (have the same standard deviation in different groups). To learn how to check this and what to do if the data are heteroscedastic (have different standard deviations in different groups). • 4.6: Data Transformations To learn how to use data transformation if a measurement variable does not fit a normal distribution or has greatly different standard deviations in different groups. • 4.7: One-way Anova To learn to use one-way anova when you have one nominal variable and one measurement variable; the nominal variable divides the measurements into two or more groups. It tests whether the means of the measurement variable are the same for the different groups. • 4.8: Kruskal–Wallis Test To learn to use the Kruskal–Wallis test when you have one nominal variable and one ranked variable. It tests whether the mean ranks are the same in all the groups. • 4.9: Nested Anova Use nested anova when you have one measurement variable and more than one nominal variable, and the nominal variables are nested (form subgroups within groups). It tests whether there is significant variation in means among groups, among subgroups within groups, etc. • 4.10: Two-way Anova To use two-way anova when you have one measurement variable and two nominal variables, and each value of one nominal variable is found in combination with each value of the other nominal variable. It tests three null hypotheses: that the means of the measurement variable are equal for different values of the first nominal variable; that the means are equal for different values of the second nominal variable; and that there is no interaction. • 4.11: Paired t–Test To use the paired t–test when you have one measurement variable and two nominal variables, one of the nominal variables has only two values, and you only have one observation for each combination of the nominal variables; in other words, you have multiple pairs of observations. It tests whether the mean difference in the pairs is different from 0. • 4.12: Wilcoxon Signed-Rank Test To use the Wilcoxon signed-rank test when you'd like to use the paired t–test, but the differences are severely non-normally distributed. 04: Tests for One Measurement Variable Learning Objectives • Use Student's $t$–test for one sample when you have one measurement variable and a theoretical expectation of what the mean should be under the null hypothesis. 
It tests whether the mean of the measurement variable is different from the null expectation.

There are several statistical tests that use the $t$-distribution and can be called a $t$-test. One is Student's $t$-test for one sample, named after "Student," the pseudonym that William Gosset used to hide his employment by the Guinness brewery in the early 1900s (they had a rule that their employees weren't allowed to publish, and Guinness didn't want other employees to know that they were making an exception for Gosset). Student's $t$-test for one sample compares a sample to a theoretical mean. It has so few uses in biology that I didn't cover it in previous editions of this Handbook, but then I recently found myself using it (McDonald and Dunn 2013), so here it is.

When to use it

Use Student's $t$-test when you have one measurement variable, and you want to compare the mean value of the measurement variable to some theoretical expectation. It is commonly used in fields such as physics (you've made several observations of the mass of a new subatomic particle—does the mean fit the mass predicted by the Standard Model of particle physics?) and product testing (you've measured the amount of drug in several aliquots from a new batch—is the mean of the new batch significantly less than the standard you've established for that drug?). It's rare to have this kind of theoretical expectation in biology, so you'll probably never use the one-sample $t$-test.

I've had a hard time finding a real biological example of a one-sample $t$-test, so imagine that you're studying joint position sense, our ability to know what position our joints are in without looking or touching. You want to know whether people over- or underestimate their knee angle. You blindfold $10$ volunteers, bend their knee to a $120^{\circ}$ angle for a few seconds, then return the knee to a $90^{\circ}$ angle. Then you ask each person to bend their knee to the $120^{\circ}$ angle. The measurement variable is the angle of the knee, and the theoretical expectation from the null hypothesis is $120^{\circ}$. You get the following imaginary data:

Individual   Angle
A            120.6
B            116.4
C            117.2
D            118.1
E            114.1
F            116.9
G            113.3
H            121.1
I            116.9
J            117.0

If the null hypothesis were true that people don't over- or underestimate their knee angle, the mean of these $10$ numbers would be $120$. The mean of these ten numbers is $117.2$; the one-sample $t$–test will tell you whether that is significantly different from $120$.

Null hypothesis

The statistical null hypothesis is that the mean of the measurement variable is equal to a number that you decided on before doing the experiment. For the knee example, the biological null hypothesis is that people don't under- or overestimate their knee angle. You decided to move people's knees to $120^{\circ}$, so the statistical null hypothesis is that the mean angle of the subjects' knees will be $120^{\circ}$.

How the test works

Calculate the test statistic, $t_s$, using this formula:

$t_s=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}$

where $\bar{x}$ is the sample mean, $\mu _0$ is the mean expected under the null hypothesis, $s$ is the sample standard deviation and $n$ is the sample size. The test statistic, $t_s$, gets bigger as the difference between the observed and expected means gets bigger, as the standard deviation gets smaller, or as the sample size gets bigger. Applying this formula to the imaginary knee position data gives a $t$-value of $-3.69$.
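Before going on to the $P$ value, here is a minimal R sketch (not from the handbook, which points to Mangiafico's R Companion for its R examples) that reproduces this calculation; t.test reports both the $t$-value worked out above and the $P$ value discussed next.

# One-sample t-test for the knee joint position sense data
angle <- c(120.6, 116.4, 117.2, 118.1, 114.1, 116.9, 113.3, 121.1, 116.9, 117.0)
t.test(angle, mu = 120)   # t = -3.69, df = 9, P = 0.005, sample mean = 117.16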
You calculate the probability of getting the observed $t_s$ value under the null hypothesis using the t-distribution. The shape of the $t$-distribution, and thus the probability of getting a particular $t_s$ value, depends on the number of degrees of freedom. The degrees of freedom for a one-sample $t$-test is the total number of observations in the group minus $1$. For our example data, the $P$ value for a $t$-value of $-3.69$ with $9$ degrees of freedom is $0.005$, so you would reject the null hypothesis and conclude that people return their knee to a significantly smaller angle than the original position. Assumptions The $t$-test assumes that the observations within each group are normally distributed. If the distribution is symmetrical, such as a flat or bimodal distribution, the one-sample $t$-test is not at all sensitive to the non-normality; you will get accurate estimates of the $P$ value, even with small sample sizes. A severely skewed distribution can give you too many false positives unless the sample size is large (above $50$ or so). If your data are severely skewed and you have a small sample size, you should try a data transformation to make them less skewed. With large sample sizes (simulations I've done suggest $50$ is large enough), the one-sample $t$-test will give accurate results even with severely skewed data. Example McDonald and Dunn (2013) measured the correlation of transferrin (labeled red) and Rab-10 (labeled green) in five cells. The biological null hypothesis is that transferrin and Rab-10 are not colocalized (found in the same subcellular structures), so the statistical null hypothesis is that the correlation coefficient between red and green signals in each cell image has a mean of zero. The correlation coefficients were $0.52,\; 0.20,\; 0.59,\; 0.62$ and $0.60$ in the five cells. The mean is $0.51$, which is highly significantly different from $0$ ($t=6.46,\; 4d.f.,\; P=0.003$), indicating that transferrin and Rab-10 are colocalized in these cells. Graphing the results Because you're just comparing one observed mean to one expected value, you probably won't put the results of a one-sample $t$-test in a graph. If you've done a bunch of them, I guess you could draw a bar graph with one bar for each mean, and a dotted horizontal line for the null expectation. Similar tests The paired t–test is a special case of the one-sample $t$-test; it tests the null hypothesis that the mean difference between two measurements (such as the strength of the right arm minus the strength of the left arm) is equal to zero. Experiments that use a paired t–test are much more common in biology than experiments using the one-sample $t$-test, so I treat the paired $t$-test as a completely different test. The two-sample t–test compares the means of two different samples. If one of your samples is very large, you may be tempted to treat the mean of the large sample as a theoretical expectation, but this is incorrect. For example, let's say you want to know whether college softball pitchers have greater shoulder flexion angles than normal people. You might be tempted to look up the "normal" shoulder flexion angle ($150^{\circ}$) and compare your data on pitchers to the normal angle using a one-sample $t$-test. 
However, the "normal" value doesn't come from some theory, it is based on data that has a mean, a standard deviation, and a sample size, and at the very least you should dig out the original study and compare your sample to the sample the $150^{\circ}$ "normal" was based on, using a two-sample $t$-test that takes the variation and sample size of both samples into account.

How to do the test

Spreadsheets
I have set up a spreadsheet to perform the one-sample $t$–test onesamplettest.xls. It will handle up to $1000$ observations.

Web pages
There are web pages to do the one-sample $t$–test here and here.

R
Salvatore Mangiafico's $R$ Companion has a sample R program for the one-sample t–test.

SAS
You can use PROC TTEST for Student's $t$-test; the CLASS parameter is the nominal variable, and the VAR parameter is the measurement variable. Here is an example program for the joint position sense data above. Note that the $H0$ parameter for the theoretical value is $H$ followed by the numeral zero, not a capital letter $O$.

DATA jps;
   INPUT angle;
   DATALINES;
120.6
116.4
117.2
118.1
114.1
116.9
113.3
121.1
116.9
117.0
;
PROC TTEST DATA=jps H0=120;
   VAR angle;
RUN;

The output includes some descriptive statistics, plus the $t$-value and $P$ value. For these data, the $P$ value is $0.005$.

DF    t Value    Pr > |t|
 9      -3.69      0.0050

Power analysis

To estimate the sample size you need to detect a significant difference between a mean and a theoretical value, you need the following:

• the effect size, or the difference between the observed mean and the theoretical value that you hope to detect
• the standard deviation
• alpha, or the significance level (usually $0.05$)
• beta, the probability of accepting the null hypothesis when it is false; most programs ask for the power, which is $1-\text{beta}$ ($0.50,\; 0.80$ and $0.90$ are common values for power)

The G*Power program will calculate the sample size needed for a one-sample $t$-test. Choose "t tests" from the "Test family" menu and "Means: Difference from constant (one sample case)" from the "Statistical test" menu. Click on the "Determine" button and enter the theoretical value ("Mean $H0$") and a mean with the smallest difference from the theoretical that you hope to detect ("Mean $H1$"). Enter an estimate of the standard deviation. Click on "Calculate and transfer to main window". Change "tails" to two, set your alpha (this will almost always be $0.05$) and your power ($0.5,\; 0.8,\; or\; 0.9$ are commonly used).

As an example, let's say you want to follow up the knee joint position sense study that I made up above with a study of hip joint position sense. You're going to set the hip angle to $70^{\circ}$ (Mean $H0=70$) and you want to detect an over- or underestimation of this angle of $1^{\circ}$, so you set Mean $H1=71$. You don't have any hip angle data, so you use the standard deviation from your knee study and enter $2.4$ for SD. You want to do a two-tailed test at the $P<0.05$ level, with a probability of detecting a difference this large, if it exists, of $90\%$ ($1-\text {beta}=0.90$). Entering all these numbers in G*Power gives a sample size of $63$ people.

Reference
1. McDonald, J.H., and K.W. Dunn. 2013. Statistical tests for measures of colocalization in biological microscopy. Journal of Microscopy 252: 295-302.
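As a cross-check on the G*Power calculation above, base R's power.t.test does the same calculation for a one-sample design. This sketch is not from the handbook; the inputs are the ones given in the hip-angle example ($1^{\circ}$ difference, standard deviation $2.4$, alpha $0.05$, power $0.90$).

# Sample size for the hip joint position sense follow-up described above
power.t.test(delta = 1, sd = 2.4, sig.level = 0.05, power = 0.90,
             type = "one.sample", alternative = "two.sided")
# n comes out to roughly 62.5, i.e. 63 people after rounding up, in line with G*Power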
Learning Objectives • To use Student's $t$-test for two samples when you have one measurement variable and one nominal variable, and the nominal variable has only two values. It tests whether the means of the measurement variable are different in the two groups. Introduction There are several statistical tests that use the $t$-distribution and can be called a $t$-test. One of the most common is Student's $t$-test for two samples. Other $t$-tests include the one-sample t–test, which compares a sample mean to a theoretical mean, and the paired t–test. Student's $t$-test for two samples is mathematically identical to a one-way anova with two categories; because comparing the means of two samples is such a common experimental design, and because the $t$-test is familiar to many more people than anova, I treat the two-sample $t$-test separately. When to use it Use the two-sample $t$–test when you have one nominal variable and one measurement variable, and you want to compare the mean values of the measurement variable. The nominal variable must have only two values, such as "male" and "female" or "treated" and "untreated." Null hypothesis The statistical null hypothesis is that the means of the measurement variable are equal for the two categories. How the test works The test statistic, $t_s$, is calculated using a formula that has the difference between the means in the numerator; this makes $t_s$ get larger as the means get further apart. The denominator is the standard error of the difference in the means, which gets smaller as the sample variances decrease or the sample sizes increase. Thus $t_s$ gets larger as the means get farther apart, the variances get smaller, or the sample sizes increase. You calculate the probability of getting the observed $t_s$ value under the null hypothesis using the $t$-distribution. The shape of the t-distribution, and thus the probability of getting a particular $t_s$ value, depends on the number of degrees of freedom. The degrees of freedom for a $t$-test is the total number of observations in the groups minus $2$, or $n_1+n_2-2$. Assumptions The $t$-test assumes that the observations within each group are normally distributed. Fortunately, it is not at all sensitive to deviations from this assumption, if the distributions of the two groups are the same (if both distributions are skewed to the right, for example). I've done simulations with a variety of non-normal distributions, including flat, bimodal, and highly skewed, and the two-sample $t$-test always gives about $5\%$ false positives, even with very small sample sizes. If your data are severely non-normal, you should still try to find a data transformation that makes them more normal, but don't worry if you can't find a good transformation or don't have enough data to check the normality. If your data are severely non-normal, and you have different distributions in the two groups (one data set is skewed to the right and the other is skewed to the left, for example), and you have small samples (less than $50$ or so), then the two-sample $t$-test can give inaccurate results, with considerably more than $5\%$ false positives. A data transformation won't help you here, and neither will a Mann-Whitney U-test. It would be pretty unusual in biology to have two groups with different distributions but equal means, but if you think that's a possibility, you should require a $P$ value much less than $0.05$ to reject the null hypothesis. 
The two-sample $t$-test also assumes homoscedasticity (equal variances in the two groups). If you have a balanced design (equal sample sizes in the two groups), the test is not very sensitive to heteroscedasticity unless the sample size is very small (less than $10$ or so); the standard deviations in one group can be several times as big as in the other group, and you'll get $P< 0.05$ about $5\%$ of the time if the null hypothesis is true. With an unbalanced design, heteroscedasticity is a bigger problem; if the group with the smaller sample size has a bigger standard deviation, the two-sample $t$-test can give you false positives much too often. If your two groups have standard deviations that are substantially different (such as one standard deviation is twice as big as the other), and your sample sizes are small (less than $10$) or unequal, you should use Welch's t–test instead.

Example

In fall 2004, students in the $2p.m.$ section of my Biological Data Analysis class had an average height of $66.6$ inches, while the average height in the $5p.m.$ section was $64.6$ inches. Are the average heights of the two sections significantly different? Here are the data:

2 p.m.   5 p.m.
69       68
70       62
66       67
63       68
68       69
70       67
69       61
67       59
62       62
63       61
76       69
59       66
62       62
62       62
75       61
62       70
72
63

There is one measurement variable, height, and one nominal variable, class section. The null hypothesis is that the mean heights in the two sections are the same. The results of the $t$–test ($t=1.29$, $32 d.f.$, $P=0.21$) do not reject the null hypothesis.

Graphing the results

Because it's just comparing two numbers, you'll rarely put the results of a $t$-test in a graph for publication. For a presentation, you could draw a bar graph like the one for a one-way anova.

Similar tests

Student's $t$-test is mathematically identical to a one-way anova done on data with two categories; you will get the exact same $P$ value from a two-sample $t$-test and from a one-way anova, even though you calculate the test statistics differently. The $t$-test is easier to do and is familiar to more people, but it is limited to just two categories of data. You can do a one-way anova on two or more categories. I recommend that if your research always involves comparing just two means, you should call your test a two-sample $t$-test, because it is more familiar to more people. If you write a paper that includes some comparisons of two means and some comparisons of more than two means, you may want to call all the tests one-way anovas, rather than switching back and forth between two different names ($t$-test and one-way anova) for the same thing.

The Mann-Whitney U-test is a non-parametric alternative to the two-sample $t$-test that some people recommend for non-normal data. However, if the two samples have the same distribution, the two-sample $t$-test is not sensitive to deviations from normality, so you can use the more powerful and more familiar $t$-test instead of the Mann-Whitney U-test. If the two samples have different distributions, the Mann-Whitney U-test is no better than the $t$-test. So there's really no reason to use the Mann-Whitney U-test unless you have a true ranked variable instead of a measurement variable.

If the variances are far from equal (one standard deviation is two or more times as big as the other) and your sample sizes are either small (less than $10$) or unequal, you should use Welch's $t$-test (also known as Aspin-Welch, Welch-Satterthwaite, Aspin-Welch-Satterthwaite, or Satterthwaite $t$-test).
It is similar to Student's $t$-test except that it does not assume that the standard deviations are equal. It is slightly less powerful than Student's $t$-test when the standard deviations are equal, but it can be much more accurate when the standard deviations are very unequal. My two-sample $t$-test spreadsheet will calculate Welch's t–test. You can also do Welch's $t$-test using this web page, by clicking the button labeled "Welch's unpaired $t$-test". Use the paired t–test when the measurement observations come in pairs, such as comparing the strengths of the right arm with the strength of the left arm on a set of people. Use the one-sample t–test when you have just one group, not two, and you are comparing the mean of the measurement variable for that group to a theoretical expectation. How to do the test Spreadsheets I've set up a spreadsheet for two-sample t–tests twosamplettest.xls. It will perform either Student's $t$-test or Welch's $t$-test for up to $2000$ observations in each group. Web pages There are web pages to do the $t$-test here and here. Both will do both the Student's $t$-test and Welch's t-test. R Salvatore Mangiafico's $R$ Companion has a sample R programs for the two-sample t–test and Welch's test. SAS You can use PROC TTEST for Student's $t$-test; the CLASS parameter is the nominal variable, and the VAR parameter is the measurement variable. Here is an example program for the height data above. DATA sectionheights; INPUT section \$ height @@; DATALINES; 2pm 69 2pm 70 2pm 66 2pm 63 2pm 68 2pm 70 2pm 69 2pm 67 2pm 62 2pm 63 2pm 76 2pm 59 2pm 62 2pm 62 2pm 75 2pm 62 2pm 72 2pm 63 5pm 68 5pm 62 5pm 67 5pm 68 5pm 69 5pm 67 5pm 61 5pm 59 5pm 62 5pm 61 5pm 69 5pm 66 5pm 62 5pm 62 5pm 61 5pm 70 ; PROC TTEST; CLASS section; VAR height; RUN; The output includes a lot of information; the $P$ value for the Student's t-test is under "Pr > |t| on the line labeled "Pooled", and the $P$ value for Welch's $t$-test is on the line labeled "Satterthwaite." For these data, the $P$ value is $0.2067$ for Student's $t$-test and $0.1995$ for Welch's. Variable Method Variances DF t Value Pr > |t| height Pooled Equal 32 1.29 0.2067 height Satterthwaite Unequal 31.2 1.31 0.1995 Power analysis To estimate the sample sizes needed to detect a significant difference between two means, you need the following: • the effect size, or the difference in means you hope to detect; • the standard deviation. Usually you'll use the same value for each group, but if you know ahead of time that one group will have a larger standard deviation than the other, you can use different numbers; • alpha, or the significance level (usually $0.05$); • beta, the probability of accepting the null hypothesis when it is false ($0.50$, $0.80$, and $0.90$ are common values); • the ratio of one sample size to the other. The most powerful design is to have equal numbers in each group ($N_1/N_2=1.0$), but sometimes it's easier to get large numbers of one of the groups. For example, if you're comparing the bone strength in mice that have been reared in zero gravity aboard the International Space Station vs. control mice reared on earth, you might decide ahead of time to use three control mice for every one expensive space mouse ($N_1/N_2=3.0$). The G*Power program will calculate the sample size needed for a two-sample $t$-test. Choose "t tests" from the "Test family" menu and "Means: Difference between two independent means (two groups" from the "Statistical test" menu. 
Click on the "Determine" button and enter the means and standard deviations you expect for each group. Only the difference between the group means is important; it is your effect size. Click on "Calculate and transfer to main window". Change "tails" to two, set your alpha (this will almost always be $0.05$) and your power ($0.5$, $0.8$, and $0.9$ are commonly used). If you plan to have more observations in one group than in the other, you can make the "Allocation ratio" different from $1$. As an example, let's say you want to know whether people who run regularly have wider feet than people who don't run. You look for previously published data on foot width and find the ANSUR data set, which shows a mean foot width for American men of $100.6mm$ and a standard deviation of $5.26mm$. You decide that you'd like to be able to detect a difference of $3mm$ in mean foot width between runners and non-runners, at the $P<0.05$ level, with a probability of detecting a difference this large, if it exists, of $90\%$ ($1-\text{beta}=0.90$). Using G*Power, you enter $100mm$ for the mean of group $1$, $103mm$ for the mean of group $2$, and $5.26mm$ for the standard deviation of each group. Entering all these numbers in G*Power gives a sample size for each group of $66$ people.
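As a rough cross-check on G*Power, here is a minimal sketch of the same calculation in Python, assuming the statsmodels package is available (Python is not otherwise covered on this page); the effect size is given as Cohen's $d$, the difference in means divided by the standard deviation.

from statsmodels.stats.power import TTestIndPower

# Foot-width example: sample size per group needed to detect a 3 mm difference
# when the standard deviation is 5.26 mm, alpha = 0.05, power = 0.90.
effect_size = 3.0 / 5.26   # Cohen's d: difference in means / standard deviation
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05,
                                          power=0.90,
                                          ratio=1.0,              # equal group sizes
                                          alternative='two-sided')
print(n_per_group)   # roughly 66 per group, which should match the G*Power result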
Learning Objectives • Most statistical tests assume that you have a sample of independent observations, meaning that the value of one observation does not affect the value of other observations. Non-independent observations can make your statistical test give too many false positives. Measurement variables One of the assumptions of most tests is that the observations are independent of each other. This assumption is violated when the value of one observation tends to be too similar to the values of other observations. For example, let's say you wanted to know whether calico cats had a different mean weight than black cats. You get five calico cats, five black cats, weigh them, and compare the mean weights with a two-sample t–test. If the five calico cats are all from one litter, and the five black cats are all from a second litter, then the measurements are not independent. Some cat parents have small offspring, while some have large; so if Josie the calico cat is small, her sisters Valerie and Melody are not independent samples of all calico cats, they are instead also likely to be small. Even if the null hypothesis (that calico and black cats have the same mean weight) is true, your chance of getting a $P$ value less than $0.05$ could be much greater than $5\%$. A common source of non-independence is that observations are close together in space or time. For example, let's say you wanted to know whether tigers in a zoo were more active in the morning or the evening. As a measure of activity, you put a pedometer on Sally the tiger and count the number of steps she takes in a one-minute period. If you treat the number of steps Sally takes between $10:00a.m.$ and $10:01a.m.$ as one observation, and the number of steps between $10:01a.m.$ and $10:02a.m.$ as a separate observation, these observations are not independent. If Sally is sleeping from $10:00$ to $10:01$, she's probably still sleeping from $10:01$ to $10:02$; if she's pacing back and forth between $10:00$ and $10:01$, she's probably still pacing between $10:01$ and $10:02$. If you take five observations between $10:00$ and $10:05$ and compare them with five observations you take between $3:00$ and $3:05$ with a two-sample $t$–test, there a good chance you'll get five low-activity measurements in the morning and five high-activity measurements in the afternoon, or vice-versa. This increases your chance of a false positive; if the null hypothesis is true, lack of independence can give you a significant $P$ value much more than $5\%$ of the time. There are other ways you could get lack of independence in your tiger study. For example, you might put pedometers on four other tigers—Bob, Janet, Ralph, and Loretta—in the same enclosure as Sally, measure the activity of all five of them between $10:00$ and $10:01$, and treat that as five separate observations. However, it may be that when one tiger gets up and starts walking around, the other tigers are likely to follow it around and see what it's doing, while at other times all five tigers are likely to be resting. That would mean that Bob's amount of activity is not independent of Sally's; when Sally is more active, Bob is likely to be more active. Regression and correlation assume that observations are independent. If one of the measurement variables is time, or if the two variables are measured at different times, the data are often non-independent. For example, if I wanted to know whether I was losing weight, I could weigh my self every day and then do a regression of weight vs. day. 
However, my weight on one day is very similar to my weight on the next day. Even if the null hypothesis is true that I'm not gaining or losing weight, the non-independence will make the probability of getting a $P$ value less than $0.05$ much greater than $5\%$. I've put a more extensive discussion of independence on the regression/correlation page. Nominal variables Tests of nominal variables (independence or goodness-of-fit) also assume that individual observations are independent of each other. To illustrate this, let's say I want to know whether my statistics class is more boring than my evolution class. I set up a video camera observing the students in one lecture of each class, then count the number of students who yawn at least once. In statistics, $28$ students yawn and $15$ don't yawn; in evolution, $6$ yawn and $50$ don't yawn. It seems like there's a significantly ($P=2.4\times 10^{-8}$) higher proportion of yawners in the statistics class, but that could be due to chance, because the observations within each class are not independent of each other. Yawning is contagious (so contagious that you're probably yawning right now, aren't you?), which means that if one person near the front of the room in statistics happens to yawn, other people who can see the yawner are likely to yawn as well. So the probability that Ashley in statistics yawns is not independent of whether Sid yawns; once Sid yawns, Ashley will probably yawn as well, and then Megan will yawn, and then Dave will yawn. Solutions for lack of independence Unlike non-normality and heteroscedasticity, it is not easy to look at your data and see whether the data are non-independent. You need to understand the biology of your organisms and carefully design your experiment so that the observations will be independent. For your comparison of the weights of calico cats vs. black cats, you should know that cats from the same litter are likely to be similar in weight; you could therefore make sure to sample only one cat from each of many litters. You could also sample multiple cats from each litter, but treat "litter" as a second nominal variable and analyze the data using nested anova. For Sally the tiger, you might know from previous research that bouts of activity or inactivity in tigers last for $5$ to $10$ minutes, so that you could treat one-minute observations made an hour apart as independent. Or you might know from previous research that the activity of one tiger has no effect on other tigers, so measuring activity of five tigers at the same time would actually be okay. To really see whether students yawn more in my statistics class, I should set up partitions so that students can't see or hear each other yawning while I lecture. For regression and correlation analyses of data collected over a length of time, there are statistical tests developed for time series. I don't cover them in this handbook; if you need to analyze time series data, find out how other people in your field analyze similar data.
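To see how much damage non-independence can do, here is a small simulation sketch in Python (the sizes of the litter effect and of the within-litter variation below are made up purely for illustration): each group of five cats shares a single litter effect, the null hypothesis of equal means is true, yet a naive two-sample $t$–test rejects it far more than $5\%$ of the time.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims = 10_000
false_pos = 0
for _ in range(n_sims):
    # The null hypothesis is true: calico and black cats have the same mean weight.
    # But each group of five cats comes from a single litter, and all cats in a
    # litter share the same litter effect -- this is the non-independence.
    litter_calico = rng.normal(0, 1)                       # shared litter effect
    litter_black = rng.normal(0, 1)
    calico = litter_calico + rng.normal(0, 0.5, size=5)    # five calico littermates
    black = litter_black + rng.normal(0, 0.5, size=5)      # five black littermates
    if stats.ttest_ind(calico, black).pvalue < 0.05:
        false_pos += 1
print(false_pos / n_sims)   # far above 0.05, even though the null hypothesis is true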
Learning Objectives • Most tests for measurement variables assume that data are normally distributed (fit a bell-shaped curve). Here I explain how to check this and what to do if the data aren't normal. Introduction A probability distribution specifies the probability of getting an observation in a particular range of values; the normal distribution is the familiar bell-shaped curve, with a high probability of getting an observation near the middle and lower probabilities as you get further from the middle. A normal distribution can be completely described by just two numbers, or parameters, the mean and the standard deviation; all normal distributions with the same mean and same standard deviation will be exactly the same shape. One of the assumptions of an anova and other tests for measurement variables is that the data fit the normal probability distribution. Because these tests assume that the data can be described by two parameters, the mean and standard deviation, they are called parametric tests. Fig. 4.4.1 Histogram of dry weights of the amphipod crustacean Platorchestia platensis. When you plot a frequency histogram of measurement data, the frequencies should approximate the bell-shaped normal distribution. For example, the figure shown at the right is a histogram of dry weights of newly hatched amphipods (Platorchestia platensis), data I tediously collected for my Ph.D. research. It fits the normal distribution pretty well. Many biological variables fit the normal distribution quite well. This is a result of the central limit theorem, which says that when you take a large number of random numbers, the means of those numbers are approximately normally distributed. If you think of a variable like weight as resulting from the effects of a bunch of other variables averaged together—age, nutrition, disease exposure, the genotype of several genes, etc.—it's not surprising that it would be normally distributed. Other data sets don't fit the normal distribution very well. The histogram on the top is the level of sulphate in Maryland streams (data from the Maryland Biological Stream Survey). It doesn't fit the normal curve very well, because there are a small number of streams with very high levels of sulphate. The histogram on the bottom is the number of egg masses laid by indivuduals of the lentago host race of the treehopper Enchenopa (unpublished data courtesy of Michael Cast). The curve is bimodal, with one peak at around \(14\) egg masses and the other at zero. Parametric tests assume that your data fit the normal distribution. If your measurement variable is not normally distributed, you may be increasing your chance of a false positive result if you analyze the data with a test that assumes normality. What to do about non-normality Once you have collected a set of measurement data, you should look at the frequency histogram to see if it looks non-normal. There are statistical tests of the goodness-of-fit of a data set to the normal distribution, but I don't recommend them, because many data sets that are significantly non-normal would be perfectly appropriate for an anova or other parametric test. Fortunately, an anova is not very sensitive to moderate deviations from normality; simulation studies, using a variety of non-normal distributions, have shown that the false positive rate is not affected very much by this violation of the assumption (Glass et al. 1972, Harwell et al. 1992, Lix et al. 1996). 
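You can check this kind of robustness yourself with a short simulation; here is a minimal sketch in Python using scipy (the choice of an exponential distribution and of the sample sizes is arbitrary, picked only to illustrate the point).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims = 10_000
false_pos = 0
for _ in range(n_sims):
    # All three groups come from the same strongly skewed (exponential) distribution,
    # so the null hypothesis of equal means is true but normality is violated.
    groups = [rng.exponential(scale=1.0, size=10) for _ in range(3)]
    if stats.f_oneway(*groups).pvalue < 0.05:
        false_pos += 1
print(false_pos / n_sims)   # stays close to 0.05 despite the non-normality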
This is another result of the central limit theorem, which says that when you take a large number of random samples from a population, the means of those samples are approximately normally distributed even when the population is not normal. Because parametric tests are not very sensitive to deviations from normality, I recommend that you don't worry about it unless your data appear very, very non-normal to you. This is a subjective judgement on your part, but there don't seem to be any objective rules on how much non-normality is too much for a parametric test. You should look at what other people in your field do; if everyone transforms the kind of data you're collecting, or uses a non-parametric test, you should consider doing what everyone else does even if the non-normality doesn't seem that bad to you. If your histogram looks like a normal distribution that has been pushed to one side, like the sulphate data above, you should try different data transformations to see if any of them make the histogram look more normal. It's best if you collect some data, check the normality, and decide on a transformation before you run your actual experiment; you don't want cynical people to think that you tried different transformations until you found one that gave you a signficant result for your experiment. If your data still look severely non-normal no matter what transformation you apply, it's probably still okay to analyze the data using a parametric test; they're just not that sensitive to non-normality. However, you may want to analyze your data using a non-parametric test. Just about every parametric statistical test has a non-parametric substitute, such as the Kruskal–Wallis test instead of a one-way anova, Wilcoxon signed-rank test instead of a paired \(t\)–test, and Spearman rank correlation instead of linear regression/correlation. These non-parametric tests do not assume that the data fit the normal distribution. They do assume that the data in different groups have the same distribution as each other, however; if different groups have different shaped distributions (for example, one is skewed to the left, another is skewed to the right), a non-parametric test will not be any better than a parametric one. Skewness and kurtosis A histogram with a long tail on the right side, such as the sulphate data above, is said to be skewed to the right; a histogram with a long tail on the left side is said to be skewed to the left. There is a statistic to describe skewness, \(g_1\), but I don't know of any reason to calculate it; there is no rule of thumb that you shouldn't do a parametric test if \(g_1\) is greater than some cutoff value. Another way in which data can deviate from the normal distribution is kurtosis. A histogram that has a high peak in the middle and long tails on either side is leptokurtic; a histogram with a broad, flat middle and short tails is platykurtic. The statistic to describe kurtosis is \(g_2\), but I can't think of any reason why you'd want to calculate it, either. How to look at normality Spreadsheet I've written a spreadsheet that will plot a frequency histogram histogram.xls for untransformed, log-transformed and square-root transformed data. It will handle up to \(1000\) observations. If there are not enough observations in each group to check normality, you may want to examine the residuals (each observation minus the mean of its group). To do this, open a separate spreadsheet and put the numbers from each group in a separate column. 
Then create columns with the mean of each group subtracted from each observation in its group, as shown below. Copy these numbers into the histogram spreadsheet. Web pages There are several web pages that will produce histograms, but most of them aren't very good; this histogram calculator is the best I've found. SAS You can use the PLOTS option in PROC UNIVARIATE to get a stem-and-leaf display, which is a kind of very crude histogram. You can also use the HISTOGRAM option to get an actual histogram, but only if you know how to send the output to a graphics device driver.
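If you work in Python instead of a spreadsheet, the same checks can be sketched with numpy and matplotlib (assumed to be installed; the numbers below are placeholders for your own measurements).

import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: put one array per group of your own measurements here.
groups = [np.array([4.2, 5.1, 3.8, 6.0, 4.9]),
          np.array([7.3, 8.1, 6.5, 7.8, 9.0])]

raw = np.concatenate(groups)
# Residuals: each observation minus the mean of its own group.
residuals = np.concatenate([g - g.mean() for g in groups])

fig, axes = plt.subplots(1, 4, figsize=(12, 3))
axes[0].hist(raw)
axes[0].set_title("untransformed")
axes[1].hist(np.log10(raw))
axes[1].set_title("log transformed")
axes[2].hist(np.sqrt(raw))
axes[2].set_title("square-root transformed")
axes[3].hist(residuals)
axes[3].set_title("residuals")
plt.show()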
Learning Objectives • Parametric tests assume that data are homoscedastic (have the same standard deviation in different groups). • To learn how to check this and what to do if the data are heteroscedastic (have different standard deviations in different groups). One of the assumptions of an anova and other parametric tests is that the within-group standard deviations of the groups are all the same (exhibit homoscedasticity). If the standard deviations are different from each other (exhibit heteroscedasticity), the probability of obtaining a false positive result even though the null hypothesis is true may be greater than the desired alpha level. To illustrate this problem, I did simulations of samples from three populations, all with the same population mean. I simulated taking samples of \(10\) observations from population \(A\), \(7\) from population \(B\), and \(3\) from population \(C\), and repeated this process thousands of times. When the three populations were homoscedastic (had the same standard deviation), the one-way anova on the simulated data sets were significant (\(P<0.05\)) about \(5\%\) of the time, as they should be. However, when I made the standard deviations different (\(1.0\) for population \(A\), \(2.0\) for population \(B\), and \(3.0\) for population \(C\)), I got a \(P\) value less than \(0.05\) in about \(18\%\) of the simulations. In other words, even though the population means were really all the same, my chance of getting a false positive result was \(18\%\), not the desired \(5\%\). There have been a number of simulation studies that have tried to determine when heteroscedasticity is a big enough problem that other tests should be used. Heteroscedasticity is much less of a problem when you have a balanced design (equal sample sizes in each group). Early results suggested that heteroscedasticity was not a problem at all with a balanced design (Glass et al. 1972), but later results found that large amounts of heteroscedasticity can inflate the false positive rate, even when the sample sizes are equal (Harwell et al. 1992). The problem of heteroscedasticity is much worse when the sample sizes are unequal (an unbalanced design) and the smaller samples are from populations with larger standard deviations; but when the smaller samples are from populations with smaller standard deviations, the false positive rate can actually be much less than 0.05, meaning the power of the test is reduced (Glass et al. 1972). What to do about heteroscedasticity You should always compare the standard deviations of different groups of measurements, to see if they are very different from each other. However, despite all of the simulation studies that have been done, there does not seem to be a consensus about when heteroscedasticity is a big enough problem that you should not use a test that assumes homoscedasticity. If you see a big difference in standard deviations between groups, the first things you should try are data transformations. A common pattern is that groups with larger means also have larger standard deviations, and a log or square-root transformation will often fix this problem. It's best if you can choose a transformation based on a pilot study, before you do your main experiment; you don't want cynical people to think that you chose a transformation because it gave you a significant result. If the standard deviations of your groups are very heterogeneous no matter what transformation you apply, there are a large number of alternative tests to choose from (Lix et al. 
1996). The most commonly used alternative to one-way anova is Welch's anova, sometimes called Welch's t–test when there are two groups. Non-parametric tests, such as the Kruskal–Wallis test instead of a one-way anova, do not assume normality, but they do assume that the shapes of the distributions in different groups are the same. This means that non-parametric tests are not a good solution to the problem of heteroscedasticity. All of the discussion above has been about one-way anovas. Homoscedasticity is also an assumption of other anovas, such as nested and two-way anovas, and regression and correlation. Much less work has been done on the effects of heteroscedasticity on these tests; all I can recommend is that you inspect the data for heteroscedasticity and hope that you don't find it, or that a transformation will fix it. Bartlett's test There are several statistical tests for homoscedasticity, and the most popular is Bartlett's test. Use this test when you have one measurement variable, one nominal variable, and you want to test the null hypothesis that the standard deviations of the measurement variable are the same for the different groups. Bartlett's test is not a particularly good one, because it is sensitive to departures from normality as well as heteroscedasticity; you shouldn't panic just because you have a significant Bartlett's test. It may be more helpful to use Bartlett's test to see what effect different transformations have on the heteroscedasticity; you can choose the transformation with the highest (least significant) \(P\) value for Bartlett's test. An alternative to Bartlett's test that I won't cover here is Levene's test. It is less sensitive to departures from normality, but if the data are approximately normal, it is less powerful than Bartlett's test. While Bartlett's test is usually used when examining data to see if it's appropriate for a parametric test, there are times when testing the equality of standard deviations is the primary goal of an experiment. For example, let's say you want to know whether variation in stride length among runners is related to their level of experience—maybe as people run more, those who started with unusually long or short strides gradually converge on some ideal stride length. You could measure the stride length of non-runners, beginning runners, experienced amateur runners, and professional runners, with several individuals in each group, then use Bartlett's test to see whether there was significant heterogeneity in the standard deviations. How to do Bartlett's test Spreadsheet I have put together a spreadsheet that performs Bartlett's test for homogeneity of standard deviations bartletts.xls for up to \(1000\) observations in each of up to \(50\) groups. It allows you to see what the log or square-root transformation will do. It also shows a graph of the standard deviations plotted vs. the means. This gives you a quick visual display of the difference in amount of variation among the groups, and it also shows whether the mean and standard deviation are correlated. Entering the mussel shell data from the one-way anova web page into the spreadsheet, the \(P\) values are \(0.655\) for untransformed data, \(0.856\) for square-root transformed, and \(0.929\) for log-transformed data. None of these is close to significance, so there's no real need to worry. 
The graph of the untransformed data hints at a correlation between the mean and the standard deviation, so it might be a good idea to log-transform the data: Web page There is a web page for Bartlett's test that will handle up to \(14\) groups. You have to enter the variances (not standard deviations) and sample sizes, not the raw data. SAS You can use the HOVTEST=BARTLETT option in the MEANS statement of PROC GLM to perform Bartlett's test. This modification of the program from the one-way anova page does Bartlett's test. PROC GLM DATA=musselshells; CLASS location; MODEL aam = location; MEANS location / HOVTEST=BARTLETT; RUN;
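If you would rather check these things in Python, here is a minimal sketch using scipy (assumed to be installed): it repeats the simulation described at the top of this page, then runs Bartlett's test on one simulated data set.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sizes = (10, 7, 3)        # sample sizes for populations A, B, and C
sds = (1.0, 2.0, 3.0)     # unequal standard deviations; all population means are 0

# How often does a one-way anova reject a true null hypothesis with this
# unbalanced, heteroscedastic design?
n_sims = 10_000
false_pos = 0
for _ in range(n_sims):
    samples = [rng.normal(0, sd, size=n) for n, sd in zip(sizes, sds)]
    if stats.f_oneway(*samples).pvalue < 0.05:
        false_pos += 1
print(false_pos / n_sims)   # well above 0.05 (the text above reports about 18%)

# Bartlett's test for equal standard deviations on one simulated data set:
samples = [rng.normal(0, sd, size=n) for n, sd in zip(sizes, sds)]
print(stats.bartlett(*samples))   # a small P value is evidence of heteroscedasticity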
Learning Objectives • To learn how to use data transformation if a measurement variable does not fit a normal distribution or has greatly different standard deviations in different groups. Introduction Many biological variables do not meet the assumptions of parametric statistical tests: they are not normally distributed, the standard deviations are not homogeneous, or both. Using a parametric statistical test (such as an anova or linear regression) on such data may give a misleading result. In some cases, transforming the data will make it fit the assumptions better. To transform data, you perform a mathematical operation on each observation, then use these transformed numbers in your statistical test. For example, as shown in the first graph above, the abundance of the fish species Umbra pygmaea (Eastern mudminnow) in Maryland streams is non-normally distributed; there are a lot of streams with a small density of mudminnows, and a few streams with lots of them. Applying the log transformation makes the data more normal, as shown in the second graph. Here are $12$ numbers from the mudminnow data set; the first column is the untransformed data, the second column is the square root of the number in the first column, and the third column is the base-$10$ logarithm of the number in the first column. Untransformed Square-root transformed Log transformed 38 6.164 1.580 1 1.000 0.000 13 3.606 1.114 2 1.414 0.301 13 3.606 1.114 20 4.472 1.301 50 7.071 1.699 9 3.000 0.954 28 5.292 1.447 6 2.449 0.778 4 2.000 0.602 43 6.557 1.633 You do the statistics on the transformed numbers. For example, the mean of the untransformed data is $18.9$; the mean of the square-root transformed data is $3.89$; the mean of the log transformed data is $1.044$. If you were comparing the fish abundance in different watersheds, and you decided that log transformation was the best, you would do a one-way anova on the logs of fish abundance, and you would test the null hypothesis that the means of the log-transformed abundances were equal. Back transformation Even though you've done a statistical test on a transformed variable, such as the log of fish abundance, it is not a good idea to report your means, standard errors, etc. in transformed units. A graph that showed that the mean of the log of fish per $75m$ of stream was $1.044$ would not be very informative for someone who can't do fractional exponents in their head. Instead, you should back-transform your results. This involves doing the opposite of the mathematical function you used in the data transformation. For the log transformation, you would back-transform by raising 10 to the power of your number. For example, the log transformed data above has a mean of $1.044$ and a $95\%$ confidence interval of $\pm 0.344$ log-transformed fish. The back-transformed mean would be $10^{1.044}=11.1$ fish. The upper confidence limit would be $10^{(1.044+0.344)}=24.4$ fish, and the lower confidence limit would be $10^{(1.044-0.344)}=5.0$ fish. Note that the confidence interval is not symmetrical; the upper limit is $13.3$ fish above the mean, while the lower limit is $6.1$ fish below the mean. Also note that you can't just back-transform the confidence interval and add or subtract that from the back-transformed mean; you can't take $10^{0.344}$ and add or subtract that. Choosing the right transformation Data transformations are an important tool for the proper statistical analysis of biological data. 
To those with a limited knowledge of statistics, however, they may seem a bit fishy, a form of playing around with your data in order to get the answer you want. It is therefore essential that you be able to defend your use of data transformations. There are an infinite number of transformations you could use, but it is better to use a transformation that other researchers commonly use in your field, such as the square-root transformation for count data or the log transformation for size data. Even if an obscure transformation that not many people have heard of gives you slightly more normal or more homoscedastic data, it will probably be better to use a more common transformation so people don't get suspicious. Remember that your data don't have to be perfectly normal and homoscedastic; parametric tests aren't extremely sensitive to deviations from their assumptions. It is also important that you decide which transformation to use before you do the statistical test. Trying different transformations until you find one that gives you a significant result is cheating. If you have a large number of observations, compare the effects of different transformations on the normality and the homoscedasticity of the variable. If you have a small number of observations, you may not be able to see much effect of the transformations on the normality and homoscedasticity; in that case, you should use whatever transformation people in your field routinely use for your variable. For example, if you're studying pollen dispersal distance and other people routinely log-transform it, you should log-transform pollen distance too, even if you only have $10$ observations and therefore can't really look at normality with a histogram. Common transformations There are many transformations that are used occasionally in biology; here are three of the most common: Log transformation This consists of taking the log of each observation. You can use either base-$10$ logs (LOG in a spreadsheet, LOG10 in SAS) or base-$e$ logs, also known as natural logs (LN in a spreadsheet, LOG in SAS). It makes no difference for a statistical test whether you use base-$10$ logs or natural logs, because they differ by a constant factor; the base-$10$ log of a number is just $2.303…\times \text{the\; natural\; log\; of\; the\; number}$. You should specify which log you're using when you write up the results, as it will affect things like the slope and intercept in a regression. I prefer base-$10$ logs, because it's possible to look at them and see the magnitude of the original number: $log(1)=0,\; log(10)=1,\; log(100)=2$, etc. The back transformation is to raise $10$ or $e$ to the power of the number; if the mean of your base-$10$ log-transformed data is $1.43$, the back transformed mean is $10^{1.43}=26.9$ (in a spreadsheet, "=10^1.43"). If the mean of your base-e log-transformed data is $3.65$, the back transformed mean is $e^{3.65}=38.5$ (in a spreadsheet, "=EXP(3.65)". If you have zeros or negative numbers, you can't take the log; you should add a constant to each number to make them positive and non-zero. If you have count data, and some of the counts are zero, the convention is to add $0.5$ to each number. Many variables in biology have log-normal distributions, meaning that after log-transformation, the values are normally distributed. This is because if you take a bunch of independent factors and multiply them together, the resulting product is log-normal. 
For example, let's say you've planted a bunch of maple seeds, then $10$ years later you see how tall the trees are. The height of an individual tree would be affected by the nitrogen in the soil, the amount of water, amount of sunlight, amount of insect damage, etc. Having more nitrogen might make a tree $10\%$ larger than one with less nitrogen; the right amount of water might make it $30\%$ larger than one with too much or too little water; more sunlight might make it $20\%$ larger; less insect damage might make it $15\%$ larger, etc. Thus the final size of a tree would be a function of $\text{nitrogen}\times \text{water}\times \text{sunlight}\times \text{insects}$, and mathematically, this kind of function turns out to be log-normal. Square-root transformation This consists of taking the square root of each observation. The back transformation is to square the number. If you have negative numbers, you can't take the square root; you should add a constant to each number to make them all positive. People often use the square-root transformation when the variable is a count of something, such as bacterial colonies per petri dish, blood cells going through a capillary per minute, mutations per generation, etc. Arcsine transformation This consists of taking the arcsine of the square root of a number. (The result is given in radians, not degrees, and can range from $-\pi /2\; to\; \pi /2$.) The numbers to be arcsine transformed must be in the range $0$ to $1$. This is commonly used for proportions, which range from $0$ to $1$, such as the proportion of female Eastern mudminnows that are infested by a parasite. Note that this kind of proportion is really a nominal variable, so it is incorrect to treat it as a measurement variable, whether or not you arcsine transform it. For example, it would be incorrect to count the number of mudminnows that are or are not parasitized each of several streams in Maryland, treat the arcsine-transformed proportion of parasitized females in each stream as a measurement variable, then perform a linear regression on these data vs. stream depth. This is because the proportions from streams with a smaller sample size of fish will have a higher standard deviation than proportions from streams with larger samples of fish, information that is disregarded when treating the arcsine-transformed proportions as measurement variables. Instead, you should use a test designed for nominal variables; in this example, you should do logistic regression instead of linear regression. If you insist on using the arcsine transformation, despite what I've just told you, the back-transformation is to square the sine of the number. How to transform data Spreadsheet In a blank column, enter the appropriate function for the transformation you've chosen. For example, if you want to transform numbers that start in cell $A2$, you'd go to cell $B2$ and enter =LOG(A2) or =LN(A2) to log transform, =SQRT(A2) to square-root transform, or =ASIN(SQRT(A2)) to arcsine transform. Then copy cell $B2$ and paste into all the cells in column $B$ that are next to cells in column $A$ that contain data. To copy and paste the transformed values into another spreadsheet, remember to use the "Paste Special..." command, then choose to paste "Values." Using the "Paste Special...Values" command makes Excel copy the numerical result of an equation, rather than the equation itself. 
(If your spreadsheet is Calc, choose "Paste Special" from the Edit menu, uncheck the boxes labeled "Paste All" and "Formulas," and check the box labeled "Numbers.") To back-transform data, just enter the inverse of the function you used to transform the data. To back-transform log transformed data in cell $B2$, enter =10^B2 for base-$10$ logs or =EXP(B2) for natural logs; for square-root transformed data, enter =B2^2; for arcsine transformed data, enter =(SIN(B2))^2. Web pages I'm not aware of any web pages that will do data transformations. SAS To transform data in SAS, read in the original data, then create a new variable with the appropriate function. This example shows how to create two new variables, square-root transformed and log transformed, of the mudminnow data. DATA mudminnow; INPUT location $ banktype $ count; countlog=log10(count); countsqrt=sqrt(count); DATALINES; Gwynn_1 forest 38 Gwynn_2 urban 1 Gwynn_3 urban 13 Jones_1 urban 2 Jones_2 forest 13 LGunpowder_1 forest 20 LGunpowder_2 field 50 LGunpowder_3 forest 9 BGunpowder_1 forest 28 BGunpowder_2 forest 6 BGunpowder_3 forest 4 BGunpowder_4 field 43 ; The dataset "mudminnow" contains all the original variables ("location", "banktype" and "count") plus the new variables ("countlog" and "countsqrt"). You then run whatever PROC you want and analyze these variables just like you would any others. Of course, this example does two different transformations only as an illustration; in reality, you should decide on one transformation before you analyze your data. The SAS function for arcsine-transforming X is ARSIN(SQRT(X)). You'll probably find it easiest to back-transform using a spreadsheet or calculator, but if you really want to do everything in SAS, the function for taking $10$ to the $X$ power is 10**X; the function for taking $e$ to a power is EXP(X); the function for squaring $X$ is X**2; and the function for back-transforming an arcsine transformed number is SIN(X)**2. Reference Picture of a mudminnow from The Virtual Aquarium of Virginia.
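If you want to do the transformations in Python instead of a spreadsheet or SAS, here is a minimal sketch using numpy (assumed to be installed); it just repeats the mudminnow example from the top of this page, plus an arcsine example with a made-up proportion.

import numpy as np

counts = np.array([38, 1, 13, 2, 13, 20, 50, 9, 28, 6, 4, 43])   # mudminnow counts

log_counts = np.log10(counts)     # log transformation (base 10)
sqrt_counts = np.sqrt(counts)     # square-root transformation

print(counts.mean())              # about 18.9
print(sqrt_counts.mean())         # about 3.89
print(log_counts.mean())          # about 1.044

# Back-transform the mean of the log-transformed data:
print(10 ** log_counts.mean())    # about 11.1 fish

# Arcsine transformation of a proportion (result in radians) and its back-transformation;
# the proportion here is made up for illustration.
p = 0.25
arcsine_p = np.arcsin(np.sqrt(p))
print(np.sin(arcsine_p) ** 2)     # recovers 0.25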
Learning Objectives • To learn to use one-way anova when you have one nominal variable and one measurement variable; the nominal variable divides the measurements into two or more groups. It tests whether the means of the measurement variable are the same for the different groups. When to use it Analysis of variance (anova) is the most commonly used technique for comparing the means of groups of measurement data. There are lots of different experimental designs that can be analyzed with different kinds of anova; in this handbook, I describe only one-way anova, nested anova and two-way anova. In a one-way anova (also known as a one-factor, single-factor, or single-classification anova), there is one measurement variable and one nominal variable. You make multiple observations of the measurement variable for each value of the nominal variable. For example, here are some data on a shell measurement (the length of the anterior adductor muscle scar, standardized by dividing by length; I'll call this "AAM length") in the mussel Mytilus trossulus from five locations: Tillamook, Oregon; Newport, Oregon; Petersburg, Alaska; Magadan, Russia; and Tvarminne, Finland, taken from a much larger data set used in McDonald et al. (1991). Tillamook Newport Petersburg Magadan Tvarminne 0.0571 0.0873 0.0974 0.1033 0.0703 0.0813 0.0662 0.1352 0.0915 0.1026 0.0831 0.0672 0.0817 0.0781 0.0956 0.0976 0.0819 0.1016 0.0685 0.0973 0.0817 0.0749 0.0968 0.0677 0.1039 0.0859 0.0649 0.1064 0.0697 0.1045 0.0735 0.0835 0.105 0.0764 0.0659 0.0725 0.0689 0.0923 0.0836 The nominal variable is location, with the five values Tillamook, Newport, Petersburg, Magadan, and Tvarminne. There are six to ten observations of the measurement variable, AAM length, from each location. Null hypothesis The statistical null hypothesis is that the means of the measurement variable are the same for the different categories of data; the alternative hypothesis is that they are not all the same. For the example data set, the null hypothesis is that the mean AAM length is the same at each location, and the alternative hypothesis is that the mean AAM lengths are not all the same. How the test works The basic idea is to calculate the mean of the observations within each group, then compare the variance among these means to the average variance within each group. Under the null hypothesis that the observations in the different groups all have the same mean, the weighted among-group variance will be the same as the within-group variance. As the means get further apart, the variance among the means increases. The test statistic is thus the ratio of the variance among means divided by the average variance within groups, or $F_s$. This statistic has a known distribution under the null hypothesis, so the probability of obtaining the observed $F_s$ under the null hypothesis can be calculated. The shape of the $F$-distribution depends on two degrees of freedom, the degrees of freedom of the numerator (among-group variance) and degrees of freedom of the denominator (within-group variance). The among-group degrees of freedom is the number of groups minus one. The within-groups degrees of freedom is the total number of observations, minus the number of groups. Thus if there are $n$ observations in a groups, numerator degrees of freedom is $a-1$ and denominator degrees of freedom is $n-a$. For the example data set, there are $5$ groups and $39$ observations, so the numerator degrees of freedom is $4$ and the denominator degrees of freedom is $34$. 
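If you want a quick cross-check in Python, here is a minimal sketch using scipy (assumed to be installed; the tools described later on this page are a spreadsheet, web pages, R, and SAS).

from scipy import stats

tillamook  = [0.0571, 0.0813, 0.0831, 0.0976, 0.0817, 0.0859, 0.0735,
              0.0659, 0.0923, 0.0836]
newport    = [0.0873, 0.0662, 0.0672, 0.0819, 0.0749, 0.0649, 0.0835, 0.0725]
petersburg = [0.0974, 0.1352, 0.0817, 0.1016, 0.0968, 0.1064, 0.1050]
magadan    = [0.1033, 0.0915, 0.0781, 0.0685, 0.0677, 0.0697, 0.0764, 0.0689]
tvarminne  = [0.0703, 0.1026, 0.0956, 0.0973, 0.1039, 0.1045]

result = stats.f_oneway(tillamook, newport, petersburg, magadan, tvarminne)
print(result)   # should give about F = 7.12 with 4 and 34 d.f., P = 2.8e-4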
Whatever program you use for the anova will almost certainly calculate the degrees of freedom for you. The conventional way of reporting the complete results of an anova is with a table (the "sum of squares" column is often omitted). Here are the results of a one-way anova on the mussel data: sum of squares d.f. mean square Fs P among groups 0.00452 4 0.001113 7.12 2.8×10-4 within groups 0.00539 34 0.000159 total 0.00991 38 If you're not going to use the mean squares for anything, you could just report this as "The means were significantly heterogeneous (one-way anova, $F_{4,34}=7.12\, ,\; P=2.8\times 10^{-4}$)." The degrees of freedom are given as a subscript to $F$, with the numerator first. Note that statisticians often call the within-group mean square the "error" mean square. I think this can be confusing to non-statisticians, as it implies that the variation is due to experimental error or measurement error. In biology, the within-group variation is often largely the result of real, biological variation among individuals, not the kind of mistakes implied by the word "error." That's why I prefer the term "within-group mean square." Assumptions One-way anova assumes that the observations within each group are normally distributed. It is not particularly sensitive to deviations from this assumption; if you apply one-way anova to data that are non-normal, your chance of getting a $P$ value less than $0.05$, if the null hypothesis is true, is still pretty close to $0.05$. It's better if your data are close to normal, so after you collect your data, you should calculate the residuals (the difference between each observation and the mean of its group) and plot them on a histogram. If the residuals look severely non-normal, try data transformations and see if one makes the data look more normal. If none of the transformations you try make the data look normal enough, you can use the Kruskal-Wallis test. Be aware that it makes the assumption that the different groups have the same shape of distribution, and that it doesn't test the same null hypothesis as one-way anova. Personally, I don't like the Kruskal-Wallis test; I recommend that if you have non-normal data that can't be fixed by transformation, you go ahead and use one-way anova, but be cautious about rejecting the null hypothesis if the $P$ value is not very far below $0.05$ and your data are extremely non-normal. One-way anova also assumes that your data are homoscedastic, meaning the standard deviations are equal in the groups. You should examine the standard deviations in the different groups and see if there are big differences among them. If you have a balanced design, meaning that the number of observations is the same in each group, then one-way anova is not very sensitive to heteroscedasticity (different standard deviations in the different groups). I haven't found a thorough study of the effects of heteroscedasticity that considered all combinations of the number of groups, sample size per group, and amount of heteroscedasticity. I've done simulations with two groups, and they indicated that heteroscedasticity will give an excess proportion of false positives for a balanced design only if one standard deviation is at least three times the size of the other, and the sample size in each group is fewer than $10$. I would guess that a similar rule would apply to one-way anovas with more than two groups and balanced designs. 
Heteroscedasticity is a much bigger problem when you have an unbalanced design (unequal sample sizes in the groups). If the groups with smaller sample sizes also have larger standard deviations, you will get too many false positives. The difference in standard deviations does not have to be large; a smaller group could have a standard deviation that's $50\%$ larger, and your rate of false positives could be above $10\%$ instead of at $5\%$ where it belongs. If the groups with larger sample sizes have larger standard deviations, the error is in the opposite direction; you get too few false positives, which might seem like a good thing except it also means you lose power (get too many false negatives, if there is a difference in means). You should try really hard to have equal sample sizes in all of your groups. With a balanced design, you can safely use a one-way anova unless the sample sizes per group are less than $10$ and the standard deviations vary by threefold or more. If you have a balanced design with small sample sizes and very large variation in the standard deviations, you should use Welch's anova instead. If you have an unbalanced design, you should carefully examine the standard deviations. Unless the standard deviations are very similar, you should probably use Welch's anova. It is less powerful than one-way anova for homoscedastic data, but it can be much more accurate for heteroscedastic data from an unbalanced design. Additional Analyses Tukey-Kramer test If you reject the null hypothesis that all the means are equal, you'll probably want to look at the data in more detail. One common way to do this is to compare different pairs of means and see which are significantly different from each other. For the mussel shell example, the overall $P$ value is highly significant; you would probably want to follow up by asking whether the mean in Tillamook is different from the mean in Newport, whether Newport is different from Petersburg, etc. It might be tempting to use a simple two-sample t–test on each pairwise comparison that looks interesting to you. However, this can result in a lot of false positives. When there are $a$ groups, there are $\frac{(a^2-a)}{2}$ possible pairwise comparisons, a number that quickly goes up as the number of groups increases. With $5$ groups, there are $10$ pairwise comparisons; with $10$ groups, there are $45$, and with $20$ groups, there are $190$ pairs. When you do multiple comparisons, you increase the probability that at least one will have a $P$ value less than $0.05$ purely by chance, even if the null hypothesis of each comparison is true. There are a number of different tests for pairwise comparisons after a one-way anova, and each has advantages and disadvantages. The differences among their results are fairly subtle, so I will describe only one, the Tukey-Kramer test. It is probably the most commonly used post-hoc test after a one-way anova, and it is fairly easy to understand. In the Tukey–Kramer method, the minimum significant difference (MSD) is calculated for each pair of means. It depends on the sample size in each group, the average variation within the groups, and the total number of groups. For a balanced design, all of the MSDs will be the same; for an unbalanced design, pairs of groups with smaller sample sizes will have bigger MSDs. If the observed difference between a pair of means is greater than the MSD, the pair of means is significantly different. 
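Here is a sketch of how you might run the Tukey–Kramer comparisons in Python on the mussel data, assuming the statsmodels package is available (the spreadsheet described below does the same job).

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = {
    "Tillamook":  [0.0571, 0.0813, 0.0831, 0.0976, 0.0817, 0.0859, 0.0735,
                   0.0659, 0.0923, 0.0836],
    "Newport":    [0.0873, 0.0662, 0.0672, 0.0819, 0.0749, 0.0649, 0.0835, 0.0725],
    "Petersburg": [0.0974, 0.1352, 0.0817, 0.1016, 0.0968, 0.1064, 0.1050],
    "Magadan":    [0.1033, 0.0915, 0.0781, 0.0685, 0.0677, 0.0697, 0.0764, 0.0689],
    "Tvarminne":  [0.0703, 0.1026, 0.0956, 0.0973, 0.1039, 0.1045],
}
aam = np.concatenate([v for v in data.values()])
location = np.concatenate([[name] * len(v) for name, v in data.items()])

print(pairwise_tukeyhsd(aam, location, alpha=0.05))
# One row per pair of locations, showing the difference in means and whether the
# null hypothesis is rejected for that pair.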
For example, the Tukey MSD for the difference between Newport and Tillamook is $0.0172$. The observed difference between these means is $0.0054$, so the difference is not significant. Newport and Petersburg have a Tukey MSD of $0.0188$; the observed difference is $0.0286$, so it is significant. There are a couple of common ways to display the results of the Tukey–Kramer test. One technique is to find all the sets of groups whose means do not differ significantly from each other, then indicate each set with a different symbol. Location mean AAM Tukey–Kramer Newport 0.0748 a Magadan 0.0780 a, b Tillamook 0.0802 a, b Tvarminne 0.0957 b, c Petersburg 0.103 c Then you explain that "Means with the same letter are not significantly different from each other (Tukey–Kramer test, $P> 0.05$)." This table shows that Newport and Magadan both have an "a", so they are not significantly different; Newport and Tvarminne don't have the same letter, so they are significantly different. Another way you can illustrate the results of the Tukey–Kramer test is with lines connecting means that are not significantly different from each other. This is easiest when the means are sorted from smallest to largest: There are also tests to compare different sets of groups; for example, you could compare the two Oregon samples (Newport and Tillamook) to the two samples from further north in the Pacific (Magadan and Petersburg). The Scheffé test is probably the most common. The problem with these tests is that with a moderate number of groups, the number of possible comparisons becomes so large that the P values required for significance become ridiculously small. Partitioning variance The most familiar one-way anovas are "fixed effect" or "model I" anovas. The different groups are interesting, and you want to know which are different from each other. As an example, you might compare the AAM length of the mussel species Mytilus edulis, Mytilus galloprovincialis, Mytilus trossulus and Mytilus californianus; you'd want to know which had the longest AAM, which was shortest, whether M. edulis was significantly different from M. trossulus, etc. The other kind of one-way anova is a "random effect" or "model II" anova. The different groups are random samples from a larger set of groups, and you're not interested in which groups are different from each other. An example would be taking offspring from five random families of M. trossulus and comparing the AAM lengths among the families. You wouldn't care which family had the longest AAM, and whether family A was significantly different from family B; they're just random families sampled from a much larger possible number of families. Instead, you'd be interested in how the variation among families compared to the variation within families; in other words, you'd want to partition the variance. Under the null hypothesis of homogeneity of means, the among-group mean square and within-group mean square are both estimates of the within-group parametric variance. If the means are heterogeneous, the within-group mean square is still an estimate of the within-group variance, but the among-group mean square estimates the sum of the within-group variance plus the group sample size times the added variance among groups. Therefore subtracting the within-group mean square from the among-group mean square, and dividing this difference by the average group sample size, gives an estimate of the added variance component among groups. 
The equation is: $\text{among-group variance}=\frac{MS_{among}-MS_{within}}{n_o}$ where $n_o$ is a number that is close to, but usually slightly less than, the arithmetic mean of the sample size ($n_i$) of each of the $a$ groups: $n_o=\left ( \frac{1}{a-1} \right )\times \left ( \text{sum}(n_i)-\frac{\text{sum}(n_i^2)}{\text{sum}(n_i)} \right )$ Each component of the variance is often expressed as a percentage of the total variance components. Thus an anova table for a one-way anova would indicate the among-group variance component and the within-group variance component, and these numbers would add to $100\%$. Although statisticians say that each level of an anova "explains" a proportion of the variation, this statistical jargon does not mean that you've found a biological cause-and-effect explanation. If you measure the number of ears of corn per stalk in $10$ random locations in a field, analyze the data with a one-way anova, and say that the location "explains" $74.3\%$ of the variation, you haven't really explained anything; you don't know whether some areas have higher yield because of different water content in the soil, different amounts of insect damage, different amounts of nutrients in the soil, or random attacks by a band of marauding corn bandits. Partitioning the variance components is particularly useful in quantitative genetics, where the within-family component might reflect environmental variation while the among-family component reflects genetic variation. Of course, estimating heritability involves more than just doing a simple anova, but the basic concept is similar. Another area where partitioning variance components is useful is in designing experiments. For example, let's say you're planning a big experiment to test the effect of different drugs on calcium uptake in rat kidney cells. You want to know how many rats to use, and how many measurements to make on each rat, so you do a pilot experiment in which you measure calcium uptake on $6$ rats, with $4$ measurements per rat. You analyze the data with a one-way anova and look at the variance components. If a high percentage of the variation is among rats, that would tell you that there's a lot of variation from one rat to the next, but the measurements within one rat are pretty uniform. You could then design your big experiment to include a lot of rats for each drug treatment, but not very many measurements on each rat. Or you could do some more pilot experiments to try to figure out why there's so much rat-to-rat variation (maybe the rats are different ages, or some have eaten more recently than others, or some have exercised more) and try to control it. On the other hand, if the among-rat portion of the variance was low, that would tell you that the mean values for different rats were all about the same, while there was a lot of variation among the measurements on each rat. You could design your big experiment with fewer rats and more observations per rat, or you could try to figure out why there's so much variation among measurements and control it better. There's an equation you can use for optimal allocation of resources in experiments. It's usually used for nested anova, but you can use it for a one-way anova if the groups are random effect (model II). Partitioning the variance applies only to a model II (random effects) one-way anova.
It doesn't really tell you anything useful about the more common model I (fixed effects) one-way anova, although sometimes people like to report it (because they're proud of how much of the variance their groups "explain," I guess). Example Here are data on the genome size (measured in picograms of DNA per haploid cell) in several large groups of crustaceans, taken from Gregory (2014). The cause of variation in genome size has been a puzzle for a long time; I'll use these data to answer the biological question of whether some groups of crustaceans have different genome sizes than others. Because the data from closely related species would not be independent (closely related species are likely to have similar genome sizes, because they recently descended from a common ancestor), I used a random number generator to randomly choose one species from each family. Amphipods Barnacles Branchiopods Copepods Decapods Isopods Ostracods 0.74 0.67 0.19 0.25 1.60 1.71 0.46 0.95 0.90 0.21 0.25 1.65 2.35 0.70 1.71 1.23 0.22 0.58 1.80 2.40 0.87 1.89 1.40 0.22 0.97 1.90 3.00 1.47 3.80 1.46 0.28 1.63 1.94 5.65 3.13 3.97 2.60 0.30 1.77 2.28 5.70 7.16 0.40 2.67 2.44 6.79 8.48 0.47 5.45 2.66 8.60 13.49 0.63 6.81 2.78 8.82 16.09 0.87 2.80 27.00 2.77 2.83 50.91 2.91 3.01 64.62 4.34 4.50 4.55 4.66 4.70 4.75 4.84 5.23 6.20 8.29 8.53 10.58 15.56 22.16 38.00 38.47 40.89 After collecting the data, the next step is to see if they are normal and homoscedastic. It's pretty obviously non-normal; most of the values are less than $10$, but there are a small number that are much higher. A histogram of the largest group, the decapods (crabs, shrimp and lobsters), makes this clear: The data are also highly heteroscedastic; the standard deviations range from $0.67$ in barnacles to $20.4$ in amphipods. Fortunately, log-transforming the data make them closer to homoscedastic (standard deviations ranging from $0.20$ to $0.63$) and look more normal: Analyzing the log-transformed data with one-way anova, the result is $F_{6,76}=11.72\, ,\; P=2.9\times 10^{-9}$. So there is very significant variation in mean genome size among these seven taxonomic groups of crustaceans. The next step is to use the Tukey-Kramer test to see which pairs of taxa are significantly different in mean genome size. The usual way to display this information is by identifying groups that are not significantly different; here I do this with horizontal bars: This graph suggests that there are two sets of genome sizes, groups with small genomes (branchiopods, ostracods, barnacles, and copepods) and groups with large genomes (decapods and amphipods); the members of each set are not significantly different from each other. Isopods are in the middle; the only group they're significantly different from is branchiopods. So the answer to the original biological question, "do some groups of crustaceans have different genome sizes than others," is yes. Why different groups have different genome sizes remains a mystery. Graphing the results The usual way to graph the results of a one-way anova is with a bar graph. The heights of the bars indicate the means, and there's usually some kind of error bar, either 95% confidence intervals or standard errors. Be sure to say in the figure caption what the error bars represent. Similar tests If you have only two groups, you can do a two-sample t–test. This is mathematically equivalent to an anova and will yield the exact same $P$ value, so if all you'll ever do is comparisons of two groups, you might as well call them $t$–tests. 
If you're going to do some comparisons of two groups, and some with more than two groups, it will probably be less confusing if you call all of your tests one-way anovas. If there are two or more nominal variables, you should use a two-way anova, a nested anova, or something more complicated that I won't cover here. If you're tempted to do a very complicated anova, you may want to break your experiment down into a set of simpler experiments for the sake of comprehensibility. If the data severely violate the assumptions of the anova, you can use Welch's anova if the standard deviations are heterogeneous or use the Kruskal-Wallis test if the distributions are non-normal. How to do the test Spreadsheet I have put together a spreadsheet to do one-way anova anova.xls on up to $50$ groups and $1000$ observations per group. It calculates the $P$ value, does the Tukey–Kramer test, and partitions the variance. Some versions of Excel include an "Analysis Toolpak," which includes an "Anova: Single Factor" function that will do a one-way anova. You can use it if you want, but I can't help you with it. It does not include any techniques for unplanned comparisons of means, and it does not partition the variance. Web pages Several people have put together web pages that will perform a one-way anova; one good one is here. It is easy to use, and will handle three to $26$ groups and $3$ to $1024$ observations per group. It does not do the Tukey-Kramer test and does not partition the variance. R Salvatore Mangiafico's $R$ Companion has a sample R program for one-way anova. SAS There are several SAS procedures that will perform a one-way anova. The two most commonly used are PROC ANOVA and PROC GLM. Either would be fine for a one-way anova, but PROC GLM (which stands for "General Linear Models") can be used for a much greater variety of more complicated analyses, so you might as well use it for everything. Here is a SAS program to do a one-way anova on the mussel data from above. DATA musselshells; INPUT location \$ aam @@; DATALINES; Tillamook 0.0571 Tillamook 0.0813 Tillamook 0.0831 Tillamook 0.0976 Tillamook 0.0817 Tillamook 0.0859 Tillamook 0.0735 Tillamook 0.0659 Tillamook 0.0923 Tillamook 0.0836 Newport 0.0873 Newport 0.0662 Newport 0.0672 Newport 0.0819 Newport 0.0749 Newport 0.0649 Newport 0.0835 Newport 0.0725 Petersburg 0.0974 Petersburg 0.1352 Petersburg 0.0817 Petersburg 0.1016 Petersburg 0.0968 Petersburg 0.1064 Petersburg 0.1050 Magadan 0.1033 Magadan 0.0915 Magadan 0.0781 Magadan 0.0685 Magadan 0.0677 Magadan 0.0697 Magadan 0.0764 Magadan 0.0689 Tvarminne 0.0703 Tvarminne 0.1026 Tvarminne 0.0956 Tvarminne 0.0973 Tvarminne 0.1039 Tvarminne 0.1045 ; PROC glm DATA=musselshells; CLASS location; MODEL aam = location; RUN; The output includes the traditional anova table; the P value is given under "Pr > F". Sum of Source DF Squares Mean Square F Value Pr > F Model 4 0.00451967 0.00112992 7.12 0.0003 Error 34 0.00539491 0.00015867 Corrected Total 38 0.00991458 PROC GLM doesn't calculate the variance components for an anova. Instead, you use PROC VARCOMP. You set it up just like PROC GLM, with the addition of METHOD=TYPE1 (where "TYPE1" includes the numeral 1, not the letter el. The procedure has four different methods for estimating the variance components, and TYPE1 seems to be the same technique as the one I've described above. Here's how to do the one-way anova, including estimating the variance components, for the mussel shell example. 
PROC GLM DATA=musselshells; CLASS location; MODEL aam = location; PROC VARCOMP DATA=musselshells METHOD=TYPE1; CLASS location; MODEL aam = location; RUN; The results include the following: Type 1 Estimates Variance Component Estimate Var(location) 0.0001254 Var(Error) 0.0001587 The output is not given as a percentage of the total, so you'll have to calculate that. For these results, the among-group component is $\frac{0.0001254}{(0.0001254+0.0001586)}=0.4415$, or $44.15\%$; the within-group component is $\frac{0.0001587}{(0.0001254+0.0001586)}=0.5585$, or $55.85\%$. Welch's anova If the data show a lot of heteroscedasticity (different groups have different standard deviations), the one-way anova can yield an inaccurate $P$ value; the probability of a false positive may be much higher than $5\%$. In that case, you should use Welch's anova. I've written a spreadsheet to do Welch's anova welchanova.xls. It includes the Games-Howell test, which is similar to the Tukey-Kramer test for a regular anova. (Note: the original spreadsheet gave incorrect results for the Games-Howell test; it was corrected on April 28, 2015). You can do Welch's anova in SAS by adding a MEANS statement, the name of the nominal variable, and the word WELCH following a slash. Unfortunately, SAS does not do the Games-Howell post-hoc test. Here is the example SAS program from above, modified to do Welch's anova: PROC GLM DATA=musselshells; CLASS location; MODEL aam = location; MEANS location / WELCH; RUN; Here is part of the output: Welch's ANOVA for AAM Source DF F Value Pr > F location 4.0000 5.66 0.0051 Error 15.6955 Power analysis To do a power analysis for a one-way anova is kind of tricky, because you need to decide what kind of effect size you're looking for. If you're mainly interested in the overall significance test, the sample size needed is a function of the standard deviation of the group means. Your estimate of the standard deviation of means that you're looking for may be based on a pilot experiment or published literature on similar experiments. If you're mainly interested in the comparisons of means, there are other ways of expressing the effect size. Your effect could be a difference between the smallest and largest means, for example, that you would want to be significant by a Tukey-Kramer test. There are ways of doing a power analysis with this kind of effect size, but I don't know much about them and won't go over them here. To do a power analysis for a one-way anova using the free program G*Power, choose "F tests" from the "Test family" menu and "ANOVA: Fixed effects, omnibus, one-way" from the "Statistical test" menu. To determine the effect size, click on the Determine button and enter the number of groups, the standard deviation within the groups (the program assumes they're all equal), and the mean you want to see in each group. Usually you'll leave the sample sizes the same for all groups (a balanced design), but if you're planning an unbalanced anova with bigger samples in some groups than in others, you can enter different relative sample sizes. Then click on the "Calculate and transfer to main window" button; it calculates the effect size and enters it into the main window. Enter your alpha (usually $0.05$) and power (typically $0.80$ or $0.90$) and hit the Calculate button. The result is the total sample size in the whole experiment; you'll have to do a little math to figure out the sample size for each group. 
As an example, let's say you're studying transcript amount of some gene in arm muscle, heart muscle, brain, liver, and lung. Based on previous research, you decide that you'd like the anova to be significant if the means were $10$ units in arm muscle, $10$ units in heart muscle, $15$ units in brain, $15$ units in liver, and $15$ units in lung. The standard deviation of transcript amount within a tissue type that you've seen in previous research is $12$ units. Entering these numbers in G*Power, along with an alpha of $0.05$ and a power of $0.80$, the result is a total sample size of $295$. Since there are five groups, you'd need $59$ observations per group to have an $80\%$ chance of having a significant ($P< 0.05$) one-way anova.
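If you use SAS rather than G*Power, the POWER procedure can do the same kind of calculation. The text above only describes G*Power, so treat the following as a minimal sketch rather than a recommended recipe; it plugs in the same hypothesized means, standard deviation, alpha and power as the G*Power example, and it should give a per-group sample size close to the $59$ found above (the two programs handle effect size and rounding slightly differently, so small discrepancies are normal).

PROC POWER;
   ONEWAYANOVA TEST=OVERALL_F
      GROUPMEANS = 10 | 10 | 15 | 15 | 15   /* hypothesized means for the five tissues */
      STDDEV = 12                           /* within-group standard deviation */
      ALPHA = 0.05
      POWER = 0.80
      NPERGROUP = .;                        /* a missing value tells SAS to solve for n per group */
RUN;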
Learning Objectives • To learn to use the Kruskal–Wallis test when you have one nominal variable and one ranked variable. It tests whether the mean ranks are the same in all the groups. When to use it The most common use of the Kruskal–Wallis test is when you have one nominal variable and one measurement variable, an experiment that you would usually analyze using one-way anova, but the measurement variable does not meet the normality assumption of a one-way anova. Some people have the attitude that unless you have a large sample size and can clearly demonstrate that your data are normal, you should routinely use Kruskal–Wallis; they think it is dangerous to use one-way anova, which assumes normality, when you don't know for sure that your data are normal. However, one-way anova is not very sensitive to deviations from normality. I've done simulations with a variety of non-normal distributions, including flat, highly peaked, highly skewed, and bimodal, and the proportion of false positives is always around $5\%$ or a little lower, just as it should be. For this reason, I don't recommend the Kruskal-Wallis test as an alternative to one-way anova. Because many people use it, you should be familiar with it even if I convince you that it's overused. The Kruskal-Wallis test is a non-parametric test, which means that it does not assume that the data come from a distribution that can be completely described by two parameters, mean and standard deviation (the way a normal distribution can). Like most non-parametric tests, you perform it on ranked data, so you convert the measurement observations to their ranks in the overall data set: the smallest value gets a rank of $1$, the next smallest gets a rank of $2$, and so on. You lose information when you substitute ranks for the original values, which can make this a somewhat less powerful test than a one-way anova; this is another reason to prefer one-way anova. The other assumption of one-way anova is that the variation within the groups is equal (homoscedasticity). While Kruskal-Wallis does not assume that the data are normal, it does assume that the different groups have the same distribution, and groups with different standard deviations have different distributions. If your data are heteroscedastic, Kruskal–Wallis is no better than one-way anova, and may be worse. Instead, you should use Welch's anova for heteoscedastic data. The only time I recommend using Kruskal-Wallis is when your original data set actually consists of one nominal variable and one ranked variable; in this case, you cannot do a one-way anova and must use the Kruskal–Wallis test. Dominance hierarchies (in behavioral biology) and developmental stages are the only ranked variables I can think of that are common in biology. The Mann–Whitney $U$-test (also known as the Mann–Whitney–Wilcoxon test, the Wilcoxon rank-sum test, or the Wilcoxon two-sample test) is limited to nominal variables with only two values; it is the non-parametric analogue to two-sample t–test. It uses a different test statistic ($U$ instead of the $H$ of the Kruskal–Wallis test), but the $P$ value is mathematically identical to that of a Kruskal–Wallis test. For simplicity, I will only refer to Kruskal–Wallis on the rest of this web page, but everything also applies to the Mann–Whitney $U$-test. The Kruskal–Wallis test is sometimes called Kruskal–Wallis one-way anova or non-parametric one-way anova. 
I think calling the Kruskal–Wallis test an anova is confusing, and I recommend that you just call it the Kruskal–Wallis test.

Null hypothesis
The null hypothesis of the Kruskal–Wallis test is that the mean ranks of the groups are the same. The expected mean rank depends only on the total number of observations (for $n$ observations, the expected mean rank in each group is $\frac{n+1}{2}$), so it is not a very useful description of the data; it's not something you would plot on a graph. You will sometimes see the null hypothesis of the Kruskal–Wallis test given as "The samples come from populations with the same distribution." This is correct, in that if the samples come from populations with the same distribution, the Kruskal–Wallis test will show no difference among them. I think it's a little misleading, however, because only some kinds of differences in distribution will be detected by the test. For example, if two populations have symmetrical distributions with the same center, but one is much wider than the other, their distributions are different but the Kruskal–Wallis test will not detect any difference between them. The null hypothesis of the Kruskal–Wallis test is not that the means are the same. It is therefore incorrect to say something like "The mean concentration of fructose is higher in pears than in apples (Kruskal–Wallis test, $P=0.02$)," although you will see data summarized with means and then compared with Kruskal–Wallis tests in many publications. The common misunderstanding of the null hypothesis of Kruskal-Wallis is yet another reason I don't like it. The null hypothesis of the Kruskal–Wallis test is often said to be that the medians of the groups are equal, but this is only true if you assume that the shape of the distribution in each group is the same. If the distributions are different, the Kruskal–Wallis test can reject the null hypothesis even though the medians are the same. To illustrate this point, I made up these three sets of numbers. They have identical means ($43.5$), and identical medians ($27.5$), but the mean ranks are different ($34.6$, $27.5$ and $20.4$, respectively), resulting in a significant ($P=0.025$) Kruskal–Wallis test:

Group 1   Group 2   Group 3
      1        10        19
      2        11        20
      3        12        21
      4        13        22
      5        14        23
      6        15        24
      7        16        25
      8        17        26
      9        18        27
     46        37        28
     47        58        65
     48        59        66
     49        60        67
     50        61        68
     51        62        69
     52        63        70
     53        64        71
    342       193        72

How the test works
Here are some data on Wright's $F_{ST}$ (a measure of the amount of geographic variation in a genetic polymorphism) in two populations of the American oyster, Crassostrea virginica. McDonald et al. (1996) collected data on $F_{ST}$ for six anonymous DNA polymorphisms (variation in random bits of DNA of no known function) and compared the $F_{ST}$ values of the six DNA polymorphisms to $F_{ST}$ values on $13$ proteins from Buroker (1983). The biological question was whether protein polymorphisms would have generally lower or higher $F_{ST}$ values than anonymous DNA polymorphisms. McDonald et al. (1996) knew that the theoretical distribution of $F_{ST}$ for two populations is highly skewed, so they analyzed the data with a Kruskal–Wallis test. When working with a measurement variable, the Kruskal–Wallis test starts by substituting the rank in the overall data set for each measurement value. The smallest value gets a rank of $1$, the second-smallest gets a rank of $2$, etc. Tied observations get average ranks; in this data set, the two $F_{ST}$ values of $-0.005$ are tied for second and third, so they get a rank of $2.5$.
gene      class      FST      Rank
CVJ5      DNA       -0.006     1
CVB1      DNA       -0.005     2.5
6Pgd      protein   -0.005     2.5
Pgi       protein   -0.002     4
CVL3      DNA        0.003     5
Est-3     protein    0.004     6
Lap-2     protein    0.006     7
Pgm-1     protein    0.015     8
Aat-2     protein    0.016     9.5
Adk-1     protein    0.016     9.5
Sdh       protein    0.024    11
Acp-3     protein    0.041    12
Pgm-2     protein    0.044    13
Lap-1     protein    0.049    14
CVL1      DNA        0.053    15
Mpi-2     protein    0.058    16
Ap-1      protein    0.066    17
CVJ6      DNA        0.095    18
CVB2m     DNA        0.116    19
Est-1     protein    0.163    20

You calculate the sum of the ranks for each group, then the test statistic, $H$. $H$ is given by a rather formidable formula that basically represents the variance of the ranks among groups, with an adjustment for the number of ties. $H$ is approximately chi-square distributed, meaning that the probability of getting a particular value of $H$ by chance, if the null hypothesis is true, is the $P$ value corresponding to a chi-square equal to $H$; the degrees of freedom is the number of groups minus $1$. For the example data, the mean rank for DNA is $10.08$ and the mean rank for protein is $10.68$, $H=0.043$, there is $1$ degree of freedom, and the $P$ value is $0.84$. The null hypothesis that the $F_{ST}$ of DNA and protein polymorphisms have the same mean ranks is not rejected. For the reasons given above, I think it would actually be better to analyze the oyster data with one-way anova. It gives a $P$ value of $0.75$, which fortunately would not change the conclusions of McDonald et al. (1996). If the sample sizes are too small, $H$ does not follow a chi-squared distribution very well, and the results of the test should be used with caution. $N$ less than $5$ in each group seems to be the accepted definition of "too small."

Assumptions
The Kruskal–Wallis test does NOT assume that the data are normally distributed; that is its big advantage. If you're using it to test whether the medians are different, it does assume that the observations in each group come from populations with the same shape of distribution, so if different groups have different shapes (one is skewed to the right and another is skewed to the left, for example, or they have different variances), the Kruskal–Wallis test may give inaccurate results (Fagerland and Sandvik 2009). If you're interested in any difference among the groups that would make the mean ranks be different, then the Kruskal–Wallis test doesn't make any assumptions. Heteroscedasticity is one way in which different groups can have different shaped distributions. If the distributions are heteroscedastic, the Kruskal–Wallis test won't help you; instead, you should use Welch's t–test for two groups, or Welch's anova for more than two groups.

Example
Bolek and Coggins (2003) collected multiple individuals of the toad Bufo americanus, the frog Rana pipiens, and the salamander Ambystoma laterale from a small area of Wisconsin. They dissected the amphibians and counted the number of parasitic helminth worms in each individual. There is one measurement variable (worms per individual amphibian) and one nominal variable (species of amphibian), and the authors did not think the data fit the assumptions of an anova. The results of a Kruskal–Wallis test were significant ($H=63.48$, $2 d.f.$, $P=1.6\times 10^{-14}$); the mean ranks of worms per individual are significantly different among the three species.
Dog Sex Rank Merlino Male 1 Gastone Male 2 Pippo Male 3 Leon Male 4 Golia Male 5 Lancillotto Male 6 Mamy Female 7 Nanà Female 8 Isotta Female 9 Diana Female 10 Simba Male 11 Pongo Male 12 Semola Male 13 Kimba Male 14 Morgana Female 15 Stella Female 16 Hansel Male 17 Cucciola Male 18 Mammolo Male 19 Dotto Male 20 Gongolo Male 21 Gretel Female 22 Brontolo Female 23 Eolo Female 24 Mag Female 25 Emy Female 26 Pisola Female 27 Cafazzo et al. (2010) observed a group of free-ranging domestic dogs in the outskirts of Rome. Based on the direction of $1815$ observations of submissive behavior, they were able to place the dogs in a dominance hierarchy, from most dominant (Merlino) to most submissive (Pisola). Because this is a true ranked variable, it is necessary to use the Kruskal–Wallis test. The mean rank for males ($11.1$) is lower than the mean rank for females ($17.7$), and the difference is significant ($H=4.61$, $1 d.f.$, $P=0.032$). Graphing the results It is tricky to know how to visually display the results of a Kruskal–Wallis test. It would be misleading to plot the means or medians on a bar graph, as the Kruskal–Wallis test is not a test of the difference in means or medians. If there are relatively small number of observations, you could put the individual observations on a bar graph, with the value of the measurement variable on the $Y$ axis and its rank on the $X$ axis, and use a different pattern for each value of the nominal variable. Here's an example using the oyster $F_{ST}$ data: If there are larger numbers of observations, you could plot a histogram for each category, all with the same scale, and align them vertically. I don't have suitable data for this handy, so here's an illustration with imaginary data: Similar tests One-way anova is more powerful and a lot easier to understand than the Kruskal–Wallis test, so unless you have a true ranked variable, you should use it. How to do the test Spreadsheet I have put together a spreadsheet to do the Kruskal–Wallis test kruskalwallis.xls on up to $20$ groups, with up to $1000$ observations per group. Web pages Richard Lowry has web pages for performing the Kruskal–Wallis test for two groups, three groups, or four groups. R Salvatore Mangiafico's $R$ Companion has a sample R program for the Kruskal–Wallis test. SAS To do a Kruskal–Wallis test in SAS, use the NPAR1WAY procedure (that's the numeral "one," not the letter "el," in NPAR1WAY). WILCOXON tells the procedure to only do the Kruskal–Wallis test; if you leave that out, you'll get several other statistical tests as well, tempting you to pick the one whose results you like the best. The nominal variable that gives the group names is given with the CLASS parameter, while the measurement or ranked variable is given with the VAR parameter. Here's an example, using the oyster data from above: DATA oysters; INPUT markername $markertype$ fst; DATALINES; CVB1 DNA -0.005 CVB2m DNA 0.116 CVJ5 DNA -0.006 CVJ6 DNA 0.095 CVL1 DNA 0.053 CVL3 DNA 0.003 6Pgd protein -0.005 Aat-2 protein 0.016 Acp-3 protein 0.041 Adk-1 protein 0.016 Ap-1 protein 0.066 Est-1 protein 0.163 Est-3 protein 0.004 Lap-1 protein 0.049 Lap-2 protein 0.006 Mpi-2 protein 0.058 Pgi protein -0.002 Pgm-1 protein 0.015 Pgm-2 protein 0.044 Sdh protein 0.024 ; PROC NPAR1WAY DATA=oysters WILCOXON; CLASS markertype; VAR fst; RUN; The output contains a table of "Wilcoxon scores"; the "mean score" is the mean rank in each group, which is what you're testing the homogeneity of. 
"Chi-square" is the $H$-statistic of the Kruskal–Wallis test, which is approximately chi-square distributed. The "Pr > Chi-Square" is your $P$ value. You would report these results as "$H=0.04$, $1 d.f.$, $P=0.84$." Wilcoxon Scores (Rank Sums) for Variable fst classified by Variable markertype Sum of Expected Std Dev Mean markertype N Scores Under H0 Under H0 Score ----------------------------------------------------------------- DNA 6 60.50 63.0 12.115236 10.083333 protein 14 149.50 147.0 12.115236 10.678571 Kruskal–Wallis Test Chi-Square 0.0426 DF 1 Pr > Chi-Square 0.8365 Power analysis I am not aware of a technique for estimating the sample size needed for a Kruskal–Wallis test.
Learning Objectives • Use nested anova when you have one measurement variable and more than one nominal variable, and the nominal variables are nested (form subgroups within groups). It tests whether there is significant variation in means among groups, among subgroups within groups, etc. When to use it Use a nested anova (also known as a hierarchical anova) when you have one measurement variable and two or more nominal variables. The nominal variables are nested, meaning that each value of one nominal variable (the subgroups) is found in combination with only one value of the higher-level nominal variable (the groups). All of the lower level subgroupings must be random effects (model II) variables, meaning they are random samples of a larger set of possible subgroups. Nested analysis of variance is an extension of one-way anova in which each group is divided into subgroups. In theory, you choose these subgroups randomly from a larger set of possible subgroups. For example, a friend of mine was studying uptake of fluorescently labeled protein in rat kidneys. He wanted to know whether his two technicians, who I'll call Brad and Janet, were performing the procedure consistently. So Brad randomly chose three rats, and Janet randomly chose three rats of her own, and each technician measured protein uptake in each rat. If Brad and Janet had measured protein uptake only once on each rat, you would have one measurement variable (protein uptake) and one nominal variable (technician) and you would analyze it with one-way anova. However, rats are expensive and measurements are cheap, so Brad and Janet measured protein uptake at several random locations in the kidney of each rat: Technician: Brad Janet Rat: Arnold Ben Charlie Dave Eddy Frank 1.119 1.045 0.9873 1.3883 1.3952 1.2574 1.2996 1.1418 0.9873 1.104 0.9714 1.0295 1.5407 1.2569 0.8714 1.1581 1.3972 1.1941 1.5084 0.6191 0.9452 1.319 1.5369 1.0759 1.6181 1.4823 1.1186 1.1803 1.3727 1.3249 1.5962 0.8991 1.2909 0.8738 1.2909 0.9494 1.2617 0.8365 1.1502 1.387 1.1874 1.1041 1.2288 1.2898 1.1635 1.301 1.1374 1.1575 1.3471 1.1821 1.151 1.3925 1.0647 1.294 1.0206 0.9177 0.9367 1.0832 0.9486 1.4543 Because there are several observations per rat, the identity of each rat is now a nominal variable. The values of this variable (the identities of the rats) are nested under the technicians; rat A is only found with Brad, and rat D is only found with Janet. You would analyze these data with a nested anova. In this case, it's a two-level nested anova; the technicians are groups, and the rats are subgroups within the groups. If the technicians had looked at several random locations in each kidney and measured protein uptake several times at each location, you'd have a three-level nested anova, with kidney location as subsubgroups within the rats. You can have more than three levels of nesting, and it doesn't really make the analysis that much more complicated. Note that if the subgroups, subsubgroups, etc. are distinctions with some interest (fixed effects, or model I, variables), rather than random, you should not use a nested anova. For example, Brad and Janet could have looked at protein uptake in two male rats and two female rats apiece. In this case you would use a two-way anova to analyze the data, rather than a nested anova. When you do a nested anova, you are often only interested in testing the null hypothesis about the group means; you may not care whether the subgroups are significantly different. 
For this reason, you may be tempted to ignore the subgrouping and just use all of the observations in a one-way anova, ignoring the subgrouping. This would be a mistake. For the rats, this would be treating the $30$ observations for each technician ($10$ observations from each of three rats) as if they were $30$ independent observations. By using all of the observations in a one-way anova, you compare the difference in group means to the amount of variation within each group, pretending that you have $30$ independent measurements of protein uptake. This large number of measurements would make it seem like you had a very accurate estimate of mean protein uptake for each technician, so the difference between Brad and Janet wouldn't have to be very big to seem "significant." You would have violated the assumption of independence that one-way anova makes, and instead you have what's known as pseudoreplication. What you could do with a nested design, if you're only interested in the difference among group means, is take the average for each subgroup and analyze them using a one-way anova. For the example data, you would take the average protein uptake for each of the three rats that Brad used, and each of the three rats that Janet used, and you would analyze these six values using one-way anova. If you have a balanced design (equal sample sizes in each subgroup), comparing group means with a one-way anova of subgroup means is mathematically identical to comparing group means using a nested anova (and this is true for a nested anova with more levels, such as subsubgroups). If you don't have a balanced design, the results won't be identical, but they'll be pretty similar unless your design is very unbalanced. The advantage of using one-way anova is that it will be more familiar to more people than nested anova; the disadvantage is that you won't be able to compare the variation among subgroups to the variation within subgroups. Testing the variation among subgroups often isn't biologically interesting, but it can be useful in the optimal allocation of resources, deciding whether future experiments should use more rats with fewer observations per rat. Null hypotheses A nested anova has one null hypothesis for each level. In a two-level nested anova, one null hypothesis is that the groups have the same mean. For our rats, this null would be that Brad's rats had the same mean protein uptake as the Janet's rats. The second null hypothesis is that the subgroups within each group have the same means. For the example, this null would be that all of Brad's rats have the same mean, and all of Janet's rats have the same mean (which could be different from the mean for Brad's rats). A three-level nested anova would have a third null hypothesis, that all of the locations within each kidney have the same mean (which could be a different mean for each kidney), and so on. How the test works Remember that in a one-way anova, the test statistic, $F_s$, is the ratio of two mean squares: the mean square among groups divided by the mean square within groups. If the variation among groups (the group mean square) is high relative to the variation within groups, the test statistic is large and therefore unlikely to occur by chance. In a two-level nested anova, there are two $F$ statistics, one for subgroups ($F_{subgroup}$) and one for groups ($F_{group}$). 
You find the subgroup $F$-statistic by dividing the among-subgroup mean square, $MS_{subgroup}$ (the average variance of subgroup means within each group) by the within-subgroup mean square, $MS_{within}$ (the average variation among individual measurements within each subgroup). You find the group $F$-statistic by dividing the among-group mean square, $MS_{group}$ (the variation among group means) by $MS_{subgroup}$. You then calculate the $P$ value for the $F$-statistic at each level. For the rat example, the within-subgroup mean square is $0.0360$ and the subgroup mean square is $0.1435$, making the $F_{subgroup}$ $0.1435/0.0360=3.9818$. There are $4$ degrees of freedom in the numerator (the total number of subgroups minus the number of groups) and $54$ degrees of freedom in the denominator (the number of observations minus the number of subgroups), so the $P$ value is $0.0067$. This means that there is significant variation in protein uptake among rats within each technician. The $F_{group}$ is the mean square for groups, $0.0384$, divided by the mean square for subgroups, $0.1435$, which equals $0.2677$. There is one degree of freedom in the numerator (the number of groups minus $1$) and $4$ degrees of freedom in the denominator (the total number of subgroups minus the number of groups), yielding a $P$ value of $0.632$. So there is no significant difference in protein abundance between the rats Brad measured and the rats Janet measured. For a nested anova with three or more levels, you calculate the $F$-statistic at each level by dividing the $MS$ at that level by the $MS$ at the level immediately below it. If the subgroup $F$-statistic is not significant, it is possible to calculate the group $F$-statistic by dividing $MS_{group}$ by $MS_{pooled}$, a combination of $MS_{subgroup}$ and $MS_{within}$. The conditions under which this is acceptable are complicated, and some statisticians think you should never do it; for simplicity, I suggest always using $MS_{group}/MS_{subgroup}$ to calculate $F_{group}$. Partitioning variance and optimal allocation of resources In addition to testing the equality of the means at each level, a nested anova also partitions the variance into different levels. This can be a great help in designing future experiments. For our rat example, if most of the variation is among rats, with relatively little variation among measurements within each rat, you would want to do fewer measurements per rat and use a lot more rats in your next experiment. This would give you greater statistical power than taking repeated measurements on a smaller number of rats. But if the nested anova tells you there is a lot of variation among measurements but relatively little variation among rats, you would either want to use more observations per rat or try to control whatever variable is causing the measurements to differ so much. 
If you have an estimate of the relative cost of different parts of the experiment (in time or money), you can use this formula to estimate the best number of observations per subgroup, a process known as optimal allocation of resources:

$n=\sqrt{\frac{(C_{subgroup}\times V_{within})}{(C_{within}\times V_{subgroup})}}$

where $n$ is the number of observations per subgroup, $C_{within}$ is the cost per observation, $C_{subgroup}$ is the cost per subgroup (not including the cost of the individual observations), $V_{subgroup}$ is the percentage of the variation partitioned to the subgroup, and $V_{within}$ is the percentage of the variation partitioned to within groups. For the rat example, $V_{subgroup}$ is $23.0\%$ and $V_{within}$ is $77\%$ (there's usually some variation partitioned to the groups, but for these data, groups had $0\%$ of the variation). If we estimate that each rat costs $\$ 200$ to raise, and each measurement of protein uptake costs $\$ 10$, then the optimal number of observations per rat is $\sqrt{\frac{(200\times 77)}{(10\times 23)}}$, which works out to about $8$ observations per rat. The total cost per subgroup will then be $\$ 200$ to raise the rat and $8\times \$ 10=\$ 80$ for the observations, for a total of $\$ 280$; based on your total budget for your next experiment, you can use this to decide how many rats to use for each group. For a three-level nested anova, you would use the same equation to allocate resources; for example, if you had multiple rats, with multiple tissue samples per rat kidney, and multiple protein uptake measurements per tissue sample. You would start by determining the number of observations per subsubgroup; once you knew that, you could calculate the total cost per subsubgroup (the cost of taking the tissue sample plus the cost of making the optimal number of observations). You would then use the same equation, with the variance partitions for subgroups and subsubgroups, and the cost for subgroups and the total cost for subsubgroups, and determine the optimal number of subsubgroups to use for each subgroup. You could use the same procedure for still higher levels of nested anova. It's possible for a variance component to be zero; the groups (Brad vs. Janet) in our rat example had 0% of the variance, for example. This just means that the variation among group means is smaller than you would expect, based on the amount of variation among subgroups. Because there's variation among rats in mean protein uptake, you would expect that two random samples of three rats each would have different means, and you could predict the average size of that difference. As it happens, the means of the three rats Brad studied and the three rats Janet studied happened to be closer than expected by chance, so they contribute $0\%$ to the overall variance. Using zero, or a very small number, in the equation for allocation of resources may give you ridiculous numbers. If that happens, just use your common sense. So if $V_{subgroup}$ in our rat example (the variation among rats within technicians) had turned out to be close to $0\%$, the equation would have told you that you would need hundreds or thousands of observations per rat; in that case, you would design your experiment to include one rat per group, and as many measurements per rat as you could afford. Often, the reason you use a nested anova is that the higher level groups are expensive and the lower levels are cheaper.
Raising a rat is expensive, but looking at a tissue sample with a microscope is relatively cheap, so you want to reach an optimal balance of expensive rats and cheap observations. If the higher level groups are very inexpensive relative to the lower levels, you don't need a nested design; the most powerful design will be to take just one observation per higher level group. For example, let's say you're studying protein uptake in fruit flies (Drosophila melanogaster). You could take multiple tissue samples per fly and make multiple observations per tissue sample, but because raising $100$ flies doesn't cost any more than raising $10$ flies, it will be better to take one tissue sample per fly and one observation per tissue sample, and use as many flies as you can afford; you'll then be able to analyze the data with one-way anova. The variation among flies in this design will include the variation among tissue samples and among observations, so this will be the most statistically powerful design. The only reason for doing a nested anova in this case would be to see whether you're getting a lot of variation among tissue samples or among observations within tissue samples, which could tell you that you need to make your laboratory technique more consistent. Unequal sample sizes When the sample sizes in a nested anova are unequal, the $P$ values corresponding to the $F$-statistics may not be very good estimates of the actual probability. For this reason, you should try to design your experiments with a "balanced" design, meaning equal sample sizes in each subgroup. (This just means equal numbers at each level; the rat example, with three subgroups per group and $10$ observations per subgroup, is balanced). Often this is impractical; if you do have unequal sample sizes, you may be able to get a better estimate of the correct $P$ value by using modified mean squares at each level, found using a correction formula called the Satterthwaite approximation. Under some situations, however, the Satterthwaite approximation will make the $P$ values less accurate. If you cannot use the Satterthwaite approximation, the $P$ values will be conservative (less likely to be significant than they ought to be), so if you never use the Satterthwaite approximation, you're not fooling yourself with too many false positives. Note that the Satterthwaite approximation results in fractional degrees of freedom, such as $2.87$; don't be alarmed by that (and be prepared to explain it to people if you use it). If you do a nested anova with an unbalanced design, be sure to specify whether you use the Satterthwaite approximation when you report your results. Assumptions Nested anova, like all anovas, assumes that the observations within each subgroup are normally distributed and have equal standard deviations. Example Keon and Muir (2002) wanted to know whether habitat type affected the growth rate of the lichen Usnea longissima. They weighed and transplanted $30$ individuals into each of $12$ sites in Oregon. The $12$ sites were grouped into $4$ habitat types, with $3$ sites in each habitat. One year later, they collected the lichens, weighed them again, and calculated the change in weight. There are two nominal variables (site and habitat type), with sites nested within habitat type. 
You could analyze the data using two measurement variables, beginning weight and ending weight, but because the lichen individuals were chosen to have similar beginning weights, it makes more sense to use the change in weight as a single measurement variable. The results of a nested anova are that there is significant variation among sites within habitats ($F_{8,\: 200}=8.11,\; \; P=1.8\times 10^{-9}$) and significant variation among habitats ($F_{3,\: 8}=8.29,\; \; P=0.008$). When the Satterthwaite approximation is used, the test of the effect of habitat is only slightly different ($F_{3,\: 8.13}=8.76,\; \; P=0.006$) Graphing the results The way you graph the results of a nested anova depends on the outcome and your biological question. If the variation among subgroups is not significant and the variation among groups is significant—you're really just interested in the groups, and you used a nested anova to see if it was okay to combine subgroups—you might just plot the group means on a bar graph, as shown for one-way anova. If the variation among subgroups is interesting, you can plot the means for each subgroup, with different patterns or colors indicating the different groups. Similar tests Both nested anova and two-way anova (and higher level anovas) have one measurement variable and more than one nominal variable. The difference is that in a two-way anova, the values of each nominal variable are found in all combinations with the other nominal variable; in a nested anova, each value of one nominal variable (the subgroups) is found in combination with only one value of the other nominal variable (the groups). If you have a balanced design (equal number of subgroups in each group, equal number of observations in each subgroup), you can perform a one-way anova on the subgroup means. For the rat example, you would take the average protein uptake for each rat. The result is mathematically identical to the test of variation among groups in a nested anova. It may be easier to explain a one-way anova to people, but you'll lose the information about how variation among subgroups compares to variation among individual observations. How to do the test Spreadsheet I have made spreadsheets to do two-level nested anova nested2.xls, with equal or unequal sample sizes, on up to $50$ subgroups with up to $1000$ observations per subgroup. It does significance tests and partitions the variance. The spreadsheet tells you whether the Satterthwaite approximation is appropriate, using the rules on p. 298 of Sokal and Rohlf (1983), and gives you the option to use it. $F_{group}$ is calculated as $MS_{group}/MS_{subgroup}$. The spreadsheet gives the variance components as percentages of the total. If the estimate of the group component would be negative (which can happen), it is set to zero. I have also written spreadsheets to do three-level nested anova nested3.xls and four-level nested anova nested4.xls. Web page I don't know of a web page that will let you do nested anova. R Salvatore Mangiafico's $R$ Companion has a sample R program for nested anova. SAS You can do a nested anova with either PROC GLM or PROC NESTED. PROC GLM will handle both balanced and unbalanced designs, but does not partition the variance; PROC NESTED partitions the variance but does not calculate P values if you have an unbalanced design, so you may need to use both procedures. You may need to sort your dataset with PROC SORT, and it doesn't hurt to include it. In PROC GLM, list all the nominal variables in the CLASS statement. 
In the MODEL statement, give the name of the measurement variable, then after the equals sign give the name of the group variable, then the name of the subgroup variable followed by the group variable in parentheses. SS1 (with the numeral one, not the letter el) tells it to use type I sums of squares. The TEST statement tells it to calculate the $F$-statistic for groups by dividing the group mean square by the subgroup mean square, instead of the within-group mean square ($H$ stands for "hypothesis" and $E$ stands for "error"). "HTYPE=1 ETYPE=1" also tells SAS to use "type I sums of squares"; I couldn't tell you the difference between them and types II, III and IV, but I'm pretty sure that type I is appropriate for a nested anova. Here is an example of a two-level nested anova using the rat data. DATA bradvsjanet; INPUT tech $rat$ protein @@; DATALINES; Janet 1 1.119 Janet 1 1.2996 Janet 1 1.5407 Janet 1 1.5084 Janet 1 1.6181 Janet 1 1.5962 Janet 1 1.2617 Janet 1 1.2288 Janet 1 1.3471 Janet 1 1.0206 Janet 2 1.045 Janet 2 1.1418 Janet 2 1.2569 Janet 2 0.6191 Janet 2 1.4823 Janet 2 0.8991 Janet 2 0.8365 Janet 2 1.2898 Janet 2 1.1821 Janet 2 0.9177 Janet 3 0.9873 Janet 3 0.9873 Janet 3 0.8714 Janet 3 0.9452 Janet 3 1.1186 Janet 3 1.2909 Janet 3 1.1502 Janet 3 1.1635 Janet 3 1.151 Janet 3 0.9367 Brad 5 1.3883 Brad 5 1.104 Brad 5 1.1581 Brad 5 1.319 Brad 5 1.1803 Brad 5 0.8738 Brad 5 1.387 Brad 5 1.301 Brad 5 1.3925 Brad 5 1.0832 Brad 6 1.3952 Brad 6 0.9714 Brad 6 1.3972 Brad 6 1.5369 Brad 6 1.3727 Brad 6 1.2909 Brad 6 1.1874 Brad 6 1.1374 Brad 6 1.0647 Brad 6 0.9486 Brad 7 1.2574 Brad 7 1.0295 Brad 7 1.1941 Brad 7 1.0759 Brad 7 1.3249 Brad 7 0.9494 Brad 7 1.1041 Brad 7 1.1575 Brad 7 1.294 Brad 7 1.4543 ; PROC SORT DATA=bradvsjanet; BY tech rat; PROC GLM DATA=bradvsjanet; CLASS tech rat; MODEL protein=tech rat(tech) / SS1; TEST H=tech E=rat(tech) / HTYPE=1 ETYPE=1; RUN; The output includes $F_{group}$ calculated two ways, as $MS_{group}/MS_{within}$ and as $MS_{group}/MS_{subgroup}$. Source DF Type I SS Mean Sq. F Value Pr > F tech 1 0.03841046 0.03841046 1.07 0.3065 MSgroup/MSwithin; don't use this rat(tech) 4 0.57397543 0.14349386 3.98 0.0067 use this for testing subgroups Tests of Hypotheses Using the Type I MS for rat(tech) as an Error Term Source DF Type I SS Mean Sq. F Value Pr > F tech 1 0.03841046 0.03841046 0.27 0.6322 MSgroup/MSsubgroup; use this for testing groups You can do the Tukey-Kramer test to compare pairs of group means, if you have more than two groups. You do this with a MEANS statement. This shows how (even though you wouldn't do Tukey-Kramer with just two groups): PROC GLM DATA=bradvsjanet; CLASS tech rat; MODEL protein=tech rat(tech) / SS1; TEST H=tech E=rat(tech) / HTYPE=1 ETYPE=1; MEANS tech /LINES TUKEY; RUN; PROC GLM does not partition the variance. PROC NESTED will partition the variance, but it only does the hypothesis testing for a balanced nested anova, so if you have an unbalanced design you'll want to run both PROC GLM and PROC NESTED. In PROC NESTED, the group is given first in the CLASS statement, then the subgroup. PROC SORT DATA=bradvsjanet; BY tech rat; PROC NESTED DATA=bradvsjanet; CLASS tech rat; VAR protein; RUN; Here's the output; if the data set was unbalanced, the "$F$ Value" and "Pr>F" columns would be blank. 
Variance                    Sum of        F                 Error        Mean       Variance     Percent
Source           DF        Squares    Value     Pr>F          Term      Square      Component    of Total
Total            59       2.558414                                     0.043363      0.046783    100.0000
tech              1       0.038410     0.27   0.6322           rat     0.038410     -0.003503      0.0000
rat               4       0.573975     3.98   0.0067         Error     0.143494      0.010746     22.9690
Error            54       1.946028                                     0.036038      0.036038     77.0310

You set up a nested anova with three or more levels the same way, except the MODEL statement has more terms, and you specify a TEST statement for each level. Here's how you would set it up if there were multiple rats per technician, with multiple tissue samples per rat, and multiple protein measurements per sample:

PROC GLM DATA=bradvsjanet;
   CLASS tech rat sample;
   MODEL protein = tech rat(tech) sample(rat tech) / SS1;
   TEST H=tech E=rat(tech) / HTYPE=1 ETYPE=1;
   TEST H=rat E=sample(rat tech) / HTYPE=1 ETYPE=1;
RUN;
PROC NESTED DATA=bradvsjanet;
   CLASS tech rat sample;
   VAR protein;
RUN;
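As the text notes, with a balanced design you can also test the difference between the two technicians by averaging the measurements for each rat and running a one-way anova on the six rat means; the $F$-statistic and $P$ value for the technician effect should match the $MS_{group}/MS_{subgroup}$ test from the nested anova ($F=0.27$, $P=0.63$). Here is a sketch of that shortcut in SAS, assuming the bradvsjanet data set created earlier:

PROC SORT DATA=bradvsjanet;
   BY tech rat;
RUN;
/* average the measurements within each rat */
PROC MEANS DATA=bradvsjanet NOPRINT;
   BY tech rat;
   VAR protein;
   OUTPUT OUT=ratmeans MEAN=meanprotein;
RUN;
/* one-way anova on the rat means */
PROC GLM DATA=ratmeans;
   CLASS tech;
   MODEL meanprotein = tech;
RUN;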
Learning Objectives • To use two-way anova when you have one measurement variable and two nominal variables, and each value of one nominal variable is found in combination with each value of the other nominal variable. It tests three null hypotheses: that the means of the measurement variable are equal for different values of the first nominal variable; that the means are equal for different values of the second nominal variable; and that there is no interaction (the effects of one nominal variable don't depend on the value of the other nominal variable). When to use it You use a two-way anova (also known as a factorial anova, with two factors) when you have one measurement variable and two nominal variables. The nominal variables (often called "factors" or "main effects") are found in all possible combinations. For example, here's some data I collected on the enzyme activity of mannose-6-phosphate isomerase (MPI) and MPI genotypes in the amphipod crustacean Platorchestia platensis. Because I didn't know whether sex also affected MPI activity, I separated the amphipods by sex. Genotype Female Male FF 2.838 4.216 2.889 4.198 1.884 2.283 4.939 3.486 FS 3.55 4.556 3.087 1.943 2.396 2.956 3.105 2.649 SS 3.620 3.079 3.586 2.669 2.801 3.421 4.275 3.110 Unlike a nested anova, each grouping extends across the other grouping: each genotype contains some males and some females, and each sex contains all three genotypes. A two-way anova is usually done with replication (more than one observation for each combination of the nominal variables). For our amphipods, a two-way anova with replication means there are more than one male and more than one female of each genotype. You can also do two-way anova without replication (only one observation for each combination of the nominal variables), but this is less informative (you can't test the interaction term) and requires you to assume that there is no interaction. Repeated measures One experimental design that people analyze with a two-way anova is repeated measures, where an observation has been made on the same individual more than once. This usually involves measurements taken at different time points. For example, you might measure running speed before, one week into, and three weeks into a program of exercise. Because individuals would start with different running speeds, it is better to analyze using a two-way anova, with "individual" as one of the factors, rather than lumping everyone together and analyzing with a one-way anova. Sometimes the repeated measures are repeated at different places rather than different times, such as the hip abduction angle measured on the right and left hip of individuals. Repeated measures experiments are often done without replication, although they could be done with replication. In a repeated measures design, one of main effects is usually uninteresting and the test of its null hypothesis may not be reported. If the goal is to determine whether a particular exercise program affects running speed, there would be little point in testing whether individuals differed from each other in their average running speed; only the change in running speed over time would be of interest. Randomized blocks Another experimental design that is analyzed by a two-way anova is randomized blocks. This often occurs in agriculture, where you may want to test different treatments on small plots within larger blocks of land. 
Because the larger blocks may differ in some way that may affect the measurement variable, the data are analyzed with a two-way anova, with the block as one of the nominal variables. Each treatment is applied to one or more plot within the larger block, and the positions of the treatments are assigned at random. This is most commonly done without replication (one plot per block), but it can be done with replication as well. Null hypotheses A two-way anova with replication tests three null hypotheses: that the means of observations grouped by one factor are the same; that the means of observations grouped by the other factor are the same; and that there is no interaction between the two factors. The interaction test tells you whether the effects of one factor depend on the other factor. In the amphipod example, imagine that female amphipods of each genotype have about the same MPI activity, while male amphipods with the $SS$ genotype had much lower MPI activity than male $FF$ or $FS$ amphipods (they don't, but imagine they do for a moment). The different effects of genotype on activity in female and male amphipods would result in a significant interaction term in the anova, meaning that the effect of genotype on activity would depend on whether you were looking at males or females. If there were no interaction, the differences among genotypes in enzyme activity would be the same for males and females, and the difference in activity between males and females would be the same for each of the three genotypes. When the interaction term is significant, the usual advice is that you should not test the effects of the individual factors. In this example, it would be misleading to examine the individual factors and conclude "$SS$ amphipods have lower activity than $FF$ or $FS$," when that is only true for males, or "Male amphipods have lower MPI activity than females," when that is only true for the $SS$ genotype. What you can do, if the interaction term is significant, is look at each factor separately, using a one-way anova. In the amphipod example, you might be able to say that for female amphipods, there is no significant effect of genotype on MPI activity, while for male amphipods, there is a significant effect of genotype on MPI activity. Or, if you're more interested in the sex difference, you might say that male amphipods have a significantly lower mean enzyme activity than females when they have the $SS$ genotype, but not when they have the other two genotypes. When you do a two-way anova without replication, you can still test the two main effects, but you can't test the interaction. This means that your tests of the main effects have to assume that there's no interaction. If you find a significant difference in the means for one of the main effects, you wouldn't know whether that difference was consistent for different values of the other main effect. How the test works With replication When the sample sizes in each subgroup are equal (a "balanced design"), you calculate the mean square for each of the two factors (the "main effects"), for the interaction, and for the variation within each combination of factors. You then calculate each $F$-statistic by dividing a mean square by the within-subgroup mean square. When the sample sizes for the subgroups are not equal (an "unbalanced design"), the analysis is much more complicated, and there are several different techniques for testing the main and interaction effects that I'm not going to cover here. 
If you're doing a two-way anova, your statistical life will be a lot easier if you make it a balanced design. Without replication When there is only a single observation for each combination of the nominal variables, there are only two null hypotheses: that the means of observations grouped by one factor are the same, and that the means of observations grouped by the other factor are the same. It is impossible to test the null hypothesis of no interaction; instead, you have to assume that there is no interaction in order to test the two main effects. When there is no replication, you calculate the mean square for each of the two main effects, and you also calculate a total mean square by considering all of the observations as a single group. The remainder mean square (also called the discrepance or error mean square) is found by subtracting the two main effect mean squares from the total mean square. The $F$-statistic for a main effect is the main effect mean square divided by the remainder mean square. Assumptions Two-way anova, like all anovas, assumes that the observations within each cell are normally distributed and have equal standard deviations. I don't know how sensitive it is to violations of these assumptions. Example Shimoji and Miyatake (2002) raised the West Indian sweetpotato weevil for $14$ generations on an artificial diet. They compared these artificial diet weevils ($AD$ strain) with weevils raised on sweet potato roots ($SP$ strain), the weevil's natural food. They placed multiple females of each strain on either the artificial diet or sweet potato root, and they counted the number of eggs each female laid over a $28$-day period. There are two nominal variables, the strain of weevil ($AD$ or $SP$) and the oviposition test food (artificial diet or sweet potato), and one measurement variable (the number of eggs laid). The results of the two-way anova with replication include a significant interaction term ($F_{1,\: 117}=17.02,\; \; P=7\times 10^{-5}$). Looking at the graph, the interaction can be interpreted this way: on the sweet potato diet, the $SP$ strain laid more eggs than the $AD$ strain; on the artificial diet, the $AD$ strain laid more eggs than the $SP$ strain. Each main effect is also significant: weevil strain ($F_{1,\: 117}=8.82,\; \; P=0.0036$) and oviposition test food ($F_{1,\: 117}=345.92,\; \; P=9\times 10^{-37}$). However, the significant effect of strain is a bit misleading, as the direction of the difference between strains depends on which food they ate. This is why it is important to look at the interaction term first. Place and Abramson (2008) put diamondback rattlesnakes (Crotalus atrox) in a "rattlebox," a box with a lid that would slide open and shut every $5$ minutes. At first, the snake would rattle its tail each time the box opened. After a while, the snake would become habituated to the box opening and stop rattling its tail. They counted the number of box openings until a snake stopped rattling; fewer box openings means the snake was more quickly habituated. They repeated this experiment on each snake on four successive days, which I'll treat as a nominal variable for this example. 
Place and Abramson (2008) used $10$ snakes, but some of them never became habituated; to simplify this example, I'll use data from the $6$ snakes that did become habituated on each day: Snake ID Day 1 Day 2 Day 3 Day 4 D1 85 58 15 57 D3 107 51 30 12 D5 61 60 68 36 D8 22 41 63 21 D11 40 45 28 10 D12 65 27 3 16 The measurement variable is trials to habituation, and the two nominal variables are day ($1$ to $4$) and snake ID. This is a repeated measures design, as the measurement variable is measured repeatedly on each snake. It is analyzed using a two-way anova without replication. The effect of snake is not significant ($F_{5,\: 15}=1.24,\; \; P=0.34$), while the effect of day is significant ($F_{3,\: 15}=3.32,\; \; P=0.049$). Graphing the results Some people plot the results of a two-way anova on a $3-D$ graph, with the measurement variable on the $Y$ axis, one nominal variable on the $X$-axis, and the other nominal variable on the $Z$ axis (going into the paper). This makes it difficult to visually compare the heights of the bars in the front and back rows, so I don't recommend this. Instead, I suggest you plot a bar graph with the bars clustered by one nominal variable, with the other nominal variable identified using the color or pattern of the bars. If one of the nominal variables is the interesting one, and the other is just a possible confounder, I'd group the bars by the possible confounder and use different patterns for the interesting variable. For the amphipod data described above, I was interested in seeing whether MPI phenotype affected enzyme activity, with any difference between males and females as an annoying confounder, so I grouped the bars by sex. Similar tests A two-way anova without replication and only two values for the interesting nominal variable may be analyzed using a paired t–test. The results of a paired $t$–test are mathematically identical to those of a two-way anova, but the paired $t$–test is easier to do and is familiar to more people. Data sets with one measurement variable and two nominal variables, with one nominal variable nested under the other, are analyzed with a nested anova. Three-way and higher order anovas are possible, as are anovas combining aspects of a nested and a two-way or higher order anova. The number of interaction terms increases rapidly as designs get more complicated, and the interpretation of any significant interactions can be quite difficult. It is better, when possible, to design your experiments so that as many factors as possible are controlled, rather than collecting a hodgepodge of data and hoping that a sophisticated statistical analysis can make some sense of it. How to do the test Spreadsheet I haven't put together a spreadsheet to do two-way anovas. Web page There's a web page to perform a two-way anova with replication, with up to $4$ groups for each main effect. R Salvatore Mangiafico's $R$ Companion has a sample R program for two-way anova. SAS Use PROC GLM for a two-way anova. The CLASS statement lists the two nominal variables. The MODEL statement has the measurement variable, then the two nominal variables and their interaction after the equals sign. 
Here is an example using the MPI activity data described above:

DATA amphipods;
   INPUT id $ sex $ genotype $ activity @@;
   DATALINES;
1 male ff 1.884     2 male ff 2.283     3 male fs 2.396     4 female ff 2.838
5 male fs 2.956     6 female ff 4.216   7 female ss 3.620   8 female ff 2.889
9 female fs 3.550   10 male fs 3.105    11 female fs 4.556  12 female fs 3.087
13 male ff 4.939    14 male ff 3.486    15 female ss 3.079  16 male fs 2.649
17 female fs 1.943  19 female ff 4.198  20 female ff 2.473  22 female ff 2.033
24 female fs 2.200  25 female fs 2.157  26 male ss 2.801    28 male ss 3.421
29 female ff 1.811  30 female fs 4.281  32 female fs 4.772  34 female ss 3.586
36 female ff 3.944  38 female ss 2.669  39 female ss 3.050  41 male ss 4.275
43 female ss 2.963  46 female ss 3.236  48 female ss 3.673  49 male ss 3.110
;
PROC GLM DATA=amphipods;
   CLASS sex genotype;
   MODEL activity=sex genotype sex*genotype;
RUN;

The results indicate that the interaction term is not significant ($P=0.60$), the effect of genotype is not significant ($P=0.84$), and the effect of sex is not significant ($P=0.77$).

Source         DF    Type I SS     Mean Square    F Value   Pr > F
sex             1    0.06808050    0.06808050        0.09   0.7712
genotype        2    0.27724017    0.13862008        0.18   0.8400
sex*genotype    2    0.81464133    0.40732067        0.52   0.6025

If you are using SAS to do a two-way anova without replication, do not put an interaction term in the model statement (sex*genotype is the interaction term in the example above).
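If you would rather do the two-way anova in R, here is a minimal sketch using the built-in aov() function and the balanced subset of the amphipod data shown in the table above (the object and variable names are arbitrary choices of mine, not part of the original data set). Because this sketch uses only the 24 observations in the table, rather than the full data set in the SAS example, the F and P values will differ somewhat from the SAS output.

# Two-way anova with replication in R, using the balanced subset of the
# amphipod MPI data shown in the table above (4 females and 4 males per genotype).
activity <- c(2.838, 4.216, 2.889, 4.198,   # female FF
              1.884, 2.283, 4.939, 3.486,   # male FF
              3.550, 4.556, 3.087, 1.943,   # female FS
              2.396, 2.956, 3.105, 2.649,   # male FS
              3.620, 3.079, 3.586, 2.669,   # female SS
              2.801, 3.421, 4.275, 3.110)   # male SS
sex      <- factor(rep(rep(c("female", "male"), each = 4), times = 3))
genotype <- factor(rep(c("FF", "FS", "SS"), each = 8))

# sex * genotype expands to both main effects plus their interaction,
# analogous to the MODEL statement in the SAS example above.
summary(aov(activity ~ sex * genotype))

# For a two-way anova without replication, leave out the interaction term:
# summary(aov(activity ~ sex + genotype))

As discussed above, look at the interaction row of the output first before interpreting the two main effects.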
Learning Objectives • To use the paired \(t\)–test when you have one measurement variable and two nominal variables, one of the nominal variables has only two values, and you only have one observation for each combination of the nominal variables; in other words, you have multiple pairs of observations. It tests whether the mean difference in the pairs is different from \(0\). When to use it Use the paired \(t\)–test when there is one measurement variable and two nominal variables. One of the nominal variables has only two values, so that you have multiple pairs of observations. The most common design is that one nominal variable represents individual organisms, while the other is "before" and "after" some treatment. Sometimes the pairs are spatial rather than temporal, such as left vs. right, injured limb vs. uninjured limb, etc. You can use the paired \(t\)–test for other pairs of observations; for example, you might sample an ecological measurement variable above and below a source of pollution in several streams. As an example, volunteers count the number of breeding horseshoe crabs on beaches on Delaware Bay every year; here are data from 2011 and 2012. The measurement variable is number of horseshoe crabs, one nominal variable is 2011 vs. 2012, and the other nominal variable is the name of the beach. Each beach has one pair of observations of the measurement variable, one from 2011 and one from 2012. The biological question is whether the number of horseshoe crabs has gone up or down between 2011 and 2012. Beach 2011 2012 2012−2011 Bennetts Pier 35282 21814 -13468 Big Stone 359350 83500 -275850 Broadkill 45705 13290 -32415 Cape Henlopen 49005 30150 -18855 Fortescue 68978 125190 56212 Fowler 8700 4620 -4080 Gandys 18780 88926 70146 Higbees 13622 1205 -12417 Highs 24936 29800 4864 Kimbles 17620 53640 36020 Kitts Hummock 117360 68400 -48960 Norburys Landing 102425 74552 -27873 North Bowers 59566 36790 -22776 North Cape May 32610 4350 -28260 Pickering 137250 110550 -26700 Pierces Point 38003 43435 5432 Primehook 101300 20580 -80720 Reeds 62179 81503 19324 Slaughter 203070 53940 -149130 South Bowers 135309 87055 -48254 South CSL 150656 112266 -38390 Ted Harvey 115090 90670 -24420 Townbank 44022 21942 -22080 Villas 56260 32140 -24120 Woodland 125 1260 1135 As you might expect, there's a lot of variation from one beach to the next. If the difference between years is small relative to the variation within years, it would take a very large sample size to get a significant two-sample t–test comparing the means of the two years. A paired \(t\)–test just looks at the differences, so if the two sets of measurements are correlated with each other, the paired \(t\)–test will be more powerful than a two-sample \(t\)–test. For the horseshoe crabs, the \(P\) value for a two-sample \(t\)–test is \(0.110\), while the paired \(t\)–test gives a \(P\) value of \(0.045\). You can only use the paired \(t\)–test when there is just one observation for each combination of the nominal values. If you have more than one observation for each combination, you have to use two-way anova with replication. For example, if you had multiple counts of horseshoe crabs at each beach in each year, you'd have to do the two-way anova. You can only use the paired \(t\)–test when the data are in pairs. If you wanted to compare horseshoe crab abundance in 2010, 2011, and 2012, you'd have to do a two-way anova without replication. 
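To see the gain in power for yourself, here is a rough sketch of both analyses in R, with the horseshoe crab counts from the table above entered as two vectors in the same beach order (the vector names are my own, not part of the original data set).

# Horseshoe crab counts, in the beach order given in the table above.
crabs2011 <- c(35282, 359350, 45705, 49005, 68978, 8700, 18780, 13622, 24936,
               17620, 117360, 102425, 59566, 32610, 137250, 38003, 101300,
               62179, 203070, 135309, 150656, 115090, 44022, 56260, 125)
crabs2012 <- c(21814, 83500, 13290, 30150, 125190, 4620, 88926, 1205, 29800,
               53640, 68400, 74552, 36790, 4350, 110550, 43435, 20580, 81503,
               53940, 87055, 112266, 90670, 21942, 32140, 1260)

# Two-sample t-test, ignoring the pairing by beach; the P value should be
# close to the 0.110 quoted above (the exact value depends on whether you
# pool the variances).
t.test(crabs2011, crabs2012, var.equal = TRUE)

# Paired t-test, using the beach-to-beach pairing; P should be about 0.045.
t.test(crabs2011, crabs2012, paired = TRUE)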
"Paired \(t\)–test" is just a different name for "two-way anova without replication, where one nominal variable has just two values"; the results are mathematically identical. The paired design is a common one, and if all you're doing is paired designs, you should call your test the paired \(t\)–test; it will sound familiar to more people. But if some of your data sets are in pairs, and some are in sets of three or more, you should call all of your tests two-way anovas; otherwise people will think you're using two different tests. Null hypothesis The null hypothesis is that the mean difference between paired observations is zero. When the mean difference is zero, the means of the two groups must also be equal. Because of the paired design of the data, the null hypothesis of a paired \(t\)–test is usually expressed in terms of the mean difference. Assumption The paired \(t\)–test assumes that the differences between pairs are normally distributed; you can use the histogram spreadsheet described on that page to check the normality. If the differences between pairs are severely non-normal, it would be better to use the Wilcoxon signed-rank test. I don't think the test is very sensitive to deviations from normality, so unless the deviation from normality is really obvious, you shouldn't worry about it. The paired \(t\)–test does not assume that observations within each group are normal, only that the differences are normal. And it does not assume that the groups are homoscedastic. How the test works The first step in a paired \(t\)–test is to calculate the difference for each pair, as shown in the last column above. Then you use a one-sample t–test to compare the mean difference to \(0\). So the paired \(t\)–test is really just one application of the one-sample \(t\)–test, but because the paired experimental design is so common, it gets a separate name. Example Wiebe and Bortolotti (2002) examined color in the tail feathers of northern flickers. Some of the birds had one "odd" feather that was different in color or length from the rest of the tail feathers, presumably because it was regrown after being lost. They measured the yellowness of one odd feather on each of \(16\) birds and compared it with the yellowness of one typical feather from the same bird. There are two nominal variables, type of feather (typical or odd) and the individual bird, and one measurement variable, yellowness. Because these birds were from a hybrid zone between red-shafted flickers and yellow-shafted flickers, there was a lot of variation among birds in color, making a paired analysis more appropriate. The difference was significant (\(P=0.001\)), with the odd feathers significantly less yellow than the typical feathers (higher numbers are more yellow). Yellowness index Bird Typical feather Odd feather A -0.255 -0.324 B -0.213 -0.185 C -0.19 -0.299 D -0.185 -0.144 E -0.045 -0.027 F -0.025 -0.039 G -0.015 -0.264 H 0.003 -0.077 I 0.015 -0.017 J 0.02 -0.169 K 0.023 -0.096 L 0.04 -0.33 M 0.04 -0.346 N 0.05 -0.191 O 0.055 -0.128 P 0.058 -0.182 Wilder and Rypstra (2004) tested the effect of praying mantis excrement on the behavior of wolf spiders. They put \(12\) wolf spiders in individual containers; each container had two semicircles of filter paper, one semicircle that had been smeared with praying mantis excrement and one without excrement. They observed each spider for one hour, and measured its walking speed while it was on each half of the container. 
There are two nominal variables, filter paper type (with or without excrement) and the individual spider, and one measurement variable (walking speed). Different spiders may have different overall walking speeds, so a paired analysis is appropriate to test whether the presence of praying mantis excrement changes the walking speed of a spider. The mean difference in walking speed is almost, but not quite, significantly different from \(0\) (\(t=2.11,\; 11\; d.f.,\; P=0.053\)).

Graphing the results
If there are a moderate number of pairs, you could either plot each individual value on a bar graph, or plot the differences. Here is one graph in each format for the flicker data:

Related tests
The paired \(t\)–test is mathematically equivalent to one of the hypothesis tests of a two-way anova without replication. The paired \(t\)–test is simpler to perform and may sound familiar to more people. You should use two-way anova if you're interested in testing both null hypotheses (equality of means of the two treatments and equality of means of the individuals); for the horseshoe crab example, if you wanted to see whether there was variation among beaches in horseshoe crab density, you'd use two-way anova and look at both hypothesis tests. In a paired \(t\)–test, the means of individuals are so likely to be different that there's no point in testing them. If you have multiple observations for each combination of the nominal variables (such as multiple observations of horseshoe crabs on each beach in each year), you have to use two-way anova with replication. If you ignored the pairing of the data, you would use a one-way anova or a two-sample t–test. When the difference within each pair is small compared to the variation among pairs, a paired \(t\)–test can give you a lot more statistical power than a two-sample \(t\)–test, so you should use the paired test whenever your data are in pairs. One non-parametric analogue of the paired \(t\)–test is the Wilcoxon signed-rank test; you should use it if the differences are severely non-normal. A simpler and even less powerful test is the sign test, which considers only the direction of difference between pairs of observations, not the size of the difference.

How to do the test
Spreadsheet
Spreadsheets have a built-in function to perform paired \(t\)–tests. Put the "before" numbers in one column, and the "after" numbers in the adjacent column, with the before and after observations from each individual on the same row. Then enter =TTEST(array1, array2, tails, type), where array1 is the first column of data, array2 is the second column of data, tails is normally set to \(2\) for a two-tailed test, and type is set to \(1\) for a paired \(t\)–test. The result of this function is the \(P\) value of the paired \(t\)–test. Even though it's easy to do yourself, I've written a spreadsheet to do a paired t-test pairedttest.xls.

Web pages
There are several web pages that will perform paired \(t\)–tests.

R
Salvatore Mangiafico's \(R\) Companion has a sample R program for the paired t–test.

SAS
To do a paired \(t\)–test in SAS, you use PROC TTEST with the PAIRED option.
Here is an example using the feather data from above:

DATA feathers;
   INPUT bird $ typical odd;
   DATALINES;
A -0.255 -0.324
B -0.213 -0.185
C -0.190 -0.299
D -0.185 -0.144
E -0.045 -0.027
F -0.025 -0.039
G -0.015 -0.264
H  0.003 -0.077
I  0.015 -0.017
J  0.020 -0.169
K  0.023 -0.096
L  0.040 -0.330
M  0.040 -0.346
N  0.050 -0.191
O  0.055 -0.128
P  0.058 -0.182
;
PROC TTEST DATA=feathers;
   PAIRED typical*odd;
RUN;

The results include the following, which shows that the \(P\) value is \(0.0010\):

t–tests
Difference      DF    t Value    Pr > |t|
typical - odd   15       4.06      0.0010

Power analysis
To estimate the sample sizes needed to detect a mean difference that is significantly different from zero, you need the following:
• the effect size, or the mean difference. In the feather data used above, the mean difference between typical and odd feathers is \(0.137\) yellowness units.
• the standard deviation of differences. Note that this is not the standard deviation within each group. For example, in the feather data, the standard deviation of the differences is \(0.135\); this is not the standard deviation among typical feathers, or the standard deviation among odd feathers, but the standard deviation of the differences;
• alpha, or the significance level (usually \(0.05\));
• power, the probability of rejecting the null hypothesis when it is false and the true difference is equal to the effect size (\(0.80\) and \(0.90\) are common values).
As an example, let's say you want to do a study comparing the redness of typical and odd tail feathers in cardinals. The closest you can find to preliminary data is the Wiebe and Bortolotti (2002) paper on yellowness in flickers. They found a mean difference of \(0.137\) yellowness units, with a standard deviation of \(0.135\); you arbitrarily decide you want to be able to detect a mean difference of \(0.10\) redness units in your cardinals. In G*Power, choose "t tests" under Test Family and "Means: Difference between two dependent means (matched pairs)" under Statistical Test. Choose "A priori: Compute required sample size" under Type of Power Analysis. Under Input Parameters, choose the number of tails (almost always two), the alpha (usually \(0.05\)), and the power (usually something like \(0.8\) or \(0.9\)). Click on the "Determine" button and enter the effect size you want (\(0.10\) for our example) and the standard deviation of differences, then hit the "Calculate and transfer to main window" button. The result for our example is a total sample size of \(22\), meaning that if the true mean difference is \(0.10\) redness units and the standard deviation of differences is \(0.135\), you'd have a \(90\%\) chance of getting a result that's significant at the \(P<0.05\) level if you sampled typical and odd feathers from \(22\) cardinals.
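The same paired test and a similar power calculation can be sketched in R with the built-in t.test() and power.t.test() functions (the vector names are mine; the values are the feather measurements from the SAS example above).

# Paired t-test on the flicker feather data.
typical <- c(-0.255, -0.213, -0.190, -0.185, -0.045, -0.025, -0.015,  0.003,
              0.015,  0.020,  0.023,  0.040,  0.040,  0.050,  0.055,  0.058)
odd     <- c(-0.324, -0.185, -0.299, -0.144, -0.027, -0.039, -0.264, -0.077,
             -0.017, -0.169, -0.096, -0.330, -0.346, -0.191, -0.128, -0.182)

# Should reproduce the SAS result above: t = 4.06, 15 d.f., P = 0.001.
t.test(typical, odd, paired = TRUE)

# Power analysis analogous to the G*Power calculation described above:
# detect a mean difference of 0.10 with a standard deviation of differences
# of 0.135, at alpha = 0.05 and power = 0.90. The number of pairs should be
# close to the 22 reported for G*Power.
power.t.test(delta = 0.10, sd = 0.135, sig.level = 0.05, power = 0.90,
             type = "paired", alternative = "two.sided")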
Learning Objectives
• To use the Wilcoxon signed-rank test when you'd like to use the paired \(t\)–test, but the differences are severely non-normally distributed.

When to use it
Use the Wilcoxon signed-rank test when there are two nominal variables and one measurement variable. One of the nominal variables has only two values, such as "before" and "after," and the other nominal variable often represents individuals. This is the non-parametric analogue to the paired t–test, and you should use it if the distribution of differences between pairs is severely non-normally distributed. For example, Laureysens et al. (2004) measured metal content in the wood of \(13\) poplar clones growing in a polluted area, once in August and once in November. Concentrations of aluminum (in micrograms of Al per gram of wood) are shown below.

Clone             August   November   November−August
Columbia River     18.3      12.7          -5.6
Fritzi Pauley      13.3      11.1          -2.2
Hazendans          16.5      15.3          -1.2
Primo              12.6      12.7           0.1
Raspalje            9.5      10.5           1.0
Hoogvorst          13.6      15.6           2.0
Balsam Spire        8.1      11.2           3.1
Gibecq              8.9      14.2           5.3
Beaupre            10.0      16.3           6.3
Unal                8.3      15.5           7.2
Trichobel           7.9      19.9          12.0
Gaver               8.1      20.4          12.3
Wolterson          13.4      36.8          23.4

There are two nominal variables: time of year (August or November) and poplar clone (Columbia River, Fritzi Pauley, etc.), and one measurement variable (micrograms of aluminum per gram of wood). The differences are somewhat skewed; the Wolterson clone, in particular, has a much larger difference than any other clone. To be safe, the authors analyzed the data using a Wilcoxon signed-rank test, and I'll use it as the example.

Null hypothesis
The null hypothesis is that the median difference between pairs of observations is zero. Note that this is different from the null hypothesis of the paired \(t\)–test, which is that the mean difference between pairs is zero, or the null hypothesis of the sign test, which is that the numbers of differences in each direction are equal.

How the test works
Rank the absolute value of the differences between observations from smallest to largest, with the smallest difference getting a rank of \(1\), the next larger difference getting a rank of \(2\), etc. Give average ranks to ties. Add the ranks of all differences in one direction, then add the ranks of all differences in the other direction. The smaller of these two sums is the test statistic, \(W\) (sometimes symbolized \(T_s\)). Unlike most test statistics, smaller values of \(W\) are less likely under the null hypothesis. For the aluminum in wood example, the median change from August to November (\(3.1\) micrograms Al/g wood) is significantly different from zero (\(W=16,\; P=0.040\)).

Example
Buchwalder and Huber-Eicher (2004) wanted to know whether turkeys would be less aggressive towards unfamiliar individuals if they were housed in larger pens. They tested \(10\) groups of three turkeys that had been reared together, introducing an unfamiliar turkey and then counting the number of times it was pecked during the test period. Each group of turkeys was tested in a small pen and in a large pen. There are two nominal variables, size of pen (small or large) and the group of turkeys, and one measurement variable (number of pecks per test). The median difference between the number of pecks per test in the small pen vs. the large pen was significantly greater than zero (\(W=10,\; P=0.04\)).
Ho et al. (2004) inserted a plastic implant into the soft palate of \(12\) chronic snorers to see if it would reduce the volume of snoring.
Snoring loudness was judged by the sleeping partner of the snorer on a subjective \(10\)-point scale. There are two nominal variables, time (before the operation or after the operation) and individual snorer, and one measurement variable (loudness of snoring). One person left the study, and the implant fell out of the palate in two people; in the remaining nine people, the median change in snoring volume was significantly different from zero (\(W=0,\; P=0.008\)).

Graphing the results
You should graph the data for a Wilcoxon signed-rank test the same way you would graph the data for a paired t–test, a bar graph with either the values side-by-side for each pair, or the differences at each pair.

Similar tests
You can analyze paired observations of a measurement variable using a paired t–test, if the null hypothesis is that the mean difference between pairs of observations is zero and the differences are normally distributed. If you have a large number of paired observations, you can plot a histogram of the differences to see if they look normally distributed. The paired \(t\)–test isn't very sensitive to non-normal data, so the deviation from normality has to be pretty dramatic to make the paired \(t\)–test inappropriate. Use the sign test when the null hypothesis is that there are equal numbers of differences in each direction, and you don't care about the size of the differences.

How to do the test
Spreadsheet
I have prepared a spreadsheet to do the Wilcoxon signed-rank test signedrank.xls. It will handle up to \(1000\) pairs of observations.

Web pages
There is a web page that will perform the Wilcoxon signed-rank test. You may enter your paired numbers directly onto the web page; it will be easier if you enter them into a spreadsheet first, then copy them and paste them into the web page.

R
Salvatore Mangiafico's \(R\) Companion has a sample R program for the Wilcoxon signed-rank test.

SAS
To do a Wilcoxon signed-rank test in SAS, you first create a new variable that is the difference between the two observations. You then run PROC UNIVARIATE on the difference, which automatically does the Wilcoxon signed-rank test along with several others. Here's an example using the poplar data from above:

DATA POPLARS;
   INPUT clone $ augal noval;
   diff=augal - noval;
   DATALINES;
Balsam_Spire    8.1 11.2
Beaupre        10.0 16.3
Hazendans      16.5 15.3
Hoogvorst      13.6 15.6
Raspalje        9.5 10.5
Unal            8.3 15.5
Columbia_River 18.3 12.7
Fritzi_Pauley  13.3 11.1
Trichobel       7.9 19.9
Gaver           8.1 20.4
Gibecq          8.9 14.2
Primo          12.6 12.7
Wolterson      13.4 36.8
;
PROC UNIVARIATE DATA=poplars;
   VAR diff;
RUN;

PROC UNIVARIATE returns a bunch of descriptive statistics that you don't need; the result of the Wilcoxon signed-rank test is shown in the row labeled "Signed rank":

Tests for Location: Mu0=0

Test           -Statistic-       -----p Value------
Student's t    t    -2.3089      Pr > |t|     0.0396
Sign           M    -3.5         Pr >= |M|    0.0923
Signed Rank    S    -29.5        Pr >= |S|    0.0398
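In R, the Wilcoxon signed-rank test is a one-line call to the built-in wilcox.test() function; here is a sketch using the poplar aluminum data (the vector names are mine, and the values are in the same clone order as the table above).

# Aluminum concentrations in the 13 poplar clones, August and November.
august   <- c(18.3, 13.3, 16.5, 12.6,  9.5, 13.6,  8.1,  8.9, 10.0,  8.3,
               7.9,  8.1, 13.4)
november <- c(12.7, 11.1, 15.3, 12.7, 10.5, 15.6, 11.2, 14.2, 16.3, 15.5,
              19.9, 20.4, 36.8)

# Paired (signed-rank) test; V is the sum of the ranks of the positive
# differences, and the result should match the W = 16, P = 0.040 given above.
wilcox.test(august, november, paired = TRUE)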
• 5.1: Linear Regression and Correlation Use correlation/linear regression when you have two measurement variables, such as food intake and weight, drug dosage and blood pressure, air temperature and metabolic rate, etc. There's also one nominal variable that keeps the two measurements together in pairs, such as the name of an individual organism, experimental trial, or location. I'm not aware that anyone else considers this nominal variable to be part of correlation and regression. • 5.2: Spearman Rank Correlation Use Spearman rank correlation when you have two ranked variables, and you want to see whether the two variables covary; whether, as one variable increases, the other variable tends to increase or decrease. You also use Spearman rank correlation if you have one measurement variable and one ranked variable; in this case, you convert the measurement variable to ranks and use Spearman rank correlation on the two sets of ranks. • 5.3: Curvilinear (Nonlinear) Regression Sometimes, when you analyze data with correlation and linear regression, you notice that the relationship between the independent (X) variable and dependent (Y) variable looks like it follows a curved line, not a straight line. In that case, the linear regression line will not be very good for describing and predicting the relationship, and the P value may not be an accurate test of the null hypothesis that the variables are not associated. • 5.4: Analysis of Covariance Use analysis of covariance (ancova) when you have two measurement variables and one nominal variable. The nominal variable divides the regressions into two or more sets. • 5.5: Multiple Regression Use multiple regression when you have three or more measurement variables. One of the measurement variables is the dependent (Y) variable. The rest of the variables are the independent (X) variables; you think they may have an effect on the dependent variable. The purpose of a multiple regression is to find an equation that best predicts the Y variable as a linear function of the X variables. • 5.6: Simple Logistic Regression Use simple logistic regression when you have one nominal variable with two values (male/female, dead/alive, etc.) and one measurement variable. The nominal variable is the dependent variable, and the measurement variable is the independent variable. I'm separating simple logistic regression, with only one independent variable, from multiple logistic regression, which has more than one independent variable. • 5.7: Multiple Logistic Regression Use multiple logistic regression when you have one nominal and two or more measurement variables. The nominal variable is the dependent (Y) variable; you are studying the effect that the independent (X) variables have on the probability of obtaining a particular value of the dependent variable. For example, you might want to know the effect that blood pressure, age, and weight have on the probability that a person will have a heart attack in the next year. 05: Tests for Multiple Measurement Variables Learning Objectives • To use linear regression or correlation when you want to know whether one measurement variable is associated with another measurement variable; you want to measure the strength of the association ($r^2$); or you want an equation that describes the relationship and can be used to predict unknown values. One of the most common graphs in science plots one measurement variable on the $x$ (horizontal) axis vs. another on the $y$ (vertical) axis. For example, here are two graphs. 
For the first, I dusted off the elliptical machine in our basement and measured my pulse after one minute of ellipticizing at various speeds: Speed, kph Pulse, bpm 0 57 1.6 69 3.1 78 4 80 5 85 6 87 6.9 90 7.7 92 8.7 97 12.4 108 15.3 119 For the second graph, I dusted off some data from McDonald (1989): I collected the amphipod crustacean Platorchestia platensis on a beach near Stony Brook, Long Island, in April, 1987, removed and counted the number of eggs each female was carrying, then freeze-dried and weighed the mothers: Weight, mg Eggs 5.38 29 7.36 23 6.13 22 4.75 20 8.10 25 8.62 25 6.30 17 7.44 24 7.26 20 7.17 27 7.78 24 6.23 21 5.42 22 7.87 22 5.25 23 7.37 35 8.01 27 4.92 23 7.03 25 6.45 24 5.06 19 6.72 21 7.00 20 9.39 33 6.49 17 6.34 21 6.16 25 5.74 22 There are three things you can do with this kind of data. One is a hypothesis test, to see if there is an association between the two variables; in other words, as the $X$ variable goes up, does the $Y$ variable tend to change (up or down). For the exercise data, you'd want to know whether pulse rate was significantly higher with higher speeds. The $P$ value is $1.3\times 10^{-8}$, but the relationship is so obvious from the graph, and so biologically unsurprising (of course my pulse rate goes up when I exercise harder!), that the hypothesis test wouldn't be a very interesting part of the analysis. For the amphipod data, you'd want to know whether bigger females had more eggs or fewer eggs than smaller amphipods, which is neither biologically obvious nor obvious from the graph. It may look like a random scatter of points, but there is a significant relationship ($P=0.015)$. The second goal is to describe how tightly the two variables are associated. This is usually expressed with $r$, which ranges from $-1$ to $1$, or $r^2$, which ranges from $0$ to $1$. For the exercise data, there's a very tight relationship, as shown by the $r^2$ of $0.98$; this means that if you knew my speed on the elliptical machine, you'd be able to predict my pulse quite accurately. The $r^2$ for the amphipod data is a lot lower, at $0.21$; this means that even though there's a significant relationship between female weight and number of eggs, knowing the weight of a female wouldn't let you predict the number of eggs she had with very much accuracy. The final goal is to determine the equation of a line that goes through the cloud of points. The equation of a line is given in the form $\hat{Y}=a+bX$, where $\hat{Y}$ is the value of $Y$ predicted for a given value of $X$, a is the $Y$ intercept (the value of $Y$ when $X$ is zero), and $b$ is the slope of the line (the change in $\hat{Y}$ for a change in $X$ of one unit). For the exercise data, the equation is $\hat{Y}=63.5+3.75X$; this predicts that my pulse would be $63.5$ when the speed of the elliptical machine is $0 kph$, and my pulse would go up by $3.75$ beats per minute for every $1 kph$ increase in speed. This is probably the most useful part of the analysis for the exercise data; if I wanted to exercise with a particular level of effort, as measured by pulse rate, I could use the equation to predict the speed I should use. For the amphipod data, the equation is $\hat{Y}=12.7+1.60X$. For most purposes, just knowing that bigger amphipods have significantly more eggs (the hypothesis test) would be more interesting than knowing the equation of the line, but it depends on the goals of your experiment. 
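All three results (the $P$ value, the $r^2$, and the equation of the line) come out of a single analysis. Here is a rough sketch of how you might get them in R with the built-in lm() function, using the exercise data above (the variable names are my own, not part of the original data set).

# Linear regression of pulse on speed for the elliptical machine data above.
speed <- c(0, 1.6, 3.1, 4, 5, 6, 6.9, 7.7, 8.7, 12.4, 15.3)
pulse <- c(57, 69, 78, 80, 85, 87, 90, 92, 97, 108, 119)

fit <- lm(pulse ~ speed)
summary(fit)   # intercept and slope, r-squared, and the P value for the slope

# The fitted line should be close to pulse = 63.5 + 3.75 * speed, with an
# r-squared of about 0.98.
predict(fit, data.frame(speed = 10))   # predicted pulse at 10 kph

# cor.test(speed, pulse) gives the same P value, framed as a correlation.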
When to use them Use correlation/linear regression when you have two measurement variables, such as food intake and weight, drug dosage and blood pressure, air temperature and metabolic rate, etc. There's also one nominal variable that keeps the two measurements together in pairs, such as the name of an individual organism, experimental trial, or location. I'm not aware that anyone else considers this nominal variable to be part of correlation and regression, and it's not something you need to know the value of—you could indicate that a food intake measurement and weight measurement came from the same rat by putting both numbers on the same line, without ever giving the rat a name. For that reason, I'll call it a "hidden" nominal variable. The main value of the hidden nominal variable is that it lets me make the blanket statement that any time you have two or more measurements from a single individual (organism, experimental trial, location, etc.), the identity of that individual is a nominal variable; if you only have one measurement from an individual, the individual is not a nominal variable. I think this rule helps clarify the difference between one-way, two-way, and nested anova. If the idea of hidden nominal variables in regression confuses you, you can ignore it. There are three main goals for correlation and regression in biology. One is to see whether two measurement variables are associated with each other; whether as one variable increases, the other tends to increase (or decrease). You summarize this test of association with the $P$ value. In some cases, this addresses a biological question about cause-and-effect relationships; a significant association means that different values of the independent variable cause different values of the dependent. An example would be giving people different amounts of a drug and measuring their blood pressure. The null hypothesis would be that there was no relationship between the amount of drug and the blood pressure. If you reject the null hypothesis, you would conclude that the amount of drug causes the changes in blood pressure. In this kind of experiment, you determine the values of the independent variable; for example, you decide what dose of the drug each person gets. The exercise and pulse data are an example of this, as I determined the speed on the elliptical machine, then measured the effect on pulse rate. In other cases, you want to know whether two variables are associated, without necessarily inferring a cause-and-effect relationship. In this case, you don't determine either variable ahead of time; both are naturally variable and you measure both of them. If you find an association, you infer that variation in $X$ may cause variation in $Y$, or variation in $Y$ may cause variation in $X$, or variation in some other factor may affect both $Y$ and $X$. An example would be measuring the amount of a particular protein on the surface of some cells and the pH of the cytoplasm of those cells. If the protein amount and pH are correlated, it may be that the amount of protein affects the internal pH; or the internal pH affects the amount of protein; or some other factor, such as oxygen concentration, affects both protein concentration and pH. 
Often, a significant correlation suggests further experiments to test for a cause and effect relationship; if protein concentration and pH were correlated, you might want to manipulate protein concentration and see what happens to pH, or manipulate pH and measure protein, or manipulate oxygen and see what happens to both. The amphipod data are another example of this; it could be that being bigger causes amphipods to have more eggs, or that having more eggs makes the mothers bigger (maybe they eat more when they're carrying more eggs?), or some third factor (age? food intake?) makes amphipods both larger and have more eggs. The second goal of correlation and regression is estimating the strength of the relationship between two variables; in other words, how close the points on the graph are to the regression line. You summarize this with the $r^2$ value. For example, let's say you've measured air temperature (ranging from $15^{\circ}C$ to $30^{\circ}C$) and running speed in the lizard Agama savignyi, and you find a significant relationship: warmer lizards run faster. You would also want to know whether there's a tight relationship (high $r^2$), which would tell you that air temperature is the main factor affecting running speed; if the $r^2$ is low, it would tell you that other factors besides air temperature are also important, and you might want to do more experiments to look for them. You might also want to know how the $r^2$ for Agama savignyi compared to that for other lizard species, or for Agama savignyi under different conditions. The third goal of correlation and regression is finding the equation of a line that fits the cloud of points. You can then use this equation for prediction. For example, if you have given volunteers diets with $500 mg$ to $2500 mg$ of salt per day, and then measured their blood pressure, you could use the regression line to estimate how much a person's blood pressure would go down if they ate $500 mg$ less salt per day. Correlation versus Linear Regression The statistical tools used for hypothesis testing, describing the closeness of the association, and drawing a line through the points, are correlation and linear regression. Unfortunately, I find the descriptions of correlation and regression in most textbooks to be unnecessarily confusing. Some statistics textbooks have correlation and linear regression in separate chapters, and make it seem as if it is always important to pick one technique or the other. I think this overemphasizes the differences between them. Other books muddle correlation and regression together without really explaining what the difference is. There are real differences between correlation and linear regression, but fortunately, they usually don't matter. Correlation and linear regression give the exact same $P$ value for the hypothesis test, and for most biological experiments, that's the only really important result. So if you're mainly interested in the $P$ value, you don't need to worry about the difference between correlation and regression. For the most part, I'll treat correlation and linear regression as different aspects of a single analysis, and you can consider correlation/linear regression to be a single statistical test. Be aware that my approach is probably different from what you'll see elsewhere. The main difference between correlation and regression is that in correlation, you sample both measurement variables randomly from a population, while in regression you choose the values of the independent ($X$) variable. 
For example, let's say you're a forensic anthropologist, interested in the relationship between foot length and body height in humans. If you find a severed foot at a crime scene, you'd like to be able to estimate the height of the person it was severed from. You measure the foot length and body height of a random sample of humans, get a significant $P$ value, and calculate $r^2$ to be $0.72$. This is a correlation, because you took measurements of both variables on a random sample of people. The $r^2$ is therefore a meaningful estimate of the strength of the association between foot length and body height in humans, and you can compare it to other $r^2$ values. You might want to see if the $r^2$ for feet and height is larger or smaller than the $r^2$ for hands and height, for example. As an example of regression, let's say you've decided forensic anthropology is too disgusting, so now you're interested in the effect of air temperature on running speed in lizards. You put some lizards in a temperature chamber set to $10^{\circ}C$, chase them, and record how fast they run. You do the same for $10$ different temperatures, ranging up to $30^{\circ}C$. This is a regression, because you decided which temperatures to use. You'll probably still want to calculate $r^2$, just because high values are more impressive. But it's not a very meaningful estimate of anything about lizards. This is because the $r^2$ depends on the values of the independent variable that you chose. For the exact same relationship between temperature and running speed, a narrower range of temperatures would give a smaller $r^2$. Here are three graphs showing some simulated data, with the same scatter (standard deviation) of $Y$ values at each value of $X$. As you can see, with a narrower range of $X$ values, the $r^2$ gets smaller. If you did another experiment on humidity and running speed in your lizards and got a lower $r^2$, you couldn't say that running speed is more strongly associated with temperature than with humidity; if you had chosen a narrower range of temperatures and a broader range of humidities, humidity might have had a larger $r^2$ than temperature. If you try to classify every experiment as either regression or correlation, you'll quickly find that there are many experiments that don't clearly fall into one category. For example, let's say that you study air temperature and running speed in lizards. You go out to the desert every Saturday for the eight months of the year that your lizards are active, measure the air temperature, then chase lizards and measure their speed. You haven't deliberately chosen the air temperature, just taken a sample of the natural variation in air temperature, so is it a correlation? But you didn't take a sample of the entire year, just those eight months, and you didn't pick days at random, just Saturdays, so is it a regression? If you are mainly interested in using the $P$ value for hypothesis testing, to see whether there is a relationship between the two variables, it doesn't matter whether you call the statistical test a regression or correlation. If you are interested in comparing the strength of the relationship ($r^2$) to the strength of other relationships, you are doing a correlation and should design your experiment so that you measure $X$ and $Y$ on a random sample of individuals. 
If you determine the $X$ values before you do the experiment, you are doing a regression and shouldn't interpret the $r^2$ as an estimate of something general about the population you've observed. Correlation and Causation You have probably heard people warn you, "Correlation does not imply causation." This is a reminder that when you are sampling natural variation in two variables, there is also natural variation in a lot of possible confounding variables that could cause the association between $A$ and $B$. So if you see a significant association between $A$ and $B$, it doesn't necessarily mean that variation in $A$ causes variation in $B$; there may be some other variable, $C$, that affects both of them. For example, let's say you went to an elementary school, found $100$ random students, measured how long it took them to tie their shoes, and measured the length of their thumbs. I'm pretty sure you'd find a strong association between the two variables, with longer thumbs associated with shorter shoe-tying times. I'm sure you could come up with a clever, sophisticated biomechanical explanation for why having longer thumbs causes children to tie their shoes faster, complete with force vectors and moment angles and equations and $3-D$ modeling. However, that would be silly; your sample of $100$ random students has natural variation in another variable, age, and older students have bigger thumbs and take less time to tie their shoes. So what if you make sure all your student volunteers are the same age, and you still see a significant association between shoe-tying time and thumb length; would that correlation imply causation? No, because think of why different children have different length thumbs. Some people are genetically larger than others; could the genes that affect overall size also affect fine motor skills? Maybe. Nutrition affects size, and family economics affects nutrition; could poor children have smaller thumbs due to poor nutrition, and also have slower shoe-tying times because their parents were too overworked to teach them to tie their shoes, or because they were so poor that they didn't get their first shoes until they reached school age? Maybe. I don't know, maybe some kids spend so much time sucking their thumb that the thumb actually gets longer, and having a slimy spit-covered thumb makes it harder to grip a shoelace. But there would be multiple plausible explanations for the association between thumb length and shoe-tying time, and it would be incorrect to conclude "Longer thumbs make you tie your shoes faster." Since it's possible to think of multiple explanations for an association between two variables, does that mean you should cynically sneer "Correlation does not imply causation!" and dismiss any correlation studies of naturally occurring variation? No. For one thing, observing a correlation between two variables suggests that there's something interesting going on, something you may want to investigate further. For example, studies have shown a correlation between eating more fresh fruits and vegetables and lower blood pressure. It's possible that the correlation is because people with more money, who can afford fresh fruits and vegetables, have less stressful lives than poor people, and it's the difference in stress that affects blood pressure; it's also possible that people who are concerned about their health eat more fruits and vegetables and exercise more, and it's the exercise that affects blood pressure. 
But the correlation suggests that eating fruits and vegetables may reduce blood pressure. You'd want to test this hypothesis further, by looking for the correlation in samples of people with similar socioeconomic status and levels of exercise; by statistically controlling for possible confounding variables using techniques such as multiple regression; by doing animal studies; or by giving human volunteers controlled diets with different amounts of fruits and vegetables. If your initial correlation study hadn't found an association of blood pressure with fruits and vegetables, you wouldn't have a reason to do these further studies. Correlation may not imply causation, but it tells you that something interesting is going on. In a regression study, you set the values of the independent variable, and you control or randomize all of the possible confounding variables. For example, if you are investigating the relationship between blood pressure and fruit and vegetable consumption, you might think that it's the potassium in the fruits and vegetables that lowers blood pressure. You could investigate this by getting a bunch of volunteers of the same sex, age, and socioeconomic status. You randomly choose the potassium intake for each person, give them the appropriate pills, have them take the pills for a month, then measure their blood pressure. All of the possible confounding variables are either controlled (age, sex, income) or randomized (occupation, psychological stress, exercise, diet), so if you see an association between potassium intake and blood pressure, the only possible cause would be that potassium affects blood pressure. So if you've designed your experiment correctly, regression does imply causation. Null Hypothesis The null hypothesis of correlation/linear regression is that the slope of the best-fit line is equal to zero; in other words, as the $X$ variable gets larger, the associated $Y$ variable gets neither higher nor lower. It is also possible to test the null hypothesis that the $Y$ value predicted by the regression equation for a given value of $X$ is equal to some theoretical expectation; the most common would be testing the null hypothesis that the $Y$ intercept is $0$. This is rarely necessary in biological experiments, so I won't cover it here, but be aware that it is possible. Independent vs. dependent variables When you are testing a cause-and-effect relationship, the variable that causes the relationship is called the independent variable and you plot it on the $X$ axis, while the effect is called the dependent variable and you plot it on the $Y$ axis. In some experiments you set the independent variable to values that you have chosen; for example, if you're interested in the effect of temperature on calling rate of frogs, you might put frogs in temperature chambers set to $10^{\circ}C$, $15^{\circ}C$, $20^{\circ}C$, etc. In other cases, both variables exhibit natural variation, but any cause-and-effect relationship would be in one way; if you measure the air temperature and frog calling rate at a pond on several different nights, both the air temperature and the calling rate would display natural variation, but if there's a cause-and-effect relationship, it's temperature affecting calling rate; the rate at which frogs call does not affect the air temperature. Sometimes it's not clear which is the independent variable and which is the dependent, even if you think there may be a cause-and-effect relationship. 
For example, if you are testing whether salt content in food affects blood pressure, you might measure the salt content of people's diets and their blood pressure, and treat salt content as the independent variable. But if you were testing the idea that high blood pressure causes people to crave high-salt foods, you'd make blood pressure the independent variable and salt intake the dependent variable. Sometimes, you're not looking for a cause-and-effect relationship at all, you just want to see if two variables are related. For example, if you measure the range-of-motion of the hip and the shoulder, you're not trying to see whether more flexible hips cause more flexible shoulders, or more flexible shoulders cause more flexible hips; instead, you're just trying to see if people with more flexible hips also tend to have more flexible shoulders, presumably due to some factor (age, diet, exercise, genetics) that affects overall flexibility. In this case, it would be completely arbitrary which variable you put on the $X$ axis and which you put on the $Y$ axis. Fortunately, the $P$ value and the $r^2$ are not affected by which variable you call the $X$ and which you call the $Y$; you'll get mathematically identical values either way. The least-squares regression line does depend on which variable is the $X$ and which is the $Y$; the two lines can be quite different if the $r^2$ is low. If you're truly interested only in whether the two variables covary, and you are not trying to infer a cause-and-effect relationship, you may want to avoid using the linear regression line as decoration on your graph. Researchers in a few fields traditionally put the independent variable on the $Y$ axis. Oceanographers, for example, often plot depth on the $Y$ axis (with $0$ at the top) and a variable that is directly or indirectly affected by depth, such as chlorophyll concentration, on the $X$ axis. I wouldn't recommend this unless it's a really strong tradition in your field, as it could lead to confusion about which variable you're considering the independent variable in a linear regression. Regression Line Linear regression finds the line that best fits the data points. There are actually a number of different definitions of "best fit," and therefore a number of different methods of linear regression that fit somewhat different lines. By far the most common is "ordinary least-squares regression"; when someone just says "least-squares regression" or "linear regression" or "regression," they mean ordinary least-squares regression. In ordinary least-squares regression, the "best" fit is defined as the line that minimizes the squared vertical distances between the data points and the line. For a data point with an $X$ value of $X_1$ and a $Y$ value of $Y_1$, the difference between $Y_1$ and $\hat{Y_1}$ (the predicted value of $Y$ at $X_1$) is calculated, then squared. This squared deviate is calculated for each data point, and the sum of these squared deviates measures how well a line fits the data. The regression line is the one for which this sum of squared deviates is smallest. I'll leave out the math that is used to find the slope and intercept of the best-fit line; you're a biologist and have more important things to think about. The equation for the regression line is usually expressed as $\hat{Y}=a+bX$, where $a$ is the $Y$ intercept and $b$ is the slope. Once you know $a$ and $b$, you can use this equation to predict the value of $Y$ for a given value of $X$. 
For example, the equation for the heart rate-speed experiment is $\text{rate}=63.357+3.749\times \text{speed}$. I could use this to predict that for a speed of $10 kph$, my heart rate would be $100.8 bpm$. You should do this kind of prediction within the range of $X$ values found in the original data set (interpolation). Predicting $Y$ values outside the range of observed values (extrapolation) is sometimes interesting, but it can easily yield ridiculous results if you go far outside the observed range of $X$. In the frog example below, you could mathematically predict that the inter-call interval would be about $16$ seconds at $-40^{\circ}C$. Actually, the inter-call interval would be infinite at that temperature, because all the frogs would be frozen solid.
Sometimes you want to predict $X$ from $Y$. The most common use of this is constructing a standard curve. For example, you might weigh some dry protein and dissolve it in water to make solutions containing $0,\; 100,\; 200…1000\; µg$ protein per $ml$, add some reagents that turn color in the presence of protein, then measure the light absorbance of each solution using a spectrophotometer. Then when you have a solution with an unknown concentration of protein, you add the reagents, measure the light absorbance, and estimate the concentration of protein in the solution.
There are two common methods to estimate $X$ from $Y$. One way is to do the usual regression with $X$ as the independent variable and $Y$ as the dependent variable; for the protein example, you'd have protein as the independent variable and absorbance as the dependent variable. You get the usual equation, $\hat{Y}=a+bX$, then rearrange it to solve for $X$, giving you $\hat{X}=\frac{(Y-a)}{b}$. This is called "classical estimation." The other method is to do linear regression with $Y$ as the independent variable and $X$ as the dependent variable, also known as regressing $X$ on $Y$. For the protein standard curve, you would do a regression with absorbance as the $X$ variable and protein concentration as the $Y$ variable. You then use this regression equation to predict unknown values of $X$ from $Y$. This is known as "inverse estimation."
Several simulation studies have suggested that inverse estimation gives a more accurate estimate of $X$ than classical estimation (Krutchkoff 1967, Krutchkoff 1969, Lwin and Maritz 1982, Kannan et al. 2007), so that is what I recommend. However, some statisticians prefer classical estimation (Sokal and Rohlf 1995, pp. 491-493). If the $r^2$ is high (the points are close to the regression line), the difference between classical estimation and inverse estimation is pretty small. When you're constructing a standard curve for something like protein concentration, the $r^2$ is usually so high that the difference between classical and inverse estimation will be trivial. But the two methods can give quite different estimates of $X$ when the original points were scattered around the regression line. For the exercise and pulse data, with an $r^2$ of $0.98$, classical estimation predicts that to get a pulse of $100 bpm$, I should run at $9.8 kph$, while inverse estimation predicts a speed of $9.7 kph$. The amphipod data has a much lower $r^2$ of $0.25$, so the difference between the two techniques is bigger; if I want to know what size amphipod would have $30$ eggs, classical estimation predicts a size of $10.8 mg$, while inverse estimation predicts a size of $7.5 mg$.
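Here is a sketch of both approaches in R, using the exercise data (the variable names are my own); it should reproduce the $9.8 kph$ and $9.7 kph$ estimates given above.

# Estimating X (speed) from Y (pulse): classical vs. inverse estimation.
speed <- c(0, 1.6, 3.1, 4, 5, 6, 6.9, 7.7, 8.7, 12.4, 15.3)
pulse <- c(57, 69, 78, 80, 85, 87, 90, 92, 97, 108, 119)

# Classical estimation: regress Y on X, then rearrange Y = a + bX to solve for X.
fit.classical <- lm(pulse ~ speed)
a <- coef(fit.classical)[1]     # Y intercept
b <- coef(fit.classical)[2]     # slope
(100 - a) / b                   # speed for a pulse of 100 bpm; about 9.8 kph

# Inverse estimation: regress X on Y, then predict X directly from Y.
fit.inverse <- lm(speed ~ pulse)
predict(fit.inverse, data.frame(pulse = 100))   # about 9.7 kph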
Sometimes your goal in drawing a regression line is not predicting $Y$ from $X$, or predicting $X$ from $Y$, but instead describing the relationship between two variables. If one variable is the independent variable and the other is the dependent variable, you should use the least-squares regression line. However, if there is no cause-and-effect relationship between the two variables, the least-squares regression line is inappropriate. This is because you will get two different lines, depending on which variable you pick to be the independent variable. For example, if you want to describe the relationship between thumb length and big toe length, you would get one line if you made thumb length the independent variable, and a different line if you made big-toe length the independent variable. The choice would be completely arbitrary, as there is no reason to think that thumb length causes variation in big-toe length, or vice versa.

A number of different lines have been proposed to describe the relationship between two variables with a symmetrical relationship (where neither is the independent variable). The most common method is reduced major axis regression (also known as standard major axis regression or geometric mean regression). It gives a line that is intermediate in slope between the least-squares regression line of $Y$ on $X$ and the least-squares regression line of $X$ on $Y$; in fact, the slope of the reduced major axis line is the geometric mean of the slopes of the two least-squares regression lines. While reduced major axis regression gives a line that is in some ways a better description of the symmetrical relationship between two variables (McArdle 2003, Smith 2009), you should keep two things in mind. One is that you shouldn't use the reduced major axis line for predicting values of $X$ from $Y$, or $Y$ from $X$; you should still use least-squares regression for prediction. The other thing to know is that you cannot test the null hypothesis that the slope of the reduced major axis line is zero, because it is mathematically impossible to have a reduced major axis slope that is exactly zero. Even if your graph shows a reduced major axis line, your $P$ value is the test of the null hypothesis that the least-squares regression line has a slope of zero.

Coefficient of determination ($r^2$)

The coefficient of determination, or $r^2$, expresses the strength of the relationship between the $X$ and $Y$ variables. It is the proportion of the variation in the $Y$ variable that is "explained" by the variation in the $X$ variable. $r^2$ can vary from $0$ to $1$; values near $1$ mean the $Y$ values fall almost right on the regression line, while values near $0$ mean there is very little relationship between $X$ and $Y$. As you can see, regressions can have a small $r^2$ and not look like there's any relationship, yet they still might have a slope that's significantly different from zero.

To illustrate the meaning of $r^2$, here are six pairs of $X$ and $Y$ values:

X    Y    Deviate from mean    Squared deviate
1    2            8                  64
3    9            1                   1
5    9            1                   1
6   11            1                   1
7   14            4                  16
9   15            5                  25
                       sum of squares: 108

If you didn't know anything about the $X$ value and were told to guess what a $Y$ value was, your best guess would be the mean $Y$; for this example, the mean $Y$ is $10$. The sum of the squared deviates of the $Y$ values from their mean is the total sum of squares, familiar from analysis of variance. The vertical lines on the left graph below show the deviates from the mean; the first point has a deviate of $8$, so its squared deviate is $64$, etc.
The total sum of squares for these numbers is $64+1+1+1+16+25=108$. If you did know the $X$ value and were told to guess what a $Y$ value was, you'd calculate the regression equation and use it. The regression equation for these numbers is $\hat{Y}=2.0286+1.5429X$, so for the first $X$ value you'd predict a $Y$ value of $2.0286+1.5429\times 1=3.5715$, etc. The vertical lines on the right graph above show the deviates of the actual $Y$ values from the predicted $\hat{Y}$ values. As you can see, most of the points are closer to the regression line than they are to the overall mean. Squaring these deviates and taking the sum gives us the regression sum of squares, which for these numbers is $10.8$. X Y Predicted Y value Deviate from predicted Squared deviate 1 2 3.57 1.57 2.46 3 9 6.66 2.34 5.48 5 9 9.74 0.74 0.55 6 11 11.29 0.29 0.08 7 14 12.83 1.17 1.37 9 15 15.91 0.91 0.83 Regression sum of squares: 10.8 The regression sum of squares is $10.8$, which is $90\%$ smaller than the total sum of squares ($108$). This difference between the two sums of squares, expressed as a fraction of the total sum of squares, is the definition of $r^2$. In this case we would say that $r^2=0.90$; the $X$ variable "explains" $90\%$ of the variation in the $Y$ variable. The $r^2$ value is formally known as the "coefficient of determination," although it is usually just called $r^2$. The square root of $r^2$, with a negative sign if the slope is negative, is the Pearson product-moment correlation coefficient, $r$, or just "correlation coefficient." You can use either $r$ or $r^2$ to describe the strength of the association between two variables. I prefer $r^2$, because it is used more often in my area of biology, it has a more understandable meaning (the proportional difference between total sum of squares and regression sum of squares), and it doesn't have those annoying negative values. You should become familiar with the literature in your field and use whichever measure is most common. One situation where r is more useful is if you have done linear regression/correlation for multiple sets of samples, with some having positive slopes and some having negative slopes, and you want to know whether the mean correlation coefficient is significantly different from zero; see McDonald and Dunn (2013) for an application of this idea. Test statistic The test statistic for a linear regression is $t_s=\frac{\sqrt{d.f.}\times r^2}{\sqrt{(1-r^2)}}$. It gets larger as the degrees of freedom ($n-2$) get larger or the $r^2$ gets larger. Under the null hypothesis, the test statistic is $t$-distributed with $n-2$ degrees of freedom. When reporting the results of a linear regression, most people just give the r2 and degrees of freedom, not the $t_s$ value. Anyone who really needs the $t_s$ value can calculate it from the $r^2$ and degrees of freedom. For the heart rate–speed data, the $r^2$ is $0.976$ and there are $9$ degrees of freedom, so the $t_s$-statistic is $19.2$. It is significant ($P=1.3\times 10^{-8}$). Some people square $t_s$ and get an $F$-statistic with $1$ degree of freedom in the numerator and $n-2$ degrees of freedom in the denominator. The resulting $P$ value is mathematically identical to that calculated with $t_s$. Because the P value is a function of both the $r^2$ and the sample size, you should not use the $P$ value as a measure of the strength of association. 
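If you want to check the arithmetic in this example yourself, here is a minimal R sketch using the six $(X, Y)$ pairs above; everything here is base R.

x <- c(1, 3, 5, 6, 7, 9)
y <- c(2, 9, 9, 11, 14, 15)
fit <- lm(y ~ x)
coef(fit)                      # intercept about 2.03 and slope about 1.54, as given above
sum((y - mean(y))^2)           # total sum of squares: 108
sum(resid(fit)^2)              # squared deviates from the line: about 10.8
r2 <- summary(fit)$r.squared   # r^2, about 0.90
sqrt(4 * r2 / (1 - r2))        # the t_s test statistic, with n - 2 = 4 degrees of freedom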
If the correlation of $A$ and $B$ has a smaller $P$ value than the correlation of $A$ and $C$, it doesn't necessarily mean that $A$ and $B$ have a stronger association; it could just be that the data set for the $A$–$B$ experiment was larger. If you want to compare the strength of association of different data sets, you should use $r$ or $r^2$.

Assumptions

Normality and homoscedasticity

Two assumptions, similar to those for anova, are that for any value of $X$, the $Y$ values will be normally distributed and they will be homoscedastic. Although you will rarely have enough data to test these assumptions, they are often violated. Fortunately, numerous simulation studies have shown that regression and correlation are quite robust to deviations from normality; this means that even if one or both of the variables are non-normal, the $P$ value will be less than $0.05$ about $5\%$ of the time if the null hypothesis is true (Edgell and Noon 1984, and references therein). So in general, you can use linear regression/correlation without worrying about non-normality.

Sometimes you'll see a regression or correlation that looks like it may be significant due to one or two points being extreme on both the $X$ and $Y$ axes. In this case, you may want to use Spearman's rank correlation, which reduces the influence of extreme values, or you may want to find a data transformation that makes the data look more normal. Another approach would be to analyze the data without the extreme values, and report the results both with and without the outlying points; your life will be easier if the results are similar with or without them.

When there is a significant regression or correlation, $X$ values with higher mean $Y$ values will often have higher standard deviations of $Y$ as well. This happens because the standard deviation is often a constant proportion of the mean. For example, people who are $1.5$ meters tall might have a mean weight of $50\; kg$ and a standard deviation of $10\; kg$, while people who are $2$ meters tall might have a mean weight of $100\; kg$ and a standard deviation of $20\; kg$. When the standard deviation of $Y$ is proportional to the mean, you can make the data be homoscedastic with a log transformation of the $Y$ variable.

Linearity

Linear regression and correlation assume that the data fit a straight line. If you look at the data and the relationship looks curved, you can try different data transformations of the $X$, the $Y$, or both, and see which makes the relationship straight. Of course, it's best if you choose a data transformation before you analyze your data. You can choose a data transformation beforehand based on previous data you've collected, or based on the data transformation that others in your field use for your kind of data. A data transformation will often straighten out a J-shaped curve. If your curve looks U-shaped, S-shaped, or something more complicated, a data transformation won't turn it into a straight line. In that case, you'll have to use curvilinear regression.

Independence

Linear regression and correlation assume that the data points are independent of each other, meaning that the value of one data point does not depend on the value of any other data point. The most common violation of this assumption in regression and correlation is in time series data, where some $Y$ variable has been measured at different times. For example, biologists have counted the number of moose on Isle Royale, a large island in Lake Superior, every year.
Moose live a long time, so the number of moose in one year is not independent of the number of moose in the previous year, it is highly dependent on it; if the number of moose in one year is high, the number in the next year will probably be pretty high, and if the number of moose is low one year, the number will probably be low the next year as well. This kind of non-independence, or "autocorrelation," can give you a "significant" regression or correlation much more often than $5\%$ of the time, even when the null hypothesis of no relationship between time and $Y$ is true. If both $X$ and $Y$ are time series—for example, you analyze the number of wolves and the number of moose on Isle Royale—you can also get a "significant" relationship between them much too often.

To illustrate how easy it is to fool yourself with time-series data, I tested the correlation between the number of moose on Isle Royale in the winter and the number of strikeouts thrown by major league baseball teams the following season, using data for 2004–2013. I did this separately for each baseball team, so there were $30$ statistical tests. I'm pretty sure the null hypothesis is true (I can't think of anything that would affect both moose abundance in the winter and strikeouts the following summer), so with $30$ baseball teams, you'd expect the $P$ value to be less than $0.05$ for $5\%$ of the teams, or about one or two. Instead, the $P$ value is significant for $7$ teams, which means that if you were stupid enough to test the correlation of moose numbers and strikeouts by your favorite team, you'd have almost a $1$-in-$4$ chance of convincing yourself there was a relationship between the two. Some of the correlations look pretty good: strikeout numbers by the Cleveland team and moose numbers have an $r^2$ of $0.70$ and a $P$ value of $0.002$. There are special statistical tests for time-series data. I will not cover them here; if you need to use them, see how other people in your field have analyzed data similar to yours, then find out more about the methods they used.

Spatial autocorrelation is another source of non-independence. This occurs when you measure a variable at locations that are close enough together that nearby locations will tend to have similar values. For example, if you want to know whether the abundance of dandelions is associated with the amount of phosphate in the soil, you might mark a bunch of $1\; m^2$ squares in a field, count the number of dandelions in each quadrat, and measure the phosphate concentration in the soil of each quadrat. However, both dandelion abundance and phosphate concentration are likely to be spatially autocorrelated; if one quadrat has a lot of dandelions, its neighboring quadrats will also have a lot of dandelions, for reasons that may have nothing to do with phosphate. Similarly, soil composition changes gradually across most areas, so a quadrat with low phosphate will probably be close to other quadrats that are low in phosphate. It would be easy to find a significant correlation between dandelion abundance and phosphate concentration, even if there is no real relationship. If you need to learn about spatial autocorrelation in ecology, Dale and Fortin (2009) is a good place to start.
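Either kind of non-independence can fool you. A quick way to convince yourself is a simulation; the sketch below is mine, not part of the moose analysis above. It generates pairs of unrelated random walks (a crude stand-in for autocorrelated series like yearly moose counts and strikeouts) and checks how often an ordinary correlation test calls them "significant."

set.seed(1)
# For each replicate, make two independent 10-year random walks and test their correlation
p_values <- replicate(1000, {
  series1 <- cumsum(rnorm(10))
  series2 <- cumsum(rnorm(10))
  cor.test(series1, series2)$p.value
})
mean(p_values < 0.05)   # well above the nominal 0.05, even though the series are unrelated

This mirrors the moose-and-strikeouts result above: autocorrelation, not any real relationship, is doing the work.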
Another area where spatial autocorrelation is a problem is image analysis. For example, if you label one protein green and another protein red, then look at the amount of red and green protein in different parts of a cell, the high level of autocorrelation between neighboring pixels makes it very easy to find a correlation between the amount of red and green protein, even if there is no true relationship. See McDonald and Dunn (2013) for a solution to this problem.

Example

A common observation in ecology is that species diversity decreases as you get further from the equator. To see whether this pattern could be seen on a small scale, I used data from the Audubon Society's Christmas Bird Count, in which birders try to count all the birds in a $15\; mile$ diameter area during one winter day. I looked at the total number of species seen in each area on the Delmarva Peninsula during the 2005 count. Latitude and number of bird species are the two measurement variables; location is the hidden nominal variable.

Location                   Latitude   Number of species
Bombay Hook, DE            39.217     128
Cape Henlopen, DE          38.8       137
Middletown, DE             39.467     108
Milford, DE                38.958     118
Rehoboth, DE               38.6       135
Seaford-Nanticoke, DE      38.583      94
Wilmington, DE             39.733     113
Crisfield, MD              38.033     118
Denton, MD                 38.9        96
Elkton, MD                 39.533      98
Lower Kent County, MD      39.133     121
Ocean City, MD             38.317     152
Salisbury, MD              38.333     108
S. Dorchester County, MD   38.367     118
Cape Charles, VA           37.2       157
Chincoteague, VA           37.967     125
Wachapreague, VA           37.667     114

The result is $r^2=0.214$, with $15\; d.f.$, so the $P$ value is $0.061$. The trend is in the expected direction, but it is not quite significant. The equation of the regression line is $\text {number of species}=-12.039\times \text {latitude}+585.14$. Even if it were significant, I don't know what you'd do with the equation; I suppose you could extrapolate and use it to predict that above the $49^{th}$ parallel, there would be fewer than zero bird species.

Gayou (1984) measured the intervals between male mating calls in the gray tree frog, Hyla versicolor, at different temperatures. The regression line is $\text {interval}=-0.205\times \text{temperature}+8.36$, and it is highly significant ($r^2=0.29,\; 45\; d.f.,\; P=9\times 10^{-5}$). You could rearrange the equation, $\text{temperature}=\frac{(\text{interval}-8.36)}{(-0.205)}$, measure the interval between frog mating calls, and estimate the air temperature. Or you could buy a thermometer.

Goheen et al. (2003) captured $14$ female northern grasshopper mice (Onychomys leucogaster) in north-central Kansas, measured the body length, and counted the number of offspring. There are two measurement variables, body length and number of offspring, and the authors were interested in whether larger body size causes an increase in the number of offspring, so they did a linear regression. The results are significant: $r^2=0.46,\; 12\; d.f.,\; P=0.008$. The equation of the regression line is $\text{offspring}=0.108\times \text{length}-7.88$.

Graphing the results

In a spreadsheet, you show the results of a regression on a scatter graph, with the independent variable on the $X$ axis. To add the regression line to the graph, finish making the graph, then select the graph and go to the Chart menu. Choose "Add Trendline" and choose the straight line. If you want to show the regression line extending beyond the observed range of $X$ values, choose "Options" and adjust the "Forecast" numbers until you get the line you want.
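If you work in R rather than a spreadsheet, a minimal equivalent is sketched below, using the Delmarva counts from the table above; lm(), plot(), and abline() are base R, and nothing else is assumed.

latitude <- c(39.217, 38.8, 39.467, 38.958, 38.6, 38.583, 39.733, 38.033, 38.9,
              39.533, 39.133, 38.317, 38.333, 38.367, 37.2, 37.967, 37.667)
species  <- c(128, 137, 108, 118, 135, 94, 113, 118, 96,
              98, 121, 152, 108, 118, 157, 125, 114)
fit <- lm(species ~ latitude)
summary(fit)   # slope about -12.04, r^2 about 0.21, P about 0.06, matching the results above
plot(latitude, species, xlab = "Latitude", ylab = "Number of species")
abline(fit)    # draws the least-squares line across the plotted range of X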
Similar tests

Sometimes it is not clear whether an experiment includes one measurement variable and two nominal variables, and should be analyzed with a two-way anova or paired t–test, or includes two measurement variables and one hidden nominal variable, and should be analyzed with correlation and regression. In that case, your choice of test is determined by the biological question you're interested in. For example, let's say you've measured the range of motion of the right shoulder and left shoulder of a bunch of right-handed people. If your question is "Is there an association between the range of motion of people's right and left shoulders—do people with more flexible right shoulders also tend to have more flexible left shoulders?", you'd treat "right shoulder range-of-motion" and "left shoulder range-of-motion" as two different measurement variables, and individual as one hidden nominal variable, and analyze with correlation and regression. If your question is "Is the right shoulder more flexible than the left shoulder?", you'd treat "range of motion" as one measurement variable, "right vs. left" as one nominal variable, individual as one nominal variable, and you'd analyze with two-way anova or a paired $t$–test.

If the dependent variable is a percentage, such as percentage of people who have heart attacks on different doses of a drug, it's really a nominal variable, not a measurement. Each individual observation is a value of the nominal variable ("heart attack" or "no heart attack"); the percentage is not really a single observation, it's a way of summarizing a bunch of observations. One approach for percentage data is to arcsine transform the percentages and analyze with correlation and linear regression. You'll see this in the literature, and it's not horrible, but it's better to analyze using logistic regression.

If the relationship between the two measurement variables is best described by a curved line, not a straight one, one possibility is to try different transformations on one or both of the variables. The other option is to use curvilinear regression.

If one or both of your variables are ranked variables, not measurement, you should use Spearman rank correlation. Some people recommend Spearman rank correlation when the assumptions of linear regression/correlation (normality and homoscedasticity) are not met, but I'm not aware of any research demonstrating that Spearman is really better in this situation.

To compare the slopes or intercepts of two or more regression lines to each other, use ancova.

If you have more than two measurement variables, use multiple regression.

How to do the test

Spreadsheet

I have put together a spreadsheet regression.xls to do linear regression and correlation on up to $1000$ pairs of observations. It provides the following:

• The regression coefficient (the slope of the regression line).
• The $Y$ intercept. With the slope and the intercept, you have the equation for the regression line: $\hat{Y}=a+bX$, where $a$ is the $y$ intercept and $b$ is the slope.
• The $r^2$ value.
• The degrees of freedom. There are $n-2$ degrees of freedom in a regression, where $n$ is the number of observations.
• The $P$ value. This gives you the probability of finding a slope that is as large or larger than the observed slope, under the null hypothesis that the true slope is $0$.
• A $Y$ estimator and an $X$ estimator. This enables you to enter a value of $X$ and find the corresponding value of $Y$ on the best-fit line, or vice-versa.
This would be useful for constructing standard curves, such as those used in protein assays.

Web pages

Web pages that will perform linear regression are here, here, and here. They all require you to enter each number individually, and thus are inconvenient for large data sets. This web page does linear regression and lets you paste in a set of numbers, which is more convenient for large data sets.

R

Salvatore Mangiafico's $R$ Companion has a sample R program for correlation and linear regression.

SAS

You can use either PROC GLM or PROC REG for a simple linear regression; since PROC REG is also used for multiple regression, you might as well learn to use it. In the MODEL statement, you give the $Y$ variable first, then the $X$ variable after the equals sign. Here's an example using the bird data from above.

DATA birds;
   INPUT town $ state $ latitude species;
   DATALINES;
Bombay_Hook          DE   39.217   128
Cape_Henlopen        DE   38.800   137
Middletown           DE   39.467   108
Milford              DE   38.958   118
Rehoboth             DE   38.600   135
Seaford-Nanticoke    DE   38.583    94
Wilmington           DE   39.733   113
Crisfield            MD   38.033   118
Denton               MD   38.900    96
Elkton               MD   39.533    98
Lower_Kent_County    MD   39.133   121
Ocean_City           MD   38.317   152
Salisbury            MD   38.333   108
S_Dorchester_County  MD   38.367   118
Cape_Charles         VA   37.200   157
Chincoteague         VA   37.967   125
Wachapreague         VA   37.667   114
;
PROC REG DATA=birds;
   MODEL species=latitude;
RUN;

The output includes an analysis of variance table. Don't be alarmed by this; if you dig down into the math, regression is just another variety of anova. Below the anova table are the $r^2$, slope, intercept, and $P$ value:

Root MSE          16.37357    R-Square    0.2143   (the r2)
Dependent Mean   120.00000    Adj R-Sq    0.1619
Coeff Var         13.64464

                      Parameter Estimates

                     Parameter     Standard
Variable      DF      Estimate        Error   t Value   Pr > |t|
Intercept      1     585.14462    230.02416      2.54     0.0225   (the intercept)
latitude       1     -12.03922      5.95277     -2.02     0.0613   (the slope; its P value)

These results indicate an $r^2$ of $0.21$, intercept of $585.1$, a slope of $-12.04$, and a $P$ value of $0.061$.

Power analysis

The G*Power program will calculate the sample size needed for a regression/correlation. The effect size is the absolute value of the correlation coefficient $r$; if you have $r^2$, take the positive square root of it. Choose "t tests" from the "Test family" menu and "Correlation: Point biserial model" from the "Statistical test" menu. Enter the $r$ value you hope to see, your alpha (usually $0.05$) and your power (usually $0.80$ or $0.90$). For example, let's say you want to look for a relationship between calling rate and temperature in the barking tree frog, Hyla gratiosa. Gayou (1984) found an $r^2$ of $0.29$ in another frog species, H. versicolor, so you decide you want to be able to detect an $r^2$ of $0.25$ or more. The square root of $0.25$ is $0.5$, so you enter $0.5$ for "Effect size", $0.05$ for alpha, and $0.8$ for power. The result is $26$ observations of temperature and frog calling rate. It's important to note that the distribution of $X$ variables, in this case air temperatures, should be the same for the proposed study as for the pilot study the sample size calculation was based on. Gayou (1984) measured frog calling rate at temperatures that were fairly evenly distributed from $10^{\circ}C$ to $34^{\circ}C$. If you looked at a narrower range of temperatures, you'd need a lot more observations to detect the same kind of relationship.
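If you'd rather do the power calculation in R, the add-on pwr package (an assumption: it is not part of base R and must be installed first) has a function for correlation-based sample sizes. This is a sketch of an alternative route, not the G*Power procedure described above, and the sample size it reports may differ slightly from G*Power's point-biserial calculation.

# install.packages("pwr")   # one-time installation, if needed
library(pwr)
pwr.r.test(r = 0.5, sig.level = 0.05, power = 0.80)   # required sample size for detecting r = 0.5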
Learning Objectives

• To use Spearman rank correlation to test the association between two ranked variables, or one ranked variable and one measurement variable. You can also use Spearman rank correlation instead of linear regression/correlation for two measurement variables if you're worried about non-normality, but this is not usually necessary.

When to use it

Use Spearman rank correlation when you have two ranked variables, and you want to see whether the two variables covary; whether, as one variable increases, the other variable tends to increase or decrease. You also use Spearman rank correlation if you have one measurement variable and one ranked variable; in this case, you convert the measurement variable to ranks and use Spearman rank correlation on the two sets of ranks.

For example, Melfi and Poyser (2007) observed the behavior of $6$ male colobus monkeys (Colobus guereza) in a zoo. By seeing which monkeys pushed other monkeys out of their way, they were able to rank the monkeys in a dominance hierarchy, from most dominant to least dominant. This is a ranked variable; while the researchers know that Erroll is dominant over Milo because Erroll pushes Milo out of his way, and Milo is dominant over Fraiser, they don't know whether the difference in dominance between Erroll and Milo is larger or smaller than the difference in dominance between Milo and Fraiser. After determining the dominance rankings, Melfi and Poyser (2007) counted eggs of Trichuris nematodes per gram of monkey feces, a measurement variable. They wanted to know whether social dominance was associated with the number of nematode eggs, so they converted eggs per gram of feces to ranks and used Spearman rank correlation.

Monkey name   Dominance rank   Eggs per gram   Eggs per gram (rank)
Erroll              1               5777                1
Milo                2               4225                2
Fraiser             3               2674                3
Fergus              4               1249                4
Kabul               5                749                6
Hope                6                870                5

Some people use Spearman rank correlation as a non-parametric alternative to linear regression and correlation when they have two measurement variables and one or both of them may not be normally distributed; this requires converting both measurements to ranks. Linear regression and correlation assume that the data are normally distributed, while Spearman rank correlation does not make this assumption, so people think that Spearman correlation is better. In fact, numerous simulation studies have shown that linear regression and correlation are not sensitive to non-normality; one or both measurement variables can be very non-normal, and the probability of a false positive ($P<0.05$, when the null hypothesis is true) is still about $0.05$ (Edgell and Noon 1984, and references therein). It's not incorrect to use Spearman rank correlation for two measurement variables, but linear regression and correlation are much more commonly used and are familiar to more people, so I recommend using linear regression and correlation any time you have two measurement variables, even if they look non-normal.

Null hypothesis

The null hypothesis is that the Spearman correlation coefficient, $\rho$ ("rho"), is $0$. A $\rho$ of $0$ means that the ranks of one variable do not covary with the ranks of the other variable; in other words, as the ranks of one variable increase, the ranks of the other variable do not increase (or decrease).

Assumption

When you use Spearman rank correlation on one or two measurement variables converted to ranks, it does not assume that the measurements are normal or homoscedastic.
It also doesn't assume the relationship is linear; you can use Spearman rank correlation even if the association between the variables is curved, as long as the underlying relationship is monotonic (as $X$ gets larger, $Y$ keeps getting larger, or keeps getting smaller). If you have a non-monotonic relationship (as $X$ gets larger, $Y$ gets larger and then gets smaller, or $Y$ gets smaller and then gets larger, or something more complicated), you shouldn't use Spearman rank correlation. Like linear regression and correlation, Spearman rank correlation assumes that the observations are independent.

How the test works

Spearman rank correlation calculates the $P$ value the same way as linear regression and correlation, except that you do it on ranks, not measurements. To convert a measurement variable to ranks, make the largest value $1$, second largest $2$, etc. Use the average ranks for ties; for example, if two observations are tied for the second-highest rank, give them a rank of $2.5$ (the average of $2$ and $3$). When you use linear regression and correlation on the ranks, the Pearson correlation coefficient ($r$) is now the Spearman correlation coefficient, $\rho$, and you can use it as a measure of the strength of the association. For $11$ or more observations, you calculate the test statistic using the same equation as for linear regression and correlation, substituting $\rho$ for $r$: $t_s=\sqrt{\frac{d.f.\times \rho ^2}{1-\rho ^2}}$. If the null hypothesis (that $\rho =0$) is true, $t_s$ is $t$-distributed with $n-2$ degrees of freedom. If you have $10$ or fewer observations, the $P$ value calculated from the $t$-distribution is somewhat inaccurate. In that case, you should look up the $P$ value in a table of Spearman t-statistics for your sample size. My Spearman spreadsheet does this for you. You will almost never use a regression line for either description or prediction when you do Spearman rank correlation, so don't calculate the equivalent of a regression line.

For the Colobus monkey example, Spearman's $\rho$ is $0.943$, and the $P$ value from the table is less than $0.025$, so the association between social dominance and nematode eggs is significant.

Example

Males of the magnificent frigatebird (Fregata magnificens) have a large red throat pouch. They visually display this pouch and use it to make a drumming sound when seeking mates. Madsen et al. (2004) wanted to know whether females, who presumably choose mates based on their pouch size, could use the pitch of the drumming sound as an indicator of pouch size. The authors estimated the volume of the pouch and the fundamental frequency of the drumming sound in $18$ males:

Volume (cm3)   Frequency (Hz)
1760   529
2040   566
2440   473
2550   461
2730   465
2740   532
3010   484
3080   527
3370   488
3740   485
4910   478
5090   434
5090   468
5380   449
5850   425
6730   389
6990   421
7960   416

There are two measurement variables, pouch size and pitch. The authors analyzed the data using Spearman rank correlation, which converts the measurement variables to ranks, and the relationship between the variables is significant (Spearman's $\rho =-0.76,\; 16\; d.f.,\; P=0.0002$). The authors do not explain why they used Spearman rank correlation; if they had used regular correlation, they would have obtained $r=-0.82,\; P=0.00003$.

Graphing the results

You can graph Spearman rank correlation data the same way you would for a linear regression or correlation.
Don't put a regression line on the graph, however; it would be misleading to put a linear regression line on a graph when you've analyzed it with rank correlation.

How to do the test

Spreadsheet

I've put together a spreadsheet (spearman.xls) that will perform a Spearman rank correlation on up to $1000$ observations. With small numbers of observations ($10$ or fewer), the spreadsheet looks up the $P$ value in a table of critical values.

Web page

This web page will do Spearman rank correlation.

R

Salvatore Mangiafico's $R$ Companion has a sample R program for Spearman rank correlation.

SAS

Use PROC CORR with the SPEARMAN option to do Spearman rank correlation. Here is an example using the bird data from the correlation and regression web page:

PROC CORR DATA=birds SPEARMAN;
   VAR species latitude;
RUN;

The results include the Spearman correlation coefficient $\rho$, analogous to the $r$ value of a regular correlation, and the $P$ value:

Spearman Correlation Coefficients, N = 17
Prob > |r| under H0: Rho=0

             species    latitude
species      1.00000    -0.36263   (the Spearman correlation coefficient)
                          0.1526   (its P value)
latitude    -0.36263     1.00000
              0.1526
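For completeness, here is a minimal R version of the colobus monkey example from earlier in this chapter; cor.test() with method = "spearman" is base R, and the vector names are my own choices.

dominance <- 1:6                               # 1 = most dominant (Erroll) ... 6 = least dominant (Hope)
eggs <- c(5777, 4225, 2674, 1249, 749, 870)    # Trichuris eggs per gram, in the same order as the table
cor.test(dominance, eggs, method = "spearman")

Note that R will report $\rho$ as about $-0.94$ rather than $+0.943$, because more-dominant monkeys have lower rank numbers but higher egg counts; the magnitude, and the conclusion that the association is significant, are the same as reported above.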
Learning Objectives

• To use curvilinear regression when you have graphed two measurement variables and you want to fit an equation for a curved line to the points on the graph.

Sometimes, when you analyze data with correlation and linear regression, you notice that the relationship between the independent ($X$) variable and dependent ($Y$) variable looks like it follows a curved line, not a straight line. In that case, the linear regression line will not be very good for describing and predicting the relationship, and the $P$ value may not be an accurate test of the null hypothesis that the variables are not associated.

You have three choices in this situation. If you only want to know whether there is an association between the two variables, and you're not interested in the line that fits the points, you can use the $P$ value from linear regression and correlation. This could be acceptable if the line is just slightly curved; if your biological question is "Does more $X$ cause more $Y$?", you may not care whether a straight line or a curved line fits the relationship between $X$ and $Y$ better. However, it will look strange if you use linear regression and correlation on a relationship that is strongly curved, and some curved relationships, such as a U-shape, can give a non-significant $P$ value even when the fit to a U-shaped curve is quite good. And if you want to use the regression equation for prediction or you're interested in the strength of the relationship ($r^2$), you should definitely not use linear regression and correlation when the relationship is curved.

Your second choice is to try a data transformation of one or both of the measurement variables, to see whether a transformation makes the relationship straight (transformations are discussed further under "Similar tests" below).

Your third option is curvilinear regression: finding an equation that produces a curved line that fits your points. There are a lot of equations that will produce curved lines, including exponential (involving $b^x$, where $b$ is a constant), power (involving $X^b$), logarithmic (involving $\log (X)$), and trigonometric (involving sine, cosine, or other trigonometric functions). For any particular form of equation involving such terms, you can find the equation for the curved line that best fits the data points, and compare the fit of the more complicated equation to that of a simpler equation (such as the equation for a straight line). Here I will use polynomial regression as one example of curvilinear regression, then briefly mention a few other equations that are commonly used in biology. A polynomial equation is any equation that has $X$ raised to integer powers such as $X^2$ and $X^3$.
One polynomial equation is a quadratic equation, which has the form $\hat{Y}=a+b_1X+b_2X^2$ where $a$ is the $y$–intercept and $b_1$ and $b_2$ are constants. It produces a parabola. A cubic equation has the form $\hat{Y}=a+b_1X+b_2X^2+b_3X^3$ and produces an S-shaped curve, while a quartic equation has the form $\hat{Y}=a+b_1X+b_2X^2+b_3X^3+b_4X^4$ and can produce $M$ or $W$ shaped curves. You can fit higher-order polynomial equations, but it is very unlikely that you would want to use anything more than the cubic in biology.

Null hypotheses

One null hypothesis you can test when doing curvilinear regression is that there is no relationship between the $X$ and $Y$ variables; in other words, that knowing the value of $X$ would not help you predict the value of $Y$. This is analogous to testing the null hypothesis that the slope is $0$ in a linear regression. You measure the fit of an equation to the data with $R^2$, analogous to the $r^2$ of linear regression. As you add more parameters to an equation, it will always fit the data better; for example, a quadratic equation of the form $\hat{Y}=a+b_1X+b_2X^2$ will always be closer to the points than a linear equation of the form $\hat{Y}=a+b_1X$, so the quadratic equation will always have a higher $R^2$ than the linear. A cubic equation will always have a higher $R^2$ than quadratic, and so on. The second null hypothesis of curvilinear regression is that the increase in $R^2$ is only as large as you would expect by chance.

Assumptions

If you are testing the null hypothesis that there is no association between the two measurement variables, curvilinear regression assumes that the $Y$ variable is normally distributed and homoscedastic for each value of $X$. Since linear regression is robust to these assumptions (violating them doesn't increase your chance of a false positive very much), I'm guessing that curvilinear regression may not be sensitive to violations of normality or homoscedasticity either. I'm not aware of any simulation studies on this, however.

Curvilinear regression also assumes that the data points are independent, just as linear regression does. You shouldn't test the null hypothesis of no association for non-independent data, such as many time series. However, there are many experiments where you already know there's an association between the $X$ and $Y$ variables, and your goal is not hypothesis testing, but estimating the equation that fits the line. For example, a common practice in microbiology is to grow bacteria in a medium with abundant resources, measure the abundance of the bacteria at different times, and fit an exponential equation to the growth curve. The amount of bacteria after $30$ minutes is not independent of the amount of bacteria after $20$ minutes; if there are more at $20$ minutes, there are bound to be more at $30$ minutes. However, the goal of such an experiment would not be to see whether bacteria increase in abundance over time (duh, of course they do); the goal would be to estimate how fast they grow, by fitting an exponential equation to the data. For this purpose, it doesn't matter that the data points are not independent.

Just as linear regression assumes that the relationship you are fitting a straight line to is linear, curvilinear regression assumes that you are fitting the appropriate kind of curve to your data. If you are fitting a quadratic equation, the assumption is that your data are quadratic; if you are fitting an exponential curve, the assumption is that your data are exponential. Violating this assumption—fitting a quadratic equation to an exponential curve, for example—can give you an equation that doesn't fit your data very well. In some cases, you can pick the kind of equation to use based on a theoretical understanding of the biology of your experiment. If you are growing bacteria for a short period of time with abundant resources, you expect their growth to follow an exponential curve; if they grow for long enough that resources start to limit their growth, you expect the growth to fit a logistic curve. Other times, there may not be a clear theoretical reason for a particular equation, but other people in your field have found one that fits your kind of data well. And in other cases, you just need to try a variety of equations until you find one that works well for your data.
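As an aside on the bacterial-growth example, here is a minimal R sketch of estimating a growth rate by fitting an exponential curve on a log scale; the times and counts below are invented purely for illustration and are not from any real experiment.

minutes   <- seq(0, 60, by = 10)            # hypothetical sampling times
abundance <- c(2.1e5, 4.3e5, 8.7e5, 1.8e6,  # hypothetical cell counts, made up for this sketch
               3.5e6, 7.2e6, 1.4e7)
growth <- lm(log(abundance) ~ minutes)       # exponential growth is a straight line on a log scale
coef(growth)["minutes"]                      # estimated growth rate per minute (on the log scale)

As the text notes, the point of a fit like this is the rate estimate, not a hypothesis test, so the non-independence of successive counts is not a problem.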
How the test works

In polynomial regression, you add different powers of the $X$ variable ($X,\; X^2,\; X^3…$) to an equation to see whether they increase the $R^2$ significantly. First you do a linear regression, fitting an equation of the form $\hat{Y}=a+b_1X$ to the data. Then you fit an equation of the form $\hat{Y}=a+b_1X+b_2X^2$, which produces a parabola, to the data. The $R^2$ will always increase when you add a higher-order term, but the question is whether the increase in $R^2$ is significantly greater than expected due to chance. Next, you fit an equation of the form $\hat{Y}=a+b_1X+b_2X^2+b_3X^3$, which produces an S-shaped line, and you test the increase in $R^2$. You can keep doing this until adding another term does not increase $R^2$ significantly, although in most cases it is hard to imagine a biological meaning for exponents greater than $3$. Once you find the best-fitting equation, you test it to see whether it fits the data significantly better than an equation of the form $Y=a$; in other words, a horizontal line.

Even though the usual procedure is to test the linear regression first, then the quadratic, then the cubic, you don't need to stop if one of these is not significant. For example, if the graph looks U-shaped, the linear regression may not be significant, but the quadratic could be.

Example 1

Fernandez-Juricic et al. (2003) examined the effect of human disturbance on the nesting of house sparrows (Passer domesticus). They counted breeding sparrows per hectare in $18$ parks in Madrid, Spain, and also counted the number of people per minute walking through each park (both measurement variables). The linear regression is not significant ($r^2=0.174,\; 16\; d.f.,\; P=0.08$). The quadratic regression is significant ($R^2=0.372,\; 15\; d.f.,\; P=0.03$), and it is significantly better than the linear regression ($P=0.03$). This seems biologically plausible; the data suggest that there is some intermediate level of human traffic that is best for house sparrows. Perhaps areas with too many humans scare the sparrows away, while areas with too few humans favor other birds that outcompete the sparrows for nest sites or something. The cubic regression is also significant ($R^2=0.765,\; 14\; d.f.,\; P=0.0001$), and the increase in $R^2$ between the cubic and the quadratic equation is highly significant ($P=1\times 10^{-5}$). The cubic equation is $\hat{Y}=-87.765+50.601X-2.916X^2+0.0443X^3$. The quartic equation does not fit significantly better than the cubic equation ($P=0.80$). Even though the cubic equation fits significantly better than the quadratic, it's more difficult to imagine a plausible biological explanation for this. I'd want to see more samples from areas with more than $35$ people per hectare per minute before I accepted that the sparrow abundance really starts to increase again above that level of pedestrian traffic.

Example 2

Ashton et al. (2007) measured the carapace length (in mm) of $18$ female gopher tortoises (Gopherus polyphemus) in Okeeheelee County Park, Florida, and X-rayed them to count the number of eggs in each. The data are shown below in the SAS example. The linear regression is not significant ($r^2=0.015,\; 16\; d.f.,\; P=0.63$), but the quadratic is significant ($R^2=0.43,\; 15\; d.f.,\; P=0.014$). The increase in $R^2$ from linear to quadratic is significant ($P=0.001$). The best-fit quadratic equation is $\hat{Y}=-899.9+5.857X-0.009425X^2$. Adding the cubic and quartic terms does not significantly increase the $R^2$.
The first part of the graph is not surprising; it's easy to imagine why bigger tortoises would have more eggs. The decline in egg number above $310\; mm$ carapace length is the interesting result; it suggests that egg production declines in these tortoises as they get old and big.

Graphing the results

As shown above, you graph a curvilinear regression the same way you would a linear regression, a scattergraph with the independent variable on the $X$ axis and the dependent variable on the $Y$ axis. In general, you shouldn't show the regression line for values outside the range of observed $X$ values, as extrapolation with polynomial regression is even more likely than linear regression to yield ridiculous results. For example, extrapolating the quadratic equation relating tortoise carapace length and number of eggs predicts that tortoises with carapace length less than $279\; mm$ or greater than $343\; mm$ would have negative numbers of eggs.

Similar tests

Before performing a curvilinear regression, you should try different transformations when faced with an obviously curved relationship between an $X$ and a $Y$ variable. A linear equation relating transformed variables is simpler and more elegant than a curvilinear equation relating untransformed variables.

You should also remind yourself of your reason for doing a regression. If your purpose is prediction of unknown values of $Y$ corresponding to known values of $X$, then you need an equation that fits the data points well, and a polynomial regression may be appropriate if transformations do not work. However, if your purpose is testing the null hypothesis that there is no relationship between $X$ and $Y$, and a linear regression gives a significant result, you may want to stick with the linear regression even if curvilinear gives a significantly better fit. Using a less-familiar technique that yields a more-complicated equation may cause your readers to be a bit suspicious of your results; they may feel you went fishing around for a statistical test that supported your hypothesis, especially if there's no obvious biological reason for an equation with terms containing exponents.

Spearman rank correlation is a nonparametric test of the association between two variables. It will work well if there is a steady increase or decrease in $Y$ as $X$ increases, but not if $Y$ goes up and then goes down.

Polynomial regression is a form of multiple regression. In multiple regression, there is one dependent ($Y$) variable and multiple independent ($X$) variables, and the $X$ variables ($X_1,\; X_2,\; X_3...$) are added to the equation to see whether they increase the $R^2$ significantly. In polynomial regression, the independent "variables" are just $X^1,\; X^2,\; X^3$, etc.

How to do the test

Spreadsheet

I have prepared a spreadsheet polyreg.xls that will help you perform a polynomial regression. It tests equations up to quartic, and it will handle up to $1000$ observations.

Web pages

There is a very powerful web page that will fit just about any equation you can think of to your data (not just polynomial).

R

Salvatore Mangiafico's $R$ Companion has sample R programs for polynomial regression and other forms of regression that I don't discuss here (B-spline regression and other forms of nonlinear regression).

SAS

To do polynomial regression in SAS, you create a data set containing the square of the independent variable, the cube, etc. You then use PROC REG for models containing the higher-order variables.
It's possible to do this as a multiple regression, but I think it's less confusing to use multiple model statements, adding one term to each model. There doesn't seem to be an easy way to test the significance of the increase in $R^2$ in SAS, so you'll have to do that by hand. If $R_{i}^{2}$ is the $R^2$ for the $i_{th}$ order, and $R_{j}^{2}$ is the $R^2$ for the next higher order, and $d.f._j$ is the degrees of freedom for the higher-order equation, the $F$-statistic is $d.f._j\times (R_{j}^{2}-R_{i}^{2})/(1-R_{j}^{2})$. It has $j$ degrees of freedom in the numerator and $d.f._j=n-j-1$ degrees of freedom in the denominator. Here's an example, using the data on tortoise carapace length and clutch size from Ashton et al. (2007).

DATA turtles;
   INPUT length clutch;
   length2=length*length;
   length3=length*length*length;
   length4=length*length*length*length;
   DATALINES;
284 3
290 2
290 7
290 7
298 11
299 12
302 10
306 8
306 8
309 9
310 10
311 13
317 7
317 9
320 6
323 13
334 2
334 8
;
PROC REG DATA=TURTLES;
   MODEL clutch=length;
   MODEL clutch=length length2;
   MODEL clutch=length length2 length3;
RUN;

In the output, first look for the $R^2$ values under each model:

The REG Procedure
Model: MODEL1
Dependent Variable: clutch
. . .
Root MSE            3.41094    R-Square    0.0148   (linear R-sq)
Dependent Mean      8.05556    Adj R-Sq   -0.0468
Coeff Var          42.34268
. . .
The REG Procedure
Model: MODEL2
Dependent Variable: clutch
. . .
Root MSE            2.67050    R-Square    0.4338   (quadratic R-sq)
Dependent Mean      8.05556    Adj R-Sq    0.3583
Coeff Var          33.15104

For this example, $n=18$. The $F$-statistic for the increase in $R^2$ from linear to quadratic is $15\times \frac{0.4338-0.0148}{1-0.4338}=11.10$ with $d.f.=2,\; 15$. Using a spreadsheet (enter =FDIST(11.10, 2, 15)), this gives a $P$ value of $0.0011$. So the quadratic equation fits the data significantly better than the linear equation. Once you've figured out which equation is best (the quadratic, for our example, since the cubic and quartic equations do not significantly increase the $R^2$), look for the parameters in the output:

                      Parameter Estimates

                     Parameter     Standard
Variable      DF      Estimate        Error   t Value   Pr > |t|
Intercept      1    -899.93459    270.29576     -3.33     0.0046
length         1       5.85716      1.75010      3.35     0.0044
length2        1      -0.00942      0.00283     -3.33     0.0045

This tells you that the equation for the best-fit quadratic curve is $\hat{Y}=-899.9+5.857X-0.009425X^2$.
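If you prefer R, a sketch of the same kind of analysis is below. It assumes numeric vectors named carapace and clutch holding the tortoise measurements from the SAS example above (those names are my choice); lm(), anova(), and pf() are base R.

# assumes numeric vectors `carapace` and `clutch` containing the tortoise data above
linear    <- lm(clutch ~ carapace)
quadratic <- lm(clutch ~ carapace + I(carapace^2))
cubic     <- lm(clutch ~ carapace + I(carapace^2) + I(carapace^3))

summary(linear)$r.squared        # compare to the R-Square values in the SAS output
summary(quadratic)$r.squared
anova(linear, quadratic)         # does the quadratic term improve the fit more than chance?
anova(quadratic, cubic)          # does the cubic term improve it further?

# The spreadsheet formula =FDIST(11.10, 2, 15) used above corresponds to
pf(11.10, df1 = 2, df2 = 15, lower.tail = FALSE)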
Learning Objectives

• To use analysis of covariance (ancova) when you want to compare two or more regression lines to each other; ancova will tell you whether the regression lines are different from each other in either slope or intercept.

When to use it

Use analysis of covariance (ancova) when you have two measurement variables and one nominal variable. The nominal variable divides the regressions into two or more sets. The purpose of ancova is to compare two or more linear regression lines. It is a way of comparing the $Y$ variable among groups while statistically controlling for variation in $Y$ caused by variation in the $X$ variable.

For example, Walker (1962) studied the mating songs of male tree crickets. Each wingstroke by a cricket produces a pulse of song, and females may use the number of pulses per second to identify males of the correct species. Walker (1962) wanted to know whether the chirps of the crickets Oecanthus exclamationis and Oecanthus niveus had different pulse rates. He measured the pulse rate of the crickets at a variety of temperatures:

      O. exclamationis                       O. niveus
Temperature (°C)  Pulses per second   Temperature (°C)  Pulses per second
     20.8              67.9                17.2              44.3
     20.8              65.1                18.3              47.2
     24.0              77.3                18.3              47.6
     24.0              78.7                18.3              49.6
     24.0              79.4                18.9              50.3
     24.0              80.4                18.9              51.8
     26.2              85.8                20.4              60.0
     26.2              86.6                21.0              58.5
     26.2              87.5                21.0              58.9
     26.2              89.1                22.1              60.7
     28.4              98.6                23.5              69.8
     29.0             100.8                24.2              70.9
     30.4              99.3                25.9              76.2
     30.4             101.7                26.5              76.1
                                           26.5              77.0
                                           26.5              77.7
                                           28.6              84.7
     mean              85.6                mean              62.4

If you ignore the temperatures and just compare the mean pulse rates, O. exclamationis has a higher rate than O. niveus, and the difference is highly significant (two-sample t–test, $P=2\times 10^{-5}$). However, you can see from the graph that pulse rate is highly associated with temperature. This confounding variable means that you'd have to worry that any difference in mean pulse rate was caused by a difference in the temperatures at which you measured pulse rate, as the average temperature for the O. exclamationis measurements was $3.6^{\circ}C$ higher than for O. niveus. You'd also have to worry that O. exclamationis might have a higher rate than O. niveus at some temperatures but not others. You can control for temperature with ancova, which will tell you whether the regression line for O. exclamationis is higher than the line for O. niveus; if it is, that means that O. exclamationis would have a higher pulse rate at any temperature.

Null hypotheses

You test two null hypotheses in an ancova. Remember that the equation of a regression line takes the form $\hat{Y}=a+bX$, where $a$ is the $Y$ intercept and $b$ is the slope. The first null hypothesis of ancova is that the slopes of the regression lines ($b$) are all equal; in other words, that the regression lines are parallel to each other. If you accept the null hypothesis that the regression lines are parallel, you test the second null hypothesis: that the $Y$ intercepts of the regression lines ($a$) are all the same.

Some people define the second null hypothesis of ancova to be that the adjusted means (also known as least-squares means) of the groups are the same. The adjusted mean for a group is the predicted $Y$ variable for that group, at the mean $X$ variable for all the groups combined. Because the regression lines you use for estimating the adjusted mean are parallel (have the same slope), the difference in adjusted means is equal to the difference in $Y$ intercepts.
Stating the null hypothesis in terms of $Y$ intercepts makes it easier to understand that you're testing null hypotheses about the two parts of the regression equations; stating it in terms of adjusted means may make it easier to get a feel for the relative size of the difference. For the cricket data, the adjusted means are $78.4$ pulses per second for O. exclamationis and $68.3$ for O. niveus; these are the predicted values at the mean temperature of all observations, $23.8^{\circ}C$. The $Y$ intercepts are $-7.2$ and $-17.3$ pulses per second, respectively; while the difference is the same ($10.1$ more pulses per second in O. exclamationis), the adjusted means give you some idea of how big this difference is compared to the mean.

Assumptions

Ancova makes the same assumptions as linear regression: normality and homoscedasticity of $Y$ for each value of $X$, and independence. I have no idea how sensitive it is to deviations from these assumptions.

How the test works

The first step in performing an ancova is to compute each regression line. In the cricket example, the regression line for O. exclamationis is $\hat{Y}=3.75X-11.0$, and the line for O. niveus is $\hat{Y}=3.52X-15.4$.

Next, you see whether the slopes are significantly different. You do this because you can't do the final step of the ancova, comparing the $Y$ intercepts, if the slopes are significantly different from each other. If the slopes of the regression lines are different, the lines cross each other somewhere, and one group has higher $Y$ values in one part of the graph and lower $Y$ values in another part of the graph. (If the slopes are different, there are techniques for testing the null hypothesis that the regression lines have the same $Y$ value for a particular $X$ value, but they're not used very often and I won't consider them here.)

If the slopes are not significantly different, you then draw a regression line through each group of points, all with the same slope. This common slope is a weighted average of the slopes of the different groups. For the crickets, the slopes are not significantly different ($P=0.25$); the common slope is $3.60$, which is between the slopes for the separate lines ($3.52$ and $3.75$). The final test in the ancova is to test the null hypothesis that all of the $Y$ intercepts of the regression lines with a common slope are the same. Because the lines are parallel, saying that they are significantly different at one point (the $Y$ intercept) means that the lines are different at any point.

You may see "adjusted means," also known as "least-squares means," in the output of an ancova program. The adjusted mean for a group is the predicted value for the $Y$ variable when the $X$ variable is the mean of all the observations in all groups, using the regression equation with the common slope. For the crickets, the mean of all the temperatures (for both species) is $23.76^{\circ}C$. The regression equation for O. exclamationis (with the common slope) is $\hat{Y}=3.60X-7.14$, so the adjusted mean for O. exclamationis is found by substituting $23.76$ for $X$ in the regression equation, yielding $78.40$. Because the regression lines are parallel, the difference in adjusted means is equal to the difference in $Y$ intercepts, so you can report either one.

Although the most common use of ancova is for comparing two regression lines, it is possible to compare three or more regressions.
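Here is a minimal R sketch of the two-step procedure just described, written against the cricket data. It assumes a data frame named crickets with columns temp, pulse, and species (the same observations that appear in the SAS section below); that data frame name is my choice for illustration.

# assumes a data frame `crickets` with columns temp, pulse, and species
separate <- lm(pulse ~ temp * species, data = crickets)   # separate slope for each species
parallel <- lm(pulse ~ temp + species, data = crickets)   # common slope, separate intercepts

anova(parallel, separate)   # step 1: test whether the slopes differ significantly
anova(parallel)             # step 2: the species line tests whether the Y intercepts differ
coef(parallel)              # the species coefficient is the difference between the Y intercepts

The sign of the species coefficient depends on which species R treats as the baseline level, but its magnitude should match the difference of about $10.1$ pulses per second reported above.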
If you are comparing three or more lines and their slopes are all the same, you can test each pair of lines to see which pairs have significantly different $Y$ intercepts, using a modification of the Tukey-Kramer test.

Example 1

In the firefly species Photinus ignitus, the male transfers a large spermatophore to the female during mating. Rooney and Lewis (2002) wanted to know whether the extra resources from this "nuptial gift" enable the female to produce more offspring. They collected $40$ virgin females and mated $20$ of them to one male and $20$ to three males. They then counted the number of eggs each female laid. Because fecundity varies with the size of the female, they analyzed the data using ancova, with female weight (before mating) as the independent measurement variable and number of eggs laid as the dependent measurement variable. Because the number of males has only two values ("one" or "three"), it is a nominal variable, not measurement. The slopes of the two regression lines (one for single-mated females and one for triple-mated females) are not significantly different ($F_{1,\; 36}=1.1,\; P=0.30$). The $Y$ intercepts are significantly different ($F_{1,\; 36}=8.8,\; P=0.005$); females that have mated three times have significantly more offspring than females mated once.

Example 2

Paleontologists would like to be able to determine the sex of dinosaurs from their fossilized bones. To see whether this is feasible, Prieto-Marquez et al. (2007) measured several characters that are thought to distinguish the sexes in alligators (Alligator mississippiensis), which are among the closest living non-bird relatives of dinosaurs. One of the characters was pelvic canal width, which they wanted to standardize using snout-vent length. The slopes of the regression lines are not significantly different ($P=0.9101$). The $Y$ intercepts are significantly different ($P=0.0267$), indicating that male alligators of a given length have a significantly greater pelvic canal width. However, inspection of the graph shows that there is a lot of overlap between the sexes even after standardizing for snout-vent length, so it would not be possible to reliably determine the sex of a single individual with this character alone.

Graphing the results

You graph an ancova with a scattergraph, with the independent variable on the $X$ axis and the dependent variable on the $Y$ axis. Use a different symbol for each value of the nominal variable, as in the firefly graph above, where filled circles are used for the thrice-mated females and open circles are used for the once-mated females. To get this kind of graph in a spreadsheet, you would put all of the $X$ values in column $A$, one set of $Y$ values in column $B$, the next set of $Y$ values in column $C$, and so on. Most people plot the individual regression lines for each set of points, as shown in the firefly graph, even if the slopes are not significantly different. This lets people see how similar or different the slopes look. This is easy to do in a spreadsheet; just click on one of the symbols and choose "Add Trendline" from the Chart menu.

Similar tests

Another way to standardize one measurement variable by another is to take the ratio of the two. For example, let's say some neighborhood ruffians have been giving you the finger, and this inspires you to compare the middle-finger length of boys vs. girls. Obviously, taller children will tend to have longer middle fingers, so you want to standardize for height; you want to know whether boys and girls of the same height have different middle-finger lengths.
A simple way to do this would be to divide the middle-finger length by the child's height and compare these ratios between boys and girls using a two-sample t–test. Using a ratio like this makes the statistics simpler and easier to understand, but you should only use ratios when the two measurement variables are isometric. This means that the ratio of $Y$ over $X$ does not change as $X$ increases; in other words, the $Y$ intercept of the regression line is $0$. As you can see from the graph, middle-finger length in a sample of $645$ boys (Snyder et al. 1977) does look isometric, so you could analyze the ratios. The average ratio in the Snyder et al. (1977) data set is $0.0472$ for boys and $0.0470$ for girls, and the difference is not significant (two-sample $t$–test, $P=0.50$). However, many measurements are allometric: the ratio changes as the $X$ variable gets bigger. For example, let's say that in addition to giving you the finger, the rapscallions have been cursing at you, so you decide to compare the mouth width of boys and girls. As you can see from the graph, mouth width is very allometric; smaller children have bigger mouths as a proportion of their height. As a result, any difference between boys and girls in mouth width/height ratio could just be due to a difference in height between boys and girls. For data where the regression lines do not have a $Y$ intercept of zero, you need to compare groups using ancova. Sometimes the two measurement variables are just the same variable measured at different times or places. For example, if you measured the weights of two groups of individuals, put some on a new weight-loss diet and the others on a control diet, then weighed them again a year later, you could treat the difference between final and initial weights as a single variable, and compare the mean weight loss for the control group to the mean weight loss of the diet group using a one-way anova. The alternative would be to treat final and initial weights as two different variables and analyze using an ancova: you would compare the regression line of final weight vs. initial weight for the control group to the regression line for the diet group. The one-way anova would be simpler, and probably perfectly adequate; the ancova might be better, particularly if you had a wide range of initial weights, because it would allow you to see whether the change in weight depended on the initial weight. How to do the test Spreadsheet and web pages Richard Lowry has made web pages that allow you to perform ancova with two, three or four groups, and a downloadable spreadsheet for ancova with more than four groups. You may cut and paste data from a spreadsheet to the web pages. In the results, the $P$ value for "adjusted means" is the $P$ value for the difference in the intercepts among the regression lines; the $P$ value for "between regressions" is the $P$ value for the difference in slopes. R Salvatore Mangiafico's $R$ Companion has a sample R program for analysis of covariance. SAS Here's how to do analysis of covariance in SAS, using the cricket data from Walker (1962); I estimated the values by digitizing the graph, so the results may be slightly different from in the paper. 
DATA crickets; INPUT species \$ temp pulse @@; DATALINES; ex 20.8 67.9 ex 20.8 65.1 ex 24 77.3 ex 24 78.7 ex 24 79.4 ex 24 80.4 ex 26.2 85.8 ex 26.2 86.6 ex 26.2 87.5 ex 26.2 89.1 ex 28.4 98.6 ex 29 100.8 ex 30.4 99.3 ex 30.4 101.7 niv 17.2 44.3 niv 18.3 47.2 niv 18.3 47.6 niv 18.3 49.6 niv 18.9 50.3 niv 18.9 51.8 niv 20.4 60 niv 21 58.5 niv 21 58.9 niv 22.1 60.7 niv 23.5 69.8 niv 24.2 70.9 niv 25.9 76.2 niv 26.5 76.1 niv 26.5 77 niv 26.5 77.7 niv 28.6 84.7 ; PROC GLM DATA=crickets; CLASS species; MODEL pulse=temp species temp*species; PROC GLM DATA=crickets; CLASS species; MODEL pulse=temp species; RUN; The CLASS statement gives the nominal variable, and the MODEL statement has the $Y$ variable to the left of the equals sign. The first time you run PROC GLM, the MODEL statement includes the $X$ variable, the nominal variable, and the interaction term ("temp*species" in the example). This tests whether the slopes of the regression lines are significantly different. You'll see both Type I and Type III sums of squares; the Type III sums of squares are the correct ones to use: Source DF Type III SS Mean Square F Value Pr > F temp 1 4126.440681 4126.440681 1309.61 <.0001 species 1 2.420117 2.420117 0.77 0.3885 temp*species 1 4.275779 4.275779 1.36 0.2542 slope P value If the $P$ value of the slopes is significant, you'd be done. In this case it isn't, so you look at the output from the second run of PROC GLM. This time, the MODEL statement doesn't include the interaction term, so the model assumes that the slopes of the regression lines are equal. This $P$ value tells you whether the $Y$ intercepts are significantly different: Source DF Type III SS Mean Square F Value Pr > F temp 1 4376.082568 4376.082568 1371.35 <.0001 species 1 598.003953 598.003953 187.40 <.0001 intercept P value If you want the common slope and the adjusted means, add Solution to the MODEL statement and another line with LSMEANS and the CLASS variable: PROC GLM DATA=crickets; CLASS species; MODEL pulse=temp species/Solution; LSMEANS species; yields this as part of the output: Standard Parameter Estimate Error t Value Pr > |t| Intercept -17.27619743 B 2.19552853 -7.87 <.0001 temp 3.60275287 0.09728809 37.03 <.0001 species ex 10.06529123 B 0.73526224 13.69 <.0001 species niv 0.00000000 B . . . NOTE: The $X'X$ matrix has been found to be singular, and a generalized inverse was used to solve the normal equations. Terms whose estimates are followed by the letter B are not uniquely estimable. The GLM Procedure: Least Squares Means species pulse LSMEAN ex 78.4067726 niv 68.3414814 Under "Estimate," $3.60$ is the common slope. $-17.27$ is the $Y$ intercept for the regression line for O. niveus. $10.06$ means that the $Y$ intercept for O. exclamationis is $10.06$ higher ($-17.27+10.06$). Ignore the scary message about the matrix being singular. If you have more than two regression lines, you can do a Tukey-Kramer test comparing all pairs of $y$-intercepts. If there were three cricket species in the example, you'd say "LSMEANS species/PDIFF ADJUST=TUKEY;". Power analysis You can't do a power analysis for ancova with G*Power, so I've prepared a spreadsheet to do power analysis for ancova ancovapower.xls, using the method of Borm et al. (2007). It only works for ancova with two groups, and it assumes each group has the same standard deviation and the same $r^2$. To use it, you'll need: • the effect size, or the difference in $Y$ intercepts you hope to detect; • the standard deviation. 
This is the standard deviation of all the $Y$ values within each group (without controlling for the $X$ variable). For example, in the alligator data above, this would be the standard deviation of pelvic width among males, or the standard deviation of pelvic width among females. • alpha, or the significance level (usually $0.05$); • power, the probability of rejecting the null hypothesis when the given effect size is the true difference ($0.80$ or $0.90$ are common values); • the $r^2$ within groups. For the alligator data, this would be the $r^2$ of pelvic width vs. snout-vent length among males, or the $r^2$ among females. As an example, let's say you want to do a study with an ancova on pelvic width vs. snout-vent length in male and female crocodiles, and since you don't have any preliminary data on crocodiles, you're going to base your sample size calculation on the alligator data. You want to detect a difference in $Y$ intercepts of $0.2 cm$. The standard deviation of pelvic width in the male alligators is $1.45$ and for females is $1.02$; taking the average, enter $1.23$ for standard deviation. The $r^2$ in males is $0.774$ and for females it's $0.780$, so enter the average ($0.777$) for $r^2$ in the form. With $0.05$ for the alpha and $0.80$ for the power, the result is that you'll need $133$ male crocodiles and $133$ female crocodiles.
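If you would rather do the cricket ancova in R than in SAS or a spreadsheet, the whole workflow described above (test the slopes, fit a common slope, compare the $Y$ intercepts, and get the adjusted means) can be sketched with base R functions. The code below is a minimal sketch, using the same cricket values as the SAS example; treat it as a starting point rather than a definitive recipe.

# Minimal R sketch of the cricket ancova; the data are the values from the SAS example.
crickets <- data.frame(
  species = rep(c("ex", "niv"), times = c(14, 17)),
  temp = c(20.8, 20.8, 24, 24, 24, 24, 26.2, 26.2, 26.2, 26.2, 28.4, 29, 30.4, 30.4,
           17.2, 18.3, 18.3, 18.3, 18.9, 18.9, 20.4, 21, 21, 22.1, 23.5, 24.2, 25.9,
           26.5, 26.5, 26.5, 28.6),
  pulse = c(67.9, 65.1, 77.3, 78.7, 79.4, 80.4, 85.8, 86.6, 87.5, 89.1, 98.6, 100.8,
            99.3, 101.7,
            44.3, 47.2, 47.6, 49.6, 50.3, 51.8, 60, 58.5, 58.9, 60.7, 69.8, 70.9,
            76.2, 76.1, 77, 77.7, 84.7))

# Step 1: test whether the slopes differ (the temp:species interaction term).
anova(lm(pulse ~ temp * species, data = crickets))

# Step 2: if the slopes are not significantly different, fit a common slope and
# test whether the Y intercepts differ (the species term in this model).
common <- lm(pulse ~ temp + species, data = crickets)
anova(common)
coef(common)   # common slope, and the difference between the two Y intercepts

# Adjusted (least-squares) means: predicted pulse rates at the mean temperature
# of all the observations.
predict(common, newdata = data.frame(species = c("ex", "niv"),
                                     temp = mean(crickets$temp)))

The numbers should match the SAS output above: a common slope near $3.60$, a difference of about $10.1$ pulses per second between the $Y$ intercepts, and adjusted means near $78.4$ and $68.3$.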
Learning Objectives

• To use multiple regression when you have more than two measurement variables, one being the dependent variable and the rest being independent variables. You can use it to predict values of the dependent variable, or if you're careful, you can use it for suggestions about which independent variables have a major effect on the dependent variable.

When to use it

Use multiple regression when you have three or more measurement variables. One of the measurement variables is the dependent ($Y$) variable. The rest of the variables are the independent ($X$) variables; you think they may have an effect on the dependent variable. The purpose of a multiple regression is to find an equation that best predicts the $Y$ variable as a linear function of the $X$ variables.

Multiple regression for prediction

One use of multiple regression is prediction or estimation of an unknown $Y$ value corresponding to a set of $X$ values. For example, let's say you're interested in finding suitable habitat to reintroduce the rare beach tiger beetle, Cicindela dorsalis dorsalis, which lives on sandy beaches on the Atlantic coast of North America. You've gone to a number of beaches that already have the beetles and measured the density of tiger beetles (the dependent variable) and several biotic and abiotic factors, such as wave exposure, sand particle size, beach steepness, density of amphipods and other prey organisms, etc. Multiple regression would give you an equation that would relate the tiger beetle density to a function of all the other variables. Then if you went to a beach that doesn't have tiger beetles and measured all the independent variables (wave exposure, sand particle size, etc.) you could use your multiple regression equation to predict the density of tiger beetles that could live there if you introduced them. This could help you guide your conservation efforts, so you don't waste resources introducing tiger beetles to beaches that won't support very many of them.

Multiple regression for understanding causes

A second use of multiple regression is to try to understand the functional relationships between the dependent and independent variables, to try to see what might be causing the variation in the dependent variable. For example, if you did a regression of tiger beetle density on sand particle size by itself, you would probably see a significant relationship. If you did a regression of tiger beetle density on wave exposure by itself, you would probably see a significant relationship. However, sand particle size and wave exposure are correlated; beaches with bigger waves tend to have bigger sand particles. Maybe sand particle size is really important, and the correlation between it and wave exposure is the only reason for a significant regression between wave exposure and beetle density. Multiple regression is a statistical way to try to control for this; it can answer questions like "If sand particle size (and every other measured variable) were the same, would the regression of beetle density on wave exposure be significant?" I'll say this more than once on this page: you have to be very careful if you're going to try to use multiple regression to understand cause-and-effect relationships. It's very easy to get misled by the results of a fancy multiple regression analysis, and you should use the results more as a suggestion, rather than for hypothesis testing.
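To make the two uses concrete, here is a small R sketch of the tiger beetle example. The beach numbers below are invented purely for illustration (the real data are not given in the text), so treat this only as a template for how the fitting and prediction steps would look.

# Hypothetical beach data, invented for illustration only; the variable names follow
# the tiger beetle example but the numbers are made up.
beaches <- data.frame(
  beetle_density = c(12, 40, 5, 31, 22, 18, 55, 8, 27, 44),
  wave_exposure  = c(2.1, 3.8, 1.2, 3.0, 2.6, 2.2, 4.1, 1.5, 2.9, 3.6),
  grain_size_mm  = c(0.25, 0.61, 0.18, 0.48, 0.39, 0.31, 0.70, 0.22, 0.45, 0.58),
  amphipods_m2   = c(110, 340, 60, 250, 190, 150, 420, 90, 230, 310))

# Fit beetle density as a linear function of the habitat variables.
habitat_model <- lm(beetle_density ~ wave_exposure + grain_size_mm + amphipods_m2,
                    data = beaches)
summary(habitat_model)   # partial regression coefficients: each is the estimated
                         # effect of that variable with the others held constant

# Prediction: expected beetle density on a new, beetle-free beach you have measured.
new_beach <- data.frame(wave_exposure = 3.2, grain_size_mm = 0.50, amphipods_m2 = 260)
predict(habitat_model, newdata = new_beach)

The same fitted model serves both purposes: predict() gives an estimated density for a new beach, while the coefficients, interpreted cautiously for the reasons discussed on this page, suggest which variables might matter.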
Null hypothesis The main null hypothesis of a multiple regression is that there is no relationship between the $X$ variables and the $Y$ variable; in other words, the $Y$ values you predict from your multiple regression equation are no closer to the actual $Y$ values than you would expect by chance. As you are doing a multiple regression, you'll also test a null hypothesis for each $X$ variable, that adding that $X$ variable to the multiple regression does not improve the fit of the multiple regression equation any more than expected by chance. While you will get $P$ values for the null hypotheses, you should use them as a guide to building a multiple regression equation; you should not use the $P$ values as a test of biological null hypotheses about whether a particular $X$ variable causes variation in $Y$. How it works The basic idea is that you find an equation that gives a linear relationship between the $X$ variables and the $Y$ variable, like this: $\hat{Y}=a+b_1X_1+b_2X_2+b_3X_3+...$ The $\hat{Y}$ is the expected value of $Y$ for a given set of $X$ values. $b_1$ is the estimated slope of a regression of $Y$ on $X_1$, if all of the other $X$ variables could be kept constant, and so on for $b_2,\; b_3,\; etc$; $a$ is the intercept. I'm not going to attempt to explain the math involved, but multiple regression finds values of $b_1$, etc. (the "partial regression coefficients") and the intercept ($a$) that minimize the squared deviations between the expected and observed values of $Y$. How well the equation fits the data is expressed by $R^2$, the "coefficient of multiple determination." This can range from $0$ (for no relationship between $Y$ and the $X$ variables) to $1$ (for a perfect fit, no difference between the observed and expected $Y$ values). The $P$ value is a function of the $R^2$, the number of observations, and the number of $X$ variables. When the purpose of multiple regression is prediction, the important result is an equation containing partial regression coefficients. If you had the partial regression coefficients and measured the $X$ variables, you could plug them into the equation and predict the corresponding value of $Y$. The magnitude of the partial regression coefficient depends on the unit used for each variable, so it does not tell you anything about the relative importance of each variable. When the purpose of multiple regression is understanding functional relationships, the important result is an equation containing standard partial regression coefficients, like this: $\hat{Y'}=a+b'_1X'_1+b'_2X'_2+b'_3X'_3+...$ where $b'_1$ is the standard partial regression coefficient of $Y$ on $X_1$. It is the number of standard deviations that $Y$ would change for every one standard deviation change in $X_1$, if all the other $X$ variables could be kept constant. The magnitude of the standard partial regression coefficients tells you something about the relative importance of different variables; $X$ variables with bigger standard partial regression coefficients have a stronger relationship with the $Y$ variable. Using nominal variables in a multiple regression Often, you'll want to use some nominal variables in your multiple regression. For example, if you're doing a multiple regression to try to predict blood pressure (the dependent variable) from independent variables such as height, weight, age, and hours of exercise per week, you'd also want to include sex as one of your independent variables. 
This is easy; you create a variable where every female has a $0$ and every male has a $1$, and treat that variable as if it were a measurement variable. When there are more than two values of the nominal variable, it gets more complicated. The basic idea is that for $k$ values of the nominal variable, you create $k-1$ dummy variables. So if your blood pressure study includes occupation category as a nominal variable with $23$ values (management, law, science, education, construction, etc.), you'd use $22$ dummy variables: one variable with one number for management and one number for non-management, another dummy variable with one number for law and another number for non-law, etc. One of the categories (farming, say) would not get a dummy variable of its own, since once you know the values of the $22$ dummy variables for the other categories, you know whether the person is a farmer. When there are more than two values of the nominal variable, choosing the two numbers to use for each dummy variable is complicated. You can start reading about it at this page about using nominal variables in multiple regression, and go on from there.

Selecting variables in multiple regression

Every time you add a variable to a multiple regression, the $R^2$ increases (unless the variable is a simple linear function of one of the other variables, in which case $R^2$ will stay the same). The best-fitting model is therefore the one that includes all of the $X$ variables. However, whether the purpose of a multiple regression is prediction or understanding functional relationships, you'll usually want to decide which variables are important and which are unimportant. In the tiger beetle example, if your purpose was prediction it would be useful to know that your prediction would be almost as good if you measured only sand particle size and amphipod density, rather than measuring a dozen difficult variables. If your purpose was understanding possible causes, knowing that certain variables did not explain much of the variation in tiger beetle density could suggest that they are probably not important causes of the variation in beetle density. One way to choose variables, called forward selection, is to do a linear regression for each of the $X$ variables, one at a time, then pick the $X$ variable that had the highest $R^2$. Next you do a multiple regression with the $X$ variable from step 1 and each of the other $X$ variables. You add the $X$ variable that increases the $R^2$ by the greatest amount, if the $P$ value of the increase in $R^2$ is below the desired cutoff (the "$P$-to-enter", which may or may not be $0.05$, depending on how you feel about extra variables in your regression). You continue adding $X$ variables until adding another $X$ variable does not significantly increase the $R^2$. To calculate the $P$ value of an increase in $R^2$ when increasing the number of $X$ variables from $d$ to $e$, where the total sample size is $n$, use the formula: $F_s=\frac{(R_{e}^{2}-R_{d}^{2})/(e-d)}{(1-R_{e}^{2})/(n-e-1)}$ For example, in the longnose dace analysis shown below, adding nitrate as a second $X$ variable raises $R^2$ from $0.1201$ to $0.2394$ with $n=68$, so $F_s=\frac{(0.2394-0.1201)/1}{(1-0.2394)/(68-2-1)}\approx 10.2$ with $1$ and $65$ degrees of freedom, which is the $F$ value shown for that step in the SAS stepwise output. A second technique, called backward elimination, is to start with a multiple regression using all of the $X$ variables, then perform multiple regressions with each $X$ variable removed in turn. You eliminate the $X$ variable whose removal causes the smallest decrease in $R^2$, if the $P$ value is greater than the "$P$-to-leave". You continue removing $X$ variables until removal of any $X$ variable would cause a significant decrease in $R^2$. Odd things can happen when using either of the above techniques.
You could add variables $X_1,\; X_2,\; X_3,\; \text {and}\; X_4$, with a significant increase in $R^2$ at each step, then find that once you've added $X_3$ and $X_4$, you can remove $X_1$ with little decrease in $R^2$. It is even possible to do multiple regression with independent variables $A,\; B,\; C,\; \text {and}\; D$, and have forward selection choose variables $A$ and $B$, and backward elimination choose variables $C$ and $D$. To avoid this, many people use stepwise multiple regression. To do stepwise multiple regression, you add $X$ variables as with forward selection. Each time you add an $X$ variable to the equation, you test the effects of removing any of the other $X$ variables that are already in your equation, and remove those if removal does not make the equation significantly worse. You continue this until adding new $X$ variables does not significantly increase $R^2$ and removing $X$ variables does not significantly decrease it. Important warning It is easy to throw a big data set at a multiple regression and get an impressive-looking output. However, many people are skeptical of the usefulness of multiple regression, especially for variable selection. They argue that you should use both careful examination of the relationships among the variables, and your understanding of the biology of the system, to construct a multiple regression model that includes all the independent variables that you think belong in it. This means that different researchers, using the same data, could come up with different results based on their biases, preconceived notions, and guesses; many people would be upset by this subjectivity. Whether you use an objective approach like stepwise multiple regression, or a subjective model-building approach, you should treat multiple regression as a way of suggesting patterns in your data, rather than rigorous hypothesis testing. To illustrate some problems with multiple regression, imagine you did a multiple regression on vertical leap in children $5$ to $12$ years old, with height, weight, age and score on a reading test as independent variables. All four independent variables are highly correlated in children, since older children are taller, heavier and read better, so it's possible that once you've added weight and age to the model, there is so little variation left that the effect of height is not significant. It would be biologically silly to conclude that height had no influence on vertical leap. Because reading ability is correlated with age, it's possible that it would contribute significantly to the model; that might suggest some interesting followup experiments on children all of the same age, but it would be unwise to conclude that there was a real effect of reading ability on vertical leap based solely on the multiple regression. Assumptions Like most other tests for measurement variables, multiple regression assumes that the variables are normally distributed and homoscedastic. It's probably not that sensitive to violations of these assumptions, which is why you can use a variable that just has the values $0$ or $1$. It also assumes that each independent variable would be linearly related to the dependent variable, if all the other independent variables were held constant. This is a difficult assumption to test, and is one of the many reasons you should be cautious when doing a multiple regression (and should do a lot more reading about it, beyond what is on this page). 
You can (and should) look at the correlation between the dependent variable and each independent variable separately, but just because an individual correlation looks linear, it doesn't mean the relationship would be linear if everything else were held constant. Another assumption of multiple regression is that the $X$ variables are not multicollinear. Multicollinearity occurs when two independent variables are highly correlated with each other. For example, let's say you included both height and arm length as independent variables in a multiple regression with vertical leap as the dependent variable. Because height and arm length are highly correlated with each other, having both height and arm length in your multiple regression equation may only slightly improve the $R^2$ over an equation with just height. So you might conclude that height is highly influential on vertical leap, while arm length is unimportant. However, this result would be very unstable; adding just one more observation could tip the balance, so that now the best equation had arm length but not height, and you could conclude that height has little effect on vertical leap. If your goal is prediction, multicollinearity isn't that important; you'd get just about the same predicted $Y$ values, whether you used height or arm length in your equation. However, if your goal is understanding causes, multicollinearity can confuse you. Before doing multiple regression, you should check the correlation between each pair of independent variables, and if two are highly correlated, you may want to pick just one.

Example $1$

I extracted some data from the Maryland Biological Stream Survey to practice multiple regression on; the data are shown below in the SAS example. The dependent variable is the number of longnose dace (Rhinichthys cataractae) per $75 m$ section of stream. The independent variables are the area (in acres) drained by the stream; the dissolved oxygen (in mg/liter); the maximum depth (in cm) of the $75 m$ segment of stream; nitrate concentration (mg/liter); sulfate concentration (mg/liter); and the water temperature on the sampling date (in degrees C). One biological goal might be to measure the physical and chemical characteristics of a stream and be able to predict the abundance of longnose dace; another goal might be to generate hypotheses about the causes of variation in longnose dace abundance. The result of a stepwise multiple regression, with $P$-to-enter and $P$-to-leave both equal to $0.15$, is that acreage, nitrate, and maximum depth contribute to the multiple regression equation. The $R^2$ of the model including these three terms is $0.28$, which isn't very high.

Graphing the results

If the multiple regression equation ends up with only two independent variables, you might be able to draw a three-dimensional graph of the relationship. Because most humans have a hard time visualizing four or more dimensions, there's no good visual way to summarize all the information in a multiple regression with three or more independent variables.

Similar tests

If the dependent variable is a nominal variable, you should do multiple logistic regression. There are many other techniques you can use when you have three or more measurement variables, including principal components analysis, principal coordinates analysis, discriminant function analysis, hierarchical and non-hierarchical clustering, and multidimensional scaling.
I'm not going to write about them; your best bet is probably to see how other researchers in your field have analyzed data similar to yours. How to do multiple regression Spreadsheet If you're serious about doing multiple regressions as part of your research, you're going to have to learn a specialized statistical program such as SAS or SPSS. I've written a spreadsheet multreg.xls that will enable you to do a multiple regression with up to $12\; X$ variables and up to $1000$ observations. It's fun to play with, but I'm not confident enough in it that you should use it for publishable results. The spreadsheet includes histograms to help you decide whether to transform your variables, and scattergraphs of the $Y$ variable vs. each $X$ variable so you can see if there are any non-linear relationships. It doesn't do variable selection automatically, you manually choose which variables to include. Web pages I've seen a few web pages that are supposed to perform multiple regression, but I haven't been able to get them to work on my computer. R Salvatore Mangiafico's $R$ Companion has a sample R program for multiple regression. SAS You use PROC REG to do multiple regression in SAS. Here is an example using the data on longnose dace abundance described above. DATA fish; VAR stream \$ longnosedace acreage do2 maxdepth no3 so4 temp; DATALINES; BASIN_RUN 13 2528 9.6 80 2.28 16.75 15.3 BEAR_BR 12 3333 8.5 83 5.34 7.74 19.4 BEAR_CR 54 19611 8.3 96 0.99 10.92 19.5 BEAVER_DAM_CR 19 3570 9.2 56 5.44 16.53 17.0 BEAVER_RUN 37 1722 8.1 43 5.66 5.91 19.3 BENNETT_CR 2 583 9.2 51 2.26 8.81 12.9 BIG_BR 72 4790 9.4 91 4.10 5.65 16.7 BIG_ELK_CR 164 35971 10.2 81 3.20 17.53 13.8 BIG_PIPE_CR 18 25440 7.5 120 3.53 8.20 13.7 BLUE_LICK_RUN 1 2217 8.5 46 1.20 10.85 14.3 BROAD_RUN 53 1971 11.9 56 3.25 11.12 22.2 BUFFALO_RUN 16 12620 8.3 37 0.61 18.87 16.8 BUSH_CR 32 19046 8.3 120 2.93 11.31 18.0 CABIN_JOHN_CR 21 8612 8.2 103 1.57 16.09 15.0 CARROLL_BR 23 3896 10.4 105 2.77 12.79 18.4 COLLIER_RUN 18 6298 8.6 42 0.26 17.63 18.2 CONOWINGO_CR 112 27350 8.5 65 6.95 14.94 24.1 DEAD_RUN 25 4145 8.7 51 0.34 44.93 23.0 DEEP_RUN 5 1175 7.7 57 1.30 21.68 21.8 DEER_CR 26 8297 9.9 60 5.26 6.36 19.1 DORSEY_RUN 8 7814 6.8 160 0.44 20.24 22.6 FALLS_RUN 15 1745 9.4 48 2.19 10.27 14.3 FISHING_CR 11 5046 7.6 109 0.73 7.10 19.0 FLINTSTONE_CR 11 18943 9.2 50 0.25 14.21 18.5 GREAT_SENECA_CR 87 8624 8.6 78 3.37 7.51 21.3 GREENE_BR 33 2225 9.1 41 2.30 9.72 20.5 GUNPOWDER_FALLS 22 12659 9.7 65 3.30 5.98 18.0 HAINES_BR 98 1967 8.6 50 7.71 26.44 16.8 HAWLINGS_R 1 1172 8.3 73 2.62 4.64 20.5 HAY_MEADOW_BR 5 639 9.5 26 3.53 4.46 20.1 HERRINGTON_RUN 1 7056 6.4 60 0.25 9.82 24.5 HOLLANDS_BR 38 1934 10.5 85 2.34 11.44 12.0 ISRAEL_CR 30 6260 9.5 133 2.41 13.77 21.0 LIBERTY_RES 12 424 8.3 62 3.49 5.82 20.2 LITTLE_ANTIETAM_CR 24 3488 9.3 44 2.11 13.37 24.0 LITTLE_BEAR_CR 6 3330 9.1 67 0.81 8.16 14.9 LITTLE_CONOCOCHEAGUE_CR 15 2227 6.8 54 0.33 7.60 24.0 LITTLE_DEER_CR 38 8115 9.6 110 3.40 9.22 20.5 LITTLE_FALLS 84 1600 10.2 56 3.54 5.69 19.5 LITTLE_GUNPOWDER_R 3 15305 9.7 85 2.60 6.96 17.5 LITTLE_HUNTING_CR 18 7121 9.5 58 0.51 7.41 16.0 LITTLE_PAINT_BR 63 5794 9.4 34 1.19 12.27 17.5 MAINSTEM_PATUXENT_R 239 8636 8.4 150 3.31 5.95 18.1 MEADOW_BR 234 4803 8.5 93 5.01 10.98 24.3 MILL_CR 6 1097 8.3 53 1.71 15.77 13.1 MORGAN_RUN 76 9765 9.3 130 4.38 5.74 16.9 MUDDY_BR 25 4266 8.9 68 2.05 12.77 17.0 MUDLICK_RUN 8 1507 7.4 51 0.84 16.30 21.0 NORTH_BR 23 3836 8.3 121 1.32 7.36 18.5 NORTH_BR_CASSELMAN_R 16 17419 7.4 48 0.29 2.50 18.0 NORTHWEST_BR 6 8735 8.2 63 1.56 
13.22 20.8 NORTHWEST_BR_ANACOSTIA_R 100 22550 8.4 107 1.41 14.45 23.0 OWENS_CR 80 9961 8.6 79 1.02 9.07 21.8 PATAPSCO_R 28 4706 8.9 61 4.06 9.90 19.7 PINEY_BR 48 4011 8.3 52 4.70 5.38 18.9 PINEY_CR 18 6949 9.3 100 4.57 17.84 18.6 PINEY_RUN 36 11405 9.2 70 2.17 10.17 23.6 PRETTYBOY_BR 19 904 9.8 39 6.81 9.20 19.2 RED_RUN 32 3332 8.4 73 2.09 5.50 17.7 ROCK_CR 3 575 6.8 33 2.47 7.61 18.0 SAVAGE_R 106 29708 7.7 73 0.63 12.28 21.4 SECOND_MINE_BR 62 2511 10.2 60 4.17 10.75 17.7 SENECA_CR 23 18422 9.9 45 1.58 8.37 20.1 SOUTH_BR_CASSELMAN_R 2 6311 7.6 46 0.64 21.16 18.5 SOUTH_BR_PATAPSCO 26 1450 7.9 60 2.96 8.84 18.6 SOUTH_FORK_LINGANORE_CR 20 4106 10.0 96 2.62 5.45 15.4 TUSCARORA_CR 38 10274 9.3 90 5.45 24.76 15.0 WATTS_BR 19 510 6.7 82 5.25 14.19 26.5 ; PROC REG DATA=fish; MODEL longnosedace=acreage do2 maxdepth no3 so4 temp / SELECTION=STEPWISE SLENTRY=0.15 SLSTAY=0.15 DETAILS=SUMMARY STB; RUN; In the MODEL statement, the dependent variable is to the left of the equals sign, and all the independent variables are to the right. SELECTION determines which variable selection method is used; choices include FORWARD, BACKWARD, STEPWISE, and several others. You can omit the SELECTION parameter if you want to see the multiple regression model that includes all the independent variables. SLENTRY is the significance level for entering a variable into the model, or $P$-to-enter, if you're using FORWARD or STEPWISE selection; in this example, a variable must have a $P$ value less than $0.15$ to be entered into the regression model. SLSTAY is the significance level for removing a variable in BACKWARD or STEPWISE selection, or $P$-to-leave; in this example, a variable with a $P$ value greater than $0.15$ will be removed from the model. DETAILS=SUMMARY produces a shorter output file; you can omit it to see more details on each step of the variable selection process. The STB option causes the standard partial regression coefficients to be displayed. Summary of Stepwise Selection Variable Variable Number Partial Model Step Entered Removed Vars In R-Square R-Square C(p) F Value Pr > F 1 acreage 1 0.1201 0.1201 14.2427 9.01 0.0038 2 no3 2 0.1193 0.2394 5.6324 10.20 0.0022 3 maxdepth 3 0.0404 0.2798 4.0370 3.59 0.0625 The summary shows that "acreage" was added to the model first, yielding an $R^2$ of $0.1201$. Next, "no3" was added. The $R^2$ increased to $0.2394$, and the increase in $R^2$ was significant ($P=0.0022$). Next, "maxdepth" was added. The $R^2$ increased to $0.2798$, which was not quite significant ($P=0.0625$); SLSTAY was set to $0.15$, not $0.05$, because you might want to include this variable in a predictive model even if it's not quite significant. None of the other variables increased $R^2$ enough to have a $P$ value less than $0.15$, and removing any of the variables caused a decrease in $R^2$ big enough that $P$ was less than $0.15$, so the stepwise process is done. Parameter Estimates Parameter Standard Standardized Variable DF Estimate Error t Value Pr > |t| Estimate Intercept 1 -23.82907 15.27399 -1.56 0.1237 0 acreage 1 0.00199 0.00067421 2.95 0.0045 0.32581 maxdepth 1 0.33661 0.17757 1.90 0.0625 0.20860 no3 1 8.67304 2.77331 3.13 0.0027 0.33409 The "parameter estimates" are the partial regression coefficients; they show that the model is: $\hat{Y}=0.00199(acreage)+0.3361(maxdepth)+8.67304(no3)−23.82907$ The "standardized estimates" are the standard partial regression coefficients; they show that "no3" has the greatest contribution to the model, followed by "acreage" and then "maxdepth". 
The value of this multiple regression would be that it suggests that the acreage of a stream's watershed is somehow important. Because watershed area wouldn't have any direct effect on the fish in the stream, I would carefully look at the correlations between the acreage and the other independent variables; I would also try to see if there are other variables that were not analyzed that might be both correlated with watershed area and directly important to fish, such as current speed, water clarity, or substrate type. Power analysis You need to have several times as many observations as you have independent variables, otherwise you can get "overfitting"—it could look like every independent variable is important, even if they're not. A common rule of thumb is that you should have at least $10$ to $20$ times as many observations as you have independent variables. You'll probably just want to collect as much data as you can afford, but if you really need to figure out how to do a formal power analysis for multiple regression, Kelley and Maxwell (2003) is a good place to start.
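For those who want to try the longnose dace analysis in R rather than SAS, here is a minimal sketch. It assumes you have saved the stream data shown in the SAS example in a text file (the file name here is just a placeholder), and note that R's built-in step() function selects variables by AIC rather than by the $P$-to-enter and $P$-to-leave criteria described above, so it will not necessarily keep exactly the same variables.

# Minimal sketch; "longnosedace.txt" is a placeholder for wherever you have put the
# Maryland stream data, one column per variable with a header row.
fish <- read.table("longnosedace.txt", header = TRUE)

# Multiple regression with all six independent variables.
full <- lm(longnosedace ~ acreage + do2 + maxdepth + no3 + so4 + temp, data = fish)
summary(full)   # partial regression coefficients, R-squared, and P values

# Stepwise selection starting from an intercept-only model (AIC-based, not P-based).
chosen <- step(lm(longnosedace ~ 1, data = fish), scope = formula(full),
               direction = "both")
summary(chosen)

# Standard partial regression coefficients: one simple way is to standardize the
# variables and refit, here using the three variables the SAS stepwise run kept.
fish_std <- as.data.frame(scale(fish[, c("longnosedace", "acreage", "maxdepth", "no3")]))
summary(lm(longnosedace ~ acreage + maxdepth + no3, data = fish_std))

If step() keeps a different set of variables than the SAS stepwise run, that is the AIC criterion at work, not an error in your data.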
Learning Objectives

• To use simple logistic regression when you have one nominal variable and one measurement variable, and you want to know whether variation in the measurement variable causes variation in the nominal variable.

When to use it

Use simple logistic regression when you have one nominal variable with two values (male/female, dead/alive, etc.) and one measurement variable. The nominal variable is the dependent variable, and the measurement variable is the independent variable. I'm separating simple logistic regression, with only one independent variable, from multiple logistic regression, which has more than one independent variable. Many people lump all logistic regression together, but I think it's useful to treat simple logistic regression separately, because it's simpler. Simple logistic regression is analogous to linear regression, except that the dependent variable is nominal, not a measurement. One goal is to see whether the probability of getting a particular value of the nominal variable is associated with the measurement variable; the other goal is to predict the probability of getting a particular value of the nominal variable, given the measurement variable.

Grain size (mm)   Spiders
0.245             absent
0.247             absent
0.285             present
0.299             present
0.327             present
0.347             present
0.356             absent
0.36              present
0.363             absent
0.364             present
0.398             absent
0.4               present
0.409             absent
0.421             present
0.432             absent
0.473             present
0.509             present
0.529             present
0.561             absent
0.569             absent
0.594             present
0.638             present
0.656             present
0.816             present
0.853             present
0.938             present
1.036             present
1.045             present

As an example of simple logistic regression, Suzuki et al. (2006) measured sand grain size on $28$ beaches in Japan and observed the presence or absence of the burrowing wolf spider Lycosa ishikariana on each beach. Sand grain size is a measurement variable, and spider presence or absence is a nominal variable. Spider presence or absence is the dependent variable; if there is a relationship between the two variables, it would be sand grain size affecting spiders, not the presence of spiders affecting the sand. One goal of this study would be to determine whether there was a relationship between sand grain size and the presence or absence of the species, in hopes of understanding more about the biology of the spiders. Because this species is endangered, another goal would be to find an equation that would predict the probability of a wolf spider population surviving on a beach with a particular sand grain size, to help determine which beaches to reintroduce the spider to. You can also analyze data with one nominal and one measurement variable using a one-way anova or a Student's t–test, and the distinction can be subtle. One clue is that logistic regression allows you to predict the probability of the nominal variable. For example, imagine that you had measured the cholesterol level in the blood of a large number of $55$-year-old women, then followed up ten years later to see who had had a heart attack. You could do a two-sample $t$–test, comparing the cholesterol levels of the women who did have heart attacks vs. those who didn't, and that would be a perfectly reasonable way to test the null hypothesis that cholesterol level is not associated with heart attacks; if the hypothesis test was all you were interested in, the $t$–test would probably be better than the less-familiar logistic regression.
However, if you wanted to predict the probability that a $55$-year-old woman with a particular cholesterol level would have a heart attack in the next ten years, so that doctors could tell their patients "If you reduce your cholesterol by $40$ points, you'll reduce your risk of heart attack by $X\%$," you would have to use logistic regression. Another situation that calls for logistic regression, rather than an anova or $t$–test, is when you determine the values of the measurement variable, while the values of the nominal variable are free to vary. For example, let's say you are studying the effect of incubation temperature on sex determination in Komodo dragons. You raise $10$ eggs at $30^{\circ}C$, $30$ eggs at $32^{\circ}C$, $12$ eggs at $34^{\circ}C$, etc., then determine the sex of the hatchlings. It would be silly to compare the mean incubation temperatures between male and female hatchlings, and test the difference using an anova or $t$–test, because the incubation temperature does not depend on the sex of the offspring; you've set the incubation temperature, and if there is a relationship, it's that the sex of the offspring depends on the temperature. When there are multiple observations of the nominal variable for each value of the measurement variable, as in the Komodo dragon example, you'll often see the data analyzed using linear regression, with the proportions treated as a second measurement variable. Often the proportions are arc-sine transformed, because that makes the distributions of proportions more normal. This is not horrible, but it's not strictly correct. One problem is that linear regression treats all of the proportions equally, even if they are based on much different sample sizes. If $6$ out of $10$ Komodo dragon eggs raised at $30^{\circ}C$ were female, and $15$ out of $30$ eggs raised at $32^{\circ}C$ were female, the $60\%$ female at $30^{\circ}C$ and $50\%$ at $32^{\circ}C$ would get equal weight in a linear regression, which is inappropriate. Logistic regression analyzes each observation (in this example, the sex of each Komodo dragon) separately, so the $30$ dragons at $32^{\circ}C$ would have $3$ times the weight of the $10$ dragons at $30^{\circ}C$. While logistic regression with two values of the nominal variable (binary logistic regression) is by far the most common, you can also do logistic regression with more than two values of the nominal variable, called multinomial logistic regression. I'm not going to cover it here at all. Sorry. You can also do simple logistic regression with nominal variables for both the independent and dependent variables, but to be honest, I don't understand the advantage of this over a chi-squared or G–test of independence.

Null hypothesis

The statistical null hypothesis is that the probability of a particular value of the nominal variable is not associated with the value of the measurement variable; in other words, the line describing the relationship between the measurement variable and the probability of the nominal variable has a slope of zero.

How the test works

Simple logistic regression finds the equation that best predicts the value of the $Y$ variable for each value of the $X$ variable. What makes logistic regression different from linear regression is that you do not measure the $Y$ variable directly; it is instead the probability of obtaining a particular value of a nominal variable. For the spider example, the values of the nominal variable are "spiders present" and "spiders absent."
The $Y$ variable used in logistic regression would then be the probability of spiders being present on a beach. This probability could take values from $0$ to $1$. The limited range of this probability would present problems if used directly in a regression, so the odds, $Y/(1-Y)$, is used instead. (If the probability of spiders on a beach is $0.25$, the odds of having spiders are $0.25/(1-0.25)=1/3$. In gambling terms, this would be expressed as "$3$ to $1$ odds against having spiders on a beach.") Taking the natural log of the odds makes the variable more suitable for a regression, so the result of a logistic regression is an equation that looks like this: $ln\left [ \frac{Y}{(1-Y)}\right ]=a+bX$ You find the slope ($b$) and intercept ($a$) of the best-fitting equation in a logistic regression using the maximum-likelihood method, rather than the least-squares method you use for linear regression. Maximum likelihood is a computer-intensive technique; the basic idea is that it finds the values of the parameters under which you would be most likely to get the observed results. For the spider example, the equation is: $ln\left [ \frac{Y}{(1-Y)}\right ]=-1.6476+5.1215(\text{grain size})$ Rearranging to solve for $Y$ (the probability of spiders on a beach) yields: $Y=\frac{e^{-1.6476+5.1215(\text{grain size})}}{1+e^{-1.6476+5.1215(\text{grain size})}}$ where $e$ is the base of natural logs. So if you went to a beach and wanted to predict the probability that spiders would live there, you could measure the sand grain size, plug it into the equation, and get an estimate of $Y$, the probability of spiders being on the beach. There are several different ways of estimating the $P$ value. The Wald chi-square is fairly popular, but it may yield inaccurate results with small sample sizes. The likelihood ratio method may be better. It uses the difference between the probability of obtaining the observed results under the logistic model and the probability of obtaining the observed results in a model with no relationship between the independent and dependent variables. I recommend you use the likelihood-ratio method; be sure to specify which method you've used when you report your results. For the spider example, the $P$ value using the likelihood ratio method is $0.033$, so you would reject the null hypothesis. The $P$ value for the Wald method is $0.088$, which is not quite significant.

Assumptions

Simple logistic regression assumes that the observations are independent; in other words, that one observation does not affect another. In the Komodo dragon example, if all the eggs at $30^{\circ}C$ were laid by one mother, and all the eggs at $32^{\circ}C$ were laid by a different mother, that would make the observations non-independent. If you design your experiment well, you won't have a problem with this assumption. Simple logistic regression assumes that the relationship between the natural log of the odds and the measurement variable is linear. You might be able to fix this with a transformation of your measurement variable, but if the relationship looks like a $U$ or upside-down $U$, a transformation won't work. For example, Suzuki et al. (2006) found an increasing probability of spiders with increasing grain size, but I'm sure that if they looked at beaches with even larger sand (in other words, gravel), the probability of spiders would go back down.
In that case you couldn't do simple logistic regression; you'd probably want to do multiple logistic regression with an equation including both $X$ and $X^2$ terms, instead. Simple logistic regression does not assume that the measurement variable is normally distributed.

Example

McDonald (1985) counted allele frequencies at the mannose-6-phosphate isomerase (Mpi) locus in the amphipod crustacean Megalorchestia californiana, which lives on sandy beaches of the Pacific coast of North America. There were two common alleles, Mpi90 and Mpi100. The latitude of each collection location, the count of each of the alleles, and the proportion of the Mpi100 allele, are shown here:

Location             Latitude   Mpi90   Mpi100   p, Mpi100
Port Townsend, WA    48.1       47      139      0.748
Neskowin, OR         45.2       177     241      0.577
Siuslaw R., OR       44         1087    1183     0.521
Umpqua R., OR        43.7       187     175      0.483
Coos Bay, OR         43.5       397     671      0.628
San Francisco, CA    37.8       40      14       0.259
Carmel, CA           36.6       39      17       0.304
Santa Barbara, CA    34.3       30      0        0

Allele (Mpi90 or Mpi100) is the nominal variable, and latitude is the measurement variable. If the biological question were "Do different locations have different allele frequencies?", you would ignore latitude and do a chi-square or G–test of independence; here the biological question is "Are allele frequencies associated with latitude?" Note that although the proportion of the Mpi100 allele seems to increase with increasing latitude, the sample sizes for the northern and southern areas are pretty small; doing a linear regression of allele frequency vs. latitude would give them equal weight to the much larger samples from Oregon, which would be inappropriate. Doing a logistic regression, the result is $\chi ^2=83.3,\; 1\; d.f.,\; P=7\times 10^{-20}$. The equation of the relationship is: $ln\left [ \frac{Y}{(1-Y)}\right ]=-7.6469+0.1786(latitude)$ where $Y$ is the predicted probability of getting an Mpi100 allele. Solving this for $Y$ gives: $Y=\frac{e^{-7.6469+0.1786(latitude)}}{1+e^{-7.6469+0.1786(latitude)}}$ This logistic regression line is shown on the graph; note that it has a gentle $S$-shape. All logistic regression equations have an $S$-shape, although it may not be obvious if you look over a narrow range of values.

Graphing the results

If you have multiple observations for each value of the measurement variable, as in the amphipod example above, you can plot a scattergraph with the measurement variable on the $X$ axis and the proportions on the $Y$ axis. You might want to put $95\%$ confidence intervals on the points; this gives a visual indication of which points contribute more to the regression (the ones with larger sample sizes have smaller confidence intervals). There's no automatic way in spreadsheets to add the logistic regression line. Here's how I got it onto the graph of the amphipod data. First, I put the latitudes in column $A$ and the proportions in column $B$. Then, using the Fill: Series command, I added numbers $30,\; 30.1,\; 30.2,...50$ to cells $A10$ through $A210$. In column $C$ I entered the equation for the logistic regression line; in Excel format, it's $=exp(-7.6469+0.1786*(A10))/(1+exp(-7.6469+0.1786*(A10)))$ for row $10$. I copied this into cells $C11$ through $C210$. Then when I drew a graph of the numbers in columns $A,\; B,\; \text{and}\; C$, I gave the numbers in column B symbols but no line, and the numbers in column $C$ got a line but no symbols.
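If you would rather draw this graph in R than in a spreadsheet, here is a minimal sketch using the latitudes and allele counts from the table above; glm() with a binomial family does the logistic regression, and predict() supplies the points for the fitted curve.

# The amphipod data from the table above.
amphipods <- data.frame(
  latitude = c(48.1, 45.2, 44, 43.7, 43.5, 37.8, 36.6, 34.3),
  mpi90    = c(47, 177, 1087, 187, 397, 40, 39, 30),
  mpi100   = c(139, 241, 1183, 175, 671, 14, 17, 0))

# Logistic regression on the allele counts (successes = Mpi100, failures = Mpi90).
fit <- glm(cbind(mpi100, mpi90) ~ latitude, family = binomial, data = amphipods)

# Observed proportions, with the fitted S-shaped curve drawn on top.
with(amphipods, plot(latitude, mpi100 / (mpi90 + mpi100),
                     xlab = "Latitude", ylab = "Proportion of Mpi100"))
lat_grid <- seq(30, 50, by = 0.1)
lines(lat_grid, predict(fit, newdata = data.frame(latitude = lat_grid),
                        type = "response"))

The coefficients from this fit should be close to the $-7.6469$ and $0.1786$ used in the Excel formula above.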
If you only have one observation of the nominal variable for each value of the measurement variable, as in the spider example, it would be silly to draw a scattergraph, as each point on the graph would be at either $0$ or $1$ on the $Y$ axis. If you have lots of data points, you can divide the measurement values into intervals and plot the proportion for each interval on a bar graph. Here is data from the Maryland Biological Stream Survey on $2180$ sampling sites in Maryland streams. The measurement variable is dissolved oxygen concentration, and the nominal variable is the presence or absence of the central stoneroller, Campostoma anomalum. If you use a bar graph to illustrate a logistic regression, you should explain that the grouping was for heuristic purposes only, and the logistic regression was done on the raw, ungrouped data. Similar tests You can do logistic regression with a dependent variable that has more than two values, known as multinomial, polytomous, or polychotomous logistic regression. I don't cover this here. Use multiple logistic regression when the dependent variable is nominal and there is more than one independent variable. It is analogous to multiple linear regression, and all of the same caveats apply. Use linear regression when the $Y$ variable is a measurement variable. When there is just one measurement variable and one nominal variable, you could use one-way anova or a t–test to compare the means of the measurement variable between the two groups. Conceptually, the difference is whether you think variation in the nominal variable causes variation in the measurement variable (use a $t$–test) or variation in the measurement variable causes variation in the probability of the nominal variable (use logistic regression). You should also consider who you are presenting your results to, and how they are going to use the information. For example, Tallamy et al. (2003) examined mating behavior in spotted cucumber beetles (Diabrotica undecimpunctata). Male beetles stroke the female with their antenna, and Tallamy et al. wanted to know whether faster-stroking males had better mating success. They compared the mean stroking rate of $21$ successful males ($50.9$ strokes per minute) and $16$ unsuccessful males ($33.8$ strokes per minute) with a two-sample $t$–test, and found a significant result ($P<0.0001$). This is a simple and clear result, and it answers the question, "Are female spotted cucumber beetles more likely to mate with males who stroke faster?" Tallamy et al. (2003) could have analyzed these data using logistic regression; it is a more difficult and less familiar statistical technique that might confuse some of their readers, but in addition to answering the yes/no question about whether stroking speed is related to mating success, they could have used the logistic regression to predict how much increase in mating success a beetle would get as it increased its stroking speed. This could be useful additional information (especially if you're a male cucumber beetle). How to do the test Spreadsheet I have written a spreadsheet to do simple logistic regression logistic.xls. You can enter the data either in summarized form (for example, saying that at $30^{\circ}C$ there were $7$ male and $3$ female Komodo dragons) or non-summarized form (for example, entering each Komodo dragon separately, with "$0$" for a male and "$1$" for a female). It uses the likelihood-ratio method for calculating the $P$ value. The spreadsheet makes use of the "Solver" tool in Excel. 
If you don't see Solver listed in the Tools menu, go to Add-Ins in the Tools menu and install Solver. The spreadsheet is fun to play with, but I'm not confident enough in it to recommend that you use it for publishable results. Web page There is a very nice web page that will do logistic regression, with the likelihood-ratio chi-square. You can enter the data either in summarized form or non-summarized form, with the values separated by tabs (which you'll get if you copy and paste from a spreadsheet) or commas. You would enter the amphipod data like this: 48.1,47,139 45.2,177,241 44.0,1087,1183 43.7,187,175 43.5,397,671 37.8,40,14 36.6,39,17 34.3,30,0 R Salvatore Mangiafico's $R$ Companion has a sample R program for simple logistic regression. SAS Use PROC LOGISTIC for simple logistic regression. There are two forms of the MODEL statement. When you have multiple observations for each value of the measurement variable, your data set can have the measurement variable, the number of "successes" (this can be either value of the nominal variable), and the total (which you may need to create a new variable for, as shown here). Here is an example using the amphipod data: DATA amphipods; INPUT location $latitude mpi90 mpi100; total=mpi90+mpi100; DATALINES; Port_Townsend,_WA 48.1 47 139 Neskowin,_OR 45.2 177 241 Siuslaw_R.,_OR 44.0 1087 1183 Umpqua_R.,_OR 43.7 187 175 Coos_Bay,_OR 43.5 397 671 San_Francisco,_CA 37.8 40 14 Carmel,_CA 36.6 39 17 Santa_Barbara,_CA 34.3 30 0 ; PROC LOGISTIC DATA=amphipods; MODEL mpi100/total=latitude; RUN; Note that you create the new variable TOTAL in the DATA step by adding the number of Mpi90 and Mpi100 alleles. The MODEL statement uses the number of Mpi100 alleles out of the total as the dependent variable. The $P$ value would be the same if you used Mpi90; the equation parameters would be different. There is a lot of output from PROC LOGISTIC that you don't need. The program gives you three different $P$ values; the likelihood ratio $P$ value is the most commonly used: Testing Global Null Hypothesis: BETA=0 Test Chi-Square DF Pr > ChiSq Likelihood Ratio 83.3007 1 <.0001 P value Score 80.5733 1 <.0001 Wald 72.0755 1 <.0001 The coefficients of the logistic equation are given under "estimate": Analysis of Maximum Likelihood Estimates Standard Wald Parameter DF Estimate Error Chi-Square Pr > ChiSq Intercept 1 -7.6469 0.9249 68.3605 <.0001 latitude 1 0.1786 0.0210 72.0755 <.0001 Using these coefficients, the maximum likelihood equation for the proportion of Mpi100 alleles at a particular latitude is: $Y=\frac{e^{-7.6469+0.1786(latitude)}}{1+e^{-7.6469+0.1786(latitude)}}$ It is also possible to use data in which each line is a single observation. In that case, you may use either words or numbers for the dependent variable. In this example, the data are height (in inches) of the $2004$ students of my class, along with their favorite insect (grouped into beetles vs. everything else, where "everything else" includes spiders, which a biologist really should know are not insects): DATA insect; INPUT height insect$ @@; DATALINES; 62 beetle 66 other 61 beetle 67 other 62 other 76 other 66 other 70 beetle 67 other 66 other 70 other 70 other 77 beetle 76 other 72 beetle 76 beetle 72 other 70 other 65 other 63 other 63 other 70 other 72 other 70 beetle 74 other ; PROC LOGISTIC DATA=insect; MODEL insect=height; RUN; The format of the results is the same for either form of the MODEL statement. 
In this case, the model would be the probability of BEETLE, because it is alphabetically first; to model the probability of OTHER, you would add an EVENT after the nominal variable in the MODEL statement, making it "MODEL insect (EVENT='other')=height;" Power analysis You can use G*Power to estimate the sample size needed for a simple logistic regression. Choose "$z$ tests" under Test family and "Logistic regression" under Statistical test. Set the number of tails (usually two), alpha (usually $0.05$), and power (often $0.8$ or $0.9$). For simple logistic regression, set "X distribution" to Normal, "R2 other X" to $0$, "X parm μ" to $0$, and "X parm σ" to $1$. The last thing to set is your effect size. This is the odds ratio of the difference you're hoping to find between the odds of $Y$ when $X$ is equal to the mean $X$, and the odds of $Y$ when $X$ is equal to the mean $X$ plus one standard deviation. You can click on the "Determine" button to calculate this. For example, let's say you want to study the relationship between sand particle size and the presences or absence of tiger beetles. You set alpha to $0.05$ and power to $0.90$. You expect, based on previous research, that $30\%$ of the beaches you'll look at will have tiger beetles, so you set "Pr(Y=1|X=1) H0" to $0.30$. Also based on previous research, you expect a mean sand grain size of $0.6 mm$ with a standard deviation of $0.2 mm$. The effect size (the minimum deviation from the null hypothesis that you hope to see) is that as the sand grain size increases by one standard deviation, from $0.6 mm$ to $0.8 mm$, the proportion of beaches with tiger beetles will go from $0.30$ to $0.40$. You click on the "Determine" button and enter $0.40$ for "Pr(Y=1|X=1) H1" and $0.30$ for "Pr(Y=1|X=1) H0", then hit "Calculate and transfer to main window." It will fill in the odds ratio ($1.555$ for our example) and the "Pr(Y=1|X=1) H0". The result in this case is $206$, meaning your experiment is going to require that you travel to $206$ warm, beautiful beaches.
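Finally, if you want to run the spider example in R, here is a minimal sketch that enters each beach as a separate observation, the same per-observation form used for the SAS insect example above; summary() reports Wald tests, and anova() with a chi-square test gives the likelihood-ratio $P$ value recommended earlier.

# Grain sizes and spider presence (1) or absence (0) from the table near the top
# of this page, one observation per beach.
grainsize <- c(0.245, 0.247, 0.285, 0.299, 0.327, 0.347, 0.356, 0.36, 0.363, 0.364,
               0.398, 0.4, 0.409, 0.421, 0.432, 0.473, 0.509, 0.529, 0.561, 0.569,
               0.594, 0.638, 0.656, 0.816, 0.853, 0.938, 1.036, 1.045)
spiders   <- c(0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0,
               1, 1, 1, 1, 1, 1, 1, 1)

fit <- glm(spiders ~ grainsize, family = binomial)
summary(fit)                 # Wald tests for the intercept and slope
anova(fit, test = "Chisq")   # likelihood-ratio test for the grain size effect

The intercept and slope should come out near the $-1.6476$ and $5.1215$ quoted above, with a likelihood-ratio $P$ value near $0.033$ and a Wald $P$ value near $0.088$.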